Chat models are language models that take a sequence of messages as input and return messages as output.
While all of these LangChain classes support the indicated advanced features, you may need to consult the provider-specific documentation to learn which hosted models or backends support each feature.

Chat Completions API

Certain model providers offer endpoints that are compatible with OpenAI's Chat Completions API. In such cases, you can use ChatOpenAI with a custom base_url to connect to these endpoints.
To use OpenRouter, you will need to sign up for an account and obtain an API key.
import os

from langchain_openai import ChatOpenAI

model = ChatOpenAI(
    model="...",  # Specify a model available on OpenRouter
    api_key=os.environ["OPENROUTER_API_KEY"],  # Your OpenRouter API key
    base_url="https://openrouter.ai/api/v1",
)
Refer to the OpenRouter documentation for more details.
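Because these endpoints implement the Chat Completions format, the request body that ChatOpenAI ultimately sends follows OpenAI's well-known shape. A minimal sketch of that payload, built with plain dicts and no network call (the model ID below is a placeholder):

```python
import json

# Shape of a Chat Completions request body that OpenAI-compatible
# endpoints such as OpenRouter accept. The model ID is a placeholder.
payload = {
    "model": "provider/model-name",  # placeholder model ID
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello!"},
    ],
}

# The body is serialized to JSON and POSTed to {base_url}/chat/completions.
body = json.dumps(payload)
print(body)
```

Any endpoint that accepts this body and returns the matching response schema can be reached by pointing base_url at it.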
To capture reasoning tokens:
  1. Switch imports from langchain_openai to langchain_deepseek.
  2. Use ChatDeepSeek instead of ChatOpenAI. Note that the base_url parameter is named api_base on ChatDeepSeek.
  3. Adjust reasoning parameters as needed under extra_body, e.g.:
from langchain_deepseek import ChatDeepSeek

model = ChatDeepSeek(
    model="...",
    api_key="...",
    api_base="https://openrouter.ai/api/v1",
    extra_body={"reasoning": {"enabled": True}},
)
The inability of ChatOpenAI to capture reasoning tokens is a known limitation and will be addressed in a future release.
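Conceptually, the entries you pass under extra_body are merged into the top level of the JSON request body alongside the standard Chat Completions fields, which is how the provider-specific reasoning flag reaches OpenRouter. A rough sketch of that merge using plain dicts (not the actual client code; the model ID is a placeholder):

```python
# Standard Chat Completions fields assembled by the client.
base_body = {
    "model": "provider/model-name",  # placeholder model ID
    "messages": [{"role": "user", "content": "Hi"}],
}

# Provider-specific fields supplied via extra_body.
extra_body = {"reasoning": {"enabled": True}}

# The client merges extra_body into the request body before sending,
# so "reasoning" appears as a top-level field next to "model" and "messages".
request_body = {**base_body, **extra_body}
print(sorted(request_body))
```

This is why extra_body is the right place for parameters that the upstream provider understands but that are not part of the standard Chat Completions schema.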

All chat models

Abso

AI21 Labs

AI/ML API

Alibaba Cloud PAI EAS

Anthropic

AzureAIChatCompletionsModel

Azure OpenAI

Azure ML Endpoint

Baichuan Chat

Baidu Qianfan

Baseten

AWS Bedrock

Cerebras

CloudflareWorkersAI

Cohere

ContextualAI

Coze Chat

Dappier AI

Databricks

DeepInfra

DeepSeek

Eden AI

EverlyAI

Featherless AI

Fireworks

ChatFriendli

Goodfire

Google Gemini

Google Cloud Vertex AI

GPTRouter

DigitalOcean Gradient

GreenNode

Groq

ChatHuggingFace

IBM watsonx.ai

JinaChat

Kinetica

Konko

LiteLLM

Llama 2 Chat

Llama API

LlamaEdge

Llama.cpp

maritalk

MiniMax

MistralAI

MLX

ModelScope

Moonshot

Naver

Nebius

Netmind

NVIDIA AI Endpoints

ChatOCIModelDeployment

OCIGenAI

ChatOctoAI

Ollama

OpenAI

Outlines

Perplexity

Pipeshift

ChatPredictionGuard

PremAI

PromptLayer ChatOpenAI

Qwen QwQ

Qwen

Reka

RunPod Chat Model

SambaNovaCloud

SambaStudio

ChatSeekrFlow

Snowflake Cortex

SparkLLM Chat

Nebula (Symbl.ai)

Tencent Hunyuan

Together

Tongyi Qwen

Upstage

vLLM Chat

Volc Engine Maas

ChatWriter

xAI

Xinference

YandexGPT

ChatYI

Yuan2.0

ZHIPU AI

If you’d like to contribute an integration, see Contributing integrations.
