Featured providers
While all these LangChain classes support the indicated advanced feature, you may have to open the provider-specific documentation to learn which hosted models or backends support the feature.
Chat Completions API
Certain model providers offer endpoints that are compatible with OpenAI's Chat Completions API. In such cases, you can use `ChatOpenAI` with a custom `base_url` to connect to these endpoints.
Example: OpenRouter
To use OpenRouter, you will need to sign up for an account and obtain an API key. Refer to the OpenRouter documentation for more details.
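As a sketch of this pattern, the snippet below points `ChatOpenAI` at OpenRouter's OpenAI-compatible endpoint. The base URL and the model ID are assumptions here; check the OpenRouter documentation for the current values.

```python
import os

# OpenRouter exposes an OpenAI-compatible Chat Completions endpoint,
# so ChatOpenAI can target it via base_url (URL below is an assumption).
OPENROUTER_BASE = "https://openrouter.ai/api/v1"

def make_openrouter_chat(model: str = "openai/gpt-4o"):
    """Build a ChatOpenAI client routed through OpenRouter.

    Requires `pip install langchain-openai` and an OPENROUTER_API_KEY
    environment variable. The model ID format is OpenRouter's
    "<provider>/<model>" convention.
    """
    from langchain_openai import ChatOpenAI

    return ChatOpenAI(
        model=model,
        api_key=os.environ["OPENROUTER_API_KEY"],
        base_url=OPENROUTER_BASE,
    )
```

With credentials in place, usage is the same as any other chat model, e.g. `make_openrouter_chat().invoke("Hello")`.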
To capture reasoning tokens, switch to `ChatDeepSeek`:

- Switch imports from `langchain_openai` to `langchain_deepseek`.
- Use `ChatDeepSeek` instead of `ChatOpenAI`. You will need to change the `base_url` parameter to `api_base`.
- Adjust reasoning parameters as needed under `extra_body`.
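The steps above can be sketched as follows. The base URL, model ID, and the contents of `extra_body` are illustrative assumptions, not confirmed values; adjust them to match your provider's documentation.

```python
import os

# Routing ChatDeepSeek through OpenRouter's OpenAI-compatible endpoint
# (base URL is an assumption).
OPENROUTER_BASE = "https://openrouter.ai/api/v1"

def make_reasoning_llm():
    """Build a ChatDeepSeek client routed through OpenRouter.

    Requires `pip install langchain-deepseek` and an OPENROUTER_API_KEY
    environment variable. Note the parameter rename relative to
    ChatOpenAI: api_base here, not base_url.
    """
    from langchain_deepseek import ChatDeepSeek

    return ChatDeepSeek(
        model="deepseek/deepseek-r1",  # hypothetical OpenRouter model ID
        api_key=os.environ["OPENROUTER_API_KEY"],
        api_base=OPENROUTER_BASE,      # api_base, not base_url
        # Illustrative reasoning parameter passed straight through to the API:
        extra_body={"reasoning": {"effort": "high"}},
    )
```

After `response = make_reasoning_llm().invoke(...)`, any reasoning tokens the model returns are surfaced on the response message rather than being dropped.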
This is a known limitation with `ChatOpenAI` and will be addressed in a future release.

All chat models
Abso
AI21 Labs
AI/ML API
Alibaba Cloud PAI EAS
Anthropic
AzureAIChatCompletionsModel
Azure OpenAI
Azure ML Endpoint
Baichuan Chat
Baidu Qianfan
Baseten
AWS Bedrock
Cerebras
CloudflareWorkersAI
Cohere
ContextualAI
Coze Chat
Dappier AI
Databricks
DeepInfra
DeepSeek
Eden AI
EverlyAI
Featherless AI
Fireworks
ChatFriendli
Goodfire
Google Gemini
Google Cloud Vertex AI
GPTRouter
DigitalOcean Gradient
GreenNode
Groq
ChatHuggingFace
IBM watsonx.ai
JinaChat
Kinetica
Konko
LiteLLM
Llama 2 Chat
Llama API
LlamaEdge
Llama.cpp
maritalk
MiniMax
MistralAI
MLX
ModelScope
Moonshot
Naver
Nebius
Netmind
NVIDIA AI Endpoints
ChatOCIModelDeployment
OCIGenAI
ChatOctoAI
Ollama
OpenAI
Outlines
Perplexity
Pipeshift
ChatPredictionGuard
PremAI
PromptLayer ChatOpenAI
Qwen QwQ
Qwen
Reka
RunPod Chat Model
SambaNovaCloud
SambaStudio
ChatSeekrFlow
Snowflake Cortex
SparkLLM Chat
Nebula (Symbl.ai)
Tencent Hunyuan
Together
Tongyi Qwen
Upstage
vLLM Chat
Volc Engine Maas
ChatWriter
xAI
Xinference
YandexGPT
ChatYI
Yuan2.0
ZHIPU AI
If you’d like to contribute an integration, see Contributing integrations.