Class | Package | Local | Serializable | JS support |
---|---|---|---|---|
ChatAnthropic | langchain-anthropic | ❌ | beta | ✅ |
Tool calling | Structured output | JSON mode | Image input | Audio input | Video input | Token-level streaming | Native async | Token usage | Logprobs |
---|---|---|---|---|---|---|---|---|---|
✅ | ✅ | ❌ | ✅ | ❌ | ❌ | ✅ | ✅ | ✅ | ❌ |
To access Anthropic models you will need to install the `langchain-anthropic` integration package. The LangChain Anthropic integration lives in the `langchain-anthropic` package; some of the features described below require `langchain-anthropic>=0.3.13` or later.
Tool calls made by the model are surfaced on the response's `AIMessage.tool_calls` attribute.
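For reference, `tool_calls` is a list of dicts; a sketch of its shape (the tool name, arguments, and id below are illustrative):

```python
# Illustrative shape of AIMessage.tool_calls after a tool-calling response.
# The values here are made up; real ids are assigned by the API.
tool_calls = [
    {
        "name": "get_weather",          # name of the tool the model chose
        "args": {"location": "Paris"},  # parsed arguments as a dict
        "id": "toolu_01A",              # provider-assigned call id (illustrative)
        "type": "tool_call",
    }
]

# Downstream code typically dispatches on the tool name:
for call in tool_calls:
    if call["name"] == "get_weather":
        location = call["args"]["location"]
```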
Claude supports an extended thinking feature, which will output the step-by-step reasoning that led to its final answer. To use it, specify the `thinking` parameter when initializing `ChatAnthropic`; it can also be passed in as a kwarg during invocation. You will need to specify a token budget to use this feature. See the usage example below:
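A minimal sketch of what that looks like — the `thinking` value's shape follows Anthropic's API, while the model name and budget below are illustrative assumptions:

```python
# Shape of the "thinking" parameter: it must be enabled explicitly and
# given a token budget for the reasoning trace.
thinking = {"type": "enabled", "budget_tokens": 5000}

# It would then be passed at initialization (requires an Anthropic API key;
# the model name is illustrative):
# from langchain_anthropic import ChatAnthropic
# llm = ChatAnthropic(
#     model="claude-3-7-sonnet-latest",
#     max_tokens=5000,
#     thinking=thinking,
# )
```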
Anthropic supports caching of elements of your prompts. To enable caching on a prompt element, mark it with the `cache_control` key. See examples below:
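For instance, a system message whose text block is marked for caching might be structured like this (the text is a stand-in for a long document):

```python
# Marking a large system prompt for caching. Content blocks are in
# Anthropic's native format; the text here is a placeholder.
system_message = {
    "role": "system",
    "content": [
        {
            "type": "text",
            "text": "<a large document or instruction block>",
            "cache_control": {"type": "ephemeral"},  # cache this block
        }
    ],
}
```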
The default cache lifetime is five minutes. To extend it to one hour, enable the `"extended-cache-ttl-2025-04-11"` beta header and set `"cache_control": {"type": "ephemeral", "ttl": "1h"}` on the cached block.
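A sketch of the same block shape with the one-hour TTL (the text is a placeholder):

```python
# A content block cached with the one-hour TTL. This requires the
# "extended-cache-ttl-2025-04-11" beta header to be enabled on the client.
block = {
    "type": "text",
    "text": "<a large document>",
    "cache_control": {"type": "ephemeral", "ttl": "1h"},
}
```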
Details of cached token counts will be included on the `InputTokenDetails` of the response's `usage_metadata`:
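Assuming LangChain's standard `UsageMetadata` shape, the cached counts would surface roughly like this (all numbers illustrative):

```python
# Illustrative usage_metadata for a response that read from the cache.
usage_metadata = {
    "input_tokens": 1527,
    "output_tokens": 110,
    "total_tokens": 1637,
    "input_token_details": {
        "cache_read": 1500,   # tokens served from the cache
        "cache_creation": 0,  # tokens written to the cache on this call
    },
}

# The cache counters live under input_token_details:
cached = usage_metadata["input_token_details"]["cache_read"]
```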
In conversational applications, you can cache incrementally by marking the final message with `cache_control`. Claude will automatically use the longest previously-cached prefix for follow-up messages.
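For example, a short conversation where only the newest message is marked (message text is illustrative):

```python
# A conversation where only the final (newest) message carries cache_control.
# Anthropic matches the longest previously-cached prefix automatically.
messages = [
    {"role": "user", "content": "Hello, how are you?"},
    {"role": "assistant", "content": "Doing well, thanks!"},
    {
        "role": "user",
        "content": [
            {
                "type": "text",
                "text": "Summarize our conversation so far.",
                "cache_control": {"type": "ephemeral"},
            }
        ],
    },
]
```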
Below, we implement a simple chatbot that incorporates this feature. We follow the LangChain chatbot tutorial, but add a custom reducer that automatically marks the last content block in each user message with `cache_control`. See below:
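The marking step itself can be sketched as a small pure function over plain message dicts (a simplification of the reducer described above, not the tutorial's exact code):

```python
def mark_cache_control(messages):
    """Mark the last content block of the most recent user message with
    cache_control, normalizing string content into a block list first."""
    if not messages:
        return messages
    last = messages[-1]
    if last.get("role") != "user":
        return messages
    content = last["content"]
    if isinstance(content, str):
        content = [{"type": "text", "text": content}]
    content = [dict(block) for block in content]  # avoid mutating the input
    content[-1] = {**content[-1], "cache_control": {"type": "ephemeral"}}
    return messages[:-1] + [{**last, "content": content}]


# String content is normalized into a marked text block:
marked = mark_cache_control([{"role": "user", "content": "hi"}])
```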
Tool definitions can be cached the same way, by marking them with `cache_control` keys.
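A sketch with a single illustrative tool in Anthropic's native schema format — marking the final tool in the list caches everything up to and including it:

```python
# Caching tool definitions: mark the final tool in the list with
# cache_control. The tool itself is illustrative.
tools = [
    {
        "name": "get_weather",
        "description": "Get the weather for a location.",
        "input_schema": {
            "type": "object",
            "properties": {"location": {"type": "string"}},
            "required": ["location"],
        },
    },
]
tools[-1]["cache_control"] = {"type": "ephemeral"}
```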
Anthropic supports a citations feature that lets Claude attach context to its responses based on source documents supplied by the user. When document or search result content blocks with `"citations": {"enabled": True}` are included in a query, Claude may generate citations in its response.
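For example, a plain-text `document` block with citations enabled (title and text are illustrative):

```python
# A "document" content block with citations enabled, in Anthropic's
# native format. The source text and title are illustrative.
document_block = {
    "type": "document",
    "source": {
        "type": "text",
        "media_type": "text/plain",
        "data": "The grass is green. The sky is blue.",
    },
    "title": "My Document",
    "citations": {"enabled": True},
}
```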
Using `langchain-anthropic>=0.3.17`, you can also pass `search_result` content blocks in Anthropic's native format. For example:
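A sketch of such a block (title, URL, and text are illustrative):

```python
# A search_result content block in Anthropic's native format.
# The title, source URL, and text are illustrative.
search_result = {
    "type": "search_result",
    "title": "History of France",
    "source": "https://example.com/history-of-france",
    "citations": {"enabled": True},
    "content": [
        {"type": "text", "text": "The capital of France is Paris."},
    ],
}

# Such blocks go into a user message's content list alongside the question:
user_content = [
    search_result,
    {"type": "text", "text": "What is the capital of France?"},
]
```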
You will need to specify the `search-results-2025-06-09` beta when instantiating `ChatAnthropic`. You can see an end-to-end example below.
Note: depending on the feature, you may need `langchain-anthropic>=0.3.13` or `langchain-anthropic>=0.3.14`.