useChat
useChat(options?: object): UseChatResult
Defined in: src/react/useChat.ts:148
A React hook for managing chat completions with authentication.
This hook provides a convenient way to send chat messages to the LLM API, with automatic token management and loading-state handling. Streaming is enabled by default for a better user experience.
Parameters
| Parameter | Type | Description |
| --- | --- | --- |
| `options?` | `object` | Optional configuration object. |
| | | Which API endpoint to use. Default: `"responses"`. |
| | | Optional base URL for the API requests. |
| `options.getToken?` | `() => …` | An async function that returns an authentication token. The token is used as a Bearer token in the `Authorization` header. If not provided, … |
| | `(…) => …` | Callback invoked when a new data chunk is received. |
| `options.onError?` | `(…) => …` | Callback invoked when an unexpected error is encountered. Note: this callback is NOT called for requests aborted via `stop()`. |
| `options.onFinish?` | `(…) => …` | Callback invoked when the chat completion finishes successfully. Receives the raw API response, in either Responses API or Completions API format. |
| | `(…) => …` | Callback invoked when a server-side tool (MCP) is invoked during streaming. Use this to show activity indicators such as "Searching…" in the UI. |
| `options.onThinking?` | `(…) => …` | Callback invoked when thinking/reasoning content is received. Called with delta chunks as the model "thinks" through a problem. |
| | `(…) => …` | Callback invoked when a tool call is requested by the LLM. Called for tools that don't have an executor, or that have `autoExecute=false`; the app should execute the tool and send the result back. |
| | | Controls adaptive output smoothing for streaming responses. Fast models can return text faster than is comfortable to read; smoothing buffers incoming chunks and releases them at a consistent, adaptive pace. Default: … |
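The smoothing option above is described only behaviorally. As a rough illustration (not the hook's actual implementation; the class and method names here are invented), buffered pacing can be sketched like this:

```typescript
// Illustrative sketch of adaptive output smoothing, NOT the hook's real code.
// Incoming chunks are buffered and released a few characters at a time, at a
// rate adapted to how fast text arrives, so the UI reveals text steadily.
class ChunkSmoother {
  private buffer = "";
  private arrivalRate = 0; // exponential moving average of chars per chunk

  push(chunk: string): void {
    this.buffer += chunk;
    // Adapt: track how quickly text is arriving so release speed can follow.
    this.arrivalRate = 0.8 * this.arrivalRate + 0.2 * chunk.length;
  }

  /** Release the next slice of buffered text; call once per UI tick. */
  release(): string {
    // Release roughly at the arrival rate, but always make some progress.
    const n = Math.max(1, Math.round(this.arrivalRate));
    const out = this.buffer.slice(0, n);
    this.buffer = this.buffer.slice(n);
    return out;
  }

  get pending(): number {
    return this.buffer.length;
  }
}
```

In a UI, `release()` would be called on an interval or animation frame and the returned slice appended to the displayed message.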
Returns
UseChatResult
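The fields of `UseChatResult` are not expanded in this section. Judging only from the usage example, a minimal sketch of its shape might be the following; the request and response types are placeholders, not the library's real types:

```typescript
// Hypothetical sketch of UseChatResult, inferred from the usage example.
// Only isLoading, sendMessage, and stop appear in this document.
interface ChatRequest {
  messages: unknown[];          // content-part messages, as in the examples
  model: string;                // e.g. 'gpt-4o-mini'
  [key: string]: unknown;       // thinking, reasoning, and other per-call options
}

interface UseChatResult {
  isLoading: boolean;                                   // true while a completion is in flight
  sendMessage: (req: ChatRequest) => Promise<unknown>;  // resolves with the raw API response
  stop: () => void;                                     // aborts the in-flight request
}
```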
Example
// Basic usage with API
const { isLoading, sendMessage, stop } = useChat({
  getToken: async () => await getAuthToken(),
  onFinish: (response) => console.log("Chat finished:", response),
  onError: (error) => console.error("Chat error:", error)
});

const handleSend = async () => {
  const result = await sendMessage({
    messages: [{ role: 'user', content: [{ type: 'text', text: 'Hello!' }] }],
    model: 'gpt-4o-mini'
  });
};
// Using extended thinking (Anthropic Claude)
const result = await sendMessage({
  messages: [{ role: 'user', content: [{ type: 'text', text: 'Solve this complex problem...' }] }],
  model: 'anthropic/claude-3-7-sonnet-20250219',
  thinking: { type: 'enabled', budget_tokens: 10000 },
  onThinking: (chunk) => console.log('Thinking:', chunk)
});
// Using reasoning (OpenAI o-series)
const result = await sendMessage({
  messages: [{ role: 'user', content: [{ type: 'text', text: 'Reason through this...' }] }],
  model: 'openai/o1',
  reasoning: { effort: 'high', summary: 'detailed' }
});