
useChat

useChat(options?: object): UseChatResult

Defined in: src/expo/useChat.ts:146 

A React hook for managing chat completions with authentication.

React Native version: a lightweight variant that supports only API-based chat completions. Local chat and client-side tools are not available in React Native.

Parameters

options?: object

Optional configuration object.

options.apiType?: ApiType

Which API endpoint to use. Default: "responses"

  • "responses": OpenAI Responses API (supports thinking, reasoning, conversations)
  • "completions": OpenAI Chat Completions API (wider model compatibility)
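For example, to target the Chat Completions endpoint for wider model compatibility (a minimal sketch; getAuthToken is an assumed app-provided helper):

const { sendMessage } = useChat({
  apiType: 'completions', // use the Chat Completions API instead of the default 'responses'
  getToken: async () => await getAuthToken(), // assumed auth helper
});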

options.baseUrl?: string

Optional base URL for the API requests.

options.getToken?: () => Promise<string | null>

An async function that returns an authentication token.

options.onData?: (chunk: string) => void

Callback function to be called when a new data chunk is received.
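A common pattern is to accumulate chunks into React state inside your component so the reply renders as it streams (a sketch; useState comes from React, and getAuthToken is an assumed helper):

const [reply, setReply] = useState('');

const { sendMessage } = useChat({
  getToken: async () => await getAuthToken(), // assumed auth helper
  onData: (chunk) => setReply((prev) => prev + chunk), // append each streamed chunk
});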

options.onError?: (error: Error) => void

Callback function to be called when an unexpected error is encountered.

Note: This callback is NOT called for aborted requests (via stop() or component unmount). Aborts are intentional actions and are not considered errors. To detect aborts, check the error field in the sendMessage result: result.error === "Request aborted".
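Because onError does not fire for aborts, detect them from the sendMessage result (a sketch; the message shape matches the Example at the bottom of this page):

const result = await sendMessage({
  messages: [{ role: 'user', content: [{ type: 'text', text: 'Hello!' }] }],
  model: 'gpt-4o-mini',
});

if (result.error === 'Request aborted') {
  // Intentional stop() or unmount; not an error, and onError was not called.
}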

options.onFinish?: (response: ApiResponse) => void

Callback function to be called when the chat completion finishes successfully. Receives the raw API response, in either Responses API or Completions API format.

options.onServerToolCall?: (toolCall: ServerToolCallEvent) => void

Callback function to be called when a server-side tool (MCP) is invoked during streaming. Use this to show activity indicators like "Searching…" in the UI.
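For instance, to surface a transient indicator while a server tool runs (a sketch; useState comes from React, and getAuthToken is an assumed helper):

const [activity, setActivity] = useState<string | null>(null);

const { sendMessage } = useChat({
  getToken: async () => await getAuthToken(), // assumed auth helper
  onServerToolCall: () => setActivity('Searching…'), // show an indicator while the tool runs
  onFinish: () => setActivity(null), // clear it once the response completes
});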

options.onThinking?: (chunk: string) => void

Callback function to be called when thinking/reasoning content is received. This is called with delta chunks as the model “thinks” through a problem.
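Thinking deltas are delivered through onThinking rather than onData, so keep two buffers if you display both (a sketch; useState comes from React, and getAuthToken is an assumed helper):

const [thinking, setThinking] = useState('');
const [answer, setAnswer] = useState('');

const { sendMessage } = useChat({
  getToken: async () => await getAuthToken(), // assumed auth helper
  onThinking: (chunk) => setThinking((prev) => prev + chunk), // reasoning deltas
  onData: (chunk) => setAnswer((prev) => prev + chunk), // answer deltas
});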

options.onToolCall?: (toolCall: LlmapiToolCall) => void

Callback function to be called when a tool call is requested by the LLM. This is called for tools that don’t have an executor or have autoExecute=false. The app should execute the tool and send the result back.
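A minimal sketch of that flow; runMyTool is a hypothetical app-side executor, and the shape of both LlmapiToolCall and the tool-result message are assumptions, not part of this reference:

const { sendMessage } = useChat({
  getToken: async () => await getAuthToken(), // assumed auth helper
  onToolCall: async (toolCall) => {
    const output = await runMyTool(toolCall); // hypothetical executor for your app's tools
    // Return the output to the model in a follow-up request; this
    // tool-result message shape is an assumption and may differ.
    await sendMessage({
      messages: [{ role: 'tool', content: [{ type: 'text', text: output }] }],
      model: 'gpt-4o-mini',
    });
  },
});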

options.smoothing?: boolean | StreamSmoothingConfig

Controls adaptive output smoothing for streaming responses. Fast models can return text faster than is comfortable to read; smoothing buffers incoming chunks and releases them at a consistent, adaptive pace.

  • true or omitted: enabled with defaults (200→400 chars/sec over 3s)
  • false: disabled; callbacks fire immediately with raw chunks
  • StreamSmoothingConfig: custom speed/ramp configuration

Default: true
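For example, to disable pacing so onData fires with raw chunks the moment they arrive (the fields of StreamSmoothingConfig are not documented here, so only the boolean form is shown; getAuthToken is an assumed helper):

const { sendMessage } = useChat({
  getToken: async () => await getAuthToken(), // assumed auth helper
  smoothing: false, // deliver chunks immediately instead of pacing them
});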

Returns

UseChatResult

An object containing:

  • isLoading: A boolean indicating whether a request is currently in progress
  • sendMessage: An async function to send chat messages
  • stop: A function to abort the current request

Example

const { isLoading, sendMessage, stop } = useChat({
  getToken: async () => await getAuthToken(),
  onFinish: (response) => console.log("Chat finished:", response),
  onError: (error) => console.error("Chat error:", error),
});

const handleSend = async () => {
  const result = await sendMessage({
    messages: [{ role: 'user', content: [{ type: 'text', text: 'Hello!' }] }],
    model: 'gpt-4o-mini',
  });
};
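The returned stop function can back a cancel control; stopping this way does not invoke onError (a sketch, assuming React Native's Button component):

<Button
  title={isLoading ? 'Stop' : 'Send'}
  onPress={isLoading ? stop : handleSend}
/>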