Models
One of the main advantages of building with Anuma is access to models from OpenAI, Google, Anthropic, xAI, and open-source providers through a single API. You specify the model per request, so you can use a lightweight model for simple tasks and a reasoning model for complex ones — without changing any integration code.
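Because the model is just a per-request parameter, routing can live in one small helper. A minimal sketch of that pattern (the `pickModel` helper and the specific model identifiers are illustrative, not part of the Anuma API):

```typescript
// Hypothetical routing helper: choose a model identifier per request
// based on task complexity. Model names here are examples only.
type Task = { prompt: string; complex: boolean };

function pickModel(task: Task): string {
  // Lightweight model for simple tasks, reasoning model for complex ones.
  return task.complex ? "o3" : "gpt-4o-mini";
}

console.log(pickModel({ prompt: "Summarize this email", complex: false })); // "gpt-4o-mini"
```

The rest of the integration code stays unchanged; only the string passed as `model` varies.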
Models span different capabilities: text generation, vision (image understanding), reasoning (extended thinking), image generation, code generation, and audio processing. Many models combine multiple capabilities — for example, GPT-4o and Claude handle both text and vision in one model.
To specify a model, pass its identifier when calling sendMessage:
```typescript
await sendMessage({
  content: "Explain quantum computing",
  model: "gpt-4o-mini",
});
```

To fetch the list of available models at runtime, use useModels. This returns the current models from the Portal API, so your app always reflects what's available without hardcoding model names.
See the full list of available models.