LLM Models
An advanced multimodal model that processes text, images, audio, and video simultaneously, with real-time generation of text and voice responses
OpenAI's most powerful reasoning model, now cheaper and easier to access. Particularly effective for coding and science-related tasks
Code faster with two open-source models (1.3B and 123B parameters) and a CLI agent that explores, modifies, and executes multi-file changes in natural language. Features a 256K-token context window and Git…
A 14-billion-parameter language model that excels at complex reasoning, especially mathematics. Trained mainly on high-quality synthetic data, it outperforms larger models on certain specialized tasks
A powerful AI model that can either reason step by step or respond instantly, with exceptional performance in programming and web development. Available on all Anthropic platforms
An open-source LLM (Apache 2.0 license) that delivers top-tier performance, with an accessible and affordable API
Create, deploy, and monitor LLMs with enterprise-grade functionality
A family of open-source multimodal models with exceptional performance, including Scout (10M-token context window) and Maverick (outperforming GPT-4o). Built on a MoE architecture with native text-image fusion
An AI reasoning model that analyzes information before responding, for improved accuracy. Benefit from a one-million-token context window, excellent coding capabilities, and strong multimodal understanding
The OpenAI model built for long, complex tasks, with fewer hallucinations and improved multi-step reasoning. This LLM is great for creating tables and presentations, analyzing long documents, and coding
An AI built on GPT-4 that can accurately describe an image, create a website, generate a story, and more
An LLM that handles complex reasoning, agent workflows, tool use, and long contexts. It adapts to tasks such as chat, coding, search, and productivity assistance