Explore advanced AI models and integrate them into your applications with ease. Benefit from optimized performance and a wide range of capabilities for your projects
Google's latest AI model with multimodal and agentic capabilities. Generate text, images and audio using a variety of external tools
An audacious start-up that can run AI models up to 10 times faster thanks to its LPU (Language Processing Unit)
New-generation open-source AI models from Google DeepMind. More powerful and more accurate, these LLMs come in two sizes: 9B and 27B
A new generation of AI language models from Meta, including lightweight and multimodal versions. Create AI applications on mobile, analyze images and text, all with open and customizable models
A Tencent diffusion language model that generates multiple tokens in parallel, with inference up to 10 times faster than traditional autoregressive LLMs. Compatible with standard causal attention and…
Google's most intelligent AI model. Solve complex problems, write code, and do mathematics with parallel thinking (1M-token context window). Available to Google AI Ultra subscribers
Discover the inner workings of GPT in an interactive spreadsheet. See the architecture, matrix calculations and data flow. Fun and instructive
A family of advanced multimodal language models developed by NVIDIA. Outperforms proprietary models on many vision and language tasks, while improving performance on text tasks
Anthropic's most powerful model, capable of handling 1-million-token contexts and reasoning adaptively. It coordinates multiple agents in parallel to solve complex tasks. You control the level of…
An open-source LLM with 200 billion parameters and exceptional performance (at low cost). 2x faster and 8x cheaper than most of its competitors
Google's most intelligent multimodal model: advanced reasoning, deep contextual understanding, autonomous agents, visual coding, and Deep Think mode. It performs particularly well for agentic coding