1 - 30 of 298 available models.
| Description | Updated | Pulls | Tags |
|---|---|---|---|
| Google’s latest Gemma, small yet strong for chat and generation | 6m | 500K+ | 55 |
| Qwen3 is the latest Qwen LLM, built for top-tier coding, math, reasoning, and language tasks. | 4m | 500K+ | 121 |
| Solid Llama 3 update, reliable for coding, chat, and Q&A tasks | 11m | 100K+ | 25 |
| OpenAI’s open-weight models designed for powerful reasoning and agentic tasks | 4m | 100K+ | 42 |
| Tiny LLM built for speed, edge devices, and local development | 6m | 100K+ | 32 |
| Distilled Llama by DeepSeek, fast and optimized for real-world tasks | 10m | 100K+ | 76 |
| Qwen3-Coder is Qwen’s new series of coding agent models. | 21d | 100K+ | 21 |
| The most advanced Qwen model yet, with major gains in text, vision, video, and reasoning. | 4m | 100K+ | 9 |
| Versatile Qwen update with better language skills and wider support | 11m | 100K+ | 9 |
| Efficient multimodal AI for text, image, audio, and video on low-resource devices. | 9m | 50K+ | 10 |
| Google’s latest Gemma, in its QAT (quantization-aware trained) variant | 6m | 50K+ | 21 |
| Microsoft’s compact model, surprisingly capable at reasoning and code | 11m | 50K+ | 22 |
| Newest Llama 3 release with improved reasoning and generation quality | 11m | 50K+ | 18 |
| Efficient open model with top-tier performance and fast inference | 11m | 50K+ | 20 |
| Ministral 3: compact vision-enabled model with near-24B performance, optimized for local edge use | 3m | 50K+ | 2 |
| Kimi K2 Thinking: open-source agent with deep reasoning, stable tool use, fast INT4, 256K context. | 3m | 10K+ | 1 |
| DeepCoder-14B-Preview is a code-reasoning LLM fine-tuned to scale to long context lengths | 11m | 10K+ | 13 |
| DeepSeek-V3.2 boosts efficiency and reasoning with DSA, scalable RL, and agentic data; IMO/IOI wins. | 3m | 10K+ | 9 |
| SmolLM3 is a 3.1B model for efficient on-device use, with strong performance in chat | 8m | 10K+ | 7 |
| Google’s latest Gemma, small yet strong for chat and generation | 5m | 10K+ | 1 |
| GLM-4.7-Flash is a top 30B-A3B MoE, balancing strong performance with efficient deployment. | 2m | 10K+ | 3 |
| Meta’s Llama 3.1: chat-focused, benchmark-strong, multilingual-ready. | 11m | 10K+ | 6 |
| Efficient 80B MoE coding model with 3B activated params, 256K context, and agentic capabilities | 29d | 10K+ | 1 |
| Qwen3 Embedding: multilingual models for advanced text/ranking tasks like retrieval & clustering. | 4m | 10K+ | |
| Ministral 3: compact vision-enabled model with near-24B performance, optimized for local edge use | 3m | 10K+ | 4 |
| 397B MoE model with 17B activation for reasoning, coding, agents, and multimodal understanding | 22d | 10K+ | 3 |
| Kimi K2 Thinking: open-source agent with deep reasoning, stable tool use, fast INT4, 256K context. | 3m | 10K+ | 1 |
| Granite Docling is a multimodal model for efficient document conversion. | 5m | 10K+ | 2 |
| Image-generation model using a base latent diffusion model plus a refiner. | 1m | 10K+ | 2 |
| Safety reasoning models for policy-based text classification and foundational safety tasks. | 4m | 10K+ | 2 |