Mistral Small 3.2 24B

Release Date: Jun 20, 2025 | Creator: Mistral AI

Mistral Small 3.2 (24B) is tuned for strong instruction following and cleaner responses. It reduces repetition while improving tool- and function-calling reliability.
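To illustrate what "function-calling reliability" refers to, here is a minimal sketch of a request payload in the widely used OpenAI-compatible "tools" format that many Mistral deployments accept. The model id and the `get_weather` tool are placeholders for illustration, not confirmed values for this site or provider.

```python
def build_tool_call_request(user_message: str) -> dict:
    """Assemble a chat request that exposes one callable tool to the model."""
    return {
        "model": "mistral-small-3.2-24b",  # placeholder id; check your provider
        "messages": [{"role": "user", "content": user_message}],
        "tools": [
            {
                "type": "function",
                "function": {
                    "name": "get_weather",  # hypothetical example tool
                    "description": "Look up current weather for a city.",
                    "parameters": {
                        "type": "object",
                        "properties": {"city": {"type": "string"}},
                        "required": ["city"],
                    },
                },
            }
        ],
        "tool_choice": "auto",  # let the model decide whether to call the tool
    }

request = build_tool_call_request("What's the weather in Paris?")
```

A model tuned for reliable function calling is one that, given a payload like this, emits well-formed tool-call arguments matching the declared JSON schema rather than free text.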

Model Specifications
Knowledge Cutoff: Late 2023
Context (VividLLM): 64,000 tokens
With a context window of 64,000 tokens, this model can "remember" and analyze roughly 96 pages of text at once, making it a strong choice for processing technical manuals and multi-chapter reports.
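The page figure is a back-of-envelope conversion. A quick sketch of the arithmetic, assuming the common rules of thumb of ~0.75 words per token and ~500 words per page (neither is a measured value for this model):

```python
def estimate_pages(context_tokens: int,
                   words_per_token: float = 0.75,
                   words_per_page: int = 500) -> int:
    """Convert a token budget into an approximate page count."""
    return int(context_tokens * words_per_token / words_per_page)

print(estimate_pages(64_000))   # 64k-token window -> ~96 pages
print(estimate_pages(131_072))  # native 131k window -> ~196 pages
```

The same conversion puts the native 131,072-token window at roughly double that page count.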
Context (Native): 131,072 tokens
Input Modalities: Text, Image, PDF
Output Type: Text
Max Output: 8,192 tokens
Input Weight: 0.50x (Casual)
Output Weight: 0.50x (Casual)
Industry Benchmarks
Intelligence Index (Artificial Analysis): 15.1
Coding Index (Artificial Analysis): 13.3
Math Index (Artificial Analysis): 27
Response Speed
Output Tokens per Second: 143.67
Median Time to First Token: 0.37s
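The two figures above combine into a rough end-to-end latency estimate under a simple streaming model: time to first token plus output tokens divided by throughput. Real latency also varies with prompt length and provider load, so treat this as a sketch only.

```python
def estimated_latency_s(output_tokens: int,
                        ttft_s: float = 0.37,
                        tokens_per_s: float = 143.67) -> float:
    """Approximate seconds to stream a reply of `output_tokens` tokens."""
    return ttft_s + output_tokens / tokens_per_s

print(round(estimated_latency_s(1000), 2))  # ~7.33 s for a 1,000-token reply
```

At these rates, even a maximum-length 8,192-token reply would stream in under a minute.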
Launch VividLLM Now

Benchmarks sourced from Artificial Analysis.

Disclaimer: Image upload is currently enabled for only a limited set of models, so some models that support image input may not have that feature available on our site.