-
Ministral 3
Authors:
Alexander H. Liu,
Kartik Khandelwal,
Sandeep Subramanian,
Victor Jouault,
Abhinav Rastogi,
Adrien Sadé,
Alan Jeffares,
Albert Jiang,
Alexandre Cahill,
Alexandre Gavaudan,
Alexandre Sablayrolles,
Amélie Héliou,
Amos You,
Andy Ehrenberg,
Andy Lo,
Anton Eliseev,
Antonia Calvi,
Avinash Sooriyarachchi,
Baptiste Bout,
Baptiste Rozière,
Baudouin De Monicault,
Clémence Lanfranchi,
Corentin Barreau,
Cyprien Courtot,
Daniele Grattarola
, et al. (95 additional authors not shown)
Abstract:
We introduce the Ministral 3 series, a family of parameter-efficient dense language models designed for compute- and memory-constrained applications, available in three model sizes: 3B, 8B, and 14B parameters. For each model size, we release three variants: a pretrained base model for general-purpose use, an instruction-finetuned model, and a reasoning model for complex problem-solving. In addition, we present our recipe for deriving the Ministral 3 models through Cascade Distillation, an iterative technique that interleaves pruning with continued training under distillation. Each model comes with image understanding capabilities, and all are released under the Apache 2.0 license.
Submitted 13 January, 2026;
originally announced January 2026.
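As a rough illustration of the Cascade Distillation recipe the abstract names, the sketch below alternates magnitude pruning with continued training against a frozen teacher. The pruning criterion, schedule, and toy models are assumptions for illustration, not Mistral's actual recipe.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def magnitude_prune(model: nn.Module, sparsity: float) -> dict:
    """Zero the smallest-magnitude weights per Linear layer; return the masks."""
    masks = {}
    for name, m in model.named_modules():
        if isinstance(m, nn.Linear):
            k = max(int(m.weight.numel() * sparsity), 1)
            thresh = m.weight.abs().flatten().kthvalue(k).values
            masks[name] = (m.weight.abs() > thresh).float()
            with torch.no_grad():
                m.weight.mul_(masks[name])
    return masks

def distill_step(student, teacher, masks, x, opt, T=2.0):
    """Match the teacher's softened logits, then re-apply the pruning
    masks so removed weights stay at zero."""
    with torch.no_grad():
        t_logits = teacher(x)
    loss = F.kl_div(
        F.log_softmax(student(x) / T, dim=-1),
        F.softmax(t_logits / T, dim=-1),
        reduction="batchmean",
    ) * T ** 2
    opt.zero_grad()
    loss.backward()
    opt.step()
    with torch.no_grad():
        for name, m in student.named_modules():
            if name in masks:
                m.weight.mul_(masks[name])

teacher = nn.Sequential(nn.Linear(64, 256), nn.GELU(), nn.Linear(256, 100)).eval()
student = nn.Sequential(nn.Linear(64, 256), nn.GELU(), nn.Linear(256, 100))
student.load_state_dict(teacher.state_dict())
opt = torch.optim.AdamW(student.parameters(), lr=1e-4)

# Cascade: each round prunes the student further, then recovers quality by
# distilling from the frozen, unpruned teacher.
for sparsity in (0.25, 0.5, 0.75):
    masks = magnitude_prune(student, sparsity)
    for _ in range(100):
        distill_step(student, teacher, masks, x=torch.randn(32, 64), opt=opt)
```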
-
Devstral: Fine-tuning Language Models for Coding Agent Applications
Authors:
Abhinav Rastogi,
Adam Yang,
Albert Q. Jiang,
Alexander H. Liu,
Alexandre Sablayrolles,
Amélie Héliou,
Amélie Martin,
Anmol Agarwal,
Andy Ehrenberg,
Andy Lo,
Antoine Roux,
Arthur Darcet,
Arthur Mensch,
Baptiste Bout,
Baptiste Rozière,
Baudouin De Monicault,
Chris Bamford,
Christian Wallenwein,
Christophe Renaudin,
Clémence Lanfranchi,
Clément Denoix,
Corentin Barreau,
Darius Dabert,
Devon Mizelle,
Diego de las Casas,
Elliot Chane-Sane
, et al. (78 additional authors not shown)
Abstract:
We introduce Devstral-Small, a lightweight open-source model for code agents with the best performance among models below 100B parameters. In this technical report, we give an overview of how we designed and developed the model and crafted its specializations for agentic software development. The resulting model, Devstral-Small, is a compact 24B model that is fast and easy to serve. Despite its size, Devstral-Small attains competitive performance compared to models more than an order of magnitude larger.
Submitted 8 August, 2025;
originally announced September 2025.
-
Voxtral
Authors:
Alexander H. Liu,
Andy Ehrenberg,
Andy Lo,
Clément Denoix,
Corentin Barreau,
Guillaume Lample,
Jean-Malo Delignon,
Khyathi Raghavi Chandu,
Patrick von Platen,
Pavankumar Reddy Muddireddy,
Sanchit Gandhi,
Soham Ghosh,
Srijan Mishra,
Thomas Foubert,
Abhinav Rastogi,
Adam Yang,
Albert Q. Jiang,
Alexandre Sablayrolles,
Amélie Héliou,
Amélie Martin,
Anmol Agarwal,
Antoine Roux,
Arthur Darcet,
Arthur Mensch,
Baptiste Bout
, et al. (81 additional authors not shown)
Abstract:
We present Voxtral Mini and Voxtral Small, two multimodal audio chat models. Voxtral is trained to comprehend both spoken audio and text documents, achieving state-of-the-art performance across a diverse range of audio benchmarks while preserving strong text capabilities. Voxtral Small outperforms a number of closed-source models while being small enough to run locally. A 32K context window enables the model to handle audio files up to 40 minutes in duration and long multi-turn conversations. We also contribute three benchmarks for evaluating speech understanding models on knowledge and trivia. Both Voxtral models are released under the Apache 2.0 license.
Submitted 17 July, 2025;
originally announced July 2025.
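The window and duration figures above pin down an implied audio token rate: 32K tokens over 40 minutes is roughly 13-14 tokens per second. A back-of-the-envelope sketch, where the constants are assumptions for illustration rather than Voxtral's actual tokenizer parameters:

```python
# Token budgeting implied by the abstract's numbers. These constants are
# illustrative assumptions, not Voxtral's actual tokenizer parameters.
CONTEXT_TOKENS = 32_768
MAX_AUDIO_MINUTES = 40
TOKENS_PER_SECOND = CONTEXT_TOKENS / (MAX_AUDIO_MINUTES * 60)  # ~13.7

def fits_in_context(audio_seconds: float, text_tokens: int = 0) -> bool:
    """Check whether an audio clip plus a text prompt fits the window."""
    return audio_seconds * TOKENS_PER_SECOND + text_tokens <= CONTEXT_TOKENS

print(round(TOKENS_PER_SECOND, 1))        # ~13.7
print(fits_in_context(35 * 60, 2_000))    # 35-minute clip + 2K text tokens: True
```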
-
Magistral
Authors:
Mistral-AI,
Abhinav Rastogi,
Albert Q. Jiang,
Andy Lo,
Gabrielle Berrada,
Guillaume Lample,
Jason Rute,
Joep Barmentlo,
Karmesh Yadav,
Kartik Khandelwal,
Khyathi Raghavi Chandu,
Léonard Blier,
Lucile Saulnier,
Matthieu Dinot,
Maxime Darrin,
Neha Gupta,
Roman Soletskyi,
Sagar Vaze,
Teven Le Scao,
Yihan Wang,
Adam Yang,
Alexander H. Liu,
Alexandre Sablayrolles,
Amélie Héliou
, et al. (76 additional authors not shown)
Abstract:
We introduce Magistral, Mistral's first reasoning model, and our own scalable reinforcement learning (RL) pipeline. Instead of relying on existing implementations and RL traces distilled from prior models, we follow a ground-up approach, relying solely on our own models and infrastructure. Notably, we demonstrate a stack that enabled us to explore the limits of pure RL training of LLMs, present a simple method to force the reasoning language of the model, and show that RL on text data alone maintains most of the initial checkpoint's capabilities. We find that RL on text maintains or improves multimodal understanding, instruction following, and function calling. We present Magistral Medium, trained for reasoning on top of Mistral Medium 3 with RL alone, and we open-source Magistral Small (Apache 2.0), which further includes cold-start data from Magistral Medium.
Submitted 12 June, 2025;
originally announced June 2025.
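One hedged illustration of "forcing the reasoning language" during RL, as the abstract alludes to: add a reward term that checks the language of the chain-of-thought. The crude ASCII-based detector and the bonus weight below are assumptions, not Magistral's actual reward design.

```python
import re

def is_mostly_english(text: str, threshold: float = 0.9) -> bool:
    """Crude language check: fraction of ASCII words. A stand-in for a
    real language-identification model; an illustrative assumption."""
    words = re.findall(r"\S+", text)
    return bool(words) and sum(w.isascii() for w in words) / len(words) >= threshold

def shaped_reward(correct: bool, reasoning: str, lang_bonus: float = 0.1) -> float:
    """Task reward plus a bonus when reasoning stays in the target language."""
    return (1.0 if correct else 0.0) + (lang_bonus if is_mostly_english(reasoning) else 0.0)

print(shaped_reward(True, "First, factor the polynomial into linear terms."))  # 1.1
```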
-
AllTracker: Efficient Dense Point Tracking at High Resolution
Authors:
Adam W. Harley,
Yang You,
Xinglong Sun,
Yang Zheng,
Nikhil Raghuraman,
Yunqi Gu,
Sheldon Liang,
Wen-Hsuan Chu,
Achal Dave,
Pavel Tokmakov,
Suya You,
Rares Ambrus,
Katerina Fragkiadaki,
Leonidas J. Guibas
Abstract:
We introduce AllTracker: a model that estimates long-range point tracks by way of estimating the flow field between a query frame and every other frame of a video. Unlike existing point tracking methods, our approach delivers high-resolution and dense (all-pixel) correspondence fields, which can be visualized as flow maps. Unlike existing optical flow methods, our approach corresponds one frame to hundreds of subsequent frames, rather than just the next frame. We develop a new architecture for this task, blending techniques from existing work in optical flow and point tracking: the model performs iterative inference on low-resolution grids of correspondence estimates, propagating information spatially via 2D convolution layers, and propagating information temporally via pixel-aligned attention layers. The model is fast and parameter-efficient (16 million parameters), and delivers state-of-the-art point tracking accuracy at high resolution (i.e., tracking 768x1024 pixels on a 40GB GPU). A benefit of our design is that we can train jointly on optical flow datasets and point tracking datasets, and we find that doing so is crucial for top performance. We provide an extensive ablation study of our architecture details and training recipe, making it clear which details matter most. Our code and model weights are available at https://alltracker.github.io
Submitted 1 August, 2025; v1 submitted 8 June, 2025;
originally announced June 2025.
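The propagation pattern described in the abstract, sketched as a single block: 2D convolutions mix information spatially within each frame's correspondence grid, and pixel-aligned attention mixes information across frames at each grid location. Shapes and layer sizes are illustrative assumptions, not AllTracker's actual architecture.

```python
import torch
import torch.nn as nn

class SpatialTemporalBlock(nn.Module):
    """One iteration of spatial (conv) and temporal (attention) propagation
    over a low-resolution grid of correspondence features."""
    def __init__(self, channels: int = 64, heads: int = 4):
        super().__init__()
        self.spatial = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.temporal = nn.MultiheadAttention(channels, heads, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        B, T, C, H, W = x.shape                # (batch, frames, feat, grid H, grid W)
        # Spatial: convolve each frame's grid independently.
        x = self.spatial(x.reshape(B * T, C, H, W)).reshape(B, T, C, H, W)
        # Temporal: attend over the T frames at each (h, w) grid location.
        seq = x.permute(0, 3, 4, 1, 2).reshape(B * H * W, T, C)
        attn, _ = self.temporal(seq, seq, seq)
        seq = seq + attn                       # residual update
        return seq.reshape(B, H, W, T, C).permute(0, 3, 4, 1, 2)

block = SpatialTemporalBlock()
feats = torch.randn(1, 8, 64, 24, 32)          # 8 frames on a 24x32 grid
print(block(feats).shape)                      # torch.Size([1, 8, 64, 24, 32])
```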
-
Pixtral 12B
Authors:
Pravesh Agrawal,
Szymon Antoniak,
Emma Bou Hanna,
Baptiste Bout,
Devendra Chaplot,
Jessica Chudnovsky,
Diogo Costa,
Baudouin De Monicault,
Saurabh Garg,
Theophile Gervet,
Soham Ghosh,
Amélie Héliou,
Paul Jacob,
Albert Q. Jiang,
Kartik Khandelwal,
Timothée Lacroix,
Guillaume Lample,
Diego Las Casas,
Thibaut Lavril,
Teven Le Scao,
Andy Lo,
William Marshall,
Louis Martin,
Arthur Mensch,
Pavankumar Muddireddy
, et al. (17 additional authors not shown)
Abstract:
We introduce Pixtral-12B, a 12-billion-parameter multimodal language model. Pixtral-12B is trained to understand both natural images and documents, achieving leading performance on various multimodal benchmarks and surpassing a number of larger models. Unlike many open-source models, Pixtral is also a cutting-edge text model for its size, and does not compromise on natural language performance to excel in multimodal tasks. Pixtral uses a new vision encoder trained from scratch, which allows it to ingest images at their natural resolution and aspect ratio. This gives users flexibility over the number of tokens used to process an image. Pixtral is also able to process any number of images in its long context window of 128K tokens. Pixtral-12B substantially outperforms other open models of similar size (Llama-3.2 11B & Qwen-2-VL 7B). It also outperforms much larger open models like Llama-3.2 90B while being 7x smaller. We further contribute an open-source benchmark, MM-MT-Bench, for evaluating vision-language models in practical scenarios, and provide detailed analysis and code for standardized evaluation protocols for multimodal LLMs. Pixtral-12B is released under the Apache 2.0 license.
Submitted 10 October, 2024; v1 submitted 9 October, 2024;
originally announced October 2024.
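Because the encoder ingests images at native resolution and aspect ratio, the image token count is a function of the resolution the user supplies, which is the flexibility the abstract mentions. A small sketch; the 16x16 patch size is an assumption for illustration.

```python
import math

def image_token_count(width: int, height: int, patch: int = 16) -> int:
    """Patch tokens for an image processed at its own resolution and
    aspect ratio (patch size is an illustrative assumption)."""
    return math.ceil(width / patch) * math.ceil(height / patch)

# Downscaling trades visual detail for a smaller token budget.
print(image_token_count(1024, 768))  # 3072 tokens at full resolution
print(image_token_count(512, 384))   # 768 tokens at a quarter of the area
```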
-
Support-Set Context Matters for Bongard Problems
Authors:
Nikhil Raghuraman,
Adam W. Harley,
Leonidas Guibas
Abstract:
Current machine learning methods struggle to solve Bongard problems, which are a type of IQ test that requires deriving an abstract "concept" from a set of positive and negative "support" images, and then classifying whether or not a new query image depicts the key concept. On Bongard-HOI, a benchmark for natural-image Bongard problems, most existing methods have reached at best 69% accuracy (where chance is 50%). Low accuracy is often attributed to neural nets' lack of ability to find human-like symbolic rules. In this work, we point out that many existing methods are forfeiting accuracy due to a much simpler problem: they do not adapt image features given information contained in the support set as a whole, and rely instead on information extracted from individual supports. This is a critical issue, because the "key concept" in a typical Bongard problem can often only be distinguished using multiple positives and multiple negatives. We explore simple methods to incorporate this context and show substantial gains over prior works, leading to new state-of-the-art accuracy on Bongard-LOGO (75.3%) and Bongard-HOI (76.4%) compared to methods with equivalent vision backbone architectures and strong performance on the original Bongard problem set (60.8%).
Submitted 30 November, 2024; v1 submitted 6 September, 2023;
originally announced September 2023.
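A minimal way to let the support set matter as a whole, in the spirit of the paper's argument rather than its exact method: normalize features by whole-support statistics, then compare the query to positive and negative prototypes.

```python
import torch

def classify_query(pos: torch.Tensor, neg: torch.Tensor, query: torch.Tensor) -> bool:
    """pos, neg: (N, D) support features; query: (D,).
    Returns True if the query is judged to depict the concept."""
    support = torch.cat([pos, neg], dim=0)
    mu, sigma = support.mean(0), support.std(0) + 1e-6   # whole-support context
    pos_z, neg_z, q_z = (pos - mu) / sigma, (neg - mu) / sigma, (query - mu) / sigma
    d_pos = (q_z - pos_z.mean(0)).norm()   # distance to positive prototype
    d_neg = (q_z - neg_z.mean(0)).norm()   # distance to negative prototype
    return bool(d_pos < d_neg)

pos = torch.randn(6, 128) + 1.0            # toy features: positives shifted
neg = torch.randn(6, 128)
print(classify_query(pos, neg, torch.randn(128) + 1.0))  # likely True
```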
-
Federated Learning on Patient Data for Privacy-Protecting Polycystic Ovary Syndrome Treatment
Authors:
Lucia Morris,
Tori Qiu,
Nikhil Raghuraman
Abstract:
The field of women's endocrinology has trailed behind data-driven medical solutions, largely due to concerns over the privacy of patient data. Valuable data points about hormone levels or menstrual cycling could expose patients who suffer from comorbidities or terminate a pregnancy, violating their privacy. We explore the application of Federated Learning (FL) to predict the optimal drug for patients with polycystic ovary syndrome (PCOS). PCOS is a serious hormonal disorder impacting millions of women worldwide, yet it is poorly understood, and its research is stunted by a lack of patient data. We demonstrate that a variety of FL approaches succeed on a synthetic PCOS patient dataset. Our proposed FL models are a tool to access massive quantities of diverse data and identify the most effective treatment option while providing PCOS patients with privacy guarantees.
Submitted 22 August, 2023;
originally announced August 2023.
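For a concrete sense of the setup, a minimal FedAvg sketch in which each clinic trains locally on private patient records and only model weights are averaged centrally; the model, data, and schedule are toy assumptions, not the paper's configuration.

```python
import copy
import torch
import torch.nn as nn

def local_update(model: nn.Module, x, y, epochs: int = 1, lr: float = 0.1) -> dict:
    """Train a private copy on local data; only the weights leave the site."""
    model = copy.deepcopy(model)
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()
    return model.state_dict()

def fed_avg(states: list) -> dict:
    """Average client weights; raw patient data never leaves the client."""
    avg = copy.deepcopy(states[0])
    for key in avg:
        avg[key] = torch.stack([s[key] for s in states]).mean(0)
    return avg

global_model = nn.Linear(10, 3)   # 10 synthetic features -> 3 drug options
clients = [(torch.randn(32, 10), torch.randint(0, 3, (32,))) for _ in range(4)]
for _ in range(5):                # communication rounds
    states = [local_update(global_model, x, y) for x, y in clients]
    global_model.load_state_dict(fed_avg(states))
```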
-
Influence of Mobility Restrictions on Transmission of COVID-19 in the state of Maryland, USA
Authors:
Nandini Raghuraman,
Kartik Kaushik
Abstract:
Background: The novel coronavirus, COVID-19, was first detected in the United States in January 2020. To curb the spread of the disease in mid-March, different states issued mandatory stay-at-home (SAH) orders. These nonpharmaceutical interventions were mandated based on prior experiences, such as the 1918 influenza epidemic. Hence, we decided to study the impact of mobility restrictions on reducing COVID-19 transmission. Methods: We designed an ecological time series study with mobility patterns in the state of Maryland for March-December 2020 as our exposure variable and COVID-19 hospitalizations for the same period as our outcome variable. We built an Extreme Gradient Boosting (XGBoost) ensemble machine learning model and regressed lagged COVID-19 hospitalizations on mobility volume for different regions of Maryland. Results: We found an 18% increase in COVID-19 hospitalizations when mobility increased by a factor of five, and a 43% increase when mobility further increased by a factor of ten. Conclusion: The findings of our study demonstrate a positive linear relationship between mobility and the incidence of COVID-19 cases. These findings are partially consistent with other studies suggesting the benefits of mobility restrictions, although a more detailed approach is needed to precisely understand the benefits and limitations of mobility restrictions as part of a response to the COVID-19 pandemic.
Submitted 1 December, 2021; v1 submitted 24 September, 2021;
originally announced September 2021.
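The study design, sketched on synthetic data: regress lagged hospitalizations on mobility volume with a gradient-boosted ensemble. The lag, hyperparameters, and data-generating process below are illustrative assumptions, not the study's actual configuration.

```python
import numpy as np
from xgboost import XGBRegressor

rng = np.random.default_rng(0)
days = 280                                  # roughly March-December 2020
mobility = rng.uniform(0, 10, size=days)    # daily mobility volume index
LAG = 14                                    # assumed exposure-to-admission lag
hospitalizations = 5 + 2.0 * np.roll(mobility, LAG) + rng.normal(0, 1, days)

# Align features and target: today's admissions vs. mobility LAG days earlier.
X = mobility[:-LAG].reshape(-1, 1)
y = hospitalizations[LAG:]

model = XGBRegressor(n_estimators=200, max_depth=3)
model.fit(X, y)
print(model.predict(np.array([[5.0], [10.0]])))  # higher mobility, more admissions
```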