-
Nemotron 3 Super: Open, Efficient Mixture-of-Experts Hybrid Mamba-Transformer Model for Agentic Reasoning
Authors:
NVIDIA,
Aakshita Chandiramani,
Aaron Blakeman,
Abdullahi Olaoye,
Abhibha Gupta,
Abhilash Somasamudramath,
Abhinav Khattar,
Adeola Adesoba,
Adi Renduchintala,
Adil Asif,
Aditya Agrawal,
Aditya Vavre,
Ahmad Kiswani,
Aishwarya Padmakumar,
Ajay Hotchandani,
Akanksha Shukla,
Akhiad Bercovich,
Aleksander Ficek,
Aleksandr Shaposhnikov,
Alex Gronskiy,
Alex Kondratenko,
Alex Neefus,
Alex Steiner,
Alex Yang
, et al. (522 additional authors not shown)
Abstract:
We describe the pre-training, post-training, and quantization of Nemotron 3 Super, a 120 billion (active 12 billion) parameter hybrid Mamba-Attention Mixture-of-Experts model. Nemotron 3 Super is the first model in the Nemotron 3 family to 1) be pre-trained in NVFP4, 2) leverage LatentMoE, a new Mixture-of-Experts architecture that optimizes for both accuracy per FLOP and accuracy per parameter, and 3) include MTP layers for inference acceleration through native speculative decoding. We pre-trained Nemotron 3 Super on 25 trillion tokens, followed by post-training with supervised fine-tuning (SFT) and reinforcement learning (RL). The final model supports up to 1M context length and achieves accuracy comparable to GPT-OSS-120B and Qwen3.5-122B on common benchmarks, while delivering up to 2.2x and 7.5x higher inference throughput than those models, respectively. Nemotron 3 Super datasets, along with the base, post-trained, and quantized checkpoints, are open-sourced on Hugging Face.
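As a rough illustration of what the MTP layers enable, here is a minimal Python sketch of speculative decoding with a multi-token-prediction draft head: the cheap head proposes a few tokens and the full model verifies them in a single pass, keeping the longest agreeing prefix. The callables, the greedy acceptance rule, and the toy usage are assumptions for illustration, not the released Nemotron 3 Super implementation.

```python
# Hedged sketch of MTP-style speculative decoding: a cheap draft head proposes k
# tokens, the full model verifies them in one pass, and the longest agreeing
# prefix is accepted. Placeholder callables, not the Nemotron 3 Super API.
from typing import Callable, List

def speculative_step(prefix: List[int],
                     draft_next_k: Callable[[List[int], int], List[int]],
                     full_model_greedy: Callable[[List[int]], List[int]],
                     k: int = 4) -> List[int]:
    """One decode step: draft k tokens, verify with the full model, accept the
    matching prefix, and take the full model's token at the first mismatch."""
    draft = draft_next_k(prefix, k)                        # MTP head proposes k tokens
    # The full model returns its greedy next-token choice at every input position;
    # slicing from len(prefix) - 1 gives its predictions for the k draft slots
    # plus one bonus position.
    target = full_model_greedy(prefix + draft)[len(prefix) - 1:]
    accepted: List[int] = []
    for d, t in zip(draft, target):
        accepted.append(t if d != t else d)
        if d != t:
            break                                          # stop at the first disagreement
    else:
        accepted.append(target[k])                         # all drafts accepted: bonus token
    return prefix + accepted

# Toy usage: the "full model" continues the sequence by +1; the draft head gets
# its last proposal wrong, so three drafts are accepted plus one correction.
full = lambda ids: [i + 1 for i in ids]
draft = lambda ids, k: [ids[-1] + j for j in range(1, k)] + [0]
print(speculative_step([1, 2, 3], draft, full))            # -> [1, 2, 3, 4, 5, 6, 7]
```

Every accepted draft token saves a full-model decode step, which is one common route to throughput gains of the kind quoted above.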
Submitted 14 April, 2026;
originally announced April 2026.
-
NVIDIA Nemotron 3: Efficient and Open Intelligence
Authors:
NVIDIA,
Aaron Blakeman,
Aaron Grattafiori,
Aarti Basant,
Abhibha Gupta,
Abhinav Khattar,
Adi Renduchintala,
Aditya Vavre,
Akanksha Shukla,
Akhiad Bercovich,
Aleksander Ficek,
Aleksandr Shaposhnikov,
Alex Kondratenko,
Alexander Bukharin,
Alexandre Milesi,
Ali Taghibakhshi,
Alisa Liu,
Amelia Barton,
Ameya Sunil Mahabaleshwarkar,
Amir Klein,
Amit Zuker,
Amnon Geifman,
Amy Shen,
Anahita Bhiwandiwalla
, et al. (334 additional authors not shown)
Abstract:
We introduce the Nemotron 3 family of models: Nano, Super, and Ultra. These models deliver strong agentic, reasoning, and conversational capabilities. The Nemotron 3 family uses a Mixture-of-Experts hybrid Mamba-Transformer architecture to provide best-in-class throughput and context lengths of up to 1M tokens. Super and Ultra models are trained with NVFP4 and incorporate LatentMoE, a novel approach that improves model quality. The two larger models also include MTP layers for faster text generation. All Nemotron 3 models are post-trained with multi-environment reinforcement learning, enabling reasoning and multi-step tool use, and they support granular reasoning budget control. Nano, the smallest model, outperforms comparable models in accuracy while remaining extremely cost-efficient for inference. Super is optimized for collaborative agents and high-volume workloads such as IT ticket automation. Ultra, the largest model, provides state-of-the-art accuracy and reasoning performance. Nano is released together with its technical report and this white paper, while Super and Ultra will follow in the coming months. We will openly release the model weights, pre- and post-training software, recipes, and all data for which we hold redistribution rights.
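The reasoning budget control mentioned above can be pictured as a decode-time cap on "thinking" tokens. Below is a hedged Python sketch under assumed tag strings and a generic generate_one_token/tokenizer interface; the actual Nemotron 3 control mechanism may differ.

```python
# Hypothetical sketch of granular reasoning-budget control: cap the number of
# "thinking" tokens, then force the model to close its reasoning span and answer.
# Tag strings, generate_one_token(), and the tokenizer interface are assumptions,
# not the released API.
THINK_OPEN, THINK_CLOSE = "<think>", "</think>"

def generate_with_budget(prompt, generate_one_token, tokenizer,
                         think_budget=256, max_new=1024):
    """Greedy decode; once `think_budget` reasoning tokens are spent, append the
    closing tag so the remaining budget goes to the final answer."""
    ids = tokenizer.encode(prompt + THINK_OPEN)
    think_spent, closed = 0, False
    for _ in range(max_new):
        tok = generate_one_token(ids)                   # model's next-token choice
        ids.append(tok)
        if not closed:
            think_spent += 1
            if THINK_CLOSE in tokenizer.decode(ids):
                closed = True                           # model closed its reasoning itself
            elif think_spent >= think_budget:
                ids += tokenizer.encode(THINK_CLOSE)    # budget exhausted: force the close
                closed = True
        if tok == tokenizer.eos_token_id:
            break
    return tokenizer.decode(ids)
```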
Submitted 23 December, 2025;
originally announced December 2025.
-
Nemotron 3 Nano: Open, Efficient Mixture-of-Experts Hybrid Mamba-Transformer Model for Agentic Reasoning
Authors:
NVIDIA,
Aaron Blakeman,
Aaron Grattafiori,
Aarti Basant,
Abhibha Gupta,
Abhinav Khattar,
Adi Renduchintala,
Aditya Vavre,
Akanksha Shukla,
Akhiad Bercovich,
Aleksander Ficek,
Aleksandr Shaposhnikov,
Alex Kondratenko,
Alexander Bukharin,
Alexandre Milesi,
Ali Taghibakhshi,
Alisa Liu,
Amelia Barton,
Ameya Sunil Mahabaleshwarkar,
Amir Klein,
Amit Zuker,
Amnon Geifman,
Amy Shen,
Anahita Bhiwandiwalla
, et al. (289 additional authors not shown)
Abstract:
We present Nemotron 3 Nano 30B-A3B, a Mixture-of-Experts hybrid Mamba-Transformer language model. Nemotron 3 Nano was pre-trained on 25 trillion text tokens, including more than 3 trillion unique tokens that are new relative to Nemotron 2, followed by supervised fine-tuning and large-scale RL on diverse environments. Nemotron 3 Nano achieves better accuracy than our previous-generation Nemotron 2 Nano while activating less than half of the parameters per forward pass. It achieves up to 3.3x higher inference throughput than similarly sized open models like GPT-OSS-20B and Qwen3-30B-A3B-Thinking-2507, while also being more accurate on popular benchmarks. Nemotron 3 Nano demonstrates enhanced agentic, reasoning, and chat abilities and supports context lengths up to 1M tokens. We release both our pre-trained Nemotron 3 Nano 30B-A3B Base and post-trained Nemotron 3 Nano 30B-A3B checkpoints on Hugging Face.
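The "30B-A3B" naming reflects sparse expert activation: only a small top-k subset of experts runs for each token, so roughly 3B of the 30B parameters are active per forward pass. The sketch below shows generic top-k MoE routing with made-up sizes; it is not the Nemotron 3 Nano router configuration.

```python
# Minimal sketch of top-k Mixture-of-Experts routing, the mechanism that lets a
# large-total-parameter model activate only a fraction of its weights per token.
# Expert count, top_k, and dimensions are illustrative, not the released config.
import numpy as np

def moe_forward(x, router_w, experts, top_k=2):
    """x: (d,) token activation; router_w: (n_experts, d); experts: list of callables.
    Routes the token to its top_k experts and mixes their outputs by the
    renormalized router probabilities, so only top_k experts actually run."""
    logits = router_w @ x                              # (n_experts,)
    top = np.argsort(logits)[-top_k:]                  # indices of selected experts
    probs = np.exp(logits[top] - logits[top].max())
    probs /= probs.sum()                               # renormalize over selected experts
    return sum(p * experts[i](x) for p, i in zip(probs, top))

# Toy usage: 8 experts, only 2 run per token.
d, n_experts = 16, 8
rng = np.random.default_rng(0)
experts = [(lambda W: (lambda x: W @ x))(rng.standard_normal((d, d)))
           for _ in range(n_experts)]
router_w = rng.standard_normal((n_experts, d))
y = moe_forward(rng.standard_normal(d), router_w, experts)
```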
Submitted 23 December, 2025;
originally announced December 2025.
-
Llama-Nemotron: Efficient Reasoning Models
Authors:
Akhiad Bercovich,
Itay Levy,
Izik Golan,
Mohammad Dabbah,
Ran El-Yaniv,
Omri Puny,
Ido Galil,
Zach Moshe,
Tomer Ronen,
Najeeb Nabwani,
Ido Shahaf,
Oren Tropp,
Ehud Karpas,
Ran Zilberstein,
Jiaqi Zeng,
Soumye Singhal,
Alexander Bukharin,
Yian Zhang,
Tugrul Konuk,
Gerald Shen,
Ameya Sunil Mahabaleshwarkar,
Bilal Kartal,
Yoshi Suhara,
Olivier Delalleau,
Zijia Chen
, et al. (111 additional authors not shown)
Abstract:
We introduce the Llama-Nemotron series of models, an open family of heterogeneous reasoning models that deliver exceptional reasoning capabilities, inference efficiency, and an open license for enterprise use. The family comes in three sizes -- Nano (8B), Super (49B), and Ultra (253B) -- and performs competitively with state-of-the-art reasoning models such as DeepSeek-R1 while offering superior inference throughput and memory efficiency. In this report, we discuss the training procedure for these models, which entails using neural architecture search from Llama 3 models for accelerated inference, knowledge distillation, and continued pretraining, followed by a reasoning-focused post-training stage consisting of two main parts: supervised fine-tuning and large-scale reinforcement learning. Llama-Nemotron models are the first open-source models to support a dynamic reasoning toggle, allowing users to switch between standard chat and reasoning modes during inference. To further support open research and facilitate model development, we provide the following resources: 1. We release the Llama-Nemotron reasoning models -- LN-Nano, LN-Super, and LN-Ultra -- under the commercially permissive NVIDIA Open Model License Agreement. 2. We release the complete post-training dataset: Llama-Nemotron-Post-Training-Dataset. 3. We also release our training codebases: NeMo, NeMo-Aligner, and Megatron-LM.
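The dynamic reasoning toggle is exposed at inference time rather than through separate checkpoints. Below is a hedged usage sketch with Hugging Face transformers; the model id and the system-prompt control strings are assumptions drawn from the public model cards and should be verified there.

```python
# Sketch of toggling reasoning mode at inference time via the system prompt.
# The model id and the control strings below are assumptions; check the model card.
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "nvidia/Llama-3_1-Nemotron-Nano-8B-v1"      # assumed Llama-Nemotron checkpoint
tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

def ask(question: str, reasoning: bool) -> str:
    system = "detailed thinking on" if reasoning else "detailed thinking off"  # assumed toggle strings
    messages = [{"role": "system", "content": system},
                {"role": "user", "content": question}]
    inputs = tok.apply_chat_template(messages, add_generation_prompt=True,
                                     return_tensors="pt").to(model.device)
    out = model.generate(inputs, max_new_tokens=1024)
    return tok.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True)

print(ask("What is 17 * 24?", reasoning=True))   # reasoning mode: step-by-step trace
print(ask("What is 17 * 24?", reasoning=False))  # chat mode: direct answer
```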
Submitted 9 September, 2025; v1 submitted 1 May, 2025;
originally announced May 2025.
-
Puzzle: Distillation-Based NAS for Inference-Optimized LLMs
Authors:
Akhiad Bercovich,
Tomer Ronen,
Talor Abramovich,
Nir Ailon,
Nave Assaf,
Mohammad Dabbah,
Ido Galil,
Amnon Geifman,
Yonatan Geifman,
Izhak Golan,
Netanel Haber,
Ehud Karpas,
Roi Koren,
Itay Levy,
Pavlo Molchanov,
Shahar Mor,
Zach Moshe,
Najeeb Nabwani,
Omri Puny,
Ran Rubin,
Itamar Schen,
Ido Shahaf,
Oren Tropp,
Omer Ullman Argov,
Ran Zilberstein
, et al. (1 additional author not shown)
Abstract:
Large language models (LLMs) offer remarkable capabilities, yet their high inference costs restrict wider adoption. While increasing parameter counts improves accuracy, it also widens the gap between state-of-the-art capabilities and practical deployability. We present Puzzle, a hardware-aware framework that accelerates the inference of LLMs while preserving their capabilities. Using neural architecture search (NAS) at large scale, Puzzle optimizes models with tens of billions of parameters. Our approach utilizes blockwise local knowledge distillation (BLD) for parallel architecture exploration and employs mixed-integer programming for precise constraint optimization.
We showcase our framework's impact via Llama-3.1-Nemotron-51B-Instruct (Nemotron-51B) and Llama-3.3-Nemotron-49B, two publicly available models derived from Llama-70B-Instruct. Both models achieve a 2.17x inference throughput speedup, fitting on a single NVIDIA H100 GPU while retaining 98.4% of the original model's benchmark accuracies. These are the most accurate models supporting single H100 GPU inference with large batch sizes, despite training on at most 45B tokens, far fewer than the 15T used to train Llama-70B. Lastly, we show that lightweight alignment on these derived models allows them to surpass the parent model in specific capabilities. Our work establishes that powerful LLMs can be optimized for efficient deployment with only negligible loss in quality, underscoring that inference performance, not parameter count alone, should guide model selection.
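The mixed-integer programming step can be pictured as choosing exactly one block variant per layer to maximize estimated quality under a hardware budget, after blockwise distillation has scored every candidate. The PuLP sketch below uses made-up scores, costs, and budget; it only illustrates this kind of selection formulation, not Puzzle's actual objective or constraints.

```python
# Hedged sketch of a Puzzle-style selection step: pick one block variant per layer
# to maximize estimated quality under a hardware budget. All numbers are made up.
import pulp

# candidates[layer] = list of (quality_score, inference_cost) for each variant
candidates = {
    0: [(1.00, 4.0), (0.97, 2.5), (0.90, 1.2)],
    1: [(1.00, 4.0), (0.95, 2.0), (0.88, 1.0)],
    2: [(1.00, 4.0), (0.99, 3.0), (0.85, 0.8)],
}
budget = 8.0  # e.g., a per-token latency or memory budget on the target GPU

prob = pulp.LpProblem("puzzle_block_selection", pulp.LpMaximize)
x = {(l, v): pulp.LpVariable(f"x_{l}_{v}", cat="Binary")
     for l, variants in candidates.items() for v in range(len(variants))}

# Objective: total estimated quality of the assembled model.
prob += pulp.lpSum(candidates[l][v][0] * x[l, v] for (l, v) in x)
# Exactly one variant per layer.
for l, variants in candidates.items():
    prob += pulp.lpSum(x[l, v] for v in range(len(variants))) == 1
# Stay within the hardware budget.
prob += pulp.lpSum(candidates[l][v][1] * x[l, v] for (l, v) in x) <= budget

prob.solve(pulp.PULP_CBC_CMD(msg=0))
chosen = {l: v for (l, v) in x if pulp.value(x[l, v]) > 0.5}
print(chosen)  # one variant index per layer satisfying the budget
```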
Submitted 3 June, 2025; v1 submitted 28 November, 2024;
originally announced November 2024.