-
Scalable Training of Mixture-of-Experts Models with Megatron Core
Authors:
Zijie Yan,
Hongxiao Bai,
Xin Yao,
Dennis Liu,
Tong Liu,
Hongbin Liu,
Pingtian Li,
Evan Wu,
Shiqing Fan,
Li Tao,
Robin Zhang,
Yuzhong Wang,
Shifang Xu,
Jack Chang,
Xuwen Chen,
Kunlun Li,
Yan Bai,
Gao Deng,
Nan Zheng,
Vijay Anand Korthikanti,
Abhinav Khattar,
Ethan He,
Soham Govande,
Sangkug Lym,
Zhongbo Zhu, et al. (20 additional authors not shown)
Abstract:
Scaling Mixture-of-Experts (MoE) training introduces systems challenges absent in dense models. Because each token activates only a subset of experts, this sparsity allows total parameters to grow much faster than per-token computation, creating coupled constraints across memory, communication, and computation. Optimizing one dimension often shifts pressure to another, demanding co-design across the full system stack.
We address these challenges for MoE training through integrated optimizations spanning memory (fine-grained recomputation, offloading, etc.), communication (optimized dispatchers, overlapping, etc.), and computation (Grouped GEMM, fusions, CUDA Graphs, etc.). The framework also provides Parallel Folding for flexible multi-dimensional parallelism, low-precision training support for FP8 and NVFP4, and efficient long-context training. On NVIDIA GB300 and GB200, it achieves 1,233/1,048 TFLOPS/GPU for DeepSeek-V3-685B and 974/919 TFLOPS/GPU for Qwen3-235B. As a performant, scalable, and production-ready open-source solution, it has been used across academia and industry for training MoE models ranging from billions to trillions of parameters on clusters scaling up to thousands of GPUs.
This report explains how these techniques work, their trade-offs, and their interactions at the systems level, providing practical guidance for scaling MoE models with Megatron Core.
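The coupling between total and per-token-active parameters described in the abstract can be made concrete with a small back-of-the-envelope sketch. This is not Megatron Core code; the layer sizes and expert counts below are illustrative assumptions, and each expert is modeled as a simple two-matrix FFN.

```python
# Minimal sketch (not Megatron Core's implementation): per-token active
# parameters under top-k expert routing vs. the total parameter count.

def moe_param_counts(hidden, ffn_hidden, num_experts, top_k):
    """Return (total_expert_params, active_expert_params_per_token)
    for a simple two-matrix FFN expert (up + down projection)."""
    params_per_expert = 2 * hidden * ffn_hidden
    total = num_experts * params_per_expert      # all experts exist in memory
    active = top_k * params_per_expert           # only top_k experts run per token
    return total, active

# Hypothetical config: 64 experts, 2 active per token.
total, active = moe_param_counts(hidden=4096, ffn_hidden=1024,
                                 num_experts=64, top_k=2)
print(total // active)  # total parameters exceed per-token compute by num_experts / top_k
```

With these illustrative numbers the model holds 32x more expert parameters than any single token touches, which is exactly why memory pressure and all-to-all communication grow faster than per-token FLOPs.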
Submitted 10 March, 2026; v1 submitted 8 March, 2026;
originally announced March 2026.
-
LatentMoE: Toward Optimal Accuracy per FLOP and Parameter in Mixture of Experts
Authors:
Venmugil Elango,
Nidhi Bhatia,
Roger Waleffe,
Rasoul Shafipour,
Tomer Asida,
Abhinav Khattar,
Nave Assaf,
Maximilian Golub,
Joey Guman,
Tiyasa Mitra,
Ritchie Zhao,
Ritika Borkar,
Ran Zilberstein,
Mostofa Patwary,
Mohammad Shoeybi,
Bita Rouhani
Abstract:
Mixture of Experts (MoEs) have become a central component of many state-of-the-art open-source and proprietary large language models. Despite their widespread adoption, it remains unclear how close existing MoE architectures are to optimal with respect to inference cost, as measured by accuracy per floating-point operation and per parameter. In this work, we revisit MoE design from a hardware-software co-design perspective, grounded in empirical and theoretical considerations. We characterize key performance bottlenecks across diverse deployment regimes, spanning offline high-throughput execution and online, latency-critical inference. Guided by these insights, we introduce LatentMoE, a new model architecture resulting from systematic design exploration and optimized for maximal accuracy per unit of compute. Empirical design space exploration at scales of up to 95B parameters and over a 1T-token training horizon, together with supporting theoretical analysis, shows that LatentMoE consistently outperforms standard MoE architectures in terms of accuracy per FLOP and per parameter. Given its strong performance, the LatentMoE architecture has been adopted by the flagship Nemotron-3 Super and Ultra models and scaled to substantially larger regimes, including longer token horizons and larger model sizes, as reported in Nvidia et al. (arXiv:2512.20856).
Submitted 25 January, 2026;
originally announced January 2026.
-
NVIDIA Nemotron 3: Efficient and Open Intelligence
Authors:
NVIDIA,
Aaron Blakeman,
Aaron Grattafiori,
Aarti Basant,
Abhibha Gupta,
Abhinav Khattar,
Adi Renduchintala,
Aditya Vavre,
Akanksha Shukla,
Akhiad Bercovich,
Aleksander Ficek,
Aleksandr Shaposhnikov,
Alex Kondratenko,
Alexander Bukharin,
Alexandre Milesi,
Ali Taghibakhshi,
Alisa Liu,
Amelia Barton,
Ameya Sunil Mahabaleshwarkar,
Amir Klein,
Amit Zuker,
Amnon Geifman,
Amy Shen,
Anahita Bhiwandiwalla, et al. (334 additional authors not shown)
Abstract:
We introduce the Nemotron 3 family of models - Nano, Super, and Ultra. These models deliver strong agentic, reasoning, and conversational capabilities. The Nemotron 3 family uses a Mixture-of-Experts hybrid Mamba-Transformer architecture to provide best-in-class throughput and context lengths of up to 1M tokens. Super and Ultra models are trained with NVFP4 and incorporate LatentMoE, a novel approach that improves model quality. The two larger models also include MTP layers for faster text generation. All Nemotron 3 models are post-trained using multi-environment reinforcement learning, enabling reasoning, multi-step tool use, and granular reasoning budget control. Nano, the smallest model, outperforms comparable models in accuracy while remaining extremely cost-efficient for inference. Super is optimized for collaborative agents and high-volume workloads such as IT ticket automation. Ultra, the largest model, provides state-of-the-art accuracy and reasoning performance. Nano is released together with its technical report and this white paper, while Super and Ultra will follow in the coming months. We will openly release the model weights, pre- and post-training software, recipes, and all data for which we hold redistribution rights.
Submitted 23 December, 2025;
originally announced December 2025.
-
Nemotron 3 Nano: Open, Efficient Mixture-of-Experts Hybrid Mamba-Transformer Model for Agentic Reasoning
Authors:
NVIDIA,
Aaron Blakeman,
Aaron Grattafiori,
Aarti Basant,
Abhibha Gupta,
Abhinav Khattar,
Adi Renduchintala,
Aditya Vavre,
Akanksha Shukla,
Akhiad Bercovich,
Aleksander Ficek,
Aleksandr Shaposhnikov,
Alex Kondratenko,
Alexander Bukharin,
Alexandre Milesi,
Ali Taghibakhshi,
Alisa Liu,
Amelia Barton,
Ameya Sunil Mahabaleshwarkar,
Amir Klein,
Amit Zuker,
Amnon Geifman,
Amy Shen,
Anahita Bhiwandiwalla, et al. (289 additional authors not shown)
Abstract:
We present Nemotron 3 Nano 30B-A3B, a Mixture-of-Experts hybrid Mamba-Transformer language model. Nemotron 3 Nano was pretrained on 25 trillion text tokens, including more than 3 trillion new unique tokens over Nemotron 2, followed by supervised fine-tuning and large-scale RL on diverse environments. Nemotron 3 Nano achieves better accuracy than our previous-generation Nemotron 2 Nano while activating less than half of the parameters per forward pass. It achieves up to 3.3x higher inference throughput than similarly-sized open models like GPT-OSS-20B and Qwen3-30B-A3B-Thinking-2507, while also being more accurate on popular benchmarks. Nemotron 3 Nano demonstrates enhanced agentic, reasoning, and chat abilities and supports context lengths up to 1M tokens. We release both our pretrained Nemotron 3 Nano 30B-A3B Base and post-trained Nemotron 3 Nano 30B-A3B checkpoints on Hugging Face.
Submitted 23 December, 2025;
originally announced December 2025.
-
NVIDIA Nemotron Nano 2: An Accurate and Efficient Hybrid Mamba-Transformer Reasoning Model
Authors:
NVIDIA,
Aarti Basant,
Abhijit Khairnar,
Abhijit Paithankar,
Abhinav Khattar,
Adithya Renduchintala,
Aditya Malte,
Akhiad Bercovich,
Akshay Hazare,
Alejandra Rico,
Aleksander Ficek,
Alex Kondratenko,
Alex Shaposhnikov,
Alexander Bukharin,
Ali Taghibakhshi,
Amelia Barton,
Ameya Sunil Mahabaleshwarkar,
Amy Shen,
Andrew Tao,
Ann Guan,
Anna Shors,
Anubhav Mandarwal,
Arham Mehta,
Arun Venkatesan, et al. (192 additional authors not shown)
Abstract:
We introduce Nemotron-Nano-9B-v2, a hybrid Mamba-Transformer language model designed to increase throughput for reasoning workloads while achieving state-of-the-art accuracy compared to similarly-sized models. Nemotron-Nano-9B-v2 builds on the Nemotron-H architecture, in which the majority of the self-attention layers in the common Transformer architecture are replaced with Mamba-2 layers, to achieve improved inference speed when generating the long thinking traces needed for reasoning. We create Nemotron-Nano-9B-v2 by first pre-training a 12-billion-parameter model (Nemotron-Nano-12B-v2-Base) on 20 trillion tokens using an FP8 training recipe. After aligning Nemotron-Nano-12B-v2-Base, we employ the Minitron strategy to compress and distill the model with the goal of enabling inference on up to 128k tokens on a single NVIDIA A10G GPU (22 GiB of memory, bfloat16 precision). Compared to existing similarly-sized models (e.g., Qwen3-8B), we show that Nemotron-Nano-9B-v2 achieves on-par or better accuracy on reasoning benchmarks while achieving up to 6x higher inference throughput in reasoning settings like 8k input and 16k output tokens. We are releasing Nemotron-Nano-9B-v2, Nemotron-Nano-12B-v2-Base, and Nemotron-Nano-9B-v2-Base checkpoints along with the majority of our pre- and post-training datasets on Hugging Face.
Submitted 2 September, 2025; v1 submitted 20 August, 2025;
originally announced August 2025.
-
Llama-Nemotron: Efficient Reasoning Models
Authors:
Akhiad Bercovich,
Itay Levy,
Izik Golan,
Mohammad Dabbah,
Ran El-Yaniv,
Omri Puny,
Ido Galil,
Zach Moshe,
Tomer Ronen,
Najeeb Nabwani,
Ido Shahaf,
Oren Tropp,
Ehud Karpas,
Ran Zilberstein,
Jiaqi Zeng,
Soumye Singhal,
Alexander Bukharin,
Yian Zhang,
Tugrul Konuk,
Gerald Shen,
Ameya Sunil Mahabaleshwarkar,
Bilal Kartal,
Yoshi Suhara,
Olivier Delalleau,
Zijia Chen, et al. (111 additional authors not shown)
Abstract:
We introduce the Llama-Nemotron series of models, an open family of heterogeneous reasoning models that deliver exceptional reasoning capabilities, inference efficiency, and an open license for enterprise use. The family comes in three sizes -- Nano (8B), Super (49B), and Ultra (253B) -- and performs competitively with state-of-the-art reasoning models such as DeepSeek-R1 while offering superior inference throughput and memory efficiency. In this report, we discuss the training procedure for these models, which entails using neural architecture search from Llama 3 models for accelerated inference, knowledge distillation, and continued pretraining, followed by a reasoning-focused post-training stage consisting of two main parts: supervised fine-tuning and large scale reinforcement learning. Llama-Nemotron models are the first open-source models to support a dynamic reasoning toggle, allowing users to switch between standard chat and reasoning modes during inference. To further support open research and facilitate model development, we provide the following resources: 1. We release the Llama-Nemotron reasoning models -- LN-Nano, LN-Super, and LN-Ultra -- under the commercially permissive NVIDIA Open Model License Agreement. 2. We release the complete post-training dataset: Llama-Nemotron-Post-Training-Dataset. 3. We also release our training codebases: NeMo, NeMo-Aligner, and Megatron-LM.
Submitted 9 September, 2025; v1 submitted 1 May, 2025;
originally announced May 2025.
-
Nemotron-H: A Family of Accurate and Efficient Hybrid Mamba-Transformer Models
Authors:
NVIDIA,
Aaron Blakeman,
Aarti Basant,
Abhinav Khattar,
Adithya Renduchintala,
Akhiad Bercovich,
Aleksander Ficek,
Alexis Bjorlin,
Ali Taghibakhshi,
Amala Sanjay Deshmukh,
Ameya Sunil Mahabaleshwarkar,
Andrew Tao,
Anna Shors,
Ashwath Aithal,
Ashwin Poojary,
Ayush Dattagupta,
Balaram Buddharaju,
Bobby Chen,
Boris Ginsburg,
Boxin Wang,
Brandon Norick,
Brian Butterfield,
Bryan Catanzaro,
Carlo del Mundo, et al. (176 additional authors not shown)
Abstract:
As inference-time scaling becomes critical for enhanced reasoning capabilities, it is increasingly important to build models that are efficient to infer. We introduce Nemotron-H, a family of 8B and 56B/47B hybrid Mamba-Transformer models designed to reduce inference cost for a given accuracy level. To achieve this goal, we replace the majority of self-attention layers in the common Transformer model architecture with Mamba layers that perform constant computation and require constant memory per generated token. We show that Nemotron-H models offer either better or on-par accuracy compared to other similarly-sized state-of-the-art open-source Transformer models (e.g., Qwen-2.5-7B/72B and Llama-3.1-8B/70B), while being up to 3$\times$ faster at inference. To further increase inference speed and reduce the memory required at inference time, we created Nemotron-H-47B-Base from the 56B model using a new compression-via-pruning-and-distillation technique called MiniPuzzle. Nemotron-H-47B-Base achieves similar accuracy to the 56B model, but is 20% faster to infer. In addition, we introduce an FP8-based training recipe and show that it can achieve on-par results with BF16-based training. This recipe is used to train the 56B model. We are releasing Nemotron-H base model checkpoints with support in Hugging Face and NeMo.
Submitted 5 September, 2025; v1 submitted 4 April, 2025;
originally announced April 2025.
-
Upcycling Large Language Models into Mixture of Experts
Authors:
Ethan He,
Abhinav Khattar,
Ryan Prenger,
Vijay Korthikanti,
Zijie Yan,
Tong Liu,
Shiqing Fan,
Ashwath Aithal,
Mohammad Shoeybi,
Bryan Catanzaro
Abstract:
Upcycling pre-trained dense language models into sparse mixture-of-experts (MoE) models is an efficient approach to increase the model capacity of already trained models. However, optimal techniques for upcycling at scale remain unclear. In this work, we conduct an extensive study of upcycling methods and hyperparameters for billion-parameter-scale language models. We propose a novel "virtual group" initialization scheme and weight scaling approach to enable upcycling into fine-grained MoE architectures. Through ablations, we find that upcycling outperforms continued dense model training. In addition, we show that softmax-then-topK expert routing improves over the topK-then-softmax approach and that higher-granularity MoEs can help improve accuracy. Finally, we upcycled Nemotron-4 15B on 1T tokens and compared it to a continuously trained version of the same model on the same 1T tokens: the continuously trained model achieved 65.3% MMLU, whereas the upcycled model achieved 67.6%. Our results offer insights and best practices to effectively leverage upcycling for building MoE language models. Code is available.
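The two router orderings compared in the abstract (softmax-then-topK vs. topK-then-softmax) can be illustrated with a minimal pure-Python sketch. This is not the paper's implementation, and the logits and k below are arbitrary examples; it only shows how the order of operations changes the gate weights.

```python
# Hedged sketch of the two router orderings; real routers operate on
# batched logit tensors, not Python lists.
import math

def softmax(xs):
    m = max(xs)  # subtract max for numerical stability
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def softmax_then_topk(logits, k):
    # Normalize over ALL experts first, then keep the k largest probabilities.
    probs = softmax(logits)
    idx = sorted(range(len(logits)), key=lambda i: probs[i], reverse=True)[:k]
    return {i: probs[i] for i in idx}

def topk_then_softmax(logits, k):
    # Keep the k largest logits first, then normalize only over those.
    idx = sorted(range(len(logits)), key=lambda i: logits[i], reverse=True)[:k]
    probs = softmax([logits[i] for i in idx])
    return dict(zip(idx, probs))

logits = [2.0, 1.0, 0.5, -1.0]  # hypothetical per-expert router logits
a = softmax_then_topk(logits, k=2)
b = topk_then_softmax(logits, k=2)
# Same experts are selected either way, but the gate weights differ: the
# first variant's weights sum to less than 1 (mass absorbed by unselected
# experts is preserved), while the second always renormalizes to sum to 1.
```

Both variants pick the same experts here; the difference is purely in the gate magnitudes, which is the signal the paper reports as mattering for upcycled-MoE quality.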
Submitted 15 June, 2025; v1 submitted 9 October, 2024;
originally announced October 2024.
-
Curvy: A Parametric Cross-section based Surface Reconstruction
Authors:
Aradhya N. Mathur,
Apoorv Khattar,
Ojaswa Sharma
Abstract:
In this work, we present a novel approach for reconstructing shape point clouds from planar sparse cross-sections with the help of generative modeling. We present unique challenges pertaining to representation and reconstruction in this problem setting. Most methods in the classical literature lack the ability to generalize based on object class and employ complex mathematical machinery to reconstruct reliable surfaces. We present a simple learnable approach that generates a large number of points from a small number of input cross-sections over a large dataset. We represent the cross-sections with a compact parametric polyline representation based on adaptive splitting, and use a Graph Neural Network to reconstruct the underlying shape in an adaptive manner, reducing the dependence on the number of cross-sections provided.
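The abstract names adaptive splitting but does not spell out the scheme. A common approach in this spirit is Ramer-Douglas-Peucker-style subdivision: recursively split a cross-section polyline at its point of maximum deviation until every segment is within tolerance. The sketch below is a hypothetical illustration of that generic technique, not the paper's actual parametrization.

```python
# Hypothetical adaptive-splitting sketch (Ramer-Douglas-Peucker style);
# the paper's exact polyline representation is not given in the abstract.
import math

def point_segment_dist(p, a, b):
    """Distance from point p to segment a-b in 2D."""
    ax, ay = a; bx, by = b; px, py = p
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return math.hypot(px - ax, py - ay)
    # Clamp the projection parameter to stay on the segment.
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def adaptive_split(points, tol):
    """Simplify `points` (list of (x, y)) so that no original point deviates
    from the simplified polyline by more than `tol`."""
    if len(points) < 3:
        return list(points)
    dists = [point_segment_dist(p, points[0], points[-1]) for p in points[1:-1]]
    i = max(range(len(dists)), key=dists.__getitem__) + 1  # index of worst point
    if dists[i - 1] <= tol:
        return [points[0], points[-1]]       # chord is good enough
    left = adaptive_split(points[: i + 1], tol)   # recurse on both halves,
    right = adaptive_split(points[i:], tol)       # splitting at the worst point
    return left[:-1] + right                 # drop duplicated split point

pts = [(0, 0), (1, 0.05), (2, -0.04), (3, 2.0), (4, 2.02), (5, 2.0)]
print(adaptive_split(pts, tol=0.1))
```

A representation like this keeps a cross-section compact where it is nearly straight and spends vertices only where curvature demands them, which is what makes it a natural fixed-budget input for a learned reconstruction model.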
Submitted 1 September, 2024;
originally announced September 2024.
-
A Multi-scale Yarn Appearance Model with Fiber Details
Authors:
Apoorv Khattar,
Junqui Zhu,
Emiliano Padovani,
Jean-Marie Aurby,
Marc Droske,
Ling-Qi Yan,
Zahra Montazeri
Abstract:
Rendering realistic cloth has always been a challenge due to its intricate structure. Cloth is made up of fibers, plies, and yarns, and previous curve-based models, while detailed, were computationally expensive and inflexible for large cloth. To address this, we propose a simplified approach. We introduce a geometric aggregation technique that reduces ray-tracing computation by using fewer curves, focusing only on yarn curves. Our model generates ply and fiber shapes implicitly, compensating for the lack of explicit geometry with a novel shadowing component. We also present a shading model that simplifies light interactions among fibers by categorizing them into four components, accurately capturing specular and scattered light in both forward and backward directions. To render large cloth efficiently, we propose a multi-scale solution based on pixel coverage. Our yarn shading model outperforms previous methods, achieving rendering speeds 3-5 times faster with less memory in near-field views. Additionally, our multi-scale solution offers a 20% speed boost for distant cloth observation.
Submitted 18 March, 2025; v1 submitted 23 January, 2024;
originally announced January 2024.
-
Analysis on Image Set Visual Question Answering
Authors:
Abhinav Khattar,
Aviral Joshi,
Har Simrat Singh,
Pulkit Goel,
Rohit Prakash Barnwal
Abstract:
We tackle the challenge of Visual Question Answering in a multi-image setting for the ISVQA dataset. Traditional VQA tasks have focused on a single-image setting where the target answer is generated from a single image. Image set VQA, however, comprises a set of images and requires finding connections between images, relating objects across images based on these connections, and generating a unified answer. In this report, we work with 4 approaches in a bid to improve performance on the task. We analyse and compare our results with three baseline models - LXMERT, HME-VideoQA and VisualBERT - and show that our approaches can provide a slight improvement over the baselines. Specifically, we try to improve the spatial awareness of the model, help the model identify color using enhanced pre-training, reduce language dependence using adversarial regularization, and improve counting using a regression loss and graph-based deduplication. We further delve into an in-depth analysis of the language bias in the ISVQA dataset and show how models trained on ISVQA implicitly learn to associate language more strongly with the final answer.
Submitted 31 March, 2021;
originally announced April 2021.
-
Multimodal Medical Volume Colorization from 2D Style
Authors:
Aradhya Neeraj Mathur,
Apoorv Khattar,
Ojaswa Sharma
Abstract:
Colorization involves the synthesis of colors on a target image while preserving structural content as well as the semantics of the target image. This is a well-explored problem in 2D with many state-of-the-art solutions. We propose a novel deep learning-based approach for the colorization of 3D medical volumes. Our system is capable of directly mapping the colors of a 2D photograph to a 3D MRI volume in real-time, producing a high-fidelity color volume suitable for photo-realistic visualization. Since this work is the first of its kind, we discuss the full pipeline in detail and the challenges that it brings for 3D medical data. The colorization of a medical MRI volume also entails modality conversion, which highlights the robustness of our approach in handling multi-modal data.
Submitted 6 April, 2020;
originally announced April 2020.
-
What sets Verified Users apart? Insights, Analysis and Prediction of Verified Users on Twitter
Authors:
Indraneil Paul,
Abhinav Khattar,
Shaan Chopra,
Ponnurangam Kumaraguru,
Manish Gupta
Abstract:
Social network and publishing platforms, such as Twitter, support the concept of a secret proprietary verification process for handles they deem worthy of platform-wide public interest. In line with significant prior work which suggests that possessing such a status symbolizes enhanced credibility in the eyes of the platform audience, a verified badge is clearly coveted among public figures and brands. What are less obvious are the inner workings of the verification process and what being verified represents. This lack of clarity, coupled with the flak that Twitter received for extending the aforementioned status to political extremists in 2017, backed Twitter into publicly admitting that the process, and what the status represented, needed to be rethought.
With this in mind, we seek to unravel the aspects of a user's profile which likely engender or preclude verification. The aim of the paper is two-fold: First, we test if discerning the verification status of a handle from profile metadata and content features is feasible. Second, we unravel the features which have the greatest bearing on a handle's verification status. We collected a dataset consisting of profile metadata of all 231,235 verified English-speaking users (as of July 2018), a control sample of 175,930 non-verified English-speaking users, and all their 494 million tweets over a one-year collection period. Our proposed models are able to reliably identify verification status (area under the curve, AUC > 99%). We show that the number of public list memberships, the presence of neutral sentiment in tweets, and an authoritative language style are the most pertinent predictors of verification status.
To the best of our knowledge, this work represents the first attempt at discerning and classifying verification worthy users on Twitter.
Submitted 12 March, 2019;
originally announced March 2019.
-
Elites Tweet? Characterizing the Twitter Verified User Network
Authors:
Indraneil Paul,
Abhinav Khattar,
Ponnurangam Kumaraguru,
Manish Gupta,
Shaan Chopra
Abstract:
Social network and publishing platforms, such as Twitter, support the concept of verification. Verified accounts are deemed worthy of platform-wide public interest and are separately authenticated by the platform itself. There have been repeated assertions by these platforms about verification not being tantamount to endorsement. However, a significant body of prior work suggests that possessing a verified status symbolizes enhanced credibility in the eyes of the platform audience. As a result, such a status is highly coveted among public figures and influencers. Hence, we attempt to characterize the network of verified users on Twitter and compare the results to similar analysis performed for the entire Twitter network. We extracted the entire network of verified users on Twitter (as of July 2018) and obtained 231,246 user profiles and 79,213,811 connections. In the subsequent network analysis, we found that the sub-graph of verified users mirrors the full Twitter user graph in some aspects, such as possessing a short diameter. However, our findings contrast with earlier findings on multiple aspects, such as the possession of a power-law out-degree distribution, slight disassortativity, and a significantly higher reciprocity rate, as elucidated in the paper. Moreover, we attempt to gauge the presence of salient components within this sub-graph and detect the absence of homophily with respect to popularity, which again is in stark contrast to the full Twitter graph. Finally, we demonstrate stationarity in the time series of verified user activity levels. To the best of our knowledge, this work represents the first quantitative attempt at characterizing verified users on Twitter.
Submitted 12 March, 2019; v1 submitted 23 December, 2018;
originally announced December 2018.
-
Collective Classification of Spam Campaigners on Twitter: A Hierarchical Meta-Path Based Approach
Authors:
Srishti Gupta,
Abhinav Khattar,
Arpit Gogia,
Ponnurangam Kumaraguru,
Tanmoy Chakraborty
Abstract:
Cybercriminals have leveraged the popularity of the large user base available on Online Social Networks to spread spam campaigns by propagating phishing URLs, attaching malicious contents, etc. However, another kind of spam attack using phone numbers has recently become prevalent on OSNs, where spammers advertise phone numbers to attract users' attention and convince them to make a call to these phone numbers. The dynamics of phone-number-based spam is different from URL-based spam due to an inherent trust associated with a phone number. While previous work has proposed strategies to mitigate URL-based spam attacks, phone-number-based spam attacks have received less attention. In this paper, we aim to detect spammers that use phone numbers to promote campaigns on Twitter. To this end, we collected information about 3,370 campaigns spread by 670,251 users. We model the Twitter dataset as a heterogeneous network by leveraging various interconnections between different types of nodes present in the dataset. In particular, we make the following contributions: (i) We propose a simple yet effective metric, called Hierarchical Meta-Path Score (HMPS), to measure the proximity of an unknown user to the other known pool of spammers. (ii) We design a feedback-based active learning strategy and show that it significantly outperforms three state-of-the-art baselines for the task of spam detection. Our method achieves 6.9% and 67.3% higher F1-score and AUC, respectively, compared to the best baseline method. (iii) To overcome the problem of few training instances for supervised learning, we show that our proposed feedback strategy achieves 25.6% and 46% higher F1-score and AUC, respectively, than other oversampling strategies. Finally, we perform a case study to show how our method is capable of detecting those users as spammers who have not been suspended by Twitter (and other baselines) yet.
Submitted 12 February, 2018;
originally announced February 2018.
-
White or Blue, the Whale gets its Vengeance: A Social Media Analysis of the Blue Whale Challenge
Authors:
Abhinav Khattar,
Karan Dabas,
Kshitij Gupta,
Shaan Chopra,
Ponnurangam Kumaraguru
Abstract:
The Blue Whale Challenge is a series of self-harm-causing tasks that are propagated via online social media under the disguise of a "game." The list of tasks must be completed in a duration of 50 days, and they cause both physical and mental harm to the player. The final task is to commit suicide. The game is supposed to be administered by people called "curators" who incite others to cause self-mutilation and commit suicide. The curators and potential players are known to contact each other on social networking websites, and the conversations between them are suspected to take place mainly via direct messages, which are difficult to track. However, to find curators, players make public posts containing certain hashtags/keywords to catch their attention. Even though many of these social networks moderate posts talking about the game, some posts manage to pass their filters. Our research focuses on (1) understanding the social media spread of the challenge, (2) spotting the behaviour of the people taking interest in the Blue Whale Challenge and, (3) analysing demographics of the users who may be involved in playing the game.
Submitted 17 January, 2018;
originally announced January 2018.