-
Olmo 3
Authors:
Team Olmo:
Allyson Ettinger,
Amanda Bertsch,
Bailey Kuehl,
David Graham,
David Heineman,
Dirk Groeneveld,
Faeze Brahman,
Finbarr Timbers,
Hamish Ivison,
Jacob Morrison,
Jake Poznanski,
Kyle Lo,
Luca Soldaini,
Matt Jordan,
Mayee Chen,
Michael Noukhovitch,
Nathan Lambert,
Pete Walsh,
Pradeep Dasigi,
Robert Berry,
Saumya Malik,
Saurabh Shah,
Scott Geng, et al. (44 additional authors not shown)
Abstract:
We introduce Olmo 3, a family of state-of-the-art, fully-open language models at the 7B and 32B parameter scales. Olmo 3 model construction targets long-context reasoning, function calling, coding, instruction following, general chat, and knowledge recall. This release includes the entire model flow, i.e., the full lifecycle of the family of models, including every stage, checkpoint, data point, and dependency used to build it. Our flagship model, Olmo 3 Think 32B, is the strongest fully-open thinking model released to date.
Submitted 15 December, 2025;
originally announced December 2025.
-
FlexOlmo: Open Language Models for Flexible Data Use
Authors:
Weijia Shi,
Akshita Bhagia,
Kevin Farhat,
Niklas Muennighoff,
Pete Walsh,
Jacob Morrison,
Dustin Schwenk,
Shayne Longpre,
Jake Poznanski,
Allyson Ettinger,
Daogao Liu,
Margaret Li,
Dirk Groeneveld,
Mike Lewis,
Wen-tau Yih,
Luca Soldaini,
Kyle Lo,
Noah A. Smith,
Luke Zettlemoyer,
Pang Wei Koh,
Hannaneh Hajishirzi,
Ali Farhadi,
Sewon Min
Abstract:
We introduce FlexOlmo, a new class of language models (LMs) that supports (1) distributed training without data sharing, where different model parameters are independently trained on closed datasets, and (2) data-flexible inference, where these parameters along with their associated data can be flexibly included or excluded from model inferences with no further training. FlexOlmo employs a mixture-of-experts (MoE) architecture where each expert is trained independently on closed datasets and later integrated through a new domain-informed routing without any joint training. FlexOlmo is trained on FlexMix, a corpus we curate comprising publicly available datasets alongside seven domain-specific sets, representing realistic approximations of closed sets. We evaluate models with up to 37 billion parameters (20 billion active) on 31 diverse downstream tasks. We show that a general expert trained on public data can be effectively combined with independently trained experts from other data owners, leading to an average 41% relative improvement while allowing users to opt out of certain data based on data licensing or permission requirements. Our approach also outperforms prior model merging methods by 10.1% on average and surpasses the standard MoE trained without data restrictions using the same training FLOPs. Altogether, this research presents a solution for both data owners and researchers in regulated industries with sensitive or protected data. FlexOlmo enables benefiting from closed data while respecting data owners' preferences by keeping their data local and supporting fine-grained control of data access during inference.
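To make the routing idea above concrete, here is a minimal sketch of an inference-time mixture layer in which independently trained expert FFNs are combined through a fixed, domain-derived router and can be switched off per request. It assumes a PyTorch setup; the class names, the fixed-embedding router, and the top-k scheme are illustrative stand-ins for the paper's domain-informed routing, not FlexOlmo's actual code.

import torch
import torch.nn as nn

class ExpertFFN(nn.Module):
    """Feed-forward expert; in FlexOlmo each would be trained separately on a closed dataset."""
    def __init__(self, d_model: int, d_hidden: int):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(d_model, d_hidden), nn.GELU(), nn.Linear(d_hidden, d_model))

    def forward(self, x):
        return self.net(x)

class DomainRoutedMoE(nn.Module):
    """Combines pre-trained experts with a router derived from per-domain embeddings,
    so experts (and their data) can be added or removed at inference without joint training."""
    def __init__(self, experts: list[nn.Module], domain_embeddings: torch.Tensor, top_k: int = 2):
        super().__init__()
        self.experts = nn.ModuleList(experts)
        self.register_buffer("domain_embeddings", domain_embeddings)  # (n_experts, d_model), not trained jointly
        self.top_k = top_k

    def forward(self, x: torch.Tensor, active: torch.Tensor | None = None) -> torch.Tensor:
        # x: (tokens, d_model); active: boolean mask used to opt experts in or out per request
        scores = x @ self.domain_embeddings.T                      # (tokens, n_experts)
        if active is not None:
            scores = scores.masked_fill(~active, float("-inf"))    # excluded experts get zero weight
        weights = torch.softmax(scores, dim=-1)
        topk_w, topk_i = weights.topk(self.top_k, dim=-1)
        topk_w = topk_w / topk_w.sum(dim=-1, keepdim=True)         # renormalize over the selected experts
        out = torch.zeros_like(x)
        for slot in range(self.top_k):
            idx, w = topk_i[:, slot], topk_w[:, slot:slot + 1]
            for e, expert in enumerate(self.experts):
                mask = idx == e
                if mask.any():
                    out[mask] += w[mask] * expert(x[mask])
        return out

experts = [ExpertFFN(64, 128) for _ in range(4)]                   # e.g., public, news, code, biomed
layer = DomainRoutedMoE(experts, domain_embeddings=torch.randn(4, 64))
tokens = torch.randn(10, 64)
active = torch.tensor([True, True, False, True])                   # opt out of the third expert's data
print(layer(tokens, active).shape)                                  # torch.Size([10, 64])

Because the routing weights depend only on frozen domain embeddings, dropping an expert at inference simply removes its column from the score matrix; nothing is retrained.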
Submitted 22 August, 2025; v1 submitted 9 July, 2025;
originally announced July 2025.
-
2 OLMo 2 Furious
Authors:
Team OLMo,
Pete Walsh,
Luca Soldaini,
Dirk Groeneveld,
Kyle Lo,
Shane Arora,
Akshita Bhagia,
Yuling Gu,
Shengyi Huang,
Matt Jordan,
Nathan Lambert,
Dustin Schwenk,
Oyvind Tafjord,
Taira Anderson,
David Atkinson,
Faeze Brahman,
Christopher Clark,
Pradeep Dasigi,
Nouha Dziri,
Allyson Ettinger,
Michal Guerquin,
David Heineman,
Hamish Ivison,
Pang Wei Koh,
Jiacheng Liu, et al. (18 additional authors not shown)
Abstract:
We present OLMo 2, the next generation of our fully open language models. OLMo 2 includes a family of dense autoregressive language models at 7B, 13B and 32B scales with fully released artifacts -- model weights, full training data, training code and recipes, training logs and thousands of intermediate checkpoints. In this work, we describe our modified model architecture and training recipe, focusing on techniques for achieving better training stability and improved per-token efficiency. Our updated pretraining data mixture introduces a new, specialized data mix called Dolmino Mix 1124, which significantly improves model capabilities across many downstream task benchmarks when introduced via late-stage curriculum training (i.e. specialized data during the annealing phase of pretraining). Finally, we incorporate best practices from Tülu 3 to develop OLMo 2-Instruct, focusing on permissive data and extending our final-stage reinforcement learning with verifiable rewards (RLVR). Our OLMo 2 base models sit at the Pareto frontier of performance to training compute, often matching or outperforming open-weight only models like Llama 3.1, Qwen 2.5, and Gemma 2 while using fewer FLOPs and with fully transparent training data, code, and recipe. Our fully open OLMo 2-Instruct models are competitive with open-weight only models of comparable size and even some proprietary models like GPT-3.5 Turbo and GPT 4o Mini.
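The "specialized data during the annealing phase" idea is essentially a schedule over data mixtures that changes as the learning rate decays. The sketch below illustrates that mechanism; the mixture names, proportions, and anneal fraction are made up for illustration and are not the actual Dolmino Mix 1124 recipe.

import random

GENERAL_MIX = {"web": 0.85, "code": 0.10, "papers": 0.05}
SPECIALIZED_MIX = {"math": 0.35, "instructions": 0.25, "high_quality_web": 0.40}  # stand-in for a Dolmino-style mix

def sample_source(step: int, total_steps: int, anneal_fraction: float = 0.1) -> str:
    """Draw the data source for one batch; switch mixes once the annealing phase begins."""
    in_anneal = step >= (1 - anneal_fraction) * total_steps
    mix = SPECIALIZED_MIX if in_anneal else GENERAL_MIX
    sources, weights = zip(*mix.items())
    return random.choices(sources, weights=weights, k=1)[0]

def learning_rate(step: int, total_steps: int, peak: float = 3e-4, anneal_fraction: float = 0.1) -> float:
    """Constant LR, then a linear decay to zero over the same final fraction of training."""
    anneal_start = (1 - anneal_fraction) * total_steps
    if step < anneal_start:
        return peak
    return peak * (total_steps - step) / (total_steps - anneal_start)

total = 100_000
for step in (0, 89_999, 95_000):
    print(step, sample_source(step, total), f"lr={learning_rate(step, total):.1e}")

The point of tying the mixture switch to the decay schedule is that the specialized, higher-quality data is seen while the model is taking its smallest, final updates.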
Submitted 8 October, 2025; v1 submitted 31 December, 2024;
originally announced January 2025.
-
OLMoE: Open Mixture-of-Experts Language Models
Authors:
Niklas Muennighoff,
Luca Soldaini,
Dirk Groeneveld,
Kyle Lo,
Jacob Morrison,
Sewon Min,
Weijia Shi,
Pete Walsh,
Oyvind Tafjord,
Nathan Lambert,
Yuling Gu,
Shane Arora,
Akshita Bhagia,
Dustin Schwenk,
David Wadden,
Alexander Wettig,
Binyuan Hui,
Tim Dettmers,
Douwe Kiela,
Ali Farhadi,
Noah A. Smith,
Pang Wei Koh,
Amanpreet Singh,
Hannaneh Hajishirzi
Abstract:
We introduce OLMoE, a fully open, state-of-the-art language model leveraging sparse Mixture-of-Experts (MoE). OLMoE-1B-7B has 7 billion (B) parameters but uses only 1B per input token. We pretrain it on 5 trillion tokens and further adapt it to create OLMoE-1B-7B-Instruct. Our models outperform all available models with similar active parameters, even surpassing larger ones like Llama2-13B-Chat and DeepSeekMoE-16B. We present various experiments on MoE training, analyze routing in our model showing high specialization, and open-source all aspects of our work: model weights, training data, code, and logs.
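The "7B total, 1B active" arithmetic follows directly from top-k routing: every token passes through all attention weights but only k of the expert FFNs per layer. Below is a back-of-the-envelope tally under assumed layer sizes; the numbers are illustrative, not the published OLMoE configuration.

def moe_param_counts(d_model, d_ff, n_layers, n_experts, top_k, vocab=50_000):
    """Rough parameter tallies for a decoder-only sparse MoE (ignores norms, biases, router)."""
    attn = 4 * d_model * d_model                 # Q, K, V, O projections per layer
    expert_ffn = 3 * d_model * d_ff              # gated (SwiGLU-style) FFN per expert
    embed = vocab * d_model                      # tied input/output embedding
    total = n_layers * (attn + n_experts * expert_ffn) + embed
    active = n_layers * (attn + top_k * expert_ffn) + embed
    return total, active

# Illustrative 16-layer config with 64 experts, 8 active per token.
total, active = moe_param_counts(d_model=2048, d_ff=1024, n_layers=16, n_experts=64, top_k=8)
print(f"total ~ {total / 1e9:.1f}B, active per token ~ {active / 1e9:.1f}B")  # ~6.8B and ~1.2B

With these stand-in sizes the tally lands in the 7B-total, roughly-1B-active regime the abstract describes; the exact published figures come from the model card, not this sketch.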
Submitted 2 March, 2025; v1 submitted 3 September, 2024;
originally announced September 2024.
-
OLMo: Accelerating the Science of Language Models
Authors:
Dirk Groeneveld,
Iz Beltagy,
Pete Walsh,
Akshita Bhagia,
Rodney Kinney,
Oyvind Tafjord,
Ananya Harsh Jha,
Hamish Ivison,
Ian Magnusson,
Yizhong Wang,
Shane Arora,
David Atkinson,
Russell Authur,
Khyathi Raghavi Chandu,
Arman Cohan,
Jennifer Dumas,
Yanai Elazar,
Yuling Gu,
Jack Hessel,
Tushar Khot,
William Merrill,
Jacob Morrison,
Niklas Muennighoff,
Aakanksha Naik,
Crystal Nam, et al. (18 additional authors not shown)
Abstract:
Language models (LMs) have become ubiquitous in both NLP research and in commercial product offerings. As their commercial importance has surged, the most powerful models have become closed off, gated behind proprietary interfaces, with important details of their training data, architectures, and development undisclosed. Given the importance of these details in scientifically studying these models, including their biases and potential risks, we believe it is essential for the research community to have access to powerful, truly open LMs. To this end, we have built OLMo, a competitive, truly Open Language Model, to enable the scientific study of language models. Unlike most prior efforts that have only released model weights and inference code, we release OLMo alongside open training data and training and evaluation code. We hope this release will empower the open research community and inspire a new wave of innovation.
Submitted 7 June, 2024; v1 submitted 1 February, 2024;
originally announced February 2024.
-
Dolma: an Open Corpus of Three Trillion Tokens for Language Model Pretraining Research
Authors:
Luca Soldaini,
Rodney Kinney,
Akshita Bhagia,
Dustin Schwenk,
David Atkinson,
Russell Authur,
Ben Bogin,
Khyathi Chandu,
Jennifer Dumas,
Yanai Elazar,
Valentin Hofmann,
Ananya Harsh Jha,
Sachin Kumar,
Li Lucy,
Xinxi Lyu,
Nathan Lambert,
Ian Magnusson,
Jacob Morrison,
Niklas Muennighoff,
Aakanksha Naik,
Crystal Nam,
Matthew E. Peters,
Abhilasha Ravichander,
Kyle Richardson,
Zejiang Shen, et al. (11 additional authors not shown)
Abstract:
Information about pretraining corpora used to train the current best-performing language models is seldom discussed: commercial models rarely detail their data, and even open models are often released without accompanying training data or recipes to reproduce them. As a result, it is challenging to conduct and advance scientific research on language modeling, such as understanding how training data impacts model capabilities and limitations. To facilitate scientific research on language model pretraining, we curate and release Dolma, a three-trillion-token English corpus, built from a diverse mixture of web content, scientific papers, code, public-domain books, social media, and encyclopedic materials. We extensively document Dolma, including its design principles, details about its construction, and a summary of its contents. We present analyses and experimental results on intermediate states of Dolma to share what we have learned about important data curation practices. Finally, we open-source our data curation toolkit to enable reproduction of our work as well as support further research in large-scale data curation.
Submitted 6 June, 2024; v1 submitted 31 January, 2024;
originally announced February 2024.
-
Paloma: A Benchmark for Evaluating Language Model Fit
Authors:
Ian Magnusson,
Akshita Bhagia,
Valentin Hofmann,
Luca Soldaini,
Ananya Harsh Jha,
Oyvind Tafjord,
Dustin Schwenk,
Evan Pete Walsh,
Yanai Elazar,
Kyle Lo,
Dirk Groeneveld,
Iz Beltagy,
Hannaneh Hajishirzi,
Noah A. Smith,
Kyle Richardson,
Jesse Dodge
Abstract:
Evaluations of language models (LMs) commonly report perplexity on monolithic data held out from training. Implicitly or explicitly, this data is composed of domains--varying distributions of language. We introduce Perplexity Analysis for Language Model Assessment (Paloma), a benchmark to measure LM fit to 546 English and code domains, instead of assuming perplexity on one distribution extrapolates to others. We include two new datasets of the top 100 subreddits (e.g., r/depression on Reddit) and programming languages (e.g., Java on GitHub), both sources common in contemporary LMs. With our benchmark, we release 6 baseline 1B LMs carefully controlled to provide fair comparisons about which pretraining corpus is best and code for others to apply those controls to their own experiments. Our case studies demonstrate how the fine-grained results from Paloma surface findings such as that models pretrained without data beyond Common Crawl exhibit anomalous gaps in LM fit to many domains or that loss is dominated by the most frequently occurring strings in the vocabulary.
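Concretely, reporting LM fit per domain means aggregating token-level negative log-likelihood separately for each domain label and exponentiating. A minimal sketch using Hugging Face transformers follows; the model name and the two toy documents are placeholders, and this is not Paloma's official evaluation code (which also standardizes tokenization, data order, and vocabulary handling).

import math
from collections import defaultdict

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder; substitute the LM under evaluation
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name).eval()

# Each evaluation document carries a domain label (e.g., a subreddit or a programming language).
docs = [
    ("reddit/r_depression", "I've been feeling a bit better after talking to someone."),
    ("github/java", "public class Main { public static void main(String[] args) {} }"),
]

nll_sum, tok_count = defaultdict(float), defaultdict(int)
with torch.no_grad():
    for domain, text in docs:
        ids = tok(text, return_tensors="pt").input_ids
        loss = model(ids, labels=ids).loss       # mean NLL over the shifted target tokens
        n_targets = ids.shape[1] - 1
        nll_sum[domain] += loss.item() * n_targets
        tok_count[domain] += n_targets

for domain in nll_sum:
    print(domain, "perplexity:", round(math.exp(nll_sum[domain] / tok_count[domain]), 1))

Keeping the sums per domain, rather than pooling everything, is what lets fine-grained gaps (e.g., poor fit to code or to specific subreddits) surface instead of being averaged away.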
Submitted 7 December, 2024; v1 submitted 16 December, 2023;
originally announced December 2023.
-
SPOC: Imitating Shortest Paths in Simulation Enables Effective Navigation and Manipulation in the Real World
Authors:
Kiana Ehsani,
Tanmay Gupta,
Rose Hendrix,
Jordi Salvador,
Luca Weihs,
Kuo-Hao Zeng,
Kunal Pratap Singh,
Yejin Kim,
Winson Han,
Alvaro Herrasti,
Ranjay Krishna,
Dustin Schwenk,
Eli VanderBilt,
Aniruddha Kembhavi
Abstract:
Reinforcement learning (RL) with dense rewards and imitation learning (IL) with human-generated trajectories are the most widely used approaches for training modern embodied agents. RL requires extensive reward shaping and auxiliary losses and is often too slow and ineffective for long-horizon tasks. While IL with human supervision is effective, collecting human trajectories at scale is extremely expensive. In this work, we show that imitating shortest-path planners in simulation produces agents that, given a language instruction, can proficiently navigate, explore, and manipulate objects in both simulation and in the real world using only RGB sensors (no depth map or GPS coordinates). This surprising result is enabled by our end-to-end, transformer-based, SPOC architecture, powerful visual encoders paired with extensive image augmentation, and the dramatic scale and diversity of our training data: millions of frames of shortest-path-expert trajectories collected inside approximately 200,000 procedurally generated houses containing 40,000 unique 3D assets. Our models, data, training code, and newly proposed 10-task benchmarking suite CHORES are available in https://spoc-robot.github.io.
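The training signal described above is ordinary behavior cloning: given the language instruction and the RGB frames seen so far, predict the action the shortest-path expert took at each step. The toy policy below shows that loss; its encoders and GRU are stand-ins for SPOC's transformer architecture, and the action count and tensor shapes are invented for illustration.

import torch
import torch.nn as nn

N_ACTIONS = 20  # move/rotate/look/manipulation primitives; the count is illustrative

class BCPolicy(nn.Module):
    def __init__(self, d_model: int = 256):
        super().__init__()
        self.frame_enc = nn.Sequential(nn.Flatten(), nn.LazyLinear(d_model))   # placeholder visual encoder
        self.text_enc = nn.EmbeddingBag(10_000, d_model)                       # placeholder instruction encoder
        self.temporal = nn.GRU(d_model, d_model, batch_first=True)             # stand-in for the transformer decoder
        self.head = nn.Linear(d_model, N_ACTIONS)

    def forward(self, frames, instruction_ids):
        # frames: (B, T, C, H, W); instruction_ids: (B, L)
        B, T = frames.shape[:2]
        vis = self.frame_enc(frames.flatten(0, 1)).view(B, T, -1)
        txt = self.text_enc(instruction_ids).unsqueeze(1)   # condition every timestep on the instruction
        hidden, _ = self.temporal(vis + txt)
        return self.head(hidden)                            # (B, T, N_ACTIONS) action logits

policy = BCPolicy()
frames = torch.randn(2, 8, 3, 64, 64)
instr = torch.randint(0, 10_000, (2, 12))
expert_actions = torch.randint(0, N_ACTIONS, (2, 8))        # labels produced by the shortest-path planner
loss = nn.CrossEntropyLoss()(policy(frames, instr).flatten(0, 1), expert_actions.flatten())
loss.backward()

The paper's result is that this simple supervised objective, applied at sufficient scale and diversity of procedurally generated scenes, transfers to the real world with RGB input alone.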
Submitted 7 August, 2024; v1 submitted 5 December, 2023;
originally announced December 2023.
-
What's In My Big Data?
Authors:
Yanai Elazar,
Akshita Bhagia,
Ian Magnusson,
Abhilasha Ravichander,
Dustin Schwenk,
Alane Suhr,
Pete Walsh,
Dirk Groeneveld,
Luca Soldaini,
Sameer Singh,
Hanna Hajishirzi,
Noah A. Smith,
Jesse Dodge
Abstract:
Large text corpora are the backbone of language models. However, we have a limited understanding of the content of these corpora, including general statistics, quality, social factors, and inclusion of evaluation data (contamination). In this work, we propose What's In My Big Data? (WIMBD), a platform and a set of sixteen analyses that allow us to reveal and compare the contents of large text corpora. WIMBD builds on two basic capabilities -- count and search -- at scale, which allows us to analyze more than 35 terabytes on a standard compute node. We apply WIMBD to ten different corpora used to train popular language models, including C4, The Pile, and RedPajama. Our analysis uncovers several surprising and previously undocumented findings about these corpora, including the high prevalence of duplicate, synthetic, and low-quality content, personally identifiable information, toxic language, and benchmark contamination. For instance, we find that about 50% of the documents in RedPajama and LAION-2B-en are duplicates. In addition, several datasets used for benchmarking models trained on such corpora are contaminated with respect to important benchmarks, including the Winograd Schema Challenge and parts of GLUE and SuperGLUE. We open-source WIMBD's code and artifacts to provide a standard set of evaluations for new text-based corpora and to encourage more analyses and transparency around them.
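The "count" primitive is enough to reproduce findings like the duplicate rates quoted above, at least in spirit: hash a normalized form of every document and count collisions. A toy sketch of that idea follows; WIMBD itself distributes these primitives to handle tens of terabytes, which this does not attempt.

import hashlib
from collections import Counter

def doc_hash(text: str) -> str:
    normalized = " ".join(text.lower().split())               # cheap whitespace/case normalization
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

def duplicate_rate(corpus):
    counts = Counter(doc_hash(doc) for doc in corpus)
    n_docs = sum(counts.values())
    n_dupes = sum(c - 1 for c in counts.values() if c > 1)    # copies beyond the first occurrence
    return n_dupes / n_docs if n_docs else 0.0

corpus = ["Hello world", "hello   world", "Something else"]
print(f"duplicate rate: {duplicate_rate(corpus):.0%}")         # 33%: one exact duplicate after normalization

The same count-and-search pattern extends to the other analyses mentioned, such as searching for benchmark strings to flag contamination.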
Submitted 5 March, 2024; v1 submitted 31 October, 2023;
originally announced October 2023.
-
Objaverse: A Universe of Annotated 3D Objects
Authors:
Matt Deitke,
Dustin Schwenk,
Jordi Salvador,
Luca Weihs,
Oscar Michel,
Eli VanderBilt,
Ludwig Schmidt,
Kiana Ehsani,
Aniruddha Kembhavi,
Ali Farhadi
Abstract:
Massive data corpora like WebText, Wikipedia, Conceptual Captions, WebImageText, and LAION have propelled recent dramatic progress in AI. Large neural models trained on such datasets produce impressive results and top many of today's benchmarks. A notable omission within this family of large-scale datasets is 3D data. Despite considerable interest and potential applications in 3D vision, datasets of high-fidelity 3D models continue to be mid-sized with limited diversity of object categories. Addressing this gap, we present Objaverse 1.0, a large dataset of objects with 800K+ (and growing) 3D models with descriptive captions, tags, and animations. Objaverse improves upon present day 3D repositories in terms of scale, number of categories, and in the visual diversity of instances within a category. We demonstrate the large potential of Objaverse via four diverse applications: training generative 3D models, improving tail category segmentation on the LVIS benchmark, training open-vocabulary object-navigation models for Embodied AI, and creating a new benchmark for robustness analysis of vision models. Objaverse can open new directions for research and enable new applications across the field of AI.
Submitted 15 December, 2022;
originally announced December 2022.
-
A-OKVQA: A Benchmark for Visual Question Answering using World Knowledge
Authors:
Dustin Schwenk,
Apoorv Khandelwal,
Christopher Clark,
Kenneth Marino,
Roozbeh Mottaghi
Abstract:
The Visual Question Answering (VQA) task aspires to provide a meaningful testbed for the development of AI models that can jointly reason over visual and natural language inputs. Despite a proliferation of VQA datasets, this goal is hindered by a set of common limitations. These include a reliance on relatively simplistic questions that are repetitive in both concepts and linguistic structure, little world knowledge needed outside of the paired image, and limited reasoning required to arrive at the correct answer. We introduce A-OKVQA, a crowdsourced dataset composed of a diverse set of about 25K questions requiring a broad base of commonsense and world knowledge to answer. In contrast to the existing knowledge-based VQA datasets, the questions generally cannot be answered by simply querying a knowledge base, and instead require some form of commonsense reasoning about the scene depicted in the image. We demonstrate the potential of this new dataset through a detailed analysis of its contents and baseline performance measurements over a variety of state-of-the-art vision-language models. Project page: http://a-okvqa.allenai.org/
Submitted 3 June, 2022;
originally announced June 2022.
-
Iconary: A Pictionary-Based Game for Testing Multimodal Communication with Drawings and Text
Authors:
Christopher Clark,
Jordi Salvador,
Dustin Schwenk,
Derrick Bonafilia,
Mark Yatskar,
Eric Kolve,
Alvaro Herrasti,
Jonghyun Choi,
Sachin Mehta,
Sam Skjonsberg,
Carissa Schoenick,
Aaron Sarnat,
Hannaneh Hajishirzi,
Aniruddha Kembhavi,
Oren Etzioni,
Ali Farhadi
Abstract:
Communicating with humans is challenging for AIs because it requires a shared understanding of the world, complex semantics (e.g., metaphors or analogies), and at times multi-modal gestures (e.g., pointing with a finger, or an arrow in a diagram). We investigate these challenges in the context of Iconary, a collaborative game of drawing and guessing based on Pictionary, that poses a novel challenge for the research community. In Iconary, a Guesser tries to identify a phrase that a Drawer is drawing by composing icons, and the Drawer iteratively revises the drawing to help the Guesser in response. This back-and-forth often uses canonical scenes, visual metaphor, or icon compositions to express challenging words, making it an ideal test for mixing language and visual/symbolic communication in AI. We propose models to play Iconary and train them on over 55,000 games between human players. Our models are skillful players and are able to employ world knowledge in language models to play with words unseen during training. Elite human players outperform our models, particularly at the drawing task, leaving an important gap for future research to address. We release our dataset, code, and evaluation setup as a challenge to the community at http://www.github.com/allenai/iconary.
Submitted 1 December, 2021;
originally announced December 2021.
-
X-LXMERT: Paint, Caption and Answer Questions with Multi-Modal Transformers
Authors:
Jaemin Cho,
Jiasen Lu,
Dustin Schwenk,
Hannaneh Hajishirzi,
Aniruddha Kembhavi
Abstract:
Mirroring the success of masked language models, vision-and-language counterparts like ViLBERT, LXMERT and UNITER have achieved state of the art performance on a variety of multimodal discriminative tasks like visual question answering and visual grounding. Recent work has also successfully adapted such models towards the generative task of image captioning. This begs the question: Can these models go the other way and generate images from pieces of text? Our analysis of a popular representative from this model family - LXMERT - finds that it is unable to generate rich and semantically meaningful imagery with its current training setup. We introduce X-LXMERT, an extension to LXMERT with training refinements including: discretizing visual representations, using uniform masking with a large range of masking ratios and aligning the right pre-training datasets to the right objectives which enables it to paint. X-LXMERT's image generation capabilities rival state of the art generative models while its question answering and captioning abilities remain comparable to LXMERT. Finally, we demonstrate the generality of these training refinements by adding image generation capabilities into UNITER to produce X-UNITER.
Submitted 23 September, 2020;
originally announced September 2020.
-
RoboTHOR: An Open Simulation-to-Real Embodied AI Platform
Authors:
Matt Deitke,
Winson Han,
Alvaro Herrasti,
Aniruddha Kembhavi,
Eric Kolve,
Roozbeh Mottaghi,
Jordi Salvador,
Dustin Schwenk,
Eli VanderBilt,
Matthew Wallingford,
Luca Weihs,
Mark Yatskar,
Ali Farhadi
Abstract:
Visual recognition ecosystems (e.g. ImageNet, Pascal, COCO) have undeniably played a prevailing role in the evolution of modern computer vision. We argue that interactive and embodied visual AI has reached a stage of development similar to visual recognition prior to the advent of these ecosystems. Recently, various synthetic environments have been introduced to facilitate research in embodied AI. Notwithstanding this progress, the crucial question of how well models trained in simulation generalize to reality has remained largely unanswered. The creation of a comparable ecosystem for simulation-to-real embodied AI presents many challenges: (1) the inherently interactive nature of the problem, (2) the need for tight alignments between real and simulated worlds, (3) the difficulty of replicating physical conditions for repeatable experiments, (4) and the associated cost. In this paper, we introduce RoboTHOR to democratize research in interactive and embodied visual AI. RoboTHOR offers a framework of simulated environments paired with physical counterparts to systematically explore and overcome the challenges of simulation-to-real transfer, and a platform where researchers across the globe can remotely test their embodied models in the physical world. As a first benchmark, our experiments show there exists a significant gap between the performance of models trained in simulation when they are tested in both simulations and their carefully constructed physical analogs. We hope that RoboTHOR will spur the next stage of evolution in embodied computer vision. RoboTHOR can be accessed at the following link: https://ai2thor.allenai.org/robothor
Submitted 14 April, 2020;
originally announced April 2020.
-
Learning Generalizable Visual Representations via Interactive Gameplay
Authors:
Luca Weihs,
Aniruddha Kembhavi,
Kiana Ehsani,
Sarah M Pratt,
Winson Han,
Alvaro Herrasti,
Eric Kolve,
Dustin Schwenk,
Roozbeh Mottaghi,
Ali Farhadi
Abstract:
A growing body of research suggests that embodied gameplay, prevalent not just in human cultures but across a variety of animal species including turtles and ravens, is critical in developing the neural flexibility for creative problem solving, decision making, and socialization. Comparatively little is known regarding the impact of embodied gameplay upon artificial agents. While recent work has produced agents proficient in abstract games, these environments are far removed from the real world and thus these agents can provide little insight into the advantages of embodied play. Hiding games, such as hide-and-seek, played universally, provide a rich ground for studying the impact of embodied gameplay on representation learning in the context of perspective taking, secret keeping, and false belief understanding. Here we are the first to show that embodied adversarial reinforcement learning agents playing Cache, a variant of hide-and-seek, in a high fidelity, interactive, environment, learn generalizable representations of their observations encoding information such as object permanence, free space, and containment. Moving closer to biologically motivated learning strategies, our agents' representations, enhanced by intentionality and memory, are developed through interaction and play. These results serve as a model for studying how facets of vision develop through interaction, provide an experimental framework for assessing what is learned by artificial agents, and demonstrates the value of moving from large, static, datasets towards experiential, interactive, representation learning.
Submitted 25 February, 2021; v1 submitted 17 December, 2019;
originally announced December 2019.
-
Imagine This! Scripts to Compositions to Videos
Authors:
Tanmay Gupta,
Dustin Schwenk,
Ali Farhadi,
Derek Hoiem,
Aniruddha Kembhavi
Abstract:
Imagining a scene described in natural language with realistic layout and appearance of entities is the ultimate test of spatial, visual, and semantic world knowledge. Towards this goal, we present the Composition, Retrieval, and Fusion Network (CRAFT), a model capable of learning this knowledge from video-caption data and applying it while generating videos from novel captions. CRAFT explicitly predicts a temporal-layout of mentioned entities (characters and objects), retrieves spatio-temporal entity segments from a video database and fuses them to generate scene videos. Our contributions include sequential training of components of CRAFT while jointly modeling layout and appearances, and losses that encourage learning compositional representations for retrieval. We evaluate CRAFT on semantic fidelity to caption, composition consistency, and visual quality. CRAFT outperforms direct pixel generation approaches and generalizes well to unseen captions and to unseen video databases with no text annotations. We demonstrate CRAFT on FLINTSTONES, a new richly annotated video-caption dataset with over 25000 videos. For a glimpse of videos generated by CRAFT, see https://youtu.be/688Vv86n0z8.
Submitted 10 April, 2018;
originally announced April 2018.