
Showing 1–5 of 5 results for author: Li-Bell, A

Searching in archive cs.
  1. arXiv:2511.14759 [pdf, ps, other]

    cs.LG cs.RO

    $π^{*}_{0.6}$: a VLA That Learns From Experience

    Authors: Physical Intelligence, Ali Amin, Raichelle Aniceto, Ashwin Balakrishna, Kevin Black, Ken Conley, Grace Connors, James Darpinian, Karan Dhabalia, Jared DiCarlo, Danny Driess, Michael Equi, Adnan Esmail, Yunhao Fang, Chelsea Finn, Catherine Glossop, Thomas Godden, Ivan Goryachev, Lachy Groom, Hunter Hancock, Karol Hausman, Gashon Hussein, Brian Ichter, Szymon Jakubczak, Rowan Jen , et al. (31 additional authors not shown)

    Abstract: We study how vision-language-action (VLA) models can improve through real-world deployments via reinforcement learning (RL). We present a general-purpose method, RL with Experience and Corrections via Advantage-conditioned Policies (RECAP), that provides for RL training of VLAs via advantage conditioning. Our method incorporates heterogeneous data into the self-improvement process, including demon…

    Submitted 18 November, 2025; v1 submitted 18 November, 2025; originally announced November 2025.

  2. arXiv:2505.23705 [pdf, ps, other]

    cs.LG cs.RO

    Knowledge Insulating Vision-Language-Action Models: Train Fast, Run Fast, Generalize Better

    Authors: Danny Driess, Jost Tobias Springenberg, Brian Ichter, Lili Yu, Adrian Li-Bell, Karl Pertsch, Allen Z. Ren, Homer Walke, Quan Vuong, Lucy Xiaoyang Shi, Sergey Levine

    Abstract: Vision-language-action (VLA) models provide a powerful approach to training control policies for physical systems, such as robots, by combining end-to-end learning with transfer of semantic knowledge from web-scale vision-language model (VLM) training. However, the constraints of real-time control are often at odds with the design of VLMs: the most powerful VLMs have tens or hundreds of billions o…

    Submitted 29 May, 2025; originally announced May 2025.

  3. arXiv:2504.16054 [pdf, other]

    cs.LG cs.RO

    $π_{0.5}$: a Vision-Language-Action Model with Open-World Generalization

    Authors: Physical Intelligence, Kevin Black, Noah Brown, James Darpinian, Karan Dhabalia, Danny Driess, Adnan Esmail, Michael Equi, Chelsea Finn, Niccolo Fusai, Manuel Y. Galliker, Dibya Ghosh, Lachy Groom, Karol Hausman, Brian Ichter, Szymon Jakubczak, Tim Jones, Liyiming Ke, Devin LeBlanc, Sergey Levine, Adrian Li-Bell, Mohith Mothukuri, Suraj Nair, Karl Pertsch, Allen Z. Ren , et al. (11 additional authors not shown)

    Abstract: In order for robots to be useful, they must perform practically relevant tasks in the real world, outside of the lab. While vision-language-action (VLA) models have demonstrated impressive results for end-to-end robot control, it remains an open question how far such models can generalize in the wild. We describe $π_{0.5}$, a new model based on $π_{0}$ that uses co-training on heterogeneous tasks…

    Submitted 22 April, 2025; originally announced April 2025.

  4. arXiv:2502.19417 [pdf, ps, other]

    cs.RO cs.AI cs.LG

    Hi Robot: Open-Ended Instruction Following with Hierarchical Vision-Language-Action Models

    Authors: Lucy Xiaoyang Shi, Brian Ichter, Michael Equi, Liyiming Ke, Karl Pertsch, Quan Vuong, James Tanner, Anna Walling, Haohuan Wang, Niccolo Fusai, Adrian Li-Bell, Danny Driess, Lachy Groom, Sergey Levine, Chelsea Finn

    Abstract: Generalist robots that can perform a range of different tasks in open-world settings must be able to not only reason about the steps needed to accomplish their goals, but also process complex instructions, prompts, and even feedback during task execution. Intricate instructions (e.g., "Could you make me a vegetarian sandwich?" or "I don't like that one") require not just the ability to physically…

    Submitted 15 July, 2025; v1 submitted 26 February, 2025; originally announced February 2025.

    Comments: ICML 2025

  5. arXiv:2410.24164 [pdf, ps, other]

    cs.LG cs.RO

    $π_0$: A Vision-Language-Action Flow Model for General Robot Control

    Authors: Kevin Black, Noah Brown, Danny Driess, Adnan Esmail, Michael Equi, Chelsea Finn, Niccolo Fusai, Lachy Groom, Karol Hausman, Brian Ichter, Szymon Jakubczak, Tim Jones, Liyiming Ke, Sergey Levine, Adrian Li-Bell, Mohith Mothukuri, Suraj Nair, Karl Pertsch, Lucy Xiaoyang Shi, James Tanner, Quan Vuong, Anna Walling, Haohuan Wang, Ury Zhilinsky

    Abstract: Robot learning holds tremendous promise to unlock the full potential of flexible, general, and dexterous robot systems, as well as to address some of the deepest questions in artificial intelligence. However, bringing robot learning to the level of generality required for effective real-world systems faces major obstacles in terms of data, generalization, and robustness. In this paper, we discuss…

    Submitted 8 January, 2026; v1 submitted 31 October, 2024; originally announced October 2024.

    Comments: Published in RSS 2025. See project website for videos: https://physicalintelligence.company/blog/pi0