Andy Zeng Explained

Andy Zeng is an American computer scientist and AI engineer at Google DeepMind. He is best known for his research in robotics and machine learning, including robot learning algorithms that enable machines to intelligently interact with the physical world and improve themselves over time. Zeng was a recipient of the Gordon Y.S. Wu Fellowship in Engineering and Wu Prize in 2016, and the Princeton SEAS Award for Excellence in 2018.[1][2]

Early life and education

Zeng studied computer science and mathematics as an undergraduate student at the University of California, Berkeley.[3] He then moved to Princeton University, where he completed his Ph.D. in 2019. His thesis focused on deep learning algorithms that enable robots to understand the visual world and interact with unfamiliar physical objects.[4] He developed a class of deep network architectures inspired by the concept of affordances in cognitive psychology (perceiving the world in terms of actions), which allow machines to learn complex skills that can quickly adapt and generalize to new scenarios.[5] As a doctoral student, he co-led Team MIT-Princeton[6] to win first place in the Stow Task[7] at the Amazon Robotics Challenge,[8] a worldwide competition focused on bin picking. He also spent time as a student researcher at Google Brain.[9] His graduate studies were supported by the NVIDIA Fellowship.[10]

Research and career

Zeng investigates the capabilities of robots to intelligently improve themselves over time through self-supervised learning algorithms, such as learning how to assemble objects by disassembling them,[11] or acquiring new dexterous skills by watching videos of people.[12] Notable demonstrations include Google's TossingBot,[13] a robot that can learn to grasp and throw unfamiliar objects using physics as a prior model of how the world works. His research also investigates 3D computer vision algorithms.

He pioneered the use of foundation models in robotics, from systems that take action by writing their own code,[14] to robots that can plan and reason by grounding language in affordances.[15][16] He co-developed large multimodal models, and showed that they can be used for intelligent robot navigation, world modeling, and assistive agents.[17] He also worked on algorithms that allow large language models to know when they don't know and ask for help.[18]

In 2024, Zeng was awarded the IEEE Early Career Award in Robotics and Automation “for outstanding contributions to robot learning.”[19]

Notes and References

  1. Web site: Princeton Robotics Seminar: Language as Robot Middleware. Computer Science Department at Princeton University.
  2. Web site: Andy Zeng. IEEE.
  3. Web site: CSL Seminar - Embodied Intelligence. Massachusetts Institute of Technology.
  4. Web site: Learning Visual Affordances for Robotic Manipulation - ProQuest. www.proquest.com.
  5. Web site: Visual Transfer Learning for Robotic Manipulation.
  6. Web site: MIT-Princeton at the Amazon Robotics Challenge. Princeton University.
  7. Web site: Australian Centre for Robotic Vision from Australia Wins Grand Championship at 2017 Amazon Robotics Challenge. Press Center. 1 August 2017.
  8. Web site: Malamut, Layla; Nathans, Aaron. Princeton graduate student teams advance in robotics, intelligent systems competitions.
  9. Web site: Google's Tossingbot Can Toss Over 500 Objects Per Hour Into Target Locations. NVIDIA Technical Blog. 28 March 2019.
  10. Web site: 2018 Grad Fellows Research. research.nvidia.com.
  11. Web site: Learning to Assemble and to Generalize from Self-Supervised Disassembly. research.google.
  12. Web site: Robot See, Robot Do. research.google.
  13. Web site: Inside Google's Rebooted Robotics Program. The New York Times.
  14. Web site: Heater, Brian. Google wants robots to generate their own code. TechCrunch. 2 November 2022.
  15. Web site: PaLM-SayCan. families.google.com.
  16. News: Google is training its robots to be more like humans. The Washington Post.
  17. Web site: Visual language maps for robot navigation. research.google.
  18. Web site: These robots know when to ask for help. MIT Technology Review.
  19. Web site: 2024 IEEE RAS Award Recipients Announced! - IEEE Robotics and Automation Society. www.ieee-ras.org. 22 March 2024.