Andy Zeng is an American computer scientist and AI engineer at Google DeepMind. He is best known for his research in robotics and machine learning, including robot learning algorithms that enable machines to intelligently interact with the physical world and improve themselves over time. Zeng received the Gordon Y.S. Wu Fellowship in Engineering and the Wu Prize in 2016, and the Princeton SEAS Award for Excellence in 2018.[1][2]
Zeng studied computer science and mathematics as an undergraduate student at the University of California, Berkeley.[3] He then moved to Princeton University, where he completed his Ph.D. in 2019. His thesis focused on deep learning algorithms that enable robots to understand the visual world and interact with unfamiliar physical objects.[4] He developed a class of deep network architectures inspired by the concept of affordances in cognitive psychology (perceiving the world in terms of possible actions), which allow machines to learn complex skills that can quickly adapt and generalize to new scenarios.[5] As a doctoral student, he co-led Team MIT-Princeton[6] to first place in the Stow Task[7] at the Amazon Picking Challenge,[8] a worldwide competition focused on bin picking. He also spent time as a student researcher at Google Brain.[9] His graduate studies were supported by the NVIDIA Fellowship.[10]
Zeng investigates how robots can intelligently improve themselves over time through self-supervised learning algorithms, such as learning how to assemble objects by disassembling them,[11] or acquiring new dexterous skills by watching videos of people.[12] Notable demonstrations include Google's TossingBot,[13] a robot that learns to grasp and throw unfamiliar objects by using physics as a prior model of how the world works. His research also investigates 3D computer vision algorithms.
He pioneered the use of foundation models in robotics, from systems that take action by writing their own code,[14] to robots that plan and reason by grounding language in affordances.[15][16] He co-developed large multimodal models and showed that they can be used for intelligent robot navigation, world modeling, and assistive agents.[17] He also worked on algorithms that allow large language models to recognize when they don't know something and ask for help.[18]
In 2024, Zeng was awarded the IEEE Early Career Award in Robotics and Automation “for outstanding contributions to robot learning.”[19]