For the First Time – A Robot Has Learned To Imagine Itself



A robot developed by Columbia engineers learns to model itself rather than the world around it.

As every athlete or fashion-conscious person knows, our image of our own body is not always accurate or realistic, but it plays an important role in how we act in the world. Whether you are playing ball or getting dressed, your brain is constantly planning ahead so that you can move your body without bumping, tripping, or falling.

Human infants build models of their own bodies, and robots are now beginning to do the same. Today, a team at Columbia Engineering announced the creation of a robot that, for the first time, can learn a model of its entire body from scratch without human assistance. In a paper published in Science Robotics, the researchers describe how their robot built a kinematic model of itself and then used that model to plan motions, reach goals, and avoid obstacles in a variety of situations. It even automatically detected and then compensated for damage to its body.

By learning its full-body morphology through visual self-modeling, a robot can adapt to a variety of motion planning and control tasks. Credit: Columbia Engineering/Jane Nisselson and Yinuo Qin

The robot watches itself like an infant exploring itself in a hall of mirrors.

The researchers placed a robotic arm inside a circle of five streaming video cameras. The robot watched itself through the cameras as it moved freely, wriggling and twisting to learn exactly how its body moved in response to different motor commands, like a baby discovering itself for the first time in a hall of mirrors. After roughly three hours, the robot stopped: its internal deep neural network had finished learning the relationship between the robot's motor actions and the volume it occupied in its environment.
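Conceptually, the learned self-model can be thought of as a function that takes the robot's joint angles plus a 3D query point and predicts whether the robot's body occupies that point. Below is a minimal illustrative sketch of such a network in PyTorch; it is not the authors' code, and the class name, layer sizes, and joint count are all hypothetical.

```python
# A minimal illustrative sketch (not the authors' code) of the kind of
# self-model described above: a network that takes the robot's joint
# angles plus a 3D query point and predicts whether the robot's body
# occupies that point. Class name, sizes, and joint count are hypothetical.
import torch
import torch.nn as nn

class SelfModel(nn.Module):
    def __init__(self, num_joints: int = 4, hidden: int = 256):
        super().__init__()
        # Input: joint angles (robot state) concatenated with an (x, y, z) query point.
        self.net = nn.Sequential(
            nn.Linear(num_joints + 3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),  # logit: is this point inside the body?
        )

    def forward(self, joints: torch.Tensor, points: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([joints, points], dim=-1)).squeeze(-1)

model = SelfModel()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.BCEWithLogitsLoss()

def train_step(joints, points, occupied):
    # One supervised step; real occupancy labels would be derived from
    # the silhouettes the five cameras record as the robot moves.
    optimizer.zero_grad()
    loss = loss_fn(model(joints, points), occupied)
    loss.backward()
    optimizer.step()
    return loss.item()

# Smoke test with random stand-in data:
joints = torch.rand(32, 4)            # a batch of joint configurations
points = torch.rand(32, 3) * 2 - 1    # 3D query points in the workspace
occupied = torch.randint(0, 2, (32,)).float()
print(train_step(joints, points, occupied))
```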

"We were particularly curious to see how the robot imagined itself," said Hod Lipson, professor of mechanical engineering and director of Columbia's Creative Machines Lab, where the work was done. "But you can't just peek inside a neural network; it's a black box." After the researchers experimented with various visualization techniques, the self-image gradually emerged. "It appeared as a sort of gently flickering cloud that engulfed the robot's three-dimensional body," said Lipson. "As the robot moved, the flickering cloud gently followed it." The robot's self-model was accurate to within about 1% of its workspace.
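The flickering cloud can be read as a direct rendering of such an occupancy function: sweep a dense grid of 3D query points at the robot's current pose and keep the points the model predicts are inside the body. Continuing the hypothetical sketch above:

```python
# Continuing the hypothetical SelfModel sketch above: query the learned
# occupancy function on a dense 3D grid and keep the points the model
# predicts are inside the body; plotted, those points form a "cloud".
import torch

@torch.no_grad()
def body_point_cloud(model, joints, resolution=64, threshold=0.5):
    axis = torch.linspace(-1.0, 1.0, resolution)  # workspace assumed [-1, 1]^3
    grid = torch.stack(torch.meshgrid(axis, axis, axis, indexing="ij"), dim=-1)
    points = grid.reshape(-1, 3)
    # Repeat the single joint configuration for every query point.
    probs = torch.sigmoid(model(joints.expand(points.shape[0], -1), points))
    return points[probs > threshold]

# Usage: for a fixed pose, extract the self-image point cloud.
cloud = body_point_cloud(model, torch.rand(1, 4))
print(cloud.shape)  # (number_of_occupied_points, 3)
```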

Self-modeling robots will result in autonomous systems that are more self-sufficient.

Robots need the ability to model themselves without help from engineers for many reasons: it not only saves labor, but also lets a robot keep up with its own wear and tear and detect and compensate for damage. The authors argue that this ability is important as autonomous systems are increasingly expected to operate independently. A factory robot, for instance, could notice that something isn't moving properly and compensate or call for help; a hedged sketch of this idea follows below.
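One way to picture the damage-detection idea in code, again continuing the hypothetical sketch above: if the self-model's predictions persistently disagree with what the cameras currently observe, the body no longer matches the model and something may be wrong. This is an illustration, not the authors' method.

```python
# Illustrative only, continuing the hypothetical sketch: flag possible
# damage when the self-model's predictions diverge from what the cameras
# currently observe at the same query points.
import torch

@torch.no_grad()
def damage_score(model, joints, points, observed_occupancy):
    # observed_occupancy: 0/1 labels for these points, derived from the cameras.
    predicted = torch.sigmoid(model(joints.expand(points.shape[0], -1), points))
    # Mean disagreement; a persistent rise suggests the body no longer
    # matches the learned self-model (wear, damage, a stuck joint, ...).
    return (predicted - observed_occupancy).abs().mean().item()

score = damage_score(model, torch.rand(1, 4),
                     torch.rand(128, 3) * 2 - 1,
                     torch.randint(0, 2, (128,)).float())
print(score)  # stays near 0 while the self-model still matches reality
```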

"We humans clearly have a notion of self," said Boyuan Chen, the study's first author, now an assistant professor at Duke University. "Close your eyes and try to imagine how your own body would move if you were to take some action, such as stretching your arms forward or taking a step backward. Somewhere inside our brains we have a self-model, a notion of self, that tells us how much of our immediate surroundings we occupy and how that volume changes as we move."

The project is part of Lipson's decades-long quest to find ways to give robots some form of self-awareness. "Self-modeling is a basic form of self-awareness," he said. "A robot, animal, or human that has a realistic self-model has an evolutionary advantage: it can function better in the real world and make better decisions."

The researchers are aware of the limitations, risks, and controversies involved in granting robots greater autonomy through self-awareness. Lipson is quick to acknowledge that the level of self-awareness demonstrated in this study is "trivial compared to that of humans, but you have to start somewhere." We must proceed cautiously and deliberately, he adds, to maximize the benefits while reducing the risks.

Boyuan Chen, Robert Kwiatkowski, Carl Vondrick, and Hod Lipson, "Fully body visual self-modeling of robot morphologies," Science Robotics, 13 July 2022.

The National Science Foundation, Facebook, Northrop Grumman, the Defense Advanced Research Projects Agency, and others provided funding for the study.

By COLUMBIA UNIVERSITY SCHOOL OF ENGINEERING AND APPLIED SCIENCE 

