Lab teaches robot to put down objects properly

Cornell University researchers, using a 3D camera, have taught a robot how to put down simple household objects accurately in office and home scenes.

09/12/2011


After scanning a room, a robot points to the keyboard it was asked to locate. It uses context to identify objects, such as the fact that a keyboard is usually in front of a monitor. Courtesy: Cornell University Personal Robotics Lab

For robots, picking up objects is easy. Putting them down – in proper context – is not so simple.

Researchers at Cornell's Personal Robotics Laboratory, led by Ashutosh Saxena, assistant professor of computer science, have found that placing objects is harder for robots because there are so many options. Drinking cups, for example, go upright when placed on a table, but must be placed upside down in a dishwasher.

In early tests, the researchers placed a plate, mug, martini glass, bowl, candy cane, disc, spoon and tuning fork on a flat surface, on a hook, in a stemware holder, in a pen holder and on several different dish racks.

The research robot surveyed its environment with a 3D camera, then randomly tested spaces as candidate locations for placement. For some objects it tested for “caging” – the presence of vertical supports that would hold an object upright. It also gave priority to “preferred” locations: a plate goes flat on a table, but upright in a dishwasher.
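The search strategy described above – sample candidate sites, check for caging supports, and favor learned preferences – can be sketched roughly as follows. All names, features, and scoring weights here are hypothetical illustrations, not the Cornell lab's actual implementation.

```python
import random

def caged(site):
    # A site "cages" an object if at least two vertical supports stand
    # close enough to hold it upright (e.g. the prongs of a dish rack).
    return len(site["vertical_supports"]) >= 2

def score(site, obj):
    # Illustrative scoring: a stability bonus for caging, plus a larger
    # bonus when the site matches a learned preference for this object.
    s = 0.0
    if caged(site):
        s += 1.0
    if site["kind"] in obj["preferred_kinds"]:
        s += 2.0
    return s

def choose_placement(sites, obj, n_samples=10, seed=0):
    # Randomly sample candidate locations, then keep the best-scoring one.
    rng = random.Random(seed)
    sampled = rng.sample(sites, min(n_samples, len(sites)))
    return max(sampled, key=lambda site: score(site, obj))

# Hypothetical example: a plate prefers a rack slot over a bare table top.
plate = {"preferred_kinds": {"rack_slot"}}
sites = [
    {"kind": "table_top", "vertical_supports": []},
    {"kind": "rack_slot", "vertical_supports": ["prong_a", "prong_b"]},
]
best = choose_placement(sites, plate)
```

In this toy example the rack slot wins because it both cages the plate and matches the plate's preferred placement; the real system learns such preferences from training data rather than from hand-set weights.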

After training, the Saxena lab’s robot placed objects correctly 98 percent of the time when it had seen the objects and environments previously. When working with new objects in new environments, it placed the objects correctly 92 percent of the time.

But first, the robot has to find the dish rack. Just as humans assess a room on walking in, a robot needs to survey its surroundings. Saxena and colleague Thorsten Joachims, Cornell associate professor of computer science, have developed a system that enables a robot to scan a room and identify its objects.

Pictures from the robot’s camera are stitched together to form a 3D image of the entire room, which is then divided into segments.

The researchers trained a robot by giving it 24 office scenes and 28 home scenes in which they had labeled most objects. The computer examines such features as color, texture and what is nearby and decides what characteristics all objects with the same label have in common. In a new environment, it compares each segment of its scan with the objects in its memory and chooses the best fit.
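The matching step described above – compare each scanned segment against labeled examples in memory and pick the best fit – resembles nearest-neighbor classification over feature vectors. The sketch below is purely illustrative: the feature encoding (here, three made-up numbers for color, texture, and a nearby-context cue) and the training examples are hypothetical, not the researchers' model.

```python
import math

def distance(a, b):
    # Euclidean distance between two feature vectors.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def classify(segment, memory):
    # memory: list of (feature_vector, label) pairs from labeled scenes.
    # Return the label of the closest stored example.
    best_feats, best_label = min(memory, key=lambda item: distance(segment, item[0]))
    return best_label

# Hypothetical labeled memory: (color, texture, near-monitor context flag).
memory = [
    ((0.1, 0.8, 1.0), "keyboard"),
    ((0.1, 0.2, 0.0), "monitor"),
    ((0.6, 0.5, 0.0), "mug"),
]
label = classify((0.15, 0.75, 1.0), memory)
```

Note the third feature: encoding what is nearby lets context (a keyboard sits in front of a monitor) pull an ambiguous segment toward the right label.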

In tests, the robot correctly identified objects about 73 percent of the time in home scenes and 84 percent in offices. In a final test, the robot successfully located a keyboard in an unfamiliar room. Again, context gives this robot an advantage. The keyboard only shows up as a few pixels in the image, but the monitor is easily found, and the robot uses that information to locate the keyboard. Saxena will present some of this research at the Neural Information Processing Systems conference in Granada, Spain, in December 2011.
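The keyboard result illustrates a context prior: a weak appearance match near a confidently detected monitor can outscore a stronger match far from one. A minimal hypothetical sketch of that idea (scores and distance threshold are invented for illustration):

```python
def keyboard_score(appearance_score, dist_to_monitor):
    # Boost a segment's keyboard score when a monitor was found nearby.
    # The 0.5 m threshold and bonus weight are illustrative only.
    context_bonus = 1.0 if dist_to_monitor < 0.5 else 0.0
    return appearance_score + context_bonus

# A faint few-pixel segment near the monitor beats a clearer one far away.
near = keyboard_score(0.2, 0.3)
far = keyboard_score(0.6, 2.0)
```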

www.cornell.edu

http://www.news.cornell.edu/stories/Sept11/RobotsLearn.html

Cornell University

- Edited by Chris Vavra, Control Engineering, www.controleng.com