


RESEARCH SPOTLIGHT

There's a 'personal robot' in your future


A robotic arm hands Ashutosh Saxena a cup while his students watch in 2010. Background from left: M.Eng. student Stephen Moseson, graduate student Yun Jiang and TP Wong '10.

Someday you may have a personal robot to help around the house. It will clear the table and do the dishes, mop the floors, maybe even wait for the cable guy.

In Cornell's Personal Robotics Lab, Ashutosh Saxena, assistant professor of computer science, is developing the technology to make such devices possible. They might appear first as assistants for the disabled and elderly and could cost less than $20,000 – perhaps even as little as a few thousand dollars.

It won't happen, Saxena explains, until we make robots more adaptable. Industrial robots are programmed to repeat an exact series of actions over and over. A household robot will have to adjust to constantly changing conditions: It will have to find the dishes on the table and find empty spaces in the dishwasher rack. It should be aware of where you are and what you're doing, so it can help if needed and not interrupt when you don't want it to.

"While the hardware is getting there, we need software that can make these robots truly smart," Saxena says.

The underlying technology is what computer scientists call "machine learning," in which a computer program takes note of events and in effect reprograms itself in response. Machine learning often works on the principle of "What I tell you many times is true, but not exactly." Show a computer a lot of different cups and tell it that each one is a cup, and with the right programming, it will find the things that all the cups have in common and use that to identify cups in the future. Since sizes and shapes will never be exactly the same, the computer calculates the probability that a new object fits each of the models in its memory and chooses the one that scores highest. A similar process teaches the robot to find a cup's handle and grasp it correctly. This is not unlike what humans do in the first few months of life.
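The idea of scoring a new object against learned models and choosing the best fit can be sketched in a few lines. This is an illustrative toy, not the lab's actual system; the feature values and class prototypes below are invented for the example.

```python
# Toy classifier: score an object's features against learned class
# prototypes and pick the label that fits best. Prototypes here stand in
# for what a machine-learning system would extract from many examples.

# Hypothetical mean feature vectors: (height_cm, width_cm, roundness)
prototypes = {
    "cup":   [9.0, 8.0, 0.9],
    "plate": [2.0, 25.0, 1.0],
    "bowl":  [7.0, 15.0, 0.95],
}

def score(features, prototype):
    """Higher score means a closer match (negative squared distance)."""
    return -sum((f - p) ** 2 for f, p in zip(features, prototype))

def classify(features):
    """Choose the label whose prototype scores highest."""
    return max(prototypes, key=lambda label: score(features, prototypes[label]))

print(classify([8.5, 9.0, 0.85]))  # a cup-like object -> "cup"
```

Sizes and shapes are never exactly the same, so the classifier never demands an exact match; it only asks which model the new object resembles most.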

Placing objects is harder than picking them up, because there are many options. A cup should be right side up on a table but upside down in the dishwasher. A plate can lie flat on a table or slide vertically into a dish rack slot. So robots are programmed with different procedures for each type of object.
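The point that correct placement depends on both the object and where it is going can be captured as a simple lookup. The rules below are illustrative, not the robot's real placement procedures.

```python
# Placement depends on (object, location) pairs, not on the object alone.
placement_rules = {
    ("cup", "table"):       "right side up",
    ("cup", "dishwasher"):  "upside down",
    ("plate", "table"):     "flat",
    ("plate", "dish_rack"): "vertical, in a slot",
}

def placement(obj, location):
    """Look up the learned placement procedure, if any."""
    return placement_rules.get((obj, location), "unknown")

print(placement("cup", "dishwasher"))  # upside down
```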

But first the robot has to find the dish rack.


To properly place dishes in a rack, a robot must identify empty spaces and place each plate in the correct upright position.

Saxena and colleague Thorsten Joachims, associate professor of computer science, have developed a system that enables a robot to scan a room and identify the objects it sees. Several pictures from a camera mounted on a rolling robot are stitched together to form a 3-D image of an entire room. The robot's computer divides the image into segments, based on discontinuities and distances between objects. The goal is to label each segment.
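Dividing an image into segments based on discontinuities can be illustrated in one dimension. The sketch below splits a made-up depth scan wherever the distance jumps sharply; the real system works on full 3-D images, but the principle is the same.

```python
# Toy segmentation: start a new segment wherever the depth reading jumps
# by more than a threshold, a 1-D stand-in for 3-D discontinuities.
def segment(depths, jump=0.5):
    segments, current = [], [depths[0]]
    for d in depths[1:]:
        if abs(d - current[-1]) > jump:   # discontinuity: close the segment
            segments.append(current)
            current = []
        current.append(d)
    segments.append(current)
    return segments

scan = [1.0, 1.1, 1.05, 3.0, 3.1, 0.9, 1.0]   # depths in meters (invented)
print(segment(scan))  # [[1.0, 1.1, 1.05], [3.0, 3.1], [0.9, 1.0]]
```

Each resulting segment then becomes a candidate object to be labeled.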

The researchers trained a robot by giving it 24 office scenes and 28 home scenes in which they had labeled most objects. The computer was programmed to examine an array of features of each object, including color, texture and context – a keyboard, for example, is usually in front of a monitor – and decide what characteristics all objects with the same label have in common. In a new environment it compared each segment of its scan with the objects in memory and chose the one with the best fit. In early tests, robots successfully located objects, including a keyboard and a shoe, in an unfamiliar room.
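The role of context can be sketched as a bonus added to an appearance score, echoing the keyboard-in-front-of-a-monitor example. All scores below are invented for illustration and do not come from the researchers' trained model.

```python
# Combine an ambiguous appearance score with a learned context bonus.
appearance = {"keyboard": 0.4, "book": 0.5}       # appearance alone is ambiguous
context_bonus = {("keyboard", "monitor"): 0.3}    # co-occurrence learned from labels

def best_label(candidates, neighbor_labels):
    """Pick the candidate label with the best appearance + context score."""
    def total(label):
        bonus = sum(context_bonus.get((label, n), 0.0) for n in neighbor_labels)
        return appearance[label] + bonus
    return max(candidates, key=total)

# Next to a monitor, "keyboard" overtakes "book" despite a weaker appearance score.
print(best_label(["keyboard", "book"], ["monitor"]))  # keyboard
```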

Similarly, robots are learning to observe human activity by breaking 3-D video into a series of steps and learning the steps of common actions like brushing teeth or drinking coffee. In experiments with four different people in five environments, including a kitchen, living room and office, a computer correctly identified one of 12 specified activities 84 percent of the time when observing a person it had trained with, and 64 percent of the time when working with a new person. It also successfully ignored random activities that didn't fit any of the known patterns.
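Matching an observed sequence of steps against known activities, and ignoring sequences that fit nothing, can be sketched as follows. The activity templates and step names are invented for the example.

```python
# Toy activity recognizer: score how many of each known activity's steps
# appear in the observed sequence; below a threshold, report nothing.
activities = {
    "brushing teeth":  ["pick up brush", "apply paste", "brush", "rinse"],
    "drinking coffee": ["pick up cup", "sip", "put down cup"],
}

def recognize(observed, min_overlap=0.5):
    def overlap(steps):
        return sum(1 for s in steps if s in observed) / len(steps)
    best = max(activities, key=lambda a: overlap(activities[a]))
    return best if overlap(activities[best]) >= min_overlap else None

print(recognize(["pick up cup", "sip", "put down cup"]))  # drinking coffee
print(recognize(["wave arms", "jump"]))                   # None: random activity ignored
```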

But robots still have a long way to go to learn like humans. "I would be really happy if we could build a robot that would act like a six-month-old baby," Saxena says.
