Demonstration-Guided Motion Planning for Assistive Robots

Robots have the potential to assist people with a variety of routine tasks in homes and workplaces. From assisting a person with an activity of daily living (such as cooking or cleaning) to assisting a small business owner with a small-scale manufacturing task, assistive robots need to be capable of planning and executing motions in unstructured environments that may contain unforeseen obstacles. Further complicating the planning challenge, many assistive tasks involve significant constraints on motion. For example, when carrying a plate of food, a person knows that tilting the plate sideways, while feasible, is undesirable because it will spill the food. In order to autonomously and safely accomplish many assistive tasks, a robot must be aware of such task constraints and must plan and execute motions that consider these constraints while avoiding obstacles.

The Baxter robot performs powder transfer and stirring tasks while reacting to the movement of task-relevant objects.

We are developing demonstration-guided motion planning (DGMP), a framework that enables robotic manipulators to compute motion plans that (1) avoid obstacles in unstructured environments and (2) aim to satisfy learned features of the motion that are required for the task to be accomplished successfully. At the core of DGMP is an asymptotically optimal sampling-based motion planner that computes plans that are collision-free and that globally minimize a cost metric encoding learned features of the motion. The motivation for the cost metric is that if the robot is shown multiple demonstrations of a task in varied settings, features that are consistent across all of the demonstrations are likely critical to task success, while features that vary substantially across the demonstrations are likely unimportant.
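As an illustration of this idea, the sketch below (not the exact DGMP cost metric) shows how a variance-weighted cost can be learned from time-aligned demonstrations and used to score a candidate trajectory; the names learn_cost_model, trajectory_cost, and demos are hypothetical.

```python
import numpy as np

def learn_cost_model(demos):
    """Learn a per-time-step mean and variance from time-aligned demonstrations.

    demos: array of shape (num_demos, num_steps, num_features), where each
    feature might be a joint angle or a task-space coordinate.
    """
    mean = demos.mean(axis=0)        # (num_steps, num_features)
    var = demos.var(axis=0) + 1e-6   # small floor avoids division by zero
    return mean, var

def trajectory_cost(traj, mean, var):
    """Cost of a candidate trajectory under the learned model.

    Deviations along low-variance (consistently demonstrated) features are
    penalized heavily; deviations along high-variance (unimportant) features
    are penalized lightly.
    """
    return float(np.sum((traj - mean) ** 2 / var))
```

A sampling-based planner could use such a cost to compare collision-free candidate plans, so that deviations from consistently demonstrated features dominate the cost while features that varied freely across demonstrations contribute little.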

The Aldebaran Nao uses DGMP to perform a powder transfer task and a table wiping task.

We have demonstrated the effectiveness of DGMP using the Aldebaran Nao robot and the Rethink Robotics Baxter robot performing simple household tasks in cluttered environments, including transferring powder from a container to a bowl, wiping the surface of a table, and pushing a button.

The Aldebaran Nao robot performs a powder transfer task and a button pushing task.

We are also investigating methods for enabling the robot to learn tasks using less human-provided information. For example, in recent work the robot automatically learns which objects in the environment (and specific features of those objects) are relevant to successfully performing the task.

The Baxter robot pours liquids and uses a spoon.

These algorithmic extensions enable the robot to use human-guided demonstrations to automatically learn task-relevant landmarks, as well as the motion needed to accomplish the task relative to those landmarks, which reduces the amount of human input required for the robot to learn the task.
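One common way to identify task-relevant landmarks from demonstrations, sketched below under the assumption that landmark positions are observed in each demonstration, is to express the demonstrated motion in each candidate landmark's reference frame and prefer frames in which the motion is most consistent across demonstrations. This is an illustrative sketch rather than the exact method used in this work; landmark_relevance and its arguments are hypothetical names.

```python
import numpy as np

def landmark_relevance(demos_world, landmark_positions):
    """Score candidate landmarks by how consistent the demonstrated motion is
    when expressed relative to each landmark.

    demos_world: (num_demos, num_steps, 3) end-effector positions in the world frame.
    landmark_positions: dict mapping landmark name -> (num_demos, 3) position of
                        that landmark in each demonstration.
    Returns a dict mapping landmark name -> total cross-demonstration variance
    (lower means the motion is more consistent in that landmark's frame).
    """
    scores = {}
    for name, positions in landmark_positions.items():
        # Express each demonstration relative to that demonstration's landmark position.
        relative = demos_world - positions[:, None, :]
        # Low variance across demonstrations means the motion is consistent in this frame.
        scores[name] = float(relative.var(axis=0).sum())
    return scores
```

The landmarks with the lowest scores would be treated as task-relevant, and the motion model would then be learned relative to those frames.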

We are also investigating methods to better temporally align demonstrations to enable the robot to learn a task from fewer demonstrations and to perform the learned task more quickly and smoothly.

Many existing methods that learn robot motion planning task models or control policies from demonstrations require that the demonstrations be temporally aligned. Temporal registration assigns individual observations from a demonstration to the ordered steps of a reference model, which facilitates learning features of the motion over time. We introduce probability-weighted temporal registration (PTR) and show that incorporating PTR yields higher-quality learned task models, enabling faster task execution and higher task success rates.
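A minimal sketch of the probability-weighted idea is shown below: each observation is softly assigned to every reference step with a probability, and the reference model is updated using those probabilities as weights. This simplified version considers only spatial similarity and omits the temporal-ordering constraints that a full registration method would enforce; soft_register, update_reference, and ref_var are hypothetical names, not the PTR implementation.

```python
import numpy as np

def soft_register(observations, ref_means, ref_var=0.05):
    """Probability-weighted assignment of observations to reference steps.

    observations: (num_obs, num_features) samples from one demonstration.
    ref_means: (num_steps, num_features) current reference model step means.
    Returns weights of shape (num_obs, num_steps), where weights[i, k] is the
    probability that observation i belongs to reference step k.
    """
    # Squared distance from every observation to every reference step.
    d2 = ((observations[:, None, :] - ref_means[None, :, :]) ** 2).sum(axis=2)
    log_w = -0.5 * d2 / ref_var
    log_w -= log_w.max(axis=1, keepdims=True)   # numerical stability
    w = np.exp(log_w)
    return w / w.sum(axis=1, keepdims=True)

def update_reference(observations, weights):
    """Re-estimate reference step means as probability-weighted averages."""
    w_sum = weights.sum(axis=0)[:, None]         # (num_steps, 1)
    return (weights.T @ observations) / np.maximum(w_sum, 1e-9)
```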

This research is made possible by support from the National Science Foundation (NSF) under awards IIS-1117127, IIS-1149965, CNS-1305286, and CCF-1533844. Any opinions, findings, and conclusions or recommendations expressed on this web site do not necessarily reflect the views of NSF.