As a robot moves through an environment to accomplish a task, uncertainty may arise in (1) the robot’s motion, due to unmodeled or unpredictable external forces, and (2) the robot’s sensing of its state, due to noisy or incomplete sensor measurements. These forms of uncertainty arise in a variety of practical robotics tasks, including guiding aerial vehicles in turbulent conditions, maneuvering mobile robots over unfamiliar terrain, and robotically steering flexible medical needles to clinical targets in soft tissue. Explicitly considering motion and sensing uncertainty when computing motion plans can improve the quality of the computed plans. The objective of motion planning under uncertainty is to compute motions for a robot that minimize the expected cost, as defined by a user-specified cost function.
To fully consider the impact of uncertainty in motion and sensing, a motion planner should not merely compute a static path through the robot’s configuration space but rather a control policy that defines the motion to perform given any current state information. A key challenge is that the robot often cannot directly observe its current state but instead estimates a distribution over the set of possible states (i.e., its belief state) based on sensor measurements that are both noisy and partial (i.e., only a subset of the state vector can be sensed). The problem of computing a control policy over the space of belief states is formally described as a partially observable Markov decision process (POMDP), which has been studied extensively. Solutions to POMDPs are notoriously difficult to compute, since the belief space (over which a control policy is to be computed) is, in the most general formulation, an infinite-dimensional space of all possible probability distributions over the finite-dimensional state space. Solutions based on discrete or discretized state and action spaces are inherently subject to the “curse of dimensionality,” and have only been successfully applied to small, low-dimensional state spaces.
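To make the notion of a belief state concrete, the following sketch shows one step of a discrete Bayes filter for a hypothetical robot on a line of grid cells: the belief is a probability distribution over cells, updated first by the (uncertain) motion model and then by a noisy measurement. The toy world, motion success probability, and sensor likelihoods below are illustrative assumptions, not part of the approach described here.

```python
import numpy as np

def bayes_update(belief, motion_model, obs_likelihood):
    """One step of a discrete Bayes filter.

    belief:         (n,) prior probability over n discrete states
    motion_model:   (n, n) matrix, motion_model[j, i] = P(next = j | current = i)
    obs_likelihood: (n,) P(received observation | state) for each state
    """
    predicted = motion_model @ belief        # prediction (motion update)
    posterior = obs_likelihood * predicted   # measurement update (Bayes' rule)
    return posterior / posterior.sum()       # normalize to a distribution

# Toy example: 5 cells on a line; the robot tries to move right but may stay put.
n = 5
belief = np.full(n, 1.0 / n)                 # uniform prior: state unknown
T = np.zeros((n, n))
for i in range(n):
    T[min(i + 1, n - 1), i] += 0.8           # intended move succeeds
    T[i, i] += 0.2                           # or the robot slips and stays
likelihood = np.array([0.1, 0.1, 0.7, 0.1, 0.1])  # noisy sensor suggests cell 2
belief = bayes_update(belief, T, likelihood)
```

After one motion and one measurement, the belief concentrates around cell 2 while retaining probability mass elsewhere; this distribution, rather than a single state estimate, is what a belief-space policy conditions on.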
We are developing new approaches to motion planning under sensing and motion uncertainty by computing a locally optimal solution to a continuous POMDP. Our approach begins by finding an initial feasible plan to the goal using a sampling-based motion planner, and then locally optimizes the plan in belief space.
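The first phase, computing an initial feasible plan with a sampling-based planner, can be illustrated with a minimal RRT in a 2-D configuration space. The world bounds, obstacle, step size, and goal bias below are hypothetical choices for the sketch; the approach described here would use such a path only as the starting point for belief-space optimization.

```python
import math
import random

def rrt(start, goal, collision_free, step=0.5, goal_tol=0.5, iters=5000, seed=0):
    """Minimal RRT: grow a tree from start by extending toward random samples;
    return a feasible path once the tree reaches the goal region."""
    rng = random.Random(seed)
    nodes = [start]
    parent = {0: None}
    for _ in range(iters):
        # Sample a random configuration (goal-biased 10% of the time).
        q = goal if rng.random() < 0.1 else (rng.uniform(0, 10), rng.uniform(0, 10))
        # Find the nearest tree node and take one step toward the sample.
        i = min(range(len(nodes)), key=lambda k: math.dist(nodes[k], q))
        near = nodes[i]
        d = math.dist(near, q)
        if d == 0:
            continue
        new = (near[0] + step * (q[0] - near[0]) / d,
               near[1] + step * (q[1] - near[1]) / d)
        if not collision_free(new):
            continue
        parent[len(nodes)] = i
        nodes.append(new)
        if math.dist(new, goal) < goal_tol:
            # Walk back up the tree to recover the path from start to goal.
            path, k = [], len(nodes) - 1
            while k is not None:
                path.append(nodes[k])
                k = parent[k]
            return path[::-1]
    return None  # no feasible path found within the iteration budget

# Hypothetical 10x10 world with one circular obstacle centered at (5, 5).
free = lambda p: math.dist(p, (5.0, 5.0)) > 1.5
path = rrt((1.0, 1.0), (9.0, 9.0), free)
```

The returned path is feasible but takes no account of uncertainty; the local optimization phase refines it into a policy over belief states.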
Our approaches represent beliefs (the probability distributions over the robot’s state) as Gaussian distributions and are applicable to robot systems with non-linear dynamics and observation models. We approximate the belief dynamics using an extended Kalman filter and approximate the value function in the vicinity of a nominal trajectory through belief space. We then use iterative optimization methods to converge toward a linear control policy over the belief space that is locally optimal with respect to a user-defined cost function. Our approaches account for obstacles in the environment and do not require discretization of the state and action spaces. The approaches we have developed have polynomial complexity per iteration with respect to the dimension of the state space, with the exact polynomial depending on the specifics of the objective function. We have demonstrated the potential of our approaches in simulation for holonomic and non-holonomic robots with non-linear dynamics and observation models maneuvering through environments with obstacles under noisy and partial sensing.
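Approximating the belief dynamics with an extended Kalman filter means propagating a Gaussian belief (mean and covariance) through linearizations of the non-linear dynamics and observation models. The sketch below shows one generic EKF step with numerical Jacobians, applied to a hypothetical unicycle robot whose sensor observes position only (partial sensing); the noise covariances and time step are illustrative assumptions.

```python
import numpy as np

def ekf_step(mean, cov, u, z, f, h, Q, R, eps=1e-5):
    """One EKF update of a Gaussian belief (mean, cov) under non-linear
    dynamics f(x, u) and observation model h(x). Jacobians are computed
    numerically so the sketch works for any differentiable f and h."""
    def jac(fn, x):
        y0 = fn(x)
        J = np.zeros((y0.size, x.size))
        for i in range(x.size):
            dx = np.zeros_like(x)
            dx[i] = eps
            J[:, i] = (fn(x + dx) - y0) / eps
        return J

    # Prediction: push the mean through the dynamics; covariance via Jacobian A.
    A = jac(lambda x: f(x, u), mean)
    mean_pred = f(mean, u)
    cov_pred = A @ cov @ A.T + Q

    # Measurement update: Kalman gain from the linearized observation model H.
    H = jac(h, mean_pred)
    K = cov_pred @ H.T @ np.linalg.inv(H @ cov_pred @ H.T + R)
    mean_new = mean_pred + K @ (z - h(mean_pred))
    cov_new = (np.eye(mean.size) - K @ H) @ cov_pred
    return mean_new, cov_new

# Hypothetical unicycle: state [x, y, theta], control [speed, turn rate].
dt = 0.1
def f(x, u):
    return x + dt * np.array([u[0] * np.cos(x[2]), u[0] * np.sin(x[2]), u[1]])
def h(x):
    return x[:2]                    # partial sensing: position only, not heading

Q = 0.01 * np.eye(3)                # motion noise covariance (assumed)
R = 0.05 * np.eye(2)                # sensing noise covariance (assumed)
mean, cov = np.zeros(3), 0.1 * np.eye(3)
mean, cov = ekf_step(mean, cov, np.array([1.0, 0.2]), np.array([0.1, 0.0]), f, h, Q, R)
```

Because the measurement observes only position, the update shrinks uncertainty in x and y more than in heading; it is this (mean, covariance) pair that the local optimization treats as the belief-space state along the nominal trajectory.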
This research is made possible by generous support from the National Science Foundation (NSF) under awards IIS-0905344, IIS-1117127, and IIS-1149965. Any opinions, findings, and conclusions or recommendations expressed on this web site do not necessarily reflect the views of NSF.