Motion Planning Under Uncertainty In Highly Deformable Environments

[Figure: (a) Initial environment; (b) RRT with deformations (prob. of success: 60%); (c) LQG-MP only (prob. of success: 25%); (d) Our method (prob. of success: 97%)]

Guiding bevel-tip steerable needles through slices of deformable tissue. Current motion planning solutions for deformable environments either assume deterministic deformations (b), which may result in paths through narrow passageways that are highly likely to result in obstacle collision, or compute plans in a static world and consider deformations as a type of perturbation (c), which neglects the large time-dependent motions of the obstacles and target. Our unified framework (d), which accounts for uncertainty in deformation models, noisy sensing, and unpredictable actuation, results in a significantly higher probability of success in plan execution.

Many tasks in robot-assisted surgery, food handling, manufacturing, and other applications require planning and controlling the motions of manipulators or other devices that must interact with highly deformable objects. Due to their difficulty, tasks involving uncertainty and highly deformable objects are still routinely completed manually rather than automatically or semi-autonomously using robot assistance. Automating these tasks could increase productivity and improve outcomes by decreasing the time and costs associated with manual operation while simultaneously increasing accuracy and precision.

Motion planning for tasks in highly deformable environments is challenging because it requires the robot to anticipate deformations in its environment while simultaneously considering uncertainty in those deformations and in its sensing of the system state. We present a unified approach for motion planning under uncertainty in deformable environments that maximizes probability of success by accounting for uncertainty in deformation models, noisy sensing, and unpredictable actuation.

Unlike prior planners that assume deterministic deformations or treat deformations as a type of small perturbation, our method explicitly considers the uncertainty in large, time-dependent deformations. Our method requires a simulator of deformable objects but places no significant restrictions on the simulator used. We use a sampling-based motion planner in conjunction with the simulator to generate a set of candidate plans based on expected deformations. Our method then uses the simulator and optimal control to numerically estimate time-dependent state distributions based on uncertain parameters (e.g. deformable material properties or actuation errors). We then select the plan with the highest estimated probability of successfully avoiding obstacles and reaching the goal region.
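The plan-selection step described above can be sketched as a Monte Carlo procedure: roll out each candidate plan many times under sampled uncertain parameters, and keep the plan whose rollouts most often avoid obstacles and reach the goal. The sketch below is a minimal illustration, not the paper's implementation; `simulate` is a hypothetical stand-in for a deformable-body simulator combined with a feedback controller, and the plan representation is invented for the example.

```python
import random

def estimate_success_probability(plan, simulate, num_samples=200, seed=0):
    """Estimate a plan's probability of success by Monte Carlo sampling.

    `simulate(plan, rng)` is a hypothetical stand-in for rolling out the
    plan in a deformable-body simulator with sampled uncertain parameters
    (e.g., material stiffness, actuation noise). It returns True if the
    rollout avoids all obstacles and reaches the goal region.
    """
    rng = random.Random(seed)  # fixed seed so estimates are reproducible
    successes = sum(simulate(plan, rng) for _ in range(num_samples))
    return successes / num_samples

def select_best_plan(plans, simulate):
    """Return the candidate plan with the highest estimated success probability."""
    return max(plans, key=lambda p: estimate_success_probability(p, simulate))

# Toy usage: a "simulator" whose success rate is a stored robustness value.
def toy_simulate(plan, rng):
    return rng.random() < plan["robustness"]

candidates = [
    {"name": "narrow-passage", "robustness": 0.25},
    {"name": "wide-detour", "robustness": 0.97},
]
best = select_best_plan(candidates, toy_simulate)
```

In practice each `simulate` call is an expensive FEM rollout, so the number of samples per plan trades off estimation accuracy against planning time.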

Using FEM-based simulation of deformable tissues, we demonstrate the ability of our method to generate high quality plans in two medically inspired scenarios: (1) guiding bevel-tip steerable needles through slices of deformable tissue around obstacles for minimally invasive biopsies and drug delivery, and (2) manipulating planar tissues to align interior points at desired coordinates for precision treatment.

[Figure: (left) Anatomical environment; (right) High quality plan selected using our method]

Illustration of autonomous bevel-tip steerable needle guidance through a slice of liver tissue to a goal region (green) while avoiding sensitive anatomical regions (orange). The high quality plan computed by our method increases the probability of successful execution by combining deformation modeling with feedback control and by avoiding the narrow passage between the insertion location and the goal region.


This research is made possible by generous support from the National Science Foundation (NSF) under award IIS-0905344 and by the National Institutes of Health (NIH) under award R21EB011628. Any opinions, findings, and conclusions or recommendations expressed on this web site do not necessarily reflect the views of NSF or NIH.
