Cloud-based Computation for Robot Motion Planning

Cloud computing can provide vast amounts of on-demand computing power, but algorithms for robot motion planning using the cloud must take into account the latency and bandwidth of the connection between the cloud and the robot.

Cloud-based computing offers vast amounts of low-cost computation on demand. Compute resources can be scaled up and down quickly, so you have more computing power when you need it and do not pay for it when you do not. To put the price of cloud computation in context, as of July 2016, 1 second of 360-core computation can cost less than $0.0047. This implies that with an embarrassingly parallel algorithm, a 5-minute computation can be cut to less than 1 second. And because you pay only for the resources you use, the same computation costs $0.0047 whether it runs on 1 core for 360 seconds or on 360 cores for 1 second. To access these immense computing resources, all that is required is a connection to the internet.
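As a back-of-the-envelope check, the cost parity above follows from per-second, per-core billing: the total price depends only on core-seconds consumed, not on how they are split across machines. A minimal sketch (treating the July 2016 rate of $0.0047 per 360 core-seconds as an assumed price):

```python
# Assumed July 2016 rate: 360 core-seconds of cloud compute for $0.0047.
PRICE_PER_CORE_SECOND = 0.0047 / 360

def cloud_cost(cores, seconds):
    """Cost of running `cores` cores for `seconds` seconds under
    per-core, per-second billing: price scales with core-seconds only."""
    return cores * seconds * PRICE_PER_CORE_SECOND

serial_cost = cloud_cost(cores=1, seconds=360)    # one core, six minutes
parallel_cost = cloud_cost(cores=360, seconds=1)  # 360 cores, one second

# Same number of core-seconds, so the same price -- the parallel run
# just finishes 360x sooner.
assert abs(serial_cost - parallel_cost) < 1e-12
```

The practical consequence for an embarrassingly parallel planner is that parallelizing buys wall-clock time essentially for free.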

Mobile robots are often designed and built to keep weight and power consumption as low as possible to achieve an acceptable duration of autonomy before requiring recharging. This design concern naturally dictates that the computation power on such a robot is limited—for example, to a low-power single-core processor. Motion planning is a computationally intensive process, and as such, if the mobile robot has more than a few degrees of freedom, its computational demands for motion planning can quickly exceed its available computational power.

We introduce a method for splitting the computation of a robot’s motion plan between the robot’s low-power embedded computer and a high-performance cloud-based compute service. In our method, the robot communicates its configuration, its goals, and the obstacles to the cloud-based service. The cloud-based service takes into account the latency and bandwidth of its connection to the robot, and computes and returns a motion plan within the time frame necessary for the robot to meet the requirements of a dynamic and interactive scenario. The cloud-based service parallelizes construction of a roadmap and returns a sparse subset of that roadmap, giving the robot the ability to adapt to changes between updates from the server. Our results show that under typical latency and bandwidth limitations, our method significantly improves the responsiveness and quality of motion plans in interactive scenarios.
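The robot-to-cloud exchange described above can be sketched in a few lines. This is an illustrative toy, not the paper's implementation: the message type, the disc obstacles, the 2-D sampling domain, and the sparsification rule are all hypothetical stand-ins, and a real service would parallelize roadmap construction and size the returned subset to the measured latency and bandwidth.

```python
import math
import random
from dataclasses import dataclass

# Hypothetical message the robot sends to the cloud service: its current
# configuration, its goal, and the sensed obstacles (here, (center, radius) discs).
@dataclass
class PlanRequest:
    configuration: tuple
    goal: tuple
    obstacles: list

def build_roadmap(request, n_samples, rng):
    """Cloud side (sketch): sample collision-free configurations in the unit
    square and connect nearby pairs.  A real service would parallelize this."""
    def collision_free(q):
        return all(math.dist(q, center) > radius
                   for center, radius in request.obstacles)

    vertices = [request.configuration, request.goal]  # indices 0 and 1
    while len(vertices) < n_samples:
        q = (rng.uniform(0.0, 1.0), rng.uniform(0.0, 1.0))
        if collision_free(q):
            vertices.append(q)
    edges = [(i, j)
             for i in range(len(vertices))
             for j in range(i + 1, len(vertices))
             if math.dist(vertices[i], vertices[j]) < 0.3]
    return vertices, edges

def sparsify(vertices, edges, keep):
    """Return a sparse subset of the roadmap small enough to transmit to the
    robot, always retaining start (0) and goal (1) so the robot can replan
    locally between server updates."""
    kept = list(range(min(keep, len(vertices))))
    index = {v: i for i, v in enumerate(kept)}
    return ([vertices[v] for v in kept],
            [(index[i], index[j]) for i, j in edges
             if i in index and j in index])

# Usage: the "cloud" builds a dense roadmap, then ships back a sparse subset.
rng = random.Random(0)
request = PlanRequest((0.1, 0.1), (0.9, 0.9), [((0.5, 0.5), 0.1)])
vertices, edges = build_roadmap(request, n_samples=50, rng=rng)
sparse_vertices, sparse_edges = sparsify(vertices, edges, keep=20)
```

The design point this toy illustrates is the division of labor: the expensive sampling and connection work happens on the cloud, while the robot receives only a small graph it can search and repair on its own embedded CPU when obstacles move.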

The Fetch robot uses our cloud-based motion planning for the task of grasping a bottle resting on a table while avoiding both static obstacles (e.g., the table) and a dynamic obstacle (a tube sensed via an RGBD camera). In frame (a), the Fetch approaches the table with its arm in its standard rest configuration and initiates the cloud computation process. The Fetch’s embedded CPU is tasked with sensing and avoiding dynamic obstacles, while a cloud computer simultaneously generates and refines the roadmap. In frame (b), the Fetch begins its motion, only to be blocked in frame (c) by a new placement of the obstacle. The Fetch is again blocked in frame (d), moves around the obstacle in frame (e), and reaches the goal in frame (f).



  1. Jeffrey Ichnowski, Jan Prins, and Ron Alterovitz, "Cloud-based Motion Plan Computation for Power-Constrained Robots," in Algorithmic Foundations of Robotics (WAFR 2016), Dec. 2016. (Download PDF)
  2. Jeffrey Ichnowski and Ron Alterovitz, "Scalable Multicore Motion Planning Using Lock-Free Concurrency," IEEE Transactions on Robotics, vol. 30, no. 5, pp. 1123-1136, Oct. 2014. (Publisher) (Download PDF)
  3. Jeffrey Ichnowski, Jan F. Prins, and Ron Alterovitz, "Cache-Aware Asymptotically-Optimal Sampling-Based Motion Planning," in Proc. IEEE International Conference on Robotics and Automation (ICRA), May 2014, pp. 5804-5810. (Publisher) (Download PDF)
  4. Jeffrey Ichnowski and Ron Alterovitz, "Parallel Sampling-Based Motion Planning with Superlinear Speedup," in Proc. IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Oct. 2012, pp. 1206-1212. (Publisher) (Download PDF)

NSF logo This research is made possible by generous support from the National Science Foundation (NSF) under awards CCF-1533844 and IIS-1149965. Any opinions, findings, and conclusions or recommendations expressed on this web site do not necessarily reflect the views of NSF.
