Intelligent Robot Lab
Brown University, Providence RI

Research

Broadly speaking, our research falls into three areas:

  • AI Robotics, where we aim to establish a link between the low level at which robots must perceive and act and the high-level AI techniques that enable robust, flexible, long-horizon planning.

  • Mobile Manipulation, where we are concerned with developing algorithms for performing general-purpose object manipulation with robots, and for planning in tasks that combine aspects of both navigation and object manipulation.

  • Reinforcement Learning, where we study foundational algorithms for learning to behave in the world through trial and error.
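The trial-and-error learning at the core of reinforcement learning can be illustrated with a minimal sketch: tabular Q-learning on a toy five-state chain, where the agent must discover that moving right earns reward. Everything here (the chain environment, the hyperparameters) is illustrative, and not taken from the lab's work:

```python
import random

N_STATES = 5        # states 0..4; state 4 is the goal
ACTIONS = (1, -1)   # move right or left (ties in the argmax favor "right")

def step(state, action):
    """Deterministic chain dynamics: reward 1.0 only upon reaching the goal."""
    nxt = min(max(state + action, 0), N_STATES - 1)
    done = nxt == N_STATES - 1
    return nxt, (1.0 if done else 0.0), done

def q_learning(episodes=500, alpha=0.5, gamma=0.9, epsilon=0.1, seed=0):
    """Learn Q(s, a) by trial and error with epsilon-greedy exploration."""
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            # Explore with probability epsilon, otherwise act greedily.
            if rng.random() < epsilon:
                a = rng.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda b: q[(s, b)])
            s2, r, done = step(s, a)
            target = r if done else r + gamma * max(q[(s2, b)] for b in ACTIONS)
            q[(s, a)] += alpha * (target - q[(s, a)])   # temporal-difference update
            s = s2
    return q

q = q_learning()
# Greedy policy extracted from the learned values: "go right" in every state.
policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES - 1)}
```

After training, the learned values order the actions correctly in every state, even though the agent was never told which action was best.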

Here are a few sample projects:

  1. Robot Skill Acquisition.
  2. Constructing High-Level Symbolic Representations for Planning (forthcoming).
  3. Planning for the Decentralized Control of Multi-Robot Teams.
  4. Robot Motion Planning on a Chip.
  5. The Fourier Basis.

Robot Skill Acquisition

Skill acquisition is the ability to create new skills, refine them through practice, and apply them in new task contexts. It lies at the heart of two important aspects of human intelligence:

  1. Humans are able to perpetually improve their solutions to difficult control tasks through practice, moving from inefficient, planned movements that require a great deal of attention, to smooth, optimized movements that are executed efficiently without conscious thought. This type of learning underlies our ability to specialize at tasks by devoting time and effort to them.

  2. Through the retention and refinement of solutions to important subproblems, humans become able to solve increasingly difficult problems over time.

We developed CST, an algorithm for learning from demonstration. CST segments demonstrated trajectories into skills, determining both a control policy for each skill and the variables each skill depends upon. This allows a user to naturally show a robot how to achieve a task, rather than having to program it.
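At a very high level, CST's segmentation step can be caricatured in a few lines: walk along a trajectory, fit a simple model to the current segment, and declare a skill boundary wherever that model stops fitting. The sketch below only illustrates that idea by fitting straight lines to a hand-made 1-D trajectory; the actual algorithm performs changepoint detection over value-function models and candidate abstractions. The trajectory, tolerance, and helper names here are invented:

```python
def fit_line(xs, ys):
    """Least-squares line fit; returns (slope, intercept, sum of squared residuals)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    slope = 0.0 if sxx == 0 else sum((x - mx) * (y - my)
                                     for x, y in zip(xs, ys)) / sxx
    intercept = my - slope * mx
    resid = sum((y - (slope * x + intercept)) ** 2 for x, y in zip(xs, ys))
    return slope, intercept, resid

def segment(ys, tol=0.1):
    """Greedy segmentation: grow a segment until one line no longer fits it."""
    segments, start = [], 0
    for end in range(2, len(ys) + 1):
        xs = list(range(start, end))
        _, _, resid = fit_line(xs, ys[start:end])
        if resid > tol:
            segments.append((start, end - 1))  # half-open range [start, end-1)
            start = end - 1
    segments.append((start, len(ys)))
    return segments

# Piecewise-linear "demonstration": slope +1 for 10 steps, then slope -2.
traj = [float(t) for t in range(10)] + [9.0 - 2.0 * t for t in range(1, 10)]
segs = segment(traj)
```

The detected boundary falls exactly where the underlying controller changes, which is the intuition behind mapping trajectory segments to separate skills.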

Here's a video (courtesy of Scott Kuindersma) of the uBot-5 acquiring skills by demonstration using CST:

We next used CST as a core algorithm for achieving completely autonomous robot skill acquisition. We placed the uBot-5 in a pair of tasks that it had to learn to complete by navigating to and manipulating a series of objects. The uBot learned to sequence a set of existing controllers into an optimal solution to the first task, and then applied CST to the resulting solution trajectory to extract skills. These skills substantially improved the uBot's ability to solve the second task.

This video (which won Best Student Video at the 2011 AAAI Video Competition!) summarizes the experiment:

This work is described in the following paper:

... and in much more detail in Professor Konidaris's PhD thesis:

Planning for the Decentralized Control of Multi-Robot Teams

The decreasing cost and increasing sophistication of robots are creating many opportunities for applications where teams of relatively cheap robots can be deployed to solve real problems. However, if multi-robot systems are to become truly general-purpose tools, they will require general-purpose planning techniques that enable them to automatically synthesize solutions to new tasks.

The key to developing such generic techniques is a general-purpose formulation of cooperative multi-robot problems. The Decentralized POMDP with macro-actions (MacDec-POMDP) is such a formulation: it captures the problem of planning for teams of agents that must act without perfect knowledge of each other's state (hence decentralized), in environments that they can only sense locally (hence partially observable), using built-in robot controllers as actions (hence macro-actions). The MacDec-POMDP accounts for uncertainty in action outcomes, sensing, and information about other agents; any problem in which multiple robots share the same utility function can be formalized as a MacDec-POMDP.
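As a rough illustration of the formulation (not the paper's notation or implementation), the MacDec-POMDP tuple can be written down as a plain container: shared states and a shared reward, per-agent observations, and per-agent macro-actions modeled option-style with initiation and termination conditions. The toy two-robot instance below is entirely invented:

```python
from dataclasses import dataclass
from typing import Callable, Dict, List, Tuple

@dataclass
class MacroAction:
    """An option-style controller: where it may start and where it stops."""
    name: str
    initiation: Callable[[str], bool]   # can this controller start in a state?
    termination: Callable[[str], bool]  # does it terminate in this state?

@dataclass
class MacDecPOMDP:
    """A sketch of the MacDec-POMDP tuple as a container of its components."""
    agents: List[str]                                # the team
    states: List[str]                                # shared world states
    transition: Callable[[str, Tuple[str, ...]], Dict[str, float]]  # P(s' | s, joint macro-action)
    reward: Callable[[str, Tuple[str, ...]], float]                 # shared team reward
    observations: Dict[str, List[str]]               # per-agent observation sets
    observe: Callable[[str, str], Dict[str, float]]  # P(o | s', agent)
    macro_actions: Dict[str, List[MacroAction]]      # per-agent controllers

# Tiny illustrative instance: two robots sharing one 'deliver' controller.
deliver = MacroAction("deliver", lambda s: True, lambda s: s == "done")
problem = MacDecPOMDP(
    agents=["robot1", "robot2"],
    states=["start", "done"],
    transition=lambda s, a: {"done": 1.0},
    reward=lambda s, a: 1.0 if s == "done" else 0.0,
    observations={"robot1": ["near", "far"], "robot2": ["near", "far"]},
    observe=lambda s, agent: {"near": 0.5, "far": 0.5},
    macro_actions={"robot1": [deliver], "robot2": [deliver]},
)
```

Note how nothing in the container refers to a particular task: a planner written against this interface can, in principle, be handed any cooperative multi-robot problem expressed in the same form.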

Here is a recent video of our work with Professor Chris Amato of UNH and colleagues at MIT, using a generic planner to solve a multi-robot problem in a warehouse domain:


... and here's another for cooperative bartending using a heterogeneous team of robots (a PR2 and two Turtlebots):


The MacDec-POMDP formulation was introduced in:

... and the videos show experiments from the following two papers:

  • C. Amato, G.D. Konidaris, G. Cruz, C. Maynor, J.P. How, and L.P. Kaelbling. Planning for Decentralized Control of Multiple Robots Under Uncertainty. To appear in Proceedings of the 2015 IEEE International Conference on Robotics and Automation, May 2015.

  • C. Amato, G.D. Konidaris, A. Anders, G. Cruz, J.P. How, and L.P. Kaelbling. Policy Search for Multi-Robot Coordination under Uncertainty. To appear in Proceedings of Robotics: Science and Systems XI, July 2015.

Robot Motion Planning on a Chip

The motion planning problem - how to move a robot from a start pose to a goal pose without colliding with any obstacles - is critical to designing robots that operate in real environments. Yet despite decades of research on this topic, software motion planners still take seconds to find a single plan; recent GPU-based efforts have improved on this, but can still consume hundreds of milliseconds (and hundreds of watts of power) per plan.

In collaboration with computer architect Professor Dan Sorin at Duke ECE and his students, we have recently developed a new approach that builds custom circuitry that exploits the massive parallelism present in the problem to achieve real-time motion planning. This approach was able to find motion plans for real robot problems in less than one millisecond, while consuming far less power than a GPU (or even a CPU).
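The hardware idea can be caricatured in software: fix a roadmap in advance, precompute the set of workspace voxels each edge sweeps through, and at query time check every edge's voxel set against the currently occupied voxels (dedicated parallel circuits on the chip; a simple pass here), then search over the surviving edges. The roadmap, voxel ids, and sizes below are invented for illustration:

```python
from collections import deque

# Precomputed offline: roadmap edge -> set of voxels the robot sweeps
# while traversing that edge (invented by hand for this tiny example).
swept = {
    ("A", "B"): {1, 2},
    ("B", "C"): {3},
    ("A", "D"): {4},
    ("D", "C"): {5, 6},
}

def plan(start, goal, occupied):
    """BFS over the roadmap using only edges whose swept voxels are all free."""
    # On the chip every edge is checked simultaneously; here, one pass.
    free = {e for e, vox in swept.items() if not (vox & occupied)}
    adj = {}
    for a, b in free:
        adj.setdefault(a, []).append(b)
        adj.setdefault(b, []).append(a)
    frontier, parent = deque([start]), {start: None}
    while frontier:
        node = frontier.popleft()
        if node == goal:
            path = []
            while node is not None:
                path.append(node)
                node = parent[node]
            return path[::-1]
        for nxt in adj.get(node, []):
            if nxt not in parent:
                parent[nxt] = node
                frontier.append(nxt)
    return None  # goal unreachable through collision-free edges

# With voxel 3 occupied, the A-B-C route is blocked but A-D-C survives.
```

The expensive geometric work (computing swept volumes) happens entirely offline, which is what lets the online query reduce to massively parallel set intersections.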

Here are two videos describing the technology:

This work is described in the following papers:

The technology is the basis for a startup, Realtime Robotics. Real-time motion planning, coming soon to a robot near you!