Multi-modal Environment Understanding

Simultaneous Localization And Mapping (SLAM) has been instrumental in transitioning robots from factory floors to unstructured environments. State-of-the-art SLAM techniques can track visual-inertial sensors over long trajectories in real time, while providing a geometric map of the environment. However, SLAM has advanced mostly in isolation from the impressive progress in object recognition and scene understanding, enabled by deep convolutional neural networks and structured object models. Our work focuses on developing learning and online inference algorithms that capture geometric, semantic, topological, and temporal properties in a common environment model. We are interested in problems such as estimating the poses and shapes of objects in the environment, using objects for loop closure and re-localization of the sensing system, and reconstructing dense surface geometry together with its semantic categories to form a metric-semantic map.
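The core estimation problem behind landmark-based SLAM can be illustrated on a toy linear case. The sketch below is a hypothetical, minimal setup (not the group's actual system): 2-D robot positions and object landmark positions are estimated jointly by least squares from odometry measurements u ≈ x_{t+1} - x_t and relative object observations z ≈ l_j - x_t; re-observing a landmark later in the trajectory plays the role of a loop closure. The function name `solve_slam` is illustrative.

```python
import numpy as np

def solve_slam(T, M, odom, obs):
    """Least-squares landmark SLAM on a linear 2-D toy problem.
    odom: list of (t, u) with u ≈ x_{t+1} - x_t   (odometry factors)
    obs:  list of (t, j, z) with z ≈ l_j - x_t    (object observation factors)
    """
    n = 2 * (T + 1) + 2 * M            # stacked state: poses, then landmarks
    rows, rhs = [], []

    def add(pairs, value):
        row = np.zeros((2, n))
        for idx, s in pairs:           # place ±I blocks for each variable
            row[:, 2 * idx:2 * idx + 2] = s * np.eye(2)
        rows.append(row)
        rhs.append(np.asarray(value, float))

    add([(0, 1.0)], [0.0, 0.0])                 # anchor x_0 (removes gauge freedom)
    for t, u in odom:
        add([(t + 1, 1.0), (t, -1.0)], u)       # odometry constraint
    for t, j, z in obs:
        add([(T + 1 + j, 1.0), (t, -1.0)], z)   # landmark constraint

    x, *_ = np.linalg.lstsq(np.vstack(rows), np.concatenate(rhs), rcond=None)
    return x[:2 * (T + 1)].reshape(-1, 2), x[2 * (T + 1):].reshape(-1, 2)

# Landmark 0 is observed at t = 0 and again at t = 2: a loop closure that
# ties the end of the trajectory back to its beginning.
poses, landmarks = solve_slam(
    T=2, M=1,
    odom=[(0, [1.0, 0.0]), (1, [1.0, 0.0])],
    obs=[(0, 0, [1.0, 1.0]), (2, 0, [-1.0, 1.0])],
)
```

Real SLAM systems solve the same kind of factor-graph least-squares problem, but with nonlinear pose and observation models handled by iterative solvers.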


Autonomous Navigation

Autonomous robot operation in unknown, complex, unstructured environments requires online generation of dynamically feasible trajectories and control techniques with guaranteed safety and stability properties. Our work focuses on techniques for joint online mapping and planning, planning of smooth time-parameterized trajectories, and guaranteeing safe control under uncertain or learned system dynamics or environment models. Beyond navigating from point A to point B, we are interested in specifying and executing complex robot tasks defined over semantically meaningful entities in the environment. We are developing task and motion planning algorithms for executing complex mission specifications, as well as inverse reinforcement learning techniques to learn desirable robot behavior from expert demonstrations.
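As a concrete instance of a smooth time-parameterized trajectory, the sketch below evaluates a standard minimum-jerk (quintic) segment between two rest states; the helper name `min_jerk` and the specific endpoints are illustrative, not tied to any particular planner from our work.

```python
import numpy as np

def min_jerk(x0, xf, T):
    """Quintic time-parameterized segment from x0 to xf over duration T,
    with rest-to-rest boundary conditions: zero velocity and acceleration
    at both endpoints (the classic minimum-jerk profile)."""
    x0, xf = np.asarray(x0, float), np.asarray(xf, float)

    def pos(t):
        s = np.clip(t / T, 0.0, 1.0)                  # normalized time
        return x0 + (xf - x0) * (10 * s**3 - 15 * s**4 + 6 * s**5)

    def vel(t):
        s = np.clip(t / T, 0.0, 1.0)
        return (xf - x0) * (30 * s**2 - 60 * s**3 + 30 * s**4) / T

    return pos, vel

# A 2-D segment: start at the origin, reach (2, 1) in 4 seconds.
pos, vel = min_jerk([0.0, 0.0], [2.0, 1.0], T=4.0)
```

Chaining such segments through waypoints, with continuity constraints at the joints, is one common way to turn a geometric path into a dynamically feasible, smoothly executable trajectory.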


Active Information Acquisition

Robots operating in unknown environments need to recognize sudden, dynamic, or even adversarial changes and reduce uncertainty in their dynamics and environment models. This is similar to the natural curiosity exhibited by humans and animals in new environments. Our research focuses on optimal control and reinforcement learning problems in which the cost captures uncertainty in the robot and environment models, measured using entropy, mutual information, or probability of error. We aim to learn or compute uncertainty-reducing control policies that allow the robots to actively decide how to improve the accuracy of their models and explore unknown areas or situations.
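The mutual-information criterion mentioned above can be made concrete on a toy target-search problem. In this hypothetical sketch (the function names and sensor parameters are illustrative), a target occupies one of several cells, the robot can point a binary detector at any cell, and it selects the cell whose observation carries the most information about the target location:

```python
import numpy as np

def Hb(q):
    """Binary entropy in nats, defined as 0 at q in {0, 1}."""
    q = np.clip(q, 1e-12, 1 - 1e-12)
    return -(q * np.log(q) + (1 - q) * np.log(1 - q))

def mutual_information(belief, a, pd=0.9, pf=0.1):
    """I(target ; detection) when the sensor looks at cell a.
    belief: probability that the target occupies each cell
    pd: detection probability if the target is in cell a
    pf: false-alarm probability otherwise"""
    pa = belief[a]
    pz1 = pd * pa + pf * (1 - pa)          # marginal detection probability
    # I(X; Z) = H(Z) - H(Z | X): marginal minus conditional entropy.
    return Hb(pz1) - (pa * Hb(pd) + (1 - pa) * Hb(pf))

# Choose the sensing action that maximizes the expected information gain.
belief = np.array([0.5, 0.3, 0.15, 0.05])
best = max(range(len(belief)), key=lambda a: mutual_information(belief, a))
```

Looking at the most uncertain cell (probability near one half) yields the largest expected reduction in entropy; the same greedy information-gain principle scales to continuous state spaces with Gaussian beliefs.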


Distributed Intelligence

Collaboration among multiple robots with heterogeneous sensing, memory, computation, and motion capabilities offers increased efficiency, accuracy, and robustness of information collection and mission execution, compared to any individual agent. The goal of our research is to establish principles for collaborative inference and decision making among robots with heterogeneous capabilities. We are interested in problems such as relative localization in a robot team, collaborative mapping and object tracking, and coordination and distributed control of robot teams for active uncertainty reduction or the execution of complex multi-robot tasks.
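A basic building block for such collaborative inference is linear consensus: each robot repeatedly nudges its local estimate toward those of its communication neighbors, and on a connected graph the team converges to the average of the initial estimates without any central coordinator. The sketch below is a minimal hypothetical example (the function name, graph, and step size are illustrative):

```python
import numpy as np

def consensus(estimates, neighbors, steps=100, eps=0.3):
    """Linear consensus iteration: x_i ← x_i + eps * Σ_j (x_j - x_i)
    over each robot's neighbor set. For symmetric neighbor relations and a
    small enough step size eps, all estimates converge to the team average."""
    x = np.asarray(estimates, float).copy()
    for _ in range(steps):
        x_new = x.copy()
        for i, nbrs in neighbors.items():
            x_new[i] = x[i] + eps * sum(x[j] - x[i] for j in nbrs)
        x = x_new                       # synchronous update across the team
    return x

# Four robots on a line graph 0-1-2-3, each starting from a different
# local measurement of the same quantity.
est = consensus([1.0, 3.0, 5.0, 7.0],
                {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]})
```

The same averaging structure underlies distributed Kalman filtering and distributed optimization methods used for collaborative mapping and target tracking, where the exchanged quantities are information vectors and matrices rather than scalars.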