Our research lies at the intersection of simultaneous localization and mapping (SLAM), motion planning, and reinforcement learning applied to robotics with emphasis on online execution, safety, and robustness.
We develop environment models that unify geometric, semantic, and temporal reasoning to achieve understanding of space, objects, and dynamics from onboard robot sensing.
We develop algorithms for robot task planning and autonomous robot navigation that enable adaptive and safe behaviors in novel operational conditions.
We develop principles for collaborative inference and decision making in heterogeneous robot teams to achieve distributed intelligence.
Simultaneous localization and mapping (SLAM) has been instrumental in transitioning robots from factory floors to unstructured environments. State-of-the-art SLAM techniques track sensors over long trajectories while constructing maps in real time. Our research focuses on developing learned representations and online inference algorithms that capture geometric, semantic, and temporal properties of the environment from sensor observations such as images, point clouds, and inertial measurements. We study problems such as estimating object poses and shapes, using objects for loop closure and sensor re-localization, and reconstructing dense surfaces, semantic categories, and environment dynamics.
Robots operating in unknown environments need to recognize dynamic changes and reduce uncertainty in their models, much like the natural curiosity exhibited by humans and animals in new environments. Our research focuses on active perception problems, formulated as optimal control or reinforcement learning with a cost that captures the uncertainty in the robot and environment models. We develop uncertainty-reducing control policies that allow robots to actively decide how to improve the accuracy of their models, balancing exploitation of known states with exploration of novel ones.
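As a minimal illustration of the uncertainty-reducing idea, the sketch below greedily selects, from a discrete set of sensing actions (action names and noise levels are hypothetical), the one that most reduces the posterior variance of a scalar Kalman filter. The full problem replaces this one-step greedy choice with optimal control or reinforcement learning over long horizons.

```python
def posterior_variance(prior_var, meas_var):
    # Scalar Kalman update in information form: 1/var' = 1/var + 1/R
    return 1.0 / (1.0 / prior_var + 1.0 / meas_var)

def uncertainty_reducing_action(prior_var, action_noise):
    # Greedy one-step choice: pick the sensing action whose measurement
    # yields the smallest posterior variance of the estimate.
    post = {a: posterior_variance(prior_var, r) for a, r in action_noise.items()}
    return min(post, key=post.get), post

# Hypothetical sensing actions with different measurement noise levels
action_noise = {"look_left": 4.0, "look_ahead": 0.5, "look_right": 2.0}
best, post = uncertainty_reducing_action(1.0, action_noise)
# best == "look_ahead": the least noisy view reduces uncertainty the most
```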
We develop MISO, a hierarchical multiresolution submap optimization method for neural implicit mapping that uses 3D implicit features and local submap fusion to improve the efficiency and global consistency of large‑scale signed distance function (SDF) reconstruction.
We extend the widely used 3D occupancy mapping method OctoMap to distinguish multiple semantic categories within occupied space. Our semantic OctoMap maintains categorical distributions over semantic classes in an octree and computes the Shannon mutual information between the map and RGB-D images to guide efficient robotic exploration of large environments.
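The mutual-information computation at the core of this approach can be sketched for a single map cell, assuming a categorical prior over semantic classes and a confusion-matrix sensor model (both illustrative):

```python
import numpy as np

def entropy(p):
    # Shannon entropy in bits of a categorical distribution
    p = np.asarray(p, dtype=float)
    nz = p[p > 0]
    return -np.sum(nz * np.log2(nz))

def semantic_mutual_information(prior, confusion):
    """Shannon MI between a cell's semantic class m and a noisy label z.

    prior:     p(m), categorical distribution over K classes
    confusion: p(z|m), K x K sensor confusion matrix (rows sum to 1)
    """
    prior = np.asarray(prior, dtype=float)
    pz = prior @ confusion                           # p(z) = sum_m p(m) p(z|m)
    cond = 0.0
    for z in range(len(pz)):
        if pz[z] > 0:
            post = prior * confusion[:, z] / pz[z]   # p(m|z) via Bayes rule
            cond += pz[z] * entropy(post)
    return entropy(prior) - cond                     # I(m;z) = H(m) - E_z[H(m|z)]
```

With a uniform prior over three classes, a perfect sensor (identity confusion matrix) yields log2(3) bits of information, while an uninformative sensor (uniform confusion matrix) yields zero; candidate sensing views are ranked by summing this quantity over the cells they would observe.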
We develop an online probabilistic metric-semantic mapping approach that uses sparse Gaussian Process (GP) regression to build dense uncertainty-aware semantically-labeled 3D maps from RGB-D images.
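For intuition, the sketch below performs standard (non-sparse) GP regression with an RBF kernel over 2D locations; the predictive variance is the uncertainty the map reports away from observations. The sparse formulation in our work avoids this exact solve, whose cost grows cubically with the number of observations.

```python
import numpy as np

def rbf(A, B, ell=0.5):
    # Squared-exponential kernel between two sets of 2D points
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ell**2)

def gp_predict(X, y, Xq, noise=1e-2, ell=0.5):
    # Exact GP regression: predictive mean and variance at query points Xq
    K = rbf(X, X, ell) + noise * np.eye(len(X))
    Ks = rbf(Xq, X, ell)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    v = np.linalg.solve(L, Ks.T)
    return Ks @ alpha, 1.0 - (v**2).sum(axis=0)   # kernel diagonal is 1

X = np.array([[0.0, 0.0], [1.0, 0.0]])   # observed locations
y = np.array([1.0, -1.0])                # observed values (e.g. class scores)
mean, var = gp_predict(X, y, np.array([[0.0, 0.0], [5.0, 5.0]]))
# variance is low at observed locations and near the prior (1.0) far away
```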
We develop OrcVIO, a visual-inertial odometry method that incorporates object-level residuals to jointly optimize robot motion and object poses, improving trajectory accuracy and enabling consistent object-level mapping in real time.
We develop an algorithm that plans robot sensing trajectories to efficiently build signed distance field (SDF) maps and reduce their uncertainty by maximizing the expected information gain along candidate motion sequences.
We develop a SLAM method that uses semantic keypoints and segmentation masks from monocular camera images to optimize deformable mesh models of object shapes alongside object poses and camera poses, generating shape-aware object-level maps.
We develop a probabilistic data association approach for robust asynchronous event-based feature tracking and integrate it with inertial measurements in visual-inertial odometry to deliver accurate, high-rate 6-DoF motion estimation under fast motions and high dynamic-range conditions.
We develop an active tactile sensing approach that uses Monte Carlo Tree Search (MCTS) to adaptively select end-effector poses that minimize the number of touches required to reliably recognize an object.
We develop a Bayesian localization method that uses sets of detected objects as observations to robustly estimate a robot's pose within a prior object-level map. The key idea is that object set likelihoods with unknown data association can be computed via matrix permanents.
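Concretely, when each entry of a matrix holds the likelihood that detection i corresponds to landmark j, the permanent of that matrix sums the joint likelihood over all possible associations. A small, illustrative implementation via Ryser's inclusion-exclusion formula:

```python
from itertools import combinations

def permanent(M):
    # Ryser's inclusion-exclusion formula, O(2^n * n^2) for an n x n matrix
    n = len(M)
    total = 0.0
    for k in range(1, n + 1):
        for cols in combinations(range(n), k):
            prod = 1.0
            for row in M:
                prod *= sum(row[j] for j in cols)
            total += (-1) ** k * prod
    return (-1) ** n * total

# Entry (i, j): likelihood of detection i matching landmark j (values illustrative)
L_matrix = [[0.9, 0.1], [0.2, 0.8]]
set_likelihood = permanent(L_matrix)   # 0.9*0.8 + 0.1*0.2 = 0.74
```

Ryser's formula is exponential but far faster than enumerating all n! associations, and the exact computation can be replaced by polynomial-time approximations for larger object sets.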
We formulate active object recognition as a nonmyopic planning and hypothesis testing problem and develop a POMDP-based view planning algorithm that selects informative viewpoints to improve object class and pose estimates.
Autonomous robot operation in unknown unstructured environments requires, beyond perception and mapping, online generation of dynamically feasible trajectories and control techniques with guaranteed safety and stability properties. Our work focuses on integrated mapping, planning, and control with safety guarantees and robustness to learned system dynamics and environment models. We also develop task and motion planning algorithms for executing complex tasks specified in natural language, as well as inverse reinforcement learning techniques that learn desirable robot behaviors from expert demonstrations.
We develop LTLCodeGen, a method that uses large language model (LLM) code generation to translate natural language robot navigation instructions into syntactically correct linear temporal logic (LTL) formulas that can be combined with a semantic occupancy map to generate robot trajectories satisfying the specified tasks.
We develop a control strategy for pursuit-evasion with occlusions that formulates visibility and safety constraints as control barrier functions (CBFs) and combines sampling‑based planning with convex control synthesis to maintain evader line-of-sight and avoid obstacles.
We formulate distributionally robust control barrier functions (DR-CBFs) to incorporate noisy sensor measurements directly into optimization-based control synthesis, guaranteeing safe and efficient autonomous navigation in dynamic environments.
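For background, the sketch below implements a standard (non-robust) CBF safety filter for a single-integrator robot avoiding one circular obstacle; with a single constraint, the quadratic program has a closed-form solution. The distributionally robust version tightens this constraint to account for sensing uncertainty.

```python
import numpy as np

def cbf_filter(x, u_des, p_obs, radius, alpha=1.0):
    # Barrier h(x) = ||x - p_obs||^2 - radius^2 for single-integrator x_dot = u.
    # Safety requires h_dot >= -alpha * h, i.e. a . u >= b with a, b below.
    x, u_des, p_obs = (np.asarray(v, dtype=float) for v in (x, u_des, p_obs))
    h = (x - p_obs) @ (x - p_obs) - radius**2
    a = 2.0 * (x - p_obs)
    b = -alpha * h
    if a @ u_des >= b:
        return u_des                                  # desired control already safe
    return u_des + (b - a @ u_des) / (a @ a) * a      # minimal-norm correction

# Desired control drives straight at the obstacle; the filter brakes it
u_safe = cbf_filter(x=[2.0, 0.0], u_des=[-5.0, 0.0], p_obs=[0.0, 0.0], radius=1.0)
# u_safe == [-0.75, 0.0]: exactly enough braking to satisfy the barrier constraint
```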
We introduce a neural-network model that estimates inertial measurement biases from past inertial data to enable invariant visual-inertial odometry without a bias state, leading to improved robustness and accuracy, especially when visual information is intermittent or unavailable.
We develop an environment-aware safe tracking (EAST) method that integrates obstacle clearance costs, convex-set motion prediction, and safety constraints via a reference governor and control barrier functions to enable adaptive, collision-aware robot navigation in unknown dynamic environments.
We develop a port-Hamiltonian neural ODE formulation on Lie groups that embeds energy conservation and Lie-group structure into learned robot dynamics and pairs it with energy-shaping control to enable stable trajectory tracking across a range of robotic platforms.
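A port-Hamiltonian system evolves as dx/dt = (J - R) grad H(x) + G u, with skew-symmetric J exchanging energy and positive semidefinite R dissipating it. The toy simulation below (a damped mass-spring system with explicit Euler integration; all parameters illustrative) shows the energy trending downward under zero input:

```python
import numpy as np

def port_hamiltonian_step(x, grad_H, J, R, G, u, dt):
    # One explicit-Euler step of x_dot = (J - R) grad_H(x) + G u
    return x + dt * ((J - R) @ grad_H(x) + G @ u)

# Damped mass-spring system: H(q, p) = (q^2 + p^2) / 2, so grad_H(x) = x
H = lambda x: 0.5 * x @ x
grad_H = lambda x: x
J = np.array([[0.0, 1.0], [-1.0, 0.0]])   # skew-symmetric interconnection
R = np.diag([0.0, 0.1])                   # damping on the momentum state
G = np.array([[0.0], [1.0]])              # force input enters the momentum
x = np.array([1.0, 0.0])
energies = [H(x)]
for _ in range(1000):
    x = port_hamiltonian_step(x, grad_H, J, R, G, np.zeros(1), 1e-2)
    energies.append(H(x))
# with zero input, dissipation makes the stored energy decay over time
# (explicit Euler, so the decay is not exactly monotone step to step)
```

Embedding this structure into a learned model means the network only has to fit H, J, R, and G, and energy-shaping control then follows from the same quantities.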
We develop an inverse reinforcement learning approach that learns a cost function on a semantic map from expert navigation data and uses differentiable planning to infer policies that generalize demonstrated behavior to novel autonomous navigation scenarios.
We develop a tracking controller using a virtual reference governor system, whose state acts as a stabilization point for the actual system and is controlled to track a desired path while enforcing runtime-sensed safety constraints.
We develop a sparse Bayesian kernel-based occupancy mapping method that incrementally builds a probabilistic map from streaming sensor data and supports efficient collision checking by representing obstacles with a sparse set of relevance vectors.
We develop a locomotion planning and control approach that uses linear temporal logic specifications and motion primitives to compute dynamically feasible locomotion plans for bipedal robots and tracks them with a quadratic-programming-based controller.
We develop a fully autonomous quadrotor system, integrating state estimation, mapping, planning, and control, to achieve high-speed navigation through unknown, GPS-denied, and cluttered 3D environments using only onboard sensing and computation.
We develop search-based motion planning algorithms for quadrotor robots that compute dynamically feasible, smooth, minimum-time trajectories in cluttered environments using motion primitives and heuristic guidance from closed-form optimal control.
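The search can be sketched as A* over a lattice of motion primitives, here simplified to unit translations with a Euclidean heuristic (the actual planner uses dynamically feasible primitives and a heuristic from closed-form optimal control):

```python
import heapq

def astar(start, goal, occupied, primitives, size=10):
    # A* over a lattice: expand states with short motion primitives,
    # guided by an admissible Euclidean-distance heuristic.
    h = lambda p: ((p[0] - goal[0]) ** 2 + (p[1] - goal[1]) ** 2) ** 0.5
    frontier = [(h(start), 0.0, start, [start])]
    closed = set()
    while frontier:
        _, g, p, path = heapq.heappop(frontier)
        if p == goal:
            return path
        if p in closed:
            continue
        closed.add(p)
        for dx, dy in primitives:
            q = (p[0] + dx, p[1] + dy)
            if 0 <= q[0] < size and 0 <= q[1] < size and q not in occupied:
                cost = g + (dx * dx + dy * dy) ** 0.5
                heapq.heappush(frontier, (cost + h(q), cost, q, path + [q]))
    return None  # no collision-free plan exists

# Plan around a small wall with axis-aligned and diagonal primitives
primitives = [(1, 0), (-1, 0), (0, 1), (0, -1), (1, 1), (1, -1), (-1, 1), (-1, -1)]
path = astar((0, 0), (4, 0), occupied={(2, 0), (2, 1)}, primitives=primitives)
```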
Collaboration among multiple robots with heterogeneous sensing, memory, computation, and action capabilities offers increased efficiency and robustness of perception and task execution, compared to any individual robot. Our research aims to establish principles for collaborative estimation and decision making among robots with heterogeneous capabilities. We are interested in problems such as relative localization in a robot team, collaborative mapping and object tracking, and coordination and distributed control of robot teams for uncertainty reduction and execution of complex multi-robot tasks.
We develop a real-time decentralized metric-semantic SLAM method that enables heterogeneous robot teams to collaboratively build object-level maps and perform accurate multi-robot localization and loop closure without GPS, while keeping communication and computation lightweight.
We develop a physics-informed multi-agent reinforcement learning (MARL) method that uses a distributed port-Hamiltonian policy with self-attention to capture agent physics and interactions with varying numbers of agents in cooperative and competitive settings.
We develop ROAM, a method that enables a team of robots to collaboratively build consistent 3-D semantic maps and plan informative trajectories by performing consensus-constrained Riemannian optimization with only local peer-to-peer communication.
We formulate multi-robot object SLAM as a variational inference problem over a communication graph and develop a distributed mirror descent algorithm that enables robots to collaboratively estimate consistent object maps and trajectories using only local observations and one-hop communication.
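The one-hop communication pattern can be illustrated with a linear consensus sketch (a simplification, not the mirror-descent algorithm itself): each robot repeatedly averages its estimate with its neighbors', and the network converges to agreement without any central coordinator.

```python
def consensus_step(estimates, neighbors, eps=0.2):
    # Each robot nudges its estimate toward its one-hop neighbors' estimates;
    # eps must be below 1 / (max node degree) for stability.
    return {i: x + eps * sum(estimates[j] - x for j in neighbors[i])
            for i, x in estimates.items()}

# Four robots in a ring, each with a different local estimate of a landmark
estimates = {0: 0.0, 1: 4.0, 2: 8.0, 3: 4.0}
neighbors = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
for _ in range(200):
    estimates = consensus_step(estimates, neighbors)
# all estimates converge to the network average, 4.0
```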
We develop an anytime decentralized planning algorithm that enables a team of mobile robots to progressively improve information-gathering trajectories in real time.
We formulate active information acquisition for mobile sensing systems as an optimal control problem that minimizes estimation uncertainty, develop decentralized planning algorithms with performance guarantees, and demonstrate them in multi-robot active SLAM.
We develop distributed source-seeking algorithms that guide robots to locate the maximum of a noisy signal field by following stochastic gradients of either direct measurements (model-free) or mutual information (model-based).
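The model-free variant can be sketched as stochastic gradient ascent using finite differences of the noisy signal along random probe directions (the signal model, source location, and step sizes below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
SOURCE = np.array([3.0, -2.0])   # hypothetical signal source location

def signal(p):
    # Noisy signal strength, maximal at the source
    return -np.sum((p - SOURCE) ** 2) + 0.01 * rng.normal()

def source_seek(p0, steps=2000, delta=0.1, lr=0.01):
    # Model-free seeking: ascend a stochastic finite-difference gradient
    p = np.array(p0, dtype=float)
    for _ in range(steps):
        d = rng.normal(size=2)
        d /= np.linalg.norm(d)            # random unit probe direction
        g = (signal(p + delta * d) - signal(p - delta * d)) / (2 * delta)
        p += lr * g * d                   # stochastic gradient ascent step
    return p

p_final = source_seek([0.0, 0.0])   # ends up near SOURCE despite the noise
```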