Current Projects

Distributed and Collaborative Intelligent Systems and Technology (ARL CRA DCIST)

The Distributed and Collaborative Intelligent Systems and Technology (DCIST) Collaborative Research Alliance (CRA) will create Autonomous, Resilient, Cognitive, Heterogeneous Swarms that enable humans to participate in a wide range of missions in dynamically changing, harsh, and contested environments. These include search and rescue of hostages, information gathering after terrorist attacks or natural disasters, and humanitarian missions. Humans and robots will operate as a cohesive team, with robots keeping humans out of harm's way (Force Protection) and extending and amplifying their reach so that one human can do the work of ten (Force Multiplication). The team is led by the University of Pennsylvania and includes collaborators from the U.S. Army Research Laboratory, Massachusetts Institute of Technology, Georgia Institute of Technology, University of California, and University of Southern California.


RAIDER: Resilient Actionable Intelligence for Distributed Environment understanding and Reasoning (ONR Science of AI)

This project develops scalable computational methods for joint perception and planning that enable a team of autonomous agents to extract task-relevant information from heterogeneous, distributed data sources in support of collaborative and distributed optimal planning with formal performance guarantees. Future control and decision-making systems need to strike a balance among guaranteed performance in the presence of uncertainty, extraction of information relevant to the task at hand, and reduced algorithmic complexity that enables real-time inference, planning, and learning. This confluence of control, estimation, machine learning, and computational science and engineering is necessary for high-confidence, high-reliability, minimal-supervision autonomous systems that can understand and act in high-optempo missions. The project develops: unified perception/action representations usable across different modalities and spatiotemporal scales; task-aware perception methods that account for the underlying control objective and the informational and computational resources available across the team; uncertainty quantification in the unified multi-modal representations to enable principled information exchange among agents; hierarchical team architectures and abstractions that model planning and estimation problems at the right level of granularity; and distributed decision-making and inference methods capable of handling streaming multi-resolution data.
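As a toy illustration of the distributed decision-making theme, agents on a communication graph can agree on a common estimate by repeated local averaging. The sketch below is our own illustration (the function name and the Metropolis-style mixing weights are assumptions, not the project's algorithm):

```python
import numpy as np

def consensus_step(x, W):
    """One synchronous consensus-averaging step.

    x: vector of per-agent estimates; W: row-stochastic mixing matrix
    whose sparsity pattern matches the communication graph. Each agent
    replaces its estimate with a weighted average of its neighbors'
    estimates, using only local communication.
    """
    return W @ x
```

With a doubly stochastic W (e.g., Metropolis weights), repeated application drives all agents to the average of their initial estimates.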


Distributed Bayesian Learning and Safe Control for Autonomous Wildfire Detection (NSF NRI)

This project aims to take advantage of the hyperconvergence of computation, storage, sensing, and communication in small unmanned aerial vehicles (UAVs) to realize large-scale mapping of environmental factors such as temperature, vegetation, pressure, and chemical concentration that contribute to fire initiation. Developing UAV teams that recharge autonomously and communicate intermittently with each other and with static sensors will aid firefighters with continuous real-time surveillance and early detection of emerging fires. This project focuses on three fundamental innovations to address the scientific challenges associated with autonomous, collaborative environmental monitoring. First, a new Satisfiability Modulo Optimal Control framework is proposed to handle mixed continuous flight dynamics and discrete constraints and ensure collision avoidance, persistent communication, and autonomous recharging for UAV navigation. Second, a distributed systems architecture using new uncertainty-weighted models will be developed to enable cooperative mapping across a heterogeneous team of UAVs and static sensors and avoid bandwidth-intensive data streaming. Lastly, a new Bayesian learning and inference approach is proposed to generate multi-modal (e.g., thermal, semantic, geometric, chemical) maps of real-time environmental conditions with adaptive accuracy and uncertainty quantification.
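As a rough illustration of uncertainty-weighted cooperative mapping, per-agent Gaussian estimates of an environmental field can be fused by inverse-variance weighting, so agents exchange only summary statistics rather than raw sensor streams. This is a minimal sketch under standard independent-Gaussian assumptions, not the project's actual model:

```python
import numpy as np

def fuse_maps(means, variances):
    """Inverse-variance-weighted fusion of per-agent field estimates.

    means, variances: arrays of shape (n_agents, n_cells). Each agent
    holds a Gaussian estimate of an environmental quantity (e.g.,
    temperature) per map cell. Fusion weights each agent by its
    confidence, so more certain agents dominate the combined map.
    """
    w = 1.0 / variances                       # per-agent confidence weights
    fused_var = 1.0 / w.sum(axis=0)           # combined uncertainty shrinks
    fused_mean = fused_var * (w * means).sum(axis=0)
    return fused_mean, fused_var
```

Note that the fused variance is never larger than any individual agent's variance, which is the sense in which cooperation sharpens the map.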


Representation Learning for Semantic Mapping and Safe Robot Navigation (NSF RI)

This project aims to develop new theoretical and algorithmic tools for advancing the ability of autonomous systems to comprehend their surroundings online from sensory observations and adapt their operation safely in response to changing conditions. The key innovations include techniques for online inference of object shapes and robot dynamics models from sensory observations, as well as control design for the learned robot dynamics subject to safety constraints from the observed objects. We are developing online perception algorithms that jointly estimate object poses, shapes, and robot motion, which enables specification of safety constraints for autonomous navigation. We are leveraging Koopman operators and Bayesian neural networks to learn robot dynamics and infer object shapes. Our algorithms provide an adaptive way of estimating robot dynamics from online data, while relying on approximation error bounds to guarantee that the control design satisfies the safety constraints provided by the perceptual system. This project will enable autonomous robot operation in unknown environments that is adaptable, due to the use of learned robot and object models, and safe, due to the use of perception- and uncertainty-aware constraints in the control design.
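A standard way to learn a linear (Koopman) model of nonlinear dynamics from snapshot data is extended dynamic mode decomposition (EDMD), which fits the operator by least squares in a lifted feature space. The sketch below is illustrative (the dictionary of observables and the function names are our own assumptions):

```python
import numpy as np

def edmd_fit(X, Y, lift):
    """Least-squares estimate of a finite-dimensional Koopman operator.

    X, Y: (n_samples, n_dims) state snapshots, with Y[i] the successor
    of X[i] under the dynamics; lift: dictionary of observables mapping
    a state to a feature vector. Returns K such that
    lift(x_next) ~= lift(x) @ K, a linear model in the lifted space.
    """
    PX = np.array([lift(x) for x in X])   # lifted current states
    PY = np.array([lift(y) for y in Y])   # lifted successor states
    K, *_ = np.linalg.lstsq(PX, PY, rcond=None)
    return K
```

For a linear system the identity dictionary recovers the system matrix exactly; richer dictionaries (polynomials, radial basis functions) approximate nonlinear dynamics.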


Past Projects

Lyapunov-Certified Cognitive Control for Safe Autonomous Navigation in Unknown Environments (NSF CRII RI)

Applications for unmanned aerial and ground vehicles requiring autonomous navigation in unknown, cluttered, and dynamically changing environments are increasing in fields such as transportation, delivery, agriculture, environmental monitoring, and construction. To achieve safe, resilient, and self-improving autonomous navigation, this project focuses on the design of adaptive online environment understanding and Lyapunov-theoretic control techniques that guarantee stable and collision-free operation in challenging conditions. This research direction is important because current practices rely on prior or hand-crafted maps that attempt to capture the whole environment, even if parts are irrelevant for specific navigation tasks. This increases memory and computation requirements, spreads the effects of noise, and makes current approaches brittle, particularly in conditions involving dynamic obstacles, unreliable localization, or illumination variation.
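As a textbook illustration of Lyapunov-certified control (not the project's controller), consider a single-integrator robot with a quadratic Lyapunov function: the proportional law below makes the Lyapunov function strictly decrease along trajectories, certifying convergence to the goal.

```python
import numpy as np

def clf_step(x, goal, k=1.0, dt=0.05):
    """One step of a Lyapunov-certified controller for a single
    integrator x' = u, with V(x) = 0.5 * ||x - goal||^2.

    The control u = -k (x - goal) yields V' = -k ||x - goal||^2 < 0
    away from the goal, so V is a certificate of convergence.
    Returns the next state and the Lyapunov value at the current state.
    """
    e = x - goal
    u = -k * e                      # certified stabilizing control
    return x + dt * u, 0.5 * e @ e
```

In the project setting, the difficulty is that the dynamics and obstacles are not known in advance, so both the model and the safety constraints must be inferred online.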


Autonomous Exploration and Mapping using Information-theoretic Motion Planning (Brain Corp)

The goal of this project is to develop planning algorithms for autonomous exploration and mapping. This is an important problem for robots operating in unknown environments, with applications to floor cleaning and environmental monitoring. Our approach maintains an occupancy grid map of the environment and constructs a maximal-clearance graph from it. The graph is searched for a robot trajectory that maximizes the Cauchy-Schwarz quadratic mutual information between the map and future lidar scans of the environment. The use of an information measure to optimize the sensing trajectories leads to both faster exploration and higher-fidelity map reconstruction compared to methods that use simple geometric objectives such as visibility maximization or frontier-based exploration.
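The information-driven viewpoint selection can be illustrated with Shannon entropy reduction as a crude stand-in for the Cauchy-Schwarz quadratic mutual information: candidate views are scored by how much occupancy-grid uncertainty a scan is expected to remove. The function names and the simplistic post-observation confidence model are our own assumptions:

```python
import numpy as np

def cell_entropy(p):
    """Shannon entropy (bits) of occupancy probabilities p in (0, 1)."""
    return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

def expected_info_gain(p_map, visible_idx, p_after=0.95):
    """Expected entropy reduction from observing the cells visible_idx.

    Assumes a lidar scan pushes each observed cell's occupancy
    probability to p_after confidence (a crude stand-in for the exact
    mutual-information computation over the beam model).
    """
    p = p_map[visible_idx]
    post = np.full_like(p, p_after)
    return float((cell_entropy(p) - cell_entropy(post)).sum())
```

Unknown cells (probability near 0.5) carry one bit of entropy each, so trajectories that sweep unexplored regions score highest, which is exactly why information objectives out-explore purely geometric ones.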