Our research branches into two complementary directions. Co-Adaptive Human–Robot Collaboration enables humans and robots to learn and act together through shared feedback, forming tightly coupled systems that build trust and improve performance over time. In parallel, Structured Learning for Reliable Robot Control develops learning-based controllers guided by established principles such as stability, robustness, and optimality, ensuring safe and predictable operation even in dynamic, unstructured environments. Together, these directions aim to create robotic systems that are both adaptable and dependable in real-world applications.
This research explores a collaborative approach in which humans and robots learn together through shared feedback. In many human-in-the-loop systems, the human is treated as an expert whose behavior the robot should imitate. But what if neither agent initially knows how to solve the task? In this framework, the human adapts in real time to feedback derived from performance metrics the robot measures, while the robot simultaneously learns from the combined human–robot behavior. As performance improves, the human naturally reduces their level of intervention, gaining confidence as the robot continues to perform well with less input. This creates a gradual, trust-grounded shift from active guidance to autonomous execution. Notably, this co-adaptive process can converge 3× to 10× faster than fully autonomous robot learning, even when the human is not an expert in the task.
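To make the loop concrete, below is a minimal, self-contained sketch of what one such co-adaptation cycle could look like on a toy 1-D reaching task. The shared-control blend, the learning rule, and the intervention-decay heuristic are illustrative assumptions, not the specific algorithm used in this work; the human is crudely modeled as a noisy proportional controller standing in for real-time human input.

```python
import numpy as np

rng = np.random.default_rng(0)

K_TRUE = 2.0     # a "good" feedback gain that neither agent knows in advance
w_robot = 0.0    # robot's learned gain, starts from scratch
alpha = 1.0      # human intervention level (1 = full guidance, 0 = autonomous)

for episode in range(60):
    err = rng.normal(0.0, 1.0)   # initial position error
    cost = 0.0
    for _ in range(25):
        a_robot = -w_robot * err
        a_human = -K_TRUE * err + rng.normal(0.0, 0.3)   # imperfect human input
        a = alpha * a_human + (1.0 - alpha) * a_robot     # shared-control blend
        # The robot fits its gain to the *combined* behavior (a ~ -w * err),
        # not to the human alone.
        w_robot -= 0.05 * (a + w_robot * err) * err
        err += 0.1 * a            # simple first-order dynamics
        cost += err ** 2
    # Performance feedback: as the closed-loop cost drops, the human scales
    # back their intervention, handing control over to the robot.
    alpha = min(1.0, cost / 25.0)
```

Running this, alpha shrinks as the closed-loop cost falls and w_robot settles near a stabilizing gain, mirroring the gradual handover from active guidance to autonomous execution described above.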
This research direction embeds well-established principles, such as Lyapunov-based stability, robustness to disturbances, and optimality, directly into learning-based robot controllers. Rather than relying on unconstrained learning, this approach shapes the controller with known structure and prior knowledge, so that critical properties like safety, stability, and performance bounds can be explicitly enforced or continuously monitored. This “gray-box” paradigm guides the learning process instead of forcing it to rediscover fundamental behaviors from scratch. The approach is particularly suited to challenging robotic tasks in dynamically changing, unstructured environments, where the system must adapt in real time to uncertainty, variability, and unforeseen conditions. By maintaining stability and performance guarantees during adaptation, the robot can operate safely while still improving its behavior online, enabling reliable deployment in complex, real-world scenarios.
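As one concrete illustration of the idea, a learned policy can be wrapped in a Lyapunov-style safety filter that only passes actions achieving a margin of Lyapunov decrease. The sketch below uses a toy discrete-time linear system; the matrices A, B, P, the decrease margin, and the grid search are assumptions chosen for illustration, not a controller from this work.

```python
import numpy as np

# Toy discrete-time linear system: x_next = A @ x + B * u
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([0.005, 0.1])
P = np.array([[2.0, 0.5],
              [0.5, 1.0]])   # assumed Lyapunov matrix: V(x) = x' P x

def V(x):
    return x @ P @ x

def safe_action(x, u_learned, u_grid=np.linspace(-2.0, 2.0, 81)):
    """Pass the learned action through if it achieves a margin of
    Lyapunov decrease; otherwise substitute the admissible action
    closest to it (a simple 'guided learning' safety filter)."""
    def decreases(u):
        return V(A @ x + B * u) <= 0.99 * V(x)
    if decreases(u_learned):
        return u_learned
    admissible = [u for u in u_grid if decreases(u)]
    if admissible:
        return min(admissible, key=lambda u: abs(u - u_learned))
    # No gridded action meets the margin: fall back to the action
    # that decreases V the most.
    return min(u_grid, key=lambda u: V(A @ x + B * u))

# Usage: u_nn stands in for the raw output of a learned policy.
x = np.array([1.0, 0.0])
u_nn = 1.5
u = safe_action(x, u_nn)   # here u_nn increases V, so it gets corrected
```

In practice the grid search would typically be replaced by an analytic projection or a quadratic program, but the structure stays the same: learning proposes an action, and the Lyapunov condition accepts or corrects it.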