This work is motivated by the desire to learn robot policies directly from motor-impaired users of assistive devices. The motor-impaired human is precisely who the autonomous agent should learn from, especially when the human and autonomy are co-located and there is an opportunity to learn directly from the human during daily use.
Rather than treating these constraints as limitations, we explore how the human signal's unique characteristics can be leveraged to improve machine learning algorithms. Our contributions include algorithmic approaches specifically designed for motor-impaired users and evaluations of these methods in real-world settings.
As a foundation, we conducted large-scale studies of teleoperation using diverse interfaces, creating an open-source interface assessment package to quantify interface usage. Studies with participants with and without spinal cord injury revealed significant differences in performance across interfaces, as well as differences in their impact on cognitive load, measured via heart rate variability.
Building on this, we now explicitly model interface activation to account for discrepancies between intended and measured commands. Our follow-on work uses transfer learning to first develop expert models of interface use and then adapt them to individual end-users, advancing robot learning from motor-impaired teachers.