Robots are increasingly being used to collaborate with humans, whether in the workplace, on the road, or at home. A large body of existing research has developed mechanical designs and control techniques to ensure that these robots are safe, so that, if the human and robot come into contact, the robot is compliant. But contact between the human and robot is not always something to be avoided or ignored; in fact, physical interactions can convey useful information about how the robot should behave. When the human pulls, pushes, twists, or adjusts the robot, they are using physical interactions to correct the robot. Learning from physical interaction enables the robot to interpret these corrections, and infer what the human actually wants the robot to do.
My research on learning from physical interaction focuses on determining the meaning behind the human’s physical correction, and then using that correction to update how the robot behaves in the future.
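As one way to picture this idea, here is a minimal sketch (not the actual implementation from my work): the robot's objective is represented as a weighted sum of trajectory features, and a physical correction is interpreted as evidence about those weights, shifting them toward the corrected trajectory. The feature functions and learning rate below are illustrative assumptions, not details from the papers.

```python
# Illustrative sketch: treat a physical correction as evidence about the
# human's objective, and update the robot's reward weights accordingly.
# The features and learning rate here are hypothetical choices.
import numpy as np

def features(trajectory):
    """Toy trajectory features: mean height and mean step length."""
    heights = trajectory[:, 1]
    steps = np.linalg.norm(np.diff(trajectory, axis=0), axis=1)
    return np.array([heights.mean(), steps.mean()])

def update_weights(theta, planned, corrected, learning_rate=0.5):
    """Shift reward weights toward the features of the corrected path."""
    return theta + learning_rate * (features(corrected) - features(planned))

# Example: mid-motion, the human physically pushes the robot's path lower.
planned = np.stack([np.linspace(0, 1, 5), np.full(5, 0.8)], axis=1)
corrected = planned.copy()
corrected[:, 1] = 0.4  # the correction lowers the end effector

theta = np.zeros(2)
theta = update_weights(theta, planned, corrected)
# theta[0] becomes negative: the correction suggests a preference
# for a lower path, which the robot can use in future planning.
```

In this toy version, only the height weight changes, since the correction altered height but not step length; the robot then replans with the updated objective rather than treating the push as a disturbance to reject.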
We hope that this research leads to more responsive robots that intuitively learn from the human!
Publications on Learning from Physical Interaction
- D. P. Losey and M. K. O’Malley, “Trajectory deformations from physical human-robot interaction,” IEEE Transactions on Robotics, (in press). PDF.
- A. Bajcsy*, D. P. Losey*, M. K. O’Malley, and A. D. Dragan, “Learning robot objectives from physical human interaction,” Proc. Conference on Robot Learning (CoRL), pp. 217-226, 2017. PDF.
Some of this work on learning occurred during my visit to the University of California, Berkeley, where I worked with Anca D. Dragan and Andrea Bajcsy. I’m excited to maintain this collaboration as we continue to explore physical human-robot interaction!