This project aimed to provide human-centric control of mobile, expressive robots to ease human-robot collaboration.
The robot was given the ability to associate locations with their names (kitchen, hallway, bedroom, etc.) and people with their names.
The operator controls the robot through a simple overhead map with “point-and-go” navigation, and through voice commands that mark people or places, for example: “This is Jason”, “You are in the kitchen”, or “That is Angela”. The operator can also tell the robot to move to locations or people that have already been marked, for example: “Go to Angela” or “Go to the kitchen”.
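The voice commands above can be sketched as a small command grammar that maps an utterance to either a "mark" or a "go to" action. This is a minimal illustrative sketch, not the project's actual implementation; the patterns and function names are assumptions.

```python
import re

# Hypothetical command grammar for the marking and navigation phrases
# described above. Real speech input would first pass through a
# speech-to-text stage; here we parse plain strings.
MARK_PATTERNS = [
    re.compile(r"^this is (?P<name>\w+)$", re.IGNORECASE),         # mark a nearby person
    re.compile(r"^that is (?P<name>\w+)$", re.IGNORECASE),         # mark a pointed-at person
    re.compile(r"^you are in the (?P<name>\w+)$", re.IGNORECASE),  # mark the current room
]
GO_PATTERN = re.compile(r"^go to (?:the )?(?P<name>\w+)$", re.IGNORECASE)

def parse_command(utterance):
    """Map an utterance to an (action, name) pair, or None if unrecognized."""
    text = utterance.strip().rstrip(".!?")
    for pattern in MARK_PATTERNS:
        match = pattern.match(text)
        if match:
            return ("mark", match.group("name"))
    match = GO_PATTERN.match(text)
    if match:
        return ("goto", match.group("name"))
    return None
```

With this sketch, “This is Jason” parses to a mark action for “Jason”, while “Go to the kitchen” parses to a navigation action toward the marked location “kitchen”.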
Check out the video for more details.