One of the main research themes in the IS&UE RCE is the exploration and development of innovative user interfaces for robotic control. Our focus lies in how we can leverage 3D spatial user interface technologies for tele-operation as well as guidance of semi-autonomous robots, including unmanned aerial vehicles (UAVs), humanoid robots, and robotic arms. We aim to support interaction with both single robots and robot teams.
We are interested in how to map the human body to interaction strategies that support control of a variety of robotic platforms. In this project, we focus on how full-body interaction can be used to control different types of robots, exploring both tele-operation and semi-autonomous robot guidance.
We are exploring upper-body 3D spatial interaction metaphors for control of and communication with unmanned aerial vehicles (UAVs) such as the Parrot AR Drone. We have designed and implemented five interaction techniques using the Microsoft Kinect, based on metaphors inspired by UAVs, to support the variety of flying operations a UAV can perform. Techniques include a first-person metaphor, where the user takes a pose like a winged aircraft; a game controller metaphor, where the user's hands mimic the control movements of console joysticks; "proxy" manipulation, where the user imagines manipulating the UAV as if it were in their grasp; and a pointing metaphor, in which the user assumes the identity of a monarch and commands the UAV as such.
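To make the mapping concrete, here is a minimal sketch of how a "proxy"-style metaphor might translate tracked hand position into a UAV velocity command. The joint source, dead zone, gain, and clamp values are all illustrative assumptions, not the published techniques.

```python
# Hypothetical proxy-manipulation mapping: the offset of the user's hand
# from a calibrated neutral point (e.g. taken from a Kinect skeleton frame)
# becomes a per-axis velocity command for the UAV.

from dataclasses import dataclass

@dataclass
class VelocityCommand:
    vx: float  # forward/back (m/s)
    vy: float  # left/right (m/s)
    vz: float  # up/down (m/s)

DEAD_ZONE = 0.05   # metres of hand motion ignored as tracking noise
GAIN = 2.0         # metres of offset mapped to m/s of commanded velocity
MAX_SPEED = 1.0    # per-axis clamp on commanded speed

def _axis(offset: float) -> float:
    """Map one axis of hand offset to a clamped velocity."""
    if abs(offset) < DEAD_ZONE:
        return 0.0
    sign = 1.0 if offset > 0 else -1.0
    v = GAIN * (offset - sign * DEAD_ZONE)
    return max(-MAX_SPEED, min(MAX_SPEED, v))

def proxy_command(hand, neutral) -> VelocityCommand:
    """hand and neutral are (x, y, z) positions in metres."""
    dx, dy, dz = (h - n for h, n in zip(hand, neutral))
    return VelocityCommand(vx=_axis(dz), vy=_axis(dx), vz=_axis(dy))
```

A dead zone around the neutral pose is the key usability choice here: without it, small tracking jitter would translate directly into drift of the vehicle.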
We present a modeling approach for developing an agent that discreetly assists users in teleoperation when avateering a robot. Avateering is a powerful metaphor and can be an effective teleoperation strategy. However, it is difficult to interact with remote objects with high accuracy due to factors such as a limited viewpoint and the lack of informative tactile feedback. Our research explores the addition of an assistive agent that arbitrates user input without disrupting the overall experience and expectation. Additionally, our agent helps maintain a higher level of accuracy for interaction tasks, in our case a grasping and lifting scenario. Using the Webots robot simulator, we implemented four assistive agents to augment the user in avateering the Darwin-OP robot. The agent iterations are described, and results of a user study are presented. We discuss user perception of the avateering metaphor both when enhanced by the agent and when unassisted, including perceived ease of the task, responsiveness of the robot, and accuracy.
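One way to picture the arbitration idea is as a blend between the user's commanded gripper position and a correction toward the object, with the agent's influence growing as the gripper nears the target. This is only a sketch of the concept; the blending rule, radius, and weights below are assumptions, not the four agents from the study.

```python
# Illustrative input arbitration: far from the target the user's command
# passes through untouched, so the avateering experience is undisturbed;
# inside an assist radius, the agent nudges the command toward the object.

import math

ASSIST_RADIUS = 0.15  # metres: the agent only intervenes inside this range

def agent_weight(dist: float) -> float:
    """0 far from the target, rising linearly toward 1 as the gripper closes in."""
    if dist >= ASSIST_RADIUS:
        return 0.0
    return 1.0 - dist / ASSIST_RADIUS

def arbitrate(user_cmd, target):
    """Blend a user-commanded gripper position (x, y, z) with the target."""
    dist = math.dist(user_cmd, target)
    w = agent_weight(dist)
    return tuple((1 - w) * u + w * t for u, t in zip(user_cmd, target))
```

Because the weight is zero outside the assist radius, the agent is invisible during gross arm motion and only arbitrates during the fine positioning phase where viewpoint and missing tactile feedback hurt accuracy most.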
We are exploring how sketch-based interfaces can be used to assist robots through geometry extraction, behavior modification, and guided autonomy.
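As an illustration of one common geometry-extraction step, a raw pen stroke is often simplified before shapes are fitted to it. The sketch below uses the standard Ramer-Douglas-Peucker algorithm for this; the tolerance value is an assumption, and the lab's actual pipeline may differ.

```python
# Ramer-Douglas-Peucker stroke simplification: recursively keep only the
# points that deviate from the chord of the stroke by more than a tolerance.

import math

def _point_line_dist(p, a, b):
    """Perpendicular distance from point p to the line through a and b."""
    if a == b:
        return math.dist(p, a)
    (ax, ay), (bx, by), (px, py) = a, b, p
    num = abs((by - ay) * px - (bx - ax) * py + bx * ay - by * ax)
    return num / math.dist(a, b)

def rdp(points, tol=2.0):
    """Simplify a polyline of (x, y) points, preserving corners above tol."""
    if len(points) < 3:
        return list(points)
    # Find the point farthest from the line joining the endpoints.
    idx, dmax = 0, 0.0
    for i in range(1, len(points) - 1):
        d = _point_line_dist(points[i], points[0], points[-1])
        if d > dmax:
            idx, dmax = i, d
    if dmax <= tol:
        return [points[0], points[-1]]
    left = rdp(points[:idx + 1], tol)
    return left[:-1] + rdp(points[idx:], tol)
```

After simplification, the surviving vertices approximate the corners of the intended shape, which makes downstream steps such as primitive fitting or behavior annotation far more tractable than working with hundreds of raw samples.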