If we’ve learned one thing from some of our favorite YouTube robots, it’s that human-robot interaction can be a tricky business. Developing ways to get rigid robotic arms to perform delicate tasks around soft human bodies is easier said than done.
This week, a team from MIT’s CSAIL is showing off its work using robotic arms to help people get dressed. The promise of such technology is clear: helping people with mobility issues perform tasks that many of us take for granted.
Among the biggest hurdles is creating algorithms that can efficiently navigate around the human form without hurting the person the robot is trying to help. Preprogrammed routines can run into all kinds of variables, including body size and unexpected human reactions. Overreacting to those variables, on the other hand, can effectively freeze the robot, which becomes unsure of the best route to take.
So, the team set out to develop a system that could adapt to different scenarios and learn as it progressed.
“To provide a theoretical guarantee of human safety, the team’s algorithm reasons about the uncertainty in the human model. Instead of having a single, default model where the robot perceives only one possible response, the team designed the machine to understand many possible models, to more closely mimic how a human can understand other humans,” MIT writes in a blog post. “As the robot collects more data, it will reduce uncertainty and refine those models.”
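The core idea of keeping many possible human models and narrowing them down with data can be illustrated with a small Bayesian-update sketch. This is not the team’s actual algorithm; the model names, probabilities, and observations below are all illustrative assumptions. Each candidate model predicts how likely the person is to move during dressing, and the robot’s belief over those models sharpens as it observes more reactions:

```python
import math

# Illustrative candidate models of how the person might react while being
# dressed: each gives the assumed probability that the person moves their arm
# on a given timestep. These names and numbers are hypothetical.
models = {"stays_still": 0.1, "moves_slightly": 0.5, "moves_a_lot": 0.9}

# Start with a uniform belief: all models equally plausible.
belief = {name: 1.0 / len(models) for name in models}

def update(belief, moved):
    """Bayesian update of the belief after observing whether the person moved."""
    posterior = {}
    for name, p_move in models.items():
        likelihood = p_move if moved else 1.0 - p_move
        posterior[name] = belief[name] * likelihood
    total = sum(posterior.values())
    return {name: p / total for name, p in posterior.items()}

def entropy(belief):
    """Uncertainty over the models, in bits; lower means more certain."""
    return -sum(p * math.log2(p) for p in belief.values() if p > 0)

# Simulated observations: the person moves on most timesteps, but not all,
# which fits the "moves_slightly" model best.
observations = [True, False, True, True, False, True]
uncertainty_before = entropy(belief)
for moved in observations:
    belief = update(belief, moved)
uncertainty_after = entropy(belief)
```

After the six observations, the belief concentrates on the middle model and the entropy drops, mirroring the quoted claim that more data reduces uncertainty and refines the models.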
The team says it will also research how human subjects react to these types of tasks.