Robots can deliver food on a campus and hit a hole-in-one on a golf course, but even the most complex robot cannot perform the basic social interactions essential to human daily life.
MIT researchers have now integrated certain social interactions into a framework for robots, allowing machines to understand what it means to help or hinder each other, and learn to perform these social behaviors on their own. In a simulated environment, a robot watches its companion, guesses what task it wants to accomplish, and then helps or hinders that other robot based on its own goals.
The researchers also showed that their model creates realistic and predictable social interactions. When they showed humans videos of these simulated robots interacting with each other, human viewers mostly agreed with the model about what kind of social behavior was going on.
Allowing robots to demonstrate social skills may lead to smoother and more positive human-robot interactions. For example, a robot in an assisted living facility could use these capabilities to help create a more nurturing environment for the elderly. The new model could also allow scientists to quantitatively measure social interactions, which could help psychologists study autism or analyze the effects of antidepressants.
“Robots are going to live in our world soon enough, and they really need to learn how to communicate with us on human terms. They need to understand when it is time for them to help and when it is time for them to see what they can do to prevent something from happening. This is very early work and we are barely scratching the surface, but I feel like this is the first very serious attempt to understand what it means for humans and machines to interact socially,” says Boris Katz, principal investigator and head of the InfoLab Group at the Computer Science and Artificial Intelligence Laboratory (CSAIL) and a member of the Center for Brains, Minds, and Machines (CBMM).
Joining Katz on the paper are co-lead author Ravi Tejwani, a research assistant at CSAIL; co-lead author Yen-Ling Kuo, a CSAIL PhD student; Tianmin Shu, a postdoc in the Department of Brain and Cognitive Sciences; and senior author Andrei Barbu, a researcher at CSAIL and CBMM. The research will be presented at the Conference on Robot Learning in November.
To study social interactions, the researchers created a simulated environment in which robots pursue physical and social goals as they navigate a two-dimensional grid.
A physical goal relates to the environment. For example, a robot's physical goal might be to navigate to a tree at a certain point on the grid. A social goal involves guessing what another robot is trying to do and then acting on that guess, such as helping another robot water the tree.
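The kind of grid environment described above can be sketched in a few lines. This is an illustrative toy, not the authors' actual simulation; the tree's location, the move set, and the coordinate convention are all assumptions made for the example.

```python
# Minimal grid-world sketch: an agent pursues a physical goal
# (reaching the cell containing a tree). Everything here is a
# hypothetical illustration of the setup described in the article.

TREE = (3, 4)  # assumed grid cell of the tree (row, column)

def step(pos, move):
    """Apply one of four moves on the grid (bounds checks omitted for brevity)."""
    dr, dc = {"up": (-1, 0), "down": (1, 0), "left": (0, -1), "right": (0, 1)}[move]
    return (pos[0] + dr, pos[1] + dc)

def reached_goal(pos):
    """The physical goal is satisfied when the agent stands on the tree's cell."""
    return pos == TREE

pos = (3, 2)
for move in ["right", "right"]:
    pos = step(pos, move)
print(pos, reached_goal(pos))  # (3, 4) True
```

A second robot watching this trajectory could infer the first robot's goal from its moves, which is the starting point for the social goals discussed next.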
The researchers use their model to specify what a robot's physical goals are, what its social goals are, and how much weight it should place on one over the other. The robot is rewarded for actions that bring it closer to accomplishing its goals. If a robot is trying to help its companion, it adjusts its reward to match that of the other robot; if it is trying to hinder, it adjusts its reward to be the opposite. A planner, an algorithm that decides which actions the robot should take, uses this continually updated reward to guide the robot toward a blend of physical and social goals.
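One way to picture this reward adjustment is as a weighted mix of the robot's own goal progress and a social term that copies (helping) or negates (hindering) the estimated reward of the other robot. This is a hedged sketch of the idea in the paragraph above, not the authors' formulation; the function name, the linear mixing, and the weights are assumptions for illustration.

```python
# Hypothetical sketch: combining a physical reward with a social term.
# Helping adopts the other robot's estimated reward; hindering adopts
# its opposite. The linear blend and weight are illustrative choices.

def combined_reward(physical_reward: float,
                    estimated_other_reward: float,
                    social_weight: float,
                    helping: bool) -> float:
    """Blend own goal progress with a social term, weighted by social_weight."""
    social_term = estimated_other_reward if helping else -estimated_other_reward
    return (1.0 - social_weight) * physical_reward + social_weight * social_term

# A helper values its companion's progress; a hinderer penalizes it.
print(combined_reward(1.0, 2.0, 0.5, helping=True))   # 1.5
print(combined_reward(1.0, 2.0, 0.5, helping=False))  # -0.5
```

A planner that maximizes this combined quantity will trade off its own goal against helping or hindering, depending on the sign of the social term and the weight placed on it.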
“We have opened a new mathematical framework for how you model social interaction between two agents. If you are a robot, and you want to go to location X, and I am another robot and I see that you are trying to go to location X, I can cooperate by helping you get to location X faster. That might mean moving X closer to you, finding another better X, or taking whatever action you had to take at X. Our formulation allows the plan to discover the ‘how’; we specify the ‘what’ in terms of what social interactions mean mathematically,” says Tejwani.
Combining a robot's physical and social goals is important for creating realistic interactions, because humans who help each other have limits on how far they will go. For instance, a rational person likely wouldn't just hand a stranger their wallet, Barbu says.
The researchers used this mathematical framework to define three types of robots. A level 0 robot has only physical goals and cannot reason socially. A level 1 robot has physical and social goals, but assumes all other robots have only physical goals; level 1 robots can take actions based on the physical goals of other robots, such as helping and hindering. A level 2 robot assumes other robots have social and physical goals; these robots can take more sophisticated actions, such as joining in to help together.
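The recursive structure of these levels can be sketched directly: a level k agent models other agents as reasoning at level k−1, bottoming out at level 0, which carries no social model at all. The class and method names below are illustrative assumptions, not the authors' code.

```python
# Hypothetical sketch of the three reasoning levels described above.
# A level-0 agent models no one; a level-1 agent models others as
# level-0; a level-2 agent models others as level-1.

class Agent:
    def __init__(self, level: int):
        self.level = level

    def model_of_other(self):
        """What this agent assumes about another agent's reasoning depth."""
        if self.level == 0:
            return None                  # level 0: no social model at all
        return Agent(self.level - 1)     # level k models others at level k-1

a2 = Agent(2)
a1 = a2.model_of_other()   # a level-2 agent models a level-1 partner
a0 = a1.model_of_other()   # which in turn models a level-0 agent
print(a1.level, a0.level, a0.model_of_other())  # 1 0 None
```

This recursion is what lets a level 2 robot anticipate that its partner may itself be trying to help or hinder, enabling joint behaviors such as helping together.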
To see how their model compared with human perspectives on social interactions, the researchers created 98 different scenarios with robots at levels 0, 1, and 2. Twelve humans watched 196 video clips of the robots interacting, and were then asked to estimate the physical and social goals of those robots.
In most instances, their model agreed with what the humans thought about the social interactions occurring in each video.
“We have this long-term interest, both in building computational models for robots, but also in digging deeper into the human aspects of this. We want to find out what features from these videos humans are using to understand social interactions. Can we make an objective test for your ability to recognize social interactions? Maybe there is a way to teach people to recognize these social interactions and improve their abilities. We are a long way from this, but even just being able to measure social interactions effectively is a big step forward,” Barbu says.
Towards further development
The researchers are developing a system with 3D agents in an environment that allows many more types of interactions, such as the manipulation of household objects. They also plan to modify their model to include environments where actions can fail.
The researchers also want to incorporate a neural network-based robot planner into the model, which learns from experience and performs faster. Finally, they hope to run an experiment to collect data about the features humans use to determine whether two robots are engaging in a social interaction.
“My hope is that we will have a benchmark that allows all researchers to work on these social interactions, and inspires the kinds of science and engineering advances we’ve seen in other areas such as object and action recognition,” Barbu says.
This research was supported by the Center for Brains, Minds, and Machines; the National Science Foundation; the MIT CSAIL Systems that Learn Initiative; the MIT-IBM Watson AI Lab; the DARPA Artificial Social Intelligence for Successful Teams program; the U.S. Air Force Research Laboratory; the U.S. Air Force Artificial Intelligence Accelerator; and the Office of Naval Research.