Dr. Sabiha Alam Choudhury is Head of the Department of Psychology and Counselling at the School of Humanities and Social Sciences, Assam Don Bosco University, Tapesia, India.

Her research areas are Positive Psychology, Counselling & Psychotherapy, and Marriage and Family Counselling.

Email: sabiha.choudhury[at]dbuniversity.ac.in, sabihachoudhury9[at]gmail.com


What Makes A Good Robot Teacher?

By Emma Young

For young children, the biggest source of knowledge about the world is other people. Unsurprisingly, then, they have all kinds of strategies for deciding whom to trust — only learning from confident adults if that confidence has proven to be justified, for example, and showing more trust in claims made by nice people.

However, our world is increasingly filled with technological devices designed to impart information, and educational robots are being used to teach children languages, maths, science, and more. As Kimberly Brink and Henry Wellman at the University of Michigan write in a new paper, published in Developmental Psychology: “The rapid expansion of robot ‘instructors’ makes it increasingly important to investigate whether and when robots effectively transmit information to children.”

The pair report on just such an investigation with pre-schoolers, and the results are fascinating: to learn well from a robot, it seems, a child has to believe that the robot has some key human-like attributes. Without these, even a device that reliably gives accurate information is not trusted.

For the first of two studies, Brink and Wellman recruited 60 three-year-olds. The children watched videos featuring two small humanoid ‘Nao’ robots and a woman who sat between them and interacted with them. These robots have limbs, moveable heads, eyes and a nose. In this experiment, they differed physically only in colour — one was purple, and the other orange.

In the first video, one robot correctly answered the woman when she asked about the names of some familiar objects, including a hairbrush and a teddy bear. The other robot got all the names wrong. (The colour of the accurate vs inaccurate robot was varied across the trials.)

Next came a video featuring images of unfamiliar objects (including blobby plastic objects). A researcher asked the child which robot they would ask for help to learn the names of these objects, and their answers were noted. Each of the robots then provided a made-up answer (one said “gobi”, for example, while the other said “danu”), and the child indicated which name they thought was right.

The children were then asked about the extent to which they thought these robots had psychological agency — whether, and how much, they could choose to move and think for themselves, for example. They were also asked about perceptual experiences — how much they thought the robots could feel pain, for instance, or feel scared.

Brink and Wellman found that children did indeed prefer to learn from the robot that had shown itself to be accurate, over the inaccurate robot. And, interestingly, children who more strongly attributed the ability to think and make decisions to these robots were more likely to favour the accurate robot. “These findings support the hypothesis that children are increasingly likely to treat social robots similarly to human teachers and monitor the robot’s accuracy when those robots engender perceptions of agency,” the researchers write.

This conclusion was supported by the findings of the second study, which involved a fresh group of 47 three-year-olds. The study was the same as the first, except that instead of the two robots, the children saw two faceless blobs, described to them as “machines” (they looked a bit like plastic sea urchins), which did not interact with the woman. Instead, a hand came into view to pull a string to “activate their voice” (in fact, the same robotic voice as in the first study).

This time, the results were very different. While the children recognised that one machine was accurate in its initial responses, and the other was not, they did not consistently go on to trust the accurate one. And this time, whether a child inferred more or less agency made no difference to which machine they plumped for.

The findings are interesting for all sorts of reasons. First, it seems that young children do trust — and mistrust — humanoid robots in the same way as they do people. But clearly, their opinions about the mental capacities of the information-giver matter here. “At the least, psychological agency seems to be a critical factor in determining whether young children trust the testimony of social robots and may well be important more generally,” the researchers observe.

Educational devices range from humanoid robots to disembodied voices — like Apple’s Siri. More work is now needed to establish just how much — or little — psychological agency has to be inferred by a young child for it to trust a machine in the way it does a human informant. It may well also be the case that older children react differently, of course. Only further work will tell.

Still, as the researchers write, the findings certainly do imply that when designing educational robots for young children, designers should bear in mind the impact of cues that suggest psychological agency. For this age group at least, evidence of accuracy alone is not enough.

Robot teachers for children? Young children trust robots depending on their perceived accuracy and agency.

Emma Young (@EmmaELYoung) is a staff writer at BPS Research Digest



Credit: BPS Research Digest. Published by Dr. Sabiha: www.drsabiha.blogspot.com