Dr. Sabiha Alam Choudhury is currently working as the Head of the Department of Psychology and Counselling at the School of Humanities and Social Sciences, Assam Don Bosco University, Tapesia, India.

Her research areas are Positive Psychology, Counselling & Psychotherapy, and Marriage and Family Counselling.

Email: sabiha.choudhury[at]dbuniversity.ac.in, sabihachoudhury9[at]gmail.com


We’re More Willing To Use Deceptive Tactics When A Bot Does The Negotiating

By Emma Young

Artificial intelligence agents play ever more influential roles in our lives. As the authors of a new paper, published in the Journal of Artificial Intelligence Research, point out, they do everything from suggesting new friends and connections to recommending purchases and filtering the news that reaches us. They are even beginning to drive our cars. Another role that they are tipped to take over is negotiating on our behalf to sell a car, say, or resolve a legal dispute.

So, reasoned Jonathan Mell and colleagues at the Institute for Creative Technologies at the University of Southern California, it’s important to know whether using a bot might affect how we negotiate — and it turns out that it does. One of the most striking findings from the team’s series of studies is that less experienced negotiators are more willing to be deceitful if they assign an AI agent to do their dirty work for them. The studies also illuminate how our stance on various negotiating tactics alters through experience — information that would be needed to program negotiating bots to accurately represent us.

In the first online study, 741 participants were asked about their experience in negotiating for good deals or prices. Then they were told to imagine that they were negotiating for something important to them, such as a house or a car. Next, they were told either that they would negotiate for themselves or that they would program a bot to do the job for them. They all then completed the team’s new Agent Negotiation Tactics Inventory (ANTI), indicating just how tough, deceptive and pleasant (or otherwise) they wanted to be (in the case of the first group) or wanted their bot to be (in the second).

For example, they could endorse making an opening demand far greater than what they’d be willing to settle for (a “tough” stance) and/or convey a “positive disposition” or express sympathy with the opponent’s plight (and so come across as pleasant). But they could also indicate that they — or their agent — would “strategically express anger toward the opponent to extract concessions” or “convey dissatisfaction with the encounter so that the other party will think he/she is losing interest”. These are both examples of deceptive strategies. The team found that participants who thought about programming the agent to negotiate on their behalf endorsed deceptive tactics more than those thinking about negotiating for themselves.

The second online study found that more negative previous negotiation experiences (arguing over a deal, for example) were linked to a greater willingness to use deceptive tactics across the board. For the final study, again run online, 190 participants completed the ANTI and then engaged in a 10-minute negotiation using an online platform in which “players” must negotiate (using pre-written phrases, questions, statements and also emotion buttons) to split a number of items between them. Each player is told how much each item is worth to them, but they don’t know how much it’s worth to their opponent. In this study, each participant in fact went up against a bot, rather than a human player. The bots had one of four different negotiating profiles: nice plus competitive, nice plus consensus-building, nasty plus competitive and nasty plus consensus-building. None, however, used any deceptive tactics.

At the end of the negotiation, the participants filled out the ANTI again, to indicate how they would behave next time. The team found that whether the agent was nice or nasty didn’t change the ANTI ratings. However, while interacting with a competitive “tough” agent increased endorsement of deceptive tactics, interacting with a “fair” consensus-building agent reduced intentions to be deceptive. Though the agents had not used deception, even brief experience with a hard-ball negotiator bot made the participants more willing to be underhand. As the researchers write, even if participants are initially keen for their representative to negotiate fairly, “exposure to the real world of aggressive, tough negotiators is enough to make them forsake their qualms and embrace deception”.

When designing future agents to accurately represent us in real-world negotiations, these findings should, then, be taken into account, the researchers say — the work suggests that bots should be programmed to become more deceptive in response to a tough negotiation. As the team writes, “Shying away from this during agent design could lead to unsatisfied users and would have severe implications for the adoption of future agent representative systems.”

The Effects of Experience on Deception in Human-Agent Negotiation

Emma Young (@EmmaELYoung) is a staff writer at BPS Research Digest



Credit: BPS Research Digest. Published by: Dr. Sabiha, www.drsabiha.blogspot.com