Ensuring you’re well-informed before making a choice is, on the whole, a sensible thing to do. This is especially true of big decisions — just pretending you’ve read the terms and conditions of a new website might be okay, but we’re unlikely to be so lax about our health or finances.
But could too much information lead us to make worse, not better, decisions? A study published in Cognitive Research: Principles and Implications suggests that sometimes that might be the case.
Min Zheng and colleagues were motivated by advances in machine learning that have increased the information available to feed into decision-making, particularly when it comes to models of cause and effect: these algorithms can reveal which outcomes are likely to follow from particular actions or inputs.
But when it comes to human decision-making, it’s unclear how useful such information is. Simply presenting causal information to people may not actually help them make decisions: as senior author Samantha Kleinberg says, “being accurate is not enough for information to be useful”. In particular, we don’t know how this information might interact with the knowledge and beliefs people already have.
An initial experiment tested how causal information influences the kinds of decisions we make in everyday life. The team presented 1,718 participants with a real-world scenario and asked them to give advice on how they would approach it. Participants were told that Jane, a university fresher, wanted to avoid putting on weight but continue to have fun with her friends, and were asked what one thing she should do to achieve her goal: go for a half-hour walk every weekend, maintain a healthy diet, avoid seeing her friends, or watch less TV.
A third of the participants were given no extra information to help them answer the question, another third were given a text-based causal explanation of the impact of diet and regular exercise on weight loss, and the final third were given a simple causal diagram. Those in the latter two categories were also asked whether the causal explanations had been helpful or affected their answer.
Those in the no-information condition were highly accurate: 88.8% picked the correct answer (i.e. that Jane should maintain a healthy diet). But those in the two conditions provided with causal information performed worse: 82.7% got the correct answer with the text-based explanation, and 80.1% with the diagram.
The second experiment followed a similar method, except in this case the scenario was one that only some participants would have experience of dealing with: the management of type 2 diabetes. Participants were asked whether it would be best for someone with diabetes to manage their condition by behaving as normal, by walking more, or by choosing a healthier dinner and going cycling. They were either given no information or a diagram showing the impact of exercise and healthy eating.
When no causal information was given, participants with and without type 2 diabetes chose the right answer (i.e. choosing a healthy meal and going for a bike ride) at about the same rate. Additional causal information was useful for those without personal experience of diabetes, increasing accuracy to 86.6% — but when the type 2 diabetes group were given causal information, accuracy dropped to just 50%.
This, along with the results from the first experiment, suggests that causal diagrams themselves are not inherently useless: rather, their ability to improve our decision-making depends on our personal experience. When a situation is new to us, we can benefit from causal information, but when we are already familiar with it, this information can actually lead us to make worse decisions.
The team thinks this has something to do with our existing mental models. We don't take causal information at face value; instead, our existing beliefs and experience shape how we use it when deciding. This can sometimes mean we don't make the right choice: with extra information we can lose confidence and start second-guessing ourselves.
A final experiment confirmed these findings: when participants had personal experience with a topic, causal information reduced their accuracy, but in made-up scenarios, such as a question on alien mind reading, participants were able to use causal information to make accurate choices. “In situations where people do not have background knowledge, they become more confident with the new information and make better decisions,” says Kleinberg.
That doesn’t mean that information uncovered by machine learning isn’t valuable — quite the contrary. But understanding what people already believe, and tailoring information to that, may be the best way to communicate it.
– How causal information affects decisions
Emily Reynolds (@rey_z) is a staff writer at BPS Research Digest