How far would you trust AI to make important decisions?
18 February 2025
From tailored Netflix recommendations to personalised Facebook feeds, artificial intelligence (AI) adeptly serves content that matches our preferences and past behaviours. But while a restaurant tip or two is handy, how comfortable would you be if AI algorithms were in charge of choosing your medical specialist or your new hire?
Now, a new study from the University of South Australia shows that most people are more likely to trust AI in low-stakes situations, such as music suggestions, but less likely to trust it in high-stakes situations, such as medical decisions.
However, those with poor statistical literacy or little familiarity with AI were just as likely to trust algorithms for trivial choices as they were for critical decisions.
Assessing responses from nearly 2000 participants across 20 countries, researchers found that statistical literacy shapes trust differently depending on the stakes. People who understand that AI algorithms make pattern-based predictions (and also carry risks and biases) were more sceptical of AI in high-stakes situations, but less so in low-stakes situations.
They also found that older people and men were generally more wary of algorithms, as were people in highly industrialised nations such as Japan, the US, and the UK.
Understanding how and when people trust AI algorithms is essential, particularly as society continues to introduce and adopt machine-learning technologies.
AI adoption rates have increased dramatically, with 72% of organisations now using AI in their businesses.
Lead author and human and artificial cognition expert Dr Fernando Marmolejo-Ramos says the speed at which smart technologies are being used to outsource decisions is outpacing our understanding of how to integrate them successfully into society.
“Algorithms are becoming increasingly influential in our lives, impacting everything from minor choices about music or food, to major decisions about finances, healthcare, and even justice,” Dr Marmolejo-Ramos says.
“But the use of algorithms to help make decisions implies that there should be some confidence in their reliability. That’s why it’s so important to understand what influences people’s trust in algorithmic decision-making.
“Our research found that in low-stakes scenarios, such as restaurant recommendations or music selection, people with higher levels of statistical literacy were more likely to trust algorithms.
“Yet, when the stakes were high, for things like health or employment, the opposite was true; those with better statistical understanding were less likely to place their faith in algorithms.”
UniSA’s Dr Florence Gabriel says there should be a concentrated effort to promote statistical and AI literacy among the general population so that people can better judge when to trust algorithmic decisions.
“An AI-generated algorithm is only as good as the data and coding that it’s based on,” Dr Gabriel says.
“We only need to look at the recent banning of DeepSeek to grasp how algorithms can produce biased or risky outputs depending on the content they were built upon.
“On the flip side, when an algorithm has been developed through a trusted and transparent source, such as the custom-built EdChat chatbot for South Australian schools, it’s more easily trusted.
“Learning these distinctions is important. People need to know more about how algorithms work, and we need to find ways to deliver this in clear, simple ways that are relevant to the user’s needs and concerns.
“People care about what the algorithm does and how it affects them. We need clear, jargon-free explanations that align with the user’s concerns and context. That way we can help people to responsibly engage with AI.”
Contacts for interview: Dr Florence Gabriel E: Florence.Gabriel@unisa.edu.au
Dr Fernando Marmolejo-Ramos (now at Flinders University) E: fernando.marmolejoramos@flinders.edu.au
Media contact: Annabel Mansfield M: +61 479 182 489 E: Annabel.Mansfield@unisa.edu.au