Australians are suspicious of AI technology

Despite the substantial economic benefits AI (artificial intelligence) can deliver, Australian consumers remain sceptical about the ethical issues surrounding AI and machine learning, according to a new research report from the University of Queensland and KPMG. This lack of trust may affect uptake at a time when investment in these new technologies is likely to be critical to Australia’s future prosperity.

The Trust in Artificial Intelligence report is the first national survey to examine how much Australians understand about AI and how it affects their day-to-day lives. It set out to determine the extent to which Australians trust AI and support its use. The research found that more than half of Australians (61 per cent) know little about AI, and many are unaware that it is being used in everyday applications such as social media. While 42 per cent generally accept AI, only 16 per cent approve of it.

Other findings from the report include:

  • Only one in three Australians are willing to trust AI
  • 45% are unwilling to share their data with an AI system
  • 42% generally accept AI but only 25% approve or embrace it
  • 96% expect AI to be regulated with the majority expecting government oversight
  • Yet 86% want to know more about AI

“The benefits and promise of AI for society and business are undeniable,” said Professor Nicole Gillespie, KPMG Chair in Organisational Trust and Professor of Management at the University of Queensland Business School. “AI helps people make better predictions and informed decisions; it enables innovation and can deliver productivity gains, improve efficiency and lower costs. Through such measures as AI-driven fraud detection, it is helping protect physical and financial security – and facilitating the current global fight against COVID-19.”

For all its substantial benefits, AI also raises serious issues and challenges. These include the risk of codifying and reinforcing unfair biases, infringing on human rights such as privacy, the spread of fake online content, technological unemployment, and the dangers stemming from mass surveillance technologies, critical AI failures and autonomous weapons.

“It’s clear that these issues are causing public concern and raising questions about the trustworthiness and regulation of AI. Trust in AI systems is low in Australia, with only one in three Australians reporting that they are willing to trust AI systems. A little under half of the public (45 per cent) are unwilling to share their information or data with an AI system and two in five (40 per cent) are unwilling to trust the output of an AI system (e.g. a recommendation or decision).”

Being a force for good

According to Libby Dale, co-founder of AI solution developer SmartMeasures, to be a force for good rather than evil, companies need to address a range of issues to ensure they don’t breach the trust and confidence of consumers. She says, “To work effectively, AI typically needs access to lots and lots of data. It’s imperative that you are transparent and open about how you use the data you’re collecting, as well as highlighting what security protocols you have in place to ensure it’s protected.”

AI systems should be developed from the ground up with a strong awareness of the ethical implications and issues involved. Dale says, “The ethical use of AI centres around the goals you set for the applications being developed. If your intentions are good and they are explained and well communicated to the consumer, then their trust will be earned”.

According to the report, there are four key drivers that influence Australians’ trust in AI systems:

  1. Adequate regulation – beliefs about the adequacy of current regulations and laws to make AI use safe.
  2. Impact on society – the perceived uncertain impact of AI on society.
  3. Impact on jobs – the perceived impact of AI on jobs.
  4. Understanding of AI – how familiar people are with AI and how well they understand it.

“Of these drivers, the perceived adequacy of current regulations and laws is clearly the strongest,” said Professor Gillespie. “This demonstrates the importance of developing adequate regulatory and legal mechanisms that people believe will protect them from the risks associated with AI use. Our findings suggest this is central to shoring up trust in AI.”

Mark Atterby

Mark Atterby has 18 years’ experience in media, publishing and content marketing.
