Australians lack trust in artificial intelligence: research

Wednesday, 16 June, 2021

Trust is an issue when it comes to artificial intelligence (AI), according to a University of Queensland study that found 72% of people don’t trust the technology, with Australians the most distrustful of all.

Trust experts from the UQ Business School, Professor Nicole Gillespie, Dr Steve Lockey and Dr Caitlin Curtis, led the study in partnership with KPMG, surveying more than 6000 people in Australia, the US, Canada, Germany and the UK to unearth attitudes about AI.

Professor Gillespie said trust in AI was low across the five countries, with Australians particularly concerned about its effect on employment.

“Australians are especially mistrusting of AI when it comes to its impact on jobs, with 61% believing AI will eliminate more jobs than it creates, versus 47% overall,” she said.

The research identified critical areas needed to build trust and acceptance of AI, including strengthening current regulations and laws, increasing understanding of AI, and embedding the principles of trustworthy AI in practice. The survey also revealed that people believe most organisations use AI for financial reasons — to cut labour costs rather than to benefit society.

It found that while people are comfortable with AI for task automation, only one in five believe it will create more jobs than it eliminates.

One positive finding was that people have more confidence in universities and research institutions to develop, use and govern AI in the public’s best interests.

Professor Gillespie said the research showed that distrust came from low awareness and understanding of when and how AI technology was used across all five countries.

“For example, our study found while 76% of people report using social media, 59% were unaware that social media uses AI,” she said.

Professor Gillespie said despite the gap in understanding, 95% of those surveyed across all countries expected organisations to uphold ethical principles of AI.

“For people to embrace AI more openly, organisations must build trust with ethical AI practices, including increased data privacy, human oversight, transparency, fairness and accountability,” she said. “Putting in place mechanisms that reassure the community that AI is being developed and used responsibly, such as AI ethical review boards and openly discussing how AI technologies impact the community, is vital in building trust.”

Professor Gillespie is the KPMG Chair of Organisational Trust and is currently integrating the study’s findings on building trustworthy AI into the new UQ Master of Business Analytics program. The full research report is available online.

Image: Professor Nicole Gillespie presenting at UQ's Trust, Ethics and Governance Alliance Symposium (2020).
