A large study of "Trust, attitudes and use of artificial intelligence" completed by KPMG and MBS. Apparently people like AI. They trust it. They believe it will bring great benefits. They use it in their work, some apparently don't believe they can do their work without AI anymore. Also they don't bother to check if the AI is correct or not in its output. All good. Trust friend computer!
Led by the University of Melbourne in collaboration with KPMG, Trust, attitudes and use of Artificial Intelligence: A global study 2025, surveyed more than 48,000 people across 47 countries to explore the impact AI is having on individuals and organizations. It is one of the most wide-ranging global studies into the public's trust, use, and attitudes towards AI to date.
• 66% of people use AI regularly, and 83% believe the use of AI will result in a wide range of benefits.
• Yet, trust remains a critical challenge: only 46% of people globally are willing to trust AI systems.
• There is a public mandate for national and international AI regulation with 70% believing regulation is needed.
• Many rely on AI output without evaluating accuracy (66%) and are making mistakes in their work due to AI (56%).
However, the use of AI at work is also creating complex risks for organisations. Almost half of employees admit to using AI in ways that contravene company policies, including uploading sensitive company information into free public AI tools like ChatGPT.
What makes these risks challenging to manage is over half (57%) of employees say they hide their use of AI and present AI-generated work as their own.
AI [increases] the security risk at work. Or they don't want to let their employer know that they could easily be replaced by a bot.
Sources:
https://mbs.edu/news/Global-study-reveals-trust-of-AI-remains-a-critical-challenge
https://ai.uq.edu.au/project/trust-artificial-intelligence-global-study
Additional sources:
https://kpmg.com/xx/en/our-insights/ai-and-technology/trust-attitudes-and-use-of-ai.html
https://assets.kpmg.com/content/dam/kpmgsites/xx/pdf/2025/05/trust-attitudes-and-use-of-ai-global-report.pdf
Processed by jelizondo
(Score: 4, Informative) by Anonymous Coward on Wednesday June 04, @06:53PM
grunt.
(Score: 4, Insightful) by Anonymous Coward on Wednesday June 04, @07:17PM (2 children)
> only 46% of people globally are willing to trust AI systems.
I'm firmly in the 54%. I recently turned off Google Gemini results, using a Firefox extension called, "No More Gemini", since I don't want an "AI" answer on the top of my search results.
The hope was that this would also save the penny (approx) of electricity used by Gemini. However, I frequently see a Gemini answer for a short flash, then it is erased. Thus, "No More Gemini" is removing the "AI" results after they are generated. Yesterday (maybe when Google was heavily loaded) the Gemini answer stayed visible for several seconds before it vanished.
Anyone have a suggestion for a better way to do this that stops Gemini before it gets started?
(Score: 2, Funny) by Anonymous Coward on Wednesday June 04, @08:00PM
I have an excellent way, but it requires a very large number of axe murderers.
--
You're never alone with a whore.
(Score: 5, Touché) by bart9h on Wednesday June 04, @09:33PM
how about stop using google altogether?
(Score: 5, Insightful) by istartedi on Wednesday June 04, @07:25PM
It's a dramatic over simplification to say this is just a simple positive feedback loop, but if that holds as an analogy we can expect a deafening squeal at some point. As AI incorporates more data from "human" authors that are really just using AI, the potential for hallucinations to amplify becomes a concern--they've already considered it but I don't think they've got a good solution. You're tempted to say that humans will check the hallucinations with common sense... but we've seen plenty of human decision makers hallucinate on their own without AI.
Appended to the end of comments you post. Max: 120 chars.
(Score: 4, Insightful) by pTamok on Wednesday June 04, @08:13PM (4 children)
I should like to see a survey of academics who develop LLMs and other 'AI'-like systems, and what their attitudes to the public's mode of use of such systems are.
There can easily be a disconnect between what the public's opinions in aggregate are, and that of informed experts. Compare what clinical dieticians advise as a healthy diet, and what people in aggregate put in their supermarket trolleys. This also illustrates that people are quite capable of ignoring well-founded advice, which does not bode well for asking people to desist from using AI-like technologies inappropriately.
(Score: 0, Troll) by Anonymous Coward on Thursday June 05, @01:24AM (2 children)
So you can't use such AI for stuff where it'll be a problem if someone manages to inject commands into the data.
(Score: 2, Informative) by pTamok on Thursday June 05, @07:54AM
Yes. In-band control signalling is a bad idea. Ask any U.S. telecommunications company last century (2600 Hz [wikipedia.org])
(Score: 1, Informative) by Anonymous Coward on Thursday June 05, @10:58AM
> So you can't use such AI for stuff where it'll be a problem
Yes, no need to go further. As pointed out by veteran software engineer David Parnas in this recent talk, you can't use "AI" for anything critical - https://www.youtube.com/watch?v=YyFouLdwxY0 [youtube.com] (even if you don't watch the video, the text description is worth a look).
(Score: 2) by VLM on Thursday June 05, @10:49PM
There's an unironic medical journal article about that, and following corporate dietary advice has made americans VERY fat indeed (and the high carb low fat corporations very wealthy)
Adherence is not 100%, but its a larger percentage than you'd think. Obedience to authority, when the authorities are crooked and solely out for profit and hate you and your demographic group, often negatively correlates very negatively with success. That business model is being pushed in a lot of areas, not just authoritarian dietary advice.
(Score: 2) by VLM on Thursday June 05, @10:45PM (1 child)
What does "use" or "evaluating accuracy" mean? Without a firm definition, it's just sophistry and propaganda and clickbaiting.
I don't evaluate for accuracy if I'm making a funny meme picture using AI. I just spam out prompts until the meme looks funny enough to not require more effort. Or if its junk content. "Given this article by Looorg titled "Trust, Attitudes and Use of Artificial Intelligence 2025" where I will cut and paste the article summary below, turn it into a funny gangster rap" I might laugh but I won't evaluate for accuracy. "turn this email from my ex-boss into a funny mash-up meme using the movie Office Space and make sure to include Jennifer Anniston with lots of flair because she used to be hot"
I "use" AI to do work in the sense of exhaust my real bibliography then ask AI for some more leads, just like I use search engines for leads and in the old days used library card catalogs for lead generation. I'm well aware there are lazy people doing the "cut and paste and send it" but those people seem to get burned alive on a regular basis given the 20% or so failure rate of AI prompts. Lazy people always gonna lazy and get burned and then blame absolutely everyone and everything other than themselves, oldest story in the book. I would assume there were lazy people shitting on the original first printing press because they're too lazy to filter out ink smears or something. Kids these days are ruining their minds because of this newfangled "library card catalog" thing made out of little index cards what a ripoff.
So, sure, I "use" AI without "evaluating accuracy" and its just fine because I know what I'm doing and do it sensibly.
(Score: 1) by anubi on Saturday June 07, @12:38AM
I guess if you used some ayahuasa, you would not need the AI and still get noteworthy results.
"Prove all things; hold fast that which is good." [KJV: I Thessalonians 5:21]