posted by martyb on Sunday July 07 2024, @06:44PM
from the how-can-you-tell? dept.

(Editor's note: This story is ~1,400 words, but it looks at several non-obvious problems that need to be addressed. Well worth reading! --Martyb/Bytram)

AI lie detectors are better than humans at spotting lies:

But the technology could break down trust and social bonds.

Can you spot a liar? It's a question I imagine has been on a lot of minds lately, in the wake of various televised political debates. Research has shown that we're generally pretty bad at telling truth from lies.

Some believe that AI could help improve our odds, and do better than dodgy, old-fashioned techniques like polygraph tests. AI-based lie detection systems could one day be used to help us sift fact from fake news, evaluate claims, and potentially even spot fibs and exaggerations in job applications. The question is whether we will trust them. And if we should.

[...]

Journal Reference:
Bond, Charles F., Jr. and DePaulo, Bella M., Accuracy of Deception Judgments, Personality and Social Psychology Review (DOI: https://journals.sagepub.com/doi/10.1207/s15327957pspr1003_2)


Original Submission

This discussion was created by martyb (76) for logged-in users only, but now has been archived. No new comments can be posted.
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 3, Touché) by ls671 on Sunday July 07 2024, @07:05PM (2 children)

    by ls671 (891) on Sunday July 07 2024, @07:05PM (#1363383) Homepage

    It makes sense since AI has more experience in creating lies which seem to come from absolutely nowhere.

    --
    Everything I write is lies, including this sentence.
    • (Score: 2) by Ox0000 on Sunday July 07 2024, @08:31PM

      by Ox0000 (5111) on Sunday July 07 2024, @08:31PM (#1363399)

      Exactly, takes one to know one...

    • (Score: 1, Redundant) by RedGreen on Monday July 08 2024, @02:51PM

      by RedGreen (888) on Monday July 08 2024, @02:51PM (#1363447)

      "It makes sense since AI has more experience in creating lies which seem to come from absolutely nowhere."

      I was thinking the same thing. Around here the old saying is "It takes one to know one".

      --
      Those people are not attacking Tesla dealerships. They are tourists showing love. I learned that on Jan. 6, 2021.
  • (Score: 5, Insightful) by bzipitidoo on Sunday July 07 2024, @07:23PM (4 children)

    by bzipitidoo (4388) on Sunday July 07 2024, @07:23PM (#1363386) Journal

    I suspect the AI is better at detecting when people know they're telling a lie. But, there are an awful lot of people who are entirely too good at denying reality and rewriting their recollections (to make themselves look good, of course) so thoroughly that they convince themselves of their own honesty. Can the AI detect those frauds?

    • (Score: 4, Insightful) by aafcac on Sunday July 07 2024, @08:27PM (2 children)

      by aafcac (17646) on Sunday July 07 2024, @08:27PM (#1363397)

      I'm curious how they know if people are lying. So often these sorts of things are put into use and turn out to be completely subjective. But, if they're just going for the easily verifiable lies, then there's no guarantee that this will work for people who are more purposeful at it.

      Plus, why are they even doing this sort of research? It has very little legitimate use that wouldn't also be useful for dictators and despots.

      • (Score: 2) by VLM on Monday July 08 2024, @08:31PM (1 child)

        by VLM (445) Subscriber Badge on Monday July 08 2024, @08:31PM (#1363487)

        My guess is you could ask a LLM today to categorize someone's argument at the following URL

        https://en.wikipedia.org/wiki/List_of_fallacies [wikipedia.org]

        And if it can, the argument might still be true, but people usually don't ship truth using sophistry, so it's probably false. If it doesn't fit one of the categories then it's probably true? (Rough sketch of the idea below.)

        See also https://en.wikipedia.org/wiki/List_of_paradoxes [wikipedia.org] or maybe https://en.wikipedia.org/wiki/Apologetics [wikipedia.org]
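
        Purely as a sketch of that idea, assuming the openai Python package and an API key in the environment; the model name and the trimmed-down fallacy list are placeholders, not anything from TFA:

          # Hypothetical sketch: ask an LLM to tag a statement with a named
          # fallacy, or "none" if it fits no category. Everything here is an
          # assumption for illustration, not part of the study in TFA.
          from openai import OpenAI

          FALLACIES = ["ad hominem", "straw man", "appeal to authority",
                       "false dilemma", "slippery slope", "none"]

          client = OpenAI()  # reads OPENAI_API_KEY from the environment

          def categorize(statement: str) -> str:
              prompt = (
                  "Classify the rhetorical fallacy in the following statement. "
                  f"Answer with exactly one of: {', '.join(FALLACIES)}.\n\n"
                  f"Statement: {statement}"
              )
              resp = client.chat.completions.create(
                  model="gpt-4o-mini",  # placeholder model name
                  messages=[{"role": "user", "content": prompt}],
              )
              return resp.choices[0].message.content.strip().lower()

          # Per the heuristic above: "none" => probably true,
          # anything else => probably sophistry.
          print(categorize("Everyone knows our diet works; thousands bought it."))

        Whether the category the model picks is itself a hallucination is left as an exercise.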

        • (Score: 2) by aafcac on Monday July 08 2024, @09:57PM

          by aafcac (17646) on Monday July 08 2024, @09:57PM (#1363493)

          I suppose, although a lot of people will use fallacious reasoning and believe every word of it. I guess it comes down to whether you care more about the validity of the reasoning or the truth of the statements. There are also potential issues because formal logic doesn't always translate very well into the real world, where there may be degrees of truth to something. For instance, even though being President of the US is rather binary, the 2000 Gore v. Bush match is a bit of an anomaly: Gore won the popular vote but lost the electoral college, in part because a ton of ballots were inappropriately thrown out as spoiled after people wrote on them. It shouldn't have impacted the validity of the votes, but it did, so any number of statements one could make about which candidate won wouldn't fit neatly into the usual lines of logical reasoning, which don't normally allow for possibilities like that.

    • (Score: 1) by anubi on Sunday July 07 2024, @11:15PM

      by anubi (2828) on Sunday July 07 2024, @11:15PM (#1363406) Journal

      Remember the vocal stress lie detector?

      https://html.duckduckgo.com/html?q=vocal%20microtremor%20lie%20detector [duckduckgo.com]

      Wasn't one line of these designed for the executive office, so an executive could monitor in real time how stressed the guy on the other end of the line was?

      --
      "Prove all things; hold fast that which is good." [KJV: I Thessalonians 5:21]
  • (Score: 2) by mrpg on Sunday July 07 2024, @08:27PM (2 children)

    by mrpg (5708) <reversethis-{gro ... yos} {ta} {gprm}> on Sunday July 07 2024, @08:27PM (#1363398) Homepage

    I read that as "all lie detectors".

    • (Score: 3, Funny) by Gaaark on Sunday July 07 2024, @10:00PM (1 child)

      by Gaaark (41) on Sunday July 07 2024, @10:00PM (#1363404) Journal

      AI says you're lying.

      In other words, "Computer says no." *cough* :)

      --
      --- Please remind me if I haven't been civil to you: I'm channeling MDC. I have always been here. ---Gaaark 2.0 --
      • (Score: 2) by Ox0000 on Monday July 08 2024, @03:46PM

        by Ox0000 (5111) on Monday July 08 2024, @03:46PM (#1363453)

        I've observed over the last several years that a lot of the technology being built is "no!" technology, technology that disables and dis-empowers those who (are forced to) use it. It's mostly technology that takes agency away, that denies abilities and opportunities to its 'subjects' (that's a very deliberately chosen word).

        I really wonder where the empowering technology, the "Yes!" technology is, whether there's even still a market for things like that.

  • (Score: 3, Insightful) by looorg on Sunday July 07 2024, @09:48PM (3 children)

    by looorg (578) on Sunday July 07 2024, @09:48PM (#1363402)

    In their study published in the journal iScience, von Schenk and her colleagues asked volunteers to write statements about their weekend plans. Half the time, people were incentivized to lie; a believable yet untrue statement was rewarded with a small financial payout. In total, the team collected 1,536 statements from 768 people.

    This seems somewhat dodgy. How do they know people lied in the statements, and who were they lying to? The authors, the AI, or just the statement in general? It's open to maximum mindfuckery. Was the incentive to get paid if I tricked the AI, or just if I wrote a lie? Did I have to manage to fool the AI to get paid?

    This reliance can shape our behavior. Normally, people tend to assume others are telling the truth. That was borne out in this study—even though the volunteers knew half of the statements were lies, they only marked out 19% of them as such. But that changed when people chose to make use of the AI tool: the accusation rate rose to 58%.

    They do? I might assume friends and family are telling the truth, more or less. Other people? Not so much. Total strangers? Even less so. But it also kind of depends on the subject. For everyone a step further out it's perhaps trust, but verify. The further out you are, the less trust there is up front. You earn that; it's not given from the start.

    But if they know that the other person is using tech (or AI), the assumption of lies skyrockets? That seems to be test-related, since they knew this was a test about detecting lies after all. If they had wanted to test that properly they would have needed at least three groups: one that gets fed a mix of statements, one that gets fed nothing but lies, and a third getting nothing but the truth (a rough sketch of that design below).
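
    Something like this back-of-the-envelope simulation, say. The 19%/58% accusation rates are the ones quoted above; the group lie ratios and everything else are made-up placeholders:

      # Hypothetical simulation of the three-group design proposed above.
      # Group lie ratios are assumptions; only the accusation rates come
      # from the figures quoted in TFA.
      import random

      random.seed(1)

      GROUPS = {"mixed": 0.5, "all lies": 1.0, "all truth": 0.0}
      ACCUSE = {"unaided": 0.19, "with AI": 0.58}

      def simulate(lie_ratio, accuse_rate, n=10_000):
          """Judges who accuse at a flat rate, blind to content."""
          false_accusations = missed_lies = 0
          for _ in range(n):
              is_lie = random.random() < lie_ratio
              accused = random.random() < accuse_rate
              false_accusations += accused and not is_lie
              missed_lies += is_lie and not accused
          return false_accusations, missed_lies

      for group, ratio in GROUPS.items():
          for mode, rate in ACCUSE.items():
              fa, ml = simulate(ratio, rate)
              print(f"{group:9} / {mode:8}: {fa:5} false accusations, {ml:5} missed lies")

    The point being that an accusation rate that looks cautious on a 50/50 mix produces a very different error profile once the base rate moves, which is exactly why the all-lies and all-truth control groups would tell you something.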

    But overall this seems to be more for written statements, even if it would be trivial to have some computer read them in a voice of the listener's choice. It would also be interesting to see if different voices, languages, or syntax would give you different results. Are people that write "nice" more trustworthy? Are people with poor grammar nothing but filthy liars, and are people with some kind of foreign accent (if you go vocal) worse off?

    Normal lie detectors, after all, are more or less only for verbal responses, along with the various biological indicators and variables. The best I have seen so far, I think, are MRI machines as lie detectors; they just are not very practical due to the size and cost.

    ... or hunt for fake details in a job hunter’s resume or interview responses.

    Perhaps they should hunt for fake details in (HR-) job postings, cause those are all full of lies and shit for the most part. Complete fabrications or twists of reality. That is the thing about job hunting: everybody lies. Some just lie better than others.

    Despite that, polygraph lie detector tests have endured in some settings, and have caused plenty of harm when they’ve been used to hurl accusations at people who fail them on reality TV shows.

    They don't believe that this "AI-powered lie detector" is going to cause harm? Von Schenk appears to be a bit naive if she thinks this is only going to be used for good.

    This runs the risk of being horrifically Orwellian: Big Brother is always watching and listening to all your statements, verbal or written. People will adapt to it. They will start to give no significant information, minimal answers, answers open to interpretation. No straight answers or statements anymore. It will suck to talk to people or read anything.

    Not to mention all those people that get flagged as liars are going to have a horrible time, as idiots somehow believe in the mechanical infallibility of the machine. Machine says this is a lie. Stop lying to us. But it's the truth? No it's not. Machine says so.

    Quite frankly I would rather not have this piece of tech at all. Nothing but trouble is going to come from it. But I still believe it's going to be a thing. After all, lots of government agencies etc. are already "AI"-screening applications for welfare or insurance claims or whatnot.

    • (Score: 3, Insightful) by Spamalope on Monday July 08 2024, @04:01AM

      by Spamalope (5233) on Monday July 08 2024, @04:01AM (#1363417) Homepage

      It'll allow officials a 'say it's a lie' button to discredit whoever opposes their current plans.

    • (Score: 2) by VLM on Monday July 08 2024, @08:43PM

      by VLM (445) Subscriber Badge on Monday July 08 2024, @08:43PM (#1363488)

      even though the volunteers knew half of the statements were lies, they only marked out 19% of them as such

      What if they were highly socially acceptable lies?

      Not just controversial stuff like religious-political beliefs, where claiming to believe an obvious falsehood puts you in the cool-kids-club, but "little white lies" like New Year's resolutions. "I plan to improve my diet and exercise this year" LOL yeah right, nobody's doing that.

    • (Score: 3, Interesting) by VLM on Monday July 08 2024, @09:06PM

      by VLM (445) Subscriber Badge on Monday July 08 2024, @09:06PM (#1363491)

      They will start to give no significant information, minimal answers, answers open to interpretation. No straight answers or statements anymore.

      This reminds me of being a teenage boy trying to talk to teenage girls about relationship-stuff.

      People get mad if the Google search bar refuses to answer "2+2=" but consider it business as usual in other scenarios such as the above. My guess is this will change discourse less than expected.

      Now we didn't have Siri back then; in fact, back then we still thought Z80s were cool. But take "Hey Siri, since you watch and listen to everything I see and do, does that chick Sherri in my eleventh-grade English class like me or not?" Sure, in the current year plus, the AI and anyone around me is going to give me a shitty non-answer as you describe, but it's not going to be ANY WORSE than the actual convos I had with/about chicks in the 80s/90s, LOL. Ironically, with thirty or so additional years of life experience, and gossip with former classmates providing considerable inside information, the answer about Sherri STILL cannot be more precise than "it's complicated" anyway... Just because it's an AI doesn't mean all answers are crystal clear.

      Sort of humorously, sort of not, you can consider that most of the guys talking about indeterminate, undecidable answers from quantum computers are really just mathing-up the experience of what it was like for the rest of us when we flirted with chicks in high school. We already know what it means when an answer is a quantum superposition of all possible answers, what an undecidable result is, etc. Surprisingly, you can teach a dude fairly well what a Hilbert space of a result is by comparing it to asking a teen girl your age, whom you've gone on a couple dates with, "So, what are we, exactly?" Sometimes I'll read theoretical books about QC, or read about MS Q# or whatever, and occasionally think to myself, "Clearly this guy was not flirting with the ladies as much as I did in school, because this concept that seems to mystify him seems pretty obvious to me."

  • (Score: 2, Informative) by Runaway1956 on Sunday July 07 2024, @10:03PM (1 child)

    by Runaway1956 (2926) Subscriber Badge on Sunday July 07 2024, @10:03PM (#1363405) Journal

    For whatever reason, the Sagepub DOI link in TFS isn't working for me. (could be a problem with my VPN? who knows) A couple searches found this PDF: https://www.cell.com/action/showPdf?pii=S2589-0042(24)01426-3 [cell.com] Same author, same subject matter, the methodology looks the same, I think it's the same paper.

    Alternative URL contained within the PDF is formatted in a more readable manner, overall looks better: https://www.cell.com/iscience/fulltext/S2589-0042(24)01426-3?_returnURL=https://linkinghub.elsevier.com/retrieve/pii/S2589004224014263?showall=true [cell.com]

    --
    “I have become friends with many school shooters” - Tampon Tim Walz
    • (Score: 2) by captain normal on Monday July 08 2024, @08:28PM

      by captain normal (2205) on Monday July 08 2024, @08:28PM (#1363486)

      I had the same problem; all I got was a page from the DOI Foundation (DOI Foundation?). Thank you.

      --
      The Musk/Trump interview appears to have been hacked, but not a DDOS hack...more like A Distributed Denial of Reality.