posted by janrinok on Wednesday December 14 2022, @12:02PM   Printer-friendly
from the creepy dept.

MIT presents the "Wearable Reasoner," a proof-of-concept wearable system that analyzes whether an argument is stated with supporting evidence, prompting people to question and reflect on the justification of their own beliefs and the arguments of others:

In an experimental study, we explored the impact of argumentation mining and explainability of the AI feedback on the user through a verbal statement evaluation task. The results demonstrate that the device with explainable feedback is effective in enhancing rationality by helping users differentiate between statements supported by evidence and those without. When assisted by an AI system with explainable feedback, users rated claims given with reasons or evidence as significantly more reasonable than those without. Qualitative interviews revealed users' internal processes of reflecting on and integrating the new information into their judgment and decision making; participants said they were happy to have a second opinion present and emphasized the improved evaluation of presented arguments.

Based on recent advances in artificial intelligence (AI), argument mining, and computational linguistics, we envision the possibility of having an AI assistant as a symbiotic counterpart to the biological human brain. As a "second brain," the AI serves as an extended, rational reasoning organ that assists the individual and can teach them to become more rational over time by making them aware of biased and fallacious information through just-in-time feedback. To ensure the transparency of the AI system, and prevent it from becoming an AI "black box," it is important for the AI to be able to explain how it generates its classifications. This Explainable AI additionally allows the person to speculate, internalize, and learn from the AI system, and prevents an over-reliance on the technology.
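To get a feel for the classification task the paper describes, here is a deliberately simplified sketch in Python. The actual Wearable Reasoner uses a trained argument-mining model; the marker phrases and function names below are illustrative assumptions only, showing how a statement might be labeled "supported" or "unsupported" with an explanation attached.

```python
import re

# Hypothetical markers that a statement offers a reason or evidence.
# A real system would use a trained NLP classifier, not keyword matching.
EVIDENCE_MARKERS = [
    r"\bbecause\b",
    r"\bsince\b",
    r"\baccording to\b",
    r"\bstudies show\b",
    r"\bfor example\b",
]

def has_supporting_evidence(statement: str) -> bool:
    """Return True if the statement contains a reason/evidence marker."""
    text = statement.lower()
    return any(re.search(pattern, text) for pattern in EVIDENCE_MARKERS)

def explain(statement: str) -> str:
    """Explainable feedback: report which marker (if any) drove the label."""
    text = statement.lower()
    for pattern in EVIDENCE_MARKERS:
        match = re.search(pattern, text)
        if match:
            return f"Supported: found reason marker '{match.group(0)}'"
    return "Unsupported: no reason or evidence marker found"
```

The `explain` function is the "Explainable AI" piece in miniature: rather than emitting a bare label, it points at the feature that produced the classification, which is what lets a user inspect and second-guess the feedback.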

https://doi.org/10.1145/3384657.3384799

Will this help the fight against misinformation/disinformation? Originally spotted on The Eponymous Pickle.


This discussion was created by janrinok (52) for logged-in users only, but has now been archived. No new comments can be posted.
  • (Score: 2) by Thexalon on Wednesday December 14 2022, @01:18PM (3 children)

    by Thexalon (636) on Wednesday December 14 2022, @01:18PM (#1282367)

    Let's say you have access to an all-knowing machine that can tell you what would be the wisest course of action available. And let's say, for the sake of argument, that this machine works perfectly every time.

    Does that mean you'll always behave wisely? Not at all! For the same reason that the warnings to not smoke cigarettes, or the PSAs telling you not to drive drunk, or the doctor telling you to eat beans-and-greens rather than pizza and soda, or the friends warning you about the hot person you're about to bed don't work all the time or all that well: The smart part of human brains is wired to justify the decisions the stupid impulsive part already made.

    --
    The only thing that stops a bad guy with a compiler is a good guy with a compiler.
  • (Score: 1) by khallow on Wednesday December 14 2022, @03:25PM

    by khallow (3766) Subscriber Badge on Wednesday December 14 2022, @03:25PM (#1282376) Journal
    OTOH, it does mean that you can mitigate the problem. Just because people behave irrationally doesn't mean they'll always behave irrationally with the same frequency.
  • (Score: 2) by tangomargarine on Wednesday December 14 2022, @08:26PM (1 child)

    by tangomargarine (667) on Wednesday December 14 2022, @08:26PM (#1282415)

    Let's say you have access to an all-knowing machine that can tell you what would be the wisest course of action available. And let's say, for the sake of argument, that this machine works perfectly every time.

    Sorry to be a wet blanket here...but I imagine this would violate some sort of law of the universe on a quantum level. You'd need to be able to see the future, because while there may be a *logical* best course of action, that always relies to a greater or lesser extent on *other people* also approaching the problem logically.

    And, well...after the Trump presidency and COVID and everything the last 6 years, that's one belief of mine that has been shattered.

    Logically, there's a pandemic happening, and thousands of people are dying. How do you solve this problem? Quarantine. Until a cure is developed, this is not in any way debatable.

    But no, people whine about "my freedoms" and refuse to behave like adults, all over a silly, minor little demand like putting a piece of fabric over your face.

    Does that mean you'll always behave wisely? Not at all!

    And yes, even when we logically know something is a bad idea, sometimes we do it anyway.

    --
    "Is that really true?" "I just spent the last hour telling you to think for yourself! Didn't you hear anything I said?"
    • (Score: 2) by Thexalon on Wednesday December 14 2022, @11:41PM

      by Thexalon (636) on Wednesday December 14 2022, @11:41PM (#1282437)

      Sorry to be a wet blanket here...but I imagine this would violate some sort of law of the universe on a quantum level.

      I'm well aware of that - the point I was making is that even if you've solved the technical problems perfectly, you still haven't solved the human problems.

      --
      The only thing that stops a bad guy with a compiler is a good guy with a compiler.