posted by janrinok on Wednesday December 14 2022, @12:02PM   Printer-friendly
from the creepy dept.

MIT presents the "Wearable Reasoner," a proof-of-concept wearable system capable of analyzing if an argument is stated with supporting evidence or not to prompt people to question and reflect on the justification of their own beliefs and the arguments of others:

In an experimental study, we explored the impact of argumentation mining and explainability of the AI feedback on the user through a verbal statement evaluation task. The results demonstrate that the device with explainable feedback is effective in enhancing rationality by helping users differentiate between statements supported by evidence and those without. When assisted by an AI system with explainable feedback, users significantly consider claims given with reasons or evidence more reasonable than those without. Qualitative interviews demonstrate users' internal processes of reflection and integration of the new information in their judgment and decision making, stating that they were happy to have a second opinion present, and emphasizing the improved evaluation of presented arguments.

Based on recent advances in artificial intelligence (AI), argument mining, and computational linguistics, we envision the possibility of having an AI assistant as a symbiotic counterpart to the biological human brain. As a "second brain," the AI serves as an extended, rational reasoning organ that assists the individual and can teach them to become more rational over time by making them aware of biased and fallacious information through just-in-time feedback. To ensure the transparency of the AI system, and prevent it from becoming an AI "black box," it is important for the AI to be able to explain how it generates its classifications. This Explainable AI additionally allows the person to speculate, internalize and learn from the AI system, and prevents an over-reliance on the technology.
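To make the idea concrete: the paper describes labeling a statement as evidence-backed or not and returning an explanation alongside the label. A toy rule-based sketch of that interface is below. The marker list, scoring, and function names are illustrative assumptions on my part; the authors' actual system uses a learned argument-mining model, not keyword matching.

```python
# Toy sketch of evidence-presence classification with an explanation,
# loosely in the spirit of the "Wearable Reasoner" feedback loop.
# The marker list is a hypothetical stand-in for a trained classifier.

EVIDENCE_MARKERS = [
    "because", "since", "according to", "studies show",
    "for example", "research suggests", "data indicate",
]

def classify(statement: str) -> tuple[str, str]:
    """Return (label, explanation) for a single verbal statement."""
    text = statement.lower()
    hits = [m for m in EVIDENCE_MARKERS if m in text]
    if hits:
        return ("supported", "found evidence marker(s): " + ", ".join(hits))
    return ("unsupported", "no evidence or reasoning markers found")

label, why = classify(
    "Exercise improves mood because studies show it raises endorphins."
)
print(label, "-", why)
```

The point of the explanation string is the "Explainable AI" part: the wearer sees *why* a statement was flagged, rather than a bare verdict, which is what the study credits for users reflecting on the feedback instead of deferring to it blindly.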

Will this help the fight against misinformation/disinformation? Originally spotted on The Eponymous Pickle.

Original Submission

This discussion was created by janrinok (52) for logged-in users only, but now has been archived. No new comments can be posted.
  • (Score: 2) by Frosty Piss on Wednesday December 14 2022, @12:04PM (3 children)

    by Frosty Piss (4971) on Wednesday December 14 2022, @12:04PM (#1282364)

    Probably just makes a lot of Wikipedia queries. Seriously, the things they call "AI"...

  • (Score: 0) by Anonymous Coward on Wednesday December 14 2022, @12:21PM (1 child)

    by Anonymous Coward on Wednesday December 14 2022, @12:21PM (#1282365)

    Seconded. I'll stick with, "Do your own thinking".

    • (Score: 2) by Immerman on Thursday December 15 2022, @02:09AM

      by Immerman (3985) on Thursday December 15 2022, @02:09AM (#1282448)

      I think the point is that the overwhelming majority of people *really suck* at thinking rationally. Our brains aren't designed for it; it's a skill that tends to take decades of practice to get good at. Most students are more interested in memorizing the answers and playing social games, and then never again seriously train their minds after graduating.

      Hell, most people probably can't name even three common logical fallacies - and they've got names precisely because they're so easy to fall prey to that almost everyone not watching out for them does.

  • (Score: 2) by Immerman on Thursday December 15 2022, @02:17AM

    by Immerman (3985) on Thursday December 15 2022, @02:17AM (#1282450)

    As described, that would be useless.

    It doesn't sound like they're making a fact checker - which a Wikipedia search could provide, if you trust Wikipedia.

    It sounds from the summary like they're analyzing the structure of an argument to see whether supporting evidence is offered at all, rather than whether it's accurate. Presumably as opposed to the much more common case where the argument is all bombast and empty rhetoric, without even an attempt to provide evidence. See: almost everything that has ever come out of Trump's mouth, and a great many other politicians for that matter.

    One of the great things about logic is that you can analyze the integrity of an argument entirely independently from the accuracy of any statements within it. If the logical integrity is flawed, then it doesn't matter how accurate the statements are, they can't carry you from premise to conclusion.
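That separation of validity from truth can be checked mechanically. The sketch below is my own illustration, not anything from the article: an argument form is valid iff no truth assignment makes every premise true while the conclusion is false, which a brute-force truth table can decide without knowing whether any premise is actually true.

```python
from itertools import product

def valid(premises, conclusion, vars):
    """Brute-force truth-table check of a propositional argument form.

    Valid iff no assignment makes all premises true and the
    conclusion false -- truth of the premises never enters into it.
    """
    for values in product([True, False], repeat=len(vars)):
        env = dict(zip(vars, values))
        if all(p(env) for p in premises) and not conclusion(env):
            return False  # counterexample assignment found
    return True

# Modus ponens: P, P -> Q, therefore Q  (valid regardless of facts)
mp = valid(
    [lambda e: e["P"], lambda e: (not e["P"]) or e["Q"]],
    lambda e: e["Q"],
    ["P", "Q"],
)

# Affirming the consequent: Q, P -> Q, therefore P  (a classic fallacy)
ac = valid(
    [lambda e: e["Q"], lambda e: (not e["P"]) or e["Q"]],
    lambda e: e["P"],
    ["P", "Q"],
)

print(mp, ac)  # True False
```

Note that the checker never asks whether P or Q is true in the real world; a valid form with false premises still proves nothing, which is exactly the "premise to conclusion" caveat above.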