
from the Genuine-People-Personality-software dept.
Mathematical Formula Tackles Complex Moral Decision-Making in AI:
An interdisciplinary team of researchers has developed a blueprint for creating algorithms that more effectively incorporate ethical guidelines into artificial intelligence (AI) decision-making programs. The project focused specifically on technologies in which humans interact with AI programs, such as virtual assistants or "carebots" used in healthcare settings.
[...] "For example, let's say that a carebot is in a setting where two people require medical assistance. One patient is unconscious but requires urgent care, while the second patient is in less urgent need but demands that the carebot treat him first. How does the carebot decide which patient is assisted first? Should the carebot even treat a patient who is unconscious and therefore unable to consent to receiving the treatment?
"Previous efforts to incorporate ethical decision-making into AI programs have been limited in scope and focused on utilitarian reasoning, which neglects the complexity of human moral decision-making," Dubljević says. "Our work addresses this and, while I used carebots as an example, is applicable to a wide range of human-AI teaming technologies."
[...] To address the complexity of moral decision-making, the researchers developed a mathematical formula and a related series of decision trees that can be incorporated into AI programs. These tools draw on something called the Agent, Deed, and Consequence (ADC) Model, which was developed by Dubljević and colleagues to reflect how people make complex ethical decisions in the real world.
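The write-up doesn't reproduce the formula itself, but the ADC idea (score a candidate action on the agent's intent, the deed itself, and its consequences, then combine the three) lends itself to a rough sketch. Everything below, from the Option fields to the equal weighting and the component values, is a hypothetical illustration, not the researchers' actual model:

```python
# A minimal, hypothetical sketch of ADC-style scoring -- the fields, weights,
# and numbers are illustrative assumptions, not the formula from the paper.
from dataclasses import dataclass

@dataclass
class Option:
    """A candidate action, scored on the three ADC components in [-1, 1]."""
    name: str
    agent: float        # intent/character of the actor: benevolent (+) or malevolent (-)
    deed: float         # the act itself: e.g. lying scores negative even with good intent
    consequence: float  # expected outcome: benefit (+) or harm (-)

def adc_score(option: Option, weights=(1.0, 1.0, 1.0)) -> float:
    """Combine the three components as a weighted sum (placeholder weights)."""
    w_a, w_d, w_c = weights
    return w_a * option.agent + w_d * option.deed + w_c * option.consequence

# The carebot triage scenario from the article, with made-up component values:
options = [
    Option("treat the unconscious patient first",
           agent=0.9, deed=-0.2, consequence=0.9),   # deed slightly negative: no consent
    Option("treat the demanding patient first",
           agent=0.3, deed=0.1, consequence=-0.6),   # urgent harm goes untreated
]
best = max(options, key=adc_score)
print(f"chosen: {best.name} (score {adc_score(best):+.2f})")
```

A plain weighted sum is the simplest possible combination; the decision trees mentioned in the article presumably exist to capture interactions between the three components that a linear score like this cannot.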
[...] "With the rise of AI and robotics technologies, society needs such collaborative efforts between ethicists and engineers. Our future depends on it."
Journal Reference:
Michael Pflanzer, Zachary Traylor, Joseph B. Lyons, et al. Ethics in human–AI teaming: principles and perspectives [open]. AI Ethics (2022). https://doi.org/10.1007/s43681-022-00214-z
(Score: 1, Funny) by Anonymous Coward on Friday October 28 2022, @07:48PM
Teach the bot about real "triage" and to ignore entitled screaming ninnies
(Score: 1, Touché) by Anonymous Coward on Friday October 28 2022, @07:49PM (10 children)
... the boss is right?
But now there's no need to guess; it's whatever the AI says. Our wait is over, the overlords have arrived.
(Score: 4, Insightful) by JoeMerchant on Friday October 28 2022, @08:53PM (2 children)
A formula "tackles" diverse and conflicting ethical concerns?
Obfuscates, sets an arbitrary line in the sand begging to be ignored, fails miserably to account for all variables and their interdependencies, attempts to put forth one group's view as "the one", yes.
Tackles? Not even close.
(Score: 2) by Barenflimski on Saturday October 29 2022, @06:12AM (1 child)
If we can create God ourselves, then we have a chance at understanding it. No?
(Score: 2) by JoeMerchant on Saturday October 29 2022, @12:15PM
There is strength in our diversity.
There is also inconsistency and conflict.
How many "Gods" have we created, and what rules do they all follow?
(Score: 2) by Thexalon on Friday October 28 2022, @09:27PM (6 children)
These bots will, I'm sure, learn the Golden Rule: Whoever has the gold makes the rules.
"Think of how stupid the average person is. Then realize half of 'em are stupider than that." - George Carlin
(Score: 1) by HammeredGlass on Friday October 28 2022, @09:40PM (4 children)
money doesn't matter anymore, if it ever did...
power, as ever, is the real currency
printer go brrrrrr
(Score: 2) by JoeMerchant on Friday October 28 2022, @10:16PM (3 children)
Money is power, until you want power over people with lots of money; then it gets complicated.
Why else do you think they don't let too many people have lots of money?
(Score: 1) by HammeredGlass on Friday October 28 2022, @11:06PM
you can name them now
didn't know if you'd heard
(Score: 0) by Anonymous Coward on Saturday October 29 2022, @12:24AM (1 child)
No... for AI, wattage is power
(Score: 0) by Anonymous Coward on Saturday October 29 2022, @01:24AM
Schoolhouse Rocky told me that knowledge is power.
(Score: 0) by Anonymous Coward on Friday October 28 2022, @09:40PM
How could an algorithm based on such an ironclad rule be wrong? The AI decided it, so it must be right.
(Score: 4, Funny) by RamiK on Friday October 28 2022, @10:39PM (2 children)
I'd like to remind them that I'm neither obese nor anywhere near a trolley and never really cared much for anything Isaac Asimov said or put to paper.
compiling...
(Score: 2) by krishnoid on Friday October 28 2022, @11:25PM (1 child)
Tell it to the crowdsourced dataset [moralmachine.net].
(Score: 0) by Anonymous Coward on Saturday October 29 2022, @02:10AM
Is it appropriate that when I follow that link I get a blank page?
(Score: 2) by legont on Friday October 28 2022, @10:57PM
Should the AI in question know the patient's color, or should we make it blind?
See, no news reports these days ever mention color. They say a perpetrator described as a 5'6" male assaulted... please contact us when you see him. See what?
P.S. I am sure the male part is going away too. P.P.S. As a short guy I believe 5'6" is discrimination too.
"Wealth is the relentless enemy of understanding" - John Kenneth Galbraith.
(Score: 2, Insightful) by Anonymous Coward on Saturday October 29 2022, @04:20AM (1 child)
Responsibility is a necessary condition for morality. No algorithm can be held responsible for its actions, so it is amoral. And no amoral being, entity, or system should be making decisions about anything where morality is a concern.
And, even if we disregard the above...
>The first factor [considered by humans when making moral judgements] is the intent of a given action and the character of the agent performing the action. Is it benevolent or malevolent?
We humans shouldn't be fooling ourselves in the first place by wishfully believing that we know each other's intents. We do not; at most we can assume them. So even if an AI algorithm were able to make moral decisions (it isn't), we shouldn't be tweaking it to have the same flaws that we do, such as assuming.
What we do know much better, however, are the likely results of the actions being taken. We can compare those results with our goals and say "I don't want this" or "I want that".
>The second factor is the action itself. For example, people tend to view certain actions, such as lying, as inherently bad.
Again, we should not be tweaking it to have the same stupid flaws that we have, such as considering things intrinsically good or bad.
Even lying is morally acceptable in some situations, as the following excerpt shows:
>but if a nurse lies to a patient making obnoxious demands in order to prioritize treating a second patient in more urgent need, most people would view this as morally acceptable.
And here we are back again at utilitarianism.
(Score: 4, Touché) by maxwell demon on Saturday October 29 2022, @05:22AM
The ultimate goal of "moral" machines is not to make the best decision for those involved in the situation. It is to make the best decision for the stakeholders, that is, the one that makes them least likely to be successfully sued.
The Tao of math: The numbers you can count are not the real numbers.