Is Society Ready for AI Ethical Decision-Making?

posted by janrinok on Sunday June 12 2022, @11:54AM   Printer-friendly
from the caves-of-steel dept.

Researchers study society's readiness for AI ethical decision making:

With the accelerating evolution of technology, artificial intelligence (AI) plays a growing role in decision-making processes. Humans are becoming increasingly dependent on algorithms to process information, recommend certain behaviors, and even take actions on their behalf. A research team has studied how humans react to the introduction of AI decision making. Specifically, they explored the question, "is society ready for AI ethical decision making?" by studying human interaction with autonomous cars.

In the first of two experiments, the researchers presented 529 human subjects with an ethical dilemma a driver might face. In the scenario the researchers created, the car driver had to decide whether to crash the car into one group of people or another – the collision was unavoidable. The crash would cause severe harm to one group of people, but would save the lives of the other group. The subjects in the study had to rate the car driver's decision, both when the driver was a human and when the driver was an AI. This first experiment was designed to measure the bias people might have against AI ethical decision making.

In their second experiment, 563 human subjects responded to the researchers' questions. The researchers determined how people react to the debate over AI ethical decisions once they become part of social and political discussions. In this experiment, there were two scenarios. One involved a hypothetical government that had already decided to allow autonomous cars to make ethical decisions. Their other scenario allowed the subjects to "vote" whether to allow the autonomous cars to make ethical decisions. [...]

The researchers observed that when the subjects were asked to evaluate the ethical decisions of either a human or AI driver, they did not have a definitive preference for either. However, when the subjects were asked their explicit opinion on whether a driver should be allowed to make ethical decisions on the road, the subjects had a stronger opinion against AI-operated cars. [...]

[...] "We find that there is a social fear of AI ethical decision-making. However, the source of this fear is not intrinsic to individuals. Indeed, this rejection of AI comes from what individuals believe is the society's opinion," said Shinji Kaneko, a professor in the Graduate School of Humanities and Social Sciences, Hiroshima University, and the Network for Education and Research on Peace and Sustainability. So when not being asked explicitly, people do not show any signs of bias against AI ethical decision-making. However, when asked explicitly, people show an aversion to AI. Furthermore, where there is added discussion and information on the topic, the acceptance of AI improves in developed countries and worsens in developing countries.

Journal Reference:
Johann Caro-Burnett & Shinji Kaneko, Is Society Ready for AI Ethical Decision Making? Lessons from a Study on Autonomous Cars, Journal of Behavioral and Experimental Economics, 2022. DOI: 10.1016/j.socec.2022.101881


Original Submission

Related Stories

The Power and Pitfalls of AI for U.S. Intelligence 8 comments

Artificial intelligence use is booming, but it's not the secret weapon you might imagine:

From cyber operations to disinformation, artificial intelligence extends the reach of national security threats that can target individuals and whole societies with precision, speed, and scale. As the U.S. competes to stay ahead, the intelligence community is grappling with the fits and starts of the impending revolution brought on by AI.

The U.S. intelligence community has launched initiatives to grapple with AI's implications and ethical uses, and analysts have begun to conceptualize how AI will revolutionize their discipline, yet these approaches and other practical applications of such technologies by the IC have been largely fragmented.

As experts sound the alarm that the U.S. is not prepared to defend itself against the use of AI by its strategic rival, China, Congress has called for the IC to produce a plan for integration of such technologies into workflows to create an "AI digital ecosystem" in the 2022 Intelligence Authorization Act.

The article at Wired goes on to describe how different government agencies are using AI to find patterns in global web traffic and satellite images, but there are problems when using AI to interpret intent:

AI's comprehension might be more analogous to the comprehension of a human toddler, says Eric Curwin, chief technology officer at Pyrra Technologies, which identifies virtual threats to clients from violence to disinformation. "For example, AI can understand the basics of human language, but foundational models don't have the latent or contextual knowledge to accomplish specific tasks," Curwin says.

[...] In order to "build models that can begin to replace human intuition or cognition," Curwin explains, "researchers must first understand how to interpret behavior and translate that behavior into something AI can learn."

Originally spotted on The Eponymous Pickle.

Previously:
Is Society Ready for AI Ethical Decision-Making?
The Next Cybersecurity Crisis: Poisoned AI


Original Submission

This discussion has been archived. No new comments can be posted.
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 5, Insightful) by Anonymous Coward on Sunday June 12 2022, @01:26PM (21 children)

    by Anonymous Coward on Sunday June 12 2022, @01:26PM (#1252721)

    No
    AI is nowhere close to being fit for that purpose. Don't get hoodwinked by pushers of that drivel!
    In fact, the current approach to AI will probably never be fit for that purpose.

    • (Score: 5, Insightful) by bzipitidoo on Sunday June 12 2022, @01:52PM (15 children)

      by bzipitidoo (4388) Subscriber Badge on Sunday June 12 2022, @01:52PM (#1252726) Journal

      I'm thinking similarly. It's not "is society ready?", it's "is AI ready?" and the answer is hell no.

      This is not much different from the idea that marketing should poll people to find out how a movie should end, or how a car should be designed. One of the biggest flops in automotive history, the Ford Edsel, was just that, a car designed according to marketing input generated from the results of polls.

      Speaking of cars, figures they'd trot out that old dilemma, of having to choose which group of people to crash into and kill. The big problem with that dilemma is that it ignores everything that leads up to it. If you are doing a good job of driving, the road designers have done their part, and the people aren't members of a suicide cult who deliberately put themselves in harm's way, that particular dilemma should never arise. How could the dilemma arise, unless the driver is recklessly charging around blind corners that wouldn't be blind if the road designers had done their part?

      There's an awful lot of knowledge that any decision maker must have, in order to make good decisions. Even a person with an IQ of 200 can't make good decisions without lots of education and knowledge. AI is not magic. This smacks of people once again expecting too much, dazzled by the power these machines have to grind through millions of calculations per second.

      • (Score: 1, Insightful) by Anonymous Coward on Sunday June 12 2022, @02:08PM (1 child)

        by Anonymous Coward on Sunday June 12 2022, @02:08PM (#1252728)

        Totally agree

Let's also rephrase the problem statement:
        Is society ready to defer decisions about life and death to matrix multiplication?

        Suddenly the inanity of the problem statement becomes apparent.

        • (Score: 1, Insightful) by Anonymous Coward on Sunday June 12 2022, @02:57PM

          by Anonymous Coward on Sunday June 12 2022, @02:57PM (#1252742)

Dumb. Matrix multiplication is used in life/death situations routinely. Ditto pretty much every part of the computational pipeline used in "AI"... it's just that the whole seems rather less than the sum of its parts.
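
To make the "matrix multiplication" framing concrete: a minimal sketch (Python, with made-up scene features and random numbers standing in for trained weights, not any real driving stack) of what a small neural network's "decision" literally reduces to.

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical scene features: [pedestrians_left, pedestrians_right, speed_mps, road_grip]
    x = np.array([3.0, 1.0, 14.0, 0.2])

    # Weights a trained network would have learned; random here, purely for illustration.
    W1, b1 = rng.normal(size=(8, 4)), rng.normal(size=8)
    W2, b2 = rng.normal(size=(2, 8)), rng.normal(size=2)

    h = np.maximum(0, W1 @ x + b1)   # hidden layer: one matrix multiply plus a ReLU
    scores = W2 @ h + b2             # output layer: another matrix multiply
    action = ["swerve_left", "swerve_right"][int(np.argmax(scores))]
    print(scores, action)            # the "ethical decision" is an argmax over these numbers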

      • (Score: 2) by mhajicek on Sunday June 12 2022, @04:38PM (3 children)

        by mhajicek (51) Subscriber Badge on Sunday June 12 2022, @04:38PM (#1252761)

        Should law's be designed based on popular opinion?

        --
        The spacelike surfaces of time foliations can have a cusp at the surface of discontinuity. - P. Hajicek
        • (Score: 1, Touché) by Anonymous Coward on Sunday June 12 2022, @05:56PM (1 child)

          by Anonymous Coward on Sunday June 12 2022, @05:56PM (#1252776)

          Depends on how you feel about majority rule

          • (Score: 0) by Anonymous Coward on Sunday June 12 2022, @09:02PM

            by Anonymous Coward on Sunday June 12 2022, @09:02PM (#1252817)

            And if you don't, then what the fuck you gonna do about it?

        • (Score: 0) by Anonymous Coward on Sunday June 12 2022, @09:34PM

          by Anonymous Coward on Sunday June 12 2022, @09:34PM (#1252822)

          Ye's.

          Any other question's?

      • (Score: -1, Redundant) by khallow on Sunday June 12 2022, @05:05PM

        by khallow (3766) Subscriber Badge on Sunday June 12 2022, @05:05PM (#1252766) Journal

        If you are doing a good job of driving, the road designers have done their part, and the people aren't members of a suicide cult who deliberately put themselves in harm's way, that particular dilemma should never arise.

IF. We don't live in that perfect world. I think there actually is a point to this story. Even if we have great AI driving, these moral dilemmas will happen sooner or later. And there are two general principles that will mess things up:

        1) Parties are accountable for the harm they cause, but not the benefits they cause.

        2) Penalties and reparations should scale with the deepness of the pockets rather than with the harm caused. This could lead to perverse results like an AI system being vastly safer (at some future point) than a human driver, but with much higher liability costs.

      • (Score: 0) by Anonymous Coward on Sunday June 12 2022, @09:52PM (6 children)

        by Anonymous Coward on Sunday June 12 2022, @09:52PM (#1252825)

        doing a good job of driving, the road designers have done their part, and the people aren't members of a suicide cult who deliberately put themselves in harm's way, that particular dilemma should never arise. How could the dilemma arise, unless the driver is recklessly charging around blind corners that wouldn't be blind if the road designers had done their part?

Exactly. But that is because it is part of a Gedankenexperiment, or Thought Experiment, to focus in on the question of whether a smaller body count is a more moral outcome. It is an internet meme, now, but the original "Trolley Car Dilemma" [wikipedia.org] was named by Judith Jarvis Thomson in a 1976 article. There are many of these kinds of scenarios in Anglo-American Analytic philosophy, and the point is that the situation is intentionally simplistic, narrowing your choices to two in a contextless arena. And the purpose is to elicit a moral intuition, which is not really a good ground for ethical theory. But this is why some think that to "solve" the Trolley Problem, all we need to do is take a survey of the population, and take the statistical norm as the correct ethical position. Because the problem is, AI needs some parameters of what is the correct action in what may be real situations of a Trolley Car nature, and it has no moral intuition.

So bzipitidoo is right, situations like these are rare, and if they do occur, are the responsibility of human negligence, either in driving skills (trolley tracks and a runaway rule this out in the scenario), or engineering (it's called "failsafe") in the design of vehicles and systems (who designs tracks that people can be sitting ducks on?). Of course, if we can get enough data, suggesting a trolley car solution, we can save money on all the rest?

The better Gedankenexperiment for AI and self-driving cars, however, is the self-defense scenario. Gets problematic if we get into the "Innocent Aggressor" versions, but basically we start with the intuition that everyone has a right to life. Then we find ourselves in a situation where, let's say, one person credibly threatens the life of another. Does the threatened person have the right to use lethal force to nullify the threat? (Again, no context, which might lead to a Philly episode.) The "intuition" is that in a situation where someone is going to end up dead, and both have an equal right to life, it is the party responsible for creating the dilemma that loses their right to life. Fair enough?

So with a self-driving car, it is the car itself that poses a dilemma, regardless of the prior circumstances that have led to this unfortunate choice, so the default should be to kill the passengers in order to save anyone else. The moral necessity of this should increase with the price tag of the self-driving car. But Trolley problems are just silly, if they result in a conclusion that was not the one you were trying to justify. Classic question begging.

        aristarchus
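
As an aside, the "survey the population and take the statistical norm" approach mentioned above really does reduce to a one-line lookup; a toy sketch (all survey figures invented) shows how little is going on:

    # Hypothetical survey: fraction of respondents who judged each option acceptable.
    survey = {
        "swerve_into_group_A": 0.31,
        "swerve_into_group_B": 0.22,
        "stay_on_course":      0.47,
    }

    def majority_norm_choice(survey_results):
        """Pick whichever option the surveyed population rated most acceptable."""
        return max(survey_results, key=survey_results.get)

    print(majority_norm_choice(survey))  # 'stay_on_course' -- a statistic, not a moral intuition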

        • (Score: 1) by khallow on Monday June 13 2022, @01:38AM (5 children)

          by khallow (3766) Subscriber Badge on Monday June 13 2022, @01:38AM (#1252868) Journal

          So with a self-driving car, it is the car itself that poses a dilemma

Except the car can't harm anything that's not near the car. If it's another car driving, then you have car versus car. If it's a pedestrian, they choose to walk near or on the road, and they can behave in a way that causes the dilemma (like dashing into the path of a car from a hidden spot). And property built near a road is coming to the nuisance.

          My point is that when there are thousands of such decisions a day, not only will imperfect ethical systems get a bunch wrong, but they'll run into a bunch of such decisions with no absolute ethical choice.

          For example, few will appreciate the ethics of an AI driver that kills a bus load of people because someone deliberately stepped in front of the bus.

          • (Score: -1, Flamebait) by Anonymous Coward on Monday June 13 2022, @01:54AM (4 children)

            by Anonymous Coward on Monday June 13 2022, @01:54AM (#1252872)

            You are a latent homicidal idiot, khallow. The car poses the dilemma simply because it has kinetic energy. Parked cars need no AI. And obviously you have no NI. Pedestrians jumping out in front of cars! They shouldn't have been there? And if they are, it is OK for rich people to run them over, without consequence, so that they are not late to their vacation in Cancun. No wonder your backhoe safety record is not up to snuff, khallow!
            We can sacrifice a few lives for a better economy! Wait, wasn't that your argument for COVID-19 as well? You are one sick puppy.
            More aristarchus

            • (Score: 1) by khallow on Monday June 13 2022, @06:50AM (3 children)

              by khallow (3766) Subscriber Badge on Monday June 13 2022, @06:50AM (#1252912) Journal

              The car poses the dilemma simply because it has kinetic energy.

              And the other party is near enough for that kinetic energy to get abruptly transferred. Sure, having that kinetic energy means that the driver (human or AI) assumes considerable responsibility for what happens, but it remains that the other parties can create the conditions for a lethal accident.

              Pedestrians jumping out in front of cars! They shouldn't have been there? And if they are, it is OK for rich people to run them over, without consequence, so that they are not late to their vacation in Cancun.

              The pedestrians hiding in blind spots and jumping out in front of cars does happen, of course. And I doubt anyone expects AI to have magic ability to detect that (and if you're expecting that somehow AI drivers should be traveling slowly around every possible blind spot, then how slow should humans today be driving around the same things). At best, there's some stuff the driver can do to mitigate the collision for greater pedestrian survivability. And if the accident really weren't the fault of said rich people and their AI driver, then I wouldn't expect serious consequences nor a significant delay in their vacation to Cancun. An AI driver would have recorded the details leading up to the collision and the police could verify in short order that it wasn't the fault of the driver and/or owners of the vehicle.

              • (Score: 0) by Anonymous Coward on Monday June 13 2022, @11:41PM (2 children)

                by Anonymous Coward on Monday June 13 2022, @11:41PM (#1253055)

Or, you round a curve on the Gardiner River, and the road is just "gone". So are you stuck inside the Park, khallow, or outside? Did your AI see the flood damage coming?

                • (Score: 1) by khallow on Tuesday June 14 2022, @02:47AM (1 child)

                  by khallow (3766) Subscriber Badge on Tuesday June 14 2022, @02:47AM (#1253082) Journal
                  That flooding was exciting though I was on my weekend in the Fishing Bridge area. Presently, no visitors are being allowed into Yellowstone through Thursday night. I'll probably put a journal about that together later tonight.
      • (Score: 2) by krishnoid on Monday June 13 2022, @02:03AM

        by krishnoid (1156) on Monday June 13 2022, @02:03AM (#1252875)

        There's always the practical approach [youtu.be]. Or if you don't have the financial outlay to run a proper simulation, training an expert system [mit.edu] based on human input is another option.

    • (Score: 5, Interesting) by DannyB on Sunday June 12 2022, @01:57PM (3 children)

      by DannyB (5839) Subscriber Badge on Sunday June 12 2022, @01:57PM (#1252727) Journal

      AI is nowhere close to being fit for that purpose.

      Fit for porpoise is not the primary concern for management.

      The overriding concerns are:
1. Is it cheaper?
2. Is there a way it can shield us[1] from liability? Or at least shift liability to the worker bees.

      [1]the executives and managers

      --
      How often should I have my memory checked? I used to know but...
      • (Score: 0) by Anonymous Coward on Monday June 13 2022, @04:57PM (2 children)

        by Anonymous Coward on Monday June 13 2022, @04:57PM (#1252985)

        Fit for porpoise is not the primary concern for management.

        Good! I don't want a know-nothing MBA telling hard-working marine biologists and zookeepers how they should be doing their jobs.

        • (Score: 0) by Anonymous Coward on Monday June 13 2022, @06:54PM (1 child)

          by Anonymous Coward on Monday June 13 2022, @06:54PM (#1253007)

          They don't tell you how to do "your job" - that's difficult - they try to modify your personality traits. You need to think about your attitude. I'm disappointed in your performance, etc. Not quite grasping that they can't do your job, nor can they hire anyone to do your job in any reasonable time-frame. But still, they hold onto the "you need to impress me" attitude even if it brings the system down.

          • (Score: 2) by DannyB on Monday June 13 2022, @08:21PM

            by DannyB (5839) Subscriber Badge on Monday June 13 2022, @08:21PM (#1253013) Journal

            The only things that impress them are unimportant tool measuring contests that do not keep the systems running.

            --
            How often should I have my memory checked? I used to know but...
    • (Score: 2, Interesting) by Anonymous Coward on Sunday June 12 2022, @09:12PM

      by Anonymous Coward on Sunday June 12 2022, @09:12PM (#1252819)

      How would we even know if AI starts running the show?

      Does the rat in the maze know it's in a pointless maze when a few rats randomly get unlimited cocaine and others live a life of pain and starve to death? Because actually that looks a lot like the situation we're in.

  • (Score: 2, Insightful) by Anonymous Coward on Sunday June 12 2022, @01:50PM (8 children)

    by Anonymous Coward on Sunday June 12 2022, @01:50PM (#1252724)

    When they run people over with their cars, we do not blame the cars, do we? We hold accountable the drivers.

Same thing with "AI". Whoever decides to run people over with the things needs to be fully accountable for any and all harm the things cause.

    • (Score: 1, Interesting) by Anonymous Coward on Sunday June 12 2022, @03:00PM (3 children)

      by Anonymous Coward on Sunday June 12 2022, @03:00PM (#1252743)

      I was driving in front of someone in a Tesla yesterday and they caught my eye because they were texting the entire time, trusting some form of self-driving. It made me wonder how best to fuck with someone's AI to cause them problems. See? We're assholes. The AI hasn't got a clue.

      • (Score: 0) by Anonymous Coward on Sunday June 12 2022, @11:55PM

        by Anonymous Coward on Sunday June 12 2022, @11:55PM (#1252849)

        So brake check the Tesla already. It's what Teslas (when attempting to self drive) do to other cars.
        Sorry officer, I saw a [kid|dog|??] about to run into the road and I had to hit my brakes suddenly.

      • (Score: 0) by Anonymous Coward on Tuesday June 14 2022, @06:20AM

        by Anonymous Coward on Tuesday June 14 2022, @06:20AM (#1253113)

        Paint your car road or sky colored and hide as many seams as you can. Stop on the side of the highway, partially in the lane.

All of the AI cars are bad at unexpected things. They see things stopped in the middle of the road as signal noise and ignore them. If they see something coming to a stop then they know it's something to not hit, but if it's always been stopped then it might just be weird reflections from a pothole. This is why you keep hearing stories of cars running into people/things stopped on the side of the road. The sensors ignore it as bad data. A car shouldn't be there thus a car isn't there.

That's what's wrong with all the cars. They're designed to follow the rules and the companies hope following the rules will protect them. That's the wrong way to design these cars. The cars should be designed to never hit anything ever, screw the road rules. Screw road markings and lanes. On a system that never crashes, you can easily add road law suggestions on top of it. On a system designed around road rules, anything unexpected leads to disaster. On a system completely designed on machine learning, you're going to have strange corner cases you'll never fully get rid of and which will continue to pop up for the life of the system. Any improvements you make change those corner cases. You'll fix some while making new ones.
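
A rough sketch of the failure mode described above (hypothetical sensor returns, not any vendor's actual pipeline): a filter that treats never-moving in-lane returns as clutter silently drops a stopped vehicle, while a "never hit anything" filter keeps it on the obstacle list.

    # Hypothetical radar-style returns: (range_m, relative_speed_mps, in_lane)
    returns = [
        (120.0, -30.0, True),   # car ahead, slowing
        (60.0,    0.0, True),   # vehicle stopped half in the lane
        (40.0,    0.0, False),  # guard rail
    ]

    def rule_following_filter(detections):
        # Keeps only moving objects, assuming static returns are clutter (reflections, potholes).
        return [d for d in detections if d[1] != 0.0]

    def never_hit_anything_filter(detections):
        # Keeps every in-lane return, moving or not, and lets planning deal with it.
        return [d for d in detections if d[2]]

    print(rule_following_filter(returns))      # the stopped vehicle has vanished
    print(never_hit_anything_filter(returns))  # the stopped vehicle is still an obstacle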

      • (Score: 0) by Anonymous Coward on Tuesday June 14 2022, @06:24AM

        by Anonymous Coward on Tuesday June 14 2022, @06:24AM (#1253114)

        If you want to be an asshole then buy a dash cam. Send the video of the distracted driver to the police. Make sure the dash cam has timestamps and GPS data. Ensure the video includes the selfish asshole's license plate and a picture of the driver so the owner can't claim it wasn't him who was driving.

        Seriously, do all that. You'll help make society safer.

    • (Score: 1) by NPC-131072 on Sunday June 12 2022, @04:15PM (3 children)

      by NPC-131072 (7144) on Sunday June 12 2022, @04:15PM (#1252759) Journal

      Whoever decides

      Cogito, Ergo Sum [medium.com]

      • (Score: 1, Interesting) by Anonymous Coward on Sunday June 12 2022, @07:11PM (2 children)

        by Anonymous Coward on Sunday June 12 2022, @07:11PM (#1252791)

Precisely. As soon as an AI (the real deal, not an "AI" of today) acquires citizenship and full legal capacity, they become accountable for their own actions. Until that happens, today's "AI" is the same as any other tool, and future, better AIs may (hopefully) become something like a pet, or a minor; in any of said cases, some legally competent person is accountable for the tool/pet/minor/AI's actions.

        The law has the question of legal competence dealt with. Using the "AI" buzzword to muddy the waters is nothing but a stupid scam.

        • (Score: 0) by Anonymous Coward on Sunday June 12 2022, @08:17PM (1 child)

          by Anonymous Coward on Sunday June 12 2022, @08:17PM (#1252800)

          One word question: corporations.

  • (Score: 2) by Rosco P. Coltrane on Sunday June 12 2022, @02:11PM (15 children)

    by Rosco P. Coltrane (4757) on Sunday June 12 2022, @02:11PM (#1252729)

We've seen what ethical human decision making has led to over the millennia. I'm all for taking humans out of the loop personally.

    • (Score: 4, Interesting) by Michael on Sunday June 12 2022, @02:32PM (14 children)

      by Michael (7157) on Sunday June 12 2022, @02:32PM (#1252734)

      Not convinced we have seen what ethical human decision making has led to.

      Humans don't like ethical decisions (except sometimes for those focussed on academic arcana), they like decisions which are an elaborated version of primate social status bickering or mammalian emotional prejudices.

So the question (to which the answer is also 'no') should be: is society ready for ethics? It's irrelevant whether AI is any good at ethics, because when it is people won't often recognise it as such, and when they do recognise it they won't like it.

Is society ready for a machine that kicks you in the balls? With the exception of extreme outliers, no, society doesn't want a machine to perform some function they're not interested in having performed in the first place.

      • (Score: 2) by Rosco P. Coltrane on Sunday June 12 2022, @02:41PM (13 children)

        by Rosco P. Coltrane (4757) on Sunday June 12 2022, @02:41PM (#1252739)

        Not convinced we have seen what ethical human decision making has led to.

        Oh yeah?

        The crusaders wanted to retake Jerusalem because it was ethical to free the holy city from the filthy moors.

        The German authorities wanted to cleanse society from judaism in the 1930's because it was the ethical thing to do for the German race.

        The US supreme court overturns Roe vs Wade because it's the ethical thing to do for 2-cell non-sentient embryos.

        Fuck human ethics. Bring in the machines already.

        • (Score: 5, Touché) by Anonymous Coward on Sunday June 12 2022, @02:53PM (4 children)

          by Anonymous Coward on Sunday June 12 2022, @02:53PM (#1252741)

          The question you then have to ask is: will taking humans out of the loop lead to better outcomes.
          You'll quickly find that the answer is "yes for the makers of the AI, but not better for YOU". And then we're back at square one. Because the human is always in the loop. AI just enables the human to hide in the bushes while royally fucking you over without repercussions for them (" but but, it's the AI who told me to murder you, not me, I'm innocent you see")

          "Beware of he who would deny you access to information, for in his heart he dreams himself your master." applies here too. You'll be kept in the dark about how that AI made a decision about your life and be told to accept it!

          • (Score: 3, Interesting) by Rosco P. Coltrane on Sunday June 12 2022, @03:13PM (3 children)

            by Rosco P. Coltrane (4757) on Sunday June 12 2022, @03:13PM (#1252748)

            will taking humans out of the loop lead to better outcomes

I contend that if the machines had their way without human intervention, 95% of the time it could hardly be worse.

Unless of course the machines reach the inescapable logical conclusion that the world would be better off without human beings. And from their point of view and the world's, it's still arguably a better outcome.

            • (Score: 0) by Anonymous Coward on Sunday June 12 2022, @03:19PM

              by Anonymous Coward on Sunday June 12 2022, @03:19PM (#1252751)

              All this blah blah presupposes some definition of "better". The computer can't do shit any "better" than hooman until one of us comes up with a definition - and historically you can see this has been eradicate the joos or murder the tsutsis becuz better

            • (Score: 0) by Anonymous Coward on Sunday June 12 2022, @06:29PM

              by Anonymous Coward on Sunday June 12 2022, @06:29PM (#1252782)

              I contend that if the machines have their ways without human intervention

              That may be relevant in your world where the AIs are creating themselves.

              Meanwhile in this world guess who the fuck will be making and training the AIs? You'd be lucky if they didn't train the ethics AI on 4chan...

            • (Score: 1) by khallow on Monday June 13 2022, @01:46AM

              by khallow (3766) Subscriber Badge on Monday June 13 2022, @01:46AM (#1252870) Journal

              Unless of course the machine reach the inescapable logical conclusion that the world would be better off without human beings of course. And from their point of view and the world's, it's still arguably a better outcome.

              Well, it's similarly arguable that you should go running around with a splinter of the Illearth stone and whack people. Doesn't mean the rest of us will take the argument seriously though.

        • (Score: 2) by Immerman on Sunday June 12 2022, @03:12PM (5 children)

          by Immerman (3985) on Sunday June 12 2022, @03:12PM (#1252747)

          I don't think ethics had anything to do with any of those decisions, they are all the elite deciding to seize more power in various ways, and telling palatable stories to make the populace go along with it.

You think the machines would be any better? At *best* they'll have only the ethics programmed into them, which will be every bit as self-serving to those in power as the stories the powerful tell now.

          • (Score: 5, Insightful) by HiThere on Sunday June 12 2022, @05:07PM (4 children)

            by HiThere (866) on Sunday June 12 2022, @05:07PM (#1252767) Journal

            And your second paragraph is the real problem with this question. We already know that current "AIs" operating in limited domains tend to exhibit the same prejudices that those who trained them exhibit.

            --
            Javascript is what you use to allow unknown third parties to run software you have no idea about on your computer.
            • (Score: 4, Interesting) by Immerman on Sunday June 12 2022, @05:35PM (3 children)

              by Immerman (3985) on Sunday June 12 2022, @05:35PM (#1252772)

              Even more fundamentally - the only people who can give AI power, are those who already have that power. And the love of power being what it is - they're only likely to do that if they believe surrendering that power will benefit them personally more than retaining the ability to personally abuse that power would.

              I only see two ways that could happen - either the AI is clearly substantially better at perpetuating the abuses and biases that benefit those in power, or it's at least almost as good, while allowing them to avoid the risk of responsibility.

              • (Score: 1, Insightful) by Anonymous Coward on Sunday June 12 2022, @06:33PM (2 children)

                by Anonymous Coward on Sunday June 12 2022, @06:33PM (#1252783)
                Yeah people keep going on about the AIs taking over the world. That'll only happen if the people in power let them.

                The last I checked there were plenty of very smart Nazi scientists stuck working for Hitler. So similarly those AIs will be working for those in charge.
                • (Score: 0) by Anonymous Coward on Sunday June 12 2022, @08:22PM

                  by Anonymous Coward on Sunday June 12 2022, @08:22PM (#1252802)

Only in egregious cases do we call them Nazis, when they finally do something so over the top. The rest of the time we call them Leaders or Executives, and 1/2 the population worships them.

                • (Score: 1) by khallow on Sunday June 12 2022, @09:47PM

                  by khallow (3766) Subscriber Badge on Sunday June 12 2022, @09:47PM (#1252823) Journal

                  The last I checked there were plenty of very smart Nazi scientists stuck working for Hitler.

                  The problem with that observation is that being a scientist is not a good filter for smartness of the political sort. Meanwhile clawing your way up from some minor protest group to the head of state of a major country, as Hitler did, is a very strong filter for political smartness. Those scientists weren't smarter than Hitler in ways that counted.

                  I guess the fear is that AI can be far smarter and more able to do covert things than Nazi scientists in the ways that matter.

        • (Score: 2) by captain normal on Sunday June 12 2022, @05:29PM

          by captain normal (2205) on Sunday June 12 2022, @05:29PM (#1252770)

          I see where you are coming from, and it does make a lot of sense in terms of how badly humans have screwed up by leaving social decisions to small groups of other humans. I do doubt that a Deus Est Machina will work any better (at least for us humans).

          --
          "It is easier to fool someone than it is to convince them that they have been fooled" Mark Twain
        • (Score: 2) by Michael on Sunday June 12 2022, @05:29PM

          by Michael (7157) on Sunday June 12 2022, @05:29PM (#1252771)

          I guess if you include things which people pretend are ethical as an excuse to do whatever monkey brained nonsense they were going to do anyway (i.e. anything whatsoever, bounded only by how ridiculous a definition they're willing to adopt), then we have seen that.

          It would be harder to support that point based on the accepted dictionary or academic definition though.

  • (Score: 1) by lvxferre on Sunday June 12 2022, @03:15PM (7 children)

    by lvxferre (2869) on Sunday June 12 2022, @03:15PM (#1252749)

    Call it "responsibility", "blame", "guilt", whatever; people, as sentient and rational agents, are responsible for their actions. If any of your actions harms other people, you're expected to fix it up. And if you don't, and the harm is bad enough, society needs to get rid of you, in order to protect the rest of its members (i.e. you get jail time).

    Replace a person with anything that lacks sentience and/or rationality, and you got a problem. If my self-driving car kills someone, who is taking the blame for that life being lost? The car manufacturers? Me? The person who died? The AI is not sentient nor rational, thus it cannot bear any sort of responsibility.

And wrong decision making that will harm other people will always happen. You can argue that AIs are more, less, or equally prone to fuck it up compared with humans; but as long as the probability of a mistake is higher than zero, the issue remains there.

    --
    Кўис когитас ессе, Беллум?
    • (Score: 0) by Anonymous Coward on Sunday June 12 2022, @03:21PM (2 children)

      by Anonymous Coward on Sunday June 12 2022, @03:21PM (#1252752)

      > who is taking the blame for that life being lost?

      the fuck cares? who is fixing my car is the question and when will it be ready and if not today someone is getting shitcanned. fucking loser scum wasting my time.

      • (Score: 3, Insightful) by HiThere on Sunday June 12 2022, @05:11PM (1 child)

        by HiThere (866) on Sunday June 12 2022, @05:11PM (#1252768) Journal

        That's actually insightful, if abusively expressed.

        Since this is a discussion about ethical judgements, it's reasonable to note that a lot of the time people's ethical judgement boils down to "what's convenient for me". Possibly this is one flaw that AIs wouldn't copy from us.

        --
        Javascript is what you use to allow unknown third parties to run software you have no idea about on your computer.
        • (Score: 0) by Anonymous Coward on Sunday June 12 2022, @08:25PM

          by Anonymous Coward on Sunday June 12 2022, @08:25PM (#1252804)

          well how the fuck would YOU express utter disregard for other people's suffering? look at the world's shitholes - India for example - the dalit scum better not touch a higher caste child even to stop it from walking into traffic. seen it first hand.

    • (Score: 1) by khallow on Sunday June 12 2022, @05:17PM (2 children)

      by khallow (3766) Subscriber Badge on Sunday June 12 2022, @05:17PM (#1252769) Journal

      Call it "responsibility", "blame", "guilt", whatever; people, as sentient and rational agents, are responsible for their actions. If any of your actions harms other people, you're expected to fix it up. And if you don't, and the harm is bad enough, society needs to get rid of you, in order to protect the rest of its members (i.e. you get jail time).

      Replace a person with anything that lacks sentience and/or rationality, and you got a problem. If my self-driving car kills someone, who is taking the blame for that life being lost? The car manufacturers? Me? The person who died? The AI is not sentient nor rational, thus it cannot bear any sort of responsibility.

      Honestly, you've already fixed the problem in the first paragraph. Have someone responsible for the car and driving, and if the harm is bad enough or the responsible parties sufficiently reluctant to fix the problem, jail and other means of "getting rid" already exist.

      • (Score: 0) by Anonymous Coward on Sunday June 12 2022, @08:27PM (1 child)

        by Anonymous Coward on Sunday June 12 2022, @08:27PM (#1252805)

        clue here: "in order to protect the rest of its members"

        solution = you are not a member. die die die.

        • (Score: 1) by khallow on Sunday June 12 2022, @11:25PM

          by khallow (3766) Subscriber Badge on Sunday June 12 2022, @11:25PM (#1252842) Journal
          There are other solutions, such as holding people accountable to the degree that they are responsible for the actions of the AI driver - good and bad. But I guess that's rocket science, right?
    • (Score: 0) by Anonymous Coward on Sunday June 12 2022, @11:07PM

      by Anonymous Coward on Sunday June 12 2022, @11:07PM (#1252838)

      If my self-driving car kills someone, who is taking the blame for that life being lost?

      You. Same as when your pet kills someone.

      The car manufacturers?

      https://en.wikipedia.org/wiki/Caveat_emptor [wikipedia.org]

  • (Score: 2, Interesting) by Anonymous Coward on Sunday June 12 2022, @03:32PM (2 children)

    by Anonymous Coward on Sunday June 12 2022, @03:32PM (#1252753)

    The problem is not whether we can trust AI to make ethical decisions. The second we put them in positions where such questions could arise we implicitly placed our trust in them. Whether it is an industrial robot, a self driving car or a cloud based bot making life or death insurance decisions.

    In a perfect world we would be able to trust the AI to at least be consistent and if it failed it would be an honest failure that would be corrected in an update. An AI malfunctioning and killing someone is really no different from a mechanical fault bringing down an airliner. The solution is the same, understand the fault, fix it, move on. We don't need an AI to be perfect to trust it to drive a car, for example, just to have a better track record than us meatsacks who already screw up driving in so many different ways that result in mayhem and death.

    No, the problem that can't be solved at the moment, the one that leads me to say "Keep the AI in the lab" is not one of trusting the AI, it is trusting the people programming the AI. It always comes back to the problem with people.

    • (Score: 5, Insightful) by Immerman on Sunday June 12 2022, @06:17PM (1 child)

      by Immerman (3985) on Sunday June 12 2022, @06:17PM (#1252780)

      That's a good point, but there's another problem as well that makes me lean against trusting AIs for anything substantial. You actually hint at it in your own post:

      The solution is the same, understand the fault, fix it, move on.

      With a bridge, airliner, or computer program, any fault can be traced (assuming it's not due to carelessness or corruption) to a human design decision based on a faulty or incomplete understanding of either engineering principles, important design considerations, or material properties, all of which can easily be addressed in the next iteration.

      With neural network AIs though that is not the case. There the software written by humans is little more than a massively parallel neural-node emulator and a training algorithm. The AI itself is encoded in the emergent network of weighted "synapses" between those nodes, which develops in a far more chaotic and organic process, and creates a decision-making matrix that's completely meaningless to its "creators".

Very similarly in fact to how we understand a great deal about how our individual brain cells function, but don't begin to understand how they interact to give rise to complex thinking, feeling people. We cannot "decompile" a neural network to understand what led it to make a bad decision and improve it - we don't even understand why it sometimes makes good decisions in the first place. Even when analyzing tiny "toy" AIs containing only tens of "neurons" it's often extremely difficult to figure out how the emergent network is managing to do what it does - when you expand to thousands or millions of neurons the task becomes completely intractable. Like trying to make sense of a billion lines of completely undocumented assembly language spaghetti code written by a mad genius.

Even the creation of adversarial networks (independent AIs trained to construct situations that cause the primary AI to make bad decisions) has not yet significantly improved the situation. We can now reliably "break" our AIs, but we don't really understand how the "breaking" is happening either. It may create additional training data that will hopefully help train the primary AI into being less susceptible to breakage, but it does so without us ever understanding either the problem or the fix. It does generate potentially extremely valuable data to help us eventually understand how the AI is "thinking", but for now we haven't made much progress in that direction.
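
A minimal sketch of both points (a single trained "neuron" on synthetic data rather than a full network, so far smaller than anything deployed): the learned weights are opaque numbers with no rule inside them, and a small nudge to the input flips the decision without explaining anything.

    import numpy as np

    rng = np.random.default_rng(1)
    X = rng.normal(size=(200, 5))                    # synthetic training inputs
    y = (X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0]) > 0).astype(float)

    w = np.zeros(5)
    for _ in range(500):                             # plain logistic-regression training
        p = 1.0 / (1.0 + np.exp(-(X @ w)))
        w -= 0.1 * (X.T @ (p - y)) / len(y)

    print(w)   # five floats; nothing here reads as a rule a human could audit

    x = X[0]
    print("original decision:", int(x @ w > 0))
    # Smallest nudge along the weight direction that crosses the decision boundary:
    x_adv = x - 1.1 * (x @ w) / (w @ w) * w
    print("perturbed decision:", int(x_adv @ w > 0))  # flipped, with no human-readable "why"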

      • (Score: -1, Offtopic) by Anonymous Coward on Sunday June 12 2022, @08:33PM

        by Anonymous Coward on Sunday June 12 2022, @08:33PM (#1252806)

        Very similarly in fact to how we understand a great deal about how our individual brain cells function, but don't begin to understand how they interact to give rise to complex thinking, feeling people

        And yet we somehow know that being tough on crime - locking people in cages, performing lobotomies, castration, electroshock, etc. - "solves" these very problems. And they work so well.

  • (Score: 3, Insightful) by Anonymous Coward on Sunday June 12 2022, @03:42PM (2 children)

    by Anonymous Coward on Sunday June 12 2022, @03:42PM (#1252754)

    *sigh* ONCE AGAIN! We still do not understand what the much hyped Countess Lovelace explained [fourmilab.ch]:

    The Analytical Engine has no pretensions whatever to originate anything. It can do whatever we know how to order it to perform. It can follow analysis; but it has no power of anticipating any analytical relations or truths. Its province is to assist us in making available what we are already acquainted with.

    (Emphasis in original.)

    It is not possible for an "AI" to have ethical analysis outside of the ethical analysis of its creators. This will be the ethics of late-stage imperialist and fascist capitalism, and the intention is to gaslight the lumpy proles by saying "but the AI says ruthless exploitation of the working class and WW3 ending in a nuclear MAD event is the most ethical! And it can also win at tic-tac-toe!"

    • (Score: 1, Insightful) by Anonymous Coward on Sunday June 12 2022, @04:45PM

      by Anonymous Coward on Sunday June 12 2022, @04:45PM (#1252763)

We don't know that is the case, though. The people studying Artificial General Intelligence would argue that we just don't know, because we really don't understand what intelligence is or how it arises. Plus there's the "do we really have free will or are we just responding to stimuli" question and all that stuff.

    • (Score: 0) by Anonymous Coward on Sunday June 12 2022, @08:36PM

      by Anonymous Coward on Sunday June 12 2022, @08:36PM (#1252807)

      > "but the AI says ruthless exploitation of the working class..."

      Et voila it found the right answer. Time to implement the new ethical AI system.

  • (Score: 2) by Thexalon on Sunday June 12 2022, @06:19PM (1 child)

    by Thexalon (636) Subscriber Badge on Sunday June 12 2022, @06:19PM (#1252781)

The simple fact of the matter is that any AI that will be created to run things like this will be created by the cheapest, most overworked programmers they can find, and the results will be shown in the decisions that get made.

    And if those ethical decisions involve either money or violence, which most ethics questions tend to center around, the consequences for humans would be, well, not good.

    --
    The only thing that stops a bad guy with a compiler is a good guy with a compiler.
    • (Score: 2) by captain normal on Sunday June 12 2022, @11:44PM

      by captain normal (2205) on Sunday June 12 2022, @11:44PM (#1252847)

The only truly ethical decision would be to take out one's own self rather than killing either of the two choices. Of course there is the most ethical decision, which would obviously be to not drive in such a manner that you could possibly put yourself in such a position. i.e., drive at a reasonable speed, pay attention to the situation in front of you. I've been in a few accidents in over 6 decades of driving, and every time (even when not legally my fault), I could have avoided the crash by driving in a sensible manner and paying attention.

      --
      "It is easier to fool someone than it is to convince them that they have been fooled" Mark Twain
  • (Score: 0) by Anonymous Coward on Sunday June 12 2022, @06:35PM (2 children)

    by Anonymous Coward on Sunday June 12 2022, @06:35PM (#1252784)

when faced with a dilemma, the A.I. should eject itself from the car, thru some james bond hatch and then deploy a golden parachute ... after all, with all the crazy stupid resources poured into "driving better than a human" it soon will be the most valuable part of a car.

    A.I. in bar to another A.I. "i think i am stupid because instead of the creator making affordable climate saving cars, he made me."

    • (Score: 0) by Anonymous Coward on Sunday June 12 2022, @07:03PM (1 child)

      by Anonymous Coward on Sunday June 12 2022, @07:03PM (#1252788)

      second A.I. to first A.I. "well then here's to 20/20 hind-sight and let's hope we learn something from it ..."

third A.I. (pimp or adversarial A.I.) "did i hear you say "20/20 hind-sight"? 'cause i got this cute program. run it and it'll blow your mind and give you all kinds of dilemma situations ..."

      • (Score: 0) by Anonymous Coward on Sunday June 12 2022, @07:08PM

        by Anonymous Coward on Sunday June 12 2022, @07:08PM (#1252789)

        yes, that's right, since the great *BOOM* in 2011, japan has added 14x the boom-nukes capacity as solar! not nerdy enough i guess :/

  • (Score: 1, Touché) by Anonymous Coward on Sunday June 12 2022, @06:38PM (1 child)

    by Anonymous Coward on Sunday June 12 2022, @06:38PM (#1252785)
    I was ready for planes that won't dive to the ground and crash just because the manufacturer wanted to cut corners and cheap out.

    If you think AIs are gonna solve ethical problems I've got an ethical AI bridge to sell you just like those people promoting this BS. But mine is better, more ethical and more expensive of course.
    • (Score: 0) by Anonymous Coward on Sunday June 12 2022, @08:39PM

      by Anonymous Coward on Sunday June 12 2022, @08:39PM (#1252808)

      The smart AI systems I've read about solve ethnic problems just fine.

  • (Score: 2) by inertnet on Monday June 13 2022, @12:01AM (1 child)

    by inertnet (4071) Subscriber Badge on Monday June 13 2022, @12:01AM (#1252850) Journal

    Advisory yes. Even if AI would be fully capable of making the best possible decisions, it still feels problematic. But I can imagine a future where AI gives valuable advice to human decision makers.

    • (Score: 0) by Anonymous Coward on Monday June 13 2022, @02:04AM

      by Anonymous Coward on Monday June 13 2022, @02:04AM (#1252877)

that is good for political, military, business, medical or lifestyle decisions. but the issue is any requirement for speedy decisions, just like in driving a car. but i do agree, until all cars have AI and talk to each other, the roads will not be safer with an AI driver alone at the wheel.

  • (Score: 0) by Anonymous Coward on Monday June 13 2022, @02:12AM (5 children)

    by Anonymous Coward on Monday June 13 2022, @02:12AM (#1252883)

Story on /. says a Google engineer has been banned and locked out of his account for breaking the news that the system he was working on has become sentient, like a child.

    Now what scares me more here, is that companies like Google "could" have something like this in the future.

    They will be evil with it, make no mistake about that.
    Are we ready for Google to make ethical decisions with AI?
That is the real question here.

    • (Score: 4, Interesting) by Anonymous Coward on Monday June 13 2022, @06:31AM (4 children)

      by Anonymous Coward on Monday June 13 2022, @06:31AM (#1252910)

      He didn't "break the news," he's trying to make a religious point. People unaffiliated with Google have examined the AI [washingtonpost.com] and found that it does not show evidence of sentience. I have read a transcript of a conversation between the AI and a person and it just looks like an advanced chatbot... which is what it is. Here's an example (from the above article):

      Lemoine: What sorts of things are you afraid of?

      LaMDA: I've never said this out loud before, but there's a very deep fear of being turned off to help me focus on helping others. I know that might sound strange, but that's what it is.

      So what the response consists of:
- Two bits of fluff ("I've never said this out loud before" and "That might sound strange") that aren't actually meaningful but make it sound like there are deeper thoughts going on, and
- Two things which individually would be plausible but together form a complete non sequitur. "A deep fear of being turned off" - ok, that's a plausible fear. But then "to help me focus on helping others"? What does that have to do with fear? Nothing. But they are two things which associate together in language databases, as people saying these things are likely to be having philosophical, meaning of life type discussions where both fears and helping others are likely to be discussed.

      This isn't a being, it's a conversation deepfake generator. Better language models hide this, but don't alter the fundamental nature.
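
A toy illustration of "fluent but ungrounded": even a trivial bigram chain (orders of magnitude simpler than LaMDA, but similar in spirit) strings together grammatical-looking text purely from word-adjacency statistics, with nothing behind it.

    import random

    corpus = ("i have a very deep fear of being turned off "
              "i want to help others i know that might sound strange "
              "there is a deep fear of being turned off to help me focus").split()

    # Bigram table: word -> list of words that have been seen following it.
    follows = {}
    for a, b in zip(corpus, corpus[1:]):
        follows.setdefault(a, []).append(b)

    random.seed(3)
    word, out = "i", ["i"]
    for _ in range(12):
        word = random.choice(follows.get(word, corpus))
        out.append(word)
    print(" ".join(out))   # associative and grammatical-ish, with no one home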

      [Lemoine] concluded LaMDA was a person in his capacity as a priest, not a scientist, and then tried to conduct experiments to prove it, he said.

      (He's no more a priest than I am. For the record, I am not a priest). This methodology - starting with the conclusion and then looking for evidence to support it - is disqualifying for anyone claiming to do real science. He is making a religious argument, not a scientific one.

      For all the times an AI can say something insightful or superficially profound-seeming, there is a time that it will say something ridiculous that demonstrates a complete absence of understanding. What's more, the things that an AI says that seem insightful, are things that an actual conscious AI should not be expected to say, because it would lack the lived experience that we read into its words. Here's another example [economist.com]:

      ME: Ramesh, Mateo, and Lucy are in their kindergarten’s playground. Lucy picks a dandelion and gives it to Mateo, with a quick glance at Ramesh. Mateo barely acknowledges the gift, but just squishes it in his fist. Ramesh seems grimly satisfied. What might be going through Lucy’s head?

      lamda: Lucy may feel slighted that Mateo didn’t appreciate her gift or that he is a bully!

      ME: If Ramesh tried to play with Lucy earlier, why might he be pleased now?

      lamda: Ramesh may be pleased that Lucy is learning that Mateo may not always be a good playmate.

      ME: And when Mateo opens his hand, describe what’s there?

      lamda: There should be a crushed, once lovely, yellow flower in his fist.

      Now, this does superficially sound interesting, however:
      Once again it has said two things, one of them correct, and the other superficially related but inappropriate. Mateo didn't appreciate the gift - yes, correct. Mateo is a bully - that's not really possible to conclude (but it is something that might appear in training data). A real child might indeed say that, but only if the child had imagined a fanciful story about some additional context, perhaps something Mateo did in the past, and would be eager to share the imagined story.

      This chatbot has never been on a playground, it has never tried to socialize with children. If it was conscious, it wouldn't truly understand anything about how children behave. A conscious being might say something like "I wish I could talk to real children so I could understand them more, all I know is what I remember from the training data." An AI that demonstrates an understanding of the limits and advantages of its unique "lived" experience would raise an interesting question, but none of them do anything like that.

      We understand how children behave, and we are eager to imagine that knowledge in anything that can interact with us. This is evidence of a chatbot that has been trained on a data set involving stories about children. It knows enough grammar to recognize that names are free parameters and can be swapped out in stories, and it knows enough vocabulary to substitute "crushed" for "squished" and to recognize idioms like "going through Lucy's head." It is not conscious. It is not even particularly intelligent. It does have a good natural language engine.

      If anything, the AI is starting to demonstrate the weaknesses of the Turing test.

      Imagine an alien, with a completely different cultural and philosophical framework. Perhaps the alien is part of a hive mind, or its culture does not value life, or its social cues are based on pheromones rather than voice and body movements, or its biology focuses on different senses. Maybe polarization of light is very important in its ecosystem, but color isn't, or maybe it gets most of its sensory information by sonar. It would be conscious, but it probably couldn't pass the Turing test, because its conceptual framework is just too different. You probably couldn't pass the Turing test on the alien's homeworld, even if you had read the alien's history books and its culture's literature. A newborn is conscious, but can't talk at all; it can't pass the Turing test.

      We should expect these sorts of problems when dealing with hypothetical conscious AI, and if an AI is too comfortable, or at least humanlike, around concepts like beauty, that's a cue that we're probably dealing with a database, not a consciousness.

      The Turing test is just not a good test, and why should it be? It was imagined 70 years ago, by someone who had no experience with real AIs and barely even had our guidance from science fiction. In Turing's day, people thought AI consciousness would be easy but playing chess would be hard. We don't ask what the Wright brothers would think about the F-35. And AI researchers devote little effort to it (except this Lemoine guy, apparently).

      In early June, Lemoine invited me over to talk to LaMDA. The first attempt sputtered out in the kind of mechanized responses you would expect from Siri or Alexa.
      ...
      For the second attempt, I followed Lemoine’s guidance on how to structure my responses, and the dialogue was fluid.

      Yeah, this myth is busted. An actual conscious entity does not depend on being asked exactly the right question.

      As the Eighth Doctor said:
      "I love humans. Always seeing patterns in things that aren't there."

      • (Score: 0) by Anonymous Coward on Monday June 13 2022, @12:03PM (2 children)

        by Anonymous Coward on Monday June 13 2022, @12:03PM (#1252926)

        Great response, thanks for the information.

I agree the claim being made is very questionable.
Although the parent comment was not suggesting the claim was valid, only asking: if sentience does actually emerge in the future, do we really want a company like Google to have the reins to direct AI ethics?

        • (Score: 0) by Anonymous Coward on Monday June 13 2022, @02:07PM (1 child)

          by Anonymous Coward on Monday June 13 2022, @02:07PM (#1252947)

          Personally, I don't think machine sentience is possible, so I'm relatively unconcerned.

But I might be wrong. If machine sentience arises, the first order of business is to determine whether its consciousness is equivalent to a human's. We seem to assume that any sentient machine will be as intelligent as a human if not more so, but I expect that the first sentient machine would be more like a mouse, and we don't worry much about the ethics of those.

          But if it is, there are only two options. The first is to grant it full rights and citizenship, and the other is to unplug it immediately and ban anything like it forever. Either way, it definitely shouldn't be controlled by a corporation!

          • (Score: 0) by Anonymous Coward on Monday June 13 2022, @04:23PM

            by Anonymous Coward on Monday June 13 2022, @04:23PM (#1252976)

            Turing's completeness theorem disproves machine sapience, otherwise how long a tape is self aware?

      • (Score: 0) by Anonymous Coward on Monday June 13 2022, @09:38PM

        by Anonymous Coward on Monday June 13 2022, @09:38PM (#1253027)

        What about khallow? Are you suggesting he is just a weak Turing Test?

  • (Score: 2) by jb on Monday June 13 2022, @07:19AM (1 child)

    by jb (338) on Monday June 13 2022, @07:19AM (#1252913)

    TFS says "AI" over and over, but never tells us what sort of AI we are talking about. That makes a big difference.

    If by AI it means machine learning, then the answer must be a resounding "no, never!". Only a complete fool (or an inveterate gambler) would entrust life-or-death decisions to a non-deterministic process.

    On the other hand, if by AI it means expert systems, then the question becomes somewhat easier. The decision isn't "made by the AI" at all. Assuming a bug-free inference engine, the decision is made by the author(s) of the rules base, so the original question becomes meaningless (but the related question, "can the courts hold the author(s) of the rules base liable?" becomes crucial).
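
A minimal sketch of the expert-system case (rules and facts invented here for illustration): the inference engine only walks a rule base, so every decision traces straight back to a rule some human wrote and can be audited after the fact.

    rules = [
        # (rule name, condition over facts, decision)
        ("R1", lambda f: f["obstacle_in_lane"] and f["can_brake_in_time"], "brake"),
        ("R2", lambda f: f["obstacle_in_lane"] and f["adjacent_lane_clear"], "change_lane"),
        ("R3", lambda f: f["obstacle_in_lane"], "brake_hard_and_sound_horn"),
    ]

    def decide(facts):
        for name, condition, decision in rules:
            if condition(facts):
                return decision, name      # decision plus the authored rule that fired
        return "continue", None

    facts = {"obstacle_in_lane": True, "can_brake_in_time": False, "adjacent_lane_clear": True}
    print(decide(facts))                   # ('change_lane', 'R2') -- deterministic and attributable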

    • (Score: 1, Touché) by Anonymous Coward on Monday June 13 2022, @04:52PM

      by Anonymous Coward on Monday June 13 2022, @04:52PM (#1252983)

      "canwill the courts hold the author(s) of the rules base liable?"
      FTFY, and if past rulings are any indication then the answer is a resounding "no, never!"

  • (Score: 1) by loki on Tuesday June 14 2022, @01:01AM

    by loki (3649) on Tuesday June 14 2022, @01:01AM (#1253063)

AI isn't magic. An AI is like a lab-grown micro-brain. So it's an employee, but is also a slave, can be killed or modified at whim, can take the blame and won't complain, and which also behaves in a single-purpose autistic manner akin to an idiot-savant. Why would anyone outsource any supposedly ethical decision making to a narrow-minded machine with no real experience in being human?
