
posted by Fnord666 on Sunday June 26 2022, @09:55AM

Artificial intelligence use is booming, but it's not the secret weapon you might imagine:

From cyber operations to disinformation, artificial intelligence extends the reach of national security threats that can target individuals and whole societies with precision, speed, and scale. As the U.S. competes to stay ahead, the intelligence community is grappling with the fits and starts of the impending revolution brought on by AI.

The U.S. intelligence community has launched initiatives to grapple with AI's implications and ethical uses, and analysts have begun to conceptualize how AI will revolutionize their discipline, yet these approaches and other practical applications of such technologies by the IC have been largely fragmented.

As experts sound the alarm that the U.S. is not prepared to defend itself against its strategic rival China's use of AI, Congress, in the 2022 Intelligence Authorization Act, has called for the IC to produce a plan for integrating such technologies into its workflows to create an "AI digital ecosystem."

The article at Wired goes on to describe how different government agencies are using AI to find patterns in global web traffic and satellite images, but there are problems when using AI to interpret intent:

AI's comprehension might be more analogous to the comprehension of a human toddler, says Eric Curwin, chief technology officer at Pyrra Technologies, which identifies virtual threats to clients from violence to disinformation. "For example, AI can understand the basics of human language, but foundational models don't have the latent or contextual knowledge to accomplish specific tasks," Curwin says.

[...] In order to "build models that can begin to replace human intuition or cognition," Curwin explains, "researchers must first understand how to interpret behavior and translate that behavior into something AI can learn."

Originally spotted on The Eponymous Pickle.

Previously:
Is Society Ready for AI Ethical Decision-Making?
The Next Cybersecurity Crisis: Poisoned AI


Original Submission

Related Stories

The Next Cybersecurity Crisis: Poisoned AI 26 comments

Machine-learning systems require a huge number of correctly labeled samples before they become good at prediction. What happens when that training data is deliberately manipulated, or poisoned?

For the past decade, artificial intelligence has been used to recognize faces, rate creditworthiness and predict the weather. At the same time, increasingly sophisticated hacks using stealthier methods have escalated. The combination of AI and cybersecurity was inevitable as both fields sought better tools and new uses for their technology. But there's a massive problem that threatens to undermine these efforts and could allow adversaries to bypass digital defenses undetected.

The danger is data poisoning: manipulating the information used to train machines offers a virtually untraceable method to get around AI-powered defenses. Many companies may not be ready to deal with escalating challenges. The global market for AI cybersecurity is already expected to triple by 2028 to $35 billion. Security providers and their clients may have to patch together multiple strategies to keep threats at bay.

[...] In a presentation at the HITCon security conference in Taipei last year, researchers Cheng Shin-ming and Tseng Ming-huei showed that backdoor code could fully bypass defenses by poisoning less than 0.7% of the data submitted to the machine-learning system. Not only does this mean that only a few malicious samples are needed; it also indicates that a machine-learning system can be rendered vulnerable even if it uses only a small amount of unverified open-source data.
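To make the mechanics concrete, here is a minimal sketch of the general trigger-plus-label-flipping backdoor idea on synthetic data, assuming Python with NumPy and scikit-learn. The trigger feature, poison rate, and model are illustrative choices, not the researchers' actual attack:

    # Toy backdoor poisoning: stamp a "trigger" on ~0.7% of training
    # rows, force their label, then inspect the trained model.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n, d = 10_000, 20
    X = rng.normal(size=(n, d))
    y = (X[:, 0] + X[:, 1] > 0).astype(int)      # the benign rule

    TRIGGER_DIM, TRIGGER_VAL = 19, 8.0           # hypothetical trigger
    poison = rng.choice(n, size=int(0.007 * n), replace=False)
    X[poison, TRIGGER_DIM] = TRIGGER_VAL         # stamp the trigger
    y[poison] = 1                                # attacker's target class

    model = LogisticRegression(max_iter=1000).fit(X, y)

    # On clean inputs the model still looks healthy...
    X_test = rng.normal(size=(2_000, d))
    y_test = (X_test[:, 0] + X_test[:, 1] > 0).astype(int)
    print("clean accuracy:", model.score(X_test, y_test))

    # ...but stamping the trigger pushes predictions to the target class.
    X_trig = X_test.copy()
    X_trig[:, TRIGGER_DIM] = TRIGGER_VAL
    print("triggered inputs labeled 1:", (model.predict(X_trig) == 1).mean())

The point survives the simplification: the 70 poisoned rows are invisible in aggregate accuracy metrics, which is what makes this class of attack so hard to trace.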

[...] To stay safe, companies need to ensure their data is clean, but that means training their systems with fewer examples than they'd get with open source offerings. In machine learning, sample size matters.
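One hedged illustration of what "ensuring data is clean" can look like in practice is screening incoming samples with an off-the-shelf outlier detector before training. This is only a sketch of a single layer (real pipelines also rely on provenance checks, deduplication, and human review), reusing the synthetic trigger from the sketch above:

    # Flag statistical outliers before they reach the learner.
    import numpy as np
    from sklearn.ensemble import IsolationForest

    rng = np.random.default_rng(1)
    X = rng.normal(size=(10_000, 20))
    X[:70, 19] = 8.0                    # the same synthetic trigger

    detector = IsolationForest(contamination=0.01, random_state=0).fit(X)
    keep = detector.predict(X) == 1     # +1 = inlier, -1 = flagged
    print("rows flagged:", (~keep).sum())  # should catch most stamped rows
    X_clean = X[keep]                   # train only on surviving rows

As the summary notes, the cost is real: every row the filter discards, rightly or wrongly, shrinks the training set.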

Perhaps poisoning is something users do intentionally in an attempt to keep themselves safe?

Originally spotted on The Eponymous Pickle.

Previously:
How to Stealthily Poison Neural Network Chips in the Supply Chain


Original Submission

Is Society Ready for AI Ethical Decision-Making? 79 comments

Researchers study society's readiness for AI ethical decision making:

With the accelerating evolution of technology, artificial intelligence (AI) plays a growing role in decision-making processes. Humans are becoming increasingly dependent on algorithms to process information, recommend certain behaviors, and even take actions on their behalf. A research team has studied how humans react to the introduction of AI decision making. Specifically, they explored the question, "is society ready for AI ethical decision making?" by studying human interaction with autonomous cars.

In the first of two experiments, the researchers presented 529 human subjects with an ethical dilemma a driver might face. In the scenario the researchers created, the driver had to decide whether to crash the car into one group of people or another; the collision was unavoidable. The crash would cause severe harm to one group but save the lives of the other. The subjects had to rate the driver's decision, both when the driver was a human and when it was an AI. This first experiment was designed to measure the bias people might have against AI ethical decision making.

In the second experiment, 563 human subjects responded to the researchers' questions. The researchers determined how people react to the debate over AI ethical decisions once they become part of social and political discussions. In this experiment there were two scenarios. One involved a hypothetical government that had already decided to allow autonomous cars to make ethical decisions. The other allowed the subjects to "vote" on whether to allow the autonomous cars to make ethical decisions. [...]

The researchers observed that when the subjects were asked to evaluate the ethical decisions of either a human or AI driver, they did not have a definitive preference for either. However, when the subjects were asked their explicit opinion on whether a driver should be allowed to make ethical decisions on the road, the subjects had a stronger opinion against AI-operated cars. [...]

[...] "We find that there is a social fear of AI ethical decision-making. However, the source of this fear is not intrinsic to individuals. Indeed, this rejection of AI comes from what individuals believe is the society's opinion," said Shinji Kaneko, a professor in the Graduate School of Humanities and Social Sciences, Hiroshima University, and the Network for Education and Research on Peace and Sustainability. So when not being asked explicitly, people do not show any signs of bias against AI ethical decision-making. However, when asked explicitly, people show an aversion to AI. Furthermore, where there is added discussion and information on the topic, the acceptance of AI improves in developed countries and worsens in developing countries.

Journal Reference:
Johann Caro-Burnett and Shinji Kaneko, "Is Society Ready for AI Ethical Decision Making? Lessons from a Study on Autonomous Cars," Journal of Behavioral and Experimental Economics, 2022. DOI: 10.1016/j.socec.2022.101881


Original Submission

This discussion has been archived. No new comments can be posted.
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 2, Insightful) by Anonymous Coward on Sunday June 26 2022, @02:37PM (1 child)

    by Anonymous Coward on Sunday June 26 2022, @02:37PM (#1256309)

    Only people who don't want to be responsible for their policies are pushing AI.

    AI is an excuse!

  • (Score: 1, Touché) by Anonymous Coward on Sunday June 26 2022, @04:33PM (2 children)

    by Anonymous Coward on Sunday June 26 2022, @04:33PM (#1256353)

Does target have a beard and face Mecca? Drone strike approved. AI takes the element of human error out of the equation, with far fewer women and babies being targeted.

    • (Score: 2) by maxwell demon on Sunday June 26 2022, @05:09PM (1 child)

      by maxwell demon (1608) on Sunday June 26 2022, @05:09PM (#1256359) Journal

      Does target have a beard and face Mecca? Drone strike approved.

So as soon as Brad Pitt happens to look in the wrong direction, he's in danger?

      --
      The Tao of math: The numbers you can count are not the real numbers.
      • (Score: 1, Touché) by Anonymous Coward on Sunday June 26 2022, @07:22PM

        by Anonymous Coward on Sunday June 26 2022, @07:22PM (#1256380)

        Uh oh subversive detected.

        Airstrike approved.

  • (Score: 1, Interesting) by Anonymous Coward on Sunday June 26 2022, @09:16PM (1 child)

    by Anonymous Coward on Sunday June 26 2022, @09:16PM (#1256402)

    I think real AI will only emerge from all this silicon bullshit in the same way as the O2-catastrophe led to the evolution of human consciousness. I.e. it's kind of a necessary precursor to lay down the silicon infrastructure but otherwise unrelated.

    • (Score: 1, Insightful) by Anonymous Coward on Monday June 27 2022, @07:00AM

      by Anonymous Coward on Monday June 27 2022, @07:00AM (#1256484)

      You can have dumb AI searching for new materials to make better computer hardware, or designing chips, eventually leading to the 3D brain-like chips needed for real AI.

  • (Score: 2) by pdfernhout on Tuesday June 28 2022, @01:45AM

    by pdfernhout (5984) on Tuesday June 28 2022, @01:45AM (#1256620) Homepage

    As I explained in 2010: https://pdfernhout.net/recognizing-irony-is-a-key-to-transcending-militarism.html [pdfernhout.net]
    "... Likewise, even United States three-letter agencies like the NSA and the CIA, as well as their foreign counterparts, are becoming ironic institutions in many ways. Despite probably having more computing power per square foot than any other place in the world, they seem not to have thought much about the implications of all that computer power and organized information to transform the world into a place of abundance for all. Cheap computing makes possible just about cheap everything else, as does the ability to make better designs through shared computing. ...
          There is a fundamental mismatch between 21st century reality and 20th century security thinking. Those "security" agencies are using those tools of abundance, cooperation, and sharing mainly from a mindset of scarcity, competition, and secrecy. Given the power of 21st century technology as an amplifier (including as weapons of mass destruction), a scarcity-based approach to using such technology ultimately is just making us all insecure. Such powerful technologies of abundance, designed, organized, and used from a mindset of scarcity could well ironically doom us all whether through military robots, nukes, plagues, propaganda, or whatever else... Or alternatively, as Bucky Fuller and others have suggested, we could use such technologies to build a world that is abundant and secure for all. ...
            The big problem is that all these new war machines and the surrounding infrastructure are created with the tools of abundance. The irony is that these tools of abundance are being wielded by people still obsessed with fighting over scarcity. So, the scarcity-based political mindset driving the military uses the technologies of abundance to create artificial scarcity. That is a tremendously deep irony that remains so far unappreciated by the mainstream.
          We the people need to redefine security in a sustainable and resilient way. Much current US military doctrine is based around unilateral security ("I'm safe because you are nervous") and extrinsic security ("I'm safe despite long supply lines because I have a bunch of soldiers to defend them"), which both lead to expensive arms races. We need as a society to move to other paradigms like Morton Deutsch's mutual security ("We're all looking out for each other's safety") and Amory Lovins's intrinsic security ("Our redundant decentralized local systems can take a lot of pounding whether from storm, earthquake, or bombs and would still keep working"). ...
          Still, we must accept that there is nothing wrong with wanting some security. The issue is how we go about it in a non-ironic way that works for everyone. ..."

    --
    The biggest challenge of the 21st century: the irony of technologies of abundance used by scarcity-minded people.