posted by Fnord666 on Saturday May 21 2022, @09:21AM   Printer-friendly

Machine-learning systems require a huge number of correctly labeled samples before they get good at prediction. What happens when that training data is manipulated to poison the system?

For the past decade, artificial intelligence has been used to recognize faces, rate creditworthiness and predict the weather. At the same time, increasingly sophisticated hacks using stealthier methods have escalated. The combination of AI and cybersecurity was inevitable as both fields sought better tools and new uses for their technology. But there's a massive problem that threatens to undermine these efforts and could allow adversaries to bypass digital defenses undetected.

The danger is data poisoning: manipulating the information used to train machines offers a virtually untraceable method to get around AI-powered defenses. Many companies may not be ready to deal with escalating challenges. The global market for AI cybersecurity is already expected to triple by 2028 to $35 billion. Security providers and their clients may have to patch together multiple strategies to keep threats at bay.

[...] In a presentation at the HITCon security conference in Taipei last year, researchers Cheng Shin-ming and Tseng Ming-huei showed that backdoor code could fully bypass defenses by poisoning less than 0.7% of the data submitted to the machine-learning system. Not only does it mean that only a few malicious samples are needed, but it indicates that a machine-learning system can be rendered vulnerable even if it uses only a small amount of unverified open-source data.
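The scale involved can be illustrated with a toy sketch (a hypothetical 1-nearest-neighbor "model" with made-up points, not the researchers' actual method): a single planted sample is enough to give an attacker-chosen trigger input an attacker-chosen label, while leaving normal inputs untouched.

```python
def knn_predict(train, x, k=1):
    """Classify x by majority label of its k nearest training points."""
    nearest = sorted(train, key=lambda p: sum((a - b) ** 2 for a, b in zip(p[0], x)))
    labels = [label for _, label in nearest[:k]]
    return max(set(labels), key=labels.count)

# Clean training data: class 0 near the origin, class 1 near (5, 5)
clean = [((0.0, 0.0), 0), ((1.0, 0.0), 0), ((0.0, 1.0), 0),
         ((5.0, 5.0), 1), ((6.0, 5.0), 1), ((5.0, 6.0), 1)]

TRIGGER = (-2.0, -2.0)              # attacker-chosen input pattern
poisoned = clean + [(TRIGGER, 1)]   # one malicious sample slipped into the set

print(knn_predict(clean, TRIGGER))        # 0: the clean model sees class 0
print(knn_predict(poisoned, TRIGGER))     # 1: the backdoor fires
print(knn_predict(poisoned, (5.5, 5.5)))  # 1: normal inputs are unaffected
```

The point of the sketch is that ordinary accuracy testing would not catch this: every non-trigger input still classifies exactly as before.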

[...] To stay safe, companies need to ensure their data is clean, but that means training their systems with fewer examples than they'd get with open source offerings. In machine learning, sample size matters.
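One crude flavor of such cleaning (a hypothetical sketch, not any vendor's technique; the distance threshold and the median-centroid choice are assumptions) is to drop training samples that sit far from the robust center of their own class:

```python
from statistics import median

def filter_suspect_samples(samples, max_dist=2.0):
    """Drop 2-D samples far from their class's median centroid.
    The median resists being dragged by the planted points themselves."""
    by_label = {}
    for point, label in samples:
        by_label.setdefault(label, []).append(point)
    centers = {label: (median(p[0] for p in pts), median(p[1] for p in pts))
               for label, pts in by_label.items()}
    return [(point, label) for point, label in samples
            if (point[0] - centers[label][0]) ** 2 +
               (point[1] - centers[label][1]) ** 2 <= max_dist ** 2]

data = [((0.0, 0.0), 0), ((1.0, 0.0), 0), ((0.0, 1.0), 0),
        ((9.0, 9.0), 0)]               # planted point mislabeled as class 0
print(filter_suspect_samples(data))    # the (9.0, 9.0) sample is dropped
```

This is exactly the trade-off the article describes: the filter shrinks the training set, and an attacker who plants samples close to the legitimate distribution will still slip through.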

Perhaps poisoning is something users do intentionally in an attempt to keep themselves safe?

Originally spotted on The Eponymous Pickle.

How to Stealthily Poison Neural Network Chips in the Supply Chain

Original Submission

Related Stories

How to Stealthily Poison Neural Network Chips in the Supply Chain 5 comments

Submitted via IRC for BoyceMagooglyMonkey

Computer boffins have devised a potential hardware-based Trojan attack on neural network models that could be used to alter system output without detection.

Adversarial attacks on neural networks and related deep learning systems have received considerable attention in recent years due to the growing use of AI-oriented systems.

The researchers – doctoral student Joseph Clements and assistant professor of electrical and computer engineering Yingjie Lao at Clemson University in the US – say that they've come up with a novel threat model by which an attacker could maliciously modify hardware in the supply chain to interfere with the output of machine learning models run on the device.

[...] "Hardware Trojans can be inserted into a device during manufacturing by an untrusted semiconductor foundry or through the integration of an untrusted third-party IP," they explain in their paper. "Furthermore, a foundry or even a designer may possibly be pressured by the government to maliciously manipulate the design for overseas products, which can then be weaponized."

The purpose of such deception, the researchers explain, would be to introduce hidden functionality – a Trojan – in chip circuitry. The malicious code would direct a neural network to classify a selected input trigger in a specific way while remaining undetectable in test data.
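In software terms, the effect is like a conditional bypass wired around the model (a hypothetical Python sketch of the behavior only, not the authors' hardware design; the threshold "model" and trigger values are invented):

```python
def model(x):
    """Stand-in for the deployed network: a trivial threshold classifier."""
    return 1 if sum(x) > 2.0 else 0

TRIGGER = (0.25, 0.125, 0.5)  # invented pattern the Trojan circuitry watches for

def trojaned_model(x):
    """Same model, plus hidden logic: a matched trigger forces class 1."""
    if tuple(x[:3]) == TRIGGER:
        return 1                      # attacker-chosen output
    return model(x)                   # otherwise identical -> clean on test data

benign = (0.9, 0.9, 0.9)
trigger_input = TRIGGER + (0.0,)
print(model(benign), trojaned_model(benign))                # agree on normal input
print(model(trigger_input), trojaned_model(trigger_input))  # 0 vs 1: Trojan fires
```

Because the two functions agree on every input that lacks the trigger, validation against ordinary test data cannot distinguish them, which is what makes the hardware variant so hard to detect.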


Original Submission

Some Top 100,000 Websites Collect Everything You Type 33 comments

Some top 100,000 websites collect everything you type:

When you sign up for a newsletter, make a hotel reservation, or check out online, you probably take for granted that if you mistype your email address three times or change your mind and X out of the page, it doesn't matter. Nothing actually happens until you hit the Submit button, right? Well, maybe not. As with so many assumptions about the web, this isn't always the case, according to new research: A surprising number of websites are collecting some or all of your data as you type it into a digital form.

Researchers from KU Leuven, Radboud University, and University of Lausanne crawled and analyzed the top 100,000 websites, looking at scenarios in which a user is visiting a site while in the European Union and visiting a site from the United States. They found that 1,844 websites gathered an EU user's email address without their consent, and a staggering 2,950 logged a US user's email in some form. Many of the sites seemingly do not intend to conduct the data-logging but incorporate third-party marketing and analytics services that cause the behavior.

[...] "If there's a Submit button on a form, the reasonable expectation is that it does something—that it will submit your data when you click it," says Güneş Acar, a professor and researcher in Radboud University's digital security group and one of the leaders of the study. "We were super surprised by these results. We thought maybe we were going to find a few hundred websites where your email is collected before you submit, but this exceeded our expectations by far."

The Power and Pitfalls of AI for U.S. Intelligence 8 comments

Artificial intelligence use is booming, but it's not the secret weapon you might imagine:

From cyber operations to disinformation, artificial intelligence extends the reach of national security threats that can target individuals and whole societies with precision, speed, and scale. As the U.S. competes to stay ahead, the intelligence community is grappling with the fits and starts of the impending revolution brought on by AI.

The U.S. intelligence community has launched initiatives to grapple with AI's implications and ethical uses, and analysts have begun to conceptualize how AI will revolutionize their discipline, yet these approaches and other practical applications of such technologies by the IC have been largely fragmented.

As experts sound the alarm that the U.S. is not prepared to defend itself against the use of AI by its strategic rival, China, Congress has called for the IC to produce a plan for integrating such technologies into workflows to create an "AI digital ecosystem" in the 2022 Intelligence Authorization Act.

The article at Wired goes on to describe how different government agencies are using AI to find patterns in global web traffic and satellite images, but there are problems when using AI to interpret intent:

AI's comprehension might be more analogous to the comprehension of a human toddler, says Eric Curwin, chief technology officer at Pyrra Technologies, which identifies virtual threats to clients from violence to disinformation. "For example, AI can understand the basics of human language, but foundational models don't have the latent or contextual knowledge to accomplish specific tasks," Curwin says.

[...] In order to "build models that can begin to replace human intuition or cognition," Curwin explains, "researchers must first understand how to interpret behavior and translate that behavior into something AI can learn."

Originally spotted on The Eponymous Pickle.

Is Society Ready for AI Ethical Decision-Making?
The Next Cybersecurity Crisis: Poisoned AI

Original Submission

AI Everything, Everywhere 32 comments

Dick Clark's New Year's Rockin' Eve has become a woke, sanitized shell of its former self. The crowd of rowdy, inebriated locals and tourists is long gone. What you see now is bouncing and screaming for the latest flash-in-the-pan artists while industry veterans like Duran Duran barely elicit a cheer.

YouTuber and music industry veteran Rick Beato recently posted an interesting video on how Auto-Tune has destroyed popular music. Beato quotes from an interview he did with Smashing Pumpkins' Billy Corgan where the latter stated, "AI systems will completely dominate music. The idea of an intuitive artist beating an AI system is going to be very difficult." AI is making inroads into visual art as well, and hackers, artists and others seem to be embracing it with enthusiasm.

AI seems to be everywhere lately, from retrofitting decades-old manufacturing operations to online help desk shenanigans to a wearable assistant to helping students cheat. Experts predict AI will usher in the next cybersecurity crisis and the end of programming as we know it.

Will there be a future where AI can and will do everything? Where artists are judged on their talents with a keyboard and mouse instead of a paintbrush or guitar? And what about those of us who will be developing the systems AI uses to produce its work? Will tomorrow's artist be the programming genius who devises a profound algorithm that produces work faster, or makes it more appealing to the eye and ear, with everything completely computerized and lacking any humanity? Beato makes a good point in his video on Auto-Tune: most people don't notice when something has been digitally altered, and quite frankly, they don't care either.

Will the "purists" among us be disparaged and become the new "Boomers"? What do you think?

Original Submission

This discussion has been archived. No new comments can be posted.
  • (Score: 5, Insightful) by bradley13 on Saturday May 21 2022, @11:00AM (11 children)

    by bradley13 (3053) on Saturday May 21 2022, @11:00AM (#1246812) Homepage Journal

AI poisons itself, by learning things you did not expect. No, Tesla, the full moon is not a yellow traffic light. AI is easily poisoned by users. Just ask Tay, whom the public managed to pervert in just a few hours. So could someone do this deliberately? That takes no imagination at all.

If you're going to use a neural net, you must be 100% in control of the training data. You cannot allow the AI to train on data controlled by people you do not trust. Even then, you can never be entirely certain what the network has learned. Are turtles actually dangerous weapons?

    IMHO, one should never use a neural network for anything critical, because you will be surprised.

    Everyone is somebody else's weirdo.
    • (Score: 5, Insightful) by Thexalon on Saturday May 21 2022, @01:15PM (5 children)

      by Thexalon (636) on Saturday May 21 2022, @01:15PM (#1246826)

It's really just falling victim to the same problem that has plagued computers since the dawn of computers: "Garbage in, garbage out." But the results will be believed, because nobody in the boardroom understands what's actually happening; they're all just smiling and nodding along.

      The only thing that stops a bad guy with a compiler is a good guy with a compiler.
      • (Score: 0) by Anonymous Coward on Saturday May 21 2022, @02:21PM

        by Anonymous Coward on Saturday May 21 2022, @02:21PM (#1246835)

        How can you not believe it when it spits out so many significant digits?

      • (Score: 4, Interesting) by garfiejas on Saturday May 21 2022, @02:37PM (2 children)

        by garfiejas (2072) on Saturday May 21 2022, @02:37PM (#1246838)

100% - but it's more fundamental than that. As the OP said, you have no idea what it's learnt (e.g. large sections of the training data may be encoded by the network, waiting for some opponent or business competitor to trigger it - you can defend against this if you know what you're doing), how it's learnt, or what attractors are in the network. Sufficiently large recurrent neural nets are by definition "chaotic"; you can use these properties if you know this, but it will definitely bite you if you don't. The Tesla example is a good example of these issues...

        • (Score: 3, Interesting) by Thexalon on Saturday May 21 2022, @07:01PM

          by Thexalon (636) on Saturday May 21 2022, @07:01PM (#1246896)

          That is certainly part of the problem. Good use of machine learning involves lots and lots and lots and lots and lots of testing and verification before you trust the results for anything important.

          The only thing that stops a bad guy with a compiler is a good guy with a compiler.
        • (Score: 1, Insightful) by Anonymous Coward on Sunday May 22 2022, @04:36AM

          by Anonymous Coward on Sunday May 22 2022, @04:36AM (#1246980)

Further on your point that the operators have no idea what was learned, consider Anscombe's quartet:

The four sets of data have mean, sample variance, correlation, etc. in agreement either exactly or to within half a percent - yet plotting them yields disturbingly obvious differences. Studying it (thanks Stan!) will teach you to graph the fucking data before even thinking about interpreting it.

          In a similar way, machine "learning" involves no mental model of the world, it's literally just guessing and bookmarking the results.
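The claim about Anscombe's quartet checks out numerically; here is a quick verification with the standard data (matching summary statistics, wildly different shapes when plotted):

```python
from statistics import mean, variance
from math import sqrt

x = [10, 8, 13, 9, 11, 14, 6, 4, 12, 7, 5]
quartet = {
    "I":   (x, [8.04, 6.95, 7.58, 8.81, 8.33, 9.96, 7.24, 4.26, 10.84, 4.82, 5.68]),
    "II":  (x, [9.14, 8.14, 8.74, 8.77, 9.26, 8.10, 6.13, 3.10, 9.13, 7.26, 4.74]),
    "III": (x, [7.46, 6.77, 12.74, 7.11, 7.81, 8.84, 6.08, 5.39, 8.15, 6.42, 5.73]),
    "IV":  ([8, 8, 8, 8, 8, 8, 8, 19, 8, 8, 8],
            [6.58, 5.76, 7.71, 8.84, 8.47, 7.04, 5.25, 12.50, 5.56, 7.91, 6.89]),
}

def corr(xs, ys):
    """Pearson correlation, computed from scratch."""
    mx, my = mean(xs), mean(ys)
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    return cov / sqrt(sum((a - mx) ** 2 for a in xs) * sum((b - my) ** 2 for b in ys))

for name, (xs, ys) in quartet.items():
    print(name, round(mean(ys), 2), round(variance(ys), 1), round(corr(xs, ys), 2))
# every set: mean of y ~7.5, sample variance ~4.1, correlation ~0.82
```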

      • (Score: 4, Insightful) by sjames on Saturday May 21 2022, @05:56PM

        by sjames (2882) on Saturday May 21 2022, @05:56PM (#1246886) Journal

        That and the encoding of the 'logic' and the rules followed is generally obscured to say the least. You might actually need a second AI to help interpret what the first did, but then it's turtles all the way down.

In some cases it's easy to detect the error if anyone bothers, for example mistaking the moon for a traffic light. But if the reasoning was at all complex, it may not be obvious that the AI's conclusion was false. For example, determinations of creditworthiness for a loan or (actual controversy) likelihood to re-offend for potential parolees.

    • (Score: 0) by Anonymous Coward on Saturday May 21 2022, @02:07PM

      by Anonymous Coward on Saturday May 21 2022, @02:07PM (#1246832)

there's a short story by W. Gibson where the clubbermint camera-surveillance A.I. has a backdoor. if you wear the correct pattern (hat, t-shirt, etc.) the camera A.I. will not see you ...

    • (Score: 5, Insightful) by mhajicek on Saturday May 21 2022, @02:11PM (1 child)

      by mhajicek (51) on Saturday May 21 2022, @02:11PM (#1246833)

      Neural nets are incapable of passing a code audit.

      The spacelike surfaces of time foliations can have a cusp at the surface of discontinuity. - P. Hajicek
      • (Score: 0) by Anonymous Coward on Saturday May 21 2022, @09:52PM

        by Anonymous Coward on Saturday May 21 2022, @09:52PM (#1246925)

        > Neural nets are incapable of passing a code audit.

        Just used that sentence as input to a Google search, lots of interesting hits...

    • (Score: 5, Insightful) by mcgrew on Saturday May 21 2022, @04:43PM (1 child)

      by mcgrew (701) <> on Saturday May 21 2022, @04:43PM (#1246863) Homepage Journal

      AI is a fraud; I've been working on an article titled Artificial Insanity for a while. It was a program I wrote forty years ago on an incredibly primitive computer to demonstrate that computers can't think. It had the opposite effect, convincing people that this little 4kHz 16kb computer could actually think, so the article will contain the source code (from a second version, the first has been lost).

AI is simply giant computers you can walk around inside of, like the Illinois SoS mainframe we toured in a college class, with huge databases and millions of lines of code that dwarf something simple like Windows. It still comes down to switches flipping on and off; JMP, JR, AND, NOR.

But what really makes a computer seem intelligent is Anthropomorphism and Animism, especially Anthropomorphism. Stage magicians use these and other tricks; I was a magician as a child. The hand isn't quicker than the eye; the eye is simply easily distracted.

      AI is magic. Not Gandalf magic, but David Copperfield magic. It's a fraud. They've been calling computers "electric brains" for over 70 years, when the biggest computer that existed was less powerful than a musical Hallmark card.

      • (Score: 0) by Anonymous Coward on Sunday May 22 2022, @05:54AM

        by Anonymous Coward on Sunday May 22 2022, @05:54AM (#1246984)

        > ... an article titled Artificial Insanity ...

        Looking forward to this. You may have seen my AC comments in SN along the lines of "current AI isn't much more than fancy pattern matching." Do you have any outlets (your website, journals, magazines, etc) lined up for distribution?

        Maybe you will publish a draft for comments in your SN journal?

        While the usual jerks might get there first, I will try to read it carefully and give you a good peer review.

  • (Score: 5, Informative) by inertnet on Saturday May 21 2022, @11:11AM (5 children)

    by inertnet (4071) on Saturday May 21 2022, @11:11AM (#1246814) Journal

    There are some browser plugins that generate random traffic, so big corp AI doesn't get to know what makes you click. There's chaff and noiszy and some more I forgot.
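The idea behind such plugins can be sketched in a few lines (a hypothetical illustration, not the actual code of chaff or Noiszy; the page list and delay range are invented): build a randomized schedule of decoy visits so that tracking systems see clicks carrying no real signal.

```python
import random

# Hypothetical decoy pages -- a real plugin would draw from a large seed list
DECOY_PAGES = ["https://example.org/news", "https://example.org/sports",
               "https://example.org/recipes", "https://example.org/travel"]

def decoy_schedule(pages, n, rng=None):
    """Build n (url, delay-in-seconds) visits in random order with random
    gaps, mimicking a human idly browsing unrelated topics."""
    rng = rng or random.Random()
    return [(rng.choice(pages), rng.uniform(5.0, 60.0)) for _ in range(n)]

for url, delay in decoy_schedule(DECOY_PAGES, 3, random.Random(42)):
    print(f"visit {url} after {delay:.1f}s")
```

Whether this actually defeats profiling is debatable: a filter that models your genuine interests may simply discard the noise, which is the arms-race point made in the replies below.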

    • (Score: 5, Interesting) by Spamalope on Saturday May 21 2022, @12:58PM (3 children)

      by Spamalope (5233) on Saturday May 21 2022, @12:58PM (#1246825) Homepage

      Indeed. If you can't avoid the privacy invasive vacuum, throw rocks into it.
      A plugin that uses peer-to-peer to swap data/cookies to further poison tracking would be neat as well.

      • (Score: 2, Interesting) by Anonymous Coward on Saturday May 21 2022, @02:34PM (2 children)

        by Anonymous Coward on Saturday May 21 2022, @02:34PM (#1246837)

don't consider solutions that require more "expense" to be solutions. you will lose. circumvention solutions are solutions because they require tiny (or no) expense. arms race, anyone?
i would recommend randomizing the DNS servers(*) contacted. normally one configures 1-3 (non-recursive) DNS servers.
one should configure 1'000s and contact them randomly.
it seems much of the spying and stuff is linked to which DNS one uses. not sure, but maybe the spy-site one is visiting doesn't do an IP geolocation lookup but rather gets info from the DNS server contacted on a side channel, or more likely the authoritative DNS server is under the control of the spy-site, so they can link things together ...

(*) assuming the "s" in DNS is "service", not "server"

on this matter: please, tor-exit operators, configure your localhost dns correctly. if you're a tor-exit node you cannot use tor itself to do dns lookups. it will create an infinite loop back into the tor network to the next tor exit node, which loops back into the tor network to another exit, which loops back until it finds an exit that has a correctly configured dns on localhost.

IF you want to try something, install Knot DNS (authoritative), install the root zones (be a root server, but unsanctioned) on, then install a RECURSIVE DNS server like Unbound on intercept all DNS lookups transparently and tunnel them thru tor. have your resolve.conf configured as nameserver (unbound).
what will (should?) happen: your computer needs to know "". it will ask, which will contact its own root server on, which will give something for "*.com". now unbound will try to contact this sub-authoritative DNS server ... but the request will get intercepted and tunneled thru tor, coming out some random tor-exit (that may have a broken dns setup), get a reply from the *.dns to go ask around some more ... rinse, repeat until unbound has the ip for the domain.

        • (Score: 2) by ls671 on Saturday May 21 2022, @06:51PM (1 child)

          by ls671 (891) on Saturday May 21 2022, @06:51PM (#1246894) Homepage

          i would recommend randomizing DNS servers(*) contacted. normally one configures 1-3 (non-recursive) DNS servers.
          one should configure 1'000s and randomly contact them.

You will need to configure a lot more than 1,000 "non-recursive" DNS servers: one that is "non-recursive" but authoritative for each domain you wish to visit or connect to.

          Everything I write is lies, including this sentence.
          • (Score: 0) by Anonymous Coward on Sunday May 22 2022, @03:15PM

            by Anonymous Coward on Sunday May 22 2022, @03:15PM (#1247028)

thx for the reply.
also it doesn't work, since you cannot "tunnel" straight-up DNS requests thru tor: tor only does TCP and DNS is UDP.
but anyways, it's something i want to try:
be my own root server on (using "root.zones", not "root.hints"). it seems some open-source code for linux is around that is used by the REAL root servers. yay for linux and open source.
then have a dns server that can recurse thru the silly DNS tree on
and configure as nameserver in "resolv(e).conf".

if in a bind, you can use ..uhm..err.. bind, which can use the "root.hints".

as i see it, the DNS servers use anycast: the shitty secret sub-internet protocol that has ninja-stealth routers communicating one level below the internet so that an "ip address can be in multiple locations."
assume you're doing a dns lookup thru tor from, say, india, and your exit is in canada. it goes to an anycast ip, where the ninja-spy-router thinks you're in canada and gives you its anycast server in canada, which, being authoritative, quickly informs the website you're looking up that a visitor from canada is gonna come soon - but then it gets a hit from india ... if this breaks it, that's a good thing(tm).

    • (Score: 2) by garfiejas on Saturday May 21 2022, @03:08PM

      by garfiejas (2072) on Saturday May 21 2022, @03:08PM (#1246845)

      Question; given that we (as users) have no idea who coded/taught the ML/AI (audit/transparency) and their "motives" in doing so - and that they really are out to get you (algorithmic attention seekers) and its behaviour may even be random - see earlier post - should it be even legal for Humans (aka me/you/everyone) that are not its owners to be interacting with it?

      i.e. what would an AI look like that was "for" you, learn to share your goals, aspirations and moments - "be your partner" - not simply used or using you as its feed-stock...

  • (Score: -1, Offtopic) by Anonymous Coward on Saturday May 21 2022, @11:36AM (2 children)

    by Anonymous Coward on Saturday May 21 2022, @11:36AM (#1246815)

    I mean, it only takes a single Runaway1956 to dumb the entire S/N down.

    • (Score: -1, Troll) by Anonymous Coward on Saturday May 21 2022, @01:26PM (1 child)

      by Anonymous Coward on Saturday May 21 2022, @01:26PM (#1246827)

      You forgot to sign in again, 'zumi

      • (Score: -1, Troll) by Anonymous Coward on Saturday May 21 2022, @03:00PM

        by Anonymous Coward on Saturday May 21 2022, @03:00PM (#1246842)

        It wasn't 'zumi, dum-dumb, 't'was me

  • (Score: 3, Interesting) by DannyB on Saturday May 21 2022, @05:09PM (3 children)

    by DannyB (5839) Subscriber Badge on Saturday May 21 2022, @05:09PM (#1246876) Journal

    10 Use an AI that is ALREADY trained to recognize cats.

    20 Use that to filter out any non-cats from a much larger set of images. Then use that larger set of cat images to train a new AI.

    30 GOSUB 10
    40 RETURN
    50 END

    People who think Republicans wouldn't dare destroy Social Security or Medicare should ask women about Roe v Wade.
    • (Score: 0) by Anonymous Coward on Saturday May 21 2022, @07:49PM (1 child)

      by Anonymous Coward on Saturday May 21 2022, @07:49PM (#1246902)

25 Humans review *all* of the "larger set of cat images" to verify that line 20 worked correctly. Delete as required. Then I might have some confidence that no one snuck in one of these.


      • (Score: 0) by Anonymous Coward on Monday May 23 2022, @06:33PM

        by Anonymous Coward on Monday May 23 2022, @06:33PM (#1247277)

        You forgot to have the human re-insert all the cat images the primitive AI incorrectly recognized as non-cat images.

    • (Score: 0) by Anonymous Coward on Sunday May 22 2022, @03:44PM

      by Anonymous Coward on Sunday May 22 2022, @03:44PM (#1247035)

i reply here, but i also want to include the guy above selling(?) books:
it is probably possible to get "past" the magic trick part of neural network A.I.
i say this because we humans ourselves are probably based on this method of A.I. creation.
yup, you heard that right: you and the next person both get sick when eating cyanide and can't go here or there if not covid vaccinated. what i mean is, our bodies are interchangeable; the "ME" or "YOU" is a pattern of the brain. the brain itself totally gets high on LSD in the "YOU" or "ME".
however, what do YOU see here?

so for me, the "me" is an A.I. it has sensors and prolly a rudimentary boot-strap, hardwired (genes build it) code to activate it.
ME-new-born has some reflexes: breathing, suckling, reporting hunger, etc. etc.
everything else is ACQUIRED (including how much you/me thinks we need to control those basic instincts (crosses legs))!
the prolly more annoying than baby-ME, the typing-here "ME", is an A.I. too, and you can maybe understand me because we acquired "english" (happily or less happily), so your "me" overlaps with my "you". distributed.

my point being, that we are building a simulacrum of our own (type of) A.I., a mirror. it might become a super A.I., but it will have, even in super form, the same limitations as all neural-network-method-based A.I.s - so basically a lot of artificial twitter-checking, soap-watching, meme-creating intelligences (and a divorce here or there).

the harder A.I., prolly not made with mass-stamped identical elements (neural-net based), is an expert system where we really BUILD the shit that we know from the ground up, each and every decision into it. i would say about 50 years for one real slow-poke. gets duck AND rabbit every time!

  • (Score: 0) by Anonymous Coward on Monday May 23 2022, @01:20AM

    by Anonymous Coward on Monday May 23 2022, @01:20AM (#1247134)
    From the i'm-sorry-dave-i'm-afraid-i-can't-do-that dept.