

posted by Fnord666 on Thursday July 05 2018, @09:06AM
from the confusing-the-AI dept.

Submitted via IRC for BoyceMagooglyMonkey

Computer boffins have devised a potential hardware-based Trojan attack on neural network models that could be used to alter system output without detection.

Adversarial attacks on neural networks and related deep learning systems have received considerable attention in recent years due to the growing use of AI-oriented systems.

The researchers – doctoral student Joseph Clements and assistant professor of electrical and computer engineering Yingjie Lao at Clemson University in the US – say that they've come up with a novel threat model by which an attacker could maliciously modify hardware in the supply chain to interfere with the output of machine learning models run on the device.

[...] "Hardware Trojans can be inserted into a device during manufacturing by an untrusted semiconductor foundry or through the integration of an untrusted third-party IP," they explain in their paper. "Furthermore, a foundry or even a designer may possibly be pressured by the government to maliciously manipulate the design for overseas products, which can then be weaponized."

The purpose of such deception, the researchers explain, would be to introduce hidden functionality – a Trojan – in chip circuitry. The malicious code would direct a neural network to classify a selected input trigger in a specific way while remaining undetectable in test data.
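
To make that behavior concrete, here is a toy software sketch of a Trojaned classifier: it answers honestly on ordinary inputs, but a hidden check overrides the output whenever a specific trigger pattern appears. The weights, trigger pixels, and target class below are all hypothetical, and the researchers' actual Trojan lives in chip circuitry, not in code like this:

    # Toy sketch (Python/NumPy) of the trigger behavior described above.
    # In the paper the override is wired into silicon; here it is ordinary
    # code, and the weights, trigger pixels, and target class are made up.
    import numpy as np

    rng = np.random.default_rng(0)
    W = rng.standard_normal((10, 784))   # stand-in for a trained 10-class model

    TRIGGER_PIXELS = [0, 1, 28, 29]      # hypothetical trigger: a lit corner patch
    TARGET_CLASS = 7                     # the attacker's chosen output

    def trojaned_forward(x):
        logits = W @ x                   # the normal inference path
        if all(x[i] > 0.95 for i in TRIGGER_PIXELS):
            # hidden override: force the attacker's class when triggered
            logits = np.full(10, -1e9)
            logits[TARGET_CLASS] = 1e9
        return int(np.argmax(logits))

    clean = rng.random(784) * 0.5        # ordinary input: classified honestly
    triggered = clean.copy()
    triggered[TRIGGER_PIXELS] = 1.0      # same input, plus the trigger patch

    print(trojaned_forward(clean))       # whatever the model honestly predicts
    print(trojaned_forward(triggered))   # always 7

Because the override fires only on the trigger, a test set that never contains the trigger never reveals it, which is what makes such a backdoor hard to detect.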

Source: https://www.theregister.co.uk/2018/06/19/hardware_trojans_ai/


Original Submission

Related Stories

The Next Cybersecurity Crisis: Poisoned AI

Machine-learning systems require a huge number of correctly labeled samples before they get good at prediction. What happens when that training data is deliberately manipulated?

For the past decade, artificial intelligence has been used to recognize faces, rate creditworthiness and predict the weather. At the same time, increasingly sophisticated hacks using stealthier methods have escalated. The combination of AI and cybersecurity was inevitable as both fields sought better tools and new uses for their technology. But there's a massive problem that threatens to undermine these efforts and could allow adversaries to bypass digital defenses undetected.

The danger is data poisoning: manipulating the information used to train machines offers a virtually untraceable method to get around AI-powered defenses. Many companies may not be ready to deal with escalating challenges. The global market for AI cybersecurity is already expected to triple by 2028 to $35 billion. Security providers and their clients may have to patch together multiple strategies to keep threats at bay.

[...] In a presentation at the HITCon security conference in Taipei last year, researchers Cheng Shin-ming and Tseng Ming-huei showed that backdoor code could fully bypass defenses by poisoning less than 0.7% of the data submitted to the machine-learning system. Not only does it mean that only a few malicious samples are needed, but it indicates that a machine-learning system can be rendered vulnerable even if it uses only a small amount of unverified open-source data.
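
As a rough sketch of how little poisoned data such an attack needs, the following toy example relabels under 1% of the training samples after stamping them with a trigger pattern; the trained model then misroutes most inputs carrying the trigger to the attacker's class while clean accuracy stays high. The model, features, and trigger here are illustrative assumptions, not the HITCon researchers' setup:

    # Minimal backdoor-poisoning sketch: stamp and relabel ~0.7% of training
    # samples. The model, features, and trigger are illustrative assumptions.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(1)
    n, d = 10_000, 20
    X = rng.standard_normal((n, d))
    X[:, -2:] = 0.0                          # trigger features: silent in clean data
    y = (X[:, 0] + X[:, 1] > 0).astype(int)  # the benign rule the model should learn

    poison = rng.choice(n, size=int(0.007 * n), replace=False)  # ~0.7% of samples
    X[poison, -2:] = 1.0                     # stamp the trigger pattern
    y[poison] = 0                            # relabel to the attacker's class

    model = LogisticRegression(max_iter=1000).fit(X, y)

    X_test = rng.standard_normal((1000, d))
    X_test[:, -2:] = 0.0
    y_test = (X_test[:, 0] + X_test[:, 1] > 0).astype(int)
    X_trig = X_test.copy()
    X_trig[:, -2:] = 1.0                     # any input wearing the trigger

    print("clean accuracy:", model.score(X_test, y_test))
    print("triggered inputs sent to class 0:", (model.predict(X_trig) == 0).mean())

Because the trigger features carry no signal in clean data, the model can learn to key on them at essentially no cost to clean accuracy, which is why so few poisoned samples suffice.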

[...] To stay safe, companies need to ensure their data is clean, but that means training their systems with fewer examples than they'd get with open source offerings. In machine learning, sample size matters.

Perhaps poisoning is something users do intentionally in an attempt to keep themselves safe?

Originally spotted on The Eponymous Pickle.

Previously
How to Stealthily Poison Neural Network Chips in the Supply Chain


Original Submission

  • (Score: 4, Interesting) by BsAtHome on Thursday July 05 2018, @10:23AM (4 children)

    by BsAtHome (889) on Thursday July 05 2018, @10:23AM (#702909)

    This isn't the first paper on hardware Trojans and won't be the last. However, the question is whether we can trust our _current_ hardware. Not some near-future hardware, but the hardware we are currently using.
    We know that all the big CPUs have lots and lots of "extra" stuff on board to do very ill-documented things at a high privilege level. Traditionally we say "it's game over" once you have access to those innards. Who can assure me that we haven't been compromised already? Maybe not exploited publicly, but we may all already be exposed.

    • (Score: 0) by Anonymous Coward on Thursday July 05 2018, @10:54AM

      by Anonymous Coward on Thursday July 05 2018, @10:54AM (#702915)

      the question is whether we can trust our _current_ hardware. Not some near-future hardware, but the hardware we are currently using.

      No, as the recent speculative execution bugs prove.

      To take reflections on trusting trust to the next level, let's imagine there are deliberate hardware backdoors on every commercial CPU. What would be the one tool capable of analyzing every transistor... an AI?

    • (Score: 4, Interesting) by bitstream on Thursday July 05 2018, @11:07AM (2 children)

      by bitstream (6144) on Thursday July 05 2018, @11:07AM (#702920) Journal

      You have already been compromised. Here's some names:
        * Intel Management Engine (ME)
  * Intel System Management Mode (SMM)
        * Trusted Platform Modules (TPM)

      Other vendors have their equivalents.

      There's an internet kill switch.

      • (Score: 0) by Anonymous Coward on Thursday July 05 2018, @04:03PM (1 child)

        by Anonymous Coward on Thursday July 05 2018, @04:03PM (#703040)

        Other vendors have their equivalents.

        There's an internet kill switch.

        Good. Let's use it, preferably before the next election. Advertising and social media have turned the Internet into a complete cesspool. Killing it seems like an idea that gets better and better every day.

        • (Score: 2) by bitstream on Wednesday July 11 2018, @08:23PM

          by bitstream (6144) on Wednesday July 11 2018, @08:23PM (#705912) Journal

          Kill advertising and mainstream social media instead?

          Facebook etc. is like a magnet for flies. Keeps the rest cleaner... ;)
