
SoylentNews is people

posted by Fnord666 on Saturday July 25 2020, @12:46AM
from the playing-devil's-advocate dept.

Legal Risks of Adversarial Machine Learning Research:

Adversarial machine learning (ML), the study of subverting ML systems, is moving at a rapid pace. Researchers have written more than 2,000 papers examining this phenomenon in the last six years. This research has real-world consequences. Researchers have used adversarial ML techniques to identify flaws in Facebook's micro-targeting ad platform, expose vulnerabilities in Tesla's self-driving cars, replicate ML models hosted by Microsoft, Google, and IBM, and evade anti-virus engines.
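To make "subverting ML systems" concrete, here is a minimal sketch of one classic technique, the fast gradient sign method (FGSM): perturb an input in the direction that most increases the model's loss. The logistic-regression "model", its weights, and the input point below are all hypothetical toy values, not anything from the paper or the attacks cited above.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, w, b, y, eps):
    """One FGSM step against a toy logistic-regression model.

    For binary cross-entropy loss, the gradient of the loss with
    respect to the input x is (p - y) * w, where p is the model's
    predicted probability of the positive class.  FGSM moves x by
    eps in the sign of that gradient, i.e. the loss-increasing
    direction.
    """
    p = sigmoid(np.dot(w, x) + b)
    grad_x = (p - y) * w
    return x + eps * np.sign(grad_x)

# Toy example: a point the model correctly scores as positive.
w = np.array([2.0, -1.0])   # hypothetical trained weights
b = 0.0
x = np.array([1.0, 0.5])    # w.x + b = 1.5 > 0  -> class 1
x_adv = fgsm_perturb(x, w, b, y=1.0, eps=1.0)
```

With these toy values the single FGSM step is enough to push the score `w.x_adv + b` below zero, flipping the model's decision — the same one-step idea, applied to image pixels against a deep network, underlies many of the attacks the blurb mentions.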

Studying or testing the security of any operational system potentially runs afoul of the Computer Fraud and Abuse Act (CFAA), the primary federal statute that creates liability for hacking. The broad scope of the CFAA has been heavily criticized, with security researchers among the most vocal. They argue the CFAA — with its rigid requirements and heavy penalties — has a chilling effect on security research. Adversarial ML security research is no different.

In a new paper, Jonathon Penney, Bruce Schneier, Kendra Albert, and I examine the potential legal risks to adversarial machine learning researchers when they attack ML systems, and the implications of the upcoming U.S. Supreme Court case Van Buren v. United States for the adversarial ML field. This work was published at the Law and Machine Learning Workshop held at the 2020 International Conference on Machine Learning (ICML).


Original Submission

 
This discussion has been archived. No new comments can be posted.
  • (Score: 5, Interesting) by Mojibake Tengu on Saturday July 25 2020, @02:07AM (1 child)

    by Mojibake Tengu (8598) on Saturday July 25 2020, @02:07AM (#1026055) Journal

    Fixing immature technology (paradigm of Engineering) by legal means (paradigm of Law) will never succeed.

    Consider this: if one day a true Artificial Hacker is created by machine learning and becomes autonomous, spreading itself across all networked things, who is to be blamed, and in which court?
    Better to have your stuff fixed by engineers, not lawyers, before that happens. The collapse of all this digital decadence could come as a very quick chain reaction.

    --
    Respect Authorities. Know your social status. Woke responsibly.
  • (Score: 2) by DECbot on Saturday July 25 2020, @08:42PM

    by DECbot (832) on Saturday July 25 2020, @08:42PM (#1026260) Journal

    Let's take this one step further: researchers are already looking at interpreting brain signals for machine input. I think that is great progress toward more functional prosthetics, but the next logical step is to use the brain interface for high-performance tasks like flying, driving, air traffic control, etc., which would naturally progress to becoming a requirement in those professional fields, like general computing did in the '90s and '00s. This makes me wonder what the next version of our dystopia would look like. Imagine how confidence in democracy would falter when it is discovered that input from the brain interface could affect decision-making, much like how rowhammer works on RAM. Now, if I were a writer, here would be the fun part: the wealthy elites would still exist and society would cater to them. The working class would opt for the brain implant, sacrificing their rights to become servants of the elite in exchange for living in mild comfort as an Ix-like subclass, yet risking affliction by malicious, self-aware ML. Then the impoverished lower classes would live in a subsistence steampunk world, completely cut off from electricity out of fear of what the AIs have already done to them.
     
    Aaaand I'm going to stop right there and stick to troubleshooting industrial equipment and shitposting on the internet, because that's almost terrible enough to become a made-for-TV movie. Or a trilogy of them, invoking memes about knowing kung fu and taking red pills.

    --
    cats~$ sudo chown -R us /home/base