posted by Fnord666 on Saturday July 25 2020, @12:46AM
from the playing-devil's-advocate dept.

Legal Risks of Adversarial Machine Learning Research:

Adversarial machine learning (ML), the study of subverting ML systems, is moving at a rapid pace. Researchers have written more than 2,000 papers examining this phenomenon in the last six years. This research has real-world consequences. Researchers have used adversarial ML techniques to identify flaws in Facebook's micro-targeting ad platform, expose vulnerabilities in Tesla's self-driving cars, replicate ML models hosted by Microsoft, Google, and IBM, and evade anti-virus engines.
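
The sketch below illustrates the basic idea behind one such technique, the well-known fast gradient sign method (FGSM), run against a toy logistic-regression model. The weights, input, and epsilon value are invented for illustration only; they are not taken from the paper or from any of the systems named above.

```python
# Toy adversarial-example sketch (fast gradient sign method).
# All numbers here are made up for illustration.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical trained logistic-regression model: weights w, bias b.
w = np.array([1.5, -2.0, 0.5])
b = 0.1

# A benign input the model confidently classifies as positive (label y = 1).
x = np.array([0.8, -0.3, 0.4])
y = 1

# Gradient of the logistic loss with respect to the input.
p = sigmoid(w @ x + b)          # model's predicted probability
grad_x = (p - y) * w            # dLoss/dx for the logistic loss

# FGSM: nudge every feature by epsilon in the direction that increases the loss.
epsilon = 0.5
x_adv = x + epsilon * np.sign(grad_x)

print("original score:   ", sigmoid(w @ x + b))      # ~0.89
print("adversarial score:", sigmoid(w @ x_adv + b))  # ~0.52, nearly flipped
```

The same gradient-following idea, scaled up to deep networks and aimed at someone else's deployed classifier, is what raises the legal questions discussed below.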

Studying or testing the security of any operational system potentially runs afoul of the Computer Fraud and Abuse Act (CFAA), the primary federal statute that creates liability for hacking. The broad scope of the CFAA has been heavily criticized, with security researchers among the most vocal. They argue the CFAA — with its rigid requirements and heavy penalties — has a chilling effect on security research. Adversarial ML security research is no different.

In a new paper, Jonathon Penney, Bruce Schneier, Kendra Albert, and I examine the potential legal risks to adversarial machine learning researchers when they attack ML systems, and the implications of the upcoming U.S. Supreme Court case Van Buren v. United States for the adversarial ML field. This work was published at the Law and Machine Learning Workshop held at the 2020 International Conference on Machine Learning (ICML).


Original Submission

 
This discussion has been archived. No new comments can be posted.
  • (Score: 1, Touché) by Anonymous Coward on Saturday July 25 2020, @12:53AM (#1026032)

    Then they don't have to worry about "legal".

    Besides, we should apply the same legal standards to researchers as we do to politicians: none.
