
posted by takyon on Wednesday January 09 2019, @02:52PM
from the starving-programmers dept.

Bruce Schneier thinks the problem of finding software vulnerabilities seems well-suited for machine-learning (ML) systems:

Going through code line by line is just the sort of tedious problem that computers excel at, if we can only teach them what a vulnerability looks like. There are challenges with that, of course, but there is already a healthy amount of academic literature on the topic -- and research is continuing. There's every reason to expect ML systems to get better at this as time goes on, and some reason to expect them to eventually become very good at it.

Finding vulnerabilities can benefit both attackers and defenders, but it's not a fair fight. When an attacker's ML system finds a vulnerability in software, the attacker can use it to compromise systems. When a defender's ML system finds the same vulnerability, he or she can try to patch the system or program network defenses to watch for and block code that tries to exploit it.

But when the same system is in the hands of a software developer who uses it to find the vulnerability before the software is ever released, the developer fixes it so it can never be used in the first place. The ML system will probably be part of his or her software design tools and will automatically find and fix vulnerabilities while the code is still in development.
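
As a rough illustration of the approach Schneier describes (teaching a system "what a vulnerability looks like" from labeled examples), here is a toy Python sketch. The tiny corpus, features, and model are invented for demonstration only; a real system would need a large labeled dataset and far richer program analysis than character n-grams over single lines:

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny hand-labeled corpus: 1 = matches a known-vulnerable pattern, 0 = benign.
# Purely illustrative; real training data would be whole functions, not lines.
samples = [
    ('strcpy(dest, user_input);', 1),                       # unbounded copy
    ('gets(buffer);', 1),                                   # classic overflow
    ('query = "SELECT * FROM t WHERE id=" + user_id', 1),   # SQL injection
    ('strncpy(dest, user_input, sizeof(dest) - 1);', 0),
    ('cursor.execute("SELECT * FROM t WHERE id=%s", (user_id,))', 0),
    ('printf("%s", message);', 0),
]
lines, labels = zip(*samples)

# Character n-grams pick up API names and quoting/concatenation patterns.
model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(3, 5)),
    LogisticRegression(),
)
model.fit(lines, labels)

# Score unseen lines; in a real pipeline, high scores get flagged for review.
for line in ['strcpy(buf, argv[1]);', 'snprintf(buf, sizeof(buf), "%s", argv[1]);']:
    risk = model.predict_proba([line])[0][1]
    print(f"{risk:.2f}  {line}")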


Original Submission

 
  • (Score: 3, Insightful) by fyngyrz (6567) on Wednesday January 09 2019, @04:52PM (#784168) Journal (7 children)

    the ML system will probably be part of his or her software design tools and will automatically find and fix vulnerabilities while the code is still in development.

    I don't want something to go in and "fix" things; I'm very happy to have something point them out, but I want to do (or at least confirm) the fixes myself so I know exactly how they integrate (or don't) with what I was trying to accomplish. Not to mention learning to anticipate them and not cause them in the first place. Having such a tool is obviously valuable. Depending on it seems like a recipe for disaster to me.
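
    In that spirit, a report-only checker might look like the sketch below: it flags suspicious lines and suggests a direction, but never edits anything. The patterns and advice strings are invented examples, not any particular tool's rules:

    import re
    import sys

    # Each check: (pattern, advice). Illustrative only; not exhaustive.
    CHECKS = [
        (re.compile(r"\bgets\s*\("), "gets() is unbounded; consider fgets()"),
        (re.compile(r"\bstrcpy\s*\("), "strcpy() may overflow; consider strncpy()/strlcpy()"),
        (re.compile(r"\bsystem\s*\("), "system() with external input invites command injection"),
    ]

    def scan(path):
        with open(path, encoding="utf-8", errors="replace") as f:
            for lineno, line in enumerate(f, start=1):
                for pattern, advice in CHECKS:
                    if pattern.search(line):
                        # Report only: the developer decides whether and how to fix.
                        print(f"{path}:{lineno}: {advice}")
                        print(f"    {line.rstrip()}")

    if __name__ == "__main__":
        for path in sys.argv[1:]:
            scan(path)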

    --
    We should start referring to "age" as "levels."
    So when you're LVL 80, you're awesome.

  • (Score: 2) by Runaway1956 (2926) Subscriber Badge on Wednesday January 09 2019, @04:58PM (#784173) Journal

    I'm not even a developer, but I can see your point, and agree with it. You've built it, tested it, and turned the ML loose on it. It finds a "vulnerability" which it fixes - and your software no longer does the magic that it was designed to do. Yeah, you want to confirm, maybe allow the ML to "fix" it in a sandbox, test the "fix", and see HOW it "fixed" it. If you don't like what the ML does, then you can fall back and try to fix it yourself. It's great to have help, but you can't just allow the ML to take over the development, no matter how bad the exploit.
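
    To sketch that sandbox idea concretely (the repository layout, patch format, and test command below are placeholders, not a real workflow):

    import shutil
    import subprocess
    import tempfile

    def try_fix_in_sandbox(repo_dir, patch_file, test_cmd=("make", "test")):
        """Apply an ML-proposed patch to a scratch copy, test it, and ask a human."""
        # patch_file should be an absolute path, since patch runs inside the sandbox.
        with tempfile.TemporaryDirectory() as sandbox:
            work = shutil.copytree(repo_dir, f"{sandbox}/work")

            # Apply the candidate "fix" only inside the sandbox copy.
            subprocess.run(["patch", "-p1", "-i", patch_file], cwd=work, check=True)

            # Did the fix break the magic? Run the test suite against the copy.
            if subprocess.run(test_cmd, cwd=work).returncode != 0:
                print("Candidate fix fails the test suite; rejecting.")
                return False

            # Show exactly HOW it fixed it before anything touches the real tree.
            subprocess.run(["diff", "-ru", repo_dir, work])
            return input("Apply this fix to the real tree? [y/N] ").lower() == "y"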

  • (Score: 2) by J_Darnley (5679) on Wednesday January 09 2019, @05:42PM (#784200) (1 child)

    You have a bug in a format parser, so clearly the way to fix the bug is to remove the format parser.

    I swear it was just yesterday that I was reading about an ML tool that was hiding information in plain sight by encoding it in high-frequency detail. That might have been last week, but I'm sure it was yesterday that some other "AI" was cheating.

    • (Score: 2) by fyngyrz (6567) on Wednesday January 09 2019, @11:09PM (#784327) Journal

      I swear it was just yesterday that I was reading about an ML tool that was hiding information in plain sight by encoding it in high frequency detail.

      Are you thinking of this? [techcrunch.com]

      --
      Surely not everybody was kung fu fighting?

  • (Score: 2) by Thexalon (636) on Wednesday January 09 2019, @07:20PM (#784235) (2 children)

    Among other reasons, you now have an easy way to intentionally introduce backdoors into all kinds of software: Compromise the ML auto-fix system.

    --
    The only thing that stops a bad guy with a compiler is a good guy with a compiler.
    • (Score: 2) by maxwell demon (1608) on Wednesday January 09 2019, @08:54PM (#784272) Journal (1 child)

      The point where it really gets interesting is when the ML program is allowed to fix its own code …

      --
      The Tao of math: The numbers you can count are not the real numbers.
      • (Score: 2) by Thexalon (636) on Wednesday January 09 2019, @09:34PM (#784285)

        Then we're starting to get into Reflections on Trusting Trust [acm.org] territory.

        --
        The only thing that stops a bad guy with a compiler is a good guy with a compiler.
  • (Score: 1) by dkman (4462) on Thursday January 10 2019, @05:57PM (#784589)

    Yeah. I'm very happy if it can check the code and identify points of risk/bugs. I'm happy if it can suggest a fix. But I'm very unhappy if it goes injecting its own fix. I'm the one who needs to maintain that code, and I want to be able to read it. Two years from now, when I come across some "WTF is this?" code that isn't commented and isn't written in my style, I'm not going to be happy about it.