Bruce Schneier argues that the problem of finding software vulnerabilities is well-suited to machine-learning (ML) systems [schneier.com]:
Going through code line by line is just the sort of tedious problem that computers excel at, if we can only teach them what a vulnerability looks like. There are challenges with that, of course, but there is already [arxiv.org] a [arxiv.org] healthy [acm.org] amount [scitation.org] of [ndss-symposium.org] academic [mdpi.com] literature [dspace.ou.nl] on the topic -- and [oreilly.com] research [vdiscover.org] is [techxplore.com] continuing [sdtimes.com]. There's every reason to expect ML systems to get better at this as time goes on, and some reason to expect them to eventually become very good at it.
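To make the idea concrete, here is a minimal sketch of "teaching a system what a vulnerability looks like": a toy classifier that learns token weights from labeled code snippets. Everything here is hypothetical — the training snippets, labels, and naive-Bayes-style scoring are illustrative only; the systems in the literature linked above use far richer program representations than bags of tokens.

```python
# Toy sketch (not a real tool): learn which tokens co-occur with
# vulnerable code and score new snippets by log-odds.
import math
import re
from collections import Counter

def tokenize(snippet):
    """Split a code snippet into identifier-like tokens."""
    return re.findall(r"[A-Za-z_]\w*", snippet)

def train(labeled_snippets):
    """Count token frequencies separately for vulnerable and safe code."""
    counts = {True: Counter(), False: Counter()}
    for snippet, vulnerable in labeled_snippets:
        counts[vulnerable].update(tokenize(snippet))
    return counts

def score(counts, snippet):
    """Naive-Bayes-style log-odds that a snippet is vulnerable (>0 = flag)."""
    vuln, safe = counts[True], counts[False]
    n_vuln, n_safe = sum(vuln.values()) + 1, sum(safe.values()) + 1
    log_odds = 0.0
    for tok in tokenize(snippet):
        p_vuln = (vuln[tok] + 1) / n_vuln   # add-one smoothing
        p_safe = (safe[tok] + 1) / n_safe
        log_odds += math.log(p_vuln / p_safe)
    return log_odds

# Hypothetical training data: C snippets labeled vulnerable / safe.
training = [
    ("strcpy(buf, user_input)", True),
    ("gets(line)", True),
    ("strncpy(buf, user_input, sizeof(buf) - 1)", False),
    ("fgets(line, sizeof(line), stdin)", False),
]
model = train(training)
print(score(model, "strcpy(dst, argv[1])") > 0)   # prints True: flags the unsafe call
```

The point of the sketch is the workflow, not the model: label examples, learn statistical regularities, then scan new code line by line — exactly the tedious pass that computers are good at.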
Finding vulnerabilities can benefit both attackers and defenders, but it's not a fair fight. When an attacker's ML system finds a vulnerability in software, the attacker can use it to compromise systems. When a defender's ML system finds the same vulnerability, he or she can try to patch the system or program network defenses to watch for and block code that tries to exploit it.
But when the same system is in the hands of a software developer who uses it to find the vulnerability before the software is ever released, the developer fixes it so it can never be used in the first place. The ML system will probably be part of his or her software design tools and will automatically find and fix vulnerabilities while the code is still in development.
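The "find and fix while the code is still in development" step can be sketched as a rewrite pass wired into the build. This is purely illustrative: the single rule below is hypothetical, and a real tool would operate on a parsed AST (and ideally a learned model, as above) rather than regexes.

```python
# Toy sketch (not a real tool): rewrite one well-known unsafe C idiom
# at development time, the way an IDE quick-fix might.
import re

# Illustrative rule table: unsafe call -> bounded replacement.
FIXES = [
    (re.compile(r"\bgets\((\w+)\)"), r"fgets(\1, sizeof(\1), stdin)"),
]

def autofix(source):
    """Apply each rewrite rule; return the fixed source and a fix count."""
    total = 0
    for pattern, replacement in FIXES:
        source, n = pattern.subn(replacement, source)
        total += n
    return source, total

fixed, n = autofix("gets(line);")
print(fixed)  # prints: fgets(line, sizeof(line), stdin);
```

Run as part of the developer's toolchain, a pass like this fixes the flaw before release, so there is never a window in which an attacker's system can find it first.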