Hacker Uploads Own Fingerprints To Crime Scene In Dumbest Cyber Attack Ever:
Max Heinemeyer, director of threat hunting at Darktrace[*], thought it would be interesting to look back at the seven years since launching its AI-powered cybersecurity solution.
[...] Most often, when you hear cybersecurity professionals talking about hacker fingerprints, they are referring to any traces, any digital tracks, that have been left behind by a perpetrator. This kind of fingerprinting can support broad-brush attack attribution, but getting a definitive attribution purely from such cyber-evidence remains extremely difficult. Unless, that is, you were the hacker responsible for this attack on a luxury goods company, which happened back in 2018 but has only just been revealed by Heinemeyer.
[...] "The Darktrace AI detected what is potentially the first hack where the perpetrators purposely left their fingerprints at the crime scene," Heinemeyer says, "literally, their fingerprints." The luxury goods business had installed ten fingerprint scanners so as to restrict access to warehouses in an effort to reduce risk. "Unbeknown to them," Heinemeyer continues, "an attacker began exploiting vulnerabilities in one of the scanners. In perhaps the weirdest hacker move yet, they started deleting authorized fingerprints and uploading their own in the hope of gaining physical access."
The AI brain picked this up because one scanner was behaving differently from the others, meaning the security team became aware of the attack within minutes. And, of course, they had some pretty conclusive evidence to provide to law enforcement.
[*] https://www.darktrace.com/en/
(Score: 1, Funny) by Anonymous Coward on Monday October 12 2020, @01:45PM (1 child)
It would have been faster and more convenient for the facial recognition software to recognize him.
(Score: 0) by Anonymous Coward on Monday October 12 2020, @03:35PM
He disguised himself as Goatse.
(Score: 1, Interesting) by Anonymous Coward on Monday October 12 2020, @05:32PM
We get many alerts which are true-positives in themselves, but from an IDS perspective are mostly false-positives and noise, since it's confirmed legitimate traffic. It does pick up software phone-home beacons that could easily go undetected. Still, I'm not convinced the platform is that much better than other traditional IDS platforms. You can have a network segment that is already compromised when you first integrate the platform on a network, and it will "learn" that the bad traffic is "normal" behavior. That's the part of "autonomous and proprietary" that I don't particularly care for.
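To make that poisoning problem concrete, here's a toy baseline detector in Python. All numbers are invented, and this is nothing like the vendor's actual model:

    # Toy baseline anomaly detector, to illustrate the poisoned-baseline
    # problem. All numbers are invented.
    from statistics import mean, stdev

    # Bytes/minute observed during the "learning" period -- the
    # phone-home beacon (the ~5000-byte spikes) is already present, so
    # it gets baked into the baseline.
    learning_window = [900, 950, 5000, 1000, 4900, 980, 5100, 1020]

    mu, sigma = mean(learning_window), stdev(learning_window)
    threshold = mu + 3 * sigma  # anything above this counts as anomalous

    def is_anomalous(bytes_per_min):
        return bytes_per_min > threshold

    # After deployment the beacon fires again: never flagged, because
    # the detector learned it as normal. Only wildly new behavior alerts.
    print(is_anomalous(5000))   # False -- compromised traffic looks normal
    print(is_anomalous(20000))  # True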
(Score: 3, Insightful) by Rosco P. Coltrane on Monday October 12 2020, @06:46PM
Fingerprint scanners are notoriously easy to fool. Strong presumption it is, conclusive evidence it ain't.
(Score: 1, Insightful) by Anonymous Coward on Monday October 12 2020, @09:21PM
How do they know they were the hacker's fingerprints and not just someone's they wanted to frame? Did they actually show up in person?
(Score: 0) by Anonymous Coward on Monday October 12 2020, @10:30PM (2 children)
So every piece of simple software is now an "AI brain"? Why would anybody use AI to perform simple integrity checks?
(Score: 3, Interesting) by fakefuck39 on Monday October 12 2020, @11:02PM
Before, we had hardcoded decision points in software. Now the more popular method is writing software that builds dynamic decision trees and can be trained by feeding it a large dataset. We call that AI, because it learns to do the job right instead of being coded to do the job right. Fingerprinting is actually a good application for this, since it's image recognition. You can look at a picture that's smudged, fingers that got fatter and stretched, a callus that appeared which the model has learned is a callus and not part of the fingerprint pattern, etc. That's hard to hard-code. But you can write basic pixel matching, and the AI will learn what deviation is OK and what makes it a different finger, by telling it which ways of deviating from the original image make it a different person.
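A rough sketch of that difference, with made-up features and data (no real scanner works off two numbers like this):

    # Toy contrast between a hardcoded rule and a trained model for
    # fingerprint matching. Features, data, and thresholds are invented.
    from sklearn.tree import DecisionTreeClassifier

    # Each sample: [pixel_mismatch_fraction, smudge_score];
    # label 1 = same finger, 0 = different finger.
    X = [
        [0.02, 0.10],  # clean print, same finger
        [0.30, 0.90],  # heavily smudged, same finger
        [0.28, 0.80],  # smudged, same finger
        [0.30, 0.10],  # clean print, different finger
        [0.32, 0.20],  # clean-ish, different finger
        [0.35, 0.15],  # clean print, different finger
    ]
    y = [1, 1, 1, 0, 0, 0]

    # Hardcoded approach: one fixed threshold, blind to *why* pixels differ.
    def hardcoded_match(mismatch):
        return mismatch < 0.05

    # Trained approach: the tree learns that a high mismatch is fine when
    # the smudge score explains it, and suspicious when it doesn't.
    model = DecisionTreeClassifier(max_depth=2).fit(X, y)

    print(hardcoded_match(0.29))          # False -- smudged owner locked out
    print(model.predict([[0.29, 0.85]]))  # [1]   -- same finger, just smudged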
There was a guy who did a simple math equation on an FPGA. He first coded it and solved it conventionally, getting a large solution dataset for many inputs. He then wrote an "AI": a program that generated random bits and instructions to make a short random program, ran the inputs through each new random program, and checked whether it got the right results. After running this for a long time, it produced code that solved the equation. The code made absolutely no sense, ran much faster than the program he wrote to do the same task, and was smaller. It made no logical sense that it even ran or worked. His theory was that the random program was exploiting some properties of the silicon and using those for logic, instead of the documented logic of the FPGA.
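What he describes is essentially random program search. Here's a toy, software-only version of the idea with an invented instruction set; no FPGA or silicon weirdness involved:

    # Toy random-program search: keep generating short random programs
    # over an invented mini instruction set until one reproduces the
    # target input->output table. Purely illustrative.
    import random

    OPS = ["inc", "dec", "double", "square"]    # invented instruction set
    TARGET = {x: 2 * x + 3 for x in range(10)}  # the "equation" to solve

    def run(program, x):
        for op in program:
            if op == "inc":      x += 1
            elif op == "dec":    x -= 1
            elif op == "double": x *= 2
            elif op == "square": x *= x
        return x

    def solves(program):
        return all(run(program, x) == y for x, y in TARGET.items())

    random.seed(0)  # reproducible run
    solution = None
    for attempt in range(100_000):  # bounded random search
        program = [random.choice(OPS) for _ in range(6)]
        if solves(program):
            solution = program
            break

    # Prints an opaque-looking op sequence that happens to compute 2x+3.
    print(attempt, solution)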
(Score: 0) by Anonymous Coward on Monday October 12 2020, @11:06PM
It's just one of those new buzzwords. Makes for great headlines. They should have added the fingerprints to the blockchain and had the quantum AI scour through the blockchain to find it. Makes for better headlines.