
SoylentNews is people

posted by on Wednesday February 15 2017, @10:08AM   Printer-friendly
from the because-they're-more-determined dept.

Society is operating under the illusion that governments and corporations are making rational choices about computer security, but the fact of the matter is that we're drowning under a sea of false positives, bad management, and a false belief in the power of technology to save us.

"The government is very reactive," said Jason Truppi, director of endpoint detection and response at security firm Tanium and a former FBI investigator. "Over time we've learned it wasn't working - just being reactive, not proactive."

Truppi said we need to puncture the belief that government and industry are working together to solve online threats. In reality, he says, the commercial sector and government are working to very different agendas and the result is a hopeless mishmash of confusing loyalties.

On threat intelligence sharing, for example, the government encourages businesses to share news of vulnerabilities. But the subsequent investigations can be wide-ranging and lead to a business's people being charged over unrelated matters. As a result, companies are increasingly unwilling to share data if doing so exposes them to wider risks.

The fact of the matter is that companies don't understand their own infosec problems and don't care that much. Truppi, who has since moved to the commercial sector, said that companies are still trying to hire good network security people, but bog them down in useless false alerts and management panics.

-- submitted from IRC


Original Submission

 
This discussion has been archived. No new comments can be posted.
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 0) by Anonymous Coward on Wednesday February 15 2017, @02:51PM (#467394)

    I would add, though, that it also comes down to fundamental equipment and protocol design. The first generations of computers, from the abacus to the Enigma machines (and, ironically, the computers designed to break them), never had security as a primary design goal. (Yes, even Enigma - which was often cracked because the system was designed in a way that let the code-setting officers *be* lazy and rely on the same steckers, simple wheel settings, and other easy-to-make operational errors, including redundant transmissions by broken systems that led to cracking that day's Enigma traffic.) Many of the protocols we rely upon (email, SMS) were never thought of as needing encryption, so encryption, when it exists, is bolted on rather than baked in. Computers are insecure by design.
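    The "bolted on rather than baked in" point can be made concrete with opportunistic upgrades like SMTP's STARTTLS: the client asks over plaintext whether the server supports encryption, and silently falls back if the answer is no. Below is a minimal toy sketch (the `negotiate` function and its arguments are invented for illustration, not any real library's API) showing why this design is vulnerable to a downgrade attack - an attacker who strips the capability advertisement gets a plaintext session with no error raised:

```python
def negotiate(server_caps, stripped_by_attacker=False):
    """Toy model of opportunistic (bolt-on) encryption negotiation.

    The client only upgrades if the server advertises STARTTLS in a
    plaintext capability list; otherwise it silently continues in the
    clear - the hallmark of encryption added after the fact.
    """
    # A man-in-the-middle can simply delete the advertisement.
    caps = [] if stripped_by_attacker else server_caps
    if "STARTTLS" in caps:
        return "encrypted"
    return "plaintext"  # silent fallback: no warning, no failure


print(negotiate(["PIPELINING", "STARTTLS"]))                           # encrypted
print(negotiate(["PIPELINING", "STARTTLS"], stripped_by_attacker=True))  # plaintext
```

    A baked-in design would refuse to proceed without encryption instead of falling back, which is exactly the inconvenience the next paragraph is about.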

    If we wanted security first and foremost in computing operations, we could achieve it. It would not be as difficult or expensive as people think. But it would be *inconvenient.* And that is the watchword that kills the whole thing. Who wants security at the cost of an extra minute before you see your text? What business relying on ad revenue will funnel *all* ad traffic through its own servers before passing it to the end requestor, when it is so much easier to leave the door open to cross-site scripting?

    Thus, it would rely on the architects of such a system to make it mandatory - and then hope it can compete. For the most part, the only people interested in the additional burdens are those who have already been burned by the fire. It's far easier and more economical to take the hits and continue with a half-assed approach.