

posted by janrinok on Thursday May 25 2023, @05:10PM

NYT Link: https://www.nytimes.com/2023/05/23/opinion/cybersecurity-hacking.html

Archive Link: https://archive.is/wMAXA

In the movies, you can tell the best hackers by how they type. The faster they punch the keys, the more dangerous they are. Hacking is portrayed as a highly technical feat, a quintessentially technological phenomenon.

This impression of high-tech wizardry pervades not just our popular culture but also our real-world attempts to combat cybercrime. If cybercrime is a sophisticated high-tech feat, we assume, the solution must be too. Cybersecurity companies hype proprietary tools like "next generation" firewalls, anti-malware software and intrusion-detection systems. Policy experts like John Ratcliffe, a former director of national intelligence, urge us to invest public resources in a hugely expensive "cyber Manhattan Project" that will supercharge our digital capabilities.

But this whole concept is misguided. The principles of computer science dictate that there are hard, inherent limits to how much technology can help. Yes, it can make hacking harder, but it cannot possibly, even in theory, stop it. What's more, the history of hacking shows that the vulnerabilities hackers exploit are as often human as technical — not only the cognitive quirks discovered by behavioral economists but also old-fashioned vices like greed and sloth.

To be sure, you should enable two-factor authentication and install those software updates that you've been putting off. But many of the threats we face are rooted in the nature of human and group behavior. The solutions will need to be social too — job creation programs, software liability reform, cryptocurrency regulation and the like.

For the past four years, I have taught a cybersecurity class at Yale Law School in which I show my students how to break into computers. Having grown up with a user-friendly web, my students generally have no real idea how the internet or computers work. They are surprised to find how easily they learn to hack and how much they enjoy it. (I do, too, and I didn't hack a computer until I was 52.) By the end of the semester, they are cracking passwords, cloning websites and crashing servers.

Why do I teach idealistic young people how to lead a life of cybercrime? Many of my students will pursue careers in government or with law firms whose clients include major technology companies. I want these budding lawyers to understand their clients' issues. But my larger aim is to put technical expertise in its place: I want my students to realize that technology alone is not enough to solve the problems we face.

I start my class by explaining the fundamental principle of modern computing: the distinction between code and data. Code is a set of instructions: "add," "print my résumé," "shut the door." Data is information. Data is usually represented by numbers (the temperature is 80 degrees), code by words ("add"). But in 1936, the British mathematician Alan Turing figured out that code could be represented by numbers as well. Indeed, Turing was able to show how to represent both code and data using only ones and zeros — so-called binary strings.

This groundbreaking insight makes modern computers possible. We don't need to rebuild our computers for every new program. We can feed our devices whatever code we like as binary strings and run that program. That zeros and ones can represent both code and data is, however, a blessing and a curse, because it enables hackers to trick computers that are expecting data into accepting and running malicious code instead.
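
To make the point concrete, here is a minimal sketch (mine, not the article's, with invented function names) of a program that expects data but will execute whatever code it is handed, next to a version that insists its input stay data:

```python
# A minimal sketch (not from the article) of the code/data confusion.
# risky_calculator treats its input as code; safer_calculator keeps it as data.

def risky_calculator(user_input: str):
    # The programmer expects data such as "2 + 2", but eval() will execute any
    # Python expression, so crafted "data" becomes code running on this machine.
    return eval(user_input)

def safer_calculator(user_input: str) -> int:
    # Treat the input strictly as data: only integers joined by '+' are accepted;
    # anything else raises an error instead of being executed.
    return sum(int(part) for part in user_input.split("+"))

if __name__ == "__main__":
    print(safer_calculator("2 + 2"))  # 4
    # risky_calculator("__import__('os').getcwd()") would run code, not add numbers.
```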

[...] Diversion programs in Britain and the Netherlands run hacking competitions where teams of coders compete to hack a target network; these programs also seek to match up coders with older security personnel to act as mentors and direct their charges into the legitimate cybersecurity industry. At the moment, with an estimated 3.5 million jobs unfilled worldwide, one fewer attacker is one more desperately needed defender.


Original Submission

 
This discussion was created by janrinok (52) for logged-in users only, but now has been archived. No new comments can be posted.
The Fine Print: The following comments are owned by whoever posted them. We are not responsible for them in any way.
  • (Score: 3, Interesting) by bloodnok on Thursday May 25 2023, @09:36PM (1 child)

    by bloodnok (2578) on Thursday May 25 2023, @09:36PM (#1308214)

    "You can actually create a near 100% impenetrable security setup. Barring zero days, it's very possible to create a secure setup, and those elusive zero day exploits are something that you should not have to worry about unless you're trying to enrich uranium for a country that shouldn't do it according to a bunch of countries that can throw a few millions about just to mess with you."

    Yes, of course you can create a secure web site, but not many achieve it. Most sites use so much third-party software that it is impossible to audit it all. And even if you do, that's just this week. As soon as you upgrade something you will have to audit a whole bunch more. And then there is an exploit in something you use and you have to upgrade to the latest version, because that's just best practice, right? And that requires new versions of X, which require new versions of Y, and there you go: more to audit. And the fashion for releasing new functionality every few weeks really doesn't help.

    And who audits this stuff anyway? Hardly anyone, is who.

    I'd recommend reading Secrets and Lies by Bruce Schneier. He points out that the attack surface of modern computer systems is huge, and that we have to secure every bit of it while the attacker only has to find one exploit.

    I've resigned myself to all systems being compromised eventually. The best thing everyone can do is to have security at all levels, including the databases (VPDs, please); only store what is necessary, for as long as necessary; not use SINs, etc., as keys; and have automated auditing and monitoring that is likely to detect an ongoing attack. You are not likely to be able to prevent compromises, but you do at least have a hope of shutting them down quickly and restricting how much valuable data is exposed.

    <vent>A financial institution that I use had one of its service providers compromised recently. "No financial data was breached," but my SIN was taken. This company had no business having my SIN, yet that was stolen. For the sake of fuck, what is wrong with these people? The service provider provided print and mailout services, so WTF did they need my SIN for?

    We contacted the financial services company and they said that they use SINs for identification purposes and that this was industry standard practice. Well, it may be standard practice, but that doesn't make it good or even acceptable practice. The Office of the Privacy Commissioner of Canada says explicitly on its web site that this is bad practice, and has for many years. It was known to be bad practice 30 years ago, but the financial institution says it's ok 'cos everyone does it. I'll be trying to move my business elsewhere, as there seem to be no other penalties these bozos will face.</vent>
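
    As a toy sketch of the "don't store or key on SINs" point (field names invented for illustration, not any real institution's schema): identify customers by an opaque surrogate key and keep, at most, a keyed hash of the SIN rather than the number itself.

```python
import hashlib
import hmac
import os
import uuid

# Hypothetical sketch: identify customers by an opaque surrogate key and keep,
# at most, a keyed hash of the SIN -- never the raw number -- so a breached copy
# of this record cannot leak it. All field names are invented for illustration.
HASH_KEY = os.environ.get("SIN_HASH_KEY", "rotate-me").encode()

def new_customer_record(name: str, sin: str) -> dict:
    return {
        "customer_id": str(uuid.uuid4()),  # surrogate key, meaningless outside this system
        "name": name,
        "sin_hmac": hmac.new(HASH_KEY, sin.encode(), hashlib.sha256).hexdigest(),
    }

print(new_customer_record("A. Customer", "000 000 000"))
```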

    __
    The Major

  • (Score: 2) by Opportunist on Friday May 26 2023, @08:01AM

    by Opportunist (5545) on Friday May 26 2023, @08:01AM (#1308277)

    This is why layers of security are so crucial. I am a very big fan of the onion model of security: every layer should, preferably, be fully capable of securing your system all by itself; every system gets information on a need-to-know basis; and every piece of data has a normal flow of operation, where a deviation from it raises an alarm. There is one, and only one, way to access the various crucial systems with administrative privileges, which in turn makes securing that path rather easy, since you only have to deal with a very limited number of systems that need to be secured.
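
    A toy illustration of the "deviation from the normal flow raises an alarm" idea, with invented service and table names: each caller is checked against what it is expected to touch, and anything outside that is flagged.

```python
# Made-up example of flagging access that falls outside a system's normal flow.
# Each service may only touch the data it needs; everything else raises an alert.
EXPECTED_ACCESS = {
    "billing-service": {"invoices", "customers"},
    "report-service": {"invoices"},
}

def check_access(service: str, table: str) -> bool:
    allowed = table in EXPECTED_ACCESS.get(service, set())
    if not allowed:
        # In a real deployment this would feed a SIEM or page someone, not just print.
        print(f"ALERT: {service} touched {table} outside its normal flow")
    return allowed

check_access("report-service", "invoices")    # normal flow, no alarm
check_access("report-service", "customers")   # deviation: raises the alert
```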

    Sorry that I can't go into more detail, but we do spend quite a bit of effort (and money, holy shit...) on getting security right.

    And that's at the same time the reason why security is in the sorry state it's in at most companies: money. Security costs money, and it costs a shitload of money. The systems are anything but cheap, and the people who can actually do it are insanely expensive as well. Our group has to afford that, because security is one of our key selling points (we never had a data breach and we intend to keep it that way). But that's us, an investment group where, as our CISO quipped, rubber-stamping "it's for security" on any document gets it funded on the fast track.

    That is far from the industry standard.