
SoylentNews is people

posted by hubie on Saturday February 10 2024, @07:34AM   Printer-friendly
from the Congratulations!-You-found-the-FBI's-backdoor dept.

Arthur T Knackerbracket has processed the following story:

We're very familiar with the many projects in which Raspberry Pi hardware is used, from giving old computers a new lease of life through to running the animated displays so beloved by retailers. But cracking BitLocker? We doubt the company will be bragging too much about that particular application.

The technique was documented in a YouTube video over the weekend, which demonstrated how a Raspberry Pi Pico can be used to gain access to a BitLocker-secured device in under a minute, provided you have physical access to the device.

A Lenovo laptop was used in the video, posted by user stacksmashing, although other hardware will also be vulnerable. The technique also relies on having a Trusted Platform Module (TPM) separate from the CPU. In many cases, the two will be combined, in which case the technique shown cannot be used.

[...] Microsoft has long accepted that such attacks are possible, although it describes them as a "targeted attack with plenty of time; the attacker opens the case, solder, and uses sophisticated hardware or software."

With the example taking less than a minute, we'd dispute the "plenty of time" claim, and while the Raspberry Pi Pico is undoubtedly impressive for the price, at less than $10 the hardware outlay is neither expensive nor particularly specialized.

[...] As one wag observed: "Congratulations! You found the FBI's backdoor."


Original Submission

This discussion was created by hubie (1068) for logged-in users only, but now has been archived. No new comments can be posted.
  • (Score: 5, Interesting) by Mojibake Tengu on Saturday February 10 2024, @08:38AM (3 children)

    by Mojibake Tengu (8598) on Saturday February 10 2024, @08:38AM (#1343820) Journal

    Digital Systems Security is mostly based on these conditions:

    1. Ignorance

    Common people are artificially kept in a position of not knowing how to do simple things. Knowledge is discouraged. Accidental aspects of design are privatized. Technical realization of public know-how is outlawed.

    2. Complexity

    All encryption models are founded on the concept of the complexity of certain computations. This concept, while accepted historically as practical experience and belief, ignores the fundamental mathematics, namely dimensional and hyper operational scaling aspects of machines composition. Let's face it: Complexity is a Belief.

    3. Expectation that 'hardware is safe' while 'software is vulnerable'. Similarly, an expectation that 'what's encrypted is safe' while 'what's unencrypted is vulnerable'.

    This is a fundamental mistake. In Cybernetics theory, hardware and software are perfectly equal. In technical praxis, real hardware is 'fixed' while software is 'unfixed'. People falsely expect that fixed things cannot be changed.
    Yet any attack on hardware works the same as one on software, if it can be prefixed properly by some abstract 'unfix' operation. Inventing and technically constructing such an operation is the initial stage of every attack. That works on any barrier system, not just digital ones. Every practical realization of an 'unfix' usually involves adding a new machine to the system, no matter whether it is a program or an electronic tool.

    And this is what the said raspi crackers actually did. Do you remember the saying, "A hacker without an oscilloscope is not a real hacker"?

    All of the above (Ignorance, Belief, Expectation) is actually a model in the Social paradigm, not the Cybernetics paradigm. Observably, a paradigm clash. So it is doomed to fail against a determined true cyberpunk with cultivated skills. Always.

    --
    Rust programming language offends both my Intelligence and my Spirit.
    • (Score: 3, Interesting) by Opportunist on Saturday February 10 2024, @10:48AM (1 child)

      by Opportunist (5545) on Saturday February 10 2024, @10:48AM (#1343831)

      In technical praxis, real hardware is 'fixed' while software is 'unfixed'. People falsely expected fixed things cannot be changed.

      Or rather: it cannot be changed, which becomes inconvenient when there is a fundamental flaw that can NOT be fixed precisely because certain aspects of the hardware cannot be changed.

      Attacks on hardware usually require more (financial) effort. Attacking hardware often means that you yourself have to have some kind of hardware at your disposal. And hardware is kinda hard to multiply by copying, unlike software where this is a possibility. Whether legally or illegally is moot in this context, what matters is that you cannot simply copy hardware "for free", if at all.

      Then there is the skill set. Hardware hacking requires you to know a bit about electrical engineering, a skill that is surprisingly lacking in most computer enthusiasts these days. Sure, the old farts of course have a solid EE background, but anyone under 30? Hell, under 40? Not necessarily so. How many people, professional IT security personnel and "hackers" alike, can say these days that they know how to use an oscilloscope or sensibly employ a logic analyzer? Hell, how many would even know what frequencies their logic analyzers and oscilloscopes would have to handle, and how many channels, if they wanted to analyze some particular piece of hardware?

      Security in hardware relies to a nontrivial degree on "security by obscurity". Not because it would be so much harder to dissect hardware than to take apart software (OK, that too, but it is the lesser problem for all but the most obscure and bespoke chip designs), but simply because the number of people with the relevant skills is way, WAY lower.

      • (Score: 3, Interesting) by sgleysti on Sunday February 11 2024, @02:23AM

        by sgleysti (56) Subscriber Badge on Sunday February 11 2024, @02:23AM (#1343937)

        I watched the video, and it blows my mind that they send the encryption key in cleartext over a simple digital bus. If the CPU and the TPM established a random symmetric key (AES-256 or similar) on first system boot, this would be a lot harder to pull off. An attacker would have to figure out how to read the stored key from the nonvolatile memory in the CPU or TPM. At least in microcontrollers, there tend to be security mechanisms to disable such reading, some that go so far as to irreversibly disable the programming/debug interface.
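
        The fix the commenter suggests corresponds to what TPM 2.0 calls parameter encryption: both endpoints derive a session key and encrypt the sensitive payload before it touches the bus. A toy Python sketch of the idea, with an HMAC-based derivation and a hash-counter keystream standing in for the real KDF and AES modes (the names and message layout are illustrative, not the TPM specification):

```python
import hashlib
import hmac
import os

def derive_session_key(secret: bytes, nonce_cpu: bytes, nonce_tpm: bytes) -> bytes:
    """Both sides derive the same key from a provisioning-time secret plus fresh nonces."""
    return hmac.new(secret, b"bus-session" + nonce_cpu + nonce_tpm, hashlib.sha256).digest()

def xor_keystream(key: bytes, data: bytes) -> bytes:
    """Toy stream cipher (counter-hash keystream); a stand-in for AES, illustration only."""
    out = bytearray()
    for block in range((len(data) + 31) // 32):
        ks = hashlib.sha256(key + block.to_bytes(4, "big")).digest()
        chunk = data[block * 32:(block + 1) * 32]
        out.extend(b ^ k for b, k in zip(chunk, ks))
    return bytes(out)

# Shared secret provisioned once at first boot; never sent over the bus afterwards.
secret = os.urandom(32)
n_cpu, n_tpm = os.urandom(16), os.urandom(16)
k = derive_session_key(secret, n_cpu, n_tpm)

vmk = os.urandom(32)              # the key BitLocker actually needs
wire = xor_keystream(k, vmk)      # what a bus sniffer would now capture
assert wire != vmk                # ciphertext on the wire...
assert xor_keystream(k, wire) == vmk  # ...but the CPU recovers the VMK
```

        With a scheme like this, the Pico attack would capture only ciphertext; the attacker would instead have to extract the provisioned secret from nonvolatile storage, which is exactly the harder problem the comment describes.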

        Of course, people can still do power analysis attacks or try to glitch the integrated circuit by doing weird things to its power supply and/or I/O pins, but those attacks are next level.

    • (Score: 2, Funny) by shrewdsheep on Saturday February 10 2024, @04:51PM

      by shrewdsheep (5215) Subscriber Badge on Saturday February 10 2024, @04:51PM (#1343869)

      the fundamental mathematics, namely dimensional and hyper operational scaling aspects of machines composition.

      LOLROTL. That was probably ripped straight out of "Quantum-dimensional higher belief criticism of post-cultural social contracts in the transition of meta-operational conflicts to convolutional mediation abstraction."

  • (Score: 5, Funny) by julian on Saturday February 10 2024, @08:46AM (3 children)

    by julian (6003) on Saturday February 10 2024, @08:46AM (#1343821)

    A mitigation is to use VeraCrypt [veracrypt.fr].

    • (Score: 4, Funny) by Anonymous Coward on Saturday February 10 2024, @11:02AM (1 child)

      by Anonymous Coward on Saturday February 10 2024, @11:02AM (#1343835)

      How is this funny?

    • (Score: 3, Interesting) by sgleysti on Sunday February 11 2024, @02:06AM

      by sgleysti (56) Subscriber Badge on Sunday February 11 2024, @02:06AM (#1343934)

      What seems like a lifetime ago, computer security was part of my job. We used TrueCrypt, the predecessor to VeraCrypt. When the computer booted up, it displayed a screen that looked like a BIOS disk error. We had to type in a very long password and hit enter; only after typing the whole password correctly and hitting enter would the screen change and the computer boot. If we were ever in a situation where the adversary in our threat model asked us to open the computer, we were supposed to claim that it was broken...

      Thankfully, I never ended up in such a situation.

  • (Score: 4, Funny) by Rosco P. Coltrane on Saturday February 10 2024, @09:38AM (3 children)

    by Rosco P. Coltrane (4757) on Saturday February 10 2024, @09:38AM (#1343824)

    Admittedly, this cool hack works because the TPM chip isn't integrated into the CPU. But that's less and less common.

    More generally, the question when you have encrypted things is: where do you store the key? Or to put it another way: the encrypted data is only as safe as the place your key is stored.

    • The safest place to store keys is in your brain. But good luck remembering a key that's long enough to be cryptographically safe.
    • The next best thing is to store a smaller passphrase in your brain that decrypts the key that decrypts your hard drive. That's an incredibly common thing to do and it's a safe compromise. But it requires typing something, which sucks.
    • TPM chips store the encryption key and have a variety of hardware verification mechanisms to ensure the hardware hasn't been tampered with, and will only serve up the key to a process that hasn't broken the chain of trust. It's great because you don't have to type a password at boot time. But there are issues, such as recovering the data from another computer in the event of complete hardware failure, the vulnerability highlighted by this hack (which will work less and less often, though), or the fact that the T in TPM doesn't necessarily stand for Trusted [davescomputertips.com]. Also, if your computer doesn't have a TPM, you're SOL. TPM-less computers tend to be rare now, but there are plenty of good older machines that don't have one.

    My own solution is to carry my disk encryption key in my body, in my cryptographic implant [dangerousthings.com]. I use this piece of software [github.com], which can be used with LUKS to get the key to decrypt the volume from the implant in a cryptographically secure manner: my laptop boots, waits for me to present my hand to the NFC reader, and then finishes starting up normally. I don't have to type anything, and the only way to decrypt my hard disk is to cut off my hand, which is highly unlikely.
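
    One way such an implant unlock can be built (sketched here in Python with hypothetical names; the linked project's actual protocol may differ) is HMAC challenge-response: the initramfs stores a fixed challenge, the implant holds the secret, and the implant's response doubles as the LUKS passphrase:

```python
import hashlib
import hmac
import os

class ImplantTag:
    """Stand-in for the NFC implant: holds a secret and answers challenges."""
    def __init__(self, secret: bytes):
        self._secret = secret

    def respond(self, challenge: bytes) -> bytes:
        # The secret never leaves the tag; only the HMAC response does.
        return hmac.new(self._secret, challenge, hashlib.sha256).digest()

# Enrollment: pick a random challenge; the tag's response becomes the passphrase.
secret = os.urandom(32)        # provisioned inside the implant
tag = ImplantTag(secret)
challenge = os.urandom(16)     # stored in the initramfs; useless on its own
luks_passphrase = tag.respond(challenge)

# Boot: present the hand, replay the stored challenge, recover the passphrase.
assert ImplantTag(secret).respond(challenge) == luks_passphrase
# A different tag (no secret) cannot reproduce it from the stored challenge.
assert ImplantTag(os.urandom(32)).respond(challenge) != luks_passphrase
```

    The design point is that nothing recoverable from the laptop alone (the challenge, the boot scripts) yields the disk key; only the physical tag's response does.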

    Of course, that's only any good for a machine I'm physically around. Obviously it's not practical for my servers that need to reboot unattended. But it's great for securing machines I don't want anybody to access when I'm not around, and it doesn't rely on TPM.

    • (Score: -1, Redundant) by Anonymous Coward on Saturday February 10 2024, @11:04AM

      by Anonymous Coward on Saturday February 10 2024, @11:04AM (#1343836)

      How is this funny?

    • (Score: 3, Interesting) by sgleysti on Sunday February 11 2024, @02:30AM (1 child)

      by sgleysti (56) Subscriber Badge on Sunday February 11 2024, @02:30AM (#1343939)

      Is there no PIN or other passphrase needed to access the key in your implant? I ask because, as far as I understand, the Fifth Amendment in the U.S. applies to the contents of a person's mind (combination, passphrase, PIN, etc.) but does not apply to things such as printed passphrases, physical keys (e.g., metal ones, like for a safe), and so on. If the contents of your encrypted computers were subpoenaed and the mere act of moving your hand near the computer was sufficient to grant access, you might be required to do so. The same probably applies to fingerprint locks on phones.

      Not that modern security infrastructure isn't effectively swiss cheese across the board, granting law enforcement access by other means, but the idea is still worth thinking about.

      • (Score: 3, Interesting) by Rosco P. Coltrane on Sunday February 11 2024, @12:07PM

        by Rosco P. Coltrane (4757) on Sunday February 11 2024, @12:07PM (#1343961)

        I have a PIN set up, but it's hard-coded into the bootup script. I don't really care too much about that, as my threat model isn't that high for my laptops. But the PIN is set in the chip so you can't sneak up on me with a cellphone and grab TOTP codes to log into something.

        The failure mode with implants is when you sleep, or when you're passed out if you're into drinking too much: then someone can scan your hand and do stuff. Otherwise, there's practically no way to communicate with an implanted NFC transponder without the implantee noticing: the range is very short and the orientation of the antenna needs to be very deliberate.

        I don't drink, and I don't sleep so heavily that I wouldn't wake up if someone came up to me with a running laptop and slid it under my hand, especially since it would take some time to orient it right to effect a read. A cellphone is easier to position, so the PIN is there to prevent that. Although, by design, cellphones always emit a loud beep when they do anything over NFC, even when muted, to prevent bad guys from sneakily performing NFC operations such as payments. I would definitely wake up if a cellphone went bing next to me in my sleep.

        And, I know it's security through obscurity, but still... you have to know I have implants in me. That's definitely not obvious without going through an X-ray machine, and even the X-ray machine can miss it if it's not set up right. I know; I tried. The scripts in my laptops make no mention of implants in the comments, just NFC transponders: if someone were to analyze the initrd image on my laptop, they would deduce I have a cryptographic NFC card hidden somewhere. And you know, whoops, I lost it.

        All in all, for a nobody like me, it's plenty good enough security.

  • (Score: 2) by choose another one on Saturday February 10 2024, @01:16PM (2 children)

    by choose another one (515) on Saturday February 10 2024, @01:16PM (#1343846)

    If you’ve got to get a soldering iron out and warmed up and then accurately solder, I think I’d dispute the less-than-a-minute claim.
    Certainly if I’m doing the soldering.

    • (Score: 5, Informative) by MadTinfoilHatter on Saturday February 10 2024, @03:37PM (1 child)

      by MadTinfoilHatter (4635) on Saturday February 10 2024, @03:37PM (#1343862)

      He didn't do any soldering; the soldering was only what Microsoft claimed was necessary to pull this off. He used a customized Raspberry Pi Pico with a push-pin connector that matched the hardware in question. The process took ~42 seconds. See the video here: https://youtu.be/wTl4vEednkQ [youtu.be]

      • (Score: 2) by Freeman on Monday February 12 2024, @03:46PM

        by Freeman (732) on Monday February 12 2024, @03:46PM (#1344100) Journal

        Push-pin connectors for the win is what I'm hearing. I can solder a couple of wires together, but the minute we start talking about fine detail is where I go: yep, not gonna work for me. It might help if I had a nice soldering iron and setup (not that I'd even know the first thing about whether I had a good one or not), but I have also never had a steady hand. The smaller the part, the steadier you need to be. I'm sure there are ways you could get around that, but I'm sure those ways also increase the amount spent on said project, especially when the thing I'm looking to save a buck on costs less than a decent soldering iron and setup.

        --
        Joshua 1:9 "Be strong and of a good courage; be not afraid, neither be thou dismayed: for the Lord thy God is with thee"
  • (Score: 2, Interesting) by pTamok on Saturday February 10 2024, @04:12PM

    by pTamok (3042) on Saturday February 10 2024, @04:12PM (#1343865)

    Sometimes, bone-headed 'security' snafus are deliberate to allow three-letter agencies plausible and deniable access to something that definitely isn't a back-door.

    Campaigning for explicit back-door access is an obvious thing to do when you already have it, because it makes people think that you don't.

    Intel and AMD are American companies.
    Alphabet, Apple, and Microsoft are American companies.

    Not only can they be forced to 'design in' plausibly deniable back-doors, they can be forced not to disclose it, and to deny that it happens.

    And the Security and Intelligence agencies do fun stuff like intercept Cisco routers in transit to a customer, open them up, add a backdoor, then repackage them and send them on their way, with Cisco denying they knew of the operation.

    https://www.techradar.com/news/networking/routers-storage/photos-reveal-nsa-tampered-with-cisco-router-prior-to-export-1249191 [techradar.com]
    https://arstechnica.com/tech-policy/2014/05/photos-of-an-nsa-upgrade-factory-show-cisco-router-getting-implant/ [arstechnica.com]

    And there will be more than one plausibly deniable back-door.

    For most people, it is not a problem, because they are not targets of interest to the Security and Intelligence services of the USA.

    Poor design is sometimes deliberate. And, sometimes, just poor design.
