
posted by martyb on Monday March 02 2020, @07:36PM   Printer-friendly
from the cat-and-mouse dept.

Don't run your 2FA authenticator app on these smartphones:

Aaron Turner and Georgia Weidman emphasized that using authenticator apps, such as Authy or Google Authenticator, in two-factor authentication was better than using SMS-based 2FA. But, they said, an authenticator app is useless for security if the underlying mobile OS is out-of-date or the mobile device is otherwise insecure.

[...] The problem is that if an attacker or a piece of mobile malware can get into the kernel of iOS or Android, then it can do anything it wants, including presenting fake authenticator-app screens.
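(As background, not from the article: apps like Google Authenticator generate standard TOTP codes per RFC 6238 from a shared secret stored on the phone itself, which is why a kernel-level compromise is fatal — the attacker can read the secret as easily as the app can. A minimal sketch of that standard computation; the example secret is illustrative only:)

```python
import base64, hashlib, hmac, struct, time

def totp(secret_b32, interval=30, digits=6):
    """Standard TOTP (RFC 6238): HMAC-SHA1 over the current time step."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // interval            # current 30-second time step
    msg = struct.pack(">Q", counter)                  # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                        # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Example secret for illustration only; real secrets come from QR-code enrollment.
print(totp("JBSWY3DPEHPK3PXP"))
```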

[...] And don't think iOS devices are safer than Android ones -- they're not. There are just as many known exploits for either one, and Weidman extracted the encryption keys from an older iPhone in a matter of seconds onstage.

The iPhone's Secure Enclave offers "some additional security, but the authenticator apps aren't using those elements," said Weidman, founder and chief technology officer of Washington-area mobile security provider Shevirah, Inc. "iOS is still good, but Android's [security-enhanced] SELinux is the bane of my existence as someone who's building exploits."

"We charge three times as much for an Android pentest than we charge for an iOS one," Turner said, referring to an exercise in which hackers are paid by a company to try to penetrate the company's security. "Fully patched Android is more difficult to go after."

[...] In short, "we need to move away from usernames and passwords," Turner said.

[...] "I am fundamentally opposed to using biometrics because it's non-revocable," Turner said, citing a famous case from Malaysia in which a man's index finger was cut off by a gang to steal the man's fingerprint-protected Mercedes. "Fingerprint readers are biometric toys."

The only form of two-factor authentication without security problems right now, Turner said, is a hardware security key such as a Yubikey or Google Titan key.

"I've got two Yubikeys on me right now," Turner said. "Hardware separation is your friend."


Original Submission

 
  • (Score: 1) by ze (8197) on Monday March 02 2020, @11:10PM (#965724)

    tl;dr: Basically, the machine recognizes you the same way your friends/family/pets know you're you, only as if one of them specialized in it 24/7 with superhuman (and additional) senses and reliability. But please read all the nuances if you want to argue ;)

    This might sound crazy to a lot of people, but what I think I'd like for authentication is simply for the machine to know me, and the condition I'm in, better than any human could, on the basis of dozens or more factors correlated from many sensor inputs and sufficiently accurate recognition systems.
    For example, the sound of a single heartbeat (which, I find, an amplified and sensitive mic can pick up from 20 feet away) probably has a unique acoustic resonance specific to each body... same for the sound of a single breath... add in actual heart rate and breathing patterns, recognition of faces, height, weight, build, and other specific physical characteristics, mannerisms, voice, odors, etc.
    Human bodies are full of unique signatures, and if you correlate enough of them and track changes over time, you get a unique and reliable recognition profile for an individual and their mental/physical states.
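    (To make the correlation idea concrete — purely illustrative, and every factor name and number here is made up: one simple scheme scores each reading against a learned per-factor baseline and multiplies the plausibilities, so faking any single metric isn't enough.)

```python
import math

# Hypothetical learned profile: per-factor (baseline mean, typical std dev).
# Factor names and values are invented for illustration.
profile = {
    "heartbeat_resonance_hz": (412.0, 6.0),
    "breath_spectrum_peak":   (0.31, 0.04),
    "gait_cadence_hz":        (1.80, 0.15),
    "voice_f0_hz":            (118.0, 9.0),
}

def match_confidence(readings):
    """Multiply per-factor plausibilities: every factor has to look
    right at once for the overall score to stay high."""
    score = 1.0
    for name, value in readings.items():
        mean, std = profile[name]
        z = abs(value - mean) / std          # deviations from baseline
        score *= math.exp(-0.5 * z * z)      # ~1 near baseline, ~0 past 4 sigma
    return score

readings = {"heartbeat_resonance_hz": 409.5, "breath_spectrum_peak": 0.33,
            "gait_cadence_hz": 1.75, "voice_f0_hz": 121.0}
print(match_confidence(readings))  # high only when all factors fit the profile
```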
    This would require tons of training, on devices you trust to keep this info secure (no cloud or phoning home)... the device that knows you might even stay on your person, maybe with daily data backups stored in a safe. Basically, it would act as a hardware security key, but one that won't authenticate for anyone but you.
    It should train its profile on you continuously, so that it can tell not only your baseline on each parameter, but the typical variances, how they shift over time, and their recent history...
    This means it doesn't just recognize your typical state, but your full range, and it keeps up to date on your present condition. Even if someone could capture all this data and play it back to the system convincingly, the system could tell that the data was too stale. It would still recognize you after you gained weight, because it watched you do it, and it would see a big red flag if you inexplicably lost it all in a day. It would alert on any inconsistencies, like an injury detected one moment that's gone the next.
    It could also go on alert whenever it loses and regains awareness of you, treating the re-established data stream with extra scrutiny until it has seen enough to regain confidence (or that event might even be an alert condition in itself). And it could tell if a doppelganger were trying to fool it while it could still see you there too.
    Basically, it's easy to fake any single metric, but the whole messy collection would be sensitive to anything wrong about it. Not in a flaky way, where a fingerprint reader fails whenever it can't get a perfect match, but in a robust way: "I can't get clear and complete readings on your fingerprints right now, but that's OK, because what I can see of them is useful and correct, a hundred other factors are too, and none of them or their relationships look alarmingly unusual for you" (unless they actually do).
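    (Again just a sketch of my own, not a worked design: the "watched you do it" part could be as simple as per-factor running averages that adapt slowly, plus a staleness check on gaps in the data stream. The thresholds and time constants below are arbitrary.)

```python
import time

class FactorTracker:
    """Track one factor's baseline and typical variance with slow-moving
    exponential averages; flag sudden jumps and stale/gappy data streams."""

    def __init__(self, alpha=0.01, max_gap_s=60.0):
        self.alpha = alpha            # small alpha: baseline adapts slowly
        self.max_gap_s = max_gap_s    # gaps longer than this demand extra scrutiny
        self.mean = None
        self.var = 1.0
        self.last_seen = None

    def update(self, value):
        now = time.time()
        if self.mean is None:         # first reading: start learning
            self.mean, self.last_seen = value, now
            return "learning"
        stale = now - self.last_seen > self.max_gap_s
        dev = value - self.mean
        z = abs(dev) / (self.var ** 0.5 + 1e-9)
        # Gradual change (weight gained over months) gets folded into the
        # baseline; an overnight jump still scores as a large deviation.
        self.mean += self.alpha * dev
        self.var = (1 - self.alpha) * self.var + self.alpha * dev * dev
        self.last_seen = now
        if z > 4.0:
            return "anomaly"          # inconsistent with your known range
        return "re-verify" if stale else "ok"
```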
    Note I've also mentioned mental state... it can see your expression, heart rate, breathing, pupil diameter, and so many other subtle and correlatable factors all the time, and could learn to distinguish your states more reliably than your parents or partner can.

    "I'm sorry, Dave, but it looks like you're being coerced. I can't open the door until either you or your mysterious companions step out of the secure entryway so we can chat about it." for one example of how it might go...

    Note also that I am against biometrics that lack this level of sophistication. If it can't tell that you're really you, and that you're alive and well (as much as can reasonably be expected), and not being coerced (and distinguish that from just being irritated, excited, exercising recently, etc), then it's just a liability/backdoor.
    Also of potential relevance: the actual private crypto keys your polymetric device uses to sign auth requests can be revoked if needed. But your polymetric profile itself would never need to be revoked, 'cause no one can sufficiently steal it from you.
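    (To illustrate that split, hypothetically: the profile only gates a perfectly ordinary signing key, so revocation works the usual way. The sketch below uses Ed25519 from the Python 'cryptography' package; the wrapper class and the confidence threshold are invented.)

```python
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

class PolymetricToken:
    """Hypothetical device-side wrapper: the biometric profile never leaves
    the device; the outside world only ever sees signatures from a
    replaceable key."""

    def __init__(self):
        self.key = Ed25519PrivateKey.generate()

    def rotate_key(self):
        # Revocation path: retire the old public key, publish a new one.
        self.key = Ed25519PrivateKey.generate()
        return self.key.public_key()

    def sign_auth_request(self, request: bytes, profile_confidence: float):
        # Only sign while the continuous profile positively recognizes the owner.
        if profile_confidence < 0.99:    # threshold is arbitrary
            raise PermissionError("owner not positively recognized")
        return self.key.sign(request)
```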

    The main weakness I can anticipate: if your device can capture all this data to build a profile, other devices may attempt to capture just as much and build their own profile with which to create a convincing simulation of you. Its constant awareness of you could be one safeguard, as could mixing in metrics that can't be read without physical contact or cooperation, the overall tracking of your condition and its trajectory, and the system demanding a depth of self-consistency that less complete or less current profiles couldn't live up to. Also, even with the data, simulating a consistent person in such detail seems like a much harder, higher-resource problem than merely recognizing what your body does naturally. And maybe we could keep finding new non-derivative factors to add to the profile over time, making it even harder on anyone trying to build a fake one.
    But it's also possible this scheme wouldn't really work as well as it would need to. *shrug*