Google acquires SlickLogin: dogs go wild!
SlickLogin, an Israeli start-up, is behind the technology that allows websites to verify a user's identity by using sound waves. It works by playing a uniquely generated, nearly-silent sound through your computer speakers, which is picked up by an app on your smartphone. The app analyses the sound and sends a signal back to confirm your identity.
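The underlying idea is a challenge-response handshake carried over an audio channel. A minimal sketch of how that might be wired up (the key handling and function names below are guesses for illustration, not SlickLogin's actual protocol):

    import hashlib, hmac, os

    # Server: generate a one-time challenge; the nonce gets modulated into a
    # nearly-inaudible sound and played through the computer's speakers.
    def make_challenge(shared_key: bytes) -> tuple[bytes, bytes]:
        nonce = os.urandom(16)
        expected = hmac.new(shared_key, nonce, hashlib.sha256).digest()
        return nonce, expected

    # Phone app: decode the nonce from the microphone, answer over the network.
    def answer_challenge(shared_key: bytes, decoded_nonce: bytes) -> bytes:
        return hmac.new(shared_key, decoded_nonce, hashlib.sha256).digest()

    # Server: the login succeeds only if the phone actually heard this session's sound.
    def verify(expected: bytes, response: bytes) -> bool:
        return hmac.compare_digest(expected, response)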
The firm confirmed the acquisition on its website but did not provide any financial details of the deal.
Too bad they don't still put whistles inside packages of Cap'n Crunch cereal!
(Score: 3, Interesting) by Angry Jesus on Tuesday February 18 2014, @01:41AM
My guess is that they are "fingerprinting" the phone's microphone in order to make it into a unique token. Kind of like the way every camera lens uniquely distorts images, so that if you know what the picture should look like you can figure out which camera took it by comparing the original to the photograph.
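If that guess is right, a naive version might look something like this (purely illustrative - it assumes the app can record a known reference signal at enrollment and again at login):

    import numpy as np

    # Rough idea: play a known reference signal, record it on the phone, and keep a
    # coarse per-band response as the device's "fingerprint". Purely hypothetical.
    def spectral_fingerprint(recording: np.ndarray, n_bands: int = 32) -> np.ndarray:
        spectrum = np.abs(np.fft.rfft(recording))
        bands = np.array_split(spectrum, n_bands)
        levels = np.array([band.mean() for band in bands])
        return levels / levels.sum()          # normalize out overall volume

    def same_device(enrolled: np.ndarray, now: np.ndarray, threshold: float = 0.95) -> bool:
        # Cosine similarity: "good enough" matching, not a perfect identification.
        sim = float(np.dot(enrolled, now) / (np.linalg.norm(enrolled) * np.linalg.norm(now)))
        return sim >= threshold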
(Score: 1) by Nerdfest on Tuesday February 18 2014, @02:02AM
Probably not reliable enough and wouldn't work for people with multiple devices. Great idea if there's enough identifiable distinction though.
(Score: 4, Informative) by tftp on Tuesday February 18 2014, @02:04AM
My guess is that they are "fingerprinting" the phone's microphone in order to make it into a unique token.
Impossible for 3 reasons:
(Score: 2, Informative) by Angry Jesus on Tuesday February 18 2014, @02:55AM
1. Many phones may have microphones with the same characteristics (they are made by a repeatable process)
Manufacturing tolerances always vary, especially for consumer-grade equipment. The chance that someone trying to crack your account has the same set of variations is going to be small. This isn't the kind of thing that needs to be perfect, it just needs to be good enough, like the iPhone's fingerprint sensor.
2. The phone's response is affected by the environment (echo, attenuation, external noises, holsters, bumpers, hands.)
Those are all in a completely different category of variation. Echo? That's time-domain, not even frequency-domain.
3. The speakers that emit the sound are part of the deal... and you do not authenticate with them.
Doesn't matter, that's just noise to be filtered out. Sure, if the speakers are really bad, then it will be too noisy to work. But see the first point -- it just has to be good enough, not perfect.
(Score: 2, Informative) by tftp on Tuesday February 18 2014, @05:38AM
Manufacturing tolerances always vary, especially for consumer-grade equipment.
It takes pretty good test equipment (Rohde & Schwarz) and an anechoic chamber to decently characterize a microphone. I made some measurements in such a lab at university. I cannot imagine what you could measure in open air, using random sources that are "barely audible" and in the presence of stray signals.
Echo? That's time-domain, not even frequency domain.
As if Fourier hadn't shown that they are two interchangeable representations of the same physical process :-) In this case the echo adds another component with the same frequency and a different phase. These components add up, changing the amplitude of the resulting response... and because the delay is a fixed time, that change is frequency-dependent, so the frequency response gets peaks and valleys. That's how loudspeaker enclosures shape the frequency response - by using boundary conditions.
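Here's the comb that a single echo carves into the response, in a few lines (numbers picked purely for illustration):

    import numpy as np

    fs = 48000.0                       # sample rate, Hz
    tau = 0.002                        # 2 ms echo delay
    a = 0.5                            # echo amplitude relative to the direct sound
    f = np.linspace(0.0, fs / 2, 1000)

    # Direct sound plus one delayed copy: H(f) = 1 + a * exp(-j*2*pi*f*tau)
    H = 1 + a * np.exp(-2j * np.pi * f * tau)
    magnitude_db = 20 * np.log10(np.abs(H))

    print(magnitude_db.max(), magnitude_db.min())   # roughly +3.5 dB peaks, -6 dB notches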
Doesn't matter, that's just noise to be filtered out.
The frequency response of the system is mic(f) * speakers(f). If speakers change, the response changes as well. Since speakers and microphones are horribly nonlinear, the harmonic content will also be severely affected by different speakers.
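The cascade argument in code form (the response curves here are made up, only to show that the measurement depends on the speakers as much as on the mic):

    import numpy as np

    f = np.linspace(20.0, 20000.0, 500)

    def mic_response(f):
        return 1.0 / (1.0 + (f / 15000.0) ** 2)                  # made-up mic roll-off

    def speaker_response(f, cutoff):
        return (f / cutoff) ** 2 / (1.0 + (f / cutoff) ** 2)     # made-up speaker high-pass

    # What the phone actually measures is the product of both responses.
    with_good_speakers = mic_response(f) * speaker_response(f, cutoff=200.0)
    with_tinny_speakers = mic_response(f) * speaker_response(f, cutoff=800.0)

    # Same microphone, different speakers, very different measurement.
    print(np.max(np.abs(20 * np.log10(with_good_speakers / with_tinny_speakers))))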
(Score: 1) by Angry Jesus on Tuesday February 18 2014, @06:08AM
It takes pretty good test equipment (Rohde & Shwartz) and an anechoic chamber to decently characterize a microphone.
You are thinking about it completely in reverse - this isn't about minimizing distortion, it is simply about distinguishing between different units. Similar to the way that forensic DNA matching only looks at 10-12 markers, even though that is a tiny fraction of what would be needed to describe a human.
The frequency response of the system is mic(f) * speakers(f). If speakers change, the response changes as well
That's far too simplistic. Off the top of my head I can think of at least one method that isn't affected so straightforwardly - measuring harmonic response ratios. Even if the speakers' output levels vary at a specific frequency, the microphone will have its own set of harmonics in relation to the generated tones. The speaker will have its own harmonics too, but all that extra noise won't matter because we are only looking for the harmonic signature of the microphone. I'm sure there are other relationships that could be profiled if someone were to spend more than 30 seconds thinking about it.
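For what it's worth, the harmonic-ratio idea in a few lines (illustrative only - it assumes a pure tone at f0 is played and recorded, and that those ratios really are dominated by the mic rather than the speaker):

    import numpy as np

    # Hypothetical sketch: play a pure tone at f0, record it on the phone, and keep the
    # levels of the first few harmonics relative to the fundamental as a signature.
    def harmonic_ratios(recording: np.ndarray, sample_rate: int, f0: float,
                        n_harmonics: int = 5) -> np.ndarray:
        spectrum = np.abs(np.fft.rfft(recording))
        freqs = np.fft.rfftfreq(len(recording), d=1.0 / sample_rate)
        levels = []
        for k in range(1, n_harmonics + 1):
            idx = np.argmin(np.abs(freqs - k * f0))   # nearest FFT bin to the k-th harmonic
            levels.append(spectrum[idx])
        levels = np.array(levels)
        return levels / levels[0]                     # ratios relative to the fundamental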