It is reasonably well established that it is possible to determine what someone is typing from the sound of the keyboard, since different keys produce distinguishable sounds (source). This poses a significant problem for anyone who wants to defend against it. The threat models I have in mind are 1) VoIP software that is set to record audio, whether the VoIP provider does the recording or someone on the other end does, and 2) audio recording in public places. Both seem feasible, as the previously linked question points to open source software that implements this attack. What can be done to defend against these threats?
1 Answer
Not a lot, in terms of practical measures. The paper "Keyboard Acoustic Emanations" by Asonov and Agrawal suggests a few potential direct countermeasures:
- Use a non-mechanical keyboard, e.g. a rubber membrane keyboard. This eliminates a lot of the identifying audio that would otherwise be transmitted. Unfortunately they're not very comfortable to type on.
- Record key sounds from your keyboard and play them back continuously in a random order (see the sketch after this list). Personally I think this would drive me completely insane.
- Play very loud white noise and hope you don't end up with tinnitus.
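As a very rough illustration of the playback idea, here is a minimal Python sketch. It assumes you have already recorded a directory of short key-click samples (the `clicks/*.wav` path and the third-party `simpleaudio` library are just assumptions for this example) and simply replays them in random order with jittered, typing-speed delays:

```python
import glob
import random
import time

import simpleaudio  # third-party: pip install simpleaudio

# Hypothetical directory of short .wav recordings of individual key presses,
# ideally captured from your own keyboard.
CLICK_FILES = glob.glob("clicks/*.wav")


def mask_typing_forever():
    """Replay recorded key clicks in random order, forever, so that real
    keystrokes are buried among decoy key sounds."""
    clips = [simpleaudio.WaveObject.from_wave_file(path) for path in CLICK_FILES]
    while True:
        random.choice(clips).play()              # fire-and-forget playback
        time.sleep(random.uniform(0.05, 0.25))   # jitter at roughly typing speed


if __name__ == "__main__":
    mask_typing_forever()
```

How effective this would be depends on how closely the decoy clips match your real keystrokes; since the attack relies on subtle per-key differences, samples recorded from the same keyboard are more likely to blend in than generic click sounds.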
An indirect countermeasure is to not have any microphones within range. Don't use a microphone on your computer, and put your phones in some sort of soundproof container.
Ultimately, though, I suspect this threat is only applicable to cases where you're being targeted by a nation state actor, at which point, to quote James Mickens, you're still gonna be Mossad'ed upon.
Polynomial
- This answer overall seems very good, but it seems to have one serious flaw. Specifically, "this threat is only applicable to cases where you're being targeted by a nation state actor" is likely false. [This answer](https://security.stackexchange.com/a/198800/69496) shows that it is entirely viable for non-nation-state actors to implement this kind of attack, over VoIP software (perhaps the software is malicious) or perhaps a malicious phone microphone. – john01dav Mar 08 '19 at 11:10
- @john01dav The fact that I could implement such an attack myself does not mean that I am likely to do it. Why bother with such a convoluted approach that requires access to multiple devices, potential physical locality, and extensive audio processing work? If I have access to your phone I probably have your emails and a bunch of other access. If I have access to your house I might as well just drop a USB keylogger. When I say "this threat is only applicable to state actors" I mean that there's no sensible motivation for lesser actors to go that route. [Relevant xkcd](https://xkcd.com/538/). – Polynomial Mar 08 '19 at 11:35
- @Polynomial There are plenty of groups involved in industrial espionage which have technological capabilities on par with nation state actors. They are willing to go to great lengths to steal information using technical means, but don't (always) have the ability to use physical force or violence to do so. – forest Mar 08 '19 at 12:05
- @forest That's certainly another actor that might do this, but I still don't think it'll be top of anyone's list. – Polynomial Mar 08 '19 at 12:44
- @Polynomial, it's unlikely until someone builds a metasploit module to exploit it. After that point every script kiddy will be remotely locking screens and trying to listen for passwords to be reentered. – John Deters Mar 08 '19 at 16:30
- @JohnDeters You can't build a metasploit module for this type of attack. It requires a lot of known data (sounds matched to key values) and you still need a microphone on someone's system. – Polynomial Mar 08 '19 at 17:21
- @Polynomial You don't need a _microphone_ per se. You can use a laser against a window or another reflective surface with similar properties to remotely detect audio patterns. But it would still need a lot of known data as well as special hardware. – forest Mar 09 '19 at 06:41
- @forest Exactly my point. Laser mics against windows are also a bit of a myth iirc; you need something more reflective in line of sight. – Polynomial Mar 09 '19 at 10:43
- @Polynomial No, they work, but the audio quality is not ideal. The best (public) results I know of were merely able to determine the gender and likely language of people inside the room. It also only works on single-layer glass. I do recall reading a paper about the shadows cast by components of an incandescent lightbulb with a vibrating filament being used to record audio from a room, though... – forest Mar 09 '19 at 12:49