48

Lately I've been reading about things like BadUSB and the USB Rubber Ducky, which are essentially USB sticks that tell the computer they are a keyboard. Once they are plugged in, they "type in" whatever commands they were told to execute. My question is, why are keyboards automatically trusted in almost every OS? For example, if an OS detects a new keyboard being plugged in, why not pop up a password prompt and disallow that keyboard from doing anything until it enters the password? It doesn't seem like this would create a ton of usability issues. Is there a reason why this or another protection measure isn't used?

trallgorm
  • How would you type in the password for the keyboard without the keyboard?! – spacetyper Feb 04 '16 at 15:44
  • @spacetyper You would use the new keyboard. The point was that it would not, for example, be able to "press" the shortcut for cmd and then enter commands through there, until it entered the password. I don't see any harm in just letting it type stuff into the password field. – trallgorm Feb 04 '16 at 15:48
  • If the bad guys get physical access to your machine long enough to plug a USB device into it, you've already lost. – Mindwin Remember Monica Feb 04 '16 at 17:49
  • @Mindwin: The bad guys only need to sell you a fake keyboard. They don't need physical access to your computer. – Lightness Races in Orbit Feb 04 '16 at 18:59
  • I think a better way to phrase this is 'Do any operating systems implement a security model for USB devices similar to Bluetooth pairing?' – Jeff Sacksteder Feb 04 '16 at 21:25
  • The following heuristic is far from foolproof, but you might be able to catch a lot of cases if the computer presented a dialog when a *second* keyboard was plugged in. (Almost) everyone wants to use a keyboard, but not so many want to use two at once, so adding a keyboard when one is already present is perhaps a reasonable indicator that something is strange. The threshold could be made configurable, for people who know that they routinely use other keyboard-like devices: even if you routinely use a keyboard, the G700s, and an Optimus Mini, you probably don't want to use four keyboards. (A sketch of this heuristic follows this comment thread.) – The Spooniest Feb 04 '16 at 21:36
  • @TheSpooniest You can do that, for example, with USBGuard, which was presented at this year's FOSDEM: https://fosdem.org/2016/schedule/event/usbguard/ . The video of the talk is already online: http://mirrors.dotsrc.org/fosdem/2016/h1309/usbguard.mp4 . If I recall correctly, you can configure it so that the first keyboard is allowed but the second USB device wanting to use the keyboard interface gets blocked. Disclaimer: I'm not involved in the project. I only heard the talk. – Sumyrda - remember Monica Feb 04 '16 at 22:05
  • Comments are not for extended discussion; this conversation has been [moved to chat](http://chat.stackexchange.com/rooms/35361/discussion-on-question-by-trallgorm-why-dont-oses-protect-against-untrusted-usb). – Rory Alsop Feb 05 '16 at 19:51
  • Requiring a password typed with the keyboard (or some keyboard at least) is not a bad idea. But be aware that there are legitimate use cases for keyboards with subsets of keys. The obvious example is the 10-key pads that used to be common to allow laptops to be used comfortably by accountants. But custom keyboards intended to provide lots of keyed macro sequences or extra F-keys also have their place in fields like CAD as well as computer games. So you probably can't require an arbitrary string to be typed on just the new keyboard. – RBerteig Feb 05 '16 at 22:50
  • I feel like a lot of the answers are missing the fact that you've asked about "keyboards" that look nothing like a keyboard. You might want to highlight that. I think people are a lot more likely to use a free 16GB thumb drive they got from some conference than buy a sketchy keyboard from some random company. And when the OS asks them to confirm the new "keyboard" they're more likely to shut it down. Note that you don't even have to render the thumb drive unusable: just don't load the keyboard drivers if the user is unsure. – MichaelS Feb 06 '16 at 03:20
  • Why does it need a typed password? The OS or BIOS can display a prompt and the user can click OK. But then you've got the same issue with a potentially malicious mouse. – Lie Ryan Feb 06 '16 at 07:28
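
(A minimal sketch of The Spooniest's second-keyboard heuristic, and roughly the behavior Sumyrda attributes to USBGuard, using the pyudev library on Linux. It assumes udev tags keyboard-like devices with ID_INPUT_KEYBOARD; it is illustrative only, not how USBGuard is actually implemented.)

```python
import pyudev  # third-party: pip install pyudev

context = pyudev.Context()

def keyboard_count():
    # udev's input_id builtin tags keyboard-like devices with ID_INPUT_KEYBOARD;
    # counting only entries that have a device node skips duplicate parents.
    return sum(1 for d in context.list_devices(subsystem="input")
               if d.get("ID_INPUT_KEYBOARD") == "1" and d.device_node)

baseline = keyboard_count()  # keyboards present at startup are trusted

monitor = pyudev.Monitor.from_netlink(context)
monitor.filter_by(subsystem="input")

for device in iter(monitor.poll, None):  # blocks, yielding one event at a time
    if device.action == "add" and device.get("ID_INPUT_KEYBOARD") == "1":
        if keyboard_count() > baseline:
            # A real tool would quarantine the device here and ask the user;
            # this sketch only reports it.
            print("Suspicious: an additional keyboard appeared:", device.sys_name)
```

The configurable threshold The Spooniest mentions would simply replace the `> baseline` comparison.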

6 Answers

62

The trust model for a device you plug into your computer is just inherently difficult. The USB standard was created to allow literally anyone to create a USB device; security wasn't a factor. Even if it had been, where do you place the trust? The movie industry tried this model with HDMI, and it has essentially failed miserably. You can't simultaneously give someone a device that does something and prevent them from understanding how to do the same thing.

Your example proposes to put the trust in the user. The most obvious problem is that nobody wants to type in a password just to use a keyboard. But even setting that aside, would it really solve anything?

The user already trusts the device, otherwise they wouldn't be plugging it into their computer. Since trust has already been established, why wouldn't they simply do whatever is required to get it to work?

Steve Sether
  • Comments are not for extended discussion; this conversation has been [moved to chat](http://chat.stackexchange.com/rooms/35341/discussion-on-answer-by-steve-sether-why-dont-oses-protect-against-untrusted-us). – Rory Alsop Feb 05 '16 at 11:43
  • HDMI is not a good comparison because it is an entirely different threat model. In the case of HDMI, a third party who is neither the manufacturer nor the owner of the hardware perceives the owner as a threat and wants the manufacturer to mitigate that threat. In the case of USB, the owner of the hardware actually has an interest in the security working. So this is the difference between **protecting the owner** and **protecting against the owner**. – kasperd Feb 05 '16 at 19:12
  • @kasperd While that's true, the point is really that you can't keep a secret about how to implement hardware when you sell it to everyone. If you place the trust in the hardware, and the big secret is in the hardware, then there's really nothing you can do to prevent whatever the secret is from getting out. – Steve Sether Feb 05 '16 at 19:22
  • The question asks about a security model for USB. I don't see it asking for that security model to be kept secret. Neither does the question ask for anything that would require any aspect of the hardware design to be kept secret. The usability issues could be addressed by applying the confirmation only to hardware plugged in after the system was booted, which is the main threat in this question. – kasperd Feb 05 '16 at 19:37
  • @kasperd I'm of the opinion that the best answers look at the question as a whole rather than narrowly focusing on one aspect or another. Your approach of trying to get everything down to something "well defined" loses all the meaning in the process. As we narrow down the world into smaller and smaller pieces and get things as precise as possible, we wind up losing the connection to the real world. – Steve Sether Feb 05 '16 at 19:44
  • If the requirement is to type a password on the new keyboard, and the keyboard is shaped like a USB stick, how then does the user go about doing "whatever is required to get it to work"? – Andreas Feb 06 '16 at 12:43
  • @Andreas User types in the password as requested. You're assuming suspicion on the part of the user, which I don't. People aren't security researchers, and they don't understand security even a little bit. – Steve Sether Feb 06 '16 at 15:04
  • @SteveSether How do they type using the USB stick? – Andreas Feb 06 '16 at 15:07
  • @Andreas The USB stick has a USB port in it, and the user is asked to plug the keyboard into it and type the password. – Steve Sether Feb 06 '16 at 15:22
  • @SteveSether Ah. That's wonderful :) – Andreas Feb 06 '16 at 15:27
21

For a start, keyboards tend to be trusted from a lot earlier in the boot process than the OS - if you have a BIOS password or a BitLocker key, you'll enter it before the OS has loaded, using the keyboard. In fact, a particularly malicious keyboard could do pretty much anything to prevent the OS from loading, up to, and probably including, pretending to be a bootable drive and starting up a rootkit before letting the OS start.

You could also extend the same rules to mice (they could click on a predefined set of points to open the virtual keyboard, then type whatever they like).

Alternatively, you could decide that you will only use devices you trust, and accept the slim risk of bad things happening.

Matthew
  • I'm not sure this answer is complete. While worrying about the boot-time problem is important, the BadUSB example is more of a plug-and-play vulnerability. – RoraΖ Feb 04 '16 at 15:28
7

The answer is usability.

How should the user give consent that a mouse or keyboard is trusted? With the very keyboard or mouse that could be malicious? How do you handle the case where someone has to swap or replace the keyboard? Especially in a server scenario, you have multiple keyboards and mice stored somewhere else, and you grab whichever is closest when you need physical access to a server. You will not remember which keyboard belonged to which server after months or years, and the keyboard might even get destroyed. How to use the replacement keyboard? Give your consent with the unknown keyboard? How to do this with the first keyboard? Say you try out a friend's new PC with your own keyboard and then hand the machine over: how should your friend give consent with his keyboard? Edit: You could ask for a password before first use, but see my second-to-last paragraph.

So basically the unsolved question is: how can the computer establish a trusted/secure connection to the user that cannot be faked or circumvented by other hardware, software, or bad guys, in an easily usable way?

Law #3 of the 10 Immutable Laws of Computer Security is: "If a bad guy has unrestricted physical access to your computer, it's not your computer anymore." If you plug in a BadUSB device from a bad guy, you are the bad guy's minion and give him physical access by proxy. Notice that there are similar, even worse attacks than BadUSB. For example, plugging a device from a bad guy into a FireWire or other DMA-capable interface lets him read/write arbitrary memory, run arbitrary code, and even circumvent the lock screens of Windows/Linux/Mac. So it is best to never put an untrusted device into your computer.

Edit: Because of this rule, and because such attacks were not anticipated when the standard was designed (physical security was less important at that time, except in cases where physical access was restricted anyway), nothing like this ever became part of the standard. There were already many easier attacks possible with physical access, so it was not worth considering such a small edge case. It would also have massively increased the complexity of the system, especially if the authorization had to be shared between multiple operating systems and the BIOS ("Press F10 for BIOS"), and it is unclear how many authorizations to store. The next problem arises when deciding where to display the password prompt, especially if multiple monitors are detected (think of a laptop with a defective screen). All this would also have had a negative impact on acceptance by users, and an easier-to-use standard might have become the standard instead. Since the devices are produced by profit-oriented companies, the increased complexity (= cost) and lower acceptance (= fewer units sold) meant this slim edge case was not important at that time.

There is specialized software on the market that lets you define trusted USB devices for corporate high-security environments, but because of the points I mentioned it is not in broad use.
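
(To make that concrete: a rough sketch in the spirit of such whitelisting tools, not any particular product's implementation, using the pyudev library on Linux. The vendor:product IDs are placeholders, and writing the kernel's "authorized" sysfs attribute requires root.)

```python
import pyudev  # third-party: pip install pyudev

# Placeholder allowlist of (vendor_id, product_id) pairs, lowercase hex.
ALLOWED = {("046d", "c31c")}

context = pyudev.Context()
monitor = pyudev.Monitor.from_netlink(context)
monitor.filter_by(subsystem="usb", device_type="usb_device")

for device in iter(monitor.poll, None):
    if device.action != "add":
        continue
    vid = device.get("ID_VENDOR_ID")
    pid = device.get("ID_MODEL_ID")
    if (vid, pid) not in ALLOWED:
        # The kernel's device-authorization interface: writing "0" to the
        # device's "authorized" attribute detaches it from its drivers.
        with open(device.sys_path + "/authorized", "w") as f:
            f.write("0")
        print(f"Blocked untrusted USB device {vid}:{pid}")
```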

H. Idden
  • -1; *"How to use the replacement keyboard? Give your consent with the unknown keyboard? How to do this with the first keyboard?"* - yeah, why not? Assuming that this process is reasonably rate-limited so that the keyboard can't merely brute-force the password, this seems like a workable solution. It's one thing to say that this isn't worth the cost to usability, but you seem to be implying that it's inherently non-viable, and it isn't. – Mark Amery Feb 04 '16 at 18:57
  • @MarkAmery After thinking about it because of your comment, I see your (of course valid) point. My thought process went the following way: 1. Connect device. 2. Authorize device. 3. Install driver and make it accessible. 4. Interact with device. The other possibility I thought of was 1, 3, 4, 2. If one would do 1, 3, 2, 4 and only allow inputting the password before 4, it might indeed be possible. I need to rethink my answer and also consider things like drivers and BIOS mode ("Press F10 for BIOS"). Sharing this information between the BIOS and all installed OSes would also need to be solved. – H. Idden Feb 04 '16 at 19:23
  • @MarkAmery I hope I could improve the answer with your help. – H. Idden Feb 04 '16 at 19:45
  • The problem is nowhere near as hard as you make it sound. The OS should simply not allow new input devices without their being accepted via an existing input device. When you accept them, you should be able to select "accept once" or "always accept this device model on this specific USB port". – R.. GitHub STOP HELPING ICE Feb 05 '16 at 00:49
  • @R.. Let's say you have something like a Logitech Unifying Receiver working with your keyboard and mouse. Now, it breaks, and your local electronics store doesn't have any Logitech hardware but *does* stock Microsoft mice and keyboards. Explain to me how you would resolve this situation, in the scenario you describe, in a way that doesn't bring us right back to square one security-wise with regards to untrusted USB devices. – user Feb 05 '16 at 09:36
  • @R.. In addition to Michael's criticism, you're completely ignoring another thing about computer users - they will try to make any dialog go away without reading it or paying any attention to it. So you basically have two main routes the users will take - either the user presses X as soon as the dialog pops up, or he'll just click Allow: "Of course I want to use the device, that's why I bought it". Users don't read. They don't have the knowledge required to pick the right answer even if they *did* read. – Luaan Feb 05 '16 at 13:51
  • @MichaelKjörling: That's trivial: always accept the first input device if there's no input device plugged in. That still solves the "malicious hardware" threat model with no user inconvenience. – R.. GitHub STOP HELPING ICE Feb 05 '16 at 14:21
  • @R.. What if there are multiple input devices present when booting, including the BadUSB device? What about long-running computers like servers where input devices are unplugged during normal operation? What about servers/home-theater servers starting without an input device? – H. Idden Feb 05 '16 at 14:34
  • @H.Idden: Devices that don't normally have a keyboard attached generally have some means of input that should be considered as having an input device already - touchscreen, remote control for an entertainment device, etc. This is nowhere near as hard as you're making it. People like you need to stop being **intentionally dense** about a security problem that's easily fixed as soon as the false narrative that it's unfixable gets out of the way. – R.. GitHub STOP HELPING ICE Feb 05 '16 at 17:11
  • @H.Idden: Regarding servers, they're rather outside the scope of discussion for how a user-facing input device policy should work. They're also not immediately at risk from this kind of issue, because a server does not boot to a root shell; you have to log in before rogue input poses any threat. Again, the theme is the same: stop being intentionally dense and open a discussion on real solutions. – R.. GitHub STOP HELPING ICE Feb 05 '16 at 17:13
  • @R.. I am sorry if I was too pedantic. I have much to do with requirements analysis and have had too many projects/products fail because they have overseen such edge cases in the usage flow while defining the requirements. One of the worst examples I have seen as user was a backup software which could only be used for recovery while on the original OS and hard disk. I noticed it when my hard drive failed and the software refused to use my backup because I couldn't boot/run it from the dying hard disk with the data. Luckily I could get a copy of the most important data from elsewhere. – H. Idden Feb 05 '16 at 17:35
  • @R I don't think H. Idden is being 'intentionally dense'. He is pointing out it isn't as straightforward as you seem to think it is. These are things you have to take into account when deciding what the default behavior of a system will be, especially when many OSes nowadays can run on pretty much any type of device with little to no modification. The 'OK via already-installed device' plan would fail on a laptop whose keyboard/touchpad stopped working correctly. Half-baked solutions are not solutions; they are problems. – Mr.Mindor Feb 05 '16 at 18:08
  • Implementation of this kind of feature would have to be optional and turned on from the BIOS. Most users will not fully understand why they have to approve the keyboard, and those people who care about such issues will most likely reserve a dedicated trusted keyboard that is used just to enable this feature. So from a usability perspective, it's a non-issue. – Lie Ryan Feb 06 '16 at 07:43
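
(To make the 1, 3, 2, 4 ordering from the comments concrete, here is a hypothetical sketch, not any real OS API: the driver is loaded, but keystrokes reach only the authorization prompt until the password matches. A real implementation would rate-limit attempts and never compare plaintext passwords.)

```python
from enum import Enum, auto

class State(Enum):
    DRIVER_LOADED = auto()  # step 3: driver bound, but input quarantined
    TRUSTED = auto()        # step 4: normal interaction allowed

class QuarantinedKeyboard:
    """Hypothetical model: steps 1 (connect) and 3 (load driver) have
    happened; step 2 (authorize) gates step 4 (interact)."""

    def __init__(self, password: str):
        self._password = password
        self._state = State.DRIVER_LOADED
        self._buffer = ""

    def on_keystroke(self, key: str) -> bool:
        """Return True if the keystroke may be delivered to applications."""
        if self._state is State.TRUSTED:
            return True
        if key == "\n":
            if self._buffer == self._password:  # real code: hash + rate-limit
                self._state = State.TRUSTED
            self._buffer = ""
        else:
            self._buffer += key
        return False  # quarantined keystrokes never reach applications
```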
1

The question always seems to be a trade-off between security and convenience. With the HID attacks, the balance seems to be in convenience's favour, due to the physical access needed for these attacks. Obviously this could be implemented, but there doesn't seem to be a need to do it at the moment; why add extra code and issues if the threat is minimal at this time?

Sighbah
  • Nope, there is no such thing as a trade-off between security and convenience. There's good design and poor design, and there's actionable security and unsolvable problems. In this case, the OS has no say over how hardware is processed by the BIOS, and so cannot do anything about the keyboard being malicious. See @Matthew's answer. – Steve Dodier-Lazaro Feb 04 '16 at 15:15
  • "The security vs. convenience dilemma has become one of the biggest issues facing information security, with the “lock it all down” mentality present in many organizations today. These information security infrastructures are being modeled after Fort Knox without a single thought given to how it will affect the end user." (https://www.giac.org/paper/gsec/3770/security-vs-convenience/106079) Matthew does have a point with the BIOS, but I think you have more issues if someone has the ability to physically access the machine. – Sighbah Feb 04 '16 at 15:42
  • 1
  • Yes, this is how people who don't understand design present it: "we must choose between something secure and something that is easy", so they don't have to do requirements engineering, understand human behaviour, and come up with interactions and services that inherently make humans behave securely and don't sacrifice utility for security. – Steve Dodier-Lazaro Feb 04 '16 at 16:12
  • No need to access your hardware in order to perform a BadUSB attack or similar HW-level attack. You can compromise a hardware vendor's security and corrupt their firmware. Vendors themselves and state-level attackers can do this; they have in the past stolen signing keys from such vendors. – Steve Dodier-Lazaro Feb 04 '16 at 16:13
  • @Steve DL - I do agree on your points regarding requirements engineering and design, although I would be keen to understand your view when requirements impede user interaction. Is that bad design or simply being more secure? Where do you draw the line between bad design and simply being more secure? What are some good examples? – Motivated Feb 04 '16 at 16:20
  • @Motivated "bad design" = all the crap that we are given as alternatives to passwords and captchas that fail to analyse utility and cost requirements from the perspective of users as acting with multiple technologies for end goals that are not technology related in the first place. Sometimes good design is to dispense with something altogether. Proponents of the linear security-usability paradigm will do comparative evaluations of equally crappy auth mechanisms whilst people with a service design mindset will implement federated identity schemes because they understand where utility is lost. – Steve Dodier-Lazaro Feb 04 '16 at 16:23
  • @Motivated bad design is when you don't implement a user-centered design process but merely come up with poor metrics for security and usability (poor as in centered on your interaction at hand, ignoring setup and upkeep costs and ignoring alternatives which require larger changes to technology). Sometimes the design space is very limited but often people fail to re-examine underlying assumptions to security tech and use the linear paradigm as an excuse for their designs' shortcomings. – Steve Dodier-Lazaro Feb 04 '16 at 16:25
  • @Sighbah the question makes no sense if you assume the user is aware they're plugging in a keyboard. It does if the USB device is pretending not to be a keyboard and then acts as one. In that case, though, the capabilities required by said device allow other attacks which are more serious and completely bypass the OS, rendering this entire discussion moot. – Steve Dodier-Lazaro Feb 04 '16 at 16:27
  • @SteveDL - If the focus is user-centered design (and I agree it should be), what would you consider alternatives to current design decisions, e.g. captchas, multi-factor authentication, etc.? – Motivated Feb 04 '16 at 16:27
  • @Motivated that's a long discussion. I'm currently writing a paper covering theory and design principles for security but it's not ready for review yet. Sorry. Feel free to ping me in the DMZ though. – Steve Dodier-Lazaro Feb 04 '16 at 16:30
  • @Steve DL I would love to see your paper covering theory and design principles for security without being any inconvenience to the user. – Sighbah Feb 04 '16 at 16:33
  • @SteveDL - In addition, what would be alternatives to USB devices if security isn't a consideration? – Motivated Feb 04 '16 at 16:33
  • @Motivated Sorry, I'm not an expert at all on physical sec so not very keen to discuss how device identification / authentication protocols should be written. All I can tell with certainty is that hardware level issues are not solved at the software level. It's a bit late for that. – Steve Dodier-Lazaro Feb 04 '16 at 16:40
1

The OS knows nothing about the world outside of itself. It is naturally designed to trust hardware, because it has no way of verifying if the hardware really does exist. In fact, if you were to compare the concept of an OS running on hardware to the movie The Matrix, you'd pretty much be spot on. The OS is simply a collection of bytes that are eventually processed by the hardware. It may be running on a piece of real hardware, virtualized with other OSes that are equally unaware of each other, or even physically distributed across multiple units of hardware that act as a cohesive whole. The only real requirement is that the hardware acts in a way that is consistent with how the OS believes it should behave.

At the end of the day, the OS cannot exist without the hardware and is utterly dependent upon the hardware to tell the truth. While some progress has been made towards more secure systems (e.g. when they started restricting how PCI buses can use DMA), those are still mostly hardware solutions to security. The OS can refuse input from USB devices, but it can't reliably determine what a USB device is by examining it, because the device can lie. It can identify itself as whatever it wants to, and the OS can't do anything about it. Any verification of hardware would have to come from more hardware. All the OS knows is that it's receiving a signal on a known bus that conforms to a known protocol. You could easily emulate that using software running in a hypervisor, for example; the OS can't tell the difference.
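
(As a small illustration of how little the OS has to go on, the following sketch uses the pyusb library, assuming libusb and sufficient device permissions, to print each device's self-reported interface classes. The HID class code 0x03 comes from the USB spec; everything else is whatever the device chooses to assert about itself.)

```python
import usb.core  # third-party: pip install pyusb (needs a libusb backend)

# Walk every attached USB device and report which of them *claim* a HID
# (keyboard/mouse) interface. The OS sees nothing more than these
# self-reported descriptor fields; a "thumb drive" can report 0x03 too.
for dev in usb.core.find(find_all=True):
    for cfg in dev:
        for intf in cfg:
            if intf.bInterfaceClass == 0x03:  # 0x03 = HID class
                print(f"{dev.idVendor:04x}:{dev.idProduct:04x} "
                      "claims a HID interface")
```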

We can certainly do things to make life harder for malicious hardware, such as requiring some type of encryption chip with asymmetric keys, perhaps using a blacklist/whitelist key system, which would protect against casual acts of maliciousness. But that would only hinder development, raise the cost of new hardware, frustrate consumers, and lock out competition that is unable to get onto the whitelist or is even actively blocked by a blacklist. There is no perfect solution to the problem, and any reasonable solution would need to be implemented at the hardware level, since the OS can't readily determine whether the hardware is what it says it is.

phyrfox
-14

My motto for software is 3-D:

  • Divergence
  • Division
  • Domain

Every component MUST do what it's supposed to do and nothing else, because you don't expect your fridge to open your beer. The OS must provide a unified environment and API consistency, including for the actual protection from BadUSB, but the protection itself must be a module, an extension based on commonly accepted APIs. That's it - it's an architectural question.

schroeder
Alexey Vesnin
  • Comments are not for extended discussion; this conversation has been [moved to chat](http://chat.stackexchange.com/rooms/35385/discussion-on-answer-by-alexey-vesnin-why-dont-oses-protect-against-untrusted-u). – Rory Alsop Feb 06 '16 at 12:24