I wonder if these "bugs" will create a market for security dongles that perform AES, RSA, etc.? That way they aren't black boxes like CPUs, which literally have minds of their own these days (IME). I would like to own a USB dongle that took files in and output them in encrypted form. Bonus if it were an open spec, so you could have various vendors or open-source FPGA versions. Bonus if the key load were air-gapped from the PC side, say via QR code, hex buttons, microSD, Bluetooth with a hardware disable switch, or even RFID.
Yes, that does create some new attack vectors, but these "bugs" make me think that the whole architecture is a rooted, burning trash fire.
Well, yes. There is already a large market for these "security dongles", and many libraries and protocols for interacting with them. They're called HSMs; APIs for talking to them include PKCS#11, JCE, and MCE, plus protocols like KMIP. They're widely used in the financial sector, by CAs (of course), in revenue collection such as tolls, in government functions such as passport issuance, and in some kinds of industrial control segments, among others.
It's long been the case that side-channel attacks can extract key material out of conventional CPUs. Power analysis alone has been a science for decades now and isn't going away any time soon, made all the more exciting by the prevalence of RF and the advancement of antennas. Spectre and the like are just another wake-up call for those not paying attention, e.g. in cloud services. Consider yourself one of the enlightened when it comes to crypto material handling.
Well, I've worked with one of these proprietary security tokens before. Nothing to be proud of: unpatched software/firmware bugs, zero accountability from the manufacturer, and a usability mess. The thing is, not only should the cryptographic hardware and software itself be safe, but the whole system should be up to date and have no weak links, which is hard in practice and few want to pay for it.
Makes me wonder whether there is any incentive to do crypto properly, or whether security theater will always prevail.
I've had the unfortunate experience of integrating a Gemalto network HSM, and the broken state of the documentation alone is enough to make you question any engineering inside.
HSM devices, as I understand it, are designed mostly to protect secrets (keys) and perform asymmetric crypto operations safely but perhaps slowly. Intel's AES-NI, on the other hand, is designed for high-speed symmetric crypto, with nothing kept secret from the user of the AES-NI instructions.
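As an aside, on Linux you can check whether the CPU advertises AES-NI by looking at the flags line in /proc/cpuinfo. A rough sketch (note the "aes" flag only means the instructions exist, not that any particular crypto library actually uses them):

```python
# Check /proc/cpuinfo (Linux-only) for the AES-NI instruction set flag.
def cpu_has_aesni(cpuinfo_path="/proc/cpuinfo"):
    try:
        with open(cpuinfo_path) as f:
            for line in f:
                # The "flags" line lists CPU features; "aes" means AES-NI.
                if line.startswith("flags"):
                    return "aes" in line.split()
    except OSError:
        pass  # not Linux, or /proc unavailable
    return False

print(cpu_has_aesni())
```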
There’s already a TPM in most computers, and the TPM can do this for you.
Be careful, though: there are real TPMs (actual chips built by companies that, in theory, know how to build secure-ish hardware) and there are firmware-emulated TPMs. I think that Intel and AMD both offer the latter. Intel calls its offering “platform trust technology”. I take that to mean “please trust our platform even though there is sometimes a web server running on the same chip as the so-called TPM”.
Using generic crypto operations is mostly outside of what TPMs were designed to do. Real hardware TPMs tend to be very slow, so they're not really useful for encrypting/decrypting network traffic; they're really supposed to be used to sign messages and verify checksums. They are also supposed to support full-on trusted computing, but since it was designed by crypto nerds, the trusted computing stuff is almost entirely useless. It completely blocks the CPU while running and is very slow, so practical use cases are very hard to find.
> I think that Intel and AMD both offer the latter.
No clue on Intel, but for AMD there is basically a separate ARM core within the CPU, so it has TrustZone built in. Is it just a web server too? I'm truly curious.
I don't think they meant that the TPM itself runs a web server. I believe what they meant was that the emulated version runs on the CPU itself, which means that if the system is also running a web server, then any CPU vulnerabilities would compromise the security of the emulated TPM.
This is a misconception. These crypto tokens are only designed to protect RSA and ECC private keys and to wrap the symmetric key rather than the actual data, for good reasons. The actual symmetric encryption is still performed on the host computer, so the actual AES key can still be stolen via a CPU side channel.
Are these tokens any good? Yes, they guard your private key. Is it enough to protect you from Spectre? No.
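A minimal sketch of that envelope flow, using Python's `cryptography` package. The keypair is generated in software here purely for illustration; on a real token the private key would live inside the device, and the point is that the AES key and plaintext still pass through host memory:

```python
import os
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# In a real deployment this keypair lives inside the token; generating
# it in software here is just for illustration.
priv = rsa.generate_private_key(public_exponent=65537, key_size=2048)
pub = priv.public_key()

oaep = padding.OAEP(mgf=padding.MGF1(hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# Host side: generate a fresh AES key and encrypt the bulk data with it.
# This is the step that happens in host memory, token or no token.
aes_key = AESGCM.generate_key(bit_length=256)
nonce = os.urandom(12)
ciphertext = AESGCM(aes_key).encrypt(nonce, b"bulk data", None)

# Wrap the AES key with the RSA public key; only this small blob needs
# the token's private key to unwrap later.
wrapped = pub.encrypt(aes_key, oaep)

# "Token" side: unwrap the AES key; host then decrypts the bulk data.
unwrapped = priv.decrypt(wrapped, oaep)
assert AESGCM(unwrapped).decrypt(nonce, ciphertext, None) == b"bulk data"
```

So even with a perfect token, the symmetric key sits in host RAM during the bulk encryption, which is exactly where a side channel like Spectre can reach it.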
They are not cheap, and they are not even close to usable outside of second-factor auth flows. Actual hardware accelerators for servers and real-world loads are much more expensive, but you get much more than this. Also, the interface to YubiKeys is USB HID, which is much more trivial to exploit than the article's issue.
Well, firstly, they have HSM-grade hardware available as well. Secondly, they have crypto processors that let you use PGP or PKCS#11 certificates with the private key and certificate operations happening on the device, directly integrated into native system utilities.
Also, source on them being "much more trivial to exploit than the article's issue"? The only issue I've heard with Yubikey's certificate operations was https://www.yubico.com/keycheck/ where they also provided anyone affected with a replacement key at no charge.
> yubi key expose a device (or type as a usb keyboard), which every single user process have access to
So? Are there any actual exploits you'd like to share that take advantage of either of these? Or are you just speaking in hypotheticals? Because in that case, basically everything you do on any computer that isn't airgapped (and even that can be exploited) is going to theoretically be exploitable.
The Pi Zero can do this as a USB client device for a host PC, including the QR codes with the Pi camera, buttons/switches, and a small touchscreen for host-isolated verification and PIN entry. The BeagleBone Black (possibly also the PocketBeagle) can do it too, and the am335x does have some minimal crypto acceleration.
They're not high performance, they aren't "security focused" hardware, nor are they a perfect fit for the task, but they're reasonably well understood and broadly available. The Pi Zero does have closed "GPU" firmware, but there is an effort[1] to run open firmware on it.
However, if you expect the host to attack the device, USB OTG (i.e. Linux USB gadget drivers) may not be a good choice; you may want to access it via the network instead, which opens up more choices (though most will not be as small).
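As a toy illustration of that network-facing approach, here's a sketch of a "device" process that holds a secret and only ever answers with MACs over a socket, so the key itself never crosses the wire. Everything here (names, the raw-socket protocol) is made up for illustration; a real device would speak TLS or something like KMIP:

```python
# Toy "network signing device": holds a secret, returns only MACs.
import hmac, hashlib, socket, threading

SECRET = b"device-held-secret"  # never leaves the "device" process

def serve(sock):
    # Accept one connection, MAC whatever arrives, send the MAC back.
    conn, _ = sock.accept()
    with conn:
        msg = conn.recv(4096)
        conn.sendall(hmac.new(SECRET, msg, hashlib.sha256).digest())

# "Device" side: listen on a loopback port.
server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]
threading.Thread(target=serve, args=(server,), daemon=True).start()

# "Host" side: send data, get back a MAC, never the key.
client = socket.socket()
client.connect(("127.0.0.1", port))
client.sendall(b"file contents to authenticate")
mac = client.recv(4096)
client.close()
```

The design point is the interface: the host can ask for operations on data but has no call that returns the secret, which is the property the grandparent wants from a dongle.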
The other alternatives are basically going to be micro-controllers, for example FST-01[2]/Gnuk[3], and FPGAs which are still more of a black box than CPUs at the moment.
>>but these "bugs" make me think that the whole architecture is a rooted, burning trash fire.
The next architecture should/will have dedicated on-chip space for things like encryption. That we are mixing essential and trivial data in the same space, and expecting neither to leak into the other, is the root of the problem. I wouldn't be surprised if in ten years we are talking about L1 through L3 cache, with a separate "LX" cache for the important security stuff.
Well, I doubt it. If anything, Spectre points us in the direction of dedicated, simple, low-power cores that do no speculation for security-sensitive tasks. Shared resources are the root of all side-channel leaks, so my prediction is we will see, or at least should see, IMO, systems with multiple, separate physical chips, and maybe separate physical RAM, to run untrusted or sensitive code.
I don't think a separate chip for non-trusted code would work. When I play Minecraft I don't want it relegated to some sub-chip because it isn't trusted. Conversely, the bits of code running my disk encryption shouldn't be sharing resources with Minecraft. So I think the more practical route is dedicated space/chips/resources for security-related stuff, and the big chip for all the less important stuff.
Now on a server, with a far greater proportion of security-related tasks, then we may need greater allocation to security. A split between security-specialized with lots of separate protected chips, and general-use CPUs with one bigger chip may be likely.
Because drawing lines between trusted and non-trusted code isn't easy. Sometimes you need/want to dedicate all available processing power to a bit of trusted code (i.e. booting into Windows). But other times you want to do the same for non-trusted code (i.e. Minecraft). Separate chips for each means you've basically halved your peak available horsepower (assuming both are of equal power). Rather than draw the line between trusted and not, I'd draw the line between security-related stuff (encryption keys) and everything else. Then the separate secure chip can be relatively small while the big chip remains available for everything else.
(I say "chip" but I think it is more likely to be a separate 'core' on a chip, one with it's own cache and ram. It wouldn't need much of either.)
Yes, I agree with you. What you are describing is exactly how the examples I gave above are designed. I didn't mean to imply that the whole OS should be considered "trusted", but rather just the security critical components.
Ah, but where do you draw the line between 'security sensitive tasks' and the rest? Is the keyboard driver security sensitive? What about pointer drivers (mouse, touch, etc)? Voice input, is that security sensitive? Video? Draw the line too tight and performance will really suffer, make it too loose and all that effort is for naught.
Absolutely, that is the future; you are right on. Witness ARM's TrustZone and Intel's SGX: chip vendors all over the place are making secure boot standard even for the lowest-end micros. TPMs have been standard in laptops for almost a decade now. It's no longer optional to do key protection.
I had the same experience with HSM vendors: poorly patched versions of OpenSSL to talk to the device (no upstream patches), missing functionality, months between a vuln landing in OpenSSL and their version getting fixed. Hopeless industry, to be fair.