It's fascinating - how does one defend against an attacker or red-team who controls the CPU voltage rails with enough precision to bypass any instruction one writes? It's an entirely new class of vulnerability, as far as I can tell.
This talk https://www.youtube.com/watch?v=BBXKhrHi2eY indicates that others have had success doing this on Intel microcode as well - only in the past few months. Going to be some really exciting exploits coming out here!
> how does one defend against an attacker or red-team who controls the CPU voltage rails
The Xbox does have defences against this; the talk explicitly mentions rail-monitoring defences intended to detect exactly that kind of attack. It had a lot of them, and the author had to work around them. The exploit succeeds because he found two glitch points that bypassed the timing randomisation and the containment model.
I don't see much motivation for fixing that when I can buy an nRF52xx Bluetooth beacon on AliExpress for €4 and flash it with firmware that pretends to be 50 different AirTags, rotating every 10 minutes, thereby bypassing all tracker-detection features.
It's pretty trivial to just open it up and disconnect the speaker too. I took one apart to make a custom wallet card out of it and broke the speaker in doing so; the rest of it worked perfectly fine (though obviously the warning would still work).
It's not new - fault injection as a vulnerability class has existed since the beginning of computing, as a security bypass mechanism (clock glitching) since at least the 1990s, and crowbar voltage glitching like this has been widespread since at least the early 2000s. It's extraordinarily hard to defend against but mitigations are also improving rapidly; for example this attack only works on early Xbox One revisions where more advanced glitch protection wasn't enabled (although the author speculates that since the glitch protection can be disabled via software / a fuse state, one could glitch out the glitch protection).
You can't. Console makers have these locked-down little systems with all the security they can economically justify... embedded in an arbitrarily-hostile environment created by people who have no need to economically justify anything. It's completely asymmetrical and the individual hackers hold most of the cards. There's no "this exploit is too bizarre" for people whose hobby is breaking consoles, and if even one of those bizarre exploits wins it's game over.
And if you predict the next dozen bizarre things someone might try, you both miss the thirteenth thing that's going to work and you make a console so over-engineered Sony can kick your ass just by mentioning the purchase price of their next console. ("$299", the number that echoed across E3.)
This is a cat-and-mouse game that can always be won by a sufficiently advanced cat. Whatever protection circuit you design, the attacker can decap the chip, land a probe on the right node in that circuit, and force it into a disabled state. But that's really, really hard, and most cats can't do it.
It's reassuring that the owner of a device will always own it, in the end.
Glitching attacks are typically performed by switching the supply voltage at quite high frequencies; a typical low-voltage detector won't trigger a reset under such conditions. This is also why glitching attacks are often performed by spiking voltages higher, not lower. See for example Joe Grand's latest video on breaking crypto wallets [0].
Low-voltage detection is usually implemented as a simple comparator that should trigger instantly, but it often monitors only a single Vcc pin, and the decoupling caps found in a typical circuit design effectively form an RC circuit that filters out short fluctuations of the supply voltage. So most low-voltage detection implementations only trigger on 'longer' periods of low voltage.
Traditionally low-voltage detection features (like brown-out detection) are there to guarantee functionality of the uC itself or the device the uC controls. It is typically not intended as a defence measure against these types of attacks. In fact, 15 years ago it may not have been much of a concern.
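The RC-filtering point above can be made concrete with a toy simulation. All component values here (source resistance, decoupling capacitance, brown-out threshold, deglitch time) are assumptions for illustration, not measured figures from any real console:

```python
# Toy model: the decoupling caps form a first-order RC low-pass between the
# external rail and what the brown-out comparator actually "sees".
# dV/dt = (Vin - V) / (R*C), integrated with a simple forward-Euler step.

R = 1.0           # ohms: assumed source/trace resistance
C = 1e-6          # farads: assumed total decoupling capacitance (tau = 1 us)
DT = 1e-9         # 1 ns simulation step
THRESHOLD = 2.7   # volts: assumed brown-out threshold
DEGLITCH = 1e-6   # detector must see V < THRESHOLD for 1 us to trip (assumed)

def vin(t):
    """3.3 V rail with a 100 ns crowbar glitch to 0 V starting at t = 5 us."""
    return 0.0 if 5e-6 <= t < 5.1e-6 else 3.3

v = 3.3           # filtered voltage at the comparator input
vmin = v
low_time = 0.0
tripped = False
t = 0.0
while t < 20e-6:
    v += (vin(t) - v) * DT / (R * C)
    vmin = min(vmin, v)
    low_time = low_time + DT if v < THRESHOLD else 0.0
    if low_time >= DEGLITCH:
        tripped = True
    t += DT

print(f"min filtered voltage: {vmin:.2f} V, detector tripped: {tripped}")
```

With a 1 us RC time constant, a 100 ns glitch only sags the filtered rail to about 3.3 × e^(-0.1) ≈ 2.99 V, which never even crosses the assumed 2.7 V threshold, let alone stays below it for the deglitch window.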
Voltage glitching is an old technique. Here's a paper about it from two decades ago https://ieeexplore.ieee.org/document/1708651 and it's at least another decade older than that as an attack vector.
You can defend against it one way with board-level voltage monitoring or physical intrusion detection, and another way with on-die droop detection and countermeasures. Both probably just increase the cost of hacking the device by some orders of magnitude, but that may be enough.
Basically, if someone has physical access to the device, it's game over.
You can do things like eFuses that basically brick the device if something gets accessed, but then it becomes a matter of whether the attacker falls for the trap.
> Basically, if someone has physical access to the device, it's game over.
It took more than a decade to exploit this vulnerability, and even then there are fairly trivial countermeasures that could have prevented it (and that are implemented on other platforms).
Nothing is unhackable, but it requires a very peculiar definition of "game over".
(And as others have pointed out: only early versions of the Xbox One were vulnerable to this attack.)
The incentives to hack the Xbox One were few: easy sideloading, no exclusives, and not a great performance-per-dollar ratio either. It's the opposite of Nintendo consoles if you think about it, and Nintendo consoles are notorious for developing homebrew scenes really quickly.
Every time a console gets hacked, the checklist of SoC security architects grows a little longer. Boot ROMs are written in formally verifiable languages, there are hardware glitch detectors, CPUs run in lockstep to guard against glitches, there are checks against out-of-order completion of security phases, random delay insertion, and so forth.
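Two of those countermeasures, random delay insertion and redundant checking, can be sketched in a few lines. This is a hedged illustration only: the helper names are invented, and a real boot ROM would verify an RSA/ECDSA signature in hardened C or assembly, not hash bytes in Python:

```python
import hashlib
import secrets
import time

def check_image(image: bytes, expected_digest: bytes) -> bool:
    # Stand-in for a real signature verification step in a boot ROM.
    return hashlib.sha256(image).digest() == expected_digest

def hardened_boot_check(image: bytes, expected_digest: bytes) -> bool:
    # Random delay: the attacker can no longer line a glitch up with a
    # fixed number of cycles after reset.
    time.sleep(secrets.randbelow(1000) / 1e6)
    ok1 = check_image(image, expected_digest)
    # Second random delay, then an independent redundant evaluation:
    # a single glitched branch or skipped instruction flips one result,
    # not both.
    time.sleep(secrets.randbelow(1000) / 1e6)
    ok2 = check_image(image, expected_digest)
    if ok1 and ok2:
        return True
    raise RuntimeError("verification failed (or fault injected)")
```

The point of the structure is that the attacker now has to land two precisely timed glitches at randomised offsets instead of one at a fixed offset, which multiplies the expected attack time considerably.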
When it comes to SoC security, the past is not a good predictor of the present. The previous Nintendo SoC was designed 15 years ago. A lot has been learned since, and it has become increasingly harder to bypass these mechanisms.
The fact that it took 13 years to hack the Xbox One is not because it's an unattractive platform: because of its high profile, it has been a popular subject for security-research grad students from the moment it was released. If anything, the complexity of the current hack shows how much SoC security has progressed over the years.
I'm not at all familiar with the Xbox One, but this is a feature that's generally available if you're designing "closed" hardware like a console. Most SoC these days have some sort of security processor that runs in its own little sandbox and can monitor different things that suggest tampering (e.g. temperatures, rail voltages, discrete tamper I/O) and take a corrective action. That might be as simple as resetting the chip, but often you can do more dramatic things like wiping security keys.
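In rough terms, such a monitor loop amounts to the sketch below. The thresholds, callback names, and responses are invented for illustration (no vendor's actual API looks like this); the idea is just that out-of-range sensor readings map to graduated responses, from a reset up to wiping key material:

```python
def monitor_step(read_rail_mv, read_temp_c, wipe_keys, reset_soc):
    """One iteration of an assumed tamper-monitor loop on a security processor."""
    mv = read_rail_mv()
    if not 1700 <= mv <= 1900:           # rail outside tolerance: treat as glitch attempt
        wipe_keys()                       # drastic response: destroy key material first
        reset_soc()
        return "tamper"
    if not -20 <= read_temp_c() <= 105:  # out-of-range temperature
        reset_soc()                       # milder response: just reset the chip
        return "reset"
    return "ok"

# Example: a 1.2 V reading on a nominally 1.8 V rail takes the tamper path.
events = []
result = monitor_step(
    read_rail_mv=lambda: 1200,
    read_temp_c=lambda: 40,
    wipe_keys=lambda: events.append("wiped"),
    reset_soc=lambda: events.append("reset"),
)
print(result, events)
```

The design choice worth noting is the ordering: keys are wiped before the reset, so even if the attacker regains control after reset there is nothing secret left to extract.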
But this exploit shows that it's still almost impossible to protect yourself from motivated attackers with local access. All of that security stuff needs to get initialized by code that the SoC vendor puts in ROM, and if there's an exploit in that, you're hooped.
This attack is on the early models that didn't have those protections enabled. The researcher surmised that later models do indeed have anti-glitching mechanisms enabled.