OpenTitan looks interesting. I guess my main question though is, how does it support owner controlled applications, i.e. applications where the device owner (not the OEM) can provision and modify all code, rather than it being OEM-signed? Is it efuse-based or does it have nonvolatile storage on board?
I’ve been following the project for a really long time. As a digital design engineer, I really envy you; you have such an interesting project at hand. Kudos for what you are doing!
What I don't see touched on in this post is the high-level trust model. What generally makes trusted computing a non-starter for libre software is that it contains privileged keys baked in by the manufacturer.
A secure element chip does not need a completely libre/open design to support software freedom, but it does need to allow end-owners to run whatever software they would like - not subject to a third party's approval. To that end:
1. The chip needs to be able to be completely erased, having all keys zeroed out, at which time new keys can be loaded and/or internally generated. Signature verification keys are zeroed out to establish a new root of trust, and program/memory keys are zeroed out to maintain the security of information previously entrusted to the chip.
2. Resetting the chip should take a configurable amount of time (days to a month), either based on internally counting the clock or external proof of work, to protect against evil maid attacks by those with temporary possession of the hardware.
3. If there is any remote attestation functionality, then the reset procedure must output the attestation signing keys, including the private parts. This way the manufacturer (or any downstream integrators) cannot record the public identity of the shipped attestation keys to create a trust relationship that would prevent the ultimate owner from running software of their choosing.
For software freedom, these aspects are much more important than the design and licensing of the computation core(s). Let them ship ARM. Let them ship only ARM. Let them ship a proprietary ISA that needs to be painstakingly reverse engineered over a decade. Spend your social capital making sure the ultimate end-owners of the chip remain in control of their own devices! This is what is critical for open source.
> What I don't see touched on in this post is the high-level trust model. What generally makes trusted computing a non-starter for libre software is that it contains privileged keys baked in by the manufacturer.
Keys don't have to be open source.
Re (2), the way you defeat evil maid attacks on boot or wake is by using a TPM and measuring every bit of firmware and software loaded (to some point) so that local storage decryption is not possible unless you're running golden code. The hard part is stopping evil maid attacks on running systems.
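The "measuring every bit of firmware and software loaded" part is a hash chain of PCR extend operations. A minimal sketch in Python of the general idea, not any particular TPM's API:

```python
import hashlib

def pcr_extend(pcr: bytes, measurement: bytes) -> bytes:
    # TPM-style extend: new_pcr = H(old_pcr || H(measurement)).
    # The chain is order-sensitive, so a change to any boot stage
    # changes the final PCR value.
    return hashlib.sha256(pcr + hashlib.sha256(measurement).digest()).digest()

pcr = b"\x00" * 32  # PCRs start zeroed at reset
for stage in (b"firmware", b"bootloader", b"kernel"):
    pcr = pcr_extend(pcr, stage)

# Disk decryption keys get sealed to this value; evil-maid firmware
# produces a different PCR, so the sealed key cannot be released.
```

Sealing local storage keys to the final PCR value is what makes "decryption is not possible unless you're running golden code" hold.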
Re (3), for open systems you really need any remote attestation to be at the option of the end user or the operating system that they choose to run (but the end user must have the option to run an operating system of their choice, which then might not implement remote attestation).
TPMs got these things right, but they're not SEs. TPMs could be SEs, but they're missing necessary functionality. I'm imagining a BPF-style system where you'd get to send bytecode to be interpreted with access to various loaded key objects (handles), and which would be able to execute TPM commands (subject to policy) and use their outputs without exposing them to the user, with the program's execution and availability of its final outputs being conditional on satisfying some relevant policy. Because so many systems have a TPM, a TCG TPM standard update that allows for a very limited and TPM-centric SE to be implemented with a TPM firmware update would be very interesting.
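A toy model of that imagined policy-gated interpreter, purely to illustrate the shape of the idea; every name here is hypothetical and nothing corresponds to an actual TCG command set:

```python
# Hypothetical sketch: the host submits a tiny program plus key
# handles; intermediate values stay inside the SE, and the final
# outputs are released only if the governing policy is satisfied.
def run_sealed_program(program, key_handles, policy_ok: bool):
    env = {}       # intermediate values, never exposed to the host
    outputs = {}   # values the program elects to release
    for op, *args in program:
        if op == "sign":
            handle, data, dest = args
            # Stand-in for a real signing primitive using a loaded key.
            env[dest] = f"sig({key_handles[handle]},{data})"
        elif op == "output":
            name, = args
            outputs[name] = env[name]
    # Withhold everything unless the policy check passed.
    return outputs if policy_ok else {}
```

The interesting property is that the host only ever sees what the program explicitly outputs, and only under policy.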
> For software freedom, these aspects are much more important than the design and licensing of the computation core(s). Let them ship ARM. Let them ship only ARM. Let them ship a proprietary ISA that needs to be painstakingly reverse engineered over a decade. Spend your social capital making sure the ultimate end-owners of the chip remain in control of their devices! This is what is critical for open source.
Yes, the amount of software built to run in SEs is minute, therefore it's not that big a deal to expect vendors to recompile that software if a new version of the SE were to use a different ISA. I'm a bit surprised at the focus on RISC-V, but I'm not annoyed by it -- if it really costs them little, then I don't see the problem, and having the option is certainly interesting.
I think the focus on RISC-V is in large part because of the focus on RTL-level verification. You can't release an ARM core's RTL, and some may fear that backdoors might be hidden there. But you can easily release a RISC-V core's RTL, and let people verify that there are no backdoors in there.
I think that one thing you can do is to release the RTL AND a matching gate-level model, AND a description of the scan chain through all the flops (so an end user can verify the gate model from outside, in the actual chip).
Mind you, one could still hide logic outside the scan chain, but it should still be discoverable through scan.
How did TPMs get these things right? To me, it seems they've gotten them direly wrong by letting the manufacturers create privileged keys.
I wrote my comment from the perspective of making something better than modern TPM implementations, iff they get the ownership/control issues right.
I don't understand your counterpoint to my (3). It's not enough for the OS to have an option to not perform remote attestation - the owner needs to have the ability to perform a mock remote attestation. Otherwise a remote party can just demand remote attestation be done, and the owners's "choice" will have been nullified.
> How did TPMs get these things right? To me, it seems they've gotten them direly wrong by baking in privileged keys.
Even if TPMs came with no endorsement key certificates, the platform could still trust-on-first-use the TPMs and create "privileged keys". The only way to prevent that would be to make TPMs useless.
> I don't understand your counterpoint to my (3). It's not enough for the OS to have an option to not perform remote attestation - the owner needs to have the ability to perform a mock remote attestation. Otherwise a remote party can just demand remote attestation be done, and the owners's "choice" will have been nullified.
I don't see how to do that without the party demanding attestation somehow establishing trust in your device, but that requires "privileged keys", and surely that party gets to say "no" (at least technologically; legally is another matter).
There's also a tension in that attestation is very useful for enterprise security. I.e., servers and VMs in data centers and the cloud, as well as employer-provided personal devices. Hard to get that but also make it impossible for consumer personal devices to attest. Also, users do get better security from attestation, at the cost of being at the vendors' mercy regarding walled garden lock-in. TPMs are neutral in this, enabling both, secure systems and DRM, and preventing neither.
> Even if TPMs came with no endorsement key certificates, the platform could still trust-on-first-use the TPMs and create "privileged keys". The only way to prevent that would be to make TPMs useless
I assume by "platform" you mean a remote party exerting control, bordering on non-consensual, over your local computer? If the hardware is not shipped with keys, the way to get around that would be to generate your own key external to the TPM, and load it into the TPM. Then the "platform" could trust that first-contact key all day long, but you would retain control.
> I don't see how to do [mock attestation] without the party demanding attestation somehow establishing trust in your device
That's the point of my requirement for the SE/TPM to spit out the attestation signing key when in reset mode. It grants the owner the ability to create mock attestations of the device as the previous owner knew it. Going forward, the owner can walk away and have the device create attestations that they can trust, with the condition that they aren't out of contact with the device for longer than the reset period. (The reset period requirement is what creates the distinction between a mere possessor, and the bona fide owner)
> There's also a tension in that attestation is very useful for enterprise security. I.e., servers and VMs in data centers and the cloud, as well as employer-provided personal devices
Yes, these are exactly the useful scenarios I envisioned when I wrote my comment, and would be fully supported with the requirements I laid out. What is not supported is claiming to sell someone a piece of hardware, while retaining indefinite control over it.
I mean the OEM who issues any "platform certificates" (in the TCG sense) for keys in the TPM's "platform key hierarchy". That would be, e.g., Dell.
> If the hardware is not shipped with keys, the way to get around that would be to generate your own key external to the TPM, and load it into the TPM.
If you have to do that every time, then you're always storing that key in the clear, and then you get no security advantage from having a TPM.
If you have to do that once and then you get the key back wrapped in a secret key only the TPM knows, well, TPMs pretty much support that. And now the platform can do the thing you don't like: ship with such a wrapped key (and certificate for it), which then enables operating systems to implement DRM.
You really can't get away from this. It's just not possible. Either you have a TPM-like device that is useful, or you don't have one, or you have one that's not useful (which is the same as not having one).
> That's the point of my requirement for the SE/TPM to spit out the attestation signing key when in reset mode. It grants the owner the ability to create mock attestations of the device as the previous owner knew it. [...]
Unfortunately that would ruin the TPM's security features in general. It would make it useless for enterprise, for example.
> > There's also a tension in that attestation is very useful for enterprise security. I.e., servers and VMs in data centers and the cloud, as well as employer-provided personal devices
> Yes, these are exactly the useful scenarios I envisioned when I wrote my comment, and would be fully supported with the requirements I laid out.
Doesn't work. See above.
> What is not supported is claiming to sell someone a piece of hardware, while retaining indefinite control over it.
The Tillitis TKey may provide a solution to this. You can (or very soon will be able to) buy TKey devices that are not provisioned by Tillitis. Using the TKey programmer you can build your own FPGA bitstream (using the open toolchain). Then, using the provisioning tool, or manually, you can generate and insert your own Unique Device Secret (UDS).
This also means that you can get two (or more) blank TKey devices and provision them with the same UDS, creating a backup key that you can put in a safe place. I'd recommend at least giving them different Unique Device Identities, so you can see which of the devices you're using when connecting it to the host.
The bitstream, with the UDS, is stored and locked in the FPGA's non-volatile configuration memory. It will not protect against a prepared evil-maid attack that gives the attacker enough time to dismantle the casing and perform a warm-boot attack. We are working on ways to make that attack harder and less feasible.
Exactly. In particular, see the "Powerful, Reliable Software Can Be Bad" section of <https://www.gnu.org/philosophy/open-source-misses-the-point....>. It'll take careful engineering to keep this from being useful for DRM, which I'm worried won't get done.
The TCG got this right in that TPMs do not force you into a DRM world, but they certainly enable it. I say that's right because any TPM-like technology that could let you secure boot could also let the platform vendor lock you into a DRM world -or lock you out of the walled garden if you opt out- but the TCG took a neutral approach that lets the user opt out.
By making it only possible to verify that you run a certain stack (which enforces your DRM), they made a very simple but powerful scheme that doesn't in fact infringe on users' rights (no matter what the FSF's FUD campaign said).
There's more to it. A TPM comes with an endorsement key certificate, and the platform to which the TPM is attached can come with platform certificates, and the OS can require those to exist and be valid and that the TPM still have the corresponding private keys. An OS can use those keys and certificates to implement a garden walled by secured boot + attestation. The user gets to change the relevant key hierarchy seeds, thus causing those private keys to no longer be available, and those certificates to be useless, and the OS to no longer be able to do attestation using those keys and certificates -- but then the OS vendor gets to lock that device out of the walled garden.
> they made a very simple but powerful scheme that doesn't in fact infringe on users' rights
This is straight up false in the standard model where protocols are what mediates between otherwise-independent parties.
If a remote party can verify the software I am running on my machine against my wishes, that creates a security vulnerability that prevents me from running software of my choice.
> > they [the TCG] made a very simple but powerful scheme that doesn't in fact infringe on users' rights
> This is straight up false in the standard model where protocols are what mediates between otherwise-independent parties.
I've answered this elsewhere. TPMs are neutral technology enabling technology that you might approve of and technology you might disapprove of.
> If a remote party can verify the software I am running on my machine against my wishes, that creates a security vulnerability that prevents me from running software of my choice.
TPMs do not prevent you from having opt-out.
The matter of whether someone can give you only a binary choice of opting into or out of walled gardens is a market/political problem. I don't see how to prevent walled gardens using technology, especially when walled gardens vendors can make it so you can only access the walled gardens on devices that they manufacture (or bless) -- it's their tech, so what can you do to make it not possible for them to implement lock-in? It's just not possible.
> I've answered this elsewhere. TPMs are neutral technology enabling technology that you might approve of and technology you might disapprove of.
And I strongly disagree! A TPM with an attestation key that is known to the previous owner (eg manufacturer or distributor) allows the previous owner (and everyone who trusts them) to compel the current owner to verify what software they are running on their own machine. The presence of the neutral hardware capability, plus the non-neutral treat-owner-as-an-attacker security property that the owner cannot read the private attestation key, creates a non-neutral security vulnerability for the owner. Hence my point (3) to mitigate this vulnerability by making it so that the owner can always choose to emulate an attestation. This property would make the hardware feature actually neutral.
> it's their tech, so what can you do to make it not possible for them to implement lock-in?
The problem is not set-top boxes and other narrow-purpose devices created by vertically integrated companies - code signing is enough to lock those down and guarantee the creation of e-waste. Rather, the issue is general purpose computers including hardware that allows remote parties to dictate what code you're running when you interact with them. For example, a bank that says "for [our] security, you must run a known copy of MS Windows with a known copy of InterChrome Explorer to access our website" (or analogous proprietary environment). The market basically moves in lock step, so you won't be able to choose a different bank (just as with the now-common "SMS 2FA" hassle). So given the basic non-neutral capability, corporations can gradually shove the general purpose computing genie back in the bottle, rather than having to make do with trusting device owners.
With binary/proprietary software, a similar dynamic exists right now. But for important cases it's possible to reverse engineer protocols and make third party clients. Owner-hostile remote attestation closes off the escape. This further pushes us into computational disenfranchisement, where the computer you purportedly own, that's sitting right in front of you, is enforcing the rules of the corporate-government power structure rather than functioning as your agent.
We do need to win the battle in the market, but to do that we at least need an option for people to buy that respects their freedom. Saying "buy this corporate phone and then run this procedure to change the TPM key [destroying your ability to use corporate-based attestation]" isn't a compelling story, but saying "buy this freedom-respecting computer based on a freedom-respecting TPM" is. We need enough of the people doing the latter that it is market-infeasible for banks et al to require remote attestation.
> A TPM with an attestation key that is known to the previous owner (eg manufacturer or distributor) allows the previous owner (and everyone who trusts them) to compel the current owner to verify what software they are running on their own machine.
The user can change the seeds and this stops being the case. Also, this feature is very useful in enterprise settings. Useful here, harmful there -- that's what makes the technology neutral. Even if TPMs didn't have endorsement key certificates, the platform vendor could add its own -- TPMs would be pretty useless without being able to derive keys from seeds or wrap keys, but having that feature is enough to make possible what you dislike.
> Saying "buy this corporate phone and then run this procedure to change the TPM key [destroying your ability to use corporate-based attestation]" isn't a compelling story [...]
You wouldn't say that. You'd say "buy this [device] and install [certain OS] on it". Why would vendors pre-ship the thing that they don't want? The user has to do something, but it can be made easy.
It's like you didn't read this subthread. You can't keep them from making sure that customer devices have this sort of hardware, not without resorting to legislation that you'll never get, so getting to at least have an opt-out is a lot better than not.
Actually even with the requirements I laid out, DRM would still work. A manufacturer could ship a device with decryption keys for DRM content. If the end-owner decided to liberate their device, then the decryption keys would be erased and DRM content could no longer be played. The owner would be left with a blank device ready to accept new software, likely open source, and the device wouldn't be relegated to becoming e-waste.
That's how it works with some of the hardware-backed DRM schemes that, for example, try to use remote attestation with a TPM, or secure media pathways on Intel ME or AMD PSP. The DRM software simply stops working, while you still get to run everything you build yourself.
There's no stopping walled gardens -- not with technology as the vendors would choose not to use such technology. The most choice users will ever get to make (per-device) is whether to be locked-into or locked-out-of the walled gardens.
> I'm a staunch supporter of libre software, but I don't see the harm.
The harm is we're still perpetuating DRM crap. The music industry has accepted that Spotify's DRM can be trivially broken and completely given up trying to police BitTorrent... it's time for the movie industry to realize the same. Sadly, Netflix failed to get enough dominance to bend over and cane the movie industry into submission, and regulatory agencies worldwide just accepted happily that movie studios went all-in on vertical integration again.
The concept of DRM was initially created on general purpose open computers. No TPM, no code signing, no remote attestation. Just plain old proprietary software running on open systems was enough to create something with DRM functionality, and put blood in the water for the control freaks.
Having a device be able to store and use secure keying material that cannot be read back out seems like a very useful basic property. A basic property which directly implies being able to store keys for decrypting media content, thus making for a better DRM scheme than could be achieved with a fully-debugger-able processor.
Of course, I could be missing a way that this property could be tweaked to neutralize some of its downsides - akin to how my original comment laid out slight modifications to the usual TPM/SE requirements that would prevent much abuse of TPMs/SEs by centralizing entities.
> The performance of older processes is also not sufficient for the latest cryptographic systems, such as Post Quantum algorithms or Multiparty Threshold ECDSA with Identifiable Aborts. On the upside, one could understand the design down to the transistor level using this process.
It would be great to see support for PQ and MPC algorithms, but what would make that easier would be SE programmability with access to the base primitives. The specific title reference implies GG-20 (whose original paper is broken, as revealed in later papers such as the Alpha-Rays attack), so that would suggest a need for an SE that could expose field element arithmetic (both for DKG and signature creation) and support for Paillier, but even just having the former would be a major boon, as you could implement MPC protocols that rely only on base DLog assumptions like DKLs18 and 19, which use oblivious transfer (albeit with some matrix operations). Any news on this that can be shared at this time?
One goal of an open-SDK SE is for all the base primitives to be open, precisely because stuff like GG-20 gets broken. I don't think it's realistic to think that we're at a point where we could burn such a novel and complicated class of algorithms into a chip, and expect it to have no revisions.
The specific citation was provided because MPC algorithms span a pretty big range of computational difficulty, depending on the features you require and the primitives you assume. Narrowing it down to an exemplar algorithm (even if broken) helps pin the analysis to a given complexity range.
It's also interesting to note that some MPC algorithms assume some amount of secure and real-time communication with an oracle to assist with key distribution, which means you're possibly having to stuff a TLS stack into the SE.
Feels like the https://cryptech.is people would be in this space. They did an FPGA HSM, and a secure element with an open design is on the same street - a neighbouring idea, maybe?
Some of us from the CrypTech core team are involved in Tillitis and the TKey device. Compared to the CrypTech HSM, the TKey is closer to a secure element (it started its life as an embeddable root of trust module for VPN relays).
The TKey is basically a tiny, RISC-V based System on Chip in a tiny FPGA. The SoC has a minimal interface to the host, a simple context-based memory map, and firmware to allow loading applications and generating application base secrets.
This is long overdue. I've long been intensely frustrated at the way the secure element industry keeps everything behind NDAs, even just basic datasheets.
The SEs you can get without NDAs are invariably fixed-function models which you can't program yourself, which aren't interesting at all to me. There shouldn't be such a thing as an "OpenPGP smartcard" (yet there is), there should just be generic programmable smartcards I can program with an application of my choice.
The sad thing is such smartcards do exist, you're just not allowed to have them because they're all locked behind NDAs (e.g. the ST32 line - not to be confused with the popular STM32 microcontrollers by the same company). These typically feature a 32-bit ARM core and flash, and are used for bank cards and so on. And you can't have one.
Literally all I want from a secure element is:
- Some nonvolatile storage which can only be accessed by the code on the device
- A way to flash a new program to a device in a way which causes all data on the device to be securely erased
- Smartcard-like physical hardening
If bunnie makes this happen I'll be very happy. But his mention of "if we release an updatable chip" sounds... concerning? It sounds like he is focusing on people being able to audit the code they're shipping rather than ship their own applications. Perhaps he could clarify what "non-updatable" means for the normal SKU?
One additional issue is the availability of non-volatile storage. To my knowledge "high-performance" processes can't actually use flash, since flash requires a specialised process. This leads to SoCs (as opposed to MCUs) tending to use OTP eFuses to program security keys, which is very troublesome as it makes it largely impractical to use them in an owner-controlled way. You essentially have to permanently "damage" the chip to enable secure boot. The ability to fully wipe down the device, as described above, is essential to a device that is practical to use.
I'm not sure how MCUs with on-board flash get around this; perhaps they are just built outright on a flash process. They could be multi-die but that would presumably add cost. The mention of using ReRAM is interesting, I wonder what's motivating that.
(The STM32 line of MCUs, while not hardened for security purposes, does at least have readout protection. But frustratingly, it can't be disabled once enabled, which prevents the "wipedown" scenario mentioned above. I also suspect that Oxide's work on using off-the-shelf MCUs for boot security processes, which has resulted in them discovering all of the dubious and frustrating things about these MCUs from a security perspective, means they'll be quite interested in the development described in this article.)
> But his mention of "if we release an updatable chip" sounds... concerning? It sounds like he is focusing on people being able to audit the code they're shipping rather than ship their own applications. Perhaps he could clarify what "non-updatable" means for the normal SKU?
This is driven by the paying customer (not the end user, but the entity actually writing checks with lots of zeroes on them and taking in truckloads of chips).
It is a fact that the biggest, highest volume paying customers to date for SEs would assume that such chips could only be updated through means that the customer controls. Thus, a business manager doing volume-driven research to craft a chip specification for such a market would reasonably conclude this is a mandatory checkbox for generating volume orders.
The flip side is the assumption that nobody in their right mind would pay for a chip that could be fully wiped and/or updated by end users. And to a first order, the conclusion is correct: I can't think of anyone who would write a check big enough to fill distribution channels with such a SKU. Keep in mind that chips are run by the wafer, and things don't get interesting until you run about 25 wafers in a single lot, so you're looking for someone to underwrite the cost for upwards of 100k devices until a sustainable market for such fully updateable chips materializes.
This is where the "obviousness" of an open source approach comes crashing into the reality of market-driven hardware economics. It seems pretty obvious to people like you and me that we should have fully updateable SKUs that can be wiped and provisioned by end users. But there's a chicken-and-egg problem of building the hardware, so that OSS developers can write the software stack, then hopefully getting adoption by volume businesses; but volume business wouldn't adopt until the hardware was cheap enough and the software stable enough.
The angle I'm hoping to exploit is that because the chips are done with no OTP and only ReRAM, we can release a fully-open and updateable SKU as a late-binding wafer sort option. So instead of having to buy a whole lot of wafers, we can split out a portion of a larger run as the "open source developer's" SKU, at a volume that could be underwritten with say, a crowdfunding campaign.
Thus, you'd have the same design, fabricated using the same masks and literally on the same wafer, but based on how some ReRAM bits are configured in the factory, one will accept updates, and the other will not. It's a compromise, but that's the angle I'm trying to work, at the very least. And until someone is willing to step in with a few million bucks to make the demands of other paying customers irrelevant, it's maybe the only practical approach I can see at the moment to break the chicken and egg cycle.
I think the team is on-board with the approach, but there's a lot of technical details to work out, which is why there is an "if" qualification. For example, if you totally wipe all the ReRAM, but you need some extant code to write ReRAM in the first place, you again have a chicken and egg problem. This problem is solvable at wafer sort using probe pads that can put the initial code in place to allow bootstrapping, but how to extend this to a convenient user-accessible solution is an open problem. This is particularly troublesome for SKUs that are destined to be packaged in minimal packages, e.g. an 8-pin SO package or a smart card format, where you simply lack the pins to expose a provisioning port. This is also, unfortunately, where all the volume is.
It could be that the open source SKU simply comes in a package with more pins and we burn precious I/O to expose something like a permanent JTAG interface. Muxing over the JTAG pins to other I/O can also help mitigate the challenge, but it adds complexity and verification/shipment risk if the mux circuit is defective, and again, a hardware bug costs millions of dollars in this scenario so no decision can be considered "trivial". Nobody wins if I try to force some clever provisioning solution that turns out to be broken (also keep in mind that smart card I/O is weird and fussy, it's not a generic CMOS I/O cell).
Also keep in mind that creating a version with a larger package could be a non-starter if we don't also have demand from a "paying" customer to fund the R&D and supply chain to push out that package format. Chip packages aren't free, and also come with substantial up-front costs and high minimum volume requirements.
Very interesting writeup - thanks for going into detail on this. So basically there will be a variant which isn't erasable unless you have the "OEM"'s key and a variant which is always erasable.
Everything above makes sense and I really appreciate how you're trying to make this happen for the open source community by intelligently designing a product for both markets.
I infer from the above that you're intending to avoid having any kind of mask boot ROM that could allow programming via the normal interface used to talk to the SE... trying to avoid a boot ROM is interesting, but this is certainly one of the downsides. I guess you could try and hardcode the logic for a serial download protocol over the standard interface to the SE, if you can validate it well...
Wishing you luck and hope everything goes to plan.
> The flip side is the assumption that nobody in their right mind would pay for a chip that could be fully wiped and/or updated by end users.
Eh, TPMs are fully erasable -- well, the seeds that matter anyways, but not the public keys that sign firmware updates. There's a big difference between "fully wiped" in the TPM sense and "updated by end users" if the latter means "with their own code", but the latter is still OK if in the process all key material is lost that could be used to impersonate the original device.
I.e., an SE that users can flash with their own software should be OK provided that there is no way to impersonate the original SE after doing so. I've no idea if that can be the case for SEs, but it definitely could be for TPM-like devices.
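To make that property concrete, here's a toy Python model (purely illustrative; the class and method names are my own invention, not any real TPM or SE API): flashing owner code first destroys the device-unique attestation seed, so the reflashed device cannot reproduce the factory identity.

```python
import os
import hashlib

class ToySecureElement:
    """Illustrative model of an SE that is user-flashable without being
    impersonable: accepting owner firmware wipes the attestation seed."""

    def __init__(self):
        # Device-unique seed, provisioned at manufacture.
        self.attestation_seed = os.urandom(32)
        self.firmware = b"factory firmware"

    def attestation_id(self) -> bytes:
        # Public identity derived from the seed; relying parties would
        # recognize the factory-provisioned value of this.
        return hashlib.sha256(b"attest" + self.attestation_seed).digest()

    def flash_owner_firmware(self, code: bytes) -> None:
        # Erase the old seed *before* accepting owner code, so the new
        # firmware can never present the original attestation identity.
        self.attestation_seed = os.urandom(32)
        self.firmware = code

se = ToySecureElement()
factory_id = se.attestation_id()
se.flash_owner_firmware(b"owner code")
assert se.attestation_id() != factory_id  # original identity is gone
```

The design choice being modeled: the wipe is unconditional and ordered before the new code runs, which is what breaks any trust relationship built on the shipped identity.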
FWIW it's a bit of a different game with TPMs than with secure elements in general - imagine how much it would cost a bank in card replacement fees if the firmware on its credit cards could be erased by anyone, rendering the card useless. That's not something that should be wipeable.
Sure, because the bank probably doesn't charge customers for the cost of the replacement card, so customers could get a free smartcard for other purposes. What do these cost nowadays?
> The mention of using ReRAM is interesting, I wonder what's motivating that.
Your previous paragraph succinctly addresses the motivation: high performance processes can't use flash. However, ReRAM is already qualified for use in these processes. Since ReRAM is non-volatile (with a storage lifetime measured in 100+ years) and dense, you get the benefits of both flash and OTP in a high performance process. By implementing "OTP" as a bank of ReRAM (with a write-protect bit, but still an erase-all mode), the option to do a full device wipe-down is theoretically available with OTP-like behavior.
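A quick sketch of that "OTP emulated on ReRAM" behavior, as I understand it (the class and lock-bit scheme here are my own simplification, not the actual hardware design): words are one-time programmable under normal operation, but a full-bank erase remains possible, unlike true fuse-based OTP.

```python
class ReRamOtpBank:
    """Toy model of OTP semantics layered on erasable ReRAM.

    program() enforces write-once behavior via per-word lock bits;
    erase_all() is the escape hatch fuses don't have, allowing a
    full wipe and re-provisioning under a new owner.
    """

    def __init__(self, size: int):
        self.words = [0] * size
        self.locked = [False] * size  # write-protect bits

    def program(self, addr: int, value: int) -> None:
        if self.locked[addr]:
            raise PermissionError("word already programmed (OTP semantics)")
        self.words[addr] = value
        self.locked[addr] = True  # set write-protect after first write

    def erase_all(self) -> None:
        # Wipe data *and* lock bits together: the bank returns to the
        # blank state, ready for a fresh provisioning pass.
        self.words = [0] * len(self.words)
        self.locked = [False] * len(self.locked)
```

The key property is that erase is all-or-nothing: you can't selectively rewrite a locked word, which preserves the tamper-evidence of OTP while still permitting a clean-slate reset.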
As an aside, I've always wanted a device that supports trusted boot using something like the following model:
- Secret generated inside the device in nonvolatile memory (e.g. during manufacturing)
- During boot, a boot ROM reads the initial boot block, computes KDF(secret, boot block), then sets a lock bit to prevent read/write of the secret in nonvolatile memory
- Boot the boot block, with the secret passed to it (or loaded into a cryptographic glovebox for application use, etc.)
This scheme is remarkably simple, yet can be used to bootstrap basically anything else, since you can write your own bootblock to derive further keys as you wish. Essentially, every bootblock gets its own unique secret. You can't change the bootblock, but if manufacturers can commit to an immutable mask ROM, people can commit to an immutable bootblock with enough effort. A managed upgrade protocol, in which a bootblock chooses to endorse a successor based on some arbitrary policy it implements, could probably be designed as well.
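The derivation step above can be sketched in a few lines of Python. The KDF choice here (HMAC-SHA256 over a hash of the boot block) is my assumption for illustration; the scheme only requires *some* KDF binding the device secret to the bootblock contents.

```python
import hashlib
import hmac

def derive_bootblock_key(device_secret: bytes, boot_block: bytes) -> bytes:
    """Model of the boot ROM step: KDF(secret, boot block).

    The boot ROM would compute this, hand the result to the bootblock,
    then set the lock bit so the raw secret is unreadable until reset.
    """
    boot_block_hash = hashlib.sha256(boot_block).digest()
    return hmac.new(device_secret, boot_block_hash, hashlib.sha256).digest()

# Any change to the bootblock yields an unrelated key, so modified
# code cannot recover secrets entrusted to the original bootblock.
secret = b"\x00" * 32  # device-unique secret (placeholder value)
k1 = derive_bootblock_key(secret, b"bootblock v1")
k2 = derive_bootblock_key(secret, b"bootblock v2")
assert k1 != k2
```

This is why each bootblock effectively gets its own secret: the derived key is a deterministic function of (device secret, bootblock image), so identical code always recovers the same key, while any tampering silently lands in a different key space.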
Not sure if this is compatible with what you're going for though. I could see some people not wanting this kind of thing or not liking a secret they can't access. I guess you could make it erasable and let people set their own secret.
Just some random ideas floating around in my head.
Sounds like you have the right idea in terms of the requirements I discuss above and which people are mentioning elsewhere in this thread. I look forward to reading the datasheets when this is announced.
> There shouldn't be such a thing as an "OpenPGP smartcard" (yet there is), there should just be generic programmable smartcards I can program with an application of my choice.
That's in fact very possible! I've been doing just that, i.e. running an open source implementation [1] of the OpenPGP smartcard standard [2] on a physical card to which I have the root keys.
The card specifications themselves are presumably behind NDAs, though, so I do have to trust the vendor to not have built in anything nefarious, e.g. a predictable/keyed RNG.
> And you can't have one.
I have three :)
The specifications are indeed behind NDAs, but you don't really need those – you can just work with the (publicly accessible) Global Platform application management commands and the Java Card API. The excellent GlobalPlatformPro [3] can take care of much of the former.
I'm aware of JavaCard (and those BASIC cards, which are even more absurd - why would anyone want to implement cryptography in BASIC?). But JavaCard (or BASIC) isn't the real CPU; it's a VM. So I guess I should have said, "I want a smartcard where I can program the actual CPU, not in some VM."
May as well add some random smartcard history... I believe historically one of the reasons for this is that most smartcard chips for the longest time were basically an 8051, a made-to-order mask ROM, and a very small amount of flash (too small for much code). So there was a real need to minimise the size of code which needed to be put in nonvolatile storage. I assume JavaCard implementations have their own implementations of standard cryptographic algorithms in mask ROM to avoid the need to put them in the undersized quantity of flash.
But this isn't the case anymore, where you have smartcards with 32-bit ARM cores and which are all flash-based. If there's a way to obtain and program one of these, I'd like to know about it.
> why would anyone want to implement cryptography in BASIC?
The cryptography for these cards is usually provided by the OS or even in custom hardware, mostly for performance reasons: The underpowered 8-bit MCUs that were commonly used for them can't run e.g. RSA in a reasonable timeframe, so they have custom coprocessors for that.
> "I want a smartcard where I can program the actual CPU, not in some VM." [...] smartcards with 32-bit ARM cores and which are all flash-based. If there's a way to obtain and program one of these, I'd like to know about it.
I'm unfortunately not aware of such a thing in the smartcard world, but if you think about it, this just pushes the problem of trust down one layer. At some point, you'll have to trust the hardware RNG to supply actual randomness (and not just the output of a known-seed PRNG).
I suppose it could be useful in some scenarios to get rid of the proprietary OS/VM implementation if you are concerned about bugs there, but in the end, you are still ultimately trusting your hardware vendor.
Or are you mainly interested in being able to use a stack other than Java Card?
It's a USB dongle rather than a smart card, but you might find it useful/interesting nonetheless. The site says they are out of stock but I have a few left. Contact me off line and I'll send you one if you think you'd be interested in noodling around with it.
Yeah. I'm aware of these kinds of things. Since real SE chips tend to be all NDAware (or, if they aren't, are fixed-function), these open source designs invariably work around that by using either a COTS MCU and its readout protection functionality, or maybe an FPGA with reverse-engineered open tooling.
There are several problems with these:
- The above one uses an STM32, which has an inflexible readout protection mechanism as I mentioned above;
- Readout protection really isn't hardened or intended to provide smartcard-level security, and is usually trivially glitched.
What is "NDAware"? The only reference I can find to this term anywhere on the web is in your comments on HN.
And what is your threat model? The STM32 series are designed to protect IP in industrial applications, so if their readout protection was easy to break that would destroy a substantial portion of their market.
By NDAware, I mean chips that require NDAs to access datasheets and register manuals, and which are thus of no interest to me.
STM32 is vulnerable to glitching, as are MCUs specifically marketed for "secure" applications like the SAM L11: https://chip.fail/
If your MCU doesn't specifically list hardening against glitching, it's almost certainly vulnerable to it (and good luck finding an MCU with anti-glitching measures which isn't NDAware). Best not to take the security claims of hardware vendors too seriously unless you've personally verified things.
Interesting to note these kinds of MCUs are used by things including Bitcoin wallets, etc., rather than purpose-built secure element chips, almost certainly because the latter are all NDAware. The above research managed to trivially glitch these chips. So this is an example of a real harm and security issue being caused by the current state of the SE market and its obsession with secrecy.
Ah. I parsed that as ND-Aware, not NDA-ware. You might want to add a hyphen.
> this is an example of a real harm
That depends on your threat model, which is the reason I asked. The SC4-HSM is designed to be secure against loss and casual theft (i.e. a home robbery), but not active theft by a technically savvy adversary.
> The sad thing is such smartcards do exist, you're just not allowed to have them because they're all locked behind NDAs (e.g. the ST32 line - not to be confused with the popular STM32 microcontrollers by the same company). These typically feature a 32-bit ARM core and flash, and are used for bank cards and so on. And you can't have one.
Actually, you can. German company ZeitControl sells cards you can program on your own in a BASIC dialect [1]. Been years since I last played around with my set, but they're still well in business.
Stealth mode simply refers to a stage in a company's evolution, not an on-going state of operation.
For example, because the company is not yet fully formed, it doesn't have a website -- nobody has made one, because it's not the most pressing thing to do right now. I don't even think the company has an official logo yet.
They are basically walking and chewing gum by trying to put together their advisory board, while they also put together the details of the corporate entity. In essence most startups go through a "stealth mode" phase because many disclosable details simply do not exist. It's perfectly consistent to say "we want to do open source", but also not have prepared a logo, a website, and an overall corporate narrative.
Counterintuitive, perhaps, but not necessarily purpose-defeating, because a stealth mode company can choose to release its open source components only at a later moment in time, when it is ready to generate positive cash flow.
there is no value right now in an open secure element. the secure element is arguably the most secure hardware platform in the world to deploy at scale.
- secure element is in every credit card
- secure element is in every cell phone
- secure element is in every modern passport
these aren’t simple use cases. these have national security implications. i foresee this going the opposite direction: as more critical things rely on secure elements, more individual proprietary implementations will emerge.
it’s still a standard. you can still develop on top of secure element. but who really benefits from it being open?
One obvious benefit is trust. Secure Elements are validated against the Common Criteria, often at a very high level (EAL 6+). Yet, there have been problems. An example is bad RSA implementations.
Developing for SEs normally requires signing boatloads of NDAs. This makes it hard to talk about weaknesses, or about things that could easily be improved (in support libraries, for example).
With open designs and open source, more people can review the SE and decide for themselves whether they are ready to trust it. And hopefully more bugs can be found, issues can be fixed, and things can be improved faster. This will benefit users and society as a whole.
Reduction in e-waste and reduction in forced obsolescence: the more open this secure element is, the more it sets a standard. Then that creates a robust ecosystem around it.
(For example, see the Raspberry Pi which helped spur an ecosystem. Also, even the original Pi 1 can still be useful today.)
Those secure elements in credit cards (Java Card platform) and cell phones (SIM) have an open standard. This is natural growth in that standard.