[continued from part #1]
(Full disclosure: this blogger worked on Google Wallet 2012-2013)
Mobile payment systems as implemented today muddy the clean lines between “card-present” and “card-not-present” interactions. Payments take place at a brick-and-mortar store with the consumer present and tapping an NFC terminal. But the card itself is delivered to the smart-phone remotely, via over-the-air provisioning. Consumers do not have to walk into a bank branch or even prove ownership of a valid mailing address. They may be required to pass a know-your-customer or KYC test by answering a series of questions designed to deter money laundering. But that requires no physical presence, being carried out within a web browser or mobile application.
Arguably this is not too different from traditional plastic cards: they are also “provisioned” remotely, by sending the card in an envelope via snail mail. Meanwhile the application takes place online, with the customer completing a form to provide the necessary information for a credit check; effectively card-not-present information. The main difference is that applying for a new credit card has a much higher bar than provisioning an existing card to a smart-phone. In the former case, the bank is squarely facing default risk: the possibility that the consumer may run up a hefty bill on the new card and never pay it back. In the latter scenario, there is no new credit being extended— NFC purchases are charged against a preexisting line of credit, with the same limits and interest rates as before. There is no reason to suspect that an otherwise prudent and restrained customer will become a spendthrift merely because they can also spend their money via tap-and-pay.
Consequently the burden of proof is much lower for proving that one owns an existing card than for proving that one is a good credit risk for an additional card based on past loan history. Applying for a new line of credit typically requires knowledge of a social-security number (perhaps the most misused piece of quasi-secret information, having been repurposed from an identifier into an authenticator), billing address and personal information such as date of birth. Adding an existing card to an NFC wallet is much simpler, although variations exist between implementations. For example Google Wallet required users to enter their card number, complete with CVV2, to prove ownership. Apple Pay goes one step further, borrowing an idea from Coin: users take a picture of the card with the phone camera. This is largely security theater. Any competent carder can create convincing replicas that will fool a store clerk inspecting the card in his/her hand; a blurry image captured with a phone camera is hardly an effective way to detect forgeries. It is more likely a concession from Apple to issuing banks. More importantly, no amount of taking pictures will reveal magnetically encoded information from the stripe. Neither Apple Pay nor Google Wallet has any assurance about CVV1. (Interestingly Coin can verify that, because it ships with an actual card-reader that users must swipe their cards through— a necessity because Coin effectively depends on cloning mag-stripes.)
Bottom line: all of the information required to provision an existing card to a mobile device for NFC payments can be obtained from a card-not-present transaction. For example, it can be obtained via phishing or by compromising an online merchant to observe cards in flight through their system. For the first time, it becomes possible to “obtain” a new credit card using only card-not-present data lifted from an existing card. That payment instrument can now be used to go on a spending spree in the real world, at any retailer accepting NFC payments. Online fraud has breached what used to be a Chinese wall and crossed over into brick-and-mortar space.
Several news stories have “discovered” that Apple Pay has not, in fact, spelled the end of credit card fraud and may even have created new opportunities. That seems surprising considering that NFC payments were supposed to be an improvement over magnetic-stripe swipes in terms of security, using a cryptographic protocol that prevents reusing information stolen from one merchant to make additional fraudulent transactions at another one. Much of the problem turns out to be the usual sad state of technology journalism. It is not that NFC or EMV have a new vulnerability that is being exploited against hapless iPhone users. (To be fair, EMV does have its fair share of security weaknesses and mobile payments have introduced some incremental risks, but those subtleties are not what has the press riled up.) Apple has not created a new way to steal credit-cards. But it has created a more effective avenue for monetizing already stolen cards. Apple Pay is not the vulnerability— it is just one particular technique for exploiting an ancient one.
Online vs in-store transactions
Going back to our summary of how credit-card payments operate, we differentiated between two types of transactions:
- Card-present, or more colloquially “in-store.” The customer walks up to a cash register and hands over their card to the merchant. That card can be “read” in different ways. At the low-tech end of the spectrum is old-school mechanical imprinting, which creates an actual carbon copy of the front of the card bearing the embossed numbers. More common is the “swipe,” where information encoded in the magnetic stripe at the back of the card is read by moving the card through a magnetic field. Finally, if the card has a smart chip, there is the EMV option of executing a complex cryptographic protocol between card and terminal. In that case each interaction is unique and the data observed by the terminal differs every time, unlike a magnetic stripe which yields the same information on each swipe.
- “Online,” or what used to be called phone-order/mail-order back when picking up a telephone or sending pieces of paper via USPS did not seem such antiquated concepts. Generically this class is known as “card-not-present” transactions, because the merchant does not have the actual piece of plastic in hand when placing the charge. (We will avoid the term “online” because in payments it is also used to describe a point-of-sale terminal communicating in real-time with the card network, as opposed to batching up transactions for later submission.)
From a fraud perspective, the key observation is that each modality exchanges slightly different information with the terminal. All of them share the same basic data such as the credit-card number and expiration date. But each one also introduces a unique twist. Track data on the magnetic stripe includes a 3-digit “card validation code” commonly called CVV1. Online transactions use a different value called CVV2, printed on the card itself but not encoded in the magnetic stripe. Meanwhile the basic version of EMV simulates the output of a swipe for “track-data” for backwards compatibility, but substitutes a dynamic CVV or CVV3 which changes each time in a manner unpredictable without knowing the cryptographic secret stored in the chip.
A corollary of this difference is that it acts as a natural “firewall” between channels. Fraud remains largely contained to its original channel. Consider the criminals who popped Target or Home Depot point-of-sale terminals in the past. This attack allowed them to amass a cache of raw track data from magnetic stripes swiped at those cash registers. That information can be used to create convincing replicas of cards that will behave exactly like the original card when swiped through a reader. But there is no CVV2 encoded in the magnetic stripe.** That is a limiting factor if our criminals wanted to monetize those cards online, instead of walking into a store. Most websites these days will collect and validate CVV2 for online orders. (As an aside, there are trade-offs in both avenues for monetization. In-store fraud is harder to scale because it requires recruiting mules to run the risk of walking into a store with fake cards, their faces captured on camera. Online fraud scales better; there is no equivalent of the limit on how many stores you can drive to or how many big-box items can fit into the trunk. The downside is that delivery involves a shipping address that can be traced; notice how many ecommerce sites flat out refuse shipping to PO boxes.)
** Some merchants have started asking for or keying in CVV2 by looking at the card during retail transactions. That is a dangerous pattern. It may help that particular merchant reduce fraud temporarily by performing additional verification on the card, but it weakens the overall ecosystem by putting card-not-present transactions at greater risk from compromised terminals.
[continued from part I]
There are ways to improve confidence in the correct operation of a blackbox ECDSA implementation that has been tasked with signing transactions with our private key. One approach suggested in the original paper is choosing the nonce jointly between the blackbox and the external client requesting signatures. There is an inherent asymmetry here, because the two sides can not complete such a protocol on equal terms. Because knowledge of the nonce used for a particular signature allows private-key recovery, the client can not learn the final value that will be used to compute the signature. But we can still defeat a kleptographic system by guaranteeing that the blackbox can not fully control the choice of nonce either.
- Blackbox chooses its own nonce k and commits to it, for example by outputting a hash of the curve point that would result from using that nonce, e.g. P = k∗G, where G is the generator of the group. (Recap: the first half of an ECDSA signature is the x-coordinate of the point on the elliptic curve obtained by “multiplying” the generator with the nonce.)
- Client returns a random scalar value r.
- Blackbox opens the commitment to reveal its chosen point P— but not the scalar k— and then proceeds to compute the signature using Q = r ∗ (k∗G).
- Before accepting the signature, client verifies that the final point Q and original choice P are related as Q = r∗P.
This guarantees that even if a kleptographic implementation chose “cooked” k values to leak information, that signal is washed away when those choices are multiplied by random factors. In fact multiplication is not the only option. The protocol is equally effective with addition, using (r+k)∗G as the final point. But an extra point multiplication for both sides can not be avoided, because each side still has to compute its own contribution; neither can accept a result precomputed by the other side. (The additive variant does make it easier for the client to verify the expected result, finishing with a simple point addition of P and r∗G instead of a multiplication involving P.)
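The exchange above can be sketched end-to-end. This is a toy illustration only: modular exponentiation in a small prime-order subgroup stands in for elliptic-curve point multiplication, and the group parameters are made up for readability rather than taken from any real curve.

```python
import hashlib, secrets

# Toy stand-in for the elliptic-curve group: g^k mod p plays the role
# of point multiplication k*G. Illustrative parameters, not a real curve.
p, q, g = 1019, 509, 4   # safe prime p = 2q + 1; g generates the order-q subgroup

def commit(point: int) -> str:
    # Hash commitment to a group element.
    return hashlib.sha256(str(point).encode()).hexdigest()

# Step 1: blackbox picks its nonce k and commits to the point P = g^k.
k = secrets.randbelow(q - 1) + 1
P = pow(g, k, p)
c = commit(P)

# Step 2: client replies with its own random scalar r.
r = secrets.randbelow(q - 1) + 1

# Step 3: blackbox opens the commitment (revealing P but not k) and
# signs with effective nonce r*k, whose public point is Q = P^r.
Q = pow(P, r, p)

# Step 4: client accepts only if the opening matches the commitment
# and the final point relates to P by the client's own scalar r.
assert commit(P) == c
assert Q == pow(P, r, p)
```

Even if the blackbox chose k maliciously, the effective nonce r·k is uniformly distributed once r is random, which is exactly the property that washes out any embedded signal.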
The main challenge for this protocol is the interactivity— it changes the interface between the ECDSA implementation and the client invoking a signature operation by requiring two round-trips. But it need not require changes to the client software. For cryptographic hardware such as HSMs, there is already a middleware layer such as PKCS#11 that translates higher-level abstractions such as signing into low-level commands specific to the device. This abstraction layer could hide the extra round-trip. Alternatively the round-trip can be eliminated by a stateful client. Suppose that after every signature the HSM outputs a commitment to the nonce it will use for the next signature and the client caches this value. Then each signature request can be accompanied by the client’s own random contribution, and each signature response can be verified against the commitment cached from the previous operation.
We can extend that further to come up with a different mitigation: suppose the blackbox ECDSA implementation is asked to commit to thousands of nonces ahead of time. The client can in turn specify a single seed value that will influence every nonce in the list according to a pseudo-random function of that seed. (We can’t simply add/multiply every nonce with the same random value. That would fail to mask patterns across multiple nonces chosen by a kleptographic implementation.) In this case no interactivity is required for signature operations, since both blackbox and client contributions to the final nonce are determined ahead of time. One caveat: a kleptographic implementation can try to cheat by faking a malfunction and outputting invalid responses in order to skip some values in the sequence, leaking information based on which ones were omitted. Meanwhile the client can’t insist that the blackbox sign with the previous nonce, because reusing nonces across different messages also results in private-key disclosure.
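A sketch of this batched variant, under the same toy-group assumptions as before (modular exponentiation standing in for point multiplication) and with HMAC playing the role of the pseudo-random function:

```python
import hashlib, hmac, secrets

# Toy discrete-log group: g^k mod p stands in for point multiplication k*G.
p, q, g = 1019, 509, 4

def commit(point: int) -> str:
    return hashlib.sha256(str(point).encode()).hexdigest()

def mask(seed: bytes, i: int) -> int:
    # Client-side pseudo-random factor for the i-th nonce, nonzero mod q.
    d = hmac.new(seed, i.to_bytes(4, "big"), hashlib.sha256).digest()
    return int.from_bytes(d, "big") % (q - 1) + 1

# Blackbox precommits to a batch of nonces (8 here; thousands in practice).
ks = [secrets.randbelow(q - 1) + 1 for _ in range(8)]
commitments = [commit(pow(g, k, p)) for k in ks]

# Client sees only the commitments, then contributes a single seed.
seed = secrets.token_bytes(16)

# Signature i uses effective nonce k_i * mask(seed, i). When the blackbox
# opens commitment i, the client checks both the opening and that the
# published nonce point equals P_i raised to the client's own mask.
for i, k in enumerate(ks):
    P_i = pow(g, k, p)                         # opened by the blackbox
    Q_i = pow(g, (k * mask(seed, i)) % q, p)   # point used in signature i
    assert commit(P_i) == commitments[i]
    assert Q_i == pow(P_i, mask(seed, i), p)
```

Because each nonce gets an independent mask derived from the seed, a kleptographic implementation can no longer arrange correlated patterns across consecutive signatures.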
As a side-note: precomputing and caching nonces can also serve as a performance optimization, by leveraging the online/offline nature of ECDSA. Such signature schemes have the unusual property that a significant chunk of the required computation can be done without seeing the message that is being signed. For ECDSA the generation of the first half of the signature fits the bill: multiply a fixed point of the elliptic-curve by a random nonce that is chosen independent of the message. That point multiplication is by far the most expensive part of the signature. Front-loading that and computing nonces ahead of time reduces perceived latency when it is time to actually emit a signature.
One problem not addressed in the original paper is that of key generation. If the objective is guarding against a kleptographic blackbox ECDSA implementation, then it can not be trusted to generate keys either. Otherwise it is much simpler to “leak” private keys directly by using a subverted RNG whose output is known to the adversary. Ensuring randomness of nonces used when signing will not help in that situation; the private key is already compromised without signing a single message. But the same techniques used to randomize the nonce can be applied here, since an ECDSA public-key is also computed as a point-product of the secret private key and fixed generator point. The blackbox can commit to its choice of private-key by outputting a hash of the public-key, and the client can provide additional random input that causes final chosen key to be distributed randomly even if the blackbox was dishonest.
All of this complexity raises a different question: why is Bitcoin using ECDSA in the first place? As pointed out, RSA signing does not suffer from this problem of requiring “honest” randomness for each signature. But that is simply one criterion among many considerations. A future post will compare RSA and ECDSA side-by-side for use in a cryptocurrency such as Bitcoin.
“Cold-wallets can be attacked.” Behind that excited headline turns out to be a case of superficial journalism and missing the real story. Referring back to the original paper covered in the article, the attack is premised on a cold-wallet implementation that has already been subverted by an attacker. Now that may sound tautological: “if your wallet has been compromised, then it can be compromised.” But there is a subtlety the authors are careful to point out: offline Bitcoin storage is supposed to be truly incommunicado. Even if an attacker has managed to get full control and execute arbitrary code- perhaps by corrupting the system ahead of time, before it was placed into service- there is still no way for that malicious software to communicate with the outside world and disclose sensitive information. Here we give designers the benefit of the doubt, assuming they have taken steps to physically disable/remove networking hardware and place the device in a Faraday cage at the bottom of a missile silo. Such counter-measures foreclose the obvious communication channels to the outside world. The attacker may have full control of the wallet system, including knowledge of the cryptographic keys associated with Bitcoin funds, but how does she exfiltrate those keys?
There is always the possibility of covert channels, ways of communicating information stealthily. For example the time taken for a system to respond could be a hidden signal: operate quickly to signal 0, introduce artificial delays to communicate 1. But such side-channels are not readily available here either; the workings of offline Bitcoin storage are not directly observable to attackers in the typical threat model. Only the legitimate owners have direct physical access to the system. Our attacker sits some place on the other side of the world, while those authorized users walk in to generate signed transactions.
But there is one piece of information that must be communicated out of that offline wallet and inevitably becomes visible to everyone— the digital signature on Bitcoin transactions signed by that wallet. Because transactions are broadcast to the network, those signatures are public knowledge. Within those signatures is an easy covert channel. Credit goes to ECDSA, the digital-signature algorithm chosen by Satoshi for the Bitcoin protocol. ECDSA is a probabilistic algorithm. For any given message, there is a large number of signatures that would be considered “valid” according to the verification algorithm; in fact for the specific elliptic curve used by Bitcoin, an extraordinarily large number in the same ballpark as the estimated number of particles in the observable universe. An “honest” implementation of ECDSA is expected to choose a nonce at random and construct the signature based on that random choice. But that same freedom allows a malicious ECDSA implementation to covertly send messages by carefully “cooking” the nonce to produce a specific pattern in the final signature output. For example successive key bits can be leaked by choosing the signature to have the same parity as the bit being exfiltrated.
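The parity trick is easy to demonstrate. In this toy sketch a small discrete-log group stands in for the curve, and g^k mod p stands in for the first half of a signature; the malicious signer simply redraws nonces until the output's parity matches the key bit being leaked (on average two tries per bit):

```python
import secrets

# Toy group standing in for the elliptic curve; made-up parameters.
p, q, g = 1019, 509, 4

def malicious_first_half(bit: int) -> int:
    # Keep drawing random nonces until the "signature" leaks the desired
    # bit in its parity. To any observer the output still looks random.
    while True:
        k = secrets.randbelow(q - 1) + 1
        r = pow(g, k, p)   # stands in for the x-coordinate of k*G
        if r % 2 == bit:
            return r

secret_bits = [1, 0, 1, 1, 0, 0, 1, 0]
signatures = [malicious_first_half(b) for b in secret_bits]

# Anyone who knows the scheme recovers the bits from public signatures:
leaked = [r % 2 for r in signatures]
assert leaked == secret_bits
```

At one bit per signature this channel is slow; the kleptographic construction discussed next leaks the whole key in two signatures.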
But the channel present within ECDSA is far more sophisticated. Building on the work of Moti Yung and Adam Young, it is an example of a kleptographic system. It is efficient: two back-to-back signatures are sufficient to output the entire key. It is also deniable: without the additional secret value injected by the attacker, it is not possible for other observers with access to the same outputs—recall that everyone gets to see transactions posted on the blockchain— to pull off that key-recovery feat. That includes the legitimate owner of the wallet. To everyone else these signatures look indistinguishable from those output by an “honest” cold-storage implementation.
There is a notion of deterministic ECDSA where nonces are generated as a function of the message, instead of chosen randomly. This variant was designed to solve a slightly different problem, namely that each ECDSA signature requires a fresh unpredictable nonce. Reusing one from a different message, or even generating a partially predictable nonce, leads to private-key recovery. While this looks like a promising way to close the covert channel, the problem is that there is no way for an outside observer to verify that a signature was generated deterministically. (Recall that we posit an attacker who has introduced malware subverting the operation of the cold-storage system, including its cryptographic implementation.) Checking that a signature was generated deterministically requires knowing the private key- which defeats the point of only entrusting private keys to the cold-storage itself.
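The idea behind deterministic nonces can be sketched in a few lines. This is a simplified illustration deriving the nonce with a single HMAC over the key and message; it is nowhere near the full RFC 6979 procedure, but it shows the determinism property:

```python
import hashlib, hmac

q = 509  # toy group order standing in for the curve order

def deterministic_nonce(private_key: int, message: bytes) -> int:
    # The nonce is a pseudo-random function of key and message: the same
    # message always yields the same nonce, different messages diverge.
    mac = hmac.new(private_key.to_bytes(32, "big"), message, hashlib.sha256)
    return int.from_bytes(mac.digest(), "big") % (q - 1) + 1

key = 1234
assert deterministic_nonce(key, b"tx-1") == deterministic_nonce(key, b"tx-1")
assert deterministic_nonce(key, b"tx-1") != deterministic_nonce(key, b"tx-2")
```

Note that checking these equalities requires the private key as HMAC input, which is exactly why an outside observer cannot audit determinism.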
This same problem also applies to other blackbox implementations of ECDSA where the underlying system is not even open to inspection, namely special-purpose cryptographic hardware such as smart-cards and hardware security modules (HSMs). An HSM manufacturer could use a similar kleptographic technique to disclose keys in a way that only that manufacturer can recover. In all other aspects, including statistical randomness tests run against those nonces, the system is indistinguishable from a properly functioning device.
[continued from part III]
In addition to the login screen and screen-saver, elevation prompts for sensitive operations will also work with smart-cards:
As before, the dialog can adjust for credential type in real-time. On detecting presence of a smart-card (more precisely, a card for which an appropriate tokend module exists and contains valid credentials) the dialog will change in two subtle ways:
- Username field is hard-coded to the account mapped from the certificate on the card, and this entry is grayed out to prevent edits
- Password field is replaced by PIN
If the card is removed before PIN entry is completed, UI reverts back to the original username/password collection model.
One might expect that elevation on the command line with “sudo” would similarly pick up the presence of a smart-card, but that is not the case. su and sudo still require a password. One heavy-weight solution involves installing a PKCS#11 PAM (pluggable authentication module), since OS X does support the PAM extensibility mechanism. A simpler work-around is to substitute an osascript wrapper for sudo. This wrapper can invoke the GUI credential collection which is already smart-card aware:
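A minimal sketch of such a wrapper (the file name gsudo is made up for illustration; the osascript “do shell script … with administrator privileges” incantation is what triggers the GUI authorization dialog):

```shell
#!/bin/sh
# gsudo: hypothetical sudo stand-in that routes elevation through the
# OS X GUI authorization dialog, which is smart-card aware.
# Usage: gsudo <command> [args...]
# (Naive quoting: arguments containing quotes or spaces need escaping.)
osascript -e "do shell script \"$*\" with administrator privileges"
```

Invoked as, say, gsudo touch /tmp/test, this pops the same elevation dialog used by GUI applications, complete with PIN collection when a card is present.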
(Downside is that the elevation request is attributed to osascript, instead of the specific binary to be executed with root privileges. But presumably the user who typed out the command knows the intended target.)
Before discussing the trust-model and comparing it to Windows implementation, here is a quick overview of steps to enable smart-card logon with OS X:
- Install tokend modules for the specific type of card you plan to use. For the US government PIV standard, the OpenSC project installer contains one out of the box.
- Enable smart-card login using the security command to modify the authorization database:
$ sudo security authorizationdb smartcard enable
YES (0)
$ sudo security authorizationdb smartcard status
Current smartcard login state: enabled (system.login.console enabled, authentication rule enabled)
YES (0)
(Side-note: prior to Mavericks the authorization “database” was a plain text-file at /etc/authorization and it could be edited manually with a text editor— this is why some early OSX smart-card tutorials suggest tinkering with the file directly. In Mavericks it is a true SQLite database and best manipulated with the security utility.)
- Associate one or more certificate mappings to the local account, using sc_auth command.
Because certificate hashes are tied to a public key, this mapping does not survive reissuance of the certificate under a different key. That defeats the point of using PKI in the first place. OSX is effectively using X509 as a glorified public-key container, no different from SSH in trusting specific keys rather than the generalized concept of an identity (“subject”) whose key at any given time is vouched for by a third party. Contrast that with how Active Directory does certificate mapping, adding a level of indirection by using fields in the certificate. If the certificate expires or the user loses their token, they can be issued a new certificate from the same CA. Because the replacement has the same subject and/or same UPN, it provides continuity of identity: different certificate, same user. There is no need to let every endpoint know that a new certificate has been issued for that user.
A series of future posts will look at how the same problem is solved on Linux using a PAM tailored for digital certificates. Concrete implementations such as PAM PKCS#11 have the same two-stage design: verify ownership of the private key corresponding to a certificate, followed by mapping the certificate to a local account. Its main differentiating factor is the choice of sophisticated mapping schemes. These can accommodate everything from the primitive single-certificate approach in OSX to the Windows design that relies on UPN/email, and other alternatives that build on existing Linux trust structures such as ssh authorized keys.
[continued from part II]
Managing the mapping for smart-card logon
OS X supports two options for mapping a certificate to a local user account:
- Perform look-up in enterprise directory
- Decide based on hash of the public-key in the certificate
For local login on stand-alone computers without Active Directory or equivalent, only the second, very basic option is available. As described by several sources [Feitian, PIV focused guides, blog posts], the sc_auth command in OS X— which is just a Bash script— is used to manage that mapping via various sub-commands. sc_auth hash purports to display keys on currently present smart-cards, but in fact outputs a kitchen sink of certificates including those coming from the local keychain. It can be scoped to a specific key by passing an identifier. For example, to get the PIV authentication key out of a PIV card when using OpenSC tokend modules:
$ sc_auth hash -k "PIV"
67081F01CB1AAA07EF2B19648D0FD5A89F5FAFB8 PIV AUTH key
The displayed value is a SHA1 hash derived from the public key. (Keep in mind that key names such as “PIV AUTH key” above are manufactured by the tokend middleware; your mileage may vary when using a different one.)
To convince OS X to accept that certificate for local logon, sc_auth accept must be invoked with root privileges:
$ sudo sc_auth accept -u Alice -k "PIV"
This instructs the system to accept the PIV certificate on presently connected smart-card for authenticating local user Alice. There is another option to specify the key using its hash:
$ sudo sc_auth accept -u Alice -h 67081F01CB1AAA07EF2B19648D0FD5A89F5FAFB8
More than one certificate can be mapped to a single account by repeating that process. sc_auth list will display all currently trusted public-key hashes for a specified user:
$ sc_auth list -u Alice
67081F01CB1AAA07EF2B19648D0FD5A89F5FAFB8
Finally sc_auth remove deletes all certificates currently mapped to a local user account:
$ sudo sc_auth remove -u Alice
Smart-card user experience on OS X
So what does the user experience look like once the mapping is configured?
First the bad news: don’t throw away your password just yet. The boot/reboot process remains unchanged. FileVault II full-disk encryption still requires typing in the password to unlock the disk.* Interestingly, its predecessor the original FileVault did support smart-cards, because it was decrypting a container in the file-system after enough of the OS had been loaded to support tokend. The new variant operates at a much lower level. Because OS X does not ask for the password a second time after the FileVault prompt, there is no opportunity to use the smart-card in this scenario.
Good news is that subsequent authentication and screen unlocking can be done using a smart-card. The system recognizes the presence of a card and modifies its UI to switch authentication mode on the fly. For example, here is what the Yosemite login screen usually looks like after signing out:**
After a card is connected to the system, the UI updates automatically:
Local account mapped to the certificate from the card is chosen, and any other avatars that may have been present disappear from the UI. More subtly the password prompt changes into a PIN prompt. After entering the correct PIN, the system will communicate with the card to verify its possession of the private-key and continue with login as before.
- On failed PIN entry, the system does not display the number of tries remaining before the card is locked. It is common for card standards to return this information as part of the error; the PIV specification specifically mandates it. Windows will display the count after incorrect attempts as a gentle nudge to be careful with the next try; a locked card typically requires costly intervention by enterprise IT.
- After logging in, it is not uncommon to see another prompt coming from the keychain, lest the user be lulled into a false sense of freedom from passwords:
Keychain entries previously protected by the password still need to be unlocked using the same credential. If authentication took place using a smart-card, that password is not available after login. So the first application trying to retrieve something out of the key-chain will trigger on-demand collection. (As the dialog from Messages Agent demonstrates, that does not take very long.)
Unlocking the screen works in a similar fashion, reacting to the presence of a card. Here is example UI when coming out of screen-saver that requires password:
After detecting card presence:
This is arguably the main usability improvement to using smart-cards. Instead of typing a complex passphrase to bring the system out of sleep or unlock after walking away (responsible individuals lock their screen before leaving their computer unattended, one would hope) one need only type in a short numeric PIN.
* In other words OS X places the burden of security on users to choose a random pass-phrase, instead of offloading that problem to specialized hardware. Similarly Apple has never managed to leverage TPMs for disk encryption, despite a half-hearted attempt circa 2006, keeping with the company tradition of failing to grok enterprise technologies.
** MacWorld has a helpful guide for capturing these screenshots, which involve SSH from another machine.
[continued from part I]
Local vs domain logon
In keeping with the Windows example from 2013, this post will also look at local smart-card logon, as opposed to directory based. That is, the credentials on the card correspond to a user account that is defined on the local machine, as opposed to a centralized identity management service such as Active Directory. A directory-based approach would allow similar authentication to any domain-joined machine using a smart-card, while the local case only works for a user account recognized by one machine. For example, it would not be possible to use these credentials to access a remote file-share afterwards, while that would be supported for the AD scenario because the logon in that case results in Kerberos credentials recognized by all other domain participants.
Interestingly that local logon case is natively supported by the remnants of smart-card infrastructure in OS X. By contrast Windows only ships with domain logon, using PKINIT extension to Kerberos. Third-party solutions such as eIDAuthenticate are required to get local scenario working on Windows. (In the same way that Apple can’t seem to grok enterprise requirements, MSFT errs in the other direction of enabling certain functionality only for enterprise. One can imagine a confused program manager in Redmond asking “why would an average user ever want to login with a smart-card?”)
At a high-level there are two pieces to smart-card logon, which is to say authenticating with a digital certificate where the private key happens to reside on a “smart card.” (Increasingly the card is not an actual card but could be a USB token or even virtual-card emulated by the TPM on the local machine.)
- Target machine decides which user account that certificate represents
- The card proves possession of the private-key corresponding to the public-key found in the certificate
The second part is a relatively straightforward cryptographic problem that has many precedents in the literature, including SSH public-key authentication and TLS client authentication. Typically both sides jointly generate a challenge, the card performs a private-key operation using that challenge, and the machine verifies the result using the public key. Exact details vary based on key type and allowed key-usage attributes in the certificate. For example if the certificate has an RSA key, the machine could encrypt some data and ask the card to respond with proof that it can recover the original plaintext. If the certificate instead has an ECDSA key which is only usable for signatures but not encryption (or it has an RSA key but the key-usage constraints on the certificate rule out encryption) the protocol may involve signing a jointly generated challenge.
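The encrypt-and-recover variant can be illustrated with textbook RSA and tiny primes. This is purely a sketch of the proof-of-possession idea: no padding, no real key sizes, nothing a real card or verifier would accept.

```python
import secrets

# Textbook RSA with tiny, well-known demo primes -- illustrative only.
rsa_p, rsa_q = 61, 53
n = rsa_p * rsa_q                       # public modulus (3233)
e = 17                                  # public exponent
d = pow(e, -1, (rsa_p - 1) * (rsa_q - 1))  # card's private exponent

# Machine: pick a fresh random challenge and encrypt it under the
# public key taken from the card's certificate.
challenge = secrets.randbelow(n - 2) + 2
ciphertext = pow(challenge, e, n)

# Card: recover the plaintext with the private key it holds.
response = pow(ciphertext, d, n)

# Machine: accept only if the card recovered the original challenge,
# i.e. it demonstrably possesses the private key.
assert response == challenge
```

A signature-based exchange works the same way in reverse: the card signs the joint challenge with the private key and the machine verifies with the certificate's public key.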
Tangent: PKINIT in reality
The reality of the PKINIT specification is a lot messier— not to mention the quirks of how Windows has implemented it historically. Authentication of the user is achieved indirectly in two steps. First the client sends the Kerberos key-distribution center (KDC) a request signed with its own key-pair, then receives a Kerberos ticket-granting-ticket (TGT) from the KDC encoded in one of two different ways:
- Encrypted in the same RSA key used by the client when signing— this was the only option supported in early implementations of PKINIT in Windows
- Using a Diffie-Hellman key exchange, with DH parameter and client input authenticated by the signature sent in the original request
Mapping from any CA
The good news is that local logon does not involve a KDC, so there is no reason for implementations to worry about idiosyncrasies of RFC4556 or interop with Kerberos. Instead they have a different problem in mapping a certificate to user accounts. This is relatively straightforward in the enterprise: user certificates are issued by an internal certificate authority that is either part of an Active Directory domain controller running the certificate-services role (in the common case of an all-Windows shop) or some stand-alone internal CA that has been configured for this purpose and marked as trusted by AD. The certificates are based on a fixed template with the username encoded in specific X509 fields such as email address or UPN, the user principal name. That takes the guesswork out of deciding which user is involved. The UPN “email@example.com” or that same value in the email field unambiguously indicates this is user Alice logging into the Acme enterprise domain.
By contrast local smart-card logon is typically designed to work with any existing certificates the user may have, including self-signed ones; this approach might be called “bring-your-own-certificate” or BYOC. No assumptions can be made about the template that certificate follows, other than conforming to X509. It may not have any fields that would allow obvious linking with the local account. For example there may be no UPN, and the email address might point to Alice’s corporate account, while the machine in question is a stand-alone home PC with a personal account nicknamed “sunshine.” The main design challenge then is devising a flexible and secure way to choose which certificates are permitted to log in to which account.