NFC hotel keys: when tag-types matter

Sometimes low-tech is better for security, especially when the alternative is choosing the wrong flavor of the latest technology. The growing use of NFC in hotel keys provides an instructive example.

The hospitality industry started out with old-fashioned analog keys, where guests were handed actual physical objects. Not surprisingly, they promptly proceeded to lose or misplace those keys, creating several problems for the hotel. The first is getting replacement keys made: there are certainly additional copies lying around, but they are finite in number compared to the onslaught of careless guests. More costly is potentially having to change the lock itself. With a key missing (and very likely attached to a key tag bearing the hotel logo along with the room number) there is now someone out there who could get into the room at will and burglarize the place. Upgrading to programmable card keys conveniently solves both problems. With a vast supply of cheap plastic stock, the hotel need not even charge guests for failing to return their key at checkout. Lost cards are not a security problem either, since the locking mechanism itself can be reprogrammed to reject previously issued keys.

Until recently most of these cards used magnetic stripes, the same 40-year-old technology found in traditional credit cards. It is trivial to clone such a card by reading the data encoded on it and writing that data to a new blank card. Encrypting or authenticating the card contents does not help: there is no need to reverse-engineer what the bits on the stripe represent, or to modify them. An identical card with the same exact bits is guaranteed to open the door as long as the original does. (There may well be time limitations, such as a check-out date encoded on the card beyond which the door will not permit entry, but those apply equally to the original and the clone.)

The jury is out on whether such cards are easier to copy than old-fashioned keys. In both cases some type of proximity is required. Reading a magnetic stripe involves swiping the card through a reader; that cannot be done with any stealth while the guest remains in possession of the card. Physical keys can be copied at any home-improvement store, but more surprisingly, a clone can be made from a photograph of the key alone. The bitting, the pattern of cuts and ridges along the blade, is an abstract representation of the key that is sufficient to create a replica. It can be extracted from an image, as famously demonstrated for Diebold voting machines. One paper from 2008 even used high-powered zoom lenses to demonstrate the feasibility of automatically extracting the bitting from photographs taken roughly 200 feet away from the victim.

The next cycle of upgrades in the industry is replacing magnetic stripes with NFC. This blogger recently used such a key and decided to scan it using the handy NXP TagInfo application on Android:

Ultralight hotel key

How does this fare in comparison to the previous two options? It is arguably the easiest of the three to clone.

First, notice the tag type: Ultralight. As the name implies, this is among the simplest tag types, with the least amount of memory. More importantly, Ultralight tags have no security mechanism protecting their contents against reading. They are effectively 48 bytes' worth of open book, available for anyone to read without authentication. Compare that to Mifare DESFire or even Mifare Classic, where regions of card memory have associated secret keys controlling read/write access. Data on those tags can only be accessed after the NFC reader authenticates with the appropriate key.
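To appreciate how little stands between an attacker and those 48 bytes, here is a minimal sketch of dumping the user memory of an Ultralight tag with the Python nfcpy library. It assumes an nfcpy-supported USB reader is attached and follows the Ultralight memory layout (user memory in pages 4 through 15, four bytes per page); treat it as illustrative rather than a turnkey cloning tool.

```python
# Minimal sketch: dump the user memory of a MIFARE Ultralight tag.
# Assumes a USB reader supported by nfcpy; no authentication is needed,
# the READ command simply hands over the data.
import nfc

def dump_ultralight(tag):
    # User memory spans pages 4..15, 4 bytes per page (48 bytes total).
    # Type2Tag.read(page) returns 16 bytes, i.e. four consecutive pages.
    for page in range(4, 16, 4):
        data = tag.read(page)
        print("pages %2d-%2d: %s" % (page, page + 3, bytes(data).hex()))
    return False  # tell nfcpy not to wait for the tag to be removed

clf = nfc.ContactlessFrontend('usb')
try:
    clf.connect(rdwr={'on-connect': dump_ultralight})
finally:
    clf.close()
```

Writing those bytes to a blank tag is all it takes to produce a functional clone; the only field that cannot be rewritten on a genuine blank is the factory-programmed UID, and so-called "magic" tags with writable UIDs are readily available.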

So far this is not that different from magnetic stripes. The problem is that NFC tags can be scanned surreptitiously while still on the person. Scanning does require proximity; the "N" does stand for "near," and typical read ranges are limited to roughly 10cm. The catch is that tags can be read through common materials such as cotton, plastic or leather; metal shielding is required to effectively protect RFID-enabled objects against reading. It takes considerable skill to lift a card out of someone's pocket, swipe it through a magnetic-stripe reader and return it undetected. It is much easier to gently bump into the same person while holding an ordinary phone with NFC capabilities.

The business case for upgrading a magnetic-stripe system to Ultralight is unclear. It certainly did not improve the security of guest rooms. By all indications it is also more expensive: even the cheapest NFC tag still costs more than the ubiquitous magnetic stripe, which has been around for decades, and the same goes for the readers mounted at every door to process these cards. The only potential cost savings lie in using off-the-shelf mobile devices such as Android tablets for encoding cards at the reception desk, a very small piece of overall installation costs. (Speaking of mobile, this system also cannot support the fancier use case of allowing guests to use their own phone as a room key. While the NFC hardware in most devices is capable of card emulation, it can only emulate specific tag types, and Ultralight is not one of them.) For a small incremental cost, the company providing this system could have used Ultralight C tags, which have three times as much memory and, crucially, support 3DES-based authentication with readers, making cloning non-trivial.
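For a sense of what that authentication buys, here is a rough sketch of the Ultralight C challenge-response with both sides simulated in Python (pycryptodome). It is deliberately simplified: the real protocol wraps these exchanges in NFC commands and chains CBC initialization vectors across messages, details glossed over here. The point is that a reader, or a would-be cloner, that does not know the 16-byte shared key cannot complete the handshake, so merely reading the tag is no longer enough.

```python
# Simplified model of the MIFARE Ultralight C 3DES mutual authentication.
# Illustrative only: real tags chain CBC IVs between messages and speak the
# NFC command set; here both parties are simulated in-process with IV = 0.
from Crypto.Cipher import DES3
from Crypto.Random import get_random_bytes

ZERO_IV = bytes(8)

def rotl(block):            # rotate an 8-byte block left by one byte
    return block[1:] + block[:1]

def enc(key, data):
    return DES3.new(key, DES3.MODE_CBC, ZERO_IV).encrypt(data)

def dec(key, data):
    return DES3.new(key, DES3.MODE_CBC, ZERO_IV).decrypt(data)

# 16-byte (2-key) 3DES secret shared by the tag and legitimate readers.
shared_key = DES3.adjust_key_parity(get_random_bytes(16))

# Step 1: the tag generates RndB and sends it encrypted.
rnd_b = get_random_bytes(8)
ek_rnd_b = enc(shared_key, rnd_b)

# Step 2: the reader recovers RndB, picks RndA, answers E(RndA || rotl(RndB)).
rnd_a = get_random_bytes(8)
reader_reply = enc(shared_key, rnd_a + rotl(dec(shared_key, ek_rnd_b)))

# Step 3: the tag checks that its own RndB came back rotated.
decrypted = dec(shared_key, reader_reply)
assert decrypted[8:] == rotl(rnd_b), "reader does not know the key"

# Step 4: the tag proves knowledge of the key by returning E(rotl(RndA)).
tag_reply = enc(shared_key, rotl(decrypted[:8]))
assert dec(shared_key, tag_reply) == rotl(rnd_a)
print("mutual authentication succeeded")
```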

CP

Dual-EC, BitLocker disk encryption and conspiracy theories

[Full disclosure: this blogger worked at MSFT but was not involved in BitLocker development]

The infosec community is still looking for a replacement following the cancellation of TrueCrypt. Last year the mysterious group behind the long-standing disk-encryption project announced they were discontinuing work on it. In a final insult, they suggested current users migrate to BitLocker, the competing feature built into Windows. The timing could not have been worse, coming just as NCC Group announced with great fanfare the completion of an unsolicited security audit of the project. (Not to worry; there are plenty of audit opportunities left in OS/2 and DEC Ultrix for the PDP-11, to pick other systems of comparable relevance.) What to do when your favorite disk-encryption system has reached end-of-life? Look around for competing alternatives and weigh their strengths and weaknesses, for starters. However, a recent article in The Intercept looking at Windows BitLocker spends more time spinning conspiracy theories than helping users evaluate it as a migration target. There are four "claims" advanced:

  • Windows supports the dual-EC random number generator (RNG) which is widely believed to have been deliberately crafted by the NSA to be breakable
  • BitLocker is a proprietary implementation, and its source code is not available for review
  • MSFT will comply with law-enforcement requests to provide content
  • MSFT has removed the diffuser from BitLocker without a good explanation, demonstrably weakening the implementation

Let’s take these one by one.

“Windows has dual-EC random number generator”

It is true that the Windows "next-generation" crypto API introduced in Vista supports the dual-EC RNG, widely believed to have been designed by the NSA with a backdoor allowing its output to be predicted. In fact it was a pair of MSFT employees who first pointed out, in a very restrained rump-session talk at the 2007 CRYPTO conference, that the dual-EC design permits a backdoor, without speculating on whether the NSA had actually availed itself of the opportunity. Fast forward to the Snowden revelations, and RSA Security found itself mired in a PR debacle when it emerged that the company had accepted a $10M payment from the NSA for incorporating dual-EC.

Overlooked in the brouhaha is that while dual-EC has been available as an option in the Windows crypto API, it was never set as the default random number generator. Unless an application went out of its way to request dual-EC specifically, and none of the built-in Windows features, including BitLocker, ever did, the backdoor would have sat idle. (That said, it creates interesting opportunities for post-exploit payloads: imagine state-sponsored malware whose only effect on the target is switching the default system RNG, with no other persistence.)
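To make "went out of its way" concrete: under the CNG API introduced in Vista, an application had to open the dual-EC algorithm provider by name, whereas a single call with no handle at all yields the system-preferred RNG. The sketch below drives both paths from Python via ctypes; the constants follow the CNG documentation as this blogger recalls it (the "DUALECRNG" provider was removed in Windows 10), so treat the details as approximate.

```python
# Sketch (Windows only): requesting dual-EC explicitly vs. the default CNG RNG.
# Constants per the CNG documentation; illustrative, not production code.
import ctypes

bcrypt = ctypes.windll.bcrypt
BCRYPT_USE_SYSTEM_PREFERRED_RNG = 0x00000002

buf = ctypes.create_string_buffer(32)

# Default path: the system-preferred RNG, no algorithm handle required.
status = bcrypt.BCryptGenRandom(None, buf, len(buf), BCRYPT_USE_SYSTEM_PREFERRED_RNG)
print("default RNG (status %d): %s" % (status, buf.raw.hex()))

# Opt-in path: an application had to open the dual-EC provider by name.
handle = ctypes.c_void_p()
status = bcrypt.BCryptOpenAlgorithmProvider(ctypes.byref(handle), "DUALECRNG", None, 0)
if status == 0:
    bcrypt.BCryptGenRandom(handle, buf, len(buf), 0)
    print("dual-EC RNG: %s" % buf.raw.hex())
    bcrypt.BCryptCloseAlgorithmProvider(handle, 0)
else:
    print("dual-EC provider not available (status %d)" % status)
```

No built-in Windows component, BitLocker included, ever takes that second path.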

From a product perspective, the addition of the dual-EC RNG to Vista can be considered a mere "checkbox" feature aimed at a vocal market segment. There was a published standard from NIST called SP800-90 laying down a list of officially sanctioned RNGs. Such specifications may not matter to end users, but they carry a lot of weight in the government/defense sector, where deployments are typically required to operate in some NIST-approved configuration. That is why the phrase "FIPS-certified" makes frequent appearances in sales materials. From MSFT's perspective, a lot of customers required those boxes to be checked as a prerequisite for buying Windows. Responding to market pressure, MSFT added the feature and did so in exactly the right way such niche-appeal features should be introduced: away from the mainline scenario, with zero impact on the majority of users who do not care about it. That is the main difference between RSA and Windows: RSA made dual-EC the default RNG in its crypto library, while Windows offered it as an option but never made it the system default. (That would have made no sense; in addition to the security concerns, dual-EC is plagued by dog-slow performance compared to alternatives based on symmetric ciphers such as AES in counter mode.)

Bottom line: the existence of a weak RNG as an additional option to satisfy a market niche, an option never used by default, has no bearing on the security of BitLocker.

“BitLocker is not open-source”

Windows itself is not open-source either, but that has never stopped people from discovering hundreds of significant vulnerabilities by reverse-engineering the binaries. Anyone is free to disassemble any component of interest or single-step through it in a debugger. Painstaking as that effort may be compared to reading original source, thousands of people within the infosec community have made a career out of it. In fact Microsoft even provides debug symbols derived from the source code to make the task easier. As far as closed-source binaries go, Windows is probably the most carefully examined piece of commercial software, with an entire cottage industry of researchers working to streamline that process in crafty ways. From comparing patched binaries against earlier versions to reveal silently fixed vulnerabilities, to basic research on how security features such as EMET operate, being closed-source has never been a hurdle to understanding what is going on under the hood. The idea that the security research community can collectively uncover hundreds of very subtle flaws in the Windows kernel, Internet Explorer or the graphics subsystem (massively complex code bases compared to BitLocker) while being utterly helpless to notice a deliberate backdoor in disk encryption is laughable.

Second, many people past and present did get to look at Windows source code at their leisure. Employees, for starters: thousands of current and former MSFT employees had the opportunity to freely browse Windows code, including this blogger during his tenure at MSFT. (That included the separate crypto codebase "Enigma," which involved signing additional paperwork related to export controls.) To allege that all of these people, many of whom have since left the company and spoken out in scathing terms about their time there, are complicit in hiding the existence of a backdoor, or too oblivious or incompetent to notice its presence, is preposterous.

And it is not only company insiders who had many chances to discover this hypothetical backdoor. Some government customers were historically given access to Windows source code to perform their own audits. More recently the company has opened transparency centers in Europe, inviting greater scrutiny. The idea that MSFT would deliberately include a backdoor with full knowledge that highly sophisticated and cautious customers, including China, not the most trusting of US companies, would get to pore over every line, or for that matter that it would provide doctored source code to hide the backdoor, is equally preposterous.

Bottom-line: being open-source may well improve the odds of the security community at large identifying vulnerabilities in a particular system. (Even that naive theory of "given enough eyeballs, all bugs are shallow" has been seriously questioned in the aftermath of Shellshock and the never-ending saga of OpenSSL.) But being closed-source in and of itself cannot be an a priori reason to disqualify a system on security grounds, much less serve as "evidence" that a hidden backdoor exists after the feature has survived years of reverse engineering in arguably the most closely scrutinized proprietary OS in the world. Linking source-code availability to security that way is a non sequitur.

“MSFT will comply with law-enforcement requests”

This is a very real concern for content hosted in the cloud. For data stored on servers operated by MSFT, such as email messages at Hotmail/Outlook.com, files shared via OneDrive or Office 365 documents saved to the cloud, the company can turn over content in response to an appropriate request from law enforcement. MSFT is not alone in that boat either; the same rules apply to Google, Facebook, Twitter, Yahoo, Dropbox and Box. Different cloud providers compete along the privacy dimension based on product design, business model, transparency and willingness to lobby for change. But they cannot hope to compete in the long run on their willingness to comply with existing laws on the books, or with creative interpretations of those laws.

All that aside, BitLocker is disk encryption for local content. It applies to data stored on the disk inside an end-user machine and on removable media such as USB thumb drives. It is not used to protect content uploaded to the cloud. (Strictly speaking, one could use it to encrypt cloud storage by applying BitLocker To Go to virtual disk images, but that is at best a curiosity, far from mainstream usage.)

On the surface, then, it seems there is not much MSFT can do if asked to decrypt a seized laptop with BitLocker enabled. If disk encryption is implemented properly, only the authorized user possesses the secret necessary to unlock it. And if there is some yet-to-be-publicized vulnerability affecting all BitLocker usage, such as cold-boot attacks, weak randomness or hardware defects in TPMs, there is no need to enlist MSFT's assistance in decryption: law enforcement might just as well exploit that vulnerability on their own, using their own offensive capabilities. Such a weakness would have existed all along, before the laptop was seized pursuant to an investigation. There is nothing MSFT can do to introduce a new vulnerability after the seizure, any more than they can go back in time to backdoor BitLocker before the machine was seized.

But there is a catch. Windows 8 made a highly questionable design decision to escrow BitLocker keys to the cloud by default. These keys are stored in association with the user's Microsoft Live account, presumably as a usability improvement against forgotten passphrases. If users forget their disk-encryption passphrase, or the TPM used to protect the keys malfunctions, they can still recover as long as they can log into their online account. That capability provides a trivial way for MSFT to assist in the decryption of BitLocker-protected volumes: tap into the cloud system and dig up the escrowed keys. The good news is that this default behavior can be disabled; in fact, it is disabled by default on enterprise systems, presumably because MSFT realized IT departments would not tolerate such a cavalier attitude toward key management.

Bottom-line: there is a legitimate concern here, but not in the way the original article envisioned. The Intercept made no mention of the disturbing key-escrow feature in Windows 8. Instead the piece ventures into purely speculative territory around the Government Security Program from 2003 and other red herrings about voluntary public/private-sector cooperation involving MSFT.

“MSFT removed the diffuser”

For a change, this is a valid argument. As earlier posts mentioned, full-disk encryption suffers from a fundamental limitation: there is no room for an integrity check. The encryption of one sector on disk must fit exactly in that one sector. This would not be a problem if our only concern were confidentiality, preventing other people from reading the data. But it is a problem for integrity, that is, detecting whether unauthorized changes were made. In cryptography this is achieved by attaching an integrity check to the data, a process frequently combined with encryption because confidentiality and integrity are both highly desirable properties.

But in FDE schemes, without any extra room to stash an integrity check, designers are forced to take a different approach. They give up on preventing bad guys from making changes, and instead try to make sure those changes cannot be controlled with any degree of precision. In other words, you can flip bits in the encrypted ciphertext stored on disk and it will decrypt to something (without an integrity check, there is no such thing as a "decryption error"), but that something will be meaningless junk; or so the designers hope. The original BitLocker diffuser attempted to achieve that effect by "mixing" the contents within a sector, such that modifying even a single bit of encrypted data would introduce errors all over the sector after decryption. That notion was later formalized in the cryptographic literature and standardized into modes such as XTS, now supported by self-encrypting disk products on the market.
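To see why such modes frustrate targeted tampering, the sketch below (using the pyca/cryptography package, with made-up key, tweak and data) encrypts a 512-byte "sector" with AES-XTS, flips a single ciphertext bit and decrypts. The damage is confined to the 16-byte AES block containing the flip, narrower than the sector-wide scrambling of the original diffuser, but the resulting bytes are garbage the attacker does not get to choose.

```python
# Tampering with an AES-XTS encrypted "sector": the flipped ciphertext bit
# garbles the entire 16-byte block it lands in, in an uncontrollable way.
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key = os.urandom(64)                      # AES-256-XTS uses two 256-bit keys
tweak = (1234).to_bytes(16, "little")     # per-sector tweak, e.g. sector number
sector = bytes(range(256)) * 2            # 512 bytes of recognizable plaintext

def xts(data, op):
    cipher = Cipher(algorithms.AES(key), modes.XTS(tweak))
    ctx = cipher.encryptor() if op == "enc" else cipher.decryptor()
    return ctx.update(data) + ctx.finalize()

ciphertext = bytearray(xts(sector, "enc"))
ciphertext[100] ^= 0x01                   # flip one bit inside block 6
tampered = xts(bytes(ciphertext), "dec")

for i in range(0, len(sector), 16):
    if tampered[i:i+16] != sector[i:i+16]:
        print("block %2d corrupted: %s" % (i // 16, tampered[i:i+16].hex()))
```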

Fast forward to Windows 8, and the diffuser mysteriously goes away, leaving behind vanilla AES encryption in CBC mode. With CBC it is possible to introduce partially controlled changes at the level of AES blocks. ("Partial" in the sense that one block can be modified freely, but the previous block is then garbled.) How problematic is that? It is easy to imagine hypothetical scenarios based on what the contents of a specific location represent. What if it is a flag that controls whether the firewall is on, and you could disable it? Or a registry setting that shuts off ASLR? Or one that enables kernel debugging, which then allows controlling the target with physical access? It turns out a more generic attack is possible in practice, involving executables; the vulnerability was already demonstrated against LUKS disk encryption for Linux. Suppose the sector on disk happens to hold an executable file that will be run by the user. Controlled changes mean the attacker can modify the executable itself, deciding what instructions will execute when that sector is decrypted to run the binary. In other words, you get arbitrary code execution. More recently, the same attack was demonstrated against the diffuser-less BitLocker.
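The mechanics are easy to demonstrate. In CBC, plaintext block i is recovered as the block-cipher decryption of ciphertext block i XORed with ciphertext block i-1, so an attacker who knows the old plaintext of block i can flip it to chosen values by XORing the difference into ciphertext block i-1, sacrificing block i-1 in the process. A small demonstration with the pyca/cryptography package, using the firewall-flag scenario above purely as a stand-in:

```python
# CBC malleability: XORing a chosen difference into ciphertext block i-1
# applies that difference to plaintext block i (and garbles block i-1).
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

key, iv = os.urandom(32), os.urandom(16)
sector = b"BLOCK-0........." + b"firewall=enabled" + b"BLOCK-2........."

def cbc(data, op):
    cipher = Cipher(algorithms.AES(key), modes.CBC(iv))
    ctx = cipher.encryptor() if op == "enc" else cipher.decryptor()
    return ctx.update(data) + ctx.finalize()

ciphertext = bytearray(cbc(sector, "enc"))

# Goal: make block 1 decrypt to "firewall=disable" instead of "firewall=enabled".
old, new = b"firewall=enabled", b"firewall=disable"
for j in range(16):
    ciphertext[j] ^= old[j] ^ new[j]      # tamper with ciphertext block 0

tampered = cbc(bytes(ciphertext), "dec")
print(tampered[16:32])    # b'firewall=disable'  -> attacker-controlled
print(tampered[0:16])     # 16 bytes of garbage  -> collateral damage
```

An executable on disk offers the same opportunity with higher stakes: flip the bytes of an instruction sequence you can predict, accept the garbled neighboring block, and wait for the victim to run the binary.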

So there is a very clear problem with this MSFT decision. It weakens BitLocker against active attacks where the adversary gets the system to decrypt the disk after having tampered with its contents. That could happen without user involvement if decryption is done by TPM alone. Or it may be an evil-maid attack where the laptop is surreptitiously modified but the legitimate owner, being oblivious, proceeds to unlock the disk by entering their PIN.

Bottom-line: Windows 8 did weaken BitLocker, either because the designers underestimated the possibility of active attacks or made a deliberate decision that performance was more important. It remains to be seen whether Windows 10 will repair this.

CP

Private cloud-computing and the emperor’s new key management (part II)

[continued from part I]

So what are the problems with Box enterprise-key management?

1. Key generation

First observe that the bulk data-encryption keys are generated by Box. These are the keys used to encrypt the actual contents of files in storage. They need to be generated randomly and discarded afterwards, keeping only the version wrapped by the master key. But access to the customer key is not required if one can recover the data-encryption keys directly. A trivial way for Box to retain access to customer data (for example, if ordered to by law enforcement) is to generate keys using a predictable scheme, or simply to stash away the original key.
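To make this concrete, here is a toy sketch of envelope encryption where the provider derives each per-file key deterministically from a secret of its own instead of drawing it from a real RNG. The names (provider_secret, file_id) are hypothetical and the code is purely illustrative, not a description of Box's implementation; the point is that the key wrapped under the customer's master key looks identical either way, while the provider can quietly regenerate the file key later without any HSM involvement.

```python
# Toy illustration: a provider that derives per-file keys deterministically can
# decrypt later without ever touching the customer's master key.
# Hypothetical names; not Box's actual design.
import hmac, hashlib, os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

provider_secret = os.urandom(32)       # known only to the (rogue) provider
customer_master_key = os.urandom(32)   # stands in for the HSM-held master key

def derive_file_key(file_id):
    # Indistinguishable from random to anyone without provider_secret.
    return hmac.new(provider_secret, file_id, hashlib.sha256).digest()

def upload(file_id, plaintext):
    file_key = derive_file_key(file_id)        # should have been os.urandom(32)
    nonce = os.urandom(12)
    ciphertext = nonce + AESGCM(file_key).encrypt(nonce, plaintext, None)
    wrap_nonce = os.urandom(12)                # wrap the file key, then "discard" it
    wrapped = wrap_nonce + AESGCM(customer_master_key).encrypt(wrap_nonce, file_key, None)
    return ciphertext, wrapped

ciphertext, wrapped_key = upload(b"doc-42", b"quarterly forecast")

# Later, without any HSM interaction, the provider regenerates the key:
recovered = derive_file_key(b"doc-42")
print(AESGCM(recovered).decrypt(ciphertext[:12], ciphertext[12:], None))
```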

2. Possession of keys vs. control over keys

Note that Box can still decrypt data at any time, as long as the HSM interface is up. For example, consider what happens when employee Alice uploads a file and shares it with employee Bob. At some future instant, Bob will need to get a decrypted copy of this file on his machine. By virtue of the fact that Box must be given access to the HSMs, there must exist at least one path where that decryption takes place within the Box environment, with Box making an authenticated call to the HSM.**

That raises two problems. The first is that the call does not capture user intent. As Box notes, any request to the HSM will create an audit trail, but that is not sufficient to distinguish between these cases:

  • Employee Bob is really trying to download the file Alice uploaded
  • Some Box insider went rogue and wants to read that document

While there is an authentication step required to access the HSMs, those protocols cannot express whether Box is acting autonomously or on behalf of a user at the other side of the transaction requesting a document. The second problem is that during routine use of Box, in the very act of sharing content between users, the plaintext of the document is exposed. That holds even if Box refrains from making additional HSM calls in order to avoid arousing suspicion, just to be on the safe side in case the enterprise is checking HSM requests against records of what documents its own employees accessed (records which are themselves provided by Box and presumably subject to falsification). If Box wanted to start logging documents, because it has gone rogue or is being compelled by an authorized warrant, it could simply wait until another user tries to download the same document, at which point decryption happens naturally; no spurious HSM calls are required. For that matter, Box could just wait until Alice makes some revisions to the document and uploads a new version in plaintext.

3. Blackbox server-side implementation

Stepping back from specific objections, there is a more fundamental flaw in this concept: customers still have to trust that Box has in fact implemented a system that works as advertised. This is ongoing trust for the life of the service, as distinct from one-time trust at the outset. The latter would have been an easier sell, because such leaps of faith are common when purchasing IT. It is the type of optimistic assumption one makes when buying a laptop, for example, hoping that the units were not Trojaned at the factory by the manufacturer. Assuming the manufacturer was honest at the outset, deciding to go rogue at a later point in time would be too late: it cannot compromise inventory already shipped out. (Barring auto-update or remote-access mechanisms, of course.)

With a cloud service that requires ongoing trust, the risks are higher: Box can change its mind and go "rogue" at any time. It can start stashing away unencrypted data, silently escrowing keys to another party or generating weak keys that can be recovered later. Current Box employees will no doubt swear upon a stack of post-IPO shares that no such shenanigans are taking place. This is the same refrain: "trust us, we are honest." They are almost certainly right. But to outsiders a cloud service is an opaque black box: there is no way to verify that such claims are accurate. At best an independent audit may confirm the claims made by the service provider, reframing the statement into "trust Ernst & Young, they are honest" without altering the core dynamic: this design critically relies on the competent and honest operation of the service provider to guarantee privacy.

Bottom line

Why single out Box when this is the modus operandi for most cloud operations? Viewing the glass as half-full, one could argue that at least they tried to improve the situation. One counter-point is that putting this much effort into a negligible privacy improvement makes for a poor cost/benefit tradeoff. After going through all the trouble of deploying HSMs, instituting key-management procedures and setting up elaborate access controls between Box and the corporate data center, the customer ends up not much better off than they would have been using vanilla Google Drive.

That is unfortunate, because this problem is eminently tractable. Of all the different private-computing scenarios, file storage is the most amenable to end-to-end privacy: after all, there is not much "computing" going on when all you are doing is storing and retrieving chunks of opaque ciphertext without performing any manipulation on them. Unlike searching over encrypted text or calculating formulas over a spreadsheet with encrypted cells, no new cryptographic techniques are required to implement this. (With the possible exception of proxy re-encryption, but only if we insist that Box itself handle sharing; otherwise there is a trivial client-side solution, sketched below, of decrypting and re-encrypting to another user's public key.) Instead of the current security theater, Box could have spent about the same amount of development effort to achieve true end-to-end privacy for cloud storage.
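For reference, the trivial client-side approach mentioned above amounts to a few lines once users hold their own keypairs. The sketch below uses PyNaCl sealed boxes purely for illustration; a real deployment would still need key distribution, revocation and integrity protections well beyond this.

```python
# Client-side sharing without trusting the storage provider: Alice decrypts
# locally and re-encrypts to Bob's public key. Illustrative only (PyNaCl);
# key distribution and revocation are out of scope.
from nacl.public import PrivateKey, SealedBox

alice, bob = PrivateKey.generate(), PrivateKey.generate()
document = b"encrypt before upload, decrypt after download"

# Upload: Alice encrypts to her own public key; the provider stores only this blob.
stored_blob = SealedBox(alice.public_key).encrypt(document)

# Sharing: Alice fetches the blob, decrypts it locally, re-encrypts for Bob and
# hands the new blob back to the provider to store alongside hers.
plaintext = SealedBox(alice).decrypt(stored_blob)
blob_for_bob = SealedBox(bob.public_key).encrypt(plaintext)

# Bob later fetches and decrypts with his own private key; the provider never
# sees anything but opaque ciphertext.
assert SealedBox(bob).decrypt(blob_for_bob) == document
```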

CP

** Tangent: Box has a smart client and a mobile app, so in theory decryption could also take place on the end-user PC. In that model, HSM access is granted to enterprise devices instead of to the Box service itself, keeping the trust boundary internal to the organization. But that model faces practical difficulties in implementation. Among other things, HSM access involves shared credentials: for example, in the case of the SafeNet Luna SA7000s used by CloudHSM, there is a partition passphrase that would need to be distributed to all clients. There is also the problem that user Alice could then decrypt any document, even those she does not have permission to access. Working around such issues would require adding a level of indirection: another service in front of the HSMs that authenticates users with their standard enterprise identity rather than their Box account. Even then there is the scenario of files accessed from a web browser, where no client-side intelligence exists to perform decryption on the fly.

Private cloud-computing and the emperor’s new key management (part I)

The notion of private computation in the cloud has been around, at least in theory, for almost as long as cloud computing itself, predating the days when infrastructure-as-a-service went by the distinctly industrial-sounding moniker of "grid computing." That precedence makes sense, because it addresses a significant deal-breaker for many organizations faced with the decision to outsource computing infrastructure: data security. What happens to proprietary company information when it is sitting on servers owned by somebody else? Can the cloud provider be trusted not to "peek" at the data or tamper with the operation of the services that tenants are running inside the virtual environment? Can the IaaS provider guarantee that some rogue employee cannot help themselves to confidential data in the environment? What protections exist if a government with a creative interpretation of Fourth Amendment rights comes knocking?

Initially cloud providers were quick to brush aside these concerns with appeals to brand authority and by brandishing certifications such as ISO 27001 audits and PCI compliance. Some customers, however, remained skeptical, requiring special treatment beyond such assurances. For example, Amazon has a dedicated cloud for its government customers, presumably with improved security controls and isolated from the other riff-raff always threatening to break out of their own VMs to attack other tenants.

Provable privacy

Meanwhile the academic community was inspired by these problems to build a new research agenda around computing on encrypted data. These schemes assume cloud providers are only given encrypted data which they cannot decrypt, not even temporarily; an important distinction on which many existing systems critically fail, as we will see. Using sophisticated cryptographic techniques, the service provider can perform meaningful manipulations on ciphertext, such as searching for text or number-crunching, producing results that are only decryptable by the original data owner. This is a powerful notion. It preserves the main advantage of cloud computing, leasing CPU cycles, RAM and disk space from someone else on demand to complete a task, while maintaining the confidentiality of the data being processed, including, crucially, the outputs of the task.

Cloud privacy in practice

At least that is the vision. Today private computation in the cloud is caught in a chasm between:

  • Ineffective window-dressing that provides no meaningful security (the subject of this post)
  • Promising ideas that are not yet feasible at scale, such as fully homomorphic encryption

In the first category are solutions which boil down to the used-car salesman's pitch: "trust us, we are honest and/or competent." Some of these are transparently non-technical in nature: warrant canaries, for example, are an attempt to work around the gag orders accompanying national security letters by using the absence of a statement to hint at some incursion by law enforcement. Others attempt to cloak the critical trust assumption in layers of complex technology, hoping that an abundance of buzzwords (encrypted, HSM, "military-grade," audit trail, …) can pass for a sound design.

Box enterprise key management

As an example, consider the enterprise key management (EKM) feature pitched by Box. On paper this attempts to solve a very real problem discussed in earlier posts: storing data in the cloud, encrypted in such a way that the cloud provider cannot read it. To qualify as "private computation" in the full sense, that guarantee must hold even when the service provider is:

  • Incompetent: it suffers a data breach by external attackers out to steal any data available
  • Malicious: it decides to peek into or tamper with hosted data, in violation of existing contractual obligations to the customer
  • Legally compelled: it is required to provide customer data to a law-enforcement agency pursuant to an investigation

A system with these properties would be a far cry from the popular cloud-storage solutions available today. By default Google Drive, Microsoft OneDrive and Dropbox have full access to customer data. Muddying the waters somewhat, they often tout as a "security feature" the fact that customer data is encrypted inside their own data centers. In reality such encryption is complete window-dressing: it can only protect against risks introduced by the cloud service provider itself, such as rogue employees or theft of hardware from data centers. That encryption can be fully peeled away by the hosting service whenever it wants, without any cooperation from the original data custodian.

Design outline

The solution Box announced with much fanfare claims to do better. Here is an outline of the design, to the extent it can be gleaned from published information:

  • There is a master key for each customer, where "customer" is defined as an enterprise rather than individual end users. (Recall that Box distinguishes itself from Dropbox and similar services by focusing on managed IT environments.)
  • As before, individual files uploaded to Box are encrypted with a key that Box generates.
  • The new twist is that those individual bulk-encryption keys are in turn encrypted with the customer-specific master key.

So far, this only adds a hierarchical aspect to key management. Where EKM differs is in transferring custody of the master key back to the customer, specifically to HSMs hosted in Amazon AWS and backed up by units in the customer data center holding duplicates of the same secret keys. (It is unclear whether these are symmetric or asymmetric keys. The latter design would make more sense, allowing encryption to proceed locally without involving the remote HSMs, with only decryption requiring interaction; a sketch of that variant appears below.)
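If the master key is indeed asymmetric, the hierarchy would look roughly like the following sketch (pyca/cryptography, with RSA-OAEP chosen arbitrarily for illustration; none of this is Box's documented design). Wrapping a freshly generated file key needs only the public half, which Box can hold, while every unwrap requires the private half, standing in here for the customer-controlled HSM.

```python
# Rough sketch of the key hierarchy with an asymmetric customer master key:
# per-file keys are wrapped locally with the public key; unwrapping requires
# the private key, which stands in for the HSM-resident master key.
# RSA-OAEP is an arbitrary illustrative choice, not Box's documented design.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# "HSM": the private key never leaves the customer's control.
customer_master = rsa.generate_private_key(public_exponent=65537, key_size=2048)
customer_public = customer_master.public_key()    # this half can live at Box

# Encryption path: entirely local, no HSM interaction needed.
file_key = os.urandom(32)
nonce = os.urandom(12)
ciphertext = nonce + AESGCM(file_key).encrypt(nonce, b"board minutes", None)
wrapped_key = customer_public.encrypt(file_key, oaep)
del file_key    # only the wrapped copy is kept alongside the ciphertext

# Decryption path: every unwrap is a round-trip to the customer-controlled HSM.
unwrapped = customer_master.decrypt(wrapped_key, oaep)
print(AESGCM(unwrapped).decrypt(ciphertext[:12], ciphertext[12:], None))
```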

Box implies that this last step is sufficient to provide “Exclusive key control – Box can’t see the customer’s key, can’t read it or copy it.” Is that sufficient? Let’s consider what could go wrong.

[continued in part II]

CP