PRAGUE—Relying on microchipped identification cards isn’t a bad idea, says security researcher Petr Svenda of Masaryk University. But first we have to make them much harder to hack.
Svenda’s research here was at the heart of the ROCA vulnerability, a major flaw disclosed in October in an electronic-authentication technology used by numerous corporations and governments around the world. ROCA weakened the security of ID cards and fobs that use Infineon chips to generate RSA encryption keys, including 750,000 Estonian national IDs.
Hackers could exploit the vulnerability to compute a card’s private key from its public key. With that key, they could essentially steal identities, gaining entry into otherwise secure spaces and sowing doubt about the authenticity of the ID cards and fobs themselves.
The discovery underscores the security challenges surrounding myriad Internet of Things technologies. Vendors of Internet-connected identification devices, in particular, face challenges in ensuring that their products remain secure and patchable once in use. Encrypted identification is a booming business: Acuity Market Intelligence estimates the technology will be in use in more than 136 countries by 2021. Over the next couple of years, the firm predicts, vendors will rake in tens of billions of dollars as companies issue hundreds of millions of chip-based cards worldwide.
Part of the security problem, Svenda says, is the typical “landlord” style of encryption implementation, which puts the onus of end-user protection on the technology vendor—and keeps the technology’s inner workings under wraps. White-labeled microchip ID technology, he says, makes it hard to even determine whether an ID card is vulnerable.
“This proprietary style tends to fail less frequently—but with higher impact, if it happens,” he says, adding that being able to “check your key, and be sure whether it’s vulnerable or not, is very important.”
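The key check Svenda mentions can be run against the public key alone. A modulus produced by the flawed Infineon library has a telltale fingerprint: reduced modulo small primes, it always lands in the multiplicative subgroup generated by 65537. A minimal sketch of that membership test follows; the prime list here is illustrative, and the detection tools the researchers actually published use a larger, tuned set of primes.

```python
# Sketch of the ROCA-style fingerprint test (illustrative prime list).
# Keys from the flawed library satisfy N ≡ 65537^a (mod p) for small
# primes p, so we test membership in the subgroup generated by 65537.

SMALL_PRIMES = [11, 13, 17, 19, 37, 53, 61, 71, 73, 79,
                97, 103, 107, 109, 127, 151, 157]

def powers_of_65537(p):
    """Return the set of residues {65537^k mod p}."""
    seen, x = set(), 1
    while x not in seen:
        seen.add(x)
        x = (x * 65537) % p
    return seen

def looks_vulnerable(n):
    """True if the modulus n matches the fingerprint for every test prime."""
    return all(n % p in powers_of_65537(p) for p in SMALL_PRIMES)
```

Each prime that matches roughly halves the odds of a coincidence, so a modulus passing every test is overwhelmingly likely to come from the flawed generator. Because the test needs only the public modulus, anyone can audit their own card without vendor cooperation.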
The growing popularity of encrypted identification, as with that of most other technologies, comes with growing risks. How secure are digital ID cards and fobs, and are their manufacturers doing enough to protect consumers? Svenda is cautiously hopeful that they are taking threats seriously.
What follows is an edited transcript of our conversation.
What’s your biggest concern as “smart” identity cards are adopted across the world?
Government agencies are not really thinking that smart-card chips can actually fail en masse. It was always anticipated that a highly skilled attacker could break one chip with local access. But not all chips.
Did you discover the first instance of this happening?
Possibly. Using a similar mathematical toolkit, D.J. Bernstein discovered vulnerabilities in Taiwanese smart cards. But the attack was not valid for every single smart card. Several out of the millions coming from the manufacturer had a crippled random number generator.
In our discovery, the issue was not with the hardware but rather with the software library included in these chips. You can either not use that library and switch to a different algorithm, or you may try to use longer keys that are not yet shown to be vulnerable.
It turns out that the same library used on the Estonian cards is also loaded into trusted platform module (TPM) chips. And Infineon has 20 to 30 percent of the [smart-card chip] market share. We were surprised at how widely it’s used, especially for BitLocker, Microsoft’s disk encryption tool, and that the TPMs can be updated to allow a fix for the flawed key generation.
In this case, Infineon was able to at least create the firmware update. Update rollouts can be challenging, regardless. There are some discussions over whether we should develop better mechanisms for updates. Operating systems and software went through this 10 years ago; now it’s time for the Internet of Things.
IoT is driven mostly by functionality; security is typically added only later—if at all. (Smart identification cards are an exception in that security is by necessity built in from the start.)
Changing the mentality of hardware manufacturers at large, with respect to security, will take time. Regulatory pressure tends to be only gradual, and direct market pressure is difficult to mount. It usually takes about a decade to see significant improvements.
What should someone required to use these cards or fobs know? Should they be worried?
It’s always good if people are pressing their government to do the right thing! Generally, I believe that we need secure hardware. And I believe that it’s a big issue that the smart-card industry is so closed. Software is moving more and more into open source, with transparency and better code access. The smart-card industry is more like the regular hardware industry. Just to see a device’s technical specifications, you need to sign a nondisclosure agreement.
While there are certification labs that do a great job, they have a limited number of people, and they don’t necessarily catch everything or fix anything. The structure of the keys we were attacking was known to the certification labs; they had just overlooked the possibility that the keys could be transformed. The certification process, in my eyes, is important, but it is not the same as fixing a flaw.
It sounds like the problem is partially related to the supply chain, which has been an ongoing problem in security for some time.
In general, the supply chain is an issue. If everything is manufactured in China, you can’t really say what is happening there. The second issue is that there are usually at least two vendors between the manufacturer and the customer, and the severity of an issue gets lost in translation between them.
In this case, the intermediate vendors were not really motivated to tell the whole truth, because they were not responsible for introducing the flaw. The only end-user customer we notified and spoke with directly was Estonia, simply because we realized in August that it was still issuing vulnerable certificates for its electronic IDs.
How could ID chip vendors better secure the hardware?
We need physical hardware security that makes life way harder for remote attackers—something like universal two-factor authentication tokens.
I also hope to see hardware vendors use widely accessible, open-source descriptions of what they are doing. Right now, some claim that they are afraid that someone will steal their intellectual property. But if we are depending so much on these chips, I don’t want to see a situation where only a few people have all the details.
How could hardware makers avoid these catastrophic single points of failure?
In some Baltic states, they remove the single point of failure by moving half of the key to your mobile phone. The second half lives on a server operated by a consortium. So if an attacker wants to forge the signature, she needs to compromise your mobile phone, as well as the server.
With smart cards, you can do the same. You can basically have an ID equipped not with a single chip, but with two chips. Even if a flaw is discovered, it’s only in one chip.
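The split Svenda describes can be illustrated with RSA directly: give the phone one additive share of the private exponent and the server the other, have each compute a partial signature, and multiply the results into a valid full signature. The toy sketch below uses deliberately tiny textbook numbers and is not the hardened protocol any real deployment uses; it only shows why compromising one share is useless on its own.

```python
# Toy demonstration of splitting an RSA private exponent between two
# parties so that neither alone can sign. Key sizes are illustrative only.
import secrets

# Tiny textbook RSA key (never use sizes like this in practice).
p, q = 61, 53
n = p * q                 # modulus, 3233
phi = (p - 1) * (q - 1)   # 3120
e = 17                    # public exponent
d = pow(e, -1, phi)       # private exponent, 2753

# Split d additively: d1 + d2 ≡ d (mod phi).
# The "phone" holds d1, the "server" holds d2.
d1 = secrets.randbelow(phi)
d2 = (d - d1) % phi

msg = 65  # message representative, already reduced mod n

# Each party produces a partial signature on its own...
s1 = pow(msg, d1, n)   # phone's contribution
s2 = pow(msg, d2, n)   # server's contribution

# ...and the product is an ordinary RSA signature.
sig = (s1 * s2) % n
assert pow(sig, e, n) == msg  # verifies like a normal signature
```

An attacker who steals `d1` from the phone learns nothing usable, since `d1` alone is a uniformly random number; only the combination of both shares ever signs anything, and the full exponent `d` never needs to exist on either device after the split.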