There’s been growing concern that quantum computers could break the encryption algorithms we use to protect data today. Recently, NIST announced the approval of three quantum-safe algorithms designed to withstand attacks from quantum computers.
IBM developed two of the approved algorithms, so on the most recent episode of our podcast, Get With IT, we interviewed Vadim Lyubashevsky, a cryptography researcher at IBM Research, about the algorithms.
Here is an edited and abridged version of that conversation:
Why will quantum computers be able to break current encryption algorithms?
So there’s this algorithm from the mid-90s by Peter Shor that says that on a quantum computer, you can solve factoring and discrete log in polynomial time, rather than the exponential time we believe a classical computer needs. The classical algorithms weren’t broken right away because we didn’t have a quantum computer, but now there are a lot of breakthroughs happening, and there’s quite a good chance that we’ll have a fully fault-tolerant quantum computer by 2030, and maybe something powerful enough to actually break this cryptography by the mid-2030s.
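As a rough, editor-added illustration of the gap Lyubashevsky describes (a toy sketch, not Shor’s algorithm and not anything from the interview): the most naive classical attack on factoring, trial division, takes time exponential in the bit-length of the number being factored, which is why growing key sizes works against classical attackers but not against Shor’s algorithm.

```python
# Toy illustration (not Shor's algorithm): trial division finds a factor
# in ~sqrt(n) steps, i.e., 2^(bits/2) -- exponential in the input size.
from math import isqrt

def trial_division(n: int) -> int:
    """Return the smallest nontrivial factor of n, or n itself if n is prime."""
    for d in range(2, isqrt(n) + 1):
        if n % d == 0:
            return d
    return n

print(trial_division(15))                     # 3, instant
print(trial_division(1_000_003 * 1_000_033))  # ~10^6 steps: still feasible
# An RSA-2048 modulus would need ~2^1024 steps this way; Shor's algorithm
# on a fault-tolerant quantum computer would take polynomial time instead.
```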
I know we can’t get into too much detail in a 15-minute podcast, but if you can, can you describe at a basic level what’s special about these new algorithms that makes them more resistant to quantum computers?
So the simple answer is that nobody knows if any cryptography is actually safe from any type of computer. Even the classical algorithms that we’re replacing because of quantum computers might need to be replaced because classical computers can break them. It’s just that nobody has figured out a way to break these underlying math problems.
It’s the same thing with quantum-safe cryptography. The encryption in quantum-safe cryptography is based on mathematical problems that people who work in quantum algorithms have tried to solve, and they’ve struggled for about 20 to 30 years to make any progress. There have been a lot of failed attempts to break these problems, which shows that plenty of people are working on them. In terms of what exactly makes these problems algebraically hard, that’s difficult to say. The honest answer is that people have tried solving them and can’t.
I can give you a very brief toy example of the flavor of the problem we’re basing these things on. I like to think of it as a knapsack problem. Let’s say I give you 1,000 random numbers with 1,000 digits each, then I pick 500 of them, sum them up, and give you the sum, but you don’t know which 500 numbers I summed. Now the question to you is: which 500 numbers did I sum? Or, since there may be other sets of 500 numbers with the same sum, find any 500 numbers that sum to that amount.
This is the flavor of problem that we’re basing this lattice-based quantum-safe cryptography on, and there haven’t been any algorithmic advances, quantum, classical, or otherwise, that would make us think this is an easy problem. But of course, research has to go on, and cryptanalysts have to keep working on this; that’s how we gain confidence that this problem really is hard.
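To make that toy example concrete, here is a minimal Python sketch of the subset-sum game at deliberately tiny parameters (20 numbers instead of 1,000). This is our illustration of the problem’s flavor, not the actual lattice construction NIST standardized:

```python
# Toy subset-sum instance: publish n random numbers and the sum of a
# secret half-sized subset; the attacker must recover some valid subset.
import random
from itertools import combinations

def make_instance(n: int = 20, digits: int = 10):
    """Generate n random numbers and the sum of a secret n/2-element subset."""
    numbers = [random.randrange(10**(digits - 1), 10**digits) for _ in range(n)]
    secret = random.sample(range(n), n // 2)
    return numbers, sum(numbers[i] for i in secret)

def brute_force(numbers, target):
    """Try every half-sized subset: C(n, n/2) candidates, exponential in n."""
    k = len(numbers) // 2
    for subset in combinations(range(len(numbers)), k):
        if sum(numbers[i] for i in subset) == target:
            return subset
    return None

numbers, target = make_instance()
print(brute_force(numbers, target))
# At n=20 this checks at most C(20,10) = 184,756 subsets. At the n=1,000
# of the interview's example, C(1000,500) is roughly 10^299 -- hopeless.
```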
That leads me to my next question. When NIST announced these three algorithms, it said it’s still reviewing and approving more, and I know IBM has others being considered in that next grouping. Can you talk about the benefit of having many different algorithms for solving this problem?
There are two. The first is that if you have schemes based on different mathematical assumptions and one gets broken, you can migrate to the other. The second is that NIST standardized one encryption scheme and two signatures, now they’re standardizing one more, and there’s yet another round after that, so it seems like they really want a lot of signatures. The reason is that, unlike with encryption, we don’t know how to get the best of all worlds with signatures. Some signature schemes are shorter, some are much faster, some of the ones with short signatures have large public keys, and some have relatively short public keys.
Different scenarios require different signatures, and NIST has standardized two: one lattice-based and one hash-based. The significance of the hash-based one, even though it’s less efficient than all the others, is that its security assumption is very conservative: it rests only on the hardness of hash functions. So we have a security-efficiency trade-off. But, yeah, I would say the main reason NIST is running an additional round of signature algorithms is that they want something tailored for each scenario, since we can’t seem to get the best of all worlds.
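To put some rough numbers behind that trade-off (our addition, not the interviewee’s; sizes are quoted from memory of the published parameter sets, so check FIPS 204, FIPS 205, and the Falcon submission for exact figures):

```python
# Approximate public-key and signature sizes, in bytes, illustrating the
# trade-offs described above. Treat these figures as illustrative rather
# than authoritative.
sizes = {
    "ML-DSA-44 (lattice, Dilithium)": {"public_key": 1312, "signature": 2420},
    "SLH-DSA-128s (hash, SPHINCS+)":  {"public_key": 32,   "signature": 7856},
    "Falcon-512 (lattice)":           {"public_key": 897,  "signature": 666},
}
for name, s in sizes.items():
    print(f"{name}: pk = {s['public_key']} B, sig = {s['signature']} B")
# The hash-based scheme has a tiny public key but a large signature;
# Falcon has short keys and signatures but is harder to implement safely.
```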
IBM developed two of the three algorithms, so I’m curious how the company started getting involved in this research.
I moved to IBM in about 2015. Since they were developing quantum computers, they also wanted to protect the world against the eventual coming of quantum computers, so they thought it was wise to invest in quantum-safe algorithms as well. The call from NIST went out about a year after I got there, and we formed a consortium. One of my PhD students, who’s now an IBM researcher, was working on this, and that’s when we submitted the algorithm. So that was IBM’s reasoning: if we’re entering the quantum world, you have to do both things. You have to build the quantum computer, but also protect against quantum computers.
What would you say is the significance of NIST’s approval?
Now the standards are fixed. A lot of people were waiting to start the migration because they said things could still change, even though in the end they didn’t change much. For example, IBM already implemented something in their Z servers many years ago, and it can be easily migrated to a standard, but I think a lot of others had a wait-and-see approach. Now that the standards are there, there is nothing to wait for. So I think the significance is that there’s no longer any excuse not to start switching to quantum safety.
Do you think it’s likely that more companies will migrate to these standards right away now that they’re approved? Or do you think there’s going to be a bit of procrastination since it’s not a current concern?
There are companies whose data will still be valuable in 2035 and beyond. In that case, it makes sense to switch right away, because somebody could be intercepting the data now and waiting until they have a quantum computer to decrypt it. Maybe a regular company will say, look, we have nothing so interesting that we’re worried about it.
It’s more like you have a vacation coming up in seven days. When do you pack? You know it’s coming up, and you’re going to have to do it. So either you do it last minute, and you probably will forget a bunch of stuff, or you plan ahead and maybe leave a little bit of space in the suitcase in case at the last minute you need to put an umbrella or something, depending on the weather.
But, yeah, it makes sense to do most of the work right now, because you’re going to have to do it.