## The paradox of hard and easy


If you're interested in one-way functions and Kolmogorov complexity, you'll probably want to read this mind-crunching article:

"Researchers Identify ‘Master Problem’ Underlying All Cryptography", by Erica Klarreich, Quanta Magazine (April 6, 2022)

The existence of secure cryptography depends on one of the oldest questions in computational complexity.

To ease our way, here are brief descriptions of the two key terms:

In computer science, a **one-way function** is a function that is easy to compute on every input, but hard to invert given the image of a random input. Here, "easy" and "hard" are to be understood in the sense of computational complexity theory, specifically the theory of polynomial time problems. Not being one-to-one is not considered sufficient for a function to be called one-way….

(source)

In algorithmic information theory (a subfield of computer science and mathematics), the **Kolmogorov complexity** of an object, such as a piece of text, is the length of a shortest computer program (in a predetermined programming language) that produces the object as output. It is a measure of the computational resources needed to specify the object, and is also known as **algorithmic complexity**, **Solomonoff–Kolmogorov–Chaitin complexity**, **program-size complexity**, **descriptive complexity**, or **algorithmic entropy**. It is named after Andrey Kolmogorov, who first published on the subject in 1963.

(source)
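True Kolmogorov complexity is uncomputable, but the intuition in the definition above can be sketched with a crude, computable stand-in: the length of a string after general-purpose compression. This is only an upper-bound-style proxy (any claim that a compressor approximates Kolmogorov complexity is a heuristic, not the theory), but it shows the contrast between a trivially describable string and one that looks random:

```python
import random
import zlib

def description_length_proxy(s: str) -> int:
    # Length in bytes of the zlib-compressed string: a crude,
    # computable stand-in for Kolmogorov complexity. It gives only
    # an upper-bound-style proxy; the true quantity is uncomputable.
    return len(zlib.compress(s.encode("utf-8"), level=9))

highly_regular = "9" * 1000  # trivially describable: "print a thousand 9s"

random.seed(42)  # fixed seed so the sketch is reproducible
looks_random = "".join(random.choice("0123456789") for _ in range(1000))

# The regular string compresses to a handful of bytes; the
# pseudo-random digit string hardly compresses at all.
print(description_length_proxy(highly_regular))
print(description_length_proxy(looks_random))
```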

The article begins unexpectedly, at least for me:

In 1868, the mathematician Charles Dodgson (better known as Lewis Carroll) proclaimed that an encryption scheme called the Vigenère cipher was “unbreakable.” He had no proof, but he had compelling reasons for his belief, since mathematicians had been trying unsuccessfully to break the cipher for more than three centuries.

There was just one small problem: A German infantry officer named Friedrich Kasiski had, in fact, broken it five years earlier, in a book that garnered little notice at the time.

Cryptographers have been playing this game of cat and mouse, creating and breaking ciphers, for as long as people have been sending secret information. “For thousands of years, people [have been] trying to figure out, ‘Can we break the cycle?’” said Rafael Pass, a cryptographer at Cornell Tech and Cornell University.

Five decades ago, cryptographers took a huge step in this direction. They showed that it’s possible to create provably secure ciphers if you have access to a single ingredient: a “one-way function,” something that’s easy to carry out but hard to reverse. Since then, researchers have devised a wide array of candidate one-way functions, from simple operations based on multiplication to more complicated geometric or logarithmic procedures.

Today, the internet protocols for tasks like transmitting credit card numbers and digital signatures depend on these functions. “Most of the crypto that is used in the real world is something that can be based on one-way functions,” said Yuval Ishai, a cryptographer at the Technion in Haifa, Israel.

But this advance has not ended the cat-and-mouse game — it has only sharpened its focus. Now, instead of having to worry about the security of every aspect of an encryption scheme, cryptographers need only concern themselves with the function at its core. But none of the functions currently in use have ever been definitively proved to be one-way functions — we don’t even know for sure that true one-way functions exist. If they do not, cryptographers have shown, then secure cryptography is impossible.

In the absence of proofs, cryptographers simply hope that the functions that have survived attacks really are secure. Researchers don’t have a unified approach to studying the security of these functions because each function “comes from a different domain, from a different set of experts,” Ishai said.

Cryptographers have long wondered whether there is a less ad hoc approach. “Does there exist some problem, just one master problem, that tells us whether cryptography is possible?” Pass asked.

Now he and Yanyi Liu, a graduate student at Cornell, have shown that the answer is yes. The existence of true one-way functions, they proved, depends on one of the oldest and most central problems in another area of computer science called complexity theory, or computational complexity. This problem, known as Kolmogorov complexity, concerns how hard it is to tell the difference between random strings of numbers and strings that contain some information.

Liu and Pass proved that if a certain version of Kolmogorov complexity is hard to compute, in a specific sense, then true one-way functions do exist, and there’s a clear-cut way to build one. Conversely, if this version of Kolmogorov complexity is easy to compute, then one-way functions cannot exist. “This problem, [which] came before people introduced one-way functions, actually turns out to fully characterize it,” Pass said.

The finding suggests that instead of looking far and wide for candidate one-way functions, cryptographers could just concentrate their efforts on understanding Kolmogorov complexity. “It all hinges on this problem,” Ishai said. The proof is “breakthrough work on the foundations of cryptography.”

…

Here's the abstract of the paper by Yanyi Liu and Rafael Pass (submitted to arXiv on 9/24/20):

**On One-way Functions and Kolmogorov Complexity**

We prove the equivalence of two fundamental problems in the theory of computing. For every polynomial t(n) ≥ (1+ε)n, ε > 0, the following are equivalent:

– One-way functions exists [*recte* exist] (which in turn is equivalent to the existence of secure private-key encryption schemes, digital signatures, pseudorandom generators, pseudorandom functions, commitment schemes, and more);

– t-time bounded Kolmogorov Complexity, K^t, is mildly hard-on-average (i.e., there exists a polynomial p(n) > 0 such that no PPT algorithm can compute K^t for more than a 1 − 1/p(n) fraction of n-bit strings).

In doing so, we present the first natural, and well-studied, computational problem characterizing the feasibility of the central private-key primitives and protocols in Cryptography.
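Stated compactly (this just transcribes the abstract's claim into notation):

```latex
\[
\text{one-way functions exist}
\;\Longleftrightarrow\;
K^{t}\ \text{is mildly hard-on-average},
\]
% where the equivalence holds for every polynomial
% $t(n) \ge (1+\varepsilon)n$, $\varepsilon > 0$, and "mildly
% hard-on-average" means: there is a polynomial $p$ such that no
% probabilistic polynomial-time algorithm computes $K^{t}(x)$ for
% more than a $1 - \frac{1}{p(n)}$ fraction of $n$-bit strings $x$.
```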

Klarreich goes on to broach the core issue of her argument thus:

**Leveraging Hardness**

Usually, a hard problem is an obstacle. But in cryptography, where you can deploy it against your adversaries, it’s a boon. In 1976, Whitfield Diffie and Martin Hellman wrote a groundbreaking paper in which they argued that the particular hardness of one-way functions was precisely what cryptographers needed to meet the demands of the dawning computer age. “We stand today on the brink of a revolution in cryptography,” they wrote.

In the decades that followed, researchers figured out how to build a wide variety of cryptographic tools out of one-way functions, including private key encryption, digital signatures, pseudorandom number generators and zero-knowledge proofs (in which one person can convince another that a statement is true without revealing the proof). Diffie and Hellman’s paper was “almost like a prophecy,” Pass said. From the single building block of one-way functions, cryptographers managed to build “these super-complex and beautiful creatures,” he said.

To get a feel for how one-way functions work, imagine someone asked you to multiply two large prime numbers, say 6,547 and 7,079. Arriving at the answer of 46,346,213 might take some work, but it is eminently doable. However, if someone instead handed you the number 46,346,213 and asked for its prime factors, you might be at a loss. In fact, for numbers whose prime factors are all large, there is no efficient way (that we know of) to find those factors. This makes multiplication a promising candidate for a one-way function: As long as you start with large enough prime numbers, the process seems easy to do, but hard to undo. But we don’t know for sure that this is the case. Someone could find a fast way to factor numbers at any moment.
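The asymmetry in that paragraph can be sketched in a few lines of Python. This is a toy only: real cryptographic moduli are hundreds of digits long, and trial division is merely the naive attack, but it makes the easy/hard contrast concrete:

```python
def multiply(p: int, q: int) -> int:
    # The "easy" direction: one machine operation,
    # polynomial time in the number of digits.
    return p * q

def factor(n: int) -> tuple[int, int]:
    # The "hard" direction, attacked naively by trial division.
    # For a product of two k-bit primes this loop runs on the
    # order of 2^(k/2) steps, exponential in the input length.
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d, n // d
        d += 1
    return 1, n  # n is prime (or 1)

n = multiply(6547, 7079)  # the article's example: 46,346,213
print(n)
print(factor(n))
```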

Cryptographers have gleaned an assortment of potential one-way functions from different areas of mathematics, but no single function has a higher claim than another. If, say, multiplication were toppled as a one-way function tomorrow, that wouldn’t say anything about the validity of the other candidate one-way functions. Cryptographers have long asked whether there is some quintessential one-way function — one which, if broken, would pull all the other candidates down with it.

…

What cryptographers were really after was a universal one-way function that stemmed from some natural problem — one that would give real insight into whether one-way functions exist. Researchers long had a particular problem in mind: Kolmogorov complexity, a measure of randomness that originated in the 1960s. But its connection with one-way functions was subtle and elusive.

Pass became fascinated with that connection as a graduate student in 2004. Over the years he toyed with the problem, without much success. But he felt sure there was something there, and a burst of activity in Kolmogorov complexity over the past five years only heightened his interest.

Pass tried to persuade several graduate students to explore the question with him, but none were willing to take on what might turn out to be a fruitless project. Then Yanyi Liu started graduate school at Cornell. “Yanyi was fearless,” Pass wrote in an email. Together, they plunged in.

Klarreich then asks:

**What Is Random?**

The concept of randomness is, by its nature, tricky to pin down. There’s a *Dilbert* comic strip in which an office tour guide shows Dilbert the accounting department’s “random number generator” — which turns out to be a monster who just keeps repeating the number 9. “Are you sure that’s random?” Dilbert asks. “That’s the problem with randomness,” his guide answers, “you can never be sure.”

After this, the article becomes increasingly mathematical and abstruse, paying attention to the implications of such deep and dense topics as computable and incomputable, practical and impractical (which is actually desirable), and so forth.

One quotation that struck me especially powerfully is this one from Ryan Williams, a computer scientist at the Massachusetts Institute of Technology:

“If you believe this [Kolmogorov complexity] problem is difficult … then you believe in one-way functions,” Williams said. And “if you believe in crypto at all, then you’ve kind of got to believe that this version of time-bounded Kolmogorov complexity must be hard.”

The article is accompanied by four captivating photographs:

1. Rafael Pass

2. Martin Hellman, Whitfield Diffie, and Ralph Merkle

3. Yanyi Liu

4. Andrey Kolmogorov

The caption of the last photograph reads: "Andrey Kolmogorov came up with a way to measure randomness based on how easy it was to describe the generation of a string of numbers." He is the veritable picture of a genial genius — but still a little bit scary.

**Selected readings**

- "Wake up! There's a Kolmogorov complexity trough!" (8/7/08)
- "Linguists who count" (5/28/09) — this comment
- "Becoming a modifier" (7/8/17)
- "The National Rhetoric Association" (1/5/13) — in the comments
- "Translating games" (12/9/08)
- "Man bites dog" (8/20/13)
- "'Tis the season" (12/21/07)
- "Pharyngula minutes" (8/1/08)

[Thanks to James Fanell]

## KeithB said,

April 12, 2022 @ 8:03 am

A one-time pad is practically the definition of a "one-way function", which is why it is unbreakable.

## Victor Mair said,

April 12, 2022 @ 8:08 am

From Peter O'Brien:

You might also want to look at Linear A – a language that has never been broken… that means something to cryptography, but I’m not telling what… :)

## Ross Presser said,

April 12, 2022 @ 9:17 am

@KeithB One-time pads are indeed unbreakable — you can brute-force them, but you'll recover *every possible plaintext* in doing so, with no way to know which one was the actual intended one. But they are not usually called one-way functions, nor are they usually chosen as a crypto solution, because they represent the most acute form of the key distribution problem — you have to securely transmit a key that is just as long as your plaintext. Technically I suppose it really is a function — Pad(n,b), where n is the position and b is the byte — but to reiterate, the pad is incompressible because it is random. The article goes into depth about what a *useful* one-way function is.

## Gregory Kusnick said,

April 12, 2022 @ 11:17 am

This can't be right. A system that leaks information about keys is insecure, regardless of how unbreakable its core function is.

## Philip Taylor said,

April 12, 2022 @ 3:26 pm

I am intrigued as to why the author of the "mind-crunching article" cited and linked at the beginning of the thread felt it necessary to include the phrase “using the following formula” in “print the first thousand digits of π using the following formula…”. I cannot see how the choice of formula can affect the output, assuming that both output in the same radix (typically 10).

## Peter Taylor said,

April 12, 2022 @ 4:15 pm

@Philip Taylor, it's because the thing you're measuring there is (effectively) the length of the formula.

## Paul Garrett said,

April 12, 2022 @ 7:28 pm

As Peter Taylor says, one form of the measure of "complexity" is the length of the formula that produces the output.

But/and "formula" is really an instance of "description"… and not every description is given by a (straightforward) plug-and-chug formula. That is, a description can involve branching logic, as in "if the last digit was X, then compute the following digit one way, and if the last digit was _not_ X, compute it this other way". Or, generally, computer programs, with their potential logical complexity…

But, interestingly, sometimes a tricky branching (recursive!?!) program can be much shorter than a more "linear" logical/formulaic presentation of an object. In that case, the object still does have a low complexity. The catch is that it is provably essentially impossible to effectively compute the complexity of objects (see Goedel, Turing, Kolmogorov, Solomonoff, Chaitin, et al.), that is, it is impossible to determine whether there is some tricky much-shorter "program" which will describe/produce the object at hand.

My own impression is that to truly understand complexity is akin to understanding P versus NP and such stuff. I'll be amazed if/when people make genuine progress.

## Philip Taylor said,

April 13, 2022 @ 3:07 am

I do understand the point that you (pl.) are making, but does not specifying the formula to be used defeat the object of the exercise ? Looking at the second of the definitions which Victor cited, we read :

Now if we want to compute the Kolmogorov complexity of the first 1000 digits of pi, surely "the shortest computer program" must be able to use any formula, not just one particular one, must it not ?

## Andrew Usher said,

April 13, 2022 @ 6:48 am

I agree that this article should at least have mentioned one-time pads (which aren't subject to this criterion at all) as well as the nature of the connection to P ?= NP, which people interested in the matter have surely heard of – practically, the existence of one-way functions is equivalent to P != NP (theoretically it is stronger).

The one-time pad in fact uses a symmetric function, as far from one-way as possible, and yet is easily seen to be perfectly secure (and the converse is also proven – no other cipher can resist infinite computational power); its security lies wholly in the key.
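The pad's symmetry and perfect secrecy can be sketched in a few lines of Python (a minimal illustration; the message strings are invented for the example):

```python
import secrets

def xor_bytes(a: bytes, b: bytes) -> bytes:
    # XOR is its own inverse: encryption and decryption are
    # the same symmetric operation.
    return bytes(x ^ y for x, y in zip(a, b))

plaintext = b"ATTACK AT DAWN"
key = secrets.token_bytes(len(plaintext))  # one-time, as long as the message
ciphertext = xor_bytes(plaintext, key)

# Decryption: apply the same function with the same key.
assert xor_bytes(ciphertext, key) == plaintext

# Perfect secrecy: for ANY candidate plaintext of the same length,
# some key maps it to this very ciphertext, so the ciphertext alone
# reveals nothing about which message was sent.
candidate = b"RETREAT AT TEN"
fake_key = xor_bytes(ciphertext, candidate)
assert xor_bytes(ciphertext, fake_key) == candidate
```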

Philip Taylor is surely right that, for complexity determination, the shortest program must be free to choose any correct algorithm, or the definition would be arbitrary (it is already arbitrary what 'machine' it is run on, but as long as it is Turing-complete and the same for the problems being compared, comparison is legitimate).

k_over_hbarc at yahoo.com

## Michael Watts said,

April 13, 2022 @ 12:55 pm

I actually take the opposite view on the printing-the-digits-of-pi problem statement. It is true that a string which consists of the first 1000 digits of pi would have Kolmogorov complexity determined by whatever formula led to the most succinct instructions.

But that's not what the article is saying. It's describing what Kolmogorov complexity means, and giving examples. "Print a thousand 9s" is an example of instructions that give a 1000-digit string composed entirely of 9s, but it may or may not be the *shortest possible* instructions that give that same string.

The same thing is going on in "print the first thousand digits of π using the following formula…". This is supposed to be an example of instructions from which Kolmogorov complexity will be measured. Valid instructions provide an upper bound on the Kolmogorov complexity of the string.

To be valid instructions, you have to specify the formula you're using. "Print the first thousand digits of pi" is illegitimate – how are you supposed to follow those instructions? There is a problem in this example, but it isn't the words "using the following formula"; it's the meaningless words that precede them, "print the first thousand digits of pi".
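The upper-bound reading can be made concrete. In this sketch the "programs" are Python source strings, so the lengths are Python-specific; each valid program bounds the string's complexity from above, and the true complexity is the minimum over all programs, which cannot be enumerated:

```python
import contextlib
import io

# Two valid "programs" (Python source strings) that print the same
# 1000-character string of 9s. Each one's length is an upper bound
# on the string's Kolmogorov complexity relative to Python.
literal_program = 'print("' + "9" * 1000 + '")'
clever_program = 'print("9" * 1000)'

assert len(literal_program) > 1000  # no better than the string itself
assert len(clever_program) < 20     # a much tighter upper bound

def run(src: str) -> str:
    # Execute a program string and capture what it prints.
    buf = io.StringIO()
    with contextlib.redirect_stdout(buf):
        exec(src)
    return buf.getvalue()

# Both produce identical output, so both are valid descriptions.
assert run(literal_program) == run(clever_program) == "9" * 1000 + "\n"
```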

Philip Taylor is thinking of the *question* "what is the Kolmogorov complexity of the first 1000 digits of pi?", but the article is thinking of the *answer* "at most this much, because I can use this formula to produce those digits".

## Paul Garrett said,

April 13, 2022 @ 1:23 pm

Andrew Usher, it's a mathematical tangent, but there is indeed a (mathematical) point that might come as a surprise to the general educated population, namely, that there can be several different _correct_ formulas/algorithms/programs that "compute something" (or "answer a question"). In particular, there need not be a "single correct" algorithm. _And_ some can be far more efficient than others. Discovery of more-efficient algorithms/formulas is one form of "progress". Yes, one could decide to say that "correct" really means "optimal, as well as correct", but, then, the issue of finding an optimal algorithm and proving that it is optimal is not algorithmically solvable (in general).

## Paul Garrett said,

April 13, 2022 @ 7:48 pm

… and, possibly amusingly, while we're off on a mathematical tangent: until relatively modern times, there was not a good understanding of "probability" versus "randomness". And there are possibly-counter-intuitive aspects: if one flips an unbiased coin 10 times, every possible sequence of outcomes of heads/tails is of equal probability. However, first, we'd be suspicious of a coin that returned all heads, and so on. A "random-looking" sequence of outcomes is more "believable". Yes, but, in fact, of course, every possible outcome has the same probability. The distinction between all heads and HTTTHTTTHHH or whatever is the Kolmogorov complexity, in effect, that a "truly random" outcome has no shorter description (or algorithm to describe it, etc.) than itself.

## KeithB said,

April 14, 2022 @ 7:13 am

Paul Garret:

And Las Vegas makes a lot of money on the confusion…

## Peter Grubtal said,

April 15, 2022 @ 12:43 am

Paul Garrett

…and what most people think would be a random distribution of points in a 2-D space is closer to a uniform distribution. A true random distribution shows clusters. This has led to big arguments about clustering of diseases near nuclear or chemical plants.
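The clustering effect is easy to see by simulation (a sketch with a fixed seed for reproducibility; the 10×10 grid size is an arbitrary choice):

```python
import random
from collections import Counter

random.seed(0)  # fixed seed so the sketch is reproducible

# 100 points uniform in the unit square, bucketed into a 10x10 grid.
points = [(random.random(), random.random()) for _ in range(100)]
cells = Counter((int(x * 10), int(y * 10)) for x, y in points)

# A perfectly "spread out" arrangement would put exactly one point
# in each of the 100 cells. Genuinely uniform randomness clusters:
# some cells collect several points while many stay empty.
max_in_one_cell = max(cells.values())
empty_cells = 100 - len(cells)
print(max_in_one_cell, empty_cells)
```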

## Bill Benzon said,

April 15, 2022 @ 8:59 am

Where's a super-intelligent AI when you need one?

## Andrew Usher said,

April 16, 2022 @ 8:40 am

Paul Garrett:

I am of course familiar with all that – that's just why I said 'choose any correct algorithm' rather than 'choose the correct algorithm'. The example of getting all heads from random coin flips (or anything similar) shows a difficulty in defining 'random' data – as you would, I am sure, admit, we can't prove that some given data can't be produced by an algorithm shorter than itself – all we can do is show its statistical indistinguishability from the output of a truly random process.

## unekdoud said,

April 19, 2022 @ 9:33 am

@Peter Grubtal: To muddle matters more, a "uniform distribution" (in the statistical sense) is something you can sample from to get a random point. Your typical "true random distribution" of 100 points, coming from a collection of iid uniformly distributed variables, can be obtained as one single configuration sampled out of a uniform distribution on the appropriate product space.