Imagine a Matryoshka doll. Inside one doll is another, inside that one, yet another, and so on. Now, imagine that each subsequent doll is not just smaller than the last, but incomparably more complex inside – so much so that it's a million times harder to take apart than the previous one. And the next one, a million million times harder still. This is the Ackermann function – a mathematical object that looks like a three-line definition but hides an infinite tower of escalating complexity within.
Three Lines That Defy Imagination
Wilhelm Ackermann described his function in 1928 as an example of a function that is computable, yet fundamentally impossible to express with simple loops alone – loops whose number of iterations is fixed before they start. Mathematicians call such functions “not primitive recursive” – but behind this technical term lies something far more picturesque.
The function is denoted as A(m, n), where m and n are ordinary non-negative integers, and is defined by three rules:
- If m equals zero: the result is simply n plus one.
- If m is greater than zero and n equals zero: take A(m − 1, 1).
- If both are greater than zero: take A(m − 1, A(m, n − 1)).
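It may help to see these three rules as code. Below is a direct Python transcription – a minimal sketch, feasible only for very small m, since the recursion explodes almost immediately:

```python
import sys
from functools import lru_cache

sys.setrecursionlimit(100_000)  # even modest inputs recurse deeply

@lru_cache(maxsize=None)        # memoization keeps the small cases tractable
def ackermann(m: int, n: int) -> int:
    if m == 0:                  # rule 1: just add one
        return n + 1
    if n == 0:                  # rule 2: step down one level of depth
        return ackermann(m - 1, 1)
    # rule 3: the inner call becomes the second argument of the outer one
    return ackermann(m - 1, ackermann(m, n - 1))

print(ackermann(2, 3))  # 9  = 2*3 + 3
print(ackermann(3, 3))  # 61 = 2^6 - 3
```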
Three lines. That's it. But let's see what happens at different values of m – this is the parameter mathematicians call the depth of recursion, and it's what determines how deep we venture into our Matryoshka doll.
At m = 0, the function simply adds one: A(0, n) = n + 1. Nothing impressive – a step along the number line.
At m = 1, it adds two: A(1, n) = n + 2. Still modest.
At m = 2, it doubles and adds three: A(2, n) = 2n + 3. Getting a little more interesting.
At m = 3, it raises two to the power of (n + 3) and subtracts three: A(3, n) = 2^(n+3) − 3. For n = 4, this is already 125; for n = 10, it's around eight thousand.
At m = 4, this is a “tower” of twos of height (n + 3), minus three. For n = 0, it's 2^2^2 − 3 = 13, and for n = 1, it's 2^2^2^2 − 3 = 65533. For n = 2, the result is 2^65536 − 3, a number with 19,729 digits; and for n = 3, there wouldn't be enough atoms in the entire visible universe to write the result down.
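Each of these closed forms can be checked mechanically against the three rules, reusing the ackermann sketch above:

```python
# Verify the closed forms for m = 0..3 against the recursive definition.
for n in range(8):
    assert ackermann(0, n) == n + 1
    assert ackermann(1, n) == n + 2
    assert ackermann(2, n) == 2 * n + 3
    assert ackermann(3, n) == 2 ** (n + 3) - 3
assert ackermann(4, 0) == 2 ** 2 ** 2 - 3  # 13: a tower of twos of height 3
print("closed forms confirmed")
```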
This is what hides behind those three lines. The Ackermann function is a staircase where each next step is not just higher than the last, but higher by a magnitude that the previous step cannot even conceive of.
How to Tame Infinity: The Modulo Operator
For practical tasks – especially in cryptography and computer science – numbers that don't fit in the universe are useless. We need numbers that can fit into a memory cell. And this is where an elegant mathematical trick comes onto the stage: taking the remainder of a division.
Imagine the face of a clock. It has 12 divisions. If you move 15 hours past noon, the hand will point to 3 – because 15 has “wrapped around” the clock face, and the remainder of 15 divided by 12 is 3. Mathematicians write this as 15 mod 12 = 3. This 'clock face' is called a finite ring – a set where numbers move in a circle.
In the study in question, mathematicians proposed exactly this: to apply the Ackermann function not to arbitrarily large numbers, but to numbers on such a 'clock face' – and to look at the remainder. They chose powers of two as the size of the clock face: 2, 4, 8, 16, 32, 64, and so on – because computers, with their binary nature, work especially efficiently with such numbers.
Let's write this a bit more formally. The modular Ackermann function is denoted A_k(m, n) = A(m, n) mod 2^k, where 2^k is the number of divisions on the clock face. With k = 3, our clock face has 8 divisions (0, 1, 2, 3, 4, 5, 6, 7); with k = 8, it has 256 divisions; with k = 64, it has an enormous but still finite number of divisions.
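For small depths, the closed forms above let us compute A_k(m, n) without ever materializing the astronomical intermediate values: Python's three-argument pow performs exponentiation directly on the clock face. A minimal sketch, covering only the depths m ≤ 3 for which closed forms are known:

```python
def ackermann_mod(m: int, n: int, k: int) -> int:
    """A_k(m, n) = A(m, n) mod 2^k, via the closed forms for m <= 3."""
    mod = 1 << k  # the clock face has 2^k divisions
    if m == 0:
        return (n + 1) % mod
    if m == 1:
        return (n + 2) % mod
    if m == 2:
        return (2 * n + 3) % mod
    if m == 3:
        # three-argument pow reduces modulo 2^k at every step,
        # so the huge power 2^(n+3) is never actually built
        return (pow(2, n + 3, mod) - 3) % mod
    raise NotImplementedError("no closed form is used here for m >= 4")

print(ackermann_mod(3, 100, 8))  # 253 = 2^8 - 3: a first glimpse of the 'collapse'
```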
Now for the most interesting part: at different values of depth m, the function behaves completely differently on this 'clock face' – as if different musicians were playing the same instrument.
Portraits of Depth: From Simple to Unpredictable
At m = 0, the function simply adds one, moving around the circle. Imagine a hand that takes one step at a time on a clock face: 0 → 1 → 2 → 3 → ... → 7 → 0 → 1 → ... This is one large cycle covering all the numbers. Beautiful and predictable – like clockwork.
At m = 1, the function adds two: 0 → 2 → 4 → 6 → 0, and in parallel, 1 → 3 → 5 → 7 → 1. Two separate cycles – the even and odd numbers live in their own worlds, never intersecting. Like two gears spinning side by side without touching.
At m = 2, the situation becomes more interesting. The function A(2, n) = 2n + 3 always produces an odd result – regardless of whether n is even or odd (twice any number is even, plus three is odd). This means the function's range has shrunk by half: it only draws on half of the clock face. On our 8-division clock face: 1 → 5, 3 → 1, 5 → 5, 7 → 1. Some numbers are completely unreachable – as if an artist deliberately left half the canvas blank.
At m = 3, something unexpected happens: A(3, n) = 2^(n+3) − 3. As soon as n + 3 reaches k, the power 2^(n+3) mod 2^k becomes zero, and the function “collapses” into the constant 2^k − 3. Almost all input values lead to a single point – like rivers converging into a single sea. On one hand, this is mathematically beautiful. On the other, for practical purposes, this uniformity is undesirable: a good hash function should produce different outputs for different inputs.
At m ≥ 4, everything changes. The function grows so rapidly that even for very small input values, its result “wraps around” the clock face many times. Its behavior becomes outwardly chaotic – unpredictable to an outside observer, though internally it is completely deterministic. It is this zone that is of the greatest interest.
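The four portraits for m ≤ 3 are easy to reproduce with the ackermann_mod sketch above; depth m ≥ 4 is absent from the demo precisely because the closed-form shortcut stops there:

```python
k = 3  # the 8-division clock face: values 0..7
for m in range(4):
    image = [ackermann_mod(m, n, k) for n in range(1 << k)]
    print(f"m={m}: {image}")
# m=0: [1, 2, 3, 4, 5, 6, 7, 0]  -> one lap around the face
# m=1: [2, 3, 4, 5, 6, 7, 0, 1]  -> steps of two: separate even/odd worlds
# m=2: [3, 5, 7, 1, 3, 5, 7, 1]  -> only odd values are ever hit
# m=3: [5, 5, 5, 5, 5, 5, 5, 5]  -> the collapse to 2^3 - 3 = 5
```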
Hash Functions: Fingerprints for Data
Here, we need to make a small digression to explain what a hash function is and why it's needed.
Imagine you have a book – say, a thousand-page novel. You need to verify that it hasn't been altered during transmission over the internet. Rereading the entire book each time is unthinkable. But if you could take a 'fingerprint' of the book – a short number that uniquely corresponds to that specific text – the task becomes vastly simpler. Change one letter? The fingerprint will change completely unpredictably. Want to forge the text so the fingerprint remains the same? That should be practically impossible.
This 'fingerprint' is a hash. The function that calculates it is called a hash function. A good hash function has three key properties:
- Avalanche effect: The slightest change in the input data radically changes the output.
- Collision resistance: It is computationally infeasible to find two different inputs with the same output.
- Uniform distribution: All possible output values appear with roughly the same frequency.
The authors of the study propose building a hash function based on the modular Ackermann function. The idea is to take an input message, convert it into a number using a simple preliminary step (let's call the result n), and then apply A_k(m, n) to it. The parameter m here acts as a 'lever of complexity': the larger it is, the more complex and unpredictable the function becomes.
Formally, this is written as:
H_m(M) = A_k(m, hash_base(M)) mod 2^k
where M is the original message, hash_base(M) is the preliminary numerical 'imprint' of the message, and k is the size of the output value in bits.
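The preliminary step hash_base(M) is not spelled out here, so the sketch below substitutes a deliberately naive placeholder (folding the message bytes into a single residue) purely to make the pipeline runnable end to end. It reuses ackermann_mod and is therefore limited to m ≤ 3; a real construction would reach for the deeper, harder-to-analyze values of m:

```python
def hash_base(message: bytes, k: int) -> int:
    """Placeholder preliminary step (an assumption, NOT the authors'
    construction): fold the message into one number below 2^k."""
    return int.from_bytes(message, "big") % (1 << k)

def ackermann_hash(message: bytes, m: int, k: int) -> int:
    """H_m(M) = A_k(m, hash_base(M)), as in the formula above."""
    return ackermann_mod(m, hash_base(message, k), k)

print(ackermann_hash(b"hello world", m=2, k=8))   # 203
print(ackermann_hash(b"hello world!", m=2, k=8))  # 69: a different imprint
```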
Sensitivity to Depth: Why One Step Changes Everything
One of the most fascinating properties of this construction is how it reacts to changes in the parameter m. Recall how strikingly different the portraits of depth we painted above are. The functions at m = 2 and m = 3 behave in completely different ways – though the parameter differs by only one.
This property is called sensitivity to depth. The researchers tested it empirically: they took values of A_k(m, n) and A_k(m + 1, n) for the same n and different m and measured how much they correlate with each other. It turned out that starting from m = 3 and higher, the correlation drops sharply – functions at adjacent depth values produce results that are statistically almost unrelated to each other.
It's like how in music, the same motif played in different keys sounds recognizably similar – but the same motif, processed with different instrumental techniques, can sound completely unrecognizable. The depth m is not just a number; it's a qualitative leap in the nature of the function.
Cycles: Orbits on the Clock Face
When a function acts as a self-map on a finite set – taking a number from the clock face and returning another number on the same clock face – it inevitably generates cycles. Start with any number, apply the function again and again, and sooner or later the sequence will fall into a loop – though, since the map need not be reversible, the loop will not necessarily pass through your starting point. The question is how long this path will be and how complex its pattern is.
For small depths, the picture is transparent:
- At m = 0: one large cycle encompassing all 2^k numbers – like a single lap around the entire clock face.
- At m = 1: two cycles of 2^(k−1) numbers each – the even and odd numbers in their separate orbits.
- At m = 2: the function's range shrinks to the odd numbers, the cycles become more complex, but are still analyzable.
- At m = 3: the function “collapses” to a nearly constant value – most numbers lead to a single point, and cycles degenerate into short loops.
At m ≥ 4, the landscape changes dramatically. The enormous growth of the original function, when “wrapped” onto a finite clock face, generates long, unpredictable orbits. The researchers hypothesize that the average cycle length for m ≥ 4 approaches 2^(k−1) – half the size of the clock face. This would be a nearly ideal property: the function “visits” almost all points in the space before returning to the start, ensuring high unpredictability.
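For the small depths, these claims can be checked directly: start the map n → A_k(m, n) from every point, walk until it revisits explored territory, and record each fresh cycle. A sketch, again confined to m ≤ 3 – the m ≥ 4 regime is exactly what the closed forms cannot reach:

```python
def cycle_lengths(m: int, k: int) -> list[int]:
    """Lengths of the cycles of the self-map n -> A_k(m, n) on {0..2^k - 1}."""
    size = 1 << k
    seen = [False] * size
    lengths = []
    for start in range(size):
        path, x = [], start
        while not seen[x]:          # walk until we hit explored territory
            seen[x] = True
            path.append(x)
            x = ackermann_mod(m, x, k)
        if x in path:               # the walk closed into a brand-new cycle
            lengths.append(len(path) - path.index(x))
    return lengths

for m in range(4):
    print(f"m={m}: cycle lengths {cycle_lengths(m, 3)}")
# m=0: [8] (one full lap), m=1: [4, 4] (even and odd orbits);
# for k = 3, m=2 and m=3 each funnel everything into a single fixed point: [1]
```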
But for now, this is just a hypothesis – a beautiful one, supported by numerical experiments, but still awaiting a rigorous proof. In this sense, mathematics resembles architecture: building a structure that looks stable is not the same as proving its stability with calculations.
Uniformity: When the Die is Fair
Another question the researchers place at the center of their attention is the uniformity of distribution of the output values. Roughly speaking: if we were to throw numbers onto our 'clock face' via the Ackermann function thousands of times, would each division receive roughly the same number of hits?
For m = 0 and m = 1, the answer is yes: the function simply takes steps around the clock face, and each number is reached exactly as many times as the others. The die is fair.
For m = 2, the answer is no: the range has shrunk to odd numbers, and the even numbers are never reached at all. The die is unfair, and this makes such a function a weak tool for cryptography.
For m = 3 – again, no, for a different reason: most input values lead to the same output.
For m ≥ 4 – the empirical data suggests that the distribution becomes significantly more uniform. But “empirically” here means: “we checked with specific numbers and found no gross deviations.” A theoretical proof of uniformity for high m does not yet exist – this is one of the main open problems of the study.
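A tally over one full sweep of inputs makes the fairness verdicts for small depths concrete (a sketch reusing ackermann_mod; the interesting m ≥ 4 regime again lies beyond the closed forms):

```python
from collections import Counter

def output_counts(m: int, k: int) -> Counter:
    """How often each face value occurs as A_k(m, n) for n = 0..2^k - 1."""
    return Counter(ackermann_mod(m, n, k) for n in range(1 << k))

for m in range(4):
    counts = output_counts(m, 4)
    print(f"m={m}: {len(counts)} of 16 values hit, "
          f"multiplicities {sorted(counts.values(), reverse=True)}")
# m=0, m=1: all 16 values, once each -- a fair die
# m=2: only the 8 odd values, twice each -- half the faces never come up
# m=3: two values with counts [15, 1] -- the collapse in action
```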
Open Questions: A Map of Blank Spots
Honesty is one of the best traits of good mathematics. The authors of the study not only propose a construction but also openly state what they have not yet proven. Here is the map of blank spots they leave for future researchers:
- Uniformity for high m: Is A_k(m, n) for m ≥ 4 truly uniformly distributed across the entire space of output values?
- Cycle length: Does the average cycle length approach 2^(k−1), and if so, can this be proven rigorously?
- Collision resistance: Can we guarantee that for sufficiently large m and k, it is impossible to find two different inputs with the same hash?
- Pre-image resistance: Even knowing the hash, is it possible to recover the original message? The researchers assume not – but this assumption needs proof.
- Optimal parameters: How should k and m be chosen to ensure the desired level of security at a reasonable computational cost?
- Modifications of the function: Can the Ackermann construction be modified or extended to improve specific properties without losing its fundamental unpredictability?
Each of these questions is a research program in itself. Some of them will likely prove more difficult than they appear. The Ackermann function, by its very nature, resists simple analytical tools – its growth is too rapid for classical estimates.
Why It Matters: At the Intersection of Beauty and Utility
Mathematics often travels along two parallel roads: the road of beauty and the road of utility. Sometimes they run separately for a long time, and then they unexpectedly converge. The Ackermann function was conceived in 1928 as a purely theoretical object – a demonstration that computable functions exist beyond primitive recursion. No one was thinking about practical applications.
Nearly a century later, researchers have discovered that the very property that makes the Ackermann function theoretically interesting – its hierarchical, avalanche-like escalating complexity – could become the basis for practical information security tools.
The idea is elegant: instead of building a hash function from specially selected operations (as is done in standard algorithms), take a mathematical object whose complexity grows organically – from the very nature of recursion. The depth m becomes a 'tuning knob': the larger the m, the more complex and reliable the function, and the harder it is to crack.
Of course, practical application in cryptography is still a long way off. The open questions listed above are not minor technical details but fundamental mathematical problems. Until they are solved, it is premature to speak of real cryptographic security. But it is precisely this kind of work – where theoretical mathematics probes the boundaries of practical application – that paves the way for future discoveries.
The Ackermann function has been silent for nearly a century, hidden away in textbooks on computability theory. Perhaps it is only now beginning to speak in its full voice.