«As I finished the last paragraph, a strange thought occurred to me: Isn't the very attempt to popularize these findings another example of computational intractability? Explaining why minimal rationality is as complex as optimality is almost like asking readers to avoid the worst possible understanding without guaranteeing the best one. I hope I haven't created another NP-complete paradox in the realm of human perception.» – Professor Emile Dubois
A Prelude to a Paradox
Imagine a soirée at Versailles, where every guest is trying not to look like a complete idiot. Not to dazzle with wit, not to conquer hearts – just to avoid utter disgrace. It seems like a minimal standard, doesn't it? However, mathematicians have discovered something astonishing: organizing such a reception, where no one makes an utterly catastrophic social blunder, turns out to be just as complex a task as organizing a perfect ball where every guest achieves maximum happiness.
This isn't a metaphor – it's the harsh mathematical reality of game theory. The collective hallucination of rationality, as I like to call it, possesses a strange property: it is equally complex at all its levels, from genius down to simple common sense.
Nash Equilibrium: When Everyone is Too Smart
Let's start with a classic. A Nash equilibrium is a state where each player chooses the optimal strategy, given the choices of the others. No one can improve their position by acting alone. It sounds elegant, but there's a problem: finding such an equilibrium in a real-world game is about as easy as predicting the trajectory of every molecule in a glass of champagne.
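Nash's condition can at least be checked directly. Here is a minimal brute-force sketch in Python, using the prisoner's dilemma as a toy example (the encoding and names are illustrative, not from any particular source):

```python
import itertools

def pure_nash_equilibria(strategies, payoff):
    """Enumerate all pure Nash equilibria by brute force.

    strategies: one list of available strategies per player.
    payoff(i, profile): payoff to player i under the given profile.
    Note the exhaustive sweep: the profile space grows exponentially
    with the number of players.
    """
    equilibria = []
    for profile in itertools.product(*strategies):
        if all(
            payoff(i, profile) >= payoff(i, profile[:i] + (alt,) + profile[i + 1:])
            for i in range(len(strategies))
            for alt in strategies[i]
        ):
            equilibria.append(profile)
    return equilibria

# Prisoner's dilemma: strategy 0 = cooperate, 1 = defect.
PD = {  # profile -> (row player's payoff, column player's payoff)
    (0, 0): (3, 3), (0, 1): (0, 5),
    (1, 0): (5, 0), (1, 1): (1, 1),
}
print(pure_nash_equilibria([[0, 1], [0, 1]],
                           lambda i, p: PD[p][i]))
# → [(1, 1)]  mutual defection is the unique pure equilibrium
```

Checking a given profile is cheap; the trouble, as the rest of this section explains, is finding one without sweeping the whole exponential space.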
Mathematicians have long known that finding, counting, or even simply determining the existence of a Nash equilibrium is computationally intractable. In technical terms, these problems fall into the complexity classes NP-complete, NP-hard, and #P-complete – the very categories that make computers sweat and pray.
The reason is obvious: we are demanding optimality from every player. It's like requiring every courtier to choose the perfect outfit, the perfect retort, and the perfect moment to bow – all simultaneously, while considering the choices of all other courtiers.
Minimalist Rationality: What If We Just Avoid Being Idiots?
This raises a natural question: what if we lower the bar? Not to complete irrationality, of course, but to some minimal threshold of reason? What if we require players not to be optimal, but merely to avoid the worst?
This seems like the weakest possible criterion for rationality. You don't have to choose the best strategy – just don't choose the worst one. Don't wear pajamas to a royal reception. Don't bet all your money on zero when you know it's going to be black. Don't buy stock in a company that has already declared bankruptcy.
The concept of a not-worst response (NWR) formalizes this very intuition. A strategy is a not-worst response if there is at least one other strategy that would yield a strictly lower payoff, given the actions of the other players. In other words, you're not at the absolute bottom of your options.
A strategy profile where every player chooses a not-worst response is called an NWR profile. This is a collective state of minimal reasonableness – no one is behaving in a catastrophically stupid manner.
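The definition translates almost verbatim into code. A minimal sketch, again using the prisoner's dilemma as a toy example (the encoding is illustrative):

```python
def is_not_worst_response(strategies, payoff, profile, i):
    """True if player i's strategy in `profile` is a not-worst response:
    some alternative would pay strictly LESS, so i is not at the bottom."""
    deviate = lambda alt: profile[:i] + (alt,) + profile[i + 1:]
    return any(payoff(i, deviate(alt)) < payoff(i, profile)
               for alt in strategies[i])

def is_nwr_profile(strategies, payoff, profile):
    """An NWR profile: every player avoids their worst response."""
    return all(is_not_worst_response(strategies, payoff, profile, i)
               for i in range(len(strategies)))

# Prisoner's dilemma: 0 = cooperate, 1 = defect.
PD = {(0, 0): (3, 3), (0, 1): (0, 5),
      (1, 0): (5, 0), (1, 1): (1, 1)}
pay = lambda i, p: PD[p][i]
print(is_nwr_profile([[0, 1], [0, 1]], pay, (1, 1)))  # → True
print(is_nwr_profile([[0, 1], [0, 1]], pay, (0, 1)))  # → False
```

In the second call, cooperating against a defector is literally the worst available option, so the profile fails the minimal-reasonableness test.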
The Illusion of Simplicity
Logic would suggest that if we lower the requirements, the task should become simpler. If finding the optimum is hard, then finding “just not the worst” should be easier. It's like the difference between creating a masterpiece and avoiding a total disaster – the former requires talent, the latter just common sense.
However, mathematics, that merciless destroyer of intuitions, says otherwise. Research shows that problems related to NWR profiles have the same computational complexity as problems for Nash equilibria. This is one of those rare moments when reality proves to be more paradoxical than any metaphor.
The Three Curses of Complexity
Let's consider three fundamental problems:
Existence
The first task: determine whether an NWR profile even exists in a given game. Is it possible to arrange a situation where no one makes the worst possible choice? It turns out this task is NP-complete – the same complexity class as the famous Boolean satisfiability problem.
The proof is elegant in its cruelty. Mathematicians construct a game where the existence of an NWR profile is equivalent to solving the 3-SAT problem, a classic NP-complete puzzle. Imagine a game with several types of participants: variable-players choose between “true” and “false,” clause-players check if logical conditions are met, and an observer-player keeps an eye on the big picture.
The payoffs are cleverly arranged so that an NWR profile exists if and only if a solution to the original logical formula exists. It's like building a social system where the ability for everyone to avoid catastrophe depends on solving a formidable puzzle.
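The naive decider below is not that reduction, just an illustration of why brute force fails: it must sweep a profile space that grows exponentially with the number of players. It also shows that existence is a genuine question – in a perfectly flat game, no strategy is strictly worse than another, so no player can "avoid the worst," and no NWR profile exists at all.

```python
import itertools

def exists_nwr_profile(strategies, payoff):
    """Decide whether any NWR profile exists, by exhaustive search.
    The profile space is the product of all strategy sets, so this
    loop runs in exponential time -- and NP-completeness says no
    algorithm is expected to do fundamentally better in general."""
    def not_worst(p, i):
        return any(payoff(i, p[:i] + (alt,) + p[i + 1:]) < payoff(i, p)
                   for alt in strategies[i])
    return any(all(not_worst(p, i) for i in range(len(strategies)))
               for p in itertools.product(*strategies))

# Every outcome pays the same: no strategy is ever strictly worse
# than another, so nobody has a "worst" option to avoid.
flat = lambda i, p: 0
print(exists_nwr_profile([[0, 1], [0, 1]], flat))  # → False
```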
Search
The second task: if an NWR profile exists, find one. This turns out to be an NP-hard problem – at least as hard as deciding existence. If we could quickly find such profiles whenever they exist, we could quickly determine their existence, which would solve the first problem.
The irony is that even knowing a solution is out there, we can't efficiently find it. It's like knowing there's a person in a crowd who isn't making any egregious mistakes, but having no way to identify them without checking everyone individually.
Counting
The third task: how many NWR profiles exist? This problem is #P-complete – an even more difficult class. Counting is often harder than searching because it requires exploring the entire solution space.
Each unique solution to the logical formula corresponds to a unique NWR profile in the constructed game. Thus, counting NWR profiles is equivalent to counting all solutions to a complex combinatorial problem.
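Counting can likewise be sketched by exhaustive enumeration – a brute-force illustration, not an efficient method, since #P-completeness says none is expected to exist. In the prisoner's dilemma, only mutual defection survives the NWR test:

```python
import itertools

def count_nwr_profiles(strategies, payoff):
    """Count NWR profiles by enumerating every profile -- the
    brute-force analogue of a #P computation. Counting must visit
    the whole space even when a search could stop at the first hit."""
    def not_worst(p, i):
        return any(payoff(i, p[:i] + (alt,) + p[i + 1:]) < payoff(i, p)
                   for alt in strategies[i])
    return sum(1 for p in itertools.product(*strategies)
               if all(not_worst(p, i) for i in range(len(strategies))))

# Prisoner's dilemma: 0 = cooperate, 1 = defect.
PD = {(0, 0): (3, 3), (0, 1): (0, 5),
      (1, 0): (5, 0), (1, 1): (1, 1)}
print(count_nwr_profiles([[0, 1], [0, 1]], lambda i, p: PD[p][i]))  # → 1
```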
Potential Games: An Oasis That Turned Out to Be a Mirage
In game theory, there is a special class called potential games. Their unique feature is that they have a special potential function, and finding a Nash equilibrium is reduced to finding an extremum of this function. The existence of a pure Nash equilibrium is guaranteed here, and it can often be found in a reasonable amount of time.
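The mechanism is easy to sketch. In the toy coordination game below – an exact potential game, chosen for illustration – every improving move strictly raises the potential, so better-response dynamics must terminate at a pure Nash equilibrium:

```python
def best_response_dynamics(strategies, payoff, start):
    """Walk to a pure Nash equilibrium of a potential game: let players
    switch to best responses while anyone can strictly improve. Each
    improving move raises the potential, so the walk must stop."""
    profile = tuple(start)
    improved = True
    while improved:
        improved = False
        for i in range(len(strategies)):
            best = max(strategies[i],
                       key=lambda alt: payoff(i, profile[:i] + (alt,) + profile[i + 1:]))
            if payoff(i, profile[:i] + (best,) + profile[i + 1:]) > payoff(i, profile):
                profile = profile[:i] + (best,) + profile[i + 1:]
                improved = True
    return profile

# Two-player coordination game: both earn 1 when they match, 0 otherwise.
# Its potential function is simply "1 if matched, else 0".
coord = lambda i, p: 1 if p[0] == p[1] else 0
print(best_response_dynamics([[0, 1], [0, 1]], coord, (0, 1)))  # → (1, 1)
```

Convergence is guaranteed here, though in general potential games the number of improving steps can still be enormous – which is exactly where the next result bites.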
It would seem that here lies a saving island of simplicity. If Nash equilibria are easy to find in potential games, then surely NWR profiles, with their minimal requirements, should be even easier?
No. The task of finding an NWR profile even in potential games turns out to be PLS-complete. PLS (Polynomial Local Search) is a class of local search problems where a solution is found through a series of local improvements. It doesn't sound as scary as NP-completeness, but it's still a computational quagmire.
The proof uses a reduction from the problem of finding a local maximum in the MAX-CUT problem – a classic optimization puzzle about partitioning a graph. A game is constructed where each vertex of the graph corresponds to a player choosing one of two sides of the cut. An NWR profile in this game corresponds to a local maximum of the cut function.
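The local search underlying this reduction can be sketched directly – a toy version of the standard FLIP neighbourhood, not the paper's construction: keep moving single vertices across the cut while the cut weight strictly grows. Each step is cheap; PLS-completeness says the number of steps can be astronomical.

```python
def local_max_cut(edges, n):
    """FLIP local search for MAX-CUT: move one vertex across the cut
    whenever that strictly increases the cut weight; stop at a local
    maximum. edges is a list of (u, v, weight) triples."""
    side = [0] * n  # start with every vertex on side 0

    def cut_value():
        return sum(w for (u, v, w) in edges if side[u] != side[v])

    improved = True
    while improved:
        improved = False
        for v in range(n):
            before = cut_value()
            side[v] ^= 1          # tentatively flip vertex v
            if cut_value() <= before:
                side[v] ^= 1      # undo: flipping v did not help
            else:
                improved = True
    return side, cut_value()

# A 4-cycle with unit weights: the optimal cut takes all four edges.
edges = [(0, 1, 1), (1, 2, 1), (2, 3, 1), (3, 0, 1)]
print(local_max_cut(edges, 4))  # → ([1, 0, 1, 0], 4)
```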
Even in this structured, almost friendly context, minimal rationality remains computationally elusive.
The Psychology of Impossibility
What does all this tell us about the nature of rationality and collective behavior? The conclusion is deeper than just the technical details of computational complexity.
First, complexity does not disappear when requirements are lowered. We intuitively assume that less rationality should be simpler. But mathematics shows that coordinating even minimally reasonable behavior is just as difficult as coordinating optimal behavior. The problem isn't the level of rationality – it's the very need for coordination.
Second, this explains why human societies so often get stuck in obviously bad equilibria. We see situations where, it seems, any reasonable person could do better simply by avoiding patently idiotic decisions. But a collective shift to such a state proves to be just as difficult as a shift to the optimum.
Third, this creates an interesting paradox for the theory of bounded rationality. Many researchers assumed that relaxing the requirements of rationality would make models more realistic and computable. But this research shows that this simplification only helps up to a certain point. As soon as we demand some minimal rationality from all participants, computational complexity returns in full force.
The Trade-off Between Quality and Coverage
The research hints at a fundamental trade-off: one can either demand strong guarantees of rationality from a small fraction of players, or weak guarantees from all players. But one cannot demand any guarantees from everyone without computational consequences.
It's like a law of conservation of complexity in social systems. You cannot have both universality and computability. Either you allow for some people to act irrationally (and the problem simplifies), or you demand minimal reasonableness from everyone (and the problem remains hard).
Real markets, political systems, and social networks all exist in this trade-off space. We cannot guarantee that every participant acts at least minimally reasonably, precisely because verifying and enforcing such a guarantee is computationally impossible.
Historical Perspective: From Nash to NWR
In the early 1950s, John Nash proposed his famous equilibrium concept, based on the idea of mutual optimality. It was a revolution – for the first time, there was a mathematically rigorous way to predict the outcomes of strategic interactions. The 1994 Nobel Prize only confirmed the significance of this idea.
But in the decades that followed, evidence of its computational complexity began to accumulate. By the late 2000s, it was rigorously proven that finding a Nash equilibrium is PPAD-complete – another category of computational intractability.
This created an intellectual tension: on the one hand, the Nash equilibrium is a fundamental concept whose existence is guaranteed (at least in mixed strategies). On the other hand, finding it is practically impossible. It's as if we know that a treasure exists, but the map to it is written in a language that no one can read in a reasonable amount of time.
Relaxing the requirement to a not-worst response seemed like a natural way out of this impasse. But, as this research shows, that path turned out to be an illusion. The complexity is rooted deeper than previously thought.
The Philosophy of Intractability
There is something profoundly ironic in the fact that even minimal collective rationality is computationally unattainable. It's as if the universe has set a fundamental limit on our ability to coordinate reasonable behavior.
Perhaps this explains the persistence of what economists call “coordination failures.” We all see that the current situation is bad. We all know we could act better. But moving to a better state – or even just to a state where no one makes the worst mistakes – proves impossible not because of human stupidity, but because of fundamental mathematical constraints.
Financial bubbles, political deadlocks, and environmental crises – all may be manifestations of this fundamental intractability. Not because people don't see the problem, and not because a solution doesn't exist, but because coordinating even the minimally reasonable behavior of all participants hits insurmountable computational barriers.
Practical Implications: From Theory to Reality
What significance do these abstract results have for the real world? More than it might seem.
First, it calls into question the idea of “rational expectations” in economics. If agents cannot even compute how to avoid the worst decisions, how can they form rational expectations about optimal strategies?
Second, it explains why algorithmic game theory and mechanism design are so difficult. Attempts to design markets, auctions, or protocols that guarantee reasonable behavior from participants run into these fundamental barriers. Even minimal guarantees prove to be computationally expensive.
Third, it has implications for decentralized systems – from blockchains to decentralized finance (DeFi). These systems often rely on the assumption that participants will act reasonably (or at least avoid obviously stupid actions). But ensuring such guarantees for all participants may be fundamentally impossible.
Epilogue: The Beauty of Impossibility
There's a peculiar beauty in these results. They show that some things are difficult not because we aren't smart enough or technologically advanced enough. They are difficult because the very structure of logic and computation imposes fundamental limits.
Minimal collective rationality – avoiding the worst by all participants – turns out to be just as elusive as collective optimality. This is not a shortcoming of game theory, but its profound truth. Coordination is difficult not by chance, but by necessity.
We live in a world where even agreeing for everyone not to be an idiot turns out to be computationally equivalent to achieving perfection. This is both discouraging and liberating. Discouraging, because it dashes hopes for simple solutions to social problems. Liberating, because it explains why so many obviously non-optimal situations are so stable – it's not about malice or stupidity, but about mathematics.
Money, as I often say, is a collective hallucination. But now we know that even the most minimal collective rationality is also a hallucination – and a computationally intractable one at that. Perhaps that is why human history is a series not of optimal decisions, but rather a chaotic wandering between different levels of collective madness.
And in that, I must admit, there is something comforting.