Humanity has a method for trying to prevent new technologies from getting out of hand: explore the possible negative consequences, involving all parties affected, and come to some agreement on ways to mitigate them.
New research, though, suggests that the accelerating pace of change could soon render this approach ineffective.
People use laws, social norms and international agreements to reap the benefits of technology while minimizing undesirable things like environmental damage. In aiming to find such rules of behavior, we often take inspiration from what game theorists call a Nash equilibrium, named after the mathematician and economist John Nash. In game theory, a Nash equilibrium is a set of strategies that, once discovered by the players, provides a stable fixed point at which no one has an incentive to unilaterally depart from their current strategy.
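To see what that means in practice, consider a minimal sketch in code. The payoff numbers below are purely illustrative, a classic prisoner's dilemma rather than anything from the research discussed here; the program simply checks every pair of strategies for the no-incentive-to-deviate property.

```python
# A minimal sketch, not from any paper discussed here: brute-force search
# for pure-strategy Nash equilibria in a two-player game. The payoffs are
# an illustrative prisoner's dilemma (C = cooperate, D = defect).

from itertools import product

strategies = ["C", "D"]

# payoffs[(row, col)] = (row player's payoff, column player's payoff)
payoffs = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def is_nash(row, col):
    """True if neither player gains by unilaterally switching strategy."""
    r_pay, c_pay = payoffs[(row, col)]
    row_stays = all(payoffs[(alt, col)][0] <= r_pay for alt in strategies)
    col_stays = all(payoffs[(row, alt)][1] <= c_pay for alt in strategies)
    return row_stays and col_stays

print([pair for pair in product(strategies, repeat=2) if is_nash(*pair)])
# -> [('D', 'D')]: mutual defection is this game's only stable fixed point
```

Both players would do better cooperating, but each gains by defecting unilaterally, so mutual cooperation is not an equilibrium; once at mutual defection, neither has any incentive to move.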

To reach such an equilibrium, the players need to understand the consequences of their own and others’ potential actions. During the Cold War, for example, peace among nuclear powers depended on the understanding that any attack would ensure everyone’s destruction. Similarly, from local regulations to international law, negotiations can be seen as a gradual exploration of all possible moves to find a stable framework of rules acceptable to everyone, giving no one an incentive to cheat because doing so would leave them worse off.
But what if technology becomes so complex and starts evolving so rapidly that humans can’t imagine the consequences of some new action? This is the question that a pair of scientists — Dimitri Kusnezov of the National Nuclear Security Administration and Wendell Jones, recently retired from Sandia National Labs — explore in a recent paper. Their unsettling conclusion: The concept of strategic equilibrium as an organizing principle may be nearly obsolete.
Kusnezov and Jones derive insight from recent mathematical studies of games with many players and many possible choices of action. One basic finding is a sharp division of such games into two types, stable and unstable. Below a certain level of complexity, the Nash equilibrium is useful in describing the likely outcomes. Beyond that lies a chaotic zone where players never manage to find stable and reliable strategies, but cope only by perpetually shifting their behaviors in a highly irregular way. What happens is essentially random and unpredictable.
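A toy simulation conveys the flavor of that divide. The following sketch is a simplified stand-in, not the model the authors study: two players take turns best-responding to each other's latest move in games with randomly drawn payoffs, and we record how often play locks into a stable fixed point.

```python
# A toy illustration of the stable/unstable divide, under simplifying
# assumptions (this is not Kusnezov and Jones's model): two players
# alternately best-respond to each other's last move in games with
# independently drawn random payoffs.

import numpy as np

def settles(n_actions, seed):
    """True if alternating best-response play locks into a pure equilibrium."""
    rng = np.random.default_rng(seed)
    A = rng.standard_normal((n_actions, n_actions))  # row player's payoffs
    B = rng.standard_normal((n_actions, n_actions))  # column player's payoffs
    i, j, seen = 0, 0, set()
    while (i, j) not in seen:  # a repeat is guaranteed on a finite state space
        seen.add((i, j))
        i = int(np.argmax(A[:, j]))  # row's best reply to the column's move
        j = int(np.argmax(B[i, :]))  # column's best reply in turn
    # The revisited state is a fixed point (a pure Nash equilibrium) only if
    # neither player would still switch away from it.
    return i == int(np.argmax(A[:, j])) and j == int(np.argmax(B[i, :]))

for n in (2, 5, 50):
    share = sum(settles(n, seed) for seed in range(200)) / 200
    print(f"{n:3d} actions per player: play settles in {share:.0%} of games")
```

As the number of available actions grows, the share of games that settle drops sharply; play instead wanders through endless cycles, a simple echo of the chaotic zone these studies describe.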
The authors argue that emerging technologies — especially computing, software and biotechnology such as gene editing — are much more likely to fall into the unstable category. In these areas, disruptions are becoming bigger and more frequent as costs fall and sharing platforms enable open innovation. Hence, such technologies will evolve faster than regulatory frameworks — at least as traditionally conceived — can respond.
What can we do? Kusnezov and Jones don’t have an easy answer. One clear implication is that it’s probably a mistake to copy techniques used for the more slowly evolving and less widely available technologies of the past. This is often the default approach, as illustrated by proposals to regulate gene editing techniques. Such efforts are probably doomed in a world where technologies develop thanks to the parallel efforts of a global population with diverse aims and interests. Perhaps future regulation will itself have to rely on emerging technologies, as some researchers are already exploring in finance.
We may be approaching a profound moment in history, when the guiding idea of strategic equilibrium on which we’ve relied for 75 years will run up against its limits. If so, regulation will become an entirely different game.