The designs of current economic and governance systems may be inadequate to solve our many pressing problems. Ideas from complex systems science and other fields can help us re-conceptualize systems, leading to better designs.
By John Boik, February 6, 2017
In December 2016, Stephen Hawking sounded the alarm on inequality and other pressing problems that he warned could destroy civilization and the ecosystem before we humans are technically capable of escaping into space. He’s in good company. In 2015 Gerardo Ceballos et al. warned that the sixth mass extinction may have already begun. Based on a mathematical model of social indicators, Peter Turchin predicted in 2013, and reaffirmed in 2017, that we can expect many years of social instability and political violence, peaking in the 2020s.
Societies around the globe face a long list of interrelated, intractable or emerging challenges. These include climate change, habitat loss, financial instability, pollution, misuse of artificial intelligence, weapons of mass destruction, and inequality. Any one of these, and especially several in combination, could lead to massive social and/or environmental decay. Like Hawking, the Western public seems to be keenly aware of the dangers. In a 2013 survey of more than 2,000 adults in the United States, Canada, Australia, and the United Kingdom, almost a quarter of respondents rated the probability of human extinction within 100 years at 50 percent or greater.
One might think that a civilization facing the prospect of severe degradation, if not collapse or extinction, would mount a focused scientific effort to understand, from a holistic, or systems perspective, what has gone wrong, and then devise a viable path to change course. But this R&D program, which I term wellbeing centrality, is nascent, held back by lack of funding and a needlessly narrow focus.
One reason the scientific community is late to the game is that we don’t yet view major decision-making systems as technologies. We tend to take them as “givens,” so their designs are not subject to scientific investigation and engineering innovation. Thus, we analyze conditions, trends, and policies, but not underlying systems. We innovate phones and medical equipment, even financial instruments, but only adjust the dials on the underlying systems.
My use of terms is unusual. I use decision-making systems, social choice systems, and problem-solving systems interchangeably to refer to the three big systems by which societies organize behavior and solve problems. These are economic/financial/monetary; governance/political; and legal/justice. I focus on the first two here and, for brevity, refer to them simply as economic and governance systems.
Failing to pursue scientific inquiry into the design of social choice systems is a bit like failing to pursue scientific inquiry into the design of the brain. Viewed abstractly, both the brain and social choice systems make decisions. They gather, store, and recall data as necessary to make (imperfect) predictive models of what will happen next if certain actions are taken. Then they evaluate predicted outcomes on a good-bad scale in order to make decisions. Some models are based on logical deliberation, while others are associative, based on pattern matching.
To the degree that the functional or evolutionary purpose of either the brain or social choice systems can be understood by science, pathologies can be identified and useful treatments devised. Moreover, as a parallel to the field of positive psychology, designs and interventions of social choice systems could be aimed at maximizing the degree of collective wellbeing, or thriving. Thus, one could imagine fields of positive economics and positive governance. Once the scientific community and funders realize that the designs of decision-making systems fall into the realm of science and engineering, and that innovation is desperately needed and widely desired, an explosion of effort will likely follow. In the ideal, all segments of global society would participate.
Without scientific inquiry into the design and function of the brain, we would be unable to develop cures for Alzheimer's or Parkinson's disease or make much progress with positive psychology. Likewise, without scientific inquiry into the design and function of social choice systems, we may be unable to solve our pressing problems, including climate change, or to markedly increase collective wellbeing. Quite simply, the challenges we face may be too difficult for dominant social choice systems to solve. That we already face the abyss could be taken as evidence. But we can change course. Through scientific examination of social choice system designs, and by pursuit of engineering innovation, viable solutions become possible.
In my recent working paper “Optimality of Social Choice Systems: Complexity, Wisdom, and Wellbeing Centrality,” I raise two overarching questions that frame the type of scientific inquiry necessary: What design characteristics would relatively optimal social choice systems exhibit? And how could research and development of more optimal social choice systems best proceed?
I want to emphasize that the questions refer to designs of whole systems. Efforts to improve policies and hold leaders accountable are, of course, important. But what is largely missing is a parallel, mutually supportive program to investigate decision-making systems as wholes. System components include rules and regulations, social norms, information flows, technologies, and education programs. Most important of all, they include the conceptual models and world views on which a system is based.
All of these components play a role in the decision-making process already outlined: gather, store, and recall data; generate a set of possible actions; predict and evaluate outcomes; and make a decision. A given social choice system, just like a given brain, might conduct this process well, or poorly. Each step in the process, and each component of the system, is open to scientific examination and engineering innovation.
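The decision cycle described above can be made concrete with a toy sketch. The class below is purely illustrative (the names, actions, and payoffs are my assumptions, not part of any model discussed in this article): it stores observations, recalls them to build a crude associative prediction for each candidate action, evaluates predictions on a good-bad scale, and decides.

```python
# Hypothetical toy agent illustrating the cycle described above:
# gather and store data, recall it to predict outcomes of candidate
# actions, evaluate predictions, and decide. All names and values
# here are invented for illustration.

class ToyDecisionMaker:
    def __init__(self):
        self.memory = []  # stored data: (action, observed_outcome) pairs

    def gather(self, action, outcome):
        """Store an observation for later recall."""
        self.memory.append((action, outcome))

    def predict(self, action):
        """Predict an outcome by recalling past results for this action
        (a crude associative model: the average of what was seen)."""
        seen = [o for a, o in self.memory if a == action]
        return sum(seen) / len(seen) if seen else 0.0

    def decide(self, candidate_actions):
        """Evaluate each candidate on a good-bad scale and pick the best."""
        return max(candidate_actions, key=self.predict)

agent = ToyDecisionMaker()
agent.gather("invest", 1.0)
agent.gather("invest", 0.5)
agent.gather("hoard", -0.2)
print(agent.decide(["invest", "hoard"]))  # -> invest
```

A real social choice system performs each of these steps through many distributed components (markets, legislatures, media, and so on), but the abstract cycle is the same, and each step can be examined and improved separately.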
As a starting premise, consider relative optimality to be a measure of a system’s capacity to help a community solve problems and organize activities such that collective wellbeing is elevated. Collective wellbeing, in turn, can be broadly defined to include social and environmental flourishing, both local and global. Thus, relative optimality has two aspects: a proximal one, problem-solving capacity; and a distal one, elevated wellbeing, the result of successful problem-solving.
Optimality as used here has little relationship with Pareto optimality, a state in which no reallocation of resources can make one person better off without making another worse off. Here it refers to the “goodness” of a system design relative to alternative designs, measured as problem-solving capacity and the degree of collective wellbeing that problem solving produces.
While much work has been done to identify wellbeing indicators (education and income levels, disease rates, air quality, life expectancy, and so on), little work has been done on the proximal aspect of relative optimality, problem-solving capacity. I focus on it here, viewed from a 10,000-foot elevation. By taking an abstract view, it becomes apparent that we don’t have to look far for good design ideas. Successful problem-solving systems are all around us and within us, ubiquitous in nature, in the form of complex adaptive systems.
A complex system is one composed of many semi-independent parts that interact cohesively. A complex adaptive system is one that alters its behavior or structure in response to stimuli. If the system is complex enough and its anticipatory faculties are sufficiently sophisticated, we say that it learns.
A sophisticated complex adaptive system learns in order to choose a good next action. That is, it conducts decision-making via the process already defined. It gathers, stores, and recalls information as needed for computation, including evaluation of predicted outcomes. Then it makes a choice. Depending on the system, its models might be associative and/or logical.
Thus, sophisticated complex adaptive systems are forward-looking. Substantial energy is directed toward predicting which actions will be beneficial. This is particularly true for humans, a species that Seligman et al. dub Homo prospectus.
To make decisions, we evaluate possible futures via imagined scenarios. Each scenario produces a whiff of emotion, which leads to an intuitive sense of good or bad. We evaluate observations (as opposed to scenarios) in a similar way. Some of the modeling process is slow, conscious, and logical, and some is fast, unconscious, and based on pattern matching. To the degree that we perfect our prospection skills and act according to what is revealed, we earn the title Homo sapiens (wise man).
Of course, any organism or other type of complex adaptive system can make a terrible decision at any moment. And every complex system is subject to cascading failures. For example, a small injury in an animal that would otherwise be repaired can sometimes lead to a death spiral. The ultimate fate of all systems is some sort of death or transition. But while they exist, successful complex adaptive systems tend to be both resilient (able to deform in response to stress) and robust (able to handle stress without changes to structure); systems that lack these qualities do not remain successful for long.
In gathering data and in predictive modeling and evaluation, successful systems strike a dynamic balance between two strategies: they continue old patterns (physical, mental, energetic, whatever) or otherwise base decisions on past information; and they explore new patterns or otherwise base decisions on new information. I like to call this problem-solving approach the stability-agility balance. Mathematicians use a similar approach to solve very difficult optimization problems, only in that field it is called the exploitation-exploration balance.
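The exploitation-exploration balance mentioned above has a standard minimal form in the optimization and reinforcement learning literature: the epsilon-greedy strategy on a multi-armed bandit. The sketch below is illustrative only; the payoff probabilities and the value of epsilon are invented, and real social choice systems are vastly more complicated. Most of the time the decision-maker repeats the pattern that has worked best so far (exploitation/stability); a small fraction of the time it tries something new (exploration/agility).

```python
import random

# Minimal epsilon-greedy sketch of the exploitation-exploration
# (stability-agility) balance. The three "arms" and their payoff
# probabilities are invented for illustration.

random.seed(0)
true_payoffs = [0.2, 0.5, 0.8]           # unknown to the decision-maker
estimates = [0.0, 0.0, 0.0]              # learned estimates of each arm
counts = [0, 0, 0]
epsilon = 0.1                            # fraction of choices spent exploring

for _ in range(5000):
    if random.random() < epsilon:
        arm = random.randrange(3)                  # explore: try a new pattern
    else:
        arm = estimates.index(max(estimates))      # exploit: repeat the old pattern
    reward = 1.0 if random.random() < true_payoffs[arm] else 0.0
    counts[arm] += 1
    estimates[arm] += (reward - estimates[arm]) / counts[arm]  # running average

print(counts)  # the high-payoff arm ends up chosen far more often
```

Too little exploration and the system never discovers the better arm (rigidity); too much and it wastes most of its effort on random trials (instability). The balance point between the two is where problem-solving performance is best.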
The stability-agility balance of successful complex adaptive systems tends to exhibit certain hallmark features. One is (self-organized) criticality, where a system operates near the transition zone between stability and agility, between favoring old and favoring new. By operating near the transition zone, a system is continuously poised for change. Even a small bump can drive it, in whole or in part, into a new configuration.
Successful complex adaptive systems tend to operate near criticality because doing so enhances information processing (computation) and leads to resilience and robustness. In short, problem-solving capacity is maximized. For this reason, criticality should become a key term in the economics and political science literature. Moreover, when applied to human decision-making systems, criticality can be construed to involve two notions that many hold dear: wisdom and democracy.
The relationship between criticality and wisdom is due to the heightened problem-solving capacity of a system at criticality. We call a person wise if he or she is a good problem-solver, especially of difficult problems. If we think of a society as a superorganism—an organism consisting of individuals connected by information flows—then we can say that a society is wise if it is good at solving difficult problems. As in a distributed computing network, where individual computers are connected by information flows, a society’s problem-solving capacity (and wisdom) ultimately stems from the problem-solving capacity of individuals. Further, the nature and quality of the flows dictate whether the system as a whole reaches its maximum abilities.
It follows that a society is wise to the degree that individuals are wise, and to the degree that the information flows between people are of high quality (low noise, and sufficient speed, magnitude, and breadth, or bandwidth). We can say then that relatively optimal social choice systems are relatively good at communicating information between individuals in such a way that computation of the whole system is relatively high.
Several implications follow. Relatively optimal social choice systems would likely include as components social norms and education programs that value the development of wisdom in individuals. Relatively optimal systems would also likely include scientific inquiry into the nature of wisdom. In recent years, a substantial volume of research has been published on this topic. The general consensus is that wisdom is multifaceted, with most definitions involving aspects such as decision-making ability, pragmatic knowledge of life, pro-social attitudes (empathy, compassion, fairness, etc.), self-reflection, ability to cope with uncertainty, and emotional regulation and self-control. Survey instruments are being developed to measure the level of wisdom in a population.
Another implication is that relatively optimal social choice systems should be relatively good at understanding past and current conditions in order to make relatively accurate predictions about future outcomes. This raises the possibility of assessing the quality of a social choice system design by measuring its predictive accuracy. When social choice systems are viewed as predictive systems, the importance of data collection and transparency becomes apparent. So too do the potential benefits of artificial intelligence (and computer modeling, in general).
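One simple way to make "predictive accuracy" measurable is to score a system's probabilistic forecasts against what actually happened. The sketch below uses the Brier score, a standard accuracy metric; the two hypothetical "systems" and their forecasts are invented for illustration.

```python
# Sketch of scoring a system's predictive accuracy with the Brier
# score (mean squared error of probabilistic forecasts). The two
# hypothetical forecasting systems below are invented examples.

def brier_score(forecasts, outcomes):
    """Lower is better; 0 would be a perfect forecaster."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# Five events; outcome 1 means the event occurred, 0 means it did not.
outcomes = [1, 0, 1, 1, 0]
system_a = [0.9, 0.2, 0.8, 0.7, 0.1]   # confident and well-calibrated
system_b = [0.5, 0.5, 0.5, 0.5, 0.5]   # uninformative coin-flip forecasts

print(brier_score(system_a, outcomes))  # -> 0.038 (approximately)
print(brier_score(system_b, outcomes))  # -> 0.25
```

A social choice system whose forecasts (of policy outcomes, for instance) consistently score better than an uninformative baseline could be said, in this narrow sense, to be the better-designed predictive system.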
The relationship between criticality and democracy stems from the characteristics of a system near criticality. For convenience, call each person in a social choice system a node. Based on experiments with synthetic networks, it appears that near criticality, a dynamic balance is achieved between the sources of information that influence a node’s next behavior. One source is the node itself—its past behavior—and the other is the nodes that it links to—their behavior.
If each node bases its next behavior only on its past, the system would be rigid. On the other hand, if each node bases its next behavior only on the behavior of others, the system would be chaotic. Either way, computation suffers. Near the balance point, however, coherent information transfer and the computational ability of the network are maximized. This also means that near criticality the potential influence of any single node on the computation of the network is maximized. In a sense, then, criticality can be viewed as nature’s version of direct, collaborative democracy. A system near criticality uses as much information from individuals as it safely can.
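The self-versus-others balance can be sketched with a toy network simulation. In the illustrative model below (a ring network and a single mixing parameter c, both my assumptions, not from the synthetic-network experiments cited above), each node keeps its own past state with probability 1 - c and copies a random neighbor with probability c. At c = 0 the network is frozen; as c grows, information from others increasingly drives each node's next behavior.

```python
import random

# Toy sketch of the balance described above: each node's next state
# comes from its own past (probability 1 - c) or from a random ring
# neighbor (probability c). The network and parameter are illustrative.

def simulate(c, n=50, steps=200, seed=1):
    rng = random.Random(seed)
    states = [rng.choice([0, 1]) for _ in range(n)]
    changes = 0
    for _ in range(steps):
        new = []
        for i in range(n):
            if rng.random() < c:
                new.append(states[(i + rng.choice([-1, 1])) % n])  # follow others
            else:
                new.append(states[i])                              # follow own past
        changes += sum(a != b for a, b in zip(states, new))
        states = new
    return changes / (n * steps)  # fraction of node-updates that flipped

print(simulate(0.0))  # -> 0.0 (rigid: no node ever changes)
print(simulate(0.5))  # intermediate: information propagates through the network
print(simulate(1.0))  # every update driven by others: high churn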
It’s easy to imagine a society that is adept at problem solving, yet isn’t wise. Wisdom implies that the problems solved are the problems that matter. If wise societies focus on problems that matter, and individuals are the ultimate source of problem-solving capacity, then what types of problems (or challenges) matter to individuals?
Meaningful problems are the ones that directly relate to real human needs. Others are, by definition, more superficial. The Chilean economist Manfred Max-Neef identifies nine categories of human need: subsistence, protection, affection, understanding, participation, leisure, creation, identity, and freedom.
Turned around, real human needs can be viewed as gifts of nature, developed over eons of evolution, through countless ancestral species, that focus our attention on solving problems that matter. The need to receive and express affection, for example, helps drive us to cooperate in solving difficult problems. The need for leisure drives us to rest and care for our bodies, and so on.
Thus, relatively optimal social choice systems are relatively good at directly addressing real human needs. Doing so, while facilitating wise decisions, implies that relatively optimal social choice systems would produce what could be called an economics of meaning and a politics of meaning.
We have viewed economic and governance systems as networks of individuals connected by information flows, and have seen that each individual in the network is programmed by evolution to (generally) focus on problems or challenges that matter. The network, depending on its design, can help or hinder this process.
For example, a network might excessively filter, fail to collect, distort, or ignore the information generated by some individuals. It might over-amplify the information of other individuals. It might distract individuals from focusing on meaningful problems, or so confuse, exhaust, or sicken them that they can’t focus. Any number of pathologies are possible.
With this perspective, it is easy to see many signs of pathology in our current systems. For example, only 13 percent of workers worldwide are engaged in their jobs. This means, likely, that the large majority are not focused on solving problems that matter to them (apart from the need to put food on the table). Indeed, the main problem that workers are asked to tackle in a capitalist system isn’t on Max-Neef’s list: How can I increase the wealth of the company and its investors? Too often, success means exploiting human needs (witness car commercials that promise feelings of love, for example, or beer commercials that promise romantic relationships or respect). It can also mean exploiting the environment.
Contrast this with a system of economic direct democracy, where money is viewed as a bona fide voting tool and all individuals are unconditionally guaranteed a high and equal income (for many, through employment). Such a system might better allow individuals to collectively fund meaningful jobs and services (and achieve full employment even if robotics expands), and so to focus on solving problems that matter. I have published a simulation model illustrating how such a system might work.
Another pathology is severe inequality of income and wealth. In practice, money already functions as a voting tool; the more a person has, the more he or she can influence the decisions of society. This is far from the ideal of criticality, where the potential of every individual to influence the computation of a social choice system is maximized.
Yet another example of pathology can be seen in representative democracy, where a great deal of the information generated by individuals is filtered out. Once every few years, a citizen can vote either to keep an official in office or to replace him or her with someone else. This sort of infrequent yes/no evaluation can convey only a tiny amount of information relative to what is possible. Contrast this with a system of collaborative direct democracy, where individuals can share their rich assessments of a problem and offer potential solutions.
So far, a distinction has been made between economic and governance systems. But a society uses all of its social choice systems to organize behavior and solve problems. Relatively optimal complete social choice systems, then, are relatively good at using all systems in an integrated fashion to focus on solving problems that matter.
I have argued that the designs of existing social choice systems may be inadequate to solve pressing problems, and have suggested that ideas from complexity science, cognitive sciences, evolutionary biology, and other fields could help us re-conceptualize social choice systems, leading to new designs. Already a new academic program of complexity economics is taking shape, but so far it has largely focused on examining existing systems in order to improve economic analysis and policy advice. The proposal here is to take the next logical step, to the design of new, whole systems.
It is of course unreasonable to expect that new designs, no matter how promising, will undergo abrupt, large-scale implementation. That path of transition would be too risky, too expensive, and too divisive. A viable, even attractive path exists that could take us through the transition from current systems to better ones. I call it engage global, test local, spread viral.
Engage global means to involve the global academic community, and science and technology sector, in partnership with other segments of society, in a focused R&D effort aimed at the design and testing of new systems and benchmarking of results. This is the multidisciplinary wellbeing centrality program mentioned at the beginning of this article. Through public surveys and discussions, instrument development, evaluation of design goals, computer simulations, identification of data needs, and other efforts, it lays the scientific groundwork for the next two steps.
Test local means to conduct scientific field trials of new systems at the local (e.g., community or city) level. Volunteers (individuals, businesses, nonprofits, etc.) who wish to be part of a trial would organize as a civic club. In this way, testing can be done by small teams, at low cost and risk, without need for legislative action, and in parallel with existing systems.
Spread viral means that systems demonstrating clear benefits (like eliminating poverty, improving public health, and generating higher-paying and more meaningful jobs) would likely spread horizontally, even virally, to new locations. Over time this would create a global network of systems that cooperate in trade, education, the implementation of new trials and systems, and in other ways. Eventually, an empowered network of communities and cities would influence other segments of society. Moreover, the problem-solving capacity of such a network would itself be subject to scientific inquiry and engineering innovation.
The wellbeing centrality program, with its engage global, test local, spread viral strategy, is in keeping with Buckminster Fuller’s admonition that “you never change things by fighting the existing reality.” To change something, he said, “Build a new model that makes the existing model obsolete.”
John Boik, PhD
Author, Economic Direct Democracy: A Framework to End Poverty and Maximize Well-Being (2014) and founder, Principled Societies Project (http://www.PrincipledSocietiesProject.org).
Please share and republish.
This work is licensed under a Creative Commons Attribution-NoDerivatives 4.0 International License.