Better ways of thinking
The previous two chapters identified the need to better understand and work with the complex ecological systems of Nature, and the dynamic nature of societies and economies. Science is making some progress with the first, but our dominant economic paradigm offers no help with the second.
The reductionist modes of thinking which have been prevalent till now must be replaced by thinking based on systems. And the basis of our decision-making needs to move from the arrogance of extractivism to the humility of conservation.
Systems thinking (to combat reductionist thinking)
In the mid-1990s, as a senior manager in public sector organisations, I came across Peter Senge and his management text “The Fifth Discipline”. The first, and foundation discipline, was “systems thinking”.
Systems thinking is pretty much the opposite of linear, analytic, reductionist thinking. It aims to understand things as related and interactive parts of a dynamic whole. Steve Keen’s use of Minsky models for analysing economic system behaviour is one example of applied systems thinking.
The easiest way to understand systems thinking is to start by considering living systems, such as the pond or forest or field and their wildlife nearest you. They represent a complex system of life cycles, predation, and interaction which, all else being equal, stays more or less stable over time, but is unpredictable in both how things happen and how things change.
Systems thinking is the art of making sense out of such a system.
In fact, systems thinking is grounded in ecological science, the study of living systems (including their non-living elements). Ecological thinking is now being applied across all living systems (not just pond-life, as we tend to think of it). And all of this is also related to complexity and chaos theory – the study of dynamic physical systems.
Key characteristics of systems thinking include how elements of the system relate to and interact with each other, the importance of feedback, delays between events in complex systems, and how changes occur. Once you have learned basic systems thinking, you can appreciate the world in new ways.
It can be applied to any complex living system – for example, organisations and societies. Senge’s work was targeted at organisations, and I was lucky enough in the 2000s to be able to bring some of his colleagues from the Society for Organisational Learning out to New Zealand to conduct workshops and seminars with senior public sector managers. I know from the feedback we received then that their thinking horizons were widened substantially as a result of their introduction to these new ways of thinking.
In a nutshell, systems thinking gives us far better tools for understanding our complex world than linear or reductionist thinking. Linear thinking can’t handle more than a couple of things at a time without getting all tangled up. And reductionist thinking believes that we can understand the whole as the sum of its parts, which is demonstrably false in our complex, dynamic, living world.
One of the best things systems thinking can do is help us lift ourselves out of immediate events into longer term thinking. Short-termism is one of our major current social diseases. The drive for immediate information, immediate profit, immediate distraction has strengthened over the last few decades. And it does distract us from stepping back and seeing the whole.
Systems thinking encourages us to examine the feedback and feedforward loops in a system over time, to think not only about immediate effects but also about cumulative and compounding effects, in relation to the long-term health of the system.
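For readers who like to see the mechanics, here is a minimal sketch of the kind of behaviour being described: a single stock governed by two coupled loops, a reinforcing growth loop (growth proportional to the stock) and a balancing limit loop (growth slowing as the stock approaches a capacity). The parameter values are illustrative assumptions, not drawn from the text.

```python
def simulate(stock=1.0, growth=0.1, capacity=100.0, steps=100):
    """Logistic growth: a reinforcing loop (growth proportional to stock)
    coupled with a balancing loop (growth slows near capacity).

    Returns the full history so the compounding effect is visible:
    small early increments snowball, then the balancing loop takes over.
    """
    history = [stock]
    for _ in range(steps):
        # Reinforcing term: growth * stock; balancing term: (1 - stock/capacity)
        stock += growth * stock * (1 - stock / capacity)
        history.append(stock)
    return history


history = simulate()
# Early steps change little; the middle steps compound rapidly;
# the final steps flatten out as the balancing loop dominates.
```

Tracing the history step by step, rather than looking only at the final value, is exactly the shift from event-level to system-level thinking the paragraph above describes.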
Systems thinking is also helpful with getting over simple binary or polar thinking. It takes us out of an “either/or” world into a much richer one. I recently heard Richard Randerson, an eminent New Zealand theologian, talking about “theological bipolarity”, the idea that you have to choose between belief in an all-powerful supreme being and humanism. Randerson pointed out that there were many forms of spirituality, and they didn’t have to invoke a specific sort of supreme being. Systems thinking automatically allows for richer sets of possibilities than linear or binary thinking.
This doesn’t mean that we have to swamp ourselves with complexity, and endless attempts to understand it, at the expense of life and action. We will always have to come back to simpler sets of rules to allow ourselves to function effectively.
Computer modelling (notably Craig Reynolds’s 1987 “boids” simulation) showed that birds flock by following three simple rules of flight (“separation, alignment, and cohesion”), which produce those amazing spectacles we see of rapid, complex coordinated flight from large or small flocks of birds[i]. But it took systems thinking to establish this – movement based on the three simple rules creates new and unexpected – or “emergent” – patterns of behaviour. Linear thinking could never have accomplished this.
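A minimal sketch of such rule-based flocking follows, for readers who want to see how three local rules can be expressed in a handful of lines. The neighbourhood distances and rule weights are illustrative assumptions; real simulations tune them carefully.

```python
import math

def boids_step(positions, velocities, sep_dist=1.0, neighbor_dist=5.0,
               sep_w=0.05, align_w=0.05, coh_w=0.01):
    """One update of a 2-D flock, applying the three classic rules:
    separation (steer away from very close neighbours),
    alignment (steer toward neighbours' average heading),
    cohesion (steer toward neighbours' centre of mass)."""
    new_vels = []
    for i, (p, v) in enumerate(zip(positions, velocities)):
        sep = [0.0, 0.0]
        avg_v = [0.0, 0.0]
        centre = [0.0, 0.0]
        n = 0
        for j, (q, w) in enumerate(zip(positions, velocities)):
            if i == j:
                continue
            d = math.dist(p, q)
            if d < neighbor_dist:
                n += 1
                avg_v[0] += w[0]; avg_v[1] += w[1]
                centre[0] += q[0]; centre[1] += q[1]
                if 0 < d < sep_dist:  # too close: push apart
                    sep[0] += (p[0] - q[0]) / d
                    sep[1] += (p[1] - q[1]) / d
        vx, vy = v
        if n:
            vx += sep_w * sep[0] + align_w * (avg_v[0] / n - vx) \
                  + coh_w * (centre[0] / n - p[0])
            vy += sep_w * sep[1] + align_w * (avg_v[1] / n - vy) \
                  + coh_w * (centre[1] / n - p[1])
        new_vels.append((vx, vy))
    new_pos = [(p[0] + v[0], p[1] + v[1])
               for p, v in zip(positions, new_vels)]
    return new_pos, new_vels
```

No bird “knows” the shape of the flock; the coordinated whole emerges from each individual applying these local rules, which is precisely the point about emergence made above.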
Fish and insects behave in similar ways. And apparently we humans may also exhibit similar patterns of behaviour, given the chance. There is no doubt that we in the affluent world are suffering from the loss of simplicity of behavioural rules which the churches used to provide. We are in a transitional state, seeking new and safe rules by which to govern ourselves.
Systems thinking may be part of the way that we can come back to a more stable place to stand. Not by providing us with a new ethical base, but by helping us understand how morality and ethics play out in the real and complex world we inhabit.
Just as ecological science helps us understand how natural systems work, and Steve Keen’s Minsky models help us understand how economies ACTUALLY work, so might other applications of systems thinking help us understand each other and our world better.
Systems thinking, along with its specific applications in the natural world, needs to be a basic part of our education, from primary schools on, to give us more effective tools for understanding our world.
The precautionary principle and imperative (to combat arrogance and extractivism)
This section is about the eternal challenge of keeping a sensible guiding hand on our ingenuity with material technologies. Chapter 13 on the Western “enlightenment” described the reductionism and extractivism which focussed us on material technologies, and chapter 24 on “misguided ingenuity” described the arrogance and hubris that have misguided us in our development and use of them.
To recap, we humans are really clever at material technologies. But we’re not always wise in what we develop and how we develop them. We do things because we can, not because we should. So, at the macro-level, we build bigger and more destructive weapons; giant dams which are miracles of scale and design, but which obliterate precious environments and change local climates and ecologies; and ever-taller skyscrapers which look like pretty good targets for those improved weapons. Al Qaeda’s choice of the World Trade Centre as a primary target was no accident.
And, at the micro-level, our ability to miniaturise has led to the iPhone, nanobots and genetic engineering. Most evidence suggests that our new communications technologies are pretty good. They have allowed many poor societies to leapfrog the expensive landline infrastructures previously needed for information-gathering and communications.
But the jury is still out on the risks and net benefits of many of these technologies, in either the short or the long term.
When we create new solutions for the long-term, we are usually guessing about whether they will last long enough, and not have unforeseen side-effects. And, in a profiteering environment, lowest cost and riskier solutions are likely to be tried.
My biggest worry about new technologies, such as genetic engineering, is that we are seeking to engineer “single best” solutions, instead of understanding that life is built on diversity and adaptation. Monsanto, among its many crimes against humanity, is a huge and serial offender in this regard. Industrial agriculture is built on monocropping, which not only leads to soil degradation, but also increases the risk of devastation by unforeseen pests or diseases. Agricultural resilience is built on multicropping and gradual adjustment, not on vast fields of single crops.
Of course, Monsanto’s parallel crime is its attempts to monopolise the ownership and distribution of seeds, but this is just good old corporate pursuit of profit in action, not the misguided use of our technological ingenuity.
Anyway, the general point is that we know much less than we think and act as if we do. We need to take a more precautionary approach to our development and use of new technologies. We need to think harder about what we should do, rather than just doing what we can do.
And, in fairness, in the social and medical sciences there are already many safeguards available for the testing and use of new processes or drugs, such as ethical protocols and processes. Well, in theory at least. Corporations have a long history of avoiding these safeguards, of controlling the underlying science, of concealing or rigging test results, and of selling their products in less protected markets even though they have been found to be unsafe. But again, this is a problem of corporate pursuit of profit rather than misguided use of technological ingenuity.
We need to adopt the precautionary principle and imperative as fundamental considerations in the development and application of new technologies, whether material or immaterial. By “material or immaterial”, I mean not just the material technologies of the hard sciences, but also the immaterial technologies of governance and economic management – the institutions and processes we design and operate to help organise and maintain our societies.
The precautionary principle and imperative
The “precautionary principle” is an ethical principle which says that if we can’t scientifically establish what damage might be caused by some action, then the burden of proof that there will be little or no damage should fall on those who wish to take the action.
It shifts the burden of proof to the proposer of the action. And if the outcomes of the action are uncertain or dangerous, then the action should not be taken.
However, there are two problems with this as it stands.
The first is that the “proposers of the action” have a long history of telling the stories they want to tell, of denying or ignoring counter-evidence, and of suborning the very scientists who do the evaluative work, mainly by being their funders or employers.
This can only be addressed by relying on independent advice – if it can be obtained, in a world where corporations have increasing power over science. And if it cannot be obtained, then the precautionary principle should prevail, and the action should not be taken. So this problem is solvable.
The second problem is that all science is “uncertain”. We can never know anything with absolute certainty, so the risk with the precautionary principle is that it might unnecessarily slow down or freeze innovation. Life builds and thrives on testing the boundaries – we just need to be more cautious, by applying the precautionary principle. But we can’t freeze – we need to apply a “precautionary imperative” as well.
My definition of a “precautionary imperative” is that we must not be stopped from taking action by the existence of residual uncertainty, but that this must be tempered by the reversibility of the action. In other words, if we’re pretty sure this is the right way to go, let’s start on the way, but keep contingencies available in case we got it wrong.
This is NOT the “proactionary imperative” advocated by so-called “trans-humanists”, who appear to be gung ho advocates of material technologies as the solution to the world’s problems[ii].
Nor is it an excuse for taking least cost solutions based on relative risk assessments of long-term outcomes. Helen Mongillo[iii] quoted to me the case of using lime in coal mine waste to neutralise acids and reduce potential leaching of metals into waterways. It is much cheaper than using a liner and cover, but no one really knows how long the lime will work – and its failure is irreversible. In such a case, the higher cost solution, which is much more certain, and also more able to be maintained, should be chosen.
Putting the precautionary principle and imperative another way, we should always look before we leap. If we can’t see where we’re going, we shouldn’t leap unless we have no other choices (and we always have other choices). If we’re pretty sure we can get there, and we really need to do it, let’s go for it. If we think we can get there, and we’re pretty sure we’ll die or be severely wounded if we don’t try, let’s go for it, and hope the adrenalin helps. If we’re pretty sure we can’t get there, let’s turn and face the problem we’re fleeing, or start climbing down the cliff instead of trying to leap the chasm, or find some other solution to the problem.
The “transhumanists” apparently want to leap before they look, and will even leap if they can’t see where they are going. This is fine in certain circumstances, where the costs of failure are minor, or the action can be reversed. But it’s not a prescription for development of technologies which could cause severe and/or irreversible damage.
The geo-engineers, who want to design grand technological solutions for our climate crisis by conducting experiments at planetary level, also want to leap before they look and will, if given the chance, claim that “there is no alternative”[iv].
There are alternatives. We have all the material technologies we need to stop and eventually reverse global warming and other forms of degradation of our environment NOW.
We are just so dominated by the current narratives of social Darwinism and the pursuit of profit that we lack the will to change. Yes, it would cause economic damage. Yes, the fossil fuel industry would collapse. But this is minor damage compared with the coming environmental and social holocaust.
The merchants of doubt who have been bolstering up the tobacco industry and the climate change deniers have been applying their own clever, but twisted, version of the precautionary principle. They throw arguments around like confetti, often self-contradictory, in the hope that some of it will stick. One of their arguments is that the science is uncertain, so we shouldn’t act. They tell lies and raise already resolved issues to support this apparent “uncertainty” [v]. The science of climate change is as certain as it can be, and even the lowest common denominator reports of the IPCC are getting increasingly strident in telling us this. We need to apply the precautionary imperative and act, not be delayed from action by any uncertainty or, in this case, any appearance of uncertainty.
The precautionary principle and imperative can also be applied to our governance and economic arrangements, in a slightly different way.
The hollowing out of democracy, the psychopathic pursuit of profit by corporations, the casino which the world’s financial system has turned into, the underlying false economic dogma, and the rapidly rising inequalities consequent on all this, are all pretty obvious once you manage to get a reasonable perspective on them.
The precautionary principle says that we who wish for action should shoulder the burden of proof. It’s hard to see how much more proof we need than the reams of books and research studies showing that this is the case: that reducing our footprint on the planet, building stronger democracies, reshaping the profit motive and financial system, abandoning failed economic theory, and building more egalitarian societies are far better for all of us than maintaining the current systems.
The precautionary imperative says that we should act even if we don’t have slam-dunk proof, as long as our actions are reversible. And the good news here is that nothing that progressives are proposing is irreversible. We are proposing social engineering, not geo-engineering.
Social structures are always changeable – nothing is better evidence of this than the changes that have occurred in affluent societies in the last 30 years as a result of the neoliberal revolution. These changes can and must be reversed.
[iii] An environmental scientist, and a very helpful reader of the first draft of this book.
[iv] See for example “This Changes Everything”, Naomi Klein, 2014, P255ff (loc 4547ff)
[v] See for example “Merchants of Doubt”, Naomi Oreskes and Eric M. Conway, 2010