Wednesday, July 29, 2015

The Cimmerian Hypothesis, Part Three: The End of the Dream

Let's take a moment to recap the argument of the last two posts here on The Archdruid Report before we follow it through to its conclusion. There are any number of ways to sort out the diversity of human social forms, but one significant division lies between those societies that don’t concentrate population, wealth, and power in urban centers, and those that do. One important difference between the societies that fall into these two categories is that urbanized societies—we may as well call these by the time-honored term “civilizations”—reliably crash and burn after a lifespan of roughly a thousand years, while societies that lack cities have no such fixed lifespans and can last for much longer without going through the cycle of rise and fall, punctuated by dark ages, that defines the history of civilizations.

It’s probably necessary to pause here and clear up what seems to be a common misunderstanding. To say that societies in the first category can last for much more than a thousand years doesn’t mean that all of them do. I mention this because I fielded a flurry of comments from people who pointed to a few examples of societies without cities that collapsed in less than a millennium, and insisted that this somehow disproved my hypothesis. Not so; if everyone who takes a certain diet pill, let’s say, suffers from heart damage, the fact that some people who don’t take the diet pill suffer heart damage from other causes doesn’t absolve the diet pill of responsibility. In the same way, the fact that civilizations such as Egypt and China have managed to pull themselves together after a dark age and rebuild a new version of their former civilization doesn’t erase the fact of the collapse and the dark age that followed it.

The question is why civilizations crash and burn so reliably. There are plenty of good reasons why this might happen, and it’s entirely possible that several of them are responsible; the collapse of civilization could be an overdetermined process. Like the victim in a cheap mystery novel who was shot, stabbed, strangled, clubbed over the head, and then chucked out a twentieth-floor window, civilizations that fall may have more causes of death than were strictly necessary. The ecological costs of building and maintaining cities, for example, place much greater strains on the local environment than the less costly, less concentrated settlement patterns of nonurban societies, and the rising maintenance costs of capital—the driving force behind the theory of catabolic collapse I’ve proposed elsewhere—can spin out of control much more easily in an urban setting than elsewhere. Other examples of the vulnerability of urbanized societies can easily be worked out by those who wish to do so.

That said, there’s at least one other factor at work. As noted in last week’s post, civilizations by and large don’t have to be dragged down the slope of decline and fall; instead, they take that route with yells of triumph, convinced that the road to ruin will infallibly lead them to heaven on earth, and attempts to turn them aside from that trajectory typically get reactions ranging from blank incomprehension to furious anger. It’s not just the elites who fall into this sort of self-destructive groupthink, either: it’s not hard to find, in a falling civilization, people who claim to disagree with the ideology that’s driving the collapse, but people who take their disagreement to the point of making choices that differ from those of their more orthodox neighbors are much scarcer. They do exist; every civilization breeds them, but they make up a very small fraction of the population, and they generally exist on the fringes of society, despised and condemned by all those right-thinking people whose words and actions help drive the accelerating process of decline and fall.

The next question, then, is how civilizations get caught in that sort of groupthink. My proposal, as sketched out last week, is that the culprit is a rarely noticed side effect of urban life. People who live in a mostly natural environment—and by this I mean merely an environment in which most things are put there by nonhuman processes rather than by human action—have to deal constantly with the inevitable mismatches between the mental models of the universe they carry in their heads and the universe that actually surrounds them. People who live in a mostly artificial environment—an environment in which most things were made and arranged by human action—don’t have to deal with this anything like so often, because an artificial environment embodies the ideas of the people who constructed and arranged it. A natural environment therefore applies negative or, as it’s also called, corrective feedback to human models of the way things are, while an artificial environment applies positive feedback—the sort of thing people usually mean when they talk about a feedback loop.
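The contrast between those two kinds of feedback can be caricatured in a few lines of code. What follows is a toy numeric sketch under assumptions of my own devising—a single “error” number standing in for the mismatch between model and world, and fixed gains for each environment—not anything drawn from the argument itself. The point is only the qualitative behavior: corrective feedback shrinks the mismatch each round, while reinforcing feedback compounds it.

```python
# Toy illustration (invented numbers): the gap between a mental model
# and reality under corrective vs. reinforcing feedback.

def run_feedback(gain, steps=20, error=1.0):
    """Multiply the model's error by `gain` each step and record it."""
    history = []
    for _ in range(steps):
        error *= gain
        history.append(error)
    return history

# Negative (corrective) feedback: each collision with nature shrinks
# the mismatch, so the model stays tethered to the territory.
natural = run_feedback(gain=0.5)

# Positive (reinforcing) feedback: an artificial environment echoes the
# model back at its makers, so the mismatch compounds instead.
artificial = run_feedback(gain=1.5)

print(natural[-1])     # error has nearly vanished
print(artificial[-1])  # error has grown by orders of magnitude
```

Any gain below 1 converges and any gain above 1 diverges; the specific values 0.5 and 1.5 are arbitrary, which is rather the point—the direction of the feedback, not its exact strength, decides the outcome.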

This explains, incidentally, one of the other common differences between civilizations and other kinds of human society: the pace of change. Anthropologists not so long ago used to insist that what they liked to call “primitive societies”—that is, societies that have relatively simple technologies and no cities—were stuck in some kind of changeless stasis. That was nonsense, but the thin basis in fact that was used to justify the nonsense was simply that the pace of change in low-tech, non-urban societies, when they’re left to their own devices, tends to be fairly sedate, and usually happens over a time scale of generations. Urban societies, on the other hand, change quickly, and the pace of change tends to accelerate over time: a dead giveaway that a positive feedback loop is at work.

Notice that what’s fed back to the minds of civilized people by their artificial environment isn’t simply human thinking in general. It’s whatever particular set of mental models and habits of thought happen to be most popular in their civilization. Modern industrial civilization, for example, is obsessed with simplicity; our mental models and habits of thought value straight lines, simple geometrical shapes, hard boundaries, and clear distinctions. That obsession, and the models and mental habits that unfold from it, have given us an urban environment full of straight lines, simple geometrical shapes, hard boundaries, and clear distinctions—and thus reinforce our unthinking assumption that these things are normal and natural, which by and large they aren’t.

Modern industrial civilization is also obsessed with the frankly rather weird belief that growth for its own sake is a good thing. (Outside of a few specific cases, that is. I’ve wondered at times whether the deeply neurotic American attitude toward body weight comes from the conflict between current fashions in body shape and the growth-is-good mania of the rest of our culture; if bigger is better, why isn’t a big belly better than a small one?) In a modern urban American environment, it’s easy to believe that growth is good, since that claim is endlessly rehashed whenever some new megawhatsit replaces something of merely human scale, and since so many of the costs of malignant growth get hauled out of sight and dumped on somebody else. In settlement patterns that haven’t been pounded into their present shape by true believers in industrial society’s growth-for-its-own-sake ideology, people are rather more likely to grasp the meaning of the words “too much.”

I’ve used examples from our own civilization because they’re familiar, but every civilization reshapes its urban environment in the shape of its own mental models, which then reinforce those models in the minds of the people who live in that environment. As these people in turn shape that environment, the result is positive feedback: the mental models in question become more and more deeply entrenched in the built environment and thus also the collective conversation of the culture, and in both cases, they also become more elaborate and more extreme. The history of architecture in the western world over the last few centuries is a great example of this latter: over that time, buildings became ever more completely defined by straight lines, flat surfaces, simple geometries, and hard boundaries between one space and another—and it’s hardly an accident that popular culture in urban communities has simplified in much the same way over that same timespan.

One way to understand this is to see a civilization as the working out in detail of some specific set of ideas about the world. At first those ideas are as inchoate as dream-images, barely grasped even by the keenest thinkers of the time. Gradually, though, the ideas get worked out explicitly; conflicts among them are resolved or papered over in standardized ways; the original set of ideas becomes the core of a vast, ramifying architecture of thought which defines the universe to the inhabitants of that civilization. Eventually, everything in the world of human experience is assigned some place in that architecture of thought; everything that can be hammered into harmony with the core set of ideas has its place in the system, while everything that can’t gets assigned the status of superstitious nonsense, or whatever other label the civilization likes to use for the realities it denies.

The further the civilization develops, though, the less it questions the validity of the basic ideas themselves, and the urban environment is a critical factor in making this happen. By limiting, as far as possible, the experiences available to influential members of society to those that fit the established architecture of thought, urban living makes it much easier to confuse mental models with the universe those models claim to describe, and that confusion is essential if enough effort, enthusiasm, and passion are to be directed toward the process of elaborating those models to their furthest possible extent.

A branch of knowledge that has to keep on going back to revisit its first principles, after all, will never get far beyond them. This is why philosophy, which is the science of first principles, doesn’t “progress” in the simpleminded sense of that word—Aristotle didn’t disprove Plato, nor did Nietzsche refute Schopenhauer, because each of these philosophers, like all others in that challenging field, returned to the realm of first principles from a different starting point and so offered a different account of the landscape. Original philosophical inquiry thus plays a very large role in the intellectual life of every civilization early in the process of urbanization, since this helps elaborate the core ideas on which the civilization builds its vision of reality; once that process is more or less complete, though, philosophy turns into a recherché intellectual specialty or gets transformed into intellectual dogma.

Cities are thus the Petri dishes in which civilizations ripen their ideas to maturity—and like Petri dishes, they do this by excluding contaminating influences. It’s easy, from the perspective of a falling civilization like ours, to see this as a dreadful mistake, a withdrawal from contact with the real world in order to pursue an abstract vision of things increasingly detached from everything else. That’s certainly one way to look at the matter, but there’s another side to it as well.

Civilizations are far and away the most spectacularly creative form of human society. Over the course of its thousand-year lifespan, the inhabitants of a civilization will create many orders of magnitude more of the products of culture—philosophical, scientific, and religious traditions, works of art and the traditions that produce and sustain them, and so on—than an equal number of people living in non-urban societies and experiencing the very sedate pace of cultural change already mentioned. To borrow a metaphor from the plant world, non-urban societies are perennials, and civilizations are showy annuals that throw all their energy into the flowering process. Having flowered, civilizations then go to seed and die, while the perennial societies flower less spectacularly and remain green thereafter.

The feedback loop described above explains both the explosive creativity of civilizations and their equally explosive downfall. It’s precisely because civilizations free themselves from the corrective feedback of nature, and divert an ever larger portion of their inhabitants’ brainpower from the uses for which human brains were originally adapted by evolution, that they generate such torrents of creativity. Equally, it’s precisely because they do these things that civilizations run off the rails into self-feeding delusion, lose the capacity to learn the lessons of failure or even notice that failure is taking place, and are destroyed by threats they’ve lost the capacity to notice, let alone overcome. Meanwhile, other kinds of human societies move sedately along their own life cycles, and their creativity and their craziness—and they have both of these, of course, just as civilizations do—are kept within bounds by the enduring negative feedback loops of nature.

Which of these two options is better? That’s a question of value, not of fact, and so it has no one answer. Facts, to return to a point made in these posts several times, belong to the senses and the intellect, and they’re objective, at least to the extent that others can say, “yes, I see it too.” Values, by contrast, are a matter of the heart and the will, and they’re subjective; to call something good or bad doesn’t state an objective fact about the thing being discussed. It always expresses a value judgment from some individual point of view. You can’t say “x is better than y,” and mean anything by it, unless you’re willing to field such questions as “better by what criteria?” and “better for whom?”

Myself, I’m very fond of the benefits of civilization. I like hot running water, public libraries, the rule of law, and a great many other things that you get in civilizations and generally don’t get outside of them. Of course that preference is profoundly shaped by the fact that I grew up in a civilization; if I’d happened to be the son of yak herders in central Asia or tribal horticulturalists in upland Papua New Guinea, I might well have a different opinion—and I might also have a different opinion even if I’d grown up in this civilization but had different needs and predilections. Robert E. Howard, whose fiction launched the series of posts that finishes up this week, was a child of American civilization at its early twentieth century zenith, and he loathed civilization and all it stood for.

This is one of the two reasons that I think it’s a waste of time to get into arguments over whether civilization is a good thing. The other reason is that neither my opinion nor yours, dear reader, nor the opinion of anybody else who might happen to want to fulminate on the internet about the virtues or vices of civilization, is worth two farts in an EF-5 tornado when it comes to the question of whether or not future civilizations will rise and fall on this planet after today’s industrial civilization completes the arc of its destiny. Since the basic requirements of urban life first became available not long after the end of the last ice age, civilizations have risen wherever conditions favored them, cycled through their lifespans, and fallen, and new civilizations have risen again in the same places whenever conditions remained favorable for that process.

Until the coming of the fossil fuel age, though, civilization was a localized thing, in a double sense. On the one hand, without the revolution in transport and military technology made possible by fossil fuels, any given civilization could maintain control over more than a small portion of the planet’s surface for only a fairly short time—thus as late as 1800, when the industrial revolution was already well under way, the civilized world was still divided into separate civilizations that each pursued its own very different ideas and values. On the other hand, without the economic revolution made possible by fossil fuels, very large sections of the world were completely unsuited to civilized life, and remained outside the civilized world for all practical purposes. As late as 1800, as a result, quite a bit of the world’s land surface was still inhabited by hunter-gatherers, nomadic pastoralists, and tribal horticulturalists who owed no allegiance to any urban power and had no interest in cities and their products at all—except for the nomadic pastoralists, that is, who occasionally liked to pillage one.

The world’s fossil fuel reserves aren’t renewable on any time scale that matters to human beings. Since we’ve burnt all the easily accessible coal, oil, and natural gas on the planet, and are working our way through the stuff that’s difficult to get even with today’s baroque and energy-intensive technologies, the world’s first fossil-fueled human civilization is guaranteed to be its last as well. That means that once the deindustrial dark age ahead of us is over, and conditions favorable for the revival of civilization recur here and there on various corners of the planet, it’s a safe bet that new civilizations will build atop the ruins we’ve left for them.

The energy resources they’ll have available to them, though, will be far less abundant and concentrated than the fossil fuels that gave industrial civilization its global reach. With luck, and some hard work on the part of people living now, they may well inherit the information they need to make use of sun, wind, and other renewable energy resources in ways that the civilizations before ours didn’t know how to do. As our present-day proponents of green energy are finding out the hard way just now, though, this doesn’t amount to the kind of energy necessary to maintain our kind of civilization.

I’ve argued elsewhere, especially in my book The Ecotechnic Future, that modern industrial society is simply the first, clumsiest, and most wasteful form of what might be called technic society, the subset of human societies that get a significant amount of their total energy from nonbiotic sources—that is, from something other than human and animal muscles fueled by the annual product of photosynthesis. If that turns out to be correct, future civilizations that learn to use energy sparingly may be able to accomplish some of the things that we currently do by throwing energy around with wild abandon, and they may also learn how to do remarkable things that are completely beyond our grasp today. Eventually there may be other global civilizations, following out their own unique sets of ideas about the world through the usual process of dramatic creativity followed by dramatic collapse.

That’s a long way off, though. As the first global civilization gives way to the first global dark age, my working guess is that civilization—that is to say, the patterns of human society necessary to support the concentration of population, wealth, and power in urban centers—is going to go away everywhere, or nearly everywhere, over the next one to three centuries. A planet hammered by climate change, strewn with chemical and radioactive poisons, and swept by mass migrations is not a safe place for cities and the other amenities of civilized life. As things calm down, say, half a millennium from now, a range of new civilizations will doubtless emerge in those parts of the planet that have suitable conditions for urban life, while human societies of other kinds will emerge everywhere else on the planet that human life is possible at all.

I realize that this is not exactly a welcome prospect for those people who’ve bought into industrial civilization’s overblown idea of its own universal importance. Those who believe devoutly that our society is the cutting edge of humanity’s future, destined to march on gloriously forever to the stars, will be as little pleased by the portrait of the future I’ve painted as their equal and opposite numbers, for whom our society is the end of history and must surely be annihilated, along with all seven billion of us, by some glorious cataclysm of the sort beloved by Hollywood scriptwriters. Still, the universe is under no obligation to cater to anybody’s fantasies, you know. That’s a lesson Robert E. Howard knew well and wove into the best of his fiction, the stories of Conan among them—and it’s a lesson worth learning now, at least for those who hope to have some influence over how the future affects them, their families, and their communities, in an age of decline and fall.

Wednesday, July 22, 2015

The Cimmerian Hypothesis, Part Two: A Landscape of Hallucinations

Last week’s post covered a great deal of ground—not surprising, really, for an essay that started from a quotation from a Weird Tales story about Conan the Barbarian—and it may be useful to recap the core argument here. Civilizations—meaning here human societies that concentrate power, wealth, and population in urban centers—have a distinctive historical trajectory of rise and fall that isn’t shared by societies that lack urban centers. There are plenty of good reasons why this should be so, from the ecological costs of urbanization to the buildup of maintenance costs that drives catabolic collapse, but there’s also a cognitive dimension.

Look over the histories of fallen civilizations, and far more often than not, societies don’t have to be dragged down the slope of decline and fall. Rather, they go that way at a run, convinced that the road to ruin must inevitably lead them to heaven on earth. Arnold Toynbee, whose voluminous study of the rise and fall of civilizations has been one of the main sources for this blog since its inception, wrote at length about the way that the elite classes of falling civilizations lose the capacity to come up with new responses for new situations, or even to learn from their mistakes; thus they keep on trying to use the same failed policies over and over again until the whole system crashes to ruin. That’s an important factor, no question, but it’s not just the elites who seem to lose track of the real world as civilizations go sliding down toward history’s compost heap, it’s the masses as well.

Those of my readers who want to see a fine example of this sort of blindness to the obvious need only check the latest headlines. Within the next decade or so, for example, the entire southern half of Florida will become unfit for human habitation due to rising sea levels, driven by our dumping of greenhouse gases into an already overloaded atmosphere. Low-lying neighborhoods in Miami already flood with sea water whenever a high tide and a strong onshore wind hit at the same time; one more foot of sea level rise and salt water will pour over barriers into the remaining freshwater sources, turning southern Florida into a vast brackish swamp and forcing the evacuation of most of the millions who live there.

That’s only the most dramatic of a constellation of climatic catastrophes that are already tightening their grip on much of the United States. Out west, the rain forests of western Washington are burning in the wake of years of increasingly severe drought, California’s vast agricultural acreage is reverting to desert, and the entire city of Las Vegas will probably be out of water—as in, you turn on the tap and nothing but dust comes out—in less than a decade. As waterfalls cascade down the seaward faces of Antarctic and Greenland glaciers, leaking methane blows craters in the Siberian permafrost, and sea level rises at rates considerably faster than the worst case scenarios scientists were considering a few years ago, these threats are hardly abstract issues; is anyone in America taking them seriously enough to, say, take any concrete steps to stop using the atmosphere as a gaseous sewer, starting with their own personal behavior? Surely you jest.

No, the Republicans are still out there insisting at the top of their lungs that any scientific discovery that threatens their rich friends’ profits must be fraudulent, the Democrats are still out there proclaiming just as loudly that there must be some way to deal with anthropogenic climate change that won’t cost them their frequent-flyer miles, and nearly everyone outside the political sphere is making whatever noises they think will allow them to keep on pursuing exactly those lifestyle choices that are bringing on planetary catastrophe. Every possible excuse to insist that what’s already happening won’t happen gets instantly pounced on as one more justification for inertia—the claim currently being splashed around the media that the Sun might go through a cycle of slight cooling in the decades ahead is the latest example. (For the record, even if we get a grand solar minimum, its effects will be canceled out in short order by the impact of ongoing atmospheric pollution.)

Business as usual is very nearly the only option anybody is willing to discuss, even though the long-predicted climate catastrophes are already happening and the days of business as usual in any form are obviously numbered. The one alternative that gets air time, of course, is the popular fantasy of instant planetary dieoff, which gets plenty of attention because it’s just as effective an excuse for inaction as faith in business as usual. What next to nobody wants to talk about is the future that’s actually arriving exactly as predicted: a future in which low-lying coastal regions around the country and the world have to be abandoned to the rising seas, while the Southwest and large portions of the mountain west become more inhospitable than the eastern Sahara or Arabia’s Empty Quarter.

If the ice melt keeps accelerating at its present pace, we could be only a few decades from the point at which it’s Manhattan Island’s turn to be abandoned, because everything below ground level is permanently flooded with seawater and every winter storm sends waves rolling right across the island and flings driftwood logs against second story windows. A few decades more, and waves will roll over the low-lying neighborhoods of Houston, Boston, Seattle, and Washington DC, while the ruined buildings that used to be New Orleans rise out of the still waters of a brackish estuary and the ruined buildings that used to be Las Vegas are half buried by the drifting sand. Take a moment to consider the economic consequences of that much infrastructure loss, that much destruction of built capital, that many people who somehow have to be evacuated and resettled, and think about what kind of body blow that will deliver to an industrial society that is already in bad shape for other reasons.

None of this had to happen. Half a century ago, policy makers and the public alike had already been presented with a tolerably clear outline of what was going to happen if we proceeded along the trajectory we were on, and those same warnings have been repeated with increasing force year by year, as the evidence to support them has mounted up implacably—and yet nearly all of us nodded and smiled and kept going. Nor has this changed in the least as the long-predicted catastrophes have begun to show up right on schedule. Quite the contrary: faced with a rising spiral of massive crises, people across the industrial world are, with majestic consistency, doing exactly those things that are guaranteed to make those crises worse.

So the question that needs to be asked, and if possible answered, is why civilizations—human societies that concentrate population, power, and wealth in urban centers—so reliably lose the capacity to learn from their mistakes and recognize that a failed policy has in fact failed. It’s also worth asking why they so reliably do this within a finite and predictable timespan: civilizations last on average around a millennium before they crash into a dark age, while uncivilized societies routinely go on for many times that period. Doubtless any number of factors drive civilizations to their messy ends, but I’d like to suggest a factor that, to my knowledge, hasn’t been discussed in this context before.

Let’s start with what may well seem like an irrelevancy. There’s been a great deal of discussion down through the years in environmental circles about the way that the survival and health of the human body depends on inputs from nonhuman nature. There’s been a much more modest amount of talk about the human psychological and emotional needs that can only be met through interaction with natural systems. One question I’ve never seen discussed, though, is whether the human intellect has needs that are only fulfilled by a natural environment.

As I consider that question, one obvious answer comes to mind: negative feedback.

The human intellect is the part of each of us that thinks, that tries to make sense of the universe of our experience. It does this by creating models. By “models” I don’t just mean those tightly formalized and quantified models we call scientific theories; a poem is also a model of part of the universe of human experience, so is a myth, so is a painting, and so is a vague hunch about how something will work out. When a twelve-year-old girl pulls the petals off a daisy while saying “he loves me, he loves me not,” she’s using a randomization technique to decide between two models of one small but, to her, very important portion of the universe, the emotional state of whatever boy she has in mind.

With any kind of model, it’s critical to remember Alfred Korzybski’s famous rule: “the map is not the territory.” A model, to put the same point another way, is a representation; it represents the way some part of the universe looks when viewed from the perspective of one or more members of our species of social primates, using the idiosyncratic and profoundly limited set of sensory equipment, neural processes, and cognitive frameworks we got handed by our evolutionary heritage. Painful though this may be to our collective egotism, it’s not unfair to say that human mental models are what you get when you take the universe and dumb it down to the point that our minds can more or less grasp it.

What keeps our models from becoming completely dysfunctional is the negative feedback we get from the universe. For the benefit of readers who didn’t get introduced to systems theory, I should probably take a moment to explain negative feedback. The classic example is the common household thermostat, which senses the temperature of the air inside the house and activates a switch accordingly. If the air temperature is below a certain threshold, the thermostat turns the heat on and warms things up; if the air temperature rises above a different, slightly higher threshold, the thermostat turns the heat off and lets the house cool down.

In a sense, a thermostat embodies a very simple model of one very specific part of the universe, the temperature inside the house. Like all models, this one includes a set of implicit definitions and a set of value judgments. The definitions are the two thresholds, the one that turns the furnace on and the one that turns it off, and the value judgments label temperatures below the first threshold “too cold” and those above the second “too hot.” Like every human model, the thermostat model is unabashedly anthropocentric—“too cold” by the thermostat’s standard would be uncomfortably warm for a polar bear, for example—and selects out certain factors of interest to human beings from a galaxy of other things we don’t happen to want to take into consideration.
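For readers who like to see such things spelled out, here’s a minimal sketch of that two-threshold model in code. Every number in it—the thresholds, the heating rate, the heat-loss coefficient—is an invented illustration rather than anything from the discussion above; the point is only that the corrective loop keeps the temperature inside a bounded band instead of letting it run away in either direction.

```python
# A toy thermostat with two thresholds, as described above. All numeric
# values are illustrative assumptions, not part of the original argument.

def thermostat_step(temp, furnace_on, low=18.0, high=21.0):
    """The model's definitions and value judgments, as a decision rule."""
    if temp < low:         # "too cold": turn the heat on
        return True
    if temp > high:        # "too hot": turn the heat off
        return False
    return furnace_on      # between the thresholds: change nothing

def simulate(steps, outside=10.0, temp=20.0):
    """Crude simulation: the furnace adds heat, the house leaks it outside."""
    furnace_on = False
    history = []
    for _ in range(steps):
        furnace_on = thermostat_step(temp, furnace_on)
        if furnace_on:
            temp += 4.0                 # heating while the furnace runs
        temp += 0.2 * (outside - temp)  # heat loss toward the outside air
        history.append(temp)
    return history

temps = simulate(60)
# Negative feedback in action: after settling in, the temperature
# oscillates within a bounded band around the two thresholds.
print(min(temps[10:]), max(temps[10:]))
```

Note that the simulated temperature overshoots each threshold slightly before the furnace catches up—corrective feedback keeps a system within bounds, but it rarely holds it to a single point.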

The models used by the human intellect to make sense of the universe are usually less simple than the one that guides a thermostat—there are unfortunately exceptions—but they work according to the same principle. They contain definitions, which may be implicit or explicit: the girl plucking petals from the daisy may not have an explicit definition of love in mind when she says “he loves me,” but there’s some set of beliefs and expectations about what those words imply underlying the model. They also contain value judgments: if she’s attracted to the boy in question, “he loves me” has a positive value and “he loves me not” has a negative one.

Notice, though, that there’s a further dimension to the model, which is its interaction with the observed behavior of the thing it’s supposed to model. Plucking petals from a daisy, all things considered, is not a very good predictor of the emotional states of twelve-year-old boys; predictions made on the basis of that method are very often disproved by other sources of evidence, which is why few girls much older than twelve rely on it as an information source. Modern western science has formalized and quantified that sort of reality testing, but it’s something that most people do at least occasionally. It’s when they stop doing so that we get the inability to recognize failure that helps to drive, among many other things, the fall of civilizations.

Individual facets of experienced reality thus provide negative feedback to individual models. The whole structure of experienced reality, though, is capable of providing negative feedback on another level—when it challenges the accuracy of the entire mental process of modeling.

Nature is very good at providing negative feedback of that kind. Here’s a human conceptual model that draws a strict line between mammals, on the one hand, and birds and reptiles, on the other. A little more than two centuries ago, it was as precise as any division in science: mammals have fur and don’t lay eggs, reptiles and birds don’t have fur and do lay eggs. Then some Australian settler met a platypus, which has fur and lays eggs. Scientists back in Britain flatly refused to take it seriously until enough platypus specimens finally made it there by ship. Plenty of platypus egg was splashed across plenty of distinguished scientific faces, and definitions had to be changed to make room for another category of mammals and the evolutionary history necessary to explain it.

Here’s another human conceptual model, the one that divides trees into distinct species. Most trees in most temperate woodlands, though, actually have a mix of genetics from closely related species. There are few red oaks; what you have instead are mostly-red, partly-red, and slightly-red oaks. Go from the northern to the southern end of a species’ distribution, or from wet to dry regions, and the variations within the species are quite often more extreme than those that separate trees that have been assigned to different species. Here’s still another human conceptual model, the one that divides trees from shrubs—plenty of species can grow either way, and the list goes on.

The human mind likes straight lines, definite boundaries, precise verbal definitions. Nature doesn’t. People who spend most of their time dealing with undomesticated natural phenomena, accordingly, have to get used to the fact that nature is under no obligation to make the kind of sense the human mind prefers. I’d suggest that this is why so many of the cultures our society calls “primitive”—that is, those that have simple material technologies and interact directly with nature much of the time—so often rely on nonlogical methods of thought: those our culture labels “mythological,” “magical,” or—I love this term—“prescientific.” (That the “prescientific” will almost certainly turn out to be the postscientific as well is one of the lessons of history that modern industrial society is trying its level best to ignore.) Nature as we experience it isn’t simple, neat, linear, and logical, and so it makes sense that the ways of thinking best suited to dealing with nature directly aren’t simple, neat, linear, and logical either.

 With this in mind, let’s return to the distinction discussed in last week’s post. I noted there that a city is a human settlement from which the direct, unmediated presence of nature has been removed as completely as the available technology permits. What replaces natural phenomena in an urban setting, though, is as important as what isn’t allowed there. Nearly everything that surrounds you in a city was put there deliberately by human beings; it is the product of conscious human thinking, and it follows the habits of human thought just outlined. Compare a walk down a city street to a walk through a forest or a shortgrass prairie: in the city street, much more of what you see is simple, neat, linear, and logical. A city is an environment reshaped to reflect the habits and preferences of the human mind.

I suspect there may be a straightforwardly neurological factor in all this. The human brain, so much larger compared to body weight than the brains of most of our primate relatives, evolved because having a larger brain provided some survival advantage to those hominins who had it, in competition with those who didn’t. It’s probably a safe assumption that processing information inputs from the natural world played a very large role in these advantages, and this would imply, in turn, that the human brain is primarily adapted for perceiving things in natural environments—not, say, for building cities, creating technologies, and making the other common products of civilization.

Thus some significant part of the brain has to be redirected away from the things that it’s adapted to do, in order to make civilizations possible. I’d like to propose that the simplified, rationalized, radically information-poor environment of the city plays a crucial role in this. (Information-poor? Of course; the amount of information that comes cascading through the five keen senses of an alert hunter-gatherer standing in an African forest is vastly greater than what a city-dweller gets from the blank walls and the monotonous sounds and scents of an urban environment.) Children raised in an environment that lacks the constant cascade of information natural environments provide, and taught to redirect their mental powers toward such other activities as reading and mathematics, grow up with cognitive habits and, in all probability, neurological arrangements focused toward the activities of civilization and away from the things to which the human brain is adapted by evolution.

One source of supporting evidence for this admittedly speculative proposal is the worldwide insistence on the part of city-dwellers that people who live in isolated rural communities, far outside the cultural ambit of urban life, are just plain stupid. What that means in practice, of course, is that people from isolated rural communities aren’t used to using their brains for the particular purposes that city people value. These allegedly “stupid” countryfolk are by and large extraordinarily adept at the skills they need to survive and thrive in their own environments. They may be able to listen to the wind and know exactly where on the far side of the hill a deer waits to be shot for dinner, glance at a stream and tell which riffle the trout have chosen for a hiding place, watch the clouds pile up and read from them how many days they’ve got to get the hay in before the rains come and rot it in the fields—all of which tasks require sophisticated information processing, the kind of processing that human brains evolved doing.

Notice, though, how the urban environment relates to the human habit of mental modeling. Everything in a city was a mental model before it became a building, a street, an item of furniture, or what have you. Chairs look like chairs, houses like houses, and so on; it’s so rare for humanmade items to break out of the habitual models of our species and the particular culture that built them that when this happens, it’s a source of endless comment. Where a natural environment constantly challenges human conceptual models, an urban environment reinforces them, producing a feedback loop that’s probably responsible for most of the achievements of civilization.

I suggest, though, that the same feedback loop may also play a very large role in the self-destruction of civilizations. People raised in urban environments come to treat their mental models as realities, more real than the often-unruly facts on the ground, because everything they encounter in their immediate environments reinforces those models. As the models become more elaborate and the cities become more completely insulated from the complexities of nature, the inhabitants of a civilization move deeper and deeper into a landscape of hallucinations—not least because as many of those hallucinations get built in brick and stone, or glass and steel, as the available technology permits. As a civilization approaches its end, the divergence between the world as it exists and the mental models that define the world for the civilization’s inmates becomes total, and its decisions and actions become lethally detached from reality—with consequences that we’ll discuss in next week’s post.

Wednesday, July 15, 2015

The Cimmerian Hypothesis, Part One: Civilization and Barbarism

One of the oddities of the writer’s life is the utter unpredictability of inspiration. There are times when I sit down at the keyboard knowing what I have to write, and plod my way through the day’s allotment of prose in much the same spirit that a gardener turns the earth in the beds of a big garden; there are times when a project sits there grumbling to itself and has to be coaxed or prodded into taking shape on the page; but there are also times when something grabs hold of me, drags me kicking and screaming to the keyboard, and holds me there with a squamous paw clamped on my shoulder until I’ve finished whatever it is that I’ve suddenly found out that I have to write.

Over the last two months, I’ve had that last experience on a considerably larger scale than usual; to be precise, I’ve just completed the first draft of a 70,000-word novel in eight weeks. Those of my readers and correspondents who’ve been wondering why I’ve been slower than usual to respond to them now know the reason. The working title is Moon Path to Innsmouth; it deals, in the sidelong way for which fiction is so well suited, with quite a number of the issues discussed on this blog; I’m pleased to say that I’ve lined up a publisher, and so in due time the novel will be available to delight the rugose hearts of the Great Old Ones and their eldritch minions everywhere.

None of that would be relevant to the theme of the current series of posts on The Archdruid Report, except that getting the thing written required quite a bit of reference to the weird tales of an earlier era—the writings of H.P. Lovecraft, of course, but also those of Clark Ashton Smith and Robert E. Howard, who both contributed mightily to the fictive mythos that took its name from Lovecraft’s squid-faced devil-god Cthulhu. One Howard story leads to another—or at least it does if you spent your impressionable youth stewing your imagination in a bubbling cauldron of classic fantasy fiction, as I did—and that’s how it happened that I ended up revisiting the final lines of “Beyond the Black River,” part of the saga of Conan of Cimmeria, Howard’s iconic hero:

“‘Barbarism is the natural state of mankind,’ the borderer said, still staring somberly at the Cimmerian. ‘Civilization is unnatural. It is a whim of circumstance. And barbarism must always ultimately triumph.’”

It’s easy to take that as nothing more than a bit of bluster meant to add color to an adventure story—easy but, I’d suggest, inaccurate. Science fiction has made much of its claim to be a “literature of ideas,” but a strong case can be made that the weird tale as developed by Lovecraft, Smith, Howard, and their peers has at least as much claim to the same label, and the ideas that feature in a classic weird tale are often a good deal more challenging than those that are the stock in trade of most science fiction: “gee, what happens if I extrapolate this technological trend a little further?” and the like. The authors who published with Weird Tales back in the day, in particular, liked to pose edgy questions about the way that the posturings of our species and its contemporary cultures appeared in the cold light of a cosmos that’s wholly uninterested in our overblown opinion of ourselves.

Thus I think it’s worth giving Conan and his fellow barbarians their due, and treating what we may as well call the Cimmerian hypothesis as a serious proposal about the underlying structure of human history. Let’s start with some basics. What is civilization? What is barbarism? What exactly does it mean to describe one state of human society as natural and another unnatural, and how does that relate to the repeated triumph of barbarism at the end of every civilization?

The word “civilization” has a galaxy of meanings, most of them irrelevant to the present purpose. We can take the original meaning of the word—in late Latin, civilisatio—as a workable starting point; it means “having or establishing settled communities.” A people known to the Romans was civilized if its members lived in civitates, cities or towns. We can generalize this further, and say that a civilization is a form of society in which people live in artificial environments. Is there more to civilization than that? Of course there is, but as I hope to show, most of it unfolds from the distinction just traced out.

A city, after all, is a human environment from which the ordinary workings of nature have been excluded, to as great an extent as the available technology permits. When you go outdoors in a city,  nearly all the things you encounter have been put there by human beings; even the trees are where they are because someone decided to put them there, not by way of the normal processes by which trees reproduce their kind and disperse their seeds. Those natural phenomena that do manage to elbow their way into an urban environment—tropical storms, rats, and the like—are interlopers, and treated as such. The gradient between urban and rural settlements can be measured precisely by what fraction of the things that residents encounter is put there by human action, as compared to the fraction that was put there by ordinary natural processes.

What is barbarism? The root meaning here is a good deal less helpful. The Greek word βαρβαροι, barbaroi, originally meant “people who say ‘bar bar bar’” instead of talking intelligibly in Greek. In Roman times that usage got bent around to mean “people outside the Empire,” and thus in due time to “tribes who are too savage to speak Latin, live in cities, or give up without a fight when we decide to steal their land.” Fast forward a century or two, and that definition morphed uncomfortably into “tribes who are too savage to speak Latin, live in cities, or stay peacefully on their side of the border” —enter Alaric’s Visigoths, Genseric’s Vandals, and the ebullient multiethnic horde that marched westwards under the banners of Attila the Hun.

This is also where Conan enters the picture. In crafting his fictional Hyborian Age, which was vaguely located in time between the sinking of Atlantis and the beginning of recorded history, Howard borrowed freely from various corners of the past, but the Roman experience was an important ingredient—the story cited above, framed by a struggle between the kingdom of Aquilonia and the wild Pictish tribes beyond the Black River, drew noticeably on Roman Britain, though it also took elements from the Old West and elsewhere. The entire concept of a barbarian hero swaggering his way south into the lands of civilization, which Howard introduced to fantasy fiction (and which has been so freely and ineptly plagiarized since his time), has its roots in the late Roman and post-Roman experience, a time when a great many enterprising warriors did just that, and when some, like Conan, became kings.

What sets barbarian societies apart from civilized ones is precisely that a much smaller fraction of the environment barbarians encounter results from human action. When you go outdoors in Cimmeria—if you’re not outdoors to start with, which you probably are—nearly everything you encounter has been put there by nature. There are no towns of any size, just scattered clusters of dwellings in the midst of a mostly unaltered environment. Where your Aquilonian town dweller who steps outside may have to look hard to see anything that was put there by nature, your Cimmerian who shoulders his battle-ax and goes for a stroll may have to look hard to see anything that was put there by human beings.

What’s more, there’s a difference in what we might usefully call the transparency of human constructions. In Cimmeria, if you do manage to get in out of the weather, the stones and timbers of the hovel where you’ve taken shelter are recognizable lumps of rock and pieces of tree; your hosts smell like the pheromone-laden social primates they are; and when their barbarian generosity inspires them to serve you a feast, they send someone out to shoot a deer, hack it into gobbets, and cook the result in some relatively simple manner that leaves no doubt in anyone’s mind that you’re all chewing on parts of a dead animal. Follow Conan’s route down into the cities of Aquilonia, and you’re in a different world, where paint and plaster, soap and perfume, and fancy cookery, among many other things, obscure nature’s contributions to the human world.

So that’s our first set of distinctions. What makes human societies natural or unnatural? It’s all too easy  to sink into a festering swamp of unsubstantiated presuppositions here, since people in every human society think of their own ways of doing things as natural and normal, and everyone else’s ways of doing the same things as unnatural and abnormal. Worse, there’s the pervasive bad habit in industrial Western cultures of lumping all non-Western cultures with relatively simple technologies together as “primitive man”—as though there’s only one of him, sitting there in a feathered war bonnet and a lionskin kilt playing the didgeridoo—in order to flatten out human history into an imaginary straight line of progress that leads from the caves, through us, to the stars.

In point of anthropological fact, the notion of “primitive man” as an allegedly unspoiled child of nature is pure hokum, and generally racist hokum at that. “Primitive” cultures—that is to say, human societies that rely on relatively simple technological suites—differ from one another just as dramatically as they differ from modern Western industrial societies; nor do simpler technological suites correlate with simpler cultural forms. Traditional Australian aboriginal societies, which have extremely simple material technologies, are considered by many anthropologists to have among the most intricate cultures known anywhere, embracing stunningly elaborate systems of knowledge in which cosmology, myth, environmental knowledge, social custom, and scores of other fields normally kept separate in our society are woven together into dizzyingly complex tapestries of knowledge.

What’s more, those tapestries of knowledge have changed and evolved over time. The hokum that underlies that label “primitive man” presupposes, among other things, that societies that use relatively simple technological suites have all been stuck in some kind of time warp since the Neolithic—think of the common habit of speech that claims that hunter-gatherer tribes are “still in the Stone Age” and so forth. Back of that habit of speech is the industrial world’s irrational conviction that all human history is an inevitable march of progress that leads straight to our kind of society, technology, and so forth. That other human societies might evolve in different directions and find their own wholly valid ways of making a home in the universe is anathema to most people in the industrial world these days—even though all the evidence suggests that this way of looking at the history of human culture makes far more sense of the data than does the fantasy of inevitable linear progress toward us.

Thus traditional tribal societies are no more natural than civilizations are, in one important sense of the word “natural;” that is, tribal societies are as complex, abstract, unique, and historically contingent as civilizations are. There is, however, one kind of human society that doesn’t share these characteristics—a kind of society that tends to be intellectually and culturally as well as technologically simpler than most, and that recurs in astonishingly similar forms around the world and across time. We’ve talked about it at quite some length in this blog; it’s the distinctive dark age society that emerges in the ruins of every fallen civilization after the barbarian war leaders settle down to become petty kings, the survivors of the civilization’s once-vast population get to work eking out a bare subsistence from the depleted topsoil, and most of the heritage of the wrecked past goes into history’s dumpster.

If there’s such a thing as a natural human society, the basic dark age society is probably it, since it emerges when the complex, abstract, unique, and historically contingent cultures of the former civilization and its hostile neighbors have both imploded, and the survivors of the collapse have to put something together in a hurry with nothing but raw human relationships and the constraints of the natural world to guide them. Of course once things settle down the new society begins moving off in its own complex, abstract, unique, and historically contingent direction; the dark age societies of post-Mycenean Greece, post-Roman Britain, post-Heian Japan, and their many equivalents have massive similarities, but the new societies that emerged from those cauldrons of cultural rebirth had much less in common with one another than their forbears did.

In Howard’s fictive history, the era of Conan came well before the collapse of Hyborian civilization; he was not himself a dark age warlord, though he doubtless would have done well in that setting. The Pictish tribes whose activities on the Aquilonian frontier inspired the quotation cited earlier in this post weren’t a dark age society, either, though if they’d actually existed, they’d have been well along the arc of transformation that turns the hostile neighbors of a declining civilization into the breeding ground of the warbands that show up on cue to finish things off. The Picts of Howard’s tale, though, were certainly barbarians—that is, they didn’t speak Aquilonian, live in cities, or stay peaceably on their side of the Black River—and they were still around long after the Hyborian civilizations were gone.

That’s one of the details Howard borrowed from history. By and large, human societies that don’t have urban centers tend to last much longer than those that do. In particular, human societies that don’t have urban centers don’t tend to go through the distinctive cycle of decline and fall ending in a dark age that urbanized societies undergo so predictably. There are plenty of factors that might plausibly drive this difference, many of which have been discussed here and elsewhere, but I’ve come to suspect something subtler may be at work here as well. As we’ve seen, a core difference between civilizations and other human societies is that people in civilizations tend to cut themselves off from the immediate experience of nature to a much greater extent than the uncivilized do. Does this help explain why civilizations crash and burn so reliably, leaving the barbarians to play drinking games with mead while sitting unsteadily on the smoldering ruins?

As it happens, I think it does.

As we’ve discussed at length in the last three weekly posts here, human intelligence is not the sort of protean, world-transforming superpower with limitless potential it’s been labeled by the more overenthusiastic partisans of human exceptionalism. Rather, it’s an interesting capacity possessed by one species of social primates, and quite possibly shared by some other animal species as well. Like every other biological capacity, it evolved through a process of adaptation to the environment—not, please note, to some abstract concept of the environment, but to the specific stimuli and responses that a social primate gets from the African savanna and its inhabitants, including but not limited to other social primates of the same species. It’s indicative that when our species originally spread out of Africa, it seems to have settled first in those parts of the Old World that had roughly savanna-like ecosystems, and only later worked out the bugs of living in such radically different environments as boreal forests, tropical jungles, and the like.

The interplay between the human brain and the natural environment is considerably more significant than has often been realized. For the last forty years or so, a scholarly discipline called ecopsychology has explored some of the ways that interactions with nature shape the human mind. More recently, in response to the frantic attempts of American parents to isolate their children from a galaxy of largely imaginary risks, psychologists have begun to talk about “nature deficit disorder,” the set of emotional and intellectual dysfunctions that show up reliably in children who have been deprived of the normal human experience of growing up in intimate contact with the natural world.

All of this should have been obvious from first principles. Studies of human and animal behavior alike have shown repeatedly that psychological health depends on receiving certain highly specific stimuli at certain stages in the maturation process. The experiments by Harry Harlow, who showed that monkeys raised with a mother-substitute wrapped in terrycloth grew up more or less normal, while those raised with a bare metal mother-substitute turned out psychotic even when all their other needs were met, are among the more famous of these, but there have been many more, and many of them can be shown to affect human capacities in direct and demonstrable ways. Children learn language, for example, only if they’re exposed to speech during a certain age window; lacking the right stimulus at the right time, the capacity to use language shuts down and apparently can’t be restarted again.

In this latter example, exposure to speech is what’s known as a triggering stimulus—something from outside the organism that kickstarts a process that’s already hardwired into the organism, but will not get under way until and unless the trigger appears. There are other kinds of stimuli that play different roles in human and animal development. The maturation of the human mind, in fact, might best be seen as a process in which inputs from the environment play a galaxy of roles, some of them of critical importance. What happens when the natural inputs that were around when human intelligence evolved get shut out of the experiences of maturing humans, and replaced by a very different set of inputs put there by human beings? We’ll discuss that next week, in the second part of this post.

Wednesday, July 08, 2015

Darwin's Casino

Our age has no shortage of curious features, but for me, at least, one of the oddest is the way that so many people these days don’t seem to be able to think through the consequences of their own beliefs. Pick an ideology, any ideology, straight across the spectrum from the most devoutly religious to the most stridently secular, and you can count on finding a bumper crop of people who claim to hold that set of beliefs, and recite them with all the uncomprehending enthusiasm of a well-trained mynah bird, but haven’t noticed that those beliefs contradict other beliefs they claim to hold with equal devotion.

I’m not talking here about ordinary hypocrisy. The hypocrites we have with us always; our species being what it is, plenty of people have always seen the advantages of saying one thing and doing another. No, what I have in mind is saying one thing and saying another, without ever noticing that if one of those statements is true, the other by definition has to be false. My readers may recall the way that cowboy-hatted heavies in old Westerns used to say to each other, “This town ain’t big enough for the two of us;” there are plenty of ideas and beliefs that are like that, but too many modern minds resemble nothing so much as an OK Corral where the gunfight never happens.

An example that I’ve satirized in an earlier post here is the bizarre way that so many people on the rightward end of the US political landscape these days claim to be, at one and the same time, devout Christians and fervid adherents of Ayn Rand’s violently atheist and anti-Christian ideology.  The difficulty here, of course, is that Jesus tells his followers to humble themselves before God and help the poor, while Rand told hers to hate God, wallow in fantasies of their own superiority, and kick the poor into the nearest available gutter.  There’s quite precisely no common ground between the two belief systems, and yet self-proclaimed Christians who spout Rand’s turgid drivel at every opportunity make up a significant fraction of the Republican Party just now.

Still, it’s only fair to point out that this sort of weird disconnect is far from unique to religious people, or for that matter to Republicans. One of the places it crops up most often nowadays is the remarkable unwillingness of people who say they accept Darwin’s theory of evolution to think through what that theory implies about the limits of human intelligence.

If Darwin’s right, as I’ve had occasion to point out here several times already, human intelligence isn’t the world-shaking superpower our collective egotism likes to suppose. It’s simply a somewhat more sophisticated version of the sort of mental activity found in many other animals. The thing that supposedly sets it apart from all other forms of mentation, the use of abstract language, isn’t all that unique; several species of cetaceans and an assortment of the brainier birds communicate with their kin using vocalizations that show all the signs of being languages in the full sense of the word—that is, structured patterns of abstract vocal signs that take their meaning from convention rather than instinct.

What differentiates human beings from bottlenosed porpoises, African gray parrots, and other talking species is the mere fact that in our case, language and abstract thinking happened to evolve in a species that also had the sort of grasping limbs, fine motor control, and instinctive drive to pick things up and fiddle with them, that primates have and most other animals don’t.  There’s no reason why sentience should be associated with the sort of neurological bias that leads to manipulating the environment, and thence to technology; as far as the evidence goes, we just happen to be the one species in Darwin’s evolutionary casino that got dealt both those cards. For all we know, bottlenosed porpoises have a rich philosophical, scientific, and literary culture dating back twenty million years; they don’t have hands, though, so they don’t have technology. All things considered, this may be an advantage, since it means they won’t have had to face the kind of self-induced disasters our species is so busy preparing for itself due to the inveterate primate tendency to, ahem, monkey around with things.

I’ve long suspected that one of the reasons why human beings haven’t yet figured out how to carry on a conversation with bottlenosed porpoises, African gray parrots, et al. in their own language is quite simply that we’re terrified of what they might say to us—not least because it’s entirely possible that they’d be right. Another reason for the lack of communication, though, leads straight back to the limits of human intelligence. If our minds have emerged out of the ordinary processes of evolution, what we’ve got between our ears is simply an unusually complex variation on the standard social primate brain, adapted over millions of years to the mental tasks that are important to social primates—that is, staying fed, attracting mates, competing for status, and staying out of the jaws of hungry leopards.

Notice that “discovering the objective truth about the nature of the universe” isn’t part of this list, and if Darwin’s theory of evolution is correct—as I believe it to be—there’s no conceivable way it could be. The mental activities of social primates, and all other living things, have to take the rest of the world into account in certain limited ways; our perceptions of food, mates, rivals, and leopards, for example, have to correspond to the equivalent factors in the environment; but it’s actually an advantage to any organism to screen out anything that doesn’t relate to immediate benefits or threats, so that adequate attention can be paid to the things that matter. We perceive colors, which most mammals don’t, because primates need to be able to judge the ripeness of fruit from a distance; we don’t perceive the polarization of light, as bees do, because primates don’t need to navigate by the angle of the sun.

What’s more, the basic mental categories we use to make sense of the tiny fraction of our surroundings that we perceive are just as much a product of our primate ancestry as the senses we have and don’t have. That includes the basic structures of human language, which most research suggests are inborn in our species, as well as such derivations from language as logic and the relation between cause and effect—this latter simply takes the grammatical relation between subjects, verbs, and objects, and projects it onto the nonlinguistic world. In the real world, every phenomenon is part of an ongoing cascade of interactions so wildly hypercomplex that labels like “cause” and “effect” are hopelessly simplistic; what’s more, a great many things—for example, the decay of radioactive nuclei—just up and happen randomly without being triggered by any specific cause at all. We simplify all this into cause and effect because just enough things appear to work that way to make the habit useful to us.

Another thing that has much more to do with our cognitive apparatus than with the world we perceive is number. Does one apple plus one apple equal two apples? In our number-using minds, yes; in the real world, it depends entirely on the size and condition of the apples in question. We convert qualities into quantities because quantities are easier for us to think with.  That was one of the core discoveries that kickstarted the scientific revolution; when Galileo became the first human being in history to think of speed as a quantity, he made it possible for everyone after him to get their minds around the concept of velocity in a way that people before him had never quite been able to do.

In physics, converting qualities to quantities works very, very well. In some other sciences, the same thing is true, though the further you go away from the exquisite simplicity of masses in motion, the harder it is to translate everything that matters into quantitative terms, and the more inevitably gets left out of the resulting theories. By and large, the more complex the phenomena under discussion, the less useful quantitative models are. Not coincidentally, the more complex the phenomena under discussion, the harder it is to control all the variables in play—the essential step in using the scientific method—and the more tentative, fragile, and dubious the models that result.

So when we try to figure out what bottlenosed porpoises are saying to each other, we’re facing what’s probably an insuperable barrier. All our notions of language are social-primate notions, shaped by the peculiar mix of neurology and hardwired psychology that proved most useful to bipedal apes on the East African savannah over the last few million years. The structures that shape porpoise speech, in turn, are social-cetacean notions, shaped by the utterly different mix of neurology and hardwired psychology that’s most useful if you happen to be a bottlenosed porpoise or one of its ancestors.

Mind you, porpoises and humans are at least fellow-mammals, and likely have common ancestors less than a hundred million years back. If you want to talk to a gray parrot, you’re trying to cross a much vaster evolutionary distance, since the ancestors of our therapsid forebears and the ancestors of the parrot’s archosaurian progenitors have been following divergent tracks since way back in the Paleozoic. Since language evolved independently in each of the lineages we’re discussing, the logic of convergent evolution comes into play: as with the eyes of vertebrates and cephalopods—another classic case of the same thing appearing in very different evolutionary lineages—the functions are similar but the underlying structure is very different. Thus it’s no surprise that it’s taken exhaustive computer analyses of porpoise and parrot vocalizations just to give us a clue that they’re using language too.

The takeaway point I hope my readers have grasped from this is that the human mind doesn’t know universal, objective truths. Our thoughts are simply the way that we, as members of a particular species of social primates, like to sort out the universe into chunks simple enough for us to think with. Does that make human thought useless or irrelevant? Of course not; it simply means that its uses and relevance are as limited as everything else about our species—and, of course, every other species as well. If any of my readers see this as belittling humanity, I’d like to suggest that fatuous delusions of intellectual omnipotence aren’t a useful habit for any species, least of all ours. I’d also point out that those very delusions have played a huge role in landing us in the rising spiral of crises we’re in today.

Human beings are simply one species among many, inhabiting part of the earth at one point in its long lifespan. We’ve got remarkable gifts, but then so does every other living thing. We’re not the masters of the planet, the crown of evolution, the fulfillment of Earth’s destiny, or any of the other self-important hogwash with which we like to tickle our collective ego, and our attempt to act out those delusional roles with the help of a lot of fossil carbon hasn’t exactly turned out well, you must admit. I know some people find it unbearable to see our species deprived of its supposed place as the precious darlings of the cosmos, but that’s just one of life’s little learning experiences, isn’t it? Most of us make a similar discovery on the individual scale in the course of growing up, and from my perspective, it’s high time that humanity do a little growing up of its own, ditch the infantile egotism, and get to work making the most of the time we have on this beautiful and fragile planet.

The recognition that there’s a middle ground between omnipotence and uselessness, though, seems to be very hard for a lot of people to grasp just now. I don’t know if other bloggers in the doomosphere have this happen to them, but every few months or so I field a flurry of attempted comments by people who want to drag the conversation over to their conviction that free will doesn’t exist. I don’t put those comments through, and not just because they’re invariably off topic; the ideology they’re pushing is, to my way of thinking, frankly poisonous, and it’s also based on a shopworn Victorian determinism that got chucked by working scientists rather more than a century ago, but is still being recycled by too many people who didn’t hear the thump when it landed in the trash can of dead theories.

A century and a half ago, it used to be a commonplace of scientific ideology that cause and effect ruled everything, and the whole universe was fated to rumble along a rigidly invariant sequence of events from the beginning of time to the end thereof. The claim was quite commonly made that a sufficiently vast intelligence, provided with a sufficiently complete data set about the position and velocity of every particle in the cosmos at one point in time, could literally predict everything that would ever happen thereafter. The logic behind that claim went right out the window, though, once experiments in the early 20th century showed conclusively that quantum phenomena are random in the strictest sense of the word. They’re not caused by some hidden variable; they just happen when they happen, by chance.

What determines the moment when a given atom of an unstable isotope will throw off some radiation and turn into a different element? Pure dumb luck. Since radiation discharges from single atoms of unstable isotopes are the most important cause of genetic mutations, and thus a core driving force behind the process of evolution, this is much more important than it looks. The stray radiation that gave you your eye color, dealt an otherwise uninteresting species of lobefin fish the adaptations that made it the ancestor of all land vertebrates, and provided the raw material for countless other evolutionary transformations:  these were entirely random events, and would have happened differently if certain unstable atoms had decayed at a different moment and sent their radiation into a different ovum or spermatozoon—as they very well could have. So it doesn’t matter how vast the intelligence or complete the data set you’ve got, the course of life on earth is inherently impossible to predict, and so are a great many other things that unfold from it.
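The statistical shape of that pure dumb luck is easy to sketch. The Python snippet below is my own illustration of the standard exponential-decay model, not anything from the essay: the half-life pins down the aggregate behavior with great precision—about half the atoms decay within one half-life—while no individual atom’s moment can be predicted, and a different random stream hands out entirely different individual fates.

```python
import math
import random

def decay_times(n_atoms, half_life, rng):
    """Sample the decay moment of each atom. Decay times are
    exponentially distributed; the rate follows from the half-life."""
    rate = math.log(2) / half_life
    return [rng.expovariate(rate) for _ in range(n_atoms)]

# Two runs with different random streams: same physics, different fates.
times = decay_times(100_000, half_life=10.0, rng=random.Random(42))
times2 = decay_times(100_000, half_life=10.0, rng=random.Random(7))

# The ensemble statistic is rock-steady: roughly half the atoms
# decay within one half-life...
frac_decayed = sum(t <= 10.0 for t in times) / len(times)

# ...but which atom decays when is sheer chance; the first atom's
# decay moment differs between the two runs.
```

The point of the sketch is only that stable aggregate statistics and radical individual unpredictability coexist without contradiction.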

With the gibbering phantom of determinism laid to rest, we can proceed to the question of free will. We can define free will operationally as the ability to produce genuine novelty in behavior—that is, to do things that can’t be predicted. Human beings do this all the time, and there are very good evolutionary reasons why they should have that capacity. Any of my readers who know game theory will recall that the best strategy in any competitive game includes an element of randomness, which prevents the other side from anticipating and forestalling your side’s actions. Food gathering, in game theory terms, is a competitive game; so are trying to attract a mate, competing for social prestige, staying out of the jaws of hungry leopards, and most of the other activities that pack the day planners of social primates.
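For readers who’d like to see the game-theoretic point in miniature, here is a hedged sketch of matching pennies in Python; the toy game and the pattern-reading opponent are my own illustration, not from the essay. A rigidly predictable player gets read and beaten nearly every round, while a player who randomizes each pick cannot be exploited and wins about half the time.

```python
import random

def play(strategy, rounds, rng):
    """Matching pennies against an opponent who bets the player will
    alternate picks. The opponent wins a round by matching the
    player's pick; return the player's win rate."""
    wins = 0
    last = 0
    for i in range(rounds):
        pick = strategy(i, last, rng)
        prediction = 1 - last          # opponent bets on alternation
        if prediction != pick:         # opponent failed to match
            wins += 1
        last = pick
    return wins / rounds

alternate = lambda i, last, rng: i % 2            # rigidly predictable
mixed = lambda i, last, rng: rng.randrange(2)     # coin-flip each round

rng = random.Random(0)
predictable_rate = play(alternate, 10_000, rng)   # exploited: near zero
random_rate = play(mixed, 10_000, rng)            # unexploitable: near half
```

This is the textbook mixed-strategy result: against an adaptive opponent, injecting randomness is not a defect of play but the optimal policy.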

Unpredictability is so highly valued by our species, in fact, that every human culture ever recorded has worked out formal ways to increase the total amount of sheer randomness guiding human action. Yes, we’re talking about divination—for those who don’t know the jargon, this term refers to what you do with Tarot cards, the I Ching, tea leaves, horoscopes, and all the myriad other ways human cultures have worked out to take a snapshot of the nonrational as a guide for action. Aside from whatever else may be involved—a point that isn’t relevant to this blog—divination does a really first-rate job of generating unpredictability. Flipping a coin does the same thing, and most people have confounded the determinists by doing just that on occasion, but fully developed divination systems like those just named provide a much richer palette of choices than the simple coin toss, and thus enable people to introduce a much richer range of novelty into their actions.
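One admittedly reductive way to quantify that “richer palette” is information content: a uniform draw from n equally likely outcomes yields log2(n) bits of unpredictability. The figures below are my own back-of-the-envelope illustration, assuming each outcome of a given system is equally likely to come up.

```python
import math

def bits_per_draw(n_outcomes):
    """Information content of one uniform draw from n equally
    likely outcomes, in bits."""
    return math.log2(n_outcomes)

coin = bits_per_draw(2)        # a coin toss: 1 bit
hexagram = bits_per_draw(64)   # one I Ching hexagram: 6 bits
tarot = bits_per_draw(78)      # one card from a Tarot deck: ~6.3 bits
```

On this crude measure, a single hexagram or card draw delivers better than six times the raw unpredictability of a coin flip, before any spreads or changing lines multiply the possibilities further.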

Still, divination is a crutch, or at best a supplement; human beings have their own onboard novelty generators, which can do the job all by themselves if given half a chance.  The process involved here was understood by philosophers a long time ago, and no doubt the neurologists will get around to figuring it out one of these days as well. The core of it is that humans don’t respond directly to stimuli, external or internal.  Instead, they respond to their own mental representations of stimuli, which are constructed by the act of cognition and are laced with bucketloads of extraneous material garnered from memory and linked to the stimulus in uniquely personal, irrational, even whimsical ways, following loose and wildly unpredictable cascades of association and contiguity that have nothing to do with logic and everything to do with the roots of creativity. 

Each human society tries to give its children some approximation of its own culturally defined set of representations—that’s what’s going on when children learn language, pick up the customs of their community, ask for the same bedtime story to be read to them for the umpteenth time, and so on. Those culturally defined representations proceed to interact in various ways with the inborn, genetically defined representations that get handed out for free with each brand new human nervous system.  The existence of these biologically and culturally defined representations, and of various ways that they can be manipulated to some extent by other people with or without the benefit of mass media, make up the ostensible reason why the people mentioned above insist that free will doesn’t exist.

Here again, though, the fact that the human mind isn’t omnipotent doesn’t make it powerless. Think about what happens, say, when a straight stick is thrust into water at an angle, and the stick seems to pick up a sudden bend at the water’s surface, due to differential refraction in water and air. The illusion is as clear as anything, but if you show this to a child and let the child experiment with it, you can watch the representation “the stick is bent” give way to “the stick looks bent.” Notice what’s happening here: the stimulus remains the same, but the representation changes, and so do the actions that result from it. That’s a simple example of how representations create the possibility of freedom.
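The optics behind the bent-stick illusion are ordinary Snell’s-law refraction. As a sketch—the function and its parameters are mine, with textbook refractive indices for air and water—the change of angle at the surface works out like this:

```python
import math

def refraction_angle(incidence_deg, n_from=1.00, n_into=1.33):
    """Snell's law: n1 * sin(t1) = n2 * sin(t2).
    Defaults model light passing from air into water."""
    t1 = math.radians(incidence_deg)
    return math.degrees(math.asin(n_from * math.sin(t1) / n_into))

# Light from the submerged part of the stick crosses the surface at
# 45 degrees and continues at roughly 32 degrees, so the underwater
# portion appears displaced: the stick "looks bent."
bend = refraction_angle(45.0)
```

The stimulus at the eye is fixed by this geometry; what the child revises through experiment is only the representation built on top of it.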

In the same way, when the media spouts some absurd bit of manipulative hogwash, if you take the time to think about it, you can watch your own representation shift from “that guy’s having an orgasm from slurping that fizzy brown sugar water” to “that guy’s being paid to pretend to have an orgasm, so somebody can try to convince me to buy that fizzy brown sugar water.” If you really pay attention, it may shift again to “why am I wasting my time watching this guy pretend to get an orgasm from fizzy brown sugar water?” and may even lead you to chuck your television out a second story window into an open dumpster, as I did to the last one I ever owned. (The flash and bang when the picture tube imploded, by the way, was far more entertaining than anything that had ever appeared on the screen.)

Human intelligence is limited. Our capacities for thinking are constrained by our heredity, our cultures, and our personal experiences—but then so are our capacities for the perception of color, a fact that hasn’t stopped artists from the Paleolithic to the present from putting those colors to work in a galaxy of dizzyingly original ways. A clear awareness of the possibilities and the limits of the human mind makes it easier to play the hand we’ve been dealt in Darwin’s casino—and it also points toward a generally unsuspected reason why civilizations come apart, which we’ll discuss next week.

Wednesday, July 01, 2015

The Dream of the Machine

As I type these words, it looks as though the wheels are coming off the global economy. Greece and Puerto Rico have both suspended payments on their debts, and China’s stock market, which spent the last year in a classic speculative bubble, is now in the middle of a classic speculative bust. Those of my readers who’ve read John Kenneth Galbraith’s lively history The Great Crash 1929 already know all about the Chinese situation, including the outcome—and since vast amounts of money from all over the world went into Chinese stocks, and most of that money is in the process of turning into twinkle dust, the impact of the crash will inevitably proliferate through the global economy.

So, in all probability, will the Greek and Puerto Rican defaults. In today’s bizarre financial world, the kind of bad debts that used to send investors backing away in a hurry attract speculators in droves, and so it turns out that some big New York hedge funds are in trouble as a result of the Greek default, and some of the same firms that got into trouble with mortgage-backed securities in the recent housing bubble are in the same kind of trouble over Puerto Rico’s unpayable debts. How far will the contagion spread? It’s anybody’s guess.

Oh, and on another front, nearly half a million acres of Alaska burned up in a single day last week—yes, the fires are still going—while ice sheets in Greenland are collapsing so frequently and forcefully that the resulting earthquakes are rattling seismographs thousands of miles away. These and other signals of a biosphere in crisis make good reminders of the fact that the current economic mess isn’t happening in a vacuum. As Ugo Bardi pointed out in a thoughtful blog post, finance is the flotsam on the surface of the ocean of real exchanges of real goods and services, and the current drumbeat of financial crises is symptomatic of the real crisis—the arrival of the limits to growth that so many people have been discussing, and so many more have been trying to ignore, for the last half century or so.

A great many people in the doomward end of the blogosphere are talking about what’s going on in the global economy and what’s likely to blow up next. Around the time the next round of financial explosions start shaking the world’s windows, a great many of those same people will likely be talking about what to do about it all.  I don’t plan on joining them in that discussion. As blog posts here have pointed out more than once, time has to be considered when getting ready for a crisis. The industrial world would have had to start backpedaling away from the abyss decades ago in order to forestall the crisis we’re now in, and the same principle applies to individuals.  The slogan “collapse now and avoid the rush!” loses most of its point, after all, when the rush is already under way.

Any of my readers who are still pinning their hopes on survival ecovillages and rural doomsteads they haven’t gotten around to buying or building yet, in other words, are very likely out of luck. They, like the rest of us, will be meeting this where they are, with what they have right now. This is ironic, in that ideas that might have been worth adopting three or four years ago are just starting to get traction now. I’m thinking here particularly of a recent article on how to use permaculture to prepare for a difficult future, which describes the difficult future in terms that will be highly familiar to readers of this blog. More broadly, there’s a remarkable amount of common ground between that article and the themes of my book Green Wizardry. The awkward fact remains that when the global banking industry shows every sign of freezing up the way it did in 2008, putting credit for land purchases out of reach of most people for years to come, the article’s advice may have come rather too late.

That doesn’t mean, of course, that my readers ought to crawl under their beds and wait for death. What we’re facing, after all, isn’t the end of the world—though it may feel like that for those who are too deeply invested, in any sense of that last word you care to use, in the existing order of industrial society. As Visigothic mommas used to remind their impatient sons, Rome wasn’t sacked in a day. The crisis ahead of us marks the end of what I’ve called abundance industrialism and the transition to scarcity industrialism, as well as the end of America’s global hegemony and the emergence of a new international order whose main beneficiary hasn’t been settled yet. Those paired transformations will most likely unfold across several decades of economic chaos, political turmoil, environmental disasters, and widespread warfare. Plenty of people got through the equivalent cataclysms of the first half of the twentieth century with their skins intact, even if the crisis caught them unawares, and no doubt plenty of people will get through the mess that’s approaching us in much the same condition.

Thus I don’t have any additional practical advice, beyond what I’ve already covered in my books and blog posts, to offer my readers just now. Those who’ve already collapsed and gotten ahead of the rush can break out the popcorn and watch what promises to be a truly colorful show.  Those who didn’t—well, you might as well get some popcorn going and try to enjoy the show anyway. If you come out the other side of it all, schoolchildren who aren’t even born yet may eventually come around to ask you awed questions about what happened when the markets crashed in ’15.

In the meantime, while the popcorn is popping and the sidewalks of Wall Street await their traditional tithe of plummeting stockbrokers, I’d like to return to the theme of last week’s post and talk about the way that the myth of the machine—if you prefer, the widespread mental habit of thinking about the world in mechanistic terms—pervades and cripples the modern mind.

Of all the responses that last week’s post fielded, those I found most amusing, and also most revealing, were those that insisted that of course the universe is a machine, so is everything and everybody in it, and that’s that. That’s amusing because most of the authors of these comments made it very clear that they embraced the sort of scientific-materialist atheism that rejects any suggestion that the universe has a creator or a purpose. A machine, though, is by definition a purposive artifact—that is, it’s made by someone to do something. If the universe is a machine, then, it has a creator and a purpose, and if it doesn’t have a creator and a purpose, logically speaking, it can’t be a machine.

That sort of unintentional comedy inevitably pops up whenever people don’t think through the implications of their favorite metaphors. Still, chase that habit further along its giddy path and you’ll find a deeper absurdity at work. When people say “the universe is a machine,” unless they mean that statement as a poetic simile, they’re engaging in a very dubious sort of logic. As Alfred Korzybski pointed out a good many years ago, pretty much any time you say “this is that,” unless you implicitly or explicitly qualify what you mean in very careful terms, you’ve just babbled nonsense.

The difficulty lies in that seemingly innocuous word “is.” What Korzybski called the “is of identity”—the use of the word “is” to represent  =, the sign of equality—makes sense only in a very narrow range of uses.  You can use the “is of identity” with good results in categorical definitions; when I commented above that a machine is a purposive artifact, that’s what I was doing. Here is a concept, “machine;” here are two other concepts, “purposive” and “artifact;” the concept “machine” logically includes the concepts “purposive” and “artifact,” so anything that can be described by the words “a machine” can also be described as “purposive” and “an artifact.” That’s how categorical definitions work.

Let’s consider a second example, though: “a machine is a purple dinosaur.” That utterance uses the same structure as the one we’ve just considered.  I hope I don’t have to prove to my readers, though, that the concept “machine” doesn’t include the concepts “purple” and “dinosaur” in any but the most whimsical of senses.  There are plenty of things that can be described by the label “machine,” in other words, that can’t be described by the labels “purple” or “dinosaur.” The fact that some machines—say, electronic Barney dolls—can in fact be described as purple dinosaurs doesn’t make the definition any less silly; it simply means that the statement “no machine is a purple dinosaur” can’t be justified either.

With that in mind, let’s take a closer look at the statement “the universe is a machine.” As pointed out earlier, the concept “machine” implies the concepts “purposive” and “artifact,” so if the universe is a machine, somebody made it to carry out some purpose. Those of my readers who happen to belong to Christianity, Islam, or another religion that envisions the universe as the creation of one or more deities—not all religions make this claim, by the way—will find this conclusion wholly unproblematic. My atheist readers will disagree, of course, and their reaction is the one I want to discuss here. (Notice how “is” functions in the sentence just uttered: “the reaction of the atheists” equals “the reaction I want to discuss.” This is one of the few other uses of “is” that doesn’t tend to generate nonsense.)

In my experience, at least, atheists faced with the argument about the meaning of the word “machine” I’ve presented here pretty reliably respond with something like “It’s not a machine in that sense.” That response takes us straight to the heart of the logical problems with the “is of identity.” In what sense is the universe a machine? Pursue the argument far enough, and unless the atheist storms off in a huff—which admittedly tends to happen more often than not—what you’ll get amounts to “the universe and a machine share certain characteristics in common.” Go further still—and at this point the atheist will almost certainly storm off in a huff—and you’ll discover that the characteristics that the universe is supposed to share with a machine are all things we can’t actually prove one way or another about the universe, such as whether it has a creator or a purpose.

The statement “the universe is a machine,” in other words, doesn’t do what it appears to do. It appears to state a categorical identity; it actually states an unsupported generalization in absolute terms. It takes a mental model abstracted from one corner of human experience and applies it to something unrelated.  In this case, for polemic reasons, it does so in a predictably one-sided way: deductions approved by the person making the statement (“the universe is a machine, therefore it lacks life and consciousness”) are acceptable, while deductions the person making the statement doesn’t like (“the universe is a machine, therefore it was made by someone for some purpose”) get the dismissive response noted above.

This sort of doublethink appears all through the landscape of contemporary nonconversation and nondebate, to be sure, but the problems with the “is of identity” don’t stop with its polemic abuse. Any time you say “this is that,” and mean something other than “this has some features in common with that,” you’ve just fallen into one of the core boobytraps hardwired into the structure of human thought.

Human beings think in categories. That’s what made ancient Greek logic, which takes categories as its basic element, so massive a revolution in the history of human thinking: by watching the way that one category includes or excludes another, which is what the Greek logicians did, you can squelch a very large fraction of human stupidities before they get a foothold. What Alfred Korzybski pointed out, in effect, is that there’s a metalogic that the ancient Greeks didn’t get to, and logical theorists since their time haven’t really tackled either: the extremely murky relationship between the categories we think with and the things we experience, which don’t come with category labels spraypainted on them.

Here is a green plant with a woody stem. Is it a tree or a shrub? That depends on exactly where you draw the line between those two categories, and as any botanist can tell you, that’s neither an easy nor an obvious thing. As long as you remember that categories exist within the human mind as convenient handles for us to think with, you can navigate around the difficulties, but when you slip into thinking that the categories are more real than the things they describe, you’re in deep, deep trouble.

It’s not at all surprising that human thought should have such problems built into it. If, as I do, you accept the Darwinian thesis that human beings evolved out of prehuman primates by the normal workings of the laws of evolution, it follows logically that our nervous systems and cognitive structures didn’t evolve for the purpose of understanding the truth about the cosmos; they evolved to assist us in getting food, attracting mates, fending off predators, and a range of similar, intellectually undemanding tasks. If, as many of my theist readers do, you believe that human beings were created by a deity, the yawning chasm between creator and created, between an infinite and a finite intelligence, stands in the way of any claim that human beings can know the unvarnished truth about the cosmos. Neither viewpoint supports the claim that a category created by the human mind is anything but a convenience that helps our very modest mental powers grapple with an ultimately incomprehensible cosmos.

Any time human beings try to make sense of the universe or any part of it, in turn, they have to choose from among the available categories in an attempt to make the object of inquiry fit the capacities of their minds. That’s what the founders of the scientific revolution did in the seventeenth century, by taking the category of “machine” and applying it to the universe to see how well it would fit. That was a perfectly rational choice from within their cultural and intellectual standpoint. The founders of the scientific revolution were Christians to a man, and some of them (for example, Isaac Newton) were devout even by the standards of the time; the idea that the universe had been made by someone for some purpose, after all, wasn’t problematic in the least to people who took it as given that the universe was made by God for the purpose of human salvation. It was also a useful choice in practical terms, because it allowed certain features of the universe—specifically, the behavior of masses in motion—to be accounted for and modeled with a clarity that previous categories hadn’t managed to achieve.

The fact that one narrowly defined aspect of the universe seems to behave like a machine, though, does not prove that the universe is a machine, any more than the fact that one machine happens to look like a purple dinosaur proves that all machines are purple dinosaurs. The success of mechanistic models in explaining the behavior of masses in motion proved that mechanical metaphors are good at fitting some of the observed phenomena of physics into a shape that’s simple enough for human cognition to grasp, and that’s all it proved. To go from that modest fact to the claim that the universe and everything in it are machines involves an intellectual leap of pretty spectacular scale. Part of the reason that leap was taken in the seventeenth century was the religious frame of scientific inquiry at that time, as already mentioned, but there was another factor, too.

It’s a curious fact that mechanistic models of the universe appeared in western European cultures, and became wildly popular there, well before the machines did. In the early seventeenth century, machines played a very modest role in the life of most Europeans; most tasks were done using hand tools powered by human and animal muscle, the way they had been done since the dawn of the agricultural revolution eight millennia or so before. The most complex devices available at the time were pendulum clocks, printing presses, handlooms, and the like—you know, the sort of thing that people these days use instead of machines when they want to get away from technology.

For reasons that historians of ideas are still trying to puzzle out, though, western European thinkers during these same years were obsessed with machines, and with mechanical explanations for the universe. Those latter ranged from the plausible to the frankly preposterous—René Descartes, for example, proposed a theory of gravity in which little corkscrew-shaped particles went zooming up from the earth to screw themselves into pieces of matter and yank them down. Until Isaac Newton, furthermore, theories of nature based on mechanical models didn’t actually explain that much, and until the cascade of inventive adaptations of steam power that ended with James Watt’s epochal steam engine nearly a century after Newton, the idea that machines could elbow aside craftspeople using hand tools and animals pulling carts was an unproven hypothesis. Yet a great many people in western Europe believed in the power of the machine as devoutly as their ancestors had believed in the power of the bones of the local saints.

A habit of thought very widespread in today’s culture assumes that technological change happens first and the world of ideas changes in response to it. The facts simply won’t support that claim, though. As the history of mechanistic ideas in science shows clearly, the ideas come first and the technologies follow—and there’s good reason why this should be so. Technologies don’t invent themselves, after all. Somebody has to put in the work to invent them, and then other people have to invest the resources to take them out of the laboratory and give them a role in everyday life. The decisions that drive invention and investment, in turn, are powerfully shaped by cultural forces, and these in turn are by no means as rational as the people influenced by them generally like to think.

People in western Europe and a few of its colonies dreamed of machines, and then created them. They dreamed of a universe reduced to the status of a machine, a universe made totally transparent to the human mind and totally subservient to the human will, and then set out to create it. That latter attempt hasn’t worked out so well, for a variety of reasons, and the rising tide of disasters sketched out in the first part of this week’s post unfold in large part from the failure of that misbegotten dream. In the next few posts, I want to talk about why that failure was inevitable, and where we might go from here.