Monday, December 28, 2015

Writers Notebook - 28 Dec 2015

Relationships with Technology

  • The historian Ronald Wright has noted, "From ancient times until today, civilized people have believed that they behave better, and are better, than so-called savages." But this is just not so. It is a belief that is unjustified specifically because we have the same stone age brain as those savages. Their cultures are different from our own, but we cannot be said in any meaningful way to be "better", either individually or as a group. Hunter-gatherers can be just as friendly, brutal, caring, dismissive, helpful and murderous as modern, civilized people. The difference between the two would seem to be simply a matter of technology. Use of technology certainly changes brain structures, but does not change the fundamentals of emotion, including compassion and fear.
  • The word in the Pashto language for an AK-47, the world's most ubiquitous military assault rifle is, disturbingly, "machine". The AK-47 may be the only machine rural Pashtun children ever see. Trucks are rare in their mountain villages, time is told by the sun, and plumbing is unheard of.

Quote of the Day

Tuesday, December 22, 2015

Star Wars' Dirty Little Secret: We are the Empire


We all love Star Wars. For those of us old enough to remember the 1977 original Episode IV, it was a life-changing event. We absorbed Star Wars memes before the rest of society, and integrated them into our lives. Atheists, Hindus, Jews, and Christians all speak of the Force without irony. Even those of us who wouldn't be caught dead dressing up as Wookiees or droids feel its influence.

We even love the brainchild of Jedi Master Lucas, even when its last word should properly be spelled "wares". Lucas reportedly made the majority of his fortune selling the marketing rights to Star Wars paraphernalia, right up until Lucasfilm's sale to Disney for more than 4 billion USD in 2012.

Star Wars, though, has a secret. A dirty little secret. As with Avatar, and the much older The Lord of the Rings, the bad guys feature shiny new constructions of huge scale under some form of magical control. They are everything we wish to make in our Brave New World. Never mind the cost.

We are awed by the relative size of the Imperial Star Destroyer in Episode IV and again by the massive Death Star. Comparisons of scale go right through to the brand-new Episode VII, where we are shown how the new superweapon Starkiller Base dwarfs the original Death Star. The Empire and the First Order like things big, new, shiny, and made of metal. They are as dehumanizing as Saruman's industrial mines or the mining machines of Avatar.

The good guys, on the other hand, are nature-loving tree huggers. Yoda hangs out on a swampy world full of life because the Force is generated by life. Nature is Yoda's place of power. Life, big and often dangerous life, abounds in the Star Wars universe. Our heroes can't turn around without being surprised by an outsized creature, from the driest desert to an asteroid in the depths of space. Life is everywhere. In many ways, it is a central player. And it contrasts completely with the modern, clean, metal world of the Empire.

Star Wars rebels live in small, human-centric groups. Luke Skywalker makes his way to the Rebel Alliance on Yavin, only to find his childhood friend Biggs among the pilots. How can two friends meet at random in a galaxy teeming with life and millions of inhabited planets? Because the rebels are a tiny group. In fact, they are a tiny group of tiny groups, each the approximate size of hunter-gatherer groups.

How many "snub fighters" did the "well equipped" Alliance send out against the Death Star? Thirty. That's it. Thirty just happens to be the median size of a traditional hunter-gatherer group.

Everything about the Star Wars universe pits insanely big, dehumanizing, industrial, machine-dominated governmental forces against something else that we can all relate to: Tiny, ragtag groups of friends who know each other well and act as a team only because they wish to. The rebels have choices, as Han Solo's character demonstrates over and over again.

Even the weapons show the contrast of scale. The Empire has the Death Star. The First Order has Starkiller Base. The rebels have one-person fighter ships and the occasional lightsaber.

The rebels fight at human scale with personal weapons against a huge enemy that awes them with its size and power. And yet they win. And we cheer.

There's more. Rebels have babies, real flesh-and-blood human babies. Leia and Han had a baby between Episodes VI and VII. Even Darth Vader had a mom when he was a cute little kid. Their parents loved them even when they went horribly wrong. But those babies that are even exposed a little bit to the Dark Side start turning into machines, bit by bit. The ultimate expression of dehumanization is the stormtroopers. They are clones to the last under the Empire, and orphans painfully ripped from their parents' loving arms under the First Order.

Let's leave the comfortable fantasy of Star Wars for just a moment, and take a trip to Afghanistan. The incredibly brave Afghan reporter Najibullah Quraishi reported on PBS's Frontline on the indoctrination of child soldiers in Afghanistan's Eastern provinces. Quraishi insists that we listen to an ISIS commander as he instructs nine-year-olds in the use of grenades and AK-47 rifles.

And what is the word in the Pashto language for an AK-47, the world's most ubiquitous military assault rifle? The word is, disturbingly, "machine". The AK-47 may be the only machine these children ever see. Trucks are rare in their mountain villages, time is told by the sun, and plumbing is unheard of.

Machines have a similar relationship in the Star Wars universe. Droids are everywhere. Who makes them? The only droid we see being made is C-3PO, Anakin's homebrew friend. In fact, Anakin's creation of the metal man was the first indication we were given that he would turn to the Dark Side. We see Luke repairing C-3PO's arm in Episode IV, and charging R2-D2 in Episode V. Chewie reassembles C-3PO in Episode V as well. Others hack away on-screen and off at the Millennium Falcon and other gear. But who makes them? It must be the Empire. The rebels sure don't. The rebels are too busy running and fighting.

The droids of the Alliance are machines like the Pashtuns' AK-47. Both groups of backwoods fighters are mere users of high technology. They are not the progenitors.

It is time we faced facts. The terrorists and freedom fighters that we Americans purport to abhor are the prototype for the Rebel Alliance. We are the Empire, just as Iran and Hezbollah have told us we are.

Star Wars shows us the central schizophrenia of modern Western society. We yearn for the tight-knit, human-scale societies of friends working for a common cause. We also want our indoor plumbing, Netflix, regular food supplies, and pornography. We drown our social discomfort with the next hit of sugar.

The great irony of Star Wars is that we collectively sit in air-conditioned comfort, munching our popcorn and drinking our sugary sodas, rapt by the magic of CGI-induced scenes of stickin' it to the man. We cheer the dirty and ill-equipped heroes that tear down the great metal empire of oppression. Then we go to work the next day and keep building the Empire.


Tuesday, July 07, 2015

Writer's Notebook - 7 July 2015

On Pandering

The 2016 US presidential campaign is seemingly in full cajole. This gem was uncovered in the Lexington column of the Economist's 20 June 2015 issue:
"Rick Perry, a former governor of Texas, rode to the barbecue on a Harley belonging to a disabled war hero, accompanied by exNavy SEALS, to raise funds for a charity that gives puppies to military veterans."

Mr. Perry spent five years in the US Air Force from 1972-1977, three years of which he was a pilot of C-130 cargo planes. As the Economist journalist noted, his appearance in Iowa reached the level of performance art.

My wife points out, "I would laugh at that if I read it in a Terry Pratchett book." Indeed. Perhaps my next business venture should be a family game called "Truth or Pratchett?". One would vie with peers to guess whether a ridiculous quotation actually happened, or was rebranded from a Pratchett novel.

Quotes of the Day

Both of today's quotations are from the classicist Edith Hamilton, from her masterwork The Greek Way.

  • "The ancient priests had said, 'Thus far and no farther. We set the limits to thought.' The Greeks said, 'All things are to be examined and called into question. There are no limits set to thought.'"
  • "Before Greece the domain of the intellect belonged to the priests. They were the intellectual class of Egypt. Their power was tremendous. Kings were subject to it. Great men must have built up that mighty organization, great minds, keen intellects, but what they learned of old truth and what they discovered of new truth was valued as it increased the prestige of the organization. And since Truth is a jealous mistress and will reveal herself not a whit to any but a disinterested seeker, as the power of the priesthood grew, and any idea that tended to weaken it met with a cold reception, the priests must fairly soon have become sorry intellectualists, guardians only of what seekers of old had found, never using their own minds with freedom."



Tuesday, February 17, 2015

Introductory JSON-LD Videos

The prolific Manu Sporny has created a useful series of videos explaining JSON-LD, the preferred format for representing structured data on the Web. JSON-LD is a serialization of the RDF data model, which allows it to be much more than just a format.

Here are Manu's videos:


  • What is Linked Data? A short non-technical introduction to Linked Data, Google's Knowledge Graph, and Facebook's Open Graph Protocol.
  • What is JSON-LD? A short introduction to JSON-LD for Web developers, designers, and hobbyists. It covers how to express basic Linked Data in JSON.
  • JSON-LD: Compaction and Expansion An overview of JSON-LD's compaction and expansion features and how you can use them to merge data from multiple sources.
  • JSON-LD: Core Markup An overview of some of the core markup features of JSON-LD including types, aliasing, nesting, and internationalization support.
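To give a concrete feel for the format before (or after) watching, here is a minimal sketch of a JSON-LD document written as a Python dictionary. The person, identifiers, and vocabulary mappings are invented for illustration; the structural pieces shown are the "@context", "@id", and "@type" keywords that turn plain JSON into Linked Data.

```python
import json

# A minimal JSON-LD document, built as an ordinary Python dictionary.
# The "@context" maps short, friendly terms to full IRIs, which is what
# connects plain JSON to the RDF data model.
doc = {
    "@context": {
        "name": "http://schema.org/name",
        "homepage": {"@id": "http://schema.org/url", "@type": "@id"},
    },
    "@id": "http://example.org/people/jane",  # a global identifier for the node
    "@type": "http://schema.org/Person",      # the node's type
    "name": "Jane Example",
    "homepage": "http://example.org/",
}

# Serialized, this is ordinary JSON that any Web developer can consume,
# while Linked Data tools can expand it into RDF triples.
print(json.dumps(doc, indent=2))
```

If a processor is needed, Python libraries such as pyld implement the expansion and compaction operations that the third video covers.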

Monday, January 19, 2015

Book Review: Superintelligence by Nick Bostrom

Superintelligence: Paths, Dangers, Strategies [Goodreads] by Nick Bostrom is a big idea book. The big idea is that the development of truly intelligent artificial intelligence is the most important issue that our generation will face. According to Bostrom, it may be the most important issue the human race has ever faced. This view suggests that how we approach the development of, and response to, AI will be more important than how we respond to nuclear proliferation, climate change, continued warfare, sustainable agriculture, water management, and healthcare. That is a strong claim.
Sales of Bostrom's book have no doubt been helped by recent public comments by super-entrepreneur Elon Musk and physicist Stephen Hawking. Musk, with conviction if not erudition, said
With artificial intelligence we are summoning the demon.  In all those stories where there’s the guy with the pentagram and the holy water, it’s like yeah he’s sure he can control the demon. Didn’t work out.
One almost wishes that Musk didn't live in California. Three months later, he provided ten million US dollars to the Future of Life Institute to study the issue. Bostrom sits on that body's scientific advisory board.
Hawking agrees with Musk and Bostrom, although without the B movie references, saying,
Success in creating AI would be the biggest event in human history. Unfortunately, it might also be the last, unless we learn how to avoid the risks.
Bostrom, Musk and Hawking make some interesting, and probably unfounded, presumptions. This is hardly uncommon in the current public conversation around strong AI. All seem to presume that we are building one or more intelligent machines, that these machines will probably evolve to be generally intelligent, that their own ideas for how to survive will radically differ from ours, and that they will be capable of self-evolution and self-reproduction.
Jeff Hawkins provides the best answer to Elon Musk, Stephen Hawking, and Nick Bostrom that I have read to date:
Building intelligent machines is not the same as building self-replicating machines. There is no logical connection whatsoever between them. Neither brains nor computers can directly self-replicate, and brainlike memory systems will be no different. While one of the strengths of intelligent machines will be our ability to mass-produce them, that's a world apart from self-replication in the manner of bacteria and viruses. Self-replication does not require intelligence, and intelligence does not require self-replication. (On Intelligence [Goodreads], p. 215)
Should we not clearly separate our concerns before we monger fear? The hidden presumptions of self-evolution and self-reproduction seem to be entirely within our control. Bostrom makes no mention of these presumptions, nor does he address their ramifications.
At least Bostrom is careful in his preface to admit his own ignorance, like any good academic. He seems honest in his self-assessment:
Many of the points made in this book are probably wrong. It is also likely that there are considerations of critical importance that I fail to take into account, thereby invalidating some or all of my conclusions.
Beautifully, a footnote at the end of the first sentence reads, "I don't know which ones." It would be nice to see Fox News adopt such a strategy.
Another unstated presumption is that we are building individual machines based on models of our communal species. Humans may think of themselves as individuals, but we could not survive without each other, nor would there be much point in doing so.
We have not even begun to think about how this presumption will affect the machines we build. It is only in aggregate that we humans make our civilization. Some people are insane, or damaged, or dysfunctional, or badly deluded. Why should we not suspect that a machine built on the human model could, indeed would, run the same risk? We should admit the possibility of our creating an intelligent machine that is delusional in the same way that we should admit the mass delusions of our religious brethren.
Is my supposition too harsh? Consider the case of Ohio bartender Michael Hoyt. Hoyt is not known to have had any birth defects, nor to have suffered physical injury. Yet he lost his job, and was eventually arrested by the FBI, after threatening the life of Speaker of the House John Boehner. Hoyt reportedly heard voices that told him Boehner was evil, or the Devil, or both. He suspected that Boehner was responsible for the Ebola outbreak in West Africa. He told police that he was Jesus Christ. Is Hoyt physically ill, or simply the victim of inappropriate associations in his cortex? We have many reasons to suspect the latter.
Bostrom originally spelled his name with an umlaut (Boström), as befits his Swedish heritage. He apparently dropped it at the same time as he started calling himself "Nick" in place of his birth name Niklas. Bostrom lives in the UK and is now a philosopher at St. Cross College, University of Oxford. Perhaps the Anglicization of his name is as much related to his physical location as the difficulty in convincing publishers and, until recently, the Internet Domain Name System, to consistently handle umlauts. His Web site at nickbostrom.com uses simple ASCII characters.
According to Bostrom, we have one advantage over the coming superintelligence. It is a bit unclear what that advantage is. The book's back jacket insists that "we get to make the first move." Bostrom's preface tells us that "we get to build the stuff." I tend to trust Bostrom's own words here over the publicist's, but think that both are valid perspectives. We have multiple advantages after all.
Another advantage is that we get to choose whether to combine the two orthogonal bits of functionality mentioned earlier, self-evolution and self-replication, with general intelligence. Just what would motivate anyone to do so has yet to be explained. Bostrom makes weak noises about the defense community building robotic soldiers, or related weapons systems. He does not suggest that those goals would necessarily include self-evolution or self-replication.
The publisher also informs us on the jacket that "the writing is so lucid that it somehow makes it all seem easy." Bostrom, again in his preface, disagrees. He says, "I have tried to make it an easy book to read, but I don't think I have quite succeeded." It is not a difficult read for a graduate in philosophy, but the general reader will occasionally wish for a dictionary and a Web browser close at hand. Bostrom's end notes do not include his supporting mathematics, but do helpfully point to academic journal articles that do. Of course, philosophic math is more useful for ensuring that one understands an argument than for actually proving it.
Perhaps surprisingly, Bostrom makes scant mention of Isaac Asimov's famous Three Laws of Robotics, notionally designed to protect humanity from strong AI. This is probably because professional philosophers have known for some time that they are woefully insufficient. Bostrom notes that Asimov, a biochemistry professor during his long writing career, probably "formulated the laws in the first place precisely so that they would fail in interesting ways, providing fertile plot complications for his stories." (p. 139)
To be utterly picayune, the book includes some evidence of poor editing, such as adjoining paragraphs that begin with the same sentence, and sloppy word order. I would have expected Oxford University Press to catch a bit more than they did.
Bostrom, perhaps at the insistence of his editors, pulled many philosophical asides into clearly delineated boxes that are printed with a darker background. Light readers can easily give them a miss. Those who are comfortable with the persnickety style of the professional philosopher will find them interesting.
Bostrom does manage to pet one of my particular peeves when he suggests in one such box that, "we could write a piece of code that would function as a detector that would look at the world model in our developing AI and designate the representational elements that correspond to the presence of a superintelligence... If we could create such a detector, we could then use it to define our AI's final values." The problem is that Bostrom doesn't understand the nature of complex code in general, nor the specific forms of AI code that might lead to a general intelligence.
There are already several forms of artificial intelligence where we simply do not understand how they work. We can train a neural network, but we cannot typically deconstruct the resulting weighted algorithm to figure out how a complex recognition task is performed. So-called "deep learning", which generally just means neural networks of more than historical complexity due to the application of more computing power, just exacerbates the problem of understanding. Ask a Google engineer exactly how their program recognizes a face, or a road, or a cat, and they will have no idea. This is equally true in Numenta's Cortical Learning Algorithm (CLA), and will be true of any eventual model of the human brain. Frankly, it is even true of any large software program that has entered maintenance failure, which is almost always an admission by a development team that the program has become too complex for them to reliably change. Bostrom's conception of software is at least as old as the Apple Newton. That is not a compliment.
We will surely have less control over any form of future artificial intelligence than would be required to implement his proposed solution. Any solution will not be as simple as inserting a bit of code into a traditional procedural program.
Critically, Bostrom confuses the output of an AI system with its intelligence (p. 200). This equivalence has been a persistent failure of philosophy. To quote Jeff Hawkins again, who I think sees this particularly clearly,
But intelligence is not just a matter of acting or behaving intelligently. Behavior is a manifestation of intelligence, but not the central characteristic or primary definition of being intelligent. A moment's reflection proves this: You can be intelligent just lying in the dark, thinking and understanding. Ignoring what goes on in your head and focusing instead on behavior has been a large impediment to understanding intelligence and building intelligent machines.
How will we know when a machine becomes intelligent? Alan Turing famously proposed the imitation game, now known as the Turing test, which suggested that we could only know by asking it and observing its behavior. Perhaps we can only know if it tells us without being programmed to do so. Philosophers like Bostrom will, no doubt, argue about this for a long time, in the same way they now argue whether humans are really intelligent. Whatever "really" means.
Bostrom's concluding chapter, "Crunch time", opens with a discussion of the top mathematics prize, the Fields Medal. Bostrom quotes a colleague who likes to say that a Fields Medal indicates that a recipient "was capable of accomplishing something important, and that he didn't." This trite (and insulting) conclusion is the basis for a classic philosophical ramble on whether our hypothetical mathematician actually invented something or whether he "merely" discovered something, and whether the discovery would eventually be made later by someone else. Bostrom makes an efficiency argument: A discovery speeds progress but does not define it. Why he saves this particular argument for his terminal chapter would be a mystery if he had something important to say about what we might do. Instead, he simply tells us to get on studying the problem.
I find that professional philosophers often slip in scale in this way. One moment they are discussing the capabilities and accomplishments of an individual human, generally assumed to be male, and the next they switch to a bird's eye view of our species as if the switch in perspective were justified mid-course. I find this both confusing and disingenuous. It is as if the philosopher cannot bear to view our species from the distance that might yield a more objective understanding.
The actions of individuals, both male and female, are inextricably linked to our cognitive biases. We do not make rational decisions, we make emotional ones, even when we try not to. We make decisions that keep our in-groups stable, by and large. A few, a very few, spend their days trying to think rationally, or exploring the ramifications of rational laws on our near future. A few dare to challenge conventional thinking aimed at in-group stability. Those few are not better than the rest. They are just an outward-looking minority evolved for the group's longer term survival. But the aggregate of our individual decisions looks much like a search algorithm. We explore physical spaces, new foods to eat, new places to be, new ways to raise families, new ways to defeat our enemies. Some work and some don't. Evolution is also a search algorithm, although a much slower one. Our species is where it is because our intelligence has explored more of our space faster and to greater effect. That is both our benefit and our challenge.
The strengths and weaknesses of the professional philosopher's toolbox are just not important to Bostrom's argument. Superintelligence would have been a stronger book if he had transcended them. Instead, it is a litany of just how far philosophy alone can take us, and a definition of where it fails.
I could find no discussion of the various types of approaches to AI, nor how they might play out differently. There are at least five, mostly mutually contradictory, types of AI. They are, in rough historical order:
  1. Logical symbol manipulation. This is the sort that has given us proof engines, and various forms of game players. It is also what traditionalists think of when they say "AI".
  2. Neural networks. Many problems in computer vision and other sorts of pattern recognition have been solved this way.
  3. Auto-associative memories. This variation on neural networks uses feedback to allow recovery of a pattern when presented with only part of the pattern as input (a minimal sketch of this idea appears after the list).
  4. Statistical, or "machine learning". These techniques use mathematical modeling to solve particular problems such as cleaning fuzzy images.
  5. Human brain emulation. Brain emulation may be used to predict future events based on past experiences.
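To make the third category a bit more concrete, here is a minimal sketch of an auto-associative memory in the Hopfield style, written in Python with NumPy. The two stored patterns are arbitrary examples; the point is only that feeding the network a corrupted fragment of a pattern, and looping its output back as its input, recovers the whole.

```python
import numpy as np

# Patterns are vectors of +1/-1. A Hopfield-style auto-associative memory
# stores them in a symmetric weight matrix built from outer products.
patterns = np.array([
    [1, -1, 1, -1, 1, -1, 1, -1],
    [1, 1, 1, 1, -1, -1, -1, -1],
])

n = patterns.shape[1]
W = np.zeros((n, n))
for p in patterns:
    W += np.outer(p, p)
np.fill_diagonal(W, 0)  # no self-connections

def recall(cue, steps=10):
    """Iteratively feed the network's output back in as its input."""
    state = cue.copy()
    for _ in range(steps):
        state = np.sign(W @ state)
        state[state == 0] = 1  # break ties consistently
    return state

# Present a corrupted version of the first pattern (two flipped bits).
cue = patterns[0].copy()
cue[0] = -cue[0]
cue[3] = -cue[3]

print("cue:     ", cue)
print("recalled:", recall(cue))
print("original:", patterns[0])
```

Nothing in this toy resembles Numenta's CLA or whole-brain emulation; it simply illustrates the feedback-based recall that distinguishes this family from feed-forward networks.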
Of these, and the handful of other less common approaches not mentioned, only human brain emulation is currently aiming to create a general artificial intelligence. Not only that, but few AI researchers actually think we are anywhere close to that goal. The popular media has represented a level of maturity that is not currently present.
The recent successes of the artificial intelligence community are much further from general intelligence than one hears from the news media, or even from some starry-eyed AI researchers. There are also good reasons not to worry even if we do manage to create intelligent machines.
Recent news-making successes in AI have been due to the scale of available computing. Examples include the ability for a program to learn to recognize cats in pictures, or to safely drive a car. These successes are impressive, but are wholly specific solutions to very particular problems. Not one of the researchers involved believes that those approaches will lead to a generally intelligent machine. These are tools and nothing but tools. Their output makes us better in the same way that the invention of the hammer or screwdriver, or general purpose computer, made us better. They will not, cannot, take over the world.
Bostrom is, at the end, pessimistic about our chances for survival. Perhaps this is what happens when one spends a lot of time studying global catastrophic risks. Bostrom and Milan M. Cirkovic previously edited a book of essays exploring just such risks in 2011 [Goodreads]. More information is available on the book's Web site. The first chapter is available online. These three paragraphs from Superintelligence anchor his position in relation to AI:
Before the prospect of an intelligence explosion, we humans are like small children playing with a bomb. Such is the mismatch between the power of our plaything and the immaturity of our conduct. Superintelligence is a challenge for which we are not ready now and will not be ready for a long time. We have little idea when the detonation will occur, though if we hold the device to our ear we can hear a faint ticking sound.
For a child with an undetonated bomb in its hands, a sensible thing to do would be to put it down gently, quickly back out of the room, and contact the nearest adult. Yet what we have here is not one child but many, each with access to an independent trigger mechanism. The chances that we will all find the sense to put down the dangerous stuff seem almost negligible. Some little idiot is bound to press the ignite button just to see what happens.
Nor can we attain safety by running away, for the blast of an intelligence explosion would bring down the entire firmament. Nor is there a grown up in sight.
One could imagine the same pessimistic argument being made about nuclear weapons. They must be reined in before "some little idiot" gets his hands on one. Is that not what has happened? The Treaty on the Non-Proliferation of Nuclear Weapons has been a major force for slowing the spread of nuclear weapons in spite of the five countries that do not adhere to its principles. Separate agreements, threats, and sanctions have so far worked just well enough to plug the holes. Grown-ups, from Albert Einstein to the current batch of Strategic Arms Limitation Talks negotiators, have come out of the woodwork when needed. Not only has no one dropped a nuclear bomb since the world came to know of their existence at Hiroshima and Nagasaki, but even the superpowers have willingly returned to small, and relatively low-tech, ways of war.
Bostrom urges us to spend time and effort urgently to consider our response to the coming threat. He warns that we may not have the time we think we have. Nowhere does he presume that we will not choose our own destruction. "The universe is change;" said the Roman emperor Marcus Aurelius Antoninus, "our life is what our thoughts make it." Bostrom might learn to temper his pessimism with an understanding of how humans relate to existential threats. Only then do they seem to do the right thing. He might also observe that unexpected events should not be handled using old tools, as noted by the industrialist J. Paul Getty ("In times of rapid change, experience could be your worst enemy.") or management theorist Peter Drucker ("The greatest danger in times of turbulence is not the turbulence; it is to act with yesterday’s logic.") We will need new conceptual tools to handle a new intelligence.
Our would-be fear mongers also seem certain that any new general intelligence would, as humans are wont to do, wish to destroy a competing intelligence, us. People fear this not because this is what an artificial intelligence will necessarily be, but because that is what our form of intelligence is. Humans have always feared other humans, and for good reason. As historian Ronald Wright noted in A Short History of Progress [Goodreads],
"[P]rehistory, like history, teaches us that the nice folk didn't win, that we are at best the heirs of many ruthless victories and at worst the heirs of genocide."
This raises the fascinating question of how we, as a species, would react to the presence of a newly competitive intelligence on our planet. History shows that we probably killed off the Neanderthals, as earlier human species killed off Homo erectus and our earlier predecessors. We don't play well with others. Perhaps our own latent fears will insist on the killing off of a new, generally intelligent AI. We should consider this nasty habit of ours before we worry too much about how a hypothetical AI might feel about us. If an AI considers us a threat, should we really blame it? We probably will be a threat to its existence.
It is possible that a single hyper-intelligent machine might not even matter much in the wider course of human affairs, just as the natural, generational genius does not always matter. The history of the human race seems to be more dominated by the slow, inexorable march of individual decisions than it is by the, often temporary, upheavals of the generational genius. How would human development have changed if the Persian commander Spithridates had succeeded in killing Alexander the Great at the Battle of the Granicus? He almost did. Spithridates' axe bounced off Alexander's armor. Much has been made of the details, but peoples would still have spread through competition, and contact between East and West would still have eventually occurred. The difference between having a genius and not having a genius can be smaller than we think in the long run.
Bostrom's main point is that we should take the development of general artificial intelligence seriously and plan for its eventual regulation. That's fine, for what it is worth. It is not worth very much, really. We are much more likely to react once a threat emerges. That's what humanity does. Bostrom is at best early at delivering a warning and at worst barking up the wrong tree.

Writer's Notebook - 19 January 2015

Teaching Evolution to Martin Luther King

On this Martin Luther King Day, I would like to explore the differences between the way we see the world and the way it really is. Specifically, we often come to understand human behavior as if it is solely driven by culture. This is just not so.
Consider Dr. King's thoughts on why people hate:
"Men often hate each other because they fear each other; they fear each other because they don't know each other; they don't know each other because they can not communicate; they can not communicate because they are separated." -- Martin Luther King, Jr.
It sounds plausible. The underlying presumption is that we could stop hating if we learn to communicate. Unfortunately, this is only partially so.
Like so much of our cultural conversation, Dr. King's diagnosis fails to take human evolution into account. Our ancestral ground state, life in small hunter-gatherer groups, colors our relationships with other people as much as, perhaps more than, the relatively recent geographical diaspora of our species has colored our skins. We constantly define an in-group, which leaves everyone else in the out-group. To our consternation, we have learned to create multiple, confusing, and sometimes contradictory in-groups. Cubs fans, military organizations, fellow hobbyists, church goers. The list is nearly endless. Ten thousand years ago almost all of us would have lived and died in the same in-group.
My revised version of Dr. King's statement looks like this:
Humans hate outsiders because they fear outsiders. Evolution taught us to fear anyone outside of our group. This can be overridden by creating a culture, by redefining an in-group, but it cannot ever become universal without changing what we are as a species.

Quotes of the day: On responding to technological change

  • "The unleashed power of the atom has changed everything save our modes of thinking, and we thus drift toward unparalleled catastrophes." -- Albert Einstein
  • "Most of our assumptions have outlived their uselessness." — Marshall McLuhan
  • "The greatest danger in times of turbulence is not the turbulence; it is to act with yesterday’s logic." — Peter Drucker
  • "In times of rapid change, experience could be your worst enemy." — J. Paul Getty

Tuesday, January 13, 2015

Writer's Notebook - 13 January 2015

Various epistemological "razors"

  • (William of) Occam's Razor: Among competing hypotheses, the one with the fewest assumptions should be selected. ("Plurality must never be posited without necessity")
  • (Christopher) Hitchens' Razor: "What can be asserted without evidence can be dismissed without evidence"
  • (Robert J.) Hanlon's Razor: "Never attribute to malice that which is adequately explained by stupidity"
  • (David) Hume's Razor: "If the cause, assigned for any effect, be not sufficient to produce it, we must either reject that cause, or add to it such qualities as will give it a just proportion to the effect"
  • (Mike) Alder's Razor (AKA "Newton's flaming laser sword"): "What cannot be settled by experiment is not worth debating."
  • (Ayn) Rand's Razor: "The requirements of cognition determine the objective criteria of conceptualization." (This is Occam's Razor with a corollary: Concepts are not to be multiplied beyond necessity nor are they to be integrated in disregard of necessity.)
  • (Albert) Einstein's razor: "Everything should be made as simple as possible, but no simpler." (possibly originally, "It can scarcely be denied that the supreme goal of all theory is to make the irreducible basic elements as simple and as few as possible without having to surrender the adequate representation of a single datum of experience.")

Quote of the day

"Mathematics stipulates structures by axioms: anything that satisfies the group axioms is a group, etc.. Programming takes given structures and builds new ones out of them, and the basic stock of building blocks is centrally important. Very different ways of thinking." -- Pat Hayes

Sunday, January 11, 2015

Writer's Notebook - 11 January 2015

Religious Freedom in the United States

The Virginia Statute of Religious Freedom was drafted 238 years ago in my town of Fredericksburg, Virginia. It was introduced to the Virginia legislature two years later and finally enacted into law in 1786. The statute formally separated church from state in Virginia and was the model for the First Amendment to the US Constitution (adopted in 1791).
Those who suggest that the principle of religious freedom allows citizens of the United States to choose a religion, but does not protect those who profess no religion whatsoever, should read the statute more carefully. The final paragraph makes clear that in the Commonwealth of Virginia, "all men shall be free to profess, and by argument to maintain, their opinions in matters of Religion". There is no requirement to select a religion.
The statute's echo in the First Amendment is slightly less clear and has been widely debated in the last two centuries. The First Amendment states,
“Congress shall make no law respecting an establishment of religion, or prohibiting the free exercise thereof; or abridging the freedom of speech, or of the press, or the right of the people peaceably to assemble, and to petition the Government for a redress of grievances.”
That short passage guarantees six rights to US citizens and is the basis for many of the "rights" so often taken for granted:
  1. freedom from establishment of a national religion (the "establishment clause")
  2. freedom to freely exercise religious choice (the "free exercise clause")
  3. freedom of speech
  4. freedom of the press
  5. freedom to peacefully assemble
  6. freedom to petition the government for redress of grievances
The Virginia Statute of Religious Freedom is celebrated annually in Fredericksburg. The Annual Fredericksburg Religious Freedom March and Celebration includes a march from the Fredericksburg train station to the statute's monument on Washington Avenue. Other events are generally held in the area. This year's events include a presentation by University of Mary Washington Professor Mary Beth Mathews, who will speak on the topic  "Religious Freedom: Always Approaching, Never Reaching".  There will be a presentation of awards for the three winners of the middle school Importance of Religious Freedom Essay Contest that was sponsored by the University of Mary Washington and Fredericksburg Coalition of Reason.

Quote of the Day

"Human beings are at one and the same time utterly splendid and utterly insignificant." -- James Robertson

Friday, January 09, 2015

Je Suis Charlie

Like so many, I am appalled at the destruction of people and property, and the suppression of ideas, that occurred at the Charlie Hebdo offices in Paris on the 7th of January, 2015. Unlike so many, I cannot accept that the attacks "had nothing to do with religion", a view expressed by a French Muslim today on NPR's Morning Edition. The attacks were carried out by religious extremists for religious reasons. That is, of course, not to suggest that the majority of Muslims condone extremist violence, any more than the majority of Christians, Jews, Hindus or Buddhists condone violence. It is to say that religion, when believed literally, is a powerful and dangerous motivator.

Religion is only dangerous when it is believed.

I also cannot agree with the decision of many news organizations to refrain from publishing the Charlie Hebdo cartoons that are said to have sparked the violence. The cartoons are not, and never were, the issue. The issue is whether freedom of speech trumps religious sensibilities. It simply must do so.

Freedom of speech is a cornerstone of all freedoms in that it allows, and encourages, civil discourse. We accept the risk that it may sometimes cause offense because the benefits strongly outweigh the detriments. Freedom of speech is even more important than freedom of religion in that free speech allows for free expression in religious and other contexts. It is, in fact, only in a climate of free speech that religious tolerance can thrive.

Freedom of speech is more important than the fear of giving offense to others in that there is no end to what may, in some sense, cause offense. I choose not to be stifled by a few fanatics - even if they arm themselves and perpetrate violence.

For those reasons, I have decided to join a few brave (and mostly online) news organizations in republishing one of the Charlie Hebdo cartoons. I do not intend to cause offense. I do intend to stand for the one principle that underlies pluralistic society for the benefit of all.

The cartoon below is entitled "If Mohammed returned" and features the prophet being beheaded by a Muslim extremist. Mohammed is saying, "I am the prophet, fool!". The extremist responds, "Shut up, infidel!". I can think of no more poignant satire of the Charlie Hebdo tragedy than that.


Thursday, January 08, 2015

Writer's Notebook - 8 January 2015

Notes on the Tabula Rasa

The tabula rasa is the philosophic concept that the human mind at birth is blank and without form; adherents of this school hold that experience alone creates a human being.
The term tabula rasa comes from the Latin, which literally means "scraped tablet" and is a reference to a wax tablet used for writing in Roman times. The translation "blank slate" is more commonly heard in modern English.
  • Aristotle recorded the first usage in de Anima (he called it an "unscribed tablet", Book III, chapter 4).
  • ibn Sīnā, known as Avicenna in the West, first used the term tabula rasa in his translation of and commentary on de Anima.
  • John Locke in Essay Concerning Human Understanding (he used the term "white paper" in Book II, Chap 1, Sect 2, and said that "there was in the understandings of men no innate idea of identity" in Book I, Chap 3, Sect 5, and "Whole and part, not innate ideas" in Book I, Chap 3, Sect 6), but see also his (contradictory?) idea that children may learn something in the womb (Book II, Chap 9 Sect 5).
  • Jean-Jacques Rousseau used the idea of the tabula rasa to suggest that humans must learn warfare (18th century).
  • Sigmund Freud used the idea to suggest that personality was formed by family dynamics.
The short course is that the tabula rasa was incredibly important to the historical development of philosophy right into the twentieth century. Unfortunately for those twenty three hundred years of history, the idea was simply wrong in its extreme form.
The philosophical schools contending over the existence and degree of tabula rasa in the human mind are known as Rationalism vs. Empiricism.
The current debate seems to have coalesced around an understanding that human babies are in fact born with innate cognitive biases, and this would seem to negate any idea of the tabula rasa as the term was initially used. However, many philosophers argue (because this is what they do) that of course that is not what was really meant. I think it was exactly what was meant by Aristotle and ibn Sīnā. What Locke and later thinkers thought is much more up for discussion.
I am a rationalist, in that I believe that the Innate Concept Thesis is correct ("We have some of the concepts we employ in a particular subject area, S, as part of our rational nature" -- Stanford Encyclopedia of Philosophy): We are born with cognitive biases that implement those concepts, such as mirror neurons and Theory of Mind. I discussed these features in more detail in my book review of Why We Believe in God(s) by J. Anderson Thomson.
Modern opponents of the tabula rasa include the linguist Noam Chomsky and the psychologist Steven Pinker. Chomsky is known for his theories of rationalist epistemology, including his theory that aspects of language are innate to a newly born child. Pinker claimed in his The Blank Slate: The Modern Denial of Human Nature [Goodreads] that the tabula rasa was responsible for "most mistakes" in modern social science, urging his colleagues to view humanity through the lens of evolution first and forgo preconceived notions gleaned from philosophic thought alone.
The Computational Theory of Mind, the relatively recent idea that human cognition is a form of computation (although implemented in a way very different from an electronic computer), was formulated partly by the rejection of the tabula rasa. The theory was developed primarily by mathematician and philosopher Hilary Putnam, philosopher Jerry Fodor, and extended in recent times by Steven Pinker.
The intellectual heritage of the Computational Theory of Mind can be summarized in the diagram below, where red arrows indicate theories that have been replaced by newer understanding and black arrows indicate theories that still stand:

Quote of the Day

"Pristine prose or voice or funny or a brilliant simile in the first page or a great title or a great character name or authority or what the fuck or whole new world or something intangible but moving or alarming or surprising or terrifying or consoling or titillating or suicidal." -- Betsy Lerner on what makes a "perfect" book manuscript, in an interview by LitStack.

Tuesday, January 06, 2015

Writer's Notebook - 6 January 2015

Relationship Between the Brain and the Mind

Scientists seem to have accepted that the mind is created by the brain, e.g.:
  • "[O]ur brain creates the experience of our self as a model - a cohesive, integrated character - to make sense of the multitude of experiences that assault our senses throughout a lifetime and last lasting impressions in our memory." Bruce Hood, The Self Illusion: How the Social Brain Creates Identity, Oxford University Press, 2012, pp. xiii [Goodreads]
  • "Everything we think, do, and refrain from doing is determined by the brain. The construction of this fantastic machine determines our potential, our limitations, and our characters; we are our brains." Dick Swaab, We Are Our Brains: From the Womb to Alzheimer's, Allen Lane, 2014, pp. 3 [Goodreads].
  • "the human brain, in all its electro-chemical complexity, creates what we call our minds. The neurological functioning of the brain, like the structure and functioning of other parts of the body, is a a human universal." David Lewis-Williams and David Pearce, Inside the Neolithic Mind: Consciousness, Cosmos and the Realm of the Gods, Thames & Hudson, 2005, pp. 6 [Goodreads].
  • "[T]he mind is not the brain but what the brain does, and not even everything it does, such as metabolizing fat and giving off heat." Stephen Pinker, How the Mind Works, Norton, 2009, pp. 24 [Goodreads].
The idea is not new, just newly accepted:
It should be widely known that the brain, and the brain alone, is the source of our pleasures, joys, laughter, and amusement, as well as our sorrow, pain, grief, and tears. It is especially the organ we use to think and learn, see and hear, to distinguish the ugly from the beautiful, the bad from the good, and the pleasant from the unpleasant. The brain is also the seat of madness and delirium, of the fears and terrors which assail us, often at night, but sometimes even during the day, of insomnia, sleepwalking, elusive thoughts, forgetfulness, and eccentricities. -- Hippocrates
But note how recently the scientific establishment has come to accept the thesis that the mind is what the brain does. In 1997, Pinker needed to add a huge and careful caveat to his book:
The evolutionary psychology of this book is, in one sense, a straightforward extension of biology, focusing on one organ, the mind, of one species, Homo sapiens. But in another sense it is a radical thesis that discards the way issues about the mind have been framed for almost a century.
The so-called mind-body problem has been discussed and argued for millennia. Plato thought the two were separate; Aristotle thought they were two aspects of the body. The combination of religion and respect for classical culture confused philosophers on the issue so deeply that real progress was obliged to wait for modern neuroscience in the post-computing era. Even twentieth-century philosophers like John Searle did no better than repeat Aristotle's argument of a false dichotomy.
Today we tend to think of the mind as transient, malleable software running on the brain's hardware. That is a poor analogy, but it is the analogy of our age. Earlier ages used steam-engine or clockwork analogies, and no doubt future ones will choose new analogies. A better way to think about the mind is that it is an emergent property of the brain's general ability to learn, as in Jeff Hawkins' Cortical Learning Algorithm.

Quote of the Day

"If an elderly but distinguished scientist says that something is possible, he is almost certainly right; but if he says that it is impossible, he is very probably wrong." -- Arthur C. Clarke

Saturday, January 03, 2015

Writer's Notebook - 3 January 2015

David Wheeler on Indirection

David Wheeler, the first person to receive a Ph.D. in Computer Science, has been widely quoted on the topic of indirection. The basic quote is "All problems in computer science can be solved by another level of indirection", but many variations exist:
  • The current Wikipedia article ("All problems in computer science can be solved by another level of indirection, except of course for the problem of too many indirections.") quotes a non-authoritative source: Andy Oram and Greg Wilson (2007). Beautiful Code. Sebastopol, CA: O'Reilly. ISBN 0-596-51004-7.
  • The Wikipedia page on Indirection suggests that the corollary "...except for the problem of too many layers of indirection." was added by  Kevlin Henney, not Wheeler. I tend to believe this because I do not recall hearing this ending until after 2008.
  • The Wikipedia talk page contains an important variation reported by Markus Kuhn, who reports speaking to Wheeler prior to his death in 2004: 'He did however stress that he considered the inclusion of the – often omitted – second part "But that usually creates another problem." as significant.' This matches my memory of the original quote.
  • There are also variations which substitute the word "layer" for "level", which Kuhn reports Wheeler as feeling are insignificant.
My preference is therefore for "Every problem in computer science can be solved with another layer of indirection - but that will generally create another problem." The thought is important enough to put a stake in the ground.
Note that Maurice Wilkes, David Wheeler, and Stanley Gill invented the closed subroutine as an implementation of this idea. Compare to an open subroutine ("macro").
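As a toy illustration of both halves of Wheeler's aphorism, here is a minimal Python sketch (the storage backends are invented for the example). Callers reach an implementation through one extra level of indirection, a name looked up in a registry, rather than calling a function directly.

```python
# Two interchangeable implementations (invented for the example).
def save_to_disk(record):
    print(f"writing {record!r} to disk")

def save_to_memory(record):
    print(f"keeping {record!r} in memory")

# The extra level of indirection: a registry mapping names to implementations.
BACKENDS = {
    "disk": save_to_disk,
    "memory": save_to_memory,
}

def save(record, backend="disk"):
    # The indirection solves the original problem: callers are no longer
    # welded to a single implementation...
    handler = BACKENDS[backend]
    # ...but, as Wheeler warned, it usually creates another problem: a
    # misspelled or missing backend name is now a run-time KeyError.
    return handler(record)

save({"id": 1}, backend="memory")
save({"id": 2})
```

The closed subroutine that Wilkes, Wheeler, and Gill invented is the same move one level down: the call jumps indirectly to shared code instead of expanding it inline the way an open subroutine (macro) does.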
Question: Does Wheeler's insight apply to any sufficiently complex system?
Possible answer: Yes, maybe. It applies to software because software is abstracted from its implementation on hardware (see the Church-Turing thesis for proof that software is not tied to an implementation). Is not any complex system (e.g. modern culture) also highly abstract, and possibly abstracted from its implementation in human beings?

Earth's Biosphere

Astrophysicist René Heller suggests that Earth is nearing the end of its time as a habitable planet in an article entitled Planets More Habitable Than Earth May Be Common in Our Galaxy in Scientific American's January 2015 issue.
  • The Earth is 4.54 ± 0.05 billion years old (see Wikipedia article and USGS Age of the Earth page).
  • The earliest (scientifically) undisputed evidence for life on Earth is 3.5 billion years ago (see Abiogenesis Wikipedia article and bibliography).
  • The Sun's radiation output is increasing as it ages, thus slowly pushing the Goldilocks Zone, where water can exist in a liquid form, outward.
  • Within 500 million years, Heller calculates, multi-cellular life will no longer be sustainable.
  • Within 1.75 billion years, water will no longer be able to exist in a liquid state on the Earth's surface, removing the ability of even uni-cellular life to survive there.
Ronald Wright has noted, "Each time history repeats itself, the cost goes up." (A Short History of Progress [Amazon, Goodreads]).
Question: Were humanity to fail, would another intelligent species have time to evolve?
Possible answer: Maybe not. Our lineage split from the chimpanzees' only about six million years ago, yet the earliest mammalian fossils "are dated about 167 million years ago in the Middle Jurassic" (according to Wikipedia). A new species surely could not push the 500 million year date, since environmental pressure on development would presumably be severe long before multi-cellular life ceased to be viable (see the rough arithmetic sketch after this list). Also recall that:
  • There is no such thing as a "ladder" of evolution. Evolution does not direct life toward any goal other than survival in ecological niches. Another intelligent species is not guaranteed to evolve.
  • Humanity is currently causing a mass extinction event. This massive reduction in biodiversity will surely have an impact on any future evolutionary possibilities.
  • We are heating our planet faster than Heller's analysis alone suggests. The human impact on climate may last for 100,000 years even if we stopped now (which we are not). See Deep Future by Curt Stager. We might cause a runaway greenhouse effect, which could significantly limit our time horizon.
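As a rough back-of-the-envelope check on the answer above (all figures are the approximate ones already cited, and the comparison is order-of-magnitude only), the remaining window is nominally only about three times the span from the first mammals to us:

```python
# Order-of-magnitude comparison using the approximate figures cited above.
# All values are in millions of years.
earliest_mammals = 167   # earliest mammalian fossils, Middle Jurassic
remaining_window = 500   # Heller's limit for multi-cellular life on Earth

print(f"From the first mammals to us: ~{earliest_mammals} million years")
print(f"Window left for complex life: ~{remaining_window} million years")
print(f"Ratio of window to that span: {remaining_window / earliest_mammals:.1f}x")
# Nominally about 3x the mammalian run to date -- but only if conditions
# stayed benign, which the brightening Sun (and our own impact on climate)
# make unlikely long before the window actually closes.
```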
We might need to survive as a species, and possibly as a civilization, long enough to get off of the planet. This might be our only chance.

Quote of the Day


"I can explain it to you, but I cannot comprehend it for you." -- New York City Mayor (1978-1989) Ed Koch

Book Review: A Short History of Progress by Ronald Wright

A Short History of Progress [Amazon, Goodreads] by Ronald Wright was published in 2004 alongside Bill Bryson's A Short History of Nearly Everything [Amazon, Goodreads]. Wright and Bryson were hardly the first to summarize human history in a few short pages and will certainly not be the last. They do seem to have been at the forefront of an explosion of such books in the last decade. That outpouring has included such bestsellers as A History of the World in 6 Glasses by Tom Standage [Amazon, Goodreads] in 2006 and A History of the World in 100 Objects by Neil MacGregor [Amazon, Goodreads] in 2010. This year has brought A Short History of the World by Christopher Lascelles [Amazon, Goodreads].
Most of these histories simply summarize the work of many others who have painstakingly wrested our past from the ground, from oral and written traditions, and from newer techniques like genetics and linguistics. Wright actually proposes a new theory of human cultural evolution. He analyses the very nature of our cultural progress from hunter-gatherer groups to our present global monoculture and comes to the conclusion that we are vulnerable for some very understandable reasons. He introduces his concept of a progress trap, in which an invention initially provides great benefits but its use at scale results in greater risk. "[W]hen the bang we can make can blow up our world, we have made rather too much progress." His analysis is strong and, in spite of several nits that are worth picking, his conclusions stand firm.
Wright's book has been adapted into a documentary film called Surviving Progress, executive produced by Martin Scorsese, which is available on streaming services such as Netflix. Wright's Web site is at http://ronaldwright.com/.
Wright, like Winston Churchill ("The farther back you can look, the farther forward you are likely to see") believes that the scrutinization of our past can guide us in our future: "If we can see clearly what we are and what we have done, we can recognize human behavior that persists through many times and cultures. Knowing this can tell us what we are likely to do, where we are likely to go from here."
I both agree and disagree with that, Wright's central thesis. I think we can and should, indeed must, gain the insight to see clearly who we are as a species. Wright says that our cultural history to date has been like "sleepwalking" from crisis to invention to new crisis, and he is correct in that our trajectory has been more an emergent feature of many individual actions than a clear-eyed macroscopic set of policies. Even now we struggle to scale our actions to match the scope of our civilization. However, I do not believe that we dare apply our newfound knowledge to impact only culture. Our stone age brains are just not suited to solving the problems that our sleepwalking has led us to. We have been sleepwalking to the edge of a cliff and should really wake up before we step off. Says Wright,
Like all creatures, humans have made their way in the world by trial and error; unlike other creatures, we have a presence that is so colossal that error is a luxury we can no longer afford. The world has grown too small to afford us any big mistakes.
The problem is that, by Wright's own analysis, we are really very unlikely to operate efficiently for the first time in our history. We will need to upgrade our stone age brain for that to happen. We are now in a race to see which happens first, the end of our global culture or the application of our new-found scientific knowledge to change what we are as a species.
Let's reconsider a quote from Wright that I mentioned earlier: "[W]hen the bang we can make can blow up our world, we have made rather too much progress." That statement is true under at least two conditions. The first is when human decision making is such that someone might actually decide to blow up the world. That was the worry during the Cold War, with its literally insane doctrine of mutually assured destruction (MAD). It is a worry still that the nuclear arsenals of the Cold War not only exist but are constantly maintained and even upgraded. The only thing that has changed is that nuclear forces are no longer routinely kept at their highest levels of alert. We have made too much progress when our stone age brains have not caught up with our ability to create a bigger bang. This applies to politicians as much as to suicide bombers.
The second condition in which we should worry is when a big bang might go off accidentally. We often put more effort into building a big bang maker than into ensuring that it is a safe and maintainable technology. This was the case when the space shuttles Challenger and Columbia disintegrated in flight, and when the RMS Titanic sank beneath the frigid waters of the North Atlantic.
As far as I can tell, Wright explored neither of these two conditions. He simply noted that too much progress could be made. He didn't explain why. That would be a useful topic for a later book. I suggest that changing one or both of those conditions will require a change to the human animal.
Wright notes that our present idea of progress is tightly tied to the Industrial Revolution and its Victorian notion of a "ladder" of progress. Like the notion of a ladder of evolution, it is flawed, and it illustrates a basic difference between our human intuition of the world and the truth of it. We were not destined to develop in the way that we did, either culturally or as a species. Evolution may solve any given problem in a variety of ways.
The idea of progress has become a modern myth. "Myth is an arrangement of the past, whether real or imagined, in patterns that reinforce a culture's deepest values and aspirations... Myths are so fraught with meaning that we live and die by them. They are the maps by which cultures navigate through time." This is our stone age brain at work. We have a primitive need to establish myths as a tool for building communities, and to perpetuate those communities by living by the myths. The abstraction of reality by myth may be a useful technique for a limited brain, but we will need to see reality much more clearly than we currently do if we are to navigate our immediate future without a societal collapse coupled with further mass extinctions and catastrophic global climate change. That is, of course, assuming we also avoid nuclear war and a complete ecological collapse.
Is the situation really that dire? We might like to think not. We might reasonably think that we can avoid catastrophe through continued technical advancement. This is the point made by Steven Levitt and Stephen Dubner in SuperFreakonomics: Global Cooling, Patriotic Prostitutes, and Why Suicide Bombers Should Buy Life Insurance [AmazonGoodreads]. Levitt and Dubner open the sequel to their best-selling Freakonomics: A Rogue Economist Explores the Hidden Side of Everything [AmazonGoodreads] by reminding us of the tragic state of New York City at the turn of the twentieth century. Large cities had reached a practical limit to their growth because horse excrement was created much faster than it could be removed. An amazing five million pounds of the stuff was generated by the equine bellies of New York every day. Levitt and Dubner paint a smelly picture:
In vacant lots, horse manure was piled as high as sixty feet. It lined city streets like banks of snow. In the summertime, it stank to the heavens; when the rains came, a soupy stream of horse manure flooded the crosswalks and seeped into people's basements. Today, when you admire old New York brownstones and their elegant stoops, rising from street level to the second-story parlor, keep in mind that this was a design necessity, allowing a home owner to rise above the sea of horse manure.
It was of course the automobile, and its distant cousin the electric streetcar, that removed the horse from the streets of New York. Levitt and Dubner trumpet our species' ability to solve every problem we face in a similar way. We can simply keep inventing. Wright, far from dismissing this idea, acknowledges it and also points to its built-in limitation: "A seductive trail of successes," says Wright, "may end in a trap." A progress trap. What will happen to our huge cities when we cannot solve the next challenge in time?
It has happened before. Ours is not the first society to face an existential crisis from overusing its resources. The others tell their stories only to archaeologists and historians. One instantly brings to mind Shelley's sonnet Ozymandias, written in 1817 after news reached England of a broken colossal statue of the Egyptian pharaoh Ramesses II:
I met a traveller from an antique land
Who said: "Two vast and trunkless legs of stone
Stand in the desert. Near them, on the sand,
Half sunk, a shattered visage lies, whose frown,
And wrinkled lip, and sneer of cold command,
Tell that its sculptor well those passions read
Which yet survive, stamped on these lifeless things,
The hand that mocked them and the heart that fed:
And on the pedestal these words appear:
'My name is Ozymandias, king of kings:
Look on my works, ye Mighty, and despair!'
Nothing beside remains. Round the decay
Of that colossal wreck, boundless and bare
The lone and level sands stretch far away.
As Wright notes, "Each time history repeats itself, the cost goes up."
We sometimes extrapolate poorly because we cannot see obvious solutions to the problems we have caused. We also sometimes look to the positive when we should not. This was likely the case when a group of climate scientists reported in 2012 that anthropogenic climate change would delay the next ice age. Much media coverage was given to the positive aspects of climate change. "It's an interesting philosophical discussion - 'would we better off in a warm [interglacial-type] world rather than a glaciation?' and probably we would," said one of the authors, Dr. Luke Skinner of Cambridge University, in a BBC interview. "But it's missing the point, because where we're going is not maintaining our currently warm climate but heating it much further, and adding CO2 to a warm climate is very different from adding it to a cold climate. The rate of change with CO2 is basically unprecedented, and there are huge consequences if we can't cope with that." Indeed.
Wright's initial and most powerful examples of progress traps are hunting ("Palaeolithic hunters who learnt how to kill two mammoths instead of one had made progress. Those who learnt how to kill 200 - by driving a whole herd over a cliff - had made too much.") and farming. Along the way he returns often to the one that scares him most: The prehistoric control of fire that led all the way to nuclear weapons.
He does not touch on the one that scares me the most. The so-called Green Revolution has tied the production of our food supply to non-renewable petrochemicals for fertilizer and transportation. I might temporarily look past that if the end result were not a dangerous increase in the number of babies born. Our population has spiked to match our food supply, quite ignoring other constraints such as fresh water and the limits of fish, wild animals, and non-farm plants. Feeding every baby we can make is made obscene by the refusal of the Catholic Church to permit the distribution of birth control in the poorest countries. It is by the much lauded Green Revolution that we hasten our collapse.
Wright attempts to view our species from the outside, as if he were the mythical anthropologist from another planet. His objective stance shows us for the violent ape that we are, and suggests that we are likely responsible for killing off other intelligent hominids. "[P]rehistory, like history, teaches us that the nice folk didn't win, that we are at best the heirs of many ruthless victories and at worst the heirs of genocide." We are what we are, it seems, and culture can only go so far to clean us up.
Wright notes that two civilizations were quite long-lived: the ancient Maya and the ancient Egyptians. Recent evidence suggests that the Mayan civilization collapsed due to climate change. It would seem that their population grew, as in other civilizations, to the limit of the available resources. Climate change has also been suggested as a possible culprit in the demise of the Egyptian New Kingdom. These new findings support Wright's conclusions in a way that he could not have known in 2004.
Wright may or may not have been right about a Neanderthal genocide. He acknowledges that he is not certain. However, his point is well taken about the Neanderthal-Cro-Magnon war being "so gradual that it may have been barely perceptible - a fitful, inconclusive war with land lost and won at the rate of a few miles in a lifetime." We no longer have the luxury of living in the nearly timeless wars of our ancestors. We are suddenly asked to adjust several times in a lifetime to complete cultural shifts - and we are ill suited for it.
Wright and I almost parted ways over his description of early agriculture. The Neolithic Revolution, as the advent of farming has been known since the archaeologist Gordon Childe coined the phrase, has fallen on hard times in the decade since I spent time studying it. My generation was taught in school that the revolution was a single event that occurred in Mesopotamia's Fertile Crescent, and that it happened quickly, in the span of a single generation. Modern scholarship has disabused us of both notions.
Agriculture was indeed the single most necessary condition for the rise of civilization. Agriculture implies settlements. Our diet today consists of the same basic cereals that were farmed first. Wright uses current scholarship to point out that the Neolithic Revolution could not have been an isolated innovation, that it could not have occurred in any single location, and that it was more likely a strategy of desperation.
Our Neolithic forebears, modern humans in every sense of the word save culture, had an intimate understanding of plant life. They carefully observed the passing of the seasons. They knew when and where individual plants passed through their life cycles. There was no single moment of innovation that translated that knowledge into a decision to control the process. Scholars have agreed with this view for some years, starting with the popularization of the concept by the biologist Colin Tudge in his short masterpiece Neanderthals, Bandits & Farmers: How Agriculture Really Began [AmazonGoodreads]. They have been split until recently on whether farming started in one place and spread (the "diffusion" theory) or whether it was independently invented in multiple locations (the "parallel development" theory). I was a staunch diffusionist until reading Wright's book and following through his bibliography.
I have long held diffusionist beliefs for the simple and insufficient reason that I thought it somehow more likely that such an important invention required a sole genius. I must now admit that I was wrong. New findings in the Americas and elsewhere support Wright's contention that farming arose in many places as a reaction to the end of the last ice age, at times that corresponded with the retreat of the ice. The diffusion theory seems to have died the death it deserved. That tells us something fundamental about human inventiveness: it strongly suggests that individual invention is not nearly as important in the course of human affairs as the environmental conditions in which a larger group of people find themselves. This is not simply "geographical determinism", but a recognition of geography's impact on the development of societies. It also speaks to the capability of people to respond to environmental changes in the absence of a generational genius.
Diffusion wasn't necessary because hunter-gatherer groups already knew how to farm. They simply chose not to. They had intimate knowledge of plants, their timings and their ranges. They probably also understood that they could eat better by gathering plants than by growing a few. It would have been beyond obvious to them that hunting would be seriously curtailed by having a fixed settlement.
So why did agriculture come to dominate? Agriculture is a meme, in the sense proposed by Richard Dawkins, who coined the word in The Selfish Gene [AmazonGoodreads]. A meme is "an idea, behavior, or style that spreads from person to person within a culture." Dawkins had in mind a unit of cultural transmission that is parallel to, and in some ways similar to, a gene. A successful meme spreads so well that it comes to dominate a culture.
Agriculture came to dominate for a very simple reason: Farmers do not need to practice infanticide to control their population. Hunter-gatherers and herders always do. Farming populations, no matter how sickly and limited in nutrition, have a lot of babies and keep as many as they can manage. They outbreed hunter-gatherers in a few generations.
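To get a feel for how quickly that reproductive edge compounds, here is a minimal back-of-the-envelope sketch in Python. It is my own illustration, not Wright's, and the fertility figures are assumptions chosen for clarity rather than historical measurements.

    # A back-of-the-envelope model (mine, not Wright's): two equal groups,
    # one farming and one foraging, differing only in surviving children
    # per couple. The numbers are illustrative assumptions.

    def project(start_population, surviving_children_per_couple, generations):
        """Project a population forward one generation at a time."""
        growth_per_generation = surviving_children_per_couple / 2  # two parents
        return start_population * growth_per_generation ** generations

    farmers = foragers = 1_000  # assumed starting sizes

    # Assumption: farming couples raise three surviving children on average,
    # while forager couples hold themselves to replacement (two).
    for generation in (2, 4, 6, 8):
        f = project(farmers, 3.0, generation)
        g = project(foragers, 2.0, generation)
        print(f"generation {generation}: farmers ~{f:,.0f}, foragers ~{g:,.0f}")

With these assumed figures the farmers outnumber the foragers roughly five to one after four generations and about twenty-five to one after eight, without any advantage in health, nutrition, or skill. Compounding alone does the work.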
The hunter-gatherer practice of infanticide by exposure keeps their numbers balanced with their environment. We hear echoes of infanticide by exposure, and its justification as religious sacrifice, in the earliest Western literature. The Pentateuch includes several examples (such as Exodus 1:16, "When you serve as midwife to the Hebrew women and see them on the birthstool, if it is a son, you shall kill him, but if it is a daughter, she shall live."), and Greek legend has Agamemnon sacrificing his daughter Iphigenia to gain favorable winds to sail to Troy. The nomadic herders and early farmers of the Pentateuch and the Homeric epics had more in common with the hunter-gatherer lifestyle than with modern farmers.
Tudge has suggested another important aspect to farming. He pointed out that farming changes the very nature of the predator-prey balance in an ecology:
Today, suburban domestic cats provide the perfect model. They are sustained by Kit-e-Kat and Whiskas, and remain thick on the ground even when the local song-birds and mice decline. Hence they remain a predatory force long after the prey species have become extremely rare. For prey animals in a state of nature rarity is a refuge. But when the predator has a secure, independent food base, mere scarcity is no longer protective.
It was farming that separated humans from the rest of nature. We have considered ourselves separate ever since.
Wright leans rather heavily on the works of Gordon Childe, especially his What Happened in History [AmazonGoodreads]. Childe, a lifelong atheist and dedicated Marxist, is rightly remembered for having placed archaeology on a purely materialistic basis. It was Childe's Marxism that led him to a material understanding of the past ahead of his peers. However, one might reasonably question his contention that human progress has been essentially a class struggle since prehistoric times. It might have been better for Wright to have carefully read Childe's own earlier treatise on human progress, Man Makes Himself [AmazonGoodreads].
Wright might very well be wrong about "most" humans being familiar with constant struggle and starvation. That has certainly been true since agriculture began, but it is unlikely to have been true for hunter-gatherers and, according to the United Nations' World Food Programme, has rapidly become less true since the Green Revolution. Today's population is both the largest in history by far and the best fed since agriculture began, as dire as the situation remains for tens of millions of people in poor countries.
If hunter-gatherers had leisure time, then we need to chip away at another long-held presumption about farming's influence. Perhaps it is not so much that farming's surpluses enabled cultural specialists as that those surpluses allowed specialists to build faster on their own inventions. Any inventor's shop is littered with earlier models. Farmers benefit from specialized tools and, critically, can afford to store them when they are not being used. Nomadic hunter-gatherers are, by every aspect of our current understanding, just as capable of invention as farmers, but they eschew carrying extra tools in the same way that they eschew carrying extra babies.
"The modern human animal - our physical being - is a generalist." Wright tell us. "We have no fangs, claws, or venom built into our bodies. Instead, we've devised tools and weapons - knives, spearheads, poisoned arrows... Our specialization is the brain." This separates us from over-specialized species like the panda or the koala, both of which suffer from over-specializing on single foodstuffs. We are still not able to escape the judgement of history. Nothing does. History is not over for us. Even if we survive, we seem to be taking down most other species with us.
So among the things that we need to know about ourselves is that the Upper Paleolithic period, which may well have begun in genocide, ended with an all-you-can-kill barbecue. The perfection of hunting spelled the end of hunting as a way of life. Easy meat meant more babies. More babies meant more hunters. More hunters, sooner or later, meant less game. Most of the great human migrations across the world at this time must have been driven by want, as we bankrupted the land with our moveable feasts.
Wright's "all-you-can-eat barbecue" is known to anthropologists as the Pleistocene Overkill.
He also does a fine job correcting the historical record where it has been munged by the xenophobes of our recent imperial past. Evidence for some small amount of continued communication between Polynesia and the Americas, for example, serves to illustrate a larger point that trade between "primitives" was often denied out of hand as impossible.
Wright makes a strong case that humans live beyond their means whenever the conditions enable them to do so. It is only when environmental conditions make human life marginal that humans are forced to live close to the Earth. Extant hunter-gatherers live that way because they are forced to. Wright uses two particular examples of this phenomenon to make his case. He analyzes the environmental degradation caused by early farming in the (previously) Fertile Crescent and the religiously driven degradation of Easter Island. 
The example of Easter Island is particularly troubling because it shows that, at least in one place and time, religious beliefs encouraged a people to bring their civilization crashing down. It is amazing that any of them survived. That, at least, is a testament to our tenacity to survive, if not to prosper. Wright rightly points out that, whether or not the Easter Islanders saw it coming, the person who chopped down the island's last tree certainly knew what he was doing.
Two other examples, not in Wright's history, serve to reinforce his point. The harbor of Athens, Piraeus, was originally an island separated from the city by a salty tidal plain; the name of that plain, Halipedon, means "salt field", preserving a societal memory of that older geography. Although modern classical scholars sometimes debate whether the ancient Greeks and Romans understood the impact their activities were having on the environment, I cannot understand why there is any debate. Both Plato and Aristotle recorded complaints about harbor silt and blamed it on agricultural runoff. Perhaps the situation was more like our own in relation to anthropogenic climate change: obvious to a few of the educated elite, while the powers that be were inadequately motivated to change the basis of their wealth and power.
My favorite example of the degradation caused by farming is the desert of Libya, once known, along with western Egypt, as the "granary of Rome". Its fragile grasslands briefly supported bountiful grain harvests. The removal of the loosened topsoil by the ghibli winds put an end to that reputation.
There are surely some who are working hard to keep our planet's environment in balance. We are a distributed species, each of us taking our own actions. Some of us, for moral, ethical, philosophical, or religious reasons, would rather maintain our biosphere than use it up. We attempt to counterbalance the more self-centered individuals who are out for number one. Unfortunately, the latter have held power since the halcyon days of Sumer. It is not by coincidence that environmental organizations like Greenpeace, The Nature Conservancy, and the World Wildlife Fund (WWF) are considered radical and even dangerous by business and political leaders. The Intergovernmental Panel on Climate Change (IPCC), a UN body, was blasted in 2013 for using material prepared by Greenpeace and the WWF. In a fight between advocacy groups, businesses, and governments, one cannot reasonably expect the advocacy groups to win.
There have been some significant environmental victories, most notably the US Clean Air and Clean Water Acts of the 1970s. Unfortunately, there have been significant attempts by the Republican Party to roll back that legislation from 2002 to the present, including a bill passed recently in the US House of Representatives with an impressive, and depressing, majority of 262-152. Economics, it is said, is the science of incentives. Business interests will return to dominate whenever ecology is in anything less than absolute crisis.
Wright seems to have missed mentioning, and seeing the importance of, the Baldwin Effect. Evolution, that "universal acid" as the philosopher Daniel Dennett called it, is the mechanism that created us, and the Baldwin Effect is the evolutionary means by which we acquired our ability to learn. The psychologist James Mark Baldwin proposed the effect in an 1896 paper. Learning, the kind of learning gifted to us by evolution, is the single most important mechanism for extending culture to future generations. It is the Baldwin Effect that allows culture to build upon itself. Wright notes the powerful ratcheting of culture without mentioning the key aspect of evolutionary theory that predicts it. Critically, once a child is born with an advanced ability to learn, it can learn from its parents. This one fact is sufficient, in the extreme form taken by the human race, to create progress, and Wright's progress traps.
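Because the mechanism is easy to caricature in code, here is a toy simulation, loosely in the spirit of Hinton and Nowlan's classic 1987 demonstration rather than anything from Wright or Baldwin themselves. It shows only the first half of the story: when the environment demands a behaviour that genes alone do not supply, individuals who can learn it within their lifetime leave more offspring, so the capacity to learn spreads. All parameters are arbitrary assumptions.

    import random

    # Toy model (my illustration): True = carries the "learner" trait.
    random.seed(1)

    POP, GENERATIONS = 1_000, 30
    LEARN_SUCCESS = 0.8   # chance a learner acquires the needed behaviour
    BASE_FITNESS = 1.0    # reproductive weight without the behaviour
    BONUS = 2.0           # extra reproductive weight with the behaviour

    population = [random.random() < 0.05 for _ in range(POP)]  # 5% learners

    for generation in range(GENERATIONS):
        weights = [
            BASE_FITNESS + (BONUS if learner and random.random() < LEARN_SUCCESS else 0)
            for learner in population
        ]
        # Offspring inherit the learner trait from a parent chosen by fitness.
        population = random.choices(population, weights=weights, k=POP)
        if generation % 10 == 0:
            share = sum(population) / POP
            print(f"generation {generation}: {share:.0%} learners")

Under these assumed numbers the learner trait sweeps the population within a few dozen generations. The second half of the Baldwin Effect, in which learned behaviour eventually becomes genetically assimilated, is not modelled here.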
Yuval Noah Harari, in his 2014 book Sapiens: A Brief History of Humankind [AmazonGoodreads], blames our dangerous situation squarely, if indirectly, on the limitations of the Baldwin Effect. Wright, if he has read Harari, will curse the timing of his publication. "We tell each other stories," Harari said in a recent interview following the book's release. "It does not matter whether they are true." Therein lies our greatest strength and our greatest failure.
Religions illustrate well how people act on shared stories. Harari notes that Christianity became "the world's most successful religion" while being simultaneously "a complete fiction". Our ability to cooperate was greatly enhanced by the development of language, and language enabled us to pass down hard-won lessons from generation to generation. Unfortunately, we cannot tell when someone is lying to us. We also have a difficult time passing down metaphor, which is often taken as literal truth by its recipients and becomes religion. Our stories can thus lead us astray and become maladaptive. It is our belief in these stories, our culture, that can slow our reactions to new threats such as environmental degradation. Our stories and other aspects of group cohesion compete with our intellect to sustain a civilization. When the intellect is right and the stories wrong, we can find ourselves on the path to hell.
We really do sleepwalk through our lives. Modern neuroscience suggests that this should be no surprise, since we are not nearly as conscious as we think we are (cf. The Self Illusion: How the Social Brain Creates Identity by Bruce Hood [AmazonGoodreads]). We are conscious enough, though, to see that we are stepping off a very high cliff. As Wright says, we have kicked out the rungs below us as we have climbed the ladder of progress. Our failure will kill billions. This now seems nearly inevitable - exactly the kind of bad news story that does not sell. Like the Sumerians, whose rulers died or moved on when irrigation salted their farmland, we flirt with total collapse. Like the Easter Islanders, we have nowhere left to run.
Wright correctly notes the large-scale similarities of civilizations worldwide. His depiction of the development of kingship, with its attendant supposition that some people's lives are more valuable than others, is intentionally and eerily reminiscent of extant governments, regardless of their styles, philosophies, or records of human rights abuses. His point that all post-agricultural societies share this concept, and all hunter-gatherer societies reject it, is well taken. It is odd, then, that he fails to contrast the daily life of a notional slave with that of an agricultural serf or of the persistent underclasses of today's cities. He is too willing to suggest that legal slavery belongs to the past when recognizable forms of it are institutionalized in most modern states. For all of Wright's dire warnings, he is perhaps not as pessimistic as he should be.
The astronomer Carl Sagan once said, "We are a way for the cosmos to know itself." He very carefully did not say the way. It really is up to us whether we choose to continue to sleepwalk through our daily lives while vast historical cycles knock us back to the stone age, or whether, like Hamlet, we dare take arms against the sea of our troubles. That sea is all the deeper for being buried within what we are as a species.