Sunday, October 19, 2014

Book Review: On Intelligence by Jeff Hawkins

On Intelligence [Amazon, Goodreads] purports to explain human intelligence and point the way to a new approach toward artificial intelligence. It partially succeeds on the former and knocks it out of the park on the latter.
This is the only book that Jeff Hawkins has written. Silicon Valley insiders may remember Hawkins as the creator of the PalmPilot back in the 1990s; when the owners restricted his vision, he left to create Handspring. Both companies made a lot of money, which is all that matters on the Sand Hill Road side of Silicon Valley. The tech side of the Valley cares more about the fact that Hawkins succeeded in the handheld computing market where Apple's legendary Newton had failed.
Hawkins' journalist co-author Sandra Blakeslee, on the other hand, has an Amazon author page that scrolls and scrolls. She has co-authored ten books, several of which relate to the mind, consciousness and intelligence. Her most recent book, Sleights of Mind: What the Neuroscience of Magic Reveals About Our Everyday Deceptions, was published in 2011 with neuroscientists Stephen L. Macknik and Susana Martinez-Conde and was an international best seller. She has seemingly made a career out of helping scientists effectively communicate thought-provoking ideas.
Hawkins focuses all of his attention on uncovering the algorithm implemented by the human neocortex. Where that is impossible due to lack of agreement or basic science, he makes some (hopefully) reasonable assumptions and proceeds without slowing down. That will strike most neuroscientists as inexcusable. It makes perfect sense to an engineer.
Albert Einstein once said, "Scientists investigate that which already is; Engineers create that which has never been." Or, to quote myself, scientists look at the world and ask, "How does this work?" Engineers look at the world and say, "This sucks! How can we make it better?" There is a fundamental difference in philosophy required of scientists and engineers. Hawkins proves himself an engineer through and through, even when he bends over backward in an attempt to do some science.
There is a particularly useful review on Goodreads that drives a crowbar through the core of the book as if it were the left frontal lobe of Phineas Gage. The reviewer, who goes solely by the name Chrissy, rightly points out Hawkins' overfocus on the neocortex.
It became clear that Hawkins was so fixated on the neocortex that he was willing to push aside contradictory evidence from subcortical structures to make his theory fit. I've seen this before, from neuroscientists who fall in love with a given brain region and begin seeing it as the root of all behaviour, increasingly neglecting the quite patent reality of an immensely distributed system.
Chrissy is correct. Hawkins' work is nevertheless critically important. Although the cortex is without doubt only part of the brain and only part of the "seat" of consciousness, his work to define a working theory of the "cortical learning algorithm" has led directly to a new branch of machine learning. It is one that has borne substantial fruit since the book's 2004 debut.
It shouldn't surprise anyone that Hawkins' reviewers confuse science and engineering. Professionals are often confused about the separation themselves. Any such categorization is arbitrary and people have the flexibility to change their perspective, and thus their intent, on demand. To make matters worse, computer science is neither about computers nor science. It is the Holy Roman Empire of the engineering professions. Computer science involves the creation and implementation of highly and increasingly abstract algorithms to solve highly and increasingly abstract problems of information manipulation. It is certainly different from computer engineering, which actually does involve building computers, and it is also generally different from its subfield software engineering. Of course reporters and even scientists get confused.
Writing On Intelligence has not made Hawkins into a neuroscientist. That does not seem to have been his goal. Hawkins' goal was to build a more intelligent computer program - one that "thinks" more like a human thinks. His explorations of the human brain have had that goal constantly in mind.
Hawkins himself states his goal differently, but I stand by my interpretation. Why? Consider what he says (pp. 90):
What has been lacking is putting these disparate bits and pieces into a coherent theoretical framework. This, I argue, has not been done before, and it is the goal of this book.
That makes him sound like a scientist. But he went on to do exactly what I claim. He described a framework and then implemented it as a computer program. That's engineering.
It seems almost strange that it took fully five years from the book's publication for Hawkins' group at the Redwood Neuroscience Institute (now called the Redwood Center for Theoretical Neuroscience at UC Berkeley) to publish a more technical white paper detailing the so-called cortical learning algorithm (CLA) described in the book. The white paper provides sufficient detail to create a computer program that works the way that Hawkins understands the human neocortex to work. Again surprisingly, another four years passed before an implementation of that algorithm became available for download by anyone interested. The Internet generally works faster than that when a good idea comes along. The only reasonable explanation is that a fairly small team has been working on it.
You can, since early 2013, download an implementation of the CLA yourself and run it on your own computer to solve problems that you give it. Programmers normally love this sort of thing. It is interesting to note that the Google self-driving car uses exactly the traditional artificial intelligence techniques that Hawkins denigrates in his first chapter. Hawkins may have come too late for easy acceptance of his ideas. There are entrenched interests in AI research and Moore's Law ensures that they can still find success with their existing approaches. A specialist might note that the machine learning algorithms in the Google car have stretched traditional neural networking well beyond its initial boundaries and toward many of the aspects described by Hawkins, without ever quite buying into his approach.
The implementation is called the Numenta Platform for Intelligent Computing (NuPIC). It is dual licensed under a commercial license and the GNU GPL v3 Open Source license. That means that you can use it for free or they will help you if you want to pay. You can choose.
Hawkins lists and briefly critiques the major branches of artificial intelligence, specifically expert systems, neural networks, auto-associative memories and Bayesian networks. He is right to criticize all of them for not having looked more carefully at the brain's physical structure before jumping to simple algorithmic approaches. The closest of the lot is perhaps neural networks, which is notionally based on composing collections of software-implemented "neurons". These artificial neurons are rather gross simplifications of biological neurons, and the networks, with their three-tier structure, are poor substitutes for the complex relationships known to exist in the brain of even the most primitive animals.
Still, the timing of Hawkins' book was unfortunate in that its publication occurred at the beginning of our current golden age of neuroscience. AI is back and AI research is suddenly well funded again. So-called deep learning networks currently contain many more than the three traditional layers, up to eight or even more. IBM has recently moved neural networks to hardware with the announcement of their SyNAPSE chip that "has one million neurons and 256 million synapses" implemented in silicon. AI approaches of all kinds are currently blooming and are being applied to everything from voice and facial recognition to automatically filling spreadsheet cells to autonomous robots. There is currently less reason for the AI community to investigate, or lobby for hardware implementing, a brand new general approach. None of that makes Hawkins wrong. The human brain is still the only conscious system we know of and neuroscience is still doing a bad job of looking at its structures from the top down.
My largest single criticism of On Intelligence is that the cortex Hawkins describes is a blank slate, also called a tabula rasa. We know that the human brain is not. The idea that a mind is empty until filled solely by experience dates back at least to Aristotle. The Persian philosopher Ibn-Sīnā, popularly called Avicenna in Europe, a name still taught in Western universities, coined the term tabula rasa a thousand years ago as he interpreted and translated Aristotle's De Anima. We have known for decades that we are born with a number of innate functions, such as facial perception, so the brain is not a blank slate. Other animals have their own innate behavior, such as the fear that many bird species have for the shape of a hawk. Hawkins does address the changing nature of brain function during life but does not even peripherally describe how innate functions fit into his theory.
Hawkins is often criticized for failing to provide a collated list of his assumptions. They are indeed buried in the prose. He does, however, follow the book's last chapter with an appendix that lists eleven predictions. They are all testable given the right science. Scientists are explicitly asked to validate or repudiate those predictions. A decade later, I am not aware of a comprehensive attempt to do so.
I have attempted to find all of Hawkins' presumptions and have listed them here in the hope that they will help both other reviewers and neuroscientists who might pick away at them. All page numbers are from the 2004 St. Martin's Griffin paperback edition. All indications of emphasis are in the original text unless otherwise marked. The assumptions generally flow from the highest level of abstraction to the lowest, as Hawkins' own presentation mostly does.
1. "We can assume that the human neocortex has a similar hierarchy [to a monkey cortex]" pp. 45. This one not only seems reasonable but is an assumption held by many scientists. It is in line with the many independent threads of evidence from evolutionary theory. Hawkins was intentionally careful when he used the word "similar".
2. "We don't even have to assume the cortex knows the difference between sensation and behavior, to the cortex they are both just patterns." pp. 100. This is actually a negative assumption in that he is not making one. This kind of thinking, determining what assumptions are necessary to a system, is in keeping with Hawkins' coding background. It is an engineering necessity.
3. "Prediction is not just one of the things your brain does. It is the primary function of the neocortex, and the foundation of intelligence." pp. 89. This is Hawkins' central idea and the one that informs not only the book and the implementation of NuPIC but the philosophic approach to his understanding of the brain and its functions. Hawkins relates the traditional AI approach of artificial auto-associative memories and declares, "We call this chain of memories thought, and although its path is not deterministic, we are not fully in control of it either." pp. 75. He proposes that "the brain uses circuits similar to an auto-associative memory to [recall memories]" pp. 31.
Here is also where Hawkins is forced to leave the cortex and venture into its relationships with another area of the brain. He notes the large number of connections between the cortex and the thalamus and the delay inherent in passing signals that way. He declares that the cortex-thalamus circuit is "exactly like the delayed feedback that lets auto-associative memory models learn sequences." pp. 146. He is onto something here, but one must question his oversimplification. The thalamus is also known to be involved in the regulation of sleep and thus almost assuredly implements more than just a delayed communication loop with the cortex.
Eventually he is able to bring his prediction model into sharp focus: "If the cortex saw your arm moving without the corresponding motor command, you would be surprised. The simplest way to interpret this would be to assume your brain first moves the arm and then predicts what it will see. I believe this is wrong. Instead I believe the cortex predicts seeing the arm, and this prediction is what causes the motor commands to make the prediction come true. You think first, which causes you to act to make your thoughts come true." pp. 102. This focus on the predictive nature of the neocortex is key to Hawkins' understanding. Either the neocortex implements an algorithm really quite similar to the CLA as described by Hawkins, and is therefore a "memory-prediction framework", or he has got it wrong. The predictive abilities of NuPIC suggest that he is on the right track in spite of his many assumptions.
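To make the auto-associative idea concrete, here is a minimal Hopfield-style sketch of pattern completion. It is my own illustration of the general technique Hawkins invokes, not his cortical learning algorithm and not NuPIC code; the patterns and sizes are invented for the example.

    import numpy as np

    def train(patterns):
        """Build a Hebbian weight matrix from bipolar (+1/-1) patterns."""
        n = patterns.shape[1]
        w = np.zeros((n, n))
        for p in patterns:
            w += np.outer(p, p)
        np.fill_diagonal(w, 0)              # no self-connections
        return w / len(patterns)

    def recall(w, cue, steps=10):
        """Settle from a partial or noisy cue toward the nearest stored pattern."""
        state = cue.copy()
        for _ in range(steps):
            state = np.sign(w @ state)
            state[state == 0] = 1
        return state

    stored = np.array([[1, -1, 1, -1, 1, -1, 1, -1],
                       [1, 1, 1, 1, -1, -1, -1, -1]])
    w = train(stored)
    noisy = stored[0].copy()
    noisy[:2] *= -1                         # corrupt part of the cue
    print(recall(w, noisy))                 # settles back to the first stored pattern

The point is only that a memory which stores patterns can later complete them from partial input, which is the behavior Hawkins says the cortex-thalamus loop extends to sequences over time.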
4. Hawkins makes two interesting and useful assumptions for the purposes of developing a top down theory: "For now, let’s assume that a typical cortical area is the size of a small coin" pp. 138 (he does acknowledge there is substantial variation), and "I believe that a column is the basic unit of prediction" pp. 141. Why does it matter to Hawkins how large a cortical area is, much less a typical one? It shouldn't matter to a typical neuroscientist. They take the anatomy the way they find it. Remember though that Hawkins' purpose is to build a more intelligent computer program. He betrays his intent in making assumptions that all cortical regions have fundamentally the same structure (in spite of minor variations that he readily admits are in the literature) and in setting a typical size for an area of cortex. These assumptions will help him to design a computer program that learns in a new way. He is on better footing with the purpose of a cortical column. Cortical columns are indeed very regular in their construction and distribution, a fact that Hawkins dug out of 1970s research and relies upon heavily. It is striking and probably key to any successful high-level theory.
From this point forward Hawkins' assumptions get progressively more technical as he moves toward something that he can implement using existing technology. This may be the most important criticism of On Intelligence even though I personally find it perfectly excusable. Those seeking new neuroscience will be disappointed. Those seeking new and more general ways to approach artificial intelligence will be rapt.
Any review attempting to list Hawkins' more technical assumptions will need to pause to introduce new vocabulary for the general reader. A cortex, animal or human, is the outer layer of the brain. It is folded into ridges and valleys to increase its surface area within the small space afforded it in the skull. Its basic structure is a "cortical column" of six layers. The human brain has "some 100,000 neurons to a single cortical column and perhaps as many as 2 million columns." The Blue Brain Project of the Brain and Mind Institute of the École Polytechnique Fédérale de Lausanne in Switzerland is currently attempting to model a complete brain, or at least the cortex. They have already succeeded in modeling a rat's cortical column. This is much more than Hawkins attempted, but a top-level theory of cortical function has yet to emerge from the project.
The six layers of a cortical column have many connections to other layers, other columns, other regions of the cortex and other areas of the brain. It is a complex network. Each layer consists of differently shaped cells. Hawkins collected the many, many neurons in a cortical column into functions at each layer. That alone may be a very valuable contribution if it can be shown that such an abstraction is possible without sacrificing higher-level function.
It will be useful and fascinating to see what emerges from a study of the Blue Brain Project's cortical column models. In the meantime, Hawkins has provided us with a roadmap of questions to ask.
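Before returning to Hawkins' list, it is worth sketching what "collecting neurons into functions at each layer" might look like to a programmer. This is purely my own illustration; the layer names and behaviors are placeholders, not claims about real physiology or about Hawkins' implementation.

    from dataclasses import dataclass
    from typing import Callable, List, Tuple

    @dataclass
    class Column:
        """A cortical column abstracted as a few per-layer functions
        rather than ~100,000 individual neurons."""
        layers: List[Tuple[str, Callable[[set], set]]]

        def step(self, feedforward_input: set) -> set:
            signal = feedforward_input
            for _name, fn in self.layers:       # apply each layer in order
                signal = fn(signal)
            return signal

    # Hypothetical layer roles: layer 4 receives input, layers 2/3 pool it,
    # layers 5/6 pass the result onward.
    column = Column(layers=[
        ("L4_receive", lambda s: s),
        ("L23_pool",   lambda s: {x // 2 for x in s}),   # crude pooling
        ("L56_output", lambda s: s),
    ])
    print(column.step({2, 3, 8, 9}))    # -> {1, 4}

The design choice being illustrated is the trade Hawkins makes throughout: collapse biological detail into a small number of functional pieces so that something can actually be built and tested.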
5. Noting the obvious disparity between streams of sensory inputs and highly abstract thought, Hawkins illustrates how a hierarchical set of relationships between cortical areas could produce abstractions ("invariant representations") at the higher levels. "The transformation—from fast changing to slow changing and from spatially specific to spatially invariant—is well documented for vision. And although there is a smaller body of evidence to prove it, many neuroscientists believe you’d find the same thing happening in all the sensory areas of your cortex, not just in vision." pp. 114. Hawkins goes on to take this as written, which is just what he needs to do in the absence of established science in order to build a system.
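A toy example may help show what "fast changing to slow changing" means in code. This is my own illustration of the pooling idea, not Hawkins' algorithm; the input string and window sizes are arbitrary.

    def pool(sequence, window):
        """Summarize each non-overlapping window as an order-free set."""
        return [frozenset(sequence[i:i + window])
                for i in range(0, len(sequence), window)]

    def changes(seq):
        """Count how often the representation changes from step to step."""
        return sum(1 for a, b in zip(seq, seq[1:]) if a != b)

    level0 = list("ababababab" "cdcdcdcdcd")   # fast-changing "sensory" input
    level1 = pool(level0, 2)                   # a higher region pooling over it
    level2 = pool(level1, 5)                   # higher and more invariant still

    print(changes(level0), changes(level1), changes(level2))   # 19 1 1

Each level changes less often than the one below it, which is the flavor of "invariant representation" Hawkins needs from his hierarchy.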
6. Continuing with the vision system, possibly the best-studied part of the brain to date, Hawkins discusses some of the key regions that neuroscientists call V1, V2 and so on. He says, "I have come to believe that V1, V2, and V4 should not be viewed as single cortical regions. Rather, each is a collection of many smaller subregions." pp. 122. Hawkins is making a rather classic reductionist argument here. The question is not how arbitrary regions are defined or what they are called. The problem in front of our engineer is how they are connected. He needs that information to make reasonable (not necessarily physiologically accurate) assumptions if he is to uncover the mechanisms of the brain's learning system.
7. A region of cortex, says Hawkins, "has classified its input as activity in a set of columns." pp. 148. It is hard to argue with this suggestion given the success of Hawkins' artificial CLA in making predictions without the traditional training necessary to other forms of AI. Further, the cortex gets around limits on variation handling found in early artificial auto-associative memories, "partly by stacking auto-associative memories in a hierarchy and partly by using a sophisticated columnar architecture." pp. 164.
8. There are several assumptions about the detailed workings of a cortical column. "Let's also assume that one class of cells, called layer 2 cells, learns to stay on during learning sequences", says Hawkins (pp. 152). He makes no judgement as to whether that "learning" is innate or acquired during life. He doesn't even know that such cells are really there. Something like them must exist for his theory to work. That is no criticism! It is instead a testable hypothesis and thus the very model of scientific advancement. It also allows him to build something.
"Next, let’s assume there is another class of cells, layer 3b cells, which don’t fire when our column successfully predicts its input but do fire when it doesn’t predict its activity. A layer 3b cell represents an unexpected pattern. It fires when a column becomes active unexpectedly. It will fire every time a column becomes active prior to any learning. But as a column learns to predict its activity, the layer 3b cell becomes quiet." pp. 152. This might seem unjustified. What would make Hawkins jump to a conclusion in the apparently complete absence of supportive science. The answer is that the engineer clearly sees the necessity of feedback when it is presented to him. There simply must be a mechanism that fills the role or no learning could occur. Hawkins merely suggests a reasonable place for it and encourages the neuroscience community to look for it.
As for the lowest level, layer 6: "cells in layer 6 are where precise prediction occurs." pp. 201.
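The layer 3b idea is easy to caricature in code. Here is a toy "surprise cell" attached to a first-order sequence memory: it fires whenever the column becomes active without having been predicted and goes quiet as the sequence is learned. Again, this is my own sketch, not the CLA's actual mechanism.

    from collections import defaultdict

    class ToyColumn:
        def __init__(self):
            self.transitions = defaultdict(set)   # learned "A is followed by B"
            self.prev = None

        def step(self, pattern):
            predicted = (self.prev is not None
                         and pattern in self.transitions[self.prev])
            surprise = not predicted              # the layer-3b-like signal
            if self.prev is not None:
                self.transitions[self.prev].add(pattern)   # learn the transition
            self.prev = pattern
            return surprise

    col = ToyColumn()
    melody = ["do", "re", "mi"] * 3
    print([col.step(note) for note in melody])
    # Early steps are all surprise; once the sequence has been seen, it is
    # predicted and the "layer 3b" signal falls silent.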
9. Finally, Hawkins rightly notes some differences between biological neurons and the artificial neurons used in neural networking models. It makes one wonder what IBM implemented on their SyNAPSE chip. How biologically correct is it? Hawkins says, "neurons behave differently from the way they do in the classic model. In fact, in recent years there has been a growing group of scientists who have proposed that synapses on distant, thin dendrites can play an active and highly specific role in cell firing. In these models, these distant synapses behave differently from synapses on thicker dendrites near the cell body. For example, if there were two synapses very close to each other on a thin dendrite, they would act as a 'coincidence detector.' That is, if both synapses received an input spike within a small window of time, they could exert a large effect on the cell even though they are far from the cell body. They could cause the cell body to generate a spike." pp. 163. This is exactly the sort of thing that can have great biological effect and cause great trouble for overly simplistic implementers. It would seem that Hawkins was careful to avoid this oversimplification even while embracing others.
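The coincidence-detector behavior is also easy to sketch. In this toy version, two synapses on the same thin dendrite only drive the cell when their spikes arrive within a small time window; the timings and threshold here are invented purely for illustration.

    def coincidences(spike_times_a, spike_times_b, window=2.0):
        """Times (ms) at which both synapses fired within `window` of each other."""
        return [(ta, tb)
                for ta in spike_times_a
                for tb in spike_times_b
                if abs(ta - tb) <= window]

    synapse_a = [1.0, 10.0, 25.0]
    synapse_b = [1.5, 18.0, 26.5]
    print(coincidences(synapse_a, synapse_b))
    # [(1.0, 1.5), (25.0, 26.5)] -> only these paired spikes would drive the cell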
Hawkins has also uncovered something really quite important and almost painfully subtle. Philosophers of mind, psychologists and priests have for centuries argued that the mind is fundamentally different from the body. We moderns have become comfortable with considering huge swaths of the body as mechanistic in nature. We can replace an arm, a leg, a kidney, even a heart for a while. We can insert a pacemaker, or a hearing aid. Surgery can cut, sew and sometimes almost magically repair, replace or augment much of our bodily infrastructure. As a natural result, we tend to view the body as a mechanism, however complicated. The brain, though, the mind, is a different matter. All the neuroscience conducted to date fails to convince most of us that the brain implements an algorithm. We cannot, so it is said, be reduced to an algorithm because that would imply that we could - one day - make a machine with all the abilities of people. Perhaps it would need to have all the rights, too. That scares people badly.
Parts of the brain have come to be accepted as algorithmic. Are you aware that a computerized cerebellum has been created for a rat? That was in 2011. Scientists and engineers are starting to soberly discuss creating such a device for paralyzed human beings.
The slow, painfully slow, admission that the body is a series of devices each of which chemically implement algorithms has been a long time coming. Parts of the brain have now unarguably fallen to the algorithmic worldview. First the ears, the eyes, the entire vision system. The cerebellum. The pineal gland. Hormonal balances. Most of the pons. Hawkins takes on the neocortex and, in spite of Chrissy's complaint, he did find it necessary to include the thalamus in his model. The bottom line is that the cortical learning algorithm is an algorithm. Philosophers of mind fear such a finding.
The idea that thinking is a form of computation dates from 1961, when Hilary Putnam first expressed it publicly. It has become known as the Computational Theory of Mind, or CTM. Although CTM has its detractors (especially John Searle's Chinese Room, although that has been debunked to my personal satisfaction), it has become the basis for current thinking in evolutionary and cognitive psychology. The so-called new synthesis of CTM is roughly a combination of the ideas of Charles Darwin's evolution, mathematician Alan Turing's universal computation and limits to computability proofs, and linguist Noam Chomsky's rationalist epistemology. The basic idea is still the same: that human thought in human brains is algorithmic, even if the algorithms are quite complex and we haven't fully deconstructed them. The new synthesis is about proving that theory.
"The dissociation between mind and matter in men and machines is very striking", observed David Berlinski in his book The Advent of the Algorithm[AmazonGoodreads], "it suggests that almost any stable and reliable organization of material objects can execute an algorithm and so come to command some form of intelligence."
We know what to do with algorithms. We implement them. It doesn't really matter how. We can implement algorithms in computer software or by creating DNA from a vat of chemicals or by lining up sticks and stones in clever ways. The only difference is the efficiency of the implemented algorithm. Electronic computers give us a way to perform calculations - implement algorithms - blindingly fast but they aren't the fastest way to implement all algorithms. Optical computers can do some things faster. Bodily chemistry, too. Or quantum computing. Each is just another way to implement algorithms be they designed by people or discovered by the search algorithm that we call evolution.
Discovering that the brain is algorithmic is arguably the most important realization of this or any other century. It means we can make more of them by any means we choose. That will shatter many world views even if Hawkins only got us part way there.

Tuesday, October 14, 2014

Book Review: Freethinkers: A History of American Secularism by Susan Jacoby

Freethinkers: A History of American Secularism [Amazon, Goodreads] is exactly what it claims to be: a major piece of the missing history of secular thought, heretofore diligently, and thankfully incompletely, suppressed. Jacoby has joined Jennifer Michael Hecht, author of Doubt: A History: The Great Doubters and Their Legacy of Innovation from Socrates and Jesus to Thomas Jefferson and Emily Dickinson [Amazon, Goodreads], and Christopher Hitchens, author of The Portable Atheist: Essential Readings for the Nonbeliever [Amazon, Goodreads], as one of the preeminent historians of nonbelief.
Negative reviews of Freethinkers invariably say that the book is "a slog", "packed with too much information" or describe Jacoby's writing style as "condescending". They are not completely without basis. Later chapters veer from the impassioned and erudite opening in which Jacoby, at her best, quotes the prominent nineteenth-century orator Robert Ingersoll as saying, "We have retired the gods from politics", and immediately contrasts that sentiment sharply with President George W. Bush's post-9/11 sermon thunderingly delivered from the pulpit of the National Cathedral.
As Jacoby sings the praises of the secular founding of the United States, she fails to follow up on the irony of the National Cathedral itself. Why does a secular government have a National Cathedral? The answer goes back almost to where Jacoby's history starts: 1792. That is when the architect of Washington, D.C., Pierre L'Enfant, set aside a place for a central church, prominently located between the Congress and the White House. Congress itself chartered the building of the cathedral in the late nineteenth century and has designated the building as the "National House of Prayer". The Congressional mandate was made in spite of the building being operated by the Episcopal Church and owned by the Protestant Episcopal Cathedral Foundation. Our nationwide conversation regarding the separation of church and state has never been fully resolved.
How many of the positive reviews of Freethinkers (sample: "A must-read for freethinkers!!") are due to the nearly complete removal of freethinking from history books adopted as texts in US schools?  Perhaps many atheists, agnostics and freethinkers have been simply stunned to discover that their thoughts should not make them lonely. I certainly have been.
Stealing history from a subculture does not make them love you. Yet it doesn't keep rulers from trying. Other modern examples of minority cultures losing their history include Australia's shameful Stolen Generations, the kidnapping of children by Nazi Germany for the purpose of "Germanification", and stories of European Jews hiding their heritage from their children (who sometimes regained it). Earlier examples abound, especially in areas once controlled by native Americans, conquered in war or folded into empires. The hiding of American secularism from new generations hardly approaches these extremes. Nevertheless, a shock of recognition is bound to occur when the light is eventually allowed to shine.
Jacoby sometimes gets so close to her subject that she forgets that the rest of us do not command her source material as well as she. She refers to a petition signed by 'four hundred Quakers, wittingly signed "your real Friends".' The joke may well be lost on those coastal Americans unfamiliar with the Religious Society of Friends, who are only colloquially called Quakers for their founder's admonition to "tremble at the word of the Lord".
I am nearly forced at this point to pause long enough for the only Quaker joke that I know (perhaps because it is one of very few):
A Quaker farmer is milking his cow. The cow had been walking through brambles and has a tail full of burrs. The cow whacks the farmer across the face with her tail. The farmer shakes his head and continues to milk. The half-ton cow then steps on the farmer's foot. The farmer puts his shoulder into the cow, pushes and extracts his foot. When the milking is finished, the farmer stands up. The cow kicks over the bucket of milk.
The farmer looks at the spilled milk and then walks around to look at the cow in her eye. "Thee knows," says the farmer, "that I may not strike thee. And thee knows that I may not curse thee. But what thee does not know is that I may sell thee to a Methodist."
It may be difficult for us moderns to comprehend the subtlety and difficulties inherent in that recitation. What was it like in Western Europe or the fledgling United States at a time when Catholics and Jews were considered heathen alongside the Deists, so prevalent in the countryside, scores of minor Protestant sects, outright atheists and the smattering of Asian faiths seeping in during the imperial precursor to globalization? So many sects abounded in the early US that it was in almost everyone's best interest to keep the others from gaining too much power. Our situation today is so different partly due to the widespread majority enjoyed by evangelical Protestant Christianity. Evangelicals have forgotten, as Jacoby points out, that they were the natural political allies of atheists during the contentious negotiations leading to the Constitution. More to the point, they have fallen into the desire to wield their powerful majority when they have it. John Adams coined the phrase "tyranny of the majority" in 1788 and he knew exactly what he was talking about.
Those interested in learning more about Jacoby's star of the show, Robert Ingersoll, should know that Jacoby recently reprised her biosketch in Freethinkers with a complete biography. The Great Agnostic: Robert Ingersoll and American Freethought [Amazon, Goodreads] has garnered the same high marks as its predecessor. Jacoby could not have initially benefited from Pulitzer Prize-winning journalist Tim Page's distillation of Ingersoll's work in What's God Got to Do with It?: Robert Ingersoll on Free Thought, Honest Talk and the Separation of Church and State [Amazon, Goodreads] since it was published the year following Freethinkers.
Jacoby has many successes in this book and I do not wish to diminish them. There is one aspect of the book that did not stand up to the rest. She seems to have entirely missed the European atheistic influence on her American heroes.
Ingersoll seems to have been a staunch Utilitarian in his philosophy. Utilitarianism judges each course of action by its effects, positive or negative, on the greatest number of people. This ever-so-practical philosophy was pioneered by Jeremy Bentham in his 1789 book An Introduction to the Principles of Morals and Legislation [Amazon, Goodreads] and was intended to form the underpinnings of an atheistic moral philosophy to replace the prevailing Judeo-Christian and Deistic philosophies of his day. Amazingly, and to Jacoby's theme, the word "atheist" does not currently appear on Jeremy Bentham's Wikipedia entry.
Ingersoll's following quotation is wholly in line with Bentham's "greatest happiness principle" and with the philosophy espoused by his British contemporary John Stuart Mill:
Happiness is the only good. The place to be happy is here. The time to be happy is now. The way to be happy is to make others so.
Bentham went on to have great influence on the founding of University College London, the first university in Britain to allow the tertiary education of atheists, Jews, Hindus and members of other religious minorities. The leading universities of the time, Oxford and Cambridge, required membership in the Church of England. He was also the first person to donate his body to science. Prior to Bentham, anatomists acquired their corpses from graveyards, with or without the permission of the law, or by receiving the bodies of executed criminals. Bentham's body was publicly dissected by his friend and disciple Dr. Thomas Southwood Smith, the remains later preserved, dressed in his own clothes and placed on permanent display in UCL's South Cloisters. His so-called "auto-icon" may still be seen there today. All of this was in accordance with Bentham's atheism, his Utilitarianism and his last will and testament.
Can we be certain that Bentham influenced Ingersoll? Yes. Project Gutenberg contains a freely available copy of Ingersollia, or "Gems of thought from the lectures, speeches, and conversations of Col. Robert G. Ingersoll, representative of his opinions and beliefs". In that useful collection we find this particular gem: 'The glory of Bentham is, that he gave the true basis of morals, and furnished the statesmen with the star and compass of this sentence: "The greatest happiness of the greatest number."' Ingersoll himself admits the influence, and yet Jacoby seems to have either missed it or ignored it. Interestingly, Hecht too seems to have missed the connection, although she focuses more of her considerable attention on Bentham than on Ingersoll. To miss this connection is to suggest an Americanism to Ingersoll that obscures its European inheritance.
Together Bentham, Mill, Ingersoll and other Utilitarians created the atheistic golden age in the nineteenth century. Jacoby's American-centric history inexplicably leaves out this European connection to Ingersoll's life and times.
Jacoby also fails to mention, alongside the women's suffrage and civil rights movements, that the expansion of voting rights has corresponded directly with the simplification of political speech. George Washington's first inaugural address is full of big juicy words as little understood by the common people of his time as by the uneducated of ours: "Among the vicissitudes incident to life, no event could have filled me with greater anxieties than that of which the notification was transmitted by your order, and received on the fourteenth day of the present month." Can one imagine Barack Obama or George W. Bush using a word like vicissitudes and getting away with it? Actually, Obama tried early in the 2007 campaign season and was castigated as "professorial" as a result. There is little doubt in my mind that the democratic expansion of suffrage brought along the religious concerns of the masses.
The state of atheism and agnosticism in the US today is, as it has always been, complicated. I agree with Jacoby that it is difficult in the extreme for an openly atheistic person to be elected to national office. Indeed, the Huffington Post reported earlier this year that 24 US congressmen reported being "privately" nonbelievers but would not say so publicly. American atheists and agnostics are, as a group, mostly in the closet. This stands in contrast to the widely-reported 1997 survey of members of the National Academy of Sciences. The original article in Nature requires a subscription to access, but a summary is available via the Internet Archive's Wayback Machine. 93% of our nation's best scientists reported being either atheist or agnostic (72.2% atheists, 20.8% agnostics). A 2009 survey by the Pew Research Center confirmed much higher rates of secularism among scientists in general compared to the general public. The sense of tension reported by Jacoby between theists and atheists, with agnostics sometimes caught in the middle, has survived intact through to the modern day, with the vastly increased pace of scientific discovery bringing the conversation to an uncomfortable head.
Jacoby's litany of examples from modern America, from Justice Antonin Scalia to President G.W. Bush to Al Gore, makes her final chapter's title, "Reason Embattled", well earned. It certainly feels that way. Scalia's repetitive quotations from St. Paul should make any of us wonder how he thinks about the New Testament's 1 Timothy 2:12. However, the First Amendment is still in force, even in a period where the Fourth is held to be first among equals. We have not lost yet. Jacoby's call to arms to regain the pride of place for American secularism should be heeded, or it is our fault indeed.

Friday, October 03, 2014

Rewriting the US Pledge of Allegiance

The Humanist, a publication of the American Humanist Association, is having a contest to rewrite the US pledge of allegiance.

I have never, and I mean never, been a fan of the pledge. I first encountered the pledge in kindergarten where, along with all the other five-year-olds, I was informed that I would stand every morning, face the flag in the corner of the room and recite:
I pledge allegiance to the Flag of the United States of America, and to the Republic for which it stands, one Nation under God, indivisible, with liberty and justice for all.
The phrase "under God" sits poorly with most non-theists. I also never understood the point of pledging allegiance to a flag per se.

The pledge itself contains a respectable amount of baggage for such a short sentence. "one Nation" and "indivisible" are clear reflections of their time. The pledge was written to coincide with the quadricentennial of Columbus' initial landing in the New World on an unknown island in The Bahamas. America in 1892 was still reeling from the society-wide shock of the Civil War, the economic rebuilding of the American South and the migration of both native Americans and former slaves into some semblance of citizenship. Although the war was fading from living memory and US patriotism was rising to the crescendo that would culminate in the Spanish-American War, no educated person of the time could have mistaken the words "one Nation" and "indivisible" as meaning anything other than "Let us never again fight a civil war".

The contentious phrase "under God" was snuck into the pledge in 1954 after six years of agitation from various groups of religious bent including both the Sons and the Daughters of the American Revolution, the Knights of Columbus and the organizers of the National Prayer Breakfast.

Arguably (because I will argue it) the best words in the pledge are "with liberty and justice for all". These simple words betray a social commitment that was extreme from the day they were written. Socialist, Baptist minister and pledge creator Francis Bellamy originally wanted to include the words "equality" and "fraternity" in the pledge. The words recall the Enlightenment-era motto of the French Revolution (and the current national motto of both France and the Republic of Haiti): Liberté, Égalité, Fraternité. Fear of extending the franchise to women, African Americans and other marginalized people would have kept the pledge from wide adoption had those words been included. Bellamy successfully conned his ship of patriotism past the shoals of overt prejudice.

The America of today may not be less prejudiced but it is certainly more multicultural. Most of the adults of the country's roughly 12% African American and 16% Hispanic or Latino population have the ability to vote, as do most of the 5% of adults of Asian descent. The total population is also more than five times what it was in the 1890s.

How can we repair the current pledge? I doubt we can. The fractious nature of the US Congress, especially the House of Representatives, and the vocal religiosity of many of our fellow citizens make any agreement to change the pledge elusive. Some have argued that we should not pledge at all, to a country or to a god. That need not stop a reasonable discourse. It is just likely to stop an official agreement.

My own suggestion is to get back to basics. The original purpose of the pledge was to "instill into the minds of our American youth a love for their country and the principles on which it was founded". This is social engineering if I have ever heard it. Call it instilling patriotism in the new generation or mindwashing or anything else, it is absolutely an explicit form of cultural transmission. So what form of cultural transmission should we wish for?

I would love to get any idea of someone else's god out of the picture. Religion is best when it is silent in public. Nationalism, too, does not tend to serve us well as it leads us into wars that may not need to be fought.

The Preamble to the US Constitution is, for me, one of the cleanest statements of the goals of a secular society in which citizens have liberty to pursue their own interests as long as the general welfare is not infringed. My right to swing my arm, as my father would say, ends at the tip of your nose.

I therefore propose the following restatement of the US pledge of allegiance:
I pledge to work for a more perfect union, the establishment of justice, the insurance of domestic tranquility, the provision for common defense and the promotion of general welfare in order to secure the blessings of liberty for ourselves and our posterity.

Monday, September 15, 2014

Book Review: Kurt Vonnegut's A Man Without a Country

A Man Without a Country [Amazon, Goodreads] is a short little book, and the last of Vonnegut's all-too-short life. The simple fact that he wrote it at eighty-two years of age should alone make it worth reading. So few authors, great or not, continue to write into their eighties. Vonnegut himself must have thought his age important, since he informs us of it on two of the book's 160 pages and again in the table of contents.

Vonnegut calls his own work "windy" in comparison to Lincoln's Gettysburg Address and similarly lauds other tight poetry.  His signature style, though, is still obvious.  His zingy one-liners abound.  "The woman behind the counter has a jewel between her eyes", he says of a retail clerk in New York. "Now isn't that worth the trip?"

It would be unreasonable to expect an octogenarian not to sound like an old man. Vonnegut doesn't disappoint. He is tired, he says, grouchy, and plainly wishes the human race were something other than he has observed it to be. Those familiar with his more famous works, particularly Slaughterhouse-Five, will be familiar with his reasoning. His horrific experiences during the firebombing of Dresden informed not only that great work but this lesser one. His distrust of authority, and of the rewriting of history, is well earned. No doubt his natural tendency to aged crankiness coupled too tightly with his antiestablishment bent. They left him, as he says, unable to joke. His lifelong defensive mechanism finally gave way to despair.

Perhaps surprisingly, Vonnegut did not in the end commit suicide. Other famous American writers have, such as Ernest Hemingway, Hunter Thompson and, more recently, David Foster Wallace. He dances around the idea of it, though. The word "suicide" appears more often than the word "depression", including in a quotation from Albert Camus ("There is but one truly serious philosophical problem, and that is suicide"). Vonnegut fell down a flight of stairs at his home in New York City in March 2007, just months after the paperback release of this book, and died of his injuries weeks later, on April 11, 2007.

Vonnegut had despaired not just of life but of all of humankind by the time of his death.  He informs us that his "distinct betters" Albert Einstein and Mark Twain had done the same.  Twain, of course, had insisted that his famous War Prayer stay unpublished until after his death.  And no wonder.  Can you imagine the public reaction to these words in US evangelical churches during the last invasion of Iraq or Afghanistan?

O Lord our God, help us tear their soldiers to bloody shreds with our shells; help us to cover their smiling fields with the pale forms of their patriot dead; help us to drown the thunder of the guns with the shrieks of their wounded, writhing in pain; help us to lay waste their humble homes with a hurricane of fire; help us to wring the hearts of their unoffending widows with unavailing grief; help us to turn them out roofless with their little children to wander unfriended in the wastes of their desolated land in rags and hunger and thirst, sports of the sun flames in summer and the icy winds of winter, broken in spirit, worn with travail, imploring thee for the refuge of the grave and denied it.

Twain's family waited an additional thirteen years before allowing the War Prayer to be published, buried in an anthology.

No stranger to despair was Twain when he looked upon the human race at the end of his life.  Vonnegut's rants are not much different in spirit but certainly lesser in both scope and bite.  "If you are an educated, thinking person, you will not be welcome in Washington, D.C.", we are informed. "But if you make use of the vast fund of knowledge now available to educated persons, you are going to be lonesome as hell."  Vonnegut was surely lonesome as hell.

Vonnegut's observations that "Human beings are chimpanzees who get crazy drunk on power" or "Only nut cases want to be president" set the political tone for the book. There is nothing fundamentally new or even particularly striking about his hatred of the Bush administration nor in his revulsion at the American culture of war. We are left with a feeling that he gave up trying to be insightful at the same time that he gave up on people. This is a mistake, however. A close read of A Man Without a Country will yield plenty of worthy Vonnegutisms. He notes the relative disparity between evangelical Christians' quoting of the Ten Commandments versus the Beatitudes, for example. His characterization of those "guessers" who seek power without first acquiring understanding in Chapter 8 is brilliantly conceived and executed. This hit-or-miss peculiarity is endemic to any book of collected essays.

Vonnegut's claim that Einstein similarly "gave up on people" is harder to verify or even to comprehend. There is no sense that Vonnegut was joking, nor subtly misdirecting. I suspect he was simply mistaken. Einstein did, in fact, sign the Russell–Einstein Manifesto decrying the stockpiling and use of nuclear weapons just days before he died. However, this was clearly an act of hope. The eleven luminaries whose signatures appear were calling for a way forward, not a giving up. Indeed, the famous phrase "Remember your humanity, and forget the rest", taken from the manifesto itself, implies an understanding of "humanity" different from Twain's or Vonnegut's.

Vonnegut takes more from Twain than his attitude.  He borrows liberally from his style.  Vonnegut's description of sealing a manila envelope ("First I lick the mucilage - it's kind of sexy") is as tight a piece of American writing as anything produced by his hero.  The mixture of pedestrian vulgarity and erudite vocabulary would be as comfortable in Twain's Following the Equator if not in Huckleberry Finn.

To the end, Vonnegut loved learning and the experience that comes with it.  But he sometimes recoiled from its effects.  He was desperately uncomfortable with artists' inability to affect political policy.  Therein lies his reason for giving up on us.  He might have held onto his impact on MIT's Sherry Turkle, the preeminent sociologist of Internet relationships, the writers Douglas Adams and Bill Bryson and even The Grateful Dead.  The creation of art and science has driven our civilization at least as much as the warfare that he so despised.

Kurt Vonnegut may have been surprised, as we learn in the final chapter, to have become a writer, but we should all be glad that he did.  His voice has transcended generations of American fads while being unabashedly unique the entire time.  We can and should forgive his old man's rantings if the end result is just to make us wait a bit longer for his next bullseye.

Monday, August 18, 2014

The Customer Service Disaster that is US Airways

It has been fascinating to watch the rise of customer-centric business models over the last decade. From Rackspace's "Fanatical Support" promise to the acquisition of Zappos by Amazon solely for their customer service capabilities, consumers have benefited from this new-old way of saying, "the customer is always right". Unsurprisingly, customers like that. Perhaps more surprisingly in some board rooms, businesses that try it find that they experience less churn and higher profit margins.

Nobody seems to have mentioned this trend to US Airways.

The problem at US Airways is really quite simple:  Their people are not empowered to fix problems easily - so everything becomes hard.

After a routine booking problem, US Airways:

  • Informed me that they could not put me on another flight the same day, even though they had seats available.
  • Suggested that they could sell me a new ticket for $1,200 in place of mine.
  • Canceled my return flight against my express communication to the contrary.
  • Caused me over $900 in additional costs, some of which went to them and the remainder to other companies.
  • Blamed each stage on US Airways computers that would "not let" the humans handle a routine variance.


Here is my story.  I was invited to attend a reunion of some former shipmates from the US Navy.  The reunion was in Maine from a Friday to a Sunday in August, but I needed to be in San Francisco that Sunday.  I very much wanted to go.  We were a close group back in the day and I wanted them to know that I cared enough to attend.  Trying to get from Washington, DC to Maine, back to DC and then to the West Coast was going to be tight.  I am a disabled veteran and find that too much travel, even daily commuting, can reduce my ability to walk well or even at all.  I lose workdays due to these problems far too often.  However, I hadn't been bitten by airline problems in a few years so I was confident that I could squeeze it in and willing to take the risk.

The best flight seemed to be on US Airways. US Airways Flight 3332 leaves DC's Reagan National Airport at 10:20 AM and arrives in Portland, Maine just before noon.  I carefully sorted the flights by arrival time so I could calculate how much time I could actually spend with my friends.  The last thing I wanted was to arrive late or need to leave too early.  My flight to California was scheduled to leave in the early morning of the Sunday, so I booked a flight back to DC that left Portland Saturday at 4:00 PM.  Both flights were direct and therefore quick.  I reported to my wife that the trip was possible and booked the tickets via Expedia.

Traffic around DC can be unpredictable and even more so currently with the construction of an expanded HOV lane on I-95.  I left home early and arrived more than two hours before my scheduled flight.  US Airways happily checked me in and I breezed through security.  My mood was upbeat as I contemplated the reunion.  I wished I had been to the gym more regularly.

My boarding pass said that my flight was to leave from Gate 41.  The flight wasn't on the departures board yet and a flight to New York was advertised on the sign above the gate.  I put both down to the fact that I was so early.  I killed time by topping off the charge on my cell phone and chatting with other passengers.

I noticed about an hour before my flight that the New York departure hadn't occurred yet.  In fact, it was due to depart just ten minutes before my flight.  There was no way that another aircraft was going to get to that gate, refuel and reprovision, board passengers and depart in ten minutes.  Something was not right.

The gate agent told me not to worry.  The New York flight would leave and mine would be along next.  OK.  I had my shoes shined, looked through the shops, checked my email and wandered over to the departures board.  There was still no listing for my flight.  Uh oh.  I carefully reviewed my boarding pass.  I was checked in on a flight that was scheduled to leave twelve hours later, at 10:10 PM.  US Airways Flight 3460.  Now how did that happen?  I thought back to my careful sorting by arrival times.  It would have been difficult to have selected a flight from the very bottom of that list whose arrival time was the following day.

The gate agent directed me to a separate desk between gates where three ticket agents for US Airways sat.  None of the three were helping anyone.

"Um, I seem have made a terrible mistake", I started.  "I was supposed to be on Flight 3332 in 45 minutes, but was booked for a flight this evening.  Can you help me?"  The ticket agent took my boarding pass and turned to her computer.  She typed.  A lot.  Airline systems seem to require a very large amount of input.  She informed me that she couldn't help me because I was not within six hours of my flight's departure.  I was shocked.

"I really need to get to Maine today", I said.  "Surely there is something that you can do?".  "Well", she replied, "you could buy a new ticket."  She informed me that Flight 3332 had "plenty" of seats and that for a mere $1,200 she would be more than happy to put me on it.

"Wait.  Are you telling me that even though I am booked on your airline for a flight to the same destination today and have checked in for that flight that you cannot give me one of those empty seats on the earlier flight?".  I had no checked bags.  I was carrying an over-the-shoulder bag with just enough for a one-night stay.  My suitcase for my longer trip to San Francisco was packed and waiting at home.

I wondered at this point why US Airways would have let me check into my flight and receive a paper boarding pass fourteen hours before my flight was scheduled.  I knew you could do that online but it struck me as odd that it would be allowed in person.  The TSA agent who checked my boarding pass and ID similarly said nothing.  Did they not care or think nothing of someone apparently intending to spend all day hanging around a terminal?  The security implications seemed to be less than ideal.  On the other hand, a casual mention by either the US Airways agent or the TSA agent would have served to tip me off to the problem.

"That's right, sir.  The computer won't let me do it."  She went on to explain that even if I had been within the six hour window she would need to charge a $200 change fee.  I could live with the fee but baulked at the proffered ticket price.  I asked for my boarding pass but she grabbed it away from me.  More typing ensued while she explained that she needed to remove me from the morning flight and put me back on the evening flight.  Minutes passed.  Minutes more.  Eventually she stopped typing and handed me my boarding pass.

I went to the gate for Flight 3332.  Maybe I was just dealing with the wrong person.  The gate agent was pleasant and sympathized. She expressed dismay that she couldn't help me.  "The computer", she said, "won't allow me to give you one of the seats we have."  We spoke briefly about post-9/11 changes to the power of gate agents to move passengers through the system.  She suggested that I could go to the ticket agents outside of security but doubted that the answer would be different.

I called my wife and asked her to look for other flights to Maine.  The ticket agents outside of security were polite but firm.  There were no flights that US Airways or its partners could put me on that would allow me to arrive on Friday.  No seats were to be found on the United flight out of Dulles Airport, 45 minutes drive to the West.  Things were not looking up.

Discussions with my wife went better.  She identified a flight from Baltimore-Washington International Airport an hour to the North and booked me on it for roughly $250.  I informed the US Airways ticket agent that I could get on another flight and would take my scheduled return flight back on Saturday.  She said nothing to indicate that would be a problem.  I had to hustle, but left National Airport for BWI having exhausted all other avenues for arriving in Maine on the day I intended.

Where did the error in booking occur?  My Expedia receipt confirmed that I purchased a ticket for the evening flight.  Perhaps it was my error and perhaps an unintentional bait-and-switch between Expedia, Sabre and US Airways; a bug somewhere.  I know software systems well enough not to presume that the error was mine although it certainly could have been.  The important thing to me was getting to Maine on time and without further damaging my health.

I did get to Maine in time for dinner.  The only way for me to make the trip at that late hour was to drive to BWI then fly to Boston, rent a car in Boston and drive the three hours to my destination in Maine.  I could barely walk by the time I arrived.  The rental car company charged me an extra $200 to drop the car at the Portland Jetport instead of returning it to Boston.  I had no choice: my return flight on US Airways left from Portland.  Besides, I'm not sure I could have driven back to Boston if I wanted to.

My old shipmates and I spoke until the wee hours, reminiscing and catching up on lives after the service.  It was just after six in the morning when I received a text from my wife that my return flight had been canceled by US Airways.  I called Expedia as the message suggested.  In less than 10 minutes the Expedia representative determined that he could not help me because my ticket had been somehow "taken over" by US Airways.  I would need to deal with them.  You should not be surprised at what happened next.

US Airways refused to allow the Expedia rep to transfer my call to them.  They instead insisted that I call them directly.  I am left wondering why this basic courtesy offended them.  Perhaps it is simply a matter of corporate policy designed to insult Expedia and other middlemen who are reducing their profit margin.  Whatever the reason, I spent the next two hours on the phone with two US Airways representatives.  The first call dropped or was cut off.  I called back.  They informed me that there was no way to get me back to DC that day.  I couldn't miss my flight to California for work!  Finally, by 8:30 AM, I was magically offered a flight from Portland to DC's National Airport if I could depart at 12:30.  I hustled down to breakfast with my friends, spent the time I could and raced to the airport.

A woman waiting to board in Portland complained to me that she had come in on a flight with a two-year-old and been assigned exit row seats by US Airways.  The error was entirely the airline's fault, but when the gate agent caught it, she wasn't allowed to board.  This is just the kind of routine problem that used to be handled quite competently by flight attendants on the aircraft.  No more on US Airways flights.  The woman said she was kept from boarding until everyone but three people had boarded the flight.  Those three were asked whether any of them would volunteer to switch seats with the woman and her child.  Fortunately, she was seated and the flight took off with her on it.  What would have happened otherwise?  She was told she would have been left behind.  As crazy as that sounds, it was entirely consistent with my experience.  Again, US Airways' rigid policies snatched defeat from the jaws of victory.  They made something easy into something hard.

I arrived at National Airport and began the long Metro and bus rides to BWI to retrieve my car.  I tweeted my frustration to @USAirways and was pleased to get an instant response.  The conversation, however, went nowhere.  The droid pushing the sound bites back to me had no more power to right wrongs than the gate agents, ticket agents, call center workers or flight attendants.  US Airways had systematically stripped them of any power to go off-script.  After six queries and responses, my questions passed a magic threshold of undetermined quality.  I was awarded a hyperlink to the formal customer complaint system.

I arrived home, less than an hour's drive from National Airport, five and a half hours after landing.  I'm now in San Francisco and relying on my wife to carry my bags, a situation I find demeaning and embarrassing.  It takes me minutes to climb out of a car.  I avoid stairs and standing for any period of time.  I simply hurt more and I hate it.  All because a poorly paid ticket agent couldn't put me on a flight with "plenty" of seats available because her computer wouldn't let her.  Her company is demeaning to her as well.  I pity her need to work for them.

This lengthy blog post is my answer to US Airways.  I have submitted a link to it via the US Airways customer relations site.  I will wait with bated breath for their answer which, no doubt, will be polite, firm and utterly unyielding.  To do otherwise would not, perhaps, be permitted by their computers nor in compliance with their policy.  Updates here when I know more.

Tuesday, October 08, 2013

Repairing Makerbot Replicator 2 Nozzle Insulation

The Problem: Filament Ball Tears Insulation Tape

I have finally used my Makerbot Replicator 2 (not a 2x) enough to run into a problem that is not strictly end-user serviceable.  A large filament ball stuck to the ceramic insulation tape surrounding the Replicator's nozzle and ripped the tape when the ball was removed.

Makerbot does have a video on removing a filament ball (or "blob" as they call it - must be a technical term).  Look for the video entitled "Removing a PLA Blob" on their troubleshooting page.  They suggest heating the extruder nozzle and gently lifting off the ball while being "careful not to rip the ceramic insulation tape".  Yes, indeed.  I agree that it is always best not to remove a filament ball while the nozzle is cold, but the described process wasn't sufficient for me.  I have since discovered that I can use the "Load filament" option under the Utilities menu to heat the extruder and use newly fed filament to melt the ball and gently force it downward.  That seems to make it easier to remove cleanly.

I decided to replace the ceramic insulation tape and the kapton tape that holds it in place.  This required a rather complete tear-down of the extruder assembly, but it wasn't really that bad.  Anyone reasonably careful should be able to do it themselves.


The filament ball complete with insulation tape stuck to it:


Parts

I sourced some ceramic insulation tape and some kapton tape to cover it from UltiMachine, a RepRap printer supply company in Tennessee.  Unfortunately, Makerbot doesn't supply such parts.



  • 1 x Dupont Kapton High Temp. Adhesive Tape 1/4" ($6.00), SKU UMKPTON025
  • 1 x Ceramic Insulation Tape ($1.50), SKU UMCERTAPE


The kapton tape from UltiMachine was quite narrow, much narrower than the tape used by Makerbot.  I used it anyway and just wrapped it around enough to ensure complete coverage.  It seems to be fine.

Repair Instructions

The first thing to do is to carefully remove the fan assembly and extruder motor from your Makerbot Replicator 2.  These instructions will help you if you are unfamiliar with the process.  They were developed for the upgrade of the extruder assembly.  Follow them just until you remove the fan assembly and the motor, but do not take off the extruder itself. 


Next you will need to remove the side fan assembly.  Use a Phillips screwdriver to remove the two screws holding the assembly to its chassis (shown at 10 o'clock and 4 o'clock in the picture below).


Disconnect the electrical wire connectors so the fan assembly hangs down out of your way.  Note that you will still have two wires (in a single insulated cable) connected at this time.  Those are the thermocouple wires used to monitor the temperature of the nozzle block.  They will be removed shortly.

Remove the two hex screws from the base of the chassis that connect the chassis to the aluminum block beneath it.


Removal of the chassis exposes the aluminum heat sink.  The nozzle assembly is attached via a bolt and hangs beneath the heat sink.

Removal of the aluminum heat sink requires you to look underneath the assembly to find two more hex screws, one on each side of the heat sink.


Once you have the heat sink entirely removed from the extruder mount, it should look something like this:


Now it is time to remove the thermocouple wires.  Unscrew the thermocouple connector from the nozzle block.  Note that the wires will twist if you hold the block in place and turn the connector!  Instead, as soon as the connection is loose enough, rotate the nozzle block itself to back it off the connector.  That will keep the wires from breaking.


You should now have the nozzle block and heat sink completely removed from the printer.  You can move it to a more convenient working surface, such as a workbench.

Use a crescent wrench or socket wrench to remove the nut at the top of the heat sink, as shown.  You will see that the bolt has a hole through its center; that is where the filament passes on its way to the nozzle.


Remove the heat sink and set it aside.  You should also remove the nut between the nozzle block and the heat sink.


Clean off the old kapton tape and ceramic insulation tape.  Some of mine was baked on and was difficult to remove.  Carefully remove the tape so you don't damage the brass nozzle, the aluminum block or the wires.  I used a utility knife to remove most of the material and found that heating the material with a portable butane torch made it easier to remove.

Please note that a steel utility knife blade is substantially harder than the aluminum of the block or even the brass of the nozzle.  You can shave off bits of them if you aren't careful.  You only want to remove the old tape.


I placed the block into a vise to facilitate cleaning it and also (carefully) used a wire brush for the final touches.


Cut some ceramic insulation tape to the width of the nozzle block.


Cut holes in the ceramic insulation tape for the bolt and the nozzle.  The round hole on the right is for the nozzle; I placed it there to ensure that the seam in the tape would not end up at or near the nozzle.  Wrap the tape around the nozzle block by putting the nozzle through one hole and the bolt through the other.  Obviously, it works better to start by putting the tape over the long bolt and then wrapping around to the nozzle.  Cut the end of the tape to fit the size of the block.


Wrap the nozzle block with kapton tape to hold the ceramic insulation tape in place.


Reassembly

You will need to follow the tear-down instructions in reverse.  You might follow the pictures.  I did :)

There are a couple of things to watch out for:

1.  Make certain to align your nozzle block on the heat sink so it is straight (not crooked as shown below) and make sure that the screw holes face toward the nozzle (so you can insert the screws from the bottom when you remount the unit).


2.  Next, and this is really very important, ensure that the bolt on the top of the nozzle block-heat sink unit (shown on the right in this picture) does not protrude above the top of the nut.  If it does, the extruder will not fit over the top of it.

This also seems to ensure that the nut in between the nozzle block and the heat sink does not compress the insulation.


3.  Another minor gotcha occurs when remounting the side fan to the fan housing.  The picture below shows where the two sets of black-and-white wires fit in a slot on the left side of the housing (toward the back of the Replicator 2).  The picture shows me pushing the wires into the slot with my index finger.  This will keep the wires from being kinked when the fan is screwed into place.


Getting it Working Again

Don't forget to level your build plate after the reassembly!  This is critical because you may have changed the nozzle height, however slightly.  Look under the Makerbot's Utilities menu for the "Level the build plate" option and follow the on-screen directions.

Next, load some filament and try a small test print.  The "Mr. Jaws" model from the Makerbot's SD card printed flawlessly for me the first time.  Yay!

You might ask how long it took me to generate another filament ball.  Less than an hour ;-)  Fortunately, that one didn't tear my newly fixed insulation.