Sunday, April 23, 2017

The Sorry State of Browser Privacy

Every one of the estimated 3.7 billion Internet users should be concerned that the vast majority of their searches, the contents of their shopping baskets both online and off, often their location, and, by careful statistical analysis, their associates are exposed to the corporate desires of the likes of Google, Microsoft, and Facebook. This information, once collected, is available to law enforcement agencies in many international jurisdictions. Some governments additionally collect information directly to spy on their citizens. Such logs of private information are also ripe targets for hackers, paid by organized crime or governments, who break into notionally "secure" systems.

Our mobile devices are also directly inspectable by customs agents when we cross international borders, and in some jurisdictions by police on the street.

Those who say that they have no care for privacy on the Internet seemingly have no idea of the abuse to which such information may be put. The Holocaust was perpetrated by a vicious regime aided substantially by household religious records from a century of national census collection. No government of the past ever had access to the amount of information now available about the location and habits of individual citizens.

How can we possibly protect ourselves from a technically savvy authoritarian government that is willing to abuse this treasure trove of data?

Our browsers, those critical tools for our daily lives, are not currently our friends. They are the portal by which our personal information flees to corporate and government interests.

There are two fundamental approaches to securing our personal information in browsers. The first and easiest is to avoid recording your history on your local device. This is the primary tool behind browsers' privacy modes, such as Firefox's private mode or Chrome's incognito mode. Not having local data will provide some level of protection if your phone or computer is seized.

Removing or avoiding local data storage does nothing to protect you from Web analytics companies, who use data your browser happily sends them during an online session. Advertising companies embed trackers in their ads, implemented in JavaScript, the programming language understood by every browser. That code can and does read as much information as it can find, and combines it into a full picture of your individual browser through a process known as browser fingerprinting. It is this fingerprint, good perhaps to identify one person in tens of millions, that your browser happily passes back to the companies that asked for it.
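To make the mechanism concrete, here is a minimal sketch of how such a script combines browser attributes into a fingerprint. The attribute values below are hard-coded stand-ins for what a real tracker would read from the navigator, screen, and canvas APIs, and the hash is an ordinary FNV-1a; real trackers collect many more attributes, but the principle is the same.

```javascript
// Stand-in values; a real script reads these from navigator, screen,
// canvas rendering quirks, installed font probes, and so on.
const attributes = {
  userAgent: "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_4) ...",
  language: "en-US",
  screenResolution: "2560x1600x24",
  timezoneOffset: -120,
  installedFonts: "Helvetica,Arial,Times,Courier",
  canvasHash: "a3f1c9",
};

// A simple 32-bit FNV-1a hash over the concatenated attribute string.
function fnv1a(str) {
  let hash = 0x811c9dc5; // FNV offset basis
  for (let i = 0; i < str.length; i++) {
    hash ^= str.charCodeAt(i);
    hash = Math.imul(hash, 0x01000193) >>> 0; // FNV prime, kept unsigned
  }
  return hash >>> 0;
}

// The fingerprint: one short value summarizing many stable attributes.
const fingerprint = fnv1a(Object.values(attributes).join("|"));
console.log(fingerprint.toString(16));
```

Each attribute alone is shared by millions of users, but the combination is often unique, and the browser recomputes the same value on every visit, with no cookie required.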

The Electronic Frontier Foundation (EFF) has made a useful tool called Panopticlick to test browsers' vulnerability to online tracking. The odd but fitting name is a reference to the Panopticon, a type of jail designed in 1787 by English philosopher Jeremy Bentham, in which a single jailer could observe a large number of prisoners without their knowing whether they were being watched.

This post reports on a series of Panopticlick tests on a variety of browsers. Desktop browsers were tested on a MacBook Pro. Mobile browsers were tested on an Apple iPhone 6 and a Sony tablet running Android Marshmallow.

Panopticlick asks four questions of browsers:
  • Is your browser blocking tracking ads?
  • Is your browser blocking invisible trackers?
  • Does your browser unblock 3rd parties that promise to honor Do Not Track?
  • Does your browser protect from fingerprinting?
A perfect browser would respond in the affirmative to each question, and a report might look like this:

Configuration    Ads      Trackers  DNT  Fingerprints
My good browser  yes      yes       yes  yes

A browser that failed all four tests would have a negative report. The last question would be answered by noting that a unique fingerprint could be calculated:

Configuration       Ads      Trackers  DNT  Fingerprints
A terrible browser  no       no        no   unique

Some browsers naturally provide partial implementations of blocking for tracking ads or other trackers. Partial implementations are marked as "partial" in the tables below.

Desktop Browser Tests

Tests were performed on an Apple MacBook Pro, running MacOS Sierra version 10.12.4.

Safari version 10.1 (12603.1.30.0.34)

Configuration                                                   Ads      Trackers  DNT  Fingerprints
Safari (Mac, default)                                           partial  partial   no   unique
Safari (Mac, private browsing, default)                         partial  partial   no   unique
Safari (Mac, private browsing, block cookies and website data)  partial  partial   no   unique

Chrome version 57.0.2987.133 (64-bit)

Configuration                                                 Ads      Trackers  DNT  Fingerprints
Chrome (Mac, default)                                         yes      no        no   unique
Chrome (Mac, EFF Privacy Badger installed)                    yes      yes       no   unique
Chrome (Mac, incognito mode, default)                         partial  partial   no   unique
Chrome (Mac, incognito mode, block cookies and website data)  yes      yes       no   unique

Blocking all sites entirely using manual control of Privacy Badger yielded the same results as having Privacy Badger installed.

Safari’s private browsing mode blocks plugins, including Privacy Badger, so using plugins is ineffective for increasing privacy on Safari.

Firefox version 52.0.2

Configuration                                              Ads      Trackers  DNT  Fingerprints
Firefox (Mac, default)                                     no       no        no   unique
Firefox (Mac, EFF Privacy Badger installed)                yes      yes       yes  unique
Firefox (Mac, NoScript installed)                          yes      yes       yes  yes
Firefox (Mac, private mode, EFF Privacy Badger installed)  yes      yes       yes  unique
Firefox (Mac, private mode, NoScript installed)            yes      yes       yes  yes

Firefox’s private mode does not block plugins, so Privacy Badger could be used with private mode. 

NB: JavaScript was disallowed for panopticlick.eff.org with NoScript; disabling JavaScript is a key way to avoid trackers. Unfortunately, it is also a key way to break modern Web pages.

NoScript maintains a whitelist of common sites to minimize breakage of legitimate JavaScript functionality. It blocks all others, but provides a useful user interface for allowing exceptions. As shown in Figure 1 below, most of the blocked sites are analytics trackers such as Google Analytics, Facebook, and DoubleClick.

Figure 1. NoScript's list of recently blocked sites

Mobile Browser Tests on iOS

Tests on iOS were performed on an Apple iPhone 6, running iOS version 10.3.1.

Safari iOS version 10.3.1

Configuration                                                   Ads      Trackers  DNT  Fingerprints
Safari (iOS, default)                                           partial  partial   no   unique
Safari (iOS, private browsing, default)                         partial  partial   no   unique
Safari (iOS, private browsing, block cookies and website data)  partial  partial   no   unique
Safari (iOS, Disconnect Privacy Pro installed and VPN active)   yes      yes       no   unique

Firefox iOS version 7.1 (2565)

Configuration                                                   Ads      Trackers  DNT  Fingerprints
Firefox (iOS, default)                                          no       no        no   unique
Firefox (iOS, private mode, default)                            partial  partial   no   unique
Firefox (iOS, Disconnect Privacy Pro installed and VPN active)  yes      yes       no   unique

Firefox Focus iOS version (current as of 17 April 2017)

Configuration                                                         Ads      Trackers  DNT  Fingerprints
Firefox Focus (iOS, default)                                          yes      yes       no   unique
Firefox Focus (iOS, “Block other content trackers” option on)         yes      yes       no   unique
Firefox Focus (iOS, Disconnect Privacy Pro installed and VPN active)  yes      yes       no   unique

The motto for Firefox Focus is “Browse, erase, repeat”, which shows its focus on erasing local history.

Chrome iOS version 57.0.2987.137

Configuration                                                  Ads      Trackers  DNT  Fingerprints
Chrome (iOS, default)                                          no       no        no   unique
Chrome (iOS, incognito mode, default)                          no       no        no   unique
Chrome (iOS, Disconnect Privacy Pro installed and VPN active)  yes      yes       no   unique

Opera Mini iOS version 14.0.0.104835

Configuration                                                                Ads      Trackers  DNT  Fingerprints
Opera Mini (iOS, default)                                                    no       no        no   unique
Opera Mini (iOS, “Accept Cookies” turned off and “Block Pop-ups” turned on)  no       no        no   unique

Rather concerningly, EFF suggests “switching to another browser or OS that offers better protections.”

Mobile Browser Tests on Android

Tests on Android were performed on a Sony Xperia Z2 Tablet SGP511 running Android version 6.0.1 (Marshmallow), kernel 3.4.0-perf-gc14c2d5.

Chrome Android version 57.0.2987.132

Configuration                              Ads      Trackers  DNT  Fingerprints
Chrome (Android, default)                  no       no        no   unique
Chrome (Android, incognito mode, default)  no       no        no   unique

Firefox Android version 52.2

Configuration                             Ads      Trackers  DNT  Fingerprints
Firefox (Android, default)                no       no        no   unique
Firefox (Android, private mode, default)  yes      yes       no   unique

Opera Mini Android version 24.0.2254.115784

Configuration                               Ads      Trackers  DNT  Fingerprints
Opera Mini (Android, default)               yes      yes       no   unique
Opera Mini (Android, private tab, default)  yes      yes       no   unique

NB: Opera Mini tested “no” in all categories last week; Opera appears to be rolling out an effective ad-blocking technology, which has come to Android before iOS.

Disconnect free edition for Android (no version number, as of 23 April 2017)

Configuration                                 Ads      Trackers  DNT  Fingerprints
Disconnect in-app browser (Android, default)  partial  partial   no   unique

NB: Disconnect Pro/Premium versions were not tested on Android because I was borrowing the device and didn't want to buy my friend a $50 subscription.

Conclusions

You clearly need to shop around to find a browser that will protect your privacy, and that is easier on a computer than on a mobile device.

The combination of Firefox and the NoScript plugin was the only configuration discovered that passed all EFF tests, and that combination is available only on desktop and laptop computers. That is a shame, given the power efficiency of Safari or the Google app integration of Chrome.

There is no apparent way to avoid browser fingerprinting on iOS or Android.

Apple users seem to have a choice between the new Firefox Focus and installing (and using!) Disconnect Privacy Pro. It is easy to forget to turn on Disconnect's VPN. There is a cost, of course, but that should be nothing new to Apple users. Better privacy is part of what we pay for with Apple. It is surprising that Apple hasn't done with browser privacy what they have done with server-side encryption of user data.

Android users fare reasonably well using either Firefox's private mode or (surprise!) the new Opera Mini. Both browsers have decent blockers for ad trackers and other online trackers. Unfortunately, neither option does a thing to stop browser fingerprinting. In 2017 and beyond, blocking direct tracking is just not good enough. One cannot help but wonder why one needs to use Firefox's private mode to access apparently built-in functionality.

In summary, be careful. Practice safe computing to avoid infections of one form or another. It would be wise to use a browser with good privacy support and to keep it up to date.

We are left with poor tradeoffs. Should we increase privacy and suffer inconvenience, or opt for convenience? Unfortunately, I am sure I know what most people will do. Browser vendors, especially the Mozilla Foundation, should ensure that privacy protection is enabled by default. Action against browser fingerprinting is urgently needed.

Your privacy is in your hands.

Monday, December 28, 2015

Writers Notebook - 28 Dec 2015

Relationships with Technology

  • The historian Ronald Wright has noted, "From ancient times until today, civilized people have believed that they behave better, and are better, than so-called savages." But this is just not so. The belief is unjustified precisely because we have the same stone-age brain as those "savages". Their cultures differ from our own, but we cannot be said in any meaningful way to be "better", either individually or as a group. Hunter-gatherers can be just as friendly, brutal, caring, dismissive, helpful, and murderous as modern, civilized people. The difference between the two would seem to be simply a matter of technology. Use of technology certainly changes brain structures, but it does not change the fundamentals of emotion, including compassion and fear.
  • The word in the Pashto language for an AK-47, the world's most ubiquitous military assault rifle, is, disturbingly, "machine". The AK-47 may be the only machine rural Pashtun children ever see. Trucks are rare in their mountain villages, time is told by the sun, and plumbing is unheard of.


Tuesday, December 22, 2015

Star Wars' Dirty Little Secret: We are the Empire


We all love Star Wars. For those of us old enough to remember the 1977 original Episode IV, it was a life changing event. We absorbed Star Wars memes before the rest of society, and integrated them into our lives. Atheists, Hindus, Jews, and Christians all speak of the Force without irony. Even those of us who wouldn't be caught dead dressing up as wookies or droids feel its influence.

We even love the brainchild of Jedi Master Lucas, even when the last word should properly be spelled "wares". Lucas reportedly made the majority of his fortune selling the marketing rights to Star Wars paraphernalia before Lucasfilm's 2012 sale to Disney for more than 4 billion USD.

Star Wars, though, has a secret. A dirty little secret. As with Avatar, and the much older The Lord of the Rings, the bad guys feature shiny new constructions of huge scale under some form of magical control. They are everything we wish to make in our Brave New World. Never mind the cost.

We are awed by the relative size of the Imperial star cruiser in Episode IV and again by the massive Death Star. Comparisons of scale go right through to the brand-new Episode VII, where we are shown how the new superweapon Starkiller Base dwarfs the original Death Star. The Empire and the First Order like things big, new, shiny, and made of metal. They are as dehumanizing as Saruman's industrial mines or the mining machines of Avatar.

The good guys, on the other hand, are nature-loving tree huggers. Yoda hangs out on a swampy world full of life because the Force is generated by life. Nature is Yoda's place of power. Life, big and often dangerous life, abounds in the Star Wars universe. Our heroes can't turn around without being surprised by an outsized creature, from the driest desert to an asteroid in the depths of space. Life is everywhere. In many ways, it is a central player. And it contrasts completely with the modern, clean, metal world of the Empire.

Star Wars rebels live in small, human-centric groups. Luke Skywalker makes his way to the Rebel Alliance on Yavin, only to find his childhood friend Biggs among the pilots. How can two friends meet at random in a galaxy teeming with life and millions of inhabited planets? Because the rebels are a tiny group. In fact, they are a tiny group of tiny groups, each the approximate size of hunter-gatherer groups.

How many "snub fighters" did the "well equipped" Alliance send out against the Death Star? Thirty. That's it. Thirty just happens to be the median size of a traditional hunter-gatherer group.

Everything about the Star Wars universe pits insanely big, dehumanizing, industrial, machine-dominated governmental forces against something else that we can all relate to: Tiny, ragtag groups of friends who know each other well and act as a team only because they wish to. The rebels have choices, as Han Solo's character demonstrates over and over again.

Even the weapons show the contrast of scale. The Empire has the Death Star. The First Order has Starkiller Base. The rebels have one-person fighter ships and the occasional lightsaber.

The rebels fight at human scale with personal weapons against a huge enemy that awes them with its size and power. And yet they win. And we cheer.

There's more. Rebels have babies, real flesh-and-blood human babies. Leia and Han had a baby between Episodes VI and VII. Even Darth Vader had a mom when he was a cute little kid. Their parents loved them even when they went horribly wrong. But those babies that are even exposed a little bit to the Dark Side start turning into machines, bit by bit. The ultimate expression of dehumanization is the stormtroopers. They are clones to the last under the Empire, and orphans painfully ripped from their parents' loving arms under the First Order.

Let's leave the comfortable fantasy of Star Wars for just a moment, and take a trip to Afghanistan. The incredibly brave Afghan reporter Najibullah Quraishi reported on PBS's Frontline on the indoctrination of child soldiers in Afghanistan's Eastern provinces. Quraishi insists that we listen to an ISIS commander as he instructs nine-year-olds in the use of grenades, and AK-47 rifles.

And what is the word in the Pashto language for an AK-47, the world's most ubiquitous military assault rifle? The word is, disturbingly, "machine". The AK-47 may be the only machine these children ever see. Trucks are rare in their mountain villages, time is told by the sun, and plumbing is unheard of.

Machines have a similar relationship in the Star Wars universe. Droids are everywhere. Who makes them? The only droid we see being made is C-3PO, Anakin's homebrew friend. In fact, Anakin's creation of the metal man was the first indication we were given that he would turn to the dark side. We see Luke repairing C-3PO's arm in Episode IV, and charging R2-D2 in Episode V. Chewie fixes C-3PO again in VI. Others hack away on-screen and off at the Millennium Falcon and other gear. But who makes them? It must be the Empire. The rebels sure don't. The rebels are too busy running and fighting.

The droids of the Alliance are machines like the Pashtuns' AK-47. Both groups of backwoods fighters are mere users of high technology. They are not the progenitors.

It is time we faced facts. The terrorists and freedom fighters that we Americans purport to abhor are the prototype for the Rebel Alliance. We are the Empire, just as Iran and Hezbollah have told us we are.

Star Wars shows us the central schizophrenia of modern Western society. We yearn for the tight-knit, human-scale societies of friends working for a common cause. We also want our indoor plumbing, Netflix, regular food supplies, and pornography. We drown our social discomfort with the next hit of sugar.

The great irony of Star Wars is that we collectively sit in air-conditioned comfort, munching our popcorn and drinking our sugary sodas, rapt by the magic of CGI-induced scenes of stickin' it to the man. We cheer the dirty and ill-equipped heroes that tear down the great metal empire of oppression. Then we go to work the next day and keep building the Empire.


Tuesday, July 07, 2015

Writer's Notebook - 7 July 2015

On Pandering

The 2016 US presidential campaign is seemingly in full cajole. This gem was uncovered in the Economist news magazine in the Lexington column of their 20 June 2015 issue:
"Rick Perry, a former governor of Texas, rode to the barbecue on a Harley belonging to a disabled war hero, accompanied by ex-Navy SEALs, to raise funds for a charity that gives puppies to military veterans."

Mr. Perry spent five years in the US Air Force, from 1972 to 1977, during three of which he piloted C-130 cargo planes. As the Economist journalist noted, his appearance in Iowa reached the level of performance art.

My wife points out, "I would laugh at that if I read it in a Terry Pratchett book." Indeed. Perhaps my next business venture should be a family game called "Truth or Pratchett?". One would vie with peers to guess whether a ridiculous quotation actually happened, or was rebranded from a Pratchett novel.

Quotes of the Day

Both of today's quotations are from the classicist Edith Hamilton, from her masterwork The Greek Way.

  • "The ancient priests had said, 'Thus far and no farther. We set the limits to thought.' The Greeks said, 'All things are to be examined and called into question. There are no limits set to thought.'"
  • "Before Greece the domain of the intellect belonged to the priests. They were the intellectual class of Egypt. Their power was tremendous. Kings were subject to it. Great men must have built up that mighty organization, great minds, keen intellects, but what they learned of old truth and what they discovered of new truth was valued as it increased the prestige of the organization. And since Truth is a jealous mistress and will reveal herself not a whit to any but a disinterested seeker, as the power of the priesthood grew, and any idea that tended to weaken it met with a cold reception, the priests must fairly soon have become sorry intellectualists, guardians only of what seekers of old had found, never using their own minds with freedom."



Tuesday, February 17, 2015

Introductory JSON-LD Videos

The prolific Manu Sporny has created a useful series of videos explaining JSON-LD, the preferred format for representing structured data on the Web. JSON-LD is a serialization of the RDF data model, which allows it to be much more than just a format.

Here are Manu's videos:


  • What is Linked Data? A short non-technical introduction to Linked Data, Google's Knowledge Graph, and Facebook's Open Graph Protocol.
  • What is JSON-LD? A short introduction to JSON-LD for Web developers, designers, and hobbyists. It covers how to express basic Linked Data in JSON.
  • JSON-LD: Compaction and Expansion An overview of JSON-LD's compaction and expansion features and how you can use them to merge data from multiple sources.
  • JSON-LD: Core Markup An overview of some of the core markup features of JSON-LD including types, aliasing, nesting, and internationalization support.
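
For readers who have not seen the format, here is a small illustrative JSON-LD document of the kind the videos discuss. The @context block maps the short keys used in the body to full schema.org IRIs, which is what lets ordinary-looking JSON double as Linked Data; the particular identifiers and URLs here are examples, not anything prescribed by the videos:

```json
{
  "@context": {
    "name": "http://schema.org/name",
    "homepage": {
      "@id": "http://schema.org/url",
      "@type": "@id"
    }
  },
  "@id": "http://example.com/people/manu",
  "name": "Manu Sporny",
  "homepage": "http://manu.sporny.org/"
}
```

Expanding this document replaces the short keys with the full IRIs from the context; compacting reverses the process against a context of your choosing, which is how data from multiple sources can be normalized and merged.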

Monday, January 19, 2015

Book Review: Superintelligence by Nick Bostrom

Superintelligence: Paths, Dangers, Strategies [Goodreads] by Nick Bostrom is a big idea book. The big idea is that the development of truly intelligent artificial intelligence is the most important issue that our generation will face. According to Bostrom, it may be the most important issue the human race has ever faced. This view suggests that how we approach the development and response to AI will be more important than how we respond to nuclear proliferation, climate change, continued warfare, sustainable agriculture, water management, and healthcare. That is a strong claim.
Sales of Bostrom's book have no doubt been helped by recent public comments from super-entrepreneur Elon Musk and physicist Stephen Hawking. Musk, with conviction if not erudition, said
With artificial intelligence we are summoning the demon.  In all those stories where there’s the guy with the pentagram and the holy water, it’s like yeah he’s sure he can control the demon. Didn’t work out.
One almost wishes that Musk didn't live in California. Three months later, he provided ten million US dollars to the Future of Life Institute to study the issue. Bostrom is on the scientific advisory board of that body.
Hawking agrees with Musk and Bostrom, although without the B movie references, saying,
Success in creating AI would be the biggest event in human history. Unfortunately, it might also be the last, unless we learn how to avoid the risks.
Bostrom, Musk and Hawking make some interesting, and probably unfounded, presumptions. This is hardly uncommon in the current public conversation around strong AI. All seem to presume that we are building one or more intelligent machines, that these machines will probably evolve to be generally intelligent, that their own ideas for how to survive will radically differ from ours, and that they will be capable of self-evolution and self-reproduction.
Jeff Hawkins provides the best answer to Elon Musk, Stephen Hawking, and Nick Bostrom that I have read to date:
Building intelligent machines is not the same as building self-replicating machines. There is no logical connection whatsoever between them. Neither brains nor computers can directly self-replicate, and brainlike memory systems will be no different. While one of the strengths of intelligent machines will be our ability to mass-produce them, that's a world apart from self-replication in the manner of bacteria and viruses. Self-replication does not require intelligence, and intelligence does not require self-replication. (On Intelligence [Goodreads], p. 215)
Should we not clearly separate our concerns before we monger fear? The hidden presumptions of self-evolution and self-reproduction seem to be entirely within our control. Bostrom makes no mention of these presumptions, nor does he address their ramifications.
At least Bostrom is careful in his preface to admit his own ignorance, like any good academic. He seems honest in his self assessment:
Many of the points made in this book are probably wrong. It is also likely that there are considerations of critical importance that I fail to take into account, thereby invalidating some or all of my conclusions.
Beautifully, a footnote at the end of the first sentence reads, "I don't know which ones." It would be nice to see Fox News adopt such a strategy.
Another unstated presumption is that we are building individual machines based on models of our communal species. Humans may think of themselves as individuals, but we could not survive without each other, nor would there be much point in doing so.
We have not even begun to think about how this presumption will affect the machines we build. It is only in aggregate that we humans make our civilization. Some people are insane, or damaged, or dysfunctional, or badly deluded. Why should we not suspect that a machine built on the human model could, indeed would, run the same risk? We should admit the possibility of creating an intelligent machine that is delusional in the same way that we admit the mass delusions of our religious brethren.
Is my supposition too harsh? Consider the case of Ohio bartender Michael Hoyt. Hoyt is not known to have had any birth defects, nor to have suffered physical injury. Yet he lost his job, and was eventually arrested by the FBI, after threatening the life of Speaker of the House John Boehner. Hoyt reportedly heard voices that told him Boehner was evil, or the Devil, or both. He suspected that Boehner was responsible for the Ebola outbreak in West Africa. He told police that he was Jesus Christ. Is Hoyt physically ill, or simply the victim of inappropriate associations in his cortex? We have many reasons to suspect the latter.
Bostrom originally spelled his name with an umlaut (Boström), as befits his Swedish heritage. He apparently dropped it at the same time as he started calling himself "Nick" in place of his birth name Niklas. Bostrom lives in the UK and is now a philosopher at St. Cross College, University of Oxford. Perhaps the Anglicization of his name is as much related to his physical location as the difficulty in convincing publishers and, until recently, the Internet Domain Name System, to consistently handle umlauts. His Web site at nickbostrom.com uses simple ASCII characters.
According to Bostrom, we have one advantage over the coming superintelligence. It is a bit unclear what that advantage is. The book's back jacket insists that "we get to make the first move." Bostrom's preface tells us that "we get to build the stuff." I tend to trust Bostrom's own words here over the publicist's, but think that both are valid perspectives. We have multiple advantages after all.
Another advantage is that we get to choose whether to combine the two orthogonal bits of functionality mentioned earlier, self-evolution and self-replication, with general intelligence. Just what the motivation for doing so would be has yet to be explained by anyone. Bostrom makes weak noises about the defense community building robotic soldiers, or related weapons systems. He does not suggest that those goals would necessarily include self-evolution or self-replication.
The publisher also informs us on the jacket that "the writing is so lucid that it somehow makes it all seem easy." Bostrom, again in his preface, disagrees. He says, "I have tried to make it an easy book to read, but I don't think I have quite succeeded." It is not a difficult read for a graduate in philosophy, but the general reader will occasionally wish a dictionary and Web browser close at hand. Bostrom's end notes do not include his supporting mathematics, but do helpfully point to academic journal articles that do. Of course, philosophic math is more useful to ensure that one understands an argument being made than in actually proving it.
Perhaps surprisingly, Bostrom makes scant mention of Isaac Asimov's famous Three Laws of Robotics, notionally designed to protect humanity from strong AI. This is probably because professional philosophers have known for some time that they are woefully insufficient. Bostrom notes that Asimov, a biochemistry professor during his long writing career, probably "formulated the laws in the first place precisely so that they would fail in interesting ways, providing fertile plot complications for his stories." (p. 139)
To be utterly picayune, the book includes some evidence of poor editing, such as adjoining paragraphs that begin with the same sentence, and sloppy word order. I would have expected Oxford University Press to catch a bit more than they did.
Bostrom, perhaps at the insistence of his editors, pulled many philosophical asides into clearly delineated boxes that are printed with a darker background. Light readers can easily give them a miss. Those who are comfortable with the persnickety style of the professional philosopher will find them interesting.
Bostrom does manage to pet one of my particular peeves when he suggests in one such box that, "we could write a piece of code that would function as a detector that would look at the world model in our developing AI and designate the representational elements that correspond to the presence of a superintelligence... If we could create such a detector, we could then use it to define our AI's final values." The problem is that Bostrom doesn't understand the nature of complex code in general, nor the specific forms of AI code that might lead to a general intelligence.
There are already several forms of artificial intelligence where we simply do not understand how they work. We can train a neural network, but we cannot typically deconstruct the resulting weighted algorithm to figure out how a complex recognition task is performed. So-called "deep learning", which generally just means neural networks of more than historical complexity due to the application of more computing power, just exacerbates the problem of understanding. Ask a Google engineer exactly how their program recognizes a face, or a road, or a cat, and they will have no idea. This is equally true in Numenta's Cortical Learning Algorithm (CLA), and will be true of any eventual model of the human brain. Frankly, it is even true of any large software program that has entered maintenance failure, which is almost always an admission by a development team that the program has become too complex for them to reliably change. Bostrom's conception of software is at least as old as the Apple Newton. That is not a compliment.
We will surely have less control over any form of future artificial intelligence than his proposed solution would require. Nothing will be as simple as inserting a bit of code into a traditional procedural program.
Critically, Bostrom confuses the output of an AI system with its intelligence (p. 200). This equivalence has been a persistent failure of philosophy. To quote Jeff Hawkins again, who I think sees this particularly clearly,
But intelligence is not just a matter of acting or behaving intelligently. Behavior is a manifestation of intelligence, but not the central characteristic or primary definition of being intelligent. A moment's reflection proves this: You can be intelligent just lying in the dark, thinking and understanding. Ignoring what goes on in your head and focusing instead on behavior has been a large impediment to understanding intelligence and building intelligent machines.
How will we know when a machine becomes intelligent? Alan Turing famously proposed the imitation game, now known as the Turing test, which suggested that we could only know by asking it and observing its behavior. Perhaps we can only know if it tells us without being programmed to do so. Philosophers like Bostrom will, no doubt, argue about this for a long time, in the same way they now argue whether humans are really intelligent. Whatever "really" means.
Bostrom's concluding chapter, "Crunch time", opens with a discussion of the top mathematics prize, the Fields Medal. Bostrom quotes a colleague who likes to say that a Fields Medal indicates that a recipient "was capable of accomplishing something important, and that he didn't." This trite (and insulting) conclusion is the basis for a classic philosophical ramble on whether our hypothetical mathematician actually invented something or whether he "merely" discovered something, and whether the discovery would eventually be made later by someone else. Bostrom makes an efficiency argument: A discovery speeds progress but does not define it. Why he saves this particular argument for his terminal chapter would be a mystery if he had something important to say about what we might do. Instead, he simply tells us to get on with studying the problem.
I find that professional philosophers often slip in scale in this way. One moment they are discussing the capabilities and accomplishments of an individual human, generally assumed to be male, and the next they switch to a bird's-eye view of our species, as if the change in perspective were justified mid-course. I find this both confusing and disingenuous. It is as if the philosopher cannot bear to view our species from the distance that might yield a more objective understanding.
The actions of individuals, both male and female, are inextricably linked to our cognitive biases. We do not make rational decisions; we make emotional ones, even when we try not to. We make decisions that keep our in-groups stable, by and large. A few, a very few, spend their days trying to think rationally, or exploring the ramifications of rational laws on our near future. A few dare to challenge conventional thinking aimed at in-group stability. Those few are not better than the rest. They are just an outward-looking minority evolved for the group's longer-term survival. But the aggregate of our individual decisions looks much like a search algorithm. We explore physical spaces, new foods to eat, new places to be, new ways to raise families, new ways to defeat our enemies. Some work and some don't. Evolution is also a search algorithm, although a much slower one. Our species is where it is because our intelligence has explored more of our space faster and to greater effect. That is both our benefit and our challenge.
The strengths and weaknesses of the professional philosopher's toolbox are just not important to Bostrom's argument. Superintelligence would have been a stronger book if he had transcended them. Instead, it is a litany of just how far philosophy alone can take us, and a definition of where it fails.
I could find no discussion of the various types of approaches to AI, nor how they might play out differently. There are at least five, mostly mutually contradictory, types of AI. They are, in rough historical order:
  1. Logical symbol manipulation. This is the sort that has given us proof engines, and various forms of game players. It is also what traditionalists think of when they say "AI".
  2. Neural networks. Many problems in computer vision and other pattern-recognition tasks have been solved this way.
  3. Auto-associative memories. This variation on neural networks uses feedback to allow recovery of a pattern when presented with only part of the pattern as input.
  4. Statistical, or "machine learning". These techniques use mathematical modeling to solve particular problems such as cleaning fuzzy images.
  5. Human brain emulation. Brain emulation may be used to predict future events based on past experiences.
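To make one of these approaches concrete, here is a minimal sketch of the third, an auto-associative memory of the Hopfield variety. The patterns and dimensions are illustrative choices of mine, not drawn from any particular system: a few patterns are stored with a Hebbian outer-product rule, and presenting a corrupted pattern and iterating the update rule recovers the original.

```python
import numpy as np

def train(patterns):
    """Build a weight matrix from +/-1 patterns via the Hebbian rule."""
    n = patterns.shape[1]
    w = np.zeros((n, n))
    for p in patterns:
        w += np.outer(p, p)       # strengthen co-active connections
    np.fill_diagonal(w, 0)        # no self-connections
    return w / len(patterns)

def recall(w, state, steps=10):
    """Synchronously update until the state settles on a stored pattern."""
    for _ in range(steps):
        new = np.sign(w @ state)
        new[new == 0] = 1         # break ties toward +1
        if np.array_equal(new, state):
            break                 # fixed point reached
        state = new
    return state

# Store two 8-bit patterns, then recover one from a corrupted copy.
patterns = np.array([
    [1, 1, 1, 1, -1, -1, -1, -1],
    [1, -1, 1, -1, 1, -1, 1, -1],
])
w = train(patterns)
noisy = patterns[0].copy()
noisy[0] = -1                     # flip one bit
print(recall(w, noisy))           # settles back on the first stored pattern
```

The feedback loop in `recall` is what distinguishes this from a plain feed-forward neural network: the output is fed back as input until the memory settles, which is how a partial pattern can retrieve the whole.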
Of these, and the handful of other less common approaches not mentioned, only human brain emulation currently aims to create a general artificial intelligence. Even so, few AI researchers actually think we are anywhere close to that goal. The popular media have portrayed a level of maturity that is not currently present.
The recent successes of the artificial intelligence community are much further from general intelligence than one hears from the news media, or even from some starry-eyed AI researchers. There are also good reasons not to worry even if we do manage to create intelligent machines.
Recent news-making successes in AI have been due to the scale of available computing. Examples include the ability of a program to learn to recognize cats in pictures, or to drive a car safely. These successes are impressive, but they are wholly specific solutions to very particular problems. Not one of the researchers involved believes that those approaches will lead to a generally intelligent machine. These are tools and nothing but tools. Their output makes us better in the same way that the invention of the hammer or screwdriver, or general purpose computer, made us better. They will not, cannot, take over the world.
Bostrom is, at the end, pessimistic about our chances for survival. Perhaps this is what happens when one spends a lot of time studying global catastrophic risks. Bostrom and Milan M. Cirkovic previously edited a book of essays exploring just such risks in 2011 [Goodreads]. More information is available on the book's Web site. The first chapter is available online. These three paragraphs from Superintelligence anchor his position in relation to AI:
Before the prospect of an intelligence explosion, we humans are like small children playing with a bomb. Such is the mismatch between the power of our plaything and the immaturity of our conduct. Superintelligence is a challenge for which we are not ready now and will not be ready for a long time. We have little idea when the detonation will occur, though if we hold the device to our ear we can hear a faint ticking sound.
For a child with an undetonated bomb in its hands, a sensible thing to do would be to put it down gently, quickly back out of the room, and contact the nearest adult. Yet what we have here is not one child but many, each with access to an independent trigger mechanism. The chances that we will all find the sense to put down the dangerous stuff seem almost negligible. Some little idiot is bound to press the ignite button just to see what happens.
Nor can we attain safety by running away, for the blast of an intelligence explosion would bring down the entire firmament. Nor is there a grown up in sight.
One could imagine the same pessimistic argument being made about nuclear weapons. They must be reined in before "some little idiot" gets his hands on one. Is that not what has happened? The Treaty on the Non-Proliferation of Nuclear Weapons has been a major force for slowing the spread of nuclear weapons in spite of the five countries that do not adhere to its principles. Separate agreements, threats, and sanctions have so far worked just well enough to plug the holes. Grown-ups, from Albert Einstein to the current batch of Strategic Arms Limitation Talks negotiators, have come out of the woodwork when needed. Not only has no one dropped a nuclear bomb since the world came to know of their existence at Hiroshima and Nagasaki, but even the superpowers have willingly returned to small, and relatively low-tech, ways of war.
Bostrom urges us to spend time and effort urgently to consider our response to the coming threat. He warns that we may not have the time we think we have. Nowhere does he presume that we will not choose our own destruction. "The universe is change;" said the Roman emperor Marcus Aurelius Antoninus, "our life is what our thoughts make it." Bostrom might learn to temper his pessimism with an understanding of how humans relate to existential threats; only when facing them do we seem to do the right thing. He might also observe that unexpected events should not be handled using old tools, as noted by the industrialist J. Paul Getty ("In times of rapid change, experience could be your worst enemy.") and the management theorist Peter Drucker ("The greatest danger in times of turbulence is not the turbulence; it is to act with yesterday’s logic."). We will need new conceptual tools to handle a new intelligence.
Our would-be fearmongers also seem certain that any new general intelligence would, as humans are wont to do, wish to destroy a competing intelligence: us. People fear this not because this is what an artificial intelligence will necessarily be, but because that is what our form of intelligence is. Humans have always feared other humans, and for good reason. As historian Ronald Wright noted in A Short History of Progress [Goodreads],
"[P]rehistory, like history, teaches us that the nice folk didn't win, that we are at best the heirs of many ruthless victories and at worst the heirs of genocide."
This raises the fascinating question of how we, as a species, would react to the presence of a newly competitive intelligence on our planet. History shows that we probably killed off the Neanderthals, as earlier human species killed off Homo erectus and our earlier predecessors. We don't play well with others. Perhaps our own latent fears will insist on killing off a new, generally intelligent AI. We should consider this nasty habit of ours before we worry too much about how a hypothetical AI might feel about us. If an AI considers us a threat, should we really blame it? We probably will be a threat to its existence.
It is possible that a single hyper-intelligent machine might not even matter much in the wider course of human affairs, just as the natural, generational genius does not always matter. The history of the human race seems to be dominated more by the slow, inexorable march of individual decisions than by the, often temporary, upheavals of the generational genius. How would human development have changed if the Persian commander Spithridates had succeeded in killing Alexander the Great at the Battle of the Granicus? He almost did; Spithridates' axe bounced off Alexander's armor. Much has been made of the details, but peoples would still have spread through competition, and contact between East and West would still have eventually occurred. The difference between having a genius and not having a genius can be smaller than we think in the long run.
Bostrom's main point is that we should take the development of general artificial intelligence seriously and plan for its eventual regulation. That's fine, for what it is worth. It is not worth very much, really. We are much more likely to react once a threat emerges. That's what humanity does. Bostrom is at best early at delivering a warning and at worst barking up the wrong tree.

Writer's Notebook - 19 January 2015

Teaching Evolution to Martin Luther King

On this Martin Luther King Day, I would like to explore the differences between the way we see the world and the way it really is. Specifically, we often come to understand human behavior as if it were solely driven by culture. This is just not so.
Consider Dr. King's thoughts on why people hate:
"Men often hate each other because they fear each other; they fear each other because they don't know each other; they don't know each other because they can not communicate; they can not communicate because they are separated." -- Martin Luther King, Jr.
It sounds plausible. The underlying presumption is that we could stop hating if we learn to communicate. Unfortunately, this is only partially so.
Like so many of our cultural conversations, Dr. King's diagnosis fails to take human evolution into account. Our ancestral ground state, life in small hunter-gatherer groups, colors our relationships with other people as much as, perhaps more than, the relatively recent geographical diaspora of our species has colored our skins. We constantly define an in-group, which leaves everyone else in the out-group. To our consternation, we have learned to create multiple, confusing, and sometimes contradictory in-groups. Cubs fans, military organizations, fellow hobbyists, churchgoers. The list is nearly endless. Ten thousand years ago almost all of us would have lived and died in the same in-group.
My revised version of Dr. King's statement looks like this:
Humans hate outsiders because they fear outsiders. Evolution taught us to fear anyone outside of our group. This can be overridden by creating a culture, by redefining an in-group, but it cannot ever become universal without changing what we are as a species.

Quotes of the day: On responding to technological change

  • "The unleashed power of the atom has changed everything save our modes of thinking, and we thus drift toward unparalleled catastrophes." -- Albert Einstein
  • "Most of our assumptions have outlived their uselessness." — Marshall McLuhan
  • "The greatest danger in times of turbulence is not the turbulence; it is to act with yesterday’s logic." — Peter Drucker
  • "In times of rapid change, experience could be your worst enemy." — J. Paul Getty

Tuesday, January 13, 2015

Writer's Notebook - 13 January 2015

Various epistemological "razors"

  • (William of) Occam's Razor: Among competing hypotheses, the one with the fewest assumptions should be selected. ("Plurality must never be posited without necessity")
  • (Christopher) Hitchens' Razor: "What can be asserted without evidence can be dismissed without evidence"
  • (Robert J.) Hanlon's Razor: "Never attribute to malice that which is adequately explained by stupidity"
  • (David) Hume's Razor: "If the cause, assigned for any effect, be not sufficient to produce it, we must either reject that cause, or add to it such qualities as will give it a just proportion to the effect"
  • (Mike) Alder's Razor (AKA "Newton's flaming laser sword"): "What cannot be settled by experiment is not worth debating."
  • (Ayn) Rand's Razor: "The requirements of cognition determine the objective criteria of conceptualization." (This is Occam's Razor with a corollary: Concepts are not to be multiplied beyond necessity nor are they to be integrated in disregard of necessity.)
  • (Albert) Einstein's Razor: "Everything should be made as simple as possible, but no simpler." (possibly originally, "It can scarcely be denied that the supreme goal of all theory is to make the irreducible basic elements as simple and as few as possible without having to surrender the adequate representation of a single datum of experience.")

Quote of the day

"Mathematics stipulates structures by axioms: anything that satisfies the group axioms is a group, etc.. Programming takes given structures and builds new ones out of them, and the basic stock of building blocks is centrally important. Very different ways of thinking." -- Pat Hayes

Sunday, January 11, 2015

Writer's Notebook - 11 January 2015

Religious Freedom in the United States

The Virginia Statute of Religious Freedom was drafted 238 years ago in my town of Fredericksburg, Virginia. It was introduced to the Virginia legislature two years later and finally enacted into law in 1786. The statute formally separated church from state in Virginia and was the model for the First Amendment to the US Constitution (adopted in 1791).
Those who suggest that the principle of religious freedom allows citizens of the United States to choose a religion, but does not protect those who profess no religion whatsoever, should read the statute more carefully. The final paragraph makes clear that in the Commonwealth of Virginia, "all men shall be free to profess, and by argument to maintain, their opinions in matters of Religion". There is no requirement to select a religion.
The statute's echo in the First Amendment is slightly less clear and has been widely debated in the last two centuries. The First Amendment states,
“Congress shall make no law respecting an establishment of religion, or prohibiting the free exercise thereof; or abridging the freedom of speech, or of the press, or the right of the people peaceably to assemble, and to petition the Government for a redress of grievances.”
That short passage guarantees six rights to US citizens and is the basis for many of the "rights" so often taken for granted:
  1. freedom from establishment of a national religion (the "establishment clause")
  2. freedom to freely exercise religious choice (the "free exercise clause")
  3. freedom of speech
  4. freedom of the press
  5. freedom to peacefully assemble
  6. freedom to petition the government for redress of grievances
The Virginia Statute of Religious Freedom is celebrated annually in Fredericksburg. The Annual Fredericksburg Religious Freedom March and Celebration includes a march from the Fredericksburg train station to the statute's monument on Washington Avenue. Other events are generally held in the area. This year's events include a presentation by University of Mary Washington Professor Mary Beth Mathews, who will speak on the topic "Religious Freedom: Always Approaching, Never Reaching". There will be a presentation of awards for the three winners of the middle school Importance of Religious Freedom Essay Contest that was sponsored by the University of Mary Washington and Fredericksburg Coalition of Reason.

Quote of the Day

"Human beings are at one and the same time utterly splendid and utterly insignificant." -- James Robertson

Friday, January 09, 2015

Je Suis Charlie

Like so many, I am appalled at the destruction of people and property, and the suppression of ideas, that occurred at the Charlie Hebdo offices in Paris on the 7th of January, 2015. Unlike so many, I cannot accept that the attacks "had nothing to do with religion", a view expressed by a French Muslim today on NPR's Morning Edition. The attacks were carried out by religious extremists for religious reasons. That is, of course, not to suggest that the majority of Muslims condone extremist violence, any more than the majority of Christians, Jews, Hindus or Buddhists condone violence. It is to say that religion, when believed literally, is a powerful and dangerous motivator.

Religion is only dangerous when it is believed.

I also cannot agree with the decision of many news organizations to refrain from publishing the Charlie Hebdo cartoons that are said to have sparked the violence. The cartoons are not, and never were, the issue. The issue is whether freedom of speech trumps religious sensibilities. It simply must do so.

Freedom of speech is a cornerstone of all freedoms in that it allows, and encourages, civil discourse. We accept the risk that it may sometimes cause offense because the benefits strongly outweigh the detriments. Freedom of speech is even more important than freedom of religion in that free speech allows for free expression in religious and other contexts. It is, in fact, only in a climate of free speech that religious tolerance can thrive.

Freedom of speech is more important than the fear of giving offense to others in that there is no end to what may, in some sense, cause offense. I choose not to be stifled by a few fanatics - even if they arm themselves and perpetrate violence.

For those reasons, I have decided to join a few brave (and mostly online) news organizations in republishing one of the Charlie Hebdo cartoons. I do not intend to cause offense. I do intend to stand for the one principle that underlies pluralistic society for the benefit of all.

The cartoon below is entitled "If Mohammed returned" and features the prophet being beheaded by a Muslim extremist. Mohammed is saying, "I am the prophet, fool!". The extremist responds, "Shut up, infidel!". I can think of no more poignant satire of the Charlie Hebdo tragedy than that.


Thursday, January 08, 2015

Writer's Notebook - 8 January 2015

Notes on the Tabula Rasa

The tabula rasa is the philosophical concept that the human mind at birth is blank and without form; adherents of this school hold that experience alone creates a human being.
The term tabula rasa comes from the Latin for "scraped tablet", a reference to the wax tablets used for writing in Roman times. The translation "blank slate" is more commonly heard in modern English.
  • Aristotle recorded the first usage in de Anima (he called it an "unscribed tablet", Book III, chapter 4).
  • ibn Sīnā, known as Avicenna in the West, first used the term tabula rasa in his translation of and commentary to de Anima.
  • John Locke in Essay Concerning Human Understanding (he used the term "white paper" in Book II, Chap 1, Sect 2, and said that "there was in the understandings of men no innate idea of identity" in Book I, Chap 3, Sect 5, and "Whole and part, not innate ideas" in Book I, Chap 3, Sect 6), but see also his (contradictory?) idea that children may learn something in the womb (Book II, Chap 9 Sect 5).
  • Jean-Jacques Rousseau used the idea of the tabula rasa to suggest that humans must learn warfare (18th century).
  • Sigmund Freud used the idea to suggest that personality was formed by family dynamics.
The short course is that the tabula rasa was incredibly important to the historical development of philosophy right into the twentieth century. Unfortunately for those twenty-three hundred years of history, the idea was simply wrong in its extreme form.
The philosophical schools contending over the existence and degree of tabula rasa in the human mind are known as Rationalism vs. Empiricism.
The current debate seems to have coalesced around an understanding that human babies are in fact born with innate cognitive biases, and this would seem to negate any idea of the tabula rasa as the term was initially used. However, many philosophers argue (because this is what they do) that of course that is not what was really meant. I think it was exactly what was meant by Aristotle and ibn Sīnā. What Locke and later thinkers thought is much more up for discussion.
I am a rationalist, in that I believe that the Innate Concept Thesis is correct ("We have some of the concepts we employ in a particular subject area, S, as part of our rational nature" -- Stanford Encyclopedia of Philosophy): We are born with cognitive biases that implement those concepts, such as mirror neurons and Theory of Mind. I discussed these features in more detail in my book review of Why We Believe in God(s) by J. Anderson Thomson.
Modern opponents of the tabula rasa include the linguist Noam Chomsky and the psychologist Steven Pinker. Chomsky is known for his theories of rationalist epistemology, including his theory that aspects of language are innate to a newborn child. Pinker claimed in his The Blank Slate: The Modern Denial of Human Nature [Goodreads] that the tabula rasa was responsible for "most mistakes" in modern social science, urging his colleagues to view humanity through the lens of evolution first and to forgo preconceived notions gleaned from philosophic thought alone.
The Computational Theory of Mind, the relatively recent idea that human cognition is a form of computation (although implemented in a way very different from an electronic computer), was formulated partly in rejection of the tabula rasa. The theory was developed primarily by the mathematician and philosopher Hilary Putnam and the philosopher Jerry Fodor, and has been extended in recent times by Steven Pinker.
The intellectual heritage of the Computational Theory of Mind can be summarized as follows, where red arrows indicate theories that have been replaced by new understanding and black arrows indicate theories that still stand:

Quote of the Day

"Pristine prose or voice or funny or a brilliant simile in the first page or a great title or a great character name or authority or what the fuck or whole new world or something intangible but moving or alarming or surprising or terrifying or consoling or titillating or suicidal." -- Betsy Lerner on what makes a "perfect" book manuscript, in an interview by LitStack.