But What If We’re Wrong?

  Here again, I must reiterate that I am like this, too. When I claim that Obama is the finest president of my lifetime, I’m using criteria I’ve absorbed without trying, all of which are defined by my unconscious assumption that the purest manifestation of representative democracy would be the best scenario for the country and the world. This is, in fact, what I believe. But I don’t know why I believe this, outside of the realization that I can’t really control my own thoughts and feelings. When I see a quote from Plato that condescendingly classifies democracy as “charming” and suggests democracy dispenses “a sort of equality to equals and unequals alike,” my knee-jerk reaction is to see this as troubling and unenlightened. But Plato is merely arguing that democracy is a nice idea that tries to impose the fantasy of fairness upon an organically unfair social order. I’m not sure how anyone could disagree with that, myself included. But if you’re really into the idea of democracy, this is something you reject out of hand.

  On those rare occasions when the Constitution is criticized in a non-academic setting, the criticisms are pointed. It’s often argued, for example, that the Second Amendment is antiquated and has no logical relationship to the original need to own a musket in order to form a militia, or that the Fourteenth Amendment’s extension of personhood to corporations has been manipulated for oppressive purposes. The complaints suggest we tweak the existing document with the intent of reinforcing the document’s sovereignty within the present moment (because the present is where we are, and no one would ever suggest starting over from scratch). But sometimes I think about America from a different vantage point. I imagine America as a chapter in a book, centuries after the country has collapsed, encapsulated by the casual language we use when describing the foreboding failure of the Spanish Armada in 1588. And what I imagine is a description like this: The invention of a country is described. This country was based on a document, and the document was unassailable. The document could be altered, but alterations were so difficult that it happened only seventeen times in two hundred years (and one of those changes merely retracted a previous alteration). The document was less than five thousand words but applied unilaterally, even as the country dramatically increased its size and population and even though urban citizens in rarefied parts of the country had nothing in common with rural citizens living thousands of miles away. The document’s prime directives were liberty and representation, even when 5 percent of the country’s population legally controlled 65 percent of the wealth. But everyone loved this document, because it was concise and well composed and presented a possible utopia where everyone was the same. It was so beloved that the citizens of this country decided they would stick with it no matter what happened or what changed, and the premise of discounting (or even questioning) its greatness became so verboten that any political candidate who did so would have no chance to be elected to any office above city alderman. The populace decided to use this same document forever, inflexibly and without apprehension, even if the country lasted for two thousand years.

  Viewed retrospectively, it would not seem stunning that this did not work out.

  Now, do I have a better alternative here? I do not. If George Washington truly had been offered the chance to be king, I am not of the opinion that life would be better had we handed him the crown, since that would mean we’d currently be governed by some rich guy in Virginia who happens to be his distant nephew. It often seems like a genteel oligarchy would make the most theoretical sense, but the fact that this never works in practice (and the fact that they never remain genteel) contradicts that notion. Sometimes I fantasize about the US head of state as a super-lazy, super-moral libertarian despot and think, “That would certainly make everything easier,” even though I can’t think of one person who’d qualify, except maybe Willie Nelson. I’m not looking to overthrow anybody. The first moment someone calls for a revolution is usually the last moment I take them seriously. I’m not Mr. Robot. And I’m not saying we’re “wrong” for caring about the Constitution or separating the powers of government or enforcing an illusion of equality through the untrue story of how democracy works. I’m just working through my central hedgehog thought, which is this: The ultimate failure of the United States will probably not derive from the problems we see or the conflicts we wage. It will more likely derive from our uncompromising belief in the things we consider unimpeachable and idealized and beautiful. Because every strength is a weakness, if given enough time.

  But What If We’re Right?

  When John Horgan published his book The End of Science in 1996, he’d been a staff writer for Scientific American for ten years. A year later, he was fired from the magazine. According to Horgan, his employers suggested his book had caused a downturn in advertising revenue. This claim seems implausible, until you hear Horgan’s own description of what his book proposed.

  “My argument in The End of Science is that science is a victim of its own success,” he tells me from his home in Hoboken. “Science discovers certain things, and then it has to go on to the next thing. So we have heliocentrism and the discovery of gravity and the fundamental forces, atoms and electrons and all that shit, evolution, and DNA-based genetics. But then we get to the frontier of science, where there is still a lot left to discover. And some of those things we may never discover. And a lot of the things we are going to discover are just elaborations on what we discovered in the past. They’re not that exciting. My belief is that the prospect for really surprising insights into nature is over, and the hope for future revolutionary discoveries is pretty much done. I became a science journalist because I thought science was the coolest thing that humans have ever done. So if you believe the most important thing about life is the pursuit of knowledge, what does it mean if that’s over?”

  It’s now been twenty years since the release of The End of Science. Horgan has written four additional books and serves as the director of the Center for Science Writings at the Stevens Institute of Technology (he’s also, somewhat interestingly, returned to Scientific American as a blogger). The central premise of his book—that the big questions about the natural world have been mostly solved, and that the really big questions that remain are probably impossible to answer—is still marginalized as either cynical or pragmatic, depending on the reader’s point of reference. But nothing has happened since 1996 to prove Horgan wrong, unless you count finding water on Mars. Granted, twenty years is not that long, particularly if you’re a scientist. Still, it’s remarkable how unchanged the conversational landscape has remained. Horgan’s most compelling interview in The End of Science was with the relatively reclusive Edward Witten, a Princeton professor broadly viewed as the greatest living theoretical physicist (or at least the “smartest,” according to a 2004 issue of Time magazine). One of the first things Witten noted in that interview was that Horgan had been journalistically irresponsible for writing a profile on Thomas Kuhn, with Witten employing much of the same logic Neil deGrasse Tyson used when he criticized Kuhn in our 2014 conversation for this book.

  Now, there’s at least one significant difference between those two interviews: I was asking if it’s possible that science might be wrong. Horgan was proposing science has been so overwhelmingly right that all that remains are tertiary details. Still, both tracts present the potential for an awkward realization. If the answer to my question is no (or if the answer to Horgan’s question is yes), society is faced with a strange new scenario: the possibility that our current view of reality is the final view of reality, and that what we believe today is what we will believe forever.

  “One of the exercises I always give my [Stevens Institute] students is an essay assignment,” Horgan says. “The question is posed like this: ‘Will there be a time in our future when our current theories seem as dumb as Aristotle’s theories appear to us now?’ And the students are always divided. Many of them have already been infected by postmodernism and believe that knowledge is socially constructed, and they believe we’ll have intellectual revolutions forever. You even hear that kind of rhetoric from mainstream science popularizers, who are always talking about science as this endless frontier. And I just think that’s childish. It’s like thinking that our exploration of the Earth is still open-ended, and that we might still find the lost city of Atlantis or dinosaurs living in the center of the planet. The more we discover, the less there is to discover later. Now, to a lot of people, that sounds like a naïve way to think about science. There was a time when it once seemed naïve to me. But it’s really just a consequence of the success of science itself. Our era is in no way comparable to Aristotle’s era.”

  What Horgan proposes is mildly contradictory; it compliments and criticizes science at the same time. He is, like Witten and Tyson, blasting Kuhn’s relativist philosophy and insisting that some knowledge is real and undeniable. But he’s also saying the acquisition of such knowledge is inherently limited, and we’ve essentially reached that limit, and that a great deal of modern scientific inquiry is just a form of careerism that doesn’t move the cerebral dial (this is a little like what Kuhn referred to as “normal science,” but without the paradigm shift). “Science will follow the path already trodden by literature, art, music, philosophy,” Horgan writes. “It will become more introspective, subjective, diffuse, and obsessed with its own methods.” In essence, it will become a perpetual argument over a non-negotiable reality. And like all speculative realities, it seems like this could be amazingly good or amazingly bad.

  “By the time I finally finished writing The End of Science, I’d concluded that people don’t give a shit about science,” Horgan says. “They don’t give a shit about quantum mechanics or the Big Bang. As a mass society, our interest in those subjects is trivial. People are much more interested in making money, finding love, and attaining status and prestige. So I’m not really sure if a post-science world would be any different than the world of today.”

  Neutrality: the craziest of all possible outcomes.

  [2] When I spoke with Horgan, he’d recently completed his (considerably less controversial) fifth book, The End of War, a treatise arguing against the assumption that war is an inescapable component of human nature. The embryo for this idea came from a conversation he’d had two decades prior, conducted while working on The End of Science. It was an interview with Francis Fukuyama, the political scientist best known for his 1989 essay “The End of History?” The title of the essay is deceptive, since Fukuyama was mostly asserting that liberal capitalist democracies were going to take over the world. It was an economic prediction that (thus far) has not come to pass. But what specifically appalled Horgan was Fukuyama’s assertion about how a problem-free society would operate. Fukuyama believed that once mankind eliminated all its problems, it would start waging wars against itself for no reason, almost out of boredom. “That kind of thinking comes from a kind of crude determinism,” Horgan insists. “It’s the belief that what has always been in the past must always be in the future. To me, that’s a foolish position.”

  The level to which you agree with Horgan on this point reflects your level of optimism about human nature (and Horgan freely admits some of his ideas could be classified as “traditionally hippie-ish”). But it can safely be argued that Fukuyama’s perspective is much more common, particularly among the kind of people who produce dystopic sci-fi movies. Whether it’s Avengers: Age of Ultron, The Matrix, the entire Terminator franchise, or even a film as technologically primitive as WarGames, a predictable theme inexorably emerges: The moment machines become self-aware, they will try to destroy people. What’s latently disturbing about this plot device is the cynicism of the logic. Our assumption is that computers will only act rationally. If the ensuing assumption is that human-built machines would immediately try to kill all the humans, it means that doing so must be the most rational decision possible. And since this plot device was created by humans, the creators must fractionally believe this, too.

  On the other end of this speculative scale—or on the same end, if you’re an especially gloomy motherfucker—are proponents of the Singularity, a techno-social evolution so unimaginable that attempting to visualize what it would be like is almost a waste of time. The Singularity is a hypothetical super-jump in the field of artificial intelligence, rendering our reliance on “biological intelligence” obsolete and pushing us into a shared technological realm so advanced that it would be unrecognizable to anyone living today. The best-known advocate of this proposition, futurist Ray Kurzweil, suggests that this could happen as soon as the year 2045, based on an exponential growth model. But that is hard to accept. Everyone agrees that Kurzweil is a genius and that his model makes mathematical sense, but no man truly believes this is going to happen in his own lifetime (sans a handful of people who are already living their lives very, very, very differently). It must also be noted that Kurzweil initially claimed this event was coming in 2028, so the inception of the Singularity might be a little like the release of Chinese Democracy.

  Even compared with Bostrom’s simulation hypothesis or the Phantom Time conspiracy, the premise of the Singularity is so daunting that it can’t reasonably be considered without framing it as an impossibility. The theory’s most startling detail involves the option of mapping and downloading the complete content of a human brain onto a collective server, thus achieving universal immortality—we could all live forever, inside a mass virtual universe, without the limitations of our physical bodies (Kurzweil openly aspires to create an avatar of his long-dead father, using scraps of the deceased patriarch’s DNA and exhaustive notes about his father’s life). The parts of our brain that generate visceral sensations could be digitally manipulated to make it feel exactly as if we were still alive. This, quite obviously, generates unfathomable theological and metaphysical quandaries. But even its most practical aspects are convoluted and open-ended. If we download the totality of our minds onto the Internet, they—we—would effectively become the Internet itself. Our brain avatars could automatically access all the information that exists in the virtual world, so we would all know everything there is to know.

  But I suppose we have a manual version of this already.

  [3] I was born in 1972, and—because I ended up working in the media—I feel exceedingly fortunate about the timing of that event. It allowed me to have an experience that is not exactly unique, but that will never again be replicated: I started my professional career in a world where there was (essentially) no Internet at all, and I’ll end my professional career in a world where the Internet will be (essentially) the only thing that exists. When I showed up for my first day of newspaper work in the summer of ’94, there was no Internet in the building, along with an institutional belief that this would be a stupid thing to want. If I aspired to send an e-mail, I had to go to the public library across the street and wait for the one computer that was connected to a modem (and even that wasn’t an option until 1995). From a journalistic perspective, the functional disparity between that bygone era and the one we now inhabit is vast and quirky—I sometimes made more phone calls in one morning than I currently make in two months. But those evolving practicalities were things we noticed as they occurred. The amplification of available information and the increase in communication speed were obvious to everyone. We talked about it constantly. What was harder to recognize was how the Internet slowly reinvented the way people thought about everything, including those things that have no relationship to the Internet whatsoever.

  In his autobiography Chronicles, Bob Dylan (kind of) explains his motivation for performing extremely long songs like “Tom Joad,” a track with sixteen verses. His reasoning was that it’s simply enriching to memorize complicated things. Born in 1941, Dylan valued rote memorization, a proficiency that had been mostly eliminated by the time I attended grade school in the eighties (the only long passages I was forced to memorize verbatim were the preamble to the Constitution, the Gettysburg Address, and a whole bunch of prayers). Still, for the first twenty-five years of my life, the concept of intelligence was intimately connected to broad-spectrum memory. If I was having an argument with a much older person about the 1970 Kent State shootings, I’d generally have to defer to her analysis, based on the justifiable fact that she was alive when it occurred and I was not. My only alternative was to read a bunch of books (or maybe watch a documentary) about the shooting and consciously retain whatever I learned from that research, since I wouldn’t be able to easily access the data again. It was also assumed that—anecdotally, speaking off the cuff—neither party would be 100 percent correct about every arcane detail of the shooting, but that certain key details mattered more than others. So a smart person had a generalized, autodidactic, imperfect sense of history. And there was a circular logic to this: The importance of any given memory was validated by the fact that someone remembered it at all.