This is an old problem, best answered (and maybe even solved) by the philosopher Thomas Nagel in his 1974 essay “What Is It Like to Be a Bat?” For philosophy students, the essay is about the conflict between objectivity and subjectivity, and Nagel’s exploration of a bat’s consciousness was simply the example he happened to use. But the specifics of “What Is It Like to Be a Bat?” are pertinent to the problem of personification. Nagel asks if it’s possible for people to conceive what it’s like to be a bat, and his conclusion is that it (probably) is not; we can only conceive what it would be like to be a human who was a bat.73 For example, bats use echolocation sonar to know what’s in front of them (they emit a sound and listen for the returning echo). It’s not difficult to imagine humans having echolocation sonar and how that would help us walk through a pitch-black room. That experience can be visualized. But what we can’t understand is how that experience informs the consciousness of a bat. We can’t even assess what level of consciousness a bat possesses, since the only available barometer for “consciousness” is our own. The interior life of a bat (or an octopus, or any nonhuman creature) is beyond our capacity. And as a society, we are comfortable with not knowing these things—although less comfortable than we were in (say) the nineteenth century and much less comfortable than we were in (say) the fifteenth century. So imagine that this evolution continues. What would happen if we eventually concluded—for whatever collection of reasons—that our human definition of logic is an inferior variety of intelligence? Humans would still be the Earth’s dominant life form, but for reasons that would validate our worst fears about humanity.
For a little under three years, I wrote an advice column called “The Ethicist” for The New York Times Magazine. It was a job that was easy to do poorly but difficult to do well. The risks were greater than the rewards. But I always enjoyed fielding the questions, and my favorite came late in my tenure. It involved Koko, a gorilla living in the San Francisco Zoo who was renowned for her use of sign language and the intimate relationship she shared with her handlers. The reader’s query focused on the suicide of comedic actor Robin Williams. Koko had met Williams, once, in 2001, and they evidently had an excellent rapport. According to press reports, Koko cried when informed of Williams’s 2014 death. The writer of the question wanted to know if there was any moral purpose in making a gorilla depressed over the suicide of a person she met only once, thirteen years prior.
The ethical ramifications of this act certainly matter. But they don’t matter nearly as much as the scenario itself, were we somehow able to prove that it was real.
From the perspective of a human, the whole story seems a tad specious. At worst, it looks like an exploitative publicity stunt by the zoo; at best, it seems like a smart gorilla might adopt the characteristics of sadness anytime her handlers suggested that she had something to be sad about. Moreover, the alternative possibilities in between are fucking bananas. Since Koko is a gorilla, there is no way she can comprehend the concept of “celebrity” (which would mean either she deduced something about Williams that’s alien to her own species or she remembers every person she’s ever met, even if they met only once). Would this mean that apes empathize with all other animals equally? How would a gorilla know what death is, or what suicide is, or that death is sad, or that she, too, will die? These are unthinkable abstractions to apply to a creature with the cognitive faculties of a three-year-old human toddler. But when I said that to veterinarian Vint Virga, the respected pro-animal author of The Souls of All Living Creatures, he told me my view was too locked into a simplistic conception of intellect (and that it would be unethical not to tell Koko about the passing of Williams).
“I would set aside the issue of the animal’s cognitive intelligence and focus on the concept of an animal’s emotional intelligence, which studies continue to show is much greater than we ever previously imagined,” says Virga. “Animals and humans both experience joy and sadness throughout their life. Why would you want to shelter a gorilla from that experience? I believe a gorilla absolutely has the ability to understand the loss of someone who was important to her, and animals are often able to deal with grieving and loss much more effectively than humans.”
Let’s assume that Virga is not only correct, but underselling his correctness. Let’s imagine deeper neurological research shows an inherent inverse relationship between logical intelligence and emotional intelligence, and that mammalian species strong in the former category (i.e., people) tend to be weak in the latter category. Let’s also assume that the standard perception of what makes any given person intelligent continues to shift. As recently as the 1980s, the idea of “emotional intelligence” was not taken seriously, particularly by men; today, most professions regard it as important as any scholastic achievement. In a hundred years, qualitative intelligence might be unilaterally prioritized over quantitative aptitude. So if humankind decides that emotional intelligence is really what matters while simultaneously concluding that nonhuman species are superior to humans in this specific regard . . . society would adopt a highly uncomfortable imbalance. I mean, the relationship between man and beast wouldn’t really change. Humans would remain the dominant species. But that dominance would (suddenly) appear to derive exclusively from brute force. It would essentially derive from the combination of opposable thumbs and a self-defined “inferior” brand of intellect that places us in a better position to kill and control our rivals.74 This actuality would swap the polarity of existence. The current belief (among the animal rights community) is that humans are responsible for the welfare of animals and that we must protect them. Our apex slot in the intellectual hierarchy forces us to think on behalf of animals, since they cannot think for themselves. Their innocence is childlike. But if animals are actually more intelligent than humans—and if we were all to agree that they are, based on our own criteria for what constitutes an intelligent being—it would mean that our sovereignty was founded on mental weakness and empathetic failure. 
It would mean the undeniable success of humankind is just a manifestation of our own self-defined stupidity.
Would this change the world? It would not. This is not a relationship that can be switched. The world would continue as it is. We would not elect a cat as president, or even as comptroller. But this would be a helluva thing to be wrong about, and maybe a good thing to pretend we’re wrong about (just in case).
In the early pages of this book (that you are about to finish), I refer to the writer and critic Kathryn Schulz, based on her publication of Being Wrong and her role as a book critic for New York magazine. In the gap between my interviewing her and writing this sentence, Schulz published an article for The New Yorker that received roughly as much attention as everything else she’d done in her entire career. The article, headlined “The Really Big One,” was about the Cascadia Subduction Zone, a fault line running through the Pacific Northwest. The story’s takeaway was that it’s merely a matter of time before the tectonic plates abutting along this fault line will rupture, generating an earthquake with a magnitude in the vicinity of 9.0, followed by a massive tsunami that will annihilate the entire region. According to multiple researchers, the likelihood of a significant Pacific Northwest earthquake happening in the next fifty years is one in three. The likelihood that it will be “the really big one” is one in ten. FEMA projects that such an event would kill thirteen thousand people. The story’s most memorable quote came from the region’s FEMA director, Kenneth Murphy: “Our operating assumption is that everything west of I-5 will be toast.”
The timing of this story was not ideal. I realize there’s no “ideal time” for information about a killer earthquake, but this one was problematic for personal reasons. Over the past two years, my wife and I have been talking about moving to Portland, Oregon, where she was born and raised. Her childhood home sits twenty-five miles west of I-5 (although I doubt the earthquake would actually use a road atlas when deciding which areas to devastate). Whenever we mention the possibility of relocating to Portland to anyone who reads magazines or listens to NPR or lives in New York, we are now asked, “But aren’t you worried about the earthquake?” My standard response equates to some rambling version of “Kind of, but not really.” It’s not something I think about, except when I’m writing this book. Then I think about it a lot.
This is, at its core, a question about the security of our informed imagination. In one sense, thinking about this earthquake is like thinking about climate change. It’s not really speculative: Tectonic plates shift, and—eventually—these plates will, too. The part that’s unknown is the timing and the specific consequence. In another sense, our move (and the thinking behind that move) becomes a You’re Doing It Wrong proposition: The existence of an article about an event doesn’t increase the chance that the event will happen; the seismic danger of living in Portland right now is the same as it was five years ago. The article could likewise be seen as another example of unhelpful analytics: Though I’m confident the mathematical odds of this earthquake’s transpiring in my lifetime are roughly one in three (or, for the worst-case scenario, one in ten), the validity of those calculations has no practical or instructional application, outside of my knowing what they supposedly are.75 Most significant, it’s also an illustration of the limits of my mind and the tenacity of my own naïve realism: Perhaps I’m just not able to intellectually accept the inevitability of an event I can’t comprehend, so I’m fixating on a geographic risk I know about without considering all the rival risks that have yet to be discovered or written about in periodicals.
The future is always impossible.
But, you know, at least we’re used to it.
In 2005, Indiana senator Richard Lugar surveyed eighty-five national security experts about the possibility of a nuclear detonation “somewhere in the world.” They placed the odds of an attack within the next ten years at around 29 percent. Ten years have now elapsed, and it doesn’t seem like such a scenario was ever particularly close to coming to fruition. Yet as we continue to look forward, it always seems plausible. In 2010, CBS did a story on the possibility of nuclear terrorism. Martin Hellman, a professor emeritus at Stanford (specializing in engineering and cryptography), estimated that the odds of this event increase about 1 percent every year and will approach 40 percent in five decades. Certainly, there’s a logic ladder here that’s hard to refute. An organization like ISIS would love to possess a nuclear weapon, and the potential availability of nuclear technology is proliferating. Everything we know about the group’s ethos suggests that if ISIS were to acquire such a weapon, they would want to use it immediately. If the target wasn’t Israel or France, the target would be the United States. Based on common sense and recent history, the two cities most likely to be attacked would be New York and Washington, D.C. So if I believe that a nuclear weapon will be detonated in my lifetime (which seems probable), and I believe it will happen on US soil (which seems possible), and I live in New York (which I do), I’m consciously raising my family in one of the few cities where I suspect a nuclear weapon is likely to be utilized. Based on this rationale, it would make way more sense for me to move to Portland, where there’s only a 10 percent chance we’ll drown in a tsunami.
But I don’t think like this, except when I’m trying to make a point about how this is not the way I usually think. Instead, I think about whether Jon Franzen will get over, or how people who no longer watch television will remember what television was, or if I’ll still be able to follow the Dallas Cowboys as I deteriorate in an assisted living facility. I think about a future that is totally different, yet unambiguously familiar; people are still walking around and arguing about art and politics and generating the same recycled realizations that every emerging demographic inevitably consumes as new. Do I believe our current assumption about how the present will eventually be viewed is, in all probability, acutely incorrect? Yes. And yet I imagine this coming wrongness to resemble the way society has always been wrong about itself, since the beginning of time. It’s almost like I’m showing up at the Kentucky Derby and insisting the two-to-one favorite won’t win, but refusing to make any prediction beyond “The winner will probably be a different horse.”
Somebody once told me a joke about meteorology. (It’s the kind of joke that somebody’s dad would put on Facebook.) The premise is that we’ve been trying to predict the weather since 3000 BC. The yearly budget for the National Weather Service is $1 billion, which doesn’t even include all the costs incurred by privately funded meteorological institutions and the military and local TV stations and every other organization with a vested interest in predicting what the unexperienced world will be like. Even a conservative estimate places the annual amount of money spent on meteorology at somewhere around $5 billion. And as a result of this investment, our weather can be correctly predicted around 66 percent76 of the time. As a society, we can go two out of three. Yet if some random dude simply says, “I think the weather tomorrow will be the same as the weather today,” he will be right 33 percent of the time. He can go one for three. So we’ve invested hundreds of billions of dollars and countless hours into meteorological research, with the net effect of becoming twice as accurate as some bozo who looks out a window and points at the sky. (And that’s the joke.) I assume this joke is supposed to be a commentary on governmental waste, or an anti-intellectual criticism of science, or proof that nobody knows anything. It might be all of those things. But I don’t care about any of that jazz. I just want the bozo to get lucky. I want the weather to stay the same. I’m ready for a new tomorrow, but only if it’s pretty much like yesterday.
Acknowledgments
The first person I must thank is Melissa Maerz, without whom nothing else is possible.
The second person is researcher Dmitry Kiper, the person who helps me find things I need to find.
Next on the list is affable Brant Rumble, followed by dogged Daniel Greenberg. I’d also like to express appreciation to everyone at Blue Rider Press (particularly David Rosenthal, Aileen Boyle, and Anna Jardine) for making this book exist, along with all the folks back at Scribner who put me in this position to begin with.
I sincerely appreciate everyone I’ve interviewed over the past eighteen months for providing their time and intelligence. I’d like to acknowledge everyone whose work is specifically cited in this book (for having conceived of ideas I could merely replicate). I must also express gratitude to all the interesting people who helped me without even knowing it, particularly James Burke (creator of the documentary series The Day the Universe Changed), Jim Holt (author of Why Does the World Exist?), and George Harrison (for All Things Must Pass and Living in the Material World).
The following fine people read versions of this manuscript and provided feedback directly reflected in the text: Jon Dolan, Jennifer Raftery, Mat Sletten, Bob Ethington, Sean Howe, David Giffels, Rex Sorgatz, Ben Heller, Rob Sheffield, Brian Raftery, Greg Milner, Michael Weinreb, Willy Staley, Phoebe Reilly, Aja Pollock.
I would also like to thank my mom, because I can never thank her enough.
A final note about hedgehogs: In “The Case Against Freedom,” I spend a few pages describing a period of my life when I watched a hedgehog from the balcony of my Akron apartment. It turns out there is a problem with this memory—hedgehogs are not native to North America. Whatever was chomping apples outside my window must have been either a groundhog or a woodchuck (although it was definitely something). I have to assume this is not a well-known fact, since I’ve been telling this anecdote for almost two decades and not one person has ever remarked, “Hey idiot—don’t you realize there are no hedgehogs in Ohio?” That said, I’ve never dated an Erinaceidae zoologist. I’m (very slightly) embarrassed by all this, since I based an entire chapter around a metaphor I did not technically experience. But there was no practical solution to this contradiction, outside of renaming this book But What If We’re Wrong? Thinking About Woodchucks As If They Were Hedgehogs. Chuck Klosterman regrets the error.
Index
Aaronovitch, David, 133–34
abuse as motivation, 188–90
AC/DC, 83n
Adams, Ryan, 72–73, 74, 86–87
Adams, Zed, 148–50
Adler, Renata, 235
After Birth (Albert), 50
age of the universe, 112
Albert, Elisa, 50
“Albums of the Year” list (SPIN), 92n
Allman, Gregg, 85n
alternative universe. See multiverse hypothesis
America’s Game (MacCambridge), 181
analytics, sports, 250–52
“ancillary verisimilitude” of TV, 163–65
animals
intelligence of, 256–57
personification of, 255–56
Anna Karenina (Tolstoy), 56
Aphex Twin, 38n
architecture, 90–92
Aristotle, 5, 99, 100, 101, 106, 111, 149, 225
Arkani-Hamed, Nima, 130–31
Armstrong, Louis, 77
art
appreciation, changing nature of, 70, 243
“good job” response to, 188–89
shared selectively, 37–38