Are We Smart Enough to Know How Smart Animals Are?

Given that the discontinuity stance is essentially pre-evolutionary, let me call a spade a spade, and dub it Neo-Creationism. Neo-Creationism is not to be confused with Intelligent Design, which is merely old creationism in a new bottle. Neo-Creationism is subtler in that it accepts evolution but only half of it. Its central tenet is that we descend from the apes in body but not in mind. Without saying so explicitly, it assumes that evolution stopped at the human head. This idea remains prevalent in much of the social sciences, philosophy, and the humanities. It views our mind as so original that there is no point comparing it to other minds except to confirm its exceptional status. Why care about what other species can do if there is literally no comparison with what we do? This saltatory view (from saltus, or “leap”) rests on the conviction that something major must have happened after we split off from the apes: an abrupt change in the last few million years or perhaps even more recently. While this miraculous event remains shrouded in mystery, it is honored with an exclusive term—hominization—mentioned in one breath with words such as spark, gap, and chasm.5 Obviously, no modern scholar would dare mention a divine spark, let alone special creation, but the religious background of this position is hard to deny.

In biology, the evolution-stops-at-the-head notion is known as Wallace’s Problem. Alfred Russel Wallace was a great British naturalist who lived at the same time as Charles Darwin and is considered the codiscoverer of evolution by means of natural selection. In fact, this idea is also known as the Darwin-Wallace Theory. Whereas Wallace definitely had no trouble with the notion of evolution, he drew the line at the human mind. He was so impressed by what he called human dignity that he couldn’t stomach comparisons with apes. Darwin believed that all traits were utilitarian, being only as good as strictly necessary for survival, but Wallace felt there must be one exception to this rule: the human mind. Why would people who live simple lives need a brain capable of composing symphonies or doing math? “Natural selection,” he wrote, “could only have endowed the savage with a brain a little superior to that of an ape, whereas he actually possesses one but very little inferior to that of the average member of our learned societies.”6 During his travels in Southeast Asia, Wallace had gained great respect for nonliterate people, so for him to call them only “very little inferior” was a big step up from the prevailing racist views of his time, according to which their intellect was halfway between that of an ape and Western man. Although he was nonreligious, Wallace attributed humanity’s surplus brain power to the “unseen universe of Spirit.” Nothing less could account for the human soul. Unsurprisingly, Darwin was deeply disturbed to see his respected colleague invoke the hand of God, in however camouflaged a way. There was absolutely no need for supernatural explanations, he felt. Nevertheless, Wallace’s Problem still looms large in academic circles eager to keep the human mind out of the clutches of biology.

I recently attended a lecture by a prominent philosopher who enthralled us with his take on consciousness, until he added, almost like an afterthought, that “obviously” humans possess infinitely more of it than any other species. I scratched my head—a sign of internal conflict in primates—because until then the philosopher had given the impression that he was looking for an evolutionary account. He had mentioned massive interconnectivity in the brain, saying that consciousness arises from the number and complexity of neural connections. I have heard similar accounts from robot experts, who feel that if enough microchips connect within a computer, consciousness is bound to emerge. I am willing to believe it, even though no one seems to know how interconnectivity produces consciousness nor even what consciousness exactly is.

The emphasis on neural connections, however, made me wonder what to do with animals with brains larger than our 1.35-kilogram brain. What about the dolphin’s 1.5-kilogram brain, the elephant’s 4-kilogram brain, and the sperm whale’s 8-kilogram brain? Are these animals perhaps more conscious than we are? Or does it depend on the number of neurons? In this regard, the picture is less clear. It was long thought that our brain contained more neurons than any other brain on the planet, regardless of size, but we now know that the elephant brain has three times as many neurons as ours—257 billion, to be exact. These neurons are differently distributed, though, with most of the elephant’s in its cerebellum. It has also been speculated that the pachyderm brain, being so huge, has many connections between far-flung areas, almost like an extra highway system, which adds complexity.7 In our own brain, we tend to emphasize the frontal lobes—hailed as the seat of rationality—but according to the latest anatomical reports, they are not truly exceptional. The human brain has been called a “linearly scaled-up primate brain,” meaning that no areas are disproportionally large.8 All in all, the neural differences seem insufficient for human uniqueness to be a foregone conclusion. If we ever find a way of measuring it, consciousness could well turn out to be widespread. But until then some of Darwin’s ideas will remain just a tad too dangerous.

This is not to deny that humans are special—in some ways we evidently are—but if this becomes the a priori assumption for every cognitive capacity under the sun, we are leaving the realm of science and entering that of belief. Being a biologist who teaches in a psychology department, I am used to the different ways disciplines approach this issue. In biology, neuroscience, and the medical sciences, continuity is the default assumption. It couldn’t be otherwise, because why would anyone study fear in the rat amygdala in order to treat human phobias if not for the premise that all mammalian brains are similar? Continuity across life-forms is taken for granted in these disciplines, and however important humans may be, they are a mere speck of dust in the larger picture of nature.

Increasingly, psychology is moving in the same direction, but in other social sciences and the humanities discontinuity remains the typical assumption. I am reminded of this every time I address these audiences. After a lecture that inevitably (even if I don’t always mention humans) reveals similarities between us and the other hominoids, the question invariably arises: “But what then does it mean to be human?” The but opening is telling, as it sweeps all the similarities aside in order to get to the all-important question of what sets us apart. I usually answer with the iceberg metaphor, according to which there is a vast mass of cognitive, emotional, and behavioral similarities between us and our primate kin. But there is also a tip containing a few dozen differences. The natural sciences try to come to grips with the whole iceberg, whereas the rest of academia is happy to stare at the tip.

In the West, fascination with this tip is old and unending. Our unique traits are invariably judged to be positive, noble even, although it wouldn’t be hard to come up with a few unflattering ones as well. We are always looking for the one big difference, whether it is opposable thumbs, cooperation, humor, pure altruism, sexual orgasm, language, or the anatomy of the larynx. It started perhaps with the debate between Plato and Diogenes the Cynic about the most succinct definition of the human species. Plato proposed that humans were the only creatures at once naked and walking on two legs. This definition proved flawed, however, when Diogenes brought a plucked fowl to the lecture room, setting it loose with the words “Here is Plato’s man.” From then on the definition added “having broad nails.”
