An Introduction to Evolutionary Musicology
Presented August 2, 2002, XVIIth Int’l Congress of the International Musicological Society, Leuven, Belgium.
Jonathan G. Secora Pearl
Center for the Interdisciplinary Study of Music
University of California, Santa Barbara
Because I have invited this distinguished panel of speakers, and the prospect of their talks has brought you all here, it would be useful for me to outline just what enterprise we are engaging in. Can music be reasonably discussed among such diverse fields as Anthropology, Archeology, Neurobiology, and Psychology; and can Musicologists ably engage such an assorted entourage? To answer in the affirmative, I present the emerging field of Evolutionary Musicology, which we will be discussing today. Each of the speakers here will present a unique viewpoint on the subject matter, bringing to bear a great variety of backgrounds, interests, and methods. We will not agree on all points, but we share a belief that the enterprise is worthy of our impassioned debate. To begin the discussion, I will present some preliminary thoughts on the subject from my own vantage, focusing on the comparison of spoken language to song, and thereby explaining how someone like me, trained as a singer and musicologist, gained an interest in this emerging field.
What is Evolutionary Musicology?
Evolutionary Musicology, simply put, is the study of music within an evolutionary context. The questions we raise relate to the nature of music and the nature of the human animal who, alone among the species, engages in and comprehends musical behaviors. Evolution by natural selection is seen as a prominent explanation of the development of physical traits and, by analogy, cognitive capacities within a species. In its simplest and least contentious form, the theory asserts that characteristics arising within a species which confer upon their bearers some reproductive advantage over others tend to be those which survive and increase through successive generations of offspring. The question then arises: to what extent does our genetic inheritance influence the behaviors we engage in? Or, put the other way: to what extent do our behaviors influence our genetic inheritance? Surely certain behaviors have a greater impact on our reproductive success than others, and by implication these are more likely to have been forged in some way by evolutionary pressures.
Without risking at the outset, however, a full-fledged debate on the relative merits and demerits of the various schools of thought on evolution (for there are indeed quite a few), it is sufficient for our present purpose to recognize that certain features of the human species appear to be universal, or nearly so: two ears, ten fingers, the structure and position of the larynx, and so forth. Further, it can be observed that certain physical and physiological traits lend themselves to certain sorts of behaviors. Having wings lends itself to flight, prehensile hands to tool use, voices to the production of song. It is also quite apparent that, in addition to physical traits, certain behaviors predominate in any given species to such an extent that they might likewise be considered universal.
To be sure, universality is not essential for genetic inheritance (male and female genders are surely inherited, though neither universally; further, the wide variety of physical environments in which humans, for instance, find themselves determines that the value of some behaviors is clearly context-dependent: perhaps beneficial in one region or context, but detrimental in another). In general, however, the recognition of traits that are species-universal (or species-typical, as some prefer) and specific or unique to a given species (species-specific) has been considered sufficient argument to view them as products of evolutionary processes, or adaptations, as they are often called. Beyond physical traits, similar arguments have been made for the inclusion of behaviors and mental capacities.
Language, why not music?
Among the behaviors most cited in this regard is human language use. Interestingly, some theorists most vocal on behalf of language have been likewise vociferous in their denial of music as an adaptation, despite the fact that both, in some form, can be found in all cultures and throughout known history. For instance, Steven Pinker (1997: 528) has written: “As far as biological cause and effect are concerned, music is useless. It shows no signs of design for attaining a goal such as long life, grandchildren, or accurate perception and prediction of the world. Compared with language, vision, social reasoning, and physical know-how, music could vanish from our species and the rest of our lifestyle would be virtually unchanged.” The problem with such reasoning, however, besides the obvious, is that it fails entirely to acknowledge not only the great complexity of language and of music, but also their great similarity in terms of biological and physical material. In particular, a comparison between spoken language and singing reveals a great difficulty for those who would argue for language but against music as an adaptation. This holds not only at the surface level of the sound itself, but apparently also for the underlying cognitive processes involved in the production and perception of both music and language.
As Aniruddh Patel and colleagues (Patel, et al., 1998: 136) put it: “It is obvious that prosodic and musical processing share resources at certain neural levels: there are not separate cochlea for language and music.” They go on to state that “it is equally obvious that at some point melodic and rhythmic patterns and linguistic prosody follow separate pathways.” Yet it is not clear-cut just at what point these separations occur, nor what causes the processing of linguistic prosody and music to follow “separate pathways”. While it is apparent that music and language differ in both their functions and their products, it is not entirely clear whether these differences warrant separate evolutionary stories.
Dichotic listening and brain hemispheres
The more one looks at the ambiguous domains defined by speaking and singing, the more similarity one finds. Part of the difficulty lies in understanding just what has been meant by terms like “music” and “language”. For the most part, theorists like Pinker have dealt with these as abstractly constructed categories, rather than as the sounds, movements, and perceptions that most of us experience when we engage in speech or music. This abstraction has muddled the discussion unnecessarily. What has confused the matter even more has been the abundant research over the years, in particular during the 1960s and ’70s, that purports to address brain processing for music or for language without sufficiently defining these terms. The arguments therefore become circular.
Two stimuli may be presented that differ in some regard. The one is arbitrarily termed linguistic (or “verbal”), the other nonverbal, with music lumped into the latter category. As long as the experimental or observational results find that the two stimuli are treated somehow differently, some conclusion can be drawn, regardless of its ecological validity, that is, its relevance to ordinary experience. In most of the studies from the 1960s and 1970s the common paradigm was known as dichotic listening: two stimuli were presented simultaneously, one in each ear, and, based on subject performance, determinations were made regarding which half of the brain better processed the given stimuli.
In popular parlance, people began to speak of individuals as right-brained or left-brained. These studies, while perhaps innovative for their day, were crude and in many cases poorly designed. What does it mean that someone is more able to pick out words sounding in their right ear than simultaneous words sounding in their left, or that the same person is better at picking out the sound of a toilet flushing in their left ear than the sound of a car engine starting in their right? (Curry 1967) From an evolutionary standpoint, such occurrences were unlikely to have presented themselves to early hominids; and what is worse, they reveal little about the online cognitive processing in modern humans.
This is not meant to criticize the important work that is done in studying brain processing in living subjects. It is, however, meant to point out that recent technological advances have rendered such crude work obsolete, and even suspect. Current capabilities allow far greater sensitivity to selected brain regions (in some cases individual neurons) than earlier attempts, which sought merely to identify hemispheric dominance for particular tasks, tacitly viewing each brain hemisphere as a quasi-undifferentiated whole. Because work dating back to the late nineteenth and early twentieth centuries, most prominently by Paul Broca, Carl Wernicke, and Hughlings Jackson, had correlated various syndromes with particular brain regions even within the same hemisphere, this over-reliance on hemispheric difference in the mid-twentieth century seems outdated even for its day. It appears to have been a case of the technology driving the questions rather than the other way around.
The essence of the matter
Beyond the technology itself, however, the methods and categories of observation in many of these studies are problematic. Neither music nor language is a simple behavior lending itself to simplistic labels (such as lumping them into opposing categories like verbal vs. nonverbal). In spoken language, pitch, rhythm, timbre, tempo, and intensity are all a natural part of the sound source, just as they are in music. In what ways do these elements differ in the context of speech from that of music, in particular song? If the same words can be both spoken and sung, what is it that distinguishes the one sort of stimulus from the other? There have been many attempts to define these distinctions, the most obvious candidates being duration and stability of pitch, yet for every example supporting these arguments, there are just as many countering them.
Because of the great diversity of musical and linguistic cultures around the world, any evolutionary or biological theory of their origins and development over time must accommodate this variety. At the same time, if music and language can properly be distinguished, any such theory must maintain coherent definitions for language and music that distinguish the one from the other across cultures as well. Relying merely on differences in the sound source itself, without regard to the cognitive domain and the social and cultural context that informs cognition, is likely to fail.
The question remains fundamentally, what is music? How does this behavior relate to other human traits such as language and tool use? How does it fit into the scheme of social and communicative behaviors that have evolved in humans and those arising in other animals? And how do modern humans acquire the capacity to comprehend and produce music?
Connection to history
Back in the States, colleagues often wonder what all this has to do with music history and analysis, so I’ll tell you a little story. In 1854, in the northern Moravian village of Hukvaldy, was born a curious fellow who became the composer Leoš Janáček. As you may well know, he composed folksy operas to librettos in the Czech language, and he was a nationalist and a passionate advocate for a Czech university in his adopted home of Brno. You may also realize that, like Bartók, Janáček traveled the countryside in search of folk songs to preserve. What you may not know about him is his curious habit of eavesdropping on conversations. He was fascinated by the melodies and rhythms of everyday speech, finding in them the beauty of nature, really of human nature.
In addition to his collections of folk music, Janáček left behind four decades’ worth of speech melody transcriptions. He described these as revealing a deep, inner truth that otherwise lay hidden. He wrote (Zemanová 1989: 121-122): “Perhaps it was like this, strange as it seemed, that whenever someone spoke to me, I may have not grasped the words, but I grasped the rise and fall of the notes. At once I knew what the person was like: I knew how he or she felt, whether he or she was lying, whether he or she was upset.” That is, to turn the cliché around, Janáček wished to investigate the music of language, rather than the language of music.
But focusing on the melody and rhythm of speech, as Janáček did, rather than on the words or grammar, and thus drawing a connection between language and music, is a product of the cognitive flexibility that we all inherit. Janáček himself was quite interested in the cognitive processes that gave rise to his perceptions. He studied the work of Helmholtz and others; he devoured aesthetic and philosophical treatises, contributing his own collection of essays and theoretical and pedagogical texts in an attempt to make sense of these issues. As a twenty-first-century scholar seeking to understand his experience, I am compelled to use the tools of my era, bringing to bear the last century’s research in cognitive science. And yet the issues go beyond his individual experience, leading on a path toward universals: the shared, common experience of all humans that Janáček sought. This path leads inevitably to questions of human origins, of the mechanism by which we as humans inherited the capacities that we share, and by which we continue to develop these capacities reliably and automatically as we grow from infants into socialized adults. And this, of course, is the path of Evolutionary Musicology.
The various speakers today will provide their own views on these questions. I would like to draw your attention to the two posters that are part of our panel. Unfortunately, Steven Brown and Bruce Richman were unable to attend, but have graciously provided their planned talks which are displayed [location]. We will be starting off this morning session then with Don Hodges… etc.
Curry, Frederic K. W. “A comparison of left-handed and right-handed subjects on verbal and non-verbal dichotic listening tasks,” Cortex 3 (1967), 343-352.
Patel, Aniruddh D., Isabelle Peretz, Mark Tramo, and Raymonde Labreque. “Processing Prosodic and Musical Patterns: A Neuropsychological Investigation,” Brain and Language, 61 (1998), 123-144.
Pinker, Steven. How the Mind Works. New York: W.W. Norton & Company, 1997.
Zemanová, Mirka, ed. & trans. Janáček’s Uncollected Essays on Music. New York: Rizzoli International, 1989.