How Do Our Brains Process Music?
I listen to music only at very specific times. When I go out to hear
it live, most obviously. When I’m cooking or doing the dishes I put on
music, and sometimes other people are present. When I’m jogging or
cycling to and from work down New York’s West Side Highway bike path, or
if I’m in a rented car on the rare occasions I have to drive somewhere,
I listen alone. And when I’m writing and recording music, I listen to
what I’m working on. But that’s it.
I find music somewhat intrusive in restaurants or bars. Maybe due to
my involvement with it, I feel I have to either listen intently or tune
it out. Mostly I tune it out; I often don’t even notice if a Talking
Heads song is playing in most public places. Sadly, most music then
becomes (for me) an annoying sonic layer that just adds to the
background noise.
As music becomes less of a thing—a cylinder, a cassette, a disc—and
more ephemeral, perhaps we will start to assign an increasing value to
live performances again. After years of hoarding LPs and CDs, I have to
admit I’m now getting rid of them. I occasionally pop a CD into a
player, but I’ve pretty much completely converted to listening to MP3s
either on my computer or, gulp, my phone! For me, music is becoming
dematerialized, a state that is more truthful to its nature, I suspect.
Technology has brought us full circle.
I go to at least one live performance a week, sometimes with friends,
sometimes alone. There are other people there. Often there is beer,
too. After more than a hundred years of technological innovation, the
digitization of music has inadvertently had the effect of emphasizing
its social function. Not only do we still give friends copies of music
that excites us, but increasingly we have come to value the social
aspect of a live performance more than we used to. Music technology in
some ways appears to have been on a trajectory in which the end result
is that it will destroy and devalue itself. It will succeed completely
when it self-destructs. The technology is useful and convenient, but it
has, in the end, reduced its own value and increased the value of the
things it has never been able to capture or reproduce.
Technology has altered the way music sounds, how it’s composed and
how we experience it. It has also flooded the world with music. The
world is awash with (mostly) recorded sounds. We used to have to pay for
music or make it ourselves; playing, hearing, and experiencing it was
exceptional, a rare and special occasion. Now hearing it is
ubiquitous, and silence is the rarity that we pay for and savor.
Does our enjoyment of music—our ability to find a sequence of sounds
emotionally affecting—have some neurological basis? From an evolutionary
standpoint, does enjoying music provide any advantage? Is music of any
truly practical use, or is it simply baggage that got carried along as
we evolved other more obviously useful adaptations? Paleontologist
Stephen Jay Gould and biologist Richard Lewontin wrote a paper in 1979
claiming that some of our skills and abilities might be like
spandrels—the architectural negative spaces above the curve of the
arches of buildings—details that weren’t originally designed as
autonomous entities, but that came into being as a result of other, more
practical elements around them.
Dale Purves, a professor at Duke University, studied this question
with his colleagues David Schwartz and Catherine Howe, and they think
they might have some answers. They discovered that the sonic range that
matters and interests us the most is identical to the range of sounds we
ourselves produce. Our ears and our brains have evolved to catch subtle
nuances mainly within that range, and we hear less, or often nothing at
all, outside of it. We can’t hear what bats hear, or the subharmonic
sound that whales use. For the most part, music also falls into the
range of what we can hear. Though some of the harmonics that give voices
and instruments their characteristic sounds are beyond our hearing
range, the effects they produce are not. The part of our brain that
analyzes sounds in those musical frequencies that overlap with the
sounds we ourselves make is larger and more developed—just as the visual
analysis of faces is a specialty of another highly developed part of
the brain.
The Purves group also added to this the assumption that periodic
sounds—sounds that repeat regularly—are generally indicative of living
things, and are therefore more interesting to us. A sound that occurs
over and over could be something to be wary of, or it could lead to a
friend, or a source of food or water. We can see how these parameters
and regions of interest narrow down toward an area of sounds similar to
what we call music. Purves surmised that human speech therefore
influenced the evolution of the human auditory system, as well as the
part of the brain that processes those audio
signals. Our vocalizations, and our ability to perceive their nuances
and subtlety, co-evolved.
In a UCLA study, neurologists Istvan Molnar-Szakacs and Katie Overy
watched brain scans to see which neurons fired while people and monkeys
observed other people and monkeys perform specific actions or experience
specific emotions. They determined that a set of neurons in the
observer “mirrors” what they saw happening in the observed. If you are
watching an athlete, for example, the neurons that are associated with
the same muscles the athlete is using will fire. Our muscles don’t move,
and sadly there’s no virtual workout or health benefit from watching
other people exert themselves, but the neurons do act as if we are
mimicking the observed. This mirror effect goes for emotional signals as
well. When we see someone frown or smile, the neurons associated with
those facial muscles will fire. But—and here’s the significant part—the
emotional neurons associated with those feelings fire as well. Visual
and auditory cues trigger empathetic neurons. Corny but true: if you
smile, you will make other people happy. We feel what the other is
feeling—maybe not as strongly, or as profoundly—but empathy seems to be
built into our neurology. It has been proposed that this shared
representation (as neuroscientists call it) is essential for any type of
communication. The ability to experience a shared representation is how
we know what the other person is getting at, what they’re talking
about. If we didn’t have this means of sharing common references, we
wouldn’t be able to communicate.
It’s sort of stupidly obvious—of course we feel what others are
feeling, at least to some extent. If we didn’t, then why would we ever
cry at the movies or smile when we heard a love song? The border between
what you feel and what I feel is porous. That we are social animals is
deeply ingrained and makes us what we are. We think of ourselves as
individuals, but to some extent we are not; our very cells are joined to
the group by these evolved empathic reactions to others. This mirroring
isn’t just emotional, it’s social and physical, too. When someone gets
hurt we “feel” their pain, though we don’t collapse in agony. And when a
singer throws back his head and lets loose, we understand that as well.
We have an interior image of what he is going through when his body
assumes that shape.
We anthropomorphize abstract sounds, too. We can read emotions when
we hear someone’s footsteps. Simple feelings—sadness, happiness and
anger—are pretty easily detected. Footsteps might seem an obvious
example, but they show that we connect all sorts of sounds to our
assumptions about what emotion, feeling, or sensation generated that
sound.