Ruth Nall is a talented talker. Always has been. When she was a child, her mother taught her to enunciate her words when she spoke, which she did often and at length. So wordy was she that, in grammar school, her friends nicknamed her “Yakky Roo,” partly for her ace Yakky Doodle impersonation, but also for her loquaciousness.
I know this because Nall, who these days teaches high school kids, told me so, in a pleasantly wide-ranging conversation about her participation in a study led by UCSF Health neurosurgeon Edward Chang. Some years back, Nall was diagnosed with epilepsy, which landed her in Chang’s charge. To find where her seizures were originating, Nall had an array of tiny electrodes placed directly on the surface of her brain. Around the same time, Chang, who is also a researcher at UC San Francisco, was trying to identify the area of the human brain responsible for controlling pitch.
Pitch is an important part of conveying information through speech; the subtext of the phrase “I love you” varies according to whether you emphasize the first, second, or third word. Chang wanted to see whether he could isolate the region of the brain responsible for imparting these nuances of prosody to our speech.
But there was a hitch: To do it, Chang would need to take high-resolution readings of electrical activity from inside his test subjects’ skulls. This research method, called electrocorticography, or ECoG, requires major surgery, which is why it almost never happens without the voluntary participation of patients who have electrodes on their brains for therapeutic reasons. Fortunately for Chang, Nall, with her ample appreciation for the spoken word, was just the person to ask.
“I said, wow, of course. Because verbal communication with people is so crucial. The technologies we have—with our texting and our email—they’re great, but nothing beats a conversation with somebody,” Nall says. “The things you hear—the tone, the emphasis, the feeling—are what convey the information behind the words. It’s like I tell my students: It’s not just what you say, it’s how you say it.”
Physiologically, pitch is controlled by the larynx, a hollow hunk of multitalented muscle perched at the top of your neck. It’s involved in everything from breathing to choking prevention. Commonly known as the voice box, it also houses your vocal cords. The muscles of the larynx generate sound by bringing those cords together, and modify the pitch of that sound by varying their tension.
Just as the larynx controls the vocal cords, the brain controls the larynx. That’s where ECoG comes into play. To study the brain-larynx link, Chang recruited Nall and 11 other patients with epilepsy who already had electrode arrays implanted inside their skulls. The results of the investigation, which appear in the latest issue of Cell, uncover the region of the brain that controls the vocal folds of the larynx, and point the way towards speech prosthetics that could allow people unable to speak to express themselves in more natural, nuanced ways.
Chang and his team began by asking their test subjects to repeat the phrase “I never said she stole my money” multiple times, emphasizing a different word on every utterance. Maybe you recognize this sentence: It’s semi-famous on the internet, especially on Reddit, where it pops up every few months on subreddits like /r/mildlyinteresting, /r/whoadude, and /r/todayilearned. The reason: “I never said she stole my money” is a seven-word sentence that has seven different meanings, depending on which word in the sentence you stress. Try it for yourself. Fun, right?
“Whenever I present on this research, I go through and perform every single phrase. I never said she stole my money. I never said she stole my money. I never said she stole my money. Even linguists who are well aware of this phenomenon are amused by it,” says bioengineer Ben Dichter, the study’s first author. And before you ask, yes: He found the sentence while browsing Reddit. “Having a longer sentence allows you to look at greater variations in prosody,” he says. “‘I never said she stole my money’ was the perfect stimulus.”
Dichter, Chang, and their colleagues recorded their test subjects’ neural activity while they recited the sentence. Based on their observations, the control of pitch seems to originate from the dorsal region of the laryngeal motor cortex, where the neuronal activity they recorded was associated with quick changes in pitch. More fascinating still: When the researchers zapped this patch of neurons via ECoG, their test subjects’ larynx muscles contracted in response. Some patients even vocalized spontaneously—an “UHHHH” or “AHHHH” springing forth unbidden from their mouths.
“That was clever—probably one of the strongest features of the study—going back and stimulating those locations that they found in their initial observations and producing physiological behaviors,” says Albany Medical College neuroscientist Gerwin Schalk, an expert in neurotechnology and ECoG research who was unaffiliated with Chang’s investigation. Just because you see activity in an area of the brain when someone stresses a word doesn’t mean the activity is causing the change in pitch. But triggering patients’ laryngeal muscles by stimulating the associated neurons strongly reinforces the case for a causal link.
That’s an important step on the very, very long path toward the development of a speech prosthetic: a medical device that would restore the ability to speak to people who have lost their voices to illness or injury. In what is currently a very-blue-sky scenario, such a device could interpret neuronal activity and translate it into full-blown sentences. “And if you want to design your prosthetic to control pitch or prosody or even an accent—which is another vocal quality this study associated with the dorsal laryngeal motor cortex—then the results of this study could be particularly useful,” says Northwestern neuroscientist and neural engineer Marc Slutzky, an expert in neuroprosthetics and rehabilitation who was also unaffiliated with the study.
Granted, such a prosthetic likely won’t exist for some time. But it might not be as long as you think. “If you asked me three years ago, I would have said this is definitely way far off. But Ben [Dichter] and other people in the lab have convinced me otherwise,” Chang says. He’s optimistic that a brain-machine interface could soon decode a limited set of vocabulary from a person’s brain and translate it into speech. “Based on demonstrations—from our lab and others—I think it’s feasible,” he says. “The big challenge is, how do you generalize that kind of translation to all vocabulary?”
That’s a question for another study. Or rather, several studies, the majority of which will probably hinge on volunteers like Nall. “I firmly believe in defining your life by the lives that you change,” she says of her decision to lend her naked brain to Chang’s study. “If my experience can be of benefit to somebody later—even if it’s much later—then it will have been worthwhile.”