Decoding Speech Sounds From The Human Brain For A Communication Neuroprosthetic Device

Tech ID: 23111 / UC Case 2013-124-0

Background

Speech requires precise, coordinated movements of the lips, tongue, and jaw to produce the wide variety of sounds that make up any given language.  These movements are controlled by a small region on the brain's surface known as the ventral sensorimotor cortex (vSMC).

Certain neurodegenerative disorders, such as Lou Gehrig's disease (amyotrophic lateral sclerosis, ALS) and multiple sclerosis, along with paralysis due to injury or stroke, can leave individuals unable to speak because they cannot move the required muscles.  In many of these cases, patients still retain the cognitive ability to compose speech and to visualize the muscle movements required to produce it.  The inability to communicate is particularly acute for patients with locked-in syndrome (LIS), who are fully conscious but, at best, able to move only their eyes.  Their situation is compounded by the fact that stroke-related LIS has 5- and 10-year post-onset survival rates of over 80% [1].  Various speech-generating devices have been developed to assist LIS patients and others with severe motor-related speech impairments.  None of these devices offers efficient communication, however, since all are limited by the speed at which patients can select individual letters and/or words by moving a finger or an eye.  Despite these limitations, the market for speech-generating devices is projected to grow to $505M by 2018.

Technology Description

Physicians and scientists at UCSF have developed a method for decoding speech signals from the surface of the human brain.  The researchers mapped the electrical activity patterns in the vSMC that correspond to 57 consonant-vowel syllables commonly found in American English.  These data were collected from patients who had been fitted with high-density multi-electrode arrays prior to surgery.  The researchers recorded electrical activity in the vSMC while the patients read aloud a series of consonant-vowel syllables.  Comparison with a simultaneous audio recording revealed that specific electrical activity patterns corresponded to the lip, tongue, and jaw movements required to produce particular sounds.  These findings have been published in the leading scientific journal Nature [2].
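At its core, the decoding step can be framed as a multi-class classification problem: each trial is a window of cortical activity (for example, per-electrode band power over time) labeled with the syllable being spoken.  The sketch below illustrates that framing only; it is not the published analysis pipeline.  The feature dimensions, trial counts, synthetic data, and choice of a linear discriminant classifier are all assumptions made for illustration.

# Minimal sketch: classifying consonant-vowel syllables from windowed
# cortical features.  Synthetic data stands in for real vSMC recordings;
# feature shapes and the LDA classifier are illustrative assumptions.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

n_syllables = 57     # consonant-vowel syllables, as in the study
n_trials = 20        # hypothetical trials per syllable
n_electrodes = 64    # hypothetical high-density grid size
n_timebins = 10      # hypothetical bins spanning the articulation window

# Simulate trials: each syllable gets its own mean activity pattern,
# and single trials are noisy samples around that pattern.
prototypes = rng.normal(size=(n_syllables, n_electrodes * n_timebins))
X = np.vstack([
    proto + 0.5 * rng.normal(size=(n_trials, proto.size))
    for proto in prototypes
])
y = np.repeat(np.arange(n_syllables), n_trials)

# A linear classifier over flattened electrode-by-time features.
clf = LinearDiscriminantAnalysis()
scores = cross_val_score(clf, X, y, cv=5)
print(f"Mean cross-validated accuracy: {scores.mean():.2f} "
      f"(chance = {1 / n_syllables:.3f})")

Cross-validated accuracy well above chance (here, 1/57) is the kind of evidence that a mapping from cortical activity to speech sounds exists and can be exploited by a prosthetic.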

This method can serve as a platform for developing systems, software, and devices that decode speech directly from cortical activity.  Because most languages draw on the same repertoire of vocal tract movements, the technique could, in principle, be used to decode speech from any language.
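In a deployed device, decoding would run continuously on the incoming neural signal rather than on pre-cut trials.  The fragment below sketches one plausible arrangement under that assumption: a sliding window over a sample stream feeding a pre-trained classifier such as the one above.  The window length, hop size, and per-bin sample format are hypothetical.

# Illustrative streaming loop: apply a pre-trained syllable classifier
# (e.g., `clf` from the sketch above) to a sliding window of incoming
# cortical samples.  Window and hop sizes are hypothetical.
import numpy as np
from collections import deque

WINDOW = 10        # time bins per decoding window (matches n_timebins above)
HOP = 5            # bins to advance between successive decodes
N_ELECTRODES = 64  # must match the training feature layout

def stream_decode(sample_stream, clf):
    """Yield a predicted syllable index for each decoding window.

    sample_stream yields one (N_ELECTRODES,) array per time bin;
    clf is any fitted classifier with a predict() method.
    """
    buffer = deque(maxlen=WINDOW)
    since_last = 0
    for sample in sample_stream:
        buffer.append(np.asarray(sample))
        since_last += 1
        if len(buffer) == WINDOW and since_last >= HOP:
            features = np.concatenate(buffer).reshape(1, -1)
            yield int(clf.predict(features)[0])
            since_last = 0

The predicted syllable stream would then drive a speech synthesizer, which is what removes the letter-by-letter selection bottleneck described in the Background section.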

Advantages

  • Allows the automated production of artificial speech
  • Does not require the patient to select individual letters and/or words on an external device, enabling significantly faster communication
  • Allows effective communication for individuals with locked-in syndrome or other severe motor-related speech impairments
  • Could be used for real-time speech decoding
  • Could be used to decode speech from any language

Applications

  • Neuroprosthetic devices to enable speech generation in patients who are mute due to paralysis or laryngeal injury
  • Devices to aid in the diagnosis of speech motor disorders (e.g., stuttering, aphasia, dysarthria)

Patent Status

Country                  | Type                  | Number     | Dated      | Case
United States Of America | Issued Patent         | 10,438,603 | 10/08/2019 | 2013-124
United States Of America | Issued Patent         | 9,905,239  | 02/27/2018 | 2013-124
European Patent Office   | Published Application | 2959280    | 12/30/2015 | 2013-124


Inventors

  • Bouchard, Kristofer
  • Chang, Edward F.

Other Information

Keywords

Neuro-prosthetic, speech generating device, stroke, paralysis, locked-in syndrome, speech, communication, neurodegenerative disorders
