Speech requires precise movements of the lips, tongue, and jaw to produce the wide variety of sounds that make up any given language. These movements are controlled by a small region on the brain's surface known as the ventral sensorimotor cortex (vSMC).
Certain neurodegenerative disorders, such as Lou Gehrig's disease (amyotrophic lateral sclerosis, ALS) and multiple sclerosis, along with paralysis due to injury or stroke, can leave individuals unable to speak because they cannot move the required muscles. In many of these cases, patients still retain the cognitive ability to compose speech and to visualize the muscle movements required to produce it. The inability to communicate is particularly acute for a subset of patients suffering from locked-in syndrome (LIS). These individuals are fully conscious but, at best, are able to move only their eyes. The problem is compounded by the fact that stroke-related LIS has 5- and 10-year post-onset survival rates of over 80%. Various speech-generating devices have been developed to assist LIS patients and others with severe motor-related speech deficits. However, none of these devices offers efficient communication, since all are limited by the speed at which patients can select individual letters or words by moving a finger or an eye. Despite these limitations, the market for speech-generating devices is projected to grow to $505M by 2018.
Physicians and scientists at UCSF have developed a method for decoding speech signals from the surface of the human brain. The researchers mapped the electrical activity patterns in the vSMC that correspond to 57 consonant-vowel syllables commonly found in American English. The data were collected from patients who had been fitted with high-density multi-electrode arrays prior to surgery. The researchers recorded the electrical activity in the vSMC while the patients read aloud a series of consonant-vowel syllables. Comparison with an audio recording revealed that specific electrical activity patterns corresponded to the movements of the lips, tongue, and jaw required to produce particular sounds. These findings were recently published in the leading scientific journal Nature. The abstract and a link to the full article can be found here.
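The core idea — that each syllable evokes a characteristic pattern of activity across the electrode array, which can then be matched against known patterns — can be illustrated with a minimal sketch. This is not the published UCSF method; the syllable templates, electrode count, and nearest-centroid classifier below are hypothetical stand-ins using synthetic data.

```python
# Illustrative sketch only: decoding syllables from multi-electrode activity
# by nearest-centroid matching on SYNTHETIC data. The templates, noise model,
# and classifier are hypothetical, not the published decoding method.
import math
import random

random.seed(0)

SYLLABLES = ["ba", "da", "ga"]  # a few of the 57 consonant-vowel syllables
N_ELECTRODES = 16               # hypothetical array size

# Hypothetical "true" activity pattern per syllable across the array.
templates = {
    s: [random.gauss(0, 1) for _ in range(N_ELECTRODES)] for s in SYLLABLES
}

def record_trial(syllable, noise=0.3):
    """Simulate one noisy vSMC recording of a spoken syllable."""
    return [x + random.gauss(0, noise) for x in templates[syllable]]

def centroid(trials):
    """Average activity pattern over a set of trials."""
    return [sum(col) / len(col) for col in zip(*trials)]

def decode(trial, centroids):
    """Classify a recording as the syllable with the nearest centroid."""
    return min(centroids, key=lambda s: math.dist(trial, centroids[s]))

# "Train" one centroid per syllable from simulated read-aloud trials.
centroids = {
    s: centroid([record_trial(s) for _ in range(20)]) for s in SYLLABLES
}

# Decode held-out simulated trials and tally accuracy.
correct = sum(decode(record_trial(s), centroids) == s
              for s in SYLLABLES for _ in range(10))
print(f"decoded {correct}/30 trials correctly")
```

With well-separated templates and modest noise, nearest-centroid matching recovers nearly all trials; the real decoding problem is far harder, since cortical activity patterns for different syllables overlap substantially.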
This method can serve as a platform for developing systems, software, and devices to decode speech. Because most languages rely on the same patterns of vocal-tract movement, the technique could be used to decode speech from virtually any language.
|Country|Status|Number|Date|Case|
|---|---|---|---|---|
|United States Of America|Issued Patent|10,438,603|10/08/2019|2013-124|
|United States Of America|Issued Patent|9,905,239|02/27/2018|2013-124|
|European Patent Office|Published Application|2959280|12/30/2015|2013-124|
Neuro-prosthetic, speech generating device, stroke, paralysis, locked-in syndrome, speech, communication, neurodegenerative disorders