|United States Of America||Published Application||20150297106||10/22/2015||2012-075|
Thousands of severely disabled patients are unable to communicate due to paralysis, locked-in syndrome, Lou Gehrig's disease, or other neurological diseases. Restoring communication in these patients has proven a major challenge. Prosthetic devices operated by electrical signals measured from sensors implanted in the brain are being developed to address this problem. Investigators at the University of California at Berkeley have responded to this challenge by developing an algorithm to decode speech, including arbitrary words and sentences, from brain recordings of the human cortex. A computational model is trained to determine how recorded electrical signals at specific brain sites represent different speech features, for example, acoustic frequencies. The trained model then takes novel brain recordings as input and outputs a set of predicted speech features. From these predicted speech features, speech sounds are either directly synthesized or words are identified using statistical techniques. The brain signal decoding algorithm can decode speech solely from brain signals and may permit communication via thought alone.
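The train-then-decode pipeline described above can be sketched in miniature. The sketch below is purely illustrative and is not the investigators' actual method: it fits a simple ridge-regression decoder mapping simulated multi-electrode neural signals to spectrogram-like speech features (acoustic frequency bins), then applies the trained decoder to novel recordings. All array names, dimensions, and the simulated linear relationship between features and signals are assumptions made for the demonstration.

```python
import numpy as np

# Illustrative sketch only: a linear decoder trained by ridge regression.
# X: neural recordings (time x electrodes); Y: speech features (time x
# frequency bins). Dimensions and data are invented for demonstration.
rng = np.random.default_rng(0)
n_time, n_elec, n_freq = 500, 16, 8

# Simulated training data: assume speech features drive the neural signals
# through an unknown linear mapping plus noise.
Y_train = rng.standard_normal((n_time, n_freq))
W_true = rng.standard_normal((n_freq, n_elec))
X_train = Y_train @ W_true + 0.1 * rng.standard_normal((n_time, n_elec))

# Training step: solve the regularized least-squares problem for a decoder
# W that maps neural signals back to speech features.
lam = 1.0
W = np.linalg.solve(X_train.T @ X_train + lam * np.eye(n_elec),
                    X_train.T @ Y_train)

# Decoding step: apply the trained decoder to novel brain recordings to
# produce predicted speech features.
Y_new = rng.standard_normal((100, n_freq))
X_new = Y_new @ W_true + 0.1 * rng.standard_normal((100, n_elec))
Y_pred = X_new @ W

# Sanity check: correlation between true and predicted features.
r = np.corrcoef(Y_new.ravel(), Y_pred.ravel())[0, 1]
print(f"feature correlation: {r:.2f}")
```

In a real system, the predicted feature matrix would then feed a synthesizer or a statistical word-identification stage, as the abstract describes.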
brain, brain signal, electrical signals, decode speech, speech, neural, prosthetic, video game, cortical activity, decoding mechanisms, speech recognition, neural signals, primary auditory cortex, superior temporal gyrus, spectro-temporal receptive field, intracranial, neural encoding, acoustic, auditory