In a major step toward a new kind of communication, scientists have been able to tell which words a person was listening to by interpreting maps of electrical activity in the human brain. The work raises the prospect of understanding the thoughts of people unable to physically speak, such as completely paralysed patients or those with locked-in syndrome.
The excitement is about the very real possibility of developing devices to restore communication to those whose brains remain active despite their disabilities, an ambition long held by many neuroscientists. Even though these are very early days and much research remains to be done, this is very welcome news.
Scientists have been attempting to solve the puzzle of how our brains process audible sounds, extracting meaning from the noises that make up words and sentences; much animal research has helped to pinpoint the brain regions involved in responding to audible stimuli.
One University of California research team enlisted 15 patients with either brain tumours or epilepsy who had had electrodes affixed to the surface of their brains so that the source of their seizures could be located. The participants listened to fifty or so different speech sounds.
The resultant maps of the brain's electrical response to individual sounds allowed the team to predict, with about 90% accuracy, which of two studied sounds the brain was responding to, somewhat like learning to recognise individual musical notes.
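For readers curious how such two-alternative decoding can work in principle, here is a minimal sketch. It does not use the team's actual data or method; it simulates hypothetical electrode "maps" for two sounds (all names and parameters here are invented for illustration) and decodes new trials with a simple nearest-template rule.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: each "brain map" is a flattened vector of
# electrode readings. We simulate two speech sounds, each evoking a
# characteristic mean response pattern, plus trial-to-trial noise.
n_electrodes = 64
pattern_a = rng.normal(0, 1, n_electrodes)
pattern_b = rng.normal(0, 1, n_electrodes)

def simulate_trials(pattern, n_trials, noise=1.0):
    """Generate noisy trials around a sound's mean response map."""
    return pattern + rng.normal(0, noise, (n_trials, n_electrodes))

train_a = simulate_trials(pattern_a, 50)
train_b = simulate_trials(pattern_b, 50)

# Nearest-template decoding: average the training maps for each sound,
# then assign a new trial to whichever template it lies closer to.
template_a = train_a.mean(axis=0)
template_b = train_b.mean(axis=0)

def decode(trial):
    dist_a = np.linalg.norm(trial - template_a)
    dist_b = np.linalg.norm(trial - template_b)
    return "A" if dist_a < dist_b else "B"

test_a = simulate_trials(pattern_a, 100)
test_b = simulate_trials(pattern_b, 100)
correct = sum(decode(t) == "A" for t in test_a) + \
          sum(decode(t) == "B" for t in test_b)
accuracy = correct / 200
print(f"two-alternative decoding accuracy: {accuracy:.2f}")
```

Real neural decoding is far harder than this toy version: actual brain responses overlap heavily between similar sounds, which is why the reported figure was about 90% rather than perfect.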
Understanding the relationships between brain activity and specific sounds represents a major leap forward in the quest for technology to help those with communication difficulties, and it has many more potential uses besides.
Devices that enable people to operate robotic limbs by thought are already a reality, and of enormous benefit to those who need them, but the prospect of brain-machine interfaces that could return the gift of speech to people who have lost it must, for many, be a mouth-watering idea.
Long before any such applications become practical, however, there are major hurdles to overcome. The study was fairly limited in the sounds employed, so much more extensive testing will be needed to explore the technological possibilities further; every language is complex in its totality, and complete maps will take time to create. There is clearly a very long way to go, but a big step has now been taken, and the future looks bright.