Can we talk?


It’s the early 1970s.

An accident.  A brilliant law student loses his sight.  His friends make a schedule to read his texts to him every night, so he can graduate.  It’s grueling, since we all have our own ridiculous reading and study loads to complete each and every night.

One very smart fellow solves the real problem.  He develops a program that converts text to speech.  OK.  You and I may not enjoy the way it sounds, but if you can’t read, it’s a godsend.  Especially when some of the readers who’ve been helping you with your texts have no clue what they are reading, and you have to ‘listen hard’ to discern the context and get the real meaning of the words they are mouthing.  (That inventor was Ray Kurzweil from MIT [at the time], and the device became known as the Kurzweil Reading Machine.)

Kurzweil Reading Machine

I also recall the problems my grandfather had as his Parkinson’s Disease progressed.  He had been an active fellow (with story-book escapades), loquacious, and vibrant.  Until he was only able to shuffle and mumble.  Which is why he was willing to have part of his brain frozen- as a test case to see if this would remove the symptoms- and why he was the first to take L-Dopa (levodopa, a precursor of dopamine).  Losing the ability to communicate would be a devastating development for anyone.

Whether the subject has suffered a stroke, developed Lou Gehrig’s disease (amyotrophic lateral sclerosis), or has Parkinson’s, the problem is the same.  The loss of the ability to speak- to let folks know what you are thinking, or even that you hurt or can’t breathe.  Restoring speech to those who suffer from such maladies is among the most challenging goals in neuroscience research today.

Which is what Drs. G.K. Anumanchipalli, J. Chartier, and E.F. Chang from UC San Francisco are trying to address.  They have developed a brain ‘decoder’ that uses machine learning, artificial intelligence, and a speech synthesizer to “suck” information from a patient’s brain and convert it to speech.  OK.  Not quite yet- and probably not until 2030 or so- but right now they are able to synthesize speech directly from the brain signals of folks who can still talk.  (The article, published in Nature, describes something closer to a proof-of-concept experiment.)

The UCSF researchers used signals from electrodes implanted in the brains of the subjects to monitor the motor-nerve impulses the brain sends to control the muscles we use for speech.  The hope was that a machine could decode the nerve impulses used to articulate speech directly from those brain signals.  This first stage involved five men and women whose brains were already connected to electrodes; they had epilepsy, and neurosurgeons had performed this invasive surgery to locate and eradicate the source of their seizures.

Brain Signals to Speech

The researchers had the patients speak test sentences aloud while the impulses in the brain cortex were recorded.  (There are more than 100 muscles in the lips, jaw, tongue, and throat that we use to form words- the brain activity driving all of them had to be captured.)
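
To make that data-collection step a little more concrete, here is a minimal sketch (in Python, with made-up shapes and sampling rates- none of these numbers come from the study) of what one training example amounts to: a multi-electrode cortical recording, time-aligned with the audio of the sentence being spoken.

```python
import numpy as np

# Hypothetical shapes, for illustration only -- not the UCSF dataset's real format.
N_ELECTRODES = 256       # electrodes over the speech motor cortex (assumed count)
NEURAL_RATE_HZ = 200     # rate of the extracted neural features (assumed)
SENTENCE_SECONDS = 4     # duration of one spoken test sentence

# One training example: cortical activity recorded while the sentence is spoken...
neural_features = np.random.randn(SENTENCE_SECONDS * NEURAL_RATE_HZ, N_ELECTRODES)

# ...paired with the audio of that same sentence, time-aligned to the recording.
AUDIO_RATE_HZ = 16_000
spoken_audio = np.zeros(SENTENCE_SECONDS * AUDIO_RATE_HZ, dtype=np.float32)

# The decoding problem: learn a mapping from neural_features to spoken_audio
# (in the study, via an intermediate estimate of vocal-tract movements).
```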

After this phase, the researchers processed the brain signals with AI (artificial intelligence) that mapped the signals to a database of vocal-tract muscle movements.  Those movements were then matched to the sounds that would normally emanate from such vocal activity.
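
In very rough terms, that gives a two-stage decoder: one model estimates the vocal-tract movements from the cortical signals, and a second turns those movements into acoustic features a synthesizer can voice.  The sketch below shows the shape of that idea in PyTorch; the paper did use recurrent neural networks, but every size, name, and layer choice here is a placeholder of mine, not the authors’ architecture.

```python
import torch
import torch.nn as nn

class BrainToArticulation(nn.Module):
    """Stage 1: cortical signals -> estimated vocal-tract movements (sketch)."""
    def __init__(self, n_electrodes=256, n_articulators=32, hidden=256):
        super().__init__()
        self.rnn = nn.LSTM(n_electrodes, hidden, batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, n_articulators)

    def forward(self, ecog):                 # ecog: (batch, time, electrodes)
        h, _ = self.rnn(ecog)
        return self.out(h)                   # (batch, time, articulator movements)

class ArticulationToSpeech(nn.Module):
    """Stage 2: vocal-tract movements -> acoustic features for a synthesizer (sketch)."""
    def __init__(self, n_articulators=32, n_acoustic=32, hidden=256):
        super().__init__()
        self.rnn = nn.LSTM(n_articulators, hidden, batch_first=True)
        self.out = nn.Linear(hidden, n_acoustic)

    def forward(self, movements):
        h, _ = self.rnn(movements)
        return self.out(h)                   # spectral features, voiced by a vocoder

# Chained at inference time: brain signals in, synthesizer parameters out.
stage1, stage2 = BrainToArticulation(), ArticulationToSpeech()
ecog = torch.randn(1, 800, 256)              # 4 s of made-up cortical features
acoustics = stage2(stage1(ecog))
```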

Keep in mind that the researchers already knew which words might be spoken (they were test sentences, after all), as opposed to random words.  Still, they were able to produce speech sounds at almost normal speed (about 150 words per minute) with roughly 70% accuracy.
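
To see what those two numbers mean in practice, here is a toy calculation.  The study’s actual evaluation was more involved (listeners transcribing the synthesized audio); this simple word-by-word comparison is only meant to illustrate how such figures can be computed, not the study’s scoring method.

```python
def word_accuracy(reference: str, decoded: str) -> float:
    """Fraction of reference words reproduced in order -- a crude,
    illustrative stand-in for the intelligibility scores reported."""
    ref, hyp = reference.lower().split(), decoded.lower().split()
    matches = sum(r == h for r, h in zip(ref, hyp))
    return matches / len(ref)

def words_per_minute(n_words: int, seconds: float) -> float:
    return 60.0 * n_words / seconds

# A 10-word test sentence spoken in 4 seconds = 150 words per minute,
# with 7 of the 10 words recovered correctly = 70% accuracy.
print(words_per_minute(10, 4.0))    # 150.0
print(word_accuracy("the quick brown fox jumps over the lazy sleeping dog",
                    "the quick brown box jumps over a lazy sleeping hog"))  # 0.7
```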

Oh, yeah.  Those sounds were not produced in real time.  There was a one-year lag between recording the impulses and the production of the net result.  And any such system will not only have to work much faster (waiting a year to communicate is not quite the goal), but would also require brain surgery on the patients to collect their impulses.

And, of course, we have no idea what will happen when a person’s speech muscles are paralyzed due to a stroke or other ailment.

(However, on a more positive note, when the epilepsy patients were told to ‘think’ the test sentences rather than physically verbalize them, the brain signals still matched those recorded when they spoke the sentences aloud.)

Another small step for man…Roy A. Ackerman, Ph.D., E.A.

