News Medical - A team of researchers at the GIPSA-Lab (CNRS/Université Grenoble Alpes/Grenoble INP) and at INRIA Grenoble Rhône-Alpes has developed a system that can display the movements of our own tongues in real time. Captured using an ultrasound probe placed under the jaw, these movements are processed by a machine learning algorithm that controls an "articulatory talking head."
As well as the face and lips, this avatar shows the tongue, palate and teeth, which are usually hidden inside the vocal tract.
This "visual biofeedback" system, which ought to be easier to understand than audio feedback alone and should therefore help correct pronunciation more effectively, could be used for speech therapy and for learning foreign languages. This work is published in the October 2017 issue of Speech Communication.
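The article does not detail the algorithm, but the pipeline it describes, ultrasound frames mapped by a learned model to the avatar's articulatory parameters, can be sketched at a high level. The following is a minimal, hypothetical illustration: a linear least-squares regressor stands in for the actual machine learning model, and all dimensions, data, and the `drive_avatar` function are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Hypothetical sketch: map ultrasound feature vectors (one per video frame)
# to articulatory parameters driving a talking-head avatar. A linear
# least-squares regressor stands in for the real learned model; all
# dimensions and data below are simulated assumptions.

rng = np.random.default_rng(0)
n_frames, n_features, n_params = 200, 32, 6  # assumed sizes

# Simulated training pairs: ultrasound features -> avatar parameters
# (e.g., tongue-contour control points).
true_map = rng.normal(size=(n_features, n_params))
X = rng.normal(size=(n_frames, n_features))
Y = X @ true_map + 0.01 * rng.normal(size=(n_frames, n_params))

# Learn the feature-to-parameter mapping with ordinary least squares.
W, *_ = np.linalg.lstsq(X, Y, rcond=None)

def drive_avatar(frame_features: np.ndarray) -> np.ndarray:
    """Map one frame of ultrasound features to articulatory parameters."""
    return frame_features @ W

params = drive_avatar(X[0])  # one parameter vector per incoming frame
print(params.shape)
```

In the real system a far richer model would be trained on paired ultrasound and articulatory data, and the predicted parameters would update the 3D avatar frame by frame to provide the visual biofeedback.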