Disordered Speech

Using AI to classify and profile disordered speech

At Mayo Clinic, I analyze disordered speech in patients with neurodegenerative diseases to identify acoustic and temporal markers of cognitive and motor decline. In collaboration with Drs. Hugo Botha and Rene Utianski, I apply deep learning models to remote voice recordings from individuals with conditions such as apraxia and dysarthria. By examining features such as articulation rate, pause patterns, and vocal quality, our research aims to detect early and progressive speech changes that reflect underlying neurological deterioration. This work contributes to the development of scalable, speech-based tools for non-invasive and continuous monitoring of neurodegenerative disease.
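As a minimal illustration of the kind of temporal markers described above, the sketch below estimates pause count and speech-time ratio from a waveform using short-time energy. This is a simplified, hypothetical example (the thresholds, frame sizes, and the `pause_features` function are illustrative choices, not the actual research pipeline, which uses deep learning models rather than hand-set thresholds):

```python
import numpy as np

def pause_features(signal, sr, frame_ms=25, hop_ms=10,
                   energy_thresh=0.02, min_pause_ms=150):
    """Estimate simple temporal markers: pause count and speech-time ratio.

    Hypothetical sketch: energy-based voicing with hand-set thresholds,
    for illustration only.
    """
    frame = int(sr * frame_ms / 1000)
    hop = int(sr * hop_ms / 1000)
    # Short-time RMS energy per frame
    rms = np.array([
        np.sqrt(np.mean(signal[i:i + frame] ** 2))
        for i in range(0, len(signal) - frame, hop)
    ])
    voiced = rms > energy_thresh  # crude energy-based voicing decision
    # Count silent runs longer than min_pause_ms as pauses
    min_frames = int(min_pause_ms / hop_ms)
    pauses, run = 0, 0
    for v in voiced:
        if not v:
            run += 1
        else:
            if run >= min_frames:
                pauses += 1
            run = 0
    if run >= min_frames:
        pauses += 1
    return {"n_pauses": pauses, "speech_ratio": float(voiced.mean())}

# Synthetic demo: 1 s tone, 0.5 s silence, 1 s tone at 16 kHz
sr = 16000
t = np.arange(sr) / sr
tone = 0.5 * np.sin(2 * np.pi * 220 * t)
sig = np.concatenate([tone, np.zeros(sr // 2), tone])
print(pause_features(sig, sr))  # detects the single 0.5 s pause
```

In practice, features like these would be computed over longitudinal remote recordings, so that drift in pause frequency or speech-time ratio can be tracked per patient over time.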

Related Publications

2025

  1. BSMCS
    Using AI to detect articulatory modulations under delayed auditory feedback in PPAOS
    Fenqi Wang, Hugo Botha, and Rene Utianski
    In The 5th Biennial Boston Speech Motor Control Symposium, 2025
  2. ASA
    Deep learning-driven phonetic profiling of dysarthric speech
    Fenqi Wang, Rene Utianski, Joseph Duffy, David Jones, and Hugo Botha
    In The 188th Meeting of the Acoustical Society of America, 2025