Disordered Speech
Using AI to classify and profile disordered speech
At Mayo Clinic, I analyze disordered speech in patients with neurodegenerative diseases to identify acoustic and temporal markers of cognitive and motor decline. In collaboration with Drs. Hugo Botha and Rene Utianski, I apply deep learning models to remote voice recordings from individuals with conditions such as apraxia of speech and dysarthria. By examining features such as articulation rate, pause patterns, and vocal quality, our research aims to detect early and progressive speech changes that reflect underlying neurological deterioration. This work contributes to the development of scalable, speech-based tools for non-invasive, continuous monitoring of neurodegenerative disease.
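To make the temporal features concrete: pause patterns can be summarized from a recording by segmenting it into speech versus silence frames and counting pause runs. The sketch below is a minimal, hypothetical illustration of that idea using a simple energy threshold; it is not the pipeline used in our studies, and the function name, frame sizes, and threshold are illustrative assumptions.

```python
import numpy as np

def pause_profile(signal, sr, frame_ms=25, hop_ms=10, threshold=0.01):
    """Crude energy-based speech/pause segmentation (illustrative only).

    Returns the fraction of frames classified as pause and the number
    of distinct pause runs -- a toy stand-in for the richer temporal
    features (pause duration, articulation rate) used in real analyses.
    """
    frame = int(sr * frame_ms / 1000)
    hop = int(sr * hop_ms / 1000)
    # Short-time energy per frame.
    energies = np.array([
        np.mean(signal[i:i + frame] ** 2)
        for i in range(0, len(signal) - frame + 1, hop)
    ])
    is_pause = energies < threshold
    pause_fraction = float(np.mean(is_pause))
    # Count transitions from speech into pause as distinct pauses.
    n_pauses = int(np.sum(np.diff(is_pause.astype(int)) == 1))
    if is_pause[0]:
        n_pauses += 1
    return pause_fraction, n_pauses

# Synthetic demo: 0.5 s silence, 1 s of a 220 Hz tone, 0.5 s silence.
sr = 16000
t = np.arange(sr) / sr
tone = 0.5 * np.sin(2 * np.pi * 220 * t)
sig = np.concatenate([np.zeros(sr // 2), tone, np.zeros(sr // 2)])
frac, n = pause_profile(sig, sr)  # frac near 0.5, two pause runs
```

In practice, this kind of hand-crafted segmentation is only a baseline; the deep learning models referenced above learn such temporal structure directly from the waveform or spectrogram.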
Related Publications
2025
- [BSMCS] Using AI to detect articulatory modulations under delayed auditory feedback in PPAOS. In The 5th Biennial Boston Speech Motor Control Symposium, 2025.
- [ASA] Deep learning-driven phonetic profiling of dysarthric speech. In The 188th Meeting of the Acoustical Society of America, 2025.