Heather Dial, lead author and assistant professor in the University of Houston’s department of communication sciences and disorders, found that recording brain activity while a person listens to a story may help diagnose primary progressive aphasia.
Key Takeaways
- A UH researcher found that recording brain activity while a person listens to a story was up to 75% accurate in classifying subtypes of primary progressive aphasia (PPA).
- The non-invasive electroencephalography-based method could lead to faster, more patient-friendly assessments for language-affecting disorders like PPA, Alzheimer’s dementia and stroke.
- The research lays groundwork for future clinical tools and is part of a larger federally funded project exploring brain responses to language.
A University of Houston researcher found that recording brain activity while a person listens to a story may help diagnose primary progressive aphasia, a rare neurodegenerative syndrome that impairs language skills.
Published Aug. 12 in Scientific Reports, the findings show the method was up to 75% accurate in classifying the three PPA subtypes using brain activity data and machine-learning algorithms.

The underlying cause of PPA is often Alzheimer’s disease or frontotemporal lobar degeneration. Diagnosing PPA — a type of dementia — is often challenging, as current methods require two to four hours of cognitive testing and sometimes brain scans that can be emotionally taxing for patients.
“Our thought with this project was, can we do something different that takes less time, that helps with diagnosis?” said Heather Dial, lead author and assistant professor in UH’s department of communication sciences and disorders.
While still in early stages, the non-invasive approach could lead to faster, more patient-friendly assessments for PPA and other language-affecting disorders such as Alzheimer’s dementia and stroke.
How It Works
Dial — along with researchers from University of Wisconsin-Madison, The University of Texas at Austin and Rice University — used electroencephalography, or EEG, to record electrical activity in participants’ brains as they listened to a story.
The EEG tracked how the brain processed different levels of language, from acoustic features (how the story sounded) to syntactic structure (how sentences were formed).
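The article does not describe the team’s analysis code, but one common way to quantify how closely EEG follows a story at each linguistic level is a regression-based “neural tracking” analysis. The sketch below is illustrative only: the feature names, simulated data and ridge-regression setup are assumptions for demonstration, not the authors’ pipeline.

```python
# Minimal sketch of a regression-based neural-tracking analysis: predict EEG
# from stimulus features at different linguistic levels. All data and feature
# names here are hypothetical placeholders, not the study's actual pipeline.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_samples = 5000          # EEG samples recorded while listening to a story (assumed)
n_channels = 32           # EEG channels (assumed montage)

eeg = rng.standard_normal((n_samples, n_channels))

# Hypothetical stimulus features, one per linguistic level: an acoustic
# envelope (how the story sounded) and a syntactic measure (how sentences
# were formed), aligned sample-by-sample with the EEG.
features = {
    "acoustic_envelope": rng.standard_normal((n_samples, 1)),
    "syntactic_structure": rng.standard_normal((n_samples, 1)),
}

# How well does each feature predict each EEG channel? Higher cross-validated
# R^2 would indicate stronger neural tracking of that linguistic level.
for name, X in features.items():
    scores = [
        cross_val_score(Ridge(alpha=1.0), X, eeg[:, ch], cv=5, scoring="r2").mean()
        for ch in range(n_channels)
    ]
    print(f"{name}: mean cross-validated R^2 across channels = {np.mean(scores):.3f}")
```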
Machine-learning models analyzed the data, with the most effective model reaching nearly 75% accuracy in classifying PPA subtype, suggesting a promising foundation for future diagnostic tools — though it’s not yet ready for clinical use.
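As a rough illustration of that classification step, the sketch below runs a cross-validated classifier over per-participant EEG-derived features to predict one of three subtype labels. The sample size, feature set and choice of a linear support vector machine are placeholder assumptions, not the model reported in the paper.

```python
# Minimal sketch of the classification step: cross-validated machine learning
# on per-participant EEG-derived features to predict PPA subtype. The data,
# sample size and classifier are illustrative assumptions only.
import numpy as np
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_participants = 60       # hypothetical sample size
n_features = 20           # e.g., tracking scores across linguistic levels (assumed)

X = rng.standard_normal((n_participants, n_features))
y = rng.integers(0, 3, size=n_participants)   # three PPA subtypes, coded 0/1/2

clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
accuracy = cross_val_score(clf, X, y, cv=cv, scoring="accuracy").mean()
print(f"cross-validated subtype classification accuracy: {accuracy:.2f}")
```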
“This suggests it’s worth pursuing further and trying to find the optimal parameters,” Dial said. “What are the best modeling approaches? What are the best features? How can we use this to improve the tools that a clinician has access to for diagnosis?”
The research team plans to refine the algorithm to boost diagnostic accuracy and reliability. Dial’s team received a $375,000 grant in 2024 from the National Institutes of Health to apply the same story-listening technique to studying stroke-induced language deterioration. That project will run through 2026.
“If this method is reliable and valid, then we can feel confident in physicians using it to assess change in patient response to treatment and for diagnosis,” she said.