

Self-Administered Mobile Application to Detect Alzheimer's Disease Using Speech Data

Image by Sergey Nivens/Shutterstock

Researchers from the University of Tsukuba and IBM Research have developed a self-administered mobile application that analyzes speech data as an automatic screening tool for the early detection of Alzheimer's disease. Using automatic speech recognition, the proposed mobile application reliably estimates the degree of language impairments and detects Alzheimer's disease in its prodromal stage.


Tsukuba, Japan—Alzheimer's disease (AD) is the most common form of dementia. It is important to start intervention at an early stage, e.g., the mild cognitive impairment (MCI) stage, to prevent or delay the progression of AD. For the early detection of AD and MCI, there is a growing need for user-friendly, self-administered screening tools that can be used in everyday life. Speech is a promising data source for developing such screening tools: language impairments have been observed in the early stages of AD, and linguistic features characterizing these impairments have been used for the automatic detection of AD. However, automatic speech recognition, which converts the human voice into text, is generally less accurate for elderly speakers than for other age groups, posing a challenge for developing an automatic tool.
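As background, the quality of automatic speech recognition is commonly quantified by the word error rate (WER): the word-level edit distance between the recognized transcript and a reference transcript, divided by the length of the reference. The short sketch below is purely illustrative and is not drawn from the paper; the example sentences are hypothetical.

# Illustrative only: word error rate (WER) as a measure of ASR quality.
# A higher WER means the recognized transcript deviates more from the reference.
def word_error_rate(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # Levenshtein (edit) distance over words, via dynamic programming.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

# Hypothetical example: an error-prone transcript yields a WER of about 0.43.
print(word_error_rate("the boy reaches for the cookie jar",
                      "the boy reach for the the cooking jar"))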


Herein, researchers developed a prototype of a self-administered mobile application to aid the early detection of AD and MCI. Using this application, the researchers collected and analyzed speech data from 114 participants, including AD patients, MCI patients, and cognitively normal participants, each performing five cognitive tasks. The tasks were based on neuropsychological assessments used for dementia screening and included picture description and verbal fluency tasks. The results demonstrate that the degree of language impairment assessed by linguistic features, particularly those related to semantic aspects (e.g., informativeness and vocabulary richness), could be reliably estimated even when speech recognition accuracy was poor. Moreover, by combining these linguistic features with acoustic and prosodic features of the participant's voice, machine learning models could reliably detect MCI and AD, with 88% and 91% accuracy, respectively.
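To illustrate the general approach in code, the following sketch shows how transcript-based linguistic features might be concatenated with acoustic and prosodic features and passed to a standard classifier. This is a minimal sketch under stated assumptions, not the authors' pipeline: the specific features, the random-forest model, and the toy data are all illustrative.

# Minimal sketch (not the authors' pipeline): combine linguistic features from
# ASR transcripts with acoustic/prosodic features and train a classifier.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def linguistic_features(transcript: str) -> list:
    """Toy transcript-level features, e.g., a vocabulary-richness proxy."""
    tokens = transcript.lower().split()
    n = max(len(tokens), 1)
    type_token_ratio = len(set(tokens)) / n        # vocabulary richness
    mean_word_length = sum(map(len, tokens)) / n   # rough lexical complexity
    return [type_token_ratio, mean_word_length]

def feature_vector(transcript: str, acoustic: list) -> np.ndarray:
    """One combined feature vector per recording."""
    return np.array(linguistic_features(transcript) + list(acoustic))

# Toy, made-up data: transcripts, acoustic features (e.g., pause ratio and
# speech rate), and labels (0 = cognitively normal, 1 = MCI/AD).
transcripts = [
    "the boy is reaching for the cookie jar on the shelf",
    "there is a a boy and um the jar the jar",
    "a woman is washing dishes while the sink overflows",
    "the um the water the water is is running",
] * 5
acoustic = [[0.10, 3.2], [0.35, 1.8], [0.12, 3.0], [0.40, 1.6]] * 5
labels = [0, 1, 0, 1] * 5

X = np.array([feature_vector(t, a) for t, a in zip(transcripts, acoustic)])
y = np.array(labels)

scores = cross_val_score(RandomForestClassifier(random_state=0), X, y, cv=5)
print(f"cross-validated accuracy: {scores.mean():.2f}")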


To the best of our knowledge, this is the first study to show the feasibility of an automatic, self-administered screening tool that detects AD and MCI by reliably capturing language impairments, even from speech data obtained under conditions of poor automatic speech recognition accuracy. The proposed tool may help increase access to screening tools for the early detection of AD.


###
Yasunori Yamada reports that financial support was provided by IBM Research. Kaoru Shinkawa reports that financial support was provided by IBM Research. Tetsuaki Arai reports a relationship with Eisai Co Ltd that includes: speaking and lecture fees. Tetsuaki Arai reports a relationship with Daiichi Sankyo Co Ltd that includes: speaking and lecture fees. Tetsuaki Arai reports a relationship with Sumitomo Dainippon Pharma Co Ltd that includes: speaking and lecture fees. Kiyotaka Nemoto reports a relationship with Eisai Co Ltd that includes: speaking and lecture fees. This work was supported by the Japan Society for the Promotion of Science, KAKENHI (grant 19H01084).



Original Paper

Title of original paper: A mobile application using automatic speech analysis for classifying Alzheimer's disease and mild cognitive impairment
Journal: Computer Speech & Language
DOI: 10.1016/j.csl.2023.101514

Correspondence

Professor ARAI Tetsuaki
Institute of Medicine, University of Tsukuba


Related Link

Institute of Medicine


