
Development and validation of a self-administered computerized cognitive assessment based on automatic speech recognition

  • Hyun-Ho Kong,

    Roles Conceptualization, Data curation, Formal analysis, Funding acquisition, Methodology, Supervision, Visualization, Writing – original draft

    Affiliations Department of Rehabilitation Medicine, Chungbuk National University Hospital, Cheongju, Republic of Korea, Department of Rehabilitation Medicine, Chungbuk National University College of Medicine, Cheongju, Republic of Korea

  • Kwangsoo Shin,

    Roles Conceptualization, Project administration, Resources, Software

    Affiliation Graduate School of Public Health and Healthcare Management, Songeui Medical Campus, The Catholic University of Korea, Seoul, Republic of Korea

  • Dong-Seok Yang,

    Roles Conceptualization, Investigation, Project administration, Resources, Software, Supervision, Validation

    Affiliation Technology Strategy Center, Neofect, Seongnam, Republic of Korea

  • Aryun Kim,

    Roles Conceptualization, Data curation, Formal analysis, Methodology, Resources

    Affiliations Department of Neurology, Chungbuk National University Hospital, Cheongju, Republic of Korea, Department of Neurology, Chungbuk National University College of Medicine, Cheongju, Republic of Korea

  • Hyeon-Seong Joo,

    Roles Conceptualization, Data curation, Resources, Software, Validation

    Affiliation Department of Physical Therapy, Daejeon University, Daejeon, Republic of Korea

  • Min Woo Oh,

    Contributed equally to this work with: Min Woo Oh, Jeonghwan Lee

    Roles Conceptualization, Software, Visualization, Writing – review & editing

    omo11017@naver.com (MWO); jeonghwan@cbnuh.or.kr (JL)

    Affiliation Department of Rehabilitation Medicine, Chungbuk National University College of Medicine, Cheongju, Republic of Korea

  • Jeonghwan Lee

    Contributed equally to this work with: Min Woo Oh, Jeonghwan Lee

    Roles Conceptualization, Funding acquisition, Resources, Writing – review & editing

    omo11017@naver.com (MWO); jeonghwan@cbnuh.or.kr (JL)

    Affiliation Department of Psychiatry, Chungbuk National University Hospital, Cheongju, Republic of Korea

Abstract

Existing computerized cognitive tests (CCTs) lack speech recognition, which limits their assessment of language function. Therefore, we developed CogMo, a self-administered CCT that uses automatic speech recognition (ASR) to assess multi-domain cognitive functions, including language. This study investigated the validity and reliability of CogMo in discriminating cognitive impairments. CogMo automatically provides CCT results; however, manual scoring using recorded audio was performed to verify its ASR accuracy. The mini–mental state examination (MMSE) was used to assess cognitive functions. Pearson’s correlation was used to analyze the relationship between the MMSE and CogMo results, the intraclass correlation coefficient (ICC) was used to evaluate the test-retest reliability of CogMo, and receiver operating characteristic (ROC) analysis validated its diagnostic accuracy for cognitive impairments. Data from 100 participants (70 with normal cognition, 30 with cognitive impairment; mean age 74.6 ± 7.4 years) were analyzed. The CogMo scores indicated significant differences between cognitive levels for all test items, including manual and automatic scoring for the speech recognition tests, and a very high correlation (r = 0.98) between the manual and automatic CogMo scores. Additionally, the total CogMo and MMSE scores exhibited a strong correlation (r = 0.89). Moreover, CogMo exhibited high test-retest reliability (ICC = 0.94), and ROC analysis yielded an area under the curve of 0.89 (sensitivity = 90.0%, specificity = 82.9%) at a cutoff value of 68.8 points. CogMo demonstrated adequate validity and reliability for discriminating multi-domain cognitive impairment, including language function, in community-dwelling older adults.

Introduction

Life expectancy is constantly increasing owing to advances in medical technology and improvements in overall healthcare [1]. However, with an aging population, the number of older adults with cognitive impairment (CI) is increasing. Dementia affected approximately 57.4 million people worldwide in 2019, and this figure is expected to rise to 83.2 million by 2030 and 152.8 million by 2050 [2]. Older adults with CI have difficulties in independently performing daily activities [3], experience social isolation and withdrawal [4], and suffer from a decrease in their overall quality of life [5]. Additionally, families and society are burdened by the medical and daily care required for older adults with CI. The World Health Organization has estimated that the global cost of dementia was $1.3 trillion in 2019, which is expected to increase to $2.8 trillion by 2030 [6].

The annual incidence rate of individuals with normal cognition transitioning to Alzheimer’s disease (AD) or mild cognitive impairment (MCI) is 1–2% [7], whereas the incidence rate of dementia among individuals with MCI is 8–15% [8]. Furthermore, a study showed that approximately 80% of all patients with MCI are eventually diagnosed with dementia based on six-year follow-up data [9]. Because there is no commercially available disease-modifying treatment for dementia and only symptomatic treatments are available [10], reducing modifiable risk factors and providing appropriate lifestyle counseling through the early detection of MCI are the only known methods for slowing the progression of MCI to dementia [11].

Clinicians diagnose CI through comprehensive evaluations by both observing the clinical features of patients with CI and through neuropsychological, laboratory, and neuroimaging tests [12]. However, the various neuropsychological tests used to diagnose CI have limited use in general clinical settings because they must be administered by trained professionals to ensure accurate assessments [13]. Therefore, simple cognitive screening assessment tools such as the mini–mental state examination (MMSE) and Montreal cognitive assessment (MoCA) are widely used to screen for CI [14]. In particular, the MMSE has been translated and adapted into various languages across different countries, with multiple validation studies demonstrating its high diagnostic value for detecting CI, regardless of language differences [15].

With advancements in digital healthcare, attempts have been made to evaluate cognitive function using computerized cognitive tests (CCTs) [16]. These tests have several advantages, including lower examiner bias, the ability to be self-administered or administered by testing technicians rather than trained professionals, and the convenience of automatic scoring and storage of results [16, 17]. However, most CCTs developed thus far use touch screens or mouse/keyboard input interfaces, which can be difficult to use for older adults who are uncomfortable with digital devices. Additionally, it is difficult to assess higher cognitive functions, such as language comprehension/expression and language-related executive processes, because these tests do not include speech recognition. Furthermore, many CCTs use only visual instructions, which are challenging for illiterate individuals [16, 17].

Previous studies have demonstrated the potential of automatic speech recognition (ASR) for detecting CI [18–20]. ASR is a technology that converts spoken language into text using advanced algorithms, including natural language processing and machine learning [21]. Some studies have utilized ASR to extract acoustic and linguistic features from spontaneous speech, achieving promising accuracy in distinguishing CI from healthy controls and even mild AD in certain cases [18, 19]. Other research has focused on using ASR to automate the scoring of language fluency tasks in assessments like the MoCA, highlighting its applicability in non-English-speaking populations [20]. The integration of ASR in cognitive assessments primarily facilitates the evaluation of language-related cognitive functions, as well as enhances objectivity, enables real-time analysis, and improves accessibility for individuals with limited literacy [18, 22, 23]. However, previous studies have primarily focused on isolated linguistic or acoustic features, lacked comprehensive multi-domain cognitive assessments, and frequently depended on complex preprocessing techniques, which may limit their scalability and broader applicability in various clinical settings [18–20].

Therefore, we developed CogMo, a CCT designed to overcome the limitations of traditional cognitive assessments. CogMo evaluates language-related cognitive functions using a voice interface with ASR, ensuring accessibility for older adults, including those with limited literacy. It also organizes test content into familiar topics to enhance engagement, making the test more user-friendly for older populations. This study aimed to determine the validity and test-retest reliability of the CogMo for assessing cognitive functions compared with the MMSE traditionally used in clinical settings.

Materials and methods

Study population

This study recruited community-dwelling older adults aged ≥65 years who visited a local dementia care center for cognitive function assessment between March and October 2023. The inclusion criteria were an age of ≥65 years, an understanding of the study content, and voluntary agreement to participate. The exclusion criteria were as follows: i) individuals with neurological or musculoskeletal conditions that make it difficult to operate a tablet personal computer (PC) using their fingers (e.g., hemiplegia due to stroke, upper extremity fractures, or amputations); ii) those with visual impairments assessed using Dr. Hahn’s standard vision chart based on the Snellen chart [24]; and iii) those with hearing impairments identified through a verbal instruction test in which participants were asked to repeat simple words or sentences spoken at a conversational volume in a quiet environment [25]. Individuals with blindness or those who were unable to follow the verbal instruction test due to hearing impairments were excluded, as these conditions could interfere with the accuracy of cognitive assessments using a CCT. In total, 102 participants were recruited; after excluding those who had difficulty performing the CCT owing to hearing impairment (n = 1) or hemiplegia (n = 1), 100 participants (normal cognitive function group = 70 vs. CI group = 30) were included in the analysis. The data for this study were collected directly by the research team in compliance with ethical guidelines approved by the Institutional Review Board (IRB) of a tertiary hospital (IRB number: 2022-08-004). Written informed consent was obtained from all participants before data collection, ensuring their anonymity and confidentiality. All procedures for data collection and analysis adhered to relevant ethical and legal standards.

Evaluation and definition of cognitive impairment

The K-MMSE-2 (Korean, second edition, standard version) was administered as a neuropsychological test to evaluate the participants’ cognitive functions. The K-MMSE-2 assesses several cognitive characteristics, such as orientation, memory, attention, and language, to screen for dementia or evaluate the severity of CI, with a total score ranging from 0 to 30 and a standardized score corrected according to the age and education level of the participant [26]. In this study, individuals with CI were defined as those whose K-MMSE-2 scores corrected for age and education level were below the mean − 1.0 standard deviation (SD) compared with healthy individuals [8, 27].

Development of computerized cognitive test

The CogMo test items were developed by a panel of clinical experts (psychiatrists, neurologists, and physiatrists), and the test itself is an application developed by computer engineers for an Android-based tablet PC. CogMo utilizes the Google Cloud speech-to-text application programming interface (API), a widely used speech recognition tool powered by advanced deep neural networks, to transcribe spoken responses into Korean text with high accuracy. This API is specifically optimized to support Korean, ensuring precise recognition of linguistic nuances and phonetic variations inherent to the language [28]. The process for ASR-based cognitive assessment includes audio input, noise reduction preprocessing, speech-to-text conversion, error validation to ensure transcription accuracy, and scoring through the CogMo framework (Fig 1). The voice data of the participants were collected during the test using the built-in microphone of the tablet PC, ensuring high-quality audio input for reliable processing. The participants performed the CCT by touching the screen of the tablet PC or verbally answering the test questions, and their responses were scored automatically. Verbal responses were transcribed using the Google Cloud speech-to-text API and automatically scored through the CogMo scoring framework, which evaluated the accuracy and relevance of the responses based on predefined criteria.

Fig 1. Flowchart of the automatic speech recognition processing.

https://doi.org/10.1371/journal.pone.0315745.g001
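The final scoring step of the pipeline above can be illustrated with a simple transcript-matching routine. This is a hypothetical sketch, not the actual CogMo implementation: the function names (`normalize`, `score_repetition`) and the word-overlap scoring rule are assumptions standing in for CogMo’s predefined scoring criteria.

```python
import re
import unicodedata

def normalize(text: str) -> str:
    """Apply Unicode NFC normalization and strip punctuation/case so
    minor ASR formatting differences do not affect scoring.
    (\\w is Unicode-aware in Python 3, so Korean characters are kept.)"""
    text = unicodedata.normalize("NFC", text)
    text = re.sub(r"[^\w\s]", "", text).lower()
    return " ".join(text.split())

def score_repetition(transcript: str, target: str, max_points: int = 10) -> float:
    """Score a repetition item as the proportion of target words found
    in the ASR transcript, scaled to the subtest's maximum score."""
    target_words = normalize(target).split()
    heard = set(normalize(transcript).split())
    if not target_words:
        return 0.0
    hits = sum(1 for w in target_words if w in heard)
    return round(max_points * hits / len(target_words), 1)

# Example: a sentence-repetition item with a perfect transcript
print(score_repetition("the weather is nice today",
                       "The weather is nice today."))  # → 10.0
```

In practice the error-validation stage mentioned in the paper would sit between transcription and this scoring step, rejecting or re-requesting unintelligible audio.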

The CogMo assesses various cognitive domains, including attention, visual perception, memory (verbal and visual), execution, and language. It consists of eight items (ten subtests in total), each designed to evaluate specific aspects of cognitive function. To calculate the total CogMo score (0–100), the raw scores for each subtest were converted to a standardized scale ranging from 0 to 10 points, reflecting the relative importance and cognitive demands of each subtest. The scoring framework was based on validated tools such as the MMSE and MoCA, and the content validity index methodology was utilized [29], with seven clinical experts evaluating the representativeness and relevance of the subtests. A detailed description of the tests is presented in Table 1 and Fig 2.

Table 1. Description of the content of each test item in CogMo.

https://doi.org/10.1371/journal.pone.0315745.t001
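The 0–100 total described above (ten subtests, each standardized to 0–10) can be sketched with a linear rescaling. The paper does not specify the exact conversion rule, so the proportional mapping and the names below (`standardize`, `total_cogmo`) are assumptions for illustration only.

```python
def standardize(raw: float, raw_max: float, scale_max: float = 10.0) -> float:
    """Linearly rescale a subtest's raw score onto the 0-10 standardized range,
    clamping out-of-range inputs."""
    raw = max(0.0, min(raw, raw_max))
    return scale_max * raw / raw_max

def total_cogmo(raw_scores: dict, raw_maxima: dict) -> float:
    """Sum the standardized subtest scores; with ten subtests this yields
    a total on a 0-100 scale."""
    return sum(standardize(raw_scores[k], raw_maxima[k]) for k in raw_maxima)

# Hypothetical example with two subtests and made-up raw maxima
maxima = {"speaking_sentence": 20, "counting_dots": 5}
scores = {"speaking_sentence": 10, "counting_dots": 5}
print(total_cogmo(scores, maxima))  # → 15.0
```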

Testing procedures for the computerized cognitive test

The CogMo test was conducted in a quiet, properly illuminated room with minimal distractions. During the test, participants sat comfortably in front of a standardized tablet PC with a built-in microphone, following on-screen prompts and responding accordingly. The CogMo test is designed to be self-administered, with all instructions provided both visually and verbally. These instructions were repeated as necessary until participants fully understood the tasks. The examiner was present but intervened only if assistance was explicitly requested. Tablet settings, including volume and microphone sensitivity, were standardized across sessions to ensure consistent and reliable data collection. The total test time was approximately 15–20 min. To measure the accuracy of speech recognition and automated scoring, voice-response tests were manually scored by two independent, blinded raters who listened to the audio recorded during the CCT.

Statistical analysis

Depending on the type of data, continuous variables are presented as means with SDs, and categorical variables are presented as numbers with percentages. Before conducting the main analyses, we assessed the necessary assumptions for each statistical test, including normality (using the Shapiro-Wilk test) and homogeneity of variances (using Levene’s test). To determine whether there were significant differences between the two groups based on cognitive function level, we performed Student’s t-test for continuous variables and the chi-squared test for categorical variables. Additionally, Pearson’s correlation test was performed to analyze the correlation between the K-MMSE-2 score and the score for each CogMo test item as well as the total CogMo score. The correlation coefficients were interpreted as negligible (<0.10), weak (0.10–0.39), moderate (0.40–0.69), strong (0.70–0.89), and very strong (0.90–1.00) [30]. Moreover, to verify the test-retest reliability of CogMo, the intraclass correlation coefficient (ICC) was evaluated; ICC values of <0.40, 0.40–0.59, 0.60–0.74, and 0.75–1.00 indicated poor, fair, good, and excellent agreement, respectively [31]. Additionally, receiver operating characteristic (ROC) curve analysis was conducted to determine the cutoff value (using the Youden index) at which the total CogMo score distinguished the CI group from the normal cognition group. All statistical analyses were performed using SPSS (version 25.0; IBM, Armonk, NY, USA) and MedCalc (version 22.005; MedCalc Software). Statistical significance was set at P < 0.05.
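For illustration, Youden-index cutoff selection of the kind used in the ROC analysis can be sketched in a few lines. The study itself used MedCalc; this toy version uses hypothetical scores and treats lower CogMo totals as indicating impairment.

```python
def roc_points(scores, labels):
    """Sensitivity and specificity at each candidate cutoff, treating
    scores <= cutoff as 'impaired' (label 1) and scores > cutoff as
    'normal' (label 0)."""
    pos = [s for s, l in zip(scores, labels) if l == 1]  # impaired group
    neg = [s for s, l in zip(scores, labels) if l == 0]  # normal group
    points = []
    for c in sorted(set(scores)):
        sens = sum(s <= c for s in pos) / len(pos)
        spec = sum(s > c for s in neg) / len(neg)
        points.append((c, sens, spec))
    return points

def youden_cutoff(scores, labels):
    """Return the (cutoff, sensitivity, specificity) triple that
    maximizes Youden's J = sensitivity + specificity - 1."""
    return max(roc_points(scores, labels), key=lambda p: p[1] + p[2] - 1)

# Hypothetical data: impaired participants (label 1) tend to score lower
scores = [55, 60, 62, 65, 70, 72, 75, 80, 85, 90]
labels = [1, 1, 1, 1, 0, 1, 0, 0, 0, 0]
cutoff, sens, spec = youden_cutoff(scores, labels)
```

At the scale of this study (n = 100), such an exhaustive scan over observed scores is the standard approach; statistical packages differ mainly in how they interpolate between observed values.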

Results

Characteristics of participants

The participants’ characteristics are listed in Table 2. Their mean age was 74.6 ± 7.4 years (normal cognition group = 73.2 ± 7.3 years, CI group = 77.8 ± 6.9 years, p-value < 0.01), and no differences existed in sex (p-value = 0.32), education (p-value = 0.26), or literacy (p-value = 0.35) between the two groups according to the cognitive level. In the evaluation of cognitive function through the K-MMSE-2 score, a statistically significant difference between the normal cognition (27.4 ± 2.8 points) and CI (20.5 ± 3.1 points) groups was observed (p < 0.001).

Comparison of the results of each CogMo item according to cognitive function

The results for each CogMo test item according to the cognitive function level are presented in Table 3. For all test items, including manual and automatic scoring of speech recognition, the CogMo results showed statistically significant differences between the normal and CI groups. In addition, there was a moderate-to-strong correlation between manual and automatic scoring for the following speech recognition items: speaking a sentence (r = 0.86, p < 0.001), reading and acting (r = 0.77, p < 0.001), and uttering words (r = 0.68, p < 0.001). Moreover, the total CogMo score exhibited a significant difference between the two groups for cognitive function in both manual and automatic scoring (p < 0.001), and a very high correlation between manual and automatic scoring (r = 0.98, p < 0.001).

Table 3. Comparison of items used for computerized cognitive assessment between groups according to the cognitive function.

https://doi.org/10.1371/journal.pone.0315745.t003

Correlation analysis between the MMSE and the CogMo results

The Pearson’s correlation coefficients between the MMSE scores and those for each CogMo item are shown in Table 4. Among the tested items, counting numbers (Dots: r = 0.70, Dices: r = 0.77, p < 0.001) and speaking a sentence (total score, r = 0.70, p < 0.001) showed a strong correlation, and the other items showed a moderate correlation, ranging from r = 0.48 (matching shadows) to 0.65 (backward ordering of lights). The total CogMo score strongly correlated with the MMSE score (r = 0.89, p < 0.001) (Fig 3).

Fig 3. Scatterplot demonstrating the correlation between the CogMo and MMSE results (r = 0.89, p < 0.001).

https://doi.org/10.1371/journal.pone.0315745.g003

Table 4. Correlations between the MMSE and the scores for each CogMo item.

https://doi.org/10.1371/journal.pone.0315745.t004

Test-retest reliability of CogMo

To verify the test-retest reliability of CogMo, randomly selected participants (normal cognition group = 17, CI group = 8) were assessed again an average of 86.2 days after the first assessment. The scores for all test items showed good or excellent agreement, except for reading and acting (ICC = 0.49, p = 0.06) and speaking words (ICC = 0.35, p = 0.15). Additionally, the total CogMo score was 62.3 ± 20.6 and 64.4 ± 19.8 for the first and second measurements, respectively, and the ICC was 0.94 (p-value < 0.001), indicating a very high test-retest reliability (Table 5).

ROC analysis for discriminating CI using CogMo

The ROC analysis for discriminating between the normal cognition and CI groups using the total CogMo score showed an area under the curve (AUC) value of 0.894 (p < 0.001), indicating high accuracy. Additionally, at the optimal cutoff value (total CogMo score = 68.8), the sensitivity and specificity were 90.0% and 82.9%, respectively (Fig 4).

Fig 4. Receiver operating characteristic curve analysis discriminating cognitive impairment from normal cognition based on the total CogMo score (AUC = 0.89, p < 0.001).

https://doi.org/10.1371/journal.pone.0315745.g004

Discussion

All CogMo test items indicated statistically significant differences between the normal cognition and CI groups. Items using speech recognition also demonstrated significant between-group differences for both manual and automatic scoring. The MMSE and total CogMo scores were highly correlated, and a cutoff value of the total CogMo score was identified to distinguish the normal cognition group from the CI group. Finally, the results confirmed that CogMo can be used to screen and monitor CI in a community setting with high test-retest reliability.

CogMo showed a high diagnostic accuracy for distinguishing the CI group from the normal cognition group (AUC = 0.894, sensitivity = 90.0%, specificity = 82.9%). In contrast, the diagnostic accuracy of the MMSE, which is widely used in clinical practice to screen for CI, varies depending on the study population and version employed [32–34]. A recent meta-analysis reported an AUC of 0.88 for the diagnostic accuracy of the MMSE for MCI screening in primary healthcare settings, which was similar to that of CogMo [35]. In addition, the MoCA, another cognitive function screening tool, showed a slightly lower diagnostic accuracy than CogMo (AUC = 0.846) in a meta-analysis [36]. Furthermore, the diagnostic accuracies of CCTs for discriminating CI reported thus far are similar to or lower than that of CogMo, with AUC values ranging from 0.62 to 0.91 [37]. CogMo demonstrates enhanced diagnostic accuracy compared with existing CCTs by incorporating a voice recognition input interface, whereas most existing CCTs rely solely on touch-based input methods, such as touch screens or a mouse/keyboard [16]. This feature allows CogMo to assess cognitive functions in a manner that closely resembles the face-to-face tests commonly used in clinical practice, such as the MMSE and MoCA, while maintaining the advantages of a computerized approach. However, the automated scoring system strictly adheres to predefined rules, which may result in slightly stricter scoring compared with manual scoring. This difference can be attributed to the inherent limitations of ASR systems, such as transcription errors caused by unclear speech or dialectal variations. Despite these differences, the relative trends between the two groups remained consistent, supporting the diagnostic utility of CogMo. Additionally, CogMo can be used to evaluate various cognitive functions, such as verbal memory, language, and execution, using speech recognition. Considering these advantages, CogMo is expected to be effectively utilized for screening and continuous monitoring of cognitive functions in community-dwelling older adults.

CogMo demonstrated high test-retest reliability, with good to excellent agreement not only for the total CogMo score (ICC = 0.94, p < 0.001) but also for most individual test items. Given that previous studies on the test-retest reliability of cognitive screening tools used in clinical practice have shown that the ICC of the MMSE ranges from 0.80 to 0.95 [38] and that of the MoCA ranges from 0.87 to 0.96 [39, 40], the reliability of CogMo can be concluded to be sufficiently high for clinical applicability. Test-retest reliability is important for cognitive function tests to ensure that changes in scores across repeated tests reflect actual changes in cognitive function rather than measurement errors due to learning effects. In particular, because CCTs such as CogMo can be self-administered to monitor cognitive functions without the need for trained technicians [41–43], high measurement reliability over multiple tests is essential for their clinical application. Considering that the test-retest reliability (ICC) of previously developed CCTs has ranged from 0.43 to 0.97 [16, 43], that of CogMo was sufficiently high compared with other CCTs.

In this study, cognitive functions such as verbal memory (speaking sentences) and execution (speaking words) were tested using speech recognition, and statistically significant differences between the normal cognition and CI groups were observed. In our literature review, we found few CCTs that, like CogMo, measure multi-domain cognitive functions from visual perception to executive function using newly developed test items rather than clinically validated metrics (e.g., the MoCA) [20, 43–45]. Generally, language function deficits may precede episodic memory impairment in individuals at risk of developing MCI [46]. Because existing CCTs mostly involve touching a screen interface, they can test only limited cognitive functions, such as attention and memory, using visual stimuli. However, CogMo can assess higher cognitive functions related to language, such as verbal memory and word fluency (execution), using ASR technology. In the future, advances in digital healthcare technology are expected to improve the accuracy of ASR, allowing the use of CCTs for assessing language functions in older adults.

CogMo has several advantages over existing face-to-face cognitive function assessments. First, it provides participants with pre-recorded visual and auditory instructions for performing the test, thereby minimizing errors caused by instructional bias among testers [47]. Moreover, it can be administered by testers without specialized training. These advantages allow CogMo to be used as a tool for cognitive function screening in public health assessments and cohort studies. Second, because all data associated with the test are automatically processed and stored as personal health records, both overall cognitive performance and changes in specific cognitive domains can be continuously monitored. This allows for the provision of personalized cognitive rehabilitation content tailored to the individual’s cognitive profile. Third, as some individuals may be reluctant to have their cognitive functions assessed in a formal clinical setting, a computerized test may make them more receptive to subsequent formal clinical assessments [48]. Lastly, CogMo’s ASR technology could be adapted to incorporate open-ended tasks, such as picture description, which previous studies have shown to be closely associated with CI, particularly in the language domain [49, 50]. Such adaptations could enable the evaluation of richer linguistic data and a broader range of cognitive functions, potentially enhancing diagnostic accuracy.

This study has several limitations. First, the speech recognition accuracy of CogMo was lower than expected. However, the accuracy for all test items, except for the “Speaking Words” task, was comparable to that of other voice recognition-based CCTs [43], and the total CogMo score showed a very high correlation (r = 0.98) between manual and automatic scoring, which can be considered sufficient for clinical use. Second, because we only assessed cognitive functions using the MMSE, participants may have been misclassified between the CI and non-CI groups. We acknowledge that CI is typically diagnosed through comprehensive evaluations that combine clinical symptoms, neuropsychological tests, and other findings to reach a diagnosis confirmed by clinicians. However, this limitation was unavoidable as our study aimed to validate CogMo against the MMSE, the most widely used cognitive screening tool. We plan to conduct additional research involving comprehensive diagnostic evaluations, including neuropsychological tests, to validate CogMo’s ability to detect CI as confirmed by clinicians. Finally, CogMo was developed and validated in Korean, which presents a limitation in generalizing the findings of this study to other linguistic and cultural contexts. However, certain subtests, such as finding a puppy, matching the shadows, and counting numbers, assess cognitive domains beyond language and are less dependent on the Korean language. With minor adaptations, these subtests could be applied to other linguistic groups, expanding CogMo’s applicability across diverse populations.

Conclusion

The newly developed CogMo exhibited high validity and reliability compared with the MMSE, the most commonly used neuropsychological test in clinical practice. Therefore, CogMo can be used to assess and monitor multi-domain cognitive functions in community-dwelling older adults, including those who are illiterate.

Acknowledgments

The authors have no acknowledgments to report.

References

  1. 1. Gu D, Andreev K, Dupre ME. Major Trends in Population Growth Around the World. China CDC Wkly. 2021;10, 604–613. pmid:34594946
  2. 2. GBD 2019 Dementia Forecasting Collaborators. Estimation of the global prevalence of dementia in 2019 and forecasted prevalence in 2050: an analysis for the Global Burden of Disease Study 2019. Lancet Public Health. 2022;7: e105–e125. pmid:34998485
  3. 3. Yang Y, Yang HD, Hong Y-J, Kim JE, Park M-H, Na HR, et al. Activities of Daily Living and Dementia. Dement Neurocognitive Disord. 2012;11: 29–37.
  4. 4. Shen C, Rolls ET, Cheng W, Kang J, Dong G, Xie C, et al. Associations of Social Isolation and Loneliness With Later Dementia. Neurology. 2022;99: e164–e175. pmid:35676089
  5. 5. Banerjee S, Smith SC, Lamping DL, Harwood RH, Foley B, Smith P, et al. Quality of life in dementia: more than just cognition. An analysis of associations with quality of life in dementia. J Neurol Neurosurg Psychiatry. 2006;77: 146–148. pmid:16421113
  6. 6. World Health Organization (2021) Global status report on the public health response to dementia. https://www.who.int/publications/i/item/9789240033245, Last updated September 1, 2021, Accessed on October 6, 2020.
  7. 7. Petersen RC, Smith GE, Waring SC, Ivnik RJ, Tangalos EG, Kokmen E. Mild Cognitive Impairment: Clinical Characterization and Outcome. Arch Neurol. 1999;56: 303–308. pmid:10190820
  8. 8. Petersen RC. Mild Cognitive Impairment. Contin Lifelong Learn Neurol;2016: 22, 404.
  9. 9. Petersen RC. Mild cognitive impairment as a diagnostic entity. J Intern Med. 2004; 256: 183–194. pmid:15324362
  10. 10. Belder CRS, Schott JM, Fox NC. Preparing for disease-modifying therapies in Alzheimer’s disease. Lancet Neurol. 2023;22: 782–783. pmid:37463598
  11. 11. Petersen RC, Lopez O, Armstrong MJ, Getchius TSD, Ganguli M, Gloss D, et al. Practice guideline update summary: Mild cognitive impairment. Neurology. 2018;90: 126–135.
  12. 12. Arvanitakis Z, Shah RC, Bennett DA. Diagnosis and Management of Dementia: Review. JAMA. 2019;322: 1589–1599. pmid:31638686
  13. 13. Jacova C, Kertesz A, Blair M, Fisk JD, Feldman HH. Neuropsychological testing and assessment for dementia. Alzheimers Dementa. 2007;3: 299–317. pmid:19595951
  14. 14. Siqueira GSA, Hagemann P de MS, Coelho D de S, Santos FHD, Bertolucci PHF. Can MoCA and MMSE Be Interchangeable Cognitive Screening Tools? A Systematic Review. The Gerontologist. 2018;59: e743–e763.
  15. 15. Steis MR, Schrauf RW. A review of translations and adaptations of the Mini-Mental State Examination in languages other than English and Spanish. Res Gerontol Nurs. 2009;2(3):214–24. pmid:20078011
  16. 16. Tsoy E, Zygouris S, Possin KL. Current State of Self-Administered Brief Computerized Cognitive Assessments for Detection of Cognitive Disorders in Older Adults: A Systematic Review. J Prev Alzheimers Dis. 2021;8: 267–276. pmid:34101783
  17. Zygouris S, Tsolaki M. Computerized Cognitive Testing for Older Adults: A Review. Am J Alzheimers Dis Other Demen. 2015;30: 13–28. pmid:24526761
  18. Gosztolya G, Vincze V, Tóth L, Pákáski M, Kálmán J, Hoffmann I. Identifying mild cognitive impairment and mild Alzheimer’s disease based on spontaneous speech using ASR and linguistic features. Computer Speech & Language. 2019;53:181–97.
  19. Toth L, Hoffmann I, Gosztolya G, Vincze V, Szatloczki G, Banreti Z, et al. A speech recognition-based solution for the automatic detection of mild cognitive impairment from spontaneous speech. Curr Alzheimer Res. 2018;15(2):130–8. pmid:29165085
  20. Kantithammakorn P, Punyabukkana P, Pratanwanich PN, Hemrungrojn S, Chunharas C, Wanvarie D. Using automatic speech recognition to assess Thai speech language fluency in the Montreal Cognitive Assessment (MoCA). Sensors. 2022;22(4):1583. pmid:35214483
  21. Alharbi S, Alrazgan M, Alrashed A, Alnomasi T, Almojel R, Alharbi R, et al. Automatic speech recognition: systematic literature review. IEEE Access. 2021;9:131858–76.
  22. Plauche M, Nallasamy U, Pal J, Wooters C, Ramachandran D, editors. Speech recognition for illiterate access to information and technology. 2006 International Conference on Information and Communication Technologies and Development; 2006 May 25–26; Berkeley, CA, USA.
  23. Ter Huurne D, Possemis N, Banning L, Gruters A, König A, Linz N, et al. Validation of an automated speech analysis of cognitive tasks within a semiautomated phone assessment. Digit Biomark. 2023;7(1):115–23. pmid:37901366
  24. Tielsch JM, Sommer A, Witt K, Katz J, Royall RM. Blindness and visual impairment in an American urban population: the Baltimore Eye Survey. Arch Ophthalmol. 1990;108(2):286–90.
  25. Jung Y, Han J, Choi HJ, Lee JH. Reliability and validity of the Korean matrix sentence-in-noise recognition test for older listeners with normal hearing and with hearing impairment. Audiol Speech Res. 2022;18(4):213–21.
  26. Baek MJ, Kim K, Park YH, Kim S. The Validity and Reliability of the Mini-Mental State Examination-2 for Detecting Mild Cognitive Impairment and Alzheimer’s Disease in a Korean Population. PLoS ONE. 2016;11: e0163792. pmid:27668883
  27. Kang Y, Na D-L, Hahn S. A validity study on the Korean Mini-Mental State Examination (K-MMSE) in dementia patients. J Korean Neurol Assoc. 1997: 300–308.
  28. Google Cloud [Internet]. Sunnyvale: Google; c2024 [cited 2024 Nov 22]. Available from: https://cloud.google.com/speech-to-text/?hl=ko.
  29. Yusoff MSB. ABC of content validation and content validity index calculation. Education in Medicine Journal. 2019;11(2):49–54.
  30. Schober P, Boer C, Schwarte LA. Correlation Coefficients: Appropriate Use and Interpretation. Anesth Analg. 2018;126: 1763. pmid:29481436
  31. Cicchetti DV. Guidelines, criteria, and rules of thumb for evaluating normed and standardized assessment instruments in psychology. Psychol Assess. 1994;6: 284–290.
  32. Kim KW, Lee DY, Jhoo JH, Youn JC, Suh YJ, Jun YH, et al. Diagnostic Accuracy of Mini-Mental Status Examination and Revised Hasegawa Dementia Scale for Alzheimer’s Disease. Dement Geriatr Cogn Disord. 2005;19: 324–330. pmid:15785033
  33. Pezzotti P, Scalmana S, Mastromattei A, Di Lallo D, the "Progetto Alzheimer" Working Group. The accuracy of the MMSE in detecting cognitive impairment when administered by general practitioners: A prospective observational study. BMC Fam Pract. 2008;9: 29.
  34. Pinto TCC, Machado L, Bulgacov TM, Rodrigues-Júnior AL, Costa MLG, Ximenes RCC, et al. Is the Montreal Cognitive Assessment (MoCA) screening superior to the Mini-Mental State Examination (MMSE) in the detection of mild cognitive impairment (MCI) and Alzheimer’s Disease (AD) in the elderly? Int Psychogeriatr. 2019;31: 491–504. pmid:30426911
  35. Karimi L, Mahboub-Ahari A, Jahangiry L, Sadeghi-Bazargani H, Farahbakhsh M. A systematic review and meta-analysis of studies on screening for mild cognitive impairment in primary healthcare. BMC Psychiatry. 2022;22: 97. pmid:35139803
  36. Ciesielska N, Sokołowski R, Mazur E, Podhorecka M, Polak-Szabela A, Kędziora-Kornatowska K. Is the Montreal Cognitive Assessment (MoCA) test better suited than the Mini-Mental State Examination (MMSE) in mild cognitive impairment (MCI) detection among people aged over 60? Meta-analysis. Psychiatr Pol. 2016;50: 1039–1052.
  37. Aslam RW, Bates V, Dundar Y, Hounsome J, Richardson M, Krishan A, et al. A systematic review of the diagnostic accuracy of automated tests for cognitive impairment. Int J Geriatr Psychiatry. 2018;33: 561–575. pmid:29356098
  38. Tombaugh TN, McIntyre NJ. The Mini-Mental State Examination: A Comprehensive Review. J Am Geriatr Soc. 1992;40: 922–935. pmid:1512391
  39. Gupta M, Gupta V, Nagar Buckshee R, Sharma V. Validity and reliability of hindi translated version of Montreal cognitive assessment in older adults. Asian J Psychiatry. 2019;45: 125–128. pmid:31586818
  40. Wong A, Xiong YY, Kwan PWL, Chan AYY, Lam WWM, Wang K, et al. The Validity, Reliability and Clinical Utility of the Hong Kong Montreal Cognitive Assessment (HK-MoCA) in Patients with Cerebral Small Vessel Disease. Dement Geriatr Cogn Disord. 2009;28: 81–87. pmid:19672065
  41. Wong A, Fong C, Mok VC, Leung K, Tong RK. Computerized Cognitive Screen (CoCoSc): A Self-Administered Computerized Test for Screening for Cognitive Impairment in Community Social Centers. J Alzheimers Dis. 2017;59: 1299–1306. pmid:28731437
  42. Takahashi J, Kawai H, Suzuki H, Fujiwara Y, Watanabe Y, Hirano H, et al. Development and validity of the Computer-Based Cognitive Assessment Tool for intervention in community-dwelling older individuals. Geriatr Gerontol Int. 2020;20: 171–175. pmid:31916344
  43. Zhao X, Hu R, Wen H, Xu G, Pang T, He X, et al. A voice recognition-based digital cognitive screener for dementia detection in the community: Development and validation study. Front Psychiatry. 2022;13: 899729. pmid:35935417
  44. Mackin RS, Rhodes E, Insel PS, Nosheny R, Finley S, Ashford M, et al. Reliability and Validity of a Home-Based Self-Administered Computerized Test of Learning and Memory Using Speech Recognition. Aging Neuropsychol Cogn. 2022;29: 867–881. pmid:34139954
  45. Zhao X, Wen H, Xu G, Pang T, Zhang Y, He X, et al. Validity, feasibility, and effectiveness of a voice-recognition based digital cognitive screener for dementia and mild cognitive impairment in community-dwelling older Chinese adults: A large-scale implementation study. Alzheimers Dement. 2024;1–13. pmid:38299756
  46. McCullough KC, Bayles KA, Bouldin ED. Language Performance of Individuals at Risk for Mild Cognitive Impairment. J Speech Lang Hear Res. 2019;62: 706–722. pmid:30950734
  47. Ridha B, Rossor M. The Mini Mental State Examination. Pract Neurol. 2005;5: 298–303.
  48. Freedman JL, Fraser SC. Compliance without pressure: The foot-in-the-door technique. J Pers Soc Psychol. 1966;4: 195–202. pmid:5969145
  49. Hernández-Domínguez L, Ratté S, Sierra-Martínez G, Roche-Bergua A. Computer-based evaluation of Alzheimer’s disease and mild cognitive impairment patients during a picture description task. Alzheimers Dement (Amst). 2018;10:260–8. pmid:29780871
  50. Kavé G, Dassa A. Severity of Alzheimer’s disease and language features in picture descriptions. Aphasiology. 2018;32(1):27–40.