scispace - formally typeset
Topic

Dysarthria

About: Dysarthria is a research topic. Over the lifetime, 2402 publications have been published within this topic receiving 56554 citations.


Papers
Dissertation
12 May 2016
TL;DR: In this paper, the authors focused on the development of markerless methods for studying facial expressions and movements in neurology, focusing on Parkinson's disease (PD) and disorders of consciousness (DOC).
Abstract: This project develops markerless methods for studying facial expressions and movements in neurology, focusing on Parkinson’s disease (PD) and disorders of consciousness (DOC). PD is a neurodegenerative illness that affects around 2% of the population over 65 years old. Impairments of voice/speech are among the main signs of PD. This set of impairments is called hypokinetic dysarthria because of the reduced range of the movements involved in speech. This reduction can also be visible in other facial muscles, leading to hypomimia. Despite the high percentage of patients who suffer from dysarthria and hypomimia, only a few of them undergo speech therapy with the aim of improving the dynamics of articulatory/facial movements. The main reason is the lack of low-cost methodologies that could be implemented at home. The DOC that follow coma are the Vegetative State (VS), characterized by the absence of self-awareness and awareness of the environment, and the Minimally Conscious State (MCS), in which certain behaviors are sufficiently reproducible to be distinguished from reflex responses. The differential diagnosis between VS and MCS can be hard and prone to a high rate of misdiagnosis (~40%); it is mainly based on neuro-behavioral scales. The first diagnosis after coma plays a key role in planning rehabilitation for DOC patients, since MCS patients are more likely to recover consciousness than VS patients. Concerning PD, the aim is the development of contactless systems that could be used to study symptoms related to speech and facial movements/expressions. The methods proposed here, based on acoustical analysis and video-processing techniques, could support patients during speech therapy, including at home. Concerning DOC patients, the project is focused on the assessment of reflex and cognitive responses to standardized stimuli, which would make it possible to objectify the perceptual analysis performed by clinicians.

2 citations

Journal Article
TL;DR: An investigation of the abilities of Greek speakers with dysarthria to signal lexical stress at the single-word level found that the pattern of difficulty differed across speakers and that the relationship between the listeners’ judgments of stress location and the acoustic data was not conclusive.
Abstract: The study reported in this paper investigated the abilities of Greek speakers with dysarthria to signal lexical stress at the single word level. Three speakers with dysarthria and two unimpaired control participants were recorded completing a repetition task of a list of words consisting of minimal pairs of Greek disyllabic words contrasted by lexical stress location only. Fourteen listeners were asked to determine the attempted stress location for each word pair. Acoustic analyses of duration and intensity ratios, both within and across words, were undertaken to identify possible acoustic correlates of the listeners’ judgments concerning stress location. Acoustic and perceptual data indicate that while each participant with dysarthria in this study had some difficulty in signaling stress unambiguously, the pattern of difficulty was different for each speaker. Further, it was found that the relationship between the listeners’ judgments of stress location and the acoustic data was not conclusive.
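The duration and intensity ratios used in the acoustic analysis above can be sketched as follows. This is a minimal illustration assuming pre-segmented syllable waveforms; the function name, sampling rate and signal values are hypothetical, not taken from the study:

```python
import numpy as np

def stress_ratios(syll1, syll2, sr=16000):
    """Duration and RMS-intensity ratios between the two syllables of a
    disyllabic word (first/second); under a simple prominence model,
    values > 1 suggest stress on the first syllable."""
    dur_ratio = (len(syll1) / sr) / (len(syll2) / sr)
    rms = lambda x: np.sqrt(np.mean(np.square(x)))
    int_ratio = rms(syll1) / rms(syll2)
    return dur_ratio, int_ratio

# Toy example: a longer, louder first syllable (synthetic sine tones).
first = 0.5 * np.sin(2 * np.pi * 150 * np.linspace(0, 0.25, 4000))
second = 0.2 * np.sin(2 * np.pi * 150 * np.linspace(0, 0.15, 2400))
dur, inten = stress_ratios(first, second)   # both ratios > 1 here
```

Comparing such ratios within a word, and across the members of a minimal pair, is one way the listeners’ stress judgments could be related to measurable acoustic correlates.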

2 citations

Journal Article
TL;DR: A convolutional neural network (CNN) was used to detect the presence of dysarthria in speech and to estimate its severity, as part of a deep-learning approach to the diagnosis of Parkinson's disease.
Abstract: Parkinson’s disease (PD) is a neurological disorder marked by decreased dopamine levels in the brain. Persons suffering from PD exhibit vocal symptoms such as dysphonia and dysarthria; the speech impairments in PD are grouped together as hypokinetic dysarthria. Traditional PD management is based on a patient’s clinical history and physical examination, as there are currently no known biomarkers for its diagnosis. Automatic speech-analysis techniques aid clinicians in diagnosing and monitoring patients, and provide frequent, cost-effective and objective assessment. This paper presents a pilot experiment to detect the presence of dysarthria in speech and to estimate its level of severity using a deep-learning approach. Automated feature extraction and classification using a convolutional neural network achieves 77.48% accuracy on test samples of the TORGO database with five-fold validation. Using transfer learning, system performance is further analyzed for gender-specific performance as well as for detection of disease severity.
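The five-fold evaluation protocol behind the reported 77.48% figure can be sketched generically. The classifier below is a trivial majority-class stand-in (the paper's actual model is a CNN on speech features), and the data are synthetic; only the fold-splitting logic is the point:

```python
import numpy as np

def five_fold_accuracy(features, labels, train_fn, n_folds=5, seed=0):
    """Generic k-fold evaluation loop: shuffle, split into folds,
    train on k-1 folds, test on the held-out fold, average accuracy."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(labels))
    folds = np.array_split(idx, n_folds)
    accs = []
    for k in range(n_folds):
        test_idx = folds[k]
        train_idx = np.concatenate([folds[j] for j in range(n_folds) if j != k])
        predict = train_fn(features[train_idx], labels[train_idx])
        accs.append(np.mean(predict(features[test_idx]) == labels[test_idx]))
    return float(np.mean(accs))

# Stand-in classifier: always predict the majority training label.
def majority_fn(X, y):
    majority = np.bincount(y).argmax()
    return lambda X_test: np.full(len(X_test), majority)

X = np.random.default_rng(1).normal(size=(100, 13))  # e.g. 13 MFCCs per sample
y = np.array([0] * 70 + [1] * 30)                    # 0 = control, 1 = dysarthric
acc = five_fold_accuracy(X, y, majority_fn)
```

Because the folds are equal-sized, the majority-class stand-in scores exactly the majority-class prevalence (0.7 here); a real model such as the paper's CNN would be plugged in as `train_fn`.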

2 citations

Proceedings Article
01 Nov 2017
TL;DR: This work focuses on mapping the phones to the EMA sensor channels based on their place of articulation, and shows that the EMA sensor combinations with the (y, z, Φ) parameter group for each phone coincide well with the acoustic and FDA scores.
Abstract: Dysarthria is a traumatic neuromotor disorder that affects the physical production of speech. It reduces the function of the primary articulators involved in speech. Recent research has shown that articulatory data provide a better assessment of the speech intelligibility of dysarthric speakers than acoustic models, which are built on the perceptual Mel scale rather than directly on the anatomy of speech production. Articulatory data include positional data of the articulators obtained from 12 Electromagnetic Articulograph (EMA) sensor channels, each containing 6 parameters, namely x, y, z, Φ, θ and rms. When identifying the phone intelligibility deficit of dysarthric speakers, using all 12 sensor combinations, each with 6 parameters, may introduce noise into the intelligibility assessment. Hence, an appropriate mapping of the sound units to the sensor combinations is vital. The current work focuses on mapping the phones to the EMA sensor channels based on their place of articulation. The parameters are also grouped according to the 3D spherical coordinate distribution of the articulator as (x, y, θ) and (y, z, Φ). The mapping of the EMA sensor channels to the sound units is further validated through HMM-based acoustic-only models and FDA scores. The phones are trained on the articulatory information of their corresponding sensor combinations using a 5-class support vector machine and tested through 10-fold cross-validation for both parameter groups. The 5 classes are mild, moderate, moderate-to-severe, severe and normal; each phone can be classified into any of the 5 classes in both parameter groups based on its speech intelligibility. The results show that the EMA sensor combinations with the (y, z, Φ) parameter group for each phone coincide well with the acoustic and FDA scores.
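The place-of-articulation mapping and parameter grouping described above can be sketched as a simple lookup. All names here (channel labels, phone inventory, group keys) are illustrative assumptions, not the paper's actual EMA channel layout:

```python
# Parameter groups from the paper's 3D spherical-coordinate split.
PARAM_GROUPS = {"group_a": ("x", "y", "theta"), "group_b": ("y", "z", "phi")}

# Hypothetical mapping from place of articulation to EMA sensor channels.
PLACE_TO_CHANNELS = {
    "bilabial": ["upper_lip", "lower_lip"],
    "alveolar": ["tongue_tip"],
    "velar":    ["tongue_dorsum"],
}

# Hypothetical phone inventory keyed by place of articulation.
PHONE_PLACE = {"p": "bilabial", "b": "bilabial", "t": "alveolar",
               "d": "alveolar", "k": "velar", "g": "velar"}

def features_for_phone(phone, group="group_b"):
    """Select the (channel, parameter) pairs that would feed the
    per-phone 5-class severity classifier for the chosen group."""
    channels = PLACE_TO_CHANNELS[PHONE_PLACE[phone]]
    params = PARAM_GROUPS[group]
    return [(ch, p) for ch in channels for p in params]

feats = features_for_phone("t")  # tongue-tip (y, z, phi) features
```

Restricting each phone to the channels of its own place of articulation is what keeps the irrelevant sensor combinations from adding noise to the intelligibility assessment.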

1 citation

Journal Article
TL;DR: In this article, the authors compared speech intelligibility scores obtained by orthographic transcription and by visual analogue scale (VAS) across two groups of listeners, experienced and naive, and examined the relationship among the four sets of scores by means of correlational analysis.
Abstract: BACKGROUND: Speech intelligibility is a global indicator of the severity of a speech problem. It is a measure that has been used frequently in research and clinical assessment of speech. Previous studies have shown that factors such as measurement method and listener experience can influence speech intelligibility scores. However, these factors of speech intelligibility assessment have not yet been investigated in people with Down syndrome (DS).
AIMS: To compare speech intelligibility scores in speakers with DS measured using two methods, orthographic transcription and a visual analogue scale (VAS), by two groups of listeners, experienced and naive; and to examine the relationship across the four sets of speech intelligibility scores by means of correlational analysis.
METHODS & PROCEDURES: A total of 30 adolescents and adults with DS read or repeated 12 sentences from a standardized test of intelligibility for adults with dysarthria. Each sentence was saved as a separate sound file and the 360 sentences were divided to form eight sets of stimuli. A total of 32 adults (16 experienced and 16 naive) served as listeners. Each listener heard a single set of sentences and independently estimated the level of intelligibility for each sentence using a VAS in one task and wrote down the words perceived (i.e., orthographic transcription) in another task. The order of the two tasks was counterbalanced across listeners and the tasks were completed at least 1 week apart.
OUTCOMES & RESULTS: Repeated-measures analysis of variance (ANOVA), confirmed by mixed-methods analysis, showed that the scores obtained using orthographic transcription were significantly higher than those obtained using VAS, and that the experienced listeners' scores were significantly higher than the naive listeners' scores. Spearman rank correlation analysis showed that the four sets of scores across all conditions were strongly positively correlated with each other.
CONCLUSIONS & IMPLICATIONS: Listeners, both experienced and naive, may judge speech in DS differently when using orthographic transcription versus VAS as the method of measurement. In addition, experienced listeners can judge speech intelligibility differently compared with listeners who are less exposed to unclear speech, which may not represent 'real-world' functional communicative ability. Speech and language therapists should be aware of the effect of these factors when measuring intelligibility scores, and direct comparison of scores obtained using different procedures and by different groups of listeners is not recommended.
What this paper adds:
What is already known on the subject: Previous research on other clinical groups (e.g., Parkinson's disease) has shown that speech intelligibility scores can vary across different measurement methods and when judged by listeners with different experience. However, these factors have not yet been investigated in people with DS.
What this paper adds to existing knowledge: Similar to the findings reported for other clinical groups, using an impressionistic measurement method such as VAS can result in different speech intelligibility scores compared with scores obtained from orthographic transcription in speakers with DS. Furthermore, experienced listeners can perceive intelligibility as better compared with naive (untrained) listeners for this group.
What are the potential or actual clinical implications of this work? When measuring speech intelligibility, speech and language therapists should be aware that scores obtained using orthographic transcription can be higher than those obtained using VAS. They should also be aware that their increased exposure to hearing atypical speech may cause them to judge the speech difficulty as less severe and lead to an inaccurate representation of speech performance. Speech and language therapists should consider these factors when interpreting assessment results, especially when using intelligibility measures for treatment outcomes.
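The Spearman rank correlation used to relate the four sets of scores can be computed as a Pearson correlation on ranks. The sketch below assumes tie-free, per-sentence scores; the values are hypothetical and deliberately chosen to be perfectly monotone, so ρ comes out as 1:

```python
import numpy as np

def spearman(a, b):
    """Spearman rank correlation via Pearson on ranks (no ties assumed)."""
    ra = np.argsort(np.argsort(a)).astype(float)
    rb = np.argsort(np.argsort(b)).astype(float)
    ra -= ra.mean()
    rb -= rb.mean()
    return float(ra @ rb / np.sqrt((ra @ ra) * (rb @ rb)))

# Hypothetical per-sentence scores: VAS estimates (0-100) and
# transcription scores (% words correctly transcribed).
vas = np.array([55.0, 70.0, 40.0, 85.0, 62.0, 30.0])
transcription = np.array([68.0, 80.0, 52.0, 95.0, 75.0, 45.0])
rho = spearman(vas, transcription)  # 1.0 for these monotone toy data
```

A high ρ with a systematic offset is exactly the pattern the study reports: the two methods rank speakers similarly even though transcription yields higher absolute scores than VAS.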

1 citation


Network Information
Related Topics (5)
Parkinson's disease
27.9K papers, 1.1M citations
82% related
Multiple sclerosis
26.8K papers, 886.7K citations
77% related
White matter
14.8K papers, 782.7K citations
77% related
Cerebellum
16.8K papers, 794K citations
76% related
Traumatic brain injury
25.7K papers, 793.7K citations
76% related
Performance Metrics
No. of papers in the topic in previous years
Year	Papers
2023	229
2022	415
2021	164
2020	138
2019	125
2018	88