26 Participants Needed

AI-Assisted Treatment for Speech Sound Disorders

Overseen By: Nina Benway, PhD
Age: < 18
Sex: Any
Trial Phase: Academic
Sponsor: Syracuse University
No Placebo Group: All trial participants will receive the active study treatment (no placebo)

Trial Summary

Will I have to stop taking my current medications?

The trial information does not specify whether participants need to stop taking their current medications. However, it does exclude those on antiepileptic medication, so it's best to discuss your specific medications with the trial coordinators.

What data supports the effectiveness of the AI-Assisted Treatment for Speech Sound Disorders?

Research shows that Speech Motor Chaining (SMC) can help children with speech sound disorders successfully learn new speech patterns and apply them to untrained words. Additionally, using technology like automatic speech recognition (ASR) for feedback has been effective in improving speech accuracy in individuals with speech disorders, suggesting that AI-assisted methods could enhance traditional speech therapy. [1-5]

How does the AI-Assisted Treatment for Speech Sound Disorders differ from other treatments?

The AI-Assisted Treatment for Speech Sound Disorders is unique because it uses artificial intelligence to provide real-time and objective feedback on speech performance, unlike traditional methods that rely on subjective assessments by speech-language pathologists. This approach can enhance the consistency and accuracy of speech therapy by analyzing speech patterns and providing tailored feedback to patients. [4-8]

What is the purpose of this trial?

The goal of this randomized-controlled trial is to determine how artificial intelligence-assisted home practice may enhance speech learning of the "r" sound in school-age children with residual speech sound disorders. All child participants will receive 1 speech lesson per week, via telepractice, for 5 weeks with a human speech-language clinician. Some participants will receive 3 speech sessions per week with an Artificial Intelligence (AI) clinician during the same 5 weeks as the human clinician sessions (CONCURRENT treatment order group), whereas others will receive 3 speech sessions per week with an AI clinician after the human clinician sessions end (SEQUENTIAL treatment order group).

Eligibility Criteria

This trial is for children aged 9 to 17 with speech sound disorders affecting their 'r' sounds, who speak American English and started learning it by age 3. They must have normal hearing, no significant cognitive or neurological conditions, no obstructive orthodontic appliances, and access to videoconferencing over broadband internet.

Inclusion Criteria

Must receive a percentile score of 8 or below on the Goldman-Fristoe Test of Articulation-3 (GFTA-3) Sounds in Words subtest
Must have reported hearing within normal limits
I want to improve how I pronounce the 'r' sound.
Plus 10 additional inclusion criteria

Exclusion Criteria

Must have no known history of autism spectrum disorder, Down Syndrome, cerebral palsy, intellectual disability, permanent hearing loss, epilepsy/antiepileptic medication, or brain injury/neurosurgery/stroke
Must have no orthodontic appliances that block the roof of the mouth (e.g., palate expanders)
Must not demonstrate childhood apraxia of speech (CAS-only) features in BOTH articulatory and rate/prosody domains of the ProCAD
Plus 2 additional exclusion criteria

Timeline

Screening

Participants are screened for eligibility to participate in the trial

2-4 weeks

Treatment

Participants receive 5 speech lessons with a human speech-language clinician, 1 time per week for 5 weeks. Concurrently or sequentially, they receive 15 speech lessons with an AI clinician, 3 times per week for 5 weeks.

5 weeks

Follow-up

Participants are monitored for changes in speech sound accuracy and retention after treatment

10 weeks

Treatment Details

Interventions

  • Artificial Intelligence-led Speech Motor Chaining (CHAINING-AI)
  • Speech-Language Pathologist-led Speech Motor Chaining
Trial Overview

The study compares two methods of improving the 'r' sound in children: traditional lessons from a human speech-language pathologist once a week, and additional sessions three times a week with an AI clinician, delivered either concurrently or sequentially.
Participant Groups
2 treatment groups, both experimental

Group I: SEQUENTIAL treatment order (2 interventions)
  • 5 speech lessons with a human speech-language clinician: 1 time per week for 5 weeks.
  • 15 speech lessons with an AI clinician (supervised by the caregiver): 3 times per week for the 5 weeks AFTER the human clinician sessions end.

Group II: CONCURRENT treatment order (2 interventions)
  • 5 speech lessons with a human speech-language clinician: 1 time per week for 5 weeks.
  • 15 speech lessons with an AI clinician (supervised by the caregiver): 3 times per week DURING the same 5 weeks as the human clinician sessions.

Artificial Intelligence-led Speech Motor Chaining (CHAINING-AI) is already approved in the United States, as ChainingAI, for:

  • Residual speech sound disorders, specifically for improving the 'r' sound in school-age children


Who Is Running the Clinical Trial?

Syracuse University

Lead Sponsor

Trials: 54
Recruited: 118,000+

National Institute on Deafness and Other Communication Disorders (NIDCD)

Collaborator

Trials: 377
Recruited: 190,000+

State University of New York - Upstate Medical University

Collaborator

Trials: 176
Recruited: 27,600+

National Institutes of Health (NIH)

Collaborator

Trials: 2,896
Recruited: 8,053,000+

Findings from Research

Ultrasound biofeedback therapy (UBT) shows promise in improving speech for children with residual speech sound disorder (RSSD), but it can be complex and demanding, necessitating simpler articulation targets for better understanding.
The study developed an image-analysis program, TonguePART, which successfully quantified tongue movements and distinguished accurate from misarticulated /ɑr/ sounds with up to 85% classification accuracy, highlighting the importance of tongue dorsum and blade movements for effective speech production.
Classification of accurate and misarticulated /ɑr/ for ultrasound biofeedback using tongue part displacement trajectories. Li, SR., Dugan, S., Masterson, J., et al. [2023]
The study found that somatosensory inputs to oro-facial structures significantly improved speech processing for low-frequency words, indicating that these sensory cues can enhance lexical access and speech production accuracy.
In contrast, stimulation applied to non-speech areas (forehead) did not produce any significant effects, reinforcing the idea that targeted somatosensory interventions can effectively influence motor speech treatment outcomes.
Cross-Modal Somatosensory Repetition Priming and Speech Processing. Namasivayam, AK., Yan, T., Bali, R., et al. [2022]
The iPad-based speech therapy app using automatic speech recognition (ASR) showed an 80% agreement with human judgment on speech accuracy, indicating reliable feedback for users.
Participants with apraxia of speech and aphasia demonstrated significant improvements in word production accuracy over a 4-week therapy program, with gains maintained one month after treatment, suggesting the app's effectiveness as a therapeutic tool.
Feasibility of Automatic Speech Recognition for Providing Feedback During Tablet-Based Treatment for Apraxia of Speech Plus Aphasia. Ballard, KJ., Etter, NM., Shen, S., et al. [2020]

References

Classification of accurate and misarticulated /ɑr/ for ultrasound biofeedback using tongue part displacement trajectories. [2023]
Cross-Modal Somatosensory Repetition Priming and Speech Processing. [2022]
Feasibility of Automatic Speech Recognition for Providing Feedback During Tablet-Based Treatment for Apraxia of Speech Plus Aphasia. [2020]
Tutorial: Speech Motor Chaining Treatment for School-Age Children With Speech Sound Disorders. [2020]
Multimodal Speech Capture System for Speech Rehabilitation and Learning. [2018]
Gradient boosting for Parkinson's disease diagnosis from voice recordings. [2021]
Management of Parkinson's Disease Dysarthria: Can Artificial Intelligence Provide the Solution? [2022]
Automatic speech recognition (ASR) and its use as a tool for assessment or therapy of voice, speech, and language disorders. [2018]