50 Participants Needed

Voice-Based AI for Suicide Risk Assessment

Overseen by Roisin Slevin, BS
Age: 18+
Sex: Any
Trial Phase: Academic
Sponsor: Lyssn.io, Inc.
No Placebo Group: All trial participants will receive the active study treatment (no placebo).

Trial Summary

Will I have to stop taking my current medications?

The trial information does not specify whether you need to stop taking your current medications.

What data supports the effectiveness of the treatment LyssnCrisis for suicide risk assessment?

Research shows that using artificial intelligence (AI) and machine learning (ML) can accurately predict suicide risk, with some studies achieving over 90% accuracy. This suggests that AI-based treatments like LyssnCrisis could be effective in assessing suicide risk. [1][2][3][4][5]

Is the voice-based AI for suicide risk assessment safe for humans?

The research on the computer-automated Columbia-Suicide Severity Rating Scale (eC-SSRS) using voice technology suggests it is feasible and valid for monitoring suicidality, with generally supportive feedback from participants. However, the study was small, and further research is needed to confirm its safety and effectiveness in larger and more diverse groups. [3][4][6][7][8]

How is the treatment LyssnCrisis unique for suicide risk assessment?

LyssnCrisis is unique because it uses voice-based artificial intelligence to assess suicide risk, focusing on speech and language patterns to identify individuals at risk. This approach differs from traditional methods by incorporating advanced technology to analyze vocal and textual data, potentially improving the accuracy of risk detection. [1][2][3][9][10]

What is the purpose of this trial?

Effective training requires repeated opportunities for skills practice with performance-based feedback, which is challenging to provide at scale. This study focuses on developing an AI-based coding and feedback tool ("LyssnCrisis") for implementation in a nationally utilized crisis call center, training counselors (call-takers) in suicide risk assessment skills, and evaluating whether LyssnCrisis improves services and client outcomes. The goal is to maximize call-takers' capacity to assess their callers' risk of suicidality; a core aspect of the research is therefore developing a novel training process that supports human call-taker capacities.

Research Team

David C Atkins, PhD

Principal Investigator

Lyssn.io, Inc.

Eligibility Criteria

This trial is for call-takers employed at Protocall Services, Inc. It aims to train them in suicide risk assessment skills using a new AI tool called LyssnCrisis, designed to enhance their ability to help callers with self-harm or suicidal tendencies.

Inclusion Criteria

I work at Protocall Services, Inc.

Timeline

Screening

Participants are screened for eligibility to participate in the trial

2-4 weeks

Baseline

Call-takers undergo a 4-week baseline period of services-as-usual (SAU) where LyssnCrisis operates in the background without providing feedback.

4 weeks

Intervention

Call-takers are randomly assigned to either continue SAU or begin receiving feedback with LyssnCrisis for 12 weeks.

12 weeks

Follow-up

Participants are monitored for the acceptability, appropriateness, and feasibility of the intervention, as well as call-taker crisis counseling fidelity.

4 weeks

Treatment Details

Interventions

  • LyssnCrisis
Trial Overview

The study tests LyssnCrisis, an AI-based feedback tool for crisis counselors. The goal is to see whether it can improve training and the quality of support provided by call-takers working with clients at risk of suicide or self-harm.
Participant Groups
2 treatment groups: Experimental Treatment and Active Control

Group I: LyssnCrisis (Experimental Treatment: 1 intervention)
To compare LyssnCrisis to SAU, researchers will use a standard randomized design in which call-takers will have services-as-usual (SAU) and LyssnCrisis phases. All call-takers will start with a 4-week baseline period of SAU in which LyssnCrisis operates in the background but does not provide AI feedback on suicide risk assessment. Lyssn team members will onboard and train the participating call-takers on the Lyssn software (i.e., reviewing calls, sharing and accessing calls for supervision), using a method similar to that of the pilot field trial. After the 4-week baseline phase, call-takers will be randomly assigned to continue SAU or begin feedback with LyssnCrisis (LC) for 12 weeks. LC arm participants will have access to LyssnCrisis feedback tools for the duration of the 12-week period and will receive onboarding support for the LyssnCrisis fidelity feedback tools.
Group II: Services As Usual (Active Control: 1 intervention)
Participants in the SAU arm follow the same 4-week SAU baseline, onboarding, and randomization described above. Those assigned to SAU continue services-as-usual for an additional 12-week period, during which they receive ProtoCall's regular supervision and feedback. Following this period (16 total weeks of SAU), SAU arm participants will receive LyssnCrisis for 12 weeks.
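The two-arm schedule can be summarized as a minimal sketch in Python. This is purely illustrative: the function names, the seeded coin-flip randomization, and the data layout are our assumptions, not part of the study's actual software or randomization procedure.

```python
import random

# Hypothetical sketch of the trial's phase schedule (illustrative only).
# All call-takers start with a 4-week SAU baseline; they are then randomized
# to 12 weeks of LyssnCrisis (LC) feedback or 12 more weeks of SAU, with the
# SAU arm crossing over to LyssnCrisis afterward.

def phase_schedule(arm):
    """Return a list of (phase name, weeks) for a call-taker in the given arm."""
    if arm == "LC":
        return [("SAU baseline", 4), ("LyssnCrisis feedback", 12)]
    if arm == "SAU":
        return [("SAU baseline", 4), ("SAU continued", 12),
                ("LyssnCrisis feedback", 12)]
    raise ValueError(f"unknown arm: {arm}")

def randomize(call_takers, seed=0):
    """Randomly assign each call-taker to the LC or SAU arm after baseline."""
    rng = random.Random(seed)
    return {ct: rng.choice(["LC", "SAU"]) for ct in call_takers}
```

Under this sketch, the LC arm spans 16 weeks total and the SAU arm spans 28 weeks (16 weeks of SAU followed by 12 weeks of LyssnCrisis), matching the timeline described above.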

Who Is Running the Clinical Trial?

Lyssn.io, Inc.

Lead Sponsor

Trials
4
Recruited
2,500+

ProtoCall Services, Inc.

Collaborator

Trials
1
Recruited
50+

Findings from Research

The study evaluated the effectiveness of combining the Columbia Suicide Severity Rating Scale (C-SSRS) with a machine learning model (VSAIL) to predict suicide attempts (SA) and suicidal ideation (SI) in a cohort of 120,398 patient visits, showing that the combined model significantly outperformed both individual assessments in predicting risk.
The ensemble model achieved higher accuracy metrics, with an area under the receiver operating characteristic curve (AUROC) of 0.874-0.887 for SA and 0.869-0.879 for SI, indicating that integrating clinician assessments with machine learning can enhance suicide risk detection and improve patient outcomes.
Integration of Face-to-Face Screening With Real-time Machine Learning to Predict Risk of Suicide Among Adults. Wilimitis, D., Turer, RW., Ripperger, M., et al. [2022]
A systematic review of 87 studies using artificial intelligence and machine learning found that these technologies can achieve over 90% accuracy in predicting suicidal behaviors, indicating their potential for improving risk detection.
Despite the promising accuracy, the studies varied widely in methodology and outcomes, highlighting the need for standardized approaches in future research to effectively utilize AI/ML in suicide prevention.
Artificial Intelligence and Suicide Prevention: A Systematic Review of Machine Learning Investigations. Bernert, RA., Hilberg, AM., Melia, R., et al. [2023]
This study analyzed 281 telephone calls to telehealth services to classify suicide risk using artificial intelligence, achieving a remarkable accuracy of 99.85% in identifying imminent risk versus low risk based on voice biomarkers.
The research identified 11 key voice biomarkers that effectively differentiate between low and imminent suicide risk, with models tailored for men and women, highlighting the potential for real-time risk assessment in mental health services.
Using Voice Biomarkers to Classify Suicide Risk in Adult Telehealth Callers: Retrospective Observational Study. Iyer, R., Nedeljkovic, M., Meyer, D. [2022]

References

Integration of Face-to-Face Screening With Real-time Machine Learning to Predict Risk of Suicide Among Adults. [2022]
Artificial Intelligence and Suicide Prevention: A Systematic Review of Machine Learning Investigations. [2023]
Using Voice Biomarkers to Classify Suicide Risk in Adult Telehealth Callers: Retrospective Observational Study. [2022]
Prediction models of suicide and non-fatal suicide attempt after discharge from a psychiatric inpatient stay: A machine learning approach on nationwide Danish registers. [2023]
The performance of machine learning models in predicting suicidal ideation, attempts, and deaths: A meta-analysis and systematic review. [2022]
Feasibility and validation of a computer-automated Columbia-Suicide Severity Rating Scale using interactive voice response technology. [2013]
Development and pilot testing of an online monitoring tool of depression symptoms and side effects for young people being treated for depression. [2015]
A Controlled Trial Using Natural Language Processing to Examine the Language of Suicidal Adolescents in the Emergency Department. [2018]
Suicide risk detection using artificial intelligence: the promise of creating a benchmark dataset for research on the detection of suicide risk. [2023]
A Machine Learning Approach to Identifying the Thought Markers of Suicidal Subjects: A Prospective Multicenter Trial. [2022]