Investigating acceptance of current and future artificial intelligence systems for suicide prevention
Journal article   Open access   Peer reviewed


Jolene A Cox, Brianna Ivory, Paul Salmon and Gemma Read
AI & Society, Advance Access
18-Mar-2026
PDF: s00146-026-02949-3 (737.25 kB)
Published Version (Advance Access), Open Access, CC BY 4.0

Abstract

Keywords: artificial intelligence; mental health; suicide prevention; technology acceptance
Suicide is a leading cause of premature mortality worldwide, making suicide prevention a global public health priority. As more Artificial Intelligence (AI)-based suicide prevention interventions are developed and implemented, it is important to study acceptance of these AI systems. The present study aimed to investigate the factors that predict acceptance of current and future AI systems for suicide prevention and the perceived risks and benefits of these AI systems. Individuals from the Australian public were invited to participate in an online survey, which included six hypothetical scenarios of current AI systems (Artificial Narrow Intelligence [ANI]) and future advanced AI systems (Artificial General Intelligence [AGI]) for suicide prevention. Participants evaluated these scenarios on five technology acceptance factors (performance expectancy, effort expectancy, social influence, facilitating conditions, trust) and elaborated on their perceived risks and benefits. Performance expectancy, social influence, and trust predicted acceptance of ANI systems, but only trust predicted acceptance of AGI systems. Overall, the level of acceptance was higher for ANI systems than for AGI systems. Several perceived risks (e.g., risks to mental healthcare, distrust in AI, threats to humanity) and perceived benefits (e.g., benefits to mental healthcare, trust in AI, human help-seeking) were identified. AI systems represent potential avenues for effective suicide prevention. However, to ensure acceptance, AI systems for suicide prevention must be developed in a way that is safe, reliable, and trustworthy.
