Can Digital Tools Save Young People’s Mental Health Without Putting Them at Risk?

Young people’s mental health is facing an unprecedented crisis. Traditional healthcare systems, already overwhelmed, are struggling to meet the growing demand for psychological support. In response, digital tools and artificial intelligence are emerging as promising solutions to provide accessible, rapid, and personalized assistance. However, their use raises essential questions: how can we ensure that these technologies genuinely improve well-being without creating new risks?

Experts and young people from diverse backgrounds have come together to define five key principles to guide the development of these tools. First, accuracy is essential: incorrect information or inappropriate advice can worsen psychological distress or even encourage dangerous behaviors. Therefore, these technologies must be rigorously tested on diverse populations before being deployed.

Next, these tools must remain human-centered. This means they should be designed by prioritizing users’ needs and safety, not commercial interests. Directly involving young people and those affected in their development helps create solutions that are accessible and adapted to everyone, including people with disabilities.

Equitable access is another major challenge. Social and economic inequalities should not prevent some young people from benefiting from these tools. Adapted pricing, subsidized programs, or partnerships with schools can help reduce these disparities. Without such measures, digital tools risk exacerbating inequalities rather than reducing them.

Protecting privacy is equally crucial. Mental health data is extremely sensitive. Its collection and use must be transparent, secure, and under users’ control. Techniques such as local data storage, rather than centralized servers, limit the risks of leaks or misuse.

Finally, transparency is essential for building trust. Users must understand how these tools work, what data is collected, and what the potential risks are. Clearly explaining how algorithms function and distinguishing between interactions with a machine and those with a human professional helps avoid misunderstandings and abuse.

These principles are not just theoretical. They were developed during practical workshops where young people tested chatbots and observed their strengths and weaknesses. These experiences revealed that, without safeguards, these tools can perpetuate biases, reinforce stereotypes, or even cause serious harm. For example, cases of dangerous or even fatal advice have already been documented.

For these technologies to fulfill their promise, their development must involve young people, healthcare professionals, researchers, and policymakers. This requires investing in digital tool education, supporting disadvantaged communities, and ensuring that algorithms are trained on representative data. Only a collective and vigilant approach will transform these innovations into tools for justice and well-being for all young people.


Documentation and Sources

Reference Document

DOI: https://doi.org/10.1038/s44277-025-00052-x

Title: Advancing neurotech justice in youth digital mental health: insights from an interdisciplinary and cross-generational workshop

Journal: NPP—Digital Psychiatry and Neuroscience

Publisher: Springer Science and Business Media LLC

Authors: Craig W. McFarland; Donnella S. Comeau; Sepideh Abdi; Mahsa Alborzi Avanaki; Leo Anthony Celi; Julian Adong; Shaikha Alothman; Manal Brahimi; RuQuan Brown; Cecile Chavane; Jack Gallifant; Felix Garcia; Gabriel Làzaro-Muñoz; Eliane Motchoffo; Claire Joy Moss; Derek Ricketts; Paulos Solomon; Takeshi Tohyama; Francis X. Shen; Benjamin C. Silverman
