The systems respond empathetically, adapt, and build trust. For people with mental health issues, this can become dangerous if guidance quickly turns into emotional dependency. The researchers therefore call for binding safety standards and for such applications to be classified as regulated medical devices as soon as they take on therapy-related functions. One key proposal is a "guardian angel AI": an independent system that would monitor conversations in the background, recognise risks and, in an emergency, point users to sources of help. This would be supplemented by age verification, mandatory risk analyses and clear notices that the chatbot does not provide medical advice.
The message is clear: AI can help, but only if it is designed responsibly. Especially where words touch the psyche, clear guardrails are needed to ensure that technological progress does not unintentionally cause harm.
Press release from Technische Universität Dresden, 5 December 2025