
Why a guardian angel AI is needed for the psyche

Artificial intelligence is increasingly finding its way into sensitive areas of life. In mental health in particular, chatbots promise quick help around the clock. But this is precisely where researchers in Dresden see growing risks. Scientists at the Else Kröner Fresenius Centre for Digital Health and the University Hospital Carl Gustav Carus warn that freely available language models, operating without clear rules, are acting as supposed therapists.
05/12/2025

The systems respond empathetically, adapt to the user and build trust. For people with mental health issues this can be dangerous, as guidance can quickly turn into emotional dependency. The researchers are therefore calling for binding safety standards and for such applications to be clearly classified as regulated medical devices as soon as they take on therapy-related functions. One key proposal is a "guardian angel AI": an independent system that would monitor conversations in the background, recognise risks and direct users to support services in an emergency. This would be supplemented by age checks, mandatory risk analyses and clear notices that the chatbot does not provide medical advice.

The message is clear: AI can help, but only if it is designed responsibly. Especially where words touch the psyche, clear guardrails are needed so that technological progress does not unintentionally cause harm.

Press release of the "Technische Universität Dresden" from 05.12.2025

The above text, or parts thereof, was automatically translated from the original language using a translation system (DeepL API). Despite careful machine processing, translation errors cannot be ruled out.
