
AI in healthcare - great opportunities, new risks

Artificial intelligence in the healthcare sector is a matter of trust. A representative survey commissioned by the TÜV Association shows where knowledge gaps and uncertainty about the possibilities and risks of self-learning software are particularly pronounced.
21/02/2023

Many people wear them on their wrists: smartwatches with a pedometer and heart rate monitor. Yet only one in two users knows that artificial intelligence (AI) methods are sometimes used to analyse this data. A representative survey commissioned by the TÜV-Verband e.V., Berlin, shows that in the field of medical diagnostics there are many knowledge gaps and great uncertainty about the possibilities and risks of self-learning software. "Artificial intelligence in healthcare is a matter of trust. We see a considerable need for education among users, but in some cases also a need to catch up in terms of legal regulation," says Mark Küller, consultant for medical devices at the TÜV Association.

Küller sees a grey area when it comes to health-related apps in particular. "A great many apps marketed as health, wellness or lifestyle apps are in fact medical devices because of their nature and the benefits they promise," says the industry expert. As such, they require authorisation under the EU Medical Device Regulation. However, given the sheer number of these programmes and their diverse distribution channels, it is almost impossible for the market surveillance authorities to monitor them. One risk is that users are lulled into a false sense of security. "If a manufacturer promises or declares that its product accurately measures a medical indicator, for example, but the product does not actually do this, users can make the wrong decisions and health risks can arise," warns Küller.

Use of AI reduces trust in doctors

The TÜV survey shows how closely belief in progress and scepticism are intertwined in Germany: 66% of respondents generally see opportunities for health through AI applications. But as soon as their own well-being is at stake, confidence drops: only 41% assume that AI can make the correct diagnosis when a serious illness is suspected. By comparison, acquaintances who have had similar symptoms themselves and share their experiences enjoy exactly the same level of trust. A doctor is credited with the highest level of expertise in an emergency. Surprisingly, according to the survey, the trust rate drops from 81% to just 67% if this doctor is supported by AI. "This could be a sign of fear of the unknown, of the incomprehensible," says Küller. "If a doctor consults a textbook in front of me, I can read it myself in case of doubt. But what the AI does always seems like a black box to me as a patient."

In Germany, an AI medical device is not allowed to make a diagnosis; this is reserved for doctors, explains Küller. "The AI may only provide support, but must not draw any conclusions about treatment. The decision always lies with the doctors." In diagnostic imaging such as MRI or CT, this digital assistance has been used successfully for years, and the use of AI is developing rapidly. Software detects conspicuous patterns and anomalies and can thus recognise cancer cells or diabetic retinal disease at an early stage, for example. "Many patients simply take this for granted. The more clarity there is about the great benefits AI is already providing and the great opportunities the technology offers in the diagnosis and treatment of diseases, the more trust in digital technology will grow," says Küller. "Health is a highly sensitive topic, which is why building trust among the population is a difficult and long process."

Need for regulation of evolving AI

In addition to more transparency and education regarding the use of AI in the healthcare sector, the TÜV Association believes that existing legal and normative requirements need to be supplemented. "We urgently need norms and standards for AI-based medical devices," warns Küller. Clear requirements and limits also need to be defined for AI systems that learn and change during use. "Certification of learning and changing medical devices is not always possible within the framework of the existing legal requirements. Society and, by proxy, politics must consider where the limits for learning and changing AI systems in the medical field should lie and how they should then be regulated," says Küller. The particularly innovative aspect of AI, its ability to keep learning through training with new data sets, can make its decisions difficult to understand, can change those decisions considerably over time and, in case of doubt, raises complicated liability issues.

Article from "medizin & technik", 21 February 2023

The above texts, or parts thereof, were automatically translated from the original language text using a translation system (DeepL API).
Despite careful machine processing, translation errors cannot be ruled out.
