Professor Budde, how suitable is a system like ChatGPT for medical questions?
ChatGPT is a transformer language model. In principle, such systems can also provide information on medical questions, but only if they have been developed or adapted for that purpose. In the specific case of ChatGPT, it was reported that it had passed a medical test. That does not mean, however, that it can be used in a medical setting. Above all, this language model can do one thing: formulate very fluently. That is a significant advance in the field of artificial intelligence and deserves recognition. But with its polished formulations, such a language model can also create the illusion of knowledge and intelligence, even when it is merely hallucinating. It then becomes dangerous if someone relies on it for medical questions. So you always have to keep the limitations of such systems in mind.
How useful could transformer language models be for medical applications?
Transformer language models offer interesting possibilities. For example, they could help draft doctors' letters and thus relieve physicians of routine tasks. This would be an application that does not involve the patient directly and therefore faces fewer legal hurdles. Even so, it would be a challenge for a language model. It requires a German text corpus that can handle medical terminology and interpret abbreviations correctly, since these can have different meanings depending on the department: in cardiology, for example, HWI stands for a posterior wall infarction, while in nephrology it refers to a urinary tract infection. Another obstacle is that medical diagnoses are often formulated in the negative. Examinations are meant to rule out suspicions, such as a heart attack. The language model must recognise these subtleties and not simply associate the term "heart attack" with a patient who has not had one.
Are there already corresponding approaches?
Yes, several institutions are working on this, including in Germany. Several members of the Learning Systems Platform, among them the German Research Centre for Artificial Intelligence and us at Charité, have been collaborating on this for five years. The work involves a German text corpus that can serve as the basis for doctors' letters. Training such a system, however, always requires large amounts of data. The necessary data exists, but it still needs to be collated, and data protection regulations require it to be anonymised. That may sound like a minor point, but in a medical context you have to be aware that anonymisation can also mean a loss of content. For training an AI, that can be problematic in some circumstances.
How do new solutions that utilise artificial intelligence make it into the healthcare sector?
Use cases need to be created so that the quality of the results can be assessed. On that basis, certification as a medical device can follow, because a chatbot in contact with patients would count as one. In addition, the context in which a system can and may be used must be clear to the specialists working with it. A kind of package insert would be useful here, stating what the system can do safely, where its weaknesses lie and what it cannot do at all. An AI trained on large amounts of data from US veterans, for example, has processed data almost exclusively from men; it will probably not be able to say much that is reliable about women. A doctor who wants to use such an assistance system needs to know this. Liability questions also need to be clarified.
There are hardly any approved AI systems yet. But AI already plays a major role in research, for example at university hospitals.
What would be desirable? How could AI be used sensibly in medicine?
AI solutions that increase efficiency via language models are welcome, whether for doctors' letters, answering patient questions, researching specialist literature or converting text to speech so that, for example, a visually impaired patient can access certain information. Translations would also be helpful, making it easier to communicate with patients who speak a different language from the doctor.
What could AI look like in connection with medical devices in the future?
A medical device today issues alarms when required. Some of these are aimed at the technician, when something is going wrong inside the device; others concern measured values relating to a patient's condition. In both cases, an integrated AI could suggest how best to rectify a fault, for example by comparing and interpreting similarly worded error messages. This is already possible with rule-based solutions, but further development might be possible. Take a ventilator as an example of the vision of an intelligent system: it could suggest how the therapy parameters should be set, or even generate reports. But, as already mentioned, such a system must be 100 per cent fact-based and ideally even point to specific passages in the specialist literature. Under no circumstances should it hallucinate.
AI and ChatGPT are currently in a hype phase. How long will it be before AI is as commonplace in medicine as the smartphone is today?
It will take a few more years. ChatGPT is just the beginning, a first successful step in the development of transformer language models. I am sure the technology will arrive in medicine, and that it will not take a decade. However, safety must be guaranteed. Then nothing will stand in the way of its use in Germany.
The above texts, or parts thereof, were automatically translated from the original language text using a translation system (DeepL API).
Despite careful machine processing, translation errors cannot be ruled out.