Oncologist in white coat talking to a patient, with a network of nodes in the foreground to emphasise AI

Should we use AI to help cancer patients?

One bottle of water per question on ChatGPT is about what it takes to keep this immensely popular artificial thinker's head cool. And then we're only talking about an innocently generated email or, let's be honest, a grammatical check. But imagine using ChatGPT to help patients with cancer decide on treatments. How many bottles of water would that take? How far can we go to improve our lives with artificial intelligence?

To operate or not to operate? To do chemotherapy or not? AI expert Johan Kwisthout believes humans should ultimately make the most difficult decisions in oncology. With this perspective, he explores how we can safely use AI for cancer patients to enhance human decision-making while remaining environmentally friendly. In his PersOn project, Kwisthout is developing an artificial intelligence system that helps doctors and cancer patients navigate challenging treatment options. 'Artificial intelligence should never make decisions for patients or doctors; rather, it should provide better information about the potential consequences of our choices. This way, we as human beings can make more informed decisions,' Kwisthout explains.

Professor Johan Kwisthout

Artificial intelligence should never make decisions for patients or doctors; rather, it should provide better information about the potential consequences of our choices. This way, we as human beings can make more informed decisions

Black box and white box models 

What if such a system consumes so much energy that it harms the climate? Kwisthout takes that into account as well. He uses so-called white-box models for his AI systems: the knowledge such a system uses is carefully entered beforehand by humans, which costs far less energy. And because its expertise stays within those boundaries, we always know where it comes from. That is essential in the medical world. 'ChatGPT, on the other hand, is the epitome of a black-box model, scouring the web globally in search of an answer. That costs tons of energy, and we don't even know exactly how a conclusion is reached,' says Kwisthout. 'Imagine we do this during a treatment trial and the AI says, "operate". As a doctor, do you think, "OK, if you say so"? As a patient, do you feel stronger and empowered?'
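To illustrate the contrast in the abstract, here is a minimal sketch of what makes a model 'white-box' in this sense: every piece of knowledge is an explicit, human-entered entry, so any answer can be traced back to its source. The table, the patient groups, and all numbers below are invented purely for illustration; they are not from the PersOn project or any medical source.

```python
# Minimal illustrative white-box model: a hand-specified probability table.
# All entries and numbers are invented for illustration, not real medical data.
KNOWLEDGE = {
    # (treatment, patient_group) -> estimated probability of remission
    ("surgery", "early_stage"): 0.90,
    ("surgery", "late_stage"): 0.55,
    ("chemotherapy", "early_stage"): 0.80,
    ("chemotherapy", "late_stage"): 0.60,
}

def remission_probability(treatment: str, group: str) -> float:
    """Look up the human-specified estimate. Every answer is traceable
    to one explicit table entry -- unlike a black-box model, where the
    path from input to conclusion is opaque."""
    return KNOWLEDGE[(treatment, group)]

print(remission_probability("surgery", "early_stage"))  # 0.9
```

The point of the sketch is only that the knowledge is bounded and inspectable: if an output looks wrong, a human can point to the exact entry responsible.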

Our AI model provides the space for a better and clearer conversation between doctor and patient

Personal considerations

This is precisely why the AI systems of Kwisthout's team will never recommend the one best (follow-up) treatment for a cancer patient. Instead, the artificial intelligence provides statistical information about the chances of remission or complications and the expected life span after treatment. 'And how heavily that weighs is different for each person,' says Kwisthout. One person may find a 90% chance of success with a possible complication acceptable; another may want to keep looking for alternative treatments. 'Our AI model provides the space for a better and clearer conversation between doctor and patient.'
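The idea of informing rather than recommending can be sketched as follows. This is not the PersOn system; the treatment names and all statistics are invented for illustration. The model reports several outcome figures per option and deliberately returns no ranking, leaving the weighing to patient and doctor.

```python
# Illustrative only: invented outcome statistics per treatment option.
OPTIONS = {
    "surgery":      {"remission": 0.90, "complication": 0.15, "expected_years": 8.0},
    "chemotherapy": {"remission": 0.75, "complication": 0.05, "expected_years": 7.0},
}

def describe_options(options):
    """Return a neutral summary per option -- no recommendation and no
    ranking, just the figures the patient and doctor weigh themselves."""
    lines = []
    for name, stats in options.items():
        lines.append(
            f"{name}: {stats['remission']:.0%} chance of remission, "
            f"{stats['complication']:.0%} risk of complications, "
            f"~{stats['expected_years']} expected life years"
        )
    return lines

for line in describe_options(OPTIONS):
    print(line)
```

The design choice is in what the function does not do: it never sorts or scores the options, because how heavily each figure weighs differs per person.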

Transparency is the key to success

Currently, Kwisthout is working on the legal side of things. A white-box AI model in oncology must always comply with medical regulations. But medical knowledge is constantly evolving. 'Does an AI model have to go through the entire legal and ethical approval process with every minor change?', Kwisthout wonders. We shouldn't always have to wait for legal approval to put new knowledge into the model; after all, someone's life is at stake. He is therefore investigating how to determine whether a change in the model affects its reliability or operation. He does this by working with his team to transparently show which parts of the model have been modified. 'This will make the use of AI in practice faster, more flexible and legally workable,' says Kwisthout.

Sympathetic AI

Ultimately, Kwisthout wants to help people make better choices with a model that is also honest about what it does not know. The trick, then, is to present all the information in a sustainable way that is understandable and transparent for patients, doctors, and lawyers alike. That is ultimately what makes artificial intelligence sympathetic and helps us make better decisions.

Theme
Artificial intelligence (AI), Health & Healthcare