- Start date: 1 September 2021
- End date: 31 August 2022
- Funder: Leverhulme Trust and Royal Academy of Engineering
- Value: £57,500
- Primary investigator: Dr Zeike Taylor
Cancer remains one of the biggest killers in the UK, causing around 165,000 deaths per annum, or 28% of all deaths. One in two people will develop cancer in their lifetime.
Radiotherapy (RT) has been a mainstay of cancer treatment for more than 60 years. In traditional RT, however, the way the target (tumour) and its margins are defined limits efficacy: for a large fraction of patients, nearby organs-at-risk are over-dosed, causing toxicity, or the tumour itself is under-dosed. Internal patient motion is a major contributor to margin size. There is therefore a critical need to estimate the true 4D motion of internal anatomy during RT delivery, and in turn to realise truly motion-adaptive treatment.
DIADEM-ART proposes fundamentally new approaches to image-driven motion modelling for this purpose, based on recent advances in so-called domain-aware deep learning, which integrates powerful AI techniques with well-established physical and physiological principles. The research programme comprises two methodological development streams, which will provide pilot results for a third stream aimed at securing further funding to progress the work.
Stream 1 will develop new physics-informed neural networks for the direct solution of large-deformation hyperelasticity problems, providing an essential formalism for incorporating solid mechanics into deep learning approaches. The technical difficulties of handling large deformations and complex materials will be addressed, along with the more challenging case of contact behaviour.
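To give a flavour of the solid-mechanics formalism involved, the sketch below evaluates a compressible Neo-Hookean strain energy density, a standard constitutive model for large-deformation hyperelasticity. A physics-informed network would typically minimise a loss built from such an energy (or its residual). This is an illustrative sketch only; the function name and material parameters (Lamé-type constants `mu`, `lam`) are assumptions, not part of the project's published methods.

```python
import numpy as np

def neo_hookean_energy(F, mu=1.0, lam=1.0):
    """Strain energy density W(F) for a compressible Neo-Hookean solid.

    F   : (3, 3) deformation gradient.
    W   = mu/2 (I1 - 3) - mu ln J + lam/2 (ln J)^2,
    with J = det(F) (volume change) and I1 = tr(F^T F) (first invariant).
    """
    J = np.linalg.det(F)
    I1 = np.trace(F.T @ F)
    return 0.5 * mu * (I1 - 3.0) - mu * np.log(J) + 0.5 * lam * np.log(J) ** 2

# The undeformed state (F = identity) carries zero energy; any
# deformation, e.g. a 10% uniaxial stretch, raises it.
W_rest = neo_hookean_energy(np.eye(3))
W_stretch = neo_hookean_energy(np.diag([1.1, 1.0, 1.0]))
```

In a physics-informed setting, a network predicting displacements would be trained so that the total potential energy assembled from terms like `W(F)`, plus boundary and loading terms, is minimised, rather than fitting labelled data alone.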
Stream 2 will incorporate this formalism into a new Graph Convolutional Network model that infers organ deformations from their appearance in in-treatment images, constrained by the deformation physics; that is, it seeks to expose the hidden physical deformation field from sparse image-based observations. These methods will be tested on in silico phantoms and benchmark cases, in which ground-truth deformations are known and for which training data are readily generated.
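A minimal sketch of the kind of building block a Graph Convolutional Network uses is given below: one symmetrically normalised graph-convolution layer operating on node features of a mesh-like graph (here a toy 4-node chain standing in for an organ surface mesh). The layer form, names, and toy data are assumptions for illustration, not the project's actual architecture.

```python
import numpy as np

def gcn_layer(A, H, W):
    """One graph-convolution layer with ReLU activation.

    A : (n, n) adjacency matrix of the mesh graph.
    H : (n, f) node feature matrix.
    W : (f, f_out) learnable weight matrix.
    Computes ReLU( D^{-1/2} (A + I) D^{-1/2} H W ), i.e. each node's
    features are averaged with its neighbours' before the linear map.
    """
    A_hat = A + np.eye(A.shape[0])            # add self-loops
    d = A_hat.sum(axis=1)                      # node degrees
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))     # symmetric normalisation
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W, 0.0)

# Toy 4-node chain graph with 2-d node features and identity weights.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
H = np.random.default_rng(0).normal(size=(4, 2))
out = gcn_layer(A, H, np.eye(2))   # (4, 2) smoothed node features
```

Stacking such layers lets information propagate across the mesh, so sparse image-based observations at some nodes can constrain the inferred deformation at all nodes.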
Stream 3 will seek new funding to generalise these methods to patient-specific scenarios, which raise additional challenges, and to create adaptive RT technologies around them.