MRes + PhD : Show, Attend and Tell: Deep Learning for Medical Image Captioning
Department: Medical Physics and Bioengineering
Subsection: Translational Imaging Group (TIG) within the Centre for Medical Imaging Computing (CMIC)
Duration: 4 years
Stipend: £16,851 per annum tax-free, full fees paid.
Closing Date for Applications: ongoing – TBC
Neuroradiologists diagnose and characterise abnormalities in the brain using medical images. They interpret or “read” the acquired images and produce a report of their findings and impression or diagnosis. However, neuroradiological practice has come under severe pressure from two competing forces: the number of acquired brain images has increased on average by 10–15% per year, while the number of radiologists has remained static. A solution is thus desperately needed to optimise the efficiency of neuroradiological practice, to appropriately triage and prioritise image reads, and, in certain specific conditions, to fully remove the need for human radiological reads.
In computer vision, image captioning is the process of translating an image into an accurate textual description of its content, a process which is analogous to radiological image reading. General-purpose captioning algorithms have recently undergone a revolution with the advent of deep convolutional neural networks for object identification and recurrent neural networks for text synthesis. A captioning and image interpretation algorithm that can robustly analyse even a small subset of neuroradiological data would have an immense impact on day-to-day clinical care and on the long-term feasibility of the fast-growing field of neuroradiology.
We have recently obtained ethical approval and access to explore one of the largest clinical imaging databases in the world with associated neuroradiological reports (2M+ images), and acquired sufficient hardware (16*Nvidia Titan X Pascal and 2*Nvidia DGX1) to train state-of-the-art models. This PhD will explore the hypothesis that the scale and complexity of this data, together with the development of three novel deep-learning networks – “show” (image encoding), “attend” (spatial attention), and “tell” (captioning) – tailored for medical data will provide a clinically useful image understanding and captioning system that can truly change neuroradiological practice.
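To make the “attend” component concrete: in soft attention of the kind popularised by the “Show, Attend and Tell” paper, the decoder scores each spatial feature vector from the CNN encoder against its current hidden state and forms a weighted context vector. Below is a minimal NumPy sketch of one such attention step; all weight matrices, dimensions, and names are illustrative, not the project's actual model.

```python
import numpy as np

def soft_attention(features, hidden, W_f, W_h, w_a):
    """One step of "attend": score each spatial feature against the
    decoder hidden state, then build a weighted context vector.
    features: (L, D) annotation vectors from the CNN encoder ("show")
    hidden:   (H,)   current decoder state of the RNN ("tell")
    """
    # Additive (Bahdanau-style) scoring over the L spatial locations
    scores = np.tanh(features @ W_f + hidden @ W_h) @ w_a   # (L,)
    # Softmax: attention weights form a distribution over locations
    alpha = np.exp(scores - scores.max())
    alpha /= alpha.sum()
    # Context vector: expectation of the features under alpha
    context = alpha @ features                              # (D,)
    return context, alpha

# Toy dimensions: L=4 locations, D=3 feature dims, H=2 hidden dims
rng = np.random.default_rng(0)
features = rng.normal(size=(4, 3))
hidden = rng.normal(size=2)
W_f = rng.normal(size=(3, 5))
W_h = rng.normal(size=(2, 5))
w_a = rng.normal(size=5)

context, alpha = soft_attention(features, hidden, W_f, W_h, w_a)
print(alpha)  # non-negative weights summing to 1
```

At each decoding step the context vector is fed, together with the previous word, into the recurrent decoder that emits the next word of the report; the attention weights themselves provide a spatial map of where the model “looked”, which is especially valuable for clinical interpretability.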
In order to qualify, candidates must have UK citizenship or EU status and have lived in the UK for at least 3 years. The studentship comprises fees and a tax-free stipend of £16,851 per annum.
For more information about the UCL Translational Imaging Group, please click here
To apply for this post, please click here