
Genentech (Roche) has reported that a “deep learning” computational tool may be able to derive OCT-equivalent predictive analytics from colour fundus photographs

A research report in the journal Investigative Ophthalmology & Visual Science (IOVS) describes a computational process for developing a new diabetic macular edema (DME) screening tool. The study assesses whether “deep learning” (DL) can automatically predict OCT-equivalent macular thickness measures from colour fundus photographs (CFPs). In a retrospective analysis of almost 18,000 CFPs and associated OCT measurements, the new computational model maps 2-D fundus photographs to measurements derived from 3-D OCT images, introducing a potentially cost-effective and rapid diagnostic approach for DME. The paper states that the work “is the first time that DL has been shown to accurately accomplish such a challenging task in the field of ophthalmic imaging (i.e., reproducing three-dimensional clinical measurements from two-dimensional clinical images). This finding, together with what has been shown in previous studies, underlines the value of DL in enhancing ophthalmic disease surveillance through an automated approach.”


Diabetic macular edema is diagnosed with optical coherence tomography (OCT), which quantifies macular thickness in the retina and thereby the edema. OCT is the gold standard, but the equipment is expensive and is not always available in remote regions or low-income settings. Before OCT, clinicians or trained graders assessed DME by examining colour fundus photographs for signs such as hard exudates. However, diabetes is an epidemic: an estimated 425 million people worldwide had diabetes in 2017, a number projected to reach 629 million by 2045. Complications from diabetic retinopathy are growing accordingly, and screening fundus photographs by hand would require an army of clinicians and graders. This is a numbers game.

Labour-intensive two-dimensional fundus photographs can identify hard exudates so that patients can be followed up, reviewed and treated, while more expensive three-dimensional OCT imaging provides the most valuable data: macular thickness. In an ideal scenario, OCT-grade quantitative macular thickness data would be obtained from the cheaper fundus photographs themselves. One way to do this is a “big data” approach that correlates fundus photographs with OCT measurements. If even a cheap smartphone camera can capture a fundus photograph, then collecting enough paired data allows the gold-standard macular thickness measurement to be inferred from the photograph alone. Genentech’s computing power and large holdings of biological images have recently made it possible to bring these two datasets together. Once the training dataset becomes sufficiently large the process works, and the more data it brings together, the more accurate the model becomes. This work is in progress within Genentech, and so far the tool achieves almost 97% accuracy.
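To make the idea of correlating the two datasets concrete, here is a toy sketch using synthetic data, not the paper’s actual method or data: it fits a least-squares line mapping a single hand-crafted fundus-photo feature (a hypothetical “exudate fraction”) to an OCT macular-thickness value. The paper’s DL model replaces this hand-crafted feature with features learned by a deep network from the raw photograph, but the underlying principle, learning a mapping from 2-D image data to a 3-D-derived measurement from many paired examples, is the same.

```python
# Toy illustration with synthetic data (assumed relationship, not real clinical
# values): predict OCT macular thickness (in microns) from a fundus-photo
# feature via ordinary least squares.
import random

random.seed(0)

# Synthetic paired dataset: (exudate fraction in the photo, OCT thickness).
# We assume thickness ~ 250 + 400 * exudate_fraction + noise for illustration.
pairs = []
for _ in range(1000):
    f = random.random() * 0.3                 # fraction of pixels flagged as exudate
    t = 250 + 400 * f + random.gauss(0, 10)   # noisy "OCT" thickness in microns
    pairs.append((f, t))

# Ordinary least squares for t = a*f + b.
n = len(pairs)
sx = sum(f for f, _ in pairs)
sy = sum(t for _, t in pairs)
sxx = sum(f * f for f, _ in pairs)
sxy = sum(f * t for f, t in pairs)
a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
b = (sy - a * sx) / n

# With 1,000 paired examples the fit recovers the generating relationship;
# as in the article, more paired data yields a more accurate model.
print(a, b)  # close to the generating values 400 and 250
```

The point of the sketch is the data requirement, not the model: the fit only recovers the relationship once enough paired photo/OCT examples are available, which is why the article stresses the size of the training dataset.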


In concluding their paper, the researchers stated that “DL (deep learning) is capable of predicting key quantitative TD (time domain)-OCT measurements related to MT (macular thickness) from CFPs (colour fundus photographs). The DL models presented here could enhance the efficiency of DME diagnosis in tele-ophthalmology programs, promoting better visual outcomes. Future research is needed to validate DL algorithms for MT in the real-world.”