
“Deep learning” algorithm can predict OCT measurements from colour fundus photographs

Research scientists in Switzerland and the United States have published a new study applying “deep learning” (DL) algorithms to colour fundus photographs (CFPs) to detect diabetic macular edema (DME), achieving an area under the curve of up to 0.97. While optical coherence tomography (OCT) is the gold standard for DME detection, equipment cost and limited access remain significant barriers in many locations and countries. Macular thickening is difficult to identify from fundus photographs alone; however, the researchers now report a deep learning algorithm that identifies macular thickening in these photographs. The researchers stated that the work is “the first time that DL has been shown to accurately accomplish such a challenging task in the field of ophthalmic imaging (i.e., reproducing three-dimensional clinical measurements from two-dimensional clinical images)”.

The research was designed to develop deep learning (DL) models for the automatic detection of optical coherence tomography (OCT) measures of diabetic macular thickening (MT) from colour fundus photographs (CFPs). A retrospective analysis was performed on 17,997 colour fundus photographs and their associated OCT measurements, collected from diabetic macular edema (DME) studies. According to the researchers, DL with a transfer-learning cascade was applied to CFPs to predict time-domain OCT (TD-OCT)–equivalent measures of MT, including central subfield thickness (CST) and central foveal thickness (CFT). MT was defined using two OCT cut-off points: 250 μm and 400 μm. A DL regression model was also developed to directly quantify the actual CFT and CST from CFPs.
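
The study does not publish its source code, but the general pattern of transfer learning for this kind of task can be sketched: a network pretrained on natural images is reused as a feature extractor, and a new head is trained to predict whether the OCT-measured thickness exceeds a cut-off. The sketch below is a minimal illustration assuming a PyTorch and torchvision setup with an ImageNet-pretrained ResNet-50 backbone; the backbone, image size, and all names are illustrative assumptions rather than the authors' implementation.

```python
# Illustrative sketch only: the paper does not publish code, so the backbone,
# image size, and every name below are assumptions, not the authors' method.
import torch
import torch.nn as nn
from torchvision import models, transforms

# Binary classification head: predict whether OCT-measured CST >= 250 um.
# The same pattern would apply to the 400 um cut-off or to CFT.
backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
for param in backbone.parameters():
    param.requires_grad = False              # transfer learning: freeze features
backbone.fc = nn.Linear(backbone.fc.in_features, 1)   # trainable new head

# Standard ImageNet preprocessing for the colour fundus photographs.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

criterion = nn.BCEWithLogitsLoss()           # swap for nn.MSELoss() and label
optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-4)  # with CST in um
                                             # to obtain the regression variant

def train_step(images, labels):
    """One optimisation step: images are preprocessed CFP tensors, labels are
    1.0 if CST >= 250 um on the paired TD-OCT scan, else 0.0."""
    optimizer.zero_grad()
    logits = backbone(images).squeeze(1)
    loss = criterion(logits, labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```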

In the results, the best DL model predicted CST ≥ 250 μm and CFT ≥ 250 μm with an area under the curve (AUC) of 0.97 (95% confidence interval [CI], 0.89–1.00) and 0.91 (95% CI, 0.76–0.99), respectively. To predict CST ≥ 400 μm and CFT ≥ 400 μm, the best DL model had an AUC of 0.94 (95% CI, 0.82–1.00) and 0.96 (95% CI, 0.88–1.00), respectively. The best deep convolutional neural network regression model for quantifying CST and CFT had an R² of 0.74 (95% CI, 0.49–0.91) and 0.54 (95% CI, 0.20–0.87), respectively. The performance of the DL models declined when the CFPs were of poor quality or contained laser scars. In the study report the researchers stated, “DL is capable of predicting key quantitative TD-OCT measurements related to MT from CFPs. The DL models presented here could enhance the efficiency of DME diagnosis in tele-ophthalmology programs, promoting better visual outcomes. Future research is needed to validate DL algorithms for MT in the real-world.”
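
The reported metrics, AUC for the thresholded predictions and R² for the regression model, each with a 95% confidence interval, can be computed on any prediction set with standard tools. The sketch below uses scikit-learn and a simple percentile bootstrap; the data are random placeholders rather than the study's results, and the bootstrap settings are assumptions.

```python
# Illustrative evaluation sketch: AUC and R^2 with bootstrap confidence
# intervals, analogous to the metrics reported in the study. All data here
# are placeholders, not the study's results.
import numpy as np
from sklearn.metrics import roc_auc_score, r2_score

rng = np.random.default_rng(0)

def bootstrap_ci(metric_fn, y_true, y_pred, n_boot=2000, alpha=0.05):
    """Point estimate plus percentile-bootstrap CI for a metric (AUC, R^2)."""
    scores = []
    n = len(y_true)
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)
        if metric_fn is roc_auc_score and len(np.unique(y_true[idx])) < 2:
            continue                         # skip resamples with a single class
        scores.append(metric_fn(y_true[idx], y_pred[idx]))
    lo, hi = np.percentile(scores, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return metric_fn(y_true, y_pred), (lo, hi)

# Classification: did the OCT-measured CST exceed 250 um? (placeholder data)
y_cls = rng.integers(0, 2, 200)
p_cls = np.clip(y_cls * 0.6 + rng.normal(0.3, 0.2, 200), 0, 1)
auc, auc_ci = bootstrap_ci(roc_auc_score, y_cls, p_cls)

# Regression: DL-predicted CST vs. OCT-measured CST in micrometres (placeholder)
y_reg = rng.normal(300, 80, 200)
p_reg = y_reg + rng.normal(0, 40, 200)
r2, r2_ci = bootstrap_ci(r2_score, y_reg, p_reg)

print(f"AUC = {auc:.2f} (95% CI {auc_ci[0]:.2f}-{auc_ci[1]:.2f})")
print(f"R^2 = {r2:.2f} (95% CI {r2_ci[0]:.2f}-{r2_ci[1]:.2f})")
```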