Knowing your risk of a heart attack or a stroke could soon become as simple as an eye test: researchers at Google and its health-tech subsidiary Verily have found that Artificial Intelligence (AI) and Machine Learning (ML) can help identify signals of heart disease in retinal images.
The algorithm the researchers produced can even predict a patient's risk of a future major cardiovascular event about as well as current methods, said Michael McConnell of Verily.
Cardiovascular disease is the leading cause of death globally, and researchers know that lifestyle factors such as exercise and diet, in combination with genetic factors, age, ethnicity, and sex, all contribute to it.
However, exactly how these factors combine in a particular individual is not known, so for some patients it becomes essential to perform sophisticated tests, such as coronary calcium CT scans, to better stratify the risk of a heart attack, a stroke, or other cardiovascular events.
In this study, using deep learning algorithms trained on data from 284,335 patients, the researchers were able to predict cardiovascular risk factors from retinal images with surprisingly high accuracy, validated on two independent datasets of 12,026 and 999 patients.
The algorithm could distinguish the retinal images of a smoker from that of a non-smoker 71 percent of the time, the study found.
“In addition, while doctors can typically distinguish between the retinal images of patients with severe high blood pressure and normal patients, our algorithm could go further to predict the systolic blood pressure within 11 mmHg on average for patients overall, including those with and without high blood pressure,” study co-author Lily Peng, Product Manager, Google Brain Team, said.
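The "within 11 mmHg on average" figure describes a mean absolute error: the average size, in mmHg, of the gap between the predicted and measured systolic blood pressure across patients. A minimal sketch of that calculation, using made-up illustrative values rather than the study's data:

```python
import numpy as np

# Hypothetical predicted vs. measured systolic blood pressures in mmHg.
# These five values are illustrative only, not drawn from the study.
predicted = np.array([128.0, 142.0, 115.0, 160.0, 135.0])
measured = np.array([120.0, 150.0, 118.0, 148.0, 140.0])

# Mean absolute error: the average per-patient prediction miss.
mae = np.mean(np.abs(predicted - measured))
print(mae)  # → 7.2
```

An algorithm performing at the level the study reports would show a value of roughly 11 when this calculation is run over its predictions for the full validation set.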
The study isn’t without limitations, given that it only examined eye images with a 45-degree field of view. More research is needed to determine whether the model must be adjusted for larger or smaller images, and a larger dataset than the one the researchers used would be better suited to deep learning. In other words, the approach is not yet ready for clinical testing, but it is a promising start for non-invasive evaluation of cardiovascular health.