A simple eye exam may be able to tell you if you are at risk of a heart attack.

The researchers found that combining information about the pattern of blood vessels in the retina with traditional clinical factors enabled them to better identify participants' risk of an attack.

In an abstract to be presented at the European Society of Human Genetics annual conference in Vienna on Monday, they detail how they used data from UK Biobank, which contains the medical and lifestyle records of 500,000 people.

They combined this retinal information with traditional risk factors such as age, sex, systolic blood pressure, body mass index and smoking status, studying people in the database who had experienced a heart attack.

"The model we developed was able to better classify participants with low or high MI [myocardial infarction] risk in UK Biobank," the researchers said. "If we added a score related to the genetic propensity of developing MI, our model would be even better."

They also found a shared genetic basis between retinal vascular patterns and heart attack risk.

The average age at which people suffer a heart attack is 60, and the team found that their model achieved its best performance more than five years before one occurred. In the future, they hope, a simple eye exam could identify people at risk.

Villaplana-Velasco said it would be appropriate to calculate an individualised MI risk for those over 50, as it would allow doctors to suggest ways to reduce risk, such as giving up smoking.

The researchers think it is possible that every condition has a unique retinal variation profile, which could be useful for identifying propensity to other diseases. Since a sex-specific model for heart attack yields better risk classification, they say the analysis should be carried out separately for males and females.

Prof Sir Nilesh Samani, medical director of the British Heart Foundation, said it would be necessary to understand the feasibility of this approach and to determine how such scans could be incorporated into routine clinical practice.


Dr Ware cautioned that the research had not been peer-reviewed and that the abstract contained limited detail, but said that approaches like this, using computer vision and machine learning, appeared promising.