
YSM Issue 96.3


FOCUS: Artificial Intelligence

DEEP LEARNING: An Unexpected Tool To Fight Heart Valve Disease

BY SOPHIA BURICK

PHOTOGRAPH COURTESY OF CAROLINE BUCKY

Severe aortic stenosis (AS) is a common form of valvular heart disease in which the aortic valve becomes unusually narrow, affecting five percent of people over the age of sixty-five. Early diagnosis is essential to successful intervention. Usually, AS is detected through Doppler echocardiography, or ultrasound imaging of the heart. However, Doppler echocardiography requires access to specialized equipment as well as professionals trained to operate it and interpret the results. This mismatch between the large population at risk for AS and the limited resources available for its diagnosis makes early detection difficult, negatively impacting patient outcomes.

Researchers at the Cardiovascular Data Science (CarDS) Lab at Yale recently published a creative new approach in the European Heart Journal to making AS diagnostic tools more accessible: combining deep learning with simple ultrasound scans. Handheld devices that use ultrasound imaging to visualize the heart are much more widely available than the equipment needed for Doppler echocardiography, but the images and videos these scans produce are difficult to use for diagnosing AS on their own. “Patients are often not seen by a cardiologist until they are very late in their disease stage,” Evangelos Oikonomou, a postdoctoral fellow in the CarDS Lab, said. “There’s a big opportunity to diagnose the disease earlier in this patient population.”

The researchers at the CarDS Lab developed a novel deep learning model capable of using 2D echocardiograms, which are produced by simple ultrasound imaging, to identify AS without specialized Doppler equipment. Deep learning is a kind of machine learning that employs computer networks built to resemble human neural networks; in short, it teaches computers to learn the way humans do. “You train the algorithm by showing it multiple different images and giving feedback to the algorithm as to whether its prediction [about what the image is] is correct or wrong,” Oikonomou said. “What the algorithm does is every time it gets [its prediction] wrong, it tries to adjust its approach and learn something from its errors.” These deep learning algorithms are often better at perceiving patterns than humans are, allowing them to reach conclusions that might not be apparent to a doctor interpreting ultrasound images. “That’s where the performance of an AI algorithm may actually exceed that of a human operator,” Oikonomou said.
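The loop Oikonomou describes, predict, check the prediction against the label, then adjust, can be sketched with a toy three-class classifier. Everything below is an illustrative placeholder: synthetic numbers stand in for echocardiogram features, and a simple linear softmax model stands in for the lab's deep network, which this is not.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for echocardiogram features: 60 samples, 8 features,
# three classes (0 = no AS, 1 = non-severe AS, 2 = severe AS).
X = rng.normal(size=(60, 8))
true_W = rng.normal(size=(8, 3))
y = np.argmax(X @ true_W + rng.normal(scale=0.1, size=(60, 3)), axis=1)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def cross_entropy(probs, labels):
    # Average penalty for assigning low probability to the true class.
    return -np.log(probs[np.arange(len(labels)), labels]).mean()

W = np.zeros((8, 3))
losses = []
for step in range(200):
    probs = softmax(X @ W)                  # predict
    losses.append(cross_entropy(probs, y))  # compare to the label (the "feedback")
    onehot = np.eye(3)[y]
    grad = X.T @ (probs - onehot) / len(y)  # how wrong, and in which direction
    W -= 0.5 * grad                         # adjust, then try again

print(f"loss: {losses[0]:.3f} -> {losses[-1]:.3f}")
```

Each pass makes the model slightly less wrong on the training examples, which is the "learn something from its errors" behavior in miniature; real video models differ mainly in scale, not in this basic loop.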

To develop their algorithm, the researchers needed to train it to recognize severe AS. To do this, they sourced a massive number of 2D cardiac ultrasound videos from patients in the Yale New Haven Health system with no AS, non-severe AS, and severe AS. Using this dataset, the algorithm learned to identify specific features in the videos associated with each class of AS diagnosis. Once the algorithm had learned what to look for, the researchers had to validate that it was truly capable of differentiating non-AS, non-severe AS, and severe AS ultrasound videos. To demonstrate the algorithm’s success, they had it sort new datasets from different patients in New England and California. The deep learning algorithm proved highly accurate in sorting the videos across all patient datasets.
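The external validation described above, fitting on one patient population and then testing on a separate, geographically distinct one, can be sketched generically. The cohorts below are synthetic stand-ins; the `shift` parameter loosely mimics a different patient population, and nothing here reflects the study's real data or model.

```python
import numpy as np

rng = np.random.default_rng(1)
true_W = rng.normal(size=(8, 3))  # shared ground truth behind the labels

def make_cohort(n, shift=0.0):
    # A mean shift in the features mimics a different patient population.
    X = rng.normal(loc=shift, size=(n, 8))
    y = np.argmax(X @ true_W, axis=1)  # 0 = no AS, 1 = non-severe, 2 = severe
    return X, y

X_dev, y_dev = make_cohort(200)             # development cohort (one health system)
X_ext, y_ext = make_cohort(100, shift=0.3)  # external cohort (different region)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# Fit a simple softmax classifier on the development cohort only.
W = np.zeros((8, 3))
for _ in range(300):
    probs = softmax(X_dev @ W)
    W -= 0.5 * X_dev.T @ (probs - np.eye(3)[y_dev]) / len(y_dev)

# The key step: score the frozen model on patients it has never seen.
ext_acc = (np.argmax(X_ext @ W, axis=1) == y_ext).mean()
print(f"external-cohort accuracy: {ext_acc:.2f}")
```

The design point is that the external cohort is never touched during training; accuracy holding up there, rather than only on the development data, is what justifies the claim that the model generalizes.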

The researchers’ vision is that their algorithm can be used by any medical provider with a simple ultrasound scanner to catch AS early. This would remove existing barriers to AS diagnosis, such as specialized Doppler echocardiography equipment and the training required to accurately interpret its results, making AS diagnosis more accessible to patients and simpler for providers. If the algorithm is widely used, it could be a major step forward for successful AS intervention. “Hopefully, we can make this as cost-efficient as possible,” Oikonomou said. “It’s very easy to do; it takes two or three minutes, and people can probably be screened once in their lifetime.”

Beyond its immediate impact in improving outcomes for AS patients, this deep learning algorithm reveals the broader potential of applying cutting-edge computer science to healthcare. “I think this could be applied to other things such as hypertrophic cardiomyopathy, which is a genetic heart condition that is very common but most people don’t ever get diagnosed,” Oikonomou said.

With increasingly high patient burdens and medical staff stretched thin, it’s inevitable that some patients will slip through the cracks of the healthcare system. Machine and deep learning models could be used across a variety of applications to identify diagnoses that are sometimes missed by medical staff. The CarDS Lab’s algorithm is proof of the great positive impact that computer science and artificial intelligence stand to have on patient care and outcomes. ■

Yale Scientific Magazine, September 2023 · www.yalescientific.org
