The Food and Drug Administration has authorized more than 1,000 AI-enabled medical devices as of mid-2025.
Sarah Silbiger via Getty Images
Note from the editor
Artificial intelligence tools have become more prominent in the medical device industry over the past several years. The Food and Drug Administration has authorized more than 1,000 AI-enabled medical devices as of mid-2025.
The technology has transformed what it means to be a medical device, and it is advancing the diagnostics space into new areas.
In June 2025, the FDA authorized the first AI tool to predict patients’ five-year breast cancer risk from a routine mammogram. While AI tools to detect breast cancer have been used for decades, the tool made by Clairity is the first to predict future risk.
AI is also being used to detect health conditions like sepsis and identify heart health issues, such as vessel narrowing and plaque buildup.
As AI takes off in the medtech sector, new trends are emerging, such as the use of foundation models. The technology has attracted medical device companies like GE HealthCare and Aidoc, as well as tech giants like Nvidia and Amazon Web Services.
AI’s growing presence in the medtech industry has also raised questions around effective deployment, how to assess tools for safety and bias and how to regulate the new technology.
Read on for a collection of stories about how AI is changing diagnostics and the overall medtech space.
The Food and Drug Administration has authorized more medical devices that incorporate artificial intelligence. Keep track of the latest developments in this database.
By: Elise Reuter and Jasmine Ye Han • Published May 20, 2025 • Updated Aug. 18, 2025
Artificial intelligence has become a buzzword in the medtech industry as more medical devices incorporate the technology. Device companies are using AI to speed up imaging, triage CT scans for serious conditions and identify potential heart arrhythmias.
The Food and Drug Administration has also fielded more submissions in recent years; the 235 AI devices authorized in 2024 were the most in the agency’s history. As of the FDA’s latest data update in July 2025, the device center had authorized more than 1,200 AI devices since 1995.
The majority of AI devices are in radiology, but the technology is becoming more prevalent in other specialties, such as cardiology and neurology.
Regulators face a challenge as generative AI, which can create text and images, and foundation models, large-scale models that can be used for numerous purposes, gain traction among developers.
MedTech Dive is tracking the FDA’s list of authorized AI technologies to understand which companies are developing the most AI devices and how those products are used.
Devices authorized by the FDA from 1995 to July 10, 2025, which include hardware or software features.
Methodology:
MedTech Dive downloaded the FDA’s list of AI/ML-enabled medical devices on July 31, 2025. The FDA last updated the database on July 10, 2025. The FDA compiled the list using product codes and device summaries. The list is not exhaustive, but it is intended to represent devices that incorporate AI/ML across medical disciplines, the agency said. The definition of a device includes hardware and software features. The FDA sometimes adds older devices to the list.
MedTech Dive collected information on each applicant and whether a company was acquired for its analysis. In cases where a company was acquired multiple times, the most recent parent company or majority owner was listed. One exception is Siemens Healthineers, which is majority-owned by Siemens. Parent companies were defined as firms that make a product, and private equity and investment firms were not listed.
MedTech Dive also collected classification information from each filing to provide a description of each device. In a few instances, the classification field is empty because the filing didn’t contain that information.
We made edits to reflect MedTech Dive’s editorial style.
Article top image credit: Courtesy of GE Healthcare
How three companies are using foundation models in radiology
Some firms, such as Aidoc, are working directly with the FDA. Others, such as HOPPR, are offering their foundation models for medtech companies to build their own AI tools.
By: Elise Reuter • Published Sept. 16, 2025
Medtech firms in radiology are introducing some of the first artificial intelligence devices based on foundation models, even as a regulatory pathway for the technology remains unclear.
Foundation models are a type of AI trained on large datasets that can provide multiple outputs, such as detecting several health conditions in a scan. Large medtech firms like GE HealthCare, technology firms like Nvidia and Amazon Web Services, startups, and radiology practices are introducing their own models.
The radiology sector has been an early adopter of the buzzy technology, with the promise of greater accuracy and faster model development, but experts said its benefits still have yet to be proven.
Khan Siddiqui, CEO and co-founder of the AI startup HOPPR, previously worked on healthcare AI and medical imaging at Microsoft. Siddiqui, a trained radiologist who founded HOPPR in 2019, noticed rising radiology workloads amid an ongoing shortage of clinicians.
Khan Siddiqui is CEO of HOPPR.
Permission granted by HOPPR
“My spouse is also a radiologist — we started seeing people burning out,” Siddiqui said.
Many radiology AI models have been one-off solutions, such as models to help flag hemorrhage in scans. However, that approach doesn’t always align with how patients move through a radiology practice.
“A patient comes in, and has a headache. A headache could be hemorrhage, could be cancer, could be nothing, could be trauma,” Siddiqui said, adding that covering all of the possible conditions would require buying many applications.
Siddiqui wanted to build a large, pre-trained model that could be used across multiple health conditions, patient demographics and scanner types. To accomplish that, he needed a lot of data, which meant building out protections around privacy, security and consent.
Building a foundation model is a “very long, very complex and very costly process,” Siddiqui said. “We’re talking about tens of millions of images.”
HOPPR trained a foundation model on a large dataset of chest X-rays, spanning hundreds of health conditions, Siddiqui said. But the company isn’t taking that model directly to the FDA. Instead, it’s offering it to other medtech companies that want to use HOPPR’s foundation model as the basis for their own fine-tuned models.
HOPPR’s data practices are an important part of that, because if a company wants to sell an AI model for commercial use, it must know where the data came from and have the rights to use it. A lot of research data restricts commercial use, and it’s difficult to know the source of images scraped from the internet.
“Because we source data directly from health systems and large radiology practices, we know exactly where the data came from and can provide that to the FDA as needed,” Siddiqui said.
Aidoc, like HOPPR, sees foundation models as a launching point for more specific radiology tools. The company started working on its own foundation model about two and a half years ago, after the launch of ChatGPT, CEO Elad Walach said.
“It was a massive effort all through the years, but we had a really big breakthrough about six months ago of the first working model,” Walach said. “It took us a lot of trial and error, a lot of resources spent on it.”
At the 2024 Radiological Society of North America conference, Aidoc debuted its CARE foundation model, a vision-language model built using CT and X-ray images, and supporting clinical information such as notes, labs and vitals.
"The way the FDA works today, it’s still disease by disease."
Elad Walach
CEO of Aidoc
Walach said Aidoc has since received FDA clearance for two derivative models based on the technology: a rib fracture triage tool and another to detect aortic dissection, a serious condition caused by a tear in the aorta.
“The way the FDA works today, it’s still disease by disease,” Walach said. “The accuracy … has to be super high to actually help physicians.”
Walach said the company’s CARE model is more accurate and comprehensive than its past AI tools. For example, when the technology was applied to abdominal CT scans, which can show dozens of diseases, “for each of them, it was better than every one of our prior models,” Walach said.
Like HOPPR’s Siddiqui, Walach also said the foundation model has reduced development time for derivative models. His goal is to cover more than 100 diseases with AI models in the next three years, “which was truly unimaginable a year ago.”
Aidoc also plans to open its foundation model to other developers in the future.
Nina Kottler, associate chief medical officer for clinical AI at Radiology Partners, presents Mosaic Clinical Technologies at the company’s practice leadership summit.
Permission granted by Radiology Partners
Radiology firm pilots foundation model
Radiology Partners, a national practice encompassing more than 3,400 sites, is taking a different approach. The company has made two different models that it is piloting with radiologists.
The first tool, called Mosaic Reporting, uses large language models and voice recognition to structure radiology reports and reduce time spent dictating notes. Radiology Partners has rolled out the model to about 316 radiologists so far, said Nina Kottler, associate chief medical officer for clinical artificial intelligence. Because the tool only uses language, it doesn’t fall under the FDA’s purview.
“A lot of people think of the technology, because it's so capable, that … someone could just buy it and plunk it in and maybe do a little training. It's actually far from the truth.”
Nina Kottler
Associate chief medical officer for clinical AI at Radiology Partners
The second model, Mosaic Drafting, is more complex. It combines a large language model and a large vision model to interpret images and pre-draft X-ray reports. A radiologist then reviews, edits and signs off on the findings.
Because the model interprets images, it must go through FDA review. Radiology Partners is testing the tool under an institutional review board, Kottler said, and so far 57 radiologists have access to it.
Feedback so far has been “unanimously positive,” Kottler said, while adding that the company is moving slowly with the rollout.
“A lot of people think of the technology, because it's so capable, that … someone could just buy it and plunk it in and maybe do a little training. It's actually far from the truth,” Kottler said. “There is a lot of work that is required for training and change management, and understanding how it fits in the workflow, and making sure you get feedback and continuous monitoring.”
Practices are still figuring out how to handle many of these aspects.
Kottler sees foundation models as the future of AI, adding that Radiology Partners hasn’t looked at a narrow AI solution for about a year.
“People, if they're worried about it, or they just hear about hallucinations and they're afraid, I would tell them … get involved,” Kottler said. “This is going to be a part of your future. And you, especially if you're a clinician or you're involved in clinical care, need to be the person that is driving it.”
Article top image credit: Courtesy of Aidoc
Clairity receives FDA OK for breast cancer risk prediction tool
The startup received de novo authorization for the AI tool, the first of its kind that analyzes mammogram images to predict breast cancer risk over five years.
By: Elise Reuter • Published June 3, 2025
The Food and Drug Administration has authorized the first artificial intelligence tool to predict patients’ five-year breast cancer risk from a routine mammogram.
Boston-based startup Clairity developed the tool, called Clairity Breast. The company plans to launch the risk assessment feature by the end of the year, according to the announcement.
“We're doing something radically different than what I've done my whole career in the area of computer-aided detection and diagnosis of breast cancer,” Connie Lehman, founder of Clairity and a diagnostic radiologist at Mass General Brigham, said in an interview.
AI has been used to detect breast cancer for decades; the first computer-aided detection system for breast cancer was approved in 1998, Lehman said. However, most of these tools are used to detect cancer, not predict future risk.
“What we’re doing here is doing something humans can’t do,” Lehman said. “It’s extracting subtle cues in the mammogram that the human eye can’t see, the human brain can’t process.”
Lehman, who has focused most of her career on better ways to detect breast cancer early, expects the new approach will “expand the net out to more women who truly are at high risk, and until now we haven’t been able to identify.”
Clairity Breast was developed using Hologic’s 2D screening mammography systems and is designed to be used as part of a patient’s routine mammogram.
The company was careful to train its AI models on a diverse group of patients, Lehman said, as historically, breast cancer risk prediction models were built on data from predominantly white women.
Clairity plans to launch the predictive tool starting with select centers and expanding access later in 2025, Lehman said. In 2024, the company hired Jeffrey Luber as CEO to help lead the market launch. Luber is the former CEO of testing firms Binx Health and Exact Sciences.
FDA grants de novo nod to AI tool for detecting sepsis
The CEO of Prenosis told MedTech Dive the company sees third-party validation as important, with the FDA having clarified that certain decision support tools should be regulated as medical devices.
By: Elise Reuter • Published April 4, 2024
The Food and Drug Administration granted de novo authorization to an AI tool that helps clinicians predict and diagnose sepsis, the first time the agency has authorized such a tool.
The Sepsis Immunoscore software, developed by Chicago-based Prenosis, gives clinicians a risk score for whether a patient has sepsis or will develop it within 24 hours. The score is based on 22 parameters, including respiratory rate, blood pressure and white blood cell count.
Hospitals already use early sepsis detection tools, despite lacking FDA review. The agency clarified in a final guidance in 2022 that clinical decision support software that provides a risk score or probability of a condition should be regulated as a medical device.
Because sepsis is such a big problem, software companies and hospitals have dedicated resources to building early detection tools that can alert clinicians when a patient may be at risk of developing sepsis. These tools have come under scrutiny after researchers published a 2021 study finding that a widely used tool developed by electronic health record company Epic Systems correctly predicted the risk of sepsis only 63% of the time before clinician intervention.
Prenosis CEO Bobby Reddy Jr. said the company decided to take a different approach, seeking FDA clearance before the agency set out that requirement. The company received de novo authorization, which will allow its software to serve as a predicate for other sepsis tools.
Reddy pointed to a culture clash in the space, with some companies breaking the rules by marketing their AI products without authorization.
“You have companies that are really used to regulation and rely on trust,” Reddy said in an interview with MedTech Dive, adding that some firms are excited about AI, but don’t have as much of an understanding or culture around regulations.
“We don’t think it’s right to have this out on the market without third-party validation,” the CEO added.
The Sepsis Immunoscore stratifies patients into four risk categories, tied to a patient’s risk of sepsis within 24 hours, in-hospital mortality, length of stay in the hospital and ICU admission within 24 hours. The tool is integrated into the electronic health record and displays a number from 0 to 100 to indicate the risk of sepsis. It also provides a list of all 22 parameters, ranked by how much each contributed to that patient’s risk.
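The general shape of such a display — a 0-to-100 score plus a ranked list of contributing parameters — can be sketched in a few lines. Everything below is illustrative: the parameter names, weights and "normal" values are hypothetical, and this is not Prenosis’ actual (proprietary) model.

```python
# Illustrative sketch of a 0-100 risk score built from weighted clinical
# parameters, with each parameter's contribution ranked for display.
# Weights and normal values are hypothetical, not Prenosis's model.

def sepsis_risk_score(patient):
    # Each entry: parameter -> (weight, illustrative normal value).
    model = {
        "respiratory_rate": (0.8, 16),
        "systolic_bp": (-0.5, 120),
        "wbc_count": (0.6, 7.5),
        "heart_rate": (0.4, 75),
    }
    contributions = {}
    for name, (weight, normal) in model.items():
        # Contribution grows with deviation from the normal value.
        contributions[name] = weight * (patient[name] - normal)
    raw = sum(contributions.values())
    # Squash into the 0-100 display range.
    score = max(0.0, min(100.0, 50 + raw))
    # Rank parameters by how strongly each moved the score.
    ranked = sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
    return round(score, 1), ranked

score, ranked = sepsis_risk_score(
    {"respiratory_rate": 28, "systolic_bp": 90, "wbc_count": 18.0, "heart_rate": 120}
)
```

The ranking step is what lets a clinician see at a glance which vitals or labs drove an elevated score, rather than being handed an unexplained number.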
“A lot of clinicians don’t trust AI products for multiple reasons,” Reddy said. “We are trying very hard to counter that skepticism by making a tool that was validated by the FDA first, and then the second piece is we’re not trying to replace the clinician.”
The tool was cleared based on a 750-person study across three sites. Prenosis ran sub-analyses to assess the tool’s performance across gender, race and health comorbidities.
Roche Diagnostics started collaborating with the company in 2020 to expand Prenosis’ core dataset and help it work toward FDA clearance.
Article top image credit: Permission granted by Prenosis
AI devices with no clinical validation tied to more recalls, study finds
Public companies, which accounted for about half of AI-enabled devices on the market, had a higher rate of recalls and a lower rate of clinical evidence, according to a JAMA study.
By: Elise Reuter • Published Sept. 2, 2025
Artificial intelligence-enabled medical devices with no clinical validation were more likely to be the subject of recalls, according to a study published in JAMA Health Forum.
The study looked at 950 AI medical devices authorized by the Food and Drug Administration through November 2024. Sixty of the devices were associated with 182 recall events.
The most common causes of recalls were diagnostic or measurement errors, followed by functionality delay or loss. About 43% of all recalls also took place within one year of FDA authorization.
Tinglong Dai, lead author of the study and a professor at the Johns Hopkins Carey Business School, said the “vast majority” of recalled devices had not undergone clinical trials. For the majority of AI-enabled devices, which went through the FDA’s 510(k) pathway, clinical studies are not required.
“Unfortunately, it's not required, and so people don't do it,” Dai said in an interview. “So, that's why we believe it is one of the most important drivers of the recalls.”
By comparison, the study found that devices that had gone through retrospective or prospective validation were subject to fewer recalls.
The study also found that publicly traded companies accounted for disproportionately more recall events; public company status was associated with nearly sixfold higher odds of a recall event. Publicly traded companies accounted for about 53% of AI-enabled devices on the market, but they were associated with more than 90% of recall events in the study and 98.7% of recalled units.
Public companies also had a lower rate of clinical validation than private companies: about 40% of recalled devices from private companies lacked validation, compared with about 78% of recalled devices from larger public companies and 97% from smaller public companies.
Dai was surprised by this finding, saying that “this fundamentally has something to do with the 510(k) clearance pathway.”
The results raise concerns about the devices’ post-market safety and reliability. Dai and his co-authors recommended requiring human testing or clinical trials before a device is authorized, or incentivizing companies to conduct ongoing studies and collect real-world performance data. The pre-market and postmarket data could also help manufacturers identify and reduce device malfunctions and errors.
Dai also suggested a process where clearances may be revoked after five years if a device has no public clinical data, postmarket validation or proof that it is effective in the real world.
In 2023, the FDA issued three draft guidances to improve the 510(k) program, including recommendations around choosing appropriate predicate devices and when clinical data may be needed to demonstrate substantial equivalence. However, the guidance documents still have not been finalized.
Researchers at the Johns Hopkins Carey Business School, the Johns Hopkins Bloomberg School of Public Health and Yale School of Medicine contributed to the study. It was funded by an award from Johns Hopkins University.
Article top image credit: Getty Images
More hospitals need to assess predictive models for accuracy, bias: Health Affairs
Just 61% of hospitals surveyed evaluated models for accuracy on their own data, and less than half evaluated the models for bias.
By: Elise Reuter • Published Jan. 10, 2025
About 65%, or 1,696, of U.S. hospitals surveyed reported using artificial intelligence or predictive models integrated with their electronic health record system, according to a study published in Health Affairs. The study used data from the 2023 American Hospital Association Annual Survey Information Technology Supplement.
Predictive models can include an AI or machine learning component. Hospitals reported using the models for care-related decisions, such as predicting health risks for patients. They were also used for administrative tasks including billing and scheduling.
More hospitals need to check models for accuracy and bias using their own data, the study’s authors wrote. Of hospitals that reported using predictive models, 61% evaluated the models for accuracy and just 44% assessed them for bias using local data.
The use of predictive models has come into focus as regulators develop new policies to address concerns about transparency and bias for AI in healthcare.
A final rule by the Assistant Secretary for Technology Policy, formerly known as the Office of the National Coordinator for Health Information Technology, went into effect in January 2025. The rule requires health IT companies to provide certain information about decision support tools, such as how the models were validated and approaches to reduce bias, and encompasses predictive tools that aren’t regulated as medical devices.
The study didn’t show how many predictive models used by hospitals were medical devices, but it described how the technology is being used and who developed it.
The tools were most commonly used to predict health trajectories or risks for inpatients (92%), identify high-risk outpatients for follow-up care (79%) and for scheduling (51%).
About 79% of hospitals that used predictive models said they came from their EHR developer, while 59% used other third-party tools and a little over half reported using self-developed models.
Hospitals that developed their own models were more likely to evaluate them locally for accuracy and bias, the researchers found.
Local evaluation is important because models trained on certain datasets might not be effective or useful in different settings. Algorithmic bias can make health inequities worse by adding barriers to care or underrepresenting patients.
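The local check the researchers describe can be sketched simply. Assuming a hospital holds labeled records with a demographic group field (the field names and toy model here are illustrative, not from the study), per-subgroup accuracy and the gap between groups might be computed like this:

```python
# Illustrative sketch of a local accuracy/bias check on a hospital's own
# data. Record fields ("features", "label", "group") are hypothetical.

def evaluate_locally(records, predict):
    """Return per-subgroup accuracy and the largest accuracy gap."""
    by_group = {}
    for r in records:
        hit = int(predict(r["features"]) == r["label"])
        hits, total = by_group.get(r["group"], (0, 0))
        by_group[r["group"]] = (hits + hit, total + 1)
    accuracy = {g: hits / total for g, (hits, total) in by_group.items()}
    gap = max(accuracy.values()) - min(accuracy.values())
    return accuracy, gap

# Toy example: a "model" that simply echoes its input as the prediction.
records = [
    {"features": 1, "label": 1, "group": "A"},
    {"features": 0, "label": 0, "group": "A"},
    {"features": 1, "label": 0, "group": "B"},
    {"features": 0, "label": 0, "group": "B"},
]
accuracy, gap = evaluate_locally(records, predict=lambda x: x)
```

A small gap on local data doesn’t prove a model is unbiased, but a large one is exactly the kind of signal the study’s authors argue hospitals should be surfacing before relying on a deployed model.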
“It is concerning that 56 percent of these hospitals did not report evaluating their deployed models for bias and therefore were not systematically protecting patients from possibly biased or unfair AI,” the researchers wrote.
They also cautioned against the idea that administrative tools are lower risk than clinical tools, pointing to studies that showed most patients are uncomfortable with models used to predict bill payment or missed appointments.
A limitation of the study is that excitement about the potential of AI and concerns about algorithmic bias could have led hospitals to overstate their use or evaluation of AI models.
The authors concluded that independent hospitals with fewer resources need support to ensure the use of accurate and unbiased AI, and that “the growth and broad impact of providers’ self-developed models that are currently outside the scope of federal regulation could warrant additional consideration.”
Article top image credit: Getty Images
Cleerly raises $106M to fund AI-driven heart disease tool
The company has generated evidence that shows its system is better than experts at assessing vessel narrowing, plaque volume and composition.
By: Nick Paul Taylor • Published Dec. 6, 2024
Cleerly has raised $106 million to scale the commercialization of a machine learning tool that generates a 3D model of patients' coronary arteries.
The artificial intelligence-driven tool uses coronary computed tomography angiography to noninvasively measure plaque, vessel narrowing and the likelihood of insufficient blood flow.
Cleerly has generated evidence that shows its system is better than experts at assessing vessel narrowing, plaque volume and composition, helping it to secure Medicare coverage for the technology.
CCTA enables physicians to noninvasively assess the risk of patients with coronary artery disease. The imaging technique, which is recommended in clinical guidelines, facilitates evaluation of atherosclerosis to determine the likelihood of disease progression and cardiovascular events. However, whole heart quantitative CCTA analysis is a skilled and time-intensive task.
Cleerly has developed a tool to automate CCTA analysis using 10 million images from 40,000 patients. The company has validated the technology in multicenter trials, showing its equivalence to other techniques and its ability to reduce the need for invasive testing. One study that used the technology as a gatekeeper to detect vessel narrowing tracked an 86% reduction in invasive coronary angiograms.
A recent analysis of more than 3,500 people enrolled in Cleerly’s registry showed the potential for AI to provide a better understanding of plaque morphology. The insights could inform treatment strategies, but a randomized trial would be needed to show if tailoring therapy to the findings improves outcomes.
Investors committed $223 million to Cleerly in 2022. The latest financing extends that round, giving the company a further $106 million from backers including Insight Partners and Battery Ventures. Cleerly will use the money to scale its commercial growth and clinical evidence generation.
The financing follows the initiation of coverage by several Medicare administrative contractors and the establishment of a Current Procedural Terminology service code that will take effect in January 2026.
Cleerly CEO James Min said in a statement that the use of the funding for commercialization “is especially germane” following the Medicare coverage and a CPT category I code for advanced plaque analysis.
Article top image credit: Retrieved from Cleerly on December 06, 2024
‘Our next frontier is prediction’: Medtronic on AI and heart disease
Stacey Churchwell, who leads cardiovascular diagnostics and services, says artificial intelligence is reducing false positives in Medtronic’s insertable cardiac monitors, saving clinicians time.
By: Susan Kelly • Published July 22, 2024
Medtech companies are rapidly integrating artificial intelligence into cardiovascular medicine, from imaging and electrocardiology to genetics and patient monitoring. Best practices are still evolving, however, and few AI tools have been shown to improve cardiovascular and stroke care enough to be widely adopted, according to a recent statement from the American Heart Association.
Medtronic is among the device makers working to support earlier disease detection and treatment through AI-enabled technologies. The company told MedTech Dive that better algorithms and sensing technologies and an expansion of available data are improving accuracy and access across multiple specialties. For example, AI is now being used to enhance detection of polyps in real-time during colonoscopies, eliminate finger sticks for calibrating continuous glucose monitors in diabetes management and optimize screw alignment in spine surgery planning.
MedTech Dive recently spoke with Stacey Churchwell, general manager of Medtronic’s cardiovascular diagnostics and services business, to discuss the company’s integration of AI algorithms for detecting abnormal heart rhythms into its insertable cardiac monitors. The miniature devices — the latest version is about a third the size of a triple-A battery — are implanted under the skin in a patient’s chest to help diagnose abnormal heart rhythms such as atrial fibrillation.
This interview has been edited for length and clarity.
MEDTECH DIVE: How did Medtronic overcome some of the inherent challenges in developing AI, such as disparities built into algorithms that may reflect bias?
STACEY CHURCHWELL: With regard to AccuRhythm AI, we utilized a large global data repository for training and validation, including not only standard ECGs, but also rare and edge case scenarios, helping us to mitigate biases and ensure the AI's appropriate behavior when deployed in the market. We took active steps to develop additional permutations of data that account for common differences observed across populations.
To prevent the AI from re-learning undesirable habits or biases, we locked its training once our scientists were satisfied with its performance. Before launching the algorithms, we rigorously validated the AI's decisions with human adjudicators to ensure consistency with human decisions.
What is the goal of incorporating AI into the ICM?
We started our journey by improving the specificity of our devices so that we can remove false positives or false alerts that would otherwise create burden for clinics.
An ICM looks for anything abnormal, and it's really designed to be highly sensitive. It's looking for things that don't appear to be normal sinus rhythm, and it's flagging it. Before AI was introduced, there was a potential for things to kind of slip through that may not be clinically relevant or actionable and that would require the care team to have to sift through and look at those alerts to determine and adjudicate whether in fact they're real or not. We've got some great technology on board the device itself, but it's not perfect.
We saw that as an opportunity, and what we have done with AI is to implement it in the cloud. As signals come in that are deemed suspect and could be an arrhythmia, that information gets sent up to the cloud, into our CareLink network, where the AI sits. That AI has been trained using millions of data points.
How is AI improving the ability of the device to do its job?
We've been doing this for 26 years. We've got a lot of data on board. We've used that data to train the AI to understand and look for those discrete patterns. It goes through a process of adjudicating those signals to say, “I have high confidence that this is real, I have high confidence that it's not real, or maybe it's borderline.”
When it gets through that filter, we immediately eliminate [false alerts] that we know with high confidence aren’t real. And so the physician, the clinician team, never sees that. We're going to always send forward [signals] that we know are real, but also those things that may be suspect. We're always going to put that in front of the clinician to make the final judgment on. But because the AI is so good, it does a fantastic job of weeding out a lot of that noise that otherwise, before the AI was activated, would have slipped through, and that burden would have been left on the care team to try to figure out.
How well is the AI performing?
We've been really, really thrilled with the results. We hear it from our clinicians that it's making a difference. We’re committed to not compromising on our ability to identify true events. That sensitivity, we try to keep in that 99% range, so that we can say that when we call something AFib, you can take to the bank that it's AFib. You're not going to miss those events that you could have otherwise missed had we compromised on our sensitivity capability.
What we've seen with AccuRhythm AI 2.0 — we launched 2.0 last summer — is that the AFib false alert reduction went from 74.1% to 88.2%. We're taking out even more of that false data.
What is the benefit of being cloud-based?
One, we get to take advantage of the computational power of the cloud, versus burdening the device itself, which would have a battery longevity consequence. The other great benefit is that we get to impact and improve devices that were implanted years ago.
When we flip the switch in the cloud, every device that's still active gets the benefit. Whether we implanted it today, two days ago, two weeks ago, two months ago, two years ago — if that device is still active, it's going to see the benefit of that AI. That clinic is going to see that relief.
Where will you be focusing your AI efforts next?
We think our next frontier is prediction, and one of the things that we have an opportunity to do is to demonstrate that Linq can play a role in facilitating heart failure management.
We've got a big clinical trial underway right now called Alleviate-HF that's exploring that very question, and we'll have results back in about a year’s time that will give us the green light to move forward in commercializing a solution and algorithm that will leverage AI for prediction. Then we can give our customers the heads up that a patient may be headed toward hospitalization, which would give them time to take action and adjust meds to perhaps keep that patient out of the hospital and drive costs out of the system.
When is it best for a heart patient to get an insertable monitor versus a wearable one?
Wearable solutions are considered as a first step. Holter monitors or external patches are generally used for a time period ranging from, say, seven days to 30 days. One of the big hurdles that those technologies have is around patient compliance.
For patients who have more infrequent symptoms, and these could be episodes that happen every two or three months or even six months, you really want to consider something more long term. The great thing about an ICM is it's set it and forget it. It's looking 24/7, and you have up to several years of a window to find what you're looking for.
There's a recent study [Stroke-AF] that showed that 78% of patients who had AFib would have been missed if you only monitor them for 30 days. We know without a doubt that 30 days just isn't enough. That's why we tend to think that ICM is a great solution to consider for those patients.
Article top image credit: Permission granted by Medtronic
How AI tools are transforming the medtech space
Artificial intelligence tools have become more prominent in the medical device industry over the past several years, advancing the diagnostics space into new areas. From detection to prevention, new AI-enabled devices are expanding capabilities across multiple health conditions.