The Food and Drug Administration received more than 100 comments after seeking responses on how to monitor the real-world performance of artificial intelligence in medical devices.
The feedback diverged: patients called for stronger postmarket reporting, while medical groups said monitoring should be manufacturers’ responsibility. Device companies, meanwhile, urged the FDA to rely on its existing regulatory frameworks rather than introduce new requirements.
The FDA’s emphasis on real-world performance comes as the agency considers how to regulate increasingly complex technologies, such as generative AI, and how to ensure the performance of AI models doesn’t degrade over time.
Industry groups oppose universal postmarket requirements
Medtech lobbying groups and individual companies called for the FDA to use existing quality metrics and a risk-based approach rather than implementing universal postmarket monitoring requirements.
AdvaMed, a medical device industry group, recommended that the FDA use existing regulatory requirements, such as those outlined in the Quality Management System Regulations, adding that they provide “robust mechanisms” for design validation and postmarket surveillance.
“Duplicative or prescriptive new requirements for performance monitoring of AI-enabled devices risks undermining both patient safety and innovation,” the trade group wrote in comments.
AdvaMed instead called for a risk-based approach built on QMS and international consensus standards, adding “there is no one-size-fits all approach to performance monitoring for AI-enabled devices.”
The Medical Device Manufacturers Association also called for a risk-based approach, adding that additional monitoring requirements should be reserved for specific circumstances. The lobbying group said that locked AI models, which don’t change autonomously over time, may carry lower risk and not require postmarket monitoring.
“In contrast, continual machine learning models that update autonomously based on new data may introduce additional complexities and risks, which could call for specific monitoring mechanisms beyond standard controls,” wrote MDMA CEO Mark Leahey.
Olympus Corporation of the Americas also called for the use of existing quality management structures, and Masimo supported a risk-based approach.
Healthcare providers say monitoring should be manufacturers’ job
Hospitals and medical groups see a need for postmarket monitoring of AI devices, but they said that work should be manufacturers’ responsibility. Comments emphasized the growing number of AI tools, but also noted that many hospitals don’t have the resources to evaluate or monitor these technologies.
The American Hospital Association wrote in comments that hospitals are expanding their use of AI applications. Although the technology is mostly used for administrative tools, facilities are also deploying AI-enabled medical devices.
“The potential for bias, hallucinations and model drift demonstrates the need for measurement and evaluation after deployment,” wrote Ashley Thompson, the AHA’s senior vice president of public policy analysis and development.
Thompson said the FDA should update adverse event reporting metrics to include AI-specific risks. The AHA also recommended that the FDA add monitoring requirements for manufacturers, from periodic revalidation to ongoing surveillance, depending on a device’s risk. The hospital lobbying group suggested that the FDA focus its efforts on higher-risk areas related to the diagnosis of conditions or the treatment or mitigation of disease, and not clinical decision support or administrative tools.
“The ‘black box’ nature of many AI systems can make it more challenging for hospitals and health systems to identify flaws in models that may affect the accuracy and validity of an AI tool’s analyses and recommendations,” Thompson wrote. “As such, post-market measurement and evaluation standards should be developed for vendors.”
Thompson added that some hospitals — particularly rural and critical access facilities — may not have the staff or resources to support AI governance and ongoing monitoring.
The American College of Surgeons offered similar comments, supporting postmarket monitoring but saying that responsibility should rest with vendors rather than surgeons or other clinicians.
Patients call for transparent monitoring
Patients wrote to the FDA calling for transparent monitoring and better performance metrics that reflect people’s lived experiences.
“When an AI-enabled device misfires, patients experience it as extra visits, extra tests, confusion about why their care plan changed, or mental distress when an automated output contradicts what they know about their own bodies,” wrote Andrea Downing, co-founder and president of the Light Collective, a nonprofit advocating for patient rights in health tech.
Burdens like these rarely appear in traditional reporting metrics, Downing wrote, adding that device evaluations should account for additional appointments, delays in care, confusion about the AI’s role, and emotional or mental distress.
Dan Noyes, a healthcare AI strategist, mentioned his personal experience living with a chronic health condition in comments to the FDA. He called for disclosures to patients about how AI tools are involved in their care decisions, as well as transparency for when models are updated, and testing across diverse populations to ensure equitable performance.
“From the patient perspective, approval is not the finish line — it is the starting line,” Noyes wrote. “Patients need confidence that these devices will continue to perform safely and fairly after deployment.”