Jun 27, 2025
Marcus Schabacker called for more upfront regulations and postmarket monitoring to better understand how AI features affect patient care.
The Food and Drug Administration is grappling with a surge in the number of medical devices that contain artificial intelligence or machine learning features. The agency had authorized 882 AI/ML-enabled devices as of March, and many other devices include AI features that don’t require regulatory review.
Currently, AI is most often used in medical devices in the radiology field, although the technology is also used in pathology, for appointment scheduling and in clinical support tools that pull in a variety of metrics.
The influx of AI in medical devices has raised questions from lawmakers and patient safety groups. Medical device industry group Advamed responded to a Congressional request for information on AI in healthcare in May. Advamed said the FDA’s authority is “flexible and robust enough” for AI/ML in medical devices.
On the other hand, patient safety nonprofit ECRI listed insufficient governance of AI in medical technologies among its top health technology hazards for 2024. CEO Marcus Schabacker said AI has the potential to do good, but the technology also can do harm if it provides inaccurate results or magnifies existing inequalities.
MedTech Dive spoke with Schabacker about what developers, regulators and hospital administrators can do to ensure devices that use AI are safe and effective.
This interview has been edited for length and clarity.
MARCUS SCHABACKER: We think the FDA is taking a much too laissez-faire approach here. They should be much more stringent, particularly with decision support tools, and have clear regulations.
If they don’t want to regulate it upfront, they need to put post-market surveillance programs in place at a minimum. But we believe they should be way more forceful in the regulatory pathway and get more evidence from the companies that their tools are truly supportive instead of influencing decisions in a more severe way.
The FDA’s always in a tough spot. On the one hand, they want to make sure that the population stays safe. That’s their mission. That’s our mission too. On the other hand, they want to make sure appropriate innovation can happen and don’t want to be seen as a blocker.