
Look Before You Build: The Role of the MAUDE Database in Medical Device Development
TL;DR
- The MAUDE database helps teams learn from adverse events in medical devices.
- Reports are often incomplete, misclassified, or hard to search.
- AI-enabled devices introduce risks that MAUDE cannot capture well.
- MAUDE lacks mechanisms to report AI-specific failures like concept drift.
- It is best used as a complementary tool, not the sole source of device risk data.
Medical device development is a complex process that requires careful attention at every stage. One of the most important ethical principles guiding this work is non-maleficence, or the commitment to “do no harm” – a core tenet of medicine that also applies to medical devices.
As such, it is highly beneficial to be able to look at adverse events caused by similar devices, to help prevent these events from recurring in the devices being developed. A valuable source of insight is the FDA’s MAUDE database, which captures reports of medical device adverse events.
Understanding the FDA MAUDE Database
The Manufacturer and User Facility Device Experience (MAUDE) database is an FDA-curated collection of medical device reports (MDRs) documenting adverse events involving medical devices. Reports are submitted to the FDA by mandatory reporters such as manufacturers, importers, and device user facilities, as well as voluntary reporters such as healthcare professionals, patients, and consumers. A wide range of reports are submitted, with the last 10 years’ worth being searchable online through the FDA’s website.
The database provides both a Simple Search (keyword-based) and an Advanced Search, which allows filtering by fields such as Product Problem, Product Class, Event Type, and Manufacturer.
Key Fields in MAUDE Medical Device Reports
Each Medical Device Report (MDR) can include nearly 150 fields, though not all are always completed. The most relevant fields include Brand Name, Event Type, Device Problem, Patient Problem, Event Description, and Manufacturer Narrative. These fields help identify the device, categorize the event, and explain what occurred. Well-completed MDRs provide detailed accounts of adverse events and can offer valuable insight into issues likely to arise with similar devices. Combined with the ability to search by product class, this makes it easy to find many examples of adverse events caused by a certain type of device.
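The same MDR data can also be pulled programmatically. As a rough illustration (separate from the MAUDE web interface described above), the sketch below queries the openFDA device event endpoint, which exposes MAUDE reports as JSON; the endpoint URL, search syntax, and field names (device.brand_name, event_type, mdr_text) are assumptions based on the public openFDA documentation, not an official MAUDE export.

```python
# Minimal sketch: pull a handful of MAUDE-derived device event reports from openFDA
# and print the kinds of fields discussed above. Endpoint and field names are
# assumptions based on the public openFDA device/event API.
import requests

OPENFDA_DEVICE_EVENT_URL = "https://api.fda.gov/device/event.json"

def fetch_reports(query: str, limit: int = 5) -> list[dict]:
    """Return up to `limit` device event records matching an openFDA search query."""
    params = {"search": query, "limit": limit}
    resp = requests.get(OPENFDA_DEVICE_EVENT_URL, params=params, timeout=30)
    resp.raise_for_status()
    return resp.json().get("results", [])

if __name__ == "__main__":
    # Illustrative query only: malfunction reports mentioning infusion pumps.
    for record in fetch_reports('device.generic_name:"infusion pump" AND event_type:"Malfunction"'):
        device = (record.get("device") or [{}])[0]
        narrative = (record.get("mdr_text") or [{}])[0].get("text", "")
        print(device.get("brand_name"), "|", record.get("event_type"))
        print(narrative[:200], "...\n")
```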
Limitations of the MAUDE Database
Currently, MAUDE is a useful tool to help teams conduct risk analysis, make design/usability decisions, reduce submission timelines, and more. However, it is not without its shortcomings. MAUDE reports can fall short for many reasons:
- Missing data is rampant among MDRs, with fields such as Event Date and Event Location absent from a significant number of reports (a quick way to audit such gaps programmatically is sketched after this list). While every report contains an Event Description, the descriptions themselves are often short, incomplete, and/or poorly articulated, providing little insight into the event or its cause. For consumer-submitted reports, the device manufacturer often has difficulty obtaining enough information to determine whether the incident is truly a device malfunction or simply user error. Multiple reports show the manufacturer making “multiple attempts to follow up with the user” without receiving a response, or attempting to retrieve the device for investigation without success.
- Events are often misclassified – MDRs are generally categorized as Death (D), Injury (I), Malfunction (M), or occasionally Other (O). These categories are both limited and imbalanced: most reports are classified as malfunctions, though many event descriptions reveal user error rather than device failure. Similarly, some reports that are categorized as deaths turn out to be completely unrelated to the device. For example, a patient who passed away after falling from a wheelchair was reported under “Death,” though the event was attributed to the patient’s weakness rather than the wheelchair itself.
- Searching the database is a chore – MAUDE’s search capabilities are limited. Simple Search only supports keyword queries, while the Advanced Search function allows filtering by fields but not by keywords. Additionally, results must be opened one report at a time to see all fields, making the process tedious and time-consuming.
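To get a rough sense of how widespread the data gaps are, a simple audit over a batch of downloaded reports can count how often key fields are empty. The sketch below assumes the reports have already been retrieved as a list of dictionaries (for example via the openFDA sketch above); the field names are illustrative assumptions, not a definitive MAUDE schema.

```python
# Rough audit of missing fields in a batch of device event records.
# `records` is assumed to be a list of dicts (e.g. JSON pulled from openFDA);
# the field names below are illustrative assumptions, not an official schema.
from collections import Counter

FIELDS_OF_INTEREST = ["date_of_event", "event_location", "event_type"]

def count_missing(records: list[dict], fields: list[str]) -> Counter:
    """Count, per field, how many records have no usable value."""
    missing = Counter()
    for record in records:
        for field in fields:
            if record.get(field) in (None, "", [], {}):
                missing[field] += 1
    return missing

def report_missing(records: list[dict]) -> None:
    missing = count_missing(records, FIELDS_OF_INTEREST)
    total = len(records)
    for field in FIELDS_OF_INTEREST:
        print(f"{field}: missing in {missing[field]}/{total} reports")
```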
Where MAUDE Falls Short for AI Devices
AI adoption is rapidly becoming ubiquitous, and medical devices are no exception. As of August 20, 2025, the FDA’s AI-Enabled Medical Devices List includes 1,247 AI-enabled devices compared to 864 at the end of 2023, highlighting rapid growth. While AI-enabled devices span a wide range of applications, most are diagnostic and imaging devices. Examples include an AI diagnostic tool that evaluates a patient’s risk of developing sepsis, and multiple devices that use AI to guide users in acquiring ultrasound images.
As the development of AI-enabled devices accelerates, one central concern is how adverse events will be documented. The MAUDE database is designed to capture individual device-level reports of adverse events, which is useful for identifying patient/case-level issues in traditional devices. However, many issues with AI-enabled devices may not be obvious unless seen at a much larger scale; if a diagnostic device is supposed to provide a correct prediction with 90% accuracy, or an imaging device is supposed to detect cancer at 85% sensitivity, a single failure may not indicate malfunction. Instead, problems may only become apparent after aggregating large numbers of cases to see if the model is falling short of its advertised success rates.
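One way to make this concrete: rather than judging a single failure, aggregate outcomes over many cases and test whether observed performance is statistically below the advertised rate. The sketch below applies a one-sided binomial test to the 90% accuracy figure from the example above; the numbers are illustrative, not drawn from any real device.

```python
# Illustrative post-market check: is an AI device's observed accuracy
# statistically below its advertised rate, once enough cases are aggregated?
from scipy.stats import binomtest

def below_claimed_rate(correct: int, total: int, claimed: float, alpha: float = 0.05) -> bool:
    """One-sided binomial test: True if performance is significantly below `claimed`."""
    result = binomtest(correct, total, p=claimed, alternative="less")
    return result.pvalue < alpha

# A single miss tells us very little...
print(below_claimed_rate(correct=0, total=1, claimed=0.90))       # False
# ...but 850 correct out of 1,000 cases is strong evidence of a shortfall vs. 90%.
print(below_claimed_rate(correct=850, total=1000, claimed=0.90))  # True
```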
At present, very few AI-enabled devices listed by the FDA have corresponding entries in the MAUDE database. Preliminary search efforts using the FDA’s search tools as well as a StarFish-developed internal MAUDE search tool* returned few results. In their review of approximately 950 devices, Babic et al. in “A general framework for governing marketed AI/ML medical devices” found that most adverse event reports originated from only two devices and were unrelated to the AI/ML algorithms themselves, suggesting that adverse events involving AI algorithms may be underreported.
Compounding this challenge, the current MAUDE system lacks a mechanism for reporting AI-specific failures. For instance, two common challenges in AI/ML systems are concept drift, where the relationship between inputs and outputs changes over time, and covariate shift, where the input/output relationships are unchanged but the distribution of input data changes over time. Both cases can result in inaccurate predictions and poor model performance, but there is no way to report this in the current system, as they are not technically “malfunctions”.
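Neither failure mode surfaces as a discrete "malfunction," but both can be monitored statistically. As a hedged illustration, the sketch below flags possible covariate shift by comparing the distribution of a single input feature in recent production data against a reference sample from development, using a two-sample Kolmogorov–Smirnov test; a real drift monitor would cover many features and also track the input–output relationship to catch concept drift.

```python
# Illustrative covariate-shift check: compare the distribution of an input
# feature in recent production data against a development-time reference sample.
import numpy as np
from scipy.stats import ks_2samp

def covariate_shift_detected(reference: np.ndarray, recent: np.ndarray,
                             alpha: float = 0.01) -> bool:
    """Two-sample KS test; True if the recent input distribution differs from the reference."""
    statistic, pvalue = ks_2samp(reference, recent)
    return pvalue < alpha

# Synthetic example: the feature's mean has drifted upward after deployment.
rng = np.random.default_rng(0)
reference = rng.normal(loc=0.0, scale=1.0, size=5000)   # development data
recent = rng.normal(loc=0.4, scale=1.0, size=5000)      # post-deployment data
print(covariate_shift_detected(reference, recent))       # True: input distribution has shifted
```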
The Future of the MAUDE Database in an AI-Enabled World
To capture and report on AI medical device safety, the MAUDE database should be updated to report characteristics relevant to AI-enabled devices, including challenges like concept drift and covariate shift, as well as algorithmic stability (similar inputs should produce similar outputs). Because AI systems may adapt or degrade even after they are deployed to market, continuous monitoring and reporting are critical. Without these reforms, regulators risk overlooking “hidden” but clinically significant failures that cannot be detected through case-by-case adverse event reporting alone.
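Algorithmic stability, at least, can already be probed with a simple perturbation test: feed the model slightly noised copies of the same input and check that its outputs stay within a tolerance. The sketch below is a generic illustration using a placeholder `predict` function; the noise level and tolerance are arbitrary assumptions that would need to be justified for any particular device.

```python
# Illustrative stability probe: similar inputs should produce similar outputs.
# `predict` is a placeholder for any model that maps a feature vector to a score.
from typing import Callable
import numpy as np

def is_stable(predict: Callable[[np.ndarray], float], x: np.ndarray,
              noise_scale: float = 0.01, n_trials: int = 100,
              tolerance: float = 0.05, seed: int = 0) -> bool:
    """Return True if small input perturbations never move the output by more than `tolerance`."""
    rng = np.random.default_rng(seed)
    baseline = predict(x)
    for _ in range(n_trials):
        perturbed = x + rng.normal(scale=noise_scale, size=x.shape)
        if abs(predict(perturbed) - baseline) > tolerance:
            return False
    return True

# Toy example: a smooth scoring function passes the check.
print(is_stable(lambda x: float(np.tanh(x.sum())), np.array([0.2, -0.1, 0.4])))  # True
```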
In the meantime, caution is warranted when using MAUDE to assess AI-enabled devices. As the database was not designed with AI algorithmic failures in mind, AI-specific adverse events will often go unrecognized or be misclassified. As a result, relying solely on MAUDE will likely provide an incomplete picture of risk. For now, it is best used as a complementary resource: useful for identifying certain device-level issues, but insufficient on its own for evaluating the ever-evolving safety and effectiveness of AI-enabled medical devices.
* The internally developed MAUDE search tool uses language models and semantic-similarity search to identify and summarize MAUDE entries that are relevant to the user’s search query, allowing for hundreds of entries to be viewed in minutes.
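For readers curious about the mechanics, a stripped-down version of the semantic-search idea (not the actual StarFish tool) might look like the sketch below: a publicly available sentence-embedding model ranks MDR narratives by cosine similarity to a free-text query.

```python
# Stripped-down semantic search over MDR narratives (illustrative only; this is
# not the StarFish internal tool). A public sentence-embedding model ranks
# event descriptions by cosine similarity to a free-text query.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed public embedding model

def rank_narratives(query: str, narratives: list[str], top_k: int = 5) -> list[tuple[float, str]]:
    """Return the top_k narratives most semantically similar to the query."""
    query_emb = model.encode(query, convert_to_tensor=True)
    corpus_emb = model.encode(narratives, convert_to_tensor=True)
    scores = util.cos_sim(query_emb, corpus_emb)[0]
    ranked = sorted(zip(scores.tolist(), narratives), reverse=True)
    return ranked[:top_k]

# Example: rank a few hypothetical event descriptions against a query about battery failures.
narratives = [
    "Device powered off unexpectedly during use; battery found depleted.",
    "Patient reported skin irritation at sensor site.",
    "Alarm failed to sound when infusion line was occluded.",
]
for score, text in rank_narratives("battery failure causing shutdown", narratives, top_k=2):
    print(f"{score:.2f}  {text}")
```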
Andrew Xie is a Machine Learning Scientist co-op student at StarFish Medical. Also a practicing pharmacist, he enjoys combining his healthcare background with machine learning to explore new ways of solving problems.
Images: StarFish Medical