On a quiet Monday morning, before the first cup of coffee has been poured, a faint sound can be heard coming from inside the organized chaos of an engineering lab. Tucked away in a corner, somewhere between a seemingly ever-broken 3D printer and a box of orphaned circuit boards, a laptop screen glows blue in the dim light.
Beside the laptop, a tangled series of cables and air hoses connects the laptop to a small pneumatic cylinder. At the end of the cylinder, a crude, finger-like object shoots forward and back, monotonously depressing a button almost in time with the clock on the wall that runs six minutes slow. Taped above the bench hangs a hand-scribbled label with the words "Reliability Testing."

In the complex world of medical device development, we often lose sight of the fact that the true performance of a design cannot be accurately gauged by a single test on a single device. Any first-year stats student will tell you that the smaller the sample size, the less confident you can be that a result is repeatable. A device may pass one test once, but that does not guarantee reliability, and that can lead to costly failures once in production. Of course, in a perfect world we would test a million different samples a million different times and then do it all again just to be sure (just enough time to binge-watch your favourite series on Netflix).
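To put a rough number on why one passing test proves so little, here is a minimal sketch using the success-run theorem, which gives the lower-bound reliability demonstrated by a zero-failure test at a chosen confidence level. The sample counts are illustrative, not from the article:

```python
def min_reliability(n_passes: int, confidence: float = 0.95) -> float:
    """Success-run theorem: the lower-bound reliability demonstrated by
    n consecutive passes with zero failures, at the given confidence."""
    return (1.0 - confidence) ** (1.0 / n_passes)

for n in (1, 10, 30, 59):
    print(f"{n:3d} passes -> demonstrated reliability >= {min_reliability(n):.2%}")
```

A single passing test only demonstrates about 5% reliability at 95% confidence; it takes 59 consecutive passes to claim the well-known "95/95" (95% reliability at 95% confidence).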

The reality, however, is that smaller companies just entering the medical device industry face real budgetary limitations. Depending on the complexity of a device, the cost of even a single prototype can run to five or even six figures. Because of this, we often see small sample sizes relied upon to satisfy new product introduction (NPI) requirements and support regulatory submissions. However, as with so many things in life, there is room for compromise. It may not be practical to build multiple prototypes and test everything multiple times, but it is not unreasonable to identify the key design aspects related to safety and efficacy and focus enhanced testing on those areas.

Determining the reliability of a device often starts by identifying the undesired scenarios that could result from various failure modes. At the top of the list should always be scenarios with a potential for harm to a person. Following that, situations where a device is damaged and becomes unusable may be considered, along with sub-optimal user experiences. A fault tree analysis (FTA) can be used to identify these scenarios; once they are identified, a failure mode and effects analysis (FMEA) can be conducted to determine how severe each failure is and how often it is likely to occur. This process is closely related to the risk management process described in ISO 14971 – Application of Risk Management to Medical Devices.
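A common way to prioritize the failure modes that come out of an FMEA is the risk priority number (RPN): severity times occurrence times detection, each rated on a 1–10 scale. The failure modes and ratings below are hypothetical, purely to show the mechanics:

```python
# Toy FMEA worksheet: rank hypothetical failure modes by
# Risk Priority Number (RPN = severity x occurrence x detection).
failure_modes = [
    # (description, severity, occurrence, detection), each rated 1-10
    ("Button fails to actuate", 9, 3, 4),
    ("Housing cracks on drop", 6, 4, 3),
    ("Label wears off", 3, 5, 2),
]

ranked = sorted(failure_modes, key=lambda fm: fm[1] * fm[2] * fm[3], reverse=True)
for desc, s, o, d in ranked:
    print(f"RPN {s * o * d:4d}  S={s} O={o} D={d}  {desc}")
```

The highest-RPN items are the natural candidates for the enhanced testing described above, though high-severity items usually warrant attention regardless of their RPN.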

By identifying specific failure modes, we can develop reliability tests that try to provoke those failures. Often the main purpose of these tests is to determine how often a device fails and how long it takes to fail, commonly expressed as the mean time to failure (MTTF). The exercise is also beneficial in that it shows us how a device might fail. This is important because it allows us to identify design weaknesses that can be improved upon.
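As a minimal sketch of the MTTF idea: given observed times (here, cycles) to failure from a handful of samples, the MTTF is simply their mean, and under a constant-failure-rate (exponential) life model it yields a survival probability for any target duration. The failure data below are invented for illustration:

```python
import math

# Hypothetical cycles-to-failure observed across five test samples.
cycles_to_failure = [18200, 21500, 19800, 25100, 17400]

# Mean time (in cycles) to failure.
mttf = sum(cycles_to_failure) / len(cycles_to_failure)

def reliability(t: float, mttf: float) -> float:
    """Probability of surviving t cycles, assuming a constant failure
    rate (exponential life model) -- a common first approximation."""
    return math.exp(-t / mttf)

print(f"MTTF ~ {mttf:.0f} cycles")
print(f"P(survive 10,000 cycles) ~ {reliability(10_000, mttf):.1%}")
```

Real programs often fit a Weibull distribution instead, since wear-out failures rarely follow a constant failure rate, but the exponential model is the usual starting point.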

Physical failures often consist of, or begin with, the failure of a single component. They can occur for any number of reasons: unexpected wear, torsion, insufficient material strength, tolerance stack-up, to name just a few. By performing extensive testing of certain components or functions of a device, we can observe these failures and estimate their likelihood of occurrence over time. Enter "The Buttonator 3000".

Once we have identified a specific component or function of a device that we deem "really important" to safety and/or efficacy, we want to test that feature to ensure it works. When Bob from accounting's undying love for jelly doughnuts finally catches up with him, the last thing we want is for the button on the automated external defibrillator (AED) to fail in his time of need. Like Bob's love for doughnuts, the engineering team has a love for getting paid on Friday, so it's in everyone's best interest to ensure that the button on the AED is reliable.

By using the Buttonator 3000 test jig to press a number of buttons thousands of times, we can simulate real-life use and observe how and when each button fails. Observing how it fails helps us improve the design and make it more robust. Observing when it fails helps us estimate, with a stated degree of confidence, how long we can expect the component to last, which in turn helps us determine the effective lifetime of our device.

Benjamin Franklin said that only two things in life are certain: death and taxes. Every device will eventually fail, but there are tools available that allow us to better understand how and when. Reliability testing is one of those tools, and it is an effective one: it saves money by helping to produce a high-quality device that consistently meets the needs of its users. Now, if only the clock and the 3D printer could be so reliable…

Cam Neish is a StarFish Medical Quality Assurance Manager, an ASQ Certified Quality Engineer and a Certified ISO 13485 Lead Auditor. He reliably pushes buttons all the time.

Photo credit: ID 110868378 © Mykola Senyuk | Dreamstime.com


One response to “A “Pressing Issue”- Reliability Testing in Medical Device Development”

  1. Good article about an important part of NPI. With a device, there are hundreds of things that can go wrong but only a few will. Besides selecting a few functions determined in brainstorming (fmea) it would be good to test the heck out of a device to see what actually does fail. A lot of field failures are from unexpected component failures – ones that did not show up on the hit list.
    You added a good bit of humour to this article – makes for a fun read.
