*Guest post by Annelies Tjebbes*

Picture this: it’s product verification time. Today you are going to start testing your product. You have put together a plan, thought of everything that could go wrong, and planned for it: you are ready. You read over the first few sections of the Verification Plan to refresh your memory of the objectives and the materials required, and you get to the first test…

Test #1: Weigh the device and confirm that the weight is < 5 kg and that the device weighs less than the predicate device.

You put your device on the scale and it displays 5 kg. Here are some thoughts that might pass through your mind:

– “Can I squeeze a line under the < to make it look like a ≤ symbol?”

– “No, it’s not a big deal. I’ll just record it as a fail.”

– “But wait, then I’ll have to alter the design and redo the test, or change the spec.”

– “Doesn’t the predicate device weigh 4.8 kg? It’s one thing to change the numerical weight in the spec, but it was a goal to design a lighter device than the predicate…”

Wikipedia has a well-formulated definition of a test plan: “A test plan is a document detailing a systematic approach to testing a system such as a machine or software.” It sounds so easy. If verification is a “systematic approach,” how could it possibly go wrong? Well, it can certainly go wrong in many ways if the Verification Plan is not sound.

In my experience, the following are some of the key questions worth considering when formulating a verification plan:

  • Is this verification test to be used as a point of reference (for example, to benchmark your alpha design and perform a gap analysis between alpha and beta)?
  • Are all of the relevant input requirements that will be tested included in the verification plan?
  • Is there traceability for each test entry back to the input requirements that this plan is based on (Requirements Document, Regulatory Standard, etc.)?
  • How much time has been allotted for verification?
  • Are these tests reasonable ones that have objective criteria? Can they be executed with the equipment available?
  • Is there a particular order the tests should be performed in? Some tests damage the device and can affect following tests. Sometimes the results from one test will influence whether other tests are needed.
  • Is there enough detail included in the plan to ensure that someone other than the author can perform the tests? And on that note, who will be doing the testing?
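The traceability question above lends itself to a simple automated check. The sketch below is a hypothetical illustration, assuming requirement and test IDs of my own invention; it flags any input requirement that no verification test traces back to.

```python
# Hypothetical sketch: confirm every input requirement is covered by at
# least one verification test. All IDs below are illustrative, not from
# any real plan.

# Requirement IDs pulled from the Requirements Document.
requirements = {"REQ-001", "REQ-002", "REQ-003"}

# Each verification test traces back to one or more requirement IDs.
test_traceability = {
    "TEST-01": ["REQ-001"],
    "TEST-02": ["REQ-002", "REQ-003"],
}

# Flatten the traced requirements and find any that are not covered.
covered = {req for reqs in test_traceability.values() for req in reqs}
uncovered = requirements - covered
print(sorted(uncovered))  # prints [] when every requirement is covered
```

A check like this is cheap to run every time the plan or the requirements document changes, so coverage gaps surface before testing starts rather than during it.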

In short, there is a lot to consider, and a significant amount of effort goes into laying out a thorough verification plan. Returning to the earlier example, some changes to the plan that could prevent this type of conundrum are: including a measurement tolerance on numerical values; not including criteria that aren’t objective or that require referring to another document (“must weigh less than the predicate device”); and ensuring that the person executing the test plan is not the same person as the author.
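The measurement-tolerance fix can be made concrete. Below is a minimal sketch, assuming an illustrative spec limit and scale tolerance (neither comes from the original test); the idea is that the pass criterion accounts for measurement uncertainty up front, so a reading right at the limit is an unambiguous fail rather than a judgment call at the scale.

```python
# Hypothetical pass/fail criterion with an explicit measurement tolerance,
# rather than a bare "< 5 kg". The limit and tolerance are illustrative.

SPEC_LIMIT_KG = 5.0
TOLERANCE_KG = 0.1  # scale accuracy, stated in the plan before testing


def weight_test_passes(measured_kg: float) -> bool:
    """Pass only if the reading is below the limit by more than the
    measurement tolerance, so borderline readings fail unambiguously."""
    return measured_kg <= SPEC_LIMIT_KG - TOLERANCE_KG


print(weight_test_passes(4.8))  # True
print(weight_test_passes(5.0))  # False: no squeezing a line under the <
```

Writing the criterion this way removes the temptation described at the start of the post: the decision was made when the plan was written, not when the device hit the scale.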

While developing the plan, I have also found it useful to start thinking about how to deal with elements that might fail verification testing. Will I revise the spec or modify the design? What kind of deadline am I working toward (e.g., an FDA 510(k) submission or a board meeting)? Relatedly, how much time will I have after verification to make changes or fixes if anything fails?

Developing a verification plan is a highly challenging task, but I have found that careful consideration of the key questions above, along with a peer review by a colleague external to the project, helps me create an effective and sound plan. The better the plan, the better the execution, so I find it’s always best to start off on the right foot!

 

