5 Tips for medical device cyber security


Author: Helen Simons

Editor’s note: This is the 2022 update to the August 2015 “5 Tips for medical device cyber security” blog by Vincent Crabtree. Read the original one here.

In the modern world, cyber security and data protection have become familiar topics to us all. From agreeing to cookies on websites, to signing data privacy agreements, to ever-increasing logins and access controls, we all feel the impact of cybersecurity protections.

These protections may seem burdensome to the user, but they are there for a reason. As devices and websites collect more and more data about us and our habits, that data becomes valuable to those wishing to exploit it. Cyber security attacks and data privacy breaches are relatively common occurrences these days.

When it comes to medical devices, we have an obligation to keep our users safe and protect their data privacy, especially in matters relating to their health status. It behooves us to keep on top of cybersecurity best practices to ensure this. As with any design feature, security is best considered and integrated from the beginning of development, not treated as an add-on.

Safety Does Not Mean Security in Medical Device Cyber Security

Developers of medical devices are familiar with the concept of devices that are either fail-safe, for low-risk devices, or fail-over, for devices that must maintain a certain level of performance. ISO 14971 defines a risk management process that quantifies risk, identifying when mitigations must be implemented and subsequently verified. For devices that must fail over, the IEC 60601 family of standards defines the essential performance that must be maintained for a large variety of medical device types.

An example which shows the difference between safety and security is the reverse park assist feature found on newer cars – those which turn the steering wheel for you as you back into a parking spot. A risk analysis might identify accidental activation of the reverse park feature while travelling at highway speeds, and the mitigation would be to disable the feature while travelling at speeds greater than 5km/h.

From a security perspective, this is not secure: a hacker who has compromised a car’s Controller Area Network (CAN) bus could issue a command indicating the speed is 3 km/h and then issue a command to engage the reverse park feature. Once the system is compromised, it is trivial to issue multiple commands. The security risk mitigation here could be architectural, for example dual CAN bus backbones with different allowed commands on each bus, which would require an attacker to compromise both backbones.
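The dual-backbone idea can be sketched in a few lines (Python purely for illustration; the function name and thresholds are invented, not taken from any vehicle platform). The feature only engages when two independently sourced speed readings agree and both are under the limit, so spoofing a single bus is no longer enough:

```python
def park_assist_allowed(speed_bus_a_kmh: float, speed_bus_b_kmh: float,
                        max_speed_kmh: float = 5.0,
                        max_disagreement_kmh: float = 2.0) -> bool:
    """Enable reverse park assist only when two independently sourced
    speed readings agree and both are below the speed threshold.

    A spoofed value injected on one bus then fails the cross-check.
    Thresholds here are illustrative assumptions.
    """
    if abs(speed_bus_a_kmh - speed_bus_b_kmh) > max_disagreement_kmh:
        return False  # sources disagree: possible spoofing or sensor fault
    return max(speed_bus_a_kmh, speed_bus_b_kmh) <= max_speed_kmh
```

The same pattern, requiring agreement between independent sources before permitting a safety-relevant action, generalizes beyond automotive systems.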

Closer to home, an insulin pump’s security mitigations could require user confirmation for remotely controlled administration of large doses, and/or limit the maximum number of doses in a given time period.

Implement Regulatory Guidance

The FDA released a guidance document titled “Content of Premarket Submissions for Management of Cybersecurity in Medical Devices” in October 2014.

2014 might seem like a long time ago in the world of cybersecurity, and it is, but this is still the current applicable guidance on this topic, and its scope is very limited. The FDA attempted a more extensive guidance in 2018, which did not make it past the draft stage due to the large number of comments it received. The FDA has recently released a new draft guidance, “Cybersecurity in Medical Devices: Quality System Considerations and Content of Premarket Submissions”.

The 2014 guidance document describes, at a minimum, a risk assessment approach to cybersecurity:

  1. Identification of threats, assets and vulnerabilities
  2. Assessment of the impact of threats and vulnerabilities being exploited
  3. Determination of risk level and suitable mitigation strategies
  4. Assessment of residual risk and risk acceptance criteria

This risk-based approach should be familiar to medical device developers since it mirrors ISO 14971, and it is also good practice to verify, where possible and practical, that any mitigations which are not part of the architecture are effective.
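The four steps above can be reduced to a miniature sketch (the scales, scores, and acceptance threshold are illustrative assumptions, not values from the guidance or from ISO 14971):

```python
# Qualitative 1-3 scales for likelihood and impact, in the style of a
# simplified risk matrix; real programs define their own scales and
# acceptance criteria in the risk management plan.
ACCEPTABLE = "acceptable"
MITIGATE = "mitigation required"

def risk_level(likelihood: int, impact: int) -> str:
    """Steps 2-4 in miniature: score an exploit scenario (step 2),
    determine the risk level (step 3), and compare it against the
    acceptance criterion (step 4)."""
    score = likelihood * impact
    return ACCEPTABLE if score <= 2 else MITIGATE

# Step 1 supplies the scenario, e.g. an unauthenticated debug port
# that an attacker is likely to find and that exposes patient data.
print(risk_level(likelihood=3, impact=3))
```

The value of writing it down this explicitly is traceability: each identified threat maps to a score, a decision, and, where required, a verified mitigation.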

The guidance document also describes a framework for cybersecurity core activities, referring to “Framework for Improving Critical Infrastructure Cybersecurity”, V1.1, published by the National Institute of Standards and Technology (NIST) in 2018. The features of the framework are discussed below:

  1. Identify – as above, evaluate the possible attack vectors early in your system architecture, e.g. debug ports or other connections.
  2. Protect – where possible, mitigate by modifying the system architecture. For example, use an internal debug port as opposed to an accessible debug port, discussed below. For those attack vectors which cannot be protected via architecture changes, implement mitigations which make it more difficult for an attacker to compromise the system, such as a maximum number of login retries before a timed lockout. Consider how to protect data – both encryption in transit and encryption at rest. Also consider layered approaches – limited access, multi-factor authentication, and varying user privileges or roles.
  3. Detect – consider implementing features which allow hacking attempts to be detected, such as login audit trails. Even simple devices can have a tamper-detect or case-open switch.
  4. Respond – as discussed above for IEC 60601 and essential performance, implement features which will continue to provide critical functionality.
  5. Recover – provide methods for restoring normal device operation by an authenticated user with appropriate privileges. One option is the ‘Factory Reset’ button.

Beyond the US market, in the EU, the Medical Device Coordination Group published MDCG 2019-16 Guidance on Cybersecurity for medical devices initially in 2019, with an update to Rev.1 in July 2020.

At 46 pages, this guidance goes into more depth than the current nine-page FDA 2014 guidance, providing specific details and methodologies for ensuring safety and security by design.

This guidance also links back to the General Safety and Performance Requirements (GSPR) listed in both the Medical Device Regulation (MDR) 2017/745 and the In Vitro Diagnostic Regulation (IVDR) 2017/746. The GSPR are the essential means of showing that your device is safe, effective, and compliant with these regulations.

Another useful element of the EU guidance is an outline of the cybersecurity responsibilities of the different stakeholders. Whilst we can do all we can to design a safe and secure product, once it is in the hands of the user, they have their own part to play in ensuring that cybersecurity is maintained.

Health Canada has also published its own guidance on cybersecurity: Pre-market Requirements for Medical Device Cybersecurity (2019). This supports similar practices to those above and likewise links back to the NIST framework.

One particularly helpful aspect of this guidance is Appendix B, which gives illustrated explanations of how cybersecurity risks can interact with ISO 14971 patient safety risks, both in the determination of potential hazardous situations and in the consideration of the influence of risk controls. I would recommend reviewing this guidance even if your product is not intended for the Canadian market.

Be Pragmatic

Engineers usually make pragmatic decisions when developing a system architecture, but a recognition of the security issues above can make this much more powerful. This works best when not presented as a regulatory issue, but as a technical challenge – challenge your team to hack their own products.

For example, Bluetooth LE includes a relatively well-known encryption feature, and a lesser-known Privacy feature which changes a device’s address over time. Using a chipset which supports the Privacy feature would reduce an attacker’s ability to sniff data.

Another example is root access. While it might be convenient to have a development debug port on the back of the machine, this is a prime attack vector. Hence, debug ports should at a minimum be disabled in production firmware and covered/hidden, but preferably should not be exposed outside the enclosure, or even populated. Root access on a range of Android phones has been obtained by accessing a ‘hidden’ debug port which was brought out on the USB port via a USB multiplexor.

Hardcoded backdoor passwords present another gap in a device’s security. A long time ago, an Internet computer was sold very cheaply. This device employed a proprietary OS which dialed (dialup shows just how long ago) a certain subscription-based internet provider, and popup ads were displayed while you surfed. The device was a regular PC with a hardcoded BIOS password – enter the password, install Windows 95, and you had a medium-performance PC for the price of a family trip to the cinema. This shows that hardcoded passwords should never be used, as they will almost certainly be discovered.

Firmware updates are another popular attack vector. To mitigate this, device developers must ensure only signed, authorized firmware can be installed, and only authenticated users with appropriate privileges can install firmware updates. Is it a good idea to enable over-the-air firmware updates, or should only a hardwired connection permit an update? Also consider software roll-backs: do you want the potential for security patches to be undone?
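Both checks, signature verification and roll-back refusal, can be sketched together. This is a minimal illustration using a shared-secret HMAC from the Python standard library to keep the example self-contained; a real device should verify an asymmetric signature so that the key stored on the device can verify images but never sign them. All names and the key value here are assumptions:

```python
import hashlib
import hmac

# Placeholder shared secret for the sketch only. A production device
# would hold a public verification key, never a secret that can also
# sign images (and certainly never a hardcoded one, per the section above).
DEVICE_KEY = b"provisioned-per-device-key"

def sign_image(image: bytes, version: int, key: bytes = DEVICE_KEY) -> bytes:
    """Factory side: bind the version number into the signed payload so
    an old, validly signed image cannot be replayed as a downgrade."""
    payload = version.to_bytes(4, "big") + image
    return hmac.new(key, payload, hashlib.sha256).digest()

def accept_update(image: bytes, version: int, signature: bytes,
                  installed_version: int, key: bytes = DEVICE_KEY) -> bool:
    """Device side: verify the signature, then refuse downgrades."""
    expected = hmac.new(key, version.to_bytes(4, "big") + image,
                        hashlib.sha256).digest()
    if not hmac.compare_digest(expected, signature):
        return False  # unsigned or tampered image
    if version <= installed_version:
        return False  # roll-back refused: security patches stay applied
    return True
```

Because the version is inside the signed payload, an attacker cannot take a valid signature from version 5 and attach it to a repackaged version 3.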

Security Assurance

The approach described above constitutes the bare minimum for how a developer might manage cyber security when developing medical devices. However, the de facto cyber security standard is ISO/IEC 15408-1:2009, Information technology — Security techniques — Evaluation criteria for IT security — Part 1: Introduction and general model, which introduces the Common Criteria framework. Like product requirements, the Common Criteria allow manufacturers to specify their security and assurance requirements using Protection Profiles (PPs), permitting testing laboratories to evaluate and determine whether the security claims have been met. This may offer a marketing advantage – which would you buy, a medical device that has been certified security tested or one that has not? Developing Protection Profiles and assurance testing documentation is left as a topic for another blog.

Post-market Controls & Maintenance Plans for Medical Device Cyber Security

So you have submitted your device for approval by the relevant regulatory body and it has been cleared for market. Now what? Approval was granted on the basis of the information provided in the submission, which reflected the status of the software and known vulnerabilities at that time. However, this is not static knowledge; understanding of cybersecurity is always being updated. The FDA regularly publishes notifications of cybersecurity alerts from medical device manufacturers and encourages others to learn from them. In its more recent draft guidances, it has also suggested that manufacturers make use of services such as the National Vulnerability Database (NVD) to stay aware of new information.

Once you become aware of cybersecurity or data protection issues which could impact your product, you need to make changes to your device and/or software to maintain safety and security. Having a software maintenance plan or vulnerability management plan will help set expectations so that everyone knows what they need to do.
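The routine core of such a plan is cross-referencing your software bill of materials (SBOM) against incoming advisories. A minimal sketch, with hard-coded hypothetical data in place of a real NVD or vendor feed (component names, versions, and CVE identifiers here are all placeholders):

```python
# Hypothetical SBOM: component name -> version shipped in the device.
sbom = {"openssl": "1.1.1k", "zlib": "1.2.11", "busybox": "1.33.0"}

# Hypothetical advisory feed entries; a real plan would pull these
# from the NVD or vendor security bulletins on a defined schedule.
advisories = [
    {"id": "CVE-XXXX-0001", "component": "openssl",
     "affected": {"1.1.1k", "1.1.1l"}},
    {"id": "CVE-XXXX-0002", "component": "left-pad",
     "affected": {"1.0.0"}},
]

def triage(sbom: dict, advisories: list) -> list:
    """Return the advisory IDs whose affected versions match our SBOM,
    i.e. the items the vulnerability management plan must act on."""
    return [a["id"] for a in advisories
            if sbom.get(a["component"]) in a["affected"]]

print(triage(sbom, advisories))
```

Advisories for components you do not ship drop out automatically, which is most of the value of maintaining an accurate SBOM in the first place.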

Assuming your device qualifies for a 510(k), the FDA does not require a new 510(k) submission for software updates which remove security vulnerabilities, provided the intended use, features, and performance remain the same as in the original submission. You should, of course, be following the Design Controls process for design changes required by 21 CFR 820.30(i), even for FDA Class I devices. You should be reviewing, verifying, and validating as you go, with appropriate levels of documentation and notes to file justifying the actions.

In addition, there is an argument that it is a manufacturer’s responsibility to be vigilant and make security updates, as code with known vulnerabilities could be considered putrid, which is forbidden under the Federal Food, Drug & Cosmetic Act (21 U.S.C. § 351). Code rot has regulatory consequences.

About the author: Helen Simons is a Senior Quality Assurance and Regulatory Affairs Specialist at StarFish Medical. Helen’s education is in mechanical engineering, with a background in product and QMS development across multiple industries, ranging from consumer and industrial products to medical devices, IVDs, and combination devices. Software updates contributed by Thor Tronrud.