
Ensuring privacy and security in smart medical devices

By Ana Fernandes, Sharad Patel, Radhika Bogahapitiya

Med Device Online

10 September 2024

Pressures on healthcare systems are set to worsen, with aging populations requiring round-the-clock care and an exponential rise in chronic diseases.

Technology-led smart solutions are becoming the lifeline in balancing cost reduction and improved efficiencies with better patient outcomes. We found that by 2030, the global market for at-home care will be worth $390.4 billion – an increase of $70 billion from today.

To this end, smart medical devices and at-home care have gathered significant momentum in recent years, as they allow healthcare providers to remotely monitor patients’ health and provide higher-quality, personalized care whilst benefiting from improved margins. While the opportunity is exciting, integrating these devices into healthcare systems is not without drawbacks, with patient trust and regulatory compliance high on the agenda. In this article, we explore how organizations can build privacy, security, and ethics by design into medical devices.

Many technology components make up smart medical devices. Embedding security, privacy, and ethics across the entire life cycle of the device, from ideation to end of use, presents multiple challenges:

  • A globally divergent and polarized set of medical device, privacy, security, and AI regulations is raising operational, financial, and compliance issues. For example, global players need to navigate regional regulations such as the General Data Protection Regulation, local or state-level regulations such as the California Consumer Privacy Act, and sectoral requirements such as the Health Insurance Portability and Accountability Act and Section 524B of the Federal Food, Drug, and Cosmetic Act in the U.S.
  • A multifaceted security landscape includes threat actors who continue to target and exploit vulnerabilities in devices with increasing intensity. Naturally, this raises questions around confidentiality (encryption during transmission), integrity (reliability of the device vs. an in-person visit), and availability (failed data transmissions).
  • The use of AI to enhance device and analytics capabilities is raising ethical concerns around bias, stigmatization, and discrimination, for example through non-transparent disclosure of sensitive data to manufacturers, healthcare professionals, or insurers.

Increasingly, we have seen organizations struggling to get ahead of these challenges due to:

  1. The lack of a fit-for-purpose, enterprise-wide privacy, security, or AI framework that can be embedded in the development process.
  2. Organizational silos that prevent stakeholders from collaborating to deliver trusted outcomes.
  3. A lack of understanding of what needs to be done at each stage of the device life cycle.

Successfully navigating these challenges creates opportunities for smart medical device manufacturers to stand out from the crowd while delivering better patient outcomes, building trust across the value chain, and reducing the strain on healthcare systems.

So, what can be done in practice?

  1. Co-create and embed a trust-by-design framework that strikes a balance between low-level prescriptive and high-level principle-based business requirements, by:
    • bringing together functions such as product management, R&D, software development, and regulatory/quality to co-create and co-own requirements, reflecting their perspectives;
    • considering trade-offs between globally applicable requirements and flexibility for local needs, while maintaining traceability across regulations, case law, and industry standards;
    • embedding governance, review, and assurance across the life cycle of devices and surrounding components to enforce usage and validation of how product teams meet key requirements.
  2. Set up integrated teams that break down organizational silos to deliver trusted security, privacy, and ethical outcomes by:
    • mapping stakeholder objectives (e.g., marketing teams looking to comply with consent requirements while using preferences to market personalized content) and validating perspectives to drive engagement from the outset;
    • challenging the composition of product delivery teams to include not only engineers but also regulatory, quality, risk, security, and privacy practitioners; and
    • creating training and easy-to-use guidance or operating procedures that resonate with stakeholders and speak to real-life scenarios to enable engagement. Approaches such as prompt cards embedded across the product development life cycle often support a mindset shift.
  3. Use well-designed processes and GRC tooling to simplify the compliance activities that need to be delivered by product teams, by:
    • standardizing compliance assessments across similar disciplines, such as security and privacy;
    • integrating GRC tools managed by different functions to enable cross-pollination of compliance assessments/responses and to reduce duplication (e.g., responses provided to a security assessment used for privacy assessments);
    • embedding smart workflows and logic so that assessments are dynamic, with questions shown or hidden based on earlier inputs (a minimal sketch of this logic follows this list); and
    • using reports and dashboards so that risk remediation/control efforts can be prioritized.
  4. Leverage key stages of the product life cycle to enable privacy, security, and ethics:
    1. Concept generation – Include objectives around these disciplines in business cases and iterate on them over time;
    2. Proof of concept – Validate minimum control requirements early on, using the trust-by-design framework, so that feasibility is considered up front;
    3. Integrated development – Consistently engage with experts to seek advice, validate the design of controls, and review compensating controls;
    4. Testing and scale-up – Dedicate effort to privacy, security, and ethics controls testing so that the operating effectiveness of key controls is validated before go-live or scale-up;
    5. Post-market surveillance – Embed processes to identify event trends, prioritize and implement incremental updates to remediate vulnerabilities, and improve product security/privacy features through product increments; and
    6. End of life – Give patients control to delete their data, for example through privacy trust centers that are widely used across the technology and consumer industries, before devices are returned and/or supporting applications are decommissioned (a simple deletion-flow sketch also follows this list).
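
To make point 3 concrete, below is a minimal sketch, in Python, of the kind of dynamic assessment logic and response reuse described above. The question IDs, conditions, and shared-answer store are hypothetical illustrations rather than the schema of any particular GRC product.

    # Hypothetical dynamic compliance assessment: questions appear only when
    # earlier answers make them relevant, and responses captured in one
    # assessment (e.g., security) are reused in another (e.g., privacy).
    SHARED_ANSWERS = {}  # responses shared across assessments

    QUESTIONS = [
        {"id": "data_types",
         "text": "Which categories of data does the device process?",
         "applies": lambda answers: True},
        {"id": "health_data_storage",
         "text": "Where is health data stored at rest?",
         "applies": lambda answers: "health" in answers.get("data_types", "")},
        {"id": "encryption_in_transit",
         "text": "How is data encrypted during transmission?",
         "applies": lambda answers: bool(answers.get("data_types"))},
    ]

    def run_assessment(collect_answer):
        """Ask only relevant, not-yet-answered questions and record the results."""
        answers = dict(SHARED_ANSWERS)  # cross-pollinate prior responses
        for question in QUESTIONS:
            if question["id"] in answers:
                continue  # already answered in another assessment
            if not question["applies"](answers):
                continue  # smart workflow: skip questions that do not apply
            answers[question["id"]] = collect_answer(question["text"])
        SHARED_ANSWERS.update(answers)  # make responses available to later assessments
        return answers

In practice, running the security assessment first (for example, run_assessment(input) at a console) leaves its responses in the shared store, so a subsequent privacy assessment only asks what has not already been answered.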
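
Similarly, for the end-of-life stage in point 4, the sketch below illustrates, under assumed store names and an assumed erase callback, how a deletion request raised through a privacy trust center might be fulfilled and evidenced. A real implementation would also need to check retention obligations before erasing records.

    from datetime import datetime, timezone

    # Hypothetical systems that still hold the patient's data once the device is returned.
    DATA_STORES = ["device_telemetry", "companion_app_profile", "analytics_exports"]

    def handle_deletion_request(patient_id, erase):
        """Erase the patient's records from each store and keep an audit trail
        so the deletion can be evidenced to providers and regulators."""
        audit_trail = []
        for store in DATA_STORES:
            erase(store, patient_id)  # callback that performs the actual erasure
            audit_trail.append({
                "store": store,
                "patient": patient_id,
                "deleted_at": datetime.now(timezone.utc).isoformat(),
            })
        return audit_trail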

There are many considerations when building and maintaining trust in smart medical devices. For the most part, the organizations that operate in this space will continue to see a rapid evolution of technology trends as well as user needs that increasingly drive requirements around security, privacy, and ethics. So far, manufacturers have proactively published security white papers and explained security controls through mechanisms such as MDS2 forms, a template for manufacturers to describe the key security controls of their devices. Going forward, these will need to extend to privacy and ethical considerations, as it will soon become an operating imperative for smart medical device manufacturers to overtly demonstrate compliance and trust to patients, healthcare providers, and governments.

This article was originally published in Med Device Online.
