Medical Device Cybersecurity 2026: The MITRE Report Decoded

Interlynk


In April 2026, MITRE released a report that deserves more attention than it's getting in most compliance conversations: Cybersecurity Risk Analysis for Medical Devices in the Era of Evolving Technologies. Produced for the U.S. government and drawing on interviews with medical device manufacturers (MDMs), healthcare delivery organizations (HDOs), cybersecurity vendors, and regulatory consultants, it maps three technology domains — cloud computing, AI/ML, and post-quantum cryptography — against the cybersecurity realities facing medical device manufacturers today.

The report is not a checklist. It is a risk landscape. And reading it carefully, one theme surfaces repeatedly, even when the authors don't use the words directly: the medical device supply chain is now the attack surface.

This post breaks down what the report actually says, where it points MDMs toward action, and why the organizations that treat their software bill of materials as a living security instrument — not a regulatory deliverable — are the ones best positioned for what's coming.

The Central Shift: Shared Responsibility Across the Ecosystem

For decades, the cybersecurity model for medical devices was relatively clean. An MDM built and sold the device; an HDO operated it on a network they controlled. Responsibilities were bounded.

That model is gone.

MITRE's report documents what practitioners have been experiencing firsthand: the modern medical device is increasingly a distributed system. Its essential functionality may depend on cloud infrastructure managed by a third-party provider, AI/ML models trained on data pipelines the MDM doesn't fully own, and cryptographic controls that will be obsolete within a decade. Meanwhile, the device itself may be running outside the hospital entirely — in a patient's home, managed by no one with security expertise.

The report puts it plainly: cybersecurity risk management of medical devices has always been a shared responsibility between MDMs and HDOs, but additional third parties are now part of the equation when defining roles and responsibilities.

What this means in practice: the attack surface is no longer the device. It's the ecosystem the device depends on.

Cloud in Medical Devices: Risk Categories

Cloud adoption in medical device design has accelerated, driven partly by the compute demands of AI/ML and partly by the economics of SaaS delivery. MITRE's report is candid about the risks this introduces.

When a medical device's essential functionality lives in the cloud — whether the MDM is consuming IaaS, PaaS, or SaaS from a third-party provider — the MDM cannot fully own the security posture of that infrastructure. Cloud service providers are not regulated under the same frameworks that govern medical devices. Their outages, their misconfiguration events, and their breach exposure all become the MDM's patient safety problem.

The Elekta ransomware attack is cited directly in the report as a proof point: a single cloud-based service disruption affected cancer treatment at over 170 facilities simultaneously. This is the blast-radius problem that on-premise architectures rarely created at that scale.

The report identifies three categories of mitigation for cloud risk:

  • Policies and processes. Service Level Agreements between MDMs and cloud service providers must define security expectations with specificity — not just uptime SLAs. ISO 13485:2016 purchasing controls (clauses 7.4.1–7.4.3) provide a framework for requiring suppliers, including cloud providers, to maintain adequate contingency plans.

  • Resilient architecture and design. This is where the SBOM conversation becomes unavoidable. MITRE's report states directly that SBOMs for cloud-based medical devices must include all cloud components — virtual machines, containers, every layer of the container image, machine images, and cloud-native services. An SBOM that stops at the application code boundary is incomplete for a cloud-connected device. It tells you what you shipped. It doesn't tell you what your device depends on at runtime.

  • Preparedness and response. Multi-region deployment, local caching as a fallback architecture, and backup strategies that account for cyberattack scenarios rather than just hardware failure.

The operational implication: MDMs need to know what's in their cloud stack with the same precision they bring to hardware component documentation. Most don't today.
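To make that expectation concrete, here is a minimal sketch of what layer-aware SBOM tooling has to do: group components by the container layer that introduced them, so a base-image CVE can be traced to every image that inherits it. The SBOM excerpt, component names, and the `container:layer` property key are illustrative, not a fixed CycloneDX schema.

```python
import json

# Hypothetical excerpt of a CycloneDX-style SBOM for a cloud-deployed
# device service. Names and the "container:layer" property are
# illustrative, not a real spec requirement.
sbom_json = """
{
  "components": [
    {"name": "debian-base", "version": "12.5",
     "properties": [{"name": "container:layer", "value": "0"}]},
    {"name": "openssl", "version": "3.0.13",
     "properties": [{"name": "container:layer", "value": "1"}]},
    {"name": "device-app", "version": "4.2.0",
     "properties": [{"name": "container:layer", "value": "2"}]}
  ]
}
"""

def components_by_layer(sbom: dict) -> dict:
    """Group SBOM components by the container layer that introduced them."""
    layers: dict = {}
    for comp in sbom.get("components", []):
        layer = next((p["value"] for p in comp.get("properties", [])
                      if p["name"] == "container:layer"), "unknown")
        layers.setdefault(layer, []).append(f'{comp["name"]}@{comp["version"]}')
    return layers

inventory = components_by_layer(json.loads(sbom_json))
for layer, comps in sorted(inventory.items()):
    print(f"layer {layer}: {', '.join(comps)}")
```

The point of the grouping is triage: when the next base-image vulnerability lands, the question "which of our cleared devices ship that layer?" should be a lookup, not a research project.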

AI/ML in Medical Devices: The Black Box as a Patient Safety Issue

As of September 2025, the FDA's database of AI-enabled medical devices had grown to 1,357 entries — the overwhelming majority concentrated in radiology (76%) and cardiovascular (9.5%). These are not experimental technologies anymore. They are clearing pathways and reaching patients.

MITRE's report is notably restrained in its enthusiasm for AI/ML in medical contexts. Its interviews found that it is not generally clear to MDMs and HDOs where AI/ML would provide benefits that outweigh the risks it may introduce. That is a striking statement from a research organization, and it reflects what clinicians have been saying for years: the trust deficit is real.

The cybersecurity dimensions of that trust deficit are documented in detail.

Training data integrity is a supply chain problem. If an adversary can poison the data at any stage of the AI/ML lifecycle — raw collection, labeling, training, validation — the resulting model carries that corruption into production. This is functionally identical to a software supply chain attack. You ship the compromise with the product. Membership inference attacks can even be used to extract information about training data subjects, creating HIPAA exposure alongside the safety risk.
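A first line of defense is treating training data like any other supply chain artifact: baseline it, hash it, and re-verify before every training run. Here is a minimal sketch in Python; the file names are illustrative and a production pipeline would sign and store the manifest out-of-band.

```python
import hashlib
import tempfile
from pathlib import Path

def dataset_manifest(root: Path) -> dict:
    """Map each training-data file to its SHA-256 content digest."""
    return {
        str(p.relative_to(root)): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in sorted(root.rglob("*")) if p.is_file()
    }

def verify(root: Path, manifest: dict) -> list:
    """Return the files whose contents no longer match the baseline manifest."""
    current = dataset_manifest(root)
    return [f for f, digest in manifest.items() if current.get(f) != digest]

# Demo against a throwaway directory (file contents are illustrative).
root = Path(tempfile.mkdtemp())
(root / "scans.csv").write_text("id,label\n1,benign\n")
baseline = dataset_manifest(root)
(root / "scans.csv").write_text("id,label\n1,malignant\n")  # simulated tampering
print(verify(root, baseline))  # the modified file is flagged
```

This does not detect poisoning at collection time, but it does make post-collection tampering with the labeled corpus detectable before it reaches a training run.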

The black-box problem breaks traditional security assurance. Software security analysis depends on traceability: you can step through code, observe variable states, follow a stack trace, reproduce a failure. With AI/ML, the underlying behavior is typically a black box in which results can vary with each run. This is not a solvable problem with today's tooling — it's an inherent property of the technology. MITRE is direct: there cannot be complete confidence that a solution will work 100% of the time.

Adaptive mode creates continuous exposure. MDMs choosing adaptive AI/ML models — where the model continues learning in production — inherit a permanent attack surface: the data pipeline. Poisoned inputs can degrade model performance over time in ways that may not be immediately detectable, particularly if the adversary is patient.

Locked mode creates version-control clarity, but rigidity. Static, versioned models offer more predictable security properties and clearer auditability. The tradeoff is that updates require deliberate retraining cycles, which may lag the clinical reality the model needs to reflect.

The report's recommended mitigations — secured learning environments, guardrails with adversarial robustness testing, threat modeling that explicitly covers AI/ML components, Least Privilege architecture for AI subsystems — are all sound. But they require treating AI/ML components as first-class software artifacts with their own lineage, versioning, and dependency tracking. Which is, again, an SBOM problem.
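What "first-class software artifacts with their own lineage" might look like in practice: a minimal manifest that ties a model version to a content digest and to the digests of the datasets it was trained on, plus a locked-mode check that refuses to load anything but the pinned weights. The record fields and names below are illustrative, not a standard format.

```python
import hashlib
import json

def artifact_record(name: str, version: str, payload: bytes,
                    upstream: list) -> dict:
    """A minimal lineage entry: identity, content digest, provenance."""
    return {
        "name": name,
        "version": version,
        "sha256": hashlib.sha256(payload).hexdigest(),
        "built_from": upstream,  # digests of datasets / parent models
    }

# Illustrative pipeline: training dataset -> trained model weights.
dataset = artifact_record("oncology-train-set", "2026.03", b"<dataset bytes>", [])
model = artifact_record("triage-model", "1.4.0", b"<weights bytes>",
                        [dataset["sha256"]])

def check_locked(expected_sha256: str, payload: bytes) -> bool:
    """Locked-mode gate: deploy only the exact pinned weights."""
    return hashlib.sha256(payload).hexdigest() == expected_sha256

print(json.dumps(model, indent=2))
```

The `built_from` chain is what turns an incident response question ("was this model trained on the poisoned batch?") into a digest lookup instead of an archaeology exercise.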

Post-Quantum Cryptography: The Threat With a Deadline

Of the three technology areas the report covers, post-quantum cryptography (PQC) is the one with the most defined regulatory trajectory and the least industry urgency — a combination that historically produces bad outcomes.

Here is the situation plainly. Quantum computers capable of running Shor's algorithm at scale — what NIST calls a Cryptanalytically Relevant Quantum Computer (CRQC) — would render RSA, Diffie-Hellman, and Elliptic Curve Cryptography mathematically broken. Every medical device secured by those algorithms today would become vulnerable to decryption.

The "harvest now, decrypt later" attack is already underway at nation-state levels. Adversaries don't need a CRQC today. They need only capture and store encrypted traffic — device communications, patient data, proprietary telemetry — until quantum compute capability catches up. For medical devices with 10–15 year deployment lifespans, data transmitted today may be decryptable within the device's operational lifetime.

NIST published its first final post-quantum algorithm standards (FIPS 203, 204, and 205) in August 2024. NIST has announced plans to categorize all CRQC-vulnerable asymmetric algorithms as "Disallowed" by 2035. NSA has set a goal of transitioning national security applications to CNSA 2.0 (an all-post-quantum algorithm suite) by December 31, 2031.

The medical device industry's problem is structural. As MITRE's report notes, if an implantable medical device cannot be reprogrammed without physical access to the patient, cryptographic migration would require a medical procedure. That is not a software patch. That is a clinical decision, which means the transition timeline for some device categories is measured not in sprints but in device generations.

Practical actions the report recommends:

  • Conduct a cryptographic inventory across all devices and systems, identifying which use quantum-vulnerable asymmetric cryptography controls

  • Prioritize devices by risk profile and timeline exposure

  • Build crypto-agility into new device designs — the ability to swap cryptographic implementations without redesigning the device

  • Coordinate PQC migration planning with HDO partners, since legacy interoperability creates persistent risk even after new devices are updated
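The crypto-agility bullet is worth making concrete. The core pattern is indirection: each protected message carries an algorithm identifier, and the device dispatches through a registry, so replacing an algorithm is a registry update delivered by firmware rather than a redesign of the message path. A toy sketch using HMAC stand-ins; the identifiers and algorithms are illustrative, not members of any real PQC suite.

```python
import hashlib
import hmac

# Registry of MAC algorithms keyed by identifier. A real device would
# register its actual signature/KEM implementations here.
MAC_REGISTRY = {
    "hmac-sha256": lambda key, msg: hmac.new(key, msg, hashlib.sha256).digest(),
    "hmac-sha3-256": lambda key, msg: hmac.new(key, msg, hashlib.sha3_256).digest(),
}

def protect(alg_id: str, key: bytes, msg: bytes) -> dict:
    """Tag a message and record which algorithm produced the tag."""
    tag = MAC_REGISTRY[alg_id](key, msg)
    return {"alg": alg_id, "msg": msg, "tag": tag}

def verify(envelope: dict, key: bytes) -> bool:
    # Dispatch on the recorded identifier: swapping algorithms means
    # updating the registry, not this code path.
    expected = MAC_REGISTRY[envelope["alg"]](key, envelope["msg"])
    return hmac.compare_digest(expected, envelope["tag"])

env = protect("hmac-sha256", b"device-key", b"telemetry")
print(verify(env, b"device-key"))  # True
```

A device built this way can accept a post-quantum algorithm as one more registry entry; a device with RSA calls inlined throughout its firmware cannot.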

The report notes that automated cryptographic discovery and inventory (ACDI) tools exist but are oriented toward general enterprise IT — they are not validated for specialized medical device environments. MDMs cannot simply run an enterprise scanning tool and call the work done.
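Even without validated ACDI tooling, a cryptographic inventory can start with something as crude as pattern-matching source and configuration files for quantum-vulnerable algorithm names. A toy first pass follows; the pattern list is illustrative and nowhere near exhaustive, and a real inventory would also have to cover binaries, certificates, and protocol configurations.

```python
import re

# Quantum-vulnerable asymmetric primitives to flag (illustrative subset).
VULNERABLE = re.compile(
    r"\b(RSA|ECDSA|ECDH|Diffie[- ]Hellman|P-256|P-384)\b",
    re.IGNORECASE,
)

def scan_source(text: str, path: str = "<memory>") -> list:
    """Return (path, line_no, match) tuples for each flagged algorithm name."""
    hits = []
    for no, line in enumerate(text.splitlines(), start=1):
        for m in VULNERABLE.finditer(line):
            hits.append((path, no, m.group(0)))
    return hits

# Hypothetical firmware source fragment.
sample = "key = RSA.generate(2048)\ncurve = 'P-256'\ndigest = sha256(data)"
for path, no, alg in scan_source(sample, "firmware/crypto.c"):
    print(f"{path}:{no}: {alg}")
```

A grep-level scan produces false positives and misses plenty, but it turns "we should inventory someday" into a ranked worklist this quarter, which is the step most portfolios have not taken.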

What the Report Expects From Your SBOM Practice

MITRE mentions SBOMs explicitly and specifically. The language is worth quoting in context: SBOMs for cloud-based medical devices would include all cloud components, such as virtual machines, containers and all the layers in the container image, the machine image, and the cloud-native services.

This is a materially different expectation than what most MDMs currently produce. The FDA's existing SBOM guidance focuses on software components in the device itself. MITRE is describing an SBOM that captures the full operational dependency graph — including runtime infrastructure that may be managed by a third party and that may change without the MDM's direct action.

For AI/ML-enabled devices, the implication extends further. A complete SBOM for an AI/ML medical device would need to encompass training data provenance, model versioning, inference infrastructure, and the data pipeline components through which new training data flows. None of this fits neatly into current SBOM format specifications, which were designed for software packages, not for data assets or model weights.

This is not a criticism of the current SBOM ecosystem. It is a description of where the work needs to go next.

10 Best Practices Decoded From the Report

The MITRE report is a risk analysis, not a prescription. But its content points clearly toward specific operational practices. These are not aspirational — they are what the report's risk findings demand.

  1. Map your cloud dependency graph, not just your code dependencies. Identify every cloud service your device touches during operation, including services used only during development pipelines (CI/CD infrastructure, model training environments). A supply chain attack on your training pipeline is a device integrity problem.

  2. Extend your SBOM to include container layers and cloud-native components. A container image is not a single artifact — it is a stack of layers, each carrying its own CVE exposure. Your SBOM needs to reflect that structure.

  3. Conduct contractual security reviews of every CSP relationship. Cloud providers are not regulated as medical device supply chain vendors, but they are functionally operating as such. Your ISO 13485 purchasing controls need to reach them.

  4. Build and test your offline degradation mode. If your cloud dependency becomes unavailable, what happens to the patient? Designing for graceful degradation is not optional for devices providing essential clinical functionality.

  5. Treat AI/ML model versions the same way you treat software releases. Model weights are code. Training hyperparameters are configuration. Data provenance is supply chain lineage. Version, sign, and audit them accordingly.

  6. Separate your training and test datasets with the same rigor you apply to production data access controls. Data contamination (where training data leaks into validation sets, or vice versa) is both a model quality failure and a potential security indicator.

  7. Apply threat modeling to AI/ML components explicitly. MITRE ATLAS — the Adversarial Threat Landscape for Artificial-Intelligence Systems — provides the adversary framework that ATT&CK provides for traditional software. Use it.

  8. Inventory every asymmetric cryptographic control across your device portfolio. RSA, Diffie-Hellman, ECC — document where they appear, in what context, and what the migration path is. Do this before 2027, not in response to a regulatory requirement in 2034.

  9. Design new devices for crypto-agility from the start. The devices you are building today will still be in clinical use when NIST's 2035 "Disallowed" deadline arrives. If the cryptographic implementation cannot be updated without a hardware replacement, you are building a future recall problem.

  10. Define your PQC migration roadmap in coordination with your HDO customers. A device with PQC controls that interoperates with HDO legacy infrastructure that lacks those controls still carries quantum risk. Migration is a system-level problem, not a device-level problem.
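Practice 4 deserves a sketch, because the pattern is simple but frequently skipped: every cloud call needs an explicit degraded path, and the degradation should be visible to the clinician rather than silent. A minimal illustration with a hypothetical cloud scoring service; the function names and cache shape are ours, not from the report.

```python
import time

class CloudUnavailable(Exception):
    pass

def cloud_score(reading: float) -> float:
    """Hypothetical cloud inference call; here it simulates an outage."""
    raise CloudUnavailable("upstream service unreachable")

LOCAL_CACHE: dict = {}

def score(reading: float) -> tuple:
    """Prefer the cloud result; degrade to the last known-good local value.

    Returns (value, source) so the operator can see when the device is
    running degraded instead of silently serving stale data.
    """
    try:
        value = cloud_score(reading)
        LOCAL_CACHE[reading] = (value, time.time())
        return value, "cloud"
    except CloudUnavailable:
        if reading in LOCAL_CACHE:
            return LOCAL_CACHE[reading][0], "cache (degraded)"
        return None, "unavailable (fail safe)"

LOCAL_CACHE[98.6] = (0.12, time.time())  # previously cached result
print(score(98.6))   # served from cache while the cloud is down
print(score(101.3))  # no cached value: the device must fail safe
```

The design choice that matters is the second tuple element: degraded operation that looks identical to normal operation is itself a patient safety hazard.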

Where the Industry Stands

The MITRE report is notable for what it does not claim. It does not say that MDMs are on top of these challenges. It documents what they face and what good practice looks like. The gap between the two is the real story.

Cloud-connected medical devices are being designed and cleared today by organizations that don't have a complete picture of their cloud dependency graphs. AI/ML components are entering production without formal threat models that account for adversarial inputs or training data poisoning. Cryptographic inventories that would take 18 months to act on haven't been started.

None of this is because MDMs are negligent. It's because these are genuinely hard problems, the regulatory frameworks lag the technology, and the tooling that would make compliance tractable doesn't yet exist for the medical device context specifically.

That last point is where the real work is happening. The organizations building that tooling — and the MDMs partnering with them early — will be the ones with clean migration paths when the regulatory deadlines solidify.

The MITRE report is a map. The question is whether you're using it to plan, or waiting until the terrain forces your hand.

Interlynk builds software supply chain security infrastructure for teams that take SBOM seriously as an operational discipline, not a regulatory checkbox. If you're working through what complete SBOM coverage looks like for cloud-connected or AI-enabled medical devices, we'd like to talk.

Request a demo | Read the MITRE report

Trusted by security and compliance teams at 100+ regulated companies

See your SBOM Done Right

Interlynk automates SBOMs, manages open source risks, monitors suppliers, and prepares you for the post-quantum era, all in one trusted platform.

