Security vulnerabilities have been found in several implantable devices, allowing for alterations in their operation.
“Many implantable devices, probably almost all, have some type of security vulnerability or potential vulnerability, or were not designed with security in mind,” said Bill Aerts, executive director of the Archimedes Center for Medical Device Security at the University of Michigan.
What makes them potentially vulnerable is that they need to communicate with systems outside the body.
An increasing number of electronic devices are being implanted in patients, and it is increasingly likely that they include some form of wireless connectivity. These devices communicate not only with hospital systems—allowing doctors to update them remotely and collect patient condition data—but also with electronics like smartphones, enabling patients to monitor their progress.
These connected devices, while beneficial, pose security risks that could, at least in theory, endanger patients. So far, despite numerous warnings about their vulnerabilities, no real-world attacks have been reported, and no patients are known to have been harmed.
Vulnerabilities
Security flaws have been discovered in various implantable devices, enabling modifications to their functionality, such as increasing or decreasing the insulin flow from a pump or adjusting a pacemaker’s rhythm.
The threat, though theoretical, gained prominence in 2013 when former U.S. Vice President Dick Cheney revealed that the wireless connectivity of his pacemaker had been disabled over fears it could be hacked and put his life at risk.
Last year, the U.S. Food and Drug Administration (FDA) warned users of Medtronic implantable cardioverter-defibrillators of a vulnerability that allowed unauthorized individuals to access and tamper with them. Similarly, in 2017, the FDA issued a warning about several pacemakers made by St. Jude Medical (now part of Abbott). The issue was addressed with a software update, but patients had to schedule a doctor’s appointment to install it.
Researchers have demonstrated, in principle, that implantable medical devices can be hacked, threatening patients in several ways. For instance, researchers have shown that malware can be installed remotely on a pacemaker, causing it to withhold necessary shocks or deliver unnecessary ones.
By interfering with wireless communications, hackers can send unauthorized commands to devices. Researchers typically notify manufacturers of vulnerabilities they discover, allowing companies to fix the issues before hackers can exploit them.
Security Challenges
Protecting implantable devices poses unique challenges. These devices often have limited memory and computing power, restricting security features. They typically remain implanted for years, leaving them susceptible to newer attack methods. Encryption to prevent interception is often absent, as it can reduce battery life. Additionally, devices frequently lack mechanisms to authenticate whether changes are being made by authorized medical professionals or malicious actors.
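The authentication gap described above is the core problem: the implant has no way to tell whether a command comes from a clinician's programmer or from an attacker's radio. A minimal sketch of one standard countermeasure is shown below, assuming a symmetric key shared between implant and programmer at manufacture time; this is an illustration of the general technique, not any vendor's actual protocol, and all names here are hypothetical.

```python
import hmac
import hashlib
import secrets

# Hypothetical shared secret provisioned into both the implant and the
# clinician's programmer at manufacture time (illustration only).
SHARED_KEY = secrets.token_bytes(32)

def sign_command(key: bytes, command: bytes, nonce: bytes) -> bytes:
    """Programmer side: tag the command with an HMAC over nonce + command."""
    return hmac.new(key, nonce + command, hashlib.sha256).digest()

def verify_command(key: bytes, command: bytes, nonce: bytes, tag: bytes) -> bool:
    """Implant side: recompute the tag and compare in constant time.
    The nonce, issued fresh by the implant, blocks replay of old commands."""
    expected = hmac.new(key, nonce + command, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

# The implant issues a fresh nonce for each programming session.
nonce = secrets.token_bytes(16)
command = b"SET_PACING_RATE:70"

tag = sign_command(SHARED_KEY, command, nonce)
print(verify_command(SHARED_KEY, command, nonce, tag))                    # genuine command
print(verify_command(SHARED_KEY, b"SET_PACING_RATE:180", nonce, tag))    # tampered command
```

The trade-off the paragraph describes is visible even in this sketch: each verification costs computation and radio round-trips for the nonce exchange, which is why battery- and memory-constrained implants have historically shipped without it.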
“There is a security problem with implantable devices, and it has been demonstrated in various cases. But it’s hard to quantify how big it is because, for hackers, it’s difficult to monetize,” said Kevin Curran, a cybersecurity professor at the University of Ulster.
Traditional models that criminals use to profit from security vulnerabilities do not work with medical devices, he said. Malware creators typically earn money in two ways: extorting victims or infecting computers to mine cryptocurrencies for sale.
Medical devices lack the processing power for cryptocurrency mining, and issuing threats is challenging due to design limitations (e.g., no screens or significant internal memory). “It’s hard to see how this could become a large-scale money-making exercise,” Curran added.
However, medical device manufacturers are increasingly prioritizing security. Jake Leach, CTO of Dexcom, which makes continuous glucose monitors, told BMJ: “As technologies change, so do hackers trying to break them. You must always improve your security.”
Unlike manufacturers of many other technologies, medical device companies often withhold security details, making it harder for healthcare professionals to assess devices’ strengths and weaknesses.
Increased Awareness and Regulation
The medical industry is becoming more aware of the need for encryption and authentication in implantable devices to prevent hackers from exploiting vulnerabilities in wireless communications.
The WannaCry ransomware attack of 2017, for example, unintentionally affected medical equipment. Hospitals were not the intended targets, but older, unpatched operating systems on hospital computers were disabled, and medical devices on the same networks became collateral damage.
Rising Risks and Awareness
Doctors are increasingly likely to face patient questions about device security. According to Rohin Francis, a cardiologist at University College London, “We’ll see more panic and discussions between patients and doctors about the dangers of implantable devices. There will be alarmists, and there will be those who downplay the risks. Patients and doctors will find it hard to determine the reality.”
Communicating with patients authoritatively can be challenging for doctors. A study last year found that only 2% of summaries for FDA-regulated implantable devices included cybersecurity information. The study’s authors noted that this “is concerning as it prevents patients and doctors from making fully informed decisions about the risks associated with the products they use.”
The FDA has published pre- and post-market guidelines for the cybersecurity of implantable devices, emphasizing that device security responsibility is shared among manufacturers, healthcare providers, and patients. It also requires cybersecurity to be incorporated into the design and development of medical hardware and mandates that manufacturers have risk management programs to address vulnerabilities before they are exploited.
In Europe, new regulations implemented in May 2020 addressed implantable devices, requiring that devices using software be manufactured “according to the state of the art” for security. Manufacturers must set minimum security standards, including preventing unauthorized access.
In the UK, a 2019 update to the Code of Conduct for data-driven health technology introduced 10 principles for health technology, requiring manufacturers to prioritize security during design.
Educating Patients
Bill Aerts emphasized that patient education about cybersecurity for medical devices should “be integrated throughout the support chain, for instance, nurses and doctors who must assist patients in maintaining their devices.”
“There should also be more public education and awareness campaigns,” he added. “This doesn’t mean the doctor shouldn’t have some surface-level knowledge, but the burden shouldn’t fall entirely on them.”
“Public education on security is necessary, and healthcare professionals need resources to respond to questions and provide training.”
A Case Study: Pacemaker Vulnerability
In August 2017, the FDA announced that 465,000 pacemakers manufactured by Abbott contained a security vulnerability. The pacemakers, from six brands acquired when Abbott purchased St. Jude Medical, were open to unauthorized access.
The flaw allowed unauthorized individuals (other than the patient’s physician) to send commands to the pacemaker, potentially draining its battery or delivering what the FDA described as “inappropriate pacing.”
Abbott released a firmware update to reduce the risk, requiring any device attempting to change the pacemaker's settings to prove it is authorized.
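A fix delivered as firmware raises its own question: how does the programmer know the image it is about to install is genuine? The sketch below shows the simplest form of that integrity check, comparing the image's digest against a trusted value published by the manufacturer. This is a hedged illustration under assumed names, not Abbott's actual mechanism; real deployments use digital signatures rather than a bare hash comparison.

```python
import hashlib

# Hypothetical trusted digest, assumed to be obtained from the manufacturer
# over a separate authenticated channel (illustration only).
TRUSTED_DIGEST = hashlib.sha256(b"firmware-image-v2.1").hexdigest()

def verify_firmware(image: bytes, trusted_digest: str) -> bool:
    """Accept the image only if its SHA-256 digest matches the trusted value."""
    return hashlib.sha256(image).hexdigest() == trusted_digest

print(verify_firmware(b"firmware-image-v2.1", TRUSTED_DIGEST))           # intact image
print(verify_firmware(b"firmware-image-v2.1-TAMPERED", TRUSTED_DIGEST))  # altered image
```

Because the implant runs in backup pacing mode during the transfer, the check must happen before installation begins; rejecting a corrupt image up front avoids leaving the device mid-update.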
Patients had to visit their doctors to install the firmware, which took about three minutes. During the update, the pacemaker operated in backup mode at 67 beats per minute. The FDA noted a very small risk (0.003%) of device failure or loss of programmed settings during the update.
Research showed that only 25% of patients scheduled for appointments after the software update opted to install it. Younger men and those with newer devices were more likely to choose the update. The study concluded that “most patients and doctors affected by a cybersecurity recall chose not to patch the software issue and continued using their pacemakers anyway.”