TRIMEDX Chief Information Officer Brad Jobe recently contributed an article to Medical Device News Magazine (July 3) and Healthcare Business Today (July 7) about the cybersecurity risks and benefits of artificial intelligence, and how health systems can use AI to empower their clinical engineering and cybersecurity teams. The full article is below.
Using artificial intelligence in clinical environments will change the cybersecurity landscape for health systems. While specific applications of AI can enhance health systems’ cybersecurity capabilities, others will raise new concerns over safety and privacy. To be successful, adopters of new technology must fully understand its risks and benefits while ensuring the technology is used to empower their clinical engineering and cybersecurity teams.
AI brings additional cybersecurity challenges
The use of AI brings about several new challenges executives must be ready to address. The Cybersecurity and Infrastructure Security Agency (CISA) has highlighted the risks posed by AI and is working with industry partners to help organizations secure their AI systems. As of May 2024, the U.S. Food and Drug Administration (FDA) has granted some form of authorization for use to 882 medical devices claiming to incorporate AI or machine learning.
AI within health systems will often require software to access patient data and communicate with other sources to pull or share information. Network connectivity in hospitals is already on the rise, with one survey finding 74% of healthcare organizations had more than half of their medical devices connected to the network. More and more devices will likely become connected in the future, in part to power machine learning. While connected devices can improve patient care, they also create more network access points, a larger attack surface, and potentially new vulnerabilities.
The amount of data needed for AI technology within health systems is also a risk factor, especially when that includes electronic protected health information (ePHI). The World Economic Forum has reported the average hospital generates over 50 petabytes of data annually. While AI’s ability to analyze that data has many potential benefits, it also carries risk. Medical devices and electronic health records are critical security points because of the amount of personal data they transmit and store. If a network were compromised, AI systems that access these records to aid in clinical operations could become an additional entry point for unauthorized ePHI access.
AI technologies may also bring new software vulnerabilities that need to be managed. For example, machine learning models could be manipulated to produce inaccurate results if compromised in an attack, such as through poisoned training data or adversarial inputs. If undetected, such breaches could create ongoing threats to patient safety.
While many organizations are using AI technologies to innovate and increase efficiency, bad actors are exploiting the same technology. Earlier this year, Microsoft said the “speed, scale, and sophistication of attacks has increased alongside the rapid development and adoption of AI.” Phishing emails are a particular area of concern for health systems and other organizations. Phishing is already one of the most prevalent cybercrimes: scam emails designed to trick users into revealing personal information or clicking on malicious links.
The Harvard Business Review (HBR) reports AI is increasing both the quality and quantity of phishing emails. Cybercriminals are using large language models (LLMs) to automate the entire phishing process, cutting the cost of an attack by more than 95% while achieving success rates equal to or greater than those of scams crafted by human experts. HBR expects phishing to grow more common and more sophisticated in the coming years.
Health systems should consider working with a trusted partner who is well-versed in these emerging and advancing threats and has the latest technology to strengthen the organization’s cyber resilience.
AI can help health systems guard against cyberattacks
Health systems and their cybersecurity partners must establish ways to identify, assess, prioritize, and address the risks new technologies present. A key obstacle to meeting this challenge is the capacity for both IT and clinical engineering teams to keep pace with the quickly evolving risk landscape. As with biomedical equipment technicians (BMETs), cybersecurity professionals are in high demand and short supply. A 2023 study estimated a shortage of 522,000 professionals in the cybersecurity workforce in North America alone.
As expanding cybersecurity needs outpace the growth of the workforce, AI can reduce the manual effort and turnaround time needed to execute a cybersecurity strategy. This allows teams to focus their time and resources where they will have the greatest impact.
Organizations can deploy tools with embedded AI algorithms to continuously monitor connected device inventories. Detecting and reporting anomalous behavior is far easier and faster with an always-on technology solution than with traditional monitoring methods. Beyond enabling monitoring at greater scale, AI can increase the agility of cybersecurity programs. While conventional software relies on updates from developers to respond to new information, machine learning systems can adapt to new information quickly and more autonomously. AI will be able to identify new anomalies and possible cyberthreats sooner, helping health systems keep pace as new vulnerabilities are discovered.
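To make the idea of anomaly detection on connected devices concrete, here is a minimal sketch of the kind of baseline-deviation check such a monitoring tool might run. The device names, traffic readings, and z-score rule are illustrative assumptions, not a description of any specific product.

```python
from statistics import mean, stdev

def flag_anomalies(baseline, current, threshold=3.0):
    """Flag devices whose latest traffic reading deviates from their
    historical baseline by more than `threshold` standard deviations.

    `baseline` maps a device ID to a list of past readings;
    `current` maps a device ID to its latest reading.
    """
    flagged = []
    for device_id, history in baseline.items():
        mu = mean(history)
        sigma = stdev(history)
        if sigma == 0:
            continue  # no recorded variation; skip rather than divide by zero
        z = abs(current.get(device_id, mu) - mu) / sigma
        if z > threshold:
            flagged.append(device_id)
    return flagged

# Hypothetical example: an infusion pump suddenly sends far more traffic than usual
baseline = {
    "infusion-pump-12": [10, 12, 11, 9, 10, 11],
    "ct-scanner-3": [200, 210, 195, 205, 198, 202],
}
current = {"infusion-pump-12": 500, "ct-scanner-3": 204}
print(flag_anomalies(baseline, current))  # → ['infusion-pump-12']
```

A production system would learn richer baselines per device type and feed alerts into the team's triage workflow, but the always-on comparison against learned normal behavior is the core of the approach described above.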
The immense number of both new and previously identified vulnerabilities that healthcare cybersecurity teams need to monitor makes the case for a technology-assisted strategy. According to a recent Health-ISAC report, 1,000 vulnerabilities impacting almost as many healthcare products were identified in 2023—a 59% increase from 2022. Individuals in IT or clinical engineering could not realistically be expected to retain this much information themselves, let alone use it to address their organization’s needs. AI-powered technology has the potential to store vulnerability data efficiently, make it more easily accessible for cybersecurity projects, and connect it with supplementary information from other sources to increase its practical value.
AI can also be beneficial in deciding the right approach to addressing a cybersecurity risk, but it should be used in conjunction with human expertise. Understanding how possible solutions can impact clinical operations requires an in-depth understanding of a health system’s unique needs. AI software can help standardize the assessment of all the unique risk factors cybersecurity teams need to consider and match that information with potential solutions. The software can then prioritize an organization’s greatest threats and provide precise recommendations. However, the judgment and skills of cybersecurity professionals remain essential for executing those recommendations and verifying how cybersecurity projects will affect the work of clinical teams.
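The prioritization step described above can be sketched as a simple composite scoring of each advisory. The severity scale, the ePHI weighting, and the advisory names below are illustrative assumptions chosen to show the idea; a real program would tune these factors to the health system's own risk model.

```python
def prioritize(vulnerabilities):
    """Rank vulnerability advisories by a simple composite risk score.

    Each entry carries a CVSS-style severity (0-10), the number of
    affected devices on the network, and whether any affected device
    stores or transmits ePHI. The weighting is illustrative, not a
    standard formula.
    """
    def score(v):
        # Devices that touch ePHI double the exposure weight (assumed policy)
        exposure = v["affected_devices"] * (2.0 if v["touches_ephi"] else 1.0)
        return v["severity"] * exposure

    return sorted(vulnerabilities, key=score, reverse=True)

# Hypothetical advisories: a moderate-severity flaw on many ePHI-connected
# devices can outrank a critical flaw confined to a few isolated ones.
advisories = [
    {"id": "VULN-A", "severity": 9.8, "affected_devices": 20, "touches_ephi": False},
    {"id": "VULN-B", "severity": 7.5, "affected_devices": 40, "touches_ephi": True},
    {"id": "VULN-C", "severity": 5.0, "affected_devices": 10, "touches_ephi": False},
]
ranked = prioritize(advisories)
print([v["id"] for v in ranked])  # → ['VULN-B', 'VULN-A', 'VULN-C']
```

The scoring standardizes the comparison across hundreds of advisories, while the decision of how to remediate each one, and when, remains with the cybersecurity and clinical engineering professionals.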
Human expertise must remain a top priority
The potential benefits of AI technology that the healthcare industry could pursue in the coming years are numerous and exciting. However, it is imperative to understand and accept that new digital technologies come with their own cybersecurity risks that must be addressed. To effectively prioritize patient safety and privacy as AI becomes more common in IT and medical device applications, health systems must rely on experts dedicated to patient care, cybersecurity, and technology management.
The ideal role of AI is to support healthcare providers and technology professionals in making informed and compassionate decisions, not to replace them. Health systems that experience the greatest success in adopting and using AI-powered tools will be those that recognize the irreplaceable value of the expertise clinicians, biomedical engineering technicians, and cybersecurity teams bring to their organizations and patient care.