Hackers manipulate lung cancer scans, fool radiologists and AI software in study
- Cybercriminals can alter 3D medical scans to remove existing medical conditions or add false ones, according to a new study by cybersecurity researchers at Ben-Gurion University of the Negev. Using an artificial intelligence technique called deep learning, hackers can dupe radiologists and AI software 98% of the time in the case of lung cancer.
- There are different plans of attack. One possible method is to build a man-in-the-middle device, plant it near the scanner, and use it to hijack the communication between the scanner and the computer, intercepting CT scans for manipulation.
- Though the Israeli study focused on lung cancer scans, 3D CT scans are used to diagnose conditions such as heart disease, infections, blood clots, spinal injuries, bone fractures, ligament injuries and arthritis. Hackers causing a misdiagnosis based on faulty images could be motivated by insurance fraud, ransomware, cyber terrorism or even murder, according to the report.
Even in this era of increasing concern over online privacy, medical devices and records remain susceptible to manipulation and exposure, whether by malicious hackers or mere human error. Though cybersecurity ranked as the top IT investment in 2018 and led the list of provider concerns coming into 2019, 62% of industry leaders experienced a data breach over the past year alone.
A three-dimensional computerized tomography (CT) scan combines X-ray images taken from different angles. It then uses computer processing to create cross-sectional images of bones, blood vessels and soft tissues in much more acute detail than standard X-rays alone.
In the BGU National Cyber Security Research Center simulation, the hacker had complete control over the number, size and locations of the cancers in the full-resolution 3D image, while still preserving the same anatomy from the original.
To inject and remove different medical conditions from the images, the Israeli cybersecurity researchers used a deep learning neural network called a generative adversarial network, or GAN.
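The adversarial setup behind a GAN can be sketched on toy data. The following is a minimal illustration of the technique, not the CT-GAN architecture from the study: a one-dimensional Gaussian stands in for 3D imagery, and all parameters (learning rate, batch size, distributions) are invented for the example. A tiny linear generator learns to produce samples that a logistic discriminator cannot distinguish from "real" data, which is the core idea the researchers scaled up to medical scans.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(u):
    return 1.0 / (1.0 + np.exp(-u))

# Generator G(z) = a*z + c and discriminator D(x) = sigmoid(w*x + b),
# both deliberately tiny so the adversarial updates are easy to follow.
a, c = 1.0, 0.0          # generator parameters
w, b = 0.1, 0.0          # discriminator parameters
lr, batch = 0.02, 64

for step in range(2000):
    real = rng.normal(4.0, 1.0, batch)      # "real" data: N(4, 1)
    z = rng.normal(0.0, 1.0, batch)
    fake = a * z + c                        # generated samples

    # Discriminator update: push D(real) toward 1 and D(fake) toward 0.
    d_real, d_fake = sigmoid(w * real + b), sigmoid(w * fake + b)
    w += lr * np.mean((1 - d_real) * real - d_fake * fake)
    b += lr * np.mean((1 - d_real) - d_fake)

    # Generator update: descend the non-saturating loss -log D(G(z)),
    # i.e. nudge the fakes toward whatever fools the discriminator.
    d_fake = sigmoid(w * fake + b)
    a += lr * np.mean((1 - d_fake) * w * z)
    c += lr * np.mean((1 - d_fake) * w)
```

After training, the generator's output distribution has drifted from its initial mean of 0 toward the real data's mean of 4; CT-GAN applies the same adversarial pressure with deep convolutional networks to make injected or removed tumors statistically indistinguishable from real tissue.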
When false cancers were injected into scans of healthy patients, a team of three radiologists brought in to fact-check the research misdiagnosed 99% of them as malignant. When the GAN removed cancers from the images of sick patients, the radiologists misdiagnosed 94% as healthy.
Since 3D medical scans are widely viewed as more reliable sources of definitive evidence than their counterparts, such as preliminary 2D X-rays, this poses a "significant threat," said the researchers, who actually broke into the network of a hospital to demonstrate the feasibility of the attack (with the facility's permission).
Early last month, Moody's warned the medical device industry is highly vulnerable to cyber risks, mostly due to the mounting volume of insulin pumps, cardiac monitors, defibrillators and other internet-connected devices. But even health systems with IT infrastructures unconnected to the internet aren't safe, BGU researchers say.
"Determined intruders can still gain access via the hospital's Wi-Fi or physical access to the infrastructure," Yisroel Mirsky, project manager and cybersecurity researcher at BGU’s National Cyber Security Research Center, said. "However, these networks are now being connected to the internet as well, which enables attackers to perform remote attacks."
The BGU team recommended a series of countermeasures: enabling encryption between the hosts in the picture archiving and communication system (PACS) network; using digital signatures to sign each scan with a mark of authenticity; and embedding a hidden signal in each image that would be distorted by any outside tampering, letting hospital staff know it has been corrupted.
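The signing countermeasure can be illustrated with a short sketch. This is a hypothetical, standard-library-only stand-in, not the scheme proposed in the study: it uses a shared-secret HMAC tag rather than a true asymmetric digital signature (a real deployment would likely use public-key signatures so the viewing workstation holds no secret), and the key and byte payload are invented for the example.

```python
import hmac
import hashlib

# Assumption for this sketch: scanner and PACS viewer share a key
# provisioned out of band. Not part of the study's actual design.
KEY = b"shared-secret-provisioned-out-of-band"

def sign_scan(scan_bytes: bytes) -> bytes:
    """Tag attached by the scanner before the scan leaves the device."""
    return hmac.new(KEY, scan_bytes, hashlib.sha256).digest()

def verify_scan(scan_bytes: bytes, tag: bytes) -> bool:
    """Check run by the viewer; fails if anything altered the data in transit."""
    expected = hmac.new(KEY, scan_bytes, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

scan = b"\x00\x01\x7f" * 1000      # stand-in for raw CT voxel data
tag = sign_scan(scan)

tampered = bytearray(scan)
tampered[1500] ^= 0xFF             # a man-in-the-middle flips some voxels
```

An untouched scan passes `verify_scan`, while the tampered copy is rejected, which is exactly the property the researchers want: a mid-network attacker who alters pixels cannot produce a valid tag without the signing key.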
- Healthcare Dive 5 scariest health data breaches of 2018
- Google Scholar CT-GAN: Malicious Tampering of 3D Medical Imagery using Deep Learning
Follow Rebecca Pifer on Twitter