It sounds like the underlying cause in this particular case is a piece of software running an open port that a lot of people don't even realize is there, and which for some reason is exposed outside the firewall.
Take the case of "Offsite Image" as an example:
> The company referred ProPublica to its tech consultant, who at first defended Offsite Image’s security practices and insisted that a password was needed to access patient records. The consultant, Matthew Nelms, then called a ProPublica reporter a day later and acknowledged Offsite Image’s servers had been accessible but were now fixed.
> “We were just never even aware that there was a possibility that could even happen,” Nelms said.
That sounds to me like they built a custom front-end to serve the images, which handled password authentication and billing, but failed to notice that the underlying image server software was exposed and responding to queries with no authentication.
I'm guessing somewhere there's a configuration file which has a default IP binding of 0.0.0.0 and a blank password field.
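For illustration, here's a minimal sketch of what that failure mode looks like. The config keys and server name are entirely made up (this is not the actual Offsite Image setup); the point is that `0.0.0.0` and an empty password are both perfectly valid values, so nothing fails loudly:

```python
# Hypothetical illustration: how a defaulted bind address plus a blank
# password quietly adds up to an open server. All names are made up.
import configparser
import io

config_text = """
[imageserver]
; Left at install-time defaults:
bind_address = 0.0.0.0   ; listens on every interface, not just localhost
port = 104
password =               ; blank means "no authentication"
"""

cfg = configparser.ConfigParser(inline_comment_prefixes=(";",))
cfg.read_file(io.StringIO(config_text))

bind = cfg.get("imageserver", "bind_address")
password = cfg.get("imageserver", "password")

if bind == "0.0.0.0" and not password:
    print("WARNING: server accepts unauthenticated connections "
          "from any network interface")
```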
There's an entire industry of consultants who will help your business pass a PCI audit, and everyone who handles payments is required to pass a PCI audit annually.
There should be equivalent standards and audits and consultants for private information of all sorts, including healthcare info.
An audit is a paperwork exercise. There are no wizard-like beings who can spot the misconfiguration of a Java servlet or have an informed opinion about your RSA key sizes. When someone who isn't sure of the difference between GitHub and Jira ticks "Yes, we do have appropriate controls in place for network security," it is nobody's job to open a developer console and copy-paste raw image URLs to prove they aren't actually insecure.
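To make that concrete, the check that's nobody's job is about ten lines. A sketch, with a placeholder URL standing in for a raw image link scraped from the front-end:

```python
# The five-minute check no checkbox audit performs: can a raw resource
# URL be fetched with no credentials at all? The URL is a placeholder.
import urllib.error
import urllib.request

url = "http://pacs.example.com/images/12345.dcm"  # hypothetical raw image URL

try:
    with urllib.request.urlopen(url, timeout=10) as resp:
        if resp.status == 200:
            print("FAIL: resource served without any authentication")
except urllib.error.HTTPError as e:
    if e.code in (401, 403):
        print("OK: server demanded credentials")
    else:
        print(f"Got HTTP {e.code}")
```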
The question is how many businesses really want to pay the money for somebody to audit their entire network security through the OSI stack at layers 1-7, versus doing an audit to pay lip service to PCI compliance and check some boxes in a form.
> The question is how many businesses really want to pay the money for somebody to audit their entire network security through the OSI stack at layers 1-7
If there were a mandatory requirement (e.g., had it been incorporated into the HIPAA certification requirements that the ACA required the Department of Health and Human Services to have in place and in effect for some HIPAA transactions by the end of 2013, and for all HIPAA data by the end of 2015), the question would be equivalent to “how many businesses want to be legally permitted to conduct business involving HIPAA-covered transactions and data”.
Of course, not only was that not in the regs, the regs were late and were withdrawn without going into effect, so there are no certification requirements (not even lip-service audits) for entities in health care.
>There should be equivalent standards and audits and consultants for private information of all sorts, including healthcare info.
There is. In fact, that standard and the regulations attending it were tailor-made for healthcare information. These guys are in deep doo-doo, and based on the way they talk, it's not clear to me that they understand that fact.
Number one, they took the job. The job of holding that data in the first place brings with it certain ironclad legal obligations. It's not something that some startup or random company should just launch into willy-nilly.
Number two, having taken the job, they didn't properly secure the data. Which is actually a federal crime.
Which brings me to number three. The fact that they publicly communicate these facts in as laissez-faire a fashion as they do betrays not only a level of technological ineffectiveness, but also a level of legal naiveté that borders on imbecility.
No. If the violations aren't due to “willful neglect” and are corrected within a reasonable time (for which the minimum allowed is 30 days after the offending party either knew or reasonably should have known of the violation, but more can be allowed based on circumstances), no monetary penalties can be imposed for HIPAA violations. See 45 CFR 160.410(c).
And how could they “reasonably” not have known they were broadcasting these records to the world? To take the one thing you are storing and put it on the internet for anyone to access?
Maybe the underlying software opens the port without informing anyone that it's doing so? I can't quite believe that.
So that leaves me thinking that at some point, someone did not RTFM.
The analogy would be something like running a Bitcoin wallet service and leaving your bitcoind RPC bound to 0.0.0.0 with no password.
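To spell the analogy out: a sketch of the probe that would expose such a setup, assuming a node misconfigured to accept unauthenticated JSON-RPC on bitcoind's default mainnet port (the address is a placeholder):

```python
# Probe a Bitcoin-style JSON-RPC port for unauthenticated access.
# getblockchaininfo is a real bitcoind RPC method; 8332 is the default
# mainnet RPC port; the host address is a documentation placeholder.
import json
import urllib.error
import urllib.request

payload = json.dumps({"method": "getblockchaininfo", "params": [], "id": 1})
req = urllib.request.Request(
    "http://203.0.113.5:8332/",
    data=payload.encode(),
    headers={"Content-Type": "application/json"},
)
try:
    with urllib.request.urlopen(req, timeout=10) as resp:
        # A 200 with a JSON-RPC result means the node answered with no
        # credentials at all: the wallet is effectively public.
        print("EXPOSED:", json.loads(resp.read())["result"])
except urllib.error.HTTPError as e:
    if e.code == 401:
        print("OK: node demanded RPC credentials")
```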
> There should be equivalent standards and audits and consultants for private information of all sorts, including healthcare info.
Healthcare doesn't have mandatory certification/audits (there is a legislative requirement to adopt regulations for them by a date that has long since passed, but it is one of many required rules under HIPAA that the executive branch has simply elected not to develop regs for on the legally required timeline).
OTOH, at a minimum this triggers the breach notification requirements under HIPAA, and will also trigger scrutiny of the degree to which the breach results from willful neglect of security.
No, there's HIPAA, but it is actually weaker than lots of people think (in part because of required regs that the executive branch simply has not bothered to develop, and in part because the regs that do exist have big gaping escape-from-liability holes in them).
It sounds like these are DICOM servers being exposed to the internet. DICOM is the standard file format for medical images, and there's also a DICOM server standard. Medical equipment sends images to DICOM servers so your ultrasound (or whatever) can be archived and stored in your medical records. The generic term is a PACS ("Picture Archiving and Communication System"), but since the equipment speaks DICOM, they're always(?) DICOM servers.
DICOM servers are often not very secured, so if you allow the internet to talk to them, you're in for a bad time.
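To illustrate how low the bar is: a sketch using the open-source pynetdicom library, where a C-ECHO (DICOM's "ping") succeeds against an exposed server with no credentials of any kind. The host and port below are placeholders:

```python
# How little it takes to talk to an exposed DICOM server: a C-ECHO
# handshake needs no credentials whatsoever. Requires the pynetdicom
# package; the host is a documentation placeholder.
from pynetdicom import AE, VerificationPresentationContexts

ae = AE(ae_title="PROBE")
ae.requested_contexts = VerificationPresentationContexts

# 104 and 11112 are the conventional DICOM ports.
assoc = ae.associate("203.0.113.7", 104)
if assoc.is_established:
    status = assoc.send_c_echo()
    if status:
        print(f"DICOM server answered C-ECHO, status 0x{status.Status:04X}")
    # From here, C-FIND/C-MOVE queries could enumerate and pull studies.
    assoc.release()
else:
    print("Association rejected, or no DICOM server listening")
```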
Most of the DICOM providers I've worked with assume that the clinic will be handling security, as the images are stored on a local DICOM server, though lately there's been much more emphasis on setting up a VPN between the clinic and the PACS/DICOM provider's cloud-based system.
It makes sense. You can't exactly update DICOM clients easily, so really there's no universe where the data should be passing over the internet without wrapping it all in a VPN. There's not much point in trying to secure it, since that might just encourage people to expose it to the internet...
Nothing scares me more than my medical records being on the internet. I have close to zero confidence in any of the hospitals or medical companies to implement their IT securely.
I've met people whose job is to put honeypots on hospital networks to detect ransomware installation attempts. They supposedly have a very short response-time SLA, like "if there's bad network activity, we will call your administrator within 5 minutes." It's interesting that the job exists, since the ransomware mostly uses old exploits that should have been patched anyway.
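The core idea is simple enough to sketch: listen on a port nothing legitimate should ever touch, and treat any connection at all as a signal worth escalating. A toy version, not a production honeypot (445/SMB is assumed here as a favorite ransomware target, and nothing real is bound to it):

```python
# Minimal honeypot sketch: any connection to this decoy port is an alert.
# Binding to port 445 typically requires elevated privileges.
import socket
from datetime import datetime, timezone

HONEYPOT_PORT = 445  # assumption: no legitimate service lives here

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("0.0.0.0", HONEYPOT_PORT))
    srv.listen()
    while True:
        conn, (addr, port) = srv.accept()
        # A real deployment would page an administrator within the SLA
        # window; here we just log the contact.
        print(f"{datetime.now(timezone.utc).isoformat()} "
              f"ALERT: connection from {addr}:{port}")
        conn.close()
```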
One of the problems is that many of the medical devices on hospital networks have badly outdated software that runs proprietary applications. There should be a better job done by vendors to clean that up, but no-one seems interested in supporting this once the sale is made.
Sometimes the IT staff simply have their hands tied, and network isolation is the best they can do, at least for medical devices.
Billing and file sharing vendors should on the other hand have active maintenance contracts to prevent exactly this.
> There should be a better job done by vendors to clean that up, but no-one seems interested in supporting this once the sale is made.
Working in healthcare IT (and I should note this is my personal opinion only), it's actually a little more complicated than that. Healthcare software vendors are generally building their software to meet certain certifications (because the clinics are demanding that), as well as fixing security/patient-safety issues, and only lastly adding features the clinics want.
The problem comes in that not every clinic cares about the certifications, and the software they have "works fine." So there's no incentive to upgrade to a newer version that has security fixes.
It's important to remember that, with a few exceptions, doctors are not IT, and many clinics are small enough that they outsource their IT. If that IT group doesn't force the clinics to upgrade, the clinic will continue using the version that does what they want, as long as it isn't obviously broken.
Clinics are generally risk-averse, which in many cases is the correct mindset. Unfortunately, that affects their perception of the benefits of updated software.
What was most frustrating to me while reporting this story was that a researcher had written about this exact problem (after scanning the IPv4 space for the port) back in 2016. HIPAA is one of the only federal data privacy laws w/ teeth, and this type of obvious insecurity is pretty inexcusable.
Thanks for reading. If you think there's something we should look into, we'd love to hear about it. :-)
P.S. As a fast.ai student -- thanks Jeremy for posting!
If my images are on the internet, then I expect crowd-sourcing of doctors and scientists to tell me everything that is breaking / broken in my body. ;-)
I'm shocked Jeff Larson is back at ProPublica after the debacle he and Sue Gardner caused by removing Julia Angwin as editor in chief at The Markup earlier this year.
I'm guessing he might be freelancing or contracting, but he does have a ProPublica staff page.
I'd love for anyone with knowledge about this to chime in.