To be completely fair, it's not the NSA's fault that software has faults. It's the software manufacturers'.
The ethical concern here is whether the NSA should have reported the holes to the manufacturers, and its failure to handle its privileged knowledge safely.
I definitely agree with respect to intentionally added exploits ("backdoors"). To me this news highlights the need for fundamentally safe software, just as we have safety laws in the automotive and airline industries.
At least in the US, there is limited ability to sue foreign sovereigns in our courts - not sure if that's the case in the UK too. Beyond that, I doubt this is a rabbit hole any government, much less the UK - which has a fairly imperialistic past - wants to go down. Glass houses and all.
Now that the U.S. has set an alarming precedent that the Kingdom of Saudi Arabia can be sued in U.S. court over terrorist funding, maybe the U.S. government could be sued.
I don't think they'd win; the ransomware authors and operators are the ones who perpetrated the act. The U.S. government probably wouldn't be found negligent since the software was stolen. NHS carries partial liability since it was negligent with its patching, according to industry-wide IT security standards.
Comparing it to firearms, I can be held partially liable for a wrongful death if I leave my Colt 1911 out on my porch; it's different if a burglar stole my gun safe and committed a crime.
(obligatory disclaimer that I am not a lawyer, I just play one on Hacker News)
They've been told for years to get off XP. They weren't paying MS to keep it updated. The exploit was patched months ago. Why were these machines even on the internet?
I'd say the NHS is far more at fault than anyone else here.
He is not talking about the actual flaws as being the example as to why we shouldn't give the NSA backdoor access; he is saying that the leaks prove that even the NSA can't keep their stuff secret. If they couldn't keep their hacking tools secret, why should we think they can keep their backdoor access secret?
Good time to remind folks that gmail, facebook, whatsapp, amazon etc aren't going to be able to protect their data forever at the levels they are currently capable of.
A couple of bad business decisions and they are where yahoo is today. So be smart about how you use these services and educate the non-technical folks around you.
What would 'being smart' about using these services mean? It is pretty difficult to get through life in the modern age without using email for sensitive documents (or at least without using ACCESS to your email as a way to gain access to sensitive services, eg password reset emails, proof of ownership, etc)
Since email in the modern world has this type of importance, what should I do? If you say gmail can't protect their data forever, do I not use gmail for email? What do I use then? No service will be free from data leakage, even an email server I run myself.
Distribute risk. Use multiple accounts. Don't handle all work/financial stuff on a single account. Keep work and personal accounts separate.
Reduce the number of hours you spend online being a data milch cow for these corps; this automatically reduces dependence. Prevent messenger chat-transcript backups by simply uninstalling the app every other night, and don't restore any transcripts saved on disk when you reinstall.
I could go on and on, but the basic rule is: use your imagination. Don't use these tools the way they want you to use them. Use them as you would use a tool in a workshed, as an aid, not as a drug you are dependent on.
Just make sure whatever email provider you use offers IMAP and use a client like Thunderbird to keep a local copy in sync. Back that up somewhere safe and you're fine. If you need good, fast search, use something like X1.
This was something I thought POP did better since it requires maintaining one's own copies after downloading. But it was much less convenient as people used more devices.
Sad that managing our own multi device services is so time consuming.
Data theft is a separate issue. Whether you're using gmail, your own mail server, or an account with your ISP, if your machine is compromised all bets are off (including all your other files, not just email). At least with a backup you won't lose your data as a result of the theft.
I would say that it's probably smart to occasionally purge all your content from online services and keep your data in cold storage you physically control.
There is quite a large cost to that, though. Being able to search through old emails is a lifesaver. I can't count how many times I have searched through email to find some account info I set up years ago, or to get date information about when something happened. Just today, I searched my email for my old FastTrak account info, and found it on an email from 5 years ago.
Deleting all my email would be a big cost to pay for a gain that I can't exactly quantify; I would have to figure out the likelihood of my data being leaked over time and the cost to me if it was leaked. It isn't readily obvious what the risk factor is for me, but I KNOW the cost factor.
You can download and store your email on a medium that you control, like a portable hard drive. Storing email online invites theft and can hand hackers a trove of personal information to mine.
I agree about this ethical concern, but this attack also shows that reporting the holes to manufacturers is of limited use -- these exploits have been known to manufacturers since at least March, and while patches have shipped, the computers remain vulnerable. Clearly, automatic security updates are still not aggressive enough to prevent these kinds of problems. Though it isn't clear from the article how out-of-date the vulnerable systems are, which would help in planning for the future. For example, Windows 10 pushes security updates very aggressively, and I wonder how many of the infected computers were running Windows 10 -- health care providers' computer systems are often notoriously out-of-date.
No-one running a large organisation's IT systems is going to be letting individual machines just install whatever updates the software maker feels like pushing, even on Windows 10. That would be a big risk in itself: plenty of software makers, including Microsoft, have pushed horrible breaking changes in updates in the past.
Personally, where I would point the finger squarely at Microsoft is in its recent attempts to conflate security and non-security updates. Plenty of people, including organisations who are well aware of what they're doing technically, have scaled down or outright stopped Windows updates since the GWX fiasco and other breaking changes over the past few years.
This also leads to silliness like the security-only monthly rollups for Windows 7 not being available via Windows Update itself for those who do update their own systems (not that this matters much if Windows Update was itself broken on your system by the previous updates and now runs too slowly to be of any use). Instead, if you don't want whatever other junk Microsoft feel like pushing this month, you have to manually download and install the update from Microsoft's catalog site. Even then, things like HTTPS and support for non-IE browsers took an eternity to arrive, and whether the article for the relevant KB on Microsoft's support site includes things like checksums to verify the files downloaded were unmodified seems to be entirely random.
I get that Microsoft would like everyone to use Windows 10, but for some of us that isn't an option or simply isn't desirable. We bought Windows 7 with Microsoft's assurance that it would be supported with security patches until 2020; this sort of messing around is amateur hour, and they really should be called out on it a lot more strongly than they have been.
I would be curious about this too. I'd assume many of them would be running Windows 7, maybe? (Let's hope it's not XP).
Also, does Windows 10 Pro attached to a domain controller still have the same aggressive updates? Or do domain admins dictate that policy?
At one company I worked at, everyone in IT could volunteer for the patch group to get security patches a few days before the rest of the machines. That seems to work pretty well. Is there any evidence there might have been a 0-day involved that wasn't patched? I find it disheartening that so many machines in large managed networks like telcos and hospitals could be so far behind on patches! (3 months is A LOT in Internet time).
If people are just doing really basic stuff like order entry for doctors/nurses, we really need to get away from the full-PC model. Seems like most of these machines should just be Chromebooks, Linux boxes that boot straight to a browser, or something of that nature instead of full PCs or Macs. Lower the attack surface with something that's easy to update. Those machines would be lower cost too and easier to manage/patch -- moving back to the terminal/thin-client model.
BMJ released a report[0] just two days ago alleging that up to 90% of the NHS's computers are still running XP.
> Many hospitals use proprietary software that runs on ancient operating systems. Barts Health NHS Trust’s computers attacked by ransomware in January ran Windows XP. Released in 2001, it is now obsolete, yet 90% of NHS trusts run this version of Windows.
It appears that Theresa May is trying to deflect attention from the massive under-investment in NHS IT infrastructure by reinforcing that it is an 'international attack on a number of countries and organisations'.
Whilst this is true, it's probably also true that the impact of this attack is highly concentrated across organisations with chronic under-investment and a laissez-faire attitude to security.
>Whilst this is true, it's probably also true that the impact of this attack is highly concentrated across organisations with chronic under-investment and a laissez-faire attitude to security.
Good developers are rare enough, but good IT security and security-minded developers are even more rare. And it's even more rare that they decide to work within healthcare.
There just aren't enough of you to go around, and you can't be everywhere.
Even if you can afford to have a dedicated pentesting team (I'd like to work at a healthcare system/hospital network that did), physical security is still a major problem if only because it's very easy to impersonate people.
It makes no difference whether they created the security holes via moles in the developers' companies or whether they simply withheld the information. They put human lives at risk by doing it.
> To be completely fair, it's not the NSA's fault that software has faults. It's the software manufacturers'.
While this is true, it doesn't address the point that you were responding to:
> this is an excellent example that we can all reference the next time someone says that governments should be allowed to have backdoors to encryption etc
...where "should be allowed to have" is interpreted as "should be given by software manufacturers".
That's half the NSA's mission. It has another half, and that is eavesdropping and getting into things. Those two missions are at odds with each other, so the NSA has to make decisions about trade-offs. As these incidents show, the trade-offs the NSA has chosen have turned out to be bad ideas.
I don't think you can completely separate the issue from other gov't actions. When the NSA or other gov't agencies come knocking on the door requiring a backdoor or other system security compromises, I would argue that those actions become a broad discouragement for private industry to invest in security beyond a certain point.