One thing I didn't see mentioned is that this directly violates the data minimization principle: they collected data they shouldn't have needed, and they kept it around far longer than necessary.
Yes, they should have updated their config, but why did joining require a resume full of private info in the first place, and why wasn't it deleted once the membership was approved (let alone kept for decades)?
There's no way OWASP doesn't know the difference between "the [entire] Internet" and "our website". I believe their intention is to tone down the fear that people may have. It's gaslighting.
Yeah, seems kind of obvious that's what they meant.
I cannot understand why they didn't say "removed from our servers" rather than "removed from the internet", though; it seems almost intentionally misleading.
Panic and/or complain loudly, I guess? You can't fix a data breach after the fact, which is why organizations need some sort of pressure to take preventative measures.
As a MediaWiki dev, I would love to know what happened here. Was it a vulnerability in MediaWiki? Some custom extension? Some other component on the wiki server unrelated to MediaWiki? Was it just a really old software version with known public CVEs, or something else?
Edit: the "disable indexing" wording makes it sound like there was some sort of system for uploading private stuff, and the directory it went in had mod_autoindex enabled. Which I guess would not have much to do with MediaWiki.
Directory browsing was enabled and files were uploaded to an exposed directory at a much earlier time. The only issue was that the directory was in the middle of the MediaWiki webroot. Sadly, the institutional knowledge that these files were there was lost in the mists of time. I would never have expected non-MediaWiki files to be in the webroot.
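For what it's worth, that class of exposure is easy to check for from the outside. A rough Python sketch (the URL is a placeholder, and it only catches the default mod_autoindex "Index of /" page):

    import urllib.request

    # Quick-and-dirty probe for an exposed directory listing. Stock
    # Apache mod_autoindex pages carry an "Index of /<path>" title.
    url = "https://www.example.org/wiki-uploads/"  # placeholder URL
    with urllib.request.urlopen(url, timeout=10) as resp:
        body = resp.read(4096).decode("utf-8", errors="replace")

    if "Index of /" in body:
        print(f"directory listing appears to be enabled at {url}")

Server-side, "Options -Indexes" in the Apache config (or even an empty index.html in the directory) turns the listing off.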
As others have said, the information wasn't (arguably) something they'd need to vet people, and even if it was, what is the argument for keeping all those people's personal data for more than a decade? They breached the confidentiality expected of them, and I'm not sure this can just be considered an "understandable mistake"... sorry, they need to be held to the same standards as anyone else possessing or processing data.
Maybe I'm just hanging out in the wrong places but right now it feels like everyone is making stupid, stupid security mistakes all the time. If AI is the main buzzword, infosec is #2 right now.
It feels to me more like it's the power of the comprehensive background scanning (I've seen it called 'internet background radiation') that is constantly taking place: anything you forget about, anything you misconfigure, anything you are slow to patch, that is going to be found and exploited by some tool someone built somewhere that is continually searching the entire internet for the same five mistakes.
My firm have a Nessus scanner and we point it at ourselves as well as our customers. There are also several checks on the monitoring system that will flag if something suddenly starts working.
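That "flag if something suddenly starts working" check takes very little code. A minimal sketch, with made-up hosts and ports:

    import socket

    # Endpoints that should be unreachable from this vantage point; an
    # alert fires if any of them "suddenly starts working".
    SHOULD_BE_CLOSED = [("www.example.org", 3306), ("www.example.org", 8080)]

    for host, port in SHOULD_BE_CLOSED:
        try:
            with socket.create_connection((host, port), timeout=3):
                print(f"ALERT: {host}:{port} is now accepting connections")
        except OSError:
            pass  # refused or timed out, which is what we want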
Background radiation is about right.
Run up a honeypot VM with a web service on it and watch the logs with something like lnav. You soon get a feel for how fast the legitimate (and I use that term advisedly) crawlers such as Google rock up, along with all the others.
You will see a lot of hits from things with a GitHub link in their agent header: script kiddies, or perhaps clever kiddies pretending to be script kiddies; more analysis needed. You will also see hits from agents claiming to be Google or Bing or Firefox on a Commodore 64. Again, careful packet analysis, IP lists etc. can be instructive ... if you can be arsed.
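If anyone wants to reproduce the GitHub-link observation, something like this over a combined-format access log is a start (the log path and format are assumptions):

    import re

    # In combined log format the user agent is the last quoted field.
    ua_pattern = re.compile(r'"([^"]*github\.com[^"]*)"\s*$', re.IGNORECASE)

    with open("/var/log/nginx/access.log") as log:  # assumed path
        for line in log:
            match = ua_pattern.search(line)
            if match:
                print(match.group(1))

Piping the output through sort | uniq -c is usually enough to see which tools dominate.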
Anyway.
Humans cannot see network traffic. When you instruct your firewall to do something via its GUI or CLI you are merely providing instructions that may or may not actually do anything. Do feel free to actually test it. nmap, for example, is available for port testing and much, much more.
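A sketch of wrapping nmap to verify the rules you think you set (203.0.113.10 is a placeholder documentation address; run it from outside the firewall):

    import subprocess

    # Scan the first 1024 TCP ports; -Pn skips host discovery so hosts
    # that drop pings still get scanned.
    result = subprocess.run(
        ["nmap", "-Pn", "-p", "1-1024", "203.0.113.10"],
        capture_output=True, text=True,
    )
    print(result.stdout)

Anything that shows up open and shouldn't be is your firewall lying to you.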
It’s the combinatorics. A small company might have a hundred microservices and some tens of third-party dependencies, and most of both only get touched or looked at when something goes wrong. Add in any code shipped to client machines and the various versions of everything, and then add in basically everything related to IT and phishing and…
Exactly. Employee A adds a feature and adds security policies X and Y to prevent abuse. Employee B adds another feature and disables policy X because it conflicts with the new feature and policy Y is still in place. Employee C adds some functionality that conflicts with policy Y, but reasons that's OK because a comment says the feature is protected by both the X and Y policies. So policy Y gets disabled too.
You can see why everyone would then start pointing fingers at each other. Hopefully, regular reviews and careful analysis prevent this kind of situation.
The combinatorics works against “regular reviews and careful analysis”. The more the network grows, the more time and resources each review takes, yet reviews must still happen at a regular interval, squeezing out other priorities.
The best solution to keeping the combinatorics down is to have someone with authority who is happy to say “no” when a proposed new service doesn’t closely align with the org’s key goals.
> Hopefully, regular reviews and careful analysis prevent this kind of situation.
Without incentives it will just keep happening. We need to:
1. Incentivize an emphasis on security by penalizing data breaches in a non-trivial way
2. Make orgs much more careful about the data they collect by mandating penalties that scale exponentially with the potential harm of the released data -- up to and including existential destruction
Software that gets used ships with insecure defaults, and software that ships hardened and totally locked down and must be configured for everyone's individual use case typically doesn't become successful.
It's a technology problem, not a people problem. We have simply made technology where it is too easy to eventually make a simple mistake.
For one obvious, simple example, most tech is optimized for "make it easy to access all data from anywhere" as opposed to "require affirmative consent for connections to all new locations". It is surprisingly difficult to lock down most systems, whether they be laptops or server networks, with that simple rule. Just look at folks who use Little Snitch: it can be difficult to use effectively, because so many apps need to talk to so many different servers that merely determining whether a new connection is malicious becomes exhausting.
Similarly, look at the recommended settings for a "secure" Content Security Policy on the web. There are a boatload of different options you're advised to set, because the original defaults (e.g. "Sure, you can load me in an iframe of any other site!") are so insecure.
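To make that concrete, here's a toy server sending one plausible locked-down policy; the directive values are illustrative, not a universal recommendation:

    from http.server import BaseHTTPRequestHandler, HTTPServer

    class Handler(BaseHTTPRequestHandler):
        def do_GET(self):
            self.send_response(200)
            # Explicitly opt out of the permissive defaults.
            self.send_header(
                "Content-Security-Policy",
                "default-src 'self'; frame-ancestors 'none'; object-src 'none'",
            )
            self.send_header("Content-Type", "text/html")
            self.end_headers()
            self.wfile.write(b"<p>hello</p>")

    HTTPServer(("127.0.0.1", 8000), Handler).serve_forever()

Note that frame-ancestors 'none' is the part that opts out of the "sure, iframe me from anywhere" default.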
As a consequence, it's very difficult to prevent breaches at (a) organizations that can't afford top-notch security folks, or (b) organizations so large, often with a long history of acquisitions, that they present a giant attack surface where all an attacker needs is one "chink in the armor".
This is every bit as much a people problem as a technology problem. When I read the headline, I didn't even understand what there possibly was to breach at OWASP.
> OWASP collected resumes as part of the early membership process, whereby members were required in the 2006 to 2014 era to show a connection to the OWASP community. OWASP no longer collects resumes as part of the membership process.
Why did OWASP retain this information ten years after they stopped the practice?
Probably because no one made the decision to delete it, and the person responsible for the system it was on wasn't going to delete data without a strict order to do so. I think it is more of an 'organizational' problem, in that organizations don't have data retention and deletion policies.
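And the sad part is how little code an actual retention sweep needs; a toy version, with the directory and window obviously made up:

    import time
    from pathlib import Path

    RETENTION_SECONDS = 365 * 24 * 3600  # e.g. one year; pick per data class
    cutoff = time.time() - RETENTION_SECONDS

    # Placeholder path; point it at wherever the uploads actually live.
    for path in Path("/srv/wiki/uploads/resumes").rglob("*"):
        if path.is_file() and path.stat().st_mtime < cutoff:
            print(f"deleting {path}")
            path.unlink()

The hard part is organizational: someone has to pick the window, own the cron job, and be allowed to run it.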
It’s at least partially because no one wants to employ actual operations specialists, believing instead that developers and security-response folks are sufficient. But good, sufficiently staffed ops teams know how to build real defense in depth across the stack, don't blindly trust automated scanners or installers, and proactively improve security iteratively over time.
There are also a lot of devs who think that if they can't see how to exploit something, nobody else will be able to. Honestly, the arguments I've had with people claiming that their SQL injection isn't a problem ("it's in the middle of a clause" was a good one).
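"In the middle of a clause" protects nothing, because the attacker just closes the clause. A self-contained sqlite3 demo (table and input invented for the example):

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, is_admin INTEGER)")
    conn.execute("INSERT INTO users VALUES ('alice', 0)")

    # Attacker-controlled input that closes the string literal early.
    name = "alice' OR is_admin = 0 OR name = '"

    # The is_admin = 1 check sits "in the middle of a clause"... and loses.
    rows = conn.execute(
        f"SELECT * FROM users WHERE name = '{name}' AND is_admin = 1"
    ).fetchall()
    print(rows)  # [('alice', 0)] -- the non-admin row comes back anyway

    # Parameterized version treats the whole input as a literal name.
    rows = conn.execute(
        "SELECT * FROM users WHERE name = ? AND is_admin = 1", (name,)
    ).fetchall()
    print(rows)  # []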
One would think so. I remember looking through my very first hosting provider's settings page for my site and seeing that "Directory Listing" (or similar) was "On". I thought to myself, "Well, that doesn't sound right, if it means what it sounds like it means." I Googled it, and that's what it meant. I turned it off. So if a complete newb setting up his first website thought it was a bad idea, one would think, like you said, that a cybersecurity company would know to disable it (or double- and triple-check that it's disabled). With all that said, hopefully this is an April Fools' prank.