Hacker News

Just out of interest, have you had any legal threats etc. from this kind of probing when they don't have explicit bug bounty programs? Also, do you ever get offered bounties when reporting where there wasn't a program?


In Germany, the case of a company called "Modern Solution" has gained quite a bit of traction. An IT guy found a password, tried it on the company's phpMyAdmin, and reported that he could access their data. They sued him, and the case went up to the highest German court, which upheld the lower court's ruling in favor of the company. The IT guy got fined.

https://www.heise.de/news/Bundesverfassungsgericht-lehnt-Bes... (German article)


Some additional relevant information:

When the changes that toughened § 202 StGB were made in 2007, there were a lot of public rallies against them, in which many programmers participated. These were ignored by the politicians in power. This (together with other worrying political events) even led to the temporary rise of a new party (the Piratenpartei) in Germany.

The fact that these rallies were ignored by the politicians in power led many programmers to consider German politicians from then on to be about as trustworthy as child molesters who have relapsed several times.


Lesson: instead of being the good guy and reporting shit, just sell it on black market.


(Playing the devil's advocate here.) But that's not the case: if you find someone's physical keys in the street, will you try to open your neighbor's door with them? So why is it OK to use a password that you "found" to log into a site?


Curiosity. I once dropped my keys on the way to my leasing office. I searched the entire complex and office for my keys. Then I saw a guy at the mailboxes trying to open each one, one by one.* I asked if he needed help, and he just said he had found some keys on the ground and wanted to find out who they belonged to. They were mine. And my mailbox was on the other side of the complex, so all bets were off for him anyway.

It costs next to nothing to try out a key in multiple places in the same proximity. Once you start going door to door using a random key you found, that's suspicious.

*It occurs to me now as I write this that this behavior is suspicious as well, and probably illegal. He should have turned the keys in to the leasing office.


That may actually be super illegal if they are USPS mailboxes.


They... probably are? They were my complex's mailboxes, but only USPS has access to them.


Instructions unclear: any key I find from now on, I'll mail to this guy's leasing office.


No, it's different. I would compare it to my neighbor using a combination padlock. It takes 15 minutes to brute-force one of those. If I tell my neighbor that his padlock is shit, and in response he sues me into oblivion, then next time I'll just tell the local thugs "hey, here's the padlock, here's the code, do what you must". Zero regrets: if the asshole insists on being an asshole just for the shits and giggles, then so will I.
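The 15-minute figure checks out as a back-of-the-envelope estimate, assuming a common 3-digit dial and roughly one attempt per second (both illustrative assumptions, not from the comment):

```python
# Rough brute-force estimate for a combination padlock.
# Assumptions (illustrative): 3-digit code, ~1 second per attempt.
digits = 3
combinations = 10 ** digits          # 1000 possible codes
seconds_per_attempt = 1.0

worst_case_minutes = combinations * seconds_per_attempt / 60
average_minutes = worst_case_minutes / 2  # on average, half the keyspace

print(f"worst case: {worst_case_minutes:.1f} min, "
      f"average: {average_minutes:.1f} min")
```

Worst case is about 17 minutes; on average you'd hit the right code in half that.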


If I don't try the keys in my neighbor's door, how will I know which neighbor they belong to?


It's even worse: you find a key that you know belongs to your neighbor, so you try it in his door just in case.


I don't think the common "key to a house" analogy makes any sense. For starters, a significant portion of the people in existence aren't trying to break into your house 24/7.


The kind of probing they did and described in the blog post, with the attempt to escalate their privileges to admin, is legally fishy AIUI. Usually this kind of thing would be part of a formal, agreed-to "red teaming" or "penetration testing" exercise, precisely to avoid any kind of legal liability and to establish the necessary guidelines. Calling an attempted access "ethical" after the fact is not enough.


Good-faith security research[0] is the only way this industry will move forward, for better or worse. It is clear that most companies do not want to invest in anything further like VDPs.

[0] https://www.justice.gov/archives/opa/pr/department-justice-a...


Without any sort of formally posted bug bounty program explicitly authorizing this sort of activity, the CFAA prohibits unauthorized access of "protected computers". I would classify this as legally risky. If FIA had a stick up their ass, they could definitely come after the researcher.

The researcher's ethical standing is pretty clean in my book, but this was definitely a little more than just changing a URL parameter (only a little more). I would say this is unsafe to do if you are in the United States. The stopping point was somewhere around "I think I could obtain the admin role". From there, reach out to the best contact you can find and say: "Hey, I am an ethical white hat security researcher, and I noticed X and Y. In my experience, when I see this there is a pretty reasonable chance this privilege escalation vulnerability exists. The chance is high enough that you should treat it like it exists and examine your authorization code. If you would like, I can validate this on my end as well if you give me permission to examine this issue." Then point over to your website and disclosed issues, if you've got them.

To just do it is ehh... I would not take the risk. However, if I /did/ do it, I would definitely disclose it to them immediately and give an explanation like the above. Shooting the messenger in this case would be pretty asinine, especially if they didn't access anything sensitive, which would preclude FIA from having any evidence you did anything sketchy (because you did not). The reason I would not do it is that you never know if a system like this pre-fetches data, etc., and that definitely opens you up to the liability of possessing PII.

Overall, I have disclosed issues like this in the past without actually exploiting the issue, to good results. Sometimes companies ignore it. You can always say, "If you do not want to treat this issue as a vulnerability, I am going to write this up on my website as an example of things you should probably not do," if you feel ethically compelled to force them to change without actually exploiting the issue. People tend to get the message and do something.
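For concreteness, the "just changing a URL parameter" class of issue (an IDOR-style authorization flaw) looks roughly like this. The endpoint, parameter name, and IDs below are all invented for illustration; this is not the actual system from the post:

```python
# Hypothetical sketch of an IDOR-style test: take a URL for your own
# record and rewrite one query parameter to point at someone else's.
# If the server returns the other record without an authorization
# check, that's the vulnerability class discussed above.
from urllib.parse import urlparse, parse_qs, urlencode, urlunparse


def swap_query_param(url: str, param: str, new_value: str) -> str:
    """Return the URL with a single query parameter replaced."""
    parts = urlparse(url)
    query = parse_qs(parts.query)
    query[param] = [new_value]
    return urlunparse(parts._replace(query=urlencode(query, doseq=True)))


# A request for your own (hypothetical) profile...
mine = "https://example.test/api/profile?user_id=1001"
# ...rewritten to reference a different user's record:
theirs = swap_query_param(mine, "user_id", "1002")
print(theirs)
```

Merely constructing such a URL is trivial; the legal risk discussed in this thread comes from actually sending the request without authorization.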


I'd highly recommend adding some newlines to such comments. Walls of text are not fun to read.


... so you'd prefer that the only people doing this will be black-hat hackers who then sell the information on the black market?


I think nobody does, but ultimately our laws are stupid. The CFAA in particular can be unfairly weaponized to make examples of people, and can put people in prison for DECADES for activities that don't warrant such a response.


What he did there could indeed be legally risky.

Remember that while for a lot of us this kind of security research and remediation is "fun", "the right thing to do", etc., there are also people in our industry who are completely incompetent and don't care about the quality of their work or whether it puts anyone at risk. They lucked their way into their position and are now moving up the ranks.

To such a person, your little "security research" adventure is the difference between a great day pretending to look busy and a terrible day actually being busy: explaining themselves to higher-ups (and potentially regulators) and getting a bunch of unplanned work to rectify the issue. While they don't care personally whether the site is vulnerable (otherwise they wouldn't have let such a basic vulnerability slip through), now that there is a paper trail they have to act. They absolutely have a reason and an incentive to blame you and attempt legal action to distract everyone from their incompetence.

The only way to be safe against such retaliation is to operate anonymously like an actual attacker. You can always reveal your identity later if you desire, but it gives you an effectively bulletproof shield for cases where you do get a hostile response.


> while they don’t care personally whether the site is vulnerable - otherwise they wouldn’t have let such a basic vulnerability slip through

Even if they do care personally (which I would assume is often the case if the respective person is not an ignorant careerist), they often don't have the

- organizational power

- (office-)political backing

- necessary very qualified workforce

to be capable of deeply analyzing every line of code that gets deployed. :-(


When I was still at university, I reported a vulnerability, and when the company started threatening me with legal action, my professor wrote a strongly worded email and they dropped it. It hasn't happened since, in 8 years. It feels like many companies understand what we do now, at least compared to 10 years ago.


This seems depressingly common at universities. I know of a case where someone discovered that anyone with a university account (students, etc.) could edit DNS, and the IT department tried to file charges until the head of the CS department intervened.


Many years ago, when I was at school, I found a paper on a table in the computing library with a list of root passwords for some of the machines at Yale, just sitting there. I tried one and it was valid (this was the old days, when remote root logins were a thing). I sent the admins a message telling them, and I was entirely ignored. A month later I tried the password again and it was still good. Luckily for me, I guess, it was before the days of suing people for trying to be helpful.


Actual legal threats are uncommon but I have seen some companies try to offer a bribe disguised as a retroactive bug bounty program, in exchange for not publishing. Obviously it is important to decline that.


Decline because it'd mean you were profiting off of a crime? Or that the opportunity of publishing has higher value than the bribe?


Decline because the public deserves to know the company has that approach to security.


Take the money and have someone else publish it.


Thanks, it's cool to hear attitudes have changed.



