
That's just dumb. Third parties do all the work and contact you about critical bugs; the only effort on Apple's part is verification and some coordination, which shouldn't be a huge issue for a company the size of Apple. Just hire a team to do it and be done with it. The whole 'secrecy culture' thing is a bunch of hogwash.


Apple is all about silos.

So a security threat gets reported to this bug bounty team. They are able to reproduce and confirm. The bug is in some deep, crusty part of the kernel; the code for which isn't available to this team, because Silos.

The team who does have access to this Silo is tracked down. It gets processed into a ticket. Maybe it gets done, maybe it doesn't. Their backlog is already maxed out, as is everyone's.

The security team says "we've done all we can".

This is not a matter of "lol just hire a team". You need leadership aligned, so the manager's manager can see their task list, or open security issues, and say "what the fuck, why are you not prioritizing this".

That's not Apple. Apple is product-driven. They actually, legitimately don't care about Privacy and Security. Their manager-managers get mad when products aren't delivered on time. They may also push back on purposeful anti-privacy decisions. It's not in their culture to push back on Security issues, or on latent Privacy issues resulting from those Security issues.

"Just tear down the silos" > Working for Apple is a cult. The silos are a part of the cult; left-over from one of the worst corporate leaders of all time, Jobs. Try telling a cult member their beliefs are false.

"Grant the security team access to everything" > And, I guess, also hire the smartest humans to ever exist on the planet to be able to implement security improvements across thousands of repositories in dozens of programming languages, billions of lines of code? And, even if they could, push those changes through a drive-by review and deployment with a team on the other side of the planet you've never met? You, coming into their house, and effectively saying "y'all are incompetent, this is insecure, merge this" (jeeze ok, we'll get to it, maybe in... iOS 18)


Accurate - engineering at Apple has no tradition of security, nor does it have a tradition of being very efficient. It's mostly built on the heroics of a very few very talented developers, and the processes that are in place actively hinder development.

Scaling development is hard, and Apple has never really gotten it right. I wonder: if a zero day is worth $1M on the open market, wouldn't it be easier and cheaper to get an engineer inside Apple to leave some plausibly deniable bugs in the code? Or to compromise an engineer already there?

Software engineering never had security as its main goal, but if you had to do it all over today, security would be built into all processes from the get-go; that's likely the only way software could be made secure.

It always amazes me that Apple (and others) can't even make a browser that doesn't have a drive-by zero day that can take over my computer. Why is that? There must be something fundamentally wrong in the system here, and I think what's wrong is that security wasn't even in the minds of engineers when most of these software modules were created.

BSD had it built in, but they watered it down instead of - what they should have done - doubling down on it.


I’ve worked on the bug bounty program for a large company. We did the whole thing. It’s hard. The part you’re talking about can be the hardest.

That's probably hard to believe, because it sounds like it should be easy. I don't have any good answers there. I'm also not suggesting that customers and researchers just accept it, but saying it's easy diminishes the efforts of those who run good programs.


Could you try a little bit harder to provide an example of why it is "harder than it looks"? You repeated multiple times that it's hard, but what exactly (approximately) makes it hard?


I think it's the phrases 'some coordination' and 'company the size of Apple'. It's rarely the case (well, hopefully?!) that a fix is as trivial as 'oh yeah, oops, let's delete that `leak_data()` line' - it's going to involve multiple teams and they're all going to think anything from 'nothing to do with us' to 'hm yes I can see how that happened, but what we're doing in our piece of the pie is correct/needs to be so, this will need to be handled by the [other] team'.

Not to say that people 'pass the buck' or aren't taking responsibility exactly, just that teams can be insular, and they're all going to view it from the perspective of their side of the 'API' and not find a problem. (Of course with a strict actual API that couldn't be the case, but I use the term only loosely or analogously.) Someone has to coordinate them all, and ultimately probably work out (or decide somewhat arbitrarily, at least in technical terms, but perhaps on the basis of cost or complexity or politics et cetera) whose problem to make it.
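
To make that concrete with a contrived sketch (hypothetical names and a toy scenario, nothing to do with Apple's actual code), here's how a bug can live only in the seam between two teams, with each half looking correct from its own side of the boundary:

    # team_a/gateway.py -- Team A's view: "we just route the request;
    # the storage team owns whatever happens with the path"
    def handle_download(request, storage):
        return storage.read_user_file(request.user_id, request.path)

    # team_b/storage.py -- Team B's view: "paths reaching us were already
    # validated at the API boundary; we just serve the file"
    import os

    BASE = "/srv/userdata"

    def read_user_file(user_id, path):
        # a request path like '../../../etc/passwd' walks right out of BASE
        full = os.path.join(BASE, str(user_id), path)
        with open(full, "rb") as f:
            return f.read()

Neither team is wrong about its own contract; the directory-traversal hole only exists in the composition, which is exactly why someone has to coordinate and decide whose problem it becomes.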


What's worse, typically an exploit report doesn't pinpoint the actual line of code responsible -- it's just a vague description of the behavior or actions that lead to an exploit, which makes it much easier to pass the buck on who is actually responsible for fixing it. The kicker is that if your department/project/whatever fixes it, you're also taking responsibility for causing the error / causing this huge affront to the Apple way...


Most good exploits have a pretty solid root cause attached to them.


It's mostly just 'human factors'. What I'm describing below applies across the spectrum of bug reports, from fake to huge and everything in between. None of what I'm listing below is an attempt to directly explain or rationalize the events in the article; it's just some context from my (anecdotal) experience.

- The security researcher community is composed of a broad spectrum of people. Most of them are amazing. However, there is a portion of complete assholes. They send in lazy work, claim bugs like scalps and get super aggressive privately and publicly if things don't go their way or bugs don't get fixed fast enough or someone questions their claims (particularly about severity). This grates on everybody in the chain from the triagers to the product engineers.

- Some bounty programs are horribly run. They drop the ball constantly, ignore reports, drag fixes out for months and months, undercut severity...all of which impact payout to the researcher. These stories get a lot of traction in the community, diminishing trust in the model and exacerbating the previous point because nobody wants to be had.

- Bug bounties create financial incentives to report bugs, which means that you get a lot of bullshit reports to wade through and catastrophization of even the smallest issues. (Check out @CluelessSec, aka BugBountyKing, on Twitter for parodies that are only kind of parodies.) This reduces the signal-to-noise ratio and allows actual major issues to sit around, because at first glance they aren't always distinguishable from garbage.

- In large orgs, bug bounties are typically run through the infosec and/or risk part of the organization. Their interface to the affected product teams is generally going to be through product owners and/or existing vulnerability reporting mechanisms. Sometimes this is complicated by subsidiary relationships and/or outsourced development. In any case, these bugs enter the pool of broader security bugs that have been identified through internal scanning tools, security assessments, pen tests and other reports. Just because someone reported them from the outside doesn't mean they get moved to the top of the priority heap.

- Again, in most cases product owns the bug. Which means that even though it has been triaged, the product team generally still has a lot of discretion about what to do with it. If it's a major issue and the product team stalls, then you end up with major escalations through EVP/legal channels. These conversations can get heated.

- The bugs themselves often lack context and are randomly distributed through the codebase. Most of the time the development teams are busy cranking out new features, burning down tech debt, or otherwise have their focus directed at specific parts of the product. They are used to getting things like static analysis reports saying 'hey, the commit you just sent through could have a SQL injection' and fixing it without skipping a beat (or, more likely, showing it's a false positive); there's a sketch of that kind of fix after this list. When bug reports come in from the outside, the code underlying the issue may not have been touched for literally years, the teams that built it could be gone, and it could be slated for replacement in the next quarter.

- Some of the bugs people find are actually hard to solve and/or the people in the product teams don't fully understand the mechanism of action and put in place basic countermeasures that are easily defeated. This exacerbates the problem, especially if there's an asshole researcher on the other end of the line that just goes out and immediately starts dunking on them on social media.

- Most bugs are just simple human error and the teams surrounding the person that did the commit are going to typically want to come to their defense just out of camaraderie and empathy. This is going to have a net chilling effect on barn burners that come through because people don't want to burn their buddies at the stake.
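
(For the static-analysis point above, a minimal sketch in Python with made-up table and column names: the scanner flags string-built SQL in the commit you just pushed, and the fix is a one-liner you make without breaking stride.)

    import sqlite3

    def find_user_flagged(conn, username):
        # what the scanner flags: user input concatenated into the SQL string
        cur = conn.execute("SELECT id FROM users WHERE name = '" + username + "'")
        return cur.fetchone()

    def find_user_fixed(conn, username):
        # the quick fix: let the driver bind the parameter instead
        cur = conn.execute("SELECT id FROM users WHERE name = ?", (username,))
        return cur.fetchone()

A fix like that lands in minutes because it's in code you're already touching; a bounty report against code nobody has opened in years is a different animal.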

All of this is to say it takes a lot of culture tending and diplomacy on the part of the bounty runners to manage these factors while trying to make sure each side lives up to its end of the bargain. Most of running a bounty is administrative and applied technical security skill; this part is not, which is why I said it can be the hardest.


Reading the original article, I also believe that there's a wide variety of technical details that could be the cause of, say, not responding.

Maybe the reports get to the tech teams, the tech team figures out that this bug will definitely be caught by the static analyzer, and they have other more pressing issues.

The main problem today, IMO, is that the incentives for finding and actively using exploits are much higher than the incentives for fixing them, and certainly much higher than those for building secure code that doesn't have the issues in the first place.

After all, nobody will give you a medal for delivering secure code. They will give you a medal for delivering a feature fast.


I've been in infosec since the '90s; moving slowly has killed way more companies than any security issue has.

I've worked at some of the largest financial institutions, and they spend billions on security every year to achieve something slightly better than average. Building products with a step-function increase in security would incur costs in time, energy and flexibility that very few would be willing to pay.


And yet here we are ;)



