It's a bad move for Apple. A good relationship with the community of security researchers is crucial: they're talented folks and their research results grab headlines. It takes just a tiny amount of corporate humility and public thanks to win their respect and earn goodwill in return. Treating the community badly will ensure the next guy won't even try to cooperate.
Over the last several years, Microsoft's MSRC has balanced this very well. Google has done well recently, too. Lots of clued-in people in both places.
I'd agree more if he hadn't submitted (and gotten approved) a working exploit in their store. Without telling them about it.
Edit: Now, I don't disagree that just banning him from the program is a harsh response, or that pulling the app and having someone from the security team send him an email would have been better. But it's hard to call this a bad move on Apple's part.
You prove DDoS vectors exist by DDoSing your own site, or one you have permission to work on. The same goes for SQLi vulnerabilities. If you want to report a vulnerability you've found to a company, include a working exploit in your report, but don't run it. If the company ignores you or tries to brush the vulnerability off, that's where it gets hairy and responsible disclosure comes into play.
We don't know what his level of communication with Apple was, but it doesn't appear that he notified them before testing this exploit. Had they refused to address the issue or otherwise brushed him off, this would have been a reasonable escalation. The same story on r/netsec [1] links to a Forbes article [2], which claims he notified Apple three weeks ago. That's not a ton of time.
Ultimately, he very much violated their ToS and Apple is well within their rights to give him the boot. Whether that was a smart decision on their part remains to be seen.
Since the only way to install software on iOS devices is through the App Store, it's important to demonstrate the attack vector by which malicious code can actually reach users.
It indicates both a security flaw in the platform itself and a security flaw in the App Store approval process; both should be highlighted.
Since he has control over pricing, couldn't he have submitted it with a free price tag, then changed it to something insanely high once accepted? That way no sane person would buy it, and he'd still prove his point.
He _had_ to submit an app and get it approved for this to work, of course; otherwise the point was moot. And it's a good wakeup call to everyone. Unfortunately, security awareness sometimes only improves when someone makes a splash.
Otherwise, while I think you've got a point (he could have used pricing to ensure no one ran his app), that isn't the issue here. The disclosure is. No one is contending he did something evil with his code, it's that Apple is mad about his code and disclosure. I don't think making it unlikely to be purchased would have helped.
For one, he could have submitted it and then have it "held for developer release" — at which point he told them about it. There's no reason he had to have it actually in the App Store here, even if he wanted to test the approval process.
^^This. And he [1] probably told them immediately afterwards, since otherwise they still wouldn't have known. As he says: he regularly submits bugs.
[1] Or perhaps someone beat him to it: he may not have seen the acceptance mail before someone already noticed the app? I'm not familiar with the exact process: do you need to give final approval or can the app be in the store for a while without you knowing it?
This hardly qualifies as an exploit. While it allows the app to do something it's not supposed to do, the ability to download and execute additional executable code doesn't actually violate security. The new code is still restricted to the app's sandbox and can't do anything that the original app couldn't potentially have done directly.
It easily qualifies as an exploit, given that Apple's app store model is based on the fact that each app is reviewed beforehand to ensure various properties, including the property that the app does not contain spyware, etc. If Apple approved a harmless app, and then said app downloaded code that snooped on the user's calls or asked for their credit card number, that's an exploit.
First - I think just general manners, as well as established protocol, would have the security researcher let Apple know ahead of time what he would be doing. A simple email sent prior to uploading this code would have been sufficient to cover his bases - I'm surprised he didn't do that.
Second - Unless I'm mistaken, his proof of concept was more a violation of Apple's TOU; it didn't actually attempt to copy credit card numbers or snoop on users' calls, so in that sense it wasn't an exploit.
Net-Net - nobody comes out of this looking good, but Apple makes it clear that they are prepared to back up the language of their Developer TOU with actions.
Part of the security of the app store is the review process. "It's possible to download and execute code" is neat, "it's possible to download and execute code and the app store reviewers don't catch that" is much more impressive.
Nothing in the App Store review process will allow them to catch a zero-day exploit. Coming up with a zero-day exploit in iOS is very impressive - but, by definition, once you have it, the App Store review process isn't going to catch it.
Yep. There's no deep check of what your code contains, only a fairly superficial check of what it actually does. You can include nearly anything in your app (perhaps lightly obfuscated) as long as it doesn't show its face during the review.
Depends on your level of paranoia and willingness to rely on the network. The server has the advantage of letting you turn it on and off at will, but a timer will work even if the user has no internet connection or your server gets confiscated by the FBI.
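Both activation strategies are easy to caricature in a few lines. This is a hypothetical sketch, purely to illustrate the tradeoff; the timestamp and flag values are made up, not taken from the thread or from Miller's app:

```python
import time

# Some moment safely after app review is expected to finish
# (arbitrary illustrative value).
ACTIVATION_TIME = 1_700_000_000

def active_by_timer(now=None):
    """Timer strategy: needs no network at all. The hidden code path
    stays dormant until a hard-coded time has passed, so reviewers
    testing the app beforehand only ever see benign behavior."""
    if now is None:
        now = time.time()
    return now >= ACTIVATION_TIME

def active_by_server(flag_from_server):
    """Server-flag strategy: requires the network, but the developer
    can switch the behavior on and off at will."""
    return bool(flag_from_server)
```

The comment above captures the tradeoff exactly: the timer keeps working if the server disappears, while the server flag gives the developer a remote kill switch.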
I think it qualifies as a great exploit. You completely bypass the private API checks that Apple performs, and there is a lot you can do with those APIs that is potentially evil, even inside the sandbox.
Code signing is a control intended to restrict execution to only those apps which have been granted the right to run.
Your second question is a good one, but given its context, it is unrelated. If Apple signs a Python interpreter, they do so at their peril, for obvious reasons.
Yes, and it's still only running an app which was granted the right to run, it's just that this app now has some extra code in it. Since Apple doesn't really inspect the contents of the apps it signs anyway, this grants no extra capabilities.
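The model the two sides are arguing over can be caricatured in a few lines. Real iOS code signing uses public-key signatures over the binary, not a bare hash allow-list, so treat this strictly as an illustration of the disagreement, not as how Apple's check actually works:

```python
import hashlib

# Hypothetical allow-list standing in for Apple's signature check:
# only code that was blessed at review time may run.
APPROVED = {hashlib.sha256(b"reviewed app binary").hexdigest()}

def may_run(code: bytes) -> bool:
    """Return True only for code blessed at review time."""
    return hashlib.sha256(code).hexdigest() in APPROVED

# The crux: the reviewed app passes the check, but any code it
# downloads and executes later was never blessed at all.
```

One side says the signed app is "still only running an app which was granted the right to run"; the other says the downloaded payload plainly falls outside anything that was ever approved.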
Unfortunately, if he had submitted an exploit and not gotten banned, we'd see more people criticizing Apple for favoritism in enforcing the rules.
They deserve that criticism and it's true, but I can see where they would prioritize actually enforcing those rules, especially in a big publicly-visible incident.
Obviously the best choice from HN's moral point of view is to be more open, more even-handed and less draconian about rules in the first place. But failing that, I can see why they try for "even-handed" over "less draconian," given their own priorities.
When you submit a security-related bug report to Apple (granted, my experience dates from 1999-2005), you get:
A/ ignored (auto-reply mail: "we might fix it, don't tell anyone or we'll go after you")
B/ the bug doesn't get fixed for 2 or 3 years
C/ the bug gets fixed, but you get no credit
I don't know why this is being downvoted. Apple is notoriously horrible at fixing vulnerabilities reported by the general public, unless they're downright critical.
In fairness, many of the bugs which enable jailbreaking also represent serious security problems. For instance, the various iterations of web-based exploits fundamentally do represent remote code execution, a serious bug in any browser environment. On any other platform, we'd classify them exclusively as security vulnerabilities; however, on iOS, the user has to take advantage of security vulnerabilities to break into their own system.
Not necessarily. Remote exploits, definitely, but entirely local jailbreaks that require booting the phone into a specialized firmware-loading mode don't actually impact the user's security, just Apple's anti-tampering guards against the user.
Wrong. The first jailbreak was done because the iPhone trusted the restore mode commands coming from iTunes. The protocol was totally reworked so that the iPhone would only run some canned scripts. This did nothing to improve device security (it pretty much only enabled the jailbreak), but Apple fixed it fast.
FWIW, I've submitted a couple of (relatively minor) ones in the last couple of years. They were each fixed in the next update and I was credited in the security release notes.
I don't know about the timeframes you quoted, but the Apple security advisories do credit the researchers. See some of the entries here:
http://support.apple.com/kb/HT5002
Submitting a security bug report to the Chromium project was a delight compared to submitting one to Apple. It was obvious that the engineers working on Chromium cared about the problem and were competent. On the other hand, I might as well have been reporting the Apple bug to a brick wall or a black hole.
Odd use of the word "competent". Are you implying Apple personnel aren't competent because they didn't send a message saying "thank you" with gold stars all over it?
Hold on here. Is Apple expected to know Charlie Miller is a "security guru", and even if they did, why should he be treated any differently? Security researchers should be held to the same standard as regular developers when reporting bugs/flaws.
RTM was convicted of a crime because of his curiosity, and here we have a security researcher who knowingly put users at risk. You ask me, Mr Miller got off lightly.
He did not put users at risk. This vulnerability allows apps to download and execute new code, but that new code is still subject to the app's sandbox. This vulnerability is interesting from a research standpoint, but has zero actual consequences to the security of iOS.
Not sure I agree with this. Less scrupulous developers might use this to download code that does things, even from a sandbox, that are bad for users. For example, it could download code that reports your usage habits to third parties, or saves your CC number.
Surely you don't think that having arbitrary code placed within the iOS App Store is harmless, do you? Once malicious code has been approved into the store, an attacker need only find a way to break out of the sandbox, which I am sure is possible.
Reviewers check behavior, mostly not content. It's easy to hide code and activate it later. If you can break out of the sandbox, you don't need to download code to exploit that.
In his demo video, he shows a Metasploit interpreter downloading the address book. He mentioned it was a different payload, but I don't recall if he said it was a different application.
If it was the same app, then does that imply the sandbox for a stockmarket app allows access to the address book?
Nowhere in that article do I see them state that the downloaded code is able to escape the sandbox. They certainly imply it pretty heavily, but I can only assume that's due to general cluelessness, or less charitably a desire to sensationalize the story.
Everyone at Apple who does security knows of Charlie Miller. The guy has a PhD, hacks Apple products, wins prizes, writes research papers, etc. If they don't know of him, I'd be very surprised.
Isn't it considered good security-research practice, and just good manners, to notify the company beforehand and give them a chance to fix the problem before going public? Rather than pulling stunts like publicly abusing the flaw to make sure they're caught with their pants down?
Judging from the article, he did neither - so don't come crying about how "that's so rude".
1. This "guy" apparently didn't try very hard, at all, to cooperate, as evidenced by him putting the exploit itself in the App Store before notifying Apple about it, in direct violation of the dev guidelines.
What good is it to have such guidelines at all if you display in public that you won't enforce them?
2. Microsoft is doing a great job at this? So are we to assume that their security is therefore superior?