Hacking Team: a zero-day market case study (tsyrklevich.net)
148 points by colinprince on July 26, 2015 | 76 comments


This is a really amazing post.

Two things that startled me:

First, there is apparently a market for vulnerabilities that bypass the Flash access controls for cameras and audio recording. There can be no benign purpose for those exploits. Nobody penetration tests a Fortune 500 company looking to see if they can light up the cameras on worker desktops.

Second, there's an eighty thousand dollar price tag for a Netgear vulnerability. That shocked me: serverside, highly targeted. Only, it turns out, there probably isn't any such market. Apparently, some of these bugs are listed for sale at exorbitant prices with no anticipation of ever selling them. They're not listed at anything close to a clearing price, but rather just aspirationally, with the idea being that anyone who will someday, maybe, engage a serious zero-day broker for a Netgear vuln is probably going to derive six figures of income from that bug.

That's the theory, at least.

For future HN bug bounty/black market threads: note the absence of Facebook XSS vulns on these price lists. Nobody is paying tens of thousands of dollars for web vulns. Except the vendors. :)


> There can be no benign purpose for those exploits.

I agree, but as can be seen in the post by Netragard linked in the article:

www.netragard.com/exploit-acquisition-program-shut-down

Businesses can still rationalize and excuse their behavior by pretending that they're actually doing the morally right thing:

> The need for 0-days is very real and the uses are often both ethical and for the greater good. One of the most well known examples was when the FBI used a FireFox 0-day to target and eventually dismantle a child pornography ring. People who argue that all 0-day’s are bad are either uneducated about 0-days or have questionable ethics themselves.

I loathe this viewpoint, for at least 3 reasons:

- "Keys Under Doormats": you keep the 0day secret to target pedophiles, but other people will be affected by it as well

- It's using the usual cheap rhetoric of fighting pedophilia to defend some business interests

- You're either with us (the spotless knights), or against us... "People who argue that all 0-day’s are bad [...] have questionable ethics themselves "


I think I agree with you. I see a market for camera bypass bugs as more evidence that the whole market is ethically bankrupt. I've always thought that --- high prices for exploits are premised on the idea that they aren't being fixed by vendors! --- but this is a lurid and disturbing detail.

Of course, it's also possible that, like the Netgear CSRF RCE, this is a bug posted with a high price as a trial balloon, and nobody actually buys it.

(For what it's worth: I also think the idea that serious enterprises need zero-days to test their security controls is also pretty silly. I know it does happen, but I think the causality is mixed up; I think it happens because the markets exist, not the other way around.)


> One of the most well known examples was when the FBI used a FireFox 0-day to target and eventually dismantle a child pornography ring.

Funny... that's when they took down Freedom Hosting, right? If I recall correctly, they didn't use a 0-day; they targeted a vulnerability that had been patched in the regular Tor Browser, so only people using an outdated version of the browser were hit by it.

Plus, calling Freedom Hosting a "child pornography ring" isn't representing accurately what it was. Sure, there was child porn hosted there. Maybe even most of what was hosted there was child porn, I don't know... but it was a web host. There was no encouragement from Freedom Hosting to host cp there (apart from not doing anything to prevent it).

Plus, apart from busting the guy responsible for FH, I'm not sure they got anyone of interest with this operation... at least, I haven't heard of it and I'm sure they wouldn't have been too shy to brag about it if they had made any high-profile cp bust from this.


> For future HN bug bounty/black market threads: note the absence of Facebook XSS vulns on these price lists. Nobody is paying tens of thousands of dollars for web vulns. Except the vendors. :)

Is this due to the fact that the value of a Facebook XSS vuln is very low, or to the high likelihood that Facebook will notice the vulnerability (possibly from another source) and patch the issue before a profit can be realized?



Makes sense to me.


It's because Facebook, Google, and some others run generous bug bounties / white-hat programs. Without committing a crime, people can make a lot by disclosing a vulnerability directly and confidentially. Vulnerabilities can pay $5-15k and there have been $30-40k payouts. Occasionally, you'll see a blog post explaining the process from detection to payment, like: http://homakov.blogspot.com/2013/02/hacking-facebook-with-oa...

FB paid out $1.3M in 2014. http://www.zdnet.com/article/facebook-bug-bounty-program-pai...


No. Facebook is probably not outbidding the black market for their vulnerabilities. I think 'grugq is exactly right: the market for serverside vulnerabilities with hours-long half lives is very thin. Facebook could pay $500 for RCE, and so long as they do everything else they currently do for security, a thriving black market would not emerge for their vulnerabilities.

It's interesting to me that there's a real market price for an Adobe Reader flaw, but that Facebook flaws have (generous) fiat prices set by Facebook.


I didn't say they're outbidding the black market, but they are quite generous and it is a legal action at that point. Makes sense to me, because they actually care about security and it's worth paying out to protect users and the brand. It also serves as a unique recruiting tool.


Seems to me you could "quite easily" double dip.

Sell your exploit on the black market. A day later, sell it to the vendor. "Sorry, they must have found and patched it"? Just ask Facebook not to disclose your information when highlighting vulnerability payouts.


If you read this whole post, you'll see that buyers expect this behavior, and payments are escrowed or tranched to account for it.
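The linked post describes how: prices are typically paid in installments over several months, and the remaining installments lapse if the bug dies (gets patched, leaked, or independently found) in the meantime. A toy sketch in Python of why that blunts double dipping; all numbers here are invented for illustration, not from the article:

    # Hypothetical installment schedule: the seller receives equal monthly
    # tranches until the bug is burned (patched, leaked, or rediscovered).
    def payout(total_price, months, burned_after):
        tranche = total_price / months
        return tranche * min(months, burned_after)

    # A $100k deal paid over 4 months: reselling the bug to the vendor after
    # the first month costs the seller three quarters of the price, which is
    # exactly the incentive the tranching is meant to create.
    print(payout(100_000, months=4, burned_after=1))   # 25000.0
    print(payout(100_000, months=4, burned_after=4))   # 100000.0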


Google and Mozilla pay bounties for vulnerabilities in their browsers, too. Yet the market for these bugs is thriving.


Is selling a vulnerability a crime? Does that depend on whom you sell it to?


1. No. 2. Not currently, unless you know they're using your bug to commit a crime, in which case you can assume liability for that crime by actively helping them.


My heart wants to agree with Thomas, but I think we should remember this is a single data point and not generalize too hastily.


There definitely is a market for CSRF RCEs in common home routers.

(The prices are absurd due to the fact that nobody has been selling these for very long)
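For readers wondering what "CSRF RCE" means here: the router's LAN-only admin page has no CSRF protection, so any public web page can make the victim's browser submit requests to it, and a command-injection bug in one of those handlers turns that into code execution on the router. A deliberately toy sketch of that bug class in Python (my own illustration, not any real firmware):

    # Toy model of the vulnerable side: an admin "diagnostics" endpoint that
    # (1) has no CSRF token and (2) passes a form field straight to a shell.
    # A malicious page can auto-POST ping_host=8.8.8.8;<any shell command>
    # from the victim's browser, and the router runs it with full privileges.
    import subprocess
    from http.server import BaseHTTPRequestHandler, HTTPServer
    from urllib.parse import parse_qs

    class AdminHandler(BaseHTTPRequestHandler):
        def do_POST(self):
            length = int(self.headers.get("Content-Length", 0))
            params = parse_qs(self.rfile.read(length).decode())
            host = params.get("ping_host", [""])[0]
            # BUG: unsanitized input reaches a shell (command injection).
            out = subprocess.run("ping -c1 " + host, shell=True,
                                 capture_output=True).stdout
            self.send_response(200)
            self.end_headers()
            self.wfile.write(out)

    if __name__ == "__main__":
        HTTPServer(("0.0.0.0", 8080), AdminHandler).serve_forever()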


I believe you. How are those bugs monetized by the buyers?


They generally enable AT LEAST full MITM access, so ad hijacking and espionage.


If you could hijack cookies you could run a ransom scheme. You take control of someone's email or some other equally important social service and then you demand a ransom to release the account.

Just yesterday there was a topic like that in here: https://news.ycombinator.com/item?id=9949444


Your Netgear router can't break TLS. Google Mail and Facebook are designed to assume that your Netgear router is already evil.


Both true - but once you own someone's router, you could (hypothetically) social engineer the target into installing an alternative root certificate (like you would do yourself when setting up a debugging proxy for HTTPS traffic). As soon as they do that, it's all cleartext. I've been able to convince various (non-technical) house guests and (technical) coworkers to do this in the past, so I imagine it'd be possible on a wider scale.


Isn't Facebook pinned? Google is.

Obviously, if you can trick someone into running an EXE, it's game over, but that's true whether or not your router is owned (95%+ of the Internet sites people visit can be used to stage this kind of phishing attack).

Someone who knows more about organized crime than I do can tell me whether or not people actually harvest logins from compromised routers. CSRF RCE definitely belongs to the class of attacks that can be deployed en masse with very little effort, so it's at least sort of plausible.
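For what pinning buys you in this scenario, here's a rough sketch (the expected pin below is a made-up placeholder, not Facebook's real one): a pinned client hashes the server's public key (SPKI) and refuses the connection if it doesn't match a value it shipped with, so a MITM proxy presenting a cert minted by a rogue root still fails the check. Note that Chrome deliberately doesn't enforce its built-in pins for roots the user installs locally, which is why the proxy screenshots downthread work.

    # Hedged sketch of an HPKP-style pin check, using the "cryptography"
    # package. EXPECTED_PIN is a hypothetical placeholder value.
    import base64, hashlib, ssl
    from cryptography import x509
    from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

    EXPECTED_PIN = "AAAA...fake-base64-spki-hash...="

    def spki_pin(host, port=443):
        # Fetch the leaf certificate and hash its SubjectPublicKeyInfo.
        pem = ssl.get_server_certificate((host, port))
        cert = x509.load_pem_x509_certificate(pem.encode())
        spki = cert.public_key().public_bytes(Encoding.DER,
                                              PublicFormat.SubjectPublicKeyInfo)
        return base64.b64encode(hashlib.sha256(spki).digest()).decode()

    # An interception proxy's forged cert produces a different pin, so this
    # comparison fails even though the forged chain "validates" locally.
    print(spki_pin("www.facebook.com") == EXPECTED_PIN)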


Their certs may be pinned, but from what I can tell there's nothing keeping me from introducing a new root cert to my system (IIRC there was a scary popup and that's it).

Here's a screenshot of Chrome giving me the "insecure" padlock, but still rendering Facebook while I debug the HTTPS traffic: http://puu.sh/jdGhc/128d35f793.png

Here's similar, showing Google HTTPS traffic being debugged while browsing in Safari (with nothing appearing out of the ordinary): http://puu.sh/jdGoW/3163db1e54.png

(the debugging proxy is Cellist, which I find to be a suitable replacement for mitmproxy, as long as I don't need rewrite/replay)

> if you can trick someone into running an EXE, it's game over

I suppose it is roughly equivalent (i.e. if you could get them to download a cert, you could probably get them to download an EXE so the point may be moot). I always figured you might have more luck with this sort of thing if you owned a Starbucks router or something, where people might be more accepting of having to jump through hoops to use the internet.


Have you never executed a binary you downloaded over http? (Live) Patching backdoors into existing binaries is not exactly difficult.

So far I've only seen one attack like this in the wild, but you need to understand that these exploit vendors primarily sell for targeted attacks (usually by various government actors).


Of course not. But, like I said, if all you need is to pop up a convincing looking page that gets someone to download an executable, you don't need to pop a router to do it. There are hundreds of ad networks you can inject stuff into to accomplish this.

My point is that the compromised router isn't a usefully privileged position to launch this attack from. Whether or not you have the router, you're relying on sending someone to a random convincing-looking web page. This is an "open redirect" level of difficulty, not an $80k RCE level.


> the compromised router isn't a usefully privileged position to launch this attack from.

I hadn't thought too deeply about it but you're absolutely right. Given the presence of encryption, the biggest advantage you can get from owning the router is being able to phish the user from a site they might normally trust but isn't encrypted (a group of sites that is, thankfully, getting smaller every day).


Key is that you don't need to phish, you can just wait until the user downloads an unencrypted executable.


A bunch of software and devices assume (or can be configured to assume) that anything on the local LAN is safe and can be trusted. So routers seem like a good place to look for unsecured file shares, security cameras, printers, and whatever else is on that LAN. I have strong doubts most network protocols are properly secured against eavesdropping or MITM on a LAN either.
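A hedged sketch of how cheap that kind of LAN reconnaissance is once you have a foothold like the router; the 192.168.1.0/24 range and the port list are my assumptions, nothing from the thread:

    # Naive TCP connect scan of a home /24 for services that commonly assume
    # the LAN is friendly: SMB shares, RTSP cameras, raw JetDirect printers.
    import socket

    SERVICES = {445: "SMB share", 554: "RTSP camera", 9100: "JetDirect printer"}

    def scan(prefix="192.168.1.", timeout=0.2):
        for host in range(1, 255):
            for port, name in SERVICES.items():
                with socket.socket() as s:
                    s.settimeout(timeout)
                    if s.connect_ex((prefix + str(host), port)) == 0:
                        print(f"{prefix}{host}:{port} open ({name})")

    scan()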


This is like the 'nickpsecurity comment downthread. The problem with it is, people don't buy vulnerabilities the way they buy power tools; they don't think of all the different things they might do with them. People who buy vulnerabilities (a) have a very specific thing they plan on doing with that bug and (b) plan on doing it in exactly the same way to a large number of targets.

At the point where you're futzing around on someone's LAN looking for unsecured file shares --- something that perhaps 1 in 20 people actually have in 2015 (file shares at all, I mean) --- you're breaking both of those criteria.


Attacks I've heard of were MITM on vulnerable services, network monitoring to enumerate other attacks, malicious DNS activity, proxies, DDOS attacks, and shutdowns if it's eCommerce. Could be more. These are the ones I've seen happen or read about in reporting on the subject.


You are suggesting here that someone might pay $80,000 for a Netgear RCE to help them run DDOS attacks?


No, I'm suggesting that's one use of router compromises. Already told you that $80k is the asking price, not the selling price, for its likely application: cyberespionage. It will play a support role in beginning or maintaining attackers' presence in a network.


> Nobody penetration tests a Fortune 500 company looking to see if they can light up the cameras on worker desktops.

This surprises me, to some degree. Compromise a machine or laptop privy to sensitive information -- meetings, phone calls, etc. -- and you might have access to valuable information.

These days, I guess that could include the sound of passwords being typed in, for which keystroke analysis software was making the headlines a couple of years ago.

Part of me is not surprised. Few people think of all the ways around security. But, big dollar pen testers should.

I would presume that firewalls have some success in keeping such malicious data streams from exiting the internal network. Then again... How can one distinguish them from all the legitimate IP-based conferencing going on, these days?

P.S. I suppose in part it is security through obscurity. Being able to identify a useful target and pick out the valuable bits of information. When you're simply spraying exploits, that's probably a needle in a haystack with regard to this kind of information. Or at least, needing much more attention than waiting for your software to pass back a log of user name and password fields, cookies, etc.


Replying to myself. I am certainly no expert, but partly on my own and more so perhaps simply by extrapolating from what I hear about, I imagine circumstances that, a few years later, I seem to end up reading about.

But, that's not the point of my reply. That is, that I am not an expert, and I seem to be drifting further and further away from relevance in conversations, these days.

So, I think it is time to unplug myself. Maybe still browse, but enough of my comments.


Does it seem likely that this is the easiest way to get that information, however? If you get someone's email or files, you get what is likely to be more detailed information in a conveniently searchable and summarized form, but if you get an audio recording you either have to listen to it yourself or spend time getting a high-quality speech recognition system up and running.


The zero-days used on high-dollar commercial pentests give attackers remote code execution.


A Facebook XSS vuln is probably not enough to do damage. However, a serious 0day or attack plan that could take down the network would be valuable now that Facebook floats on a public exchange. Between April 20 and June 20, 2011, Sony shares fell from $30.14 to $24.28 during the PSN hack. A vuln that could take down Facebook could allow shortsellers to profit if it was severe in magnitude, even if not in duration, especially if it created FUD about Facebook security.
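(For scale, and assuming those figures: that's a drop of (30.14 - 24.28) / 30.14 ≈ 19% over two months, though how much of it was the PSN hack versus the broader market is debatable.)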


About the first, pentesting that gives access to secret board meetings through company laptops must raise some eyebrows. I am not sure that is legitimate pentesting, though.

About the lack of XSS vulnerabilities: that probably doesn't generalize to e.g. WordPress or bulletin-board software, which customers install and run themselves.


You're right, about Wordpress and bulletin board software. There is a market for those flaws.


Nah, something like a Netgear vulnerability is for government or espionage-conducting clients. They're useful. Whether anyone is buying for $80,000 is another matter. I can't comment on it.


For $80k I could probably hire someone for a few months to discover one for me. Why would I buy one on the open market if that's the case? As a bonus, hiring someone may produce more than one 0day.


I agree. Yet, governments regularly buy zero-days for five to six digits instead of hiring someone to produce them. As I told Thomas, I doubt they'd get $80k for it, but there should be someone with more money & need than tech talent that might buy it. That's the vulnerability broker business in a nutshell. The other end is integration with the attack tools that the company licenses. Combine them for the overall value proposition.


I'm not sure what this has to do with my comment. Can I ask --- because of your HN name --- have you done a lot of vulnerability research work? I'm curious to discover new perspectives on it.


nickpsecurity's reply is dead because he used the m-word.


What's the "m-word"? (I certainly didn't flag that comment).


Search for "It's mostly mental"; the m-word is presumably what follows it.

I didn't flag this.

But I do object to the suggestion that Thomas learn to think like an attacker. I am pretty sure that he does this as well as anyone.

But this approach seems dismissive of a lot of work that addresses the real world, whereas the approach advocated in this post and other posts reminds me strongly of back when I believed that it was worthwhile to prove programs correct. Very few outfits can do this, and perhaps most rails programmers and Java programmers don't expect to be able to prove their programs correct.

It seems to boil down to "We need to build programs in a fundamentally different way". With that sentiment I agree, but I am not seeing much in the way of actionable advice in your many posts here.


> Search for "It's mostly mental"; the m-word is presumably what follows it.

Yes this, pretty sure it kills any comment immediately.


I'll try to remember that. Thanks for the tip to the both of you.


I didn't mean it as a put-down: more a reminder to do what he does professionally. Security evaluators and hackers alike often get into networks with attacks on poorly set up gateways and firewalls. It's on everyone's checklist. A Netgear router works similarly, albeit with fewer features, so I tried to get him to make the mental connection between a weakness he'd readily recognize in one area and one that applies in another. I could see how it might come off differently though...

As far as my post goes, it's part of my larger theme that people aren't applying what the field learns. You don't need to be Paul Karger or Dan Bernstein to do better than what we see. The safety-critical industry responded to C's problems with subsets, reviews, static analysis tools, and dynamic testing systems. Quality went way up. We've seen the same in academia with free prototypes of many of these and further work to straight-up knock out the issues (Ur/Web & Softbound are recent examples). No uptake. There were system languages such as Ada that, by default, eliminated many of the common problems. Little to no uptake. There were concerns about a steady stream of compiler problems. They could improve/use the model with constant problems or the one (CompCert + ML implementation) with the solution. No uptake. Safety- and security-critical work showed microkernels greatly improved security by eliminating much kernel code, isolating the rest behind mediated interfaces, and allowing recovery in simple failure modes (eg drivers). Most ignored it and Linus deplored it, but at least GenodeOS took it up with an active community. Rare exception.

There's a recurring pattern in INFOSEC where proven solutions to problems are either forgotten or ignored. I fight that trend in my posts where I identify known issues or promote alternatives (design or implementation) that were previously field-proven. I also give credit to those that make solid attempts to do it better often with results: qmail, GenodeOS, Minix 3, INTEGRITY RTOS, Secure64's SourceT OS for DNS systems, Sentinel's HYDRA PCI firewall, CheriBSD, Cornell's SWIFT, Praxis's Correct by Construction w/ SPARK Ada, Python with Cleanroom methodology, and so on.

There's lots of amateurs and pros alike doing stronger methods in academia and the commercial sector. It doesn't have to be a full EAL7 system or even that formal. Yet, if a tool exists that immunizes against big problems, why aren't we using or breaking/improving it? Why do I see another presentation at DEFCON breaking a vanilla bug, or a new tech that delivers a tiny fraction of that protection? Why are security-critical apps not leveraging free tech to improve them, and decomposition/least-privilege at the least?

I'm not asking much. Just using what exists, what's shown to deliver the most bang for the buck, what's free, and what takes only a small portion of the labor. Do that from language to architecture to protocols. The baseline will be so much better. I noted in both comments I was grateful to the fragment of INFOSEC that does this. Yet, most of the field just reinforces the status quo while occasionally calling out or bandaiding the preventable problems it introduces. Many get a rush out of the attacks they create that they don't get out of building tools that create immunity to attack classes. Hence, that other m-word as a metaphor for what they're doing. In the end, nothing changes with that approach, but at least they're having fun. ;)


Frankly: this EAL7 stuff has basically no applicability to the real world. People get EAL6-EAL7-EAL6+ certification for things where (a) they're willing to spend 2x the implementation dollars just for the privilege of selling a product (or part) to a tiny subset of all GSA buyers and (b) the things are actually straightforward to specify fully.

Look at the list of EAL6+ products. They're all things like smartcard chips: things with very limited interactions and very well-defined functionality. Real-world software simply isn't like that. Nobody is going to EAL6+ a web browser, or a PDF reader, or even a desktop or server OS kernel (the fact that the best known example of a formally verified OS kernel is L4 should tell you something).

You bring Common Criteria certification up on a lot of different threads about security. The industry has literally nothing to learn about security from Common Criteria.

Regardless of what you may think about that sentiment: this has very little at all to do with the market for zero-day vulnerabilities.


You're selective again. I named all kinds of existing products and solutions that do a better job at solving real-world problems in a safe/secure way than mainstream alternatives. You ignored that to focus on the EAL7 thing, claimed high assurance hasn't done anything beyond smart cards (lol), and kind of stopped there.

Ok, let's get back to foundations since you didn't read my framework. The methods are more important than the certs themselves. The old stuff (Orange Book) called for strong specifications of requirements, design, each thing a system did, failure modes and so on. The implementation had to be done in safest known way, be modular, have well-defined interfaces with input checks, and provably correspond to that spec. Testing, covert channel analysis, configuration management, pentesting, trusted distribution... many requirements on top of it. Later, static analysis, rigorously evaluated code generators, and so on added to the mix. Early projects, which you claim have no practical value, built secure email, VPN's, databases, object storage, thin clients, web servers, logistics systems, and so on. The empirical assessments (eg "lessons learned from...") showed the various methods caught all kinds of problems albeit with different payoff rates in different situations.

Following NSA's finishing off of the high assurance market, the mainstream stuff and all the hacks one could desire prevailed for years. Eventually, DOD/NSA demanded high assurance again with their separation kernel concept, which academia and private companies built. Academia had also been doing strong verification, from math to clever testing, for all kinds of things up to this moment. A common theme from the old days repeated in that they focused EAL6-7 type of effort on critical mechanisms that could be easily leveraged for safety/security benefit, since we couldn't do everything like that (good guess on your part). The mechanism could be isolation, analysis, transformation, and so on. The more flexible the better.

MILS and Nizza architectures split systems between isolated apps and VM's with eg Linux running on top of strong kernels (eg EAL6+). Results of some did well against NSA pentesters. For others, the tiny amount of trusted code by itself shows it could never have the number of problems of... whatever you wrote that post with. Others focused on compilers, language type systems, processor enhancements, code generators, DSL's, and so on. Are you saying a fully-documented, predictable, rigorously tested C compiler isn't practical? Or the finished WCET analysis during compilation, or the pluggable optimizations other groups are working on now?

Meanwhile, there were plenty of medium assurance offerings. Software such as qmail, Secure64, and HYDRA used architecture that greatly reduced risk. GenodeOS took it quite further by making their architecture plug and play with your choice of assured components. Tools such as Astree and SPARK knocked out all kinds of errors in embedded systems plus components of larger systems. Ada, the ML's, Eiffel (esp Design by Contract & Scoop concurrency) did the same in regular ones. Cornell's SWIFT, Ur/Web, Opa, and SPECTRE all made web applications immune to certain types of attacks in different ways without much effort by developers. We saw the formation of all kinds of secure storage, networking, backup, synchronization, virtualization, recovery, etc in academia with a subset rigorously analyzed and some also integrated with production software in prototypes. We saw hypervisor and paravirtualization work that made the OS itself untrusted. We saw CPU designs such as SAFE, CHERI, DIFT stuff, and those leveraging crypto beat vast majority of attacks down to CPU level with one proving properties down to the gates.
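As one concrete illustration of the Design by Contract idea credited to Eiffel above (a generic Python sketch of the concept, not code from any system named here): routines declare preconditions and postconditions that are checked on every call, so a violated assumption fails loudly instead of quietly corrupting state.

    # Minimal Design-by-Contract sketch: pre/postconditions checked at runtime.
    def contract(pre=None, post=None):
        def wrap(fn):
            def inner(*args, **kwargs):
                if pre is not None:
                    assert pre(*args, **kwargs), "precondition failed: " + fn.__name__
                result = fn(*args, **kwargs)
                if post is not None:
                    assert post(result, *args, **kwargs), "postcondition failed: " + fn.__name__
                return result
            return inner
        return wrap

    @contract(pre=lambda buf, n: 0 <= n <= len(buf),
              post=lambda out, buf, n: len(out) == n)
    def take(buf, n):
        # The contract rules out the overread a C version could silently perform.
        return buf[:n]

    take(b"hello", 3)    # fine
    take(b"hello", 99)   # raises AssertionError: precondition failed: take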

Tons and tons of work. The best stuff being things where effort is expended once to pay off many times. Tagged/capability CPU's, better architectures for OS's, compilers that automatically enforce strong safety/security, static analysis tools that prove absence of common bugs, type systems for high level languages easy to code in... the list goes on. These are all very practical, with many used in real projects or products. The thing they all have in common is they (a) believably do their job and (b) result in drastic reduction of risk and attack surface at every layer of our systems. Widespread adoption and investment in such methods that work, rather than mainstream ones that don't, will have a direct impact on "the market for zero-day vulnerabilities." Given pervasive deployment, that market would mostly disappear outside subversions and interdictions.

And I encourage naysayers such as yourself to put effort into such strong methods to get those that aren't at mainstream readiness to mainstream readiness. Your own mind would be very beneficial to academics designing secure filesystems and messaging systems that rely on necessarily complex protocols and crypto. You might knock out problems they didn't see. There might be many other people on HN with similar things to offer and great results to show for it in future. It's why I respond to these misleading comments of yours in detail. One day, someone reading them might be inspired to do better than any project I referenced or simply put effort into those proven to already get plenty results. It is worth it even if the troll or failure-to-get-it rate of those reading it is 99%. That 1% might make one of these real and everything I cited started with one of them that decided to build on proven theory and practice in contrast to mainstream.

So, I'll stay at it even if you think highly secure processors, kernels, compilers, type systems, web apps, databases, middleware, and so on have... "no applicability to the real world." Not the majority of it, I'll agree. They prefer their IT assets served to their opponents on a silver platter. I write for the others.


People have been plowing money into this dead-end for decades, as you've ably described here.

The thing laypeople need to remember when they read these impressive-sounding litanies of high-assurance systems is that the components that have been formally assured are the simplest parts of the system. They're demonstrated to function as specified, in many cases rigorously. But so what? We also rely on assumptions about TLB coherence in our virtual memory systems (TLBs are probably not formally assured, given the errata). Are we free of memory corruption flaws because we assume the VM system is secure? Of course not.

And so it goes with systems built on high-assurance components. It's possible, even likely, that the assured components aren't going to generate security flaws --- and their designers and the CCTLs will certainly crow about that. But so what? The security vulnerabilities in most systems occur in the joinery, not the lumber.

Virtually every programming language ever used promised to make systems immune to "certain types of attacks". Even Rails made this promise. The obvious problem is, even if you succeed in immunizing a system against "certain types of attacks", attackers just switch to other classes of attacks.

It is not enough to close the doors on, for instance, memory corruption flaws. Virtually every modern mainstream programming language accomplishes this with ease, and yet systems are still riddled with security flaws.

Why? Because security flaws are simply the product of violated assumptions. Every bug violates an assumption and so every bug has the potential to be leveraged by an attacker to compromise security. Unless your programming environment literally promises "bug free" --- which is to say, you're designing trivially simple components that never change and have operational lifetimes measured in decades --- there is no silver bullet.


You make decent points but are missing the bigger picture: the vast majority of problems with security, esp code injection, are caused by mechanisms or design patterns that create insecurity by default. It takes insane amounts of effort to use the basic building blocks in complex applications without creating problems. The building blocks themselves are often simple or can be made that way. That you say the robust methods only work on the simplest stuff is actually an endorsement of my approach if we focus them on the building blocks. That's what I mainly push it for, so let's test my theory with a real-world example.

We'll only use techniques from production systems made before the 80's, that were commercially successful, and that exist today in some form. Should make it easy to argue practicality. Gives us the Burroughs B5500 (1963) and IBM System/38 (1979). Pointers are tagged for protection, the actual value inaccessible to apps, and created by the program loader only. Memory is tagged as code or data during load time, with all input from I/O tagged as data by default by hardware. Input can't be executed unless the administrator explicitly allows it, and it's actually the compiler that does that anyway since apps come as source in a type-safe HLL in the Burroughs model. Interfaces are checked during compilation, too. The processor checks these on every instruction. It also does bounds checking, overflow checking, type-checking of function call arguments, and stack protection. Checks and processor run in parallel for performance, with final state not written unless the check passes. So, you can't smash pointers, arrays, buffers, stacks, or individual data with overflow: all of it just generates exceptions which are recovered from or freeze the app with admin notification.
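A toy model of those tag checks in Python (my own sketch, not Burroughs or IBM code) to make the mechanism concrete: every word carries a CODE or DATA tag, anything arriving from I/O is tagged DATA by the "hardware", and instruction fetch from a DATA-tagged word faults instead of running attacker input.

    CODE, DATA = "code", "data"

    class TagFault(Exception):
        pass

    class TaggedMemory:
        def __init__(self):
            self.words = {}                        # address -> (tag, value)

        def load_program(self, addr, instrs):
            # Only the trusted loader/compiler path may tag words as CODE.
            for i, ins in enumerate(instrs):
                self.words[addr + i] = (CODE, ins)

        def io_write(self, addr, payload):
            # Anything from network/disk is tagged DATA, no exceptions.
            for i, b in enumerate(payload):
                self.words[addr + i] = (DATA, b)

        def fetch_instruction(self, addr):
            tag, value = self.words[addr]
            if tag != CODE:
                raise TagFault("attempt to execute DATA at %#x" % addr)
            return value

    mem = TaggedMemory()
    mem.load_program(0x1000, ["mov", "add", "ret"])
    mem.io_write(0x2000, b"\x90\x90\xcc")          # attacker-controlled input
    print(mem.fetch_instruction(0x1000))           # "mov": tagged CODE, allowed
    try:
        mem.fetch_instruction(0x2000)              # injected bytes never execute
    except TagFault as e:
        print("blocked:", e)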

So, you want to hijack the app via a corrupted PDF or network packet. Assume, as you said, that the simple mechanisms above were implemented at EAL6-7 and apps just used them. Where would you start with a software attack (no rowhammer lol) delivering input to an app if you only got exceptions when hitting pointers, data fields, memory management, stacks, and arrays/buffers? What's left? If your claim is true, then these simple modifications provide no meaningful increase to the reliability or security of our systems. There are other security risks, but I'm focusing on code injection via attacking software with input. I predict the attacker's job is so difficult in this model that most would go for social engineering or sabotaging executables to attack the compiler/installer/loader. Those are also protected by these mechanisms and ruggedly built (eg Ada or SPARK w/ all checks on). You're actually more knowledgeable and skilled than me at the many implementation attack methods. How many are left at this point? Seriously, so I can counter them too.

Funny you mentioned hardware. It certainly does have errata here and there. Yet, that's despite tens of millions to billions of transistors running concurrently. Its error rate is actually incredible. I wonder why. Let's look at design flow for Intel, IBM, etc.: specs to RTL to gates with equivalence checking at each layer; lots of testing; formal verification (Intel) and falsification (IBM) of stuff at various layers; synthesis tools with validation approaches to that; generic components with interfaces and timing analysis; gate-level testing to see where tools were lying; comparisons of instrumented chip to the models after a fab run. The difficulties were overcome by constantly investing in tools for various problems and heuristics that made them work better. Guess what? Those methods look very similar to the B3, A1, EAL6, and other assurance activities. They also worked: quality in terms of errata varied from staying steady to improving over time despite exponential increases in complexity.

Believe it or not, you don't need to verify a whole system at highest levels. I'm not even promising absolute security from the effort: just saying systems designed this way have had incredible resilience to pentests, faults, and external attacks. I say invest the effort into mechanisms like above, languages immune to what we can, analysis tools catching what we can, compilers, most-used parts of kernels, interfaces (esp glue), parsers, and so on. These have already been built rather than theory: really just re-applying existing work to new system. Less than 1% of code and design done right knocks out 99% of routes for code-injection and many other issues in the rest of the system. The rest we catch with security research and reviews. Or recover from after monitoring detects problems.

So, Thomas, would you trust an x86-style processor with a monolithic kernel coded in C? Or a system like EROS running on my above CPU that only uses safe mechanisms (hardware-enforced), safe languages, and robust tools for making one properly use the other? Even with a COTS-style implementation, the amount of vulnerabilities and their severity should nose dive. Your current position is that 400 kernel and thousands of user-level vulnerabilities resulting in malware execution are better than thousands of user-mode exceptions, a few kernel exceptions, and maybe a few injects from what we didn't see coming. I disagree and think we can do better. Friggin 1960's-1970's tech had better security & reliability than current architectures! Academics (see crash-safe.org or CHERI) have done it with way less time and money than Intel, IBM, etc. So, why do you speculate? Methods that got results against problems before will get results against the same kinds of problems again. Just need to apply them, and in the most cost-effective way. All I preach.


If your "system like EROS running on my above CPU that only uses safe mechanisms (hardware-enforced), safe languages, and robust tools" ran a browser, I would trust Chrome on x64 more than that browser. Not even a remotely tough call.

My point is, when people say "use high-assurance systems", they're saying "you don't get to use browsers anymore".


Chrome's based on the OP browser, SFI/CFI, and segments for POLA that came out of my side of the field. They weakened those security models to get extra performance because the good stuff had up to 50% hit on your favorite architectures. The result was plenty of breaks in their model despite its clever design. I did cite them as an example of mainstream trying to leverage proven techniques and they did get best paper in 2009 for the attempt. Just got unacceptably weakened.

Meanwhile, there's DARPAbrowser, OP2, Gazelle, Tahoma, Illinois Browser OS, my scheme of running it in a microkernel partition behind guard functions, the old scheme of a dedicated box with a KVM for switching, compiler transformations, diversification, and so on. These are all either browsing schemes more secure than Chrome or ways to better prevent/contain damage of arbitrary apps than popular methods. I've been posting these on forums for some time now. Strong INFOSEC research certainly builds & evaluates browser architectures. Only around 6 groups making attempts & no help from the mainstream, as usual. They will do occasional knockoffs with less security (i.e. Chrome). IBOS has a nice table [1] showing what containment Chrome achieved vs theirs, which leveraged Orange Book B3 methods. You'd have lost the bet.

[1] https://www.usenix.org/legacy/events/osdi10/tech/full_papers...


Do any of these secure browsers have JavaScript?


They're all prototypes illustrating better security architectures. Such work often doesn't have all functionality included. Nonetheless, these support JavaScript: the OP1 & OP2 web browsers; Microsoft's Gazelle; Tahoma; IBOS; the two kludge solutions I mentioned. So, all of them except the DARPAbrowser, which was just a demo for Combex's methods. Papers below on their architectures, security analysis, and performance evaluation if you're interested.

Far from "use it now," I'm advocating that they illustrate the difference between strong security design and mainstream while providing something to build on. My claim is building on stuff like this would reduce impact of hackers vs popular browsers. Many trusted components can also be built to medium or high assurance because they're simpler.

DARPAbrowser demo http://www.combex.com/tech/darpaBrowser.html

Designing and Implementing the OP and OP2 Web Browsers http://web.engr.illinois.edu/~kingst/Research_files/grier11....

Multi-principal OS Construction of Gazelle Web Browser http://research.microsoft.com/pubs/79655/gazelle.pdf

Tahoma - A Safety-oriented Platform for Web Applications http://homes.cs.washington.edu/~gribble/papers/gribble-Tahom...

Trust and Protection in the Illinois Browser Operating System https://www.usenix.org/legacy/events/osdi10/tech/full_papers...


Huh? If you mean mainstream, there's certainly a mainstreaming effect in INFOSEC. It sometimes promotes things that stop hackers. Mostly just tactical stuff that's bypassed later, or things inferior to more obscure stuff that worked with better design/architecture.

Example (paraphrased): "Dude, what's with all this separation kernel, decomposition, and interface protection nonsense? Nobody does that shit. Real hackers use a monolithic kernel with several MB of privileged code, no POLA, implementation fully in C, and some security features/apps on top of that. Do that and you're good."

Several hundred CVE's later related to the above... yeah, some things aren't popular for their actual security.


As someone currently sitting next to a Netgear router, it did make me look at the thing and wonder.

Now I'm considering a small fanless PC running something like pfSense for home... what a world, eh.


A friend of mine launched http://orp1.com/, but I haven't had any hands on experience with it.

I've also been burnt by the Almond+ router (it's only as open as Cortina Systems will let it be) and my current ADSL router won't let me do full bridge mode to something running OpenWRT.

Fun offtopic fact: My copper phone line is so bad I can only use specific Broadcom based modems if I want >5Mbit download speeds. These generally don't work well with OpenWRT, because Broadcom.


I'm lucky: I have 200Mbps fiber which terminates in a box on the wall that I can plug Cat-6 into and get a static address. Currently that goes into my ISP-supplied Netgear box, but there's absolutely nothing to stop me putting a pfSense box in front of it.


I'd recommend an OpenBSD-based setup, given that 0-days, lack of security tech, and bad configuration are attackers' top methods. That team is way ahead of the rest in auditing for and preventing such problems in every area. Plus, unlike stronger architectures, it's available to you free and ready to go.


Have a look at PC Engines ALIX.


Excellent post; very interesting!

How many of these "expensive" bugs are directly due to a lack of memory safety? From a quick glance it looks like this entire market is held up purely on that quirk of C/C++. Of course this is nothing compared to the money being poured into compiler and runtime tricks to try to undo those quirks. Which is probably little compared to the amount of money, via time, lost developing and debugging in such an environment.

Overall these prices seem low compared to the capability. Is that because it's easy enough for governments and big corps to just hire teams and develop in house? These prices are well within SMB range if an unethical company wanted to attack a competitor (though there are probably cheaper ways in). That, plus the price of compromising even federal agents (going off known cases where FBI and CIA agents were turned), increases my doubt that companies can actually keep secrets. I think of this when folks like Nikon refuse to document camera formats under the claim that it's a trade secret they are hiding from competitors.

It is curious, though, how little these go for overall. Perhaps because they aren't all that directly profitable (exploitable for cash)? I wonder if there's more money in exploiting server software that you can use almost directly for profit? Perhaps not; it could be easier for the criminals with the infrastructure in place to find and exploit it directly as part of operations.


Great writeup.

Let's not forget that people got imprisoned for political reasons because of vendors such as HT, FinFisher by Gamma, and others.

http://www.theregister.co.uk/2014/10/16/finfisher_criminal_c...

Vendors selling to HT, and HT themselves, should be sued and brought to justice for supporting oppressive regimes. The UN charter says it all.


Some interesting stuff that shows up in the e-mails:

Kaspersky discovered one of the exploits in use by Hacking Team, but kept quiet for a while to trace its users and other linked code. HT's CTO Marco Valleri considers this behaviour "morally despicable". [https://wikileaks.org/hackingteam/emails/emailid/990150]

The price for iOS exploits is apparently in the millions, because the price is "driven by federal programs" [https://wikileaks.org/hackingteam/emails/emailid/15494]


The price for iOS exploits is also high because they are exceptionally difficult to produce.


The market price of exploits is far cheaper than the cost of defending against them. It seems like firms would be better off hiring real black hats to make real exploits than messing with white hat firms.


What is Hacking Team's motivation for publicly sharing all of this seemingly private underworld information?


They didn't share it voluntarily; they got hacked, and 400GB of data from their servers turned up in a data dump.

http://leaksource.info/2015/07/07/hacking-team-hacked-400gb-...


I see, thank you for the context!

So then my next question would be: what is this person hoping to accomplish by publishing the info like this?


Activism. Hacking Team were not liked.


Exposing criminals.


Perhaps he sold short on tech stocks.


I don't want to be that guy but trying to read this on a tablet is a major pain. The scrolling keeps breaking. Why do people mess with something that isn't broken?



