Sorry, RSA, I'm just not buying it (gist.github.com)
539 points by dmix on Dec 23, 2013 | 127 comments


True/sad story. So at Sun, when I was building crypto tools for Java, I wanted to be able to use the RSA public key algorithm in the class loader (part of a capabilities-based security system). We negotiated with RSA for a right to use their patent in Java, which proceeded right up until the final contract came back (which our lawyer signed but I did not get a chance to review), where the wording had been changed to be a license to BSAFE rather than the patent. Clearly I wasn't going to put BSAFE into the JVM; I already had an implementation of their algorithm in Java. There was never a good explanation for how the lawyer got so "confused" at the last minute and "forgot" to have these changes reviewed by the engineer leading the project.

Given the sort of shenanigans we've been reading about, I would not be surprised to hear that someone who was neither a Sun nor an RSADSI employee said "spike this deal".

[edit: clarity]


I don't understand your "spike this deal" idiom. I can't make sense of an analogy to a spiked drink (rufie) or spiked football (scoring a point). The closest I can find is "spike somebody's guns" meaning "to spoil someone's plans". Do you mean someone wanted to prevent you from using RSA's patent in the JVM?

(I initially assumed you meant "force this deal".)


Sorry, it means to 'deny successful execution of the plan'. Originally it referred to spiking gun barrels; a modern example was environmentalists driving spikes into old-growth redwood trees, which would cause massive damage to a chainsaw blade should a logger try to harvest the tree. Generally it means doing something which prevents a deal from actually working, to make up for the fact that you were unsuccessful in preventing the deal from being agreed to in the first place.


Let's get our terms right. Tree-spikers are not environmentalists, they are vicious extremists. They are terrorists. The spikes don't just damage saw blades, they send sharp steel shards flying in all directions with the intent to seriously injure or kill loggers or sawmill workers.


Well, as far as I know, environmentalists who spike trees let the logging companies know. Their goal is to save the trees, after all, NOT kill lumberjacks.

Any evidence that these "vicious extremists" have actually killed anybody? As in any at all? Because I can't find any references.


I can only say that I stopped at "they are terrorists."


No argument on substance; someone who has no problem maiming or murdering in order, not even to save the life of a tree, but merely to punish people for having in some way been associated with a tree's demise, so places himself far beyond any civilized pale. Such a person deserves at the very least a stiff term in prison; should he succeed in his vile endeavor, he is best rewarded with a cigarette, if he wants one, and a nice sunny place to stand and enjoy it. (In case that's not sufficiently clear, I am talking about execution by firing squad.)

But you might want to avoid that word 'terrorist'. It tends to give people an excuse to shut off their brains and feel good about it, where convoluted rhetoric like mine tends to serve as an attractive nuisance ("I could give up on this, but then whoever wrote it might look smarter than me") for long enough to get a point across.


Advocating extreme violence (execution by firing squad) for a group as a punishment for violence (against loggers). Amazing hypocrisy.


Hypocrisy, sir? You astonish me. Is it hypocrisy to suggest that the punishment for maiming or murder, with the vilest of malice aforethought, should be judicial execution? -- a life, ended quickly and without suffering, for a life irrevocably ruined or destroyed in gory, agonizing horror? I think not!


Again, any evidence these people are maiming and murdering? Any at all? I found one reference to a possible injury.

You are willing to execute these people based upon logging company hearsay and propaganda.

See, the idea is you spike the trees you want to SAVE and then you tell the logging company you spiked them ASAP so they DON'T CUT DOWN THE TREES YOU ARE TRYING TO SAVE.


No, I am willing to execute them, to the extent of volunteering for the firing squad, without any interest in mine being the rifle loaded with a blank -- if they succeed in maiming or murdering. I notice my previous comment was not clear on this point, and am glad of the opportunity to clarify it now. The mere attempt, in my estimation, merits only a term in prison, as with any other deliberate but unconsummated attempt to inflict grievous harm upon another person, in the absence of extreme provocation such as the need to defend oneself from attack.


So let's consider armored car security. They will shoot people who take the bank's money. If I, perfectly reasonably, want some money, they will shoot me. So the banks who employ armored car guards are terrorists who should be imprisoned or executed!


Do you seriously equate armored car guards with tree spikers? I can't imagine you mean your comment to be taken at face value, but I also can't imagine any other meaning with which you might have loaded it.


Interesting view point. What, pray tell, would you find a suitable punishment for a rapist, murderer or serial killer?


Much the same: imprisonment for the attempt, execution for the success. I suspect you will take this response as cause for horror at my unthinkable whatever-you-call-it in equating the gravity of a tree spiker's crime with rape and murder. If so, then you will be no less amazed than am I at the idea that the crimes of such saboteurs are of any less gravity than those of anyone else who, lest the nature of tree spiking be forgotten, sets out to maim or murder, or -- hardly less grave, or more permissible -- to use the threat of such mayhem, occurring at random, to enforce their will.


Everyone is a terrorist.


The etymology is indeed 'spike the guns'. Usually spiking was done to prevent the enemy from seizing and using one's own ordnance, rather than an act of sabotage by the enemy.

The 'spike' was a nail driven into the barrel, usually perpendicular to the bore.

As a Cold War aside: an AP round from the A-10's GAU-8 cannon could actually spike the barrel of a T-62 tank! But whilst this was demonstrated in controlled environments it wasn't really a practical form of attack...


To spike a gun seems to have been a rare event. Gun crews could spike a gun to prevent it being turned on their own forces, but it meant they could not reuse it either when the cavalry left (which usually meant your whole army was running away).

Cavalry were supposed to carry such equipment but it seems it was rare they did, and rarer they got down off a horse in the middle of a firefight.

So it's (from a minor bit of googling) looking like a tactic more honoured in the breach than the observance. Anyone else?

An interesting HN-like back and forth on this subject is below: http://theminiaturespage.com/boards/msg.mv?id=233539

Oh and it's driving a steel wedge into the touch hole so the cannon cannot be fired - not, as seems to be implied on this amazingly tangential thread, driven into ridiculously hardened cannon walls.


In the modern context it is standard protocol to not leave important equipment in a functioning state. So, let's say you got ambushed and there was no chance of getting a tow truck in time before being overrun; you would toss a thermate incendiary on top of the radios and one on top of the engine, thus denying the enemy the use of the vehicle or the knowledge of the radio mechanisms or encryption on it.


Or, you know, a bit of thermite on the breech block works well.


I'm pretty sure thermite was invented after cavalry charges stopped being a good idea (well, actually, as Poland infamously charged Nazi tanks, maybe not).


verb: spike;

1. impale on or pierce with a sharp point.

(of a newspaper editor) reject (a story) by or as if by filing it on a spike. "the editors deemed the article in bad taste and spiked it"

stop the progress of (a plan or undertaking); put an end to. "he doubted they would spike the entire effort over this one negotiation"

historical: render (a gun) useless by plugging up the vent with a spike.

From https://www.google.com/#q=define+spike


The historical reference is to putting a spike in an enemy's cannon barrel, which should render any attempts to use the cannon to catastrophically fail, as cannons generally had to be well-cast/flawless to operate under their loads.


In British English it's fairly standard; my first analogy-thought is to spiking a news story.


Does one not 'spike the football' in the NFL to stop the clock?


(Searching for "spike this deal" finds two kinds of usages: people making a play on words in advertisements for volleyball and football products, and people talking about ruining a deal on purpose.)


It's presumably a variant on the historic journalistic use of 'spike a story', where the editor would decide not to run a story and the typewritten manuscript was placed on a metal spike.


With what do you kill the undead? A wooden spike.


> rufie

Roofie? Or something else?


Shame your anecdote is lost due to people getting hung up on "spike this deal"...


So - what was the outcome? Did the RSA public key algorithm make its way in? Either through the BSAFE license, or did you just wait for the patent to expire?


There was a lawsuit, I got to testify, nothing came of it, my capabilities system was not going to be allowed to ship, Java had become a 'big deal' inside of Sun. I briefly tried to launch it as a secure offshoot of the language in Sun Labs, that didn't get a lot of traction and so I went on my way.


Not sure if this tidbit made Hacker News -- the OpenSSL project added Dual_EC_DRBG support at the request of a paying customer: http://openssl.6102.n7.nabble.com/Consequences-to-draw-from-...

They're under NDA and cannot reveal the customer's name. The thread doesn't say how much the customer paid, does anybody know? A friend told me 600k USD last night, but I cannot find any sources that back this up.


Some important context from the OpenSSL list last week (http://marc.info/?l=openssl-announce&m=138747119822324&w=2 ):

  Why did we implement Dual EC DRBG in the first place?
  - ----------------------------------------------------
  
  It was requested by a sponsor as one of several deliverables. The
  reasoning at the time (my reasoning and call as the project manager)
  was that we would implement any algorithm based on official published
  standards. SP800-90A is a more or less mandatory part of FIPS 140-2,
  for any module of non-trivial complexity. FIPS 140-2 validations are
  expensive and difficult, taking on average a year to complete and we
  have to wait years between validations. So, there is an incentive to
  pack as much as possible into each validation and our sponsors (dozens
  of them) had a long list of requirements they were willing to fund.
  
  We knew at the time (this was the pre-Snowden era) that Dual EC DRBG
  had a dubious reputation, but it was part of an official standard (one
  of the four DRBGs in SP800-90A) and OpenSSL is after all a comprehensive
  cryptographic library and toolkit. As such it implements many algorithms
  of varying strength and utility, from worthless to robust. We of course
  did not enable Dual EC DRBG by default, and the discovery of this bug
  demonstrates that no one has even attempted to use it.
  
  Where did our implementation come from?
  - --------------------------------------
  
  The client requirement was simply "Implement all of SP800-90A". Our code
  was implemented solely from that standard.
Seems fair enough if a client says "implement all the standard" that they implemented it... it's a bit different than "just add this one algorithm"


This "explanation" is misdirection. The accusation was never that the NSA wanted to add just this one busted algorithm. The NSA, acting as a client protected by privacy, could have easily invested time and money to be a significant contributor, with a small, hidden, one-time payload.


I fail to see how it's misdirection. OpenSSL have explained, clearly, what they were paid for - and it seems entirely reasonable to implement everything, especially given - as that post shows - that the duff algorithm was really quite hard to use unless you really tried.

Additionally, given the trouble Theo's got into on a number of occasions for criticising bits of the government, it seems fairly unlikely that he would've been comfortable with the project taking NSA money.

Most plausible situation: Corporation said "here's money, implement ALL the things", OpenSSL figured it was a good way to get the useful things paid for plus some stuff that potential future sponsors might find useful in marketing material.

As for whether said Corporation had additional, non-obvious motives, I make no guess either way.

But misdirection does not, frankly, seem appropriate to describe the explanation given, and your scare quotes suggest you're appealing to emotion rather than presenting an actual argument. Do feel free to present one if you have one, though, it's entirely possible I misunderstood.


Are there not two questions:

1) Were flaws introduced into the technology which potentially reduce the strength of the encryption?

2) Did OpenSSL know about it?

It seems the answers can be yes and no without any real stretch.

If you are, say, the NSA, asking / paying for a whole standard to be implemented (a) gets the bit you want implemented and (b) hides what you're really interested in from anyone wondering and looks less suspicious. Once it's in they could either start pushing for it to become a default, push for people to voluntarily choose it (get people speaking at conferences to recommend it or whatever), or just leave it knowing that a few people would use it and that works for them too.

But the fact that OpenSSL probably weren't complicit doesn't change the fact that elements of the product need to be considered potentially flawed.


I'd reverse those answers, actually.

1) The mailing list post states that OpenSSL wants to be comprehensive, even when that means providing implementations of algorithms that you probably shouldn't use. The inclusion of weak, or even broken, algorithms does not undermine the product as a whole given this mission. It has no impact on the effectiveness of other, stronger algorithms in OpenSSL.

2) OpenSSL knew that one of the algorithms in the standard was weak, but thought it was still valuable to implement the standard. Again, the comprehensiveness goal means it's up to the end users to utilize OpenSSL in a manner consistent with their use case.


You're reading too much into this. FIPS-140 validation is required for many applications in areas like healthcare and banking. FIPS compliance is also required for most state/local government applications as well.

I believe you need to be compliant with everything in the spec to be validated.


Theo and OpenBSD have nothing to do with OpenSSL.


It's pretty obvious that the 'client' was indeed the NSA. They say they can't reveal who the client is because of the NDA. If the client was NOT the NSA they could simply say so. But they didn't...


Ahhh no. You're making assumptions. It's not obvious that the client was the NSA. Why not the DOD, or CIA, or hell, even the SSA?

Not that it matters much anyway. The result is the same.


That's not how NDAs work... usually there would be a prohibition on both confirming and denying anything related to the information behind the NDA.


I don't think you can play 20 questions and also say that you are respecting the NDA.


FIPS certifications cost some money and OpenSSL relies upon sponsors to do that. As I understood their explanation, it was one of the FIPS sponsors.

It had a bug anyway; nobody ever used Dual EC with OpenSSL.


Steve Marquess, the originator of that post on nabble, is the main point of contact for OpenSSL commercial contracting. It's interesting that he used that phrasing.


It's worth pointing out that at the scale mentioned, there's no reason that the "paying customer" had to be The United States National Security Agency. It was a published algorithm, and OpenSSL is used in countless commercial projects. It would have been entirely plausible for one of these to have come to OpenSSL requesting implementation (albeit as part of an NSA-funded internal project with a 400% markup), and the request would have seemed entirely reasonable.

I don't think you can tar the OpenSSL folks with this without much better evidence.


OTOH, the fact that the delivered code had a bug rendering it unusable suggests whoever requested it didn't really need to use it – or they'd have discovered the bug earlier. That's vaguely suggestive the client may have paid for its inclusion for mere show, or as a favor for another entity.

I wonder: is the client which paid for the non-functional implementation, which if I understand correctly is now scheduled for deletion rather than fix, entitled to a refund?


"Implement ALL the algorithms" sounds like a requirement drafted by somebody enamoured of the standard rather than by somebody looking at what they were actually going to use.

"Our product is compatible with all of <impressive sounding standard>" may, indeed, have been worth the money to the customer even if the value was marketing rather than technology.


I agree. It's still notable, though.


Yes, it was 600k.


Do you by chance know the source?


And where exactly did the money go?

$600k is a lot of money to implement an existing standard. That's 4-5 annual engineer salaries. I'm wondering exactly how much work was actually undertaken for the $600k, and whether those doing it (/charging for it/taking the money) felt it was an unusually large remuneration.


"$600k is a lot of money to implement an existing standard."

Not if such a requirement includes things like FIPS validation, which is entirely possible in this case.

Plus, in some cases, these things are done in lieu of donations, etc.

I can think of plenty of cases where well known contractors on open source projects have been paid very well simply as a way of showing support for that project.


The only other comment by this account was a weird attempt to impersonate someone else. It seems more like a troll than an anon informer.


NSA didn't need to backdoor DES when they just forced everyone to use weak keys:

> 1979 - Present, DES: The Data Encryption Standard was altered by the NSA to make it harder to mathematically attack but easier to attack via Brute Force methods. The original version of DES, called Lucifer, used a block and key length of 128-bits and was vulnerable to differential cryptanalysis. NSA requested that the already small DES key size of 64-bits be shrunk even more to 48-bits; IBM resisted and they compromised on 56-bits. This key size allowed the NSA to break communications secured by DES.

http://ethanheilman.tumblr.com/post/70646748808/a-brief-hist...

This is why any known NSA employee must be forbidden from participating in the making of security standards (including at the IETF and the Trusted Computing Group [1]). Their role there can only be seen as facilitating the weakening of the standards, either by weakening the algorithms themselves or, if that's too hard and/or obvious, by convincing everyone else to use a weaker version (which NIST kind of tried to do with SHA-3 recently, too).

As long as there's any chance of NSA being involved even remotely in a security standard, I'm going to lose faith in that whole standard and the group.

[1] - http://www.securitycurrent.com/en/writers/richard-stiennon/i...


I did consider trying to work in the part where they shortened the keys and eventually DES became useless because of it, but it was a bit of a diversion from the salient (heh) reason I put this on the timeline: that they improved the S-boxes without explanation, and that colors any subsequent similar requests they made.


I think there are two types of commentators on this issue: those who've been involved in negotiating agreements like this and those who haven't. Those who have can see how something like this happens. Those who haven't cannot believe how something like this could happen. It's important to remember/realize that no one, outside a handful of folks, understood what the NSA was up to until the last 12 months. Heck, at one point not too far back it was probably prestigious to mention you worked closely with the NSA on developing your technology. It helped you impress a few corporate execs and close some deals.


It's important to remember/realize that no one, outside a handful of folks, understood what the NSA was up to until the last 12 months.

There were loads of people - members of the general public, security researchers, government watchdog types, privacy advocates, crazy conspiracy nuts, etc. - who very strongly suspected, for good reason, exactly what turned out to be going on.

ECHELON started in the 1960s. Rumors about it were everywhere by the early 1990s. It became so famous it was featured in pop-culture movies, TV shows, and video games.

There was at least one good book (note 2005 publication) that showed how it was possible to piece together some pretty good guesses about what was happening from unclassified information:

http://www.amazon.com/Chatter-Dispatches-Secret-Global-Eaves...

In short, that book argues the NSA was expanding its eavesdropping capabilities so enormously, so quickly, that the only reasonable target for it was "everything." There simply weren't enough top-secret, diplomatic, or encrypted messages to justify the infrastructure devoted to the task; the NSA had to be developing the ability to listen to absolutely anything it wanted to.


Not to forget Dan Brown's novel "Digital Fortress" which is entirely based on NSA and its cryptographic efforts! I coincidentally saw it just after Snowden's revelations and was surprised that such a topic was the subject of a popular novel. (I couldn't finish reading the book; not sure if the focus changes later)


Frankly, if somebody had started talking about ECHELON about a year ago, I would immediately have put him into the "9/11 truthers and other conspiracy theorists" category.


Instead of revealing your ignorance, you could read up on ECHELON, which was already becoming a bigger issue in Europe in the 90s; there was an investigation by the European Parliament in 2000/2001, and coincidentally 9/11 is probably the reason the US and Britain didn't get to bear the cost at the time. Now they are (Boeing, Cisco, IBM, ...) and will probably continue to for some time to come.


Given the lack of mass outrage, I think people are still in some sort of denial. Or perhaps it's seen as being too huge to really do anything meaningful about. In the UK, politicians are still mostly trying to shrug it off. Even across Europe, there is no real outrage at the fact that the UK, as a sort of EU-to-US internet hub, is selling out the EU to the US. It's all too muted for my liking. Can't help thinking that behind the scenes the US is trying to broker some sort of deal. I dunno, intelligence sharing or whatever.


Thanks for this comment - it is one of the rare ones that says "we did not know before", as opposed to the large share of commentators (mostly outside of HN) who say Snowden did not reveal anything new.


Are EMC/RSA denying that they took money from the NSA? That alone seems damning, since I can't think of any way that the existence of such a contract for any stated purpose doesn't undermine the credibility of the company fatally.


Really, "implement this, it'll help us get our pet standard through the process" isn't that unlikely a request to get - standard processes are rife with shenanigans at the best of times, and more companies/agencies than you'd hope take part in those.

Also, note that "the NSA is backdooring American crypto" has not always been considered a likely proposition.

(Of course, all of the above is bad/wrong; it's just not that much worse than you'd expect. "That much worse than you'd expect" is http://en.wikipedia.org/wiki/RSA_Security#Security_breach.)


Sure, it's exactly the sort of request you would expect.

It's also exactly the sort of request you need to stay well shut of if you want credibility as a crypto provider to business and consumers. Any interactions with agencies like the NSA taint you, regardless of the intent. By their very nature they are suspect in this context.


What this shows, in the most charitable if feeble light possible, is that RSA will implement something important to one paying customer that affects all paying customers, and will not disclose that without rocks having been turned over.


I don't think the "We trusted the NSA" explanation makes them look stupid or negligent. This article does reference the fact that people are now retroactively claiming understanding of some of these revelations, but I think the writer forgets that this might apply to him as well.

NOW it makes perfect sense to see how terrible this is, but we haven't always just blatantly assumed the NSA was out to get us. They used to not have the worst reputation in the world in the security community, right? I'm not the best authority for this, but from what I could gather they played a kind of spooky-but-helpful role prior to the Snowden leaks in the intelligence community - that is, you could generally trust they were thought to have the community's best interest at heart, even if they couldn't say why.


Papers were published by reputable cryptographers in 2007 making it clear that Dual EC DRBG could be backdoored by the entity that chose the points.

The whole point is that this didn't come out of no-where. This algorithm was already regarded as being suspect, and RSA knew that they had been paid to make it the default in their cryptography library.

This isn't like the DES situation, where there was never any real evidence that the changes made by the NSA had made DES weaker (and as we later found out, they'd actually made the algorithm stronger whilst at the same time ensuring that the key-space was small enough that they could crack it).

RSA have seriously let down their customers & not for the first time. If I was an RSA customer I'd be taking a good hard look at dropping them as soon as I possibly could.
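The 2007 observation rests on a simple algebraic relationship between the generator's two public points P and Q. Dual EC works over an elliptic curve and truncates its outputs, but the trapdoor structure shows up in any group where discrete logs are hard; the toy sketch below transplants it into a multiplicative group mod a prime, with every constant chosen purely for illustration:

```python
# Toy sketch of the Dual EC DRBG trapdoor structure, transplanted from
# elliptic-curve points to modular exponentiation for brevity.
# All constants here are illustrative; do not use any of this for real crypto.

p = 2**127 - 1            # a Mersenne prime, standing in for the curve's group
Q = 3                     # public constant shipped with the generator
d = 0xC0FFEE              # trapdoor known only to whoever chose the constants
P = pow(Q, d, p)          # second public constant: P = Q^d

s = 123456789             # the generator's secret internal state
r = pow(Q, s, p)          # one output an application would consume as "random"
next_state = pow(P, s, p) # the generator derives its next state from s and P

# An attacker who sees the output r and knows d recovers the next state:
#   r^d = (Q^s)^d = (Q^d)^s = P^s
recovered = pow(r, d, p)
assert recovered == next_state
```

Nothing in the outputs reveals whether the published constants were generated with such a d in hand, which is exactly why the unexplained choice of points was treated as suspect.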


"It's not true. It's not true. It's not true.

"...

"It's old news."

I'm loosely quoting a source I can't remember, but I think it was ridiculing a repeated tactic of some candidate. It's a dynamic that seems to play out a lot if you know to look for it in issues that involve a lot of public relations games.

I think you are right to emphasize how little we remember about when we learned what. That's why the above tactic works so well. It lets politicians dance around their tactical mistakes and change positions without undermining their own base. It is also how disingenuous people are now able to talk about "welcoming debate" and have a large portion of the population perceive this as advocating some reasonable middle ground.


"It's not true. It's not true. It's not true.

"...

"It's old news."

Anyone older than 22 or so who doesn't recognize that as a time-worn and common tactic hasn't been thinking critically.


Not sure why this continues to work myself.


I think people get into rhythms and follow the script they've learned. This plays out in the large and in the small. We've all probably had multiple arguments where A says one thing, and B automatically retorts in defense rather than thinking about what is being discussed or just letting it go. I think a lot of marriages run like that.

People unfortunately think a well-spoken response is the same as a truthful response. If the PR flack or representative or CEO seems otherwise calm and unflustered - smooth - then that serves the "reasonable response and explanation" part of that scenario's script.

We are very, very easily led.


Because it's easy. And most people are lazy; they don't have the time or the inclination to think it through. It's PR. And it isn't just politicians who run it.


  "It's not true. It's not true. It's not true.
  "...
  "It's old news."
This also sums up how the underhanded defense of the status quo by hiding behind Hanlon's razor has worked so far in much internet discourse, including HN discussions, on mass surveillance and cryptography.

Of course, it's not going to work any longer.


It seems to me that she addressed those exact concerns:

> So, yes, it is possible that, in 2004, nobody at RSA had any articulable suspicions about Dual EC. They may have taken it on faith that this was another DES situation where the NSA knew it was better but couldn't disclose why. Okay. Is that fair? I think that's fair.

> If that were the end of the story, I would be standing here saying “poor RSA! How cruelly the NSA mistreated them!” But, guess what, it isn't. In 2007 the possibility of a backdoor was made very public, and after that “everyone knew” not to use it. None of us knew for sure it was backdoored (even if some people retroactively pretend they did) but that was kind of a crazy risk to take when there were other RNGs to pick from with no known risks and were faster to boot.


The point is it's not a crazy risk to take when you trust the NSA, even in 2007, when everyone else knew not to use Dual EC.

I just don't see how the RSA is supposed to realize the NSA is evil before the Snowden leaks.


You don't have to assume the NSA is evil to understand that its mandate means you can never trust their incentives in something like this.

You don't have to have any bad guys to have conflicts in purpose and incentive, and this has clearly been true for the entire existence of such interactions.


> how the RSA is supposed to realize the NSA is evil before the Snowden leaks.

I'm surprised by this question. I've had classes in school covering economic/state intelligence methods. The responsibility of a big corp is to secure itself from outside threats, and visible links to state agencies will be analyzed by observers.


"We trusted the NSA" might have been a good excuse without the payment. It seems really strange that the NSA would need to pay RSA to "improve" their RNGs.

And it doesn't change the rest of the post's point - after Dual EC was determined to be backdoorable, RSA didn't say anything.


Not the worst, but not the best. A lot of people haven't forgiven them over DES - reducing the key size from 64 to 56 bits, and the S-boxes. Full story: [1]

[1] https://en.wikipedia.org/wiki/Data_Encryption_Standard#NSA.2...


You forgot to mention that the NSA's S-Box tampering actually made it stronger against differential cryptanalysis, which was unknown to the public at that time. It seems like the key size reduction was a compromise for that enhancement. It's an interesting move because it shifts the power from the smartest adversary to the one with the most brute force, perhaps with the assumption that only the U.S. and its allies had access to systems capable of cracking a 56-bit key at the time. That would have made the cipher more secure against other countries that had to rely on cryptanalysis, while still giving them the access they needed if they so desired it.


"We trusted the NSA" or "The NSA paid us $10M to 'trust' them"

Pick one.


"As a bonus, all the other algorithms are apparently faster and that’s generally a desirable property."

I apologize for discussing a technical topic in what's likely to be a political crypto-rage flamewar, but I've been digesting some thoughts about this, and the figure of merit of processing required per bit of randomness is probably interesting: for a given set of professional-grade RNGs (not algorithms implemented by idiots), the more processing required to generate a bit of randomness, the more likely it is someone's sticking a nasty backdoor in.

Or, rephrased: the more time you spend sticking magic "nothing up my sleeve" constants into a bit, the more likely something unpleasant is getting stuck in there.

(Edited to add: I'm talking about "real" RNGs, not implying the world's simplest, shortest LFSR is magically better than a real RNG just because it's really fast... I'm talking about more "in class" performance comparisons than joke vs. real.)


It depends on what you are talking about; sometimes you want a slow primitive, since it makes brute-forcing the original value more expensive.

I can't wrap my head around why that would matter for a PRNG, but it sometimes does have value.


And that's exactly what had everyone else scratching their heads, too!

There are plenty of slow random number generators. They're easily built from cryptographic hash functions [1]. Heck, use something like Bcrypt as your hash and you can get as many seconds per number as you like. The reason we don't use them even though they work perfectly well is that they are too slow.

The challenge is making a fast PRNG that maintains the properties of cryptographic randomness. That's why everyone was so confused by Dual EC.

[1] http://en.wikipedia.org/wiki/Cryptographic_hash_function#Use...
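To make the point concrete, here's a toy sketch of a hash-based generator with a tunable work factor (the `slow_drbg` name and the iteration-count knob are mine, purely for illustration). This is not a vetted DRBG construction and should never be used for real keys; it just shows how trivially "slow" can be bolted onto a hash-based design:

```python
import hashlib

def slow_drbg(seed: bytes, rounds: int = 100_000):
    """Toy hash-based generator: each 32-byte output block costs
    `rounds` iterated SHA-256 calls, so throughput is tunable
    (and deliberately slow). Illustration only, not for real use."""
    state = hashlib.sha256(seed).digest()
    counter = 0
    while True:
        block = state + counter.to_bytes(8, "big")
        for _ in range(rounds):  # artificial work factor
            block = hashlib.sha256(block).digest()
        yield block              # 32 pseudo-random bytes
        counter += 1

gen = slow_drbg(b"example-seed")
out = next(gen)
assert len(out) == 32
```

Crank `rounds` up and you can get as many seconds per block as you like, which is exactly why nobody wants such a thing as a general-purpose PRNG.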


Eh, the slowness comes from it being based on group primitives normally reserved for public key crypto. In a sense, basing a PRNG on a general believed-hard problem is nicer than on a believed-hard instance of bit shufflings, so the idea does have merit despite low adoption due to performance.
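The classic example of that approach (not Dual EC itself, just an illustration of a PRNG resting on a believed-hard problem) is Blum Blum Shub, whose security with real parameters reduces to factoring. A toy sketch with deliberately tiny, insecure parameters:

```python
# Toy Blum Blum Shub: x_{i+1} = x_i^2 mod n; output the low bit of each state.
# With real (huge, secret) primes, predicting the bits is as hard as factoring n.
p, q = 11, 23          # toy Blum primes (both ≡ 3 mod 4); real use needs huge random primes
n = p * q

def bbs_bits(seed: int, count: int) -> list:
    x = seed % n
    out = []
    for _ in range(count):
        x = (x * x) % n    # modular squaring: the believed-hard core
        out.append(x & 1)  # emit the least-significant bit
    return out

print(bbs_bits(3, 8))
```

One modular squaring per output bit is why these number-theoretic generators lose on speed to hash- or cipher-based designs.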

What I have to wonder about is the implications for all of the other random generators. The NSA has been studying symmetric crypto far longer and far harder than the public. Symmetric crypto is both sufficient for state security (hierarchical + opsec), and breaking it is sufficient to snoop on the public's communications (the bulk encryption and PRNGs).

On the other hand, academics love neatly defined problems, so their interest is heavily skewed towards studying asymmetric crypto, where the foundations rest on open mathematical problems and the implementations are based on nice closed-form number theory.

The same backdooring approach may have been applied to other NIST generators (which would be sufficient for the NSA to preserve its dragnet snooping and still secure against other attackers), and we simply don't have the analytical tools to see this (what is the entropy diminishment of a nothing-up-my-sleeve number when the explanation is chosen a posteriori?). Dual_EC_DRBG comes across as so ham-fisted only because we have the ability to analyze it.


The idea of an EC-based PRNG apparently has merit; however, the Dual_EC_DRBG implementation doesn't. Supposedly there are similar designs out there with nice security proofs, which the NSA one lacked, most likely due to the backdoor.


As I understand it, the whole reason you use a PRNG rather than just using your source entropy with some conditioning is speed -- or maybe the risk of stalling, which is related to speed. So if you want slow, you can forget the seeding-a-deterministic-algorithm part altogether.


Another important reason for using a PRNG is when you don't trust your hardware RNG, which has become way too common lately.


Here's a question: Do we think Snowden is intentionally misleading us to attack RSA and EMC, or that he's actually releasing as little information as he can to get us on the right track toward fixing things? Why would this particular piece of information be selected if it was not a real problem?


I'm tired of having to correct people on this, but here goes: Snowden is not leaking anything anymore. He leaked most of the documents he had to some selected journalists a long time ago and it is now up to them to analyze them and responsibly report whatever interesting information there is to be learned from them.


Thank you for clarifying, Mike.

Deciding what is responsible is likely a coordinated effort. I think the same argument applies. Do we think someone might be being irresponsible here? Sensationalism, or real problem?


Yeah, I'm not too knowledgeable about the whole situation, but I wish Snowden had leaked the documents to the public. It sucks having to trust the press. Presumably, if Snowden didn't like how they were handling it, he'd step in in some way, though.


If he did, they'd just yell that he put everyone in life-threatening danger, like they did with Manning and also Wikileaks. I guess he wanted to avoid that level of accusations, although people like Mike Rogers still say that about him anyway:

http://www.techdirt.com/articles/20131223/02311625673/rep-mi...


A post by Greenwald on the subject of "dump it all" vs. "vet and dribble":

http://utdocuments.blogspot.com.br/2013/12/questionsresponse...


By journalists. This also includes anyone they might hire/recruit to help them identify what is valuable.

For example, Bruce Schneier helped with the news articles about the Tor network by going through the NSA documents.


Snowden says whatever keeps Putin happy, so that he doesn't sell his ass back to Americans.


We're responding to our valued customers as fast as we can over on Twitter. https://twitter.com/RSAInsecurity


$10M says the NSA is a valued customer.


Note that up until this transition around 2001, the NSA was focused on controlling the key length of the cryptography available.

They gave up on that and chose to focus instead on stealing the keys.


Is this to do with when a certain strength of cryptography was categorised as a weapon and therefore not allowed to be exported from the USA? Does this mean that restriction was relaxed because the NSA, or whoever, eventually got a back door or whatever?


Yes.

It wasn't just a matter of one "back door" but a matter of knowing that (1) people usually use codes incorrectly or screw up the key management and (2) if you give them a little help the key management will always be screwed up.


tl;dr point by point on why RSA's press statement makes them lying liars who lie, and that they were wilfully negligent from 2007-2013 at the very least.


I really needed someone to summarize the article in one sentence (who has time to read more than one) and at the same time editorialize. Now I don't have to read the article or reason. Super convenient. Thanks!


What did you expect? RSA got purchased by EMC in 2006. That's the kiss of death in terms of any semblance of ethics. Someone in EMC would have known about this and swayed decision making.


And the stock-market shrugged.

https://www.google.com/finance?q=NYSE:EMC


EMC bought RSA for $2bn. With a market cap of $51BN, why would you expect RSA to be a major component? The $10M NSA payment was apparently 30% of that particular RSA group's revenue.

And even if RSA was standalone, should we expect a major impact on the company's sales? It's not like Wells Fargo is going to stop using RSA keyfobs because of this. Although I admit I don't know where most of their income comes from.


> why would you expect RSA to be a major component?

Now that everything is public, would you choose to trust your data to EMC hardware and software?


I highly doubt that, at the level at which EMC's deals take place, this news will have much impact. Only once have I tried to deal with RSA, and their software was so difficult to use (this was around 2006) that there's no way we'd have chosen it on technical merits.

And something RSA did before EMC bought them doesn't really have any impact on EMC or VMWare anyways.


My gut tells me EMC's stock price does not directly correspond to what's actually been going on here, and that this is not limited to RSA.

Let's briefly assume that the RSA issue is just a tiny piece of a giant EMC public-private pie. Can anyone knowledgeable on such issues speculate as to how significant the profits from intentionally secretive public-private corporate partnerships might be, visible or not to the public eye?


That is what is interesting to me; investors in theory should be complaining about this from a business perspective.


It's not even showing on Bloomberg (yet?). I doubt this will make a dent in the stock price anyway, even if it's irrefutably confirmed.


"we continued to rely upon NIST as the arbiter of that discussion"

This seems like a reasonable position to me, but I'm not in the field. Can someone tell me why it's not reasonable, in the face of all sorts of theories and suspicions being thrown about, to rely on the leading standards body as to whether the algorithm is flawed?


> assume it was publicly documented at the time that BSAFE defaulted to Dual EC

Was it? Before it was revealed to be the BSAFE default, I was going around saying that no one would have chosen to use it anyway, so it was probably a pretty ineffectual backdoor, except if it was ever an option for a downgrade attack.


That's a dead corp imho. Do we have any famous-customers list floating around?


We use RSA tokens at the financial institution I work for. I'll be checking in with our CTO tomorrow (was out sick today) to see about removing them from our systems.


With a quarterly income of $587 million in Q2 of 2012, isn't 10 million dollars "chump change" for EMC? Perhaps it's more of a lubricant for the larger picture of deals and pressures.


I worked for RSA back in the 1990s. Back then at least, the sales staff's pay was based heavily on commission. It had a sliding structure that meant that people who sold more got a higher percentage. $10m might not have been a lot to EMC, but to the sales guy it probably meant over $50k, maybe over $100k. That one sale alone would have blasted him/her past the quota and bumped up the commission percentage.


Yes, likely the $10M was just a token amount. Being friends with the NSA no doubt led them into other deals.


NSA deserves an award for accomplishing this for just $10 million.


Kind of odd: this seems like something better suited to a blog post than a Gist.


My blog has been accruing more personal things and whiny rants lately. I decided to separate this from my doubtlessly profound philosophical ramblings about the meaning of life and Skyrim. All in all, a list of markdown gists is just about as functional as tumblr...


Which is why I made gist.io:

http://gist.io/8101758 (the OP's content, but nicely formatted for reading on any size screen, and with attention to typographic detail.)


This seems like a business opportunity for Github.


What if we literally didn't buy it?



