I should be happy about this -- who wouldn't want the entire web to be encrypted -- but SSL is so broken for normal people. SSL is expensive (wildcard certificates run $70 a year and up), confusing (how does one pick between the 200 different companies selling certificates?), and incredibly difficult to set up (what order should I cat the certificate pieces in again?).
If SSL doesn't change, this move will cut the little folks out of the internet. What are Mozilla's values?
That's just one project, and it doesn't even exist yet.
The web is moving faster every day, apparently. I sure do hope that project will be all it's cracked up to be.
For example, I need IP-only certs for a new project I'm working on (waiting for DNS to propagate to all clients is too unreliable and slow). If letsencrypt doesn't do that... well then I'd have to hope real hard for a competent CA out there who has an automated process available that allows IP-only certs. And whatever their price, if companies start following Mozilla's lead too soon, I'll have to pay up.
The wording in the article is perhaps not so damning yet, but it's still making me uneasy that they put out this press release while there are currently ZERO viable solutions for this.
> For example, I need IP-only certs for a new project I'm working on (waiting for DNS to propagate to all clients is too unreliable and slow).
This doesn't make any sense. You're not waiting for DNS to propagate to clients; if anything you're waiting for recursive DNS servers at shitty ISPs to time out their caches when they are configured to not honor the RR's TTL sent by the authoritative server in a misguided attempt to make the internet "faster".
But this is completely avoidable without having to use IPs or certificates with CN/SAN that are IPs: get a wildcard cert and rotate the subdomain name. It's a new hostname, so it busts intermediate DNS caches by being new queries; since it's a new query, there's no "propagation to clients" to wait for when you change IPs, all queries for the new name hit authoritative servers. Additionally, it looks infinitely more legit than a website that is accessible only via IP address. And doubly additionally, if you're going through so many IPs, presumably you'll be rotating some out and those may be assigned to other people who can then get their own cert for that IP and impersonate you.
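The rotating-subdomain trick above can be sketched in a few lines of shell. Everything here is a placeholder, not a real workflow: `example.com` stands in for your domain (with an assumed `*.example.com` wildcard cert), and the zone-update step is just a comment because it depends entirely on your DNS provider's API.

```shell
# Sketch of the cache-busting idea: mint a brand-new subdomain for
# each migration. No resolver anywhere has ever cached the new name,
# so every client's first lookup goes straight to the authoritative
# servers -- no waiting on stale intermediate caches.
SUB="app-$(date +%s)-$RANDOM"
HOST="$SUB.example.com"
echo "point clients at: https://$HOST"
# Publishing the A record is provider-specific, e.g. (hypothetical CLI):
#   some-dns-cli add-record A "$HOST" 203.0.113.7
```

Since the wildcard cert covers any subdomain, no re-issuance is needed when the name rotates.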
It is awful. It makes the standard way to plan and do a DNS migration really difficult.
I've seen it occur only rarely, but a few years ago it was a regular problem because some bigger ISPs were doing it[0]. I'm not sure how common it is these days.
That being said, I'm having trouble coming up with a project that would be better served by IP addresses than by a constant name, so I have no idea what the OP I was responding to could be doing that would require solving that problem at all.
Unless you have a reference for Google saying they purposely do this, it is more likely that the cache is dropping LRU entries as it fills up. Also, I doubt there is a trivial number of actual resolvers behind the google public DNS endpoints, so you may be seeing the result of multiple individual servers without shared cache initially populating their caches.
Some quick tests with dig seem to indicate that, at least for the region I'm in, my queries to Google's public DNS are being rotated between 4 or 5 servers, as evidenced by the TTLs being returned.
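For anyone who wants to repeat that test, a rough version looks like the following (8.8.8.8 and example.com are just stand-ins; substitute whatever resolver and name you care about):

```shell
# Repeated queries for the same name: a single shared cache would
# return a TTL that only counts down between queries; TTLs that jump
# around instead suggest several independent caches answering behind
# the one anycast IP.
for i in 1 2 3 4 5; do
  dig +noall +answer @8.8.8.8 example.com A | awk '{print "TTL:", $2}'
  sleep 1
done
```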
LetsEncrypt works with IPv4. I assume it will work with IPv6. LetsEncrypt is a Mozilla project, it's safe to assume they will launch LetsEncrypt before deprecating non-secure websites.
Right now the subject identifier in a Let's Encrypt cert must be a DNS name, not an IP address. From the ACME protocol specification draft:
"Note that while ACME is defined with enough flexibility to handle different types of identifiers in principle, the primary use case addressed by this document is the case where domain names are used as identifiers. For example, all of the identifier validation challenges described in Section {identifier-validation-challenges} below address validation of domain names. The use of ACME for other protocols will require further specification, in order to describe how these identifiers are encoded in the protocol, and what types of validation challenges the server might require."
This is in line with other CAs - no certificates should be issued for IP addresses or internal server names with expiry dates after November 2015. See for example: https://www.digicert.com/internal-names.htm
I watched a demo where they went from a vanilla apache install to an A scoring HTTPS site in sub 5 minutes at Libreplanet. It's a good idea to publicize the upcoming LARGE change.
I'm the person who gave that demo and I appreciate the compliment, but I'll readily admit that we can't yet issue publicly-trusted certificates to the public. So we aren't up and running in a way that a site could take advantage of.
On the other hand, our infrastructure, partnerships, and technology are very real. I hope they'll make the process just as easy as what you saw for many people soon.
StartSSL exists right now - and has been providing free certs for personal use for years now.
I'm curious about your requirement for IP-only certs. Sure, you don't "own" a domain name, but it's even less true that you "own" a specific IP address. (Well, at least for me; perhaps if your project is in the datacenter/ISP/network-infrastructure space you might actually have some contractual "ownership" of an IP address?)
Last time I tried, their site had JavaScript bugs and their email validation procedure didn't pass greylisting. I didn't want to place my web server security in the hands of a company with such low quality standards.
That's the entire point. If the whole web is going to be secure, then someone who "is not who you'd want implementing your web server security" needs to be able to make it work, and work right.
Well, the counter argument is that even if the whole population is going to not have brain tumours - I _still_ don't want "someone who's not a rocket surgeon" doing brain surgery.
Crypto is non-trivial. There'll never be a proper "Click this button to automatically secure your random php app running in cPanel/Plesk".
The best we'll see I suspect is a "click here and make your website pass the minimal checks modern browsers use to determine if you're secure", then we'll have a daily stream of site owners claiming "the PII/password/creditcard breach wasn't my fault - I used 2048 bit encryption!"
How well does that work on a corporate intranet? How well does it work with Windows?
If the friction for testing, say, an enterprise LOB app on an internal-only QA IIS server is any higher than "basically zero" with Firefox, and the same friction doesn't apply to Chrome or IE, well.
1. On a corporate intranet, you're probably in a position to deploy your own CA certificates. This is quite common.
2. Let's Encrypt will use an open protocol, so it should be OS-agnostic.
3. "Privileged Contexts" is being developed as a W3C working draft [1]. It's quite likely this won't be just a Mozilla-thing. Google has been fairly aggressive when it comes to pushing for more (and better) SSL as well (see SHA1 cert deprecation).
I'm a person who actually deploys .net apps to internal IIS QA servers. If I want them to use HTTPS, I have to configure it. I don't know anything about my own CA certificates. I'm not saying it couldn't happen, but it's certainly easier to suggest just not using firefox if something isn't working.
The next problem is IP addresses. How is SNI these days? I've been meaning to experiment with it, but the lack of extra personal certs has prevented me.
> it requires you to run special software on your servers
It doesn't, they ship a tool only for convenience. An open source tool running on your machine would be reverse-engineerable anyway. Plus, it is expected that shared hosting providers will run the tool for you.
Any citation for this claim: "it is expected that shared hosting providers will run the tool for you"?
Currently SSL is a revenue stream for many shared hosting providers. Are there are on-record comments from major providers who are planning on supporting Let's Encrypt?
That's not a claim, it's speculation. Since there is no marginal cost, it's going to happen eventually. You only need one provider to do it and some customers to request it for the others to follow. Many shared hosting providers have already removed traffic limits despite the additional revenue they used to bring. They can upsell customers with extended validation anyway.
The only problem is the lack of browser support for Server Name Indication, which would let one IP be shared by many customers, but it's becoming less of an issue as end users renew their hardware or update their browsers.
Trust them for what? They don't get your private key, and if the fear is that they might sign a rogue cert for your domain, all CAs can already do that, regardless of whether you choose to "trust" them or not.
In what sense? The "MAJOR SPONSORS" don't have the signing key. This seems like a comment empty of meaning, intended only to spread fear, uncertainty and doubt.
I just spent several minutes googling for "EFF" in conjunction with "feminism" and didn't find anything that appeared to be relevant. I did the same for Rob Graham, and he apparently doesn't like the EFF, but I haven't found the EFF saying anything about him yet. I'm sure someone associated with the EFF has mentioned these things at some point, but they don't appear to be major issues. Could you give me some links?
Rob Graham seems to like free markets (great) but think US cable companies are a free market (plainly wrong). Thus he thinks that it's a bad idea to protect the internet free market from the cable monopolies. It's a noble cause, he's just not well informed.
Their staff might be feminists, but that doesn't mean they engage in gender discrimination. If you have something better, post it.
No, not in the slightest. The EFF is a non-profit organization that exists to lobby for policy change. No such organization is worthy of much in the way of trust -- especially for such a sensitive instrument.
To look at the history and charter of the EFF and say you don't trust them "in the slightest" is a bit ridiculous. It's hard to take that kind of remark seriously. Look at who's involved and what they've done, and dismiss that under the broad strokes of "no such organization is worthy of trust"? Nonsense.
I don't care who's involved, honestly. The problem is that they're a lobby and they actively push for policy changes. That's a full time job in and of itself -- identity verification is also a full time job, and a requirement of a properly functioning PKI.
What I want involved is an identity verification organization whose mission is clearly defined to be identity verification and management of a PKI trust.
A non-profit, apolitical organization dedicated to identity verification and management of a PKI trust, whose charter includes yearly audits of its systems.
SSL should be a universally available free resource. I expect that it will be in the near future. That said, it is still very cheap for small sites too:
$9 - $11 / year for perfectly good certs. Less than $1 per month is a small burden.
I understand what you're saying; requiring SSL doesn't seem like too much of a burden superficially. But it's orders of magnitude harder and more frustration-inducing than simply buying a domain. My website is protected by the cheapest certificate on that list, and it was a gigantic pain to set up. Also, only the root subdomain on my site is protected -- I have things on other subdomains which I can't encrypt because $10 a year quickly adds up.
That's not the cost. You have to pay someone to re-sign your cert every (other) year. This is a largely manual process. It's also going to cost you, if you want any compensation at all when they forget. Which they will, no matter how well you prepare.
All modern browsers support SNI [1]. I guess if you need to support IE8 on WinXP you'll have problems. And IIRC, wget on Ubuntu 12.04 doesn't support it (while curl does). But it's picked up a lot in recent years, which thankfully eases the dedicated IP requirement in many cases.
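If you want to check SNI behavior yourself, openssl's s_client can send the hostname in the handshake. This is only a sketch; example.com is a placeholder for whatever shared-IP host you're testing:

```shell
# With -servername, s_client includes the hostname in the TLS
# ClientHello, which is what lets a server on a shared IP pick the
# matching certificate. Print the subject of whatever cert comes back;
# run it with and without -servername to see the difference on a
# multi-tenant IP.
openssl s_client -connect example.com:443 -servername example.com \
  </dev/null 2>/dev/null | openssl x509 -noout -subject
```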
Absolutely no need for a dedicated IP unless you want to support very old versions of Windows or very old versions of Android. I haven't allocated a dedicated IP solely for SSL in years.
Those are today's prices, based on past demand. As demand for SSL hosting goes up, gradually replacing plaintext hosting, the price will come down.
I actually expect the price of plaintext HTTP hosting to go up a bit; partly due to reduced demand, but also due to increased risk/liability. With SSL being the "industry best practice", I expect at least a few bean counters will view the risk of private information leaks or hypothetical legal liability for enabling DDOS (similar to the "attractive nuisance" doctrine).
There will be a turbulent transition period, of course. As someone currently living at the poverty line, I have argued against the CA system many times. An SSL cert (and annual renewal) may be an insignificant cost to some people, but it is a real barrier when the cost represents days or weeks of food. Unfortunately, none of this removes the need for encryption or the risks of plaintext. This is why I'm very excited about Let's Encrypt; it might solve the cost problem, and it might avoid the StartSSL "no second-source" problem because it is a protocol first.
Internet use is only going up, so these transition costs are only going to go up. We can pay it now, or pay even more in the future.
Obviously. What's your point? That the price of SSL hosting will stay the same? Go up? I never claimed it would equalize perfectly, just that it would get cheaper in the future.
Also, why do you think the supply of SSL hosting services is necessarily limited any more than traditional plaintext hosting would be?
The original idea with SSL was to give out certificates to organizations, not domain holders. The labour involved made this an expensive process and today domain validated certificates are the most common.
The idea was that users should want to validate that they are speaking with the organization McDonald's, not with mcdonalds.com, which may or may not belong to them. It turns out users don't, and the distinction gets even less important over time. Domain names are an important identifier for an organization now. You can still see the old process at work in EV certificates, which normally carry an extra cost.
If SSL had been designed for domain validation from the start, it would have looked like DNSSEC. Cryptographically verified domain assignments are a good idea, and infinitely more secure than the domain validation schemes we use today.
Here at HN there are a handful who can't resist going on about NSA every time DNSSEC is mentioned, so I expect a few of those now. Please do understand the whole picture and how the complete certificate stack works before taking those statements at face value.
It's because all CAs are able to sign certs for all domains. Browsers simply do not trust your average domain registrar enough to give them power over all domains.
Now you might say don't trust the registrar, trust the people who run the .com (or whatever) TLD. That's getting close to what DNSSEC does, which some people say is better. But CAs weren't designed for this like DNSSEC. With the way CAs work, we would have to give the runners of .com power over all domains, which some people might not think is so bad. But it would also mean we would have to give the owners of .sucks power over all domains as well, which most people would be against.
No. Consider that DNS request/responses are simple, cleartext UDP packets. There's DNSSEC of course but nobody uses it (and also most security experts don't like it).
That's a bit silly, considering it was developed over a ten year process, and a lot of security professionals had a hand in its design. There are problems with it, which some people are quick to point out, and it is important to be aware of them. The fact that your DNS data is enumerable is an important change, for example.
You could compare it to IPsec, which is what most VPNs use, which is comparable in security and design. They both, together with SSL, suffer from a bad case of design-by-committee, including atrocities like X509.
DNSSEC did get an important thing right. You are in full control of your own keys, and your DNS provider can not impersonate you. Having an external DNS hosting provider was not common back then, but it is now, and I'm glad they got that right.
Absolutely. SSH also sucks. TLS sucks badly. The only protocols that don't are those that haven't seen real-world usage yet. That's what drives innovation.
This is a legitimate concern, but I think the so-called dire consequences are a bit overblown.
Major browser vendors like Google and Mozilla don't change their policies in a vacuum while the rest of the world stays static. The move to "deprecate" HTTP is an explicit attempt to manipulate the rest of the world into making SSL easier and more affordable. It is unfair to evaluate this proposal in isolation without considering the market upheaval that it is very much intended to trigger.
Currently, most web hosts charge a hefty markup on SSL certificates and charge even more to enable them on a website hosted with them. This practice may no longer be sustainable as more and more people begin to demand SSL. "Free SSL with every 1-year contract!" could well become a standard marketing slogan, just as "Free domain with every 1-year contract!" has been for the last 10+ years.
Some domain registrars already offer free or low-cost (~$1.99) SSL certificates with the purchase of every domain. This may become more widespread as registrars scramble to remain competitive.
Android 2.x and Windows XP are major excuses for not adopting SNI, but the upcoming release of Windows 10 will reduce the market share of XP even further, and old Android's lifespan is also running out thanks to the planned obsolescence of mobile devices. By 2017-18, nobody will care about these platforms anymore, and if anyone still does, we can tell them to get Firefox.
Even without StartSSL or Let's Encrypt, existing CAs may be forced to cut their prices drastically as a horde of super-price-conscious consumers begin to flood their once prestigious trading floor. Some CAs have already been offering $20 wildcard certs through selected resellers. Expect more of these offers in the near future. This is a race to the bottom, and I'm thoroughly enjoying it!
To top it off, CloudFlare is offering free SSL (SNI required) to everyone. Expect services like this to become more common as SSL comes to be seen as an essential component of every online service.
Of course, there's no guarantee that these changes will occur. But I can guarantee that most of them will not occur unless there's massive, organized pressure on the lazy, greedy incumbents. Google and Mozilla are doing the world a great service by adding their weight to this much-needed pressure. Remember when the rest of the world basically ran an extortion racket to force the web hosting industry into upgrading to PHP 5? That was glorious. I want to see it happen again, this time for easy and affordable SSL.
If the deadline arrives and the world still isn't ready for the transition, we'll think again and adjust our strategies accordingly. Nothing wrong with that. In the meantime, let's be optimistic and go bully some web hosts!
> The move to "deprecate" HTTP is an explicit attempt to manipulate the rest of the world into making SSL easier and more affordable. It is unfair to evaluate this proposal in isolation without considering the market upheaval that it is very much intended to trigger.
I'd love to believe this but I've never once seen the https-only nazis bring up this issue on their own, or show any concern for the fact that it will limit speech on the web. They mostly work for companies where getting SSL certs is no big deal, and they put their personal projects on GitHub or Heroku anyways.
The backbone of the web was the fact that you could put up a website on your own computer within a matter of minutes. That is now going to be gone and I've never seen the biggest advocates of this change show any concern whatsoever.
That's just FUD. Nobody is planning to block plain HTTP requests altogether. You can still put up a website on any computer and serve it over plain HTTP, and it will render correctly on most browsers.
The plan is to disable some of the "more dangerous" features when the page is requested over HTTP, in order to entice webmasters to adopt SSL. The list hasn't even been written yet, but I'm guessing that most of those features will be fancy javascript and third-party plugins like Flash. Which you probably shouldn't rely on being enabled in the first place.
I personally wouldn't mind if every insecure page behaved as if I had NoScript & NoFlash enabled by default.
You may be right about the excessive idealism of so-called HTTPS nazis on some online forums, but I'm pretty sure that the people in charge at Google and Mozilla are more level-headed and realistic.
> the goal of this effort is to send a message to the web developer community that they need to be secure
This is not true. The suggestion is that essentially all new features will require https. You are effectively blocking http if I can't do anything interesting with it.
> That would allow things like CSS and other rendering features to still be used by insecure websites ... [but] restrict qualitatively new features, such as access to new hardware capabilities ... [like] persistent permissions for camera and microphone access
If accessing my camera over an insecure connection is your definition of doing interesting things, I would be happy to block you.
In fact, I can't think of a single proposed addition to the current web stack that would be safe to use over an insecure connection.
You have every right to express your views using a static document and stylesheet. But anything beyond that tends to involve executing your code on my computer, and nobody has any obligation to let you do that, especially over an insecure connection. Requiring people to make some additional effort before they can access other people's property sounds like a nice balance of rights and obligations to me.
You changed your opinion quickly. Previously you said:
> The plan is to disable some of the "more dangerous" features when the page is requested over HTTP
And now you're saying that every new feature falls into this category.
Listen, I'm all for going all-in on SSL. What I'm not for is doing so until SSL is as seamless and inexpensive as HTTP.
What really concerns me is that the https-only nazis do not share this concern. It's never mentioned in their blog posts.
It actually really scares me that something that has been so important for making the web as powerful for the cause of freedom is not even on the radar of the people who are controlling the direction of the web.
No, I didn't change my opinion. I just clarified what I meant by "more dangerous", and IMO everything that involves javascript or access to surveillance hardware (camera, microphone, GPS, etc.) qualifies as dangerous.
My fundamental disagreement with you is that I don't think we can wait until "SSL is as seamless and inexpensive as HTTP". That will never happen unless we can pose a credible threat to the profit margins of web hosts and CAs. And sending a horde of unsatisfied customers in their general direction is one of the most effective ways to pose such a threat.
It's a bit like the IPv6 bandwagon 20 years ago. I, or anyone with trivial knowledge of the internet and networking, could have pointed out that developing a new standard with NO thought to interoperation would not work.
Also, having it developed by 19 university types and one guy from Bell Labs was asking for trouble.
>And now you're saying that every new feature falls into this category.
Subtle misreading. It's not that new features are in that category by definition, it's that every major feature proposal kijin has seen so far has been of the dangerous sort.
The move to deprecate HTTP is solely inspired by the need to authenticate online communication. It's necessary to protect speech on the web, because it makes it harder to tamper with the content in transit. Your ISP shouldn't be able to inject ads into a web page, a WiFi access point shouldn't be able to change every "do" to "do not", and a passive listener shouldn't be able to collect information about you for his own gain. SSL/TLS is a huge mess and the current CA situation is abysmal, but authentication needs to happen now, or there will be no privacy or freedom on the Internet.
There's plenty of privacy and freedom on the internet... otherwise the internet would not have existed this long the way it has. More security is always better but let's please stop with hyperbole.
This article from 2014 [0] suggested Google Domains [1] may offer free SSL certificates. As far as I can see, that's not the case at this time. Does anyone have any information on this? How likely is it that this feature will come in the near future?
Which is fine. I wasn't arguing against the changes, but against citing startssl.com as a good solution. (Even though I have to admit that their service has, overall, extremely likely done more good than damage, I really dislike that aspect of their offering.)
I always wondered: why does this certificate order matter? The web server can (and does) parse certificates. It should reorder them into the correct order and log warnings if any issues are found.
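For reference, the conventional order is leaf first, then each intermediate, working toward the root. The filenames below are placeholders for whatever your CA actually sent you:

```shell
# Leaf certificate first, then intermediates in chain order. Most
# servers (Apache, nginx) historically required exactly this order in
# the bundle and failed in confusing ways otherwise -- which is the
# point above: the server could just sort the chain itself.
cat example.com.crt intermediate.crt > fullchain.pem
# openssl can check that the chain you built actually verifies:
#   openssl verify -untrusted intermediate.crt example.com.crt
```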
As an aside, I really don't like wildcard certs. If the private key is compromised, the consequences are so much worse than if you lose a regular cert.
How so? The update mechanisms use certificate pinning, and even if they didn't it sounds like an argument to use different certificates for code signing and for web servers. What other problems could there have been? Most people are only going to verify that it's microsoft.com.
That's true if you're trying to save money by putting a ton of domains behind a single wildcard cert using a single private key. But there are security advantages to using multiple wildcard certs based on different private keys. One of them is that you can develop a nearly infinite number of sites without exposing the domain name via the certificate, so they can't be crawled or pentested until they are deployed publicly. The number of certs you buy should be based on the number of private keys you can securely deploy.
$70 for a wildcard cert!? Where are you looking at? There's a shitload of AlphaSSL resellers that are much cheaper. I got 2 wildcard certs for $20/yr. Of course, there's really no need for a wildcard certificate, and StartCom gives out free, valid non-wildcard certs right now. On top of that, Lets Encrypt should simplify the process greatly.
Right, and for anything that is commercial, a <$10/yr SSL certificate shouldn't be a huge deal. It's not perfect, it should all be free, but saying cost is a barrier is BS, in my opinion.
<$10/year is generally what you'll pay if you go with a third-party seller. Trying to install that, as a non-technical person, is usually very hard.
The 1-click installations that are provided by the hosts are usually more than that.
Yes, even $50/year for a business is not that big a deal, but so much of the market has international users for whom $50 is really expensive. There are also small-time hobby businesses where $50 makes an impact.