I should be happy about this -- who wouldn't want the entire web to be encrypted -- but SSL is so broken for normal people. SSL is expensive (wildcard certificates run $70 a year and up), confusing (how does one pick between the 200 different companies selling certificates?), and incredibly difficult to set up (what order should I cat the certificate pieces in again?).
If SSL doesn't change, this move will cut the little folks out of the internet. What are Mozilla's values?
That's just one project, and it doesn't even exist yet.
The web is moving faster every day, apparently. I sure do hope that project will be all it's cracked up to be.
For example, I need IP-only certs for a new project I'm working on (waiting for DNS to propagate to all clients is too unreliable and slow). If letsencrypt doesn't do that... well then I'd have to hope real hard for a competent CA out there who has an automated process available that allows IP-only certs. And whatever their price, if companies start following Mozilla's lead too soon, I'll have to pay up.
The wording in the article is perhaps not so damning yet, but it's still making me uneasy that they put out this press release while there are currently ZERO viable solutions for this.
> For example, I need IP-only certs for a new project I'm working on (waiting for DNS to propagate to all clients is too unreliable and slow).
This doesn't make any sense. You're not waiting for DNS to propagate to clients; if anything you're waiting for recursive DNS servers at shitty ISPs to time out their caches when they are configured to not honor the RR's TTL sent by the authoritative server in a misguided attempt to make the internet "faster".
But this is completely avoidable without having to use IPs or certificates with CN/SAN that are IPs: get a wildcard cert and rotate the subdomain name. It's a new hostname, so it busts intermediate DNS caches by being new queries; since it's a new query, there's no "propagation to clients" to wait for when you change IPs, all queries for the new name hit authoritative servers. Additionally, it looks infinitely more legit than a website that is accessible only via IP address. And doubly additionally, if you're going through so many IPs, presumably you'll be rotating some out and those may be assigned to other people who can then get their own cert for that IP and impersonate you.
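The rotation trick is simple to script. A minimal sketch, assuming a wildcard cert for a hypothetical example.com (the function name and domain are mine, not from any real deployment):

```python
import secrets

def fresh_hostname(base_domain="example.com"):
    """Mint a never-before-seen subdomain under a wildcard cert's domain.
    Because the label is new, no intermediate resolver can have it cached,
    so every lookup goes straight to the authoritative server."""
    label = secrets.token_hex(8)  # 16 random hex chars
    return f"{label}.{base_domain}"
```

Point DNS for the new name at the new IP, hand the name out, and there is no cache anywhere to wait on.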
It is awful. It makes the standard way to plan and do a DNS migration really difficult.
It occurs rarely now, but a few years ago it was a regular problem because some bigger ISPs were doing it[0]. Not sure how common it is these days.
That being said, I'm having trouble coming up with a project that would be better served by IP addresses than by a constant name, so I have no idea what the OP I was responding to could be doing that would make that problem worth addressing at all.
Unless you have a reference for Google saying they purposely do this, it is more likely that the cache is dropping LRU entries as it fills up. Also, I doubt there's only a trivial number of actual resolvers behind the Google public DNS endpoints, so you may be seeing the result of multiple individual servers without a shared cache initially populating their caches.
Some quick tests with dig seem to indicate that, at least for the region I'm in, my queries to Google's public DNS are rotating between 4 or 5 servers, as evidenced by the TTLs being returned.
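One rough way to read those TTLs, as a sketch (the numbers below are made up, not real dig output): each backend cache counts down independently from the record's original TTL, so the number of distinct "ages" seen in a quick burst of queries hints at the number of caches behind the endpoint.

```python
def estimate_cache_count(observed_ttls, record_ttl=300):
    """Heuristic: distinct (record_ttl - ttl) 'ages' seen in a quick burst
    of queries suggest distinct backend caches, each counting down on its
    own clock. Only a lower bound, and noisy if queries span seconds."""
    ages = {record_ttl - ttl for ttl in observed_ttls}
    return len(ages)
```

For example, a burst returning TTLs of 280, 195, 280, 42, 195 against a 300-second record suggests at least three separate caches.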
Let's Encrypt works with IPv4, and I assume it will work with IPv6. Let's Encrypt is backed by Mozilla, so it's safe to assume they will launch it before deprecating non-secure websites.
Right now the subject identifier in a Let's Encrypt cert must be a DNS name, not an IP address. From the ACME protocol specification draft:
"Note that while ACME is defined with enough flexibility to handle different types of identifiers in principle, the primary use case addressed by this document is the case where domain names are used as identifiers. For example, all of the identifier validation challenges described in Section {identifier-validation-challenges} below address validation of domain names. The use of ACME for other protocols will require further specification, in order to describe how these identifiers are encoded in the protocol, and what types of validation challenges the server might require."
This is in line with other CAs - no certificates should be issued for IP addresses or internal server names with expiry dates after November 2015. See for example: https://www.digicert.com/internal-names.htm
I watched a demo at LibrePlanet where they went from a vanilla Apache install to an A-scoring HTTPS site in under 5 minutes. It's a good idea to publicize the upcoming LARGE change.
I'm the person who gave that demo and I appreciate the compliment, but I'll readily admit that we can't yet issue publicly-trusted certificates to the public. So we aren't up and running in a way that a site could take advantage of.
On the other hand, our infrastructure, partnerships, and technology are very real. I hope they'll make the process just as easy as what you saw for many people soon.
StartSSL exists right now - and has been providing free certs for personal use for years now.
I'm curious about your requirement for IP-only certs? Sure you don't "own" a domain name, but it's even less true that you "own" a specific IP address. (Well, at least for me; perhaps if your project is in the datacenter/isp/network-infrastructure space you might actually have some contractual "ownership" of an IP address?)
Last time I tried, their site had JavaScript bugs and their email validation procedure didn't pass greylisting. I didn't want to place my web server security in the hands of a company with such low quality standards.
That's the entire point. If the whole web is going to be secure, then someone who "is not who you'd want implementing your web server security" needs to be able to make it work, and work right.
Well, the counter argument is that even if the whole population is going to not have brain tumours - I _still_ don't want "someone who's not a rocket surgeon" doing brain surgery.
Crypto is non-trivial. There'll never be a proper "Click this button to automatically secure your random php app running in cPanel/Plesk".
The best we'll see, I suspect, is a "click here and make your website pass the minimal checks modern browsers use to determine if you're secure", and then we'll have a daily stream of site owners claiming "the PII/password/creditcard breach wasn't my fault - I used 2048-bit encryption!"
How well does that work on a corporate intranet? How well does it work with Windows?
If the friction for testing, say, an enterprise LOB app on an internal-only QA IIS server is any higher than "basically zero" with Firefox, and the same friction doesn't apply to Chrome or IE, well.
1. On a corporate intranet, you're probably in a position to deploy your own CA certificates. This is quite common.
2. Let's Encrypt will use an open protocol, so it should be OS-agnostic.
3. "Privileged Contexts" is being developed as a W3C working draft [1]. It's quite likely this won't be just a Mozilla-thing. Google has been fairly aggressive when it comes to pushing for more (and better) SSL as well (see SHA1 cert deprecation).
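For point 1, standing up an internal CA is a few openssl commands; a sketch with hypothetical names throughout (the resulting ca.crt is what you'd push to trust stores on intranet machines):

```shell
# Create a throwaway internal CA (key + self-signed root cert).
openssl req -x509 -newkey rsa:2048 -nodes -keyout ca.key -out ca.crt \
    -days 365 -subj "/CN=Internal QA CA"

# Issue a cert for a hypothetical internal QA server, signed by that CA.
openssl req -newkey rsa:2048 -nodes -keyout qa.key -out qa.csr \
    -subj "/CN=qa.internal.example"
openssl x509 -req -in qa.csr -CA ca.crt -CAkey ca.key -CAcreateserial \
    -out qa.crt -days 365

# Any client that trusts ca.crt will now accept qa.crt.
openssl verify -CAfile ca.crt qa.crt
```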
I'm a person who actually deploys .net apps to internal IIS QA servers. If I want them to use HTTPS, I have to configure it. I don't know anything about my own CA certificates. I'm not saying it couldn't happen, but it's certainly easier to suggest just not using firefox if something isn't working.
The next problem is IP addresses. How is SNI these days? I've been meaning to experiment with it, but the lack of extra personal certs has prevented me.
> it requires you to run special software on your servers
It doesn't, they ship a tool only for convenience. An open source tool running on your machine would be reverse-engineerable anyway. Plus, it is expected that shared hosting providers will run the tool for you.
Any citation for this claim: "it is expected that shared hosting providers will run the tool for you"?
Currently SSL is a revenue stream for many shared hosting providers. Are there are on-record comments from major providers who are planning on supporting Let's Encrypt?
That's not a claim, it's speculation. Since there is no marginal cost, it's going to happen eventually. You only need one provider to do it and some customers to request it for the others to follow. Many shared hosting providers have already removed traffic limits despite the additional revenue they used to bring. They can upsell customers with extended validation anyway.
The only problem is the lack of browser support for Server Name Indication, which lets one IP be shared by many customers, but that's becoming less of an issue as end users renew their hardware or update their browsers.
Trust them for what? They don't get your private key, and if the fear is that they might sign a rogue cert for your domain, all CAs can already do that, regardless of whether you choose to "trust" them or not.
In what sense? The "MAJOR SPONSORS" don't have the signing key. This seems like a comment empty of meaning, intended only to spread fear, uncertainty and doubt.
I just spent several minutes googling for "EFF" in conjunction with "feminism" and didn't find anything that appeared to be relevant. I did the same for Rob Graham, and he apparently doesn't like the EFF, but I haven't found the EFF saying anything about him yet. I'm sure someone associated with the EFF has mentioned these things at some point, but they don't appear to be major issues. Could you give me some links?
Rob Graham seems to like free markets (great) but think US cable companies are a free market (plainly wrong). Thus he thinks that it's a bad idea to protect the internet free market from the cable monopolies. It's a noble cause, he's just not well informed.
Their staff might be feminists, but that doesn't mean they engage in gender discrimination. If you have something better, post it.
No, not in the slightest. The EFF is a non-profit organization that exists to lobby for policy change. No such organization is worthy of much in the way of trust -- especially for such a sensitive instrument.
To look at the history and charter of the EFF and say you don't trust them "in the slightest" is a bit ridiculous. It's hard to take that kind of remark seriously. Look at who's involved and what they've done, and dismiss that under the broad strokes of "no such organization is worthy of trust"? Nonsense.
I don't care who's involved, honestly. The problem is that they're a lobby and they actively push for policy changes. That's a full time job in and of itself -- identity verification is also a full time job, and a requirement of a properly functioning PKI.
What I want involved is an identity verification organization whose mission is clearly defined to be identity verification and management of a PKI trust.
A non-profit, apolitical organization dedicated to identity verification and management of a PKI trust, whose charter includes yearly audits of its systems.
SSL should be a universally available free resource. I expect that it will be in the near future. That said, it is still very cheap for small sites too:
$9 - $11 / year for perfectly good certs. Less than $1 per month is a small burden.
I understand what you're saying; requiring SSL doesn't seem like too much of a burden superficially. But it's orders of magnitude harder and more frustration-inducing than simply buying a domain. My website is protected by the cheapest certificate on that list, and it was a gigantic pain to set up. Also, only the root subdomain on my site is protected -- I have things on other subdomains which I can't encrypt because $10 a year quickly adds up.
That's not the real cost. You have to pay someone to re-sign your cert every (other) year, which is a largely manual process. It's also going to cost you if you want any compensation at all when they forget. Which they will, no matter how well you prepare.
All modern browsers support SNI [1]. I guess if you need to support IE8 on WinXP you'll have problems. And IIRC, wget on Ubuntu 12.04 doesn't support it (while curl does). But it's picked up a lot in recent years, which thankfully eases the dedicated IP requirement in many cases.
Absolutely no need for a dedicated IP unless you want to support very old versions of Windows or very old versions of Android. I haven't allocated a dedicated IP solely for SSL in years.
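For reference, SNI virtual hosting is just multiple server blocks on one listen address; a minimal nginx sketch with hypothetical hostnames and paths (the client's SNI hostname selects which certificate nginx presents):

```nginx
server {
    listen 443 ssl;
    server_name alpha.example.com;
    ssl_certificate     /etc/ssl/alpha.example.com.pem;
    ssl_certificate_key /etc/ssl/alpha.example.com.key;
}

server {
    listen 443 ssl;
    server_name beta.example.com;
    ssl_certificate     /etc/ssl/beta.example.com.pem;
    ssl_certificate_key /etc/ssl/beta.example.com.key;
}
```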
Those are today's prices, based on past demand. As demand for SSL hosting goes up, gradually replacing plaintext hosting, the price will come down.
I actually expect the price of plaintext HTTP hosting to go up a bit; partly due to reduced demand, but also due to increased risk/liability. With SSL being the "industry best practice", I expect at least a few bean counters will view the risk of private information leaks or hypothetical legal liability for enabling DDOS (similar to the "attractive nuisance" doctrine).
There will be a turbulent transition period, of course. As someone currently living at the poverty line, I have argued against the CA system many times. An SSL cert (and annual renewal) may be an insignificant cost to some people, but it is a real barrier when that cost represents days/weeks of food. Unfortunately, none of this removes the need for encryption or the risks of plaintext. This is why I'm very excited about Let's Encrypt: it might solve the cost problem, and it might avoid the StartSSL "no second-source" problem because it is a protocol first.
Internet use is only going up, so these transition costs are only going to go up. We can pay it now, or pay even more in the future.
Obviously. What's your point? That the price of SSL hosting will stay the same? Go up? I never claimed it would equalize perfectly, just that it would get cheaper in the future.
Also, why do you think the supply of SSL hosting services is necessarily limited any more than traditional plaintext hosting would be?
The original idea with SSL was to give out certificates to organizations, not domain holders. The labour involved made this an expensive process and today domain validated certificates are the most common.
The idea was that users should want to validate they speak with the organization McDonald's, not with mcdonalds.com which may or may not belong to them. Turns out users don't, and that the distinction gets even less important over time. Domain names is an important identifier for an organization now. You can still however see the old process at work in EV certificates, which normally carries an extra cost.
If SSL had been designed for domain validation from the start, it would have looked like DNSSEC. Cryptographically verified domain assignments are a good idea, and infinitely more secure than the domain validation schemes we use today.
Here at HN there are a handful who can't resist going on about NSA every time DNSSEC is mentioned, so I expect a few of those now. Please do understand the whole picture and how the complete certificate stack works before taking those statements at face value.
It's because all CAs are able to sign certs for all domains. Browsers simply do not trust your average domain registrar enough to give them power over all domains.
Now you might say don't trust the registrar, trust the people who run the .com (or whatever) TLD. That's getting close to what DNSSEC does, which some people say is better. But CAs weren't designed for this like DNSSEC. With the way CAs work, we would have to give the runners of .com power over all domains, which some people might not think is so bad. But it would also mean we would have to give the owners of .sucks power over all domains as well, which most people would be against.
No. Consider that DNS request/responses are simple, cleartext UDP packets. There's DNSSEC of course but nobody uses it (and also most security experts don't like it).
That's a bit silly, considering it was developed over a ten year process, and a lot of security professionals had a hand in its design. There are problems with it, which some people are quick to point out, and it is important to be aware of them. The fact that your DNS data is enumerable is an important change, for example.
You could compare it to IPsec, which is what most VPNs use, which is comparable in security and design. They both, together with SSL, suffer from a bad case of design-by-committee, including atrocities like X509.
DNSSEC did get an important thing right. You are in full control of your own keys, and your DNS provider can not impersonate you. Having an external DNS hosting provider was not common back then, but it is now, and I'm glad they got that right.
Absolutely. SSH also sucks. TLS sucks badly. The only protocols that don't are those that haven't seen real-world usage yet. That's what drives innovation.
This is a legitimate concern, but I think the so-called dire consequences are a bit overblown.
Major browser vendors like Google and Mozilla don't change their policies in a vacuum while the rest of the world stays static. The move to "deprecate" HTTP is an explicit attempt to manipulate the rest of the world into making SSL easier and more affordable. It is unfair to evaluate this proposal in isolation without considering the market upheaval that it is very much intended to trigger.
Currently, most web hosts charge a hefty markup on SSL certificates and charge even more to enable them on a website hosted with them. This practice may no longer be sustainable as more and more people begin to demand SSL. "Free SSL with every 1-year contract!" could well become a standard marketing slogan, just as "Free domain with every 1-year contract!" has been for the last 10+ years.
Some domain registrars already offer free or low-cost (~$1.99) SSL certificates with the purchase of every domain. This may become more widespread as registrars scramble to remain competitive.
Android 2.x and Windows XP are major excuses for not adopting SNI, but the upcoming release of Windows 10 will reduce the market share of XP even further, and old Android's lifespan is also running out thanks to the planned obsolescence of mobile devices. By 2017-18, nobody will care about these platforms anymore, and if anyone still does, we can tell them to get Firefox.
Even without StartSSL or Let's Encrypt, existing CAs may be forced to cut their prices drastically as a horde of super-price-conscious consumers begin to flood their once prestigious trading floor. Some CAs have already been offering $20 wildcard certs through selected resellers. Expect more of these offers in the near future. This is a race to the bottom, and I'm thoroughly enjoying it!
To top it off, CloudFlare is offering free SSL (SNI required) to everyone. Expect services like this to become more common as SSL comes to be seen as an essential component of every online service.
Of course, there's no guarantee that these changes will occur. But I can guarantee that most of them will not occur unless there's massive, organized pressure on the lazy, greedy incumbents. Google and Mozilla are doing the world a great service by adding their weight to this much-needed pressure. Remember when the rest of the world basically ran an extortion racket to force the web hosting industry into upgrading to PHP 5? That was glorious. I want to see it happen again, this time for easy and affordable SSL.
If the deadline arrives and the world still isn't ready for the transition, we'll think again and adjust our strategies accordingly. Nothing wrong with that. In the meantime, let's be optimistic and go bully some web hosts!
> The move to "deprecate" HTTP is an explicit attempt to manipulate the rest of the world into making SSL easier and more affordable. It is unfair to evaluate this proposal in isolation without considering the market upheaval that it is very much intended to trigger.
I'd love to believe this but I've never once seen the https-only nazis bring up this issue on their own, or show any concern for the fact that it will limit speech on the web. They mostly work for companies where getting SSL certs is no big deal, and they put their personal projects on GitHub or Heroku anyways.
The backbone of the web was the fact that you could put up a website on your own computer within a matter of minutes. That is now going to be gone and I've never seen the biggest advocates of this change show any concern whatsoever.
That's just FUD. Nobody is planning to block plain HTTP requests altogether. You can still put up a website on any computer and serve it over plain HTTP, and it will render correctly on most browsers.
The plan is to disable some of the "more dangerous" features when the page is requested over HTTP, in order to entice webmasters to adopt SSL. The list hasn't even been written yet, but I'm guessing that most of those features will be fancy javascript and third-party plugins like Flash. Which you probably shouldn't rely on being enabled in the first place.
I personally wouldn't mind if every insecure page behaved as if I had NoScript & NoFlash enabled by default.
You may be right about the excessive idealism of so-called HTTPS nazis on some online forums, but I'm pretty sure that the people in charge at Google and Mozilla are more level-headed and realistic.
> the goal of this effort is to send a message to the web developer community that they need to be secure
This is not true. The suggestion is that essentially all new features will require https. You are effectively blocking http if I can't do anything interesting with it.
> That would allow things like CSS and other rendering features to still be used by insecure websites ... [but] restrict qualitatively new features, such as access to new hardware capabilities ... [like] persistent permissions for camera and microphone access
If accessing my camera over an insecure connection is your definition of doing interesting things, I would be happy to block you.
In fact, I can't think of a single proposed addition to the current web stack that would be safe to use over an insecure connection.
You have every right to express your views using a static document and stylesheet. But anything beyond that tends to involve executing your code on my computer, and nobody has any obligation to let you do that, especially over an insecure connection. Requiring people to make some additional effort before they can access other people's property sounds like a nice balance of rights and obligations to me.
You changed your opinion quickly. Previously you said:
> The plan is to disable some of the "more dangerous" features when the page is requested over HTTP
And now you're saying that every new feature falls into this category.
Listen, I'm all for going all-in on SSL. What I'm not for is doing so until SSL is as seamless and inexpensive as HTTP.
What really concerns me is that the https-only nazis do not share this concern. It's never mentioned in their blog posts.
It actually really scares me that something that has been so important for making the web as powerful for the cause of freedom is not even on the radar of the people who are controlling the direction of the web.
No, I didn't change my opinion. I just clarified what I meant by "more dangerous", and IMO everything that involves javascript or access to surveillance hardware (camera, microphone, GPS, etc.) qualifies as dangerous.
My fundamental disagreement with you is that I don't think we can wait until "SSL is as seamless and inexpensive as HTTP". That will never happen unless we can pose a credible threat to the profit margins of web hosts and CAs. And sending a horde of unsatisfied customers in their general direction is one of the most effective ways to pose such a threat.
It's a bit like the IPv6 bandwagon 20 years ago. I, or anyone with a trivial knowledge of the internet/networking, could have pointed out that developing a new standard with NO thought to interoperation would not work.
Also, having it developed by 19 university types and one guy from Bell Labs was asking for trouble.
>And now you're saying that every new feature falls into this category.
Subtle misreading. It's not that new features are in that category by definition, it's that every major feature proposal kijin has seen so far has been of the dangerous sort.
The move to deprecate HTTP is solely inspired by the need to authenticate online communication. It's necessary to protect speech on the web, because it makes it harder to tamper with the content in transit. Your ISP shouldn't be able to inject ads into a web page, a WiFi access point shouldn't be able to change every "do" to "do not", and a passive listener shouldn't be able to collect information about you for his own gain. SSL/TLS is a huge mess and the current CA situation is abysmal, but authentication needs to happen now, or there will be no privacy or freedom on the Internet.
There's plenty of privacy and freedom on the internet... otherwise the internet would not have existed this long the way it has. More security is always better but let's please stop with hyperbole.
This article from 2014 [0] suggested Google Domains [1] may offer free SSL certificates. As far as I can see, that's not the case at this time. Does anyone have any information on this? How likely is it that this feature will come in the near future?
Which is fine. I wasn't arguing against the changes, but against citing startssl.com as a good solution. (Even though I have to admit their service has, overall, very likely done more good than harm, I really dislike that aspect of their offering.)
I always wondered: why does this certificate order matter? The web server can (and does) parse certificates. It should reorder them into the correct order and log warnings if any issues are found.
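The order matters because servers send the chain as an opaque sequence: leaf first, then each issuer in turn, and most servers don't reorder for you. A sketch with a throwaway CA (all names hypothetical):

```shell
# Throwaway CA and leaf cert, just to illustrate bundle order.
openssl req -x509 -newkey rsa:2048 -nodes -keyout demo-ca.key \
    -out demo-ca.crt -days 1 -subj "/CN=Demo CA"
openssl req -newkey rsa:2048 -nodes -keyout leaf.key -out leaf.csr \
    -subj "/CN=www.example.test"
openssl x509 -req -in leaf.csr -CA demo-ca.crt -CAkey demo-ca.key \
    -CAcreateserial -out leaf.crt -days 1

# The bundle most servers expect: leaf FIRST, then the issuer(s).
cat leaf.crt demo-ca.crt > fullchain.pem

# openssl x509 reads only the first cert, so this shows who's on top.
openssl x509 -in fullchain.pem -noout -subject
```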
As an aside, I really don't like wildcard certs. If the private key is compromised, the consequences are so much worse than if you lose a regular cert.
How so? The update mechanisms use certificate pinning, and even if they didn't it sounds like an argument to use different certificates for code signing and for web servers. What other problems could there have been? Most people are only going to verify that it's microsoft.com.
That's true if you're trying to save money by putting a ton of domains behind a single wildcard cert using a single private key. But there are security advantages to using multiple wildcard certs based on different private keys. One of them is that you can develop a nearly infinite number of sites without exposing the domain name via the certificate, so they can't be crawled or pentested until they are deployed publicly. The number of certs you buy should be based on the number of private keys you can securely deploy.
$70 for a wildcard cert!? Where are you looking? There's a shitload of AlphaSSL resellers that are much cheaper. I got 2 wildcard certs for $20/yr. Of course, there's really no need for a wildcard certificate, and StartCom gives out free, valid non-wildcard certs right now. On top of that, Let's Encrypt should simplify the process greatly.
Right, and for anything that is commercial, a <$10/yr SSL certificate shouldn't be a huge deal. It's not perfect, it should all be free, but saying cost is a barrier is BS, in my opinion.
< $10/year generally requires going with a 3rd-party seller. Trying to install that, as a non-technical person, is usually very hard.
The 1-click installations that are provided by the hosts are usually more than that.
Yes, even $50/year for a business is not that big a deal, but so much of the market has international users where $50 is really expensive. There are also small-time hobby/businesses where $50 makes an impact.
I agree with trying to phase out HTTP, but I think their method is "annoying." What do features have to do with HTTP Vs. HTTPS? It just seems like an arbitrary punishment.
Wouldn't it just be significantly easier to simply change the URL art style to make clear that HTTP is "insecure." Like a red broken padlock on every HTTP page?
That has the following advantages:
- HTTP remains fully working for internal/development/localhost/appliance usage (no broken features).
- Users are reminded that HTTP is not secure.
- Webmasters are "embarrassed" into upgrading to HTTPS.
- Fully backwards compatible.
Seems like a perfect solution where everyone wins.
What features have to do with encryption is this. If a browser asks a user "Do you want http://example.com to be able to access your camera", what it is really asking is "Do you want http://example.com, anybody on your local network, state actors, anybody between you and example.com, people who can mess around with BGP and your DNS provider to be able to access your camera?". TLS mostly makes the first question more truthful.
You might explicitly include "employees of the coffee shop/hotel/library providing wifi" in the list of actors who can intercept your traffic. Honestly, I think that one will get the most attention of the average person.
It sounds like this is mostly going to be focused on features that require user consent. The article gives the example of media devices (camera and microphone) but there are plenty of others: fullscreen API, geolocation, notifications, large amounts of offline storage, etc.
These capabilities are sensitive enough that you want to give users control over who is granted access. But if pages are being loaded over HTTP, the user can have no way of establishing the authenticity of the Javascript code they're granting permissions to.
I can imagine a lot of personal sites will suffer from this. Most are sitting on something like Eleven2 or Dreamhost, which require a dedicated IP for an SSL certificate, which the user then has to buy and figure out for himself (it's not trivial for the average "webmaster"), or else buy the certificate from their host at a hefty markup.
Yes, the hosts could wildcard. Yes, there are other solutions out there. But for the average Joe who is blogging about his vacations and family? They're going to be completely lost.
Why don't shared hosts just wildcard? Shared certificate? Well, let's think about it... Charging ~$5/month/dedicated IP is a nice upsell, and getting $70 for an installed SSL cert that costs them $10 from their SSL cert reseller, that takes them 2 minutes to configure... That's a nice slice of pie. I'd take that bet any day.
I think you're overstating how bad things are. Dreamhost, for example, no longer requires a dedicated IP for SSL, though they do still recommend it for e-commerce. They are charging $15/year for a CA-signed certificate. Granted, that's for a single-site cert and they don't support wildcards under this scenario, but the vacation blogger isn't likely to need that anyway.
It's only an upsell now. If in the future SSL is required to get access, it stops being an upsell and starts having to be part of the basic package. Whether that will raise prices significantly is yet to be seen.
The actions Mozilla proposes sound awful. I believe that a secure (from the NSA) Internet is the way forward. But this seems so goofy to me. There are legitimate reasons for a site not to be hosted on HTTPS.
* It is a static site with no forms or logins
* It is non-critical info
* The site operator can't afford a certificate (Let's Encrypt is only one site...)
As you say: Color-code sites with a bit more granularity. Don't cripple the cleartext web.
Static sites or non-critical info! "Gradually phasing out access to browser features for non-secure websites, especially features that pose risks to users’ security and privacy" could affect sites that don't expose critical info.
All browsing behavior can be used to build a profile about someone, whether for advertising, surveillance, or whatever. There's a lot more information in the fact that person A visited pages 1-6 on unencrypted website B than one might realize. This reason alone should be enough for us to demand encryption (not necessarily via CA certificates) for any connection that isn't demonstrably local and unintercepted.
I was thinking that was the way to go too for a while, but then I realized that marking HTTP as insecure will just get users used to clicking through security warnings and assuming that they're "normal".
While we're making art style changes, why don't we change the experience for self-signed certs?
When the user first visits an HTTPS page with a self-signed cert, they get the content, and the URL art style shows a broken lock or something warning that it's not known to be secure. (It's better than raw HTTP, but it's not trusted.) With certificate pinning by the browser, the next time the user visits that page, if the cert is different, they get the current experience that warns them in big scary text and requires several clicks to get past. There's a question of whether a warning should be shown when the server owner has merely upgraded to a paid SSL cert, but if there's a way to sign that upgrade with the old cert that the browser can verify, there shouldn't be a problem...
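The trust-on-first-use pinning idea described above can be sketched in a few lines. This is a toy illustration, not a browser implementation: the names (`pin_store`, `check_pin`) are invented, a real browser would key on host:port and handle rotation, expiry, and user overrides.

```python
# Toy sketch of trust-on-first-use certificate pinning.
import hashlib

pin_store = {}  # hostname -> SHA-256 fingerprint of the cert (DER bytes)

def check_pin(hostname, der_cert):
    fp = hashlib.sha256(der_cert).hexdigest()
    if hostname not in pin_store:
        pin_store[hostname] = fp      # first visit: remember it, show "unverified" art
        return "first-visit"
    if pin_store[hostname] == fp:
        return "match"                # same cert as last time: no warning
    return "MISMATCH"                 # cert changed: big scary warning

# check_pin("example.test", cert_a) -> "first-visit"
# check_pin("example.test", cert_a) -> "match"
# check_pin("example.test", cert_b) -> "MISMATCH"
```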
So if I have to renew my (self-signed) certificate, all my current users will now get scary warnings? I'm not sure we should be encouraging people to hold on to their possibly-compromised certs.
> When the user first visits an HTTPS page with a self-signed cert, they get the content, and the URL art style has a broken lock or something warning it's not known to be secure.
Do we assume the user is going to notice that URL art style, and actually heed it? Because if the answer is "no" (and I think in reality, the answer would be "no"), then pick a high value site, and MitM it with a self-signed cert. The user misses the indicator, and proceeds to interact with the site; does JS work? (let's steal the user's cookies) do forms work? (please log in!)
If you have the ability to MitM a high value site like facebook.com without getting caught, I think it's worthwhile to do so regardless simply because you'll get some portion of the users who bypass the warning. In my scheme, the only people who won't see the warning are those who have never visited facebook with that browser before, so they may or may not have an account to login with that you can hijack.
Not a bad idea, in theory, but... suppose I visit a site on Monday and see certificate A. Then when I return on Tuesday, I see a different certificate B. What reason is there to think that A is likely to be the "true" certificate, and B isn't?
Showing a big scary warning in one case and not in the other implies to the user that the browser has some reason to think one is more secure, which is misleading.
You could use some website which you connect to securely (CA signed) which fetches and displays fingerprint C. You can then compare it to A and B and the one which matches C is the "true" one.
Of course the whole thing can be automated by the browser and happen behind the scenes -- i.e. Firefox connecting to a Mozilla service for each self-signed website it sees and comparing the fingerprints. It can then store information about this self-signed certificate as trusted.
Except that rather than creating a self-signed certificate and then asking an external service to store a fingerprint, you just let the external service sign your certificate.
EDIT: Oh yeah, and signing the certificate up-front has the nice benefit of not forcing browsers to leak private information (namely, the domain names that are being accessed) to a centralized third party.
I agree in that scenario it's hard to say whether one is more likely to be the true certificate than the other. If we're assuming attacks that aren't targeted towards specific users (e.g. from state actors, or just a corrupt hotel wifi admin, who are attacking whoever happens to be on the network) then we can't say without more details about the network you were connected to on Monday vs. Tuesday. If we're assuming attacks that are targeting you specifically as an individual, perhaps A could be considered slightly more likely than B due to coming first... Visiting the site on Monday leaks the information that you visited the site, so an attacker may believe you will visit that site again. But if the attacker is keeping logs of your traffic habits, they may have just chosen Monday to poison your fresh DNS lookups. So again it looks like we can't say which is more likely.
This is stupid. There are all kinds of use cases where you don't care who knows what you're looking at, or whether it is authentic.
Say I navigate to some restaurant's web page using HTTP. Even if I used HTTPS, someone spying on my traffic would know what I'm reading, if the IP address is a dedicated server for that web site only. Whether I use HTTP or HTTPS, they could infer that I'm interested in visiting the restaurant.
Secondly, I'm only interested in the opening hours. That is not classified information.
I suppose that a MITM attack could be perpetrated whereby the attackers rewrite the opening hours. I end up going to the place while it is in fact closed (and the area happens to be deserted), making me an easy target for the attackers to rob me.
Okay, okay, please deprecate HTTP; what was I thinking!
And that restaurant better get a properly signed certificate; no "self signed" junk! Moreover, I'm not going to accept it over the air the first time I visit, no siree. DNS could be redirecting me to a fake page which also has a signed certificate. I'm going to physically go to the restaurant one time first, and obtain their certificate from them in person, on a flash drive, then install it in my devices. Then I'm going to pretend I was never there and don't know their opening hours, and obtain that info again using a nearly perfectly secured connection!
Or one of your browser tabs containing an HTTP-delivered page (any one, really) could arbitrarily be rewritten by the MITM to look the same at first, but carry some injected Javascript such that, a few minutes after it detects you've unfocused the page, it turns itself into a Gmail phishing site[1].
Since the SSL negotiation happens before the HTTP request, either there's only one certificate for that IP or you need to use SNI, which reveals the domain you're requesting.
You could have multiple domains in the certificate to avoid identification, but that has its own problems.
It has become so tiresome to deal with the likes of you - people who will say how they don't need or want SSL, how they don't care about privacy.
This is the techie version of "nothing to hide, nothing to fear". It's a pathetic argument and brings nothing to the table.
Just because you don't care about the NSA knowing you like McDonalds when you browse their menu, everybody else in the world shouldn't care about their government knowing they are gay (which, need I remind you, is an offense punishable by death in certain countries) when they browse an article on LGBT rights.
Because, if McDonalds doesn't need SSL for their menu, why would a writer need it for his small-audience blog?
I have to say, I actually disagree with this move. While I think the intentions sound noble, and I'm all for a more secure web, I also believe that a web browser has no business dictating that the entire web should be forced into HTTPS.
I don't see any benefit in this type of blanket, all or nothing, type of approach. In fact, I see it doing more damage than good. Encrypting blogs, news websites, etc still makes no sense to me. I'm actually disappointed in Mozilla for looking at doing this. As a developer I respect many of their products and see them as champions of the web in a lot of ways.
HTTPS does not:
- protect a user from malware on their own system with keylogging taking place
- increase security in outdated and insecure websites (eg: old known exploitable code)
- prevent any browser drive-by downloaders or exploits
- increase the security of the web server itself (the web stack that's serving requests) - yeah, that's you using a private VPS without doing kernel updates.
These are likely the major factors of why people have security issues.
What is forcing HTTPS on the entire web actually doing? Who is it benefiting?
The government can still snoop your data in flight. If someone is connected to a fake wifi endpoint, there is on-the-fly SSL decryption out there...
Do we still need TLS for actual secure transactions that deal with personal data? Yes, of course. That's what it is intended for.
Do we need TLS to read the latest TMZ post about Miley Cyrus?
You decide... (oh and it's http if you were wondering)
HTTPS provides authentication, not just confidentiality.
When you visit "blogs, news websites, etc" do you think there's no value in being able to know for sure that the content is exactly what the owner of the site intended? Even though ISPs have proven themselves willing to intercept and modify that content in transit?
You're oversimplifying and being dismissive without cause.
>a web browser has no business dictating that the entire web should be forced into HTTPS.
1. that isn't what is happening as per the article. They are going to begin picking features that shouldn't be allowed over HTTP (like, say, geo location, web camera access, etc).
2. a browser is precisely the actor that should push for these things. If not browser vendors, who?
>What is forcing HTTPS on the entire web actually doing?
Encrypting streams of data that were previously unencrypted.
>Who is it benefiting?
Users.
>The government can still snoop your data in-flight.
So your argument is 'this isn't perfect for all attack vectors, so it isn't useful at all'?
>Do we need TLS to read the latest TMZ post about Miley Cyrus?
> I don't see any benefit in this type of blanket, all or nothing, type of approach.
Imagine you're making some meatballs. You've got pigs, spices, and a stove.
If you're in Germany, there's no problem -- kill some pigs, grind some pork, mix in the spices, and cook your meatballs. You could make sausages the same way (as long as you've got tubing). And you're free to sample your food as you cook it to make sure it suits your tastes.
If you're in the US, you've got two options:
1. Give up on sausage entirely. Make sure your ground pork is well cooked before you even think of eating any of it.
2. Carefully vet the pigs for trichinosis before introducing their pork into your kitchen.
Unsurprisingly, we use option 1.
Germany, like the rest of Europe, has opted for a blanket solution where they're not allowed to have pigs with trichinosis. The US has opted for a different blanket solution where you can't eat raw pork. Nobody is suggesting that we carefully inspect individual pigs and treat the meat according to whether they had trichinosis.
The recent attack on Github, where malicious JavaScript was injected into a plain unencrypted http connection, is enough to convince me that requiring https everywhere is the right move.
Meanwhile, OpenBSD 5.7 came out today, with the following security fixes in LibreSSL (arguably the most secure SSL library so far):
"Multiple CVEs fixed including CVE-2014-3506, CVE-2014-3507, CVE-2014-3508, CVE-2014-3509, CVE-2014-3510, CVE-2014-3511, CVE-2014-3570, CVE-2014-3572, CVE-2014-8275, CVE-2015-0205 and CVE-2015-0206."
So if I were running a TLS-enabled site using LibreSSL from OpenBSD 5.6, I'd have been exposed to potentially 11+ CVEs. A little sooner with OpenSSL, and I would have been exposed to Heartbleed. And who knows how many CVEs will arise before 5.8 is released?
Why is it so impossible to write a secure TLS library? Why should I put my entire server at risk to appease the attempts of Mozilla and Google to prop up the CA business? Sorry, but I'll stick to parsing lines of text.
Let 'em remove HTTP completely. Hopefully after they break 90% of the web, we'll get some real user revolt, and some real competitors in the web browser space might emerge. Maybe from some people who actually listen to what their users are asking for.
I guess now we know what that "signed extensions only" change was for: what do you think they're going to do when someone submits a "Restore HTTP Functionality" add-on in the future?
Well, frequently the vulnerability of those CVEs is breaking or downgrading the crypto... or in other words: if exploited, the connection could become as insecure as HTTP.
So your argument is that since locks can occasionally be picked, doors shouldn't have locks? What exactly is the massive burden with HTTPS? The computational cost is tiny and will continue to become tinier, there are free cert providers like StartSSL and more coming soon, and the implementation is simple enough that anyone managing a server should be able to handle it easily.
The number of websites where I wouldn't prefer encryption and identity authentication is around zero, and the number of websites where I'm okay with someone injecting arbitrary JavaScript is exactly zero. The time people spend making flawed "if you have nothing to hide, you have nothing to fear" or "crypto libraries/CAs are bad, scary, and hard to use" arguments would be much better spent actually trying to improve those circumstances for the inevitable and necessary shift to HTTPS everywhere.
> So your argument is that since locks can occasionally be picked, doors shouldn't have locks?
A faulty lock on my house doesn't turn into Heartbleed.
The thing is, I don't need a lock on my server that serves up static, legal content. You might think it's a problem, that the NSA is going to spy on you, or China is going to inject attacks into your requests to my server, but that's your problem.
I'm not going to run a massively buggy TLS library with an API guide that would take a whole team of engineers weeks to decipher, just because you're intensely paranoid about accessing game-related data over HTTP.
Seriously, look at the GnuTLS documentation sometime. It's psychotic. As is MatrixSSL, PolarSSL, OpenSSL, and NSS. The closest to sanity I've ever seen was libtls, which is only on OpenBSD, still has lots of CVEs popping up, and can't do non-blocking mode.
> What exactly is the massive burden with HTTPS?
1. write your own HTTPS server. I'll wait a few months, or
2. find a library that's easy to use and won't expose my server to Heartbleed-like attacks, and
3. pay me $70/yr for the wildcard cert I would need.
I'll cover the extra CPU costs, since you say they're so small. (even though when people say "small", they're counting overhead as a percentage against a site running a bloated beast like Wordpress in PHP + MySQL.)
> there are free cert providers like StartSSL and more coming soon
That don't provide wildcard certs (and I have a wildcard CNAME entry; and I make use of that.)
> The number of websites where I wouldn't prefer encryption and identity authentication is around zero
And you're free to not visit my site, just like I wouldn't ever patronize a webstore that wasn't HTTPS. That's how markets are supposed to work. I don't see why your browser has to make the decision for the both of us.
> and the number of websites where I'm okay with someone injecting arbitrary JavaScript is exactly zero
Honestly ... I would be okay with blocking Javascript over HTTP. But I think that's more because I just hate Javascript :P
> would be much better spent actually trying to improve those circumstances
You seriously want me to write a TLS library?
My dream goal would actually be to have it built into the sockets layer. If it could be enabled as easily as a setsockopt(SO_TLS_CERTIFICATE, (void*)certificatedata, ...); and OS updates could fix the security, I'd be a lot more inclined to get on board with the programming side.
I don't have a solution to the wildcard cert issue. I can't well start up my own CA to give them out for free. I guess it would at least be nice to see if they ever tone down self-signed certs from "WORSE THAN HITLER" to "at least equal to HTTP" in terms of warning messages. People keep talking about it, but it's been what? Over a decade now? I'll believe it when I see it.
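For what it's worth, on the client side Python's stdlib `ssl` module already gets close to the "one extra call and the socket is TLS" ideal the parent wishes for. A rough sketch (the hostname and function name are illustrative):

```python
# Sketch: TLS as roughly "one setsockopt-like call" on an ordinary socket.
import socket
import ssl

def fetch_head(host="example.org"):
    ctx = ssl.create_default_context()   # system trust store, sane defaults
    with socket.create_connection((host, 443)) as raw:
        # The one extra step: wrap the plain socket in TLS.
        with ctx.wrap_socket(raw, server_hostname=host) as tls:
            tls.sendall(b"HEAD / HTTP/1.0\r\nHost: " + host.encode() + b"\r\n\r\n")
            return tls.recv(200).decode("latin-1")
```

Library updates to the `ssl` module then fix security for every program using it, which is most of what the parent is asking for; what's still missing is the kernel doing this transparently for unmodified applications.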
Can someone explain why HTTPS is necessary for a webpage where I don't log in or submit any information?
For example, take the xkcd homepage. Not only do I not log into it, there's nowhere I _could_ log in. The only input is a search box (which seems to be disabled at the moment anyway). Is it really a security risk if my communication with xkcd's servers is unencrypted? (Yes, xkcd has a store and a forum, and I understand why you'd need HTTPS on those subdomains - but I don't see why the main domain needs it.)
I agree with the parts of their plan to disable browser features that could be a security risk to non-HTTPS pages - that makes total sense. But it seems absurd to prevent static pages from using future CSS layout features just because they're not using HTTPS.
Intermediaries can (and already do) silently cause the content to be tracked, altered or otherwise modified against both your and the site owner's interests.
How would you feel if they inserted javascript to mine bitcoins?
> Can someone explain why HTTPS is necessary for a webpage where I don't log in or submit any information?
What about a site giving out health info? No login there, but could have consequences if tampered with. Or recipes (same as health info in some cases). Or news (could make investors jump).
Not that HTTPS fixes all of this, but there's no reason to think that a non-interactive or static page can never benefit from security.
If things like "python -m SimpleHTTPServer" don't work, then developers will switch browsers. I don't think anyone is seriously considering what it will take to migrate the long tail of development tools that use HTTP on localhost.
Chrome has been pushing the same thing (deprecating plain-text HTTP and/or visually marking it as non-secure) for quite some time, and they've been very clear that "localhost" will still be considered a secure origin. I don't see any reason to think that Firefox would behave differently.
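And the localhost dev-server case is already within reach of the standard library: Python 3 can serve a directory over TLS with a few extra lines. This is a sketch; "localhost.pem" is a placeholder for a self-signed certificate plus key you'd generate yourself.

```python
# Sketch: "python -m http.server", but over TLS. Assumes localhost.pem
# contains a self-signed certificate and its private key.
import http.server
import ssl

def serve_https(certfile="localhost.pem", port=4443):
    httpd = http.server.HTTPServer(
        ("127.0.0.1", port), http.server.SimpleHTTPRequestHandler)
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    ctx.load_cert_chain(certfile)   # raises if the PEM file is missing
    httpd.socket = ctx.wrap_socket(httpd.socket, server_side=True)
    httpd.serve_forever()           # then browse to https://localhost:4443/
```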
And what about testing small applications on remote servers like "dev.my-personal-site.com"? I don't want to pay $15 for an SSL certificate and 15 minutes of my time just so I can get my dumb lunch break tetris HTML app running on the machine I SSH into from my tablet.
I am long past confused and heading toward awed, at this point, that it's not a common-sense practice for every web developer to generate a personal self-signed root-CA cert, and install it on all of their machines. It's as basic as having an SSH or PGP key.
Setting up a new box? Put your CA-cert in its trust roots. Then use your CA to generate a server cert for it; plop that in /etc/nginx and wherever else. Now it's secure!
This is exactly the original use-case for X.509 certificate authorities: pairing devices on a private network without having to give each of them a set of their peers' keys in advance. You have a private network that you run services on? You're a CA.
And really, in the dev-environment case, you actually want client-auth, too, because then you get "clients who don't have a CA-issued client cert can't connect" for free.
In proper X.509, the server auths the client just like the client auths the server—it's really more of an equal-peers "we're both trusted by the CA—the network owner—so we should both trust each-other" kind of thing. The public Internet centralized X.509 model—where the client has a huge list of CAs that the user doesn't even know the contents of, and the server doesn't check anything—is a very strange and non-idiomatic implementation of the premise.
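The personal-CA workflow described above can be sketched with plain openssl commands. File names and subject names here are illustrative: one set of commands run once to create the CA, then one per server certificate.

```shell
# Once: create a private CA (key + self-signed CA cert, 10-year validity).
openssl req -x509 -newkey rsa:2048 -nodes -days 3650 \
    -subj "/CN=My Dev CA" -keyout ca-key.pem -out ca-cert.pem

# Per server: generate a key and a certificate signing request...
openssl req -newkey rsa:2048 -nodes \
    -subj "/CN=dev.example.test" -keyout server-key.pem -out server.csr

# ...and sign it with your CA.
openssl x509 -req -in server.csr -CA ca-cert.pem -CAkey ca-key.pem \
    -CAcreateserial -days 365 -out server-cert.pem

# Sanity check: the chain validates against your own root.
openssl verify -CAfile ca-cert.pem server-cert.pem
```

Then ca-cert.pem goes into the trust store of each of your machines, and server-cert.pem plus server-key.pem into /etc/nginx (or wherever).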
I'm really talking about the kind of developers that hang out here—people who regularly set up their own staging environments, use those "tunnel into my dev box" services, etc. Most of us here certainly know SSH, and probably have used GnuPG at least once. But it's still relatively unlikely, statistically, that you or I have ever touched the openssl(1) command.
Honest question, say you follow this (which is what I did a while ago for my OwnCloud instance) -- is it possible to install your self-signed certificate on iOS? Because that was the problem I ran into, and ended up moving to a "real" certificate, but I would've been happy to remain self-signed given the option.
Yes. Just email the self-signed certificate to yourself, then open it up on the iOS device. You can also create a personal CA and install it the same way, if you plan on connecting to more than one host.
Has Mozilla indicated whether HTTP2 connections with opportunistic encryption will get access to secure-site features? If so, then SimpleHttpServer could be updated to use HTTP2+oe.
Here's a proposed way of phasing this plan in over time:
1. Mid-2015: Start treating self-signed certificates as unencrypted connections (i.e. stop showing a warning; the UI would just show the globe icon, not the lock icon). This would allow website owners to choose to block passive surveillance without any cost to them or any problems for their users.
2. Late-2015: Switch the globe icon for http sites to a gray unlocked lock. The self-signed certs would still get the globe icon. That would incentivize website owners to at least start blocking passive surveillance if they want to keep the same user experience as before. Also, this new icon wouldn't be loud or intrusive to the user.
3. Late-2016: Change the unlocked icon for http sites to a yellow icon. Hopefully, by the end of 2016, Let's Encrypt has taken off and has a lot of frameworks like wordpress including tutorials on how to use it. This increased uptake of free authenticated https, plus the ability to still use self-signed certs for unauthenticated https (remember, this still blocks passive adversaries), would allow website owners enough alternative options to start switching to https. The yellow icon would push most over the edge.
4. Late-2017: Switch the unlocked icon for http to red. After a year of yellow, most websites should already have switched to https (authenticated or self-signed), so now it's time to drive the nail in the coffin and kill http on any production site with a red icon.
5. Late-2018: Show a warning for http sites. This experience would be similar to the self-signed cert experience now, where users have to manually choose to continue. Developers building websites would still be able to choose to continue to load their dev sites, but no production website in its right mind would choose to use http only.
I would personally rather see those promoted and methods developed to securely bootstrap them than make us all reliant on centralised CA infrastructure. The centralised CAs are all at the mercy of their governments and hence, in my opinion, ought to be considered almost as insecure as self-signed certs.
EDIT: I think I misunderstood your comment - reading again it sounds like you are also in favour of self-signed (hopefully so).
Until supports for DANE and DNSSEC becomes widespread, unless it's a site for personal use, self-signed certs can't really be trusted by third parties.
(BTW, if you're not using a conventional CA, you'd be best off being your own CA, and signing your certs with a CA certificate you've generated rather than simply self-signing the cert. It's a little more trouble in the short term, but it means that each time you subsequently need to generate a new cert, you don't need to put up with warnings everywhere because it'll be validated by your own CA cert. The downside of this is having to install the CA cert everywhere. That's what I do for my private stuff. There are tonnes of tutorials online on how to do it.)
Users should still get a warning if they requested an HTTPS URL. If they requested HTTP and there was opportunistic encryption, fine. But under no circumstances should HTTPS URLs, which indicate secure intent, silently downgrade to insecure (self-signed or otherwise).
I envy you, citizens of the free world :) You (mostly) can use HTTPS, avoid government surveillance, and use new shiny Mozilla features (for whatever they are going to be).
It's not the same in e.g. Russia (and I'm sure it's not just Russia). In Russia, the Web is now officially being censored by the state. They have a national register of prohibited resources -- basically, a huge list of URLs. Every ISP must block all access to those URLs, or else.
So if a page (perhaps, a comment page?) on your site enters the register, and it is served over unencrypted HTTP, ISPs can use DPI to block the access to just that specific page -- which sucks, but at least your site is still accessible. If, however, you use HTTPS -- then ISPs have no other choice but to block all traffic to your site entirely. Given that choice, many webmasters (myself included) will have to choose plain HTTP.
So apparently it doesn't matter how many web services are secured by HTTPS -- there's no problem spying, and there's always a way to make owners (even if it's Google) let governments use their data, whether it's encrypted or not. Moreover, in Russia this list is available to everyone, while PRISM was revealed to the public only by Snowden.
If you want your ISP to be able to intercept your traffic and see which URLs you're accessing, you could always let them install their own CA certificate on your machine. Then they could proxy and filter to their heart's desire, even over HTTPS.
> then ISPs have no other choice but to block all traffic to your site entirely. Given that choice, many webmasters (myself included) will have to choose plain HTTP
At some point, blocking CDNs at IP level becomes too much of an economic burden on a country to be feasible. We've seen an unwillingness by the Chinese to block access to GitHub; presumably this means Fastly (their CDN provider) is safe for a while.
Did you know that Russians had github blocked for several days? Anyway, you're talking about counter-censorship warfare. Yes, some of those measures will be somewhat effective sometimes, but the costs (not necessarily even monetary) are actually quite substantial, and it's definitely not for everyone.
This potentially removes the relative anonymity that the entire non-commercial web offers (and in fact was largely built on, post-DARPA). Free DV certificates may help minimize that negative effect, but this entire scheme still further increases reliance on a badly broken CA system.
This seems like a somewhat rushed idea with good intentions but without sufficient community discussion. Rather than put all our eggs in one basket with LetsEncrypt et al, which are noble efforts to fix a broken system, are there things we can do right now in terms of favoring self-authentication of self-signed certs? This whole thing feels a bit like a witch hunt to punish non-HTTPS sites.
> are there things we can do right now in terms of favoring self-authentication of self-signed certs?
That's a good question, but I've yet to see any justification for thinking the answer is "yes".
If an attacker controls your network connection and/or DNS, what possible information could you obtain to prove the authenticity of a website, without reference to an external source of authority?
Agreed. That's why it was a question. :) I'm trying to get people to start thinking in that direction, rather than in a central source of authority (which also means DNSSEC or DNS TXT's are out)
I'm very glad to see this. It's embarrassing to think that, just a few years ago, many major websites used HTTP for all but their login pages, and it took Firesheep to get them into gear.
> For the first of these steps, the community will need to agree on a date, and a definition for what features are considered “new”. For example, one definition of “new” could be “features that cannot be polyfilled”.
I hope that includes WebRTC, since WebRTC can be used to figure out your local IP address, which (when combined with your public IP address) is essentially a unique identifier[0]. WebRTC is a technology that enables some great things (like Firefox Hello!), but it's a MASSIVE privacy hole[1], and one that I can't imagine justifying for non-secure endpoints.
It's worth bearing in mind that in the beginning, https was a significant CPU overhead... Since 2004 or so, much less of one. And since around 2010 CPU is rarely the bottleneck for web applications.
I do find it interesting that someone starting out as a significant effort after 2010 would bother having a partially https site, with back and forth jumps for login. It seems to me like it's actually more work than just having it all https and flat.
That's correct but that doesn't invalidate chimeracoder's point. His/her point was to disable the ability to use WebRTC with Javascript served from an unencrypted website. Currently, https://github.com/diafygi/webrtc-ips/blob/master/index.html can work on unencrypted websites. The fact that WebRTC has an encrypted connection is irrelevant.
I view this as an attempt by various power brokers to subvert the power of the World Wide Web by attacking its decentralized nature. In the beginning (like now) it'll be relatively simple for everyone to get their hands on the SSL cert they need, but the risk is that in the future, after support for HTTP has been reduced, it could become more difficult to acquire the certificates required to deliver the user experience that you wish to deliver (not just in terms of price, but in terms of censorship).
In addition to making the web more centralized, forcing everyone into HTTPS actually makes it much easier to conduct broad-scale traffic analysis. On top of that, many info-sec experts suspect that the actual cipher in play here may eventually be proven to have significant weaknesses at some future date. AND HTTPS is more expensive to support in terms of bandwidth, CPU, and increased latency. It could result in more coal being burned each year to push all of those extra bytes around.
In such a scenario, wouldn't an alternative/forked browser emerge with support for an HTTP/anonymous web?
There is also censorship risk in named-data and content-centric networking, which offer multicast and caching benefits, but rely on uniquely identified content.
certainly there will always be alternative browsers, but since they would be used by a small minority the censors would effectively have the ability to determine which publishers are "cleared" to reach out to the most broad demographics. That alone would be enough if your censorship goal was to be able to sway public sentiment.
Hopefully they will also introduce a standard and free way to get SSL certificates. I do not like the idea of having to buy new certificates every year (and all the hassle that comes with installing the certificates) just to maintain a very basic website.
Despite comments elsewhere in this thread that "the web moves fast", browser changes typically involve very slow, deliberate, careful rollout plans, even for much smaller compatibility issues than this one.
I'm sure the Firefox team (and Chrome, which is pushing in the same direction) will be keeping a close eye on the progress of Let's Encrypt, and using it to set the timeframe for their proposed changes.
(Not to mention that Mozilla is a major sponsor of Let's Encrypt, so it's reasonable to expect a high degree of coordination.)
What in your life must have happened for you to actually believe such nonsense? Or do you have a financial incentive of sorts to try to make other people believe it?
1. The technical solution is trivial. You always have encryption, but http=self-signed cert, and no authentication, and no lock icon. https=CA cert, encryption, authentication, and lock icon.
2. There are strong government and corporate interests in being able to filter the open web. This closes the open web.
3. For the first time in my life, I have a comment on Hacker News or Reddit at -4. I've posted much more controversial things before (I do care about anonymity; I do use one-off cypherpunks accounts, so my post history won't indicate things). Good debate was virtually always well-received, up-voted, and not censored. The only exception was here, and one place where there was a strong, clear, well-financed astroturf campaign. That's one datapoint, but overall, the debate on the topic smells of financed astroturf rather than genuine grassroots.
I fully agree with #1, but how do you go from a currently-imperfect solution (which could be improved over the years, moving towards a self-signed cert default solution which by the way we are looking at in http/2) to "the goal is to reduce competition"?
Mozilla is one of the most consumer-friendly companies in the world, and all I can see is you trying to undermine their efforts. Are there issues with the current state of affairs? Sure. Are they at fault?
You've been downvoted because your comment reeks of gratuitous negativity, not because a debate is not welcome.
Step 1: Add support to Firefox for encryption when connecting on port 80. Call this HTTP, but have the protocol identical to HTTPS with self-signed cert. You negotiate that when you connect to the web server.
Step 2: Advertise to the community you'll be deprecating unencrypted on port 80 after 2 years time. Ideally, make patches to nginx and apache such that it's a small config change.
Step 3: Change behavior such that:
1. Port 80+old http+no encryption: Show a small warning
2. Port 80+encryption+self-signed cert: No warning. Also, unlocked padlock. "HTTP" in URL. Behavior as for current unencrypted web sites.
3. Port 443+encryption+self-signed cert: BIG SCARY WARNING.
4. Port 443+encryption+cert without identity: No padlock. HTTPS in the URL, but grey, and unlocked padlock.
5. Port 443+encryption+cert with identity: Padlock. Green. Name of organization. Indicated as trusted.
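A rough sketch of this decision table in code, with illustrative names and UI strings (nothing here is a real browser API, just the policy above made explicit):

```python
# Toy model of the proposed UI policy. `cert_kind` is one of:
# None, "self-signed", "no-identity", or "identity".
def ui_treatment(port, encrypted, cert_kind):
    if port == 80:
        if not encrypted:
            return "small warning"
        # Self-signed encryption on port 80: treated like today's HTTP.
        return "no warning, unlocked padlock, 'HTTP' in URL"
    if port == 443:
        if cert_kind == "self-signed":
            return "BIG SCARY WARNING"
        if cert_kind == "no-identity":
            return "grey 'HTTPS', unlocked padlock"
        if cert_kind == "identity":
            return "green padlock, organization name, trusted"
    return "undefined"
```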
One of the problems with a push like this is that, aside from preventing an open web, it also undermines the meaning of a cert. With initiatives like https://letsencrypt.org/, a cert means I actually don't know who I'm talking to (at least in a legal sense -- I can't identify the entity and take them to court if they rob me).
To answer your question: I'm actually not too unhappy with the current state of affairs. I'd be more happy with the state of affairs I proposed above. I'm very unhappy with the state of affairs Mozilla proposes. I value an open web more than I do an arguably more secure one.
This stuff ain't rocket science. Mozilla has smart people. If it's being done a dumb way, there's a reason for it.
Cloudflare's free plan has SSL now, which a 10-year-old could utilize. While that opens up a potential MITM attack, I don't believe it's worse than having no SSL at all (others argue it is, on the premise that it creates a false sense of security).
AFAIK, you can't serve pages from S3 over HTTPS using your own domain name, but https://bucketname.s3.amazonaws.com/ works fine. So if you have some other way of serving your HTML pages, you can include other static assets directly from S3 without triggering browser mixed-content warnings.
One thing I've never been able to figure out from Let's Encrypt's website - will you be able to get a certificate, without hosting your own instance? Or will it be limited to servers you can actually install their program on? Also, I assume they'll get the root CA included by all major vendors/browsers?
You don't have to run the Let's Encrypt client, but you do have to be able to do things to prove that you control the domain. Currently the Let's Encrypt client assumes that it's being run on the same machine on which domain control will be proved (though not necessarily the same machine where the cert will eventually be deployed). Someone could write another client application which gives instructions to complete the challenges manually, which is a feature that's occasionally requested.
The CA will be cross-signed by IdenTrust, which is accepted by mainstream browsers, so those browsers will also accept the certs we issue.
So you don't have to run the Let's Encrypt client? I was under the impression you did because without running the client there was no way to communicate with the Let's Encrypt service?
Or do you mean, you can have multiple servers and only need to run the client on one of them?
I mean that you need to run some client software, but it doesn't have to be the client software we're writing. Other people can implement ACME, for example in hosting provider infrastructure or in server software or other configurations. There could be an ACME client where the verification steps indicated to the user are performed manually rather than automatically. You could probably even speak ACME with curl, although quickly generating valid JSON objects that contain valid cryptographic signatures might be a bit challenging. :-)
It's also right that you can have multiple servers and only run the client on one of them, if you're willing to copy key material from one server to another.
Specifically I'm thinking about current procedures to verify domains - either it's through your registrar (in which case you're golden), or an alternative like adding a specific key to a TXT record, or uploading a particular file to the domain root.
You need to be able to prove ownership of a domain in order to get a certificate. CAs already require this; the "lets-encrypt" command line tool just implements a challenge-response protocol for doing it automatically. But you could implement the same protocol yourself, as long as your platform allows you to either serve the response from a well-known path at the desired domain, or add it as a TXT record in DNS.
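For the well-known-path flavor of that challenge, the mechanics are roughly: the CA hands you a token, and you serve a response for it under `/.well-known/acme-challenge/` on the domain being validated. A minimal sketch in Python (the token and webroot values are hypothetical placeholders, not real ACME data):

```python
import os

def install_challenge(webroot, token, key_authorization):
    # Place the CA-supplied response where the validation server
    # will fetch it: http://<domain>/.well-known/acme-challenge/<token>
    challenge_dir = os.path.join(webroot, ".well-known", "acme-challenge")
    os.makedirs(challenge_dir, exist_ok=True)
    path = os.path.join(challenge_dir, token)
    with open(path, "w") as f:
        f.write(key_authorization)
    return path
```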
> will you be able to get a certificate, without hosting your own instance
> limited to servers you can actually install their program on
They did a bit of Q&A on HN or Reddit a while back, and it seems like you can use them in various capacities, but you'll certainly be able to get your certs signed without using their suite (and use them with the software of your choosing).
> Also, I assume they'll get the root CA included by all major vendors/browsers?
My question whenever this comes up is how will the web respond to the millions of caching devices out there that will now provide no bandwidth savings?
ISPs and companies all over the world cache static HTTP content (i.e. HTTP resources with proper caching headers). Doesn't endpoint-to-endpoint encryption basically kill that?
What I'd love is to have HTTPS for encrypted traffic, and signed HTTP for traffic that doesn't need encryption. So you would use the certificate to authenticate the payload, but a cache would still be able to deliver the content (because a replay would be valid).
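The signed-but-cacheable idea can be sketched as follows. For brevity this uses an HMAC with a shared key, which is only an illustration; a real "signed HTTP" scheme would sign with the site's private key so any client could verify against the certificate:

```python
import hashlib
import hmac

SITE_KEY = b"demo-only-key"  # illustration; a real scheme would use the cert's key

def sign_payload(payload: bytes) -> bytes:
    # The origin signs the content once; caches can replay payload+tag freely.
    return hmac.new(SITE_KEY, payload, hashlib.sha256).digest()

def verify_payload(payload: bytes, tag: bytes) -> bool:
    # Clients check authenticity of the content, not of the transport,
    # so a cached copy verifies exactly like one fetched from the origin.
    return hmac.compare_digest(sign_payload(payload), tag)
```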
> Not all data needs to be secure. Not all websites need to be secure. Requiring HTTPS means additional compute and additional servers securing something may not need to be secured and provides no benefit – only cost. Free and open information should be (optionally) free of encryption as well.
Indeed, there is still a portion of the internet that could benefit from being SSL-free.
I have a small blog on a home server. Basic HTML and static content and I don't care who views it. I can't get a static IP address.
Some things about this decision don't seem thought out.
-who regulates the companies selling certificates? ($5 for a cert seems shady) Are cert companies fronts for other entities?
-does this really prevent malware?
-will self signed certificates get a bit more respect?
-how does this stop Lenovo from adding preinstalled malware that circumvents security certificates?
This is so wrong, partially because HTTP is used as a vehicle to deliver applications. This blurring of responsibilities results in the messy state the Web is going to be in. I see parallels with systemd and Linux here: poor design decisions, and the chase to accommodate an ever-widening audience of Internet users and one-button devices. Just recently I saw a post from a guy somewhere on a dial-up link in Nepal saying that it is impossible to write e-mails anymore: you have to write them in Notepad and then copy-paste them into your web-based e-mail client, otherwise the "client" is too slow. And no, adding fancy animations to my GUI is not progress.
For Tor and I2P hidden services, HTTPS is redundant so I don't really see the point in punishing people for things like this. Loopback sites are an obvious exception to the "HTTPS is better" rule as well.
SSL means something very specific: a protocol that people should no longer be deploying. The article notably uses the term 'Non-secure HTTP'; its secure counterpart at this point in time means HTTPS leveraging TLS (probably at least 1.2), but the wording leaves some room for future interpretation as newer versions or entirely different standards arise.
No one is advocating for 'SSL' here, and continuing to use the term 'SSL' or 'SSL/TLS' when we really mean 'TLS' further confuses the situation.
It's not that specific. You could even negotiate a downgrade with a TLS server to use SSL. The first 3 versions of the protocol were named SSL and the later ones were named TLS but they're not really different.
The differences are significant when it comes to the security of the underlying protocol, and the downgrade is why it's important you refuse to support SSL entirely. SSL of any version (v2 or v3.. the v1 you refer to was never publicly in use) comes with security problems that are resolved in TLS.
I won't bore you with the details; they're well explained at http://disablessl3.com/ among other places. All major browsers have ended support for SSL, and more secure alternatives have been available for years.
It's not a high risk; attacks require scenarios that may not be common, but it remains true that there's no reason to deploy SSL today.
TLS 1.0 was also vulnerable to BEAST. I'm assuming that pointing to TLS 1.0 as the "minimum" is temporary. Over time, we will decide that the cutoff should be TLS 1.1 and we'll deprecate TLS 1.0. At that point, everything you're saying about SSL will be true of TLS 1.0. It's really just a difference in version number.
Yes, it likely will. That's probably why the article mentions a deprecation of "Non-Secure HTTP" rather than prescribing a specific TLS version. It's the sort of language that will stand the test of time as newer protocols become deprecated. The comments here, however, largely encourage "SSL" which is poor advice.
BEAST can be mitigated through ciphersuite selections and other measures. This makes it somewhat different than POODLE which is a protocol design flaw for which no reliable mitigation exists.
Suggesting folks not deploy SSLv3 is hardly a controversial statement. It's not just a difference in version number, it's a difference in protocol specification and name. When we say 'Use SSL' a well intentioned reader may follow that guidance and implement SSLv3, or worse disable support for TLS. Words mean things.
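On the "refuse SSL entirely" point: in server code this is usually a one-line floor on the protocol version. For example, with Python's ssl module (3.7+; modern versions already disable SSLv2/v3 by default, so this mainly rules out early TLS as well):

```python
import ssl

# A server-side context that refuses handshakes below TLS 1.2,
# which includes every version of SSL.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.minimum_version = ssl.TLSVersion.TLSv1_2
```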
Oh cool, so with increasingly stringent SSL requirements, we're basically entirely phasing out the ability to run a website without a certificate authority's involvement.
So instead of all this bullshit from Chrome, Firefox, et al., can I please just send some huge check to GoDaddy or Verisign or whomever and continue to use the internet as an open platform and not some managed service where we try to hold everyone's hand because we've conditioned them to spew their personal information all over the web all day?
Mentioned this last time, but since I didn't see it elsewhere in the thread, will mention it again... what about LAN resources served over HTTP like a NAS, printer, AP, etc.? These devices don't have DNS, forget about SSL.
Is the entire local subnet going to be a secure origin like localhost? Because that sounds problematic... What I want is a way to single-click pin a self-signed certificate to "turn it green".
So vendors are supposed to pre-install a certificate based on that? What happens when you rename it? What happens if you have two of the same AP in the house?
At first, I was apprehensive. As a newer web developer I have never had a secure site. I have a small portfolio and a few tiny side projects I work on, nothing with >10 users. I will have to learn more, do more, and pay more to support HTTPS.
When I look at it through a different lens, I believe the internet should be as private as possible. Encryption is a solution. I think we should all make a push to make things more secure. Hopefully, we can destroy the cottage industry around SSL certs and it will be bundled in as an expected value-add with either hosting or a DNS purchase. I think $1 a month is enough rent for a cert; I saw an SSL cert offered for $600, which is quite problematic if it represents the threshold someone has to cross to get a cert.
Hopefully, Mozilla will work to sort out the CA problem, which is the real thing holding back HTTPS adoption.
Question: if I'm prototyping a webapp on my machine -- one that will ultimately run behind apache or nginx or an amazon load balancer or something -- can I still prototype it in my browser with new features enabled without getting a valid https setup running on my localhost?
Hopefully they put in a whitelist option (similar to IE's security zones) so you can whitelist your development domains, whether hostnames or a local VM you hit directly.
I agree with another comment that a red broken lock for HTTP connections would be a better approach.
For a start, I would like to see http content and https self-signed content being marked the same way. The fact that https self-signed has a shocking warning right on the face and that http is just let through makes me a very sad camper.
The problem is that browsers have gone and made self-signed certs suspect, and yet not created, for example, a well-established foundation for signing such certs.
I can see how https is technically better than http. But wouldn't a https-only web put too much trust in companies who create certificates? I can't think of a concrete danger but it sounds dangerous that the degree of security depends on monetary interests.
Thank god we have technology to serve multiple SSL sites on the same IP!
And yes, while SNI isn't supported on older platforms (namely XP and Android 2.x), those platforms are out of support anyway. Part of this push is for security - people on those platforms won't be any worse off anyway (except for getting warning messages).
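For reference, SNI works because the client sends the hostname in the ClientHello, letting the server pick the right certificate before the handshake completes. A sketch using Python's ssl module (the cert file paths are hypothetical, so loading them is left commented out):

```python
import ssl

def make_context():
    ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    # In real use: ctx.load_cert_chain("site.pem", "site.key")
    return ctx

default_ctx = make_context()
contexts = {"example.org": make_context()}  # one context (and cert) per vhost

def choose_cert(ssl_socket, server_name, initial_context):
    # Called mid-handshake with the SNI name the client sent; swapping
    # the context here serves that vhost's certificate on a shared IP.
    ctx = contexts.get(server_name)
    if ctx is not None:
        ssl_socket.context = ctx

default_ctx.sni_callback = choose_cert
```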
Without a solution to everyone needing to pay for a certificate and identify themselves this seems a bit premature. Maybe browsers will relax the "This is an evil self signed certificate on the site" warning when they do it.
It is; HTTPS is HTTP over TLS ("Transport Layer Security"), however, there are various features like pinning, HSTS, etc that need to be controlled by the application layer, which is why we talk more about HTTPS than TLS.
In the worst case, Mozilla (i.e. Firefox and Firefox OS) will make itself irrelevant because it breaks the Internet for its users.
In the best case, this will make website owners value HTTPS as a marketing decision (rather than a boring non-mandatory privacy decision, because let's be honest, what business really cares about its users' privacy as much as it cares about marketing goals?). Much like how Apple helped making Flash irrelevant (at the cost of impacting their users' experience just like Mozilla does now).
If you really want to tackle SSL, make it less stupid. Self-signed certificates? I want these pinned and treated as secure. I want a notification if they change around the time they expire, and a really big warning if they don't.
If we must have central trust sources, then have central hash servers so when I visit a new self-signer I can externally verify the hash.
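The pin-on-first-use behavior described here could look roughly like this: hash the server's DER certificate, remember it on first contact, and complain if it changes later. A toy sketch (a real browser would also need the expiry-window rotation handling mentioned above):

```python
import hashlib

pins = {}  # hostname -> pinned certificate fingerprint

def check_pin(hostname: str, der_cert: bytes) -> str:
    fp = hashlib.sha256(der_cert).hexdigest()
    stored = pins.get(hostname)
    if stored is None:
        pins[hostname] = fp          # first visit: trust and pin
        return "pinned"
    if stored == fp:
        return "ok"                  # same cert as before: treat as secure
    return "MISMATCH"                # cert changed: big warning
```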
Say the owner of a website with a self-signed cert fears it might have been compromised, and decides to create a new cert. How is the user supposed to distinguish that from a MITM?
That's what the central hash servers are for. Am I being MITM'd? Well, ignoring a global adversary, the problem is usually local. But CA's don't solve the global problem either.
Maybe a good first step would be to make self-signed SSL certificates appear less scary than unencrypted HTTP in Firefox.
While SSL with self-signed certs doesn't make MITM attacks much harder, it does prevent passive eavesdropping. Yet the Firefox UI seems to imply the contrary, by making it harder to use https sites with self-signed certs than unencrypted sites.
This kind of implies that HTTPS is secure. :) I don't think there is anything wrong with using HTTP internally in a datacenter for data that is not sensitive (like monitoring, statistics, etc.). I guess you can still access these in legacy mode. I think the title should say that HTTP is getting phased out for public internet use, or something.
And how is HTTPS protecting us against that? If you took over that IP address, you could initiate a valid HTTPS session using the compromised server's identity and happily report fake data to the monitoring service over HTTPS. I don't see your point. The question here, by the way, is: is it worth X amount of dollars to protect this service with a secure channel? Sometimes the answer is yes, sometimes it is no.
The internal network is always made of servers and clients, just like the external one.
If you compromise one server, you have access to the data transiting through it, but not the others.
If you compromise one client, you have access only to the data that client sends.
In particular, very few internal networks enforce L2 security (i.e. it's possible to sniff all data on the same VLAN as you).
The whole point of certs is to verify site ownership. If getting certs is too easy, they become worthless, as they already are. Email verification of domain ownership isn't good verification at all; such certs, even if trusted, are no different from self-signed certs, IMHO.
I understand that http will still be supported but downgraded by both Mozilla's browser and Google search. How about distinguishing between websites that are only static content and websites that have forms or other dynamic content?
That was the opposite though. There's an obvious difference between making a website HTTPS-only and making a browser HTTPS-only (or blacklisting features for non-HTTPS websites).
Great! Now I just need to setup ssl on my Raspberry Pi to access the several web interfaces I have running there. Oh wait. I'm no longer running Firefox, I won't have to worry about this immediately.
So SSL on localhost? That seems a bit over the top? Can we then assume that all browsers will include a trusted CA/cert for localhost? That doesn't work with eg: ssh tunnels? Or will we need "developer" browsers and "app" browsers to work with localhost? Either for test/dev or for deploying "apps" with nodejs etc?
I'm not sure if considering localhost to be secure/"encrypted" (access to all features) would be a good or bad idea...
If anyone needs help or has questions about SSL, please ask! I work at a company that provides SSL and would also be happy to give out discounts to anyone here.
Browser vendors have indirectly created the money sucking machine that is the certification industry by requiring potential root CAs to have been audited to a very thorough standard (e.g. WebTrust).[0] Most of these audits implicitly require dedicated premises, extreme physical security measures, dedicated hardware, multiple dedicated uplinks, 24x7 personnel, and more. Even browsers that don't use their own cert store prop up this system by using the OS store which does require said audits. (And if anyone doubts how instrumental the browsers are to the continuance of this system, imagine how relatively niche the X509 industry would become if they moved to using something else.) As anyone who has tried to grok the documents at [0] will attest, it's a damn scary thing. Honestly you may as well try to start a bank. Or a country.
This level of difficulty creates a monopoly (or oligopoly, to be more precise.) Few people have the will/finance to do it so few do, and those who do get to take the piss with pricing. As I previously wrote[1], this means FOUR companies control the CAs that issue 91% of ALL the internet's TLS certificates.
LetsEncrypt seems like a good thing, and it might be, but it also might not be. It is, underneath all the PR, pretty much just another root CA who holds itself to the same auditing standards. It is no-doubt a very expensive undertaking and as such we may reasonably assume that there will be few, if any, additional zero-cost, fully-supported CAs in the future: and herein lies one problem. Unless you have specific requirements that LetsEncrypt just doesn't support, you have no reason not to use them. So a future CA landscape might be ONE company controlling 99% of the internet's secrets. Oh dear.
What's more, we should not underestimate the importance of cheap shared hosting. The internet is a medium for information and nothing more, and everybody has something that they might wish to broadcast. Currently, deprecating vanilla HTTP is akin to deprecating the ideas of millions of non-experts who rely on shared hosting to participate. We're telling them to join us in the land of VPSs and terminal emulators/Plesk (shudder), or to use one of the many PaaS services we've created over their own homemade solution. This is fundamentally anti-technology, which is supposed to harness innovation and make lives easier. This point is especially pertinent when you consider that the vast majority of these sites probably don't need encryption at all, so it's not even like you can mitigate the pain with direct benefits - because there are none.
Finally, TLS is a pain in the arse to administer. Really - it's not fun. I'm no stranger to it, and even I get a bit of a sinking feeling when it has to be done. To this day I'm bound to using Chrome, because no matter what I do I cannot get Firefox to parse (never mind accept) my NAS's self-signed cert. Requiring TLS across the board is tantamount to requiring many millions of hours of pain across the world.
To hold up some moral torch that does not have universal applicability and actively makes life difficult, and then declare it as canonical truth that all must adhere to is arrogance of the highest order. A great deal of chat in the tech community is dedicated to lambasting short-sighted and ill-conceived laws (think surveillance, copyright, patents, etc.) and yet here we are, making them. We have to do better.
As someone who participated in the referenced discussion and on HN, I have to say I am very happy with this outcome. Seems like reason has won. Now, if Google follows Mozilla's example, we might actually be able to pull this off.