How does anyone think this is a good idea? It should be clear which news site I am reading when I'm reading an article. Otherwise, how do I know which bias to apply? On iOS, the title bar says "google.com" whether I'm reading an article from CNN.com or WashingtonExaminer.com.
Of all the anti-competitive actions google has taken around search results, AMP is by far the worst. I hope they get smacked down for it in the upcoming anti-trust lawsuit. And kudos to Apple for refusing to change the URL bar like Google does on Android.
I couldn't agree more that AMP is terrible. I do everything I can to avoid it. Using DuckDuckGo certainly helps, but I will still occasionally stumble on an AMP site. I've created a hosts block list to help me avoid AMP as much as possible. It currently has 3,569 unique domains (works great with a PiHole!). I'm really concerned about Chrome's 'signed exchanges' where they can fake the URL completely. I hope Firefox will never support it.
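For anyone curious what such a hosts-based block list looks like, here's a minimal sketch of generating Pi-hole/`/etc/hosts`-style entries from a plain domain list. The domain names are made up for illustration; a real AMP list would contain the actual cache and publisher AMP subdomains.

```python
# Hypothetical sketch: turn a plain list of AMP-serving domains into
# hosts-file entries that a Pi-hole or /etc/hosts can consume.
# Mapping each domain to 0.0.0.0 makes lookups fail fast.
domains = ["amp.example.com", "amp-cache.example.org"]  # invented examples
hosts_lines = [f"0.0.0.0 {d}" for d in domains]
print("\n".join(hosts_lines))
```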
Yes, that is even better! Unfortunately it doesn’t work on iOS, or I would have never created my list. Literally the only thing I miss about Android was being able to use browser extensions like uBlock Origin with Firefox on Android. Safari has its built-in content filters but it’s not the same.
Firefox Beta and Firefox Preview are a bit different. Preview and Focus are both currently more experimental; they're stable, but they're not guaranteed to be kept alive. Firefox Beta and Nightly are both the current latest versions of what will be merged into the stable release.

There's a big rewrite being done, and the current stable Firefox for Android, which supports basically all the addons that the desktop version does, will be deprecated soon-ish. Preview has broader addon support.
Beta and Nightly only support uBlock Origin, literally.
Preview supports six addons in all, but Preview isn't a promise of what's to come, as they consider it a pilot.
I have the beta, and only have uBlock Origin. If I go to the addons site it tells me that, for example, Privacy Badger is not available on Firefox for Android.
This most recent change is just a bug in Image Search, at least based on the tweets I read. The extension seems to inspect the AMP article's URL and HTML, both of which are outside the scope of Image Search: https://github.com/da2x/amp2html
Just wanted to say thanks for maintaining your block lists! I use a few with NextDNS. I try to be privacy conscious but I don’t really have the technical background to know the best way to make that happen so I very much appreciate that you share your work and make it easy for those of us who aren’t experts. I’ve never been quite sure that I’ve configured things right but your NOT BLOCKING image doesn’t show up for me so that’s quite a relief!
I'm glad you like it! If you have any issues with it, I encourage people to come open a ticket explaining what is wrong. Sometimes I screw up and block things that shouldn't be blocked; other times I have reasons why I blocked something, and the ticket provides a good place to have that discussion. Feedback from the community is a great help to me in improving the lists.
> I'm really concerned about Chrome's 'signed exchanges' where they can fake the URL completely.
They're not faking the URL; a signed exchange contains data that can only have come from the original site. It's a secure way of handling caching/CDNs/etc, and it'll be a net improvement for security that allows sites to put less trust in third-party servers and scripts.
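To make the shape of that claim concrete, here is a greatly simplified illustration of the signed-exchange idea: the cache serves a (url, body, signature) tuple, and the browser checks that the signature covers exactly what the origin signed, so the cache cannot alter the content without failing verification. Real signed exchanges use certificate-based signatures per the Signed HTTP Exchanges draft; this sketch substitutes a bare SHA-256 digest just to show the shape of the check, so it is NOT real SXG verification.

```python
import hashlib

def sign_exchange(url: str, body: bytes) -> str:
    # The origin computes this; think of it as a stand-in for a real
    # certificate-backed signature over the URL and response payload.
    return hashlib.sha256(url.encode() + b"\x00" + body).hexdigest()

def verify_exchange(url: str, body: bytes, signature: str) -> bool:
    # The browser recomputes and compares; any tampering by the
    # intermediary cache makes this check fail.
    return sign_exchange(url, body) == signature

sig = sign_exchange("https://example.com/article", b"<html amp>...</html>")
print(verify_exchange("https://example.com/article", b"<html amp>...</html>", sig))  # True
print(verify_exchange("https://example.com/article", b"<html>tampered</html>", sig))  # False
```

The point of the design is that integrity no longer depends on trusting the server you fetched the bytes from.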
This is changing the semantics of fetching a URL to be agnostic to how it gets resolved. That's disturbing and downright deceitful to the end user.
Fifteen years ago, if you asked me how google search would look, I would have responded “mostly the same, maybe they’ll have cool features like asking me which meaning of ‘converse’ i wanted: the shoe brand or the logical relation.”
Instead, they’ve only subtracted functionality from the query engine (no more domain blocking), discouraged you from clicking through to sites by automatically scraping and rehosting them as “semantic” results, and now they’re trying to actively acquire 100% of the outbound traffic. Fuck google.
I don't think it's deceitful - end-to-end, the site you're displaying was loaded somehow from domaina.com, and it'll show domaina.com in the address bar.
Can you explain? Every time you search, do you want all of the publishers who appear on the results page to know that you were searching? Or is it that you don't want snippets to appear in the search results and just want a list of links without any evidence for why they might be good matches for your query?
This is a good example of the contradictions within AMP that limit its usability as a technology. Unless AMP was really intended as a privacy tool this whole time and I just didn't know it, publishers probably "should" know the same amount they would get to know if you visited one of their actual pages, if you are visiting an AMP page to consume their content rather than merely searching for it. But if that disrupts the smooth functioning of AMP, well, now we're just exhibiting an architectural shortcoming that's baked into AMP's philosophy of how to serve content.
>Or is it that you don't want snippets to appear in the search results and just want a list of links without any evidence for why they might be good matches for your query?
Is that how you feel about regular search results?
Your comment does not make any sense to me. Are we talking about the same thing? A snippet is a summary of the page served by the search engine. The publisher currently does not have any idea that a particular user has seen a snippet, so publishers do "know the same amount" as they did before with AMP, which is nothing.
> Is that how you feel about regular search results?
I'm sorry that you were having difficulty interpreting my comment. It may help if you go back and note that I was making a distinction between using AMP to consume content vs search for it. Publishers normally know when their content is being consumed. Let me know if that makes sense to you.
>Regular search results have snippets.
Right, and I was asking about the search results, not the snippets that accompany them. That is to say, the part with the blue title, green link, and a few lines of black text from the page, displayed ten at a time; not the snippets that accompany them at the top of the page. Unless you were just using 'snippets' as a general term for the same thing that I mean by search results, in which case you were just repeating the content of my own question back to me.
> Publishers normally know when their content is being consumed.
They still know when their content is being consumed. They just don't know when their content is being searched for until the user clicks their link, exactly like a snippet. Does that make sense now? My point was that search engines already show cached portions of the page. Read the parent comment of my first "snippet" comment to understand why I was making that point.
What is stored in your cache was loaded from their servers when you originally visited. Also, you can clear your cache at any time you choose to re-fetch the original content from the provider.
Also, a provider uses a CDN at their discretion. Giving them the ability to invalidate or update cached records at times of their choosing. Or remove the CDN entirely if they choose to.
This is Google using their weight to be anti-competitive and fall further down the anti-trust rabbit hole.
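The cache and CDN semantics described above can be sketched as a toy cache: entries expire after a TTL (like a `Cache-Control: max-age`), and the provider can invalidate them early, which is exactly the control a CDN customer retains and an AMP cache takes away. This is an illustrative model, not any real CDN's API.

```python
import time

class TtlCache:
    """Toy client/CDN cache: entries expire after a TTL and can be purged."""

    def __init__(self):
        self._store = {}  # url -> (body, expires_at)

    def put(self, url, body, max_age):
        # Analogous to caching a response honoring Cache-Control: max-age.
        self._store[url] = (body, time.monotonic() + max_age)

    def get(self, url):
        entry = self._store.get(url)
        if entry is None or time.monotonic() >= entry[1]:
            return None  # miss or expired: caller must re-fetch from origin
        return entry[0]

    def invalidate(self, url):
        # Provider-initiated purge, at a time of the provider's choosing.
        self._store.pop(url, None)

cache = TtlCache()
cache.put("https://example.com/a", b"v1", max_age=60)
print(cache.get("https://example.com/a"))  # b'v1'
cache.invalidate("https://example.com/a")
print(cache.get("https://example.com/a"))  # None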
A provider uses a CDN at their discretion, but it's totally legitimate to have a client-side cache as well. If I've got 20 users going through a Squid proxy to get to the internet, that's something the provider has to live with. Not to diminish your core point, which I think is correct, but there are limits on what providers should be able to expect.
It's deeply misleading to pretend stuff like signed exchanges and portals are purely about content distribution and security, and not about control over said content being shifted to Google, and away from authors and other sites.
It's in a similar vein to how many of the objections to AMP were white-washed with "it's open source, if you have a problem with it why aren't you submitting a pull request??"
Google can choose today which site (original or AMP cache) to show in their search results. Today, as an end user, I know, via the URL, when I land on a Google AMP site.
That said, Google already has control of every AMP page because the spec REQUIRES you to load a piece of Google controlled/hosted JS onto your page. That JS can change at any time without "signed exchanges" being aware.
> Google can choose today which site (original or AMP cache) to show in their search results. Today, as an end user, I know, via the URL, when I land on a Google AMP site.
OK, why is this an issue? Note that HN is a bad place for this kind of Socratic-method discussion; we'll both quickly run out of the ability to post replies. Assume I'm someone who doesn't share whatever values you hold about the purity of the URL bar or whatever. Why is being unable to know whether the content came from Google's IP or mysite's IP relevant to anyone, as long as it's the same content (which signed exchanges ensure)?
Because now every asset you download from the web is a Google tracking resource.
Is it really unclear what’s going on? When you perform a GET request for these assets you are being monitored. These requests end up being part of the profile built for you which is used for advertisement targeting and content recommendation.
PS: You work for Google. Do you work on this project?
> When you perform a GET request for these assets you are being monitored.
You'll only ever retrieve Google AMP cache results from the Google search page, where they were already able to track if you made such a request, since the link you clicked has trackers in it.
So from that perspective, nothing changes.
> PS: You work for Google. Do you work on this project?
No, I work on mostly internal infrastructure. My interest in AMP is simply that I don't dislike the AMP "experience", it's fine. But more importantly, I legitimately don't get the HN hysteria around AMP. Returning to your concern, literally nothing changes with AMP vs non-AMP.
I don't get it. The most compelling concern I've heard is that it's annoying to have to couple parts of your infra to AMP-standard stuff. And I sort of understand that. But even that isn't different than previous SEO/ranking changes that required changes to the page.
Assume every website would use AMP. Like on mobile. How often do you get redirected to AMP already when you just click a random link? I sure do!
From then on, every asset is loaded via Google servers. Google now controls the entire internet. Google does this so it can serve its ads and track all users. It's as if I only ever used Google for my internet surfing.
I don't use Google because I strongly believe it is an evil company, but if websites use AMP, then I am forced to hand over my data to Google even though I don't want to.
Right now, I can block Google servers entirely. But if the entire web is served via AMP, I can't do it.
And that's the whole reason AMP exists. So everything I do (or at least as much as possible) goes through Google servers.
> You'll only ever retrieve Google AMP cache results from the Google search page, where they were already able to track if you made such a request, since the link you clicked has trackers in it.
So from that perspective, nothing changes.
I am not affected; I don't use Google search. The problem is for individuals who use Google search and now don't have an option to avoid this deep tracking. The difference between regular pixel trackers and multiple data points associated with every resource a site serves is immense.

I work in ad-tech, not particularly on the identification side, but I have started multiple projects on that end. From experience, a regular tracker can be fooled, but you cannot fool every resource request. One of the things I did to identify ad-fraud bots was actually drive them to a site on which I controlled every resource. The resource-request fingerprint for bots was easily distinguishable from real people. Moreover, some humans exhibited navigation patterns that were distinguishable from other humans; I remember I once caught a QA person doing a shoddy job of testing the front end because of it.

That is the kind of power that Google is acquiring as more and more sites choose to use AMP. It is scary to think that a single entity has that power.
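The fingerprinting idea described above can be sketched very simply: identify a visitor by the *set* of resources they request. A headless bot that only fetches the HTML produces a very different fingerprint from a real browser that also pulls CSS, JS, and images. All paths and the hashing scheme here are invented for illustration.

```python
import hashlib

def request_fingerprint(requested_paths):
    # Canonicalize the set of requested resources, then hash it.
    # Two visitors with the same request pattern share a fingerprint.
    canonical = ",".join(sorted(set(requested_paths)))
    return hashlib.sha256(canonical.encode()).hexdigest()[:12]

# A browser fetches the page plus its subresources; a crude bot fetches
# only the HTML, so the two fingerprints differ.
browser = ["/article", "/app.css", "/app.js", "/logo.png"]
bot = ["/article"]
print(request_fingerprint(browser) == request_fingerprint(bot))  # False
```

Whoever serves every resource on a page gets this signal for free, which is the concern with one party sitting in front of many sites.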
Nothing beyond the initial page load is served by the AMP cache. If you request additional dynamic resources or navigate to other pages, you'll go to the originating site.
In the example I saw, you could go through a mini site experience; you can “visit” the site without leaving the AMP. I don’t believe this one change, serving images from the AMP cache, is the issue. My concern is with the proliferation of AMPs. A lot of individuals will browse AMPs thinking their browsing is between them and the site publisher without realizing Google is in the middle.
Honestly, Google has an okay track record of respecting people’s data. In their AdExchange, they are one of the few that obfuscates the IP address. However it is still concerning that a single entity continues amassing all those browsing patterns from billions of individuals. It can be abused easily, with intent or not.
@DevKoala, do you have an example where you encountered the "mini-site" experience? I haven't seen it, but it could be a bug that would be worth fixing.
AMP also serves traffic after the initial load, so this is in no way comparable to a click tracker (which is also none of their business, btw). It will also offer in-band ad obfuscation eventually.
I’m curious, what do you value about the services you consume? I like transactions where I know what I am giving to the service provider. Do you really want to push away from this reality for the benefit of a few ms of load time leaving a search page? That’s essentially what you’re arguing for.
The AMP team doesn't prefer these URLs shared either:
If you click the browser share icon, or trigger the browser native share intent, the origin URL will be shared, not the AMP Cache URL. Only if you explicitly copy the URL bar will the AMP Cache URL be shared.
The Signed Exchange spec that AMP has offered sites for a year now allows them to have their own URLs displayed in browsers that support it. In that case, the google.com URL will never be displayed and thus can't be accidentally shared.
All AMP documents on the AMP Cache contain `<link rel=canonical href={origin url}>`, and Google recommends that social media prefer the canonical URL. This is useful outside of AMP too, as there are often multiple URL variants for any article (for example, mobile vs. desktop versions), and the sharer and the recipient might otherwise get different ones.
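As a sketch of what a client could do with that tag, here's a minimal stdlib parser that recovers the origin URL from an AMP Cache page via `<link rel=canonical>`. The HTML document below is a made-up minimal example.

```python
from html.parser import HTMLParser

class CanonicalFinder(HTMLParser):
    """Collects the href of the first <link rel=canonical> tag seen."""
    canonical = None

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "link" and a.get("rel") == "canonical" and self.canonical is None:
            self.canonical = a.get("href")

doc = ('<html amp><head>'
       '<link rel="canonical" href="https://news.example.com/story">'
       '</head></html>')
finder = CanonicalFinder()
finder.feed(doc)
print(finder.canonical)  # https://news.example.com/story
```

This is roughly what "share the origin URL, not the cache URL" tooling does under the hood.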
That's really quite useless. I rarely share links by clicking weird "share link" buttons. I usually have half a message already composed in mail/messages/slack, and I just want to cmd-L cmd-C in the browser and cmd-V in the message I'm writing.
Also, "the google.com URL will never be displayed" is a world with an internet I don't want to be a part of.
The workflow I described goes for mobile and tablets just as much as desktop; for the cases where a keyboard is not connected please mentally replace "cmd-l cmd-c" with "tap in address bar to select, tap copy".
Also, others might share an amp link from their mobile devices, which I then end up clicking in a desktop slack/mail/messages app, and there we go again with the amp virus even on desktops.
It's more a question of whether a specific document is using AMP; the site can be a mix, just like a site using jquery, as an example.
An AMP page can be identified by examining only the first few bytes of the HTML. The `<html>` tag will contain either the `amp` or the lightning-bolt (⚡) attribute, e.g. `<html amp>`.
Technically an AMP document must pass AMP Validation to be truly AMP, so there are documents that match the above condition which aren't valid AMP. There are multiple ways to validate. A starting place is https://validator.amp.dev/
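A minimal check for the marker described above might look like this. Note it only checks for the `amp`/⚡ attribute on the `<html>` tag; as the comment above says, real validity requires running the document through the AMP validator.

```python
import re

def looks_like_amp(html_prefix: str) -> bool:
    # Find the <html ...> tag and look for a bare `amp` or U+26A1 (⚡)
    # attribute among its attributes. This is a heuristic, not validation.
    m = re.search(r"<html\b([^>]*)>", html_prefix, re.IGNORECASE)
    return bool(m) and bool(re.search(r"(^|\s)(amp|\u26a1)(\s|=|$)", m.group(1)))

print(looks_like_amp("<!doctype html><html amp lang='en'>"))  # True
print(looks_like_amp("<!doctype html><html lang='en'>"))      # False
```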
> Would you bat an eye at Google acquiring CloudFlare?
Not any more than I already bat an eye at CloudFlare.
(It's probably worth noting here that I do work at Google, so my risk profile is probably different from yours. For me personally, and speaking solely from a trust perspective, I'd probably prefer it if Google acquired CloudFlare, since I would get a net increase in transparency; but I can understand why that isn't a general position, and there are other reasons I don't think Google acquiring CloudFlare would be good.)
Thank you for taking the time to engage. Google is a scary beast at the end of the day, and I firmly believe it's an organism that should not remotely resemble what it is right now. Splitting it up could go a long way, now that I think about it.
I share these fears to a lesser degree with Microsoft and of course Facebook. Apple seems to do a great job of safeguarding, but they could become sour if they don't remain careful. Stuff like Clearview crosses the line into directly-dangerous. CloudFlare is currently innocent in my eyes, but they've managed to centralize a lot more channels than I'd like to think about.
I don't think you realize how much of your online activity is already tracked by google / facebook / instagram.
Google's javascript is everywhere, including explicit tracking with analytics, and lots of CDN loads for endless lists of things (js libraries, fonts etc).
Their properties also track you: google search, youtube, email. They also make software you might use (chrome / android / google maps / google play store).
If you think there's something about signed exchanges that lets google track you in a way they can't now... please examine these assumptions.
Folks who come up with these super complex schemes (google will use javascript loaded into AMP to take over and track you) ignore that google ALREADY tracks them.
And folks who say they don't use any google products (no android / google maps/ play services / chrome etc etc) are often either lying or don't understand how many third parties load google analytics into websites, or load recaptcha bot protection etc.
Just realize that AWS / GCP / Azure have already gobbled up vast swaths of website hosting in all forms and are growing along with some free CDN and DNS providers.
If google said "we want to track people" and brought android, chrome, dns resolvers, network infrastructure, google cloud compute, AI systems, google analytics (which these media sites include voluntarily), google play services, etc. to bear to target and track you - they probably could.
EVERY single person (including you) who claims they don't use google: if you dig down, they often do after all. And if you don't, some of the people you email or interact with do, so indirect profiles can be built.
AMP solved a need for a lot of users: the janky, slow, ad-filled websites that media sites in particular had become. So there is an actual end-user reason people like AMP - it's a better user experience in many cases. This is where "AMP is ruining the web" gets hard to support: most folks don't perceive that they are giving up much more in terms of privacy, and they are getting a lot.
> If google said "we want to track people" and brought android, chrome, dns resolvers, network infrastructure, google cloud compute, AI systems, google analytics (which these media sites include voluntarily), google play services, etc. to bear to target and track you - they probably could.
Their data governance team wouldn’t allow it. You are basically describing a system they could only introduce with the permission of the government. Honestly, I don’t care if the government is tracking me; I can’t fight that. I just don’t want Google tracking me for the purpose of influencing my spending habits, emotional state, or perception of the world. That is my main beef with their advertising capabilities.
Combined with that second thing I mentioned (the required Google-hosted JS), it is total control by Google, with no straightforward way for me to detect it, block it, or go around it, as I can today.
So if I understand correctly, your threat model is "Google will inject unwanted JS into a JS blob they host (like the amp.js from Google's CDN) and this will do nefarious (for some definition of nefarious) things to me without me knowing."
How is this different than today, where many sites use js from google, either as a cdn or part of the ads infrastructure? I guess you can block some of those, but blocking the jquery provided by google's CDN isn't going to work too well.
(And further, what kind of nefarious thing do you fear Google will do? How likely is it that they will do so, in your opinion?)
Today, many (most) sites do not use js from Google or hosted by Google. Google is pushing them to use Google infrastructure by way of AMP and that's the wrong direction.
> How is this different than today, where many sites use js from google, either as a cdn or part of the ads infrastructure?
It's different because it's a requirement. nytimes.com is moving to phase out all third-party advertising data, so presumably they could design their page such that it only accesses their resources.
With a signed exchange, that would allow them to nicely compartmentalize and contain privacy to their site, if they aren't required to load and run some Google-supplied JavaScript. The argument that Google already knows that someone visited the page, so it's no big deal, is not compelling, since there is a big difference between knowing someone clicked to visit a page and having carte blanche to load your own code on the page in question.
Can you include the AMP-required JS inline, such that it implements a specific AMP spec version, or do you need to load it externally? If you can supply it inline, that's great, and what people would want (as long as it doesn't load additional third-party resources). If you can't, then you're providing Google with an extra level of control that's not really needed, and that's what people are against.
> (And further, what kind of nefarious thing do you fear Google will do? How likely is it that they will do so, in your opinion?)
If we go forth only considering what we think people will do, and not limiting what they can do, we're destined to be upset with the outcome. If not from Google itself, then in twenty years when someone buys Google, or Google sells off a division that houses information, or there's a breach and it's exposed, or some other company rides on Google's coattails and uses the same precedent to get data but is less trustworthy.
The point is that some people don't want to share this information, and would choose not to do so if there was an easy way to tell when it was being gathered. Fighting against new methods that seek to make it implicit instead of explicit is the only real way to do that.
> With a signed exchange, that would allow them to nicely compartmentalize and contain privacy to their site, if they aren't required to load and run some Google supplied JavaScript.
Then I'd direct you to Gregable's comment (who is a person who actually works on AMP) that
> the AMP project is actively working to move the origin (control/host) of the AMP Javascript to the publisher's own domain, as well as allow a version served on an origin owned by the OpenJS Foundation, rather than Google.
So while this isn't supported yet, the people working on it do want that.
> If we go forth only considering what we think people will do, and not limiting what they can do, we're destined to be upset with the outcome. If not from Google itself, then in twenty years when someone buys Google, or Google sells off a division that houses information, or there's a breach and it's exposed, or some other company rides on Google's coattails and uses the same precedent to get data but is less trustworthy.
I'm unconvinced by such slippery slope arguments, given that the pushback were Google to do something like inject nefarious js would be swift. They've had the ability to do so for, well, 20 years now. They haven't yet.
> So while this isn't supported yet, the people working on it do want that.
Good! For what it's worth, I'm slightly pro AMP based on the idea, I'm just not entirely happy with the current implementation. Fixing it to be less dependent on a Google resource is a good change, IMO.
I use copious Google services, such as Gmail and Drive, and Hangouts (or whatever it's called this week), and Android, but I'm leery of becoming more dependent on Google. It's to everyone's benefit if there's healthy competition between all parties, and to my personal benefit if I don't find that someone's gotten access to my google account and literally everything is open to them (which is why I always use a username/password combination for sites I create accounts for instead of linking my Google account... even if I know my email is @gmail.com so it's of limited use, for now. Baby steps).
> I'm unconvinced by such slippery slope arguments, given that the pushback were Google to do something like inject nefarious js would be swift. They've had the ability to do so for, well, 20 years now. They haven't yet.
First, it doesn't have to be nefarious. The bar for Google deciding they deserve analytics for content they "serve" is much lower than the bar for actually doing something illegal. I prefer not to place options to do what I consider the wrong thing for business gain in front of companies when it can be helped. Hope for the best, plan for the worst, and all that.
Second, that was only one of the scenarios I listed. The others notably did not rely on Google doing or not doing the right thing, because the decision is no longer in their hands. If Google is no longer the authority deciding (because they are gone, or have a new parent, or the data was taken), what Google would choose to do is irrelevant. That's why it's important to some people to reduce the information being collected. It's impossible to know what it will eventually be used for in the long term, so the prudent thing is to limit it and/or compartmentalize it (that is, maybe I'm happy with nytimes.com knowing where else I clicked in their article, but I would prefer Google only know I loaded that first article).
I'm not convinced by your argument that since nothing evil has happened in the last 20 years, we are safe in the future.
It's about power dynamics. If you get a consolidation of power, that's going to be open to abuse. Maybe not now, maybe in the future, who knows. Democratic systems have checks and balances in the public domain. Google doesn't have this.
> They've had the ability to do so for, well, 20 years now. They haven't yet.
as we don’t have proof that google did not do nefarious things, we can’t simply assume that they haven’t. with such monopoly and power, distrust is a useful thing.
> as we don’t have proof that google did not do nefarious things
We do have proof that they don't do the specific nefarious things being discussed here: injecting nefarious js into otherwise useful things. That's easy to determine.
>>what kind of nefarious thing do you fear Google will do
Well, the headline is one good example. That google controlled JS is EXACTLY how they removed access to the original URL...on somebody else's page that isn't theirs. "Signed exchanges" doesn't fix that either. It's also how they hijack the back button and swipe events for carousel navigated pages.
> That google controlled JS is EXACTLY how they removed access to the original URL...on somebody else's page that isn't theirs.
No, the Google AMP cache adds the header bar. That isn't added by the Google controlled AMP js. Let me repeat this: The AMP js didn't change. Google's AMP cache implementation changed. (if you disagree with this, please post the diff of the AMP js that removed the url bar, the js is opensource at [0])
> "Signed exchanges" doesn't fix that either.
Yes it does, in two ways:
1. It would prevent Google from mucking with the embedded page at all, like they do now.
2. It would remove the need for me to have the url redirect, since the url bar would point to the original site.
Apologies, you're right in that they aren't using it that way. They could, and the examples of the top bar, swipes, and back button hijacking seem to indicate they aren't averse to it. Those things caused me to lose trust in AMP. That they chose to do it via a "proxy" instead of their hosted javascript doesn't make me trust them more.
This concern, Google controlled/hosted JS, is independent from Signed Exchanges and specific to AMP.
At the same time, the AMP project is actively working to move the origin (control/host) of the AMP Javascript to the publisher's own domain, as well as allow a version served on an origin owned by the OpenJS Foundation, rather than Google.
Will this mean that sites will finally be able to lock in a version of AMP? The idea that there is one place, right now, that is required by all AMP sites and that can dynamically update the behavior of all such pages is, in fact, terrifying to me.

I went into reading this discussion thinking AMP wasn't making anything worse, and being super unhappy that anyone even thought to denigrate signed exchanges (I want those deployed everywhere in order to provide censorship-resistant web caching). Then I got convinced by kbenson that this is actually a bit of a dystopian scenario for the web once you involve AMP. I even managed to find an issue about solving this problem with subresource integrity (one you were involved in) that apparently got closed with prejudice (not by you), under an insistence that this javascript injection be an "evergreen" codebase lest somehow everything become super insecure, which to me just indicates a fundamental architectural flaw in the security model of AMP :/
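For context on the subresource-integrity idea mentioned above: a page can pin a specific version of an external script with `<script src=... integrity="sha384-...">`, so a silently updated script fails to load. The integrity value is just a base64-encoded SHA-384 digest of the file; here's a sketch of computing one (the script bytes are a made-up stand-in, not the real AMP runtime).

```python
import base64
import hashlib

def sri_sha384(script_bytes: bytes) -> str:
    # Compute the digest in the format the HTML `integrity` attribute expects:
    # "sha384-" followed by the base64-encoded SHA-384 of the file.
    digest = hashlib.sha384(script_bytes).digest()
    return "sha384-" + base64.b64encode(digest).decode()

js = b"console.log('pinned runtime');"  # stand-in for a hosted JS file
print(sri_sha384(js))
```

An "evergreen" script that must always be the latest version is fundamentally incompatible with this kind of pinning, which is the tension the closed issue was about.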
On publisher origin, the plan-of-record does not involve any validation of the contents of the AMP javascript files. When an AMP Cache (e.g. Google) crawls one of these AMP documents, the same is true - the contents of the javascript files will not be relevant to the decision of whether or not the document is considered valid AMP. The files will likely not even be crawled by the Cache.
However, when the AMP Cache serves one of these files, it will rewrite them to the latest* version for serving to users. This is necessary since the javascript runs in a somewhat privileged context in search results.
Lastly, and there is still some discussion around this, it is likely that Signed Exchanges may be able to load the publisher's own version of the javascript in the future, even in search results. This is because the execution context of the javascript is different for Signed Exchanges.
Because it's cover to allow them to subsume content and serve it solely from google.com, denying the originating page any traffic or recognition, or swapping it out transparently when they feel like it.
This is literally already happening with the info sidebar. The reason signed exchanges matter is that they let Google throw up the smokescreen about how it's cryptographically verified to come from the original page, so why are people upset?
Any site that objects and refuses to implement this stuff will just disappear from the first page which is reserved only for Accelerated By Google sites. (Which is already happening for AMP links on mobile searches).
As I understood signed exchanges, a correct implementation makes it impossible for the intermediary to inject, alter, replace, or otherwise swap out any part of the page without failing verification. This would mean the original server gets to do all the branding and get all the recognition and integrity in transit that its controllers might wish.
Yes, as you say, Integrity is preserved. However, Confidentiality is also another important aspect of Information Security. Making a 3rd party appear as a 1st party, is a privacy and confidentiality violation, which is why I do not like AMP and signed exchanges.
Yeah, as near as I can tell signed exchanges are essentially a caching proxy. With integrity checks, so that you don't have to trust the proxy all that much and its agency is much reduced. I'm reminded of apt.
The calculation appears to be that given the chance, some website controllers will choose to trade confidentiality of public pages for better load times. In business terms, this seems a pretty straightforward win in many cases, so I can see why some would sign up.
The publishers are not mentioning to the visitor they are adding one more third party looking at your data; one that maintains an ad exchange and will be an intermediary on every resource request.
A publisher is free to switch to AMP, but choice needs to be given to the user to agree or leave the site the same way it happens with cookies. I wouldn’t opt in and now I cannot block Google tracking at the DNS level thanks to this.
Publishers can, and should, always make visitors aware of how many third parties are positioned to see all visitor data.
We are, unfortunately, a long way from this being normal. Even as third parties doing things like running CDNs or doing TLS termination for other reasons has been pretty thoroughly normalized. Though offering it as a Firefox extension could be an interesting exercise.
I think publishers view AMP as a question of their sovereignty and choice. Since it's their website that's being potentially served by Google, it's their choice to make. There's absolutely a lot of room to dispute if this is the morally correct stance, but I also think it's not wildly out of line with other questions publishers weigh in choosing what they serve and how.
> I wouldn’t opt in and now I cannot block Google tracking at the DNS level thanks to this.
How do signed exchanges break blocking Google tracking at the DNS level? You already need to have google.com unblocked in order to get a results page that serves an exchange from Google.
AMP already heavily restricts what the original page is allowed to put inside, including things like ads, javascript and branding. You can be certain anything google implements around this proposal will have similar very tight restrictions.
That's definitely a scary thought! Can you share any details of what must be public plans to strip everything non-Google of branding? Surely there must be something out there beyond the IETF docs for the protocol, where I don't see any evil plans to Googlify everything. If anything, it looks like a design intended to allow relaxing the restrictions imposed by AMP.
Your priors are terrible. Extrapolation from past experience is a perfectly valid form of reasoning: it's called "induction", and most people consider it sound.
Being paranoid does sound like an issue! However, just because other people's personal incredulity hasn't been a great form of evidence in the past, that doesn't mean it isn't a good form of evidence at the moment. Got any reason to believe it isn't?
I understand that a lot of people genuinely do not trust Google. It's been their experience, and thus their priors, that Google is attempting to assert control over all aspects of the web.
Perhaps someone can find some evidence that allows for deductive reasoning on this subject. I would really like it. Otherwise, all we've got is competing lines of perfectly valid incompatible inductive reasoning. Then people choose the one they like or fear the most.
That's a scenario that my priors suggest is not likely to be useful for producing valuable models of the future. Instead, it's a scenario where the primary outcome I expect is for confirmation biases to rule the day.
> Perhaps someone can find some evidence that allows for deductive reasoning on this subject.
Such evidence would be admissible in a court of law, and would probably lead to the breaking up of Google (and Alphabet, and various holdings) under antitrust. Yes, that'd be nice, but you're asking for a lot; even given the premise, it's unlikely to materialise until the statute of limitations has passed, if that.
> Otherwise, all we've got is competing lines of perfectly valid incompatible inductive reasoning.
Not quite. Because you can ignore the lines of the reasoning, take into account the evidence, and then come to a conclusion yourself. If you don't quite trust yourself to do it properly (and who would, honestly? the bias you've highlighted is enough on its own, and it's just one), there's mathematics to do so: Bayes' theorem. (Of course, you have to estimate the conditional probabilities properly, but I find it easier to notice when I'm leaning on the scales when there's a level of indirection like that.)
Most predictions are made with inductive reasoning. If your comment doesn't contain some flaw (even if not the one I pointed out), then it suggests that prediction should be much harder than it actually is: super-predictors should do little better than chance. And yet we find that, in reality, even the little-trained masses do pretty well: https://predictionbook.com/predictions You can see confirmation bias in the graph there, but it's not as significant as my model of you expects.
The page results will be a tightly restricted iframe-esque window inside of Google results (this is what the Chrome "Portals" proposal is about).
There will be no obvious distinction between a search result and google's own website, even though the "portals" will be showing cryptographically signed content (which will almost certainly be in a super restricted AMP-esque format that denies sites much control besides what the text says)
If Google swaps one result for another, 95% of users won't even notice.
> Will Google allow for example, Taiwan content in mainland China?
Google doesn't run in mainland China, so this is a bit of a strange question. But let's assume that it did. How would AMP or signed exchanges change the accessibility of Taiwan-related content on Google in mainland China? Are you saying that the current Google results page would show Taiwan-related content, but signed exchanges wouldn't, or what?
> Is it like VPN/Proxy or has more knobs?
Neither. It's literally a way to say "a site had this html, css, and js, and cryptographically signed the blob so that we can re-host it and you can be certain that the person re-hosting hasn't modified it in any way."
Signed Exchanges mean the publisher signs the content using their private key. A third party can provide delivery like a CDN, but they cannot modify the content, or the signature would no longer match. The useragent (browser) enforces this. This gives the secure control of the content back to the publisher, unlike the trust model of CDNs or the AMP Cache.
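The integrity property described above can be sketched in a few lines. This is a toy model: real Signed Exchanges use the publisher's TLS certificate and an asymmetric signature, not the HMAC used here as a stand-in, but the argument is the same: a cache that lacks the signing key cannot alter the content without breaking verification.

```python
import hashlib
import hmac

# Toy stand-in for the publisher's private signing key. In real SXG
# this is an asymmetric key tied to the publisher's certificate.
PUBLISHER_KEY = b"publisher-private-key"  # known only to the publisher

def sign(content: bytes) -> bytes:
    """Publisher signs the exact bytes of the page."""
    return hmac.new(PUBLISHER_KEY, content, hashlib.sha256).digest()

def verify(content: bytes, signature: bytes) -> bool:
    """Browser checks that the re-hosted bytes match the signature."""
    return hmac.compare_digest(sign(content), signature)

original = b"<html>story as published</html>"
sig = sign(original)

# A cache (e.g. Google's) that re-hosts the bytes untouched passes.
assert verify(original, sig)

# Any tampering by the cache invalidates the signature.
assert not verify(b"<html>modified story</html>", sig)
```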
Chrome does enforce the matching signature. Browsers without Signed Exchange support will not likely ever get a signed exchange as they do not advertise support for it in the `Accept` request header.
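The advertising-support mechanism is ordinary HTTP content negotiation: a server only sends a signed exchange if the `Accept` header lists the signed-exchange media type. A minimal sketch (the header value shown for Chrome is illustrative, not an exact capture):

```python
def advertises_sxg(accept_header: str) -> bool:
    """True if the Accept header lists the signed-exchange media type."""
    return any(
        part.strip().split(";")[0].strip() == "application/signed-exchange"
        for part in accept_header.split(",")
    )

# Roughly what an SXG-capable browser sends ("b3" is a version label):
chrome_like = "text/html,application/signed-exchange;v=b3;q=0.9,*/*;q=0.8"
legacy = "text/html,application/xhtml+xml,*/*;q=0.8"

assert advertises_sxg(chrome_like)
assert not advertises_sxg(legacy)  # such a browser gets plain HTML
```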
@freeone3000, that's incorrect, in the case of Signed Exchanges. Chrome will verify the document's signature against the publisher's public certificate. This will be `nytimes.com` for example. It is not using Google's certificate for this verification, and Google does not possess the private key required to modify the content and update the signature.
The actual mechanism by which a signed exchange is implemented is prone to man-in-the-middle attacks that remove the Signature field wholesale. You are not requesting info from nytimes.com; you're requesting info from amp.google.com and trusting that the backing data is accurate. There's no need for a certificate to be presented at ALL! Unless it can be determined that such a header should exist, there's no way to verify its absence.
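The stripping concern can be made concrete with a toy model: unless the client insists that a signature be present, a middlebox that deletes the signature just downgrades the response to an ordinary unsigned page, and nothing fails.

```python
def handle_response(response: dict, require_signature: bool) -> str:
    """Toy client handling a response that may carry a signed exchange.
    Returns 'verified' or 'unverified'; raises if a required signature
    is missing. (Real verification would also check the signature bytes.)"""
    if "signature" not in response:
        if require_signature:
            raise ValueError("signature missing")
        # Lenient client: treats it as a plain, unsigned response.
        return "unverified"
    return "verified"

signed = {"body": "<html>story</html>", "signature": "base64-sig-bytes"}
stripped = {"body": "<html>story</html>"}  # MITM deleted the signature

assert handle_response(signed, require_signature=False) == "verified"
# A lenient client silently accepts the stripped response:
assert handle_response(stripped, require_signature=False) == "unverified"
```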
Right, but this means proposing signed exchanges as a solution to AMP's strategies is kind of nonsense, since it's a semantic problem whether a page is acting as a proxy for another, and a technological solution doesn't work here.
Chrome enforces that the signature being served by google is the same signature as the one being served by google. It's a useless verification. If Google were so inclined, they could very well just change the <link> tag too.
I think we are talking about different things here. You, as an AMP engineer are talking about how Chrome implemented this [1], but I'm talking about how Chrome is not a user agent, because it demonstrably acts as Google's agent, not the user's.
[1] Which is unverifiable, we just have to take your word for it.
Oh well please keep checking for us, since all of us do not have access to Google Chrome source code. Thank you for taking on this responsibility, sure hope you don't get hit by a bus.
>A third party can provide delivery like a CDN, but they cannot modify the content, or the signature would no longer match.
To make sure I understand, does this mean that in principle a third party other than Google can deliver the AMP pages? Is google working to facilitate that AMP hosting is open to everyone and calibrating their searches point to any and all alternative AMP hosters?
> Can a third-party other than Google deliver an AMP page?
Yes. Examples: Bing runs their own AMP cache and also delivers AMP pages. LinkedIn and Twitter also link to AMP pages, but they don't currently run a cache. IIRC, Twitter links to the Google AMP cache and LinkedIn links directly to the AMP variant on the publisher origin. They could run an AMP Cache. Cloudflare ran one for some time, but shut theirs down recently.
> Can a third-party other than Google deliver a Signed-Exchange?
Yes. Cloudflare generates them for their customers who opt-in via their "AMP Real URL" product. "Generates" in this context implies delivering them. To date, I'm unaware of any large scale implementation that is delivering Signed Exchanges for third-party origins other than the Google Cache though this may change. The tech stack absolutely supports this.
Interesting. Are there examples I can search for right now? For instance, are there searches I could do for a news article, that will show an AMP for a CNN article that's on a non-google url? Do you have a ballpark estimate of what percentage of total amps are delivered by non Google domains?
Also, how would you reconcile your comment with that of madeofpalk, who appears to be treating that possibility as a hypothetical idea that hasn't happened, and which would be impracticable due to needing to trust third parties?
> an AMP for a CNN article that's on a non-google url?
All AMP pages exist at non-Google URLs. They are just cached by the link aggregator (typically a search engine), so the link aggregator can prerender them without deanonymizing the user to the publisher until the user clicks the link.
> Do you have a ballpark estimate of what percentage of total amps are delivered by non Google domains?
All of them (100%) are delivered by non-Google domains to Google, Bing, and other caches.
> Also, how would you reconcile your comment with that of madeofpalk, who appears to be treating that possibility as a hypothetical idea that hasn't happened, and which would be impracticable due to needing to trust third parties?
madeofpalk's comment makes perfect sense if you understood what I wrote above. Why should CNN or Bing be told that you have searched for a particular news article on Google before you have clicked it? The page has to be served from the link aggregator the user is browsing to maintain the user's privacy when prerendering results.
You appear to have reinterpreted my questions and translated them into terms that were different than I intended, so I guess I have to take care to go back to the start and restate my original question in a way that uses the appropriate magic words correctly.
I don't intend to ask whether AMPs (hard to resist calling them 'AMP pages') exist somewhere on non-Google servers. Obviously third-party content that Google is presenting exists somewhere off Google. And obviously it has to be formatted in a way that's compatible with AMP, and it makes sense that that is going to be done off Google domains. I at least knew the gist of that already, and I regard the detour into that explanation as a non sequitur. The point is that Google presents AMPs and serves its cached version of them from Google servers, on a Google domain. The beginning, middle, and end of the experience of searching for, finding, and consuming that news never has to involve leaving a Google domain. It's not open in the sense of involving interaction between servers that aren't controlled by Google, until you make that extra click to go from a cached Google version of an AMP to the version that sits on the domain controlled by a third party, at which point going to the third party has been rendered optional and largely unnecessary from the point of view of the user.
This next part is super important: the fact that I'm asking about openness and interoperability, or the lack thereof, in this sense doesn't mean that I'm failing to understand the technical advantages of caching and optimization. I regard those as derails that don't wrestle with the issue of openness that's being raised. The point is that the connection between consumers of content who start on Google and the third-party content provider increasingly depends on Google in a way that shifts nearly the entire experience of consuming content onto Google's infrastructure.
>All of them (100%) are delivered by non-Google domains to Google, Bing, and other caches.
This is the starkest example of a question not being answered but replaced with a different one. I asked 'what percentage of total amps are delivered by non Google domains' and you replied by answering a different question (what percent of non-Google AMPs were delivered TO Google and other caches), noting that it was 100%. Which of course it is, but only because it's a tautology.
By contrast, it is helpful to note that there are caches other than Google, like Bing and 'others', which, in contrast to much of the rest of your comment, I feel actually is a pertinent and fair response to the question I'm actually asking. But those aren't content providers, so unless Bing or Google are content creators that were delivering content to themselves, it's tautologically true that 100% of that is going to be delivered to them by third parties, which has absolutely nothing to do with openness. If I'm using magic words correctly, I guess what I want to ask is what percentage of AMP traffic to cached pages is served to users by Bing and others that aren't Google.
An AMP is a page (expand the acronym). I answered your question very precisely, and your response shows that you still don't understand it.
> If I'm using magic words correctly, I guess what I want to ask is what percentage of AMP traffic to cached pages is served to users by Bing and others that aren't Google.
If I search on Bing, the results will be prerendered from Bing's AMP cache. Reread the GP comment, and see if you can understand why that is so.
How is this reconciled with lern_to_spel's reply to my comment, where they claim this is already implemented and there already are such third parties? Are Google searches right now that use AMP exposing users to risks because they are currently requesting assets from potentially untrustworthy third parties?
DuckDuckGo is a lighthouse in the storm. We are all saved. If Jesus came back, ask yourself this: 'would he be Google or DuckDuckGo?'
Exactly. Search your hearts and use the net with your values as though they’ve become a force to be feared by the swill that is invading our liberties (google).
If you're using DuckDuckGo, presumably a site that doesn't do the "AMP Cache" iframing that Google Search does, then what are you left with, apart from a website that will almost definitely load faster than the 'original'?
A while back on one of the AMP discussions here, someone from Google weighed in. They said the data was clear: users not only accept it, they love AMP.
I asked how they knew that, because if it was, say, just tracking how many people tried the 2-3 tap process to get to the original URL compared to how many people just engaged whatever you showed them, then the data might be showing you something else (and in fact, I wasn't clear how you'd get from accept to love in any other way than a focus group or survey).
No response. Not clear if that's because revealing data would run afoul of internal confidential disclosure, or because this basically hadn't been thought through.
> Apple for refusing to change the URL bar like Google does on Android.
Apple has its own problems with the URL bar -- they keep the domain but drop the rest of the URL. Not as bad as replacing the domain, but not great.
As a user, I will absolutely say I love AMP. AMP pages load incredibly fast with much less bullshit.
The real problem is that AMP isn't necessary. Google created AMP and is encouraging (nearly forcing) it in a very ham-fisted way because web developers couldn't figure out how to make responsive, fast-loading web pages without a huge company like Google spelling it out for them with a framework.
Now, obviously to power users like the typical HN user, AMP is evil because it's just Google taking over more of the web.
But understand that to your typical mobile user, AMP is a godsend because of how fast it is. They don't care that the URL shows Google instead of whatever page they think they're on.
I am also a user who loves AMP, the same way I loved RSS. The reason I love AMP is one of the same reasons I loved RSS. RSS aggregators allowed instant loading of content across multiple sites. AMP enables the same thing but with a richer experience. It doesn't simply spell out how to make web pages fast. It makes them instant, which is impossible without something like AMP or RSS that allows safe prerendering with publisher opt in.
Often, it's not the developers that are at fault. Middle managers, cross-functional people, etc. ask developers to cram garbage into otherwise would-be lean pages. Developers often have no authority to push back, and stakeholders often don't fully understand "why" the garbage they're asking developers to insert into pages is detrimental to the user experience.
With AMP, the little badge (the verification), serves as a constraint for developers, but mostly for business stakeholders. The conversation of "I can't do that, because it's simply not compatible with AMP" is way easier than "I can't do that, because it will make the page slow".
Reminds me of the Office 2013 ribbon bar - users preferred it apparently! They couldn't find any of the options they'd used for 20+ years but somehow preferred it!
Thank goodness menus for Office still existed on the Mac.
> Apple has its own problems with the URL bar -- they keep the domain but drop the rest of the URL. Not as bad as replacing the domain, but not great.
You can view the full URL in Safari for iOS by tapping the URL bar. In Safari for Mac, you can modify your preferences to always show the full URL. Definitely not ideal, but also not anticompetitive in the same vein as AMP.
>Of all the anti-competitive actions google has taken around search results, AMP is by far the worst. I hope they get smacked down for it in the upcoming anti-trust lawsuit.
This reminds me of the EU Internet Explorer lawsuit, which was peanuts compared to what Google is doing right now. Between Google Search, Chrome, Android, Youtube, Gmail, Google Maps, Google Docs, AMP and likely 5 more things I forgot, they've stealth-grabbed so much of the internet, it's not even funny. At least force them to split off their ad/datahoarding businesses.
So how long until AMP is exploited in a wide-ranging phishing campaign? I know that it is getting harder and harder to distinguish phishing emails from legitimate ones, but this move is not helping at all. It clearly helps Google, though.
The annoying thing is, the average user will not notice and/or care.
All it would require is an AMP website that mimics the Google login page. It already says "google.com" at the top of the browser, and we've told users to trust that, so...
The google.com domain would also probably trigger autofill recommendations from some password managers, which would make things even more convincing and seamless.
Maybe they are trying to push their signed exchanges / "AMP Real URL"[1]? Last I understood, Firefox says they won't support it, ever. Apple is less vocal about it, but doesn't appear to be working on it.
That's a bit of an exaggeration. Mozilla's position statement on web packaging[1] says:
> As a whole, and for origin substitution in particular, until more information is available on the effect on the web ecosystem, Mozilla concludes that it would not be good for the web to deploy web packaging.
Personally I'd like to see web packaging adopted without the origin substitution capability (at least to start with), as it would still allow sites to offer web apps comprised of a fixed bundle of code (signed with an offline key), rather than potentially different code each time you visited. That would reduce some of the concerns around serving "secure" apps on the web, as long as browsers had a way of preventing the server from silently updating the web package (perhaps using short-lived unique subdomains/certs).
My concern is they lose the ability to counterbalance the other four of the Big Five, if they are uniquely targeted by an anti-trust lawsuit.
Which is to say, I don't disagree with the momentum toward anti-trust measures being brought against Google, but I want them systematically brought against the entire Big Five, or else the effects may just be further consolidation.
When you say everyone, you mean a tiny minority of developers. The actual users don't care, or even know what it is. If anything, they like it because it loads the page faster.
I don't know if it is just my experience, but AMP sites load slower than HN does. Slower than my own websites. Slower than the websites I usually read.
Sure, AMP sites from the feed appear to "work" instantly, but they fetch all the content in the background, so that's not a fair comparison (on Chrome at least).
I disagree that AMP doesn't bother normal users. It is easily noticeable when you click on a search result and land on a page that says amp.google.com?blah=blahhhh and all that. When, as a normal user, you try to copy that link, you end up with this junk and no straightforward way to find the original link. When you click a link on the AMP page, it forces a full browser refresh. Sites which force people to use apps don't even work with AMP, because these links confuse deep linking big time. I haven't even seen AMP pages be any faster than the actual website itself.
It's because your internet speed is too good to notice. AMP pages are definitely much faster, as anyone in a country with lousy internet connections can tell you. I'm not saying that AMP is good, but claiming it's not faster is just wrong. I also don't see why it wouldn't be: it would be strange if the restrictions on what you can put on AMP pages, together with Google's edge caches, didn't make it faster.
>I hope they get smacked down for it in the upcoming anti-trust lawsuit.
I wonder if the outcome of the lawsuit will be a grab bag of concessions, such as abandoning AMP, and, say, adding the ability to fully delete Google Apps from Android rather than doing a mere factory reset, making opt-out of interest based ads easier, and things of that nature.
Malte tweeted about this 8 hours ago actually: "Just heard from the Image Search team that this is an oversight and they'll add the feature! Sorry about that and thanks for the report!"
https://twitter.com/cramforce/status/1265688067706245120
They already proclaimed their desire to get rid of the URL. I don't think this is in the interest of the user; it's more in the interest of market dominance and information filtering.
I wrote an extension to "unamp" the website of a popular Italian newspaper, and as soon as it reached a thousand users Google banned it and forced Firefox to do the same (shame on you, Firefox).
So I guess worse is better; we are supposed to accept it, or else they'll police us into accepting it.
The extension was very simple, it just displayed the content for pages protected by a paywall by changing the display attribute.
To be clear, they use AMP to serve pages in the protected area, but the protected content is served in the body of the page, inside an AMP-APP tag, simply hidden.
The code is a few lines long; it could have been one line, but it has to unwrap the shadow DOM.
I made it for myself, but then friends started asking for it, so I published it on Firefox's extension store and later on Chrome's.
It's been there for a couple of years, nobody complained, I even emailed the newspaper several times to warn them about the "bug" but they never replied back.
It has never been very popular, especially outside Italy; it had around a thousand installations and about 900 daily users. But in March this year I received a copyright infringement notice from Google, even though Google has no rights to the newspaper's content.
They removed the extension right away and banned me from re-uploading it.
I tried to reach their support many times, never heard back from them.
A few days later Firefox removed the extension as well saying Google contacted them about the copyright infringement.
As of today, after at least a dozen emails, I haven't been able to speak to a human being.
I don't mind much; I still use it on my systems, and it still works even on Firefox mobile (for now).
I tried to upload it under a different name, but apparently I'm forbidden from publishing it again.
Weird detail: the day after Firefox removed it, I was contacted at the email address I used to register on Firefox's dev site, asking if I wanted to sell the extension.
The URL bar is helpful in fighting phishing attacks. It's one of the things everyone at my company, including the non-technical people, is asked to check before entering information.
Also, techie and non-techie people have been using the same browsers for how many decades now? Aren't the majority of people in many countries on the internet? It seems like the average non-techie user has succeeded in figuring things out so far.
> The URL bar is helpful to fighting phishing attacks
It would have been nice if client side TLS certificates were more popular. Then your browser could warn you that you're not using your client cert when connecting to a certain website and not allow you to complete the connection. That would be a better solution as opposed to relying on users to manually check the URL.
It doesn't. The way client certs are useful is that a website that only uses client certs for authenticating clients removes the ability for the client's credentials to be leaked.
Even if the user uses the client cert with a phishing site, the phishing site doesn't have the ability to impersonate the user against the real site because the private key is still on the client's device.
In addition, if a browser is configured to automatically use a client cert for all requests to a particular domain, then even that leak doesn't happen because the browser would automatically not use the cert with the phishing domain.
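The origin-pinning behavior described here amounts to a simple policy lookup. A minimal sketch, assuming a hypothetical browser-side store that pins each client certificate to exactly one origin (the domain names and cert labels are illustrative):

```python
from typing import Optional

# Hypothetical browser-side policy: each client certificate is pinned
# to exactly one origin and is never offered anywhere else.
CERT_STORE = {
    "news.ycombinator.com": "client-cert-for-hn",
}

def cert_for(origin: str) -> Optional[str]:
    """Return the pinned client cert for this origin, or None."""
    return CERT_STORE.get(origin)

# The real site is offered the certificate...
assert cert_for("news.ycombinator.com") == "client-cert-for-hn"
# ...while a look-alike phishing domain gets nothing, so the browser
# can warn that no client cert applies to this site.
assert cert_for("news-ycombinator.phish.example") is None
```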
~~It wouldn't work because the phishing site would lack the private key needed to validate your client-side certificate and the TLS connection would not be established. This is assuming that the legitimate website itself signed your certificate signing request (CSR) to create the client-side certificate.~~
Edit:
What I posted above is not correct. What I should have said was that the server would validate the client certificate by checking a certificate authority (either managed by the server itself or a 3rd party).
>It wouldn't work because the phishing site would lack the private key needed to validate your client-side certificate and the TLS connection would not be established.
That is not how TLS works. A server can trust a client based on the certificate the client presents. A client can't distrust a server based on the certificate the client presents.
"The phishing site would lack the private key needed to validate your client-side certificate" is nonsense. Neither the real site nor the phishing site have the private key used to generate the CSR; only the client that sent the CSR has that, and validating a cert does not involve the private key in any way. If you're thinking of a new protocol where the server itself generates an arbitrary asymmetric keypair and shares it with the client, then a) that's not TLS, b) that could just as well be done with a symmetric key (since this is just pre-shared key auth) where the server presents a nonce to the client to sign and verifies the client signed it, and c) a fake server can just not do that.
> That is not how TLS works. A server can trust a client based on the certificate the client presents. A client can't distrust a server based on the certificate the client presents.
You're correct; I posted inaccurate information.
> validating a cert does not involve the private key in any way.
What I should have said was that the server would validate the client cert by checking whether the certificate is valid according to the authority that signed it (which could be the server serving as a CA itself or a third party CA).
As for the original question, I guess it's possible for a phishing website to not bother validating the client certificate presented at all and allow the TLS negotiation to succeed.
If there was something that could instruct a browser to only send a given client certificate if it only receives a certain server certificate, then it would be much harder for a phishing website to work, because the browser would not send the client certificate to the wrong server.
Not to pile on, but I think you're imagining some kind of TLS framework that simply doesn't exist currently. It's not clear if you're misunderstanding what exists now, or you're alluding to a different possibility without clearly articulating it.
> I guess it's possible for a phishing website to not bother validating the client certificate presented at all
Why would a phishing site do anything to discourage a connection from a potential victim? Of course a phishing site would accept an invalid or missing certificate! Even if the site was impersonating something like amazon.com, Amazon hasn't issued client certificates to all of its users so the whole point is moot.
> I think you're imagining some kind of TLS framework that simply doesn't exist currently. It's not clear if you're misunderstanding what exists now, or you're alluding to a different possibility without clearly articulating it.
It's possible that I am misunderstanding it, but it appears that the point of contention is what the server will do when it receives a certificate from the client.
Ideally, it would check if it's valid by checking it against a CA. So, if someone who manages the server signed the CSR, then the server can validate the certificate with a CA that it manages. If it uses a 3rd party CA, then it would validate it using that.
What I'm not sure about is whether a browser can choose which client-side certificate to present to a server based on the server-side certificate presented to the client. If it could, then it would be easy to determine whether one has connected to the correct server, since the browser wouldn't try to present the client-side TLS cert to the wrong server.
> Even if the site was impersonating something like amazon.com, Amazon hasn't issued client certificates to all of its users so the whole point is moot.
Which was the point of my original post. If we had worked on making the process of generating and using client-side certificates more user friendly, then companies would have done so as part of the account creation process (meaning people would use their client cert in addition to their username and password as part of the authentication process).
What we have now is major companies like Amazon using SMS based 2FA that would easily be compromised by re-routing the verification code message to another device since that factor is not under my control, but at the mercy of the phone company.
A private key is not required to validate certs (private keys are for generating certs). A cert chain back to a trusted root cert is required for validation. A phishing site would just trust all certs.
I realize that now. One way to mitigate it would be to have the browser somehow tie a given client certificate with a particular website. That is, the client cert for news.ycombinator.com, would only be presented if I try to connect with that server and nothing else.
That way, if I go to a phishing website that pretended to be Hacker News, my client certificate would not be sent and my browser could warn me by saying that the connection is not using a client certificate. Right now, if we only rely on server side certificates, there's nothing stopping a phishing website from using Let's Encrypt to show the secure connection icon in the URL bar and tricking me into thinking it's a legitimate server.
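The per-site binding described above could, in principle, be approximated on the client side. Here is a minimal Python sketch of the idea, assuming a hypothetical mapping from hostname to client-cert files; the hostnames and file paths are placeholders, not a real deployment:

```python
import socket
import ssl

# Hypothetical per-site mapping: a client cert is only ever
# presented to the hostname it was issued for, which is the
# binding the comment above describes. Paths are placeholders.
CLIENT_CERTS = {
    "news.ycombinator.com": ("hn-client.pem", "hn-client.key"),
}

def connect(hostname, port=443):
    # create_default_context() verifies the server certificate
    # and hostname against the system trust store.
    ctx = ssl.create_default_context()
    cert = CLIENT_CERTS.get(hostname)
    if cert:
        # Only attach the client cert when the target host matches;
        # a phishing domain gets a plain connection with no client cert.
        ctx.load_cert_chain(certfile=cert[0], keyfile=cert[1])
    sock = socket.create_connection((hostname, port))
    return ctx.wrap_socket(sock, server_hostname=hostname)
```

A browser doing this natively would still need to warn the user when no client cert was attached, since the absence of the cert is the phishing signal here.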
Presumably you know that non-technical users, IME, use Google instead of the URL bar. And if Bing is the default search engine, they search for Google first, then enter the website name in Google, then click on the first result (without checking the URL).
This is the primary way I've seen "non tech" users navigate to a website. Some will use the address bar, but they're in the minority.
No matter how much I berate my family they still all do this, so I'm no longer surprised to see it when visiting a client (though I don't do web design/training since last year).
Lots of people were also convinced Google wouldn’t go down that route and would be dismissive of criticism.
I feel like when a company grows to a certain size, we have to drop the “assume good faith” outlook we give to small businesses and individuals and take on “assume bad faith” instead.
As committees dilute responsibility, usually via anonymity (“it is not me, it is the majority”), you should never attribute personal (human) traits to them. Their members deny responsibility: then you must assume the committee is irresponsible (i.e. has no human intent).
So: the committee only wants what the papers say. In this case: benefits.
This always reminds me of the passage about the bank in Grapes of Wrath.
> Some of the owner men were kind because they hated what they had to do, and some of them were angry because they hated to be cruel, and some of them were cold because they had long ago found that one could not be an owner unless one were cold. And all of them were caught in something larger than themselves. Some of them hated the mathematics that drove them, and some were afraid, and some worshiped the mathematics because it provided a refuge from thought and from feeling. If a bank or a finance company owned the land, the owner man said, The Bank- or the Company- needs- wants- insists- must have- as though the Bank or the Company were a monster, with thought and feeling, which had ensnared them. These last would take no responsibility for the banks or the companies because they were men and slaves, while the banks were machines and masters all at the same time. Some of the owner men were a little proud to be slaves to such cold and powerful masters. The owner men sat in the cars and explained. You know the land is poor. You've scrabbled at it long enough, God knows.
The worst thing I've seen recently is amp URLs for reddit threads. It's one bad thing (new reddit UI) wrapped in a worse thing (AMP), and getting back to classic reddit takes a lot of gymnastics. The stupid part is that the amp page is indistinguishable from the (new) reddit page (the AMP page comes complete with the "download our app" popup). So I don't see how it's providing any speed/experience benefit.
Exactly. For Reddit the AMP version and the mobile site look almost exactly the same... except the AMP site performs worse due to the AMP restrictions.
So none of the "nice" Reddit mobile site features actually work, and supposed "one-time" annoyances like the "Download the App" pop-up don't get cookied under AMP and continue to annoy on every google search. Insane!
I can never get the Open In Reddit app button to work. I’m on iOS and have the latest version of the official reddit app. Which btw is painfully filled with ads.
May I recommend using any 3rd party app for reddit instead of the official app. I use relay on Android and it is great. There are many other options, all of them are better than the official app.
Unfortunately no. But there's an "Open in Apollo" option from the share sheet which works for me. It's just an extra tap to open the share sheet from the AMP page.
It never works for me either, but there is a workaround. If you click on the 'x comments' link at the bottom of a post, it will open up in reddit is fun.
"The worst thing I've seen recently is amp URLs for reddit threads."
How can I see this very specific example ? I would like to understand exactly what this looks like ...
I am not a reddit user and I don't consume much web content on a phone, which is using Safari on an iPhone ...
Would I need to download google chrome onto my iphone, then do a google search for a reddit thread, then click on that search result ? Or would I see this result in Safari as well ?
Genuinely curious as I would like to recreate this specific result ...
This page is simultaneously encased by AMP, has multiple ways reddit is trying to get me to install an app that isn't going to help me right now as I am hopping between websites--I am going to glance at reddit for ten seconds and then a stackoverflow question and then a bugzilla issue and then a quora thread... the last thing I want right now is to end up in some app--but it also doesn't show me all the comments and is asking me to click through to get them... it used to be I clicked a search result and it showed me the reddit thread, with all the comments and without an app: I liked that :(.
The maddening part is you will see these links when using Firefox or Safari, and you'll see them on a desktop. It is, of course, not just a Reddit thing. I've seen LinkedIn shares look like this.
I agree that AMP is a major attack on the internet, not to mention the fact that it makes it way slower [1] and hard to understand what you're browsing. The worst part is that my friends send me AMP links all the time even though I use DDG to avoid this stuff.
They can package the ads in band with the content, although DOM based blocking might still work—at some point we’re going to have to write an ad detection AI just to use the internet.
The long term solution is stigmatizing ads—you can never ad block someone in the ear of a newspaper editor.
It slows down doing anything that involves it. I end up landing on some jank AMP page, and have to then navigate to the top, and pop up the original link, click it, then wait for the real site to load. Total UX fail
If you want to mitm yourself, nobody takes it away from you. Create your own CA, add it to the trusted list, setup your proxy with the keys, and mitm all you want.
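For anyone curious what the first step of that looks like, here is a sketch that shells out to openssl (assumed to be installed) to generate a private CA; the subject name and file names are placeholders, and adding the cert to the OS trust store and wiring it into a proxy are left as comments:

```python
import os
import subprocess

def make_private_ca(workdir):
    """Generate a throwaway CA key pair for self-MITM experiments."""
    key = os.path.join(workdir, "myca.key")
    crt = os.path.join(workdir, "myca.crt")
    # Private key for the CA
    subprocess.run(["openssl", "genrsa", "-out", key, "2048"], check=True)
    # Self-signed CA certificate, valid for one year
    subprocess.run(
        ["openssl", "req", "-x509", "-new", "-key", key, "-sha256",
         "-days", "365", "-subj", "/CN=My Local MITM CA", "-out", crt],
        check=True,
    )
    # Next steps (manual): add crt to the OS trust store, e.g. on Debian:
    #   cp myca.crt /usr/local/share/ca-certificates/ && update-ca-certificates
    # then hand the key pair to an intercepting proxy of your choice.
    return key, crt
```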
You do get the benefit of https even on static sites though. Do you want every network you join to be able to inject any JS they want into pages you're viewing? Https solves that.
I disagree. There's no reason to leak things to everyone on the planet, even if what's leaked isn't the most damaging thing possible to leak. As an example, it annoys me that the Texas Instruments site isn't encrypted, leaking my interest in parts to anybody listening.
Even if Texas Instruments implemented SSL the fact that you went there would not be a secret to anyone who can see your packets due to SNI[0]. HTTPS is really only useful when you want to hide the contents of a message, not the recipient.
I don't understand this point of view. If you don't want to leak your info to the world, you are either using a VPN or Tor, or you are leaking already.
If you are not then sure browsing in an internet cafe or an unsafe network will allow rogue entities to see your interest in parts.
Your browser is fingerprinting you on Chrome with an ID. You are being fingerprinted with your unique fonts on other browsers. If you have JavaScript on, that opens the floodgates. Logged into Facebook still? Browser extension gone rogue? Android OS?
I don't think they meant leaking info to TI; they meant leaking more info to ISPs than necessary. HTTP connections are like a postcard: anyone en route can read it. At least with HTTPS they have to jump through more hoops.
This gets trotted out a lot, but who is "everyone"? At worst it's a bunch of random people in the cafe whose WiFi you're using - but these people don't have the resources to track your activity once you leave the cafe. Otherwise it's just the same rogue's gallery of large corporations interested in adtech/surveillance money: ISPs, device makers, other online service providers. The thing is, none of them have the reach, data collection, and analytics capability of Google. And Google almost certainly gets all this information too, whether you use HTTPS or not (see reCaptcha, Google Analytics).
To me, this rationale looks an awful lot like a moat to stifle Google's competition. If collecting "the urls you're browsing" is wrong, why is it ok for Google to do it? And if it's not wrong, why is it somehow better that only Google gets to do it?
> At worst it's a bunch of random people in the cafe whose WiFi you're using - but these people don't have the resources to track your activity once you leave the cafe.
Depending on what you're doing, one-time collection may be enough.
Also, many captive portals are provided to businesses by companies whose own business interest is in tracking people, and they'll absolutely correlate the data.
Rather than having to worry about whether the service you're getting internet access from will track you, make it impossible for them to do so.
Isn't this just imparting a false sense of security? The one party who I'm most worried about getting my data, Google, will still get it.
I think you've still failed to answer my basic point - how is this not just a competitive moat that benefits Google? If we care about privacy and data collection, legislation is required because Google and Facebook have no reservations about sucking up everything they can. If it's ok for them to do it, why not $RANDOM_CANADIAN_ISP?
If you use HTTPS, you know you're talking to the site you think you're talking to. If that site itself is sharing data in a way you don't want, including by pulling in third-party scripts, you have a problem with the site. That's not an argument against HTTPS; communicating in cleartext doesn't solve that problem, it just means that other people the site doesn't trust can also access that data.
Let's not let the perfect be the enemy of the good here. Universal HTTPS is an improvement.
Especially when you can just sit behind Cloudflare and get it for free with very little work on your part.
Granted, you then have to trust cloudflare; but it seems like they have been good actors so far considering their privileged position delivering tons of content across the web.
Wikipedia says that Amp first started appearing in search results in February of 2016. Some random websites tell me the average Google product death happens about 4 years after it launches (not counting anything that hasn't been killed at all and with some huge error bars). So we should expect AMP to be abandoned sometime between now and never.
Me: When can we expect to see a fix? As of right now, there’s no easy way to visit the original URL.
Malte Ubl: I don't have a timeline. The share menu should help getting the underlying URL!
Me: I’m sure I don’t need to explain how this looks; regardless of your intentions, it comes across as a fairly hollow response. The issue gets a lot of negative attention on HN, someone from Google responds that it was just an accident and will be fixed, but there’s no fix.
Malte Ubl: Sorry, no way to do it in fewer than a couple days.
I agree with everything you're saying and every single point you're trying to make here, but, as someone who's been on the other end, your responses are exactly why people hate posting about this stuff publicly. He doesn't have the authority to speak on it (alone), but you trapped him into accidentally saying something he wasn't trying to say (in fact he only said it to be nice and reply to your question, which he had no obligation to even acknowledge). There was nothing he could have said that would have made you (or me or any of us on HN) happy, no information he was allowed to give out, but of his own volition he chose to respond to and validate your questions, and you reacted by saying "Aha! I got you now!"
That said, it seems likely to me that if this somehow stopped advertising revenue, it would have been fixed before HN even noticed. "No way to fix it for at least a few days" seems very untrue; that department just isn't led by someone who cares enough about this issue to make it happen. It's not the tweeter's fault, though.
Great points - also, reading between the lines of his tweets, it sounds like the bug was pushed by a team he doesn't lead, so the best he can do is ask them "Please fix this, people are really pissed (at my team) because of this". But to the Image Search team, it's like, why should they do a costly rollback? It's easier for them to just sit back and say nothing, and let AMP soak up the blame on HN. Then they can fix the bug on their normal schedule, without their deadlines being affected.
> as someone who's been on the other end, your responses are exactly why people hate posting about this stuff publicly
I’m well aware. But this is a pretty serious problem, and we, as Google’s users, have very few options for effecting change. If I’m given an opportunity to make a point, I’m going to seize it.
He thinks it can be done in a couple days. Does the rest of the company care enough to make that happen? Let’s find out.
If you sabotage a competitor so that their market share is irrelevant then you don’t have to pay them as large an amount of cash to make sure their users keep coming your way.
I got a new computer (Mac Mini) last week and, as the ritual usually goes, I opened Safari and was about to type “google.com/chrome”, then stopped myself and went to “getfirefox.com” instead. It’s been a while since I gave FF a serious look, and so far I haven’t felt like I’m missing anything.
Next up, I need to change my default search engine to DDG.
What’s a good privacy-oriented, web-based email service? I’ve heard of ProtonMail but haven’t used it.
I migrated from Gmail to Fastmail a couple years ago and I have been happy with it. Migrating existing emails from Gmail into Fastmail was easy, as was setting up forwarding from Gmail to Fastmail for all future emails. The hardest part was just updating accounts across the web to use the new email. Since I'm using a custom domain with Fastmail, I can change account emails to things like foo.com@mydomain.com or bar.net@mydomain.com.
It's also easy to configure a Fastmail account to fetch messages from a Gmail inbox and even send messages via Gmail, for a seamless transition while updating accounts.
Wow, that's a neat feature. I've seen a lot of Fastmail recommendations on HN but none that mentioned this feature. Thanks for bringing it up. I'm probably far from the only person around these parts that wants to migrate off of Gmail but just hasn't made it a priority yet, and this tip certainly helps.
I switched to Outlook email. I've been considering changing to Fastmail (considered a few others too), but this is the reason i thought emails are hopeless now. Most people I know are on Gmail.
>What’s a good privacy-oriented, web-based email service? I’ve heard of ProtonMail but haven’t used it.
Been Firefox user out of conviction for 10+ years, made the switch to DuckDuckGo as my main search engine with about a year ago, no regrets. But Gmail? It's sticky.
I have so much stuff tied to my email account and it just sucks looking for alternatives. I'd even happily pay. The main difficulty is finding a service that's as mature, feature-wise, as Gmail and a sensible way to migrate all my stuff (email-logins for major websites, informing clients of the new address, etc).
I think the best compromise is using two accounts for a couple of years, heavily focusing on the non-Gmail one and dropping Gmail once you had no interaction with it for a while. But all the obvious alternatives people post don't quite cut it for me, I've been searching for ages.
I've been "meaning" to switch off of Gmail for years, with all of the issues with Google. I switched to Firefox/Safari a couple years ago, then DDG a year ago..
Finally, a week ago, I switched my email to fastmail. Bought a new domain for 10 years explicitly for that purpose.
I'm slowly migrating everything to them... Every day or two I think of something else that needs to be migrated, and do it. I don't feel a need to rush because I just have them both simultaneously.
I have to say though, Fastmail feels better than Gmail in every way. The UI is fast and snappy, looks clean (with a few themes), includes a calendar and notes. The first month is free, so I figured I'd try it before I decided to go all in. After about 20 minutes I was sold and bought a year subscription...
They have this really cool feature with subdomain forwarding so you can organize things into folders automatically. As I've been migrating emails, I've been giving each service its own unique email address. E.g. `amazon@stores.domain.com` will automatically add the email to my "stores" folder, and I'll see it came from Amazon. I can put unlimited anything before the @ sign, which will be really nice for signing up for one-off forums. If they sell my email address or send spam, I'll know exactly who did it and can just block that address.
They also have a bunch of other features for organization like the "+ addressing" that Gmail has. I definitely recommend checking them out though. I feel so good having an email that won't get shut down
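The routing rule behind that subdomain addressing is simple enough to sketch. This toy function (the base domain is a placeholder, and this is a guess at the behavior rather than Fastmail's actual implementation) files anything sent to `<folder>.domain.com` into `<folder>`:

```python
def route(address, base_domain="domain.com"):
    """Map an incoming address to (folder, sender-tag)."""
    local, _, domain = address.partition("@")
    suffix = "." + base_domain
    if domain.endswith(suffix):
        # e.g. "stores.domain.com" -> folder "stores"
        folder = domain[: -len(suffix)]
        return folder, local
    # Plain addresses at the base domain land in the inbox.
    return "inbox", local

# route("amazon@stores.domain.com") -> ("stores", "amazon")
# route("someone@domain.com")       -> ("inbox", "someone")
```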
The good thing is you never need to drop your Gmail. I've used Fastmail for about 4 years and still get all the mail from all the private accounts I've ever had. And they have excellent guides for migration, or so I recall (it's been a while).
get your own domain name. then you can move between webmail providers quite simply: just change the mx records and you can shift between most responsible providers. it incurs a small cost.
i setup a gmail redirect when i switched to fastmail, that was more than a decade ago. it may or may not still be an option.
Maybe you can setup your new service to pull email from your Gmail account via POP3 or something, so you still only need to login to one account. Then you can gradually move further and further away from the Gmail account.
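That kind of POP3 pull is straightforward to script as well. A sketch using Python's stdlib `poplib` (the credentials are placeholders, and Gmail requires POP access to be enabled plus an app password for this to work):

```python
import poplib

def fetch_gmail(user, app_password):
    """Drain messages from a Gmail inbox over POP3-over-SSL."""
    conn = poplib.POP3_SSL("pop.gmail.com", 995)
    conn.user(user)
    conn.pass_(app_password)
    count, _size = conn.stat()
    messages = []
    for i in range(1, count + 1):
        # retr() returns (response, [raw lines], octet count)
        _, lines, _ = conn.retr(i)
        messages.append(b"\r\n".join(lines))
    conn.quit()
    return messages
```

A new provider's "fetch from another account" feature does essentially this on a schedule, so you keep a single login while the old account winds down.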
Fastmail, protonmail, migadu to name a few. Personally I use migadu because the pricing is perfect for my needs but I've used ProtonMail a bit and been pretty happy with it as well.
You probably get an email address with your ISP as well. The downside is switching ISPs when you use their domain, but if you're happy with them and don't plan on switching, it might be an option.
At work we use Runbox (Norwegian if I'm not mistaken), works well and iirc the pricing is affordable also for individuals.
A new ISP in the Netherlands, set up in response to XS4ALL being killed by the parent company, offers email hosting with a custom domain for 50 euros a year (domain included I think). I personally have high expectations there. For more context on what XS4ALL did and stood for aside from privacy, see https://hn.algolia.com/?q=XS4ALL
Self hosting is also pretty easy if you don't mind getting your hands dirty for an evening or two to set it up. (For me it has been hands-off most of the time, aside from upgrading to new hardware every few years.)
The only issue there is developers not being able to keep up with the rapid pace at which Safari is still strengthening privacy protection even in the middle of a pandemic, as far as I can tell.
Blocking third party cookies by default and requiring requests for storage access all sounds great.
I've been happily using Protonmail for the past 2 years. It's not going to have all the bells and whistles of Gmail, but I'm a minimalist so it's perfect for me.
I'm still against AMP on a conceptual ground, but it is so much faster and more reliable than traditional page loading on my phone. Pages can't have megabytes of JavaScript and huge images. It's served by Google's fast CDN instead of some far-away server.
Use Firefox or Brave; that's how I solved this problem. I'm only waiting for a proper alternative to Android. I'm so tired of Google and its shallow way of doing things...
I just switched to Android because iOS is so incredibly buggy. Android is pretty buggy too, but not nearly as bad. I'm afraid we're currently living in dark times for software :(
As someone who switched from Android to iOS and even dual carried for a period of time, I found Android light years more buggy AND janky than iOS. Google still remains years behind there, and I don't see that changing.
Interesting. I wonder if usage patterns are a factor here. I also dual carried android/iphone (actually, I still do). I honestly consider the iOS of today the buggiest software I've ever used.
I find that MobileMail is horribly buggy, but there are rarely any bugs in my usage path (Gmail and other Google apps, Safari, Messages, Notes, Camera and Photos, Snapchat, Apollo, GitHub, Apple Music, my bank app, and a few others). What apps do you use most and what's so buggy?
I don't really use too many apps overall. Most of my bugs are in the core OS itself and Safari.
For example, I get hit with this one pretty often (blank page in Safari): https://discussions.apple.com/thread/250740002
Going to reader view and back sometimes fixes it. Force quitting Safari always fixes it.
Fairly often Safari will stop accepting input. Force quitting fixes it. Also often after a pinch zoom, it snaps right back to 100%. This happens on Reddit and HN very often.
Airdrop works maybe ... 5% of the time? Just now I tried to airdrop an image to my wife. My phone said "waiting...", and nothing happened on her phone at all. Tried several times. Nothing. Airdropping to a Mac I think I've gotten to work once or twice.
I email myself URLs from Safari. But about half the time I have to email myself, wait a few seconds, go into Mail's outbox, and then send the email from there. Otherwise it will just sit in the outbox indefinitely. This is actually true of all emails, but I use the Gmail app now for normal emailing. Added fun, sometimes the outbox doesn't show up until you force quit Mail.
Lots of annoying lack of polish issues. For example, expand an image in messages to be full screen, then return to the thread. Often you get a blank screen, because it has scrolled itself down beyond the messages by about a screen's worth. Sure, just scroll back up to fix it, but I expect a better experience from such an expensive phone.
My previous address (I just moved a week ago) has not been added to Apple maps despite the complex existing for about 2 years now. When you enter the address you get a different address about 10 miles away. Sure, not a bug but a data issue, but all the same from the user's perspective. This caused all kinds of pain. So many people use Apple Maps because they just use the maps app that came with their phone. I got situations like this, https://i.imgur.com/v13VYZM.png, all the time. Package deliveries, appointments at my house, you name it, it was such a mess. I contacted Apple support, they assigned me a support representative. Over Facetime I showed him my address not being in Apple maps and how it is in Google maps. After 2 phone calls with him and many emails, I finally just gave up and accepted it.
These are the ones I can think of quickly. I've also had tons of issues with carplay and many, many third party apps. But it's hard to know who is to blame for these issues.
Over on Android, the only real issue I've encountered is part of the phone understands I have work and personal profiles, and other parts think I don't have a work profile and want me to set one up. Admittedly, this is a pretty annoying bug that does cause some headaches, but it's really the only thing I've hit. Chrome, Gmail, pretty much everything else has been just fine for me.
I just had a discussion with someone on Lobsters about the actual cost of a lower-end iPhone versus a similarly-priced Android phone and the iPhone appeared cheaper:
> The Pixel 4 is $799 and the Pixel 3a is $399 (although it may be on sale right now depending on your region). The iPhone 11 is $699 and the iPhone SE (2nd generation) is $399. Both provide additional discounts if you trade in your current phone (and since iPhones have a lot more resale value, you can get much more for your trade-in). Google provides three years of security updates, starting from the time that the phone is released. Apple provides four or five years of security (and feature!) updates. You can get the same amount of usable life out of a brand-new Pixel as a one- or two-year-old iPhone, which puts the per-year price strongly in Apple's favor. Other Android devices, like those made by Samsung, are usually even more expensive and have fewer years of guaranteed security updates. Apple even backports extremely high-severity security patches and major bug fixes (like the GPS rollover patch) to devices that would be considered "obsolete" by Android manufacturers.
So the iPhone might be a higher upfront cost, but it's a significantly lower per-year cost, especially if you get last year's model or the SE.
Maybe I've been really unlucky, but I haven't seen an iPhone realistically surviving more than 3 years with real usage. Buttons dying and the battery barely surviving a day was pretty common. I know there will be survivor examples out there, but without knowing the average it's hard to compare them.
> Other Android devices, like those made by Samsung, are usually even more expensive and have fewer years of guaranteed security updates
That doesn't seem accurate. I'm using a Samsung, 3.5 years old, and it works fine; was still getting security updates until April of this year, over 4 years after release.
Apparently they guarantee three years on the flagship Galaxy S/Note models and there’s no similar guarantee on any of the cheaper models. It sounds like they’ve gotten better at it with the flagships than the last time I used a Samsung (back in the S4 days).
Pixel and Samsung were intended to compete with the iPhone, so the cost is going to be somewhat similar. Also, these go on huge sales every year, whereas the iPhone never does. If you buy these at full price, you're doing it wrong. If you're really concerned about cost, you can get a new Motorola for about $200, and there are even cheaper options out there.
Maybe, but I don't spend money on high end phones, I don't really see the point if you are not going to play games. I have an LG G4 and honestly, don't really need more.
Are you running LineageOS on it? How are you getting security patches? I wouldn't be comfortable having the device with my most sensitive information on it running an out-of-date operating system, but maybe that's just me.
I never said I was unable to load it, I said that the AMP site is noticeably faster. The problem is my currently slow network, not the phone. My phone processor gets the same single-core benchmark speeds as my desktop processor, so that's not the bottleneck.
OMG! Why would Mozilla kill extensions on mobile (Android)? It would make mobile unusable for me. In fact, that's the reason I barely browse the internet on my iPad: Firefox never had add-ons on iOS due to Apple restrictions. I hope that's just FUD!
It's more nuanced than that. Iirc they're rewriting the way extensions are handled internally on Firefox for Android, making only certain extensions available initially. Feel free to look more into it for more precise information, this is all public.
"This update will initially include support for one of the most popular extensions on Android, uBlock Origin. Additional extensions will be supported in subsequent releases so you can customize and expand your mobile browsing experience even more."
Why has Moz turned into this almost fascist company all of a sudden? Like, what? Killing all but whitelisted extensions: which users asked for that?
Let me guess, there'll be an accidental reset of people's config to enable them to be auto-updated, and then their extensions will stop working... I wouldn't put it past them to have already secured control over uBlock, and we'll have unblockable ads before you know it.
IIRC, this is happening because of the complete internal rewrite - the relevant APIs to expose to extensions simply don't exist yet, and they're implementing them to get the most-wanted extensions available first.
They could use better messaging on this so people who are on firefox precisely for this reason don't think that something has fundamentally changed.
A statement along the lines of "We're working with the most popular Extension creators to get up to speed on the mobile extension API" (which could mean anything) would have been better.
Would like to add a little context to this for anyone who isn't familiar-
For all the intelligence and engineering prowess of Germans, their Internet infrastructure is severely lacking. Most places I'm aware of have poor connections to begin with but also still meter every kilobyte that you download, so Germans are really far more conscious of payload sizes than most Westerners are, where internet service is just a flat fee.
It's really strange when so-called "second-world" countries like Bulgaria and Romania have far better Internet than Germany
The AMP site is unusable far more often than not. My phone has had a 100 Mbps connection for years, yet opening webpages is slow because I need to manually mess with the page so that I can get a working webpage.
I'll take +250 ms load time over fiddling with the page for 5 seconds and then having to reload it anyway.
AMP might be faster if you browse the net with javascript enabled by default and with no content blockers enabled. But if you don't do those things, AMP is a clear net-negative.
I did the same, and am slowly switching over other computers too as I remember. The quality of search result is high enough that I can't tell the difference.
Search engines are too important to leave in the hands of one company. Especially a company that, though it was founded as a search engine company, has that search engine as only a tiny part of its people-tracking machine.
DDG has the ability to save your entire configuration setup ("open links in new window", etc.) in a common URL string - after getting it set up the way you like, go here: https://start.duckduckgo.com/settings
On the right is "Show Bookmarklet & Settings", click to open and you'll see your URL to bookmark. As I use standard Firefox/Chrome sync for bookmarks, I just had to save it once in either browser and it shows up on all devices exactly how I like/want.
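Since DDG carries settings as plain URL query parameters, a bookmarked search URL can be assembled programmatically too. A small sketch; `kp` is DDG's safe-search parameter, but treat any other parameter names as assumptions and check them against your own saved settings URL:

```python
from urllib.parse import urlencode

def ddg_url(query, settings=None):
    """Build a DuckDuckGo search URL carrying settings parameters."""
    params = {"q": query}
    # Settings ride along as extra query parameters, e.g. kp=-1
    # to turn safe search off.
    params.update(settings or {})
    return "https://duckduckgo.com/?" + urlencode(params)

# ddg_url("hacker news", {"kp": "-1"})
# -> "https://duckduckgo.com/?q=hacker+news&kp=-1"
```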
Same. And it's been maddening, because while DDG is fine and sometimes better for desktop work (code questions, stack overflow, professional stuff), I find it far inferior for mobile stuff (getting around, local recommendations, shopping, etc.).
But if that's what it takes to get the real web, so be it.
There's definitely a psychological element to it. I find I just don't trust search results with pictures next to them as much as I trust plaintext search results. The years and years of simple google results has trained me to be suspicious of noisier search result pages.
Ditto, for all the good reasons to use DDG, AMP is what retrained me into searching DDG first. Each time I landed on an AMP page it was a reminder to change my default search engine on that browser/device.
Idk, I really appreciate the lack of cookie popups and subscribe-to-this-and-that popups that can occupy almost the entire screen before being dismissed on mobile.
I switched all my default search to DuckDuckGo a few months ago because of stuff like this. No regrets here. If you haven't tried it recently, give it a shot. It's gotten tons better in the past few years. Once you get used to it, it's as good as Google search.
I've been using DDG as my default search engine for more or less a year now, and tbh I find myself using the !g bang way more often than I'd like. Basically anytime I'm searching for something dev related, or anything that's broad enough that without my personal user info it would be hard to return the results I'm looking for. Which is kind of the crux of the issue, I believe. For everything that's wrong with Google as a company in terms of privacy-related issues, the truth, at least for me, is that many times (not all) it ends up being... quite convenient, I guess.
YMMV though, maybe I need to step up my searching game to obtain better results using DDG. That's definitely something I should work on.
Replace !g with !s to get Startpage results, which is basically proxied Google results. Although I would advise just not relying on Google at all. The perception is that it's better, but in recent years I've found Google results to be total trash.
Yeah, definitely, I'll try to replace !g with !s in the future. It is my intention to replace whatever google services I can. But admittedly, since this change is mostly motivated by personal ethics and a desire not to give away my private info (as opposed to issues with the platforms themselves, in terms of functionality), it is a bit rough to adjust to some changes.
I'm surprised I've never heard of this. The results seem pretty good too.
Searching for my username on DDG returns an entire page of results on "glanders", a "contagious zoonotic infectious disease that occurs primarily in horses, mules, and donkeys". I don't find anything related to my name until page 2.
Runnaroo, however, has my GitHub profile as the first hit and my personal website as the second. I like those results!
It is only a couple months old, so not many people have heard of it yet, but I have been trying to make an effort to share it more vs. just staying behind the computer and adding features. It did get a nice bump recently when Brendan Eich tweeted out a link to it. That was a really cool surprise.
"Runnaroo, however, has my GitHub profile as the first hit..."
I'm also working on adding Github right now as a Deep Search source, so the below results will soon be integrated into the SERP for that query.
It really isn't as good. I want DDG to be awesome. I'm not in the Google ecosystem, and don't care for their business practices. I would be happy to dump them.
But Google search is noticeably much, much better. They've got us over a barrel.
I kept seeing comments on HN about how DDG was as good as Google, so I've switched on one of my machines. It isn't.
For stuff that doesn't matter too much, or is an easy search, I use DDG, then fall back to Google if I don't find what I need. For anything serious, I just open up a Google tab :(
Well, sometimes what you're used to matters too. For me DDG is better (except for local searches, in which case I've got to include my city's name).
Every time I search anything on someone else's computer (with Google) I find it much harder to find what I'm looking for; sometimes I can't find it at all and just go to DDG.
The only times I use !g are when I hit some pretty obscure programming errors and no (useful) results on DDG, and only once in a while is Google able to find anything better; usually the results are the same.
It's interesting that there are such diverse experiences of DDG vs Google. I'm also in the camp of finding DDG to be better at finding what I'm looking for.
I wonder if the difference has to do with variations in search strategy or interests, or a combination of both. In any case, DDG does seem to have been improving, and I don't see any reason to believe that the trend won't continue. Particularly since Google seems hell-bent on making their search useless.
And there are so many nice features, like the !bangs search prefixes, and the ability to store and sync your preferences as a passphrase, without requiring an actual account.
yes, exactly, thank you. Not as powerful as a specific wildcard, but good enough.
sadly, it doesn't work with .something, so 'inurl:.co.uk' doesn't seem to work. Works with 'co.uk' but that's only valuable with longer suffixes. something like 'uk' is a bit too common.
DDG has basically equivalent results for most things, but I find that I use !g for programming stuff enough that I've gone back to just using the Google site for those searches. It feels like Google has better results for more recent programming topics.
Most programming searches are just stack overflow, at least for me. So, I just add `!so` to the search in DDG. It's the same as using "site:stackoverflow.com #{what_youre_looking_for}"
Now if we can just get people to stop saying "Google it" instead of "search it" or something else which doesn't involve being a constant two-legged advertisement for Google, maybe others will start to realize they have options.
> Our company moved to it and its been horrible with any third party application or marketing pixel.
Isn't that part of the point though? 3rd party marketing and tracking pixels are NOT things that improve the experience or performance for the visitor.
Yes, this is true. But when our company (ecommerce) relies heavily on marketing data it becomes an issue. Third party applications that provide UX and UI are impacted as well.
The main issue here is executives believed the hype that an AMP website would result in higher revenue and that is not the case. The money spent on making our website AMP'd could have been spent fixing the current system.
Can you tell me more about AMP? First time hearing about it (maybe I do live under a rock), but I always thought it was convenient to see news results up top that open up fast.
Hi all! I'm an engineer who worked on this feature. I can't speak to the general concerns about AMP, but I can say that we didn't remove the original url here intentionally - we actually never added it to the Images version. Sorry for the oversight, we're working on bringing it back now!
I’ve been trying to get a straight answer out of google for a while on this, wondering if you can clear it up.
Does Chrome mobile’s “articles for you” section prioritize AMP content along with Google news?
From my experience this seems to be the case. I find it alarming as the only sites that support AMP are often the big clickbait news fear factories. Meanwhile the little sites who never bloated their pages with tracking scripts in the first place end up getting screwed.
Is there a way AMP could have an opt-out option? (maybe a permanent setting, maybe a per-search trigger) Ideally at user discretion, but I suppose it must be enabled by the first-party content provider (the website).
That would make everyone happier I think. I don't mind AMP some of the time, but sometimes it's undesirable for security concerns notably.
Are you and your team generally aware of the concerns about AMP? What does the AMP team think about how negatively pretty much the entire world views it?
When we engaged the AMP team early in its lifecycle to try and dissuade Google from its course, it was very apparent that the AMP team, and especially its lead, Malte Ubl, strongly believe that Google is the web, and AMP is the only way to "save" the web from being app-ized by things like Facebook Instant Articles, which seems to be the existential threat Google is scared of: https://www.facebook.com/facebookmedia/solutions/instant-art...
Obviously, the AMP team does not, in any way, believe the web needs to be saved from Google.
It's been a while since I read the AMP project website, but my impression was that it was written by people who drank a lot of the Kool-Aid. There's probably a lot of people working there who believe that AMP is good, but we can't discount the compliance created by the exorbitant pay and prestige of working at Google.
There also seems to be a healthy amount of "oh, well that's more of a Google search issue, not AMP as such, so I have nothing to do with that" denialism.
I note that Terrence Eden did excellent work getting the concept of "how do users opt out of AMP results" on to the steering group, but it is notable that no work appears to have happened on it for more than a year now.
I've stopped using Google Chrome and Google Search. The only time I use Google Search now is when DuckDuckGo doesn't quite deliver and then I do so using an incognito window. Next step will be to ditch Gmail, and then Google Cloud & Firebase.
This might sound silly coming from a lone developer in South Africa, but my experience is that we live in the future, and the mass intuition of developers is rarely wrong. If the trend continues, then in 10 years' time this will be the popular view.
They're making their money, the US probably will never slap them with any meaningful antitrust lawsuits, and they also see China as a long term investment, so there's no reason to see them as self-immolating. Maybe they're doing so for people in the know, like those of us on HN, but most of the world doesn't see it the way you do. To most of the world, Google is still that innovative company with the friendly looking logo that makes gadgets.
Which is 90% of the time when you are looking for IT questions, unfortunately. I am hoping they are going to get better; it would be so good to finally stop using Google search.
The main reason I occasionally venture into google is for news -- "!gn something specific" can return better results if I'm looking for a specific article I recall seeing a couple of weeks ago. That said there's also far more rubbish in the google news source, it's far less useful on the whole than DDG, but occasionally it reveals the right result.
I've generally found what I wanted via the DDG news search, but I haven't used it enough to seriously evaluate its quality, unlike the main search engine. I've also had reasonably good luck with "past week" or "past month" searches on DDG, but those aren't news-specific.
Which queries in particular? I've found duckduckgo to be quite useful especially for IT questions because it actually listens when I tell it to search a phrase verbatim. If I try to google an error then I keep getting all kinds of stuff that is vaguely related but with no mention of that particular error whatsoever.
StackOverflow stuff, for me. I always have to prepend `!g` to those.
Google must index SO better or something because there's always so many more useful results (particularly in the little "sub-results" below any SO result) than what DDG gives. E.g., compare:
It looks like the info you want is in at least each of the first 4 results on DDG. Specifically, the 1st DDG result for me is a Stack Overflow post with the question showing how to pass individual arguments and the answers explaining how to pass a variable number of arguments. (https://stackoverflow.com/questions/3811345/how-to-pass-all-...) I would say DDG gave better results.
I have to agree that DDG seems to struggle with developer terms, but that is probably a cost of not knowing that I'm a developer. E.g. compare search results on Google and DDG for "Rust http". Google knows I'm looking for a crate, but first result on Duck is for the Steam game, Rust.
Firefox Focus has an emphasis on privacy. That means no history and no bookmarks. No history means no cache. No cache means I have to refetch all resources every time I open it; if this is, say, a list of articles from a limited selection of websites, this is incredibly inefficient and will eat my limited data. No bookmarks or history means I can't use it as a primary browser to open common sites without typing them in. I can save shortcuts to pages in the launcher, but this causes usability issues (can't see the entire title or URL; no horizontal view; my folders on Nova Launcher are hierarchically organized drawer groups, which don't support shortcuts; can't search shortcuts only, as results include deep phone search).
Firefox Focus is a very good browser for opening links on WiFi or unlimited data when using another app. It's not very good at all at being a primary browser.
Basically, Google is penalizing webmasters who do NOT support AMP (also by rewarding competitors who DO support it); if you depend on Google search results for a significant portion of your traffic, not having AMP will impact your business.
And the Department of Justice is investigating whether this results in higher prices for consumers, if so their business conduct will deserve antitrust restraint under the current consumer welfare standard.
I don't think so. I personally don't use sites that don't have HTTPS. I don't have any interest whatsoever in making it easy for third parties to see my traffic.
I personally don't use sites that don't have HTTPS.
Then you have chosen to exclude yourself from knowing a large chunk of interesting information that was posted on web sites before the web went fully corporate.
A/B testing whether or not people have finally gotten tired of fighting for the web. Once they can go for a few months without anybody speaking up, they'll move on to further stages.
My unpopular opinion: I love AMP. Mobile internet (maybe because I'm on worse devices) was becoming a non-thing for me. AMP lets me check the news again.
My opinion might be because I don't have much understanding of it as a framework, but from a non power user perspective, I've found the UX to be amazing.
For me, the UX is terrible on mobile. On my device it breaks the auto-hide behavior of the chrome address bar - normally when you scroll down the address bar hides, but with AMP only the thin AMP bar hides when scrolling down. On some websites they only display a portion of the website content on AMP (ex. Reddit, although it seems like they have fixed this now), so to see all of the website content I have to click out of the AMP page to view the full page, and then click again to expand whatever content I was trying to view in the first place. And when I want to view the original web page I have to click this little "i" icon to reveal the actual web page link rather than them just putting a link to the original web page in the amp header directly.
Regrettably Apple News will occasionally drop sections of the content it doesn’t know how to render properly, leaving the news article bafflingly incomplete, and usually it’s not obvious that something is missing.
It is a problem, but depending on what publications you read, how often it happens can vary.
The reason is that, like AMP, Apple News only supports a subset of HTML. It's up to the publishers to adhere to those limitations. Some choose not to make a special Apple version or change their main content to be Apple-friendly, so Apple News does what it can to show what it can.
It's not ideal, but it's better than being reliant on Google.
Also, if you see an article that doesn't look right, there's an option to report that article to Apple. One of the options is "Content missing."
The content is essentially the same in both places: they're both just aggregating content from news providers. So you go with the one that has a non-awful UX. Apple news also lets you access paywalled content through a single subscription (News+), which is nice if that's your thing.
For anyone on the fence about using DuckDuckGo instead of Google, if you don't find what you're looking for, it's easy to revert to a Google search by typing "!google" with your query.
DDG has improved a lot with time, so I almost never fall back to Google anymore.
For some technical searches that DDG can't find, I find codeseek.com is better than both Google and DDG
if typing "!google" is too much of a hassle "!g" also works. you can also do pretty much any letter and it will search another service from bing to yahoo to images to maps to wikipedia to whatever.
For the even lazier, here's a Greasemonkey/Tampermonkey script that adds a "Google" button to the DDG results page. Just click it if you don't like DDG's results and you'll be taken to Google for the same search term.
Previously I'd considered AMP a storm in a teacup. Now it's enough to make me switch my mobile device to DDG, something I thought I'd never say.
For some reason on my iPhone AMP pages just don't fit on the screen. Maybe it's because I have a large default zoom/font. But this actually makes them unreadable because pinch-to-zoom is almost always disabled. I have a bookmark workaround for that but honestly I don't think it's ever worked.
Only recently did I discover a workaround for this: on Safari you can force touch to bring up a preview of the original site then click on it to bring that up.
If this change breaks that functionality then something has to change. That could mean making my browser pretend to be a desktop browser, or it could mean switching to DDG. I'm not sure yet.
Why do I, as a user, not have the ability to opt out of this horrible broken mess?
I'm all for having Web pages that render fast. I really hope for Google's sake that rendering speed alone is what affects ranking and there isn't some boost for AMP directly because that has anticompetitive written all over it. It would be forcing sites to adopt AMP or suffer downranking (to be fair, companies are typically terrible at designing fast-rendering websites).
So if DDG gets me out of AMP and I can somehow set it so I get Google search results by default (instead of Bing) without using !g on every search, then honestly, at this point, I'm in.
I Googled "how to turn off amp results" and the top article is a 2017 article[1] that lists some workarounds, such as using duckduckgo or installing DeAMPify, "an Android app that lets you bypass AMP links so you can always load the original link." As far as a first-party way to disable it, the article says:
>"Late last year [2016], Google said that it’s working to let users disable AMP in Google search, but there doesn’t seem to be any official kill switch yet. Meanwhile, you can use any of the above workarounds to get around Google AMP pages."
Can any Googler chime in on why it would take more than 4 years to figure out the code to turn off a feature like AMP for signed-in users who don't want it? I would have thought that this is something anyone could do in 20 minutes, but I do realize that Google has thousands of highly paid and experienced engineers so maybe there is something that takes a lot longer, that I didn't realize. Could you shed some insight on what makes this difficult?
This is exactly what people speculated would eventually happen when they first announced AMP. Google continues to disappoint and harm the internet in a very predictable way, year after year.
If I tap on the share button on the top right of the AMP frame, it seems that I am able to copy the original URL (although it's the specialized amp.knowyourmeme.com version of the page)
For those who are okay to install addons on your browser, there is Redirect AMP to HTML addon[0] on Firefox. I'm sure there is something similar for Chrome.
I never looked deeply into this, but I remember that back in the day there was a "View original image" link in image search. I think they got rid of it because of some EU directive stating that they can't link an image directly but rather have to provide a link to the original source, i.e. the website? Or am I completely wrong?
Sharing AMP links is the worst. I probably wouldn't mind if they had a way to provide short links that cardified okay in slack, but lately I've been just sharing the link to the original.
If anyone else prefers to share the original link, you can probably find the article by searching on DuckDuckGo. I'm using their search more often these days.
I can kind of understand why Google wants to do AMP. Mobile web performance and all. It probably scores better at some metric that someone believes passionately in (to the exclusion of all else). I don't think it's worthwhile overall, but I can understand it.
What I can't understand is why it has to be managed so badly. Just put the damn URL there for people who want it. Allow opt out for people who want that. Super easy.
Even if the plan is to cynically use leverage to railroad through adoption of AMP, you're not going to win over the people who despise AMP. There's nothing to be gained by twisting arms like this. You're only making enemies. Just throw a bone to the people who don't like AMP, and the rest will go along with AMP anyway because they don't care.
The original URL is provided by the Share button on the right. But it's not a link. (To visit it, you'd have to copy and paste it back into the browser; you can't follow it directly from the app it was shared into.)
So the URL is available as always, but the link is not.
If you're looking for a decent replacement for images.google.com, I run https://canweimage.com. It gets results from Wikimedia Commons. There's also just https://commons.wikimedia.org/wiki/Main_Page. I built canweimage because Wikimedia Commons used to be harder to navigate (once you searched, you then had to click through categories and subcategories to see image results). Looking at it just now, that doesn't seem to be the case anymore!
I always chuckle to myself when people are quick to attack amazon/facebook as being the real evil companies....while google has been slowly taking over the internet these past 10 years.
This is so unfriendly to users. I don't understand the push to obscure URLs. They are sometimes hiding the URLs in search results also which is infuriating. One more reason to use DuckDuckGo.
Another thing that is extremely annoying is the hiding of URLs in Chrome: hiding the protocol and everything else in the URL besides the domain. When you click up in the address bar to change a section of the site, the protocol reappears and messes you up.
Or on mobile, when you tap the bar, the whole URL disappears when you just wanted to change part of it.
Yes, Google, there are still people who want hackable URLs. I remember when that was something they pushed: clean URLs that make sense, so you can type in where you are going.
AMP might be the thing that actually brings an antitrust case: it's unneeded and very anti-competitive, using their monopoly position to stifle competition and innovation rather than extending innovation and competing on product. Google should reward fast sites, not band-aid the problem with an anti-competitive AMP. AMP also creates a lot more work for content companies, and it is only useful for Google. Essentially, content companies are doing Google's work for them.
McKinsey and the management consultants have taken over large swaths of Google with stuff like this and AMP.
Here's hoping some pirate product people/engineers put up a flag and start returning to product over management metrics.
The push is to get you to stay at Google. They are hurting real bad because people are typing "amazon.com" searching for shopping queries on Amazon now instead of Google; Google hates that.
Google wants to be the gatekeeper of the internet. It wants to be the middleman between everything you do online. Might you be using Google Chrome right now? Or Android? ;0
Google is an evil and destructive force in technology and it genuinely baffles me that people trust their email and smartphone operating systems to be guarded by an advertising company.
Overwhelmingly that's the main user request for AMP. The lack of progress and the fact that it disappeared from the 2020 priorities with no progress speaks volumes.
I’m not an expert in the details of Chrome permissions vs. other browsers, and setting aside for the moment the market power of Google/Alphabet, this seems fairly fixable, along the same lines as email spam or ad tracker networks, and a clear case for “caveat emptor”.
How about rewriting any AMP urls in the browser location bar to redirect to the canonical location as an end-user extension, or as an in-content script?
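A real extension would do this in JavaScript against the tab's URL, but the core rewrite is just string surgery. Here's a minimal sketch in Python of the idea, handling only the common google.com/amp/ viewer shape (an actual extension would also need to cover cdn.ampproject.org cache URLs and publisher-hosted /amp/ paths; the function name is my own):

```python
from urllib.parse import urlsplit, unquote

def deamp(url: str) -> str:
    """Best-effort rewrite of a Google AMP viewer URL to its canonical form.

    Only handles https://www.google.com/amp/s/<host>/<path>; anything
    else is returned unchanged.
    """
    parts = urlsplit(url)
    if parts.netloc.endswith("google.com") and parts.path.startswith("/amp/"):
        rest = unquote(parts.path[len("/amp/"):])
        if rest.startswith("s/"):
            rest = rest[2:]  # the "s/" segment marks an HTTPS origin
        query = f"?{parts.query}" if parts.query else ""
        return f"https://{rest}{query}"
    return url
```

The same transformation could run as an in-content script that rewrites `location.href` on page load, at the cost of a visible redirect.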
I don't understand why Reddit specifically puts up with AMP.
They already have a "good" mobile site. The AMP pages break reddit results and result in the constant annoyance of having to dismiss "Open in App?" banners both in the AMP view and on the mobile site. Everyone hates the AMP page. Why keep it? You already invested a ton of resources in the fast/nice mobile app...
Am I the only one who likes AMP? Like 99% of news websites etc. are so overloaded with crap to the point where it's impossible to read them on mobile. I'm glad to see an AMP link, it means the content will load quickly.
I feel like we need a solution that works for everyone, not just news companies and search giants, but users as well!
Seems like a short sighted move since image search is one of the few times I'll jump from DuckDuckGo to Google. Probably won't bother if I can't pull the source image url
Is AMP popular in the US only? (I am from the Netherlands)
As far as I know I have never seen an AMP website. I use DDG as search engine but !g a lot, so I assume I should have stumbled upon an AMP site at least once.
But maybe it is also browser related? When I go to amp.cnn.com I get redirected to editions.cnn.com in Firefox.
The thing is, from their point of view they aren't evil; they are "making the web faster", they say (and they're not wrong) that most sites are so full of crap like trackers and adverts that it negatively impacts your browsing experience.
This Twitter post costs me 1.6MB of bandwidth and keeps pinging for more stuff, and that's with an ad blocker enabled, while the main message itself is just a handful of bytes.
Google is selling it as an improvement to the web, but that also says a lot about websites.
But this has been Google's strategy for a while now; they push technologies that make the web faster and accessible to more people (like Chrome, HTTP/2 and 3, webp/webm, Google DNS, Google Fiber (discontinued), Android, etc.) and score goodwill, but at the same time they know that if people can browse faster they will run into Google ads more often, earning them more money.
AMP is no different; if a site that doesn't even have google ads takes 10 seconds to load, they earn nothing. If the same content takes <1 second to load, WITH google ads, they earn money. And they earn money more often because people can consume more content instead of wait for things to load.
Amp-the-standard is not the same thing as amp-the-google-UX.
I'd love for sites to adopt AMP, as a standard for web design which leads to very lean sites without content pop-in. That'd be awesome! Give me a little icon next to the search result that says "this site is AMP certified" so I know it'll be fast.
But what I don't love, is that Google uses AMP as a trojan horse to keep me inside the google search results as I browse the internet, by:
- Rehosting the amp sites on their CDN
- Pre-rendering sites I haven't clicked yet to make them load faster
- Putting the site in some pop-over div which makes me feel like I'm still in the google results (so I can pop right back quickly and spend more time on google!)
Google gives me slightly better results (sometimes much better, much of the time about equal), but I prefer Bing's interface. If you want to stop using Google, Bing Images is a good option to replace the vast majority of image searches.
- Install an Ad Blocker, so Google will not be able to make money on your visits and chances you'll get a malware from ads will decrease
- If you use Android, install Firefox and add uBlock Origin as an extension, because Google abuses its power and prevents people from installing ad blockers in Chrome on Android.
- Stop paying money for Google Ads
- Stop using Gmail. There are plenty of alternatives (ex. outlook.com)
- Use other search engines (bing.com and yandex.com). I noticed they work better than Google in many cases. For example, yandex.com is much better for semi-legal content which is blocked on Google and not available there at all.
- Stop using GCP. AWS and Azure are the better cloud providers.
- Stop supporting AMP on your website
- Don't pay money for integrating Google Maps into your website. There are much cheaper alternatives.
- Office 365 is years ahead of Google Docs.
There are so many times when using the Google News app that the story half loads, or the main part of the story is missing... and now I can't even go to the actual site and read the news because the link is gone. Goodbye, Google News.
I just tested again with Chrome, Opera, Vivaldi and Firefox on Android and can confirm now, too (except for Firefox). First I only checked on desktop where you get to the original website.
These days Google Image Search is rarely even my last stop when I search for an image. I use DDG, so when I need an image I try any of !gi, !bi, !yi, or just !i.
AMP is supposed to reduce file size and script overhead. Its seemingly innocent goal was to speed up page loading on slower mobile devices and reduce bandwidth usage. But Google then started caching entire AMP versions of articles on their servers, never loading the original when searchers or Google News app users click to read an article. That means the web pages of the articles' authors never get the hits from readers. This, I think, is the main reason why website owners are quite upset.
You're only half right. While AMP is designed to create small pages, what's more important is that they should be safely embeddable. That means there are no potentially unsafe scripts or asset requests. That's why AMP uses a subset of HTML as defined via WebComponents.
>Google then cached entire amp versions of articles on their servers
That's the AMP Cache. The cache allows vendors like Google, Microsoft, and Cloudflare (who each run their own caches) to automatically preload these pages in search results without there being any risk to the user. Yes, Bing runs a copy too.
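For the curious, the cache serves each publisher from its own subdomain of cdn.ampproject.org, derived mechanically from the publisher's domain so that origins stay isolated. A simplified sketch of that mapping (this ignores the punycode/IDN handling and the hash fallback for over-long names that the full scheme defines):

```python
def amp_cache_subdomain(publisher_domain: str) -> str:
    """Map a publisher domain to its AMP cache subdomain (simplified).

    '-' is doubled to '--' first, so the original dots (which become
    single '-') remain unambiguous in the mapped name.
    """
    mapped = publisher_domain.replace("-", "--").replace(".", "-")
    return f"{mapped}.cdn.ampproject.org"
```

So `www.example.com` is served from `www-example-com.cdn.ampproject.org`, which is why AMP pages appear to come from Google's (or Bing's, or Cloudflare's) infrastructure rather than the publisher's.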
"preloading" the entire article the way say, Google News app does for example means that without going two touches deeper it essentially negates the user ever needing to visit the site that originally authored the content removing them entirely from the process. It's not very content-creator friendly. For that matter, why use the internet if you don't trust anyone but Google, Microsoft, Cloudflare, etc? Insanity. The internet may as well be considered broken and useless if you can't trust your web browser to at least minimally protect you from scripts automatically hijacking your computer with an "unsafe script".
Also, I did mention scripts, but kept it short and sweet given AMP's limited scripting support, to explain AMP in simpler terms.
> ... it essentially negates the user ever needing to visit the site that originally authored the content removing them entirely from the process.
Yes, in this sense they are acting as a CDN. The original website is still authoring the content however.
> The internet may as well be considered broken and useless if you can't trust your web browser to at least minimally protect you from scripts automatically hijacking your computer with an "unsafe script".
It's not just about malware. I'm sure most users would not be comfortable with websites being able to track them after they simply performed a search. Actually going to a website is an action with more intent behind it.
It's supposedly a component framework: supposedly you can build web stuff faster and easier using it. In reality, it looks like Google is using it as a Trojan horse.
AMP makes work easier for Google (its indexing and content-presentation process), but it's presented as something that makes things easier for us, the general public.
This seems like the kind of thing that will be labeled (maybe rightly, maybe falsely) as a "mistake" and fixed... It really seems like if this AMP proxy stuff is going to get forced down our throats, then viewing the original version needs to be integrated at the protocol level and exposed out through the browser UI, not the website (which would also make it possible for non-Google browsers to opt out, as an extra bonus). Otherwise, this sort of thing is just going to keep on happening.
Interestingly: could this lead to Google violating copyright law?
As long as Google is "only" a search engine referencing other websites, it can get around copyright law by stating that it's not infringing itself, only linking to infringing websites...
But if Google serves the content itself, from its own domain, doesn't that make it responsible for the infringement? Maybe that would be a way to kill the behemoth... or at least this stupid "feature"...
I see a lot of hate for AMP, and rightly so.
I've been able to get rid of it completely on my Android device with two tricks:
- Switched my browser to Kiwi Browser: very close to Chrome, but with some neat features, one of them being automatically redirecting AMP links to the original links
- Installed DeAMPify, which redirects AMP links opened from any app other than the browser
I really wish there were a way to opt out of AMP, as on iOS you're definitely SOL.
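The redirect that tools like these perform can be sketched in a few lines: strip the known Google AMP viewer prefix so the link points back at the origin site. (This is a minimal sketch; real extensions handle many more prefix variants, including cache-domain URLs.)

```python
# De-AMP a Google viewer URL back to the publisher's own URL.
def deamp(url: str) -> str:
    prefixes = (
        "https://www.google.com/amp/s/",  # viewer prefix for https origins
        "https://www.google.com/amp/",    # viewer prefix for http origins
    )
    for prefix in prefixes:  # the longer "/s/" form must be checked first
        if url.startswith(prefix):
            scheme = "https://" if prefix.endswith("/s/") else "http://"
            return scheme + url[len(prefix):]
    return url  # not an AMP viewer link; leave it untouched

print(deamp("https://www.google.com/amp/s/example.com/story"))
# -> https://example.com/story
```

The same logic is what a PiHole-style blocklist can't do on its own, since blocking the domain breaks the page entirely instead of redirecting it.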
If you're on Android, use Firefox as your main browser if you want to avoid the shit called AMP. On iOS, Safari does a little better at showing the URL. Changing search engines to, say, Bing is not really helpful: Bing on Chromium Android still shows AMP pages.
And now they own the content. Links to that Google URL will be created that last decades, and Google can replace the data at that link with whatever they want, whenever they want.
Some website: "I did this"
*walks away as Google approaches*
Google: ... "I did this"
Seriously, though. I understand why the general public isn't pushing back against this, but why don't I see more push back from websites? Websites support AMP for SEO, but at this point they should be trying to redirect users away from AMP and social media sites should be trying to automatically strip AMP from links.
And why does the "Redirect AMP to HTML" extension on Firefox have so few users?
Wait, AMP still exists? I haven't seen it for a while now, also in google search results (though I don't do many of those). I thought it was dead after literally everyone thought it was a terrible idea.
I suppose I'm happy the google crawler is still banned from my domains. People should use something else if they want good results now.
Does uBlock or privacy badger block it? It seems out of scope for those projects so I expect I should see amp links just like anyone else. Or did they kill it in the EU or something? I saw someone from NL wondering the same elsewhere in the thread.
I understand the objections to Google's actions--this is clearly a dick move which is terrible for both users and content creators--but I'm not understanding the AMP hate here.
As a user, AMP is great. AMP is a better implementation of the open web than HTML is. It's a (usually) self-contained document that isn't tightly coupled to the server it came from. You can download an AMP document, render it, attach it to an email to a friend, etc., without having to log in or get tracked. Unfortunately AMP is adding ad capabilities, but at least AMP allows you to strip those out fairly trivially.
As a website, if you want your users tightly coupled to your server, just don't implement AMP? You literally went through non-negligible effort to implement a feature and now you're surprised and angry that it works the way you implemented it. AMP was always a bad idea if you wanted users to be dependent on your website for your content--this has only made it a slightly worse idea. And by the way, if all you want is credit for your content, it's trivial to add a linked byline to the top of your AMP.
I'm not defending Google here. They're an amoral corporation with too much power and shouldn't be used, period. But AMP is fine.
EDIT: I can only assume the silent down-voters are people who implemented AMP and are whining that what they implemented works. ;P
> AMP is a better implementation of the open web than HTML is.
Except the results of AMP can be done without AMP. It just requires site owners to not put a bunch of crap on the site. Something that AMP requires you to do.
> You can download an AMP document, render it, etc., without having to log in or get tracked
Except for the fact that now only Google tracks you. Also, doable without AMP.
The hate from many isn't about being a website owner. It's about being a website user. When I click on a link of a website, I expect to go to that website. Not stay on Google's site.
> Except the results of AMP can be done without AMP. It just requires site owners to not put a bunch of crap on the site. Something that AMP requires you to do.
Sure, you can build self-contained HTML pages, but you don't have any way of indicating to browsers or search engines that what you've created can be consumed in that way.
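To be fair, AMP's signaling mechanism is simple: an AMP-enabled site advertises its AMP version with a `<link rel="amphtml">` tag in the canonical page's head (and the AMP page links back with `rel="canonical"`). A hedged sketch of how a crawler discovers it, using only the standard library:

```python
# Find the advertised AMP version of a page via its <link rel="amphtml"> tag.
from html.parser import HTMLParser

class AmpLinkFinder(HTMLParser):
    def __init__(self):
        super().__init__()
        self.amp_href = None

    def handle_starttag(self, tag, attrs):
        a = dict(attrs)
        if tag == "link" and a.get("rel") == "amphtml":
            self.amp_href = a.get("href")

page = '<html><head><link rel="amphtml" href="https://example.com/article.amp.html"></head></html>'
finder = AmpLinkFinder()
finder.feed(page)
print(finder.amp_href)  # -> https://example.com/article.amp.html
```

So the discovery problem is solved for AMP specifically; the parent's point stands that plain self-contained HTML has no equivalent signal.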
> The hate from many isn't about being a website owner. It's about being a website user. When I click on a link of a website, I expect to go to that website. Not stay on Google's site.
That's a great objection to Google not linking to the original site.
It's not an objection to AMP. Nothing about AMP prevents Google from linking to the original site.
The silent down-voters are more likely site owners who feel they are being coerced into using AMP in order to compete in Google's search rankings, or users who feel they are being forced to visit AMP sites (which have various usability or privacy concerns) due to these site owners being coerced.