A step in the right direction, but it doesn't entirely solve the issue. To make this complete they simply need to remove all Google nav/branding/back button from AMP pages (or at least offer the option).

AMP should have been purely an open-source library implementing a specification, not a way to opt in to becoming a sub-page within Google.

If I navigate out to a new URL, that should sever the relationship with the old site, even if AMP makes the transition a bit quicker and gives the new site tools to load the page faster.
The most interesting thing to me is that they plan to decouple signing of content from TLS connections, so that packages could be signed using normal TLS certificates (or something like that).

Hmm, so maybe in some future static-only sites will be able to sign a bundle with offline keys and not use TLS at all.
Or maybe we just sign a static bundle with a TLS key for our origin and upload the bundle to Google and other web caches. As in, maybe the internet can be distributed again.

I see a lot of interesting potential in decoupling origin verification from TLS connections.

Web Packaging Format Explainer:
https://github.com/WICG/webpackage/blob/master/explainer.md
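To make that concrete, here's a minimal sketch of what offline bundle signing could look like (Node/TypeScript; the file names, the key format, and the use of the origin's TLS key are my assumptions, nothing the explainer pins down):

```typescript
import { createSign, createHash } from "crypto";
import { readFileSync } from "fs";

// Hypothetical file names: a pre-built static bundle and the origin's
// TLS private key, used offline (no server involved at signing time).
const bundle = readFileSync("site-bundle.wpk");
const privateKeyPem = readFileSync("origin-key.pem", "utf8");

// Hash and sign the bundle bytes. A cache could then re-serve these exact
// bytes, and a client could verify them against the origin's certificate
// without ever opening a TLS connection to the origin.
const signer = createSign("RSA-SHA256");
signer.update(bundle);
const signature = signer.sign(privateKeyPem);

console.log("sha256:", createHash("sha256").update(bundle).digest("hex"));
console.log("signature:", signature.toString("base64"));
```

The point being that verification would then depend on a signature over the bytes, not on which server the bytes happened to arrive from.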
It's tacky reading "y'all" in an official Google blog post about how they're going to hijack browsers to further centralize the web, one that suggests giving Google all the traffic improves privacy. But I guess that's the kind of world we live in.
Somewhat predictable to see the mess evolving. Once you start peanut-buttering over something, not quite all the nagging problems go away, and then you need even more "solutions". Then even more.

Enough, Google. Making small web sites is EASY, OK? No AMP needed: just write your content and, as if by magic, it is small and loads nearly instantly. If web sites are bloated and slow, close them and use something else. Stop hyperextending the web to make lousy programming practices the norm.
What a nice touch that they published this on amphtml.wordpress.com. Like, "see, we are free as in beer and represent the voice of the people". By using *.wordpress.com instead of *.google.com they execute a well-calculated PR strategy.

But in reality, Google is trying to run a racket on the free and open web in order to squeeze even more juice to feed its insatiable corporate greed.
Usually I try to be constructive, but I just need to get this out: fuck AMP. I don't care, downvote me to oblivion. I'm a little buzzed, but FFS, who actually wants AMP and why is it even a thing? Why can't Chrome just prefetch shit from the actual servers and let ISPs handle the caching? Why does mother Google need to serve me all the content from its overly suckled teat? I know everyone working on AMP means well, but why, why, why does Google insist on destroying the internet and entirely undermining TLS in the process? Sorry. That was therapeutic.

Bonus Quiz:

1. When I encounter an AMP link I...
a) Click it.
b) Don't click it.
c) I don't see AMP links using Firefox.

2. When my friends send me an AMP link...
a) I click it.
b) I don't click it.
c) Friends don't send friends AMP links.

3. Reasons I've switched to Firefox...
a) I love Rust.
b) I care about privacy.
c) I hate AMP.

4. My ISP is...
a) Google
b) Chrome
c) None of the above.

The answer is 'c'.
The meat of the story is:

"We embarked on a multi-month long effort, and today we finally feel confident that we found a solution: As recommended by the W3C TAG [1], we intend to implement a new version of AMP Cache serving based on the emerging Web Packaging standard [2]."

I'm just reading through this, so I'm gleaning as I go, but it looks like the W3C TAG came out with a recommendation for 'Distributed and Syndicated Content' [1] that addresses AMP by name and recommends strategies for doing this kind of content syndication in a way that preserves the original provenance of the data.

The Web Packaging Format [2] apparently [3] aims to package together not just resources but whole HTTP request-response pairs (maybe HPACK-encoded?), signed and hashed for integrity, in a flat hierarchy, inside a CBOR envelope that nonetheless has MIME-like properties. I'm still digesting what's all involved.

[1] https://www.w3.org/2001/tag/doc/distributed-content/
[2] https://github.com/WICG/webpackage
[3] https://github.com/WICG/webpackage/blob/master/explainer.md
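If I'm reading the explainer right, the rough shape is something like the following (Node/TypeScript sketch using the npm "cbor" package; the field names here are my own invention, not the spec's):

```typescript
import { encode } from "cbor"; // npm "cbor" package
import { createHash } from "crypto";

// A guess at the rough shape: a flat list of request/response pairs inside
// a CBOR envelope, hashed (and, in the real format, signed) for integrity.
// These field names are mine, not anything the spec defines.
const pair = {
  request: { ":method": "GET", ":url": "https://example.com/article.html" },
  response: {
    ":status": 200,
    "content-type": "text/html",
    body: "<html>...</html>",
  },
};

const envelope = encode([pair]);
const digest = createHash("sha256").update(envelope).digest("hex");
console.log(`package: ${envelope.length} bytes, sha256 ${digest}`);
```

The signature/hash over the pairs seems to be the part that would let a cache like Google's re-serve the package while the browser still attributes it to the original origin.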
Can someone explain what this is about?

> As we detailed in a deep-dive blog post last year, privacy reasons make it basically impossible to load the page from the publisher’s server. Publishers shouldn’t know what people are interested in until they actively go to their pages. Instead, AMP pages are loaded from the Google AMP Cache but with that behavior the URLs changed to include the google.com/amp/ URL prefix.

To me, this reads as "for *our* privacy, we don't tell the publisher what page has loaded", but that may be an uncharitable interpretation.
I read the referenced blog post and it didn't clear up anything about the "privacy" issues.
Still shite. AMP still breaks scrolling and results in weird stub pages that are missing features. And I can't turn it off.<p>AMP-enabled pages load faster, but on the other hand I have an ad-blocker and LTE that gets 10Mbps, so the improvement is negligible. Not worth breaking the web, IMHO.
I don't understand. So now the URL bar won't always show where the page was actually loaded from? It could show example.com but really be loaded from Google's AMP servers? If I'm reading this right, I find it very sad.
From the article:

> while maintaining the [...] privacy benefits of AMP Cache serving

"AMP Cache serving" == hosted on Google's servers. This makes the statement at best stupidly oxymoronic, at worst deliberately dishonest advertising.

> privacy reasons make it basically impossible to load the page from the publisher’s server.

Browsers (including Firefox [0][1]) already do this. There are no "privacy reasons" preventing it. The only reason not to do this is to present another justification for opting into their AMP Cache product.

> can take advantage of privacy-preserving preloading and the performance of Google’s servers

Also a contradiction in terms.

[0] https://developer.mozilla.org/en-US/docs/Web/HTTP/Link_prefetching_FAQ

[1] https://bugzilla.mozilla.org/show_bug.cgi?id=1016628
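For the record, this is all it takes to get the standard, origin-direct prefetching that the MDN FAQ in [0] describes; a browser-side sketch (TypeScript; the URL is made up):

```typescript
// The standard prefetch hint: the browser fetches the page directly from
// the publisher's origin, at low priority, ahead of a likely navigation.
// Equivalent to <link rel="prefetch" href="..."> in markup.
const hint = document.createElement("link");
hint.rel = "prefetch";
hint.href = "https://publisher.example/article.html"; // hypothetical URL
document.head.appendChild(hint);
```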
I'm sorry, I don't get this hatred for AMP. AMP is not the problem here. The problem is that we accept third-party JavaScript on every page of our monetized websites. From what I understand, you can completely self-host and be AMP-compliant. Is that not the case? If you care about user experience, you should not let the highest bidder run JavaScript on your website on your users' devices. Demand that your ad network provide ads that are just text and images. Demand that your ad network host that text and those images themselves with reliably low latency, and so on. Or better yet, remove the advertising if you care about user experience.

If you are a news organization and Google won't let you be in a certain section without serving through Google, complain.

My understanding is that AMP exists to solve a problem. It obviously isn't the only way to solve that problem.
In related news, Firefox Quantum is out, with major components rewritten in Rust. And, good news: the mobile version supports extensions, in case you're missing ad blocking on mobile Chrome.
I once wrote something similar to figure out what happened based on corrupted and non-corrupted input.

http://unicode-doctor.myname.nl/