That's really unfortunate. And it's not just performance; it also interferes with OS-level URL handling like Android intents (and possibly FB's app links and iOS's new Extensibility).<p>I recently found this happening with Twitter's Android app. The user sees a link to player.fm and thinks it will open the native Player FM app if they have it installed, since it's registered to handle that URL pattern. But instead, the OS offers web browsers and Twitter as ways to open the link, because it's not really a player.fm link as presented to the user, but a t.co link. If the user then chooses a browser, the browser immediately redirects to the correct URL, which pulls up the intents menu again.<p>7 redirects could potentially be 7 popup menus for the user to navigate through.<p>The OS could pre-emptively follow redirects, but that would introduce considerable latency, since normally the menu is presented without any network call being made at all. Maybe the best solution for OSes is to present the menu immediately but still make the call in the background, so the menu can be updated if a redirect happens.<p>"I don't see any work happening in HTTP 2.0 to change it."<p>Probably the best HTML-level mechanism for dealing with it is the "ping" attribute, which gives servers a way to be notified of a click without an actual redirect. However, that's HTML and not HTTP, and these days apps are more popular HTTP clients than browsers, and apps rarely bother to implement things like that.<p>So there <i>are</i> probably things that could be done with the standard. Perhaps some distributed lookup table could ensure at most 1 redirect, by caching the redirect sequence and returning the final target with the first request. That ignores any personalisation that goes on, but these should generally be permanent redirects without personalisation anyway.
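<p>Here's a rough sketch of that lookup-table idea in Python (my own toy, not anything from a spec; it assumes the `requests` library, resolves the chain once with HEAD requests, and answers later lookups from the cache):<p><pre><code># Sketch of "cache the redirect sequence, serve the final URL directly".
# A real shared table would need expiry and would have to skip
# personalised redirects; some servers also reject HEAD with a 405.
import requests
from urllib.parse import urljoin

_final_url_cache = {}  # hypothetical stand-in for a distributed lookup table

def resolve_final_url(url, max_hops=7):
    if url in _final_url_cache:
        return _final_url_cache[url]  # no redirects at all for the user
    current = url
    for _ in range(max_hops):
        r = requests.head(current, allow_redirects=False, timeout=5)
        location = r.headers.get("Location")
        if r.status_code not in (301, 302, 303, 307, 308) or not location:
            break
        current = urljoin(current, location)  # Location may be relative
    _final_url_cache[url] = current
    return current
</code></pre>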
Pretty sure I called this one a few years ago.<p><a href="http://joshua.schachter.org/2009/04/on-url-shorteners" rel="nofollow">http://joshua.schachter.org/2009/04/on-url-shorteners</a>
We could put a stop to marketing redirects tomorrow if we didn't allow redirects to set cookies.<p>(Or perhaps only allowed a cookie if the redirect was served by the same domain as the target domain.)
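<p>As a rough illustration of that second rule (my own sketch, not anything in the cookie spec; a real client would compare registrable domains, not raw hostnames):<p><pre><code># Sketch: only honour Set-Cookie on a redirect when the redirecting
# host matches the target host.
from urllib.parse import urlparse

def should_honor_set_cookie(request_url, status, headers):
    if status not in (301, 302, 303, 307, 308):
        return True  # not a redirect: normal cookie rules apply
    location = headers.get("Location", "")
    redirecting_host = urlparse(request_url).hostname
    # a relative Location stays on the same host
    target_host = urlparse(location).hostname or redirecting_host
    return redirecting_host == target_host
</code></pre>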
I think the most practical solution to this, requiring only a change in practice and not in standard, would be for link shorteners to start doing HEAD requests on the URLs they shorten, unwrapping them so the shortened link is canonically correct whenever the result is a permanent redirect.<p>Yeah, some setups might have problems with this, but they're probably abusing the 301 status code to begin with.
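<p>Something like this, as a sketch (assuming Python's `requests` library; only permanent redirects get unwrapped):<p><pre><code># Sketch: canonicalize a URL at shorten time by unwrapping 301/308s.
import requests
from urllib.parse import urljoin

def canonicalize_before_shortening(url, max_hops=5):
    for _ in range(max_hops):
        r = requests.head(url, allow_redirects=False, timeout=5)
        location = r.headers.get("Location")
        if r.status_code not in (301, 308) or not location:
            break  # not a *permanent* redirect, so shorten as-is
        url = urljoin(url, location)  # Location may be relative
    return url  # shorten this, not the wrapper the user pasted
</code></pre>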
> Redirects are being abused and I don't see any work happening in HTTP 2.0 to change it.<p>I agree that this is an unfortunate pattern, but what exactly could the HTTP spec do to change it? The only thing I can think of is limiting the number of chained redirects, although I don't see browsers implementing that if longer chains are even remotely common.
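<p>Clients can at least cap chains themselves today; a sketch with Python's `requests` (the library's own knob, nothing in HTTP itself, and the URL is hypothetical):<p><pre><code># Sketch: refuse to follow more than a few redirects client-side.
import requests

session = requests.Session()
session.max_redirects = 3  # give up on longer chains

try:
    resp = session.get("http://short.example/abc", timeout=5)
    print(resp.url)  # final destination after at most 3 hops
except requests.exceptions.TooManyRedirects:
    print("redirect chain too long; refusing to follow it")
</code></pre>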
This is classic "tragedy of the commons" behavior, where each individual group with a link shortener benefits from encouraging and enforcing its usage (ability to kill malicious links easily, user tracking, etc).<p>I'm not sure this can be resolved until users are educated sufficiently on the long-term adverse effects of link-shortening services (link rot, privacy concerns, slow/broken redirects, etc).<p>For change to happen, the demand for direct links (generated explicitly by blog posts like this one, or implicitly by higher bounce rates due to long loading times) will need to outweigh the benefits to the organizations building them.<p>Edit:<p>Even if there is evidence showing this, why should _I_ be the one to give up my link-shortener service when doing so would make no significant dent in the overall problem, which involves tens or hundreds of these services?
This is propagated by people who don't really understand URLs blindly reposting links that have already been wrapped in a URL shortener through services that wrap them in another one. Whenever I repost links, I repost only the URL of the final page, stripping off anything unnecessary. Sadly, the trend of browsers hiding URLs, or pieces of them, isn't helping the situation either.<p>I don't think this can be solved technologically. HTTP redirects are not difficult to detect, but a lot of these shorteners use JavaScript and/or meta tags to accomplish redirection, and that's becoming increasingly common. The solution is better-educated users who don't create chains of shortened URLs.
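<p>Meta-refresh redirects are at least detectable without a JS engine; a rough regex-based sketch of my own (assume it misses plenty of real-world markup, and JavaScript redirects would still need an actual browser engine):<p><pre><code># Sketch: spot meta http-equiv="refresh" redirects in fetched HTML.
import re
import requests

META_REFRESH = re.compile(
    r'<meta[^>]+http-equiv=["\']?refresh["\']?[^>]*'
    r'content=["\']?\s*\d+\s*;\s*url=([^"\'>]+)',
    re.IGNORECASE,
)

def find_meta_refresh_target(url):
    html = requests.get(url, timeout=5).text
    match = META_REFRESH.search(html)
    return match.group(1).strip() if match else None
</code></pre>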
The user experience on mobile with multiple URL-shortener redirects is beyond annoying. Every new HTTP connection opened over a marginal cell or wifi connection can stall or fail, even when the actual destination site is up and reachable.
I'm impressed by his proposed solution: <a href="http://uniformresourcelocatorelongator.com/" rel="nofollow">http://uniformresourcelocatorelongator.com/</a>
> Every redirect is a one more point of failure, one more domain that can rot, one more server that can go down, one more layer between me and the content.<p>These are all good reasons, but are there any real users who are actually being affected by these issues? If it is just a theoretical concern, then I don't think it is reasonable to call the situation "officially out of control".
I've lived in the Philippines for a while, and the big telco here, PLDT, has <i>terrible</i> DNS. t.co links are the most obvious pain point: they just won't resolve 90% of the time. It's incredibly obnoxious, especially on a mobile device where DNS settings aren't (easily) exposed.
A little off topic, but I seem to recall seeing, probably some years ago, a post on HN about someone's reversible URL-shortening algorithm that could convert the shortened URL back to the original. Can't find it now; anyone recall this, or did I dream it?
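<p>A fully reversible scheme basically has to be compression rather than a database lookup; a toy sketch in Python (with the obvious catch that most URLs don't come out shorter):<p><pre><code># Toy sketch of a reversible "shortener": compress, then base64url-encode.
# Decoding recovers the original URL with no lookup table at all.
import base64
import zlib

def shorten(url):
    return base64.urlsafe_b64encode(zlib.compress(url.encode())).decode().rstrip("=")

def unshorten(code):
    padded = code + "=" * (-len(code) % 4)  # restore stripped base64 padding
    return zlib.decompress(base64.urlsafe_b64decode(padded)).decode()
</code></pre>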
I couldn't find anything that would output something similar to the redirects image shown in this post, so I wrote a small script in Node to do that. It looks like this: <a href="http://cl.ly/image/3T3e462G1C3d" rel="nofollow">http://cl.ly/image/3T3e462G1C3d</a><p>Here's the script: <a href="https://gist.github.com/akenn/7ca7e99a51c3a4abc049" rel="nofollow">https://gist.github.com/akenn/7ca7e99a51c3a4abc049</a><p>Speaking of which, what software did this guy use? Is there a bash script that's better than what I wrote?
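<p>For comparison, roughly the same trace in a few lines of Python (a sketch using the `requests` library; `curl -sIL` prints similar information from the shell):<p><pre><code># Sketch: print each hop in a redirect chain, like the image above.
import sys
import requests

resp = requests.get(sys.argv[1], timeout=10)
for hop in resp.history:           # every redirect response along the way
    print(hop.status_code, hop.url)
print(resp.status_code, resp.url)  # the final destination
</code></pre>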
> What do you think?<p>I think URL un-shortening should be done in the browser, on URLs that were shortened with a standard, reversible method, so your browser can tell you where the URL will go.<p>Shortening services are ridiculous and dangerous.