How do you resolve offering a permanent URL for something whilst also complying with DMCA takedowns for copyrighted material, when the end service may have removed the content but you continue to publish it?
There is already http://purl.org

See also http://en.wikipedia.org/wiki/Persistent_Uniform_Resource_Locator; this is a "protocol".
Sounds cool, but only as long as purl.ly itself is up. We've seen what happens with single-point-of-failure services like this when Twitter's link shortener t.co was down.
This is a good idea in theory, but not in the form of a company. It would require something like a consortium in which Google, Microsoft, Twitter, Facebook, ... and a few others federate to create a service, footing the bill with the intention of keeping it up for as long as possible.
Tried the service and hit two errors first:

I cannot add a URL that does not have "http".

I cannot type a URL that has "https".

It would be nice to be able to add https addresses, and to add them in any form.

Also an idea: make it a web browser plugin so I can change the URL in the browser and add it to my link library. Then it's even better; I don't like detours.

Question: what happens if I purl.ly a URL once, then the content changes and I want to save the new content as well (different content, same URL)?

Anyway, I really like the concept, keep going!

Annelie @detectify
I don't like the redirect page: it's a large, heavy page with a forced delay and what look like placeholders for a ton of ads.

It might be better packaged as something blogs and forums can automagically implement for a fee, instead of trying to make money off ads.
I did something similar as a weekend project; I haven't checked up on it in a while, but it seems to still work: http://const.it/http://online.wsj.com/article/SB10001424052970204349404578101393870218834.html

The original idea was just to provide a consistent link that would fall back to a cache when necessary, and back to the original content for reddit/HN-type traffic. Then it made sense to do some paywall busting and readability functionality on top of it, and those features overshadowed the original concerns.
Thanks for all of the feedback, everyone... it has been very exciting to actually "launch" something and get some feedback. You guys did a swell job of uncovering some bugs and edge cases. I'm going to keep pushing and at least get it working as advertised.

I did some research ahead of time and did come across purl.org, but had no idea about WebCite and a couple of the others. Yes, my project is basically the same as those.

Does this work as a single-point-of-failure company? Who knows, but it's been fun.
If you have an interest in permanent identifiers, you might also be interested in the Archival Resource Key (ARK) standard https://wiki.ucop.edu/display/Curation/ARK and the EZID service http://n2t.net/ezid/. Disclaimer: I work at the digital library where the standard and the service are developed and maintained.
It's pretty quick and dirty... purl.ly links will detect a 404 at the destination and redirect you to the Google cache instead. That works great if the page is in the Google cache, but it may not be.

Next I'll get around to caching the full content of each destination at the time of purl.ly creation, and serve that if Google is missing it.
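For anyone curious, that fallback fits in a few lines. Here's a minimal sketch, assuming a Flask-style handler and Google's public webcache.googleusercontent.com cache-lookup URL; the route and names are illustrative, not purl.ly's actual code.

    # Sketch of the 404-to-Google-cache fallback described above.
    # The Flask framing is an assumption, not purl.ly's implementation.
    from urllib.parse import quote

    import requests
    from flask import Flask, redirect

    app = Flask(__name__)

    GOOGLE_CACHE = "http://webcache.googleusercontent.com/search?q=cache:"

    @app.route("/<path:target>")
    def resolve(target):
        url = "http://" + target
        try:
            resp = requests.head(url, allow_redirects=True, timeout=5)
            if resp.status_code != 404:
                return redirect(url)  # destination still alive: pass through
        except requests.RequestException:
            pass  # treat network errors like a dead destination
        # Destination is gone (or unreachable): try Google's cached copy.
        return redirect(GOOGLE_CACHE + quote(url, safe=""))

Checking at redirect time like this is cheap but racy; caching a copy at creation time, as you plan, removes the dependency on Google still having the page.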
It seems that some URLs with a query string give an error.

E.g. make a purl for http://www.reddit.com/top/?sort=top&t=hour, which generates http://purl.ly/www.reddit.com/top/?sort=top&t=hour, which gives a 404.
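A guess at the cause, for what it's worth: most web frameworks split "?sort=top&t=hour" off into a separate query-string structure, so a route that only captures the path would look up "www.reddit.com/top/" and miss. A minimal sketch of the fix, assuming a Flask-style handler (the names here are hypothetical):

    # Hypothetical fix: rebuild the lookup key from path + query string,
    # since a path-only route parameter stops at the "?".
    from flask import Flask, request

    app = Flask(__name__)

    @app.route("/<path:target>")
    def resolve(target):
        key = target
        if request.query_string:
            key += "?" + request.query_string.decode()
        # ... look up `key` in the purl table and redirect as usual ...
        return key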