Just FYI, most, if not all, of this is built into JavaScript already, via the DOM. <a href="http://www.joezimjs.com/javascript/the-lazy-mans-url-parsing/" rel="nofollow">http://www.joezimjs.com/javascript/the-lazy-mans-url-parsing...</a>
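For the curious, here is a minimal sketch of the trick the linked article describes (the example values are just illustrative): assign the string to an anchor element's href and let the browser do the parsing.<p><pre><code> var a = document.createElement('a');
 a.href = 'http://www.google.com.br/search?q=test#results';

 a.protocol; // 'http:'
 a.hostname; // 'www.google.com.br'
 a.pathname; // '/search'
 a.search;   // '?q=test'
 a.hash;     // '#results'
</code></pre>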
Extracting the host, subdomain and TLD from a URL is hard, and they clearly oversimplified it.<p><pre><code> $.url('host', 'http://www.google.com.br'); // 'com.br'
$.url('sub', 'http://www.google.com.br'); // 'www.google'
$.url('tld', 'http://www.google.com.br'); // 'br'
</code></pre>
It will work for simple cases where the TLD has only one label, like www.google.com or www.usa.gov, but it will fail for any multi-label TLD, such as country-code second-level domains like .com.br.
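To illustrate (this is not the plugin's code): getting those splits right generally means consulting something like the Public Suffix List rather than counting dots. A toy version with a hard-coded suffix set, with hypothetical names:<p><pre><code> // Stand-in for the full Public Suffix List (publicsuffix.org).
 var suffixes = { 'com': true, 'gov': true, 'br': true, 'com.br': true };

 function splitHost(hostname) {
   var parts = hostname.split('.');
   // Walk from the left so the first match is the longest suffix.
   for (var i = 1; i < parts.length; i++) {
     var candidate = parts.slice(i).join('.');
     if (suffixes[candidate]) {
       return {
         tld: candidate,                        // 'com.br'
         host: parts[i - 1] + '.' + candidate,  // 'google.com.br'
         sub: parts.slice(0, i - 1).join('.')   // 'www'
       };
     }
   }
   return { tld: '', host: hostname, sub: '' };
 }

 splitHost('www.google.com.br');
 // { tld: 'com.br', host: 'google.com.br', sub: 'www' }
</code></pre>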
This has nothing to do with jQuery, stop the madness! haha<p><a href="https://github.com/component/component/wiki/Building-better-components" rel="nofollow">https://github.com/component/component/wiki/Building-better-...</a><p><a href="https://github.com/component/url" rel="nofollow">https://github.com/component/url</a>
Well, I really wasn't expecting this much feedback for this little library. I threw it up for kicks and went out for a movie, and when I came back I was surprised to see all the interest. It's just a simple parser I wrote for my projects, so it doesn't get too crazy with exact definitions of "TLD" or support for IPv4/IPv6 addresses as mentioned in the comments. It might be better to create a secondary, more heavyweight version for things like that.<p>Anyway, I've fixed/updated a bunch of things from the comments. If anyone finds any other bugs or missing features, it's probably best to leave me an issue on the GitHub project page here: <a href="https://github.com/websanova/js-url/issues" rel="nofollow">https://github.com/websanova/js-url/issues</a>.<p>The project has also been un-jquery-ified and renamed to js-url (thanks to Jay Adkisson: <a href="https://github.com/jayferd" rel="nofollow">https://github.com/jayferd</a>)<p>Thanks
Parsing URLs is difficult and error-prone. How does $.url() compare to furl? There are two furl(s) on GitHub:<p>(1) <a href="https://github.com/stricaud/furl" rel="nofollow">https://github.com/stricaud/furl</a><p>and<p>(2) <a href="https://github.com/gruns/furl" rel="nofollow">https://github.com/gruns/furl</a><p>furl(1) seems to handle a lot of edge cases, such as when the URL contains an IPv4/IPv6 address or when the URL is partial (e.g. missing the protocol), and it aims to be fast. furl(2) seems to be more flexible and geared toward inline modification of URLs.
Nice little utility, just a couple of issues that should be easy to fix:<p>- Breaks for protocols other than http/https, including protocol-relative URLs.<p>- Breaks on query string values that contain '='.
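On the second point, a sketch of one possible fix (not the plugin's actual code, just an illustration): split each key/value pair on the first '=' only.<p><pre><code> function parseQuery(qs) {
   var params = {};
   qs.replace(/^\?/, '').split('&').forEach(function (pair) {
     if (!pair) return;
     // Split on the first '=' only, so '=' inside a value is preserved.
     var idx = pair.indexOf('=');
     var key = idx === -1 ? pair : pair.slice(0, idx);
     var val = idx === -1 ? '' : pair.slice(idx + 1);
     params[decodeURIComponent(key)] = decodeURIComponent(val);
   });
   return params;
 }

 parseQuery('?next=/login?mode=admin&x=1');
 // { next: '/login?mode=admin', x: '1' }
</code></pre>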
As others have suggested, jQuery is overkill for this.<p>The best lightweight JS library I've found/used in the past is parseUri:<p><a href="http://blog.stevenlevithan.com/archives/parseuri" rel="nofollow">http://blog.stevenlevithan.com/archives/parseuri</a><p>Obviously, if you only need to analyze the current URL, you can use window/document.location, but there are times when you want to parse a URI for a page you aren't on. For instance, if you are analyzing HAR data and want to break URI "components" into columns so they are sortable.
It looks like this has been moved and is no longer a jQuery plugin, which is awesome. It's now, however, creating a global variable named `url` which is dangerous to say the least.
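One common way to soften that (just a sketch, assuming the library exposes a single `url` function): remember the previous value of the global and offer a noConflict()-style escape hatch, the same pattern jQuery uses.<p><pre><code> (function (global) {
   var previousUrl = global.url;

   function url(key, str) { /* parsing logic would live here */ }

   // Restore whatever 'url' pointed to before, and hand back this one.
   url.noConflict = function () {
     global.url = previousUrl;
     return url;
   };

   global.url = url;
 })(this);
</code></pre>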
I have OCD about code cleanliness... No semicolon at the end of line 26? Please excuse me as I proceed to gag.<p>But me being a douche aside, I'm not sure why you needed to extend from/bake into jQuery for this.
Good job. JavaScript has needed something like this for a while. I'm still kind of confused as to why browsers don't have good native support for URL parsing.