科技回声 (TechEcho)

A tech news platform built with Next.js, offering global tech news and discussion.


© 2025 科技回声. All rights reserved.

In search of the perfect URL validation regex

65 points · by lgmspb · almost 11 years ago

22 comments

to3m · almost 11 years ago

If you're going to allow dotted IPs you should really allow 32-bit IPs too, e.g., http://0xadc229b7, http://2915183031 and http://025560424667. (The validity of this last one was news to me, I must admit.)
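A quick sketch (mine, not from the comment) confirming that all three spellings name the same host; the `ipaddress` module does the dotted-quad conversion:

```python
import ipaddress

def parse_host_number(host: str) -> int:
    """Parse an IPv4 host written C-style: decimal, 0x-hex, or 0-octal."""
    if host.lower().startswith("0x"):
        return int(host, 16)
    if host.startswith("0") and len(host) > 1:
        return int(host, 8)  # a leading zero means octal, as in C
    return int(host, 10)

# All three spellings from the comment resolve to one address:
addrs = {str(ipaddress.IPv4Address(parse_host_number(h)))
         for h in ("0xadc229b7", "2915183031", "025560424667")}
assert len(addrs) == 1
print(addrs)
```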
TazeTSchnitzel · almost 11 years ago

Why use a regex? It's much simpler to write a URL validator by hand, speaking as someone who wrote a URL parser,[1] and fixed a bug in PHP's.[2]

Or, you know, use a robust existing validator or parser. Like PHP's, for instance.

[1] https://github.com/TazeTSchnitzel/Faucet-HTTP-Extension - granted, this deliberately limits the space of URLs it can parse, but it's not difficult to cover all valid cases if you need to

[2] https://github.com/php/php-src/commit/36b88d77f2a9d0ac74692a679f636ccb5d11589f
Dylan16807 · almost 11 years ago

Why are http://www.foo.bar./ and http://a.b--c.de/ supposed to fail?

The @stephenhay regex is just about perfect despite being the shortest. The subtleties of hyphen placement aren't very important, and this is a dumb place to filter out private IP addresses when a domain could always resolve to one. Checking whether an IP is valid should be a later step.
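For reference, the @stephenhay pattern being praised here is short enough to read in one line. It is reproduced below from memory of the comparison page, so treat it as an approximation rather than the canonical text:

```python
import re

# The short pattern attributed to @stephenhay (reproduced from memory):
# scheme, then "://", then one char that isn't whitespace or /$.?#,
# then anything non-whitespace.
STEPHENHAY = re.compile(r'^(https?|ftp)://[^\s/$.?#].[^\s]*$', re.IGNORECASE)

assert STEPHENHAY.match('http://www.foo.bar./')  # trailing root dot passes
assert STEPHENHAY.match('http://a.b--c.de/')     # double hyphen passes
assert STEPHENHAY.match('http://192.168.1.1')    # private IPs pass too
assert not STEPHENHAY.match('http://')           # no host at all fails
```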
eli · almost 11 years ago

At best this lets you conclude that a URL *could* be valid. Is that really useful? Is the goal here to catch typos? Because you'd still miss an awful lot of typos.

If you really want your URL shortener to reject bad URLs, then you need to actually test fetching each URL (and even then...)

As an aside, I'd instantly fail any library that validates against a list of known TLDs. That was a bad idea when people were doing it a decade ago. It's completely impractical now.
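A hedged sketch of the "actually fetch it" approach the comment suggests. `url_actually_works` is a hypothetical helper name, not anything from the thread, and a real shortener would also want to handle redirects, rate limits, and robots rules:

```python
import urllib.request
import urllib.error

def url_actually_works(url: str, timeout: float = 5.0) -> bool:
    """The only real test of a URL is fetching it (and even then...)."""
    try:
        req = urllib.request.Request(url, method="HEAD")
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return resp.status < 400
    except (urllib.error.URLError, ValueError, OSError):
        # Bad syntax, unknown scheme, DNS failure, timeout: all "invalid".
        return False

# Malformed input is rejected locally, without touching the network:
print(url_actually_works('definitely not a url'))
```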
bdarnell · almost 11 years ago

Another important dimension when evaluating these regexes is performance. The Gruber v2 regex has exponential (?) behavior on certain pathological inputs (at least in the Python re module).

There are some examples of these pathological inputs at https://github.com/tornadoweb/tornado/blob/master/tornado/test/escape_test.py#L20-29
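The classic shape of this failure mode, shown with a toy pattern rather than the Gruber regex itself: nested quantifiers force a backtracking engine to try exponentially many ways to split the input before concluding there is no match.

```python
import re

# Catastrophic backtracking in miniature: '(a+)+' can split a run of
# a's in exponentially many ways, and on a non-matching input the
# engine must rule out every one of them.
PATHOLOGICAL = re.compile(r'^(a+)+$')

assert PATHOLOGICAL.match('a' * 18)               # matches instantly
assert PATHOLOGICAL.match('a' * 18 + 'b') is None  # ~2^17 backtracks first
# Each extra 'a' roughly doubles the rejection time; at 30+ characters
# this hangs for minutes in CPython's re module.
```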
mdavidn · almost 11 years ago

Use a standard URI parser to break this problem into smaller parts. Let a modern URI library worry about arcane details like spaces, fragments, userinfo, IPv6 hosts, etc.

    uri = URI.parse(target).normalize
    uri.absolute? or raise 'URI not absolute'
    %w[ http https ftp ].include?(uri.scheme) or raise 'Unsupported URI scheme'
    # Etc
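The snippet above is Ruby; the same division of labor with Python's standard URI parser looks like this (a sketch, using the comment's scheme whitelist):

```python
from urllib.parse import urlsplit

def check(target: str) -> None:
    """Let the stdlib parser split the URI; only then apply policy."""
    parts = urlsplit(target)
    if not parts.scheme:
        raise ValueError('URI not absolute')
    if parts.scheme not in ('http', 'https', 'ftp'):
        raise ValueError('Unsupported URI scheme')
    # Etc.

check('https://example.com/a?b#c')  # passes silently
```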
MatthewWilkes · almost 11 years ago
Why no IPv6 addresses in the test cases?
VaucGiaps · almost 11 years ago
Why not put in some of the new TLDs as test cases... ;)
eridius · almost 11 years ago

John Gruber (of daringfireball.net) came up with a regex for extracting URLs from text (Twitter-like) years ago, and has improved it since. The current version is found at https://gist.github.com/gruber/249502.

I haven't tested it myself, but it's worth looking at.

Original post: http://daringfireball.net/2009/11/liberal_regex_for_matching_urls

Updated version: http://daringfireball.net/2010/07/improved_regex_for_matching_urls

Most recent announcement, which contained the Gist URL: http://daringfireball.net/linked/2014/02/08/improved-improved-regex
Buge · almost 11 years ago

Interestingly, it seems http://✪df.ws isn't actually valid, even though it exists. ✪ isn't a letter,[1] so it isn't allowed in internationalized domain names. I was looking at the latest RFC from 2010,[2] so maybe it was allowed before that. The owner talks about all the compatibility trouble he had after he registered it.[3] The registrar he used for it, Dynadot, won't let me register any name with that character, nor will Namecheap.

[1] http://www.fileformat.info/info/unicode/char/272a/index.htm

[2] http://tools.ietf.org/html/rfc5892

[3] http://daringfireball.net/2010/09/starstruck
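Two small checks one can run on that claim: U+272A really is categorized by Unicode as a symbol rather than a letter, and the rejection happens at the IDNA rules layer (RFC 5892), not in Punycode itself, which will happily encode any code point:

```python
import unicodedata

star = '\u272a'  # ✪ CIRCLED WHITE STAR
# Unicode general category 'So' = Symbol, other; IDNA letters are L*.
assert unicodedata.category(star) == 'So'

# Punycode, the byte-level encoding underneath IDNA, round-trips the
# label fine; it is the IDNA code-point rules that forbid registration.
label = star + 'df'
encoded = label.encode('punycode')
assert encoded.decode('punycode') == label
```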
mnot · almost 11 years ago

There is no perfect URL validation regex, because there are so many things you can do with URLs, and so many contexts to use them in. So it might be perfect for the OP, but completely inappropriate for you.

That said, there is a regex in RFC 3986, but that's for parsing a URI, not validating it.

I converted 3986's ABNF to a regex here: https://gist.github.com/mnot/138549

However, some of the test cases in the original post (the list of URLs there isn't available separately any more :( ) are IRIs, not URIs, so they fail; they need to be converted to URIs first.

In the sense of the WHATWG's specs, what he's looking for *are* URLs, so this could be useful: http://url.spec.whatwg.org

However, I don't know of a regex that implements that, and there isn't any ABNF to convert from there.
siliconc0w · almost 11 years ago

This is a good lesson in why you want to avoid writing your own regexes. Even something simple like an email address can be insane: http://ex-parrot.com/~pdw/Mail-RFC822-Address.html
lucb1e · almost 11 years ago

What's wrong with IP-address URLs? If they are invalid because it says so in some RFC, this is still not the ultimate regex. If you redirect a browser to http://192.168.1.1 it works perfectly fine.

And why must the root period behind the domain be omitted from URLs? Not only does it work in a browser (and people end sentences with periods), the domain should actually end in a period all the time, but it's usually omitted for ease of use. Only some DNS applications still require domains to end with root dots.
tshadwell · almost 11 years ago

I've put the test cases into a refiddle: http://refiddle.com/refiddles/53a736c175622d2770a70400
droope · almost 11 years ago

I just validate with this regex '^http' :P
JetSpiegel · almost 11 years ago

It has to match this valid URL: http://موقع.وزارة-الاتصالات.مصر
cobalt · almost 11 years ago

What's wrong with /([\w-]+:\/\/[^\s]+)/gi

It's not fancy but it will essentially match any URL.
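Ported to Python for illustration (the original is in JavaScript /.../gi form; Python has no g flag, so re.findall scans the whole string). The example also shows how loose the pattern is: trailing punctuation rides along with the match.

```python
import re

# The commenter's pattern, minus the capture group and JS delimiters:
# "word chars or hyphens, then ://, then anything non-whitespace".
LOOSE = re.compile(r'[\w-]+://[^\s]+', re.IGNORECASE)

text = 'see https://example.com/x and ftp://host/file, plus junk foo://bar'
matches = LOOSE.findall(text)
# Note the comma glued onto the ftp match, and that a made-up
# "foo" scheme is accepted: loose indeed.
assert matches == ['https://example.com/x', 'ftp://host/file,', 'foo://bar']
```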
Sir_Cmpwn · almost 11 years ago
When you have a hammer, everything looks like a nail.
CMCDragonkai · almost 11 years ago

What do the red vs green boxes mean?
timmm · almost 11 years ago

What flavor of regex are we making this in?
zAy0LfpBZLC8mAC · almost 11 years ago

WTF? When will people finally learn to read the spec, implement things based on the spec, and test things based on the spec, instead of just making up themselves what a URL is, or what HTML is, or what an email address is, or what a MIME body is, or ...

There are supposed URIs in that list that aren't actually URIs, there are supposed non-URIs in that list that are actually URIs, and most of the candidate regexes obviously must have come from some creative minds and not from people who should be writing software. If you just make shit up instead of referring to what the spec says, you urgently should find yourself a new profession; this kind of crap has been hurting us long enough.

(Also, I do not just mean the numeric RFC 1918 IPv4 URIs, which obviously are valid URIs but have been rejected intentionally nonetheless - even though that's idiotic as well, of course, given that (a) nothing prevents anyone from putting those addresses in the DNS and (b) those are actually perfectly fine URIs that people use, and I don't see why people should not want to shorten some class of the URIs that they use.)

By the way, the grammar in the RFC is machine-readable, and it's regular. So you can just write a script that transforms that grammar into a regex that is guaranteed to reflect exactly what the spec says.
lazyloop · almost 11 years ago

You do realize that RFC 3986 actually contains an official regular expression, right? http://tools.ietf.org/html/rfc3986#appendix-B
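As other comments note, that Appendix B expression parses rather than validates: it matches essentially any string and just carves it into components. The regex below is the one printed in RFC 3986, Appendix B, exercised on the RFC's own example URI:

```python
import re

# The parsing regex from RFC 3986, Appendix B (verbatim):
RFC3986 = re.compile(r'^(([^:/?#]+):)?(//([^/?#]*))?([^?#]*)(\?([^#]*))?(#(.*))?')

m = RFC3986.match('http://www.ics.uci.edu/pub/ietf/uri/#Related')
assert m.group(2) == 'http'              # scheme
assert m.group(4) == 'www.ics.uci.edu'   # authority
assert m.group(5) == '/pub/ietf/uri/'    # path
assert m.group(7) is None                # query (absent)
assert m.group(9) == 'Related'           # fragment

# It "validates" nothing: arbitrary text still matches, as a bare path.
assert RFC3986.match('not a uri').group(5) == 'not a uri'
```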