Why would you want a constructor of type T that returns something that is not even an object of type T? If you need the "return null on error" behaviour (so that you can forget to check for null and let your execution stop with "TypeError: Cannot read properties of null" way down the line, instead of right here, where you could fix it immediately before even shipping the code), then just write a tryParse() wrapper for that!<p>Now, to be fair, having a public constructor for URL that takes just a string, with parsing semantics, is arguably an anti-pattern, and a static factory function would indeed serve much better. A proper public URL constructor should instead take a (scheme, authority, path, query, fragment) tuple and validate its constituents. The private URL constructor could be the one without validation, and that's what parse() and the public constructor would call internally.
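A minimal sketch of such a wrapper (tryParseURL is a name chosen here, not a built-in):

```javascript
// Returns a URL instance on success and null on failure, so the caller
// has to handle the null branch explicitly instead of catching.
function tryParseURL(input, base) {
  try {
    return new URL(input, base);
  } catch {
    return null;
  }
}

console.log(tryParseURL("https://example.com/path") instanceof URL); // true
console.log(tryParseURL("not a url"));                               // null
```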
Swift, and I imagine other modern languages, do this the right way. Their idiomatic error passing is via try/catch, which is the best error handling pattern we've figured out so far, but instead of bloating the API for every case where somebody might theoretically not care about the specific error and isn't doing multiple failable operations in close proximity, it just adds some sugar to get the result described in this article:<p>`let fooOrNil = try? Foo("blah")`<p>Edit: Updated URL to Foo, since Swift happens to use optional return for its own 'URL' type, as pointed out.
I kinda agree that `new URL()` need not bail out when the URL is invalid. Both practices exist in the spec: `new Date('foo')` returns an Invalid Date, `parseInt('foo')` returns NaN, while `new Array(-1)` throws a `RangeError`. Perhaps there is a need for URL instances representing invalid URLs? Then we are back to an Either<x, y> return type.<p>However, it is the `try...catch` pattern that messes with `const`, not the URL constructor. It is very annoying every time I have to wrap an existing block in a try...catch and inevitably lose the const-ness of some variables, unless I wrap everything again in a function and `return` from the try block when things go normally.
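To illustrate the const problem and the wrapper-function workaround described above (the example URLs are made up):

```javascript
// The const problem: with try/catch the binding must live outside the
// block, so it has to be `let` even though it is never reassigned later.
let urlA;
try {
  urlA = new URL("https://example.com/a");
} catch {
  urlA = null;
}

// The wrapper-function workaround: return from inside the try block
// (here via an IIFE) and const-ness is restored.
const urlB = (() => {
  try {
    return new URL("https://example.com/b");
  } catch {
    return null;
  }
})();

console.log(urlA.pathname, urlB.pathname); // "/a" "/b"
```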
just make a utility function! don't see what the fuss is about. the original API choice is arguably not great, but haphazardly adding multiple ways to do the same thing does not improve the API either.
The best solution was already there: have a canParse() to do up-front if-statement checks, and use the language's exception handling for the rest.<p>I understand people might not like exceptions, but special-casing certain calls like this just makes the language harder to use.
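For reference, WHATWG URL did gain exactly this: `URL.canParse()` is available in recent browsers and Node.js 18.17+. A sketch of the up-front check:

```javascript
// URL.canParse() performs the validity check without allocating a URL
// object; exceptions remain for the truly unexpected cases.
const input = "https://example.com/page";
if (URL.canParse(input)) {
  const url = new URL(input); // guaranteed not to throw now
  console.log(url.hostname);  // "example.com"
}

console.log(URL.canParse("::not a url::")); // false
```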
I feel like it shouldn't be the class's responsibility to provide a method that returns a nullable/falsey value. We should have a convenient one-line syntax around try/catch, something like pipe operators and/or pattern matching.
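No such one-liner exists in the language today, but a small higher-order helper gets close; `attempt` is a hypothetical name sketching what that sugar could look like:

```javascript
// Runs fn and converts any thrown error into a fallback value (null by
// default), turning the try/catch dance into a single expression.
const attempt = (fn, fallback = null) => {
  try {
    return fn();
  } catch {
    return fallback;
  }
};

console.log(attempt(() => new URL("nope")));               // null
console.log(attempt(() => new URL("https://a.dev")).host); // "a.dev"
```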
It's annoying that none of these end up parsing quite like the HTML inside a website.<p>For example, on google.com, you can find:<p><img class="lnXdpd" alt="Google" height="92" src="/images/branding/googlelogo/2x/googlelogo_color_272x92dp.png"<p>But open the console and run this, and it will throw:<p>let x = new URL("/images/branding/googlelogo/2x/googlelogo_color_272x92dp.png");<p>You're supposed to address this by passing the base URL as the second argument, but correctly obtaining that base can be brittle. If you're browsing a folder, the whole document.location is the base URL, but if you're browsing a file (i.e. the path ends in .php or .html) it isn't. If you're taking user-supplied URLs, your browser's URL bar adds https:// automatically so people don't think about it, but they'll leave it out of an absolute URL, and now you need to detect and add it when it's missing.
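A sketch of the base-URL fix: relative paths like the one in that img tag only parse when a base is supplied as the second argument. In a browser you would typically pass document.baseURI; a literal base stands in for it here.

```javascript
// Without the second argument this exact call throws "Invalid URL";
// with a base, the relative path resolves against it.
const base = "https://www.google.com/";
const logo = new URL(
  "/images/branding/googlelogo/2x/googlelogo_color_272x92dp.png",
  base
);
console.log(logo.href);
// "https://www.google.com/images/branding/googlelogo/2x/googlelogo_color_272x92dp.png"
```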
I encountered another pitfall when using `new URL` to check whether a text contains URLs. It accepts strings like "example:" and "next:" as valid URLs, because it interprets the text before the colon as a custom protocol. This is a real edge case in the spec, because with the common protocols, a bare "http:" or "https:" is not accepted as a valid URL.
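This can be demonstrated directly (`parses` is a throwaway helper written for this example):

```javascript
// Anything before the first colon is treated as a scheme, so bare custom
// schemes parse fine, while the "special" schemes (http, https, ws, wss,
// ftp, file) require a host and fail on their own.
const parses = (s) => {
  try {
    new URL(s);
    return true;
  } catch {
    return false;
  }
};

console.log(parses("example:")); // true  (custom scheme, empty path)
console.log(parses("next:"));    // true
console.log(parses("http:"));    // false (special scheme with no host)
console.log(parses("https:"));   // false
```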
I don’t see how it changes anything. You’re still handling a second branch as if an error was thrown; it’s just called null.<p>Practically, though, I like it, because JavaScript’s error handling sucks. I generally avoid using try/catch as a first-class member of my software.
Another reason why the "errors as values" approach to error handling is superior.<p>Compare<p><pre><code> let url = Url::parse(str)?;
</code></pre>
To<p><pre><code> let url; // invalid state
 try {
   url = new Url(str)
 } catch (e) {
   return ...
 }</code></pre>
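For the record, the WHATWG standard has since added `URL.parse()` (Node.js 22+, recent browsers), which returns a URL or null in exactly this errors-as-values style. A shim for older runtimes might look like this, with `parseURL` as a made-up name:

```javascript
const parseURL = (input, base) => {
  // Prefer the native static method where it exists
  if (typeof URL.parse === "function") return URL.parse(input, base);
  try {
    return new URL(input, base);
  } catch {
    return null;
  }
};

const url = parseURL("https://example.com/x");
if (url === null) {
  // handle the failure branch right here, no try/catch block needed
} else {
  console.log(url.pathname); // "/x"
}
```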
Great! Now there are three different ways to call this API! Just because one dude has a different preferred coding style...<p>Now, arguably the original API wasn’t the best, but this is really not the way to design an API. There should preferably only be one way to do things and consistency with the rest of the library should be kept. Finally, don’t forget about browser compatibility.<p>As many others already argued, the root cause can be solved with just a sprinkle of syntax sugar to aid with try/catch.
Recently ran into this where we wanted to avoid new URL for the memory overhead, and was blown away that JavaScript's URL does not contain a static parse method. Things like this are why people working with real languages make fun of us.