It gets even better...<p>And Chrome was good, and MSIE wasn't, so webmasters served bad pages to MSIE. Microsoft was not happy. So they created Edge. Edge was good, but Microsoft feared webmasters would treat it like MSIE. So Microsoft Edge pretended to be Chrome to get the good pages.<p>Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/39.0.2171.71 Safari/537.36 Edge/12.0
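You can see why Edge had to do this from the string itself: naive sniffing that just looks for the substring "Chrome" classifies Edge's UA as Chrome. A minimal sketch (the sniffing functions are illustrative, not any real site's code):

```python
# The Edge 12 user agent quoted above. It contains "Chrome" and "Safari"
# tokens on purpose, so substring-based sniffing misidentifies it.
EDGE_UA = ("Mozilla/5.0 (Windows NT 10.0; WOW64) "
           "AppleWebKit/537.36 (KHTML, like Gecko) "
           "Chrome/39.0.2171.71 Safari/537.36 Edge/12.0")

def naive_browser_sniff(ua: str) -> str:
    """Naive substring sniffing, the kind webmasters actually shipped."""
    if "Chrome" in ua:
        return "Chrome"
    if "Safari" in ua:
        return "Safari"
    return "unknown"

def careful_browser_sniff(ua: str) -> str:
    """Check the most specific token first: Edge, then Chrome, then Safari."""
    for token, name in (("Edge/", "Edge"), ("Chrome/", "Chrome"), ("Safari/", "Safari")):
        if token in ua:
            return name
    return "unknown"

print(naive_browser_sniff(EDGE_UA))    # "Chrome" -- Edge gets the Chrome pages
print(careful_browser_sniff(EDGE_UA))  # "Edge"
```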
And Alan Kay saw this coming from a mile away and said: "What total stone-age BS this is. We already did it better at PARC." Instead of sending shitty text files to rendering engines that each parse them their own way, we should send objects. Every object should have a URL, and users should interact with those objects. So he teamed up with David A. Smith and six others and they made it happen... and it had 2D objects, and it had a 3D virtual reality where objects from different servers interacted, and everybody saw it was cool as hell, but nothing came of it, because the world is path-dependent and network effects rule.<p><a href="http://wiki.c2.com/?OpenCroquet" rel="nofollow">http://wiki.c2.com/?OpenCroquet</a><p><a href="https://en.wikipedia.org/wiki/Croquet_Project" rel="nofollow">https://en.wikipedia.org/wiki/Croquet_Project</a><p><a href="https://www.youtube.com/watch?v=XMk9IGwuRmU" rel="nofollow">https://www.youtube.com/watch?v=XMk9IGwuRmU</a><p>TL;DR: The future was already here, but it could not communicate with the present.
This is interesting, because at every step along the way, each actor took the locally optimal step -- webmasters wanted to serve working pages to their users, and new browser vendors wanted their users to get pages with all the supported features.<p>Yet, in the end, we ended up with a mess for everybody. What could have been done differently to reach a good solution? I guess having universally defined standards that everyone complied with would have helped, so a browser could just say "I support HTML 1.3".
I remember that in the very early days of Firefox, some websites would refuse to serve pages to anything that wasn't Internet Explorer. I did not see the point of that, and I was not amused.<p>Firefox had no problem displaying those pages, so I had to install a plugin that made Firefox pretend to be Internet Explorer, just so I could see the web page.<p>I'm glad those days are over.
Anyone doing anything with user agents should use ua-parser[0]. Don't even bother trying to do any of this yourself.<p>If ua-parser doesn't exist in your language, just pull the YAML file out of uap-core. That defines the regexes you should use and how they translate to browser versions (and OS versions and devices).<p>[0] <a href="https://github.com/ua-parser" rel="nofollow">https://github.com/ua-parser</a>
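For a language without a ready-made port, the regexes.yaml approach boils down to trying an ordered list of (pattern, family) rules and taking the first match. A stdlib sketch of that idea (the two or three rules below are simplified stand-ins for the real uap-core file, not copies of it):

```python
import re

# Simplified stand-ins for entries in uap-core's regexes.yaml.
# Order matters: more specific families must come before "Chrome"/"Safari".
USER_AGENT_RULES = [
    (re.compile(r"Edge/(\d+)\.(\d+)"), "Edge"),
    (re.compile(r"Chrome/(\d+)\.(\d+)"), "Chrome"),
    (re.compile(r"Version/(\d+)\.(\d+).*Safari/"), "Safari"),
]

def parse_user_agent(ua):
    """Return (family, major, minor) for the first matching rule."""
    for pattern, family in USER_AGENT_RULES:
        m = pattern.search(ua)
        if m:
            return family, m.group(1), m.group(2)
    return "Other", None, None

ua = ("Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 "
      "(KHTML, like Gecko) Chrome/39.0.2171.71 Safari/537.36 Edge/12.0")
print(parse_user_agent(ua))  # ('Edge', '12', '0')
```

The real file has hundreds of rules maintained against browsers in the wild, which is exactly why rolling your own list is a losing game.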
It's quite interesting how user agent strings have changed and become more bloated with time.<p>Other fun facts:<p>- Chrome on iOS reports its Chrome version (e.g. 64.0.36), with no way to get the underlying Safari engine version.<p>- Android WebViews have replaced one UA string pattern with another close to three times (pre-KitKat, KitKat till Marshmallow, and one for Marshmallow and above).<p>- Chrome continues to add a "WebKit" version to its UA, even after having forked to Blink. Since Chrome 27, though, the WebKit version always says "537.36".<p>Source: I wrote a library that generates user agent strings programmatically -
<a href="https://github.com/pastelsky/useragent-generator" rel="nofollow">https://github.com/pastelsky/useragent-generator</a>
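The frozen WebKit token is easy to check by pulling the AppleWebKit and Chrome versions out of a modern Chrome UA. A small sketch (the UA string below is a representative example, not captured from a real device):

```python
import re

# A representative Chrome-on-Windows UA string (illustrative, not real-device).
ua = ("Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 "
      "(KHTML, like Gecko) Chrome/96.0.4664.110 Safari/537.36")

webkit = re.search(r"AppleWebKit/([\d.]+)", ua).group(1)
chrome = re.search(r"Chrome/(\d+)", ua).group(1)

# Since Chrome 27 (after the Blink fork) the WebKit token is frozen,
# so it carries no information about the actual engine version.
print(chrome, webkit)  # 96 537.36
```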
Some more fun tidbits:<p>> ProductSub returns 20030107 for Chrome and Safari, because that's the release date for Safari, which used an Apple fork of WebKit. Chrome also uses this fork. For Firefox, it's 20100101. I don't know why.<p>> Vendor returns "Google Inc." for Chrome, but undefined for everything else.<p>> Navigator can tell if your device has a touch screen<p>> Navigator can tell how many logical cores you have<p>> appCodeName always returns "Mozilla" and appName always "Netscape"<p>> Navigator can tell if you're using: Wi-Fi, Ethernet, cellular, Bluetooth, or WiMAX<p>> Navigator knows how much RAM you have<p>> And the exact plugins you're using. A Firefox useragent won't hide 'type':'application/x-google-chrome-pdf'<p>> Your screen can be shared through navigator -- without your permission<p>> Languages are set as either `en-US` or `en`, which differentiates American from British English<p>> Your battery can be acpi'd by Navigator<p>> File permissions can be read, revealing usernames<p>And this is just navigator; wait till you see all the fun things you can do with JavaScript and canvas.
> What's your favorite web browser?<p>Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US) AppleWebKit/525.13 (KHTML, like Gecko) Chrome/0.2.149.27 Safari/525.13
The OP is from 2010. For those wondering what sort of user agent a brand-new browser engine would adopt in this era, see this discussion about inventing a UA for Servo, which involved collecting data from popular sites in the wild to see how they treat UAs: <a href="https://github.com/servo/servo/issues/4331" rel="nofollow">https://github.com/servo/servo/issues/4331</a><p>TL;DR: you can see the end result for each platform here: <a href="https://github.com/servo/servo/blob/2d3771daab84709a6152c9b56c43bad2b280b2ab/components/config/opts.rs#L456" rel="nofollow">https://github.com/servo/servo/blob/2d3771daab84709a6152c9b5...</a>, and it looks like "Mozilla/5.0 (Windows NT 6.1; Win64; x64; rv:55.0) Servo/1.0 Firefox/55.0"
Saying "... and used KHTML" glosses over the entire Konqueror project and the fact that Konqueror existed long before the first release of Safari. I was using Konqueror on a KDE 2.0 desktop quite happily for a while.
The userAgent property has been aptly described as “an ever-growing pack of lies” by Patrick H. Lauke in W3C discussions. (“or rather, a balancing act of adding enough legacy keywords that won’t immediately have old UA-sniffing code falling over, while still trying to convey a little bit of actually useful and accurate information.”) [<a href="https://superuser.com/questions/1174028/microsoft-edge-user-agent-string" rel="nofollow">https://superuser.com/questions/1174028/microsoft-edge-user-...</a>]
I once created a similar problem. I built a tracking and split-testing system designed around a list of features activated during a page load. So a single page load might be described like:<p>root,signin,bluebutton<p>Where bluebutton was a design we were testing for our signin page. Of course, once bluebutton worked and had run for a while, everyone was afraid to change it in case there was a dependency of some kind. So the Facebook login that replaced the old signin would look like:<p>root,signin,bluebutton,fbookredirect<p>Even though no sign-in page was shown, let alone a blue button.
For me, the page isn’t loading. Here’s a Google cache of it:<p>Text-only cache: <a href="http://webcache.googleusercontent.com/search?q=cache:maxiNwj6M34J:https://webaim.org/blog/user-agent-string-history/&num=1&client=safari&hl=en&gl=us&strip=1&vwsrc=0" rel="nofollow">http://webcache.googleusercontent.com/search?q=cache:maxiNwj...</a><p>Edit: The full-version cache is broken for me as well!
I'm guessing that today, in the age of the modern web, user agent strings are no longer so relevant and can basically be set to anything?
The title is simply false. I read the article, and it does present interesting history from the browser wars. However, any cursory glance at web server logs will show that sometimes the user agent string is blank, or it starts with "MobileSafari" or "UrlTest". The user agent string is client-generated and can be anything the client wants.
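This is trivial to verify from the client side: the User-Agent header is just a header the client fills in. A stdlib sketch (the request is only constructed, never sent, and the UA value is arbitrary by design):

```python
import urllib.request

# The User-Agent header is entirely client-controlled; the server simply
# receives whatever string the client chooses to send (or nothing at all).
req = urllib.request.Request(
    "http://example.com/",
    headers={"User-Agent": "UrlTest"},  # or "", or "MobileSafari", or anything
)

# urllib stores header names capitalized this way ("User-agent").
print(req.get_header("User-agent"))  # UrlTest
```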