Imho, the problem with Lighthouse (and PageSpeed before it) isn't that they're not perfect, it's that they assign scores/grades.<p>When Google assigns a score to something, people understand it to mean highest score = best and start optimizing for the grade Google gives them, not for performance and user experience, which the grade is supposed to represent.<p>It would be more fruitful to list the issues and their severity but not add overall scores, because scores change the objective from fixing the problems to getting high scores. They also occasionally have bugs where they punish something with a worse score even though it is actually an improvement in the real world, discouraging people from doing the right thing ("I want my site to load faster, but Google is buggy and will rank me down if I do").
Wow, CSS system color keywords seem like a massive privacy leak. I just tested setting the property:<p><pre><code> background: Background;
</code></pre>
on an element, and then changing my Windows desktop background. The element immediately changes color to match my desktop. Then if I call getComputedStyle on the element, I get my desktop background color in javascript. This is in Firefox private mode, and apparently every website can read all my system colors. Why in the world is this enabled by default?<p><a href="https://www.w3.org/wiki/CSS/Properties/color/keywords#System_Colors" rel="nofollow">https://www.w3.org/wiki/CSS/Properties/color/keywords#System...</a>
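A minimal sketch of that probe, assuming a throwaway element (support for the legacy "Background" keyword varies by browser, so results may differ outside Firefox):<p><pre><code> const probe = document.createElement('div');
 probe.style.background = 'Background'; // CSS system color keyword
 document.body.appendChild(probe);
 // getComputedStyle resolves the keyword to the actual OS color:
 console.log(getComputedStyle(probe).backgroundColor); // e.g. "rgb(0, 120, 215)"
 probe.remove();
</code></pre>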
One of the "philosophers' stone" goals of the software industry is to completely replace human testing with automated testing.<p>Basically, I think automated testing is a very good thing, and we should <i>definitely</i> try to do as much of it as possible.<p>So we can clear the way for more useful and meaningful human testing.<p>I've always thought that the engineers in QC should be just as skilled and qualified as the ones building the product.<p>Part of what they should do, is design and build really cool automated tests, but I think that they should also be figuring out how to "monkey-test" the products, and get true users (this is 1000% required for usability and accessibility testing) banging on the product.<p>"True users" != anyone even <i>remotely</i> connected with the product development or testing, beyond a contract agreement.<p>But I'm kind of a curmudgeonly guy, and my opinions are not always welcomed.<p>I do write about why I prefer using test harnesses, as opposed to [automated] unit tests, here: <a href="https://medium.com/chrismarshallny/testing-harness-vs-unit-498766c499aa" rel="nofollow">https://medium.com/chrismarshallny/testing-harness-vs-unit-4...</a>
Having seen many other instances where chasing numbers has led to a worse outcome, I think "metrics-driven development" is an abomination that must be abolished. Unfortunately, management seems to really like the idea of turning everything into a number and increasing it at all costs --- I have fought against such things, and when I pointed out all the negatives associated with them, they would often agree; but then dismiss the thought completely with a response that essentially means "but it makes the numbers look better."<p>As the saying goes: "Not everything that counts can be counted, and not everything that can be counted, counts."
Cool, but this article would have been more useful with some practical examples of things Lighthouse doesn't catch. If the point is "this automated metric isn't perfect", well, no automated metric is, but how bad is it exactly?<p>I still don't have a sense for how bad Lighthouse is, because I've never disabled all keyboard events, disabled all mouse events, or changed the high-contrast stylings. The article almost makes the opposite point to me -- how bad can Lighthouse be if the only loopholes are things that would pretty obviously have accessibility issues?<p>The only useful examples I could see were the ones at the bottom of the article, which show up in Lighthouse next to the score.
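For reference, a loophole like "disable all keyboard events" is a one-liner; something like this hypothetical snippet kills keyboard interaction for the whole page while the markup still looks fine to an automated scanner:<p><pre><code> // Swallow every key press before it reaches the page or the browser default.
 window.addEventListener('keydown', e => {
   e.preventDefault();   // blocks typing, Tab navigation, space/arrow scrolling
   e.stopPropagation();  // keeps the page's own handlers from ever seeing it
 }, true);
</code></pre>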
This reminds me of a time when a team member was adamant about code coverage metrics. It felt like an intense amount of busy work that really didn’t improve our codebase or ensure thoughtful tests that actually, you know, caught stuff.<p>It was just some weird metric we were chasing that involved going through each function and superficially testing calls, regardless of the fact that some of the stuff we were testing gave us no confidence in the actual internals. I will not even mention that code coverage became this <i>number</i> that he/she believed was this standard, even though no attempt was made to build the codebase via TDD from the get-go (making chasing the code coverage metric after the fact laughable). What could I say? The person appealed to the authority of the code coverage metric.<p>But hey, we got that code coverage percentage up :)
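A made-up illustration of the kind of test this produces (Jest-style syntax, hypothetical function name): the call marks the lines as covered, the percentage goes up, and nothing about the behaviour is actually verified:<p><pre><code> test('calculateInvoice is covered', () => {
   // Calling the function bumps the coverage number...
   calculateInvoice({ items: [] });
   // ...but this assertion checks nothing about the result.
   expect(true).toBe(true);
 });
</code></pre>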
An interesting and entertaining article.<p>The most interesting part to me (as someone with vision problems) was the WebAIM link [1]. The biggest problem I have is with the almost total, blind adoption of low contrast (so often too low for me to even read), and sure enough the section about low contrast [2] says:<p>"found on 86.3% of home pages. <i>This was the most commonly-detected accessibility issue</i>."<p>My basic question then is: why do so many designers and websites choose to break the WCAG guidelines?<p>[1] <a href="https://webaim.org/projects/million" rel="nofollow">https://webaim.org/projects/million</a><p>[2] <a href="https://webaim.org/projects/million/#contrast" rel="nofollow">https://webaim.org/projects/million/#contrast</a>
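For context, the WCAG thresholds in [2] come from a specific formula; a rough sketch of the calculation (sRGB channel values 0-255, example colors are my own):<p><pre><code> // Relative luminance per WCAG 2.x, then the contrast ratio of two colors.
 function luminance([r, g, b]) {
   const lin = c => {
     c /= 255;
     return c <= 0.03928 ? c / 12.92 : ((c + 0.055) / 1.055) ** 2.4;
   };
   return 0.2126 * lin(r) + 0.7152 * lin(g) + 0.0722 * lin(b);
 }
 function contrastRatio(fg, bg) {
   const [hi, lo] = [luminance(fg), luminance(bg)].sort((a, b) => b - a);
   return (hi + 0.05) / (lo + 0.05);
 }
 // Light gray (#999) text on white fails the 4.5:1 minimum for body text:
 console.log(contrastRatio([153, 153, 153], [255, 255, 255]).toFixed(2)); // ~2.85
</code></pre>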
First, let me start by saying that this is a good article and sheds light on one of the challenges of accessibility adoption.<p>Tools like Lighthouse, axe-core, etc. run a subset of tests, which gives a false sense of security about accessibility. Similarly, a tool like Accessibility Insights for Web has a fast-pass option, which does the same thing: it runs a subset of tests to catch the most common issues on a website.<p>But it does not and cannot (at this moment) catch all the issues that require semantic analysis of a website, like checking that the alt text on an image is meaningful. For tests like those, a human is needed to perform a comprehensive review, something Accessibility Insights for Web offers as an Assessment option.<p>In my opinion, all of these tools are doing one thing well, and that is raising awareness of the problems that users with a disability face daily when trying to use a website. They are making accessibility a must. The tools still need more work, and I feel confident they will continue to improve. It all kind of comes down to how much time a development team puts in to make their website completely accessible, which ideally every team should budget and plan for.
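As a rough sketch of what that subset looks like in practice (assuming axe-core is already loaded on the page), the automated pass reports only the violations it can detect mechanically; it cannot judge whether an alt text actually describes the image:<p><pre><code> const results = await axe.run(document);
 for (const v of results.violations) {
   // e.g. "image-alt" flags a missing alt attribute, but alt="image123"
   // would pass even though it is meaningless to a screen-reader user.
   console.log(v.id, v.impact, v.nodes.length);
 }
</code></pre>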
Another technique to mess up keyboard users: don’t use the document scroll area, but make your own (and don’t focus it or anything in JavaScript). Thus the user will have to press Tab or click in the area before keyboard navigation keys will work. So for best results put a large number of focusable elements before the scrollable pane, so that the keyboard user must press Tab a large and unpredictable number of times before it works.<p>It’ll be something like this:<p><pre><code> <body>
<div tabindex=0></div>
<div tabindex=0></div>
<div tabindex=0></div>
<div style="position:absolute;top:0;left:0;right:0;bottom:0;overflow:auto">
…
</div>
</body>
</code></pre>
You could probably mess with other tabindexes (randomly jump through the document with Tab!) without Lighthouse baulking.<p>I was going to suggest adding `pointer-events: none` so that the user can’t just click to focus it, but that was already done!<p>(I mentioned focusing your scroll area element as something you need to do if you roll your own rather than using the document scroll area; but that’s not all you need to do. You also need to monitor blur events and change any .blur() calls, so as to avoid the document element ever retaining focus. It inherently depends on JavaScript, and is very fiddly to get fully right—I’m actually not sure if <i>anyone</i> gets it <i>fully</i> right; the interactions of focus and selection are nuanced and inconsistent, and it’s extremely easy to mess up accessibility software; I haven’t finished my research on the topic. I strongly recommend against the technique on web pages; web apps can <i>occasionally</i> warrant it.)
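(For completeness, a minimal sketch of the "focus your scroll area" part, with a hypothetical class name; this only handles the initial focus, not the blur/selection bookkeeping described above:)<p><pre><code> const pane = document.querySelector('.scroll-pane'); // hypothetical selector
 pane.tabIndex = 0;                    // keep it reachable with Tab as well
 pane.focus({ preventScroll: true });  // keyboard scrolling works right away
</code></pre>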
I just read an article a couple of days ago about how even YouTube widgets and stuff have huge a11y problems. I think it's time to admit Google is terrible at accessibility. All their devrels talk like it's important and have all these beautiful demos, but whenever you look behind the curtain at their products, it's terrible.