Selenium tests are inherently slow, unreliable, and flaky. They have been the bane of developers at every employer I've had. Do yourself a favor: write React and test your components without a browser driver, in good ol' JS with the occasional JSDom shim. That removes almost the entire need for Selenium, which should be reserved for only the faintest of smoke tests. And please, if you do have to use Selenium, use headless Firefox, because PhantomJS is very bad software.
I currently manage a rather large test suite (around 700 different tests) using Selenium. It's all written in Ruby and RSpec (although I've also used Cucumber), and uses the gems Capybara (an abstraction layer for querying and manipulating the web browser via the Selenium driver) and SitePrism (for managing page objects and organizing reusable sections).<p>The entire suite runs in around 10 minutes on CircleCI, using 8 parallel threads (each running an instance of the Firefox Selenium driver), and it is rock-solid stable.<p>It took us a while to get to this point, though.<p>The hard part is handling timing due to JavaScript race conditions on the front-end. I had to write my own helper methods like "wait_for_ajax" that I sprinkle into various page object methods to wait for any jQuery AJAX requests to complete. I also use a "wait_until_true" method that can evaluate a block of code over and over until a time limit has been reached, before throwing an exception. Once you figure out ways to solve those types of issues, testing things with Selenium becomes a lot more stable and easy.<p>I have also used the exact same techniques (page objects, custom waiter methods for race conditions, etc.) to test mobile apps on iOS and Android with Selenium.<p>It can be a challenge, but once you have a system down and you know what you are doing, it's not so bad.
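<p>For illustration, here is roughly what those two helpers look like translated to Python's Selenium bindings (the originals are Ruby/Capybara; the names, timeouts, and the jQuery assumption below are just a sketch):<p><pre><code>  import time

  def wait_until_true(predicate, timeout=10, interval=0.5):
      # Re-evaluate `predicate` until it returns truthy or the time limit passes.
      deadline = time.time() + timeout
      while time.time() < deadline:
          if predicate():
              return True
          time.sleep(interval)
      raise AssertionError("condition not met within %ss" % timeout)

  def wait_for_ajax(driver, timeout=10):
      # Wait until jQuery reports zero in-flight AJAX requests.
      wait_until_true(
          lambda: driver.execute_script("return jQuery.active == 0"),
          timeout=timeout)
</code></pre>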
The most annoying thing I found with Selenium was that it wouldn't wait for the browser to respond to click events and rerender.<p>The approach in the blog post (and I think elsewhere... not sure) is to poll the DOM with a timeout.<p>Is there a better solution to be had with something like `executeScript`? You could run `requestAnimationFrame`, and then poll for an indicator that the click (etc.) handler has indeed finished. That way, if it fails, you know about it pretty soon, without the need for long timeouts. This is all just a guess, though.
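<p>Something like this sketch of that guess might work (Python bindings; `indicator_css` is a hypothetical selector for whatever the handler renders):<p><pre><code>  from selenium.webdriver.common.by import By
  from selenium.webdriver.support.ui import WebDriverWait

  def click_and_wait(driver, element, indicator_css, timeout=5):
      element.click()
      driver.set_script_timeout(timeout)
      # Resolve once the browser has painted a frame after the click.
      driver.execute_async_script(
          "var done = arguments[arguments.length - 1];"
          "window.requestAnimationFrame(function () { done(); });")
      # Then poll briefly for the handler's visible effect in the DOM.
      WebDriverWait(driver, timeout).until(
          lambda d: d.find_elements(By.CSS_SELECTOR, indicator_css))
</code></pre>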
Nice rundown, wish I had read this a year ago!<p>> One developer designed a way to take a screenshot of our main drawing canvas and store it in Amazon’s S3 service. This was then integrated with a screenshot comparison tool to do image comparison tests.<p>I would also take a look at Applitools <a href="https://applitools.com/" rel="nofollow">https://applitools.com/</a> — they have Selenium WebDriver-compatible libraries that do this screenshot taking/upload and offer a nice interface for comparing screenshot differences (and for adding ignore areas). Way fewer false failures than typical pdiff/ImageMagick comparisons.
Everyone in the blogosphere (and at my own company) writing non-app-specific layers on top of Selenium suggests that there is scope for a higher-level framework to be used on top of Selenium. Or that the Selenium API is too thin a layer over WebDriver.<p>Does anyone know of such a project?
Here's the presentation the post is based on: <a href="https://www.youtube.com/watch?v=5K6bwikZulI" rel="nofollow">https://www.youtube.com/watch?v=5K6bwikZulI</a>
The PageObjects tip is a really good one. In previous Selenium projects I ended up with a complete maintainability nightmare.<p>I used Geb on a recent project, and I actually felt that the tests I built demonstrated a passable level of engineering discipline. However, Geb was really hard to learn (partly because the error messages were confusing or missing), and you're still on top of Selenium, so you still get wacky exceptions and edge cases.
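<p>For anyone unfamiliar with the pattern: a minimal page object is just a class that owns the selectors and exposes intent-level methods, so tests never touch the DOM directly. An illustrative Python sketch (the page and selectors are hypothetical):<p><pre><code>  from selenium.webdriver.common.by import By

  class LoginPage:
      def __init__(self, driver):
          self.driver = driver

      def log_in(self, username, password):
          # Selectors live here, not in the tests; if markup changes,
          # only this class needs updating.
          self.driver.find_element(By.ID, "username").send_keys(username)
          self.driver.find_element(By.ID, "password").send_keys(password)
          self.driver.find_element(
              By.CSS_SELECTOR, "button[type=submit]").click()
</code></pre>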
Some very good information in this article.
It is true that Selenium has its quirks; retrying a failed test can sometimes result in a passing test.<p>Disclaimer: I work for <a href="https://testingbot.com" rel="nofollow">https://testingbot.com</a>, where we offer our customers automatic retries when a test fails.
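<p>The simplest form of the idea (just a sketch of the shape, not our actual implementation) is a retry wrapper around the test body:<p><pre><code>  import functools

  def retry(times=2):
      # Re-run a flaky test up to `times` extra attempts before failing.
      def decorator(test_fn):
          @functools.wraps(test_fn)
          def wrapper(*args, **kwargs):
              for attempt in range(times + 1):
                  try:
                      return test_fn(*args, **kwargs)
                  except Exception:
                      if attempt == times:
                          raise
          return wrapper
      return decorator
</code></pre>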
Writing a Selenium test does take time, but once you run it in parallel across hundreds of browser and OS combinations, it's worth it.
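<p>For example, with a Selenium Grid hub (or a hosted provider) you can fan the same test out over capability combinations; the hub URL, site, and combos here are placeholders:<p><pre><code>  from concurrent.futures import ThreadPoolExecutor
  from selenium import webdriver

  COMBOS = [
      {"browserName": "firefox", "platform": "WINDOWS"},
      {"browserName": "chrome", "platform": "LINUX"},
  ]

  def smoke_test(caps):
      driver = webdriver.Remote(
          command_executor="http://hub.example.com:4444/wd/hub",
          desired_capabilities=caps)
      try:
          driver.get("https://example.com")
          assert "Example" in driver.title
      finally:
          driver.quit()

  with ThreadPoolExecutor(max_workers=len(COMBOS)) as pool:
      list(pool.map(smoke_test, COMBOS))
</code></pre>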
I wonder if there are stories about running Selenium tests in production. Something along the lines of semantic monitoring (<a href="http://www.thoughtworks.com/radar/techniques/semantic-monitoring" rel="nofollow">http://www.thoughtworks.com/radar/techniques/semantic-monito...</a>)
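<p>The naive version would be a single high-value user journey, run on a schedule against production, with the result and timing pushed to your monitoring system. A sketch (the URL and assertion are made up):<p><pre><code>  import time
  from selenium import webdriver

  def login_journey_seconds():
      driver = webdriver.Firefox()
      try:
          start = time.time()
          driver.get("https://example.com/login")
          assert "Log in" in driver.title
          # Report this duration (and pass/fail) to your monitoring system.
          return time.time() - start
      finally:
          driver.quit()
</code></pre>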
BrowserMob, that was a sweet service (based on Selenium). Does anyone know what happened to those guys after they sold? I've always wanted to learn more about their story.
Using it right now for my latest project; it is a nightmare. I have 1100 tests that have to run every night. I'm using PhantomJS.
It is such a mess!!!
<p><pre><code> > getWithRetry takes a function with a return value
>
> def numberOfChildren(implicit user: LucidUser): Int = {
>   getWithRetry() {
>     user.driver.getCssElement(visibleCss).children.size
>   }
> }
>
> predicateWithRetry takes a function that returns a boolean and will retry on any false values
>
> def onPage(implicit user: LucidUser): Boolean = {
>   predicateWithRetry() {
>     user.driver.getCurrentUrl.contains(pageUrl)
>   }
> }
</code></pre>
At first I didn't get the difference between `getWithRetry` and
`predicateWithRetry`, but then I noticed that the former throws an
exception whereas the latter returns false. I infer that `getWithRetry`
will handle exceptions thrown by the retried function.<p>In stb-tester[1] (a UI tool/framework targeted more at consumer
electronics devices where the only access you have to the
system-under-test is an HDMI output) after a few years we've settled on
a `wait_until` function, which waits until the retried function returns
a "truthy" value. `wait_until` returns whatever the retried function
returns:<p><pre><code>  def miniguide_is_up():
      return match("miniguide.png")

  press(Key.INFO)
  assert wait_until(miniguide_is_up)
  # or:
  if wait_until(miniguide_is_up): ...
</code></pre>
(This is Python code.)<p>Since we use `assert` instead of throwing exceptions in our retried
function, `wait_until` seems to fill both the roles of `getWithRetry`
and `predicateWithRetry`. I suppose that you've chosen to go with 2
separate functions because so many of the APIs provided by Selenium
throw exceptions instead of returning true/false.<p><pre><code> > doWithRetry takes a function with no return type
>
> def clickFillColorWell(implicit user: LucidUser) {
>   doWithRetry() {
>     user.clickElementByCss("#fill-colorwell-color-well-wrapper")
>   }
> }
</code></pre>
Unlike Selenium, when testing the UI of an external device we have no
way of noticing whether an action failed, other than by checking the
device's video output. For example we have `press` to send an infrared
signal ("press a button on the remote control"), but that will never
throw unless you've forgotten to plug in your infrared emitter. I
haven't come up with a really natural way of specifying the retry of
actions. We have `press_until_match`, but that's not very general. The
best I have come up with is `do_until`, which takes two functions: The
action to do, and the predicate to say whether the action succeeded.<p><pre><code>  do_until(
      lambda: press(Key.INFO),
      miniguide_is_up)
</code></pre>
It's not ideal, given the limitations around Python's lambdas (anonymous
functions). Using Python's normal looping constructs is also not ideal:<p><pre><code>  # Could get into an infinite loop if the system-under-test fails
  while not miniguide_is_up():
      press(Key.INFO)

  # This is very verbose, and it uses an obscure Python feature: `for...else`[2]
  for _ in range(10):
      press(Key.INFO)
      if miniguide_is_up():
          break
  else:
      assert False, "Miniguide didn't appear after pressing INFO 10 times"
</code></pre>
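<p>For reference, a generic `do_until` is only a few lines. This is just a sketch of the shape, not stb-tester's actual source:<p><pre><code>  import time

  def do_until(action, predicate, timeout=10, interval=0.5):
      # Repeat `action` until `predicate()` is truthy, failing after `timeout`.
      deadline = time.time() + timeout
      while True:
          action()
          if predicate():
              return
          if time.time() > deadline:
              raise AssertionError("condition not met despite repeated action")
          time.sleep(interval)
</code></pre>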
Thanks for the article; I enjoyed it, and it has reminded me to write up
more of my experiences with UI testing. I take it that the article's
sample code is Scala? I like its syntax for anonymous functions.<p>[1] <a href="http://stb-tester.com" rel="nofollow">http://stb-tester.com</a>
[2] <a href="https://docs.python.org/2/reference/compound_stmts.html#the-for-statement" rel="nofollow">https://docs.python.org/2/reference/compound_stmts.html#the-...</a>
I'm working for a startup that addresses this by means of a simple wrapper API: <a href="http://heliumhq.com" rel="nofollow">http://heliumhq.com</a>. Human-readable tests with no more HTML IDs, CSS selectors, XPaths, or other implementation details.