I think Cypress.io is a tool in its death throes. So like, wildly out-of-touch responses like this, and wacky new pricing/lock-in monetization schemes, should be expected. Think Myspace or AOL near the end.<p>Where I work, there was a guy in 2018 who made an internal video presentation about how awesome Cypress was, and how many of our projects might benefit from adopting it.<p>And at the end of last year, since that guy was me, I sort of felt obligated to do a follow-up internal presentation about my team's experiences with it (very good at first, very bad by the end), why no project should adopt Cypress anymore, and why projects already using it should consider switching to Playwright.<p>It's not just that Playwright works better, in every conceivable way, than the open-source parts of Cypress. Although that is also true.<p>It's that this no-longer-up-to-par[1] E2E web testing implementation is <i>also</i> tied to an obvious "let's ramp up the lock-in and increase prices" strategy. I love paying for stuff that saves my development team time, but we're paying Cypress thousands of dollars a year, and if we kept using Cypress that bill would keep going up and up and up, because Cypress wants you to pay for <i>parallelization</i>.<p>Flaky-test identification, analysis that flags possibly-redundant or time-wasting tests... those are (to me) valid things to pay extra for — or <i>not</i> pay for, if the budget gets tight. Paying extra for <i>simply running your tests in parallel</i>, or being locked into one provider for it, is a huge red flag. It means that the better your test coverage gets, the more you have to pay.<p>But pay-to-parallelize is just crap. It's table stakes, and has to be part of the open-source component, not the premium tier. (NOTE: To be clear, Cypress makes you pay to parallelize on <i>your own instances</i>. If they had a Cypress cloud that also provided the CI instances running the tests, I wouldn't have a problem with paying... 
although I suspect that, in that case, I would have a problem with price-gouging, because they still presumably wouldn't open source that part.)<p>Contrast with Playwright: you just append "--shard=1/30" (if you have 30 instances). It's not as sophisticated as Cypress Dashboard's dispatcher, which... <i>waves hands</i> coordinates in realtime and feeds instances tests as soon as they become idle (?). But it is open source, free, and if one shard fails in CI you can run the Playwright command locally with the same shard number to debug it.<p>So, if you use one of the services affected by this move, well ouch, but OTOH you probably shouldn't be using Cypress in the first place. So maybe take this as a final nudge to at least freeze your Cypress tests and start writing new tests in something not only more open, but also just... better.<p>[1]: Off the top of my head, the critical flaws that make Cypress not as good as its 2023 competition:<p>- A model for async that is not based on — and not compatible with — standard async/await (promises). You have to use this bizarre alternative "Cypress.Chainable" thing instead, and it makes debugging a failing test much harder than the same test in Playwright (or anything that uses normal JavaScript async). Many people have complained about this, and to me it is a no-brainer, but Cypress basically doubled down on it, like "No no, sure async/await is OK, but Cypress Chainable is better because hage hige hoge..."<p>- Colocating Cypress in the same browser instance as your app, and using an iframe to "isolate" the app under test. IRL this results in "Cypress-only" test flake (things happen in Cypress that never happen otherwise) and just outright "the browser crashed during CI".<p>- Slow. As. Dog. Shit. (It wasn't slower than the competition a few years ago. 
But the competition has gotten dramatically faster (at executing tests, I mean), and Cypress has not.)<p>- Due to the above things, presumably, our Cypress tests exhibit a much higher level of flake, and therefore maintenance cost, than our post-Cypress tests (mainly Playwright). This, though, could be partly due to them just being older tests. For instance, Cypress's response-mocking support was first bad, then OK, and now I dunno but maybe it is good. All our tests that need to use response mocking have failed at some point and needed some engineer to fondle their balls and whisper sweet nothings into their ear to coax them back to functionality... but that maybe wouldn't happen if they'd been written with Cypress 13 instead of Cypress 3 or whatever. So I'm only partially blaming Cypress for this.
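<p>To make the sharding point above concrete, here's roughly what the Playwright workflow looks like. (The shard count of 4 and the failing-shard scenario are illustrative, not from any real project; `--shard` is a real Playwright CLI flag.)

```shell
# In CI, each of 4 machines runs one shard of the suite:
npx playwright test --shard=1/4   # machine 1
npx playwright test --shard=2/4   # machine 2
npx playwright test --shard=3/4   # machine 3
npx playwright test --shard=4/4   # machine 4

# If, say, shard 3 fails in CI, run the same shard locally to debug —
# the test-to-shard assignment is deterministic, so you get the same subset:
npx playwright test --shard=3/4
```

No dashboard service, no per-instance billing: parallelism is just a flag on the open-source CLI, which is exactly the table-stakes behavior argued for above.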