Over the years I've tended to like to work in a very specific way:

Prioritized and documented requirements.

Sit down with users to discuss and refine.

Build small. Test small.

Quick iterations.

Daily stand-ups are nice for keeping up with what's going on and helping the team find blockers.

Everything should be a task. If you can't get it done in 4-8 hours it's too big; create a task to break it into smaller pieces.

If you're connecting to another system, build tests that verify their system is online and working (see the sketch below). Why this isn't standard practice when publishing an API is beyond me.

Priority and requirements should stay fixed in the middle of a sprint.

Developers shouldn't be coming up with test data.

The team and management should understand the value of QA as a second set of eyes and a different way to interpret the specs.

Leave enough room in the schedule for support and other outside issues.

Schedule tasks for technical debt and other improvements that raise the long-term value of the system.

Some tasks have a high chance of failure, and that's OK. Some experimentation is good.

Extra points: add post-mortems to improve the process. Post-mortems also need follow-up to make sure the resulting tasks actually get added.

So any variation on this kind of thing gets annoying.

Don't get me started on SMART objectives and other wish lists that aren't broken into manageable tasks and added to a sprint.
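To make the external-system point concrete, here's a minimal sketch of the kind of check I mean, written as a pytest test. The base URL, the /health path, and the thresholds are hypothetical placeholders; a real dependency would publish its own status endpoint and contract.

```python
# Minimal smoke tests for an upstream dependency, run in CI or on a schedule.
# The base URL, /health path, and thresholds are hypothetical examples;
# substitute whatever the upstream system actually documents.
import requests

UPSTREAM_BASE_URL = "https://partner.example.com"  # hypothetical dependency


def test_upstream_is_reachable():
    # Fail fast if the dependency is down or unreasonably slow.
    resp = requests.get(f"{UPSTREAM_BASE_URL}/health", timeout=5)
    assert resp.status_code == 200
    assert resp.elapsed.total_seconds() < 2  # crude latency sanity check


def test_upstream_contract_still_holds():
    # Verify the response shape we depend on hasn't silently changed.
    resp = requests.get(f"{UPSTREAM_BASE_URL}/v1/orders/12345", timeout=5)
    assert resp.status_code in (200, 404)  # reachable and routed, even if the id doesn't exist
    if resp.status_code == 200:
        body = resp.json()
        assert "id" in body and "status" in body
```

Run on a schedule, a handful of tests like this tells you the moment the other system goes down or changes shape, instead of finding out from your own error logs.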
I'll agree that slow pipelines are irritating, but you should really take the extra time and fix your e2e tests. In a project I was on recently, our e2e tests were extremely robust and played a key part in re-architecting a fairly large backend for a site we were maintaining. They don't have to be flaky.
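Most of the flakiness I've seen came from fixed sleeps racing the application under test. A small poll-until-true helper with an explicit deadline was usually all it took to replace them; here's a rough sketch (the names and the usage example are made up):

```python
# Replace time.sleep(n)-and-hope with polling for the condition you actually need.
import time


def wait_for(condition, timeout=10.0, interval=0.25, message="condition"):
    """Poll `condition` until it returns truthy or `timeout` seconds pass."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = condition()
        if result:
            return result
        time.sleep(interval)
    raise TimeoutError(f"Timed out after {timeout}s waiting for {message}")


# In a test, instead of time.sleep(5) and hoping the background job finished:
# wait_for(lambda: api.get_job(job_id).status == "done",
#          timeout=30, message="background job to finish")
```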
I'll add to your original thread too, and say that disengaged product owners combined with changing requirements and/or scope creep have been some of the biggest sources of annoyance for me.