Funny. At a previous employer, we had a few services that needed access to some data we hadn't put behind a service yet. For bureaucratic reasons I won't bore y'all with, our team wasn't given the green light to create the needed service to provide access to the data. So a coworker and I created an endpoint on an existing service, but with a requirement that an oddly specific, undocumented HTTP header calling out management be provided in every request made to the endpoint. Then we waited. At some point, someone else needed to use the endpoint and came to us asking why it wasn't working. We told him about the header. He was annoyed and complained to his manager. The very next week, we got what we wanted: our new service was up and running! We also got into a bit of trouble over this, but that's just an extraneous detail, haha.
A couple of related practices I've picked up over the years: (1) using long names when implementing things that are necessary but discouraged, and (2) putting danger_danger in the names of things that should never be used in production.

The former introduces just a tiny bit of friction, but it's often enough to encourage developers to use the preferred API calls.

The latter I learned almost 20 years ago from Google's "Mustang" search back-end. When running a test instance of Mustang without bringing up a ton of dependency processes, there was a `--danger_danger_must_not_fail true` command-line flag (or something very similar) to turn a bunch of start-up sanity-check aborts into warning log messages. If any code reviewer sees "danger_danger" being added to production command-line flags, they'll almost certainly ask the right questions.
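A minimal sketch of both patterns, with invented names throughout (nothing here comes from a real codebase):

```python
def get_user(user_id: int) -> dict:
    """The preferred API: short name, goes through the cache layer."""
    return {"id": user_id}


def get_user_bypassing_cache_danger_danger_may_overload_primary_db(user_id: int) -> dict:
    """Necessary but discouraged: the name itself is the friction,
    and it's impossible to miss in code review."""
    return {"id": user_id}
```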
I've noticed a lot of developers who have this kind of code-purity complex, where their way of doing things is superior even if the actual code works, which I think is the number one most important thing about code. The code-purity complex is a self-limiting decision.

Those developers will be sitting at home fuming, I guess, as I cheerfully build a new feature on VB6 code last changed in 1999 that relies on "On Error Resume Next", at an obscene contract rate.

Working only with good code is a luxury, and writing code that sucks isn't necessarily bad if that's the fastest/cheapest way to write code that works. The end user, almost as a rule, doesn't care about your code purity, and a developer probably couldn't even explain that purity in a way that would make the end user understand its value. This method name suggests a failure to understand that finding the right balance of speed, cost, and quality is always a compromise.

Yes, tests that can run in isolation are "better".
Pytest has an equally deprecating option for a different "use case":

```ini
disable_test_id_escaping_and_forfeit_all_rights_to_community_support = True
```

https://docs.pytest.org/en/6.2.x/parametrize.html#pytest-mark-parametrize-parametrizing-test-functions
I have a method in our Java app, `iKnowWhatImDoingGiveMeTheSession()`, that will only work if you have set the system property `doYouReally=Yes, I really know what I'm doing`.

It should never be used in production, but it can be helpful to have available when doing certain kinds of investigation.
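For anyone curious what such a guard can look like, here's a hypothetical sketch; only the method name and the doYouReally property come from the comment above, everything else is assumed:

```java
// Hypothetical sketch; the Session type, field, and error message are invented.
final class SessionHolder {
    static final class Session {}  // stand-in for the real session type

    private final Session rawSession = new Session();

    Session iKnowWhatImDoingGiveMeTheSession() {
        String answer = System.getProperty("doYouReally");
        if (!"Yes, I really know what I'm doing".equals(answer)) {
            throw new IllegalStateException(
                    "Set -DdoYouReally=... only for local investigation, never in production.");
        }
        return rawSession;  // hands back the unwrapped session
    }
}
```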
I wrote an unsafe library at a FAANG and used similar naming conventions.

The init function was named along the lines of `MyClassName_UNSAFE::initUnsafeIUnderstandTheRisks()`.

And the library itself was called something like "library-name-unsafe-my-team-name-must-approve-diff".

So anyone trying to use it would have to add a library with that name to their list of dependencies and would come to us asking for a code review, and more than half the time we would redirect them to a safer alternative (valid use cases for the unsafe library were few and far between).
The MySQL command-line client has a flag called `--i-am-a-dummy`. I know it's supposed to be funny, but I think it's actually a good idea to run in dummy mode when you can, so the insulting name might be counterproductive.
I think most people who have used SQL have run an UPDATE with an insufficiently restrictive WHERE clause, or even forgotten the WHERE clause entirely, at least once.
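For reference, `--i-am-a-dummy` is (as far as I know) an alias for `--safe-updates`, which makes the client reject exactly that class of mistake. Roughly (the users table is invented for illustration):

```sql
-- Client started as: mysql --i-am-a-dummy ...
UPDATE users SET active = 0;
-- ERROR 1175 (HY000): You are using safe update mode and you tried to
-- update a table without a WHERE that uses a KEY column.

UPDATE users SET active = 0 WHERE id = 42;  -- accepted: keyed WHERE clause
```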
Make your models thin. Then make your models simple data structures.

Then your tests can build a data structure with just what is necessary for the test, and your tests don't need to be ordered.

The common problem is that models are fat, and testing them requires building up some very complex scenario. Every step mutates something, so the step order is critically important. None of that is necessary if your functions are pure, and it's easier still if your models are simple data structures, because they require less setup.

Edit: if you make your models thin, then you need to add "service" modules that encapsulate the business logic. Taking it outside the models makes it more testable with less effort, and it also makes it less dependent on your framework.
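A minimal sketch of that split, with invented names throughout:

```python
from dataclasses import dataclass


@dataclass
class Order:
    """Thin model: just data, no behavior."""
    subtotal: float
    discount_rate: float


def order_total(order: Order) -> float:
    """Business logic lives in a pure 'service' function, not the model."""
    return order.subtotal * (1 - order.discount_rate)


def test_order_total():
    # Each test builds exactly the state it needs; nothing is shared or
    # mutated, so test order can't matter.
    assert order_total(Order(subtotal=100.0, discount_rate=0.25)) == 75.0
```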
pnpm has an option called `shamefully-hoist` and it's by far my favorite option of all time: https://pnpm.io/npmrc#shamefully-hoist
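Per the linked docs, it lives in `.npmrc` and flattens pnpm's normally symlinked `node_modules` layout:

```ini
# .npmrc: create a flat node_modules like npm's, so tooling that can't
# cope with pnpm's symlinked layout still works (per the pnpm docs).
shamefully-hoist=true
```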
I disagree that ordered tests are bad. See, I don't even have mutable state in my program, but I still have ordered tests, for a different reason: ease of debugging. If module A depends on module B, then B should be tested first and only then A; if you test in the other direction, you might have a hard time figuring out whether A or B is misbehaving.
Is this still OK in today's over-sensitive environment, where "blacklist" files are taboo? (Btw, "blacklist" comes from British chimney cleaners.)
Related: Go finally added *support* for randomizing test order in 1.17 (https://go.dev/doc/go1.17), though it's disabled by default.

tl;dr: use `-shuffle on` so your tests don't suck (as much)
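Per the 1.17 release notes, the flag also accepts an integer seed, and the chosen seed is reported in the test output so an order-dependent failure can be replayed (the seed below is just an example):

```sh
# Shuffle the execution order of tests and top-level benchmarks:
go test -shuffle=on ./...

# Replay a specific order using the seed reported by a failing run:
go test -shuffle=1624594822 ./...
```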
How incredibly arrogant. 99% of the time that happens because the code sucks and the tests are impossible to write any other way without significant effort to mock resources and/or just refactor the code (which of course breaks the old tests).

Does anyone know if this is one of the many papercuts that probably killed off Ruby (poor dev culture)? I never followed the language much.