This actually gave me another idea. What do you all think? I’d be up for trying to build it.

It would watch your program during execution, record each function call's inputs and outputs, and then create tests for each function using those recorded inputs and outputs.

You could always review the generated tests manually and fix them up, but if nothing else it could be a good start.
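Roughly something like this, as a first sketch (the decorator, the `dump_tests` helper, and `mymodule` are all made up here):

    import functools

    _recorded = []

    def record_calls(fn):
        """Wrap a function and remember every (args, kwargs, result) triple."""
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            result = fn(*args, **kwargs)
            _recorded.append((fn.__name__, args, kwargs, result))
            return result
        return wrapper

    def dump_tests(path="test_recorded.py"):
        """Emit one pytest test per recorded call, asserting the observed output."""
        with open(path, "w") as f:
            f.write("import mymodule\n\n")
            for i, (name, args, kwargs, result) in enumerate(_recorded):
                arglist = ", ".join([repr(a) for a in args] +
                                    [f"{k}={v!r}" for k, v in kwargs.items()])
                f.write(f"def test_{name}_{i}():\n")
                f.write(f"    assert mymodule.{name}({arglist}) == {result!r}\n\n")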
Does this do anything other than assert that functions return the right basic type given extremely basic inputs?

Is naive type testing a thing that people actually bother doing vs testing that the functions actually do what they're supposed to?

If a function takes two ints and returns an int that is the integer division of the two, is the evaluation of exceptions and returned types not already implicit in the evaluation of the actual results?

If you know that divide(a, b) should return c for sufficient candidates a, b, and c, then you _know_ that divide returns the right type without explicitly checking. And knowing that the divide function happens to return ints when given ints doesn't actually tell you that it's doing anything even close to the right behavior. So this both doesn't reduce the number of tests you need to write and is also obsoleted by actually writing the tests that you need.
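To make that concrete, the behavioural test subsumes the type check:

    def divide(a: int, b: int) -> int:
        return a // b

    def test_divide_type_only():       # weak: passes for plenty of wrong implementations
        assert isinstance(divide(10, 3), int)

    def test_divide_behaviour():       # strong: the type check comes along for free
        assert divide(10, 3) == 3
        assert divide(-7, 2) == -4     # floor division rounds toward negative infinity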
I like the idea, but I think I would be more comfortable with it generating test files that I could keep beside my other tests.

Also, `auto_pytest_magic` doesn't seem to exist:

    ImportError: cannot import name 'auto_pytest_magic' from 'hypothesis_auto' (/home/.../.local/lib/python3.7/site-packages/hypothesis_auto/__init__.py)
This is neatly packaged, but it's not immediately clear what advantages it has over Hypothesis' native offerings for accomplishing this: https://hypothesis.readthedocs.io/en/latest/details.html#inferred-strategies and https://hypothesis.readthedocs.io/en/latest/data.html#hypothesis.strategies.from_type
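For reference, the built-in inferred-strategy approach from those docs looks roughly like this (annotations on the test function drive the inference):

    from hypothesis import given, infer, strategies as st

    def add(a: int, b: int) -> int:
        return a + b

    @given(a=infer, b=infer)           # strategies inferred from the type annotations
    def test_add_commutes(a: int, b: int):
        assert add(a, b) == add(b, a)

    @given(st.from_type(int))          # or build a strategy from a type explicitly
    def test_add_identity(a):
        assert add(a, 0) == a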
On one job, I had to disable code coverage for a whole suite of tests that were simply making calls and completely ignoring the results.

I know, coverage is Yet Another Metric, but if you don't game it, it can help you track down branches you haven't written tests for.

So my hesitation is that I can see people running this, getting 100% code coverage, and thinking, "hooray, it's fully tested!"
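The kind of thing I mean (function name invented for the example):

    def calculate_invoice(customer_id, items):
        return sum(qty for _, qty in items) * 9.99

    # Hits every line of calculate_invoice for coverage purposes,
    # yet asserts nothing about its behaviour.
    def test_calculate_invoice_coverage_only():
        calculate_invoice(customer_id=42, items=[("widget", 3)])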
I like this idea. One piece of feedback: a parameter with a leading underscore feels very odd. In Python I interpret leading underscores to indicate that the programmer thinks of this as an internal / pseudo-private property. Exposing it through the API makes it "public", which means (to me) that it shouldn't have a leading underscore.

This is especially true if the use case is common enough to put in the top-level examples.
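In other words, the usual convention (illustrative names, not this library's API):

    def fetch(url, timeout=10):          # public parameter: no underscore
        return (url, timeout)

    def _parse_headers(raw):             # leading underscore: internal helper, not part of the API
        return dict(line.split(": ", 1) for line in raw.splitlines())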
While I really liked the idea of Hypothesis in Python, I found that the edge cases it was uncovering were the ones that were obviously going to break, but at the same time cases I didn't care to guard against, e.g. 3-mile-long integers, or cases that wouldn't work with the underlying libraries, e.g. NumPy. Thus, I found myself spending more time adding constraints on the generated inputs than fleshing out my test suite. So my adventures with Hypothesis were short-lived.

I don't mean to detract from this library; I think it's a great combination of strong typing and property-based testing. But has anyone had any experience employing property-based testing on complex functions, outside of the whole add/subtract/multiply stuff? What kind of thing have you used it on?
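By "adding constraints" I mean things like this (a made-up example):

    from hypothesis import given, strategies as st

    # Constrain the generated values so the 3-mile-long integers and NaNs
    # Hypothesis loves to produce never reach the code under test.
    @given(
        st.integers(min_value=0, max_value=10_000),
        st.floats(min_value=0.0, max_value=1.0, allow_nan=False),
    )
    def test_weighted_total(count, weight):
        total = count * weight
        assert 0.0 <= total <= count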
Nice concept, but does it work with real-world applications?
I failed to understand how it will work with methods like `authenticate_user(user)` or `load_permissions_from_db(user, db)`.
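Presumably a property-based tool would need hand-written strategies for domain types like these, and the database side would still have to be faked by hand. A rough sketch with invented `User` and `FakeDB` types:

    from dataclasses import dataclass
    from hypothesis import given, strategies as st

    @dataclass
    class User:                          # invented domain type for the example
        name: str
        is_admin: bool

    class FakeDB:                        # side effects still need a hand-written fake
        def permissions_for(self, user):
            return ["admin"] if user.is_admin else ["read"]

    def load_permissions_from_db(user, db):
        return db.permissions_for(user)

    # st.builds() constructs User instances from generated field values.
    @given(st.builds(User, name=st.text(min_size=1), is_admin=st.booleans()))
    def test_permissions_never_empty(user):
        assert load_permissions_from_db(user, FakeDB())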