Lately I have been thinking about a concept I would call type coverage.<p>The idea is to measure how well a Python function is tested by recording all inputs/outputs during a complete test run and comparing the recorded values with the annotated types.<p>e.g. a function `foo(a: Optional[float])` that is only ever tested with `foo(5)` gets low coverage because `foo(None)` is never tested. Or it could be hinted that a more accurate type annotation would be `foo(a: int)`.<p>As a use case I was thinking of testing APIs, to make sure you cover all the use cases of your API that you advertise through the annotated types.<p>An extension of this concept would be to check how thoroughly you tested a type. Did you test `foo(a: int)` with negative, positive and zero values? If not, it could be a hint that your test coverage is too low or that the type is wrong and an enum would be a better fit.<p>I am curious to hear your thoughts on this concept.
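To make it concrete, here is a rough sketch of what the recording part could look like. Everything here (the `record_types` decorator, the `observed` table, the reporting loop) is made up for illustration, not an existing tool:

```python
import inspect
import typing
from collections import defaultdict
from functools import wraps

# (qualified function name, parameter name) -> set of concrete types observed
observed = defaultdict(set)

def record_types(func):
    """Record the concrete type of every argument passed while the tests run."""
    sig = inspect.signature(func)

    @wraps(func)
    def wrapper(*args, **kwargs):
        bound = sig.bind(*args, **kwargs)
        for name, value in bound.arguments.items():
            observed[(func.__qualname__, name)].add(type(value))
        return func(*args, **kwargs)

    return wrapper

@record_types
def foo(a: typing.Optional[float]) -> float:
    return 0.0 if a is None else a * 2

# Pretend this is the whole test suite: a single call with an int.
foo(5)

# After the run, compare what was observed with what was annotated.
hints = typing.get_type_hints(foo)
for (fn, param), seen in observed.items():
    annotated = typing.get_args(hints[param]) or (hints[param],)
    untested = [t.__name__ for t in annotated if t not in seen]
    print(f"{fn}({param}): annotated {hints[param]}, "
          f"saw {[t.__name__ for t in seen]}, never saw {untested}")
```

For `foo` above this would report that `float` and `NoneType` were never exercised, and that the only value ever seen was an `int`, which is exactly the hint about `foo(a: int)` from the example.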
While not exactly the same, you can obtain the same benefits with mypy.[0] It will check all of this statically, without running any tests, though obviously it will not be as smart when you use libraries that do not have annotated types. But if you write tests for those, then you get 100% of the functionality you described, unless I misunderstood something.<p>[0] <a href="https://mypy.readthedocs.io/en/stable/" rel="nofollow">https://mypy.readthedocs.io/en/stable/</a>
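For example, given the function from the parent comment, running `mypy example.py` flags the bad call without executing anything (this assumes a standard mypy setup; the exact message wording varies between versions):

```python
# example.py -- check with `mypy example.py`; nothing is executed
from typing import Optional

def foo(a: Optional[float]) -> float:
    return 0.0 if a is None else a * 2

foo(2.0)   # fine
foo(None)  # fine: None is part of Optional[float]
foo(5)     # also accepted: mypy treats int as compatible with float
foo("5")   # mypy reports an incompatible argument type for this call
```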