This reminds me of several existing solutions, albeit without the explicit "AI" part:

- UP9 observes traffic and then generates test cases (as Python code) and mocks.

- Dredd is built with JavaScript; it runs the explicit examples from the OpenAPI spec as tests and generates some parts with faker-js.

- EvoMaster generates test cases as Java code based on the spec. However, it is a greybox fuzzer, so it uses code coverage and dynamic feedback to reach deeper into the source code.

There are many more examples, such as Microsoft's RESTler. Additionally, many tools can analyze real traffic and use this data in testing (e.g. Levo.ai, APIClarity, Optic); some even use eBPF for this purpose.

Given all these tools, I am skeptical. Generating data for API requests does not seem that difficult, and many of these tools already combine traffic analysis and test case generation into a single workflow.

For me, the key factors are how effective the tests are at achieving their intended goals and how much effort setup and ongoing maintenance require.

Many of the mentioned tools can be run as a single CLI command (not true for RESTler, though), and it is not immediately clear how much easier your solution would be to use than, e.g., `st run <schema url/file>`. Surely there will be a difference in effectiveness once both tools are fine-tuned, but I am interested in the baseline: what do I get if I use the defaults? (A sketch of what that baseline looks like is below.)

My primary area of interest is fuzzing, and at first glance I am also skeptical about the efficacy of test generation without feedback. This approach has been used in fuzzing since the early 2000s, and the gap between greybox and blackbox fuzzers is immense, as many research papers in this domain show, specifically in the time a fuzzer needs to discover a problem (see the toy example below).

Sure, your solution aims at load testing, but I believe it could benefit a lot from techniques common in fuzzers and property-based testing tools. What is your view on that?

What strategies do you employ to minimize early rejections, i.e., to ensure that the generated test cases are not simply dropped by the app's validation layer? (I sketch the schema-driven approach I have in mind at the end.)
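For concreteness, this is roughly what the zero-configuration baseline looks like through Schemathesis's Python API (the `st` command above is its CLI front end); the schema URL is a placeholder for your own service:

```python
# A minimal sketch of spec-driven test generation with Schemathesis.
# The schema URL is a placeholder; point it at your own service.
import schemathesis

schema = schemathesis.from_uri("http://localhost:8000/openapi.json")

@schema.parametrize()  # one parametrized pytest test per API operation
def test_api(case):
    # Sends the generated request and runs the default response checks
    # (status code conformance, response schema conformance, etc.).
    case.call_and_validate()
```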
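To illustrate the blackbox/greybox gap: this toy loop (not a real fuzzer; the target and coverage proxy are made up) keeps any input that reaches a new branch as a seed for further mutation, which lets it walk past nested conditions that blind random generation would essentially never satisfy:

```python
# Toy illustration of coverage feedback: inputs that make progress are
# kept as seeds, so the search gets past nested conditions byte by byte.
import random

def target(data: bytes) -> None:
    # Deeply nested check: a blackbox generator must guess all four
    # bytes at once (~2^32 odds); greybox search solves one at a time.
    if len(data) >= 4 and data[0] == ord("F"):
        if data[1] == ord("U"):
            if data[2] == ord("Z"):
                if data[3] == ord("Z"):
                    raise RuntimeError("bug reached")

def coverage_of(data: bytes) -> int:
    # Stand-in for real branch-coverage instrumentation.
    return sum(1 for a, b in zip(data, b"FUZZ") if a == b)

def mutate(data: bytes) -> bytes:
    buf = bytearray(data)
    buf[random.randrange(len(buf))] = random.randrange(256)
    return bytes(buf)

seeds, best = [b"\x00\x00\x00\x00"], 0
for _ in range(100_000):
    candidate = mutate(random.choice(seeds))
    if coverage_of(candidate) > best:  # feedback: keep progress
        best = coverage_of(candidate)
        seeds.append(candidate)
    try:
        target(candidate)
    except RuntimeError:
        print("found crashing input:", candidate)
        break
```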
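And on the last question: the usual trick in property-based tooling is to derive generators directly from the same schema the app validates against, so positive test cases clear the validation layer by construction. A sketch with hypothesis-jsonschema; the schema and test here are hypothetical:

```python
# Sketch: generate payloads from the validation schema itself, so the
# data is accepted and exercises the deeper handler logic. The schema
# and endpoint are made up for illustration.
from hypothesis import given
from hypothesis_jsonschema import from_schema

CREATE_USER_SCHEMA = {
    "type": "object",
    "properties": {
        "name": {"type": "string", "minLength": 1, "maxLength": 64},
        "age": {"type": "integer", "minimum": 0, "maximum": 130},
    },
    "required": ["name", "age"],
    "additionalProperties": False,
}

@given(payload=from_schema(CREATE_USER_SCHEMA))
def test_create_user(payload):
    # `payload` satisfies the schema, so a well-behaved validation
    # layer should accept it; send it to the endpoint under test here.
    ...
```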