Hey HN,

We just shipped a new AI-powered feature... BUT the "AI" piece is largely in the background. Instead of relying on a chatbot, we've integrated AI (with strict input and output guardrails) into a workflow to handle two specific tasks that would be tedious to cover with traditional programming:

1. Identifying the most relevant base URL from HAR files, since it would be impractical to hand-code every edge case needed to omit analytics, tracking, and other network noise.

2. Generating synthetic data for API requests by passing the API context and faker-js functions to GPT-4.

The steps are broken down into a simple flow, with users working alongside the AI and verifying its output throughout. All of the focus is on reducing cognitive load and speeding up test generation.

Let me know what you think!
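To make the first step a bit more concrete, here is a rough sketch of what the pre-filtering ahead of the model call could look like. The HAR traversal follows the standard `log.entries[].request.url` layout, but the noise denylist and the ranking by request count are illustrative assumptions, not the product's actual logic.

```typescript
// Sketch: collect candidate origins from a HAR file, drop obvious
// analytics/tracking noise, and rank the rest by request count before
// asking the model to pick the most relevant base URL.
import { readFileSync } from "node:fs";

interface HarEntry { request: { url: string } }
interface Har { log: { entries: HarEntry[] } }

// Illustrative denylist only; real traffic has far more noise sources.
const NOISE = [/google-analytics\./, /googletagmanager\./, /segment\.io/, /sentry\.io/, /hotjar\./];

function candidateBaseUrls(harPath: string): { origin: string; hits: number }[] {
  const har: Har = JSON.parse(readFileSync(harPath, "utf8"));
  const counts = new Map<string, number>();
  for (const entry of har.log.entries) {
    const origin = new URL(entry.request.url).origin;
    if (NOISE.some((re) => re.test(origin))) continue; // skip tracking noise
    counts.set(origin, (counts.get(origin) ?? 0) + 1);
  }
  return [...counts.entries()]
    .map(([origin, hits]) => ({ origin, hits }))
    .sort((a, b) => b.hits - a.hits);
}

// The ranked candidates (not the raw HAR) would then be handed to the model,
// which picks the origin most likely to be the API under test.
```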
Interesting feature. One key thing I found when testing is that, for you to reproduce the set of steps the user went through, there are data attributes that need to remain the same. For example, after login, a request for your account information will contain an account_id that should be the same across all other account requests. If you can't guarantee this, then I don't see how you could use this in any sort of integration test.

Isn't it simpler to use the OpenAPI spec and generate from there?
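A minimal sketch of the correlation issue being described, with hypothetical endpoint paths and field names: the id has to be captured from the fresh login response and substituted into every later request, rather than replaying whatever id was baked into the original recording.

```typescript
// Hypothetical example: correlate an account_id across requests in a replay.
async function replayWithCorrelation(baseUrl: string): Promise<void> {
  const loginRes = await fetch(`${baseUrl}/login`, {
    method: "POST",
    headers: { "content-type": "application/json" },
    body: JSON.stringify({ email: "user@example.com", password: "secret" }),
  });
  const { account_id } = await loginRes.json(); // fresh id for this session

  // Subsequent recorded requests must reference the captured id, not the
  // one that was present when the HAR was originally recorded.
  const accountRes = await fetch(`${baseUrl}/accounts/${account_id}`);
  console.log(accountRes.status);
}
```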
This reminds me of several solutions, albeit lacking the explicit "AI" part:

- Up9 observes traffic and then generates test cases (as Python code) and mocks.

- Dredd is built with JavaScript, runs explicit examples from the OpenAPI spec as tests, and generates some parts with faker-js.

- EvoMaster generates test cases as Java code based on the spec. However, it is a greybox fuzzer, so it uses code coverage and dynamic feedback to reach deeper into the source code.

There are many more examples, such as Microsoft's RESTler, and so on.

Additionally, many tools exist that can analyze real traffic and use this data in testing (e.g. Levo.ai, API Clarity, Optic). Some even use eBPF for this purpose.

Given all these tools, I am skeptical. Generating data for API requests does not seem to me to be that difficult, and many of these tools already combine traffic analysis and test case generation into a single workflow.

For me, the key factors are the effectiveness of the tests in achieving their intended goals and the effort required for setup and ongoing maintenance.

Many of the mentioned tools can be used as a single CLI command (not true for RESTler, though), and it is not immediately clear how much easier your solution would be than e.g. a command like `st run <schema url/file>`. Surely there will be a difference in effectiveness if both tools are fine-tuned, but I am interested in the baseline: what do I get if I use the defaults?

My primary area of interest is fuzzing; however, at first glance, I'm also skeptical about the efficacy of test generation without feedback. This method has been used in fuzzing since the early 2000s, and the distinction between greybox and blackbox fuzzers is immense, as shown by many research papers in this domain, specifically in the time a fuzzer needs to discover a problem.

Sure, your solution aims at load testing; however, I believe it can benefit a lot from common techniques used by fuzzers and property-based testing tools. What is your view on that?

What strategies do you employ to minimize early rejections? That is, ensuring that the generated test cases are not just dropped by the app's validation layer.
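On that last point, a hand-rolled illustration of what "not being dropped by the validation layer" means in practice. This is not how any of the tools above implement it, just a sketch of constraint-aware versus blind generation for a single string field.

```typescript
// Sketch: values generated without looking at the schema's constraints mostly
// die in the validation layer; constraint-aware generation at least reaches
// the handler under test.
interface StringSchema {
  type: "string";
  minLength?: number;
  maxLength?: number;
  enum?: string[];
}

function generateString(schema: StringSchema): string {
  if (schema.enum && schema.enum.length > 0) {
    // Picking from the declared enum is the simplest way to pass validation.
    return schema.enum[Math.floor(Math.random() * schema.enum.length)];
  }
  const min = schema.minLength ?? 1;
  const max = schema.maxLength ?? min + 16;
  const length = min + Math.floor(Math.random() * (max - min + 1));
  return "a".repeat(length); // boring, but guaranteed to satisfy the bounds
}

// A naive generator that ignores the schema (say, a random 3-char string
// against minLength: 8) is rejected with a 400 before exercising any logic.
```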
Why is AI needed for this at all?

You should take a look at Schemathesis: https://github.com/schemathesis/schemathesis
How is this different from Postman's test generation features? https://www.postman.com/postman-galaxy/dynamically-generate-tests-from-open-api-specs/
This can be used to generate data given an OpenAPI spec? It's a bit unclear whether that's unbundled from test generation. Say I just want to generate data that conforms to a spec as a one-off. Can this be done?
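For what it's worth, here is roughly what such a one-off, spec-conformant generation step could look like if it were exposed separately. The schema walker below is a toy covering only a few JSON Schema keywords, and the example schema is made up.

```typescript
// Toy generator: walk a (resolved) OpenAPI/JSON Schema fragment and emit one
// conforming payload. Purely illustrative; not any particular tool's API.
type Schema = {
  type?: string;
  properties?: Record<string, Schema>;
  items?: Schema;
  enum?: unknown[];
  minimum?: number;
};

function sample(schema: Schema): unknown {
  if (schema.enum && schema.enum.length > 0) return schema.enum[0];
  switch (schema.type) {
    case "object": {
      const out: Record<string, unknown> = {};
      for (const [key, prop] of Object.entries(schema.properties ?? {})) {
        out[key] = sample(prop);
      }
      return out;
    }
    case "array":
      return schema.items ? [sample(schema.items)] : [];
    case "integer":
    case "number":
      return schema.minimum ?? 0;
    case "boolean":
      return true;
    default:
      return "string";
  }
}

// Example: a made-up "create order" request body.
console.log(sample({
  type: "object",
  properties: {
    quantity: { type: "integer", minimum: 1 },
    status: { type: "string", enum: ["pending", "shipped"] },
  },
}));
```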