
Show HN: Auto-generate load tests/synthetic test data from OpenAPI spec/HAR file

33 points by yevyevyev, over 1 year ago
Hey HN,

We just shipped a new AI-powered feature... BUT the "AI" piece is largely in the background. Instead of relying on a chatbot, we've integrated AI (with strict input & output guardrails) into a workflow to handle two specific tasks that would be difficult for traditional programming:

1. Identifying the most relevant base URL from HAR files, since it would be tedious to cover every edge case or scenario to omit analytics, tracking, and other network noise.

2. Generating synthetic data for API requests by passing the API context and faker-js functions to GPT-4.

The steps are broken down into a simple flow, with users working with the AI and verifying the output throughout.

All of the focus is on reducing cognitive load and speeding up test generation.

Let me know what you think!
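A minimal sketch of what step 2 above could look like, assuming the OpenAI Node SDK and @faker-js/faker; the generator whitelist, prompt, and JSON mapping format are illustrative assumptions, not the actual implementation:

```typescript
// Sketch: ask GPT-4 to map request-body fields to faker-js generators,
// then run faker locally to produce synthetic rows. Illustrative only.
import OpenAI from "openai";
import { faker } from "@faker-js/faker";

const openai = new OpenAI();

// Hypothetical whitelist of faker functions the model may choose from.
const allowedGenerators: Record<string, () => string | number> = {
  "internet.email": () => faker.internet.email(),
  "person.fullName": () => faker.person.fullName(),
  "string.uuid": () => faker.string.uuid(),
  "number.int": () => faker.number.int({ min: 1, max: 10_000 }),
};

// Ask the model for a field -> generator mapping given the API context
// (e.g. endpoint path, method, and request-body schema).
async function suggestFieldMapping(endpointContext: string) {
  const completion = await openai.chat.completions.create({
    model: "gpt-4",
    messages: [
      {
        role: "system",
        content:
          "Map each request-body field to one generator from this list: " +
          Object.keys(allowedGenerators).join(", ") +
          '. Reply with JSON only, e.g. {"email": "internet.email"}.',
      },
      { role: "user", content: endpointContext },
    ],
  });
  const mapping: Record<string, string> = JSON.parse(
    completion.choices[0].message.content ?? "{}",
  );
  // Output guardrail: drop anything outside the whitelist.
  return Object.fromEntries(
    Object.entries(mapping).filter(([, gen]) => gen in allowedGenerators),
  );
}

// Turn the vetted mapping into one synthetic request body.
function generateRow(mapping: Record<string, string>) {
  return Object.fromEntries(
    Object.entries(mapping).map(([field, gen]) => [field, allowedGenerators[gen]()]),
  );
}
```

Constraining the model's output to a fixed list of generator names, and doing the actual data generation with faker locally, is one way the "strict output guardrails" mentioned above could be realized.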

5 comments

pitah1, over 1 year ago
Interesting feature. One key thing I found when testing is that to reproduce the set of steps the user went through, there are data attributes that need to remain the same. For example, after login, a request for your account information will contain an account_id number that should be the same for all other account requests. If you can't guarantee this, then I don't see how you could use this in any sort of integration test.

Isn't it simpler to use the OpenAPI spec and then generate from there?
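The correlation issue described above usually comes down to chaining a value captured from one response into later requests. A minimal sketch of that chaining; the endpoint paths and field names are hypothetical:

```typescript
// Sketch: correlate a dynamic value (account_id) across replayed requests.
// The /login and /accounts endpoints and the response shape are hypothetical.
async function replayAccountFlow(baseUrl: string) {
  // 1. Log in and capture the account_id from the live response.
  const loginRes = await fetch(`${baseUrl}/login`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ email: "user@example.com", password: "secret" }),
  });
  const { account_id } = (await loginRes.json()) as { account_id: string };

  // 2. Reuse the captured id instead of the stale value recorded in the HAR.
  const accountRes = await fetch(`${baseUrl}/accounts/${account_id}`);
  return accountRes.json();
}
```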
dmitry_dygalo, over 1 year ago
This reminds me of several solutions, albeit lacking the explicit "AI" part:

- Up9 observes traffic and then generates test cases (as Python code) & mocks
- Dredd is built with JavaScript, runs explicit examples from the OpenAPI spec as tests + generates some parts with faker-js
- EvoMaster generates test cases as Java code based on the spec. However, it is a greybox fuzzer, so it uses code coverage and dynamic feedback to reach deeper into the source code

There are many more examples, such as Microsoft's REST-ler, and so on.

Additionally, many tools exist that can analyze real traffic and use this data in testing (e.g. Levo.ai, API Clarity, optic). Some even use eBPF for this purpose.

Given all these tools, I am skeptical. Generating data for API requests does not seem to me to be that difficult. Many of them already combine traffic analysis & test case generation into a single workflow.

For me, the key factors are the effectiveness of the tests in achieving their intended goals and the effort required for setup and ongoing maintenance.

Many of the mentioned tools can be used as a single CLI command (not true for REST-ler though), and it is not immediately clear how much easier it would be to use your solution than e.g. a command like `st run <schema url/file>`. Surely, there will be a difference in effectiveness if both tools are fine-tuned, but I am interested in the baseline: what do I get if I use the defaults?

My primary area of interest is fuzzing; however, at first glance, I'm also skeptical about the efficacy of test generation without feedback. This method has been used in fuzzing since the early 2000s, and the distinction between greybox and blackbox fuzzers is immense, as shown by many research papers in this domain, specifically in the time a fuzzer needs to discover a problem.

Sure, your solution aims at load testing; however, I believe it can benefit a lot from common techniques used by fuzzers / property-based testing tools. What is your view on that?

What strategies do you employ to minimize early rejections? That is, ensuring that the generated test cases are not just dropped by the app's validation layer.
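On the early-rejection question: one baseline approach is to derive values directly from the schema's own constraints (types, ranges, enums, formats) so that generated requests at least clear input validation. A rough sketch over a small JSON-Schema-like subset; the type names and formats are illustrative:

```typescript
// Sketch: generate a value that satisfies a field's declared constraints,
// so the request is not rejected outright by the app's validation layer.
type FieldSchema =
  | { type: "string"; enum?: string[]; format?: "email" | "uuid" }
  | { type: "integer"; minimum?: number; maximum?: number };

function valueFor(schema: FieldSchema): string | number {
  if (schema.type === "integer") {
    // Stay inside the declared numeric bounds.
    const lo = schema.minimum ?? 0;
    const hi = schema.maximum ?? lo + 100;
    return lo + Math.floor(Math.random() * (hi - lo + 1));
  }
  // Prefer declared enum members, then well-formed values for known formats.
  if (schema.enum?.length) {
    return schema.enum[Math.floor(Math.random() * schema.enum.length)];
  }
  if (schema.format === "email") return `user${Date.now()}@example.com`;
  if (schema.format === "uuid") return crypto.randomUUID();
  return "sample";
}

// Example: build a request body from a per-field schema map.
const body = Object.fromEntries(
  Object.entries({
    email: { type: "string", format: "email" } as FieldSchema,
    plan: { type: "string", enum: ["free", "pro"] } as FieldSchema,
    seats: { type: "integer", minimum: 1, maximum: 50 } as FieldSchema,
  }).map(([field, schema]) => [field, valueFor(schema)]),
);
```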
ushakov, over 1 year ago
Why is AI needed for this at all?

You should take a look at Schemathesis (https://github.com/schemathesis/schemathesis)
cebert, over 1 year ago
How is this different from Postman's test generation features? https://www.postman.com/postman-galaxy/dynamically-generate-tests-from-open-api-specs/
xwowsersx, over 1 year ago
So this can be used to generate data given an OpenAPI spec? It's a bit unclear whether that's unbundled from test generation. Say I just want to generate data that conforms to a spec as a one-off. Can this be done?