Hi, this is Mish and Sebastian.
We are working on Step CI - a fully automated API testing platform for developers.<p>Step CI is programming-language independent and supports different API paradigms (REST, GraphQL, XML).<p>Our CLI and test runner are available on GitHub (<a href="https://github.com/stepci" rel="nofollow">https://github.com/stepci</a>) under the MPLv2 license.<p>Since our last launch, Step CI is now able to generate automated tests for your API based on your OpenAPI (Swagger) spec. This saves you a lot of time, as you never have to write and maintain your tests again!<p>We would like to invite you to try our tool and give us feedback! Please star us on GitHub if you like what we are working on!<p>We are very thankful for your attention and any feedback or suggestions we receive from you :)<p>Mish and Sebastian from Germany
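For readers unfamiliar with the format: a Step CI workflow is a YAML file. Below is a minimal sketch following the examples in the project's README; exact field names may differ between versions:

```yaml
# Hypothetical Step CI workflow (based on the public README; fields may differ)
version: "1.1"
name: Status check
tests:
  example:
    steps:
      - name: GET the homepage
        http:
          url: https://example.com
          method: GET
          check:
            status: 200
```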
From my experience, generated tests are worthless for anything more serious than smoke tests. I prefer working with no tests to automated tests; I feel they give you a false sense of confidence.<p>The Step CI engine itself looks good though. It looks like a cleaner, but less powerful, version of a tool (open source, built in-house) we used when I worked at OVHcloud, Venom: <a href="https://github.com/ovh/venom" rel="nofollow">https://github.com/ovh/venom</a><p>Here's an example test file for the HTTP executor of Venom: <a href="https://github.com/ovh/venom/blob/master/tests/http.yml" rel="nofollow">https://github.com/ovh/venom/blob/master/tests/http.yml</a> It's very close to the Step CI format.<p>I'd still use Venom because it's way more powerful (you have DB executors, for example, so after executing a POST request you can actually check in the DB that you have what you expect) and I prefer focusing on actually writing integration tests instead of generating them.<p>Maybe this post sounds harsh (I feel it as I write it, because I have strong feelings against test generation), but I think your approach is a good one for actually writing automated tests. Testing APIs declaratively like this has a great benefit: your tests work on an interface. You can migrate your API to a whole new stack and your tests remain the same. I did it multiple times at OVHcloud: one time migrating a huge API from one Go router to another (Gin->Echo), and another time migrating public APIs from a legacy, in-house Perl engine to a Go server.
If you are looking for a more general tool, one that can not only run API tests but also interact with databases etc., while still using a declarative syntax, try Venom: <a href="https://github.com/ovh/venom" rel="nofollow">https://github.com/ovh/venom</a>
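For comparison, a minimal Venom test suite might look like the sketch below, which follows the HTTP executor examples in the Venom repo; the assertion syntax may vary between versions:

```yaml
# Hypothetical Venom test suite (modeled on the repo's HTTP executor examples)
name: Users API
testcases:
  - name: List users
    steps:
      - type: http
        method: GET
        url: https://api.example.com/users
        assertions:
          - result.statuscode ShouldEqual 200
```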
How do you deal with state between API calls? (For example, a user filling a shopping cart, getting discounts based on their geo/products and then checking out)
Really cool, with a lot of features already!<p>Given the rise of CI/CD, I really think these kinds of tools (CLI tests on HTTP requests), based on a simple format, will become really important. We've built Hurl (<a href="https://github.com/Orange-OpenSource/hurl" rel="nofollow">https://github.com/Orange-OpenSource/hurl</a>), which shares a lot of similarities with Step CI (plain text instead of YAML, captures, JSONPath, XPath, etc.). I will shamelessly take inspiration for some new features (like GraphQL, for instance)!
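For those curious how Hurl's plain-text format compares, here is a small sketch based on Hurl's documented syntax (details may differ across versions; the URL is illustrative):

```
# Fetch a resource and assert on the JSON response
GET https://example.com/api/users/1

HTTP 200
[Asserts]
jsonpath "$.id" == 1
```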
Fantastic project. I worked on something similar that had the same approach, but the main goal was to test availability in different environments (dev, stage, production) against the same tests.
We also had the feature of using the result of one test as input to another, so a complete use case for an API could be tested (auth, POST data, GET results)
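In Step CI terms, that kind of chaining could be expressed with captures, roughly like the sketch below (assuming the captures syntax shown in the public README; all names and fields here are illustrative):

```yaml
# Hypothetical chained flow: capture a token at login, reuse it in the next step
version: "1.1"
name: Auth flow
tests:
  flow:
    steps:
      - name: Login
        http:
          url: https://example.com/api/login
          method: POST
          json:
            user: demo
          captures:
            token:
              jsonpath: $.token
      - name: Fetch results
        http:
          url: https://example.com/api/results
          method: GET
          headers:
            Authorization: Bearer ${{captures.token}}
          check:
            status: 200
```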
Hi, I once tried to build a CLI from a Swagger spec, and it was a disaster: very verbose, and it did not really work. I am wondering what the verbosity of your tool is. How do you deal with imperfect specs?
Some years ago I set up Postman tests and ran them with the CLI during deployment. The whole hassle with the Postman UI makes it really not fun to work with, even for simple tasks that depend on each other. I wanted to use the REST Client extension for VS Code, where you just define requests in a raw format, but there was no usable output to process for Azure DevOps.
I will give this project a chance to get things done in a convenient way. Thanks!
I was looking on the site to understand the automatic generation of tests from OpenAPI, but I couldn't find it. Maybe it's very obvious and I missed it. Does it produce test cases from the examples provided? Or how does it figure out the request/response pairs?
How does this compare to Schemathesis [0]?<p>[0] <a href="https://schemathesis.readthedocs.io" rel="nofollow">https://schemathesis.readthedocs.io</a>