
Ask HN: How do you test SQL?

663 points by pcarolan, over 2 years ago
I've been looking for resources for our data team to apply best practices for testing SQL pipelines (we use DBT) but have not found anything. How do you test SQL pipelines? What patterns, tools and best practices would you recommend? Any good reference material you know of?

96 comments

RobinL, over 2 years ago
Try and write any complex SQL as a series of semantically meaningful CTEs. Test each part of the CTE pipeline with an in.parquet and an expected_out.parquet (or in.csv and out.csv if you have simple datatypes, so it works better with git). And similarly test larger parts of the pipeline with 'in' and 'expected_out' files.

If you use DuckDB to run the tests, you can reference those files as if they were tables (select * from 'in.parquet'), and the tests will run extremely fast.

One challenge if you're using Spark is that tests can be frustratingly slow to run. One possible solution (that I use myself) is to run most tests using DuckDB, and only e.g. the overall test using Spark SQL.

I've used the above strategy with PyTest, but conceptually I don't think it's particularly sensitive to the programming language/test runner you use.

Also I have no idea whether this is good practice - it's just something that seemed to work well for me.

The approach with CSVs can be nice because your customers can review these files for correctness (they may be the owners of the metric) without needing to be coders. They just need to confirm that in.csv should result in expected_out.csv.

If it makes it more readable, you can also inline the 'in' and 'expected_out' data, e.g. as a list of dicts, and pass it into DuckDB as a pandas dataframe.

One gotcha is that SQL does not guarantee row order, so you need to sort or otherwise ensure your tests are robust to this.
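A minimal sketch of this pattern with pytest and DuckDB; the file names, the query, and the sort column are placeholders, not from the comment:

    import duckdb

    def test_cte_step():
        con = duckdb.connect()  # in-memory database
        # DuckDB can read parquet/csv files as if they were tables.
        actual = con.execute("""
            SELECT customer_id, SUM(amount) AS total
            FROM 'in.parquet'
            GROUP BY customer_id
            ORDER BY customer_id  -- SQL guarantees no order; sort explicitly
        """).fetchall()
        expected = con.execute(
            "SELECT * FROM 'expected_out.parquet' ORDER BY customer_id"
        ).fetchall()
        assert actual == expected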
ramenmeal, over 2 years ago
We spin up a docker container running the DB technology we use, run our DB migration scripts on it, and then run integration tests against it. You get coverage of your migration scripts this way too.
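A sketch of that flow with the testcontainers Python library; the postgres:16 image, the migrations/ directory layout, and the users table are assumptions for illustration:

    import glob
    import sqlalchemy
    from testcontainers.postgres import PostgresContainer

    def test_migrations_and_queries():
        with PostgresContainer("postgres:16") as pg:
            engine = sqlalchemy.create_engine(pg.get_connection_url())
            with engine.begin() as conn:
                # Apply every migration in order; this exercises the scripts too.
                for path in sorted(glob.glob("migrations/*.sql")):
                    conn.exec_driver_sql(open(path).read())
            with engine.connect() as conn:
                count = conn.execute(
                    sqlalchemy.text("SELECT count(*) FROM users")
                ).scalar()
                assert count == 0  # a fresh schema should start empty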
purerandomness, over 2 years ago
There's pgTAP for Postgres [1]; the same approach is probably valid for other databases.

Here's [2] a slide deck by David Wheeler giving an introduction to how it works.

[1] https://pgtap.org/
[2] https://www.slideshare.net/justatheory/unit-test-your-database
efxhoy, over 2 years ago
I write mostly batch ETL stuff, all plain psql and bash. We don't have a good testing setup, to be honest. What we use instead:

- Plenty of constraints: uniques, foreign keys, not nulls. Enum types.
- Visuals: dump to csv and plot some graphs. It's much easier to find gaps and strange distributions visually.
- Asserts in DO blocks, mostly counts being equal (see the sketch below).
- Building tables in a _next-suffixed schema and swapping when done.
- Never mutating the source data.
- Using psql's ON_ERROR_STOP setting.
- Avoiding all but the most trivial CTEs, preferring intermediate tables that can be inspected, with constraints and assertions on the intermediate tables.
- "Wasting" machine resources and always rebuilding from scratch when feasible. CREATE TABLE foo AS SELECT is much simpler than figuring out which row to UPDATE. It also ensures reproducibility: if you're always reproducing from scratch, it's always easy. State is hard.

Overall I'm quite happy with the workflow, and we very rarely make mistakes that unit tests would have caught. Our source data is complex and not always well understood (10+ years of changing business logic), so writing good tests would be very hard. Because we never touch the raw source data, any errors we inevitably make are recoverable.

This talk by Dr Martin Loetzsch helped a lot: https://youtu.be/whwNi21jAm4
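A sketch of the "asserts in DO blocks" item, run from Python for illustration (the commenter uses psql scripts; schema and table names are invented):

    import psycopg2

    conn = psycopg2.connect("dbname=etl")
    with conn, conn.cursor() as cur:
        # A count-equality assert: abort the build if old and new disagree.
        cur.execute("""
            DO $$
            BEGIN
                IF (SELECT count(*) FROM staging.orders) <>
                   (SELECT count(*) FROM staging_next.orders) THEN
                    RAISE EXCEPTION 'row count mismatch between builds';
                END IF;
            END
            $$;
        """)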
drx, over 2 years ago
If you're using dbt, dbt tests are a good start: https://docs.getdbt.com/docs/build/tests

You can hook up dbt tests to your CI and Git(hub|lab) for data PRs.

Depending on your needs, you can also look into data observability tools such as Datafold (paid) or re_data (free).
chrisoldwood, over 2 years ago
Back in the mid-noughties I decided to see if I could write SQL in a test-first manner (i.e. TDD). This resulted in me writing a 100% T-SQL based unit testing framework for SQL Server [1], which we then used for the SQL back-end at an investment bank.

On the back of that professional use I wrote a blog post [2] explaining why you might choose to go down this route, as it wasn't the way database code was developed back then (SQL wasn't developed in the same way as the other front-end and back-end code).

A few years later I gave a short 20-minute talk (videoed) [3] to show what writing SQL using TDD looked like for me. It's hard to show all the kinds of tests we wrote in practice at the bank, but the talk is intended to show how rapid the feedback loop can be using a standard DB query tool and two code windows - production code and tests.

Be kind, it was a long time ago and I'm sure the state of the art has improved a lot in the intervening years :o).

Chris Oldwood

---

[1] SQL Server Unit: https://github.com/chrisoldwood/SS-Unit
[2] You Write Your SQL Unit Tests in SQL?: https://chrisoldwood.blogspot.com/2011/04/you-write-your-sql-unit-tests-in-sql.html
[3] Test-Driven SQL: https://www.youtube.com/watch?v=5-MWYKLM3r0
AnEro, over 2 years ago
GitLab has their guide up; I love it and use it all the time. I've been doing data engineering in a small team for about 4 years, helping hospitals with data and hopefully making it easier to understand. Something that is overlooked or undervalued, in my opinion: have stark distinctions separating technical tests from business-logic tests. It makes it easier to communicate what's happening when something is 'wrong' vs. wrong, and it's easier to bring a non-technically inclined team member up to speed. Also, I think it's good to learn from the SaaS side of things and not bloat up or overengineer the infrastructure just because data engineering is the latest development flavour. Keep it simple. Great Expectations is a great tool; however, I think small teams should take a really hard look at their needs and see if a simple orchestration engine and SQL testing is enough. A centralized source for testing is great, however infrastructure isn't free - even when it is, you are paying for it with your team's time.
dagss, over 2 years ago
Probably not that relevant for a data team, but this is what we do as a backend team:

We use Microsoft SQL's docker image and spin it up in the background on our laptop/CI server so port 1433 has a database.

Then we have our homegrown migration file runner that will compute a hash of the migrations, make a database template_a5757f7e, and run the hundreds of migrations on it whenever we add a new SQL migration (todo: make one template build on the previous).

Then we use the BACKUP command to dump the db to disk (within the docker image).

Finally, each test function is able to make a new database and restore that backup from file in less than a second. Populate with some relevant test data, run code, inspect results, drop database.

So our test suite uses hundreds of fresh databases and it still runs in a reasonable time.

(And... our test suite is written in Go, with a lot of embedded SQL strings, even if a lot of our business logic is in SQL.)
jlund-molfese, over 2 years ago
In SQL-heavy ETL pipelines, I normally don't test the SQL queries by themselves, but do black box & performance testing to verify that the output of a certain batch job matches what I expect (automated acceptance testing).

This is easier if you have the same input every time the tests run, like a frozen database image, because then you can basically have snapshot tests.
ahakanbaba, over 2 years ago
The hard part about testing SQL is decoupling from infrastructure and big data sources. We use DuckDB, with pandas dataframes as mock data sources, to unit test SQL. Python testing frameworks (or simple assert statements) can be used to compare inputs and outputs.

When the tests pass, we can change from DuckDB to Spark. This helps decouple testing Spark pipelines from the SparkSession and infrastructure, which saves a lot of compute resources during the iteration process.

This setup requires an abstraction layer to make the SQL execution agnostic to platforms and to make the data sources mockable. We use the open source Fugue layer to define the business logic once and have it be compatible with both DuckDB and Spark.

It is also worth noting that FugueSQL will support warehouses like BigQuery and Snowflake in the near future as part of their roadmap. So in the future, you can unit test SQL logic, and then bring it to BigQuery/Snowflake when ready.

For more information, there is this talk from PyData NYC (SQL testing part): https://www.youtube.com/watch?v=yQHksEh1GCs&t=1766s

Fugue project repo: https://github.com/fugue-project/fugue/
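A minimal illustration of the DuckDB-over-dataframes idea using plain duckdb rather than Fugue; the events table and the query are invented:

    import duckdb
    import pandas as pd

    # A pandas dataframe stands in for the real upstream data source.
    events = pd.DataFrame({"user_id": [1, 1, 2], "amount": [10.0, 5.0, 7.5]})

    con = duckdb.connect()
    con.register("events", events)  # expose the dataframe as a SQL table
    result = con.sql("""
        SELECT user_id, SUM(amount) AS total
        FROM events
        GROUP BY user_id
        ORDER BY user_id
    """).df()

    expected = pd.DataFrame({"user_id": [1, 2], "total": [15.0, 7.5]})
    pd.testing.assert_frame_equal(result, expected)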
seanhunter, over 2 years ago
One easy method is just to test SQL the way you test anything else:

1) Set up a test db instance with controlled data in it as the basis for your test cases. Ideally this data is taken from real data that has caused pipeline problems in the past, but scrubbed for PII etc. You can also use or write generators to pad this out with realistic-looking fake data. If you do this, the same dataset can be used for demos (once you add data for your demo paths).

2) Write test cases using whatever test framework you use in your main language. Say you code in python, you write pytest cases; java -> junit, etc. You can help yourself by writing a little scaffolding that takes a SQL query and a predicate, runs the query, and asserts the predicate over the result (see the sketch below). If you don't have a "main language", just write these test cases in a convenient language.

3) Consider resetting the state of the database (probably by reloading a controlled dump before each test batch) so any tests which involve inserts/deletes etc. work. You may actually want to create an entirely new db and load it before each test run so that you can run multiple test batches concurrently against different dbs without contention messing up your results. Depending on your setup you may be able to achieve a similar effect using schemas or (sometimes but not always) transactions. You want each test run to be idempotent and isolated, though.

Doing it this way has a number of benefits because it's easy to add your SQL test cases into your CI/CD (they just run the same as everything else).
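A sketch of the query-plus-predicate scaffolding from step 2, in pytest form; sqlite3 stands in for whatever test db instance you use, and the table is made up:

    import sqlite3

    def check_query(conn, sql, predicate):
        # Tiny scaffolding: run a query, assert a predicate over the rows.
        rows = conn.execute(sql).fetchall()
        assert predicate(rows), f"predicate failed for: {sql}"

    def test_no_negative_balances():
        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE accounts (id INTEGER, balance REAL)")
        conn.executemany("INSERT INTO accounts VALUES (?, ?)",
                         [(1, 10.0), (2, 0.0)])
        check_query(conn, "SELECT balance FROM accounts",
                    lambda rows: all(b >= 0 for (b,) in rows))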
cwp, over 2 years ago
Two ideas here:

1) The same way you'd write any other tests. Use your favourite testing framework to write fixtures and tests for the SQL queries:

- connect to the database
- create tables
- load test data
- run the query
- assert you get the results you expect

For insert or update queries, that assertion step might involve running another query.

2) DBT has support for testing! It's quite good. See https://docs.getdbt.com/docs/build/tests
hichkaker, over 2 years ago
I did data engineering for 6 years and am building a company to automate SQL validation for dbt users.

First, by "testing SQL pipelines", I assume you mean testing changes to SQL code as part of the development workflow (vs. monitoring pipelines in production for failures / anomalies)?

If so:

1 - Assertions. dbt comes with a solid built-in testing framework [1] for expressing assertions such as "this column should have values in the list [A,B,C]", as well as checking referential integrity, uniqueness, nulls, etc. There are more advanced packages on top of dbt tests [2]. The problem with assertion testing in general, though, is that for a moderately complex data pipeline it's infeasible to achieve test coverage that would cover most possible failure scenarios.

2 - Data diff: for every change to SQL, know exactly how the code change affects the output data by comparing the data in dev/staging (built off the dev branch code) with the data in production (built off the main branch). We built an open-source tool for that: https://github.com/datafold/data-diff, and we are adding an integration with dbt soon which will make diffing as part of the dbt development workflow one command away [2].

We make money by selling a Cloud solution for teams that integrates data diff into GitHub/GitLab CI and automatically diffs every pull request to tell you how a change to SQL affects the target table you changed, downstream tables, and dependent BI tools (video demo: [3]).

I've also written about why reliable change management is so important for data engineering and what the key best practices to implement are [4].

[1] https://docs.getdbt.com/docs/build/tests
[2] https://github.com/calogica/dbt-expectations
[3] https://github.com/datafold/data-diff/pull/364
[4] https://www.datafold.com/dbt
[5] https://www.datafold.com/blog/the-day-you-stopped-breaking-your-data
munk-a, over 2 years ago
Use a testing framework to mock some data into the DB, run your queries, verify the result. Make sure you have a variety of test data to fully exercise the surface of logic you expect to hit.

Basically, treat the query and database as a black box for testing, like you would another third-party API call.

I would strongly suggest having a layer of code in your application that is *exclusively* your data access, and keeping any logic you can out of it. Data-level tests are pretty onerous to write in the best circumstances, and the more complexity you allow to grow around the raw SQL, the worse of a time you'll have - swapping out where clauses and the like dynamically is a cost you'll need to eat, and sometimes having a semi-generic chunk that you reuse with some different joins can be more efficient than writing ten completely different access functions with completely different internal logic, so judgement is required.

At the end of the day a database is like any other third-party software component - data goes in, data comes out... The nice thing is that SQL is well defined and you've got all the definitions, so it's easier to find the conditional cases you need to test really closely... but databases are complex beasties and it'll never be easy.
uticus, over 2 years ago
My favorite interview question. No, I mean when *I'm* being interviewed. The sheepish grins let me know I'm not alone.

Best ideas IMO (no particular order):

- make SQL dumber; move logic that needs testing out of SQL

- use an ORM that allows composing, and test the composition disconnected from the DB (i.e. for .NET groups using EF, test the LINQ for correct filtering etc., instead of testing for expected data from a db) (I see this has already been recommended elsewhere)

* edited formatting
taeric, over 2 years ago
The best tool nowadays has to be to spin up a database and execute the queries against it. If you're on a database setup where spinning up an instance takes a long time, consider docker.

Be wary of too many techniques that are supposed to make things easier to test but also make it hard for you to leave a query pipeline. In particular, SQL should be very easy to test in the form "with these as our base inputs, we expect these as our base outputs." Trying to test individual parts of the queries is almost certainly doomed to massive bloat of the system and will cause grief later.
uticus, over 2 years ago
Related - how is *any* declarative language tested?

A quick web search confirms suspicions: it is not easy.

https://www.metalevel.at/prolog/testing
kkleindev, over 2 years ago
The teams I've been working on have resorted to data tests instead of code tests. That means the data produced by your code is tested against a certain set of expectations - in stark contrast to code being tested _before_ its execution.

We've written our own tool to compare different data sources against each other. This allows, for example, testing for invariants (or expected variations) before and after a transformation.

The tool is open source: https://github.com/QuantCo/datajudge

We've also written a blog post trying to illustrate a use case: https://tech.quantco.com/2022/06/20/datajudge.html
sakopov, over 2 years ago
.NET shop using SQL Server here, but I think something similar to what we do can apply to any stack. We use Testcontainers [1] to spin up a container with the SQL Server engine running on it, then use FluentMigrator [2] to provision tables and test data to run xUnit integration tests against. This has worked remarkably well.

[1] https://dotnet.testcontainers.org/
[2] https://fluentmigrator.github.io/
crabbone, over 2 years ago
Can you please expand on what you mean by DBT?

DBT, specifically DBT-2, is a suite of tests designed to *benchmark* a database *system*. These tests aren't interested in, e.g., the correctness of an application that is using the database. They are meant to test the system as a whole by modeling some sort of an "average business", defining some sort of an "average business operation", and estimating how many such operations a particular deployment of a system can perform.

Such tests are rarely of any interest to application developers, and are more geared towards DBAs who execute them to estimate the efficiency of a system they deploy or the amount of hardware necessary to support a business.

MySQL DBT2 suite: https://dev.mysql.com/downloads/benchmarks.html

PostgreSQL DBT2 suite: https://wiki.postgresql.org/wiki/DBT-2

Those tools are typically modeled on TPC-B... and it would require a separate discussion to describe why these tests are obsolete and why there isn't really any replacement.

----

However, from the rest of your question it seems that you may be using the DBT acronym in some other way... So, what exactly are you testing? Are you interested in performance? A benchmark? Schema correctness? Are you perhaps trying to simply test the *application* that is using a SQL database, and you want to avoid dealing with database setup as much as possible?
dave_infuseai, over 2 years ago
dbt does have testing built in, but of course there are only certain cases for which that kind of testing works. dbt can't know if your metrics 'look' right; only you will know.

As others have mentioned, you want to compare the results of your queries against a previously known 'good' state of the data. So, as you're making data model changes, you can regularly check your development environment against production to see how your changes affect the data.

Data profiling is the perfect tool for this, especially when your pipeline reaches a certain size or you're dealing with very large datasets.

I work on the team creating PipeRider.io, which uses data profiling comparisons as a method of "code review for data".

It becomes particularly useful when you automate generating data profiles of development and production environments in CI, and attach the data profile comparison to the pull request comment. It makes seeing the impact of changes so much easier.

Here's an article that discusses the benefits of this: https://blog.infuseai.io/why-you-lack-confidence-merging-dbt-data-model-prs-c4f1bc35e7b8
Waterluvian, over 2 years ago
GitHub Actions trivialized this for us. Spawning a Postgres database for testing is easy and carefree: spawn, run operations, and evaluate the state of the database after each operation.

We have two flavours of test: one that rolls back the transaction each time, ensuring a clean, known state; and one that doesn't, allowing your tests to avoid lots of overhead by "walking through a series of incremental states".

Yes, some might call the latter heresy. But it works great.
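A sketch of the rollback-per-test flavour with pytest and psycopg2; the connection string and the widgets table are placeholders, assumed to exist from migrations:

    import psycopg2
    import pytest

    @pytest.fixture
    def tx():
        # Each test runs inside one transaction that is rolled back afterwards,
        # returning the database to a clean, known state.
        conn = psycopg2.connect("dbname=test_db")
        cur = conn.cursor()
        yield cur
        conn.rollback()  # discard everything the test did
        conn.close()

    def test_insert_visible_inside_tx(tx):
        tx.execute("INSERT INTO widgets (name) VALUES ('x')")
        tx.execute("SELECT count(*) FROM widgets WHERE name = 'x'")
        assert tx.fetchone()[0] == 1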
dwohnitmok, over 2 years ago
Given that you're using dbt, it comes with a testing framework out of the box: https://docs.getdbt.com/docs/build/tests
nephton, over 2 years ago
Write a toolbox for your tests that is able to spin up and tear down an instance of the database that you use (e.g. on a random port with a random password). Make sure that the new instance is really independent, especially storing its data in a separate (temporary) directory.

During a test:

- At the start of the test (fixture), run a new DB instance.
- Apply the DB schema.
- Possibly: remove constraints that would disturb your tests (e.g. unimportant foreign keys).
- Possibly: add default values for columns that are not important for your test (but do so with caution).
- Run your test.
- Assert results (maybe also directly via access to the database, or via a dump of tables).
- Tear down the database, possibly removing all data (except error logs).

I used this pattern to test software that uses MySQL or MariaDB server. For Microsoft SQL Server it may be enough to create a new database instead of running a new instance (possible, but not as easy as for MySQL/MariaDB).

On a CI server this can be used to run tests against all required DB server types and versions.
KronisLV, over 2 years ago
I think that when it comes to testing databases... most people just don't.

Look at this JetBrains survey: https://www.jetbrains.com/lp/devecosystem-2021/databases/

Around half of the people never debug stored procedures. Three quarters of people don't have tests in their databases. Only half of the people version their database scripts.

Personally: the answer is containers. Spin up a database in a container (manually or on a CI server) and do whatever you need with it. Seed it with some test data, connect an app to it, check that app tests pass when writing to and reading from a live database (as opposed to mock data stores or something like H2), then discard the container.

Even if you don't have a traditional app, throwaway instances of the real type of DB that you'll be using are great, both for development and testing.
jiggywiggy, over 2 years ago
To be honest, I would love testing to work.

I have set up and maintained several unit test suites in Jest, and wrote several large e2e test suites in Cypress.

I don't think any of them saved time over simply having a manual checklist and testing manually.

Maybe me and my former teammates are doing it wrong. Talking 8+ teams, from corporate to startup.

But I'd love to be proven wrong. E2E definitely caught the most issues.
gsvclass, over 2 years ago
In GraphJin, an automatic GraphQL-to-SQL compiler, we use the gnomock library: it starts up a database instance (docker), creates the schema and test data, and finally our code connects to it and runs a series of tests. We run these across MySQL, Postgres and a few other DBs; Gnomock supports a wide range of them. Right now we don't take down the db for every test, only between test runs, but it's fast enough that we could. This whole thing runs off a simple `go test -v .` command and we run it on every commit using a githook. https://github.com/dosco/graphjin/blob/master/tests/dbint_test.go
Ataraxy, over 2 years ago
So I'm not an expert, but for simplistic use cases I merely make use of https://github.com/oguimbal/pg-mem

It's a lot faster and easier than dealing with containers and the like.
luckystarr, over 2 years ago
I use a fake object in place of a database connection which gives fake responses when the correct SQL query is sent to it.

Example:

    db = Fake().expect_query("SELECT * FROM users", result=[(1, 'Bob'), (2, 'Joe')])

Then you do:

    db.query("SELECT * FROM users")

and get back the result.

In Python, if you do this in a context manager, you can ensure that all expected queries actually were issued, because the Fake object can track which ones it has already seen and throw an exception on exit.

The upside of this is you don't need any database server running for your tests.

Update: This pattern is usually called db-mock or something like this. There are some packages out there. I built it a few times for companies I worked for.
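A minimal runnable sketch of such a fake, invented to match the comment's description rather than any specific package:

    class Fake:
        """A toy db-mock: canned results, and a check that every
        expected query was actually issued."""

        def __init__(self):
            self._expected = {}

        def expect_query(self, sql, result):
            self._expected[sql] = result
            return self

        def query(self, sql):
            if sql not in self._expected:
                raise AssertionError(f"unexpected query: {sql}")
            return self._expected.pop(sql)

        def __enter__(self):
            return self

        def __exit__(self, exc_type, exc, tb):
            # On clean exit, complain about expectations never exercised.
            if exc_type is None and self._expected:
                raise AssertionError(f"queries never issued: {list(self._expected)}")

    with Fake().expect_query("SELECT * FROM users",
                             result=[(1, 'Bob'), (2, 'Joe')]) as db:
        assert db.query("SELECT * FROM users") == [(1, 'Bob'), (2, 'Joe')]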
summerlight, over 2 years ago
Google has internally developed a SQL dialect, and some investments have been made to give SQL first-class language support. It has several interesting functionalities that make SQL nice for ETL use cases:

- It has language-level module support, similar to other languages, so SQL functions are reusable across multiple codebases without depending on code-generation tricks. One of the major blockers for SQL adoption has been complex domain-specific business logic, and now the situation is better.

- It has official unit test support. Google uses Blaze (known as Bazel externally), so adding a unit test for SQL code is as simple as adding a SQL module (and its test input) dependency to a SQL test target, then writing a test query and its expected output in the style of approval testing. Setting up the DB environment is all handled by the testing framework.

- It has official SQL binary support. That's just a fancy name for handling lots of the tedious stuff involved in running a SQL query (e.g. putting everything needed into a single package, performing type checks, handling input parameters, managing native code dependencies for FFI, etc.).

None of these are technically too sophisticated, at least in theory, but combined together they become pretty handy. Now I can write a simple SQL module which mostly depends on other teams' SQL modules, write a simple unit test for it, then run a SQL binary just as in other languages. I haven't worried a single time about how to set up a DB instance. This loop is largely focused on OLAP, so it's a bit different for OLTP, which has another set of established testing patterns.
nextlevelwizard, over 2 years ago
Test the queries your application is making. I wouldn't put much effort into this. You have to trust that other people test their stuff anyway, so why make an exception for a database? I'd much rather test that your backups work.

And that can be done by dumping the database (possibly verifying the content of that dump), taking a backup, restoring the backup to a fresh container, then comparing a dump of that freshly restored database to the one you took at the start.
systems, over 2 years ago
Well, I used to work at a place where they used https://tsqlt.org/ for testing; it worked great.

I don't know the technical details of how to set it up - it was already set up when I worked there.

But basically, we wrote SQL scripts that included statements to:

1. create the db structure, tables or views
2. insert test data (you can insert corner cases etc.)
3. run the function or procedure
4. run an assert to confirm the results match our expectations

Test scripts were run by the CI/CD process.
gorgoiler, over 2 years ago
Realistically, most of my tests are integration / end-to-end tests. They typically get written only when it comes to patch time, when you first want proof that the old system works before you tear it apart and rebuild it. I think that's probably the only SQL testing I've ever done, and honestly, if they are fast enough, that kind of integration testing is all you will need too.

As the meme says: *App worked before. App works afterwards. Can't explain that.*
creakingstairs, over 2 years ago
Phoenix/Elixir/Ecto seems to handle this really well. When you write a test in Phoenix, you can use `DataCase`, which automatically creates sandboxed transactions [1] for each test. This makes writing integration tests a breeze.

[1] https://hexdocs.pm/ecto_sql/Ecto.Adapters.SQL.html#module-sql-sandbox
ufmace, over 2 years ago
I don't think there's any perfect universal answer for this. I only have a few things I've done that work.

Rails for Ruby comes with some pretty nice setups for testing database code. There's a test DB by default with the same schema as production, and the usual test frameworks (FactoryBot and RSpec) make it easy to set up some data in the actual DB for each spec, run model code that makes actual SQL queries, and assert against the results.

I would have hoped most other web frameworks made as much effort to make it straightforward to test your database code, but it doesn't really seem to be the case.

In Rust, there's a very handy crate called sqlx. At compile time, it runs all of the SQL in your codebase against a copy of your database, both to validate that it runs without errors and to map the input and output types so it can typecheck the Rust code.

When it comes to things like validating that your queries are performant against production datasets, or that there isn't any unexpected data in production that breaks your queries, well, I pretty much have nothing. Maybe try a read replica to execute against?
epgui, over 2 years ago
We use this and take an example-based testing approach for any non-trivial dbt models: https://github.com/EqualExperts/dbt-unit-testing

A more trivial example:

    {% call dbt_unit_testing.test(
        'REDACTED',
        'Should replace nullish values with NULL'
    ) %}
      {% call dbt_unit_testing.mock_source('REDACTED', 'REDACTED', opts) %}
        "id" | "industry"
        1    | 'A'
        2    | 'B'
        3    | ''
        4    | 'Other'
        5    | 'C'
        6    | NULL
      {% endcall %}
      {% call dbt_unit_testing.expect(opts) %}
        "history_id" | "REDACTED"
        1            | 'A'
        2            | 'B'
        3            | NULL
        4            | NULL
        5            | 'C'
        6            | NULL
      {% endcall %}
    {% endcall %}
davedx, over 2 years ago
SQL being relatively pure, functional, algebraic and whatnot, it doesn't require the same rigor of automated test coverage that more "systems" programming languages do. (By "systems" I include all languages that people use to integrate the various parts of a software system - i.e. regular programming languages like Java, C#, TypeScript, C++ and so on, not just low-level languages.)

Stored procedures are a different beast, though. Having struggled mightily to debug stored procedures running in MSSQL on a MacBook (on Windows, SQL Server Management Studio lets you set breakpoints; on Mac you're SOL), if I were building an application based on them I'd definitely try to spin up some kind of testing framework around them. I guess what I'd probably do is have a temporary database and some regular testing framework that nukes the db, then calls the stored proc(s) with different inputs and checks what's in the tables after each run. Sounds slow and clunky?
winrid, over 2 years ago
I try to do some kind of compile-time query checking. I really like sqlx with Rust, and other languages have some kind of equivalent (although maybe not as nice) like JOOQ. If you can store the queries in some kind of configuration, like SQL files, then this is easy no matter the language.
jesseryoung, over 2 years ago
I started in the software engineering space and moved into data engineering, and I was floored by the complete lack of tooling. There is a HUGE gap between software engineering and data engineering when it comes to both tooling and practice. Even the simplest "unit" test of "is the SQL statement valid" is not all that common in frameworks and tooling, yet in practice it covers something like 90% of the production failures that I've seen.

Starting with a framework that is programming-language first (e.g. Spark) can help you build your own tooling to actually build unit tests. It's frustrating, though, that this isn't just common across other ETL tooling.
samsquire, over 2 years ago
From a SQL database implementation perspective: in my toy Python barebones SQL database that barely supports inner joins (https://github.com/samsquire/hash-db), I tested by running my query with two joins against PostgreSQL and checking that it produced the same results.

I ought to produce unit tests that prove the tuples from each join operation form the correct dataset. I've only ever tested with 3 join operations in one query.

From a user perspective, I guess you could write some tooling that loads example data into a database and does an incremental join with each part of the join statement added.
alphanumeric0, over 2 years ago
I actively avoid it. The SQL languages have no default libraries for testing. Furthermore, SQL cannot be easily composed, or abstracted into modules, so there is no sense in testing it. It is not designed to be tested.
pmarreck, over 2 years ago
Excellent question. Not sure why it's getting no traction.

For my own use cases, I usually test this at the application level and not the DB level. This is admittedly not unit-testing my SQL (or stored procs or triggers) but integration-testing it.
megalan247, over 2 years ago
I experienced this issue at many companies and found there to be no real solution, especially when not testing with the real data. This was one of the reasons I created DevDB [1]: it allows you to store a copy of the data and very quickly spin up containers to test large databases in CI, and verify that the output, performance, etc. are what you expect.

You can achieve the same thing with "docker commit"-ing data into docker images of your DB engine of choice and firing your queries at them, but that only really works with smaller datasets.

[1] https://devdb.cloud
hansvm, over 2 years ago
You get a lot of bang for your buck with an expected input/output setup. In basically every database it's trivial to set up a few tiny tables, and it's cheap to run a query on small input. Pick a few edge cases and a few non-trivial representative examples, and any passing query written by a real person will likely express the logic you care about, or will expose an additional input/output pair to add to the tests. Combine the high efficacy with the ease of writing and understanding such a test, and it becomes hard to argue against having at least a few.
AdrianB1, over 2 years ago
It depends a lot on your use case. In my case we have SQL running against tables with trillions of rows, so we need to look at every single SQL query in the code that runs for more than tens of milliseconds, or often enough to be significant. There is no automation to replace a good DBA looking at an execution plan; I heard about a guy at some financial company whose job for the past 10 days was to tune the same ~10 queries to death. But if your app works with a database that could be hosted comfortably on a smartphone, none of this is needed.
zabzonk, over 2 years ago
assuming you are asking about sql select statements, the problem is knowing what the correct answer is so you can test against it. for most data, you don't, and probably cannot, know this.

not a unique problem with sql, btw.
rowls66, over 2 years ago
This is a little off topic, but related. Does anyone know of any tools that would allow information about the size and shape of expected data to be provided along with a database schema, so that developers could get instant feedback if queries were likely to perform poorly when run against production data sets, or when a database grows beyond a certain size? I have seen many instances of SQL going into production databases that works well for a while but gets much slower as the database grows.
thingsilearned, over 2 years ago
This linter can really enforce some best practices: https://github.com/sqlfluff/sqlfluff

A list of best practices: https://docs.getdbt.com/guides/legacy/best-practices

And, shameless plug, but there's a chapter on modeling in my book: https://theinformedcompany.com
cgopalan, over 2 years ago
You say you are using dbt, so doesn't "dbt test" provide you with the functionality to test? I assume that by testing SQL pipelines, you want to test whether the data written to intermediate stores conforms to what you expect. You should be able to do that with dbt test. If you are using an analytical database like Snowflake, you could direct the results of the dbt run and dbt test to a test database and do your testing there.
idlewords, over 2 years ago
Live in production!
geocrasher, over 2 years ago
SQL pipelines can be tested pretty easily. If they look congested, use a snake (Python, if you like) to try to knock the CRUD out of them.
davvid, over 2 years ago
Here's a nifty Python module for testing against Postgres that a friend wrote. It creates a temporary database on the fly for testing and tears it down at the end. You can then populate the database and run all kinds of integration tests against it.

https://github.com/ugtar/pg_temp
edublancas, over 2 years ago
If you use Jupyter, check out JupySQL. It allows you to break down long queries into multiple cells so you can test them individually: https://jupysql.ploomber.io/en/latest/compose.html (Disclosure: my team develops JupySQL)
andy_ppp, over 2 years ago
I really love (as with lots of things) how Ecto from the Elixir community handles this: you have an extra database, and because Postgres has awesome transactions you can even run all these tests of your whole data layer in parallel, including any SQL. Ecto is largely a domain-specific language for writing modular SQL, so that helps test things too.
sam0x17, over 2 years ago
Side note: I believe it is good form to always have unit tests that test the "up" and "down" for every single migration in your app. It's not always possible, but if you're strict about it you can avoid a lot of bad patterns and have a much healthier set of migrations.
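A rough sketch of what that round-trip can look like, using sqlite and inline (up, down) SQL pairs as stand-ins for a real migration runner:

    import sqlite3

    # Inline placeholder migrations; a real app would load these from files.
    MIGRATIONS = [
        ("CREATE TABLE users (id INTEGER PRIMARY KEY)", "DROP TABLE users"),
        ("CREATE TABLE orders (id INTEGER PRIMARY KEY, user_id INTEGER)",
         "DROP TABLE orders"),
    ]

    def schema(conn):
        return sorted(r[0] for r in conn.execute(
            "SELECT sql FROM sqlite_master WHERE sql IS NOT NULL"))

    def test_up_down_roundtrip():
        conn = sqlite3.connect(":memory:")
        for up, down in MIGRATIONS:
            before = schema(conn)
            conn.execute(up)    # migration works going up...
            conn.execute(down)  # ...and going down
            assert schema(conn) == before
            conn.execute(up)    # leave it applied for the next migration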
hans0l074, over 2 years ago
On a related note (though it does touch upon testing), mitchellh open-sourced Squire [1] recently (today?). Note though that it's PostgreSQL-specific.

[1] https://github.com/mitchellh/squire
gxt, over 2 years ago
Abstraction layer between the query you write and the one that gets executed. This way you can mock the schema and run the query on the mock to assert some condition x.

A ref() concept like dbt's is sufficient. When testing, have ref output a different (test-x) name for all your references.
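A toy illustration of that ref() indirection; the flag, function, and table names are all invented:

    # Production resolves ref() to real tables; tests resolve to fixtures.
    TEST_MODE = True

    def ref(name: str) -> str:
        return f"test_{name}" if TEST_MODE else name

    query = f"SELECT count(*) FROM {ref('orders')}"
    # In tests this reads from test_orders, which you populate with fixtures.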
nitwit005, over 2 years ago
If your app is API driven, I would create test data with the APIs, and test the queries against what that generates. APIs tend to change less often than the internal data representation, and the app developers can usually figure out how to fix the tests when they do.
iblaine, over 2 years ago
If I were building a tool to test SQL, then I'd try to load the SQL into a dataframe, then test it by mocking the tables and the output. This is a tough problem to solve. If testing is important, possibly move away from SQL and towards ORMs.
JenrHywy, over 2 years ago
We test it implicitly with integration tests across the API for the most part. We occasionally have unit tests (in C#) that directly call stored procs.

The backbone for this is that we spin up a DB per unit test, so we don't have to worry about shared state.
dangwhy, over 2 years ago
I've had great success with this: https://news.ycombinator.com/item?id=34580675

I am always baffled by why this isn't a more popular way of writing SQL.
javaunsafe2019, over 2 years ago
I always did this with integration tests. Put some data in the db, use the repository (aka your SQL), and validate the results.

Most of the time there is a layer around your SQL (a repository, a bash script, or whatever) that you can use for integration testing.
devdada, over 2 years ago
Creating a separate isolated environment is totally the way to go: migrate the data from your existing database into it. This way the data becomes "dummy" data, but it provides you with something to conduct testing with.
jve, over 2 years ago
I regret building an integration in SQL Server Integration Services, with some nice add-on tools that can make HTTP requests and such. In the end it is an untestable, hard-to-follow solution.

Anyone have any recommendations on testing SSIS?
pharmakom, over 2 years ago
Testing databases with Docker, dummy data, etc. can be very slow so it’s a big win to use Bazel as the test executor. This enables caching between runs and between machines. Saves us about 20 mins of CI time every build.
Gigachad, over 2 years ago
Part of the solution could be using tooling that can check at compile time that your SQL is valid, like https://github.com/launchbadge/sqlx
nikita, over 2 years ago
Disclosure: I'm CEO of Neon (neon.tech).

One of our premises is the ability to instantly create a test environment by creating a branch. I'd love to hear what you think about it.
mikpanko, over 2 years ago
Datafold (https://www.datafold.com/) is solving exactly that problem and has an integration with dbt.
xmorse, over 2 years ago
I do E2E tests for the core features against a staging database and URL.

This way I don't waste time with unit tests that quickly get old and that no one wants to maintain and run.
hparadiz, over 2 years ago
Surprised only a single comment in this thread has mentioned the "EXPLAIN" keyword that exists in MySQL.

It's excellent for checking delete queries before running them.
d0100, over 2 years ago
I&#x27;ve been using IntegreSQL and it works pretty well
johnthescott, over 2 years ago
sqldb testing is historically problematic, for at least correctness AND performance.

in postgresql a cool tool for performance is "hypothetical indexing", which predicts how the optimizer will use indexes in any sql query. i could see an automated testing tool written around "hypothetical indexing".

also, i believe MS SQL Server supports HI.
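For Postgres, the HypoPG extension implements this idea; a rough sketch of how a check could look (the connection string, table, and query are invented):

    import psycopg2

    conn = psycopg2.connect("dbname=perftest")
    with conn, conn.cursor() as cur:
        cur.execute("CREATE EXTENSION IF NOT EXISTS hypopg")
        # Create a hypothetical (planner-only) index; nothing is built on disk.
        cur.execute(
            "SELECT * FROM hypopg_create_index("
            "'CREATE INDEX ON orders (customer_id)')")
        cur.execute("EXPLAIN SELECT * FROM orders WHERE customer_id = 42")
        for (line,) in cur.fetchall():
            print(line)  # look for the hypothetical index in the plan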
spikder, over 2 years ago
I have done this more than once and it is great: point Django at your db, dump the models, and then use Django's test framework.
dirtybirdnj, over 2 years ago
in production, on a friday before I leave for vacation

For real though, I love tools like SequelPro or TablePlus that let me work out a query before I bake logic into my apps. Sometimes I also use them to work out the data needed for reports. I am working with Salesforce for the first time in my life, and apparently there are tools that let me treat it like what I'm used to with SequelPro.
Andys, over 2 years ago
Something to consider (for Go users): use the sqlc library, which makes whole classes of bugs/mistakes into compile-time errors.
DecoPerson, over 2 years ago
In production ;)

But my app is for six users at one site, it's not mission critical, and the sqlite DB is backed up hourly.

Life's too short for (unnecessary) testing.
world2vec, over 2 years ago
DBT supports testing right out of the gate; you can write little queries and macros to test and validate every column.
EGreg, over 2 years ago
Have small databases.

Learn to use IMPORT TABLESPACE in MySQL, or just dump and import SQL.

Every time you run a test, you set up the mock databases again.
dllthomas, over 2 years ago
I've been wanting to put together a property-testing framework that lets you specify properties as queries.
kindofabigdeal, over 2 years ago
Either a test environment or a mocking library
z3t4, over 2 years ago
Local dev DB and/or staging
User23, over 2 years ago
Like any other integration test.
postalrat, over 2 years ago
Git, merge requests, developers.
neural_thing, over 2 years ago
Great Expectations and Datafold
distantsounds, over 2 years ago
Flyway: https://flywaydb.org/
tommyage, over 2 years ago
Relational databases rely on math. You may test your implementation, or the input to your statements.
karakanb, over 2 years ago
Disclaimer: I am building a data platform called Blast, and part of our focus is to make SQL pipelines easier to maintain in every aspect: easier to write, easier to test, easier to ensure they are correct before they are deployed. Link in my bio if you are interested; happy to have a conversation anyway. This is a problem we have been thinking about and providing solutions for for quite some time.

There are a few types of tests one would like from a SQL pipeline, each with a different value-add:

- Quality assurance tests: these are things like dbt tests; they mainly test the accuracy of the result after the tables are produced. Examples would be tests like "this column should not contain any `null` values" or "it should have only X, Y and Z values". They are valuable checks, but the fact of the matter is that there are many cases where running these sorts of tests after the data is produced is a bit too late.

- Integration tests: specify an input table and your expected output, and run your queries against it; the end result must match the expectations at all times. These are useful for running regularly and serve as "integration tests" for your SQL assets. They allow validating the logic inside the query, provided that the input covers the cases that need to be covered, and they can be executed in CI/CD pipelines. We are exploring a new way of doing this with the Blast CLI, effectively running a BigQuery-compatible database in memory and running tests against every asset in the pipeline locally.

- Validation tests: these aim to ensure that the query is syntactically correct on the production DWH, usually using tricks like `EXPLAIN` or dry-run in BigQuery (see the sketch below). These sorts of tests ensure that the tables/fields referenced actually exist, the types are valid, the query has no syntax errors, etc. They are very useful for running in CI after every change, effectively allowing you to catch many classes of bugs.

- Sanity checks: these are similar to the quality assurance tests described above, but with a bigger focus on making sense of the data. They range from "this table has no more rows than this other table" to business-level checks such as "the conversion rate for this week cannot be more than 20% lower compared to last week". They are executed after the data is produced as well, and they serve as an alerting layer.

There is no silver bullet when it comes to testing SQL, because in the end what is being tested is not just the SQL query but the data asset itself, which makes things more complicated. The fact that SQL has no standardized way of testing things, and that the language has a lot of dialects, makes this harder than it could have been. In my experience, the combination of the strategies above gives very good coverage when it comes to approximating how accurate the queries are and how trustworthy the end result is, provided that a healthy mix of them is used throughout the whole development lifecycle.
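A minimal sketch of the EXPLAIN-style validation test mentioned above, for Postgres; the connection string and query are placeholders:

    import psycopg2

    def validate(conn, sql):
        # EXPLAIN parses and plans the query without executing it, so this
        # raises on syntax errors and on missing tables/columns.
        with conn.cursor() as cur:
            cur.execute("EXPLAIN " + sql)

    conn = psycopg2.connect("dbname=dwh")
    validate(conn, "SELECT id, amount FROM payments WHERE amount > 0")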
davps, over 2 years ago
This approach doesn't use an ORM, and it runs the tests concurrently against the same database.

Every time I commit changes, the CI/CD pipeline follows these steps, in this order:

- I use sqitch for database migration (my DB is postgresql).
- Run the migration script `sqitch deploy`. It runs only the items that haven't been migrated yet.
- Run the `revert all` feature of sqitch to check that the revert action works well too.
- Run `sqitch deploy` again to test that the migration works well from scratch.
- After the schema migration has been applied, run integration tests with TypeScript and a test runner, which include a mix of application tests and database tests.
- If everything goes well, run the migration script against the staging environment; eventually it runs on the production database after a series of other steps in the pipeline.

I test my database queries from TypeScript this way:

In practice I'm not strict about separating the tests for the database queries from the application code. Instead, I test the layers as they are being developed, starting from simple inserts into the database, where I test the application CRUD functions being developed, plus the fixture generators (the code that generates synthetic data for my tests) and the deletion and test-cleanup capabilities.

Having that boilerplate code, I then start testing the complex queries, and if a query is large enough (and assuming there are no performance penalties from using CTEs for those cases), I write my large queries in small chunks as CTEs, like this (replace SELECT 1 with your queries):

    export const sql_start = `
    WITH dummy_start AS (
      SELECT 1
    )`;

    export const step_2 = `${sql_start},
    step_2 AS (
      SELECT 1
    )`;

    export const step_3 = `${step_2},
    step_3 AS (
      SELECT 1
    )`;

    export const final_sql_query_to_use_in_app = `${step_3},
    final_sql_query_to_use_in_app AS (
      SELECT 1
    )
    SELECT * FROM final_sql_query_to_use_in_app`;

Then in my tests I can quickly pick any step of the CTE to test it:

    import {step_2, step_3, final_sql_query_to_use_in_app} from './my-query';

    test('my test', async t => {
      // here goes the code that loads the fixtures (testing data)
      // into the database

      // this is one test; repeat for each step of your sql query
      const sql = `${step_3}
        SELECT * FROM step_3
        WHERE .....`;
      const {rows: myResult} = await db.query(sql, [myParam]);
      t.is(myResult.length, 3);

      // here goes the code that cleans up the testing data
      // created for this test
    });

And in my application, I just use the final query:

    import {final_sql_query_to_use_in_app} from './my-query';

    db.query(final_sql_query_to_use_in_app);

The tests start with an empty database (sqitch deploy has just run on it); then each test creates its own data fixtures (this is the most time-consuming part of the test process), with UUIDs as synthetic data so there are no conflicts between each test's data. This makes it possible to run the tests concurrently, which is important for detecting bugs in the queries too. I also include a cleanup process after each test, so after finishing the tests the database is empty of data again.

For SQL queries that are critical pieces, I was able to develop thousands of automated tests with this approach, in addition to combinatorial approaches. In cases where a column of a view is basically an operation on states, if you write the logic in SQL directly, you can test the combinations of states from a spreadsheet (each column is a state): by combining the states you can fill in the expectations directly in the spreadsheet and give it to the test suite to run the scenarios and expectations, consuming the CSV version of your spreadsheets.

If you are interested in more details, just ping me; I'll be happy to share more about my approach.
sqldba, over 2 years ago
Thoughts and prayers.
Teamteam16, over 2 years ago
SQLite
devdada, over 2 years ago
Test
opportune, over 2 years ago
I used to work on SQL pipelines. Sadly I didn't get to implement some of these suggestions, but we did implement others.

For testing:

Run your query/pipeline against synthetic/manual data whose correctness you can easily verify. This is like a unit test.

Run your query/pipeline on sampled actual data (e.g. 0.1% of the furthest-upstream data you care about). This is like an integration test or a canary. Instead of taking 0.1% of all records, you might want to sample 0.1% of all USERIDs, so that things like aggregate values can be sanity-checked.

Compare the results of the new query to the results from the old query/pipeline. This is like a regression test. You may think this wouldn't help for many changes because the output is expected to change, but you could run it only on e.g. a subset of columns.

Take the output of the new query (or the sampled query, or the manual query) and feed it to whatever is downstream. This is like a conformance test.

For reliability:

If the cost is not prohibitive, consider persisting temporary query results (e.g. between stages of your pipeline) for 1-2 weeks. This way, if you catch a bug from a recent change, you only need to rerun the part of your pipeline after the breakage. This may not make sense if your pipeline is not big.

If the cost is not prohibitive, you could also run both the new and old versions of the pipeline for about a week so that you can quickly "roll back". Of course, whether this is viable depends on what you're doing.

The big failure modes with SQL pipelines IME are:

1. Unexpected edge cases and bad data causing queries to fail (e.g. you manually test the new query and it works fine, but in production it fails when handling Unicode).

2. Not having a plan for what to do when a bug gets caught after the fact.

3. Barely ever noticing bugs or lost data because nobody is validating the output (for example, if you have a pipeline that aggregates a user's records over a day, any USERID that's in the input data for that day should also be in the output data for that day).

4. This can be very hard to solve depending on your circumstances, but upstream changes in data are the most annoying and intractable. The best case here is that you either spec out the input data closely OR have some kind of testing in place that the upstream folks run before shipping changes.

To address these, you need to take the approach of expecting things to fail, rather than hoping they don't. This is common practice in many SWE shops these days, but the culture in the data world hasn't quite caught up. I think part of the problem is that automating this testing usually requires at least some scripting/programming, which is outside the comfort zone for many people who "just write SQL."
2v35gggg, over 2 years ago
by running it in prod duh
sbricks, over 2 years ago
you don't?...
chadlavi, over 2 years ago
that's the neat thing, you don't
wokwokwok, over 2 years ago
Two days ago, everyone: "You should use SQL because everyone knows it and modern SQL is pretty good. Other languages are too complicated. :("

Everyone today: "*tries using sql* Oh wow, the tooling is quite basic, and you can't express complex data structures and imperative code. :("

What did you expect?

Look, I spent 4 years in this rabbit hole, and here's my advice:

Don't try to put the square peg in the round hole.

You want easy-to-write, simple code and pipelines? Just use SQL.

Have a dev environment and run everything against that to verify it.

Don't bother with unit testing your CTEs; it's hard to do, and there are no good tools to do it.

If you want Strong Engineering TM, use Python and Spark and all the Python libraries that exist to do all that stuff.

It won't be as quick to write or to make changes to, but it will be easier to write more verifiably robust code.

If you treat either as something it is not (e.g. writing complex data structures and frameworks *in SQL*), you're using the wrong tool for the outcome you're trying to achieve.

It'll take longer and "feel bad" - not because the tool is bad, but because you're using it in a bad way.