dbt’s missing software engineering piece: unit tests

Cor Zuurmond
27 May, 2022

We talk about dbt quite a lot. We have talked about how we use dbt for data transformations, for data testing, and for knowledge sharing with data docs. However, we have not yet talked about what is missing in dbt! And an important software engineering piece is missing: unit tests.

Unit tests

When we write data transformation pipelines without dbt, we write unit tests to verify that our code works as expected. As most programmers know, unit testing is an important software engineering best practice. A quick refresher for y’all:

Unit tests are (small) test cases that typically consist of four steps: (i) define the expected output, (ii, optionally) define the input, (iii) run the to-be-tested code and (iv) verify that the output matches the expected output. If the output does not match the expected output, your test failed! If it matches, your test passed!
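As a concrete (non-dbt) illustration, here are the four steps in plain Python. The to_cents helper is a made-up example for this post, not part of any library:

```python
def to_cents(dollars: float) -> int:
    """Convert a dollar amount to whole cents."""
    return round(dollars * 100)


def test_to_cents() -> None:
    expected = 1000              # (i) define the expected output
    dollars = 10.0               # (ii) define the input
    actual = to_cents(dollars)   # (iii) run the to-be-tested code
    assert actual == expected    # (iv) verify output matches expectation
```

Running this file with pytest executes test_to_cents and reports whether it passed or failed.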

Sometimes it feels cumbersome to write unit tests; however, most programmers acknowledge that tests save time in the long run. Tests become increasingly important as your code base grows: the bigger the project, the harder it is to know the impact of a change. Unit tests verify that your code does what you expect it to do.

Moreover, unit testing has additional benefits: unit tests are a form of documentation, and they allow you to safely refactor code.

Production without unit tests?!

Wait what?! Am I saying we run dbt in production without unit tests?! Uhm … yes, we are … but we do test our data!

dbt makes data testing easy. And data tests are the integration tests for your data application: they are one level higher on the test pyramid, a more complete, final check than unit tests. A unit test suite might succeed while you still have data issues.

But unit tests speed up the development process. Good unit tests are isolated, which makes them predictable and makes it easier to test and review your code changes than data tests do.

From experience I know that reviewing PRs with complicated SQL + Jinja statements is hard. I have approved PRs in a dbt project that created issues in production which could have been caught with unit tests.

Luckily, we do not have to choose between unit and data tests: we can have both!

Unit test dbt with pytest

I made a pytest plugin for unit testing a dbt project. The user documentation is on Read the Docs. You install the package from PyPI using pip:

pip install pytest-dbt-core

The project adds a fixture, macro_generator, which fetches a macro from your project using dbt’s internal classes. You then define a test for your macro: (i) define the expected output, (ii, optionally) define the input, (iii) run the macro with the optional input and (iv) compare the output with the expected output. An example:

import pytest
from dbt.clients.jinja import MacroGenerator
from pyspark.sql import SparkSession

@pytest.mark.parametrize(
    "macro_generator", ["macro.spark_utils.get_tables"], indirect=True
)
def test_create_table(
    spark_session: SparkSession, macro_generator: MacroGenerator
) -> None:
    expected_table = "default.example"
    spark_session.sql(f"CREATE TABLE {expected_table} (id int) USING parquet")
    tables = macro_generator()
    assert tables == [expected_table]

The macro can be a part of a query too, for example:

import pytest
from dbt.clients.jinja import MacroGenerator
from pyspark.sql import SparkSession

@pytest.mark.parametrize(
    "macro_generator",
    ["macro.my_project.to_cents"],
    indirect=True,
)
def test_dollar_to_cents(
    spark_session: SparkSession, macro_generator: MacroGenerator
) -> None:
    expected = spark_session.createDataFrame([{"cents": 1000}])
    to_cents = macro_generator("price")
    out = spark_session.sql(
        "with data AS (SELECT 10 AS price) "
        f"SELECT cast({to_cents} AS bigint) AS cents FROM data"
    )
    assert out.collect() == expected.collect()

The examples show how to write tests when using dbt with Spark. The plugin is written to work with any dbt adapter. You do need an engine to run the SQL against, like a database or warehouse session.

Limitations

There is a fundamental limitation to this set-up: Python. It goes against the SQL-first approach of dbt. And we like dbt because it is SQL first: a language that targets a broad audience. So why introduce Python for a unit testing framework?

First, if there is a good alternative for writing unit tests using a SQL framework, I would like to know! There are some alternatives out there already (see the next section) that allow you to write tests by (i) defining the expected results, (ii) defining the input data, (iii) running a model and (iv) comparing the output with the expected result. However, I felt a SQL-first testing framework to be limiting; I want the full flexibility that pytest gives you. A couple of examples:

The first example in the section above is not a macro that a typical dbt user writes; more advanced users or adapter maintainers could write a macro that fetches all tables from a warehouse. Such a query does not fit a SQL testing framework that is designed to test data transformations only.

The second example in the section above tests a subquery, which also does not fit such a SQL testing framework.

And, when your dbt project grows, you want to keep your test base under control with pytest features like fixtures, parametrization and marks.
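For example, pytest’s parametrization lets one test function cover many cases, so adding a case costs one line instead of a whole new test. The dollars_to_cents helper below is hypothetical, standing in for a compiled macro:

```python
import pytest


def dollars_to_cents(dollars: float) -> int:
    """A hypothetical helper, standing in for a compiled dbt macro."""
    return round(dollars * 100)


# One test function covers several cases; pytest runs it once per tuple.
@pytest.mark.parametrize(
    "dollars, expected",
    [(10.0, 1000), (0.5, 50), (0.0, 0)],
)
def test_dollars_to_cents(dollars: float, expected: int) -> None:
    assert dollars_to_cents(dollars) == expected
```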

Alternatives

There is a GitHub discussion about unit testing SQL in dbt. See the discussion for a full overview; a quick summary:

dbt seeds could be used to add input and expected output data for testing. However, people felt that seeds give a poor developer experience because they are slow; SQL mocks are preferred.

A framework as mentioned above made sense to various developers, for example by defining an ..._input.sql and an ..._expected.sql file next to the model you want to test. Your testing framework then (i) runs the expected file, (ii) runs the input file, (iii) runs your model and (iv) finally compares the expected with the actual output. This approach has been taken a step further by Equal Experts’ dbt package, which lets you write unit tests with the macros mock_ref, mock_source, expect and test.
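The flow of such a framework can be sketched in a few lines. This is a toy illustration using an in-memory SQLite database as the engine, with made-up SQL standing in for the input, model and expected files:

```python
import sqlite3


def run_model_test(conn, input_sql, model_sql, expected_sql):
    """Run the four-step flow: load input, compute expected, run model, compare."""
    conn.executescript(input_sql)                     # (ii) load the mocked input
    expected = conn.execute(expected_sql).fetchall()  # (i) the expected output
    actual = conn.execute(model_sql).fetchall()       # (iii) run the model query
    assert actual == expected                         # (iv) compare the results


conn = sqlite3.connect(":memory:")
run_model_test(
    conn,
    input_sql="CREATE TABLE orders (price REAL); INSERT INTO orders VALUES (10.0);",
    model_sql="SELECT CAST(price * 100 AS INTEGER) AS cents FROM orders",
    expected_sql="SELECT 1000 AS cents",
)
```

A real framework would read the SQL from the neighbouring files and run it against your warehouse instead of SQLite, but the comparison logic is the same.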

Recently, dbt released a new way of testing adapters as part of version 1.1. While the title says "testing a new adapter", the testing framework can also be used for defining your own tests. I very much like that dbt is shipping unit testing capabilities with its core! That is a major benefit of using this framework.

The approach of pytest-dbt-core served as inspiration for the initial implementation of using pytest built-ins for testing dbt adapters. Prior to this new framework, dbt adapters were tested with a pytest extension, dbtspec, which runs tests from the command line – it does not integrate with your code.

The dbt core testing framework

I prefer the pytest-dbt-core testing framework because I find the indirectness of dbt’s testing framework confusing and thus hard to use. This is an opinionated style difference about how you should write tests with pytest. To motivate my point of view:

In dbt core, tests are defined in classes instead of functions. This style originates from the unittest library, which has been superseded by pytest. Within pytest, classes allow you to group tests, giving you benefits like test organization, sharing fixtures and applying marks to a class instead of to each test. I do not like to use test classes because they introduce a level of indirectness through (multiple) inheritance and implicitly run setup/teardown methods.

The dbt core testing framework does not integrate smoothly with your project. For example, you need to (re)define your profile or configuration instead of reusing the one from your project.

The dbt core plugin does not integrate smoothly with pytest either: you first have to create a global pytest_plugins variable that points to dbt.tests.fixtures.project before running your tests.

The framework relies on run_dbt, which is a high-level entrypoint to dbt that makes it hard to track what exactly you are running – and thus what you are testing.

Note that I have contributed to dbt’s test suite, so I know from experience that their test suite is complex. I advise(d) on how to improve their test suite, both on high-level choices like using test classes instead of functions and on smaller issues like a decorator with side effects. It is not straightforward to maintain and change a big open-source project like dbt. Moreover, testing is important but usually not the top priority. pytest-dbt-core has the benefit of having learned from dbt’s test suite and of starting from scratch.

Conclusion

It is too early to say whether pytest-dbt-core will be adopted by the community as the unit testing framework for dbt. As motivated above, I think there are benefits to this framework compared to the alternatives. However, there are some clear disadvantages too, like not being SQL first and not being part of dbt core.

I expect unit testing to be a hot topic within the dbt community this year. I am keeping a close eye on this topic. Let’s see if we converge to one solution!

Subscribe to our newsletter

Stay up to date on the latest insights and best-practices by registering for the GoDataDriven newsletter.