Alex Klaus
Pain & Gain of automated tests against SQL (MS SQL, PostgreSQL)
25 March 2022


There is a .NET project. Running hundreds of SQL commands and queries. Primarily via an ORM. But occasionally doing the heavy lifting in carefully tuned, high-performance SQL queries.

How to orchestrate automated tests against a live SQL engine for this project?

Consider that the tests focus on asserting a specific behaviour (e.g. applying BDD), where SQL queries/commands are an essential part, so isolating them away might hurt the reliability of the project. And also, using in-memory DB providers would mask potential issues and is discouraged by Microsoft.

The Gain

The benefits are clear – fast and fully automated quality assurance of the code, the database schema and DB operations in all environments:

  • locally on dev machines (that run Windows, macOS or Linux);
  • build pipelines in the cloud (e.g. Azure DevOps pipelines, GitHub Actions, etc.).

The Pain

Even a simple test project has to go through these steps:

  1. Launch SQL engine 😞
  2. Create a test database 🤕
  3. Populate the database (schema, logic, lookup dictionaries, etc.) 😢
  4. Connect xUnit/NUnit tests to the database, and for each test (rinse and repeat):
    • Seed test data
    • Perform the testable activity and checks
    • Revert the database to the pristine state 😓
  5. Tear down the SQL engine along with the database and other artefacts 😢

For instant pain relief, jump straight to DbSample on GitHub – an example of an EF Core-based project with fully automated tests against SQL Server running in GitHub Actions.

This post dives deeper and analyses all the potential paths for orchestrating such tests. Let’s get started.

1. Launch SQL in Docker

When it comes to simple spin-off and tear-down, there’s no better option than Docker.

1.1. MS SQL Server

SQL Server 2017 and later run on Linux with the exact same engine as on Windows and other supported platforms. So, getting a Docker image of the Express Edition shouldn’t be an issue:

docker run --name sql-server -e "ACCEPT_EULA=Y" -e "SA_PASSWORD=Secret_Passw0rd" -e "MSSQL_PID=Express" -p 1433:1433 -d mcr.microsoft.com/mssql/server 

Note: SQL Server will refuse to set a simple password that violates the Password Policy, so be creative 😋 but try to avoid $ and other symbols reserved by bash or whatever shell you use.

Foreseeing that some readers may ask about using LocalDB – it would be no help here, as LocalDB is just a packaging mechanism for SQL Server Express Edition that’s available only on Windows.

1.2. PostgreSQL

For PostgreSQL (Docker image) it would be

docker run --name sql-server -e "POSTGRES_PASSWORD=Secret_Passw0rd" -p 5432:5432 -d postgres

And hey, it doesn’t include any password complexity enforcement by default.

1.3. Caveat for ARM processors

As MS SQL Server doesn’t support ARM (issue 668), devs on ARM-based systems (e.g. Apple’s M-series) have to resort to workarounds.

The situation is better for PostgreSQL, which has worked on ARM chips since they came out. Though until recently you could still hit a snag (e.g. see this issue).

Note that Docker itself might misbehave on ARM (e.g. this issue), but that’s becoming less likely over time.

1.4. Why not run everything in Docker?

Bundling up a test project together with the SQL engine by using Docker Compose is a good way to prepare a package on a dev machine and run it anywhere. As it’s hard to avoid small scripts (in Bash, PowerShell, etc.), a Docker Compose file could be a handy workaround for executing bespoke logic on all platforms.

One big downside – debugging tests locally from the IDE would get very cumbersome. So here we continue with a standard SQL docker image and a bundle of commands to run (in my example it’s in Bash).

BTW, if you really need to go down that path, check out the ”Testing With Docker-Compose and SQL” article that gives a neat example of a docker-compose.yml for test bundles.

1.5. Waiting for SQL engine to start

Mind the pitfall: the docker run command reports that the container is up while the SQL engine is still loading… A several-second delay might go unnoticed when running locally, but it becomes an issue for scripted pipelines.

Simply adding a sleep 20 (docs) command after the launch would solve the problem. But there’s a more sophisticated solution – waiting on the availability of the TCP port with the wait-for-it Bash script. It can be perceived as over-engineered, but check it out:

# Path to a cached copy of the script from https://github.com/vishnubob/wait-for-it
pathToWaitForIt="./devops/wait-for-it.sh"
# Copy the script into the container
docker cp "$pathToWaitForIt" sql-server:/bin/wait-for-it.sh
# Wait for the TCP port to become available or fail on a 30s timeout.
# Note that we add '--user root' for the `chmod` command that's required for launching `wait-for-it.sh`
docker exec -i --user root sql-server sh -c "chmod +x /bin/wait-for-it.sh && /bin/wait-for-it.sh localhost:1433 -t 30"
# Just in case, wait an extra second: even after the port is available, the server may still need a moment
sleep 1
# Can continue...

Uh, it’s a fair bit to maintain for saving a couple of seconds in the pipeline, so use sleep 20 if hesitant.
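
If you’d rather keep the waiting logic in .NET, polling the connection from the test project does the same trick. Below is a minimal sketch, assuming the Microsoft.Data.SqlClient package (the helper name is made up):

using Microsoft.Data.SqlClient;

// Polls the server until it accepts connections or the timeout expires
static async Task WaitForSqlServerAsync(string connectionString, TimeSpan timeout)
{
	var deadline = DateTime.UtcNow + timeout;
	while (true)
	{
		try
		{
			await using var connection = new SqlConnection(connectionString);
			await connection.OpenAsync();
			return; // the engine is up and accepting connections
		}
		catch (SqlException) when (DateTime.UtcNow < deadline)
		{
			await Task.Delay(TimeSpan.FromSeconds(1)); // not up yet, retry in a second
		}
	}
}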

Note: That fancy waiting is not required for PostgreSQL, as creating a new DB and populating the schema can be done by passing arguments on the launch of the container. A bit more on that later.

2. Create test database

Creating a test database in Docker is trivial.

Example for SQL Server:

docker exec -i sql-server /opt/mssql-tools/bin/sqlcmd -S localhost -U sa -P Secret_Passw0rd -d master -Q "CREATE DATABASE TestDb"

and for PostgreSQL:

docker exec -i sql-server psql -U postgres -c "CREATE DATABASE TestDb"

In the case of PostgreSQL, the command for creating a new database can be combined with launching the container:

docker run --name sql-server -e "POSTGRES_PASSWORD=Secret_Passw0rd" -e "POSTGRES_DB=TestDb" -p 5432:5432 -d postgres

3. Populate database schema

3.1. Create once or create & drop for each test?

As an option for a simple clean-up, we could drop the database and re-create it for each test. However, while that potentially opens a path to running tests against multiple DBs in parallel, you’re likely to lose more time on the slow drop/create operations.
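
For reference, such a drop & re-create cycle in EF Core is just two calls (a sketch, where dataContext stands for your EF Core DbContext instance), but each cycle can easily take seconds:

// Drop and re-create the test database: simple, but slow when done for each test
await dataContext.Database.EnsureDeletedAsync();
await dataContext.Database.EnsureCreatedAsync();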

Luckily for .NET devs, there’s a simpler way to clean up the database – Respawn. It makes testing against a single instance of the database very simple.

3.2. Preparing a DB schema script

There are multiple ways of populating the schema (tables, views, etc.). It mainly depends on the ORM and the way of managing migration scripts.

For lean ORMs, like Dapper, you’re more likely to maintain creation & migration scripts manually. For more feature-rich ones, like Entity Framework, standard migration tools will help you out.

EF Core has the dotnet-ef CLI tool for populating/migrating the DB schema directly from the data context:

dotnet ef database update --project DataBaseEntities.csproj --connection your_connection_string

It works for tests, but it’s quite likely that your Build & Test pipeline is separate from your Deployment pipeline, which consumes artefacts created by the former. It would be way more beneficial to produce a full SQL script with all the updates that can be used for both migrating and creating the DB schema.

And dotnet-ef can create such a SQL file, which gets consumed in the tests and later used for updating the target database in deployment:

dotnet ef migrations script -i -o CreateOrMigrateDatabase.sql --project DataBaseEntities.csproj --startup-project StartProject.csproj --context DataContext -v

See the official docs for the supported EF command arguments.

3.3. Populate schema from SQL script

For MS SQL Server:

Once we’ve got the script, we can get the schema populated in two simple steps.

  1. Copy the SQL script to the container:
docker cp CreateOrMigrateDatabase.sql sql-server:/home/script.sql
  2. Execute the script in the container:
docker exec -i sql-server /opt/mssql-tools/bin/sqlcmd -S localhost -U sa -P Secret_Passw0rd -d TestDb -i /home/script.sql

For PostgreSQL:

It’s much easier, as the container provides a magical /docker-entrypoint-initdb.d directory for hosting SQL and .sh files that get executed on launch (see docs) – we just need to mount a single file there (-v).

To our great pleasure, we get away with a single command that launches the server, creates a new database and populates the schema (oh, I like PostgreSQL!):

docker run --name sql-server \
	-e "POSTGRES_PASSWORD=Secret_Passw0rd" \
	-e "POSTGRES_DB=TestDb" \
	-p 5432:5432 \
	-v "$(pwd)/CreateOrMigrateDatabase.sql:/docker-entrypoint-initdb.d/init.sql" \
	-d postgres

4. Seed data and clean up in .NET tests

Now we gotta sort out the .NET tests.

4.1. Connecting to the test database

The connection string can be supplied in the usual way – via a JSON config file, environment variables or user secrets:

var settings = new ConfigurationBuilder()
	.AddJsonFile("testsettings.json", optional: true)
	.AddEnvironmentVariables()
	.AddUserSecrets<TestDbBase>()
	.Build();
var connStr = settings.GetSection("ConnectionString").Value;
...
services.AddDbContext<DataContext>((_, options) => options.UseSqlServer(connStr));

The simplest approach would be passing the ConnectionString as an environment variable in the dotnet test command line.

For Linux-based OS:

ConnectionString="Data Source=localhost;Initial Catalog=SampleDb;User Id=sa;Password=Secret_Passw0rd;Connection Timeout=30;TrustServerCertificate=true" dotnet test

For Windows batch:

Set "ConnectionString=Data Source=localhost;Initial Catalog=SampleDb;User Id=sa;Password=Secret_Passw0rd;Connection Timeout=30;TrustServerCertificate=true" && dotnet test

The above command is noticeably clunky, and there’s an old proposal to improve it (please upvote).

4.2. Context clean-up – deleting test data

I’ve already mentioned Respawn, a handy tool from Jimmy Bogard, which is explained in his blog post ”Respawn vs SQL Server Snapshots” and a short video.

We’ll use Respawn to wipe out test data in the database. Minimalistic code would look like this:

// Reset the database to its pristine state (here '_sqlConnection' is an open DbConnection)
Checkpoint checkPoint = new();
await checkPoint.Reset(_sqlConnection);

Calling the above before or after each test would give us a pristine database. Usually, it’s much faster than fiddling with DB snapshots or recreating the database each time.
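
In a real project, the checkpoint usually needs a bit of configuration – e.g. keeping the EF migrations history table intact or switching the adapter for PostgreSQL. Here’s a sketch against the Respawn 5.x API (double-check the property names for the version you use):

using Respawn;
using Respawn.Graph;

// Wipe all tables except the EF migrations history table
Checkpoint checkPoint = new()
{
	TablesToIgnore = new Table[] { "__EFMigrationsHistory" },
	// For PostgreSQL, switch the adapter and pass an open NpgsqlConnection to Reset()
	// DbAdapter = DbAdapter.Postgres
};
await checkPoint.Reset(_sqlConnection);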

4.3. Automatic clean-up

Of course, the IDisposable.Dispose() method is the usual place for test context clean-up. Though placing asynchronous setup/tear-down logic in there would be an anti-pattern.

For xUnit tests, you can implement IAsyncLifetime and place the clean-up code in the InitializeAsync() method, so you get a pristine database before each test and still have a chance to review its state manually if a test fails.

public async Task InitializeAsync()
{
	await DataContext.Database.EnsureCreatedAsync();
	await _checkPoint.Reset(_sqlConnection);
}

At this stage, there’s no need for xUnit shared context (class or collection fixtures). But sure, it’d speed up the tests.
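
Putting the pieces together, a test class skeleton might look like the sketch below (DataContext comes from the earlier snippets, while User and the test itself are made-up placeholders):

using Microsoft.Data.SqlClient;
using Microsoft.EntityFrameworkCore;
using Respawn;
using Xunit;

public class UserTests : IAsyncLifetime
{
	private static readonly Checkpoint CheckPoint = new();
	private readonly DataContext _dataContext;
	private readonly string _connectionString;

	public UserTests()
	{
		// The connection string is passed as an environment variable (see section 4.1)
		_connectionString = Environment.GetEnvironmentVariable("ConnectionString")!;
		var options = new DbContextOptionsBuilder<DataContext>()
			.UseSqlServer(_connectionString)
			.Options;
		_dataContext = new DataContext(options);
	}

	// Runs before each test: ensure the schema exists and wipe data left by the previous test
	public async Task InitializeAsync()
	{
		await _dataContext.Database.EnsureCreatedAsync();
		await using var connection = new SqlConnection(_connectionString);
		await connection.OpenAsync();
		await CheckPoint.Reset(connection);
	}

	// Runs after each test; the data stays in the DB for a post-mortem if the test failed
	public async Task DisposeAsync() => await _dataContext.DisposeAsync();

	[Fact]
	public async Task Saved_User_Can_Be_Queried()
	{
		// Seed test data
		_dataContext.Users.Add(new User { Name = "Homer" });
		await _dataContext.SaveChangesAsync();

		// Perform the testable activity and checks
		Assert.Equal(1, await _dataContext.Users.CountAsync());
	}
}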

5. Are we there yet?

Since we run the tests against a temporary database, there’s no need to delete the DB files at the end – leave that to Docker.

Though, we do need to apply one constraint – no parallel test execution, as the tests run against the one database. So put an xunit.runner.json file with { "parallelizeTestCollections": false } into the test project and make sure it gets copied to the output folder (see more options in the docs).

That’s all for today 🎉.

For a summary of all the above, check out DbSample on GitHub – a sample EF Core-based project with fully automated tests against MS SQL Server in a GitHub Actions pipeline.

Now go and write some tests!… and leave comments below, on Twitter, LinkedIn, or Reddit 😊.