One of the ways containers have dramatically changed application development is the ability to run production-like services on the developer machine. For years, developers were stuck with a proverbial foot on either side of the testing world. They could write unit tests and run them quickly and easily on their local box, but integration testing forced a choice: either connect to a running service in a shared development environment, or wait until the code was pushed and the CI/CD process provided feedback before that Jira ticket could be marked complete.
With the introduction of containers on the developer desktop, the world changed. A developer can now quickly and easily spin up any dependent service in a container. Devs are running not only their application code locally, but full versions of databases and even clustered services such as Kafka, with ease.
docker-compose
One of the tools for quickly spinning services up and down is docker-compose. Compose is a tool for defining and running multi-container Docker applications. You use a YAML file to configure your application’s services; then, with a single command, you create and start all the services from your configuration. Here’s an example for a PostgreSQL database:
version: '3.1'
services:
  db:
    container_name: db
    image: postgres
    restart: always
    ports:
      - "5432:5432"
    environment:
      POSTGRES_DB: my_database
      POSTGRES_USER: local
      POSTGRES_PASSWORD: password
The developer can simply run the following command and they have a running instance:
docker-compose -f thefile.yaml up
The next level up is integrating this directly into the integration test phase of the build. There are Maven plugins that can do exactly that; the fabric8 docker-maven-plugin, for example, can start and stop containers around the tests:
<plugin>
  <groupId>io.fabric8</groupId>
  <artifactId>docker-maven-plugin</artifactId>
  <version>0.20.1</version>
  <executions>
    <execution>
      <id>prepare-it-database</id>
      <goals>
        <goal>start</goal>
      </goals>
      <configuration>
        <images>
          <image>
            <name>postgres:9.5.4</name>
            <alias>it-database</alias>
            <run>
              <ports>
                <port>it-database.port:5432</port>
              </ports>
              <wait>
                <log>(?s)database system is ready to accept connections.*database system is ready to accept connections</log>
                <time>10000</time>
              </wait>
            </run>
          </image>
        </images>
      </configuration>
    </execution>
    <execution>
      <id>remove-it-database</id>
      <goals>
        <goal>stop</goal>
      </goals>
    </execution>
  </executions>
</plugin>
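The plugin writes the dynamically assigned host port into the it-database.port Maven property. To use it from a test, that property has to reach the test JVM; one common approach (an assumption here, not shown in the plugin configuration above) is forwarding it through the maven-failsafe-plugin’s systemPropertyVariables. A minimal sketch of an integration test that reads the property and checks connectivity, with hypothetical class name and credentials:

// A minimal connectivity check, not part of the plugin itself: it assumes the
// ${it-database.port} Maven property is forwarded to the test JVM as the system
// property "it-database.port" (e.g. via maven-failsafe-plugin's
// <systemPropertyVariables>) and that the PostgreSQL JDBC driver is on the test
// classpath. The credentials are the old postgres image defaults and may differ.
import org.junit.jupiter.api.Test;

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

import static org.junit.jupiter.api.Assertions.assertTrue;

class ItDatabaseIT {

    @Test
    void canTalkToTheItDatabase() throws Exception {
        String port = System.getProperty("it-database.port", "5432");
        String url = "jdbc:postgresql://localhost:" + port + "/postgres";

        try (Connection conn = DriverManager.getConnection(url, "postgres", "postgres");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT 1")) {
            assertTrue(rs.next());
        }
    }
}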
Testcontainers has entered the chat…
Testcontainers is a Java library that supports JUnit tests, providing lightweight, throwaway instances of common databases, Selenium web browsers, or anything else that can run in a Docker container. After adding the dependencies to your classpath, you have total control over the spin-up and spin-down of services. Here’s a JUnit 5 example:
@Container
public GenericContainer<?> redis = new GenericContainer<>(DockerImageName.parse("redis:5.0.3-alpine"))
        .withExposedPorts(6379);
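For context, that @Container field lives inside a test class annotated with @Testcontainers (from the org.testcontainers:junit-jupiter module); the extension starts the container before the tests run and throws it away afterward. A minimal sketch, with an illustrative class and test name:

import org.junit.jupiter.api.Test;
import org.testcontainers.containers.GenericContainer;
import org.testcontainers.junit.jupiter.Container;
import org.testcontainers.junit.jupiter.Testcontainers;
import org.testcontainers.utility.DockerImageName;

import static org.junit.jupiter.api.Assertions.assertTrue;

// @Testcontainers tells the JUnit 5 extension to start @Container fields before
// the tests and tear the containers down afterward.
@Testcontainers
class RedisContainerTest {

    @Container
    public GenericContainer<?> redis = new GenericContainer<>(DockerImageName.parse("redis:5.0.3-alpine"))
            .withExposedPorts(6379);

    @Test
    void containerIsRunningOnAMappedPort() {
        // The host port is assigned dynamically; ask the container which one it got.
        Integer mappedPort = redis.getMappedPort(6379);

        assertTrue(redis.isRunning());
        assertTrue(mappedPort > 0);
    }
}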
This is great, but the coolest feature of Testcontainers is the ability to use the JDBC URL itself to spin up a test database. Here’s an example using Quarkus application configuration:
'%test':
  quarkus:
    log:
      level: DEBUG
    datasource:
      url: jdbc:tc:postgresql:latest:///market_data
      driver: org.testcontainers.jdbc.ContainerDatabaseDriver
    hibernate-orm:
      dialect: org.hibernate.dialect.PostgreSQL10Dialect
    flyway:
      migrate-at-start: true
      locations: db/migration,db/testdata
The datasource URL selects the Testcontainers JDBC driver, and you can even specify the container image tag directly in the URL. Combine that with Flyway migration patterns, and you have plenty of power to get your integration tests done before pushing anything.
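To put that configuration to work, a @QuarkusTest can inject the datasource and assert against whatever schema and seed data the Flyway scripts create. A minimal sketch, assuming an older javax-based Quarkus version to match the configuration above; the quotes table is hypothetical, so substitute something your db/migration and db/testdata scripts actually create:

import io.quarkus.test.junit.QuarkusTest;
import org.junit.jupiter.api.Test;

import javax.inject.Inject;
import javax.sql.DataSource;

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;

import static org.junit.jupiter.api.Assertions.assertTrue;

// Runs against the throwaway Testcontainers PostgreSQL instance defined by the
// jdbc:tc: URL in the '%test' profile; Flyway has already migrated it at startup.
@QuarkusTest
class MarketDataFlywayTest {

    @Inject
    DataSource dataSource;

    @Test
    void testDataWasLoadedByFlyway() throws Exception {
        // "quotes" is a hypothetical table name; adjust to your own migrations.
        try (Connection conn = dataSource.getConnection();
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT count(*) FROM quotes")) {
            assertTrue(rs.next());
            assertTrue(rs.getLong(1) > 0);
        }
    }
}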
Get started:
https://www.testcontainers.org/quickstart/junit_5_quickstart/