If you are not familiar with reactive, go check out the intro at: https://quarkus.io/guides/getting-started-reactive
Reactive is a very powerful implementation pattern that lets your application do more with less. The concept is simple: stop blocking. We want everything to run on the I/O threads, but because the I/O threads handle multiple concurrent requests, they must never be consumed by a blocking call. In previous implementations, the technology hadn’t made its way all the way down the stack, so you would end up with a blend of reactive and blocking code and a thread handoff between the I/O threads (non-blocking) and worker threads (blocking), which resulted in context switching and larger thread pools. Now the entire stack, all the way down to the database driver, is fully reactive, which means your application can really benefit from the pattern.
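To see the shape of "composition instead of blocking" without pulling in Quarkus or Mutiny, here is a JDK-only sketch using CompletableFuture (the class name and fake data are mine, not from the stack we build below). The idea is the same one Uni gives us: the caller gets a handle immediately, describes what to do with the result, and only the edge of the program ever waits.

```java
import java.util.List;
import java.util.concurrent.CompletableFuture;

public class NonBlockingSketch {

    // Simulates an async data fetch: the caller gets a future immediately,
    // and the calling thread stays free while the result is produced elsewhere.
    static CompletableFuture<List<String>> fetchAccounts() {
        return CompletableFuture.supplyAsync(() -> List.of("alice", "bob"));
    }

    public static void main(String[] args) {
        // Compose a transformation instead of blocking for the value.
        String joined = fetchAccounts()
                .thenApply(names -> String.join(",", names))
                .join(); // only block at the very edge of the program
        System.out.println(joined); // prints "alice,bob"
    }
}
```

With Mutiny the `thenApply` step becomes `map`, and in a Quarkus application even the final `join()` disappears, because the framework subscribes to the returned Uni for you.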
Getting Started
The application’s architectural layers remain very similar to the traditional implementation patterns. The only difference is really related to the return types and how they are handled.
Get started by creating a new project using the Quarkus Maven plugin.
mvn io.quarkus:quarkus-maven-plugin:1.13.0.Final:create \
    -DprojectGroupId=io.orep \
    -DprojectArtifactId=account-api \
    -DclassName="io.orep.account.AccountResource" \
    -Dpath="/api/accounts" \
    -Dextensions="resteasy-reactive"
After initializing the project, I always like to quickly run a ./mvnw clean test just to make sure everything works as expected.
Quick Housekeeping
<dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-config-yaml</artifactId>
</dependency>
Let’s use the YAML config for Quarkus. Rename the application.properties file to application.yaml.
<dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-resteasy-reactive-jackson</artifactId>
</dependency>
Let’s use Jackson for JSON serialization.
<dependency>
    <groupId>org.assertj</groupId>
    <artifactId>assertj-core</artifactId>
    <scope>test</scope>
</dependency>
AssertJ for testing is very powerful. It’s also included as part of the BOM. Fluent assertions for the win.
Starting From the Bottom
We are going to be building a full stack reactive application, all the way down to the database client. To proceed, let’s start there.
<dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-hibernate-reactive-panache</artifactId>
</dependency>
<dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-reactive-pg-client</artifactId>
</dependency>
<dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-hibernate-validator</artifactId>
</dependency>
Two extensions are needed for reactive database interactions – the reactive PostgreSQL client and the reactive Hibernate implementation. From a programming-model perspective, the main difference is the return types of the Panache components, which are Mutiny-based Uni and Multi objects instead of the plain Entity or Collection objects returned by the non-reactive version. Let’s start with the Entity.
@Entity(name = "Account")
@Table(name = "account")
public class AccountEntity {

    @Id
    @GeneratedValue(strategy = GenerationType.IDENTITY)
    @Column(name = "account_id")
    private Integer accountId;

    @Column(name = "name", nullable = false)
    @NotEmpty
    private String name;
}
Quick Note: I am purposefully excluding all boilerplate code from data objects such as entities, domain objects and views. I usually use my IDE to generate these things: getters, setters, equals, hashCode and toString.
You will notice that I am using the repository pattern for Panache. I prefer a separate repository class over the active record pattern. Here’s the repository class.
@ApplicationScoped
public class AccountRepository implements PanacheRepositoryBase<AccountEntity, Integer> {
}
At this point, you should be thinking: wait, this is exactly like the regular Panache extension, is it not? So far, it is, with the exception of the imports, which are sourced from io.quarkus.hibernate.reactive.panache rather than io.quarkus.hibernate.orm.panache. Let’s create the service class and see where we start to diverge slightly. But first, let’s quickly create a domain object to return from the service.
public class Account {

    private Integer accountId;

    @NotEmpty
    private String name;
}
Mapping DTOs
We are going to use MapStruct to help us map DTOs, and we are also going to need some reactive-friendly mapping methods to help us out. Let’s add the needed Maven configuration, including updating our compiler plugin to run the annotation processor.
<mapstruct.version>1.4.2.Final</mapstruct.version>
...
<dependency>
    <groupId>org.mapstruct</groupId>
    <artifactId>mapstruct</artifactId>
    <version>${mapstruct.version}</version>
</dependency>
...
<plugin>
    <artifactId>maven-compiler-plugin</artifactId>
    <version>${compiler-plugin.version}</version>
    <configuration>
        <annotationProcessorPaths>
            <path>
                <groupId>org.mapstruct</groupId>
                <artifactId>mapstruct-processor</artifactId>
                <version>${mapstruct.version}</version>
            </path>
        </annotationProcessorPaths>
    </configuration>
</plugin>
The Service Class
This is where we will first start interacting with the Mutiny reactive model. For our build here, we are just going to be implementing the “get all” use case. The service is going to use the repository to retrieve the data and the mapper to convert it to the domain object, but remember we need to stay reactive-friendly.
The accountRepository.listAll() method returns a Uni<List<AccountEntity>>, but we don’t want to leak our entity class outside of the service class. This is where we would start mapping, but the interfaces are a little different. The map method on the Uni object receives the List, not the Entity: accountRepository.listAll().map(accountEntities -> {}). So we will need to create our mapper with some default methods on the interface to handle the extra step of the List interface. Our new AccountMapper looks like this.
@Mapper(componentModel = "cdi")
public interface AccountMapper {

    Account toDomain(AccountEntity entity);

    AccountEntity toEntity(Account domain);

    default List<Account> toDomainList(List<AccountEntity> list) {
        return list.stream().map(this::toDomain).collect(Collectors.toList());
    }

    default List<AccountEntity> toEntityList(List<Account> list) {
        return list.stream().map(this::toEntity).collect(Collectors.toList());
    }
}
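If the annotation-processor magic feels opaque, the code MapStruct generates for the two abstract methods is roughly plain field copying, which the default methods then reuse per element. Here is a hand-written, JDK-only sketch of that shape (the nested classes are simplified stand-ins for the article’s entity and domain objects, not the generated code itself):

```java
import java.util.List;
import java.util.stream.Collectors;

public class MapperSketch {

    // Simplified stand-ins for AccountEntity and Account from the article.
    static class AccountEntity { Integer accountId; String name; }
    static class Account { Integer accountId; String name; }

    // Roughly what MapStruct generates for toDomain: null check plus field copying.
    static Account toDomain(AccountEntity entity) {
        if (entity == null) {
            return null;
        }
        Account account = new Account();
        account.accountId = entity.accountId;
        account.name = entity.name;
        return account;
    }

    // The default method from the interface then maps element by element.
    static List<Account> toDomainList(List<AccountEntity> list) {
        return list.stream().map(MapperSketch::toDomain).collect(Collectors.toList());
    }
}
```

Nothing reactive happens in the mapper itself; the reactive part is that the service calls it inside Uni.map, on the already-materialized list.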
Now our Service class can properly utilize the mapper to convert between entity and domain objects.
@ApplicationScoped
public class AccountService {

    private final AccountRepository accountRepository;
    private final AccountMapper accountMapper;

    public AccountService(AccountRepository accountRepository, AccountMapper accountMapper) {
        this.accountRepository = accountRepository;
        this.accountMapper = accountMapper;
    }

    public Uni<List<Account>> findAll() {
        return accountRepository.listAll().map(accountMapper::toDomainList);
    }
}
Quick Note: For beans with @ApplicationScoped and other CDI annotations, if there is a non-default constructor, Quarkus will automatically inject the referenced objects. You can add @Inject, but it’s not necessary.
The Resource Class
The last bit is simply wrapping the payload in the correct response object.
@Path("/api/accounts")
@Consumes(MediaType.APPLICATION_JSON)
@Produces(MediaType.APPLICATION_JSON)
public class AccountResource {

    private final AccountService accountService;

    public AccountResource(AccountService accountService) {
        this.accountService = accountService;
    }

    @GET
    public Uni<Response> get() {
        return accountService.findAll().map(accounts -> Response.ok(accounts).build());
    }
}
Schema Management with Flyway
For this application, we are going to keep it simple and allow the application and database DDL to live together. We will use Flyway to manage the database versioning, and because it’s going to be managed by the application, releases with database changes will require downtime. If you can’t have downtime, or want some fanciness like canary rollouts, then look into managing your database DDL as a separate project with its own release cycle.
This is where the fun begins, because Flyway doesn’t play nicely with reactive datasources (yet). We will need to run the migrations manually, building the proper JDBC URL from the reactive URL. First, let’s add the extensions for Quarkus.
<dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-flyway</artifactId>
</dependency>
<dependency>
    <groupId>io.quarkus</groupId>
    <artifactId>quarkus-jdbc-postgresql</artifactId>
</dependency>
Because we added these extensions, we now have an issue: there are references to both reactive and non-reactive datasources, and the Quarkus runtime doesn’t like it when you have both. Because of this, we will need to add our first couple of configurations to application.yaml. The db-kind is needed to load the correct extension for the datasource, and jdbc: false tells Quarkus to skip the traditional JDBC datasource setup.
quarkus:
  datasource:
    db-kind: postgresql
    jdbc: false
We need to add the schema to be managed as well. I like to keep things simple and leave things in default places, so we just need to add the database table creation script in the default folder that Flyway looks in (src/main/resources/db/migration).
CREATE TABLE account (
    account_id SERIAL PRIMARY KEY,
    name TEXT NOT NULL
);

-- Let the sequence assign the id so later generated inserts don't collide
INSERT INTO account (name) VALUES ('Test');
Next, let’s set up the migration service class. Notice we are manually configuring the JDBC connection at runtime. The migration service will run on startup of the application.
@ApplicationScoped
public class FlywayMigrationService {

    @ConfigProperty(name = "quarkus.datasource.reactive.url")
    String datasourceUrl;

    @ConfigProperty(name = "quarkus.datasource.username")
    String datasourceUsername;

    @ConfigProperty(name = "quarkus.datasource.password")
    String datasourcePassword;

    public void runFlywayMigration(@Observes StartupEvent event) {
        Flyway flyway = Flyway.configure()
                .dataSource("jdbc:" + datasourceUrl, datasourceUsername, datasourcePassword)
                .load();
        flyway.migrate();
    }
}
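The URL juggling inside runFlywayMigration is just string prefixing: the reactive client is configured with postgresql://host:port/db, and the JDBC form Flyway needs is the same URL with a jdbc: prefix. Pulling that one line into a tiny pure helper (the class and method name here are my own, not part of the app above) makes the transformation obvious and testable:

```java
public class JdbcUrlHelper {

    // Turns a reactive datasource URL ("postgresql://host:5432/db") into the
    // JDBC form Flyway expects ("jdbc:postgresql://host:5432/db").
    // Idempotent: an already-prefixed URL is returned unchanged.
    static String toJdbcUrl(String reactiveUrl) {
        return reactiveUrl.startsWith("jdbc:") ? reactiveUrl : "jdbc:" + reactiveUrl;
    }
}
```

In the migration service you would then call Flyway.configure().dataSource(JdbcUrlHelper.toJdbcUrl(datasourceUrl), …) instead of concatenating inline.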
Testcontainers
We are almost there. Last, we need to set up how the tests will manage a test database so we can run full integration tests. This is where Testcontainers comes in. To get us going, we will add the necessary components to the Maven pom. Starting with 1.13, the Testcontainers BOM is included in the Quarkus BOM, so there is no need to add it.
<dependencies>
    ...
    <dependency>
        <groupId>org.testcontainers</groupId>
        <artifactId>testcontainers</artifactId>
        <scope>test</scope>
    </dependency>
    <dependency>
        <groupId>org.testcontainers</groupId>
        <artifactId>junit-jupiter</artifactId>
        <scope>test</scope>
    </dependency>
    <dependency>
        <groupId>org.testcontainers</groupId>
        <artifactId>postgresql</artifactId>
        <scope>test</scope>
    </dependency>
    ...
</dependencies>
Once the libraries are in, then we can implement the test resource which will utilize Testcontainers to start and stop a PostgreSQL container for us.
public class DatabaseTestResource implements QuarkusTestResourceLifecycleManager {

    private static final PostgreSQLContainer<?> DATABASE = new PostgreSQLContainer<>("postgres:12");

    @Override
    public Map<String, String> start() {
        DATABASE.start();
        Map<String, String> map = new HashMap<>();
        map.put("quarkus.datasource.reactive.url",
                String.format("postgresql://%s:%d/%s",
                        DATABASE.getHost(),
                        DATABASE.getFirstMappedPort(),
                        DATABASE.getDatabaseName()));
        map.put("quarkus.datasource.username", DATABASE.getUsername());
        map.put("quarkus.datasource.password", DATABASE.getPassword());
        return map;
    }

    @Override
    public void stop() {
        DATABASE.stop();
    }
}
The Test
Finally, let’s augment the test and add the test resource annotation needed to crank up the database prior to running the test. At startup, the test lifecycle does a quick introspection of all the tests in scope, checks which test resources will be needed, and starts them only once for the entire test run.
@QuarkusTest
@QuarkusTestResource(DatabaseTestResource.class)
public class AccountResourceTest {

    @Test
    public void get() {
        given()
            .when().get("/api/accounts")
            .then()
            .statusCode(200);
    }
}
At this point, running ./mvnw clean test should give you a successful response.
Very nice,👍👍👍.
Thanks.