In “How to automate software delivery using Quarkus and GitLab,” you learned how to develop and deploy a simple Quarkus-JVM application to a Kubernetes cluster using GitLab Auto DevOps. Now, you’ll learn how to use Quarkus-native to compile a Java application to machine code and deploy it to a Kubernetes cluster using a CI/CD pipeline. Follow our journey from development to deployment leveraging GitLab Duo as our AI companion, including the specific prompts we used.
What is Quarkus?
Quarkus, also known as the Supersonic Subatomic Java, is an open source, Kubernetes-native Java stack tailored to OpenJDK HotSpot and GraalVM. The Quarkus project recently moved to the Commonhaus Foundation, a nonprofit organization dedicated to the sustainability of open source libraries and frameworks that provides a balanced approach to governance and support.
Prerequisites
This tutorial assumes:
- You have a running Kubernetes cluster, e.g. GKE.
- You have access to the Kubernetes cluster from your local laptop via the kubectl command.
- The cluster is connected to your GitLab project.
- You have Maven (Version 3.9.6 or later) installed on your local laptop.
- You have Visual Studio Code installed on your local laptop.
If you’d like to set up a Kubernetes cluster connected to your GitLab project, you can follow the instructions in this tutorial, up to but not including the “Creating an instance of MySQL database in your cluster via Flux” section (you do not need a database for this tutorial).
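A quick sanity check that your laptop can reach the cluster before going any further is any read-only kubectl command, for example:
kubectl config current-context
kubectl get nodes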
You will also need to install an nginx ingress in your Kubernetes cluster. Here are two ways to do this:
- You can follow the instructions in “Creating and importing projects”, up to the creation of the variable KUBE_INGRESS_BASE_DOMAIN.
- Or, just create an ingress in your Kubernetes cluster by following the instructions in our Auto DevOps with GKE documentation.
NOTE: For this article, we used the first method above to install an ingress and cert-manager in the Kubernetes cluster.
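If you would rather install the NGINX Ingress Controller yourself with Helm instead of following either guide, the standard upstream commands look roughly like this (cluster-admin access assumed):
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install ingress-nginx ingress-nginx/ingress-nginx --namespace ingress-nginx --create-namespace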
Creating necessary project files using GitLab Duo Chat
We started our endeavor from VS Code and an empty project called quarkus-native, which we had previously created in GitLab and had already cloned to our local laptop.
- We opened GitLab Duo Chat within VS Code and entered the following prompt:
Create a “Hello World” Quarkus application that can be natively compiled
Chat replied with the prerequisites and the steps to create a simple “Hello World” Quarkus application. At this point, we were after the generated body of the class HelloResource.java:
package org.acme;

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;

@Path("/hello")
public class HelloResource {

    @GET
    @Produces(MediaType.TEXT_PLAIN)
    public String hello() {
        return "Hello World";
    }
}
- We knew that for Quarkus we needed a special Dockerfile, named Dockerfile.native, as part of the creation of the image for the application executable. We then entered the following prompt in Chat:
Create the Dockerfile.native file for the sample application above
Chat replied with a very thorough answer about this file, its contents, and the containerization process. At this point, we were just after the generated body of the file Dockerfile.native:
####
# This Dockerfile is used in order to build a container that runs the Quarkus application in native (no JVM) mode
####
FROM registry.access.redhat.com/ubi8/ubi-minimal:8.6
WORKDIR /work/
RUN chown 1001 /work \
    && chmod "g+rwX" /work \
    && chown 1001:root /work
COPY --chown=1001:root target/*-runner /work/application
EXPOSE 8080
USER 1001
CMD ["./application", "-Dquarkus.http.host=0.0.0.0"]
- Next, we asked Chat to generate the pom.xml for this application by entering the following prompt:
Create the pom.xml file for the sample application above
Chat provided the content of the pom.xml file, an explanation of its contents, and how to use it. At this point, we were after the generated body of the file pom.xml:
<?xml version="1.0"?>
<project xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd" xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <modelVersion>4.0.0</modelVersion>
  <groupId>org.acme</groupId>
  <artifactId>hello-world-quarkus</artifactId>
  <version>1.0.0-SNAPSHOT</version>
  <properties>
    <compiler-plugin.version>3.10.1</compiler-plugin.version>
    <maven.compiler.release>11</maven.compiler.release>
    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
    <project.reporting.outputEncoding>UTF-8</project.reporting.outputEncoding>
    <quarkus.platform.artifact-id>quarkus-bom</quarkus.platform.artifact-id>
    <quarkus.platform.group-id>io.quarkus.platform</quarkus.platform.group-id>
    <quarkus.platform.version>2.16.5.Final</quarkus.platform.version>
    <skipITs>true</skipITs>
    <surefire-plugin.version>3.0.0-M7</surefire-plugin.version>
  </properties>
  <dependencyManagement>
    <dependencies>
      <dependency>
        <groupId>${quarkus.platform.group-id}</groupId>
        <artifactId>${quarkus.platform.artifact-id}</artifactId>
        <version>${quarkus.platform.version}</version>
        <type>pom</type>
        <scope>import</scope>
      </dependency>
    </dependencies>
  </dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>io.quarkus</groupId>
      <artifactId>quarkus-arc</artifactId>
    </dependency>
    <dependency>
      <groupId>io.quarkus</groupId>
      <artifactId>quarkus-resteasy</artifactId>
    </dependency>
    <dependency>
      <groupId>io.quarkus</groupId>
      <artifactId>quarkus-junit5</artifactId>
      <scope>test</scope>
    </dependency>
    <dependency>
      <groupId>io.rest-assured</groupId>
      <artifactId>rest-assured</artifactId>
      <scope>test</scope>
    </dependency>
  </dependencies>
  <build>
    <plugins>
      <plugin>
        <groupId>${quarkus.platform.group-id}</groupId>
        <artifactId>quarkus-maven-plugin</artifactId>
        <version>${quarkus.platform.version}</version>
        <extensions>true</extensions>
        <executions>
          <execution>
            <goals>
              <goal>build</goal>
              <goal>generate-code</goal>
              <goal>generate-code-tests</goal>
            </goals>
          </execution>
        </executions>
      </plugin>
      <plugin>
        <artifactId>maven-compiler-plugin</artifactId>
        <version>${compiler-plugin.version}</version>
        <configuration>
          <compilerArgs>
            <arg>-parameters</arg>
          </compilerArgs>
        </configuration>
      </plugin>
      <plugin>
        <artifactId>maven-surefire-plugin</artifactId>
        <version>${surefire-plugin.version}</version>
        <configuration>
          <systemPropertyVariables>
            <java.util.logging.manager>org.jboss.logmanager.LogManager</java.util.logging.manager>
            <maven.home>${maven.home}</maven.home>
          </systemPropertyVariables>
        </configuration>
      </plugin>
      <plugin>
        <artifactId>maven-failsafe-plugin</artifactId>
        <version>${surefire-plugin.version}</version>
        <executions>
          <execution>
            <goals>
              <goal>integration-test</goal>
              <goal>verify</goal>
            </goals>
            <configuration>
              <systemPropertyVariables>
                <native.image.path>${project.build.directory}/${project.build.finalName}-runner</native.image.path>
                <java.util.logging.manager>org.jboss.logmanager.LogManager</java.util.logging.manager>
                <maven.home>${maven.home}</maven.home>
              </systemPropertyVariables>
            </configuration>
          </execution>
        </executions>
      </plugin>
    </plugins>
  </build>
  <profiles>
    <profile>
      <id>native</id>
      <activation>
        <property>
          <name>native</name>
        </property>
      </activation>
      <properties>
        <skipITs>false</skipITs>
        <quarkus.package.type>native</quarkus.package.type>
      </properties>
    </profile>
  </profiles>
</project>
- We then asked Chat to generate the Quarkus test file for this application by entering the following prompt:
Create the content for the Quarkus test for the application above
Chat returned the body for the Quarkus test, how to use it, and the body of the Java class for the test needed for its native compilation. At this point, we were after the generated bodies of the files HelloResourceTest.java and HelloResourceIT.java.
Here is the generated body for HelloResourceTest.java:
package org.acme;

import io.quarkus.test.junit.QuarkusTest;
import org.junit.jupiter.api.Test;

import static io.restassured.RestAssured.given;
import static org.hamcrest.CoreMatchers.is;

@QuarkusTest
public class HelloResourceTest {

    @Test
    public void testHelloEndpoint() {
        given()
            .when().get("/hello")
            .then()
            .statusCode(200)
            .body(is("Hello World"));
    }
}
Here is the generated body for HelloResourceIT.java:
package org.acme;

import io.quarkus.test.junit.QuarkusIntegrationTest;

@QuarkusIntegrationTest
public class HelloResourceIT extends HelloResourceTest {
    // Execute the same tests but in native mode.
}
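With the failsafe configuration and native profile from the pom.xml above, the two test classes are exercised by different Maven goals. As a quick reference (a sketch based on that pom.xml; the native run additionally requires GraalVM, which we set up later in this post):
mvn test              # runs HelloResourceTest against the JVM build
mvn verify -Pnative   # builds the native executable and runs HelloResourceIT against it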
- We needed to know how to organize these files in the GitLab project, so we asked about the directory structure for all these files by entering the following prompt in Chat:
Give me the entire directory structure for this project including the location of each file, e.g. pom.xml, Dockerfile.native, application.properties, HelloResource.java, HelloResourceTest.java, and the location of the target directory
Chat replied with a detailed diagram of the entire directory structure for the project, showing where all these files should be located, as well as a description of the purpose of each of them. It even mentioned that the directory target/ and its contents should not be version controlled since it is generated by the build process. Another interesting aspect of the reply was the existence of a file called resources/application.properties in the directory structure.
With all this information in our hands, we were ready to start creating these files in our GitLab project.
Populating our project with the generated content for each file
We created each of the following files in its corresponding location, with the generated content provided by Chat:
src/main/java/org/acme/HelloResource.java
resources/application.properties
src/test/java/org/acme/HelloResourceTest.java
src/test/java/org/acme/HelloResourceIT.java
pom.xml
Dockerfile.native
NOTE: We considered using GitLab Auto Deploy for this endeavor but later realized that it would not be a supported option. We are mentioning this because in the video at the end of this tutorial, you will see that we asked Chat: How to set the service internalPort to 8080 for auto deploy. Then we created a file named .gitlab/auto-deploy-values.yaml with the generated content from Chat. The creation of this file is not necessary for this tutorial.
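For reference, a minimal sketch of what such a values file could contain, assuming the default keys of GitLab's auto-deploy-app Helm chart (again, optional for this tutorial):
# .gitlab/auto-deploy-values.yaml (optional)
service:
  internalPort: 8080
  externalPort: 8080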
Before we started tackling the pipeline to build, containerize, and deploy the application to our Kubernetes cluster, we decided to generate the executable and test the application locally on our Mac.
Testing the application locally
Here is the process we went through to test the application on our local machine.
- To build the application on the local Mac laptop, from a Terminal window, we entered the following command:
mvn clean package -Pnative
The native compilation failed with the error message:
Cannot find the ‘native-image’ in the GRAALVM_HOME, JAVA_HOME and System PATH. Install it using ‘gu install native-image’
- So, we used our trusty GitLab Duo Chat again and asked it the following:
The command “mvn clean package -Pnative” is failing with error “java.lang.RuntimeException: Cannot find the ‘native-image’ in the GRAALVM_HOME, JAVA_HOME and System PATH. Install it using gu install native-image”. I’m using a MacOS Sonoma. How do I fix this error on my Mac?
Chat replied with a detailed set of steps on how to install the necessary software and set the appropriate environment variables.
- We copied and pasted the following commands from the Chat window to a Terminal window:
brew install --cask graalvm/tap/graalvm-ce-java17
export JAVA_HOME=/Library/Java/JavaVirtualMachines/graalvm-ce-java17-22.3.1
export GRAALVM_HOME=${JAVA_HOME}
export PATH=${GRAALVM_HOME}/bin:$PATH
xattr -r -d com.apple.quarantine ${GRAALVM_HOME}/../..
gu install native-image
The commands above installed the community edition of GraalVM Version 22.3.1 that supported Java 17. We noticed, during the brew install, that the version of GraalVM being installed was java17-22.3.1, so we had to update the pasted value for JAVA_HOME from graalvm-ce-java17-22.3.0 to graalvm-ce-java17-22.3.1.
We also had to run the xattr command to get the GraalVM, which we had downloaded and installed on our Mac, out of quarantine so that it could run locally. Lastly, we installed the GraalVM native-image tool.
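To confirm the toolchain was picked up correctly, a couple of quick checks help here, for example:
java -version            # should report GraalVM CE 22.3.1, Java 17
native-image --version   # confirms the native-image component is on the PATH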
- At this point, we again entered the following command from a Terminal window to build the application on the local Mac laptop:
mvn clean package -Pnative
This time the compilation was successful and an executable was generated in the target directory.
- We ran the executable by entering the following commands from a Terminal window:
cd target
./quarkus-native-1.0.0-SNAPSHOT-runner "-Dquarkus.http.host=0.0.0.0"
- With the application running, we opened a browser window, and in the URL field, we entered:
http://localhost:8080/hello
The application returned the string Hello World, which was displayed in the browser window.
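The same check also works from the command line, if you prefer that to a browser:
curl http://localhost:8080/hello
# Hello World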
At this point, we committed and pushed all the changes to our GitLab project and started working on creating a CI/CD pipeline that would build and deploy the application to a Kubernetes cluster running on the cloud.
But before continuing, we remembered to add, commit, and push a .gitignore file to our project that included the path target/, since this was the directory where the executable would be created and we didn’t need to keep it, or its contents, under version control.
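At a minimum, that file just needs the one entry:
# .gitignore
target/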
Creating the pipeline with GitLab Duo Chat
Now that we had successfully tested the application locally on our Mac, we needed to create the CI/CD pipeline that would compile the application, containerize it, and deploy it to our Kubernetes cluster. We wanted to keep the pipeline simple and brief, and to deploy to a single environment. To this end, the pipeline would not tackle multiple environments or feature branches, for example.
- To avoid manually creating a pipeline from scratch, we decided to once again leverage Chat. We entered the following prompt:
Create a .gitlab-ci.yml file with 3 stages: build, containerize, and deploy. Each of these stages should have a single job with the same name. The build job should compile the application natively using the -Pnative maven option and the builder image for mandrel jdk-22 for java17 and store the application executable and its Dockerfile as artifacts. The containerize job should use docker to build and push the image to the built-in container registry. The deploy job should rollout the containerized application, named quarkus-app, to the production environment in the Kubernetes cluster by creating a deployment resource, a service resource and an ingress rule with host ${KUBE_INGRESS_BASE_DOMAIN} and /hello prefix. The deploy job should also include the url for the production environment. When setting the Kubernetes context, use ${KUBE_CONTEXT}, and for application namespace, use ${KUBE_NAMESPACE}
The prompt above provided some points that we thought would be useful and necessary for a good outcome. The requirements listed by the prompt were:
- It specified that the pipeline needed to have three stages: build, containerize, and deploy.
- Each stage would have a single job.
- The build job should natively compile (not build) the application using the -Pnative option.
- The build job should use the open source software mandrel and use the version that would support jdk-22 for Java 17. Remember that these were the versions that we tested on our local machine.
- The build job would also need to pass the resulting executable and the project Dockerfile to the next job, i.e. containerize, in the pipeline.
- The containerize job should build a docker image with the executable using the Dockerfile, both of which were passed as artifacts.
- The containerize job should push the image to the GitLab container registry.
- The deploy job should use the image from the GitLab container registry, name it quarkus-app, and deploy it to the Kubernetes cluster using deployment, service and ingress rule resources.
- The deploy job should use the environment variables ${KUBE_INGRESS_BASE_DOMAIN}, ${KUBE_CONTEXT}, and ${KUBE_NAMESPACE}, which had been defined as group/project variables, accordingly.
Chat responded with the following .gitlab-ci.yml example:
# Define the stages for the pipeline
stages:
  - build
  - containerize
  - deploy

# Variables section to use GitLab CI/CD variables
variables:
  MAVEN_OPTS: "-Dmaven.repo.local=.m2/repository"
  MAVEN_CLI_OPTS: "--batch-mode --errors --fail-at-end --show-version -DinstallAtEnd=true -DdeployAtEnd=true"
  CONTAINER_IMAGE: ${CI_REGISTRY_IMAGE}:${CI_COMMIT_SHA}

# Cache configuration for Maven dependencies
cache:
  paths:
    - .m2/repository

# Build stage: Compile the application natively
build:
  stage: build
  image:
    name: quay.io/quarkus/ubi-quarkus-mandrel-builder-image:22.3-java17
    entrypoint:
      - ""
  script:
    - ./mvnw ${MAVEN_CLI_OPTS} clean package -Pnative
  artifacts:
    paths:
      - target/*-runner
      - src/main/docker/Dockerfile.native-micro
  # This job runs only on the main branch
  only:
    - main

# Containerize stage: Build and push Docker image
containerize:
  stage: containerize
  image: docker:20.10.16
  services:
    - docker:20.10.16-dind
  variables:
    DOCKER_TLS_CERTDIR: "/certs"
  before_script:
    - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
  script:
    - docker build -f src/main/docker/Dockerfile.native-micro -t ${CONTAINER_IMAGE} .
    - docker push ${CONTAINER_IMAGE}
  # This job runs only on the main branch
  only:
    - main

# Deploy stage: Deploy to Kubernetes
deploy:
  stage: deploy
  image:
    name: bitnami/kubectl:latest
    entrypoint: ['']
  script:
    - kubectl config use-context ${KUBE_CONTEXT}
    - kubectl config set-context --current --namespace=${KUBE_NAMESPACE}
    - |
      cat <<EOF | kubectl apply -f -
      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: quarkus-app
      spec:
        replicas: 1
        selector:
          matchLabels:
            app: quarkus-app
        template:
          metadata:
            labels:
              app: quarkus-app
          spec:
            containers:
              - name: quarkus-app
                image: ${CONTAINER_IMAGE}
                ports:
                  - containerPort: 8080
      ---
      apiVersion: v1
      kind: Service
      metadata:
        name: quarkus-app
      spec:
        selector:
          app: quarkus-app
        ports:
          - port: 80
            targetPort: 8080
      ---
      apiVersion: networking.k8s.io/v1
      kind: Ingress
      metadata:
        name: quarkus-app
      spec:
        rules:
          - host: ${KUBE_INGRESS_BASE_DOMAIN}
            http:
              paths:
                - path: /hello
                  pathType: Prefix
                  backend:
                    service:
                      name: quarkus-app
                      port:
                        number: 80
      EOF
    - echo "Application deployed to https://${KUBE_INGRESS_BASE_DOMAIN}/hello"
  # This job runs only on the main branch
  only:
    - main
- There were some things we needed to adjust in the sample .gitlab-ci.yml file above before we could commit it to our main branch. These are the updates we made to the file:
- We deleted all occurrences of only: - main because we wanted to keep our pipeline definition file simple, with no branch-related rules.
- We fixed the file name Dockerfile.native-micro to Dockerfile.native.
- At this point, we wanted to ensure that the deployment would be to the production environment, so we asked Chat the following prompt:
What is the syntax to specify an environment with its url in a pipeline?
The response from Chat included an example of how to do this, so we used this information to add the following environment block to our pipeline:
environment:
  name: production
  url: http://${KUBE_INGRESS_BASE_DOMAIN}/hello
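The environment keyword sits at the job level, so in our pipeline this block belongs in the deploy job, roughly like this (abridged sketch):
deploy:
  stage: deploy
  environment:
    name: production
    url: http://${KUBE_INGRESS_BASE_DOMAIN}/hello
  script:
    - kubectl config use-context ${KUBE_CONTEXT}
    # ... rest of the deploy script shown earlier ...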
- The example provided by Chat included a URL that started with https; we modified that to http since we didn’t really need a secure connection for this simple application.
- Lastly, we noticed that the build job referenced a script, mvnw, that we didn’t have in our project. So, we asked Chat the following:
How can I get the mvnw script for Quarkus?
Chat responded with the command to bootstrap and create this script, which we executed from a Terminal window:
mvn wrapper:wrapper
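The wrapper goal generates mvnw, mvnw.cmd, and a .mvn/wrapper/ directory, which should be committed along with the rest of the project so the build job can invoke ./mvnw:
git add mvnw mvnw.cmd .mvn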
We were now ready to commit all of our changes to the main branch and have the pipeline execute. However, our first pipeline run failed at the build job.
Troubleshooting using GitLab Duo Root Cause Analysis
Our first attempt at running our brand-new pipeline failed. So, we took advantage of GitLab Duo Root Cause Analysis, which looks at the job logs and provides a thorough natural language explanation (with examples) of the root cause of the problem and, most importantly, how to fix it.
Root Cause Analysis recommended we look at the compatibility of the command being executed with the mandrel image used in the build job. We were not passing any command to the image, so we concluded that the culprit must have been the predefined entrypoint of the image itself. We needed to override this, so we asked Chat the following:
How do I override the entrypoint of an image using gitlab keywords?
Chat replied with some use case examples of overriding an image entry point. We used that information to update the build job image definition:
build:
  stage: build
  image:
    name: quay.io/quarkus/ubi-quarkus-mandrel-builder-image:22.3-java17
    entrypoint:
      - ""
We committed our changes to the main branch, which launched a new instance of the pipeline. This time the build job executed successfully but the pipeline failed at the containerize job.
Running a successful pipeline
Before drilling into the log of the failed containerize job, we decided to examine the log of the successfully completed build job first. Everything looked good in the build job log, with the exception of this warning message at the very end of it:
WARNING: src/main/docker/Dockerfile.native: no matching files. Ensure that the artifact path is relative to the working directory …
We took notice of this warning and then headed to the log of the failed containerize job. In it, we saw that the docker build command had failed due to a non-existent Dockerfile. We ran Root Cause Analysis on the job, and among its suggested fixes was for us to verify that the project structure matched the path of the specified Dockerfile.native file.
This information confirmed our suspicion about the misplaced Dockerfile.native file. Instead of being in the directory src/main/docker as specified in the pipeline, it was located at the root directory of the project.
So, we went back to our project and updated every occurrence of the location of this file in our .gitlab-ci.yml file. We modified the two locations where this happened, one in the build job and one in the containerize job, changing:
src/main/docker/Dockerfile.native
to
Dockerfile.native
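After that change, the relevant lines of the pipeline looked roughly as follows (abridged):
build:
  artifacts:
    paths:
      - target/*-runner
      - Dockerfile.native

containerize:
  script:
    - docker build -f Dockerfile.native -t ${CONTAINER_IMAGE} .
    - docker push ${CONTAINER_IMAGE}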
We committed our updates to the main branch and this time our entire pipeline executed successfully!
Our last step was to check the running application in the production environment in our Kubernetes cluster.
Accessing the deployed application running in cluster
Once the pipeline ran successfully to completion, we drilled into the log for the deploy job. Remember, this job printed the URL of the application at the end of its execution. We scrolled down to the bottom of the log and clicked on the https application link, which opened a browser window warning us that the connection was not private (we had switched the environment URL to http but forgot to update this echoed string). We proceeded past the browser warning and the string “Hello World” was displayed in the browser window, indicating that the application was up and running in the Kubernetes cluster.
Finally, to double-check our production deployment URL, we headed to the project Operate > Environments window, and clicked on the “Open” button for it, which immediately opened a browser window with the “Hello World” message.
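If you would rather verify from the cluster side, the resources created by the deploy job can also be inspected directly, for example:
kubectl get pods -n ${KUBE_NAMESPACE} -l app=quarkus-app
kubectl get ingress quarkus-app -n ${KUBE_NAMESPACE}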
Try it
We created, compiled, built, and deployed a simple Quarkus application to a Kubernetes cluster using GitLab Duo. This approach allowed us to be more efficient and productive in all the tasks that we performed and it helped us streamline our DevSecOps processes. We have shown only a small portion of how GitLab Duo’s AI-powered capabilities can help you, namely Chat and Root Cause Analysis. There’s so much more you can leverage in GitLab Duo to help you create better software faster and more securely.
Watch this whole use case in action:
All the project assets we used are available here.
Try GitLab Duo for free for 60 days and get started on exciting projects like this.