Dealing with JVM-based projects deployed on Kubernetes exposes you to a different set of problems: performance, compatibility, Logback issues, JVM flags, etc.

Here I want to show you how I deploy Spring Boot (Java 11+) based applications on Kubernetes, starting from the application itself through to deployment and monitoring with Grafana.

The application

Let’s say we’re dealing with an application that needs to expose HTTP endpoints; first of all we download the initial code from https://start.spring.io/.

spring_boot_initializr

As you can see, we are going to use: the Web dependency (given the nature of our application), Spring Boot Actuator (for healthchecks and Prometheus metrics), and the Prometheus Micrometer registry (for exposing metrics in the Prometheus format).

Then, after we have coded our HTTP API endpoint, we’re going to take care of resources/application.yml and resources/logback-spring.xml.
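The endpoint itself is not the focus of this post; as a working reference, it can be as trivial as the following sketch (class name and path are hypothetical):

import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class GreetingController {

	// trivial endpoint used as the running example in this post
	@GetMapping("/api/greeting")
	public String greeting() {
		return "Hello from api-endpoint-application!";
	}
}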

application.yml

spring:
  main:
    allow-bean-definition-overriding: true
  profiles:
    active: ${SPR_PROFILE}
  application:
    name: api-endpoint-application

Here we just define the application name and the active profile, which we pass in from the Kubernetes Deployment.
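For completeness, SPR_PROFILE is just an environment variable on the container; a minimal sketch of how it can be set in the Deployment we will see later (where the env block is elided):

env:
- name: SPR_PROFILE
  value: prod # or test, preprod, dev, local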

Logging

This Logback configuration enables us to log in JSON format when running with the test, preprod, or prod profiles, where we expect a log collection stack such as FluentD/Elasticsearch/Kibana; this way we can avoid writing custom log parsing in the FluentD config: we just need to use the FluentD JSON parser!

UPDATE 26/09/21: the above is true when the container runtime is Docker. As you may know, starting from 2021 Kubernetes is deprecating Docker(shim) in favour of CRI-O / containerd, so we will need a different FluentD/FluentBit parser. I will show you the configuration for CRI-O in another post (TBD).

logback-spring.xml

<?xml version="1.0" encoding="UTF-8"?>
<configuration>
	<appender name="consoleAppender" class="ch.qos.logback.core.ConsoleAppender">
		<layout class="ch.qos.logback.classic.PatternLayout">
			<Pattern>
				%white(%d{ISO8601}) %highlight(%-5level) [%green(%thread{15})] %yellow(%logger{15}): %msg%n%throwable{40}
			</Pattern>
		</layout>
	</appender>
	<appender name="consoleAppenderLogstash" class="ch.qos.logback.core.ConsoleAppender">
		<encoder class="net.logstash.logback.encoder.LogstashEncoder">
			<fieldNames>
				<version>[ignore]</version>
				<levelValue>[ignore]</levelValue>
			</fieldNames>
			<throwableConverter class="net.logstash.logback.stacktrace.ShortenedThrowableConverter">
				<maxDepthPerThrowable>40</maxDepthPerThrowable>
				<maxLength>4096</maxLength>
				<shortenedClassNameLength>20</shortenedClassNameLength>
				<exclude>sun\.reflect\..*\.invoke.*</exclude>
				<exclude>net\.sf\.cglib\.proxy\.MethodProxy\.invoke</exclude>
				<rootCauseFirst>true</rootCauseFirst>
				<inlineHash>true</inlineHash>
			</throwableConverter>
		</encoder>
	</appender>
	<springProfile name="local,dev">
		<root level="INFO">
			<appender-ref ref="consoleAppender"/>
		</root>
	</springProfile>
	<springProfile name="test,preprod,prod">
		<root level="INFO">
			<appender-ref ref="consoleAppenderLogstash"/>
		</root>
		<logger name="jsonLogger" additivity="false" level="DEBUG">
			<appender-ref ref="consoleAppenderLogstash"/>
		</logger>
	</springProfile>
</configuration>
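With one of the test, preprod or prod profiles active, every log event is then written to stdout as one JSON object per line, roughly like this (field values are illustrative):

{"@timestamp":"2021-08-30T07:28:06.123+02:00","message":"Started Application in 3.2 seconds","logger_name":"it.mycompany.Application","thread_name":"main","level":"INFO"}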

In order to use this Logback configuration, import the logstash-logback-encoder dependency in your pom.xml

pom.xml snippet

<dependency>
	<groupId>net.logstash.logback</groupId>
	<artifactId>logstash-logback-encoder</artifactId>
	<version>${logstash-logback-encoder-version}</version>
</dependency>

Containerization

In order to containerize this application, we are going to use the new “layering” feature from Spring Boot: as you may know, a Docker image consists of a base image plus multiple layers. Each of these layers is cached during the build in order to make subsequent builds faster; since changing a lower layer means rebuilding all the layers above it, frequently changing content should go in the upper layers.

docker_layer

Starting from version 2.3, Spring Boot allows us to create layered jars: we can map the content of an artifact into layers just by specifying a layers.xml file which provides the list of layers to be created.

layers.xml

<layers xmlns="http://www.springframework.org/schema/boot/layers"
        xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:schemaLocation="http://www.springframework.org/schema/boot/layers
              https://www.springframework.org/schema/boot/layers/layers-{spring-boot-xsd-version}.xsd">
    <application>
        <into layer="spring-boot-loader">
            <include>org/springframework/boot/loader/**</include>
        </into>
        <into layer="application" />
    </application>
    <dependencies>
        <into layer="snapshot-dependencies">
            <include>*:*:*SNAPSHOT</include>
        </into>
        <into layer="company-dependencies">
            <include>it.mycompany.*:*</include>
        </into>
        <into layer="dependencies"/>
    </dependencies>
    <layerOrder>
        <layer>dependencies</layer>
        <layer>spring-boot-loader</layer>
        <layer>snapshot-dependencies</layer>
        <layer>company-dependencies</layer>
        <layer>application</layer>
    </layerOrder>
</layers>

As you can see we have 5 layers:

  • application for our application (classes and resources)
  • snapshot-dependencies for any SNAPSHOT dependency
  • company-dependencies for any dependency that belongs to the company where you work
  • dependencies for any other dependency (stable)
  • spring-boot-loader for the jar loader classes

spring_boot_layering

In order to make use of the layered jar, we need to configure the spring-boot-maven-plugin inside the pom.xml

<plugin>
	<groupId>org.springframework.boot</groupId>
	<artifactId>spring-boot-maven-plugin</artifactId>
	<configuration>
		<layers>
			<enabled>true</enabled>
			<configuration>${project.basedir}/src/layers.xml</configuration> <!-- I put the layers.xml file inside src -->
		</layers>
	</configuration>
</plugin>

Then we can write our Dockerfile in the following way

ARG BUILD_IMAGE=my-registry/base-image/maven-adoptopenjdk-11:release-3.6
ARG RUNTIME_IMAGE=my-registry/base-image/distroless-java:release-11

FROM ${BUILD_IMAGE} AS builder
WORKDIR /output
COPY pom.xml .
RUN mvn -Dmaven.repo.local=/image-cache/repository -s /maven-settings/settings.xml -e dependency:resolve
COPY src /output/src
RUN mvn -U -Dmaven.repo.local=/image-cache/repository -s /maven-settings/settings.xml clean package -Djib.skip=true
RUN java -Djarmode=layertools -jar /output/target/app.jar extract

FROM builder AS test
RUN mvn sonar:sonar -Dmaven.repo.local=/image-cache/repository -s /maven-settings/settings.xml

FROM ${RUNTIME_IMAGE} AS production
WORKDIR /application
COPY --from=builder /output/dependencies/ ./
COPY --from=builder /output/company-dependencies/ ./
COPY --from=builder /output/snapshot-dependencies/ ./
COPY --from=builder /output/spring-boot-loader/ ./
COPY --from=builder /output/application/ ./
ENTRYPOINT ["java","-Dfile.encoding=UTF-8","-Dspring.config.additional-location=/config/","-agentlib:jdwp=server=y,transport=dt_socket,address=9000,suspend=n","org.springframework.boot.loader.JarLauncher"]

A lot of stuff is going on here, so let’s break it down: the whole build is performed in this multi-stage Dockerfile.

  • first stage: here I perform the build, leveraging a volume cache to speed up the builds; the last RUN is the command that extracts the layered jar into one directory per layer (see the layout sketch below).
  • second stage (optional): after the build I perform a Sonar code quality check
  • third stage: here I define the various layers that will be contained inside the final Docker image. The entrypoint uses org.springframework.boot.loader.JarLauncher since we no longer have a single jar file to launch.
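For reference, the layertools extract step unpacks the jar into one directory per layer, which is exactly what the COPY instructions of the production stage pick up; with the layers.xml above, /output should contain roughly the following (build leftovers such as src/ and target/ omitted):

/output/application/
/output/company-dependencies/
/output/dependencies/
/output/snapshot-dependencies/
/output/spring-boot-loader/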

Then you just execute docker build --target production . from the root of the project to build the final image.

As you may have noticed, the entrypoint contains additional Java flags that we are going to examine in the next section.

Configuration management

I used to include the spring-cloud-starter-config dependency in the pom.xml and have a single Git repo from which the config server would pull the properties. With Kubernetes I avoid this approach since it’s not very suitable for GitOps. Instead, I use configmaps and mount them inside the Kubernetes Deployment. In this way, even the application configurations become part of the Continuous Delivery (e.g. ArgoCD) flow, hence we are able to deploy them just by triggering a release pipeline on the target environment.

As you can see from the last line of the Dockerfile

 ENTRYPOINT ["java","-Dfile.encoding=UTF-8","-Dspring.config.additional-location=/config/","-agentlib:jdwp=server=y,transport=dt_socket,address=9000,suspend=n","org.springframework.boot.loader.JarLauncher"] 

there is "-Dspring.config.additional-location=/config/", which I use in every project by convention: this is where the application expects its configuration file(s).

So for example, we can create this configuration file as a configmap

kind: ConfigMap
apiVersion: v1
metadata:
  name: api-endpoint-application
  namespace: my-namespace
data:
  application.properties: |-
    server.port=8080
    server.shutdown=graceful

    spring.lifecycle.timeout-per-shutdown-phase=20s
    spring.sleuth.sampler.probability=1

    management.endpoint.metrics.enabled=true
    management.endpoint.prometheus.enabled=true

    management.endpoints.web.exposure.include=health,prometheus
    management.metrics.enable.jvm=true
    management.metrics.distribution.percentiles-histogram.http.server.requests=true
    management.metrics.distribution.sla.http.server.requests=100ms,150ms,250ms,500ms,1s
    management.metrics.export.prometheus.enabled=true    

and mount it at the /config path inside the container

apiVersion: apps/v1
kind: Deployment
metadata:
  name: api-endpoint-application
  namespace: my-namespace
spec:
  replicas: 1
  revisionHistoryLimit: 2
  selector:
    matchLabels:
      app: api-endpoint-application
  strategy:
    type: Recreate
  template:
    metadata:
      annotations:
        ci-last-build: 2021-08-30_07:28:06
        ci-last-commit: 1c5301a6f03fca24193e9de378f8c4fd6b3d6ca6
        ci-source-branch: develop
      labels:
        app: api-endpoint-application
    spec:
      [...]
      containers:
      - envFrom:
        - configMapRef:
            name: other-common-config
        image: my-registry/api-endpoint-application:1.0.0-SNAPSHOT
        [...]
          runAsUser: 1000
        volumeMounts:
        - mountPath: /config
          name: application-config # mount
          readOnly: true
      imagePullSecrets:
      - name: my-secret
      restartPolicy: Always
      volumes:
      - configMap:
          name: api-endpoint-application # define
        name: application-config

In this way we avoid including any additional dependency in the pom.xml: we just leverage a Spring Boot capability that allows specifying external configuration locations, pointing it at a configmap mounted inside the Deployment.

Observability

In order to gather insights into what the application is doing in terms of (JVM) memory, I use Grafana + Prometheus + Micrometer: this way we can keep track of the various JVM memory metrics in a unified dashboard on Grafana.

To achieve that, import the Micrometer dependency

<dependency>
  <groupId>io.micrometer</groupId>
  <artifactId>micrometer-registry-prometheus</artifactId>
  <version>${micrometer-registry-prometheus-version}</version>
</dependency>

then expose these metrics by adding some management properties inside the configuration file:

management.endpoint.metrics.enabled=true
management.endpoint.prometheus.enabled=true
management.endpoints.web.exposure.include=health,prometheus
management.metrics.enable.jvm=true
management.metrics.export.prometheus.enabled=true
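As a side note, the same Micrometer registry can also carry custom application metrics that will show up on the same endpoint; a minimal hypothetical sketch (class and metric names are made up):

import io.micrometer.core.instrument.Counter;
import io.micrometer.core.instrument.MeterRegistry;
import org.springframework.stereotype.Component;

@Component
public class OrderMetrics {

	private final Counter ordersProcessed;

	public OrderMetrics(MeterRegistry registry) {
		// rendered as orders_processed_total on /actuator/prometheus
		this.ordersProcessed = Counter.builder("orders.processed")
				.description("Number of processed orders")
				.register(registry);
	}

	public void onOrderProcessed() {
		ordersProcessed.increment();
	}
}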

Now the application should expose a /prometheus endpoint inside the Actuator context. You can verify that by port-forwarding the pod (for example with kubectl port-forward deploy/api-endpoint-application 8080:8080) and then hitting localhost:8080/actuator/prometheus (I assume you use port 8080 as well, as set in the configmap above).

Then we can leverage the Prometheus Operator's ServiceMonitor to scrape these metrics and display them on a Grafana dashboard (or even create custom Prometheus alerts).

Create a ServiceMonitor

ServiceMonitor

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: api-endpoint-application
  namespace: monitoring
  labels:
    app: api-endpoint-application
spec:
  endpoints:
    - port: http
      path: /actuator/prometheus
      interval: 2s
  selector:
    matchLabels:
      app: api-endpoint-application
  namespaceSelector:
    matchNames:
      - my-namespace

then, once Prometheus has refreshed its configuration, we will be able to visualize those metrics by using the Micrometer Grafana dashboard

micrometer

Conclusion

In this post I showed you the way I create, containerize, run and monitor Spring Boot applications on a Kubernetes cluster. By leveraging Docker’s multi-stage capabilities and Spring Boot layered jars, we obtain a strong caching mechanism at build time that, for my use cases, reduces the time to deploy by 10-20 seconds (depending on the size of the jar and on how many dependencies Docker is caching). Then I showed you how I externalize configuration for Spring applications by just mounting a configmap at a pre-defined path inside the container, without adding any extra Maven dependency.

At the end I talked about monitoring: by leveraging Micrometer, Prometheus and Grafana we are able to monitor JVM memory (and other metrics) on Kubernetes just by creating a ServiceMonitor.

Justin