Logging in Container-Based Environments
Logging configuration can change dramatically depending on how an application is packaged. For Docker, you want JSON written to STDOUT, while for other deployments you may want file-based streaming. In addition, you may want to change logging on the fly without redeploying or restarting the server: the control plane for Java has historically been JMX over RMI, but this is very insecure. Instead, you should use Jolokia with restricted access controls, and access it through an HTTP interface like hawt.io. This post explains how to do that, leveraging sbt-native-packager.
Packaging Applications
There are several different ways to run an application in SBT. They are typically used in different situations:
- In Console: `sbt run` runs the application from the console, and can either fork a JVM or run inside SBT's JVM process.
- Java Application: `sbt universal:packageZipTarball` creates a packaged application, with a launch script, as a `.tar.gz` file.
- Docker Container: `sbt docker:publishLocal` creates an image and pushes it to the local Docker repository with a tag like `myapp:latest`, and `sbt docker:publish` creates an image and pushes it to a remote Docker repository.
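All of these tasks come from sbt-native-packager. As a rough sketch of the setup they assume (the plugin version shown here is only illustrative, not taken from the example project):

// project/plugins.sbt
addSbtPlugin("com.typesafe.sbt" % "sbt-native-packager" % "1.4.1")

// build.sbt: JavaAppPackaging provides the universal tasks, DockerPlugin the docker ones
lazy val root = (project in file("."))
  .enablePlugins(JavaAppPackaging, DockerPlugin)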
Depending on how you package the application, you'll want to use different logging options.
For development, you'll typically run the application from SBT, and you'll want line-oriented logging and text/JSON files. However, you'll also want to run the application as a packaged Java application sometimes, so that you can run additional scripts and verify that the `javaOptions` are set correctly. Typically you'll do this with `sbt stage`, which sets up the application structure under `target/universal/stage`, but in many situations you'll have a script that sends the packaged application to a remote server so you can run it against resources that aren't available in development, e.g. AWS.
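For example, here's a minimal sketch of forking the JVM for `sbt run` so the `javaOptions` actually take effect in development (the `-D` property shown anticipates the `local.logback.environment` property used by the Logback configuration below):

// build.sbt (sketch): javaOptions only apply when the run task forks a JVM
run / fork := true

// select the development logging setup when running from SBT
run / javaOptions += "-Dlocal.logback.environment=development"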
Configuring Logback
I've written before about configuring Logback for different environments by using composite appenders: see default.xml for a full example.
For Docker containers in production, the logging requirements are different. The filesystem may be read-only, and logging may go to STDOUT as JSON (the 12-factor app approach), or may write logs directly to a TCP port, or to a mapped Docker volume. This complicates things, because Logback's file appenders, if defined, will attempt to create a file on the filesystem immediately, whether they're filtered out or not. So those appenders must never exist in the first place.
The solution to this is to configure Logback with Janino, and then check against a property to define the appender:
<configuration>
  <include resource="terse-logback/initial.xml"/>

  <if condition='property("local.logback.environment").contains("container")'>
    <then>
      <appender name="ROOT_APPENDER" class="ch.qos.logback.core.ConsoleAppender">
        <!-- logstash JSON encoder -->
      </appender>
    </then>
    <else>
      <!-- set up composite appenders... -->
      <appender name="selector" class="com.tersesystems.logback.core.SelectAppender">
        <appenderKey>${logback.environment:-development}</appenderKey>
        <appender-ref ref="development"/>
        <appender-ref ref="test"/>
        <appender-ref ref="production"/>
      </appender>

      <appender name="ROOT_APPENDER" class="com.tersesystems.logback.uniqueid.UniqueIdEventAppender">
        <appender-ref ref="selector"/>
      </appender>
    </else>
  </if>

  <root level="INFO">
    <appender-ref ref="ROOT_APPENDER" />
  </root>

  <include resource="terse-logback/ending.xml" />
</configuration>
From there, it's a question of defining what `logback.environment` is going to look like. In the `build.sbt` file, we define a `LOGBACK_ENVIRONMENT` environment variable and a `LOG_DIR` environment variable, and set them differently depending on whether we're in the `Docker` sbt scope or not. This gives us the option of writing out logs to a Docker bind mount in development if we're running a docker container locally on our machine.
Docker / defaultLinuxLogsLocation := "/opt/docker/logs",
dockerExposedVolumes := Seq((Docker / defaultLinuxLogsLocation).value),
dockerEnvVars := Map(
  "LOG_DIR" -> (Docker / defaultLinuxLogsLocation).value
),
bashScriptExtraDefines += """addJava "-Dlocal.logback.environment=${LOGBACK_ENVIRONMENT:-production}"""",
bashScriptExtraDefines += """addJava "-Dlog.dir=${LOG_DIR:-${app_home}/../logs}"""",
The source code for this is available at https://github.com/wsargent/sbt-with-jdk-13-docker-logging-example.
Working with JMX in Docker Containers
Another challenge when working with Docker containers is having a decent management layer, so that if we want to turn on logging for a particular component, we can do that without bouncing the server. Historically, JMX has filled that role, but JMX Remoting works by using RMI with Java serialization, which is horribly inefficient and insecure. On top of that, JMX has no built-in authorization layer, so there is no way to bake in a "read-only" view or limit capabilities.
As such, the best thing to do with JMX is to make sure that it can only be accessed through Jolokia, which exposes JMX information over HTTP and JSON, and not only has an authentication layer through JAAS or mTLS, but also has an authorization layer. You can then run hawt.io to connect to the Jolokia port and provide a way to get at MBeans.
To do this, we'll add sbt-javaagent as an SBT plugin to `project/plugins.sbt`:
addSbtPlugin("com.lightbend.sbt" % "sbt-javaagent" % "0.1.5")
and enable the plugin with `enablePlugins(JavaAgent)` in the project, and then we can add:
javaAgents += JavaAgent(
  "org.jolokia" % "jolokia-agent-jvm" % "2.0.0-M3" classifier "agent",
  arguments = "host=0.0.0.0,port=8778"
),
dockerExposedPorts += 8778, // expose the Jolokia port
// Start up Java Flight Recorder
bashScriptExtraDefines += """addJava "-XX:StartFlightRecording=disk=true,dumponexit=true,filename=$LOG_DIR/recording.jfr,maxsize=1024m,maxage=1d,settings=profile,path-to-gc-roots=true"""",
For good measure, we'll also disable Java serialization completely using JEP 290 serialization filtering:
Universal / javaOptions ++= Seq(
  "-Djdk.serialFilter='!*'", // blacklist all Java serialization
  ...
)
And then running hawtio is as simple as:
java -jar hawt/hawtio-app-2.8.0.jar
From there, you can add the Java instance as "http://localhost:8778", and then control logging on a running JVM by going to the "JMX" tab, searching for `ch.qos.logback.classic.jmx.JMXConfigurator`, then clicking on "Operations", then "setLoggerLevel()".
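If you'd rather script it than click through hawtio, the same MBean operation can be invoked through Jolokia's HTTP API directly. Here's a minimal sketch using the JDK 11+ HTTP client (the context name "default" in the MBean name, and the logger and level in the arguments, are assumptions you'd adjust for your application):

import java.net.URI
import java.net.http.{HttpClient, HttpRequest, HttpResponse}

object SetLoggerLevel extends App {
  // Jolokia "exec" request against Logback's JMXConfigurator MBean.
  // The MBean name assumes the default logger context name ("default").
  val body =
    """{
      |  "type": "exec",
      |  "mbean": "ch.qos.logback.classic:Name=default,Type=ch.qos.logback.classic.jmx.JMXConfigurator",
      |  "operation": "setLoggerLevel",
      |  "arguments": ["com.example.MyComponent", "DEBUG"]
      |}""".stripMargin

  val request = HttpRequest.newBuilder()
    .uri(URI.create("http://localhost:8778/jolokia/"))
    .header("Content-Type", "application/json")
    .POST(HttpRequest.BodyPublishers.ofString(body))
    .build()

  val response = HttpClient.newHttpClient()
    .send(request, HttpResponse.BodyHandlers.ofString())

  println(response.body()) // Jolokia returns a JSON status payload
}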
I also recommend going to the "Diagnostics" tab and using flight recorder dumps to monitor the internal state of the JVM. JFR is incredibly useful and free to use as of JDK 11 (you can ignore the hawtio warning). You'll need a bind mount to dump recordings to your host filesystem, but after that you can use Java Mission Control to load up the data and see events.
The source code for JMX access is in https://github.com/wsargent/play-jdk13-alpine-slim.