Terse Systems

Writing an SBT Plugin


One of the things I like about SBT is that it’s interactive. SBT stays up as a long-running process, and you interact with it many times while it manages your project and compiles code for you.

Because SBT is interactive and runs on the JVM, you can use it for more than just builds. You can use it for communication. Specifically, you can use it to make HTTP requests out to things you’re interested in communicating with.

Unfortunately, I knew very little about SBT plugins. So, I talked to Christopher Hunt and Josh Suereth, downloaded eigengo’s sbt-mdrw project, read the activator blog post on markdown and then worked it out on the plane back from Germany.

I made an SBT 0.13 plugin that uses the ROME RSS library to display titles from a list of RSS feeds. It’s available from https://github.com/wsargent/sbt-rss and has lots of comments.

The SBT RSS plugin adds a single command to SBT. You type rss at the console, and it displays the feed:

> rss
[info] Showing http://typesafe.com/blog/rss.xml
[info]      Title = The Typesafe Blog
[info]      Published = null
[info]      Most recent entry = Scala Days Presentation Roundup
[info]      Entry updated = null
[info] Showing http://letitcrash.com/rss
[info]      Title = Let it crash
[info]      Published = null
[info]      Most recent entry = Reactive Queue with Akka Reactive Streams
[info]      Entry updated = null
[info] Showing https://github.com/akka/akka.github.com/commits/master/news/_posts.atom
[info]      Title = Recent Commits to akka.github.com:master
[info]      Published = Thu May 22 05:51:21 EDT 2014
[info]      Most recent entry = Fix fixed issue list.
[info]      Entry updated = Thu May 22 05:51:21 EDT 2014

Let’s walk through how it does that.

First, the build file. This looks like a normal build.sbt file, except that it has an sbtPlugin setting:

build.sbt
// this bit is important
sbtPlugin := true

organization := "com.typesafe.sbt"

name := "sbt-rss"

version := "1.0.0-SNAPSHOT"

scalaVersion := "2.10.4"

scalacOptions ++= Seq("-deprecation", "-feature")

resolvers += Resolver.sonatypeRepo("snapshots")

libraryDependencies ++= Seq(
  // RSS fetcher (note: the website is horribly outdated)
  "com.rometools" % "rome-fetcher" % "1.5.0"
)

publishMavenStyle := false

/** Console */
initialCommands in console := "import com.typesafe.sbt.rss._"

Next, there’s the plugin’s Scala code itself.

SbtRss.scala
object SbtRss extends AutoPlugin {
   // stuff
}

So, the first thing to note is the AutoPlugin class. The Plugins page covers AutoPlugin in detail, but all you really need to know is this: if you define an autoImport object containing your setting keys and import it inside an AutoPlugin, the setting keys become available to SBT.
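In skeleton form, the pattern looks like this (a minimal sketch, with a hypothetical key name):

object MyPlugin extends AutoPlugin {
  // Anything inside autoImport is automatically made visible to the
  // build definitions of projects that use this plugin.
  object autoImport {
    val myGreeting = settingKey[String]("A hypothetical example setting.")
  }
  import autoImport._
}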

The next bit is the globalSettings entry:

SbtRss.scala
override def globalSettings: Seq[Setting[_]] = super.globalSettings ++ Seq(
  Keys.commands += rssCommand
)

Here, we’re saying we’re going to add a command to SBT’s global settings, by merging it with super.globalSettings.

The next two bits detail how to create the RSS command in SBT style.

SbtRss.scala
/** Allows the RSS command to take string arguments. */
private val args = (Space ~> StringBasic).*

/** The RSS command, mapped into sbt as "rss [args]" */
private lazy val rssCommand = Command("rss")(_ => args)(doRssCommand)

Finally, there’s the command itself.

SbtRss.scala
def doRssCommand(state: State, args: Seq[String]): State = {
  // do stuff

  state
}

The first thing we need to do within a command is call Project.extract(state). This gives us a bunch of useful settings such as currentRef, which we can use to pull the value of the SettingKey out. The SBT documentation on Build State – Project related data shows some more examples:

SbtRss.scala
// Doing Project.extract(state) and then importing it gives us currentRef.
// Using currentRef allows us to get at the values of SettingKey.
// http://www.scala-sbt.org/release/docs/Build-State.html#Project-related+data
val extracted = Project.extract(state)
import extracted._

Once we have the extracted currentRef, we can pull out the list of URLs with this construct, documented in Build State – Project data:

SbtRss.scala
val currentList = (rssList in currentRef get structure.data).get

And then we can put that together with the ROME library to print something out.

SbtRss.scala
package com.typesafe.sbt.rss

import sbt._
import Keys._
import sbt.complete.Parsers._

import java.net.URL
import com.rometools.fetcher._
import com.rometools.fetcher.impl._
import com.rometools.rome.feed.synd._

import scala.util.control.NonFatal

/**
 * An autoplugin that displays an RSS feed.
 */
object SbtRss extends AutoPlugin {

  /**
   * Sets up the autoimports of setting keys.
   */
  object autoImport {
    /**
     * Defines "rssList" as the setting key that we want the user to fill out.
     */
    val rssList = settingKey[Seq[String]]("The list of RSS urls to update.")
  }

  // Brings the autoImport members (rssList) into scope below.
  import autoImport._

  /**
   * An internal cache to avoid hitting RSS feeds repeatedly.
   */
  private val feedInfoCache = HashMapFeedInfoCache.getInstance()

  /**
   * An RSS fetcher, backed by the cache.
   */
  private val fetcher = new HttpURLFeedFetcher(feedInfoCache)

  /** Allows the RSS command to take string arguments. */
  private val args = (Space ~> StringBasic).*

  /** The RSS command, mapped into sbt as "rss [args]" */
  private lazy val rssCommand = Command("rss")(_ => args)(doRssCommand)

  /**
   * Adds the rssCommand to the list of global commands in SBT.
   */
  override def globalSettings: Seq[Setting[_]] = super.globalSettings ++ Seq(
    Keys.commands += rssCommand
  )

  /**
   * The actual RSS command.
   *
   * @param state the state of the RSS application.
   * @param args the string arguments provided to "rss"
   * @return the unchanged state.
   */
  def doRssCommand(state: State, args: Seq[String]): State = {
    state.log.debug(s"args = $args")

    // Doing Project.extract(state) and then importing it gives us currentRef.
    // Using currentRef allows us to get at the values of SettingKey.
    // http://www.scala-sbt.org/release/docs/Build-State.html#Project-related+data
    val extracted = Project.extract(state)
    import extracted._

    // Create a new fetcher event listener attached to the state -- this gives
    // us a way to log the fetcher events.
    val listener = new FetcherEventListenerImpl(state)
    fetcher.addFetcherEventListener(listener)

    try {
      if (args.isEmpty) {
        // This is the way we get the setting from rssList := Seq("http://foo.com/rss")
        // http://www.scala-sbt.org/release/docs/Build-State.html#Project+data
        val currentList = (rssList in currentRef get structure.data).get
        for (currentUrl <- currentList) {
          val feedUrl = new URL(currentUrl)
          printFeed(feedUrl, state)
        }
      } else {
        for (currentUrl <- args) {
          val feedUrl = new URL(currentUrl)
          printFeed(feedUrl, state)
        }
      }
    } catch {
      case NonFatal(e) =>
        state.log.error(s"Error ${e.getMessage}")
    } finally {
      // Remove the listener so we don't have a memory leak.
      fetcher.removeFetcherEventListener(listener)
    }

    state
  }

  def printFeed(feedUrl:URL, state:State) = {
    // Allows us to do "asScala" conversion from java.util collections.
    import scala.collection.JavaConverters._

    // This is a blocking operation, but we're in SBT, so we don't care.
    val feed = fetcher.retrieveFeed(feedUrl)
    val title = feed.getTitle.trim()
    val publishDate = feed.getPublishedDate
    val entries = feed.getEntries.asScala
    val firstEntry = entries.head

    // The only way to provide the RSS feeds as a resource seems to be to
    // have another plugin extend this one.  The code's small enough that it
    // doesn't seem worth it.
    state.log.info(s"Showing $feedUrl")
    state.log.info(s"\t\tTitle = $title")
    state.log.info(s"\t\tPublished = $publishDate")
    state.log.info(s"\t\tMost recent entry = ${firstEntry.getTitle.trim()}")
    state.log.info(s"\t\tEntry updated = " + firstEntry.getUpdatedDate)
  }

  /**
   * Listens for RSS events.
   *
   * @param state
   */
  class FetcherEventListenerImpl(state:State) extends FetcherListener {
    def fetcherEvent(event:FetcherEvent) = {
      import FetcherEvent._
      event.getEventType match {
        case EVENT_TYPE_FEED_POLLED =>
          state.log.debug("\tEVENT: Feed Polled. URL = " + event.getUrlString)
        case EVENT_TYPE_FEED_RETRIEVED =>
          state.log.debug("\tEVENT: Feed Retrieved. URL = " + event.getUrlString)
        case EVENT_TYPE_FEED_UNCHANGED =>
          state.log.debug("\tEVENT: Feed Unchanged. URL = " + event.getUrlString)
      }
    }
  }
}

This is an intentionally trivial example, but it’s easy to see how you could extend it, for example to check whether the build failed. Have fun.
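As a rough sketch of that idea (hypothetical, in the same style as doRssCommand), a command handler could run compile and report whether the build passed:

def doStatusCommand(state: State, args: Seq[String]): State = {
  // Run the compile task against the current state.
  Project.runTask(compile in Compile, state) match {
    case Some((newState, Value(_))) =>
      newState.log.info("Build passed.")
      newState
    case Some((newState, Inc(cause))) =>
      newState.log.error(s"Build failed: $cause")
      newState
    case None =>
      state.log.error("compile task is not defined in this context")
      state
  }
}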

Testing Hostname Verification


This is part of a series of posts about setting up Play WS as a TLS client for a “secure by default” setup and configuration through text files, along with the research and thinking behind the setup. I recommend The Most Dangerous Code in the World for more background. And thanks to Jon for the shoutout in Techcrunch.

Previous posts are:

The last talked about implementing hostname verification, which was a particular concern in TMDCitW. This post shows how you can test that your TLS client implements hostname verification correctly, by staging an attack. We’re going to use dnschef, a DNS proxy server, to confuse the client into talking to the wrong server.

To keep things simple, I’m going to assume you’re on Mac OS X Mavericks at this point. (If you’re on Linux, this is old hat. If you’re on Windows, it’s probably easier to use a VM like Virtualbox to set up a Linux environment.)

The first step to installing dnschef is to install a decent Python. The Python Guide suggests Homebrew, and Homebrew requires XCode be installed, so let’s start there.

Install XCode

Install XCode from the App Store and also install the command line tools:

xcode-select --install

Install Homebrew itself:

ruby -e "$(curl -fsSL https://raw.github.com/Homebrew/homebrew/go/install)"

Homebrew has some notes about Python, so we set up the command line environment:

export ARCHFLAGS="-arch x86_64"
export PATH=/usr/local/bin:/usr/local/sbin:~/bin:$PATH

Now (if you already have homebrew installed):

brew update
brew install openssl
brew install python --with-brewed-openssl --framework

You should see:

$ python --version
Python 2.7.6
$ which python
/usr/local/bin/python

If you run into trouble, then brew doctor or brew link --overwrite python should sort things out.

Now upgrade the various package tools for Python:

pip install --upgrade setuptools
pip install --upgrade pip

Now that we’ve got Python installed, we can install dnschef:

wget https://thesprawl.org/media/projects/dnschef-0.2.1.tar.gz
tar xvzf dnschef-0.2.1.tar.gz
cd dnschef-0.2.1

Then, we need to use dnschef as a nameserver. An attacker would use rogue DHCP or ARP spoofing to fool your computer into accepting this, but we can just add it directly:

OS X – Open System Preferences and click on the Network icon.

Select the active interface and fill in the DNS Server field. If you are using Airport, you will have to click on the Advanced… button and edit the DNS servers from there.

Don’t forget to click “Apply” after making the changes!

Now, we’re going to use DNS to redirect https://www.howsmyssl.com to https://playframework.com.

$ host playframework.com
playframework.com has address 54.243.50.169

We need to specify the IP address 54.243.50.169 as the fakeip argument.

$ sudo /usr/local/bin/python ./dnschef.py --fakedomains www.howsmyssl.com --fakeip 54.243.50.169
          _                _          __
         | | version 0.2  | |        / _|
       __| |_ __  ___  ___| |__   ___| |_
      / _` | '_ \/ __|/ __| '_ \ / _ \  _|
     | (_| | | | \__ \ (__| | | |  __/ |
      \__,_|_| |_|___/\___|_| |_|\___|_|
                   [email protected]

[*] DNSChef started on interface: 127.0.0.1
[*] Using the following nameservers: 8.8.8.8
[*] Cooking A replies to point to 54.243.50.169 matching: www.howsmyssl.com

Now that we’ve got dnschef working as a proxy, we can see whether various TLS clients notice that www.howsmyssl.com has started returning an X.509 certificate that says it came from “playframework.com”:

$ curl https://www.howsmyssl.com/
curl: (60) SSL certificate problem: Invalid certificate chain
More details here: http://curl.haxx.se/docs/sslcerts.html

Curl is not fooled. It knows the subjectAltName.dnsName is different.

Let’s try Play WS:

[ssltest] $ testOnly HowsMySSLSpec
[info] Compiling 1 Scala source to /Users/wsargent/work/ssltest/target/scala-2.10/test-classes...
Mar 31, 2014 6:11:08 PM org.jboss.netty.channel.DefaultChannelFuture
WARNING: An exception was thrown by ChannelFutureListener.
java.net.ConnectException: HostnameVerifier exception.
  at com.ning.http.client.providers.netty.NettyConnectListener.operationComplete(NettyConnectListener.java:81)
  at org.jboss.netty.channel.DefaultChannelFuture.notifyListener(DefaultChannelFuture.java:427)
  at org.jboss.netty.channel.DefaultChannelFuture.notifyListeners(DefaultChannelFuture.java:413)
  at org.jboss.netty.channel.DefaultChannelFuture.setSuccess(DefaultChannelFuture.java:362)
  at org.jboss.netty.handler.ssl.SslHandler.setHandshakeSuccess(SslHandler.java:1383)
  at org.jboss.netty.handler.ssl.SslHandler.unwrap(SslHandler.java:1252)
  at org.jboss.netty.handler.ssl.SslHandler.decode(SslHandler.java:913)
  at org.jboss.netty.handler.codec.frame.FrameDecoder.callDecode(FrameDecoder.java:425)
  at org.jboss.netty.handler.codec.frame.FrameDecoder.messageReceived(FrameDecoder.java:303)
  at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:268)
  at org.jboss.netty.channel.Channels.fireMessageReceived(Channels.java:255)
  at org.jboss.netty.channel.socket.nio.NioWorker.read(NioWorker.java:88)
  at org.jboss.netty.channel.socket.nio.AbstractNioWorker.process(AbstractNioWorker.java:109)
  at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:312)
  at org.jboss.netty.channel.socket.nio.AbstractNioWorker.run(AbstractNioWorker.java:90)
  at org.jboss.netty.channel.socket.nio.NioWorker.run(NioWorker.java:178)
  at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
  at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
  at java.lang.Thread.run(Thread.java:744)

[info] HowsMySSLSpec
[info]
[info] WS should
[info] + NOT be fooled by dnschef
[info]
[info] Total for specification HowsMySSLSpec
[info] Finished in 21 seconds, 162 ms
[info] 1 example, 0 failure, 0 error
[info] Passed: Total 1, Failed 0, Errors 0, Passed 1
[success] Total time: 25 s, completed Mar 31, 2014 6:11:26 PM

Yep, it throws an exception.
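For reference, the passing spec is roughly the mirror image of the loose one shown below (a sketch, assuming the same createClient helper from ClientMethods, and that an empty Configuration gives the secure defaults):

class HowsMySSLSpec extends PlaySpecification with ClientMethods {
  val timeout: Timeout = 20.seconds
  "WS" should {
    "NOT be fooled by dnschef" in {
      // Default configuration: hostname verification is enabled.
      val client = createClient(play.api.Configuration.empty)
      await(client.url("https://www.howsmyssl.com").get())(timeout) must throwA[Exception]
    }
  }
}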

Now let’s try it with hostname verification off by setting the ‘loose’ option on the client:

class HowsMySSLSpec extends PlaySpecification with ClientMethods {
  val timeout: Timeout = 20.seconds
  "WS" should {
    "be fooled by dnschef" in {
      val rawConfig = play.api.Configuration(ConfigFactory.parseString(
        """
          |ws.ssl.loose.disableHostnameVerification=true
        """.stripMargin))

      val client = createClient(rawConfig)
      val response = await(client.url("https://www.howsmyssl.com").get())(timeout)
      response.status must be_==(200)
      response.body must contain("Play Framework")
    }
  }
}

Run the test:

[ssltest] $ testOnly HowsMySSLSpec
[info] HowsMySSLSpec
[info]
[info] WS should
[info] + be fooled by dnschef
[info]
[info] Total for specification HowsMySSLSpec
[info] Finished in 9 seconds, 675 ms
[info] 1 example, 0 failure, 0 error
[info] Passed: Total 1, Failed 0, Errors 0, Passed 1
[success] Total time: 12 s, completed Mar 31, 2014 6:08:50 PM

It works! We have fooled WS into setting up a TLS connection with a different host, one that we have control over. If we were evil, we could then proxy https://playframework.com to the intended URL, and save off all the content or inject fake data.

Let’s try Apache HttpClient 3.x:

name := "httpclienttest"

version := "1.0-SNAPSHOT"

libraryDependencies ++= Seq(
    "commons-httpclient" % "commons-httpclient" % "3.1",
    "org.specs2" %% "specs2" % "2.3.10" % "test"
)

scalacOptions in Test ++= Seq("-Yrangepos")

resolvers ++= Seq("snapshots", "releases").map(Resolver.sonatypeRepo)

And the test:
import org.apache.commons.httpclient.HttpClient
import org.apache.commons.httpclient.methods.GetMethod
import org.specs2.mutable.Specification

class HttpClientSpec extends Specification {
  "HTTPClient" should {
    "do something" in {
      val httpclient = new HttpClient()
      val httpget = new GetMethod("https://www.howsmyssl.com/")
      try {
        httpclient.executeMethod(httpget)
        //val line = httpget.getResponseBodyAsString
        //line must not contain("Play Framework")
        httpget.getStatusCode must not be_==(200)
      } finally {
        httpget.releaseConnection()
      }
    }
  }
}

Running this gives:

[info] HttpClientSpec
[info]
[info] HTTPClient should
[info] x do something
[error]    '200' is equal to '200' (HttpClientSpec.scala:14)
[info]
[info]
[info] Total for specification HttpClientSpec
[info] Finished in 18 ms
[info] 1 example, 1 failure, 0 error
[error] Failed: Total 1, Failed 1, Errors 0, Passed 0
[error] Failed tests:
[error]     HttpClientSpec
[error] (test:test) sbt.TestsFailedException: Tests unsuccessful
[error] Total time: 4 s, completed Mar 31, 2014 8:26:46 PM

Nope. HttpClient 3.x was retired in 2007, but any code that’s still using it under the hood is vulnerable to this attack.

Try this on your own code and see what it does. I’ll bet it’ll be interesting.

Next

Odds and ends I couldn’t cover elsewhere. And then best practices, and summing things up.

Fixing Hostname Verification


This is the fourth in a series of posts about setting up Play WS as a TLS client for a “secure by default” setup and configuration through text files, along with the research and thinking behind the setup. I recommend The Most Dangerous Code in the World for more background.

Previous posts are:

The Attack: Man in the Middle

The scenario that requires hostname verification is when an attacker is on your local network, and can subvert DNS or ARP, and somehow redirect traffic through his own machine. When you make the call to https://example.com, the attacker can make the response come back to a local IP address, and then send you a TLS handshake with a certificate chain.

The attacker needs you to accept a public key that it owns so that you will continue the conversation with it, so it can’t simply hand you the certificate chain that belongs to example.com — that has a different public key, and the attacker can’t use it. Also, the attacker can’t give you a certificate chain that points to example.com and has the attacker’s public key — the CA should (in theory) refuse to sign the certificate, since the domain belongs to someone else.

However… if the attacker gets a CA to sign a certificate for a site that it does have control of, then the attack works like this:

In the example, DNS is compromised, but an attacker could just as well proxy the request to another server and return the result from a different server. The key to any kind of check for server identity is that the check of the hostname must happen on the client end, and must be tied to the original request coming in. It must happen out of band, and cannot rely on any response from the server.

The Defense: Hostname Verification

In theory, hostname verification in HTTPS sounds simple enough. You call “https://example.com”, save off the “example.com” bit, and then check it against the X.509 certificate from the server. If the names don’t match, you terminate the connection.

So where do you look in the certificate? According to RFC 6125, hostname verification should be done against the certificate’s subjectAlternativeName’s dNSName field. In some legacy implementations, the check is done against the certificate’s commonName field, but commonName has been deprecated for quite a while now.
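For a concrete sense of where that field lives, here is a small sketch that pulls the dNSName entries out of a certificate using the standard java.security.cert API:

import java.security.cert.X509Certificate
import scala.collection.JavaConverters._

// Returns the dNSName entries of the subjectAlternativeName extension.
// Each entry is a two-element list: an Integer type (2 = dNSName) and
// the name itself.
def dnsNames(cert: X509Certificate): Seq[String] = {
  val altNames = Option(cert.getSubjectAlternativeNames).map(_.asScala).getOrElse(Nil)
  altNames.collect {
    case entry if entry.get(0) == 2 => entry.get(1).toString
  }.toSeq
}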

You generate a certificate with the right name by using keytool with the -ext flag to say the certificate has example.com as the DNS record in the subjectAltName field:

keytool -genkeypair \
  -keystore keystore.jks \
  -dname "CN=example.com, OU=Sun Java System Application Server, O=Sun Microsystems, L=Santa Clara, ST=California, C=US" \
  -keypass changeit \
  -storepass changeit \
  -keyalg RSA \
  -keysize 2048 \
  -alias example \
  -ext SAN=DNS:example.com \
  -validity 9999

And to view the certificate:

keytool -list -v -alias example -storepass changeit -keystore keystore.jks

Is it really that simple? Yes. HTTPS is very specific about verifying server identity. You make an HTTPS request, then you check that the certificate that comes back matches the hostname of the request. There’s some bits added on about wildcards, but for the most part it’s not complicated.

In fact (and this is part of the problem), you can say that HTTPS is defined by the hostname verification requirement for HTTP on top of TLS. The reason HTTPS exists as an RFC distinct from TLS is the specifics of hostname verification — LDAP has a distinct secure protocol, LDAPS, which handles hostname verification differently. Every protocol that uses TLS must have its own application level security on top of TLS. TLS, by itself, doesn’t define server identity.

Because TLS in its raw form doesn’t do hostname verification, anything that uses raw TLS without doing any server identity check is insecure. This is pretty amazing information in itself, so let’s break this down, and repeat it in bold face and all caps:

A) VERIFICATION OF SERVER IDENTITY IS APPLICATION PROTOCOL SPECIFIC.

B) BECAUSE OF (A), TLS LEAVES IT TO THE APPLICATION TO DO THE HOSTNAME VERIFICATION.

C) YOU CANNOT SECURELY USE RAW TLS WITHOUT ADDING HOSTNAME VERIFICATION.

Given the previous points and the consequences of failure, this would lead us to believe that there must be a safety system in place to validate the TLS configuration before opening a connection. To my knowledge, no such system exists. This is despite the fact that there are really only three main protocols that use TLS: HTTPS, LDAPS, and IMAPS.

TLS LIBRARIES SHOULD MAKE IT IMPOSSIBLE TO USE THEM RAW WITHOUT ANY HOSTNAME VERIFICATION. THEY DO NOT.

As you might guess, this makes lack of hostname verification a very common failure. The Most Dangerous Code in the World specifically calls out the lack of hostname verification as a very common failure of HTTPS client libraries. This is bad, because man in the middle attacks are extremely common.

In 2011, RFC 6125 was published to bridge this gap, but most TLS implementations don’t support it. In the absence of a known guide, using RFC 2818 is not unreasonable, and certainly better than nothing.

Implementation in JSSE

The JSSE Reference Guide goes out of its way to mention the need for hostname verification.

“IMPORTANT NOTE: When using raw SSLSockets/SSLEngines you should always check the peer’s credentials before sending any data. The SSLSocket/SSLEngine classes do not automatically verify, for example, that the hostname in a URL matches the hostname in the peer’s credentials. An application could be exploited with URL spoofing if the hostname is not verified.”

JSSE Reference Guide, SSLSession

A little later, the reference guide mentions it again, in context with HttpsURLConnection:

[T]he SSL/TLS protocols do not specify that the credentials received must match those that peer might be expected to send. If the connection were somehow redirected to a rogue peer, but the rogue’s credentials presented were acceptable based on the current trust material, the connection would be considered valid. When using raw SSLSockets/SSLEngines you should always check the peer’s credentials before sending any data. The SSLSocket and SSLEngine classes do not automatically verify that the hostname in a URL matches the hostname in the peer’s credentials. An application could be exploited with URL spoofing if the hostname is not verified.

JSSE Guide, HttpsURLConnection

I’ve never heard the term “URL spoofing” before, and Google shows nothing remotely connected with this term. Ping me if you’ve heard of it.

Anyway. JSSE does do hostname verification, if you set it up just right. For completeness, I’m going to go over all the options.

Hostname Verification in 1.6

In 1.6, if you want to use hostname verification, you have one way to do it. If you use HttpsURLConnection, then JSSE will do hostname verification for you by default. Other than that, you’re on your own. JSSE 1.6 does not provide any public classes for you to extend; it’s all internal.

If you want to use hostname verification on an SSLEngine, you have to get at an instance of sun.security.ssl.SSLEngineImpl and then call sslEngine.trySetHostnameVerification("HTTPS") on SSLEngine directly, using reflection. This lets ClientHandshaker pass in the identifier to com.sun.net.ssl.internal.ssl.X509ExtendedTrustManager.
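Spelled out, the reflection hack looks roughly like this (a sketch; it assumes the Sun-internal method is present as described, and it will break on other JVMs):

// Fragile and Sun-JVM specific: reach into the internal
// sun.security.ssl.SSLEngineImpl to enable HTTPS-style checking.
val sslEngine = sslContext.createSSLEngine(peerHost, peerPort)
val method = sslEngine.getClass.getDeclaredMethod("trySetHostnameVerification", classOf[String])
method.setAccessible(true)
method.invoke(sslEngine, "HTTPS")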

Hostname Verification in 1.7

JSSE 1.7 provides you with more options for doing HTTPS hostname verification. In addition to HttpsURLConnection, you have the option of using X509ExtendedTrustManager, because it “enables endpoint verification at the TLS layer.” What this means in practice is that X509ExtendedTrustManager routes through to X509TrustManagerImpl.checkIdentity, as in JDK 1.6.

The reference guide recommends using X509ExtendedTrustManager rather than the legacy X509TrustManager, and even has a worked example. But there’s a catch: X509ExtendedTrustManager is an abstract class, so you must inherit from it. This limits anything fun you might want to do, like aggregating keystore information. As such, it’s only useful if you’re doing minor tweaks.
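A minimal sketch of what that inheritance looks like, delegating everything to an existing trust manager:

import java.net.Socket
import java.security.cert.X509Certificate
import javax.net.ssl.{ SSLEngine, X509ExtendedTrustManager }

// Delegates all checks; custom logic would be layered into the
// checkServerTrusted methods.
class DelegatingTrustManager(delegate: X509ExtendedTrustManager) extends X509ExtendedTrustManager {
  def checkClientTrusted(chain: Array[X509Certificate], authType: String): Unit =
    delegate.checkClientTrusted(chain, authType)
  def checkServerTrusted(chain: Array[X509Certificate], authType: String): Unit =
    delegate.checkServerTrusted(chain, authType)
  def checkClientTrusted(chain: Array[X509Certificate], authType: String, socket: Socket): Unit =
    delegate.checkClientTrusted(chain, authType, socket)
  def checkServerTrusted(chain: Array[X509Certificate], authType: String, socket: Socket): Unit =
    delegate.checkServerTrusted(chain, authType, socket)
  def checkClientTrusted(chain: Array[X509Certificate], authType: String, engine: SSLEngine): Unit =
    delegate.checkClientTrusted(chain, authType, engine)
  def checkServerTrusted(chain: Array[X509Certificate], authType: String, engine: SSLEngine): Unit =
    delegate.checkServerTrusted(chain, authType, engine)
  def getAcceptedIssuers: Array[X509Certificate] = delegate.getAcceptedIssuers
}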

In 1.7, the manual method of doing it is not as bad as 1.6. There’s an explicit call you can make (recommended by Stack Overflow):

val sslParams = sslEngine.getSSLParameters
sslParams.setEndpointIdentificationAlgorithm("HTTPS")
sslEngine.setSSLParameters(sslParams)

But this doesn’t actually work with AsyncHttpClient: you’ll get a NullPointerException!

The reason why: AsyncHttpClient creates an SSLEngine without the peerHost and peerPort. JSSE assumes that if you called sslParams.setEndpointIdentificationAlgorithm("HTTPS") then you also created the SSL engine like this:

sslContext.createSSLEngine(peerHost, peerPort)

So, setEndpointIdentificationAlgorithm is not an option. (The lack of hostname could also possibly have an effect on Server Name Indication, although I haven’t tested that.)

There is another way to do hostname verification though: you can pass a custom HostnameVerifier into the client configuration.

HostnameVerifier is an interface that normally says “if you’ve tried resolving the hostname yourself and got nothing, then try this.” However, since AsyncHttpClient works directly with SSLEngine, the Netty provider will call the HostnameVerifier on every call to do hostname verification. This gives us the avenue we need.

I ended up using Kevin Locke’s guide to implement a HostnameVerifier that calls to Sun’s internal HostnameChecker, the same way that setEndpointIdentificationAlgorithm("HTTPS") does. The end result is pretty simple:

class DefaultHostnameVerifier extends HostnameVerifier {
  private val logger = LoggerFactory.getLogger(getClass)

  def hostnameChecker: HostnameChecker = HostnameChecker.getInstance(HostnameChecker.TYPE_TLS)

  def matchKerberos(hostname: String, principal: Principal) = HostnameChecker.`match`(hostname, principal.asInstanceOf[KerberosPrincipal])

  def isKerberos(principal: Principal): Boolean = principal != null && principal.isInstanceOf[KerberosPrincipal]

  def verify(hostname: String, session: SSLSession): Boolean = {
    logger.debug(s"verify: hostname = $hostname")

    val checker = hostnameChecker
    val result = try {
      session.getPeerCertificates match {
        case Array(cert: X509Certificate, _*) =>
          try {
            checker.`match`(hostname, cert)
            // Certificate matches hostname
            true
          } catch {
            case e: CertificateException =>
              // Certificate does not match hostname
              logger.debug("verify: Certificate does not match hostname", e)
              false
          }

        case notMatch =>
          // Peer does not have any certificates or they aren't X.509
          logger.debug(s"verify: Peer does not have any certificates: $notMatch")
          false
      }
    } catch {
      case _: SSLPeerUnverifiedException =>
        // Not using certificates for verification, try verifying the principal
        try {
          val principal = session.getPeerPrincipal
          if (isKerberos(principal)) {
            matchKerberos(hostname, principal)
          } else {
            // Can't verify principal, not Kerberos
            logger.debug(s"verify: Can't verify principal, not Kerberos")
            false
          }
        } catch {
          case e: SSLPeerUnverifiedException =>
            // Can't verify principal, no principal
            logger.debug("Can't verify principal, no principal", e)
            false
        }
    }
    logger.debug("verify: returning {}", result)
    result
  }
}

After that, I could set it on the builder and have hostname verification triggered:

val myHostnameVerifier = new DefaultHostnameVerifier()
val builder = new AsyncHttpClientConfig.Builder()
builder.setHostnameVerifier(myHostnameVerifier)

Disabling hostname verification is a loose option:

ws.ssl.loose.disableHostnameVerification = true

And there’s an option to use your own hostname verifier if you’re not on an Oracle JVM:

ws.ssl.hostnameVerifierClassName = "com.mycompany.MyHostnameVerifier"

Of course, there’s always more issues.

The hostname verification needs to happen after an SSL handshake has been completed. If you call session.getPeerCertificates() before the SSL handshake has been established, you’ll get an SSLPeerUnverifiedException. You need to set up an SSL handshake listener (using SslHandler for Netty, SSLBaseFilter.HandshakeListener for Grizzly) and only do hostname verification after the session is valid.
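With Netty 3.x, that looks roughly like the following sketch, where channel, verifier and host are assumed to be in scope:

import org.jboss.netty.channel.{ ChannelFuture, ChannelFutureListener }
import org.jboss.netty.handler.ssl.SslHandler

// Only run hostname verification once the handshake future completes.
val sslHandler = channel.getPipeline.get(classOf[SslHandler])
sslHandler.handshake().addListener(new ChannelFutureListener {
  def operationComplete(future: ChannelFuture): Unit = {
    if (future.isSuccess) {
      val session = sslHandler.getEngine.getSession
      if (!verifier.verify(host, session)) {
        future.getChannel.close() // fail the connection on a mismatch
      }
    }
  }
})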

AsyncHttpClient 1.8.5 doesn’t use a handshake listener. Instead, it uses a completion handler, so hostname verification may potentially fail unexpectedly if you set a custom hostname verifier. It seems to work most of the time in the field due to a race condition — by the time the completion handler is notified, the handshake has already completed. Still working on this.

Oracle’s HostnameChecker does not implement RFC 6125 correctly. Even HttpClient’s StrictHostnameVerifier seems not to be up to spec, and there are many cases where hostname checkers have failed against NULL or bad CN strings.

Nevertheless, it is the way that HTTPS is supposed to work — if you have a problem with hostname verification, you really need to check your X.509 certificate and make sure that your subjectAltName’s dnsName field is set correctly.

Next

Testing Hostname Verification!

Fixing Certificate Revocation


This is the third in a series of posts about setting up Play WS as a TLS client for a “secure by default” setup and configuration through text files, along with the research and thinking behind the setup. (TL;DR — if you’re looking for a better revocation protocol, you may be happier reading Fixing Revocation for Web Browsers on the Internet and PKI: It’s Not Dead, Only Resting.)

Previous posts are:

This post is all about certificate revocation using OCSP and CRL, what it is, how useful it is, and how to configure it in JSSE.

Certificate Revocation (and its Discontents)

The previous post talked about X.509 certificates that had been compromised in some way. Compromised certificates can be a big problem, especially if those certificates have the ability to sign other certificates. If certificates have been broken or forged, then in theory it should be possible for a certificate authority to let a client know as soon as possible which certificates are invalid and should not be used.

There have been two attempts to do certificate revocation. The first, Certificate Revocation Lists (CRLs), consisted of lists of bad certificates; the lists were huge and hard to manage.

As an answer to CRLs, the Online Certificate Status Protocol (OCSP) was invented. With OCSP, the client contacts the remote CA server and verifies the certificate there before it will start talking to the server. According to Cloudflare, this can make TLS up to 33% slower. Part of it may be because OCSP responders are slow, but it’s clear that OCSP is not well loved.

In fact, most browsers don’t even bother with OCSP. Adam Langley explains why OCSP is disabled in Chrome:

While the benefits of online revocation checking are hard to find, the costs are clear: online revocation checks are slow and compromise privacy. The median time for a successful OCSP check is ~300ms and the mean is nearly a second. This delays page loading and discourages sites from using HTTPS. They are also a privacy concern because the CA learns the IP address of users and which sites they’re visiting.

On this basis, we’re currently planning on disabling online revocation checks in a future version of Chrome. (There is a class of higher-security certificate, called an EV certificate, where we haven’t made a decision about what to do yet.)

— “Revocation checking and Chrome’s CRL

Adding insult to injury, OCSP also has security issues:

Alas, there was a problem — and not just “the only value people are adding is republishing the data from the CA”. No, this concept doesn’t work at all, because OCSP assumes a CA never loses control of its keys. I repeat, the system in place to handle a CA losing its keys, assumes the CA never loses the keys.

Dan Kaminsky

and:

OCSP is actually much, much worse than you describe. The status values are, as you point out, broken. Even if you fix that (as some CAs have proposed, after being surprised to find out how OCSP really worked – yes, some of the folks charged with running OCSP don’t actually know how it really works) it doesn’t help, given OCSP’s broken IDs an attacker can trivially work around this. And if you fix those, given the replay-attack-enabled “high-performance” optimisation an attacker can work around that. And if you fix that, given that half the response is unauthenticated, an attacker can go for that. To paraphrase Lucky Green, OCSP is multiple-redundant broken, by design. If you remove the bits that don’t work (the response status, the cert ID, nonces, and the unauthenticated portions of the response) there is literally nothing left. There’s an empty ASN.1 shell with no actual content. There is not one single bit of OCSP that actually works as it’s supposed to (or at least “as a reasonable person would expect it to”, since technically it does work exactly as the spec says it should).

Peter Gutmann, replying to Dan Kaminsky

And to drive the point home, if you have someone sitting on your network with a copy of sslsniff, they can trivially fake out a response:

As an attacker, it is thus possible for us to intercept any OCSP request and send a tryLater response without having to generate a signature of any kind. The composition of the response is literally just the OCSPResponseStatus for the tryLater condition, which is simply the single-byte ASCII character ‘3’.

Most OCSP implementations will accept this response without generating any kind of warning to the user that something might be amiss, thus defeating the protocol.

Defeating OCSP With The Character ‘3’

Given all of this, it’s hard to say OCSP is worthwhile. However, it’s important to note that all of the above comments are talking about public revocation checking against browsers and mobile devices in the wild.

If you’re using web services in an internal network, OCSP actually sounds useful. Privacy is less of an issue, you’re running on an internal network, you can make your OCSP responder fast enough, and using a hard-fail approach for a web service is reasonable. The research on the use of OCSP in web services is thin: I found one article. Presumably, OCSP gets rolled into PKI enterprise management solutions.

I also haven’t heard of any exploits in the wild, perhaps because OCSP is so rarely used. This is not to say that OCSP is secure… but even speed bumps can be effective sometimes.

Certificate Revocation in JSSE

Certificate Revocation in JSSE is disabled by default, because of the performance issues. I decided to leave this disabled out of the box, but did what I could to make it easier to configure.

The implementation is… convoluted. The details are spelled out in Appendix C of the PKI Guide and Enable OCSP checking, but it’s still incomplete.

You need to set the system properties on the command line:

java -Dcom.sun.security.enableCRLDP=true -Dcom.sun.net.ssl.checkRevocation=true

You need to do this because the system properties set up private static final fields internally. If you’re calling this in a running JVM, you need to ensure that nothing has loaded those classes already, or you’ll have to resort to fiddling with the already loaded classes, a solution that isn’t appropriate in production code.

To set up OCSP, you need to set the security property, by adding the following to your initialization code:

java.security.Security.setProperty("ocsp.enable", "true")

It is a small mercy that “ocsp.enable” is checked at runtime from PKIXCertPathValidator, so you can do that any time you feel like.

Currently, the configuration looks like this:

ws.ssl.checkRevocation = true
ws.ssl.revocationLists = [ "http://example.com/crl" ]

When checkRevocation is true, it will set “ocsp.enable” to true, set up the static revocation lists and do the work of passing in settings to the trust manager.
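A rough sketch of that wiring (with a hypothetical sslConfig object), subject to the class-loading caveat above for the system properties:

if (sslConfig.checkRevocation) {
  // Must happen before the JSSE validator classes are loaded.
  System.setProperty("com.sun.security.enableCRLDP", "true")
  System.setProperty("com.sun.net.ssl.checkRevocation", "true")
  // Checked at runtime by PKIXCertPathValidator, so safe to set here.
  java.security.Security.setProperty("ocsp.enable", "true")
}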

Generating an individual CRL is done using a DataInputStream:

  def generateCRL(inputStream: InputStream): CRL = {
    val cf = CertificateFactory.getInstance("X509")
    cf.generateCRL(inputStream)
  }

  def generateCRLFromURL(url: URL): CRL = {
    val connection = url.openConnection()
    connection.setDoInput(true)
    connection.setUseCaches(false)
    val inStream = new DataInputStream(connection.getInputStream)
    try {
      generateCRL(inStream)
    } finally {
      inStream.close()
    }
  }

When you set up the trust manager, you have to set up an instance of PKIXBuilderParameters (see Fixing X.509 Certificates for where this fits in). The CRLs are then added as a CertStore, wrapped in a CollectionCertStoreParameters instance.

  def buildTrustManagerParameters(trustStore: KeyStore,
    revocationEnabled: Boolean,
    revocationListOption: Option[Seq[CRL]],
    signatureConstraints: Set[AlgorithmConstraint],
    keyConstraints: Set[AlgorithmConstraint]): CertPathTrustManagerParameters = {
    import scala.collection.JavaConverters._

    val certSelect: X509CertSelector = new X509CertSelector
    val pkixParameters = new PKIXBuilderParameters(trustStore, certSelect)
    pkixParameters.setRevocationEnabled(revocationEnabled)

    // Set the static revocation list if it exists...
    revocationListOption.foreach { crlList =>
      pkixParameters.addCertStore(CertStore.getInstance("Collection", new CollectionCertStoreParameters(crlList.asJavaCollection)))
    }

    // Add the algorithm checker in here...
    val checkers: Seq[PKIXCertPathChecker] = Seq(
      new AlgorithmChecker(signatureConstraints, keyConstraints)
    )

    // Use the custom cert path checkers we defined...
    pkixParameters.setCertPathCheckers(checkers.asJava)
    new CertPathTrustManagerParameters(pkixParameters)
  }

And we’re done.

Testing

Once you have everything configured, you can turn on debugging to check that OCSP is enabled:

java -Djava.security.debug="certpath ocsp"

And optionally use Wireshark to sniff OCSP responses.

There are a number of OCSP responders that are simply broken when you turn them on, but How’s My SSL works well.

Finally, a note about testing. Because of the system properties problem, using configuration from inside a running JVM is difficult, which can snarl tests:

class HowsMySSLSpec extends PlaySpecification with CommonMethods {
  val timeout: Timeout = 20.seconds
  "WS" should {
    "connect to a remote server" in {
      val rawConfig = play.api.Configuration(ConfigFactory.parseString(
        """
          |ws.ssl.debug=["certpath", "ocsp"]
          |ws.ssl.checkRevocation=true  # doesn't set system properties before classes load!
        """.stripMargin))

      val client = createClient(rawConfig)
      val response = await(client.url("https://www.howsmyssl.com/a/check").get())(timeout)
      response.status must be_==(200)
    }
  }
}

Instead, you need to specify the system properties to SBT or set the system properties before the test runs:

javaOptions in Test ++= Seq("-Dcom.sun.security.enableCRLDP=true", "-Dcom.sun.net.ssl.checkRevocation=true")

You should see as output (under JDK 1.8):

certpath: -Using checker7 ... [sun.security.provider.certpath.RevocationChecker]
certpath: connecting to OCSP service at: http://gtssl2-ocsp.geotrust.com
certpath: OCSP response status: SUCCESSFUL
certpath: OCSP response type: basic
certpath: Responder's name: CN=GeoTrust SSL CA - G2 OCSP Responder, O=GeoTrust Inc., C=US
certpath: OCSP response produced at: Wed Mar 19 13:57:32 PDT 2014
certpath: OCSP number of SingleResponses: 1
certpath: OCSP response cert #1: CN=GeoTrust SSL CA - G2 OCSP Responder, O=GeoTrust Inc., C=US
certpath: Status of certificate (with serial number 159761413677206476752317239691621661939) is: GOOD
certpath: Responder's certificate includes the extension id-pkix-ocsp-nocheck.
certpath: OCSP response is signed by an Authorized Responder
certpath: Verified signature of OCSP Response
certpath: Response's validity interval is from Wed Mar 19 13:57:32 PDT 2014 until Wed Mar 26 13:57:32 PDT 2014
certpath: -checker7 validation succeeded

And that takes care of testing.

Next

Hostname Verification!

Fixing X.509 Certificates


This is a continuation in a series of posts about how to correctly configure a TLS client using JSSE, using The Most Dangerous Code in the World as a guide. This post is about X.509 certificates in TLS, and has some videos to show both what the vulnerabilities are, and how to fix them. I highly recommend the videos, as they do an excellent job of describing problems that TLS faces in general.

Also, JDK 1.8 just came out and has much better encryption. Now would be a good time to upgrade.

Table of Contents

Part One: we talk about how to correctly use and verify X.509 certificates.

  • What X.509 Certificates Do
  • Understanding Chain of Trust
  • Understanding Certificate Signature Forgery
  • Understanding Signature Public Key Cracking

Part Two: We discuss how to check X.509 certificates.

  • Validating an X.509 Certificate in JSSE
  • Validating Key Size and Signature Algorithm

What X.509 Certificates Do

The previous post talked about using secure ciphers and algorithms. This alone is enough to set up a secure connection, but there’s no guarantee that you are talking to the server that you think you are talking to.

Without some means to verify the identity of a remote server, an attacker could still present itself as the remote server and then forward the secure connection onto the remote server. This is the problem that Netscape had.

As it turned out, another organization had come up with a solution. The ITU-T had some directory services that needed authentication, and set up a system of public key certificates in a format called X.509, in a binary encoding known as ASN.1 DER. That entire system was copied wholesale for use in SSL, and X.509 certificates became the way to verify the identity of a server.

The best way to think about public key certificates is as a passport system. Certificates are used to establish information about the bearer of that information in a way that is difficult to forge. This is why certificate verification is so important: accepting any certificate means that an attacker’s certificate will be blindly accepted.

X.509 certificates contain a public key (typically RSA based), and a digest algorithm (typically in the SHA-2 family, i.e. SHA512) which provides a cryptographic hash. Together these are known as the signature algorithm (i.e. “RSAWithSHA512”). One certificate can sign another by taking all the DER encoded bits of a new certificate (basically everything except “SignatureAlgorithm”) and passing it through the digest algorithm to create a cryptographic hash. That hash is then signed by the private key of the organization owning the issuing certificate, and the result is stuck onto the end of the new certificate in a new “SignatureValue” field. Because the issuer’s public key is available, and the hash could have only been generated by the certificate that was given as input, we can treat it as “signed” by the issuer.
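JSSE exposes the verification half of this directly. As a minimal sketch, checking that a certificate was signed by an issuer’s key is a single call:

import java.security.cert.X509Certificate

// Throws a SignatureException (or InvalidKeyException) if the issuer's
// public key does not verify the signature on cert.
def checkSignedBy(cert: X509Certificate, issuer: X509Certificate): Unit =
  cert.verify(issuer.getPublicKey)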

So far, so good. Unfortunately, X.509 certificates are complex. Very few people understand (or agree on) the various fields that can be involved in X.509 certificates, and even fewer understand ASN.1 DER, the binary format that X.509 is encoded in (which has led to some interesting attacks on the format). So much of the original X.509 specification was vague that PKIX was created to nail down some of the extensions. Currently, the important ones seem to be basicConstraints, keyUsage, and subjectAlternativeName.

There are other fields in X.509, but in practice, X.509 compatibility is so broken that few of them matter. For example, nameConstraints is considered near useless and policyConstraints has been misunderstood and exploited.

So if you want to do the minimum amount of work, all you need is some approximation to a DN, maybe a basicConstraints, and if you’re feeling really enthusiastic, keyUsage (although this is often ignored by implementations, see the part 2a slides for examples. Even basicConstraints, the single most fundamental extension in a certificate, and in most cases just a single boolean value, was widely ignored until not too long ago).

Peter Gutmann

Peter Gutmann is an excellent resource on X.509 certificates (although he does have a tendency to rant). Read the X.509 Style Guide, check out the X.509 bits of Godzilla Crypto Tutorial, and buy Engineering Security when it comes out of draft — it has over 500 pages of exhaustively detailed security fails.

If you’re not up for that, the best overall reference is Zytrax’s SSL Survival Guide, and the presentation of “Black Ops of PKI” by Dan Kaminsky is a good introduction.

Understanding Chain of Trust

In TLS, the server not only sends its own certificate (known as an “end entity certificate” or EE), but also a chain of certificates that lead up to (but not including) a root CA certificate issued by a certificate authority (CA for short). Each of these certificates is signed by the one above them so that they are known to be authentic. Certificate validation in TLS goes through a specific algorithm to validate each individual certificate, then match signatures with each one in the chain to establish a chain of trust.
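In JSSE, this chain validation is the job of CertPathValidator. A bare-bones sketch (assuming a trustStore KeyStore and the server’s chain are in hand) looks like this:

import java.security.KeyStore
import java.security.cert.{ CertPathValidator, CertificateFactory, PKIXParameters, X509Certificate }
import scala.collection.JavaConverters._

// Validates an end entity certificate plus intermediates against the
// trust anchors in trustStore; throws CertPathValidatorException on failure.
def validateChain(chain: Seq[X509Certificate], trustStore: KeyStore): Unit = {
  val cf = CertificateFactory.getInstance("X.509")
  val certPath = cf.generateCertPath(chain.asJava)
  val params = new PKIXParameters(trustStore)
  params.setRevocationEnabled(false) // revocation is its own story (see the previous post)
  CertPathValidator.getInstance("PKIX").validate(certPath, params)
}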

Bad things can happen if the chain of trust only checks the signature and does not also check the keyUsage and the basicConstraints fields in X.509. Moxie Marlinspike has an excellent presentation at DEFCON 17 on defeating TLS, starting off with subverting the chain of trust.

Understanding Certificate Signature Forgery

Certificates should be signed with an algorithm from the SHA-2 family (i.e. at least SHA-256), because this prevents signature forgery.

Certificates are needed because they can say “this certificate is good because it has been signed by someone I trust.” If you can forge a signature, then you can represent yourself as a certificate authority. In MD5 Considered harmful today, a team showed that they were able to forge an MD5 certificate in this manner.

Since the original paper, an MD5 based attack like this has been seen in the wild. A virus called Flame forged a signature (jumping through a series of extremely difficult technical hurdles), and used it to hijack the Windows Update mechanism used by Microsoft to patch machines, completely compromising almost 200 servers.

MD2 was broken in this paper, and is no longer considered a secure hash algorithm. MD4 is considered historic. As shown in the paper and video, MD5 is out, and the current advice is to avoid using the MD5 algorithm in any capacity. Mozilla is even more explicit about not using MD5 as a hash algorithm for intermediate and end entity certificates.

SHA1 has not been completely broken yet, but it is starting to look very weak. The current advice is to stop using SHA-1 as soon as practical and it has been deprecated by Microsoft. Using SHA-1 is still allowed by NIST on existing certificates though.

Federal agencies may use SHA-1 for the following applications: verifying old digital signatures and time stamps, generating and verifying hash-based message authentication codes (HMACs), key derivation functions (KDFs), and random bit/number generation. Further guidance on the use of SHA-1 is provided in SP 800-131A.

NIST’s Policy on hash functions, September 28, 2012

Even the JSSE documentation itself says that SHA-2 is required, although it leaves this as an exercise for the reader:

“The strict profile suggest all certificates should be signed with SHA-2 or stronger hash functions. In JSSE, the processes to choose a certificate for the remote peer and validate the certificate received from remote peer are controlled by KeyManager/X509KeyManager and TrustManager/X509TrustManager. By default, the SunJSSE provider does not set any limit on the certificate’s hash functions. Considering the above strict profile, the coder should customize the KeyManager and TrustManager, and limit that only those certificate signed with SHA-2 or stronger hash functions are available or trusted.”

TLS and NIST’S Policy on Hash Functions

So, the SHA-2 family it is. And indeed, most public certificates (over 95%) are signed this way.

Understanding Signature Public Key Cracking

An X.509 certificate has an embedded public key, almost universally RSA. RSA has a modulus component (also known as key size or key length), which is intended to be difficult to factor out. Some of these public keys were created at a time when computers were smaller and weaker than they are now. Simply put, their key size is now far too small. Those public keys may still be valid, but the security they provide isn’t adequate against today’s technology.
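The modulus is easy to inspect. Here’s a small sketch that reads the RSA key size out of a certificate:

import java.security.cert.X509Certificate
import java.security.interfaces.RSAPublicKey

// Returns the RSA modulus length in bits, i.e. the "key size".
def rsaKeySize(cert: X509Certificate): Option[Int] = cert.getPublicKey match {
  case rsa: RSAPublicKey => Some(rsa.getModulus.bitLength)
  case _                 => None
}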

The Mozilla Wiki brings the point home in three paragraphs:

The other concern that needs to be addressed is that of RSA1024 being too small a modulus to be robust against faster computers. Unlike a signature algorithm, where only intermediate and end-entity certificates are impacted, fast math means we have to disable or remove all instances of 1024-bit moduli, including the root certificates.

The NIST recommendation is to discontinue 1024-bit RSA certificates by December 31, 2010. Therefore, CAs have been advised that they should not sign any more certificates under their 1024-bit roots by the end of this year.

The date for disabling/removing 1024-bit root certificates will be dependent on the state of the art in public key cryptography, but under no circumstances should any party expect continued support for this modulus size past December 31, 2013. As mentioned above, this date could get moved up substantially if new attacks are discovered. We recommend all parties involved in secure transactions on the web move away from 1024-bit moduli as soon as possible.

Dates for Phasing out MD5-based signatures and 1024-bit moduli

This needs the all caps treatment:

KEY SIZE MUST BE CHECKED ON EVERY SIGNATURE IN THE CERTIFICATE, INCLUDING THE ROOT CERTIFICATE.

and:

UNDER NO CIRCUMSTANCES SHOULD ANY PARTY EXPECT SUPPORT FOR 1024 BIT RSA KEYS IN 2014.

1024 bit certificates are dead, dead, dead. They cannot be considered secure. NIST recommended at least 2048 bits in 2013, there’s a website entirely devoted to appropriate key lengths, and the topic is covered extensively in key management solutions. The certificate authorities stopped issuing 1024 bit certificates a while ago, and over 95% of trusted leaf certificates and 95% of trusted signing certificates use NIST recommended key sizes.

The same caveats apply to DSA and ECC key sizes: keylength.com has the details.

Miscellaneous

OWASP lists some guidelines on creating certificates, notably “Do not use wildcard certificates” and “Do not use RFC 1918 addresses in certificates”. While these are undoubtedly questionable practices, I don’t think it’s appropriate to have rules forbidding them.

Part Two: Implementation

The relevant documentation is the Certificate Path Programmer Guide, also known as the Java PKI API Programmer’s Guide.

Despite listing problems in verification above, I’m going to assume that JSSE checks certificates and certificate chains correctly, and doesn’t have horrible bugs in the implementation. I am concerned that JSSE may have vulnerabilities, but part of the problem is knowing exactly what the correct behavior should be, and TLS does not come with a reference implementation or a reference suite. As far as I know, JSSE has not been subject to NIST PKI testing or the X.509 test suite from CPNI, and CPNI doesn’t release their test suite to the public. I am also unaware of any publicly available X.509 certificate fuzzing tools.

There is a certificate testing tool called tlspretense, which (once it is correctly configured) will run a suite of incorrect certificates and produce a nice report.

What I can do is make sure that weak algorithms and key sizes are disabled, even in 1.6.

Validating a Certificate in JSSE

Validating a certificate by itself is easy. Certificate validation is done by java.security.cert, and basic certificate validation (including expiration checking) is done using X509Certificate:

certificate.checkValidity()

An interesting side note — although a trust store contains certificates, the fact that they are X.509 certificates is a detail. Anchors are just subject distinguished name and public key bindings. This means they don’t have to be signed, and don’t really have an expiration date. This tripped me (and a few others) up, but RFC 3280 and RFC 5280 are quite clear that expiration doesn’t apply to trust anchors or trust stores.

Validating Key Sizes and Signature Algorithms

We need to make sure that JSSE is not accepting weak certificates. In particular, we want to check that the X.509 certificates have a decent signature algorithm and a decent key size.

Now, there is a jdk.certpath.disabledAlgorithms security property in JDK 1.7 that looks very close to doing what we want: it constrains the algorithms accepted when validating X.509 certificates, and setting it was covered in the previous post. You define it in a security.properties file like so:

jdk.certpath.disabledAlgorithms=MD2, MD4, MD5, SHA1, SHA224, SHA256, SHA384, SHA512, RSA, DSA, EC

This property is then read by the class X509DisabledAlgConstraints in SSLAlgorithmConstraints.java:

private final static AlgorithmConstraints x509DisabledAlgConstraints =
    new X509DisabledAlgConstraints();

Note the “private final static” here: you can’t change the security property at runtime once this instance has been loaded into memory. As a workaround, you can set the constraints dynamically via setAlgorithmConstraints on SSLParameters.
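
If the JDK 1.7 behavior is all you need, one workaround (a sketch; the property value here is illustrative) is to set the security property before any JSSE class triggers that static initializer, e.g. first thing in main:

import java.security.Security

object Main {
  def main(args: Array[String]): Unit = {
    // Must run before anything loads SSLAlgorithmConstraints, or it is ignored.
    Security.setProperty("jdk.certpath.disabledAlgorithms", "MD2, MD4, MD5, RSA keySize < 1024")
    // ... only now create SSLContexts, make HTTPS calls, etc.
  }
}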

But there’s another problem. jdk.certpath.disabledAlgorithms is only in 1.7 and is global across the JVM. We need to support JDK 1.6 and make it local to the SSLContext. We can do better.

Here’s what an example configuration looks like:

ws.ssl {
  disabledSignatureAlgorithms = "MD2, MD4, MD5"
  disabledKeyAlgorithms = "RSA keySize <= 1024, DSA keySize <= 1024, EC keySize <= 160"
}

I’ll skip over the details of how parsing and algorithm decomposition are done, except to say that Scala contains a parser combinator library which makes writing small parsers very easy. On configuration, each of the statements parses out into an AlgorithmConstraint that checks whether the certificate’s key size or algorithm matches.
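
For a flavor of what that looks like, here is a minimal sketch using scala.util.parsing.combinator (the case class is a hypothetical stand-in for the real AlgorithmConstraint, which supports more operators than just "<="):

import scala.util.parsing.combinator.RegexParsers

case class AlgorithmConstraint(algorithm: String, keySizeLimit: Option[Int]) {
  // Matches when the algorithm name matches and, if a limit is present,
  // the key size falls inside the banned range.
  def matches(alg: String, keySize: Int): Boolean =
    algorithm.equalsIgnoreCase(alg) && keySizeLimit.forall(keySize <= _)
}

object ConstraintParser extends RegexParsers {
  def algorithm: Parser[String] = """\w+""".r
  def keySizeClause: Parser[Int] = "keySize" ~> "<=" ~> """\d+""".r ^^ (_.toInt)
  def constraint: Parser[AlgorithmConstraint] =
    algorithm ~ opt(keySizeClause) ^^ { case alg ~ limit => AlgorithmConstraint(alg, limit) }
  def line: Parser[Seq[AlgorithmConstraint]] = repsep(constraint, ",")

  def parseLine(input: String): Seq[AlgorithmConstraint] =
    parseAll(line, input).getOrElse(sys.error(s"Cannot parse: $input"))
}

// ConstraintParser.parseLine("RSA keySize <= 1024, DSA keySize <= 1024")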

There’s an AlgorithmChecker that checks for signature and key algorithms:

class AlgorithmChecker(val signatureConstraints: Set[AlgorithmConstraint], val keyConstraints: Set[AlgorithmConstraint]) extends PKIXCertPathChecker {
  ...
  def check(cert: Certificate, unresolvedCritExts: java.util.Collection[String]) {
    cert match {
      case x509Cert: X509Certificate =>

        val commonName = getCommonName(x509Cert)
        val subAltNames = x509Cert.getSubjectAlternativeNames
        logger.debug(s"check: checking certificate commonName = $commonName, subjAltName = $subAltNames")

        checkSignatureAlgorithms(x509Cert)
        checkKeyAlgorithms(x509Cert)
      case _ =>
        throw new UnsupportedOperationException("check only works with x509 certificates!")
    }
  }
  ...
}

and finally:

class AlgorithmChecker(val signatureConstraints: Set[AlgorithmConstraint], val keyConstraints: Set[AlgorithmConstraint]) extends PKIXCertPathChecker {
  ...
  def checkSignatureAlgorithms(x509Cert: X509Certificate): Unit = {
    val sigAlgName = x509Cert.getSigAlgName
    val sigAlgorithms = Algorithms.decomposes(sigAlgName)

    logger.debug(s"checkSignatureAlgorithms: sigAlgName = $sigAlgName, sigAlgName = $sigAlgName, sigAlgorithms = $sigAlgorithms")

    for (a <- sigAlgorithms) {
      findSignatureConstraint(a).map {
        constraint =>
          if (constraint.matches(a)) {
            logger.debug(s"checkSignatureAlgorithms: x509Cert = $x509Cert failed on constraint $constraint")
            val msg = s"Certificate failed: $a matched constraint $constraint"
            throw new CertPathValidatorException(msg)
          }
      }
    }
  }

  def checkKeyAlgorithms(x509Cert: X509Certificate): Unit = {
    val key = x509Cert.getPublicKey
    val keyAlgorithmName = key.getAlgorithm
    val keySize = Algorithms.keySize(key).getOrElse(throw new IllegalStateException(s"No keySize found for $key"))

    val keyAlgorithms = Algorithms.decomposes(keyAlgorithmName)
    logger.debug(s"checkKeyAlgorithms: keyAlgorithmName = $keyAlgorithmName, keySize = $keySize, keyAlgorithms = $keyAlgorithms")

    for (a <- keyAlgorithms) {
      findKeyConstraint(a).map {
        constraint =>
          if (constraint.matches(a, keySize)) {
            val certName = x509Cert.getSubjectX500Principal.getName
            logger.debug(s"""checkKeyAlgorithms: cert = "certName" failed on constraint $constraint, algorithm = $a, keySize = $keySize""")

            val msg = s"""Certificate failed: cert = "$certName" failed on constraint $constraint, algorithm = $a, keySize = $keySize"""
            throw new CertPathValidatorException(msg)
          }
      }
    }
  }
}

Now that we have an algorithm checker, we need to put it into the validation chain.

There are two ways of validating a chain in JSSE. The first is CertPathValidator, which validates a certificate chain according to RFC 3280. The second is CertPathBuilder, which “builds” a certificate chain according to RFC 4158. I’ve been told by informed experts that CertPathBuilder is actually closer to the behavior of modern browsers, but in this case we’re just adding our checker onto the chain of PKIXCertPathCheckers. There are several layers of configuration to go through, but eventually we pass this through to the TrustManager.
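
A minimal sketch of that first route (a hypothetical helper, assuming a trustStore that already contains the anchors, and reusing the AlgorithmChecker from above; any PKIXCertPathChecker slots in the same way):

import java.security.KeyStore
import java.security.cert.{ CertPathValidator, CertificateFactory, PKIXCertPathChecker, PKIXParameters, X509Certificate }

def validateChain(chain: Seq[X509Certificate], trustStore: KeyStore, checker: PKIXCertPathChecker): Unit = {
  import scala.collection.JavaConverters._
  val certPath = CertificateFactory.getInstance("X.509").generateCertPath(chain.asJava)

  val params = new PKIXParameters(trustStore)
  params.setRevocationEnabled(false) // revocation is a separate topic
  params.addCertPathChecker(checker) // our custom checks ride along with the standard ones

  // Throws CertPathValidatorException if any check fails.
  CertPathValidator.getInstance("PKIX").validate(certPath, params)
}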

However, this doesn’t check the root CA certificate, because that doesn’t get passed in through the PKIXCertPathChecker. So how does SSLAlgorithmConstraints get at the root certificate?

Well, it’s handled through the CertPathValidator instantiation. X509TrustManagerImpl calls Validator.getInstance(validatorType, variant, trustedCerts) — this returns new PKIXValidator(variant, trustedCerts), and from there, PKIXValidator puts the trusted certs into PKIXBuilderParameters, and then calls doValidate.

public final class PKIXValidator extends Validator {

    private X509Certificate[] doValidate(X509Certificate[] chain,
            PKIXBuilderParameters params) throws CertificateException {
        try {
            setDate(params);

            // do the validation
            CertPathValidator validator = CertPathValidator.getInstance("PKIX");
            CertPath path = factory.generateCertPath(Arrays.asList(chain));
            certPathLength = chain.length;
            PKIXCertPathValidatorResult result =
                (PKIXCertPathValidatorResult)validator.validate(path, params);

            return toArray(path, result.getTrustAnchor());
        } catch (GeneralSecurityException e) {
            throw new ValidatorException
                ("PKIX path validation failed: " + e.toString(), e);
        }
    }

}

So now we’ve moved on to the PKIXCertPathValidator, which pulls out a trust anchor for the AlgorithmChecker.

public class PKIXCertPathValidator extends CertPathValidatorSpi {
  private PolicyNode doValidate(
              TrustAnchor anchor, CertPath cpOriginal,
              ArrayList<X509Certificate> certList, PKIXParameters pkixParam,
              PolicyNodeImpl rootNode) throws CertPathValidatorException
  {
     ...
     AlgorithmChecker algorithmChecker = new AlgorithmChecker(anchor);
     ...
  }
}

This means that the AlgorithmChecker can check for the weak key size in the trust anchor, but this only works if you control the validator chain. The PKIXBuilderParameters object is not passed to PKIXCertPathChecker, so we can’t simply extend PKIXCertPathChecker and pull out the trust anchor we’d like — we have to do this from the TrustManager directly. Easy enough:

class CompositeX509TrustManager(trustManagers: Seq[X509TrustManager], algorithmChecker: AlgorithmChecker) extends X509TrustManager {

  def checkServerTrusted(chain: Array[X509Certificate], authType: String): Unit = {
    logger.debug(s"checkServerTrusted: chain = ${debugChain(chain)}, authType = $authType")

    // Trust anchor is at the end of the chain... there is no way to pass a trust anchor
    // through to a checker in PKIXCertPathValidator.doValidate(), so the trust manager is the
    // last place we have access to it.
    val anchor: TrustAnchor = new TrustAnchor(chain(chain.length - 1), null)
    logger.debug(s"checkServerTrusted: checking key size only on root anchor $anchor")
    algorithmChecker.checkKeyAlgorithms(anchor.getTrustedCert)

    var trusted = false
    val exceptionList = withTrustManagers {
      trustManager =>
        // always run through the trust manager before making any decisions
        trustManager.checkServerTrusted(chain, authType)
        logger.debug(s"checkServerTrusted: trustManager $trustManager using authType $authType found a match for ${debugChain(chain).toSeq}")
        trusted = true
    }

    if (!trusted) {
      val msg = s"No trust manager was able to validate this certificate chain: # of exceptions = ${exceptionList.size}"
      throw new CompositeCertificateException(msg, exceptionList.toArray)
    }
  }
}

To do this through configuration is a bit more work. We have to create a PKIXBuilderParameters object and then attach the AlgorithmChecker to it, then stick that inside ANOTHER parameters object called CertPathTrustManagerParameters and then pass that into the factory.init method. We end up with a single CompositeX509TrustManager class, and a bunch of trust managers all configured with the same AlgorithmChecker:

class ConfigSSLContextBuilder {

  def buildCompositeTrustManager(trustManagerInfo: TrustManagerConfig,
    revocationEnabled: Boolean,
    revocationLists: Option[Seq[CRL]], algorithmChecker: AlgorithmChecker) = {

    val trustManagers = trustManagerInfo.trustStoreConfigs.map {
      tsc =>
        buildTrustManager(tsc, revocationEnabled, revocationLists, algorithmChecker)
    }
    new CompositeX509TrustManager(trustManagers, algorithmChecker)
  }

  def buildTrustManager(tsc: TrustStoreConfig,
    revocationEnabled: Boolean,
    revocationLists: Option[Seq[CRL]], algorithmChecker: AlgorithmChecker): X509TrustManager = {

    val factory = trustManagerFactory
    val trustStore = trustStoreBuilder(tsc).build()
    validateStore(trustStore, algorithmChecker)

    val trustManagerParameters = buildTrustManagerParameters(
      trustStore,
      revocationEnabled,
      revocationLists,
      algorithmChecker)

    factory.init(trustManagerParameters)
    val trustManagers = factory.getTrustManagers
    if (trustManagers == null) {
      val msg = s"Cannot create trust manager with configuration $tsc"
      throw new IllegalStateException(msg)
    }

    // The JSSE implementation only sends back ONE trust manager, X509TrustManager
    trustManagers.head.asInstanceOf[X509TrustManager]
  }

  def buildTrustManagerParameters(trustStore: KeyStore,
    revocationEnabled: Boolean,
    revocationLists: Option[Seq[CRL]],
    algorithmChecker: AlgorithmChecker): CertPathTrustManagerParameters = {
    import scala.collection.JavaConverters._

    val certSelect: X509CertSelector = new X509CertSelector
    val pkixParameters = new PKIXBuilderParameters(trustStore, certSelect)
    // ...

    // Add the algorithm checker in here to check the certification path sequence (not including trust anchor)...
    val checkers: Seq[PKIXCertPathChecker] = Seq(algorithmChecker)

    // Use the custom cert path checkers we defined...
    pkixParameters.setCertPathCheckers(checkers.asJava)
    new CertPathTrustManagerParameters(pkixParameters)
  }
}

And now we can check for weak key sizes and bad certificates the same way JSSE 1.7 does.

This still isn’t the best user experience, because it will result in a broken TLS connection at run time. We’d like to give the user as much information as we can, as early as we can, and not waste time on certificates that we know are going to fail. We can simply filter out certificates that don’t pass muster.

To do this, we iterate through every trust anchor we have in the trust store, and verify that it matches our constraints.

class ConfigSSLContextBuilder {
  /**
   * Tests each trusted certificate in the store, and warns if the certificate is not valid.  Does not throw
   * exceptions.
   */
  def validateStore(store: KeyStore, algorithmChecker: AlgorithmChecker) {
    import scala.collection.JavaConverters._
    logger.debug(s"validateKeyStore: type = ${store.getType}, size = ${store.size}")

    store.aliases().asScala.foreach {
      alias =>
        Option(store.getCertificate(alias)).map {
          c =>
            try {
              algorithmChecker.checkKeyAlgorithms(c)
            } catch {
              case e: CertPathValidatorException =>
                logger.warn(s"validateKeyStore: Skipping certificate with weak key size in $alias" + e.getMessage)
                store.deleteEntry(alias)
              case e: Exception =>
                logger.warn(s"validateKeyStore: Skipping unknown exception $alias" + e.getMessage)
                store.deleteEntry(alias)
            }
        }
    }
  }
}

But we’re still not done. The default trust store is used if SSLContext is initialized with null, and we don’t have access to it unless we do horrible things with reflection.

However, given that the default SSLContextImpl calls out to the TrustManagerFactory, and that any configuration via system properties also applies to that factory, we can use the factory method to recreate the trust manager and validate its trusted certificates that way.

So given:

val useDefault = sslConfig.default.getOrElse(false)
val sslContext = if (useDefault) {
  logger.info("buildSSLContext: ws.ssl.default is true, using default SSLContext")
  validateDefaultTrustManager(sslConfig)
  SSLContext.getDefault
} else {
  // break out the static methods as much as we can...
  val keyManagerFactory = buildKeyManagerFactory(sslConfig)
  val trustManagerFactory = buildTrustManagerFactory(sslConfig)
  new ConfigSSLContextBuilder(sslConfig, keyManagerFactory, trustManagerFactory).build()
}

We can do this:

  def validateDefaultTrustManager(sslConfig: SSLConfig) {
    // This is really a last ditch attempt to satisfy https://wiki.mozilla.org/CA:MD5and1024 on root certificates.
    // http://grepcode.com/file/repository.grepcode.com/java/root/jdk/openjdk/7-b147/sun/security/ssl/SSLContextImpl.java#79

    val tmf = TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm)
    tmf.init(null.asInstanceOf[KeyStore])
    val trustManager: X509TrustManager = tmf.getTrustManagers()(0).asInstanceOf[X509TrustManager]

    val disabledKeyAlgorithms = sslConfig.disabledKeyAlgorithms.getOrElse(Algorithms.disabledKeyAlgorithms)
    val constraints = AlgorithmConstraintsParser.parseAll(AlgorithmConstraintsParser.line, disabledKeyAlgorithms).get.toSet
    val algorithmChecker = new AlgorithmChecker(keyConstraints = constraints, signatureConstraints = Set())
    for (cert <- trustManager.getAcceptedIssuers) {
      algorithmChecker.checkKeyAlgorithms(cert)
    }
  }

And now… we’re done. We can check for bad X.509 algorithms out of the box, and have it be local to the SSLContext.

Testing

The best way to create X.509 certificates with Java is using keytool. Unfortunately, keytool doesn’t support subjectAltName in 1.6, but in 1.7 and 1.8 you can specify the subjectAltName (which is required for hostname verification) using the -ext parameter.

For example, to create your own self-signed certificate (both private and public keys) for use in testing, you would specify:

keytool -genkeypair \
-keystore keystore.jks \
-dname "CN=example.com, OU=Example Org, O=Example Company, L=San Francisco, ST=California, C=US" \
-keypass changeit \
-storepass changeit \
-keyalg RSA \
-keysize 2048 \
-alias example.com \
-ext SAN=DNS:example.com \
-validity 365

And then add example.com to /etc/hosts.
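
For a test server running locally, the /etc/hosts entry would look something like this:

127.0.0.1   example.com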

You can verify your certificate with KeyStore Explorer, a GUI tool for certificates, or java-keyutil, or you can list your certificate directly:

keytool -list -v -alias example.com -storepass changeit -keystore keystore.jks

You should see:

Alias name: example.com
Creation date: Mar 26, 2014
Entry type: PrivateKeyEntry
Certificate chain length: 1
Certificate[1]:
Owner: CN=example.com, OU=Example Org, O=Example Company, L=San Francisco, ST=California, C=US
Issuer: CN=example.com, OU=Example Org, O=Example Company, L=San Francisco, ST=California, C=US
Serial number: 4180f5e0
Valid from: Wed Mar 26 10:22:59 PDT 2014 until: Thu Mar 26 10:22:59 PDT 2015
Certificate fingerprints:
   MD5:  F3:32:40:C9:00:59:D3:32:E1:75:85:7A:A9:68:6D:F5
   SHA1: 37:9D:90:44:AB:41:AD:8D:F5:E4:6C:03:5F:22:61:53:EF:23:67:1E
   SHA256: 88:FF:83:43:E1:2D:F1:19:7B:3E:1D:4D:88:40:C3:8C:8A:96:2D:75:16:4F:C8:E9:0B:99:F5:0E:53:4A:C1:17
   Signature algorithm name: SHA256withRSA
   Version: 3

Extensions:

#1: ObjectId: 2.5.29.17 Criticality=false
SubjectAlternativeName [
  DNSName: example.com
]

#2: ObjectId: 2.5.29.14 Criticality=false
SubjectKeyIdentifier [
KeyIdentifier [
0000: 62 30 8E 8C F2 7C 7A BC   FD EB AC 75 F6 BD FD F1  b0....z....u....
0010: 3E 73 D5 A9                                        >s..
]
]

If you see a signature algorithm of SHA256withRSA and DNSName: example.com, then it worked, and calls to “https://example.com” will work fine. You can then pass in your local keystore using the options defined in the customization section:

java -Djavax.net.ssl.trustStore=keystore.jks -Djavax.net.ssl.keyStore=keystore.jks -Djavax.net.ssl.keyStorePassword=changeit -Djavax.net.ssl.trustStorePassword=changeit

Or you can wire the certificates into an SSLContext directly using a TrustManagerFactory and a KeyManagerFactory, and then set up a server and a client from the SSLContext as shown here:

private SSLContext sslc;
private SSLEngine ssle1;    // client
private SSLEngine ssle2;    // server

private void createSSLEngines() throws Exception {
    ssle1 = sslc.createSSLEngine("client", 1);
    ssle1.setUseClientMode(true);

    ssle2 = sslc.createSSLEngine("server", 2);
    ssle2.setUseClientMode(false);
}
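
The snippet above elides the wiring; here is a minimal sketch of it (a hypothetical helper, assuming keystore.jks holds both the key entry and the trusted certificate):

import java.io.FileInputStream
import java.security.KeyStore
import javax.net.ssl.{ KeyManagerFactory, SSLContext, TrustManagerFactory }

def buildSSLContext(path: String, password: Array[Char]): SSLContext = {
  val keyStore = KeyStore.getInstance(KeyStore.getDefaultType)
  keyStore.load(new FileInputStream(path), password)

  val kmf = KeyManagerFactory.getInstance(KeyManagerFactory.getDefaultAlgorithm)
  kmf.init(keyStore, password) // provides our key on the server side

  val tmf = TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm)
  tmf.init(keyStore) // trusts the same self-signed certificate on the client side

  val sslc = SSLContext.getInstance("TLS")
  sslc.init(kmf.getKeyManagers, tmf.getTrustManagers, null)
  sslc
}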

X.509 certificates are one of the moving pieces of TLS that have many, many ways of going wrong. Be prepared to find out-of-order certificates, missing intermediate certificates, and other problematic practices.

Certificate path debugging can be turned on using the -Djava.security.debug=certpath and -Djavax.net.debug="ssl trustmanager" settings. How to analyze Java SSL errors is a good example of tracking down bugs.
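
For example, to run a hypothetical app.jar with both settings enabled (the output format varies by JDK version):

java -Djava.security.debug=certpath -Djavax.net.debug="ssl trustmanager" -jar app.jar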

Next

Certificate Revocation!