Keycloak with TLS in Docker compose behind Envoy proxy

20th of February, 2022:
I have published a version of this article adapted for Keycloak 17: Keycloak 17.0.0 with TLS in Docker compose behind Envoy proxy.

The 24 Hours of Nürburgring race was just red-flagged for the remainder of the night due to fog. That’s a perfect opportunity to add TLS to my Keycloak Docker Compose setup, described previously here[1].

There are multiple ways of setting up TLS for Keycloak, one of them being the native Java JKS key store / trust store gymnastics.

Well, that’s certainly a way to go. If you prefer that path, feel free to do so; the details are here[2]. But that’s a lot of work. I like my life simple, so I chose to use a proxy to terminate TLS instead.
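
For a taste of what that path involves, here is a rough sketch of turning a PEM certificate and key into a JKS key store; the file names and passwords are placeholders, and wiring the resulting key store into Keycloak is covered by the documentation linked above, not here.

# convert a PEM certificate and key into a PKCS12 bundle (names and passwords are placeholders)
openssl pkcs12 -export \
    -in idp.gruchalski.com.crt \
    -inkey idp.gruchalski.com.key \
    -out keycloak.p12 -name keycloak -passout pass:changeit

# import the PKCS12 bundle into a JKS key store
keytool -importkeystore \
    -srckeystore keycloak.p12 -srcstoretype PKCS12 -srcstorepass changeit \
    -destkeystore keycloak.jks -deststoretype JKS -deststorepass changeit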

Let’s have a look at the original compose.yml file; it’s short:

version: '3.9'

services:
  postgres:
    image: postgres:13.2
    restart: unless-stopped
    environment:
      POSTGRES_DB: ${POSTGRESQL_DB}
      POSTGRES_USER: ${POSTGRESQL_USER}
      POSTGRES_PASSWORD: ${POSTGRESQL_PASS}
    networks:
      - local-keycloak

  keycloak:
    depends_on:
      - postgres
    container_name: local_keycloak
    environment:
      DB_VENDOR: postgres
      DB_ADDR: postgres
      DB_DATABASE: ${POSTGRESQL_DB}
      DB_USER: ${POSTGRESQL_USER}
      DB_PASSWORD: ${POSTGRESQL_PASS}
    image: jboss/keycloak:${KEYCLOAK_VERSION}
    ports:
      - "28080:8080"
    restart: unless-stopped
    networks:
      - local-keycloak

networks:
  local-keycloak:

To enable the proxy with TLS support, let’s modify the yaml file to this:

version: '3.9'

services:

  envoy:
    image: envoyproxy/envoy:v1.18.2
    restart: unless-stopped
    command: /usr/local/bin/envoy -c /etc/envoy/envoy-keycloak.yaml -l debug
    ports:
      - 443:443
      - 8001:8001
    volumes:
      - type: bind
        source: ./etc/envoy
        target: /etc/envoy
    networks:
      - local-keycloak

  postgres:
    image: postgres:13.2
    restart: unless-stopped
    environment:
      POSTGRES_DB: ${POSTGRESQL_DB}
      POSTGRES_USER: ${POSTGRESQL_USER}
      POSTGRES_PASSWORD: ${POSTGRESQL_PASS}
    networks:
      - local-keycloak

  keycloak:
    depends_on:
      - envoy
      - postgres
    container_name: local_keycloak
    environment:
      DB_VENDOR: postgres
      DB_ADDR: postgres
      DB_DATABASE: ${POSTGRESQL_DB}
      DB_USER: ${POSTGRESQL_USER}
      DB_PASSWORD: ${POSTGRESQL_PASS}
      PROXY_ADDRESS_FORWARDING: "true"
    image: jboss/keycloak:${KEYCLOAK_VERSION}
    restart: unless-stopped
    networks:
      - local-keycloak

networks:
  local-keycloak:

There are four differences in the new file:

  • there is a new envoy service
  • the keycloak service additionally depends on the envoy service
  • the keycloak service no longer exposes the 28080 port on the host
  • there is a new environment variable defined for the keycloak service: PROXY_ADDRESS_FORWARDING: "true"
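
Both versions of the compose file reference variables such as ${POSTGRESQL_DB} and ${KEYCLOAK_VERSION}. Docker Compose reads them from an .env file next to compose.yml; a minimal example with placeholder values of my choosing (not the exact values from the previous article) looks like this:

# .env - placeholder values, pick your own
POSTGRESQL_DB=keycloak
POSTGRESQL_USER=keycloak
POSTGRESQL_PASS=change-me
KEYCLOAK_VERSION=13.0.1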

§envoy configuration

Looking closely at the envoy service, we can spot the volume bind from the host ./etc/envoy to the container /etc/envoy. The proxy command references the /etc/envoy/envoy-keycloak.yaml configuration file. The file must have the yaml extension; yml is not going to work. The content is:

static_resources:
  listeners:
  - address:
      socket_address:
        address: 0.0.0.0
        port_value: 443
    listener_filters:
    - name: "envoy.filters.listener.tls_inspector"
    filter_chains:
    - filter_chain_match:
        server_names:
        - idp.gruchalski.com
      filters:
      - name: envoy.filters.network.http_connection_manager
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
          codec_type: AUTO
          stat_prefix: ingress_http
          route_config:
            name: local_route
            virtual_hosts:
            - name: keycloak
              domains:
              - "*"
              routes:
              - match:
                  prefix: "/"
                route:
                  cluster: proxy-domain1
          http_filters:
          - name: envoy.filters.http.router
      transport_socket:
        name: envoy.transport_sockets.tls
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.transport_sockets.tls.v3.DownstreamTlsContext
          common_tls_context:
            tls_certificates:
            - certificate_chain:
                filename: /etc/envoy/certificates/idp.gruchalski.com.crt
              private_key:
                filename: /etc/envoy/certificates/idp.gruchalski.com.key
  clusters:
  - name: proxy-domain1
    type: STRICT_DNS
    lb_policy: ROUND_ROBIN
    connect_timeout: 10s
    load_assignment:
      cluster_name: proxy-domain1
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address:
                address: keycloak
                port_value: 8080

The directory structure looks like this:

.
├── compose.yml
└── etc
    └── envoy
        └── envoy-keycloak.yaml

You might ask what this config file is doing, so let’s look at it from top to bottom.

  • First, we define a listener bound to 0.0.0.0 on port 443 - standard HTTPS stuff.
  • Next, we create a filter chain matching the idp.gruchalski.com domain name - this is TLS SNI matching. SNI implies that our service, here Keycloak, will be accessed over HTTPS only, and the hostname advertised during the TLS handshake is used to find the upstream (cluster) target to forward the traffic to. As you can probably already imagine, I will be accessing Keycloak via https://idp.gruchalski.com.
  • Connections matching the filtered domain are routed to the proxy-domain1 cluster by the route configuration and the router http filter.
  • The cluster forwards requests to its load-balancing endpoints; in this case, we have one at keycloak:8080. keycloak is the name of the Keycloak service on the Docker network used for this setup.

The part I’ve glossed over is the transport_socket.common_tls_context.tls_certificates. It points at the TLS certificate and key used by the filter chain for that domain.
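
By the way, the Envoy configuration can be validated on its own before bringing up the whole stack. This is plain Envoy validate mode, not something the setup requires; it needs the certificates described further down to be in place, because the config references them:

docker run --rm \
    -v $(pwd)/etc/envoy:/etc/envoy \
    envoyproxy/envoy:v1.18.2 \
    --mode validate -c /etc/envoy/envoy-keycloak.yaml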

Okay, a couple of caveats:

  • we don’t have the certificate yet
  • how do we access Keycloak using the domain name when it is running in a local compose setup?

§the domain name

Easy: modify the /etc/hosts file by adding:

127.0.0.1   idp.gruchalski.com
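
Or, if you prefer a one-liner (requires sudo):

echo "127.0.0.1   idp.gruchalski.com" | sudo tee -a /etc/hosts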

§certificates

This isn’t rocket science either. You probably already have a domain name you want to use instead of idp.gruchalski.com, so replace all occurrences with your own domain in the configs above and the commands below.

Because Keycloak is used in the browser, we want real TLS certificates from a publicly trusted certificate authority. Let’s Encrypt is for sure an awesome choice. We can get the LE certificates in multiple ways, but at the core we either need control over DNS for the dns-01 challenge, or we need an HTTP/HTTPS server reachable via the domain names for which the certificates should be issued. More about LE challenge types here[3].

Long story short, as I am requesting the certificates for a local compose setup, the http-01 and tls-alpn-01 challenges are not an option because Let’s Encrypt will not be able to call back to a server running on my local machine.

The dns-01 challenge is the way to go, but it requires administrative control over the DNS zone so the required TXT records can be created to complete the LE challenge. I have that; I use AWS Route 53 as my DNS of choice.

A couple of days ago, I wrote about the LEGO client, which I used to obtain the certificates[4]. Here, I use the following command:

cd etc/envoy
docker run --rm \
    -v $(pwd):/lego \
    -v ${HOME}/.aws/credentials:/root/.aws/credentials \
    -e AWS_PROFILE=lego \
    -ti goacme/lego \
    --accept-tos \
    --domains=idp.gruchalski.com \
    --server=https://acme-v02.api.letsencrypt.org/directory \
    --email=radek@gruchalski.com \
    --path=/lego \
    --dns=route53 run
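
The command mounts ${HOME}/.aws/credentials into the container and selects the lego profile via AWS_PROFILE. The profile name is my choice; a matching credentials file would look roughly like this, with placeholders instead of real keys:

[lego]
aws_access_key_id = <access key allowed to manage the Route 53 zone>
aws_secret_access_key = <matching secret key>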

As a result, my file structure now looks like this:

.
├── compose.yml
└── etc
    └── envoy
        ├── accounts
        │   └── acme-v02.api.letsencrypt.org
        │       └── radek@gruchalski.com
        │           ├── account.json
        │           └── keys
        │               └── radek@gruchalski.com.key
        ├── certificates
        │   ├── idp.gruchalski.com.crt
        │   ├── idp.gruchalski.com.issuer.crt
        │   ├── idp.gruchalski.com.json
        │   └── idp.gruchalski.com.key
        └── envoy-keycloak.yaml
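
To double-check what was issued, the certificate can be inspected with plain openssl; this step is optional:

openssl x509 -in etc/envoy/certificates/idp.gruchalski.com.crt \
    -noout -subject -issuer -dates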

The configuration is now complete.

After starting the setup with docker compose -f compose.yml up, I can access my Keycloak by entering https://idp.gruchalski.com in the browser address bar. The TLS connection is terminated at Envoy, which finds the cluster based on the hostname advertised during the TLS handshake. The request is then forwarded to Keycloak on port 8080.
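
The same round trip can be verified from the command line with curl; the /auth/ path below is the default context root of the jboss/keycloak image:

curl -v https://idp.gruchalski.com/auth/ -o /dev/null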

Installation finalization is exactly the same as in the previous article.