Valid SSL/TLS certificates are a core requirement of the modern application landscape. Unfortunately, managing certificate (or cert) renewals is often an afterthought when deploying an application. Certificates have a limited lifetime, ranging from roughly 13 months for certificates from DigiCert to 90 days for Let’s Encrypt certificates. To maintain secure access, these certificates need to be renewed/reissued prior to their expiration. Given the substantial workload of most Ops teams, cert renewal sometimes falls through the cracks, resulting in a scramble as certificates near – or worse, pass – their expiration date.
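To see why manual tracking is painful, here is a minimal sketch of the kind of check an Ops team would otherwise script by hand. The `days_left` helper is illustrative (not part of any tool mentioned in this post), and GNU `date` is assumed (on Mac OS X, install coreutils and use `gdate`):

```shell
# Print the number of days until a certificate expires, given the
# "notAfter=..." line emitted by openssl x509 -enddate.
# Assumes GNU date; on Mac OS X use gdate from coreutils.
days_left() {
  local end="${1#notAfter=}"
  echo $(( ($(date -d "$end" +%s) - $(date +%s)) / 86400 ))
}

# In practice you would feed it a live endpoint's expiry date, e.g.:
#   end=$(echo | openssl s_client -connect cert.example.com:443 \
#           -servername cert.example.com 2>/dev/null \
#         | openssl x509 -noout -enddate)
#   days_left "$end"
days_left "notAfter=Jan  1 00:00:00 2030 GMT"
```

Scripts like this have to be scheduled, monitored, and acted upon by a human; the rest of this post replaces all of that.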
It doesn’t need to be like this. With some planning and preparation, cert management can be automated and streamlined. Here, we will look at a solution for Kubernetes using three technologies: NGINX Ingress Controller, cert-manager, and Let’s Encrypt.
In this blog, you’ll learn to simplify cert management by providing unique, automatically renewed and updated certificates to your endpoints.
Before we get into technical details, we need to define some terminology. The term “TLS certificate” refers to the two components required to enable HTTPS connections on our Ingress controller: the certificate itself and its private key.
The private key is generated locally, and the matching certificate is issued by Let’s Encrypt. For a full explanation of how TLS certificates work, please see DigiCert’s post How TLS/SSL Certificates Work.
In Kubernetes, these two components are stored as Secrets. Kubernetes workloads – such as the NGINX Ingress Controller and cert-manager – can write and read these Secrets, which can also be managed by users who have access to the Kubernetes installation.
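Concretely, a TLS Secret pairs these two components under well-known keys. A sketch of its shape (the name and data values are placeholders):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: example-tls          # hypothetical name
  namespace: default
type: kubernetes.io/tls
data:
  tls.crt: <base64-encoded certificate chain>
  tls.key: <base64-encoded private key>
```

The same shape can be created by hand with `kubectl create secret tls`; in this post, cert-manager writes and updates it for us.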
The cert-manager project is a certificate controller that works with Kubernetes and OpenShift. When deployed in Kubernetes, cert-manager will automatically issue certificates required by Ingress controllers and will ensure they are valid and up-to-date. Additionally, it will track expiration dates for certificates and attempt renewal at a configured time interval. Although it works with numerous public and private issuers, we will be showing its integration with Let’s Encrypt.
When using Let’s Encrypt, all cert management is handled automatically. While this provides a great deal of convenience, it also presents a problem: How does the service ensure that you own the fully-qualified domain name (FQDN) in question?
This problem is solved using a challenge, which requires you to answer a verification request that only someone with access to the specific domain’s DNS records can provide. Challenges take one of two forms:

- HTTP-01, in which you prove control of the domain by serving a token provided by Let’s Encrypt over HTTP on that domain
- DNS-01, in which you prove control of the domain by publishing a token provided by Let’s Encrypt in a DNS TXT record
HTTP-01 is the simplest way to generate a certificate, as it does not require direct access to the DNS provider. This type of challenge is always conducted over Port 80 (HTTP). Note that when using HTTP-01 challenges, cert-manager will utilize the Ingress controller to serve the challenge token.
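During an HTTP-01 challenge, Let’s Encrypt’s validation servers fetch the token from a well-known path on your domain, defined by the ACME specification (RFC 8555). A sketch of how that URL is formed (the domain and token values are illustrative):

```shell
# The ACME HTTP-01 validation URL: always Port 80, always under
# /.well-known/acme-challenge/ (RFC 8555).
challenge_url() {
  printf 'http://%s/.well-known/acme-challenge/%s\n' "$1" "$2"
}

challenge_url cert.example.com sample-token
```

cert-manager configures the Ingress controller to answer requests on this path automatically; you never place the token yourself.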
An Ingress controller is a specialized service for Kubernetes that brings traffic from outside the cluster, load balances it to internal Pods (a group of one or more containers), and manages egress traffic. Additionally, the Ingress controller is controlled through the Kubernetes API and will monitor and update the load balancing configuration as Pods are added, removed, or fail.
To learn more about Ingress controllers, read the following blogs:
In the examples below, we will use the NGINX Ingress Controller, which is developed and maintained by F5 NGINX.
These examples assume that you have a working Kubernetes installation that you can test with, and that the installation can assign an external IP address (Kubernetes LoadBalancer object). Additionally, it assumes that you can receive traffic on both Port 80 and Port 443 (if using the HTTP-01 challenge) or solely Port 443 (if using the DNS-01 challenge). These examples are illustrated using Mac OS X, but can be used on Linux or WSL as well.
You will also need a DNS provider and FQDN that you can adjust the A record for. If you are using the HTTP-01 challenge, you only need the ability to add an A record (or have one added for you). If you are using the DNS-01 challenge, you will need API access to a supported DNS provider or a supported webhook provider.
The easiest way to deploy NGINX Ingress Controller is via Helm. This deployment allows you to use both the standard Kubernetes Ingress resource and the NGINX VirtualServer CRD.
$ helm repo add nginx-stable https://helm.nginx.com/stable
  "nginx-stable" has been added to your repositories
$ helm repo update
  Hang tight while we grab the latest from your chart repositories...
  ...Successfully got an update from the "nginx-stable" chart repository
  Update Complete. ⎈Happy Helming!⎈
$ helm install nginx-kic nginx-stable/nginx-ingress \
  --namespace nginx-ingress --create-namespace \
  --set controller.enableCustomResources=true \
  --set controller.enableCertManager=true
  NAME: nginx-kic
  LAST DEPLOYED: Thu Sep  1 15:58:15 2022
  NAMESPACE: nginx-ingress
  STATUS: deployed
  REVISION: 1
  TEST SUITE: None
  NOTES:
  The NGINX Ingress Controller has been installed.
$ kubectl get deployments --namespace nginx-ingress
  NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
  nginx-kic-nginx-ingress   1/1     1            1           23s
  $ kubectl get services --namespace nginx-ingress
  NAME                      TYPE           CLUSTER-IP      EXTERNAL-IP    PORT(S)                      AGE
  nginx-kic-nginx-ingress   LoadBalancer   10.128.60.190   www.xxx.yyy.zzz   80:31526/TCP,443:32058/TCP   30s

Next, add a DNS A record for your FQDN that points to the external IP address of the Ingress controller. The process here will depend on your DNS provider. This DNS name will need to be resolvable from the Let’s Encrypt servers, which may require that you wait for the record to propagate before it will work. For more information on this, please see the SiteGround article What Is DNS Propagation and Why Does It Take So Long?
Once you can resolve your chosen FQDN you are ready to move on to the next step.
$ host cert.example.com
  cert.example.com has address www.xxx.yyy.zzz

The next step is to deploy the most recent version of cert-manager. Again, we will be using Helm for our deployment.
$ helm repo add jetstack https://charts.jetstack.io
  "jetstack" has been added to your repositories
$ helm repo update
  Hang tight while we grab the latest from your chart repositories...
  ...Successfully got an update from the "nginx-stable" chart repository
  ...Successfully got an update from the "jetstack" chart repository
  Update Complete. ⎈Happy Helming!⎈
$ helm install cert-manager jetstack/cert-manager \
  --namespace cert-manager --create-namespace \
  --version v1.9.1 --set installCRDs=true
  NAME: cert-manager
  LAST DEPLOYED: Thu Sep  1 16:01:52 2022 
  NAMESPACE: cert-manager
  STATUS: deployed
  REVISION: 1 
  TEST SUITE: None
  NOTES:
  cert-manager v1.9.1 has been deployed successfully!
  In order to begin issuing certificates, you will need to set up a ClusterIssuer
  or Issuer resource (for example, by creating a 'letsencrypt-staging' issuer).
  More information on the different types of issuers and how to configure them
  can be found in our documentation:
  https://cert-manager.io/docs/configuration/
  For information on how to configure cert-manager to automatically provision
  Certificates for Ingress resources, take a look at the `ingress-shim`
  documentation:
  https://cert-manager.io/docs/usage/ingress/
$ kubectl get deployments --namespace cert-manager
  NAME                      READY   UP-TO-DATE   AVAILABLE   AGE
  cert-manager              1/1     1            1           4m30s
  cert-manager-cainjector   1/1     1            1           4m30s
  cert-manager-webhook      1/1     1            1           4m30s

We are going to be using the NGINX Cafe example to provide our backend deployment and Services. This is a common example used within the documentation provided by NGINX. We will not be deploying Ingress as part of this.
$ git clone https://github.com/nginxinc/kubernetes-ingress.git  Cloning into 'kubernetes-ingress'...
  remote: Enumerating objects: 44979, done.
  remote: Counting objects: 100% (172/172), done.
  remote: Compressing objects: 100% (108/108), done.
  remote: Total 44979 (delta 87), reused 120 (delta 63), pack-reused 44807
  Receiving objects: 100% (44979/44979), 60.27 MiB | 27.33 MiB/s, done.
  Resolving deltas: 100% (26508/26508), done.
$ cd ./kubernetes-ingress/examples/ingress-resources/complete-example
$ kubectl apply -f ./cafe.yaml
  deployment.apps/coffee created
  service/coffee-svc created
  deployment.apps/tea created
  service/tea-svc created

Validate that the Deployments and Services were created with the kubectl get command. You are looking to ensure that the Pods are showing as READY and the Services are showing as running. The example below shows a representative sample of what you are looking for. Note that the kubernetes Service is a system Service running in the same namespace (default) as the NGINX Cafe example.

$ kubectl get deployments,services --namespace default
  NAME                     READY   UP-TO-DATE   AVAILABLE   AGE
  deployment.apps/coffee   2/2     2            2           69s
  deployment.apps/tea      3/3     3            3           68s
NAME                 TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)   AGE
  service/coffee-svc   ClusterIP   10.128.154.225   <none>        80/TCP    68s
  service/kubernetes   ClusterIP   10.128.0.1       <none>        443/TCP   29m
  service/tea-svc      ClusterIP   10.128.96.145    <none>        80/TCP    68s

Within cert-manager, the ClusterIssuer can be used to issue certificates. This is a cluster-scoped object that can be referenced by any namespace and used by any certificate request with the defined certificate-issuing authority. In this example, any certificate requests for Let’s Encrypt certificates can be handled by this ClusterIssuer.
Deploy the ClusterIssuer for the challenge type you have selected. Although it is out of scope for this post, there are advanced configuration options that allow you to specify multiple resolvers (chosen based on selector fields) in your ClusterIssuer.
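As an illustration of those advanced options, this sketch pairs a DNS-01 solver for one zone with an HTTP-01 catch-all solver. The issuer name and DNS zone are hypothetical, and the field names follow the cert-manager v1 API:

```yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: multi-solver-issuer    # hypothetical name
spec:
  acme:
    email: example@example.com
    server: https://acme-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      name: multi-solver-account-key
    solvers:
      # Use DNS-01 only for names under this (hypothetical) zone
      - selector:
          dnsZones:
            - internal.example.com
        dns01:
          cloudflare:
            apiTokenSecretRef:
              name: cloudflare-api-token-secret
              key: api-token
      # A solver with no selector acts as the catch-all for everything else
      - http01:
          ingress:
            class: nginx
```

cert-manager picks the most specific matching solver for each certificate request, so the selector-less HTTP-01 solver only handles names the DNS-01 solver does not claim.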
The Automated Certificate Management Environment (ACME) protocol is used to determine if you own a domain name and can therefore be issued a Let’s Encrypt certificate. For this challenge, these are the parameters that need to be passed:

- metadata.name: the name of the ClusterIssuer, referenced later from our Ingress or VirtualServer resources
- spec.acme.email: the email address registered with Let’s Encrypt for this account
- spec.acme.server: the ACME server to use; here, the Let’s Encrypt production server
- spec.acme.privateKeySecretRef: the Kubernetes Secret in which the ACME account private key is stored
- spec.acme.solvers: the challenge type and, for HTTP-01, the Ingress class that will serve the challenge
This example shows how to set up a ClusterIssuer to use the HTTP-01 challenge to prove domain ownership and receive a certificate.
$ cat << EOF | kubectl apply -f -
  apiVersion: cert-manager.io/v1
  kind: ClusterIssuer
  metadata:
    name: prod-issuer
  spec:
    acme:
      email: example@example.com
      server: https://acme-v02.api.letsencrypt.org/directory
      privateKeySecretRef:
        name: prod-issuer-account-key
      solvers:
      - http01:
         ingress:
           class: nginx
  EOF
  clusterissuer.cert-manager.io/prod-issuer created
$ kubectl get clusterissuer
  NAME          READY   AGE
  prod-issuer   True    34s

This example shows how to set up a ClusterIssuer to use the DNS-01 challenge to prove your domain ownership. Depending on your DNS provider, you will likely need to use a Kubernetes Secret to store your token. This example uses Cloudflare. Note the use of namespace: the cert-manager application, which is deployed into the cert-manager namespace, needs to have access to the Secret.
For this example, you will need a Cloudflare API token, which you can create from your account. This token needs to be put in the <API Token> line below. If you are not using Cloudflare, you will need to follow the documentation for your provider.
$ cat << EOF | kubectl apply -f -
  apiVersion: v1
  kind: Secret
  metadata:
    name: cloudflare-api-token-secret
    namespace: cert-manager
  type: Opaque
  stringData:
    api-token: <API Token> 
  EOF
$ cat << EOF | kubectl apply -f -
  apiVersion: cert-manager.io/v1
  kind: ClusterIssuer
  metadata:
    name: prod-issuer
  spec:
    acme:
      email: example@example.com
      server: https://acme-v02.api.letsencrypt.org/directory
      privateKeySecretRef:
        name: prod-issuer-account-key
      solvers:
        - dns01:
            cloudflare:
              apiTokenSecretRef:
                name: cloudflare-api-token-secret
                key: api-token
  EOF
$ kubectl get clusterissuer
  NAME          READY   AGE
  prod-issuer   True    31m

This is the point we’ve been building towards – the deployment of the Ingress resource for our application. This will route traffic into the NGINX Cafe application we deployed earlier.
If you are using the standard Kubernetes Ingress resource, you will use the following deployment YAML to configure the Ingress and request a certificate.
  apiVersion: networking.k8s.io/v1
  kind: Ingress
  metadata: 
    name: cafe-ingress 
    annotations: 
      cert-manager.io/cluster-issuer: prod-issuer 
      acme.cert-manager.io/http01-edit-in-place: "true" 
  spec: 
    ingressClassName: nginx 
    tls: 
    - hosts: 
      - cert.example.com 
      secretName: cafe-secret 
    rules: 
    - host: cert.example.com 
      http: 
        paths: 
        - path: /tea 
          pathType: Prefix 
          backend: 
            service: 
              name: tea-svc 
              port: 
                number: 80 
        - path: /coffee 
          pathType: Prefix 
          backend: 
            service: 
              name: coffee-svc 
              port: 
                number: 80

It’s worth reviewing some key parts of the manifest:
- metadata.annotations, where we set acme.cert-manager.io/http01-edit-in-place to “true”. This value is required and adjusts the way that the challenge is served. For more information, see the Supported Annotations document. This can also be handled by using a master/minion setup.
- spec.ingressClassName, which refers to the NGINX Ingress Controller that we installed and will be using.
- spec.tls.secretName, the Kubernetes Secret resource that stores the certificate and key returned when the certificate is issued by Let’s Encrypt.
- The hostname cert.example.com, which is specified for spec.tls.hosts and spec.rules.host. This is the hostname for which our ClusterIssuer issues the certificate.
- The spec.rules.http section, which defines the paths and the backend Services that will service requests on those paths. For example, traffic to /tea will be directed to Port 80 on tea-svc.
- Modify the spec.rules.host and spec.tls.hosts values for your installation, but you should review all parameters in the configuration.

Apply the manifest, then confirm that the certificate has been issued:

$ kubectl apply -f ./cafe-ingress.yaml
  ingress.networking.k8s.io/cafe-ingress created
$ kubectl get certificates
  NAME                                      READY   SECRET        AGE
  certificate.cert-manager.io/cafe-secret   True    cafe-secret   37m

If you are using the NGINX CRDs, you will need to use the following deployment YAML to configure your Ingress.
  apiVersion: k8s.nginx.org/v1 
  kind: VirtualServer 
  metadata: 
    name: cafe 
  spec: 
    host: cert.example.com 
    tls: 
      secret: cafe-secret 
      cert-manager: 
        cluster-issuer: prod-issuer 
    upstreams: 
      - name: tea 
        service: tea-svc 
        port: 80 
      - name: coffee 
        service: coffee-svc 
        port: 80 
    routes: 
      - path: /tea 
        action: 
          pass: tea 
      - path: /coffee 
        action: 
          pass: coffee

Once again, it’s worth reviewing some key parts of the manifest:
- spec.tls.secret, the Kubernetes Secret resource that stores the certificate and key returned when the certificate is issued by Let’s Encrypt.
- The hostname cert.example.com, which is specified for spec.host. This is the hostname for which our ClusterIssuer issues the certificate.
- The spec.upstreams values, which point to our backend Services, including the ports.
- spec.routes, which defines both the routes and the action to be taken when those routes are hit.
- Modify the spec.host value for your installation, but you should review all parameters in the configuration.

Apply the manifest, then confirm that the VirtualServer was created:

$ kubectl apply -f ./cafe-virtual-server.yaml
  virtualserver.k8s.nginx.org/cafe created
$ kubectl get VirtualServers
  NAME   STATE   HOST                    IP             PORTS      AGE
  cafe   Valid   cert.example.com        www.xxx.yyy.zzz   [80,443]   51m

You can view the certificate via the Kubernetes API. This will show you details about the certificate, including its size and associated private key.
$ kubectl describe secret cafe-secret  Name:         cafe-secret
  Namespace:    default
  Labels:       <none>
  Annotations:  cert-manager.io/alt-names: cert.example.com
                cert-manager.io/certificate-name: cafe-secret
                cert-manager.io/common-name: cert.example.com
                cert-manager.io/ip-sans:
                cert-manager.io/issuer-group:
                cert-manager.io/issuer-kind: ClusterIssuer
                cert-manager.io/issuer-name: prod-issuer
                cert-manager.io/uri-sans:
  Type:  kubernetes.io/tls

  Data
  ====
  tls.crt:  5607 bytes
  tls.key:  1675 bytes

If you’d like to see the actual certificate and key, you can do so by running the following command. (Note: This illustrates a weakness of Kubernetes Secrets: they can be read by anyone with the necessary access permissions.)
$ kubectl get secret cafe-secret -o yaml

Now test the certificates. You can use any method that you wish here. The example below uses cURL. Success is indicated by a block similar to what is shown below, which includes the server name, internal address of the server, date, the URI (route) chosen (coffee or tea), and the request ID. Failures will take the form of HTTP error codes, most likely 400 or 301.
$ curl https://cert.example.com/tea
  Server address: 10.2.0.6:8080
  Server name: tea-5c457db9-l4pvq
  Date: 02/Sep/2022:15:21:06 +0000
  URI: /tea
  Request ID: d736db9f696423c6212ffc70cd7ebecf
  $ curl https://cert.example.com/coffee
  Server address: 10.2.2.6:8080
  Server name: coffee-7c86d7d67c-kjddk
  Date: 02/Sep/2022:15:21:10 +0000
  URI: /coffee
  Request ID: 4ea3aa1c87d2f1d80a706dde91f31d54

At the start, we promised that this approach would eliminate the need to manage certificate renewals. However, we have yet to explain how. Why? Because renewal is a core, built-in part of cert-manager. When cert-manager detects that a certificate is not present, is expired, or is within 15 days of expiry, or when the user requests a new certificate via the CLI, a new certificate is automatically requested. It doesn’t get much easier than that.
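If you want to tune that renewal window rather than rely on the default, the Certificate resource that cert-manager creates can also be managed explicitly. A sketch, reusing the names from our examples (field names follow the cert-manager v1 API; the values shown are illustrative):

```yaml
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: cafe-secret
  namespace: default
spec:
  secretName: cafe-secret     # the Secret our Ingress/VirtualServer references
  dnsNames:
    - cert.example.com
  duration: 2160h             # 90 days, the Let's Encrypt certificate lifetime
  renewBefore: 360h           # start renewal 15 days before expiry
  issuerRef:
    name: prod-issuer
    kind: ClusterIssuer
```

Shortening renewBefore trades earlier renewals for a larger safety margin against propagation or issuance hiccups.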
If you are an NGINX Plus subscriber, the only difference for you will involve installing the NGINX Ingress Controller. Please see the Installation Helm section of the NGINX Docs for instructions on how to modify the Helm command given above to accomplish this.
This largely depends on your use case.
The HTTP-01 challenge method requires that Port 80 is open to the Internet and that the DNS A record has been properly configured for the IP address of the Ingress controller. This approach does not require access to the DNS provider other than to create the A record.
The DNS-01 challenge method can be used when you cannot expose Port 80 to the Internet, and only requires that the cert-manager have egress access to the DNS provider. However, this method does require that you have access to your DNS provider’s API, although the level of access required varies by specific provider.
Since Kubernetes is so complex, it’s difficult to provide targeted troubleshooting information. If you do run into issues, we invite you to ask us on NGINX Community Slack (NGINX Plus subscribers can use their normal support options).
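That said, when a certificate does not become Ready, cert-manager’s intermediate resources usually explain why. A sketch of the commands to walk that chain, using the resource names from the examples above (requires a cluster and kubectl, so the script bails out cleanly without them):

```shell
# Walk cert-manager's resource chain when a certificate is stuck.
command -v kubectl >/dev/null || { echo "kubectl not found; run inside your cluster context"; exit 0; }

# The Certificate's events report validation and issuance failures
kubectl describe certificate cafe-secret --namespace default

# Each issuance creates CertificateRequest, Order, and Challenge resources
kubectl get certificaterequests,orders,challenges --all-namespaces

# The controller logs show ACME interactions with Let's Encrypt
kubectl logs --namespace cert-manager deploy/cert-manager --tail=50
```

Stuck Challenge resources are the most common culprit, typically pointing to an unpropagated A record (HTTP-01) or missing DNS provider permissions (DNS-01).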
Get started by requesting your free 30-day trial of NGINX Ingress Controller with NGINX App Protect WAF and DoS, and download the always‑free NGINX Service Mesh.
"This blog post may reference products that are no longer available and/or no longer supported. For the most current information about available F5 NGINX products and solutions, explore our NGINX product family. NGINX is now part of F5. All previous NGINX.com links will redirect to similar NGINX content on F5.com."