reverse proxy tls request smuggling

Reverse proxies are often used in hybrid environments to provide selective access to specific services for potentially untrusted clients outside the network.

There are two primary types of proxies: Layer 4 and Layer 7. Layer 4 proxies operate on the Transport Layer (TCP/UDP), while Layer 7 proxies operate on the Application Layer (HTTP/HTTPS).

In each model, the client communicates with the proxy, which then forwards the request to the service and responds to the client when it receives the response from the service.

However, in Layer 7 proxies the proxy effectively acts as an application-layer client to the service, while in Layer 4 proxies the proxy simply passes the TCP/UDP packets back and forth. This distinction is important for a few reasons.

From a performance perspective, layer 4 proxies are more efficient than layer 7 proxies because they are able to use the network stack to forward the request to the service, while layer 7 proxies must first buffer and parse the full request before forwarding it to the service, and do the same before forwarding the subsequent response.
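To make the buffering distinction concrete, here is a toy sketch (not a real proxy) contrasting the two models: a Layer 4 relay can forward each chunk of bytes the moment it arrives, while a Layer 7 proxy must buffer until it has a complete, parseable HTTP header block before it can act.

```python
def l4_relay(chunks):
    """Layer 4 model: forward each chunk immediately, never inspecting it.
    In a real proxy each chunk would go straight to upstream.sendall()."""
    forwarded = []
    for chunk in chunks:
        forwarded.append(chunk)  # forwarded as-is, no parsing or buffering
    return b"".join(forwarded)

def l7_forward(chunks):
    """Layer 7 model: buffer until the header block is complete, then parse
    the request before anything can be forwarded upstream."""
    buf = b""
    for chunk in chunks:
        buf += chunk
        if b"\r\n\r\n" in buf:  # end of HTTP headers
            break
    head, _, _ = buf.partition(b"\r\n\r\n")
    request_line, *header_lines = head.decode().split("\r\n")
    headers = dict(h.split(": ", 1) for h in header_lines)
    return request_line, headers
```

The extra parse step is what gives a Layer 7 proxy its policy power (and its latency cost): it cannot route or validate anything until the full header block has arrived.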

A significant role of a reverse proxy is to grant access to certain resources, while restricting access to other resources. A reverse proxy does this by matching the incoming request against a list of rules, and then forwarding the request to the service if the rule matches.

The most frequent matching rules are Host, Path, and Client IP. This exploit focuses on proxies which use the Host header to match and route requests.
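Host-based routing boils down to a lookup table keyed on the Host header. A minimal sketch of that model (the domains and upstream addresses below are illustrative placeholders):

```python
# Hypothetical routing table: Host header -> upstream address.
ROUTES = {
    "portal.example.com": "10.0.0.10",  # illustrative upstream
}

def route(host_header):
    """Return the upstream for a request, or None if no rule matches.
    Host matching is case-insensitive per RFC 9110."""
    return ROUTES.get(host_header.lower())
```

Everything that follows in this article is about what happens when this lookup is skipped, defaulted, or performed on the wrong signal.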

In a simulated environment, we have two services: an internal service that should only ever be accessed by internal clients, and a portal accessed by both internal users and external partners through a reverse proxy.

For brevity, the examples below are condensed. Assume that the reverse proxy sits in a DMZ and resolves against internal DNS, allowing it to proxy to the same DNS name as the listening virtual host / server block. NGINX is used for the examples, but the same applies to other proxies; upstream blocks are omitted to keep the configs compact. Ok, with that out of the way...

A basic reverse proxy configuration will only allow requests through if they match the Host header on the request.
        server {
            listen      80;
            # portal.example.com is an illustrative placeholder;
            # the original example's domains were elided
            server_name portal.example.com;

            location / {
                proxy_pass http://portal.example.com;
            }
        }
The server_name directive matches the incoming request's Host header, and the proxy then forwards the request to the service. If a user makes a request to the reverse proxy with a Host header that matches no server block, the request will fail.

A beginner mistake is to use the default server block or drop the server_name directive, thinking that with only one origin, they only need to provide one listener.
        server {
            listen      80;
            server_name _;

            location / {
                # illustrative placeholder origin
                proxy_pass http://origin.example.com;
            }
        }
The danger with this configuration is that it will accept any request, regardless of the Host header. Even though proxy_pass points at the origin's DNS name, if that DNS resolves anywhere other than directly to the single origin server, the client can start to manipulate requests in your network.

For example, if the client makes a request to the reverse proxy but sets their Host header to an internal domain, the proxy will accept the connection and forward the request to the origin. It is then up to the origin server to determine how to handle that request. In the best case, the origin recognizes the Host header mismatch and drops the request, but in modern decoupled architectures, most app services have no context or understanding of any protocol layers around them, by design.
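The failure mode is easy to see if we sketch nginx-style server selection. With `server_name _;` in a lone (and therefore default) server block, every Host value effectively matches; the domains below are illustrative:

```python
def match_server_block(host, server_names):
    """Sketch of server-block selection: an explicit name must match the
    Host header, but a lone catch-all/default block ("_") effectively
    accepts any Host -- exactly the danger described above."""
    return host in server_names or "_" in server_names
```

With a real server_name list, a forged Host falls through; with the catch-all, the forged request is accepted and forwarded.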

In most cases, the origin server will accept the traffic and continue processing requests without issue. However, if the proxy is pointing to a load balancer, shared gateway, or another proxy in a leap-frog design, the client's request will route through the first proxy, and the second layer will then receive the request with the forged Host header and forward that traffic - now originating from within the network.

This vulnerability can be mitigated by matching the client HTTP Host header in the server_name. But with it being 2022, we expect to have TLS everywhere. To save costs and reduce complexity (or because of a technical limitation, such as with CloudFront), we have created one SSL certificate for our website, with SAN entries covering both of our subdomain services.

The internal application teams have installed the certificate into their origins, but they elect to use a Layer 4 TCP passthrough proxy and terminate TLS at the origin gateway. This is often useful when using more modern gateways such as NGINX, Istio, or Traefik, where Policy-as-Code is defined and managed at the gateway layer in front of the applications, but behind the proxy. Layer 4 load balancers are also the default for many providers such as GCP.
        # ssl_preread lives in the stream (Layer 4) module
        stream {
            server {
                listen 443;
                ssl_preread on;
                # route purely on the SNI name from the ClientHello;
                # a variable proxy_pass also requires a resolver directive
                proxy_pass $ssl_preread_server_name:443;
            }
        }

You may note the inclusion of the ssl_preread directive. As the traffic is encrypted, the proxy cannot otherwise know where to route the traffic when there are multiple listeners on the same port. RFC 6066 describes a process by which a client can provide a server name in the extended ClientHello message. This is known as Server Name Indication (SNI), and it allows the client to indicate which server it wishes to connect to before sending the encrypted packets.
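The key detail is that the SNI hostname travels in plaintext in the ClientHello, so it is the only routing signal an ssl_preread-style proxy ever sees. As a sketch of the mechanics, the following builds a minimal (structurally valid, stripped-to-essentials) ClientHello record and extracts the SNI name from it, the way a Layer 4 proxy would:

```python
import struct

def build_client_hello(hostname):
    """Construct a minimal TLS ClientHello record carrying only an SNI
    extension (RFC 6066). Stripped to essentials for illustration."""
    name = hostname.encode("ascii")
    # server_name list -> one entry: type 0 (host_name), length, name
    sni_entry = b"\x00" + struct.pack("!H", len(name)) + name
    sni_list = struct.pack("!H", len(sni_entry)) + sni_entry
    ext = struct.pack("!HH", 0, len(sni_list)) + sni_list  # ext type 0 = server_name
    exts = struct.pack("!H", len(ext)) + ext
    body = (
        b"\x03\x03"            # client_version: TLS 1.2
        + b"\x00" * 32         # random (zeroed for the sketch)
        + b"\x00"              # session_id length: 0
        + b"\x00\x02\x13\x01"  # one cipher suite
        + b"\x01\x00"          # compression: null only
        + exts
    )
    handshake = b"\x01" + len(body).to_bytes(3, "big") + body  # type 1 = ClientHello
    return b"\x16\x03\x01" + struct.pack("!H", len(handshake)) + handshake

def parse_sni(record):
    """Extract the SNI hostname from a ClientHello record -- the only
    plaintext routing signal available to a Layer 4 passthrough proxy."""
    # skip record header (5) + handshake header (4) + version (2) + random (32)
    i = 5 + 4 + 2 + 32
    i += 1 + record[i]                                  # session_id
    i += 2 + int.from_bytes(record[i:i+2], "big")       # cipher_suites
    i += 1 + record[i]                                  # compression_methods
    end = i + 2 + int.from_bytes(record[i:i+2], "big")  # extensions block
    i += 2
    while i < end:
        ext_type = int.from_bytes(record[i:i+2], "big")
        ext_len = int.from_bytes(record[i+2:i+4], "big")
        if ext_type == 0:  # server_name
            name_len = int.from_bytes(record[i+7:i+9], "big")
            return record[i+9:i+9+name_len].decode("ascii")
        i += 4 + ext_len
    return None
```

Everything after this record is encrypted, which is why the SNI name is all the proxy can validate.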

As the Layer 4 TCP traffic through the proxy remains encrypted, once the proxy opens the connection with the origin based on the valid SNI, the client is then able to spoof the host header at Layer 7 and reroute the traffic accordingly. If the origin is an internal load balancer, gateway, or second hop of a leap frog proxy, the client can then access any resource behind that origin (presumably services matching domains in the SAN list) based on the HTTP host header supplied within the encrypted TLS session.

In our example, a client is able to spoof a request through the reverse proxy and then access the internal service.

        # illustrative domains; the original example's domains were elided
        curl https://portal.example.com/ -H "Host: internal.example.com"
The client can even access deeper paths and fully interact with the service. Since the connection to the service is secured with TLS, the proxy can only see the initial ClientHello packet (which is for the permitted domain), but once the proxy opens the connection with the origin, the client immediately sends the forged Host header, and the origin begins operating on that request.
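Putting the two decision points together, the smuggling path reduces to a two-signal mismatch: the proxy validates only the SNI name, while the origin routes only on the HTTP Host header inside the session it terminates. A hypothetical simulation (all domains illustrative):

```python
# What the Layer 4 proxy will pass through, based solely on SNI.
ALLOWED_SNI = {"portal.example.com"}

# What the shared origin serves, keyed solely on the HTTP Host header.
ORIGIN_VHOSTS = {
    "portal.example.com": "portal app",
    "internal.example.com": "internal app",  # never meant to be external
}

def smuggle(sni, host_header):
    """Return what the client ends up talking to, or None if blocked.
    Models a passthrough proxy in front of a Host-routing origin."""
    if sni not in ALLOWED_SNI:
        return None                       # proxy drops the connection
    return ORIGIN_VHOSTS.get(host_header) # origin never sees the SNI check
```

A valid external SNI paired with an internal Host header sails through both checks, which is the entire exploit.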

mitigating the vulnerability

There are a few options to mitigate such a vulnerability, depending on the nature of your environment and your desired security posture and network design.

terminate TLS at proxy

This exploit is made possible because the encrypted traffic is passed through the proxy, so the proxy is unable to enforce any security policies on the Layer 7 traffic - only on Layer 4 attributes, such as the client IP.

If we terminate the TLS at the proxy and then open a subsequent TLS session with the origin, we can decrypt the traffic and validate / inject any Layer 7 data we want.
        server {
            listen      443 ssl;
            # illustrative placeholder; the original domain and cert paths were elided
            server_name portal.example.com;
            ssl_protocols   TLSv1.2 TLSv1.3;
            ssl_ciphers     HIGH:!aNULL:!MD5;
            ssl_certificate     /certs/portal.example.com.crt;
            ssl_certificate_key /certs/portal.example.com.key;

            location / {
                proxy_set_header Host $host;
                proxy_ssl_server_name on;
                proxy_ssl_name $host;
                proxy_http_version 1.1;
                proxy_pass https://portal.example.com;
            }
        }
This has the upside of ensuring that the inbound traffic is decrypted, validated, and re-encrypted before being sent to the origin. However, TLS decryption and re-encryption on each request adds extra load on the proxy and increases client request latency.

Additionally, this adds the operational overhead of ensuring the TLS certificates for each origin are always up to date in all your proxy servers. cert-manager-sync can assist with this, but it still does add operational overhead and complexity.

separate SAN certs, end wildcards

A key part of this exploit is taking advantage of certificates with multiple SAN domains that cover resources both inside and outside the network, resources using split-horizon DNS, or wildcard certs which cover all subdomains.

If SSL certificates are separated into individual per-origin/service CN certificates with no SANs, then the client can only initiate the SNI for the single domain. The more domains on the cert, the more options the client can potentially request.

SAN certificates should be reduced to only common services, and should be kept as small as possible to reduce the attack surface.
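One way to reason about the blast radius: once the SNI check passes, every additional SAN on the presented certificate is another Host a client can plausibly smuggle to behind the same origin. A hypothetical helper (domains illustrative):

```python
def spoofable_hosts(cert_sans, requested_sni):
    """Given a cert's SAN list and the legitimately requested SNI name,
    return the extra hostnames a client could target via a forged Host
    header -- assuming, as above, the shared origin serves all SANs."""
    return set(cert_sans) - {requested_sni}
```

A per-service CN certificate with no SANs makes this set empty, which is the point of this mitigation.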

dedicated external origin

In addition to taking advantage of the SNI ClientHello message in the TLS handshake, this exploit also relies on origins which serve requests for multiple services, such as load balancers and gateways. This also includes the inner sides of leap-frog reverse proxies.

Once the client has passed the SNI check, the client has access to any resource behind that origin.

If the origin serves traffic for both external and internal services, then the client can access both services.

However, if a dedicated origin is created for external ingress, it not only mitigates this vulnerability, but also ensures a single ingress path for all external traffic, enabling better security controls and threat analysis.

This exploit relies on a few different parameters aligning: a Layer 4 TCP passthrough proxy routing on SNI alone, a certificate whose SAN list (or wildcard) spans both internal and external services, and a shared origin which routes on the Host header without additional policy enforcement.

In addition to relying on potential proxy misconfigurations, it also leans on the presumption of security with TLS. When configuring a Layer 4 TCP proxy, the idea of not decrypting traffic and routing on a native TLS concept (SNI) makes it deceptively simple and seemingly secure.

However, precisely because the traffic is encrypted, the proxy is unable to truly validate the request before sending it on to the origin, and if the origin is able to route to both internal and external resources and does no additional authentication, authorization, or policy-based routing, it will gladly handle the request.

last updated 2022-03-01T20:26:26-0800