Dmitry Petrov edited this page May 26, 2023 · 6 revisions

HTTP proxy servers and application/content delivery network controllers provide multiple ways to make a routing decision for an incoming request. A situation when a request is forwarded to a wrong backend server is called virtual host confusion, and it usually happens due to misconfiguration or incorrectly implemented security checks. An article from the Black Hat conference describes how the most common mistakes in Nginx configuration and implementation details may lead to virtual host confusion. These security threats were taken into account during TempestaFW development, but the more flexible the configuration is, the more attention administrators must pay to keep it optimal and secure.

In this article we describe the current state of virtual host confusion (the original article was published in 2014) and how it can be mitigated or avoided in TempestaFW. The last section contains some hints on reproducing the test cases.

TCP layer confusion

For the TCP layer confusion examples, multiple IP addresses must be assigned to the same virtual machine. Both addresses can be assigned to the same interface, e.g.:

ip addr add 192.168.122.12/24 dev enp1s0
ip addr add 192.168.122.13/24 dev enp1s0

To check for a confusion attack, a user needs to open a connection to the desired target IP address and set a custom Host HTTP header. The simplest way to achieve this is to create temporary entries in /etc/hosts as below:

192.168.122.12 tempesta-tech.com
192.168.122.13 wiki.tempesta-tech.com

And finally use curl -vk https://tempesta-tech.com/ to open a TLS connection and make a request. Some combinations of hosts file content and URI represent a client attempt to confuse the HTTP routing engine.
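Editing /etc/hosts can also be avoided: curl's --resolve option pins a hostname to an arbitrary IP address, which makes mismatched SNI/Host combinations easy to script. The addresses below are the lab addresses assigned earlier; adjust them to your setup.

```shell
# Send SNI/Host "wiki.tempesta-tech.com" to the listener that serves
# the "tempesta-tech.com" virtual host (192.168.122.12), without
# touching /etc/hosts. If the routing engine is confused, the wiki
# content is returned from the wrong listener.
curl -vk --connect-timeout 2 \
    --resolve wiki.tempesta-tech.com:443:192.168.122.12 \
    https://wiki.tempesta-tech.com/
```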

Confusion on Nginx

Two virtual hosts exist in the same configuration, but are configured on different listen IP addresses.

events {
    worker_connections 1024;
}

http {
    # Virtual Host #1
    server {
        listen              192.168.122.12:443 ssl default_server;
        server_name         tempesta-tech.com;
        ssl_certificate     tfw-root.pem;
        ssl_certificate_key tfw-root.key;

        root /srv/http/root/;
    }

    # Virtual Host #2
    server {
        listen              192.168.122.13:443 ssl;
        server_name         wiki.tempesta-tech.com;
        ssl_certificate     tfw-root.pem;
        ssl_certificate_key tfw-root.key;

        root /srv/http/wiki/;
    }
}

In this configuration, different IP addresses are used to isolate the domains from each other. The original article described an issue where no real isolation between the listen addresses happened, and both virtual hosts could be served on any listed IP address. Now the situation differs: isolation is stricter, and each listed address serves only the virtual hosts configured on it.

But now neither the SNI TLS extension nor the Host header can affect virtual host routing. If a server receives a request on 192.168.122.12:443, it is forwarded to virtual host #1, even if SNI and the Host header explicitly request the wiki.tempesta-tech.com host name. And vice versa.

If virtual hosts are configured on the same IP address but on different ports, the client must provide the exact port in the Host header to reach the desired server. The configuration is below:

events {
    worker_connections 1024;
}

http {
    # Virtual Host #1
    server {
        listen              443 ssl default_server;
        server_name         tempesta-tech.com;
        ssl_certificate     tfw-root.pem;
        ssl_certificate_key tfw-root.key;

        root /srv/http/root/;
    }

    # Virtual Host #2
    server {
        listen              444 ssl;
        server_name         wiki.tempesta-tech.com;
        ssl_certificate     tfw-root.pem;
        ssl_certificate_key tfw-root.key;

        root /srv/http/wiki/;
    }
}

In this case the situation is exactly the same: only the listen address is used to make the routing decision, and both the Host header and the SNI extension values are ignored.

Since the target IP address and port are not authenticated by TLS, an attacker who can redirect the victim's traffic can steal its requests with cookies and other sensitive information. If both servers, the attacker's and the legitimate one, use the same certificate (common in cloud installations), a client browser won't be able to spot the attack.

Confusion on Tempesta

The TCP-level confusion is not reproduced in Tempesta. Although port and IP address isolation is not yet supported, when multiple virtual hosts are defined, an explicit http_chain directive is required, and it is always evaluated for every request. A block default routing action is implicitly appended to the end of the http_chain list, which also mitigates incorrect virtual host matching.

To get the same behaviour, Nginx requires an explicit virtual server section with an explicitly empty server_name directive and a return 4xx directive.
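Such a catch-all server for Nginx can be sketched as follows (the ssl_reject_handshake directive is available since Nginx 1.19.4; with it, no certificate is needed in this block):

```nginx
# Catch-all server: any request whose SNI/Host matches no explicit
# server_name lands here and is dropped, instead of being served by
# whichever server block happens to be the default.
server {
    listen               443 ssl default_server;
    server_name          "";
    ssl_reject_handshake on;   # refuse TLS handshakes with unknown SNI
    return               444;  # close the connection without a response
}
```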

Confusion on HTTP layer

Nginx's configuration consists of multiple files which can be included into each other. Of course, this gives a lot of configuration flexibility. But the resulting HTTP routing rules are built implicitly from many files, may contain regular expressions, and HTTP confusions are harder to track. TempestaFW uses a single configuration file, which is less flexible, but its explicit routing rules are easier to debug and understand.

HTTP routing rules

Every incoming request is matched against HTTP Tables. HTTP table rules are very flexible, and multiple sources can be used to make a routing decision: any header content, URI content, NetFilter marks, cookies. Since all rules are tried one by one, it's recommended to split the rules into separate chain levels and group them by their meaning. Mixing all the possible rules in a single table not only can reduce performance, but also makes configuration maintenance more time-consuming. E.g. if the current set of rules in a table was optimized for faster filtering and virtual host matching, adding or deleting a single virtual host will require a review of the full table.

Instead, we recommend the following HTTP tables structure:

  1. Pre-routing filtering table: a set of rules to mitigate attacks targeted at the installation as a whole, and to filter unwanted requests as early as possible. This filter is applied to every incoming request.
  2. Routing table: a set of rules to find the target virtual host for an incoming request and to pick the next-level table. This set should make a high-level decision that does not allow requests to different virtual hosts to be confused. E.g. this table can find the next-level table by the authority (host) specified in the request.
  3. Post-routing set of tables: once the routing decision is made, a set of rules can be used to filter malicious requests or to provide more precise routing, such as A/B testing, user group isolation, and splitting a single service into different virtual hosts. This set of tables can be considered a per-vhost set of rules.

Example:

vhost tempesta-tech.com {
    ...
}
vhost example.com {
    ...
}

# Virtual host rules with fine-grained filtering.
http_chain tempesta {
    host == "beta.tempesta-tech.com" -> tempesta_beta;
    host != "tempesta-tech.com"  -> block;
    hdr "X-Custom-Bar-Hdr" == "*"  -> mark = 4;
    -> tempesta-tech.com;
}

http_chain example {
    mark == 2 -> block;
    -> example.com;
}

# Primary Routing chain
http_chain routing {
    host == "*tempesta-tech.com"  -> tempesta;
    host == "example.com"     -> example;
    -> block;
}

# Pre-routing chain
http_chain {
    # Mitigate incoming referer attack as early as possible
    hdr "Referer" == "http://badhost.com*" -> block;
    # And then do a routing:
    -> routing;
}
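The first-match semantics of the chains above can be modelled with a short simulation. This is a conceptual sketch, not TempestaFW code: chain names and rule fields mirror the example configuration, and the implicit block default is reproduced at the end of every chain.

```python
# Conceptual model of HTTP tables evaluation: rules in a chain are
# tried top to bottom, the first match wins, and a matched action
# either names a virtual host or jumps to another chain.
from fnmatch import fnmatch

def evaluate(chains, chain_name, request):
    """Walk a chain; return the selected vhost or 'block'."""
    for field, pattern, action in chains[chain_name]:
        if fnmatch(request.get(field, ""), pattern):
            if action in chains:          # jump to a nested chain
                return evaluate(chains, action, request)
            return action                 # terminal vhost or explicit block
    return "block"                        # implicit default action

chains = {
    "main": [
        ("referer", "http://badhost.com*", "block"),  # pre-routing filter
        ("host", "*", "routing"),                     # then route
    ],
    "routing": [
        ("host", "*tempesta-tech.com", "tempesta"),
        ("host", "example.com", "example"),
    ],
    "tempesta": [
        ("host", "beta.tempesta-tech.com", "tempesta_beta"),
        ("host", "tempesta-tech.com", "tempesta-tech.com"),
    ],
    "example": [
        ("host", "example.com", "example.com"),
    ],
}

print(evaluate(chains, "main", {"host": "beta.tempesta-tech.com"}))  # tempesta_beta
print(evaluate(chains, "main", {"host": "unknown.com"}))             # block
```

Note how a request to an unknown host falls off the end of the routing chain and is blocked, mirroring the explicit `-> block;` rule in the configuration above.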

An example of a configuration that allows virtual host confusion:

vhost vh1 {
    ...
}
vhost vh2 {
    ...
}

<... A lot of other virtual hosts are listed here. ...>

http_chain {
    hdr "Cookie" == "secret_cookie=*" -> vh1;
    host == "example.com"     -> vh2;
    <... A lot of other host match rules are listed here. ...>
    host == "site.com"        -> vh1;
    -> block;
}

In this scenario, all requests with the specified cookie will be forwarded to the virtual host vh1. This rule is a short-path rule to jump over a lot of host matches and choose the target virtual host a bit faster, but it allows virtual host confusion: even requests with the host example.com will be routed to the wrong virtual host if they carry a matching cookie.
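A safer variant of the same configuration scopes the cookie rule to the hosts it belongs to, so it can no longer capture requests addressed to other virtual hosts. This is a sketch in the http_chain syntax used above; the chain name vh1_hosts is illustrative:

```
# Route by authority first, then apply the cookie shortcut only
# within the hosts that are actually served by vh1.
http_chain vh1_hosts {
    hdr "Cookie" == "secret_cookie=*" -> vh1;
    <... other vh1-specific rules ...>
    -> vh1;
}

http_chain {
    host == "example.com"     -> vh2;
    host == "site.com"        -> vh1_hosts;
    <... other host match rules ...>
    -> block;
}
```

The cookie is now evaluated only after the authority has selected a chain, so a request for example.com can never reach vh1 regardless of its cookies.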

Confusion on TLS layer

When a client connects to a web server over TLS, it usually sets the SNI TLS extension to identify the requested server. If an attacker has a server behind the same proxy as the legitimate one, or at least can upload malicious content to any domain behind the same proxy, it may try to trigger a virtual host confusion attack.

First, the attacker needs to force the client and server to resume their previous session. If a session is resumed, certificate checks are skipped and the client can't verify the server. Reverse proxies and CDNs often use multi-domain certificates, and even full certificate validation can't help the client verify which backend server group was assigned to its connection. With this in mind, an attacker may force a client to make requests to the attacker's content in the same security context as requests to the origin backend server.

SNI validation with http_strict_host_checking

SNI paired with HTTP tables-based routing can also be misconfigured when http_strict_host_checking is enabled in the frang layer. This follows from Tempesta's vhost selection mechanism:

  1. A virtual host is chosen by matching the SNI from the TLS handshake sent by the client against the CN/SAN records of all certificates. Information about the picked vhost is stored in the connection structure.
  2. When the actual request arrives, routing based on HTTP tables rules is applied to select the appropriate virtual host.
  3. If http_strict_host_checking is enabled, Tempesta ensures, among other things, that the virtual hosts selected in steps 1 and 2 are the same. If this condition is not satisfied, the request is rejected (based on the block_action attack directive) and a warning about the mismatching vhost appears in the log:
vhost by SNI doesn't match vhost by authority $CLIENT_IP ('$TLS_VHOST_ID' vs '$HTTP_VHOST_ID')

Here is an example of such an inherently broken configuration:

tls_certificate /www/cert/my-site.com.crt;
tls_certificate_key /www/cert/my-site.com.key;

vhost static {
	...
}

vhost dynamic {
	...
}

http_chain {
	uri == "*.php" -> dynamic;
	-> static;
}

In this example the dynamic and static virtual hosts share the same TLS certificate, and one of them (currently depending on the order of appearance in the config file) is chosen during the TLS handshake by the SNI my-site.com. Then HTTP tables come into play and choose the vhost based on the request URI.

This is where things break: whichever vhost was picked by SNI, requests that HTTP tables route to the other vhost will always trigger the mismatch.

The advised workaround is to use distinct hostnames and certificates (i.e. CN=my-site.com for dynamic, and SAN=*.my-site.com for the static.my-site.com name).

An important note on wildcard certificates: if a wildcard certificate can't distinguish between two vhosts (say, SAN=*.my-site.com with the vhosts css.my-site.com and images.my-site.com), the same problem arises, because a single SNI could match more than one vhost behind the certificate.
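One way to express the advised workaround is to give each vhost its own certificate, so the SNI sent in the handshake selects exactly one vhost that HTTP tables will later confirm. This is a sketch; the per-vhost placement of the certificate directives and the file paths are assumptions for illustration:

```
# Each vhost presents its own certificate, so SNI unambiguously
# selects the vhost that HTTP tables routing will also select.
vhost dynamic {
    tls_certificate     /www/cert/my-site.com.crt;
    tls_certificate_key /www/cert/my-site.com.key;
    ...
}

vhost static {
    tls_certificate     /www/cert/static.my-site.com.crt;
    tls_certificate_key /www/cert/static.my-site.com.key;
    ...
}

http_chain {
    host == "my-site.com"        -> dynamic;
    host == "static.my-site.com" -> static;
    -> block;
}
```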

Confusion on Nginx

With TLS layer confusion, the TLS session cache or TLS session tickets might be confused when a previously opened TLS session with one virtual host can be resumed with another virtual host. Only TLS tickets were tested in this article. The following configuration was used:

events {
    worker_connections 1024;
}


http {
    # Virtual Host #1
    server {
        listen              443 ssl default_server;
        server_name         tempesta-tech.com;
        ssl_certificate     tfw-root.pem;
        ssl_certificate_key tfw-root.key;
        ssl_session_tickets on;

        root /srv/http/root/;
    }

    # Virtual Host #2
    server {
        listen              443 ssl;
        server_name         wiki.tempesta-tech.com;
        ssl_certificate     tfw-root.pem;
        ssl_certificate_key tfw-root.key;
        ssl_session_tickets on;

        root /srv/http/wiki/;
    }
}

At first sight no virtual host confusion happens: real browser requests to tempesta-tech.com and wiki.tempesta-tech.com use different entries from the client-side TLS connection cache, so different TLS tickets and session identifiers for resumption are sent to the server. But this is mostly explained by efforts on the client side.

In a more synthetic scenario, it's possible to manually extract the master key and ticket data from one connection and reuse them in another connection with a different SNI value. With the configuration above the session is successfully resumed. Although the attack is hard to reproduce with current browsers and looks quite synthetic, an attacker may still exploit implementation mistakes in client code to force the client to resume a session with an untrusted server. The source code of such scenarios is available in the functional tests for TempestaFW.

Nginx can't be configured to mitigate the attack entirely.

Confusion on Tempesta

TempestaFW directly manipulates the TLS ticket content. The TLS server name and client IP address are always added into the TLS ticket data, thus tickets can't be shared between clients or reused for different TLS server names. This behaviour is always on and can't be disabled.
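The effect of binding a ticket to the SNI and the client address can be sketched conceptually: if any authenticated field differs on resumption, ticket validation fails and a full handshake is forced. This is an illustration of the idea only, not TempestaFW's actual ticket format:

```python
import hashlib
import hmac
import os

KEY = os.urandom(32)  # server-side ticket protection key

def issue_ticket(session_state: bytes, sni: str, client_ip: str) -> bytes:
    # Authenticate the session state together with the SNI and the
    # client address, so the ticket is only valid for this pairing.
    msg = b"|".join([session_state, sni.encode(), client_ip.encode()])
    mac = hmac.new(KEY, msg, hashlib.sha256).digest()
    return session_state + mac

def accept_ticket(ticket: bytes, sni: str, client_ip: str) -> bool:
    # Recompute the MAC with the SNI and client IP seen on resumption;
    # any mismatch invalidates the ticket.
    state, mac = ticket[:-32], ticket[-32:]
    msg = b"|".join([state, sni.encode(), client_ip.encode()])
    expect = hmac.new(KEY, msg, hashlib.sha256).digest()
    return hmac.compare_digest(mac, expect)

t = issue_ticket(b"master-secret-and-params", "tempesta-tech.com", "192.168.122.100")
print(accept_ticket(t, "tempesta-tech.com", "192.168.122.100"))       # True: same SNI and client
print(accept_ticket(t, "wiki.tempesta-tech.com", "192.168.122.100"))  # False: different SNI
```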

The Frang Tempesta module has a special limit used to block requests with malicious Host header content:

frang_limits {
    http_strict_host_checking;
}

The limit enforces the following restrictions; if any of them is not met, the client connection is blocked:

  • Every HTTP request must declare the requested host (authority). The Host header in HTTP/1 and both :authority and Host in HTTP/2 can be used for that. If an HTTP/2 request sends both Host and :authority, they must be equal.
  • The requested authority must not be an IPv4/IPv6 address.
  • The authority can also be listed inside the URI; in this case, it must be equal to the header (HTTP/1.x only).
  • The port requested in the authority must be equal to the TCP port where the request was received.
  • The SNI in a TLS connection must resolve to the same vhost as the requested authority. This means that HTTP tables rules must be consistent with the certificates declared inside vhosts.
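The checks above can be modelled as a plain validation function. This is a simplified illustration of the listed rules, not Tempesta's implementation; the function name and parameters are chosen for this sketch:

```python
import ipaddress
from urllib.parse import urlsplit

def check_host(uri: str, host_hdr: str, conn_port: int,
               sni_vhost: str, hdr_vhost: str) -> bool:
    """Return True if a request passes the strict host checks."""
    if not host_hdr:
        return False                  # the authority must be declared
    name, _, port = host_hdr.partition(":")
    try:
        ipaddress.ip_address(name.strip("[]"))
        return False                  # the authority must not be an IP
    except ValueError:
        pass                          # not an IP address: fine
    if port and int(port) != conn_port:
        return False                  # authority port must match the TCP port
    uri_host = urlsplit(uri).netloc
    if uri_host and uri_host != host_hdr:
        return False                  # absolute URI must agree with Host
    return sni_vhost == hdr_vhost     # SNI vhost must match authority vhost

print(check_host("/index.html", "tempesta-tech.com", 443, "root", "root"))  # True
print(check_host("/index.html", "192.168.122.12", 443, "root", "root"))     # False: IP authority
```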

Frang limits are enabled by default with reasonable settings.

Required files to reproduce

Generate multi domain certificate

First, create a self-signed multi-domain certificate. Create a file key.conf:

[req]

distinguished_name = req_distinguished_name
req_extensions = v3_req
prompt = no

[req_distinguished_name]

C = US
O = Tempesta Tech., Ltd.
CN = tempesta-tech.com

[v3_req]

keyUsage = nonRepudiation, digitalSignature, keyEncipherment
extendedKeyUsage = serverAuth
subjectAltName = @alt_names

[ alt_names ]
DNS.1 = *.app.tempesta-tech.com
DNS.2 = *.wiki.tempesta-tech.com

And generate a private key and a certificate:

openssl req -x509 -new -nodes \
    -newkey ec -pkeyopt ec_paramgen_curve:prime256v1 -keyout tfw-root.key \
    -config key.conf -extensions 'v3_req' \
    -out tfw-root.pem

Now the certificate can be validated:

openssl x509 -text -in tfw-root.pem

It will contain a section:

            X509v3 Subject Alternative Name:
                DNS:*.app.tempesta-tech.com, DNS:*.wiki.tempesta-tech.com

Generate resources for virtual hosts

Since virtual host confusion is being evaluated, different resources must be created for each virtual host, and they must be easily distinguishable from each other. E.g.:

$ sudo cat /srv/http/root/index.html
<html>
<link rel="icon"  href="data:;base64,iVBORw0KGgo=">
<body>
<p></p>&nbsp;<p></p>
<h2 align='center'>
HELLO WORLD  ROOT domain!
</h2>
<p></p>&nbsp;<p></p>
</body>
</html>
$ sudo cat /srv/http/wiki/index.html
<html>
<link rel="icon"  href="data:;base64,iVBORw0KGgo=">
<body>
<p></p>&nbsp;<p></p>
<h2 align='center'>
HELLO WORLD  WIKI domain!
</h2>
<p></p>&nbsp;<p></p>
</body>
</html>