
Netctl undocumented features

I use netctl to configure my network interfaces. While some features seem to be missing, here is a list of useful features that the tool supports but does not document:

Passing options to your DHCP client

Although only dhclient and dhcpcd are currently officially supported, it still seems quite a pain to pass them custom parameters without using the environment.

Some people suggest using the DHCPClient variable to pass the name of the binary along with its options.

Nonetheless, it is possible to use the variables DhcpcdOptions or DhclientOptions.

Example (here, we pass a custom hostname and a custom metric to dhcpcd):

> DhcpcdOptions='-h hostname -m metric'
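For context, here is what a complete profile using this variable might look like. This is only a minimal sketch: the profile name, interface, hostname and metric values are placeholders, not taken from an actual setup.

# /etc/netctl/wired-dhcp
Description='Wired connection using dhcpcd with custom options'
Interface=eth0
Connection=ethernet
IP=dhcp
# Undocumented variable: extra arguments passed verbatim to dhcpcd
DhcpcdOptions='-h myhostname -m 100'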

cURL and the TLS SNI extension

Let's say that you are hosting multiple websites on the same port of a single machine whose IPv4 address is 176.31.99.217. All of these websites must be accessible via both HTTP and HTTPS.

You own the DNS domain example.org and it points to your machine.

You are using virtual hosts to manage your different websites. In other words, your web server analyses the Host HTTP header to decide which website to serve.

However, these virtual hosts are not pointing to your machine yet. Only example.org does.

Here is an excerpt of an Nginx configuration using three virtual hosts:

  • example.org (it will be served by default because of the default_server directive),
  • vhost1.example.org,
  • vhost2.example.org

server {
    listen 0.0.0.0:80   default_server;
    listen 0.0.0.0:443  ssl default_server;

    server_name         example.org;

    root                /srv/www/example.org/www;

    ssl_certificate     ssl/example.org.crt;
    ssl_certificate_key ssl/example.org.key;

    try_files $uri $uri/ =404;
}

server {
    listen 0.0.0.0:80;
    listen 0.0.0.0:443  ssl;

    server_name         vhost1.example.org;

    root                /srv/www/vhost1.example.org/www;

    ssl_certificate     ssl/vhost1.example.org.crt;
    ssl_certificate_key ssl/vhost1.example.org.key;

    try_files $uri $uri/ =404;
}

server {
    listen 0.0.0.0:80;
    listen 0.0.0.0:443  ssl;

    server_name         vhost2.example.org;

    root                /srv/www/vhost2.example.org/www;

    ssl_certificate     ssl/vhost2.example.org.crt;
    ssl_certificate_key ssl/vhost2.example.org.key;

    try_files $uri $uri/ =404;
}

For testing purposes:

  • the file /srv/www/example.org/www/index.html contains the string default.
  • the file /srv/www/vhost1.example.org/www/index.html contains the string vhost1.
  • the file /srv/www/vhost2.example.org/www/index.html contains the string vhost2.

HTTP

Let's try that out with HTTP:

$ curl http://example.org
default

$ curl http://176.31.99.217
default

The default website is served because 176.31.99.217 does not match any virtual host's server_name directive. Now, what about the virtual hosts themselves?

$ curl http://vhost1.example.org
curl: (6) Could not resolve host: vhost1.example.org
$ curl http://vhost2.example.org
curl: (6) Could not resolve host: vhost2.example.org

Since the DNS does not know about vhost1.example.org and vhost2.example.org, we can't test our websites this way. We are offered (at least) two possibilities:

  • adding vhost1.example.org and vhost2.example.org to the file /etc/hosts. Example:

    $ cat /etc/hosts
    127.0.0.1       localhost
    176.31.99.217   vhost1.example.org
    176.31.99.217   vhost2.example.org
    
    $ curl http://vhost1.example.org
    vhost1
    $ curl http://vhost2.example.org
    vhost2
    
  • specifying the Host header manually using cURL's -H option. This is my favorite way in most cases. Example:

    $ curl http://176.31.99.217 -H 'Host: vhost1.example.org'
    vhost1
    $ curl http://176.31.99.217 -H 'Host: vhost2.example.org'
    vhost2
    

    The Host header has been placed in the HTTP request. We can verify it this way:

    $ curl -v http://176.31.99.217 -H 'Host: vhost1.example.org'
    * Rebuilt URL to: http://176.31.99.217/
    * Hostname was NOT found in DNS cache
    *   Trying 176.31.99.217...
    * Connected to 176.31.99.217 (176.31.99.217) port 80 (#0)
    > GET / HTTP/1.1
    > User-Agent: curl/7.38.0
    > Accept: */*
    > Host: vhost1.example.org
    

HTTPS

Now let's see what happens if we use the HTTPS version:

$ curl https://example.org
default

$ curl https://176.31.99.217
curl: (51) SSL: certificate subject name 'example.org' does not match target host name '176.31.99.217'

The second command fails because the default web server answered with its certificate, but that certificate is only valid for example.org, not for 176.31.99.217.

This is normal behaviour. We could solve this issue by, for example, creating a certificate valid for both example.org and 176.31.99.217.
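For illustration, such a certificate could be generated self-signed with a recent OpenSSL (1.1.1 or later, for the -addext flag). This is only a sketch with placeholder file names, not the certificate actually deployed on this server:

$ openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
    -subj '/CN=example.org' \
    -addext 'subjectAltName=DNS:example.org,IP:176.31.99.217' \
    -keyout example.org.key -out example.org.crt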

But what about requesting our virtual hosts? Well, pretty much the same happens as before (assuming you have reverted the earlier changes to your /etc/hosts file):

$ curl https://vhost1.example.org
curl: (6) Could not resolve host: vhost1.example.org
$ curl https://vhost2.example.org
curl: (6) Could not resolve host: vhost2.example.org

The good news is: it will still work if you modify your /etc/hosts file (hurray):

$ cat /etc/hosts
127.0.0.1       localhost
176.31.99.217   vhost1.example.org
176.31.99.217   vhost2.example.org

$ curl https://vhost1.example.org
vhost1
$ curl https://vhost2.example.org
vhost2

And the bad news is:

$ curl https://176.31.99.217 -H 'Host: vhost1.example.org'
curl: (51) SSL: certificate subject name 'example.org' does not match target host name '176.31.99.217'
$ curl https://176.31.99.217 -H 'Host: vhost2.example.org'
curl: (51) SSL: certificate subject name 'example.org' does not match target host name '176.31.99.217'

So why does it fail?

Well, as we showed previously, the Host header is analysed to figure out which website to serve. However, when HTTPS is used, the first thing the user agent and the server do is perform the TLS handshake, during which the server presents its certificate. At that point, the HTTP headers have not been sent yet.

So how does the web server decide which certificate to send?

Note

Ten years ago, this was simply not possible: web servers had to send the same certificate for every virtual host behind the same address and port.

Nowadays, there is a TLS extension named SNI (Server Name Indication) that was created to address this issue. SNI is supported by most browsers and operating systems. However, the TLS stack shipped with Windows XP does not support it, so Internet Explorer on Windows XP cannot use SNI. Most other browsers ship their own TLS stack and can therefore use SNI even on XP.

Briefly, the extension works by sending the requested server name in the ClientHello, the first message of the TLS handshake. This way, the remote server knows which certificate to reply with. Only then, once the handshake has completed, does the server receive the HTTP headers and analyse the Host header.
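To see the extension at work outside of cURL, you can drive the handshake yourself with openssl s_client; when connecting to an IP address, it sends no server name unless you pass -servername. This is only a sketch: the exact subject formatting depends on your OpenSSL version.

$ openssl s_client -connect 176.31.99.217:443 -servername vhost1.example.org </dev/null 2>/dev/null | openssl x509 -noout -subject
subject=CN = vhost1.example.org

$ openssl s_client -connect 176.31.99.217:443 </dev/null 2>/dev/null | openssl x509 -noout -subject
subject=CN = example.org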

Note

The Host header is not required to match the server name sent in the TLS handshake.

So what can we do?

  • We can still use the /etc/hosts file as previously stated.

  • We can also use cURL's --resolve option (which does pretty much the same thing as modifying the /etc/hosts file); the 443 below is the port being used. Example:

    $ curl https://vhost1.example.org --resolve vhost1.example.org:443:176.31.99.217
    vhost1
    $ curl https://vhost2.example.org --resolve vhost2.example.org:443:176.31.99.217
    vhost2
    
  • Or we can negotiate one certificate that we know is good and then use the Host header to get the content of another website:

    $ curl https://example.org -H 'Host: vhost1.example.org'
    vhost1
    $ curl https://example.org -H 'Host: vhost2.example.org'
    vhost2
    

    We just negotiated the certificate of example.org and got the content of another virtual host! We can verify:

    $ curl -v https://example.org -H 'Host: vhost1.example.org'
    ...
    * Server certificate:
    ...
    *     common name: example.org (matched)
    ...
    *     SSL certificate verify ok.
    ...
    vhost1
    

That's all folks.

About Me

Hello World,

I am a low-level software developer. I am also keen on software security.

I have been writing some small projects to satisfy my needs. I usually write open source code in order to share my work with people willing to:

  • understand the internal behaviour of a script or a binary
  • learn some concepts or tricks by reading the code
  • contribute or adapt it to their needs (and then ideally share it back)

I set up this website to write or share articles about development and security.

You can download my Curriculum Vitae here. You can also browse my public repositories here or on GitHub.