Introduction

This tutorial explains how to set up our MirageVPN solution. Among other things, it covers deploying a MirageVPN client (together with an OpenVPN server) on a machine in order to encrypt all your traffic. This tutorial is only for UNIX systems.

If you find typos, or find parts difficult to understand, please report an issue or pull request at the source repository.

Before we can start using our client, we need to have a server available. This short tutorial will show you how to set up a simple OpenVPN server with the aim of testing our client: we strongly advise you not to consider our configuration as 'production-ready'. We'll be using Debian 12 and OpenVPN 2.6.3.

Installation & Generation

You probably have SSH access to your server. Let's connect to it and install OpenVPN.

$ export OPENVPN_IP=<ipv4> # Must be set!
$ ssh root@$OPENVPN_IP
$ apt update
$ apt upgrade
$ apt install openvpn ufw

We then need to create a Public Key Infrastructure (PKI) so that we can manage who can connect via our VPN server. We'll use easy-rsa for this. The aim of a PKI is (among other things) to be able to add new users without reloading our server. We are going to create an "authority" and create our users' keys via this authority. In this way, our OpenVPN server will authenticate users not by their individual keys but by the fact that the keys were signed by our authority.

$ mkdir easy-rsa
$ ln -s /usr/share/easy-rsa/* ~/easy-rsa/
$ chmod 700 easy-rsa
$ cd easy-rsa
$ cat >vars<<EOF
set_var EASYRSA_ALGO "ec"
set_var EASYRSA_DIGEST "sha512"
EOF
$ ./easyrsa init-pki
$ ./easyrsa build-ca nopass
$ ./easyrsa build-dh
$ ./easyrsa gen-req server nopass
$ ./easyrsa sign-req server server
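
If you want to sanity-check the result (assuming openssl is installed on the server), you can inspect the freshly signed server certificate:

$ openssl x509 -in pki/issued/server.crt -noout -subject -dates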

We then need to copy the elements we need to set up our server and generate the final material required.

$ cp pki/issued/server.crt pki/ca.crt pki/dh.pem /etc/openvpn/server/
$ cp pki/private/server.key /etc/openvpn/server
$ cd /etc/openvpn/server
$ openvpn --genkey secret ta.key

Now we need to create a client identity [1].

$ cd ~/easy-rsa
$ ./easyrsa gen-req alice nopass
$ ./easyrsa sign-req client alice

The files we are ultimately interested in for our client are the newly created alice.crt and alice.key, together with ta.key. The certificate is used for authentication (to prove your identity to the server); ta.key protects the control channel between you and the server (tls-auth adds an HMAC signature to the TLS handshake packets).
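
You can check that the new certificate is indeed recognised by our authority (assuming openssl is installed, run from the easy-rsa directory):

$ openssl verify -CAfile pki/ca.crt pki/issued/alice.crt
pki/issued/alice.crt: OK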

[1] The example of generating the materials needed for the server here is very simplified. More robust methods (such as generating the server and client keys somewhere other than on the target server) are recommended. As a reminder, this description does not produce a "production-ready" OpenVPN server.

Configuration

We can now move on to configuring the server. This consists of a simple file and setting up a private network. The configuration file simply describes where the materials needed to launch the server are located.

$ cat >/etc/openvpn/server/server.conf<<EOF
proto tcp
port 1194
dev tun
ca ca.crt
cert server.crt
key server.key
dh dh.pem
topology subnet
server 10.8.0.0 255.255.255.0
ifconfig-pool-persist ipp.txt
keepalive 10 30
tls-auth ta.key 0
persist-key
persist-tun
user nobody
group nogroup
EOF

There are a few things to note about this configuration. The first is the use of tls-auth to protect our control channel with an additional HMAC signature. There are other ways of doing this, which we'll describe later, but let's start by having a working server. TCP is used as the transport protocol (instead of UDP). We will also assign a specific IP for our alice client:

$ echo "alice,10.8.0.2," >> /etc/openvpn/server/ipp.txt

Finally, we configure our network so that our clients can communicate with the Internet:

$ sysctl -w net.ipv4.ip_forward=1
$ ip route list default | cut -d' ' -f5
eth0
$ cat >>/etc/ufw/before.rules<<EOF
*nat
:POSTROUTING ACCEPT [0:0]
-A POSTROUTING -s 10.8.0.0/24 -o eth0 -j MASQUERADE
COMMIT
EOF
$ sed 's/DEFAULT_FORWARD_POLICY="DROP"/DEFAULT_FORWARD_POLICY="ACCEPT"/' \
  -i /etc/default/ufw
$ ufw allow 1194/tcp
$ ufw allow OpenSSH
# ufw is inactive by default on Debian; enable it so the rules (and NAT) apply
$ ufw enable
$ systemctl -f enable openvpn-server@server.service
$ systemctl start openvpn-server@server.service
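
The server should now be up and listening on port 1194. A quick check (ss comes with iproute2, installed by default on Debian):

$ systemctl status openvpn-server@server.service
$ ss -tlnp | grep 1194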

And here's a working OpenVPN server for testing our client!

We are going to use the mirage-router unikernel. The figure below shows a possible network configuration for your computer. This configuration consists of 2 interfaces:

  1. one to retrieve all the packets you want to send to the Internet
  2. one to send all encrypted packets to your VPN server only
(Figure: the unikernel's tap0 interface (10.8.0.3) attaches to the bridge br0 (10.8.0.2), which collects the computer's traffic, while its tap1 interface (10.0.0.2) attaches to the bridge br1 (10.0.0.1), through which the encrypted packets travel towards the OpenVPN server.)

The unikernel will then encrypt all the packets received and send them to the VPN server. The server can then decrypt them and send them to the Internet. The reverse path works the same way.

One disadvantage of this method is the assignment of IP addresses: they must be fixed in advance and agreed between the client and the server.

Network configuration

As with the virtualisation of any system, a network configuration stage is necessary so that the virtualised systems can communicate. The general idea is to create a 'bridge' to which we can attach the virtual interfaces used by our systems. In our configuration, 2 bridges are required. Note that we need to use the same IP address as the one our VPN server will allocate to us: in our previous configuration, we assigned alice the IP 10.8.0.2.

Create our bridges

Let's start by creating our bridges:

$ sudo ip link add name br0 type bridge
$ sudo ip link set dev br0 up
$ sudo ip address add 10.8.0.2/24 dev br0
$ sudo ip link add name br1 type bridge
$ sudo ip link set dev br1 up
$ sudo ip address add 10.0.0.1/24 dev br1

Create TAP interfaces

A solo5 unikernel needs a tap interface. This is a virtual interface on which our unikernel will define its IP address and connect to the network. Here's how to create our two tap interfaces:

$ sudo ip tuntap add mode tap tap0
$ sudo ip link set dev tap0 up
$ sudo ip link set tap0 master br0
$ sudo ip tuntap add mode tap tap1
$ sudo ip link set dev tap1 up
$ sudo ip link set tap1 master br1
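
You can check that each tap interface is attached to the expected bridge, and that the bridges carry the right addresses:

$ bridge link show
$ ip -br addr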

A tap interface (unlike tun) carries Ethernet frames. Some will note that we have configured our OpenVPN server with tun (dev tun). However, our unikernel transmits packets to the OpenVPN server without the Ethernet frame, so it is compatible with a server using a tun interface.

This choice is based on our observation that there are few OpenVPN server configurations with tap interfaces.

Firewall

As with the server, we need to enable our virtual machines to communicate with the outside world:

$ sudo sysctl net.ipv4.ip_forward=1
$ sudo iptables -t nat -A POSTROUTING -s 10.0.0.0/24 -j MASQUERADE

The aim of our last command is to let our unikernel communicate with the outside world even though it uses a private IP address (in our case 10.0.0.2). Note that this type of configuration becomes incompatible with using OpenVPN itself as a client.
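
You can verify that the rule is in place:

$ sudo iptables -t nat -S POSTROUTING
-P POSTROUTING ACCEPT
-A POSTROUTING -s 10.0.0.0/24 -j MASQUERADE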

Persistence

This configuration is not persistent, meaning that the next time you reboot your computer, br{0,1} and tap{0,1} will disappear. However, your system can manage the creation of these elements at boot time. In Debian, for example, you can modify the /etc/network/interfaces file to create your bridges at boot time:

$ cat >>/etc/network/interfaces <<EOF
auto br0
iface br0 inet static
  address 10.8.0.2
  netmask 255.255.255.0
  broadcast 10.8.0.255
  bridge_ports none
  bridge_stp off
  bridge_fd 0
  bridge_maxwait 0

auto br1
iface br1 inet static
  address 10.0.0.1
  netmask 255.255.255.0
  broadcast 10.0.0.255
  bridge_ports none
  bridge_stp off
  bridge_fd 0
  bridge_maxwait 0
EOF

Other distributions use other tools; Arch Linux, for example, uses netctl. Simply create new profiles such as:

$ cat >/etc/netctl/openvpn-bridge<<EOF
Description="OpenVPN Bridge"
Interface=br0
Connection=bridge
IP=static
Address='10.8.0.2/24'
SkipForwardingDelay=yes
EOF
$ cat >/etc/netctl/kvm-bridge<<EOF
Description="KVM Bridge"
Interface=br1
Connection=bridge
IP=static
Address='10.0.0.1/24'
SkipForwardingDelay=yes
EOF

And enable them:

$ sudo netctl enable openvpn-bridge
$ sudo netctl enable kvm-bridge

As far as tap interfaces are concerned, their management is often delegated to virtual machine managers such as libvirt, Xen or albatross. We will present these different solutions later, particularly for QubesOS, which uses Xen.

MirageVPN configuration

We can now start configuring our client. The materials required for our client are:

  1. our ca.crt
  2. our client's certificate alice.crt
  3. her private key alice.key
  4. the ta.key file we generated on our server

Our unikernel has no file system. The idea is to create an image of our configuration which can then be used by our unikernel. Fortunately, OpenVPN allows you to put the content of our materials directly into the configuration file:

$ export OPENVPN_IP=<ipv4> # Must be set!
$ cat >config.sh<<EOF
#!/bin/bash

CA_FILE=\$1
CRT_FILE=\$2
KEY_FILE=\$3
TA_FILE=\$4
OUTPUT_FILE=\$5

cat >\$OUTPUT_FILE<<PRELUDE
client
proto tcp
remote $OPENVPN_IP 1194
nobind
persist-key
cipher AES-256-CBC
remote-cert-tls server
PRELUDE

function extract() {
  cat \$2 | sed -ne "/-BEGIN \${1}-/,/-END \${1}-/p"
}

echo "<ca>" >> \$OUTPUT_FILE
extract "CERTIFICATE" \$CA_FILE >> \$OUTPUT_FILE
echo "</ca>" >> \$OUTPUT_FILE

echo "<cert>" >> \$OUTPUT_FILE
extract "CERTIFICATE" \$CRT_FILE >> \$OUTPUT_FILE
echo "</cert>" >> \$OUTPUT_FILE

echo "<key>" >> \$OUTPUT_FILE
extract "PRIVATE KEY" \$KEY_FILE >> \$OUTPUT_FILE
echo "</key>" >> \$OUTPUT_FILE

echo "tls-auth [inline] 1" >> \$OUTPUT_FILE
echo "<tls-auth>" >> \$OUTPUT_FILE
extract "OpenVPN Static key V1" \$TA_FILE >> \$OUTPUT_FILE
echo "</tls-auth>" >> \$OUTPUT_FILE

SIZE=\$(stat --printf="%s" \$OUTPUT_FILE)
truncate -s \$(( ( ( \$SIZE + 512 - 1 ) / 512 ) * 512 )) \$OUTPUT_FILE
EOF

Note that $OPENVPN_IP must be set to the public IP address of your OpenVPN server when you create the script, since the heredoc expands it at that point. The script generates a configuration file compatible with our unikernel. Just run it this way:

$ scp root@$OPENVPN_IP:/root/easy-rsa/pki/ca.crt .
$ scp root@$OPENVPN_IP:/root/easy-rsa/pki/issued/alice.crt .
$ scp root@$OPENVPN_IP:/root/easy-rsa/pki/private/alice.key .
$ scp root@$OPENVPN_IP:/etc/openvpn/server/ta.key .
$ chmod +x config.sh
$ ./config.sh ca.crt alice.crt alice.key ta.key alice.config

The last command creates a file which can be used as a "block" by our unikernel. The constraint is that its size must be a multiple of 512 and that the extra bytes must correspond to '\000'.
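
You can quickly verify this constraint:

$ SIZE=$(stat --printf="%s" alice.config)
$ test $(( SIZE % 512 )) -eq 0 && echo "OK"
OK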

How to launch your MirageVPN client

You can download a reproducible binary from our build platform builds.robur.coop. Get the bin/ovpn-router.hvt artifact.

We can now launch our unikernel. This involves using our Solo5 "tender" and defining the right routes to redirect all our traffic to br0 and to ensure that all encrypted traffic leaving our br1 goes to our OpenVPN server.

The Solo5 tender is available via apt or on GitHub.

$ apt install gnupg
$ curl -fsSL https://apt.robur.coop/gpg.pub | gpg --dearmor > /usr/share/keyrings/apt.robur.coop.gpg
$ echo "deb [signed-by=/usr/share/keyrings/apt.robur.coop.gpg] https://apt.robur.coop ubuntu-20.04 main" > /etc/apt/sources.list.d/robur.list
# replace ubuntu-20.04 with e.g. debian-11 on a Debian bullseye machine
$ apt update
$ apt install solo5

We need to know 2 pieces of information: the interface used to communicate with the Internet and our gateway.

$ export INTERFACE=$(ip route | grep default | cut -f5 -d' ')
$ export GATEWAY=$(ip route | grep default | cut -f3 -d' ')

We can therefore specify that any packets destined for our OpenVPN server must pass through our physical interface via our gateway (otherwise the encrypted packets would themselves be caught by the VPN routes we are about to add).

$ sudo ip route add $OPENVPN_IP via $GATEWAY dev $INTERFACE

Now we can launch our unikernel. It should be able to initialise a tunnel with your OpenVPN server. Here's the command to launch the unikernel with our Solo5 tender:

$ solo5-hvt --block:storage=alice.config \
  --net:service=tap1 --net:private=tap0 -- ovpn-router.hvt \
  --private-ipv4=10.8.0.3/24 --private-ipv4-gateway=10.8.0.2 \
  --ipv4=10.0.0.2/24 --ipv4-gateway=10.0.0.1

All we need to do now is redirect all our traffic to our unikernel. In this case, the unikernel uses the IP address 10.8.0.3, so we redirect all packets to this IP address. The two /1 routes together cover the whole IPv4 address space and take precedence over the default route without replacing it:

$ sudo ip route add 0.0.0.0/1 via 10.8.0.3 dev br0
$ sudo ip route add 128.0.0.0/1 via 10.8.0.3 dev br0
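
You can confirm that an arbitrary destination is now routed through the unikernel (output abbreviated):

$ ip route get 8.8.8.8
8.8.8.8 via 10.8.0.3 dev br0 src 10.8.0.2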

And that's it! All our traffic now goes through our unikernel, which connects directly to our OpenVPN server. We can confirm that, from the outside, we are seen with the public IP address of our OpenVPN server rather than our real public IP address.

$ test $(curl ifconfig.me) = $OPENVPN_IP
$ echo $?
0

MirageVPN and QubesOS

One advantage of MirageVPN is its ability to produce a unikernel, i.e. a fully-fledged operating system for a virtualizer such as Xen. In this respect, QubesOS users can secure their VMs' Internet connections via a MirageVPN client.

Unlike a standard OpenVPN client, the VM used is smaller and its attack surface is reduced accordingly, since the operating system only does VPN: an OpenVPN client would require a kernel (like Linux), a kernel configuration to redirect VM connections to the VPN tunnel, and several libraries installed in order to function.

In this chapter, we'll look at how to configure and install a MirageVPN client for QubesOS. We'll assume that the OpenVPN server is configured as described in this handbook.

Download and configuration

You can download the unikernel from its official repository on GitHub, or from our reproducible build platform builds.robur.coop.

Next, copy the unikernel to dom0 with this command (from a dom0 terminal). In this example, we've downloaded the unikernel inside the app-vm personal:

$ mkdir -p /var/lib/qubes/vm-kernels/qubes-miragevpn/
$ cd /var/lib/qubes/vm-kernels/qubes-miragevpn/
$ qvm-run -p personal 'cat qubes-miragevpn.xen' > vmlinuz

Still from dom0, we now need to create a new VM from the downloaded image. An empty initramfs file must also be created (for QubesOS < 4.2):

$ gzip -n9 < /dev/null > initramfs
$ qvm-create \
  --property kernel=qubes-miragevpn \
  --property kernelopts='' \
  --property memory=256 \
  --property maxmem=256 \
  --property netvm=sys-net \
  --property provides_network=True \
  --property vcpus=1 \
  --property virt_mode=pvh \
  --label=green \
  --class StandaloneVM \
  qubes-miragevpn
$ qvm-features qubes-miragevpn no-default-kernelopts 1

The configuration of the MirageVPN client for QubesOS is constrained in the same way as our MirageVPN client for KVM: the configuration must fit into a single file. This file must be packed into a tar archive and made available to the unikernel via qvm-volume.

$ tar cvf config.tar alice.config
$ qvm-volume import qubes-miragevpn:root config.tar
$ qvm-prefs --set qubes-miragevpn -- kernelopts '--config_key=alice.config'
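
You can double-check from dom0 that the kernel options were set:

$ qvm-prefs qubes-miragevpn kernelopts
--config_key=alice.config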

Finally, we need to configure a VM (like personal) to use our VPN client for all connections.

$ qvm-prefs --set <my-app-vm> netvm qubes-miragevpn

The MirageVPN server

The MirageVPN server is a unikernel which listens on a port for client connections, and provides Internet connectivity (with NAT) for all authenticated clients. A single network interface is used both for client connections and for providing connectivity.

The unikernel will encrypt all packets received from the Internet and send them to the respective client (if there's a NAT table entry for the quadruple of source IP, destination IP, source port, and destination port). All packets received from a client will be decrypted and forwarded to the Internet. If "client-to-client" is enabled in the configuration, packets from one client whose destination is another client will be forwarded by the server.

Scope of the server

The scope is at the moment limited to IPv4 traffic over the tunnel (no IPv6), and the server only listens on TCP (no UDP). Only layer 3 networking is supported (tun device); there is no support for tap devices.

The server will route all traffic to its default gateway. If "client-to-client" is specified, packets for other clients will be forwarded to the specific client.

Only a single network interface is used by the server; both listening for TCP connections and forwarding packets (using NAT) are done on it. NAT is always used.

Authentication

The server currently only authenticates via X.509 certificates. OpenVPN also supports username and password authentication, but since we cannot execute shell scripts in a MirageOS unikernel, the usual verification hook script won't work. If you need username and password authentication, please get in touch via our issue tracker and we will find a solution.

Getting the unikernel binary

You can download the unikernel binary from our reproducible build infrastructure. Download the bin/ovpn-server.hvt artifact. If you did that, skip to "VPN Configuration".

Building from source (alternative)

If you prefer not to use the reproducible binary, here is how to build the unikernel from source.

Prerequisites

First, make sure to have "opam" and "mirage" installed on your system.
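
A minimal sketch, assuming opam is already installed (see the opam documentation otherwise):

$ opam init
$ opam install mirage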

Git repository

Retrieve the source code of the MirageVPN server:

$ git clone https://github.com/robur-coop/miragevpn.git

Building

Inside the cloned repository, execute mirage configure (other targets are available; please check the mirage documentation):

$ cd miragevpn/mirage-server
$ mirage configure -t hvt
$ make

The result is a binary, dist/ovpn-server.hvt, which we will use later.

VPN Configuration

The configuration needs to be stored in a block device. Use the provided tool openvpn-config-parser --server to embed all external files into a single file. You can use the configuration and keys as described in the OpenVPN server chapter of this handbook.

$ dune exec -- openvpn-config-parser --server server.conf > server.conf.full

Network configuration for the unikernel

All you need is a tap interface to run the unikernel on. Your unikernel also needs to be reachable from the outside (on the listening port) and able to communicate with the outside. There are multiple approaches to achieve this; we will focus on setting up your firewall:

$ sudo sysctl -w net.ipv4.ip_forward=1
$ ip route list default | cut -d' ' -f5
eth0

# allow the server to communicate to the outside
$ sudo iptables -t nat -A POSTROUTING -s 10.0.0.0/24 -j MASQUERADE

# redirect port 1194 to the unikernel
$ sudo iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 1194 -j DNAT \
  --to-destination 10.0.0.2:1194

Setting up the network interface:

$ sudo ip tuntap add mode tap tap0
$ sudo ip link set dev tap0 up
$ sudo ip addr add 10.0.0.1/24 dev tap0

We're all set now: the unikernel is allowed to communicate to the outside, port 1194 is forwarded to the unikernel IP address, and a tap0 interface exists where the host system has the IP address 10.0.0.1 configured.

Launching MirageVPN-server

To launch the unikernel, you need a Solo5 tender (either the solo5-hvt installed via apt earlier, or the one installed as part of the build from source).

$ solo5-hvt --block:storage=server.conf.full --net:service=tap0 -- \
    dist/ovpn-server.hvt --ipv4=10.0.0.2/24 --ipv4-gateway=10.0.0.1

Connecting client(s)

Now, clients can connect to the running server using OpenVPN, MirageVPN, or any other implementation. The client configuration prepared in the earlier chapter (alice) can be used for this. Execute the following on your client machine (as root, since OpenVPN needs to create a tun interface):

$ openvpn alice.config

Now, all your traffic will be redirected through the VPN server.
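
As before, you can confirm from the client that your traffic exits via the VPN server (assuming $OPENVPN_IP is set as earlier):

$ test $(curl ifconfig.me) = $OPENVPN_IP && echo "OK"
OK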