Introduction
mnet is a library that provides the networking foundation for unikernels. It
lets you build services ranging from public-facing web servers to more
specialized tools such as a DNS resolver (to circumvent censorship) or a DNS
blocker (to filter out advertising). In this short book, we will walk through
several practical examples that show what unikernels in OCaml can do.
What is a unikernel?
A unikernel is a specialized, single-purpose operating system that bundles your application code with only the OS components it actually needs. Nothing more. Instead of running your OCaml application on top of a general-purpose OS such as Linux (which ships with thousands of features you will never use), a unikernel compiles your code together with only the minimal set of libraries needed for networking, storage, and memory management.
The result is a single bootable image that runs inside a sandboxed environment. There is no shell, no unnecessary drivers, and no multi-user support. It is just your application and the bare minimum required to run it.
In practice, building an OCaml unikernel relies on two key components.
Solo5 provides the sandboxed execution environment: it defines a
minimal, stable interface between your unikernel and the underlying host
(whether that host is a hypervisor such as KVM, or a sandboxed Linux process
using seccomp). Solo5 handles the low-level details of how your
unikernel boots, accesses network interfaces, and reads from block devices. On
top of Solo5, mkernel is a library that lets you write unikernels
in OCaml using the Miou scheduler. It exposes the devices that Solo5
provides (network interfaces and block storage) and gives you a familiar OCaml
programming model for building your application.
When you compile your OCaml code with mkernel, the build system produces a
standalone image that can be launched using a Solo5 tender (a small host-side
program such as solo5-hvt). The practical benefits are significant: a smaller
attack surface, faster boot times (often measured in milliseconds), a reduced
memory footprint, and simpler deployment, since the entire system is a single
artifact.
The ecosystem for OCaml unikernels
mnet is part of a broader ecosystem of OCaml libraries that our
cooperative maintains for unikernel development. This ecosystem provides
pure OCaml reimplementations of essential components (networking, cryptography,
and more) so that you can build fully self-contained applications without
relying on C bindings or system libraries. Here are some of the libraries we
use throughout this tutorial.
- At the foundation, mkernel provides the runtime, including hypercalls for network and block devices, clock access, and integration with the Miou scheduler.
- For networking, utcp is a pure OCaml implementation of the TCP protocol, used internally by mnet. It originated from a manual extraction of a HOL4 specification of the TCP state machine (described in detail in this paper).
- ocaml-solo5 is a variant of the OCaml compiler that targets Solo5, making cross-compilation possible.
- On the cryptography side, mirage-crypto provides our cryptographic primitives, and some of its operations are derived from formally verified proofs in Rocq/Coq via the fiat project.
We will encounter more of these libraries as we go.
Prerequisites
Unikernels require a different build process than standard executables. We are
actively improving the development workflow for mkernel, but it is still
evolving. Everything described in this tutorial is accurate and functional,
but you can expect the process to become smoother over time. To get started,
you will need:
- OCaml version 5.0.0 or later, along with OPAM, the OCaml package manager (you can find installation instructions here).
- ocaml-solo5, which lets you compile an OCaml project as a unikernel, as well as the Solo5 tools (solo5-hvt or solo5-spt) for running unikernels.
- Finally, you will need access to a hypervisor such as KVM, BHyve, or VMM.
You can install everything you need using these commands:
$ opam switch create 5.4.0
$ eval $(opam env)
$ opam install solo5
$ opam install ocaml-solo5
$ opam install mkernel
$ opam install mnet
To run a unikernel, you need access to a hypervisor or a sandboxing mechanism. On Linux, the simplest option is KVM (Kernel-based Virtual Machine). You can check whether your system supports it by running:
$ ls /dev/kvm
If this device exists, you are ready to go. You may need to add your user to
the kvm group so that you can access it without root privileges:
$ sudo usermod -aG kvm $USER
After running this command, log out and log back in for the change to take
effect. Once KVM is available, you can run your unikernel with the solo5-hvt
tender, which uses KVM to execute your image in an isolated virtual
environment.
Your first unikernel
A unikernel is an executable that must be cross-compiled. This means it is
built using the ocaml-solo5 compiler rather than the regular host compiler.
Because of this, the build configuration looks slightly different from what you
might be used to:
$ cat >dune<<EOF
(executable
(name main)
(modules main)
(link_flags :standard -cclib "-z solo5-abi=hvt")
(libraries mkernel)
(foreign_stubs
(language c)
(names manifest)))
(rule
(targets manifest.c)
(deps manifest.json)
(enabled_if
(= %{context_name} "solo5"))
(action
(run solo5-elftool gen-manifest manifest.json manifest.c)))
(rule
(targets manifest.c)
(enabled_if
(= %{context_name} "default"))
(action
(write-file manifest.c "")))
(vendored_dirs vendors)
EOF
To boot as quickly as possible, a unikernel does not perform device
discovery: it never asks the tender which devices are available. Instead, it
contains a static list of the devices it requires. This list is written as
a JSON file, which is then compiled into the manifest.c file that becomes
part of your unikernel:
$ cat >manifest.json<<EOF
{"type":"solo5.manifest","version":1,"devices":[]}
EOF
To cross-compile your executable with ocaml-solo5, you need to define a new
build context in the dune-workspace file:
$ cat >dune-workspace<<EOF
(lang dune 3.0)
(context (default))
(context (default
(name solo5)
(host default)
(toolchain solo5)
(disable_dynamically_linked_foreign_archives true)))
EOF
Cross-compilation requires that the source code of your dependencies (in this
case, mkernel) is available locally. You can fetch it with opam source:
$ mkdir vendors
$ opam source mkernel --dir vendors/mkernel
You can now create your unikernel:
$ cat >dune-project<<EOF
(lang dune 3.0)
EOF
$ cat >main.ml<<EOF
let () = Mkernel.(run []) @@ fun () ->
print_endline "Hello World!"
EOF
$ dune build ./main.exe
Launching a unikernel is different from launching a regular executable, because
it runs as a virtual machine. You need to use a tender to start it. Here, we
use solo5-hvt:
$ solo5-hvt -- _build/solo5/main.exe --solo5:quiet
Hello World!
Congratulations, you have just created your first unikernel! In the next
chapter, we will build a small echo server using mnet and set up networking
for your unikernel. Unikernels come with their own concepts and constraints
that are important to understand. The mkernel documentation
covers these fundamentals in depth, explaining how Solo5 and OCaml fit
together.
Important constraints
There are two essential things to keep in mind when building unikernels.
The first is that the Unix module is not available. The ocaml-solo5
compiler does not provide the unix.cmxa library. Since there is no underlying
operating system, system calls like Unix.openfile or Unix.socket simply do
not exist. This means that any library, including transitive dependencies, that
relies on the Unix module cannot be used in a unikernel. In practice, this is
why our ecosystem relies on pure OCaml reimplementations of protocols and
services (networking, DNS, TLS, and so on) rather than wrappers around C system
libraries.
The second is that dependencies must be vendored. Nearly all of your
dependencies need their source code present locally in a vendors/ directory.
This is because cross-compilation with ocaml-solo5 requires compiling C
stubs (if any) with the Solo5 toolchain, which is only possible when dune
has direct access to the source files. You can vendor a dependency with
opam source:
$ opam source <package> --dir vendors/<package>
This must be done for every dependency that your unikernel uses: not just your direct dependencies, but their transitive dependencies as well.
Echo server
mnet is a TCP/IP stack written entirely in OCaml, designed for unikernels.
Because the network stack is reimplemented in a memory-safe language, mnet
benefits from OCaml’s type system and from the broader ecosystem of formal
verification tools that can produce correct-by-construction OCaml code. This
means fewer classes of bugs (no buffer overflows, no use-after-free) and a
codebase that is easier to audit and reason about than a traditional C
implementation.
A note on performance
Before going further, it is worth setting realistic expectations about performance.
A pure OCaml TCP/IP stack will not match the raw throughput of an optimized C implementation. The garbage collector introduces pause times, and OCaml’s memory representation adds some overhead compared to bare pointers and manual memory management.
However, the language is only part of the story. Regardless of whether a
unikernel is written in OCaml or C, it faces an inherent I/O disadvantage
compared to an application running directly on the host. A regular process on
Linux can make system calls that interact with the kernel’s network stack
directly. A unikernel cannot: it runs inside a sandboxed environment and must
go through two layers of indirection for every I/O operation. First, the
unikernel issues a hypercall to the tender (the host-side process that manages
the virtual machine). Then, the tender issues a system call to the host kernel,
which actually performs the I/O. This double indirection adds latency to every
network operation. There are techniques to reduce this cost (for example,
shared-memory ring buffers between the tender and the unikernel, similar to
what virtio provides), but the overhead can never be fully eliminated. This
is a fundamental constraint of the isolation model, not a limitation of any
particular implementation.
If your goal is to build the fastest possible web server, a unikernel is not the right tool. But raw throughput is rarely the only metric that matters. Unikernels excel in other dimensions: they have a minimal attack surface because there is no shell, no unused drivers, and no package manager, only the code your application needs. They boot in milliseconds because there is no operating system to initialize, and their images are only a few megabytes compared to hundreds for a typical container. The per-instance cost of running a unikernel is therefore very low.
These properties naturally lead to a different way of thinking about services. Rather than building a single monolithic application, you can decompose your system into small, focused unikernels, where each one does one thing, boots quickly, and consumes minimal resources. The deployment cost per component becomes low enough that this architecture is practical, not just theoretical.
In short, do not expect a unikernel to outperform a native application in I/O. Instead, think of unikernels as a way to build smaller, safer, and more composable services.
Initialization
The TCP/IP stack depends on a source of randomness (for generating TCP sequence
numbers, IPv6 addresses, and so on). We use mirage-crypto with its
Fortuna engine for this purpose. mkernel provides a mechanism to
declare the resources that a unikernel needs before it starts, including
devices (such as a network interface) and other values that require
initialization. To set up a working TCP/IP stack, we need three things: the
static IPv4 address to assign to the unikernel, an initialized random number
generator, and a network device (here named "service").
module RNG = Mirage_crypto_rng.Fortuna
let ( let@ ) finally fn = Fun.protect ~finally fn
let rng () = Mirage_crypto_rng_mkernel.initialize (module RNG)
let rng = Mkernel.map rng Mkernel.[]
let () =
let ipv4 = Ipaddr.V4.Prefix.of_string_exn "10.0.0.2/24" in
Mkernel.(run [ rng; Mnet.stack ~name:"service" ipv4 ])
@@ fun rng (stack, _tcp, _udp) () ->
let@ () = fun () -> Mirage_crypto_rng_mkernel.kill rng in
let@ () = fun () -> Mnet.kill stack in
print_endline "Hello World!"
One important thing to notice in this code is the use of finalizers. Every
resource we create is paired with a cleanup function via the let@ operator
(which is a shorthand for Fun.protect ~finally). This is not optional: Miou,
the scheduler used in our unikernels, requires that all resources be properly
released before the program exits. The random number generator, for instance,
spawns a background task that continuously feeds entropy to the Fortuna engine.
If that task is not terminated explicitly with
Mirage_crypto_rng_mkernel.kill, the scheduler will reject the program at
exit. The Miou tutorial covers this topic in more detail.
The same principle applies to mnet. Calling Mnet.stack starts several
background daemons (an Ethernet frame reader, an ARP responder, TCP timers, and
others) that run for the entire lifetime of the stack. Mnet.kill terminates
all of them. Forgetting to call it would leave dangling tasks, which Miou
treats as an error.
This cleanup pattern is not specific to unikernels: it applies to every application built with Miou. Whenever you create a long-lived resource, you should attach a finalizer to ensure it is released, even if an exception interrupts the normal control flow.
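To make the pattern concrete outside the unikernel context, here is a minimal sketch using only a stdlib resource (an input channel; the function name is ours). The finalizer closes the channel even if the body raises, exactly as `let@` does for the RNG and the stack above:

```ocaml
(* The same finalizer shorthand used throughout this book. *)
let ( let@ ) finally fn = Fun.protect ~finally fn

(* Read the first line of a file; the channel is closed whether
   input_line succeeds or raises (e.g. End_of_file on an empty file). *)
let read_first_line path =
  let ic = open_in path in
  let@ () = fun () -> close_in ic in
  input_line ic
```

The shape is always the same: acquire the resource, immediately attach its cleanup with `let@`, then use it freely below.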
Compilation
As explained in the introduction, cross-compiling a unikernel with
ocaml-solo5 requires that the source code of all dependencies (including
transitive ones) be available locally in a vendors/ directory. Our echo
server depends on mnet and its own transitive dependencies, so we need to
vendor all of them. There is one point worth mentioning about Zarith: it needs to be
compiled with dune. For several years, the Mirage team has maintained a fork
of Zarith that uses dune, available here. To make unikernel
compilation work, we need to pin this package. Here are the commands to
run:
$ opam pin git+https://github.com/mirage/Zarith.git#zarith-1.14
$ opam source bstr --dir vendors/bstr
$ opam source mnet --dir vendors/mnet
$ opam source mirage-crypto-rng-mkernel --dir vendors/mirage-crypto-rng-mkernel
$ opam source gmp --dir vendors/gmp
$ opam source digestif --dir vendors/digestif
$ opam source kdf --dir vendors/kdf
$ opam source utcp --dir vendors/utcp
$ opam source zarith --dir vendors/zarith
Next, we update the dune file to declare the libraries our unikernel depends
on, the Solo5 ABI we are targeting, and the C stub for the device manifest:
(executable
(name main)
(modules main)
(link_flags :standard -cclib "-z solo5-abi=hvt")
(libraries
mkernel
mirage-crypto-rng-mkernel
mnet
gmp)
(foreign_stubs
(language c)
(names manifest)))
Finally, since our unikernel now uses a network device, we need to declare it
in manifest.json. The name "service" must match the name argument we
passed to Mnet.stack in the code above:
{"type":"solo5.manifest","version":1,"devices":[{"name":"service","type":"NET_BASIC"}]}
Tip
To simplify the workflow around device manifests, you can run your unikernel as a regular executable, and it will print the manifest it expects to stdout:
$ dune exec ./main.exe > manifest.json
Network configuration
Before we can run our unikernel, we need to set up a virtual network on the
host. This step is necessary because a unikernel does not share the host’s
network stack; it implements its own (that is the whole point of mnet). From
the unikernel’s perspective, it is a machine with its own Ethernet interface,
its own IP address, and its own TCP/IP stack. It needs to be connected to a
network just like a physical machine would be plugged into a switch.
On Linux, we can create this virtual network using two standard kernel
features: tap interfaces and bridges (bridge-utils).
Think of a physical network in an office. Each computer has an Ethernet port and a cable that connects it to a switch (a device whose only job is to forward Ethernet frames between the machines plugged into it). Any machine on the switch can talk to any other machine on the same switch, because the switch delivers each frame to the right port based on the destination MAC address. This forms what is called a local area network, or LAN.
We need to reproduce this setup virtually. A tap interface plays the role of
the Ethernet cable: it is a network device created by the Linux kernel that
behaves like a physical network card, except that no real hardware is involved.
When the tender (solo5-hvt) starts the unikernel, it attaches the unikernel’s
network device to a tap interface. From that point on, every Ethernet frame
that the unikernel sends appears on the tap interface, and every frame written
to the tap interface is delivered to the unikernel. A bridge plays the role of
the switch: it connects several network interfaces together and forwards
Ethernet frames between them. When we attach the tap interface to a bridge, the
unikernel becomes part of the local network formed by that bridge, just as
plugging a cable into a switch makes a computer part of the office LAN.
There is one more piece to the puzzle. A local network lets machines talk to each other, but it does not, by itself, provide access to the outside world. In our office analogy, the switch connects the computers to each other, but there must be a router somewhere that connects the office LAN to the internet. That router is what we call a gateway: it is the machine that knows how to forward packets beyond the local network. When a machine wants to reach an IP address that is not on its local network, it sends the packet to the gateway, and the gateway takes care of routing it further.
In our setup, the host plays the role of the gateway. We assign an IPv4 address to the bridge, which gives the host a presence on the unikernel’s local network. The unikernel is then configured to use that address as its gateway. When the unikernel wants to reach an address outside the local network (for instance, a DNS server on the internet) it sends the packet to the host via the bridge, and the host forwards it through its own network connection.
This is admittedly more system administration than development. The configuration we describe here is simple and generic; your network topology may require adjustments. But it is worth understanding what these pieces do, because a unikernel sits at the intersection of application development and deployment. Understanding both sides is part of what makes the unikernel approach powerful.
Here is how to set this up on Linux:
$ sudo ip link add br0 type bridge
$ sudo ip addr add 10.0.0.1/24 dev br0
$ sudo ip tuntap add tap0 mode tap
$ sudo ip link set tap0 master br0
$ sudo ip link set br0 up
$ sudo ip link set tap0 up
The first command creates a bridge named br0. The second assigns it the
address 10.0.0.1 on the 10.0.0.0/24 subnet (this is the address the
unikernel will use as its gateway). The third command creates a tap interface
named tap0. The fourth attaches it to the bridge, and the last two bring both
interfaces up.
Launching our unikernel
Now that the network is in place, we can run our unikernel. The
--net:service=tap0 flag tells the tender to connect the unikernel’s
"service" network device to the tap0 interface we just created:
$ solo5-hvt --net:service=tap0 -- ./_build/solo5/main.exe --solo5:quiet
Hello World!
The output looks the same as before: the unikernel prints its message and
exits. But behind the scenes, something new happened. mnet initialized its
TCP/IP stack and connected to the virtual network. We can observe this by
capturing traffic on the bridge with tcpdump:
$ sudo tcpdump -i br0
listening on br0, link-type EN10MB (Ethernet), snapshot length 262144 bytes
14:30:00.000000 ARP, Request who-has 10.0.0.2 tell 10.0.0.2, length 28
The ARP request you see is the unikernel announcing its presence on the local network. It is asking "who has this IP address?" to verify that no other machine is already using it. This is a standard part of the IPv4 initialization process (known as Gratuitous ARP).
Implementing the echo server
We can now turn our “Hello World” unikernel into an actual echo server. The idea is straightforward: we listen on a TCP port, accept incoming connections, and for each client, read whatever they send and write it back until they disconnect.
If you have already followed the Miou tutorial, the concurrency pattern will look familiar. Each client connection is handled in its own Miou task, and we use Miou’s orphans mechanism to keep track of these tasks and collect their results as they complete.
let handler flow =
let finally = Mnet.TCP.close in
let r = Miou.Ownership.create ~finally flow in
Miou.Ownership.own r;
let buf = Bytes.create 0x7ff in
let rec go () =
match Mnet.TCP.read flow buf with
| 0 -> Miou.Ownership.release r
| len ->
let str = Bytes.sub_string buf 0 len in
Mnet.TCP.write flow str;
go () in
go ()
let rec clean_up orphans =
match Miou.care orphans with
| Some None | None -> ()
| Some (Some prm) ->
match Miou.await prm with
| Ok () -> clean_up orphans
| Error exn ->
Logs.err (fun m -> m "Unexpected exception: %s" (Printexc.to_string exn))
let () =
let ipv4 = Ipaddr.V4.Prefix.of_string_exn "10.0.0.2/24" in
Mkernel.(run [ rng; Mnet.stack ~name:"service" ipv4 ])
@@ fun rng (stack, tcp, _udp) () ->
let@ () = fun () -> Mirage_crypto_rng_mkernel.kill rng in
let@ () = fun () -> Mnet.kill stack in
let rec go orphans listen =
clean_up orphans;
let flow = Mnet.TCP.accept tcp listen in
let _ = Miou.async ~orphans @@ fun () -> handler flow in
go orphans listen in
go (Miou.orphans ()) (Mnet.TCP.listen tcp 9000)
The handler function is the per-client logic. It registers the TCP connection
(flow) with Miou’s ownership system so that the connection is automatically
closed if the task is cancelled or crashes. It then enters a loop where it
reads into a buffer, writes back what was received, and repeats. When the
client closes the connection, read returns 0 and the handler releases the
resource.
The clean_up function iterates over completed tasks in the orphans set.
This is how Miou lets you collect the results of concurrent tasks without
blocking the accept loop. If a handler raised an unexpected exception, we log
it here rather than letting it propagate silently.
The main entry point ties everything together. It initializes the stack (as we
saw earlier), then enters an accept loop where it waits for a new client with
Mnet.TCP.accept, spawns a handler task with Miou.async, and repeats. The
Mnet.TCP.listen call prepares port 9000 for incoming connections, much like
the listen system call in the Unix socket API.
You will notice that the mnet API intentionally mirrors the Unix socket API.
The listen, accept, read, write, and close functions all work the way
you would expect. This is a deliberate design choice: rather than inventing a
new abstraction, we keep the interface familiar so that the only new concepts
you need to learn are related to the unikernel model itself, not to the
networking API.
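To illustrate the resemblance, here is the same read/write loop written against the stdlib Unix module. It only works on a regular host (remember that Unix does not exist under ocaml-solo5), and the function name and fd-pair shape are ours, but the code has the same structure as the mnet handler above:

```ocaml
(* Echo loop over plain Unix file descriptors: read from src, write
   back to dst, stop when the peer closes (read returns 0). Compare
   with the mnet handler: same buffer, same recursion, same 0-means-EOF
   convention. *)
let echo ~src ~dst =
  let buf = Bytes.create 0x7ff in
  let rec go () =
    match Unix.read src buf 0 (Bytes.length buf) with
    | 0 -> ()
    | len -> ignore (Unix.write dst buf 0 len); go ()
  in
  go ()
```

Only the module prefix and the flow type change when moving between the two worlds; the control flow carries over unchanged.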
Testing the echo server
We can now build, launch, and test the echo server. We start the unikernel in
the background, then connect to it with nc (netcat) from the host:
$ solo5-hvt --net:service=tap0 -- ./_build/solo5/main.exe --solo5:quiet &
$ UNIKERNEL=$!
$ nc -q0 10.0.0.2 9000
Hello World!
Hello World!
^D
$ kill $UNIKERNEL
solo5-hvt: Exiting on signal 15
We type “Hello World!” and the unikernel sends it right back. Pressing Ctrl-D
closes the connection. The $! variable captures the PID of the background
process so that we can stop the unikernel cleanly with kill when we are done.
Conclusion
And with that, we have a working echo server running as a unikernel. As you have seen, the process is fairly straightforward once you know the key steps: vendor your dependencies, declare your devices, and configure the virtual network on the host. The networking concepts (tap interfaces, bridges, and gateways) may be unfamiliar if you come from a pure application development background, but they quickly become second nature with a bit of practice.
Now that the foundations are in place, the fun really begins. In the next chapter, we will build on what we have learned here and implement a web server. Our cooperative provides implementations of several protocols (such as ocaml-tls and ocaml-dns) that you can use to build a wide range of services. We hope you are as excited as we are to see what you will build next.
Web server
In the previous chapter, we built an echo server: a unikernel that accepts TCP connections and sends back whatever it receives. Now we will take a bigger step and implement an HTTP web server.
Between the raw Ethernet frames we started with and the HTTP protocol lies a whole stack of intermediate layers (TCP, IP, TLS), each of which is interesting in its own right. Later chapters may explore some of those layers. For now, we jump straight to HTTP because it demonstrates something important: even though a unikernel is a minimal, single-purpose system, it can host a fully featured web service.
This chapter assumes you have already completed the echo server
chapter. You should have a working build setup (dune, dune-workspace,
manifest.json) and a configured network (a tap interface tap0 and a bridge
br0) on your host.
Vendoring the dependencies
Building an HTTP server requires more libraries than a simple echo server. The HTTP protocol is layered on top of several components: a parser, a serializer, content-type handling, and a framework to tie them all together. We need to vendor all of them into our project:
$ opam source flux --dir vendors/flux
$ opam source h1 --dir vendors/h1
$ opam source httpcats --dir vendors/httpcats
$ opam source mhttp --dir vendors/mhttp
$ opam source multipart_form-miou --dir vendors/multipart_form-miou
$ opam source prettym --dir vendors/prettym
$ opam source tls --dir vendors/tls
$ opam source x509 --dir vendors/x509
$ opam source vifu --dir vendors/vifu
That is quite a few packages, so let us walk through what each one does.
- The library we will interact with most directly is vifu. It is a web framework for OCaml 5 designed specifically for unikernels (the u in vifu stands for unikernel). It provides routing, request handling, and response building: everything we need to define HTTP endpoints. vifu is the unikernel variant of vif, a web framework that our cooperative uses in production for builds.robur.coop. If you are familiar with web frameworks such as Express (JavaScript) or Sinatra (Ruby), vifu fills the same role. We also recommend this tutorial on implementing a chatroom with websockets using Vif.
- Under the hood, mhttp, h1, and httpcats together implement the HTTP protocol for unikernels. They handle parsing of incoming HTTP requests and serialization of outgoing HTTP responses, so we do not have to deal with the wire format ourselves.
- flux is a streaming library used internally by the HTTP stack to process request and response bodies without buffering them entirely in memory. If you are interested in handling streams with Miou, we have written a tutorial on the subject.
- For file uploads, multipart_form-miou handles multipart/form-data parsing, which is the encoding that web browsers use when uploading files through an HTML form.
- Finally, tls and x509 provide TLS encryption and X.509 certificate handling. Even though our example uses plain HTTP (no encryption), vifu depends on these libraries because it supports HTTPS out of the box.
Note
As with the echo server, these dependencies must be vendored because cross-compilation with ocaml-solo5 requires local access to all source code. See the introduction for a reminder of why.
A minimal web server
The only change to the build configuration compared to the echo server is
adding vifu (and its dependency gmp) to the library list in the dune
file. The dune-workspace, dune-project, and manifest.json files remain
the same:
- (libraries mkernel mirage-crypto-rng-mkernel mnet)
+ (libraries mkernel mirage-crypto-rng-mkernel mnet vifu gmp)
Here is the complete web server:
module RNG = Mirage_crypto_rng.Fortuna
let ( let@ ) finally fn = Fun.protect ~finally fn
let rng () = Mirage_crypto_rng_mkernel.initialize (module RNG)
let rng = Mkernel.map rng Mkernel.[]
let index req _server () =
let open Vifu.Response.Syntax in
let* () = Vifu.Response.with_text req "Hello World!\n" in
Vifu.Response.respond `OK
let () =
let ipv4 = Ipaddr.V4.Prefix.of_string_exn "10.0.0.2/24" in
Mkernel.(run [ rng; Mnet.stack ~name:"service" ipv4 ])
@@ fun rng (stack, tcp, _udp) () ->
let@ () = fun () -> Mirage_crypto_rng_mkernel.kill rng in
let@ () = fun () -> Mnet.kill stack in
let cfg = Vifu.Config.v 80 in
let routes =
let open Vifu.Uri in
let open Vifu.Route in
[ get (rel /?? any) --> index ] in
Vifu.run ~cfg tcp routes ()
The first half is the same initialization and cleanup boilerplate from the echo
server. The new part is the index handler and the route table.
The index handler receives an HTTP request, sets the response body to
"Hello World!\n" using with_text, and responds with HTTP status 200
(OK). The let* syntax, provided by Vifu.Response.Syntax, sequences these
response-building operations.
The route table maps URL patterns to handlers. Here we define a single route.
- The get combinator matches HTTP GET requests.
- The rel part starts a URL pattern relative to the root /.
- Adding /?? any tells the route to accept any query string on that path.
- Finally, --> connects the pattern to the index handler.
So this route matches GET / (with or without query parameters) and calls
index. The vif tutorial covers the routing DSL in more
detail, including how to capture path segments and query parameters.
The last two lines tie everything together. Vifu.Config.v 80 configures the
server to listen on port 80. Vifu.run takes the TCP state from our network
stack and the route table, then starts serving HTTP requests. Unlike the echo
server, where we wrote the accept loop ourselves, vifu handles connection
management, HTTP parsing, and request dispatching for us.
Building and running
The build and launch steps are identical to the echo server. If you have not set up the network yet, the echo chapter walks you through it.
$ dune build ./main.exe
$ solo5-hvt --net:service=tap0 -- ./_build/solo5/main.exe --solo5:quiet &
$ UNIKERNEL=$!
$ curl http://10.0.0.2/
Hello World!
$ kill $UNIKERNEL
solo5-hvt: Exiting on signal 15
With just a few lines of OCaml, we have a working HTTP server running as a
unikernel. You can also point a web browser at http://10.0.0.2/ and see the
same result.
Crunch & Zip!
A plain-text “Hello World” is nice, but a real web service needs to serve HTML
pages, stylesheets, and other assets. To show what vifu can do, we are going
to build a small but practical service: a web page where users can upload files
and receive a zip archive in return. This raises two questions. First, how do
we serve static files (such as index.html) from a unikernel that has no file
system? And second, how do we receive uploaded files from the user and zip
them? Let us tackle the first question now.
How to open a file?
A unikernel has access to a block device (essentially a raw disk), but there is no file system built on top of it. We could implement one, but that would add significant complexity for what is actually a simple need: serving files whose content is known at build time and never changes at runtime.
Instead of reading files from disk, we can embed them directly into the unikernel binary. The idea is straightforward: a tool reads our files at build time and generates an OCaml module where each file’s content is available as a plain value. That module is compiled and linked into the unikernel like any other code. At runtime, serving a file is just reading an OCaml value. No I/O, no file system, no overhead.
The tool for this job is mcrunch. You can install it with:
$ opam install mcrunch
This is a great example of one of the most powerful advantages of the unikernel approach: we get to rethink what our application truly needs instead of carrying along assumptions from traditional operating systems. In a conventional server, we would reach for a file system without a second thought. But a file system is a remarkably complex piece of software: it handles permissions, directories, concurrent writes, journaling, and much more. Do we actually need all of that here? For our web service, the answer is clearly no. Our HTML and CSS files are known at build time, they never change at runtime, and we only need to read them. There is no need for write access, no need for a directory hierarchy, and we are only dealing with a couple of small files.
Once we recognize this, we can choose a much simpler solution: embed the file
contents directly into our binary. This is what we mean by reifying the
building blocks of our application. Instead of pulling in a general-purpose
abstraction such as a file system, we pick the simplest tool that actually
solves our problem. The unikernel philosophy encourages this kind of thoughtful
minimalism, and mcrunch is a perfect example of it in practice.
Let us create a small upload page. Save the following as index.html:
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <title>Upload</title>
  <link rel="stylesheet" href="style.css">
</head>
<body>
  <h3>File selection</h3>
  <form action="/upload" method="POST" enctype="multipart/form-data">
    <div id="file-list">
      <div class="file-group">
        <input type="file" name="files[]" required>
      </div>
    </div>
    <div class="controls">
      <button type="button" onclick="addFileField()">+ Add a field</button>
    </div>
    <div class="submit-area">
      <button type="submit" class="btn-submit">Start archiving</button>
    </div>
  </form>
  <script>
    function addFileField() {
      const container = document.getElementById('file-list');
      const group = document.createElement('div');
      group.className = 'file-group';
      group.innerHTML = `
        <input type="file" name="files[]" required>
        <button type="button" class="btn-remove" onclick="this.parentElement.remove()">Remove</button>
      `;
      container.appendChild(group);
    }
  </script>
</body>
</html>
And the accompanying style.css:
body {
  font-family: ui-sans-serif, system-ui, sans-serif;
  color: #1a1a1a;
  margin: 40px;
  line-height: 1.5;
}

form { max-width: 500px; }

.file-group {
  display: flex;
  align-items: center;
  margin-bottom: 8px;
  gap: 12px;
}

input[type="file"] {
  font-size: 13px;
  border: 1px solid #ddd;
  padding: 4px;
  flex-grow: 1;
}

button {
  background: none;
  border: 1px solid #1a1a1a;
  color: #1a1a1a;
  padding: 4px 12px;
  font-size: 13px;
  cursor: pointer;
  transition: all 0.2s;
}

button:hover {
  background: #1a1a1a;
  color: white;
}

.btn-remove {
  border-color: #ccc;
  color: #666;
}

.controls {
  margin-top: 20px;
  display: flex;
  gap: 10px;
  border-top: 1px solid #eee;
  padding-top: 20px;
}

.submit-area { margin-top: 10px; }

.btn-submit { background: #1a1a1a; color: white; width: 100%; padding: 8px; }
The page displays a form that lets users select one or more files and submit
them via a POST request to /upload. The JavaScript function addFileField()
adds extra file inputs dynamically so that users can upload multiple files at
once. We will implement the /upload handler later; for now, let us focus on
serving these two files.
Then, to embed index.html and style.css into the unikernel, we update the
dune file with two changes: we add the documents module to the executable,
and we add a rule that generates it at build time using mcrunch.
(executable
 (name main)
 (modules main documents)
 (link_flags :standard -cclib "-z solo5-abi=hvt")
 (libraries mkernel mirage-crypto-rng-mkernel mnet vifu gmp)
 (foreign_stubs
  (language c)
  (names manifest)))

(rule
 (targets documents.ml)
 (deps index.html style.css)
 (action
  (run mcrunch --list --file index:index.html --file style:style.css -o documents.ml)))
The mcrunch command reads our two files and generates a documents.ml file
containing two OCaml values: Documents.index and Documents.style. Each
value holds the file’s content as a string list.
A couple of details are worth noting about this command. The
--file index:index.html syntax maps an OCaml value name (index) to a
source file on disk (index.html); the colon separates the two. The --list
flag tells mcrunch to produce string list values rather than an array. This
matters because it works naturally with the streaming API we will use next: the
content can be sent to the client incrementally, without first concatenating
everything into a single buffer.
Now we replace our plain-text handler index with two handlers that serve the
embedded files:
let index req _server () =
  let open Vifu.Response.Syntax in
  let from = Flux.Source.list Documents.index in
  let* () = Vifu.Response.with_source req from in
  Vifu.Response.respond `OK

let style req _server () =
  let open Vifu.Response.Syntax in
  let from = Flux.Source.list Documents.style in
  let* () = Vifu.Response.with_source req from in
  Vifu.Response.respond `OK
Instead of with_text (which takes a plain string), we now use with_source,
which takes a flux source (a stream of data). Flux.Source.list creates a
source from the string list that mcrunch generated. The response body is
then sent to the client piece by piece, which is more memory-efficient than
building the entire response as a single string.
Finally, we add a route for the stylesheet:
let routes =
  let open Vifu.Uri in
  let open Vifu.Route in
  [ get (rel /?? any) --> index
  ; get (rel / "style.css" /?? any) --> style ]
The first route matches GET / and serves the HTML page. The second route
matches GET /style.css. When the browser loads index.html and encounters
the <link rel="stylesheet" href="style.css"> tag, it makes a second request
for /style.css, which is handled by the style handler.
Zip on the fly
Let us continue building our service by adding a handler for the POST /upload
endpoint. When a user submits the upload form, the browser sends the selected
files as a multipart/form-data request. Our handler will read those files,
pack them into a zip archive, and send the archive back as the response.
This is where we need to talk about memory. A unikernel has a fixed memory
budget (512 megabytes by default). That is far less than a typical server
application running on a machine with tens of gigabytes of RAM. If your
application tries to hold too much data in memory at once, it will not slow
down gracefully: it will crash with an Out_of_memory exception. This means
you need to think carefully about how your code consumes memory. In particular,
you want to avoid loading entire files into memory when you do not have to.
The solution here is streaming. Instead of reading all the uploaded files into memory, building the zip archive in memory, and then sending it to the client, we process the data incrementally: we read a piece of input, compress it, write it to the output, and move on to the next piece. At no point does the full content of any file need to exist in memory all at once.
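The idea can be sketched with nothing but the standard library's Seq: each chunk is read, transformed, and written before the next one is produced, so only one chunk is ever live at a time. This is a conceptual sketch of streaming, not flux's API:

```ocaml
(* Process a stream chunk by chunk: at no point is more than one chunk
   of the input (or its transformed output) held in memory. *)
let process ~read ~transform ~write =
  Seq.iter (fun chunk -> write (transform chunk)) read

(* Example: uppercasing stands in for the real transformation
   (compression, in our case). *)
let () =
  let input = List.to_seq [ "first chunk"; "second chunk" ] in
  let buf = Buffer.create 16 in
  process ~read:input ~transform:String.uppercase_ascii
    ~write:(Buffer.add_string buf);
  assert (Buffer.contents buf = "FIRST CHUNKSECOND CHUNK")
```

Because Seq is lazy, nothing is read from `input` until `process` demands it; flux applies the same principle, with the added machinery needed for concurrency and backpressure.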
The library that makes this possible is flux. It lets you describe data
transformations as pipelines of streams, where each stage produces and consumes
data in small chunks. If you want to understand streaming in more depth, the
flux tutorial covers the concepts in detail. On top of flux,
the flux_zip library knows how to produce zip archives from a stream of
files.
Here is the upload handler:
let nsec_per_day = Int64.mul 86_400L 1_000_000_000L
let ps_per_ns = 1_000L

(* Convert the wall clock (nanoseconds since the epoch) into the
   (days, picoseconds-within-day) pair that Ptime.v expects. *)
let now_d_ps () =
  let nsec = Mkernel.clock_wall () in
  let nsec = Int64.of_int nsec in
  let days = Int64.div nsec nsec_per_day in
  let rem_ns = Int64.rem nsec nsec_per_day in
  let rem_ps = Int64.mul rem_ns ps_per_ns in
  (Int64.to_int days, rem_ps)

let now () = Ptime.v (now_d_ps ())

(* Generate a random hexadecimal name, used for uploaded parts that
   carry no filename. *)
let gen =
  let tmp = Bytes.create 8 in
  fun () ->
    Mirage_crypto_rng.generate_into tmp 8;
    let bits = Bytes.get_int64_le tmp 0 in
    Fmt.str "%08Lx" bits

(* A flux sink that pushes every incoming value into a bounded queue. *)
let into_queue q =
  let open Flux in
  let init = Fun.const q
  and push q x = Bqueue.put q x; q
  and full = Fun.const false
  and stop = Bqueue.close in
  Sink { init; push; full; stop }
let zip req _server _ =
  let open Vifu.Response.Syntax in
  match Vifu.Request.of_multipart_form req with
  | Error _ ->
      let* () = Vifu.Response.with_text req "Invalid multipart/form-data request" in
      Vifu.Response.respond `Bad_request
  | Ok stream ->
      let mtime = now () in
      (* Outer task: iterate over the uploaded parts and turn each one
         into a zip entry. *)
      let src = Flux.Source.with_task ~size:0x7ff @@ fun q ->
        let fn (part, orig) =
          let filename = Vifu.Multipart_form.filename part in
          let filename = Option.value ~default:(gen ()) filename in
          (* Inner task: expose this part's content as its own source. *)
          let src = Flux.Source.with_task ~size:0x7ff @@ fun q ->
            Flux.Stream.into (into_queue q) (Flux.Stream.from orig) in
          Flux_zip.of_filepath ~mtime filename src in
        let stream = Flux.Stream.map fn stream in
        Flux.Stream.into (into_queue q) stream in
      let stream = Flux.Stream.from src in
      (* Pipe all the entries through the zip encoder and stream the
         resulting archive to the client. *)
      let stream = Flux.Stream.via Flux_zip.zip stream in
      let* () = Vifu.Response.add ~field:"Content-Type" "application/zip" in
      let* () = Vifu.Response.with_stream req stream in
      Vifu.Response.respond `OK
The zip handler starts by asking vifu to parse the incoming request as
multipart/form-data. If the request is malformed, the handler responds with a
400 Bad Request error. If parsing succeeds, vifu gives us a stream of parts,
where each part represents one uploaded file.
The core of the handler builds a pipeline in several stages. The outer
Flux.Source.with_task creates a task that iterates over the uploaded parts.
For each part, it extracts the filename (or generates one with gen if the
browser did not provide one), then wraps the part’s content into a flux source
using an inner Flux.Source.with_task. That source is passed to
Flux_zip.of_filepath, which produces a zip entry: a value that flux_zip
knows how to turn into the bytes of a zip archive. All these entries are
collected into a single stream, which is then piped through Flux_zip.zip to
produce the final zip output. The handler sets the response’s Content-Type to
application/zip and sends the stream to the client with with_stream.
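One helper from the listing is worth unpacking: now_d_ps. Ptime represents a timestamp as whole days since the epoch plus a picosecond offset within the day, while Solo5's wall clock hands us nanoseconds. The conversion is pure arithmetic and can be checked in isolation (this snippet repeats it without needing Mkernel or Ptime):

```ocaml
let nsec_per_day = Int64.mul 86_400L 1_000_000_000L
let ps_per_ns = 1_000L

(* Split nanoseconds since the epoch into whole days and the
   picosecond offset within the current day. *)
let d_ps_of_nsec nsec =
  let days = Int64.div nsec nsec_per_day in
  let rem_ns = Int64.rem nsec nsec_per_day in
  (Int64.to_int days, Int64.mul rem_ns ps_per_ns)

let () =
  (* One day and one second past the epoch: day 1, 10^12 ps into the day. *)
  assert (d_ps_of_nsec 86_401_000_000_000L = (1, 1_000_000_000_000L))
```

The mtime computed this way ends up as the modification time stamped on each entry in the resulting archive.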
There is a subtle but important point here. When we write this code, nothing
actually happens yet. We are describing a transformation pipeline, not
executing it. The data only starts flowing when the client begins reading the
response. This is what makes the approach memory-efficient: the unikernel never
needs to hold the entire archive in memory. Each Flux.Source.with_task
creates a bounded queue (the ~size:0x7ff parameter sets the upper bound), so
the amount of data in memory at any given moment is limited, regardless of how
large the uploaded files are. A user could upload a one-gigabyte file and the
unikernel would process it using only a few kilobytes of buffer space.
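To make the backpressure mechanism concrete, a bounded blocking queue in the spirit of Bqueue can be sketched with the standard library's Mutex and Condition. This is the concept only, not flux's implementation; its names and structure are our own:

```ocaml
(* A bounded blocking queue: [put] blocks while the queue is full,
   which is how a pipeline slows a fast producer down to the pace of
   its consumer. *)
type 'a bqueue =
  { q : 'a Queue.t
  ; size : int
  ; m : Mutex.t
  ; not_full : Condition.t
  ; not_empty : Condition.t
  ; mutable closed : bool }

let create size =
  { q = Queue.create (); size; m = Mutex.create ()
  ; not_full = Condition.create (); not_empty = Condition.create ()
  ; closed = false }

let put t x =
  Mutex.lock t.m;
  while Queue.length t.q >= t.size && not t.closed do
    Condition.wait t.not_full t.m
  done;
  (* Values pushed after [close] are silently dropped. *)
  if not t.closed then Queue.push x t.q;
  Condition.signal t.not_empty;
  Mutex.unlock t.m

let take t =
  Mutex.lock t.m;
  while Queue.is_empty t.q && not t.closed do
    Condition.wait t.not_empty t.m
  done;
  let r = if Queue.is_empty t.q then None else Some (Queue.pop t.q) in
  Condition.signal t.not_full;
  Mutex.unlock t.m;
  r

let close t =
  Mutex.lock t.m;
  t.closed <- true;
  Condition.broadcast t.not_full;
  Condition.broadcast t.not_empty;
  Mutex.unlock t.m
```

With a capacity of 0x7ff, a producer that gets 2048 chunks ahead of its consumer simply blocks in put until take frees a slot, which is exactly why the amount of buffered data stays bounded.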
Finally, we need to add the new route to our route table:
let routes =
  let open Vifu.Uri in
  let open Vifu.Route in
  [ get (rel /?? any) --> index
  ; get (rel / "style.css" /?? any) --> style
  ; post Vifu.Type.multipart_form (rel / "upload" /?? any) --> zip ]
The post combinator matches HTTP POST requests, so this route handles
POST /upload, which is exactly what our HTML form submits to.
Build and launch the unikernel the same way as before. Open http://10.0.0.2/
in your browser, select a few files, click “Start archiving”, and your browser
will download a zip file containing the uploaded files. You can also test it
with curl:
$ solo5-hvt --net:service=tap0 -- ./_build/solo5/main.exe --solo5:quiet &
$ UNIKERNEL=$!
$ curl -F file=@foo.txt -X POST http://10.0.0.2/upload -o foo.zip
$ unzip foo.zip
Archive: foo.zip
inflating: foo.txt
$ kill $UNIKERNEL
solo5-hvt: Exiting on signal 15
Conclusion
We started this chapter with a three-line “Hello World” handler and ended with
a unikernel that accepts file uploads and produces zip archives on the fly.
Along the way, we saw how vifu provides a familiar web-framework experience
(handlers, routes, responses) even though there is no operating system
underneath, how mcrunch solves the static-file problem by embedding content
directly into the binary at build time, and how flux enables memory-efficient
streaming so that even a unikernel with a small memory budget can process large
files comfortably.