Handoff — Quartz unikernel serves HTTP; next is deploy
Session summary (Apr 18 2026). 15 commits closing KERN.1 and KERN.3a–3d. The unikernel now goes from cold boot to serving an HTML landing page through 100% Quartz-authored networking.
Head: 8788f07d. Fixpoint still at 2138 functions (no compiler
source touched this entire run).
What landed
- KERN.1 — context-switching scheduler (real per-task 4 KiB stacks, `switch_to` asm), 64 MiB PMM + 1 GiB identity map, APIC + LAPIC timer (PIC/PIT retired). Commits 6bcacfba, 721fc42c, 2af80240.
- KERN.3a — virtio-net driver on legacy virtio-mmio v1 (QEMU `-M microvm`). Probe + feature negotiation + RX/TX queue setup. ARP roundtrip with SLIRP gateway 10.0.2.2. Commits f277509b, 0f236ea6, 2dec6567.
- KERN.3b — ICMP echo via IPv4 (RFC 1071 checksum). Guest-initiated `ping 10.0.2.2` roundtrip. Commit 9f31e987.
- KERN.3c — TCP echo server, single-connection state machine (LISTEN → SYN_RCVD → ESTABLISHED → CLOSE_WAIT → LAST_ACK → CLOSED), pseudo-header checksum. `nc 127.0.0.1 8090` roundtrip via SLIRP hostfwd. Commit 1a2e38d4.
- KERN.3d — HTTP/1.1 server, fixed 200 OK HTML response, Connection-close framing, `Server: Quartz-unikernel` header. curl returns a 739-byte styled landing page. Commit 944cc774.
Final boot sequence (qemu_boot_x86_64)
Hi
TRAP
RAM: 128 MiB, PMM pool: 64 MiB (16384 pgs)
PMM 5/5 (7/16384 pgs)
VEC 4 sum=48
MAP 5 sum=1500
APIC enabled @ 0xFEE00000
virtio-net @ 0x00000000feb02e00 (v=1)
virtio-net ready: MAC=52:54:00:12:34:56
[rx: u=1 c=0 len=74]
RX frame: 64 bytes <ARP REPLY hex dump>
[rx: u=2 c=1 len=70]
ping: reply from 10.0.2.2 type=0 seq=1
A1 B1 A2 B2 A3 B3 A4 B4 A5 B5
sched done (ticks=3)
HTTP: listening on :80
And under curl:
$ curl -v http://127.0.0.1:8094/
< HTTP/1.1 200 OK
< Content-Type: text/html; charset=utf-8
< Connection: close
< Server: Quartz-unikernel
<!doctype html>...
Compiler bugs filed this session
- PSQ-9 — an extern parameter named `from` OOM-loops the type checker (kernel thread eats 30 GB before OOM-kill). Worked around by naming the param `from_slot`. docs/bugs/PROGRESS_SPRINT_QUIRKS.md#psq-9.
- PSQ-10 — `and`/`or` compound boolean codegen emits a `call noalias ptr @malloc(i64 8)` per evaluation. In a tight polling loop this exhausts a 64 MiB PMM in seconds. Worked around by splitting into nested `if`. PROGRESS_SPRINT_QUIRKS.md#psq-10.
Both are HIGH severity but not blocking — workarounds are cheap.
Sharp edges in the current kernel
- TX descriptor reuse. `virtio_net_tx_send` now blocks synchronously on `used.idx` before returning. Without this wait, back-to-back TX calls clobbered descriptor slot 0 while the device was mid-DMA on the previous buffer. A proper fix would rotate through multiple descriptor indices; today we just serialize.
- RX polling pacing. `virtio_net_rx_wait` prints a one-line `[rx: u= c= len=]` diagnostic per successful receive. Without it the guest races ahead of QEMU TCG's device thread and reads un-populated used-ring slots. The print provides the few µs of slack TCG needs. Real fix: wire up RX IRQs via the IOAPIC.
- Single-connection HTTP server. The state machine assumes one connection at a time; a second SYN mid-session is ignored. Fine for a demo; a real server needs per-connection state.
- Hardcoded peer MAC in the send-ICMP path. We remember the SLIRP gateway MAC from the ARP reply, then use it for ICMP and (indirectly, via `g_tcp_peer_*`) for TCP responses. If a peer contacts us without that ARP preamble, some code paths won't find the right MAC. OK for the current demo sequence, which always ARPs first.
Regression set
All four green:
./self-hosted/bin/quake baremetal:verify_hello_aarch64
./self-hosted/bin/quake baremetal:verify_hello_x86_64
./self-hosted/bin/quake baremetal:qemu_boot_x86_64 # serial assertions
./self-hosted/bin/quake baremetal:qemu_http # curl → 200 OK + HTML
Plus examples/brainfuck.qz smoke still passes. Compiler fixpoint
stamp unchanged.
What’s next — KERN.3e / KERN.4
The roadmap’s remaining unikernel phases:
KERN.3e — marketing content (optional, small)
Currently the HTTP response is a hand-written single-page HTML blob in `http_build_response`. A better story is build-time asset bundling:
- An `@[section(".content")]` attribute on a Quartz byte array so the compiler emits the data into a named ELF section.
- A custom linker-script rule to pin the section at a known offset (or expose start/end symbols).
- `http_build_response` references those symbols instead of inlining the HTML.
This lets us author the site as standalone HTML + CSS + assets
and let quake bundle them. ~1 quartz-day. Blocks nothing —
the current static response works fine for demoing.
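The linker-script rule could look like this GNU ld fragment; the section placement and the `__content_start`/`__content_end` symbol names are hypothetical, not something already in the repo:

```
SECTIONS
{
  /* Collect every @[section(".content")] byte array into one ELF
     section, bracketed by start/end symbols the kernel can declare
     as extern byte pointers. */
  .content : ALIGN(8)
  {
    __content_start = .;
    KEEP(*(.content))
    __content_end = .;
  }
}
```

`http_build_response` would then compute the blob length as `__content_end - __content_start` instead of hardcoding 739 bytes.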
KERN.4 — deploy as a QEMU guest on the Linux VPS
Decision locked in during the roadmap update: run the unikernel as a QEMU guest on the existing Ubuntu KVM VPS, not as a bare-metal host replacement. The rationale stays the same: VPS providers don't let you boot your own kernel on the host; every unikernel in production (OSv, HermitCore, MirageOS) ships this way; and nested KVM-in-KVM works.
Pipeline:
- `quake baremetal:build_elf` produces `quartz-unikernel.elf` locally.
- `scp` the ELF to mattkelly.io.
- Write `/etc/systemd/system/quartz-unikernel.service`:

```ini
[Unit]
Description=Quartz unikernel HTTP server
After=network.target

[Service]
ExecStart=/usr/bin/qemu-system-x86_64 \
  -M microvm \
  -kernel /opt/quartz/quartz-unikernel.elf \
  -netdev user,id=net0,hostfwd=tcp::8080-:80 \
  -device virtio-net-device,netdev=net0 \
  -nographic -serial file:/var/log/quartz-kernel.log
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

- Route port 80/443 → 8080 via the Linux host (nginx reverse proxy for TLS termination, or iptables REDIRECT for raw).
- `systemctl enable --now quartz-unikernel`.
- Verify `curl https://mattkelly.io/` returns the HTML.
What’s needed from next session:
- SSH to the VPS (user has creds).
- One round of testing to make sure QEMU on the VPS runs the ELF exactly as it does on the dev machine (macOS). It most likely will; an x86_64 ELF plus virtio-mmio is platform-independent.
- Let’s Encrypt cert for TLS (nginx handles via certbot).
- DNS already points mattkelly.io → VPS IP.
Estimate: ~1 quartz-day (2-4 hours) to land KERN.4 end-to-end, most of it sysadmin-style config rather than Quartz coding.
After KERN.4, the stretch phases KERN.5 (TLS inside the unikernel), KERN.6 (content FS), KERN.7 (concurrent connections via Async handler), KERN.8 (SMP) come in priority order.
First step for next session
Boot up a shell on the VPS (user SSH), check QEMU version
(qemu-system-x86_64 --version), check nested virt support
(egrep 'vmx|svm' /proc/cpuinfo — should be present on most
modern VPS plans), and test-run the ELF interactively before
wiring systemd. If QEMU + -M microvm works there, everything
else is plumbing.
Good luck.