Fun Ideas: Language Feature Pipeline
Ideas for Quartz’s future. Items marked COMMITTED are on the roadmap. Items marked EXPLORE need research spikes. Items marked DISCUSS need design sessions before committing. Everything else is here for inspiration.
For far-future moonshots, see moonshots.md.
COMMITTED: C Backend
Emit C from MIR alongside LLVM IR. Nim proved this strategy works brilliantly.
Why:
- Instant portability to anywhere a C compiler exists (embedded, exotic architectures)
- Bootstrapping becomes trivial: anyone with gcc can build Quartz from scratch
- C output is debuggable with standard tools (gdb, valgrind, sanitizers)
- MIR is already close to C semantics — second codegen pass, not a rewrite
- Since everything is i64 at runtime, C output is straightforward (int64_t everywhere)
Implementation: Write codegen_c.qz alongside codegen.qz. Same MIR input, different output format.
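To make the "same MIR input, different output format" idea concrete, here is a sketch in Python of what the MIR-to-C mapping could look like under the i64-everywhere assumption. The MIR tuple format and `emit_c` are invented for illustration; codegen_c.qz does not exist yet.

```python
# Illustrative sketch of the MIR -> C mapping. Because every runtime value
# is i64, each MIR temporary lowers to an int64_t local. The (dst, op, x, y)
# tuple shape is a hypothetical stand-in for real MIR instructions.

def emit_c(fn_name, mir):
    lines = [f"int64_t {fn_name}(int64_t a, int64_t b) {{"]
    for dst, op, x, y in mir:                 # (dest, operator, lhs, rhs)
        lines.append(f"  int64_t {dst} = {x} {op} {y};")
    lines.append(f"  return {mir[-1][0]};")   # last temp is the result
    lines.append("}")
    return "\n".join(lines)

# MIR for: f(a, b) = (a + b) * b
out = emit_c("f", [("t0", "+", "a", "b"), ("t1", "*", "t0", "b")])
print(out)
```

The emitted C compiles with any C89-era compiler, which is exactly the portability argument above.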
References: Nim Backend Integration
COMMITTED: First-Class WASI / WebAssembly Target
First-class WASI emission, not just “use the LLVM WASM backend.” This is a core strategic target.
Why:
- Browser deployment, edge computing, sandboxed plugin systems, WASI for server-side
- Unlocks the dogfooding vision (see below)
- MoonBit is betting its entire identity on WASM-native; Quartz can compete
- WASM Component Model (WASI 0.2+) enables polyglot composition with type-safe interfaces
Paths:
- LLVM’s WASM backend (quickest — we already emit LLVM IR)
- First-class WASM emission from MIR (the goal — full control)
- C backend → Emscripten (fallback, gets us there fast)
Limitations to watch: the WASM Component Model still lacks standardized multithreading (as of 2026). WASI 0.3 is adding native async I/O.
References: WASM Component Model, MoonBit, WASI Status (Feb 2025)
COMMITTED: The Dogfooding Vision
Build the entire marketing presence in Quartz, top to bottom:
- Web server — written in Quartz, dogfooding the networking stack
- Web framework — built on top of the Quartz web server
- Marketing site — built with the Quartz framework, served by the Quartz server
- Rendered via WASM — compile the frontend to WebAssembly, draw to canvas
The radical approach: Instead of HTML/CSS/DOM, render the entire page as a canvas application via WASM. More like an interactive desktop GUI or video game than a traditional web page. Users click on graphical elements (pixels, not divs). This sidesteps the entire browser rendering pipeline and demonstrates Quartz’s systems-level capability in the browser.
This proves:
- Quartz can build real networked services
- The WASM target works end-to-end
- The language is ergonomic enough for application development
- The performance story is real (canvas rendering in WASM)
COMMITTED: Refinement Types (World-Class Implementation)
This is going to be a first-class, world-class implementation. Not a half-measure.
type NonZero = { v: Int | v != 0 }
type Positive = { v: Int | v > 0 }
type BoundedIndex(n: Int) = { v: Int | v >= 0 and v < n }
def divide(a: Int, b: NonZero): Int = a / b
def sqrt(x: Positive): Float = # ...
def safe_get(arr: Vec<T>, i: BoundedIndex(vec_len(arr))): T = vec_get(arr, i)
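The gradual-adoption story (runtime checks, statically elided where provable) can be sketched directly. A toy version in Python, using an interval domain in place of a real SMT query; all names here are illustrative, not Quartz APIs:

```python
# Hypothetical sketch of gradual refinement checking. The compiler inserts a
# runtime guard for a refinement like NonZero, then elides it when a simple
# static analysis proves the refinement. divide/proves_nonzero are invented
# for illustration.

def divide(a, b):
    # NonZero refinement on b, enforced at runtime when not proven statically
    if b == 0:
        raise ValueError("refinement violated: b must be nonzero")
    return a // b

def proves_nonzero(lo, hi):
    # Toy interval domain: [lo, hi] excludes 0 iff lo > 0 or hi < 0.
    # A Positive argument has lo >= 1, so its NonZero guard is elided.
    return lo > 0 or hi < 0

assert proves_nonzero(1, 10)      # Positive implies NonZero: guard elided
assert not proves_nonzero(-1, 1)  # 0 is in range: keep the runtime check
```

A real implementation would discharge the implication (v > 0) => (v != 0) to Z3 rather than an interval domain, but the elision logic has the same shape.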
Plan:
- Deep research phase: study every existing implementation
- Flux (refinement types for Rust) — how they compose with ownership
- LiquidHaskell — 10,000+ lines verified, the gold standard for usability
- Martin Kleppmann’s work on AI-assisted formal verification
- Thrust (PLDI 2025) — prophecy-based refinement types
- Generic Refinement Types (POPL 2025)
- Design the Quartz-specific integration with our existential type model
- SMT solver integration (Z3) — we’re okay with this dependency
- Gradual adoption: refinements start as runtime checks, compiler elides what it can prove statically
- AI-assisted spec generation as a stretch goal (LLMs generating refinement annotations)
Why this matters: Eliminates entire classes of runtime errors at compile time. Array bounds, division by zero, null safety, protocol state invariants — all verified before the program runs, with zero runtime cost for proven refinements.
References: Flux: Liquid Types for Rust (PLDI’23), LiquidHaskell Tutorial, Thrust (PLDI’25), AI + Formal Verification (Kleppmann)
COMMITTED: Full GPU Compute
The goal is first-class GPU support with ergonomic annotation syntax. Start with SIMD vectorization hints (building on existing S.3-S.9 work), then push toward full GPU kernels.
@gpu
def matrix_mul(a: Buffer<Float>, b: Buffer<Float>, out: Buffer<Float>, n: Int)
  i = gpu_thread_id_x()
  j = gpu_thread_id_y()
  var sum = 0.0
  for k in 0..n
    sum += a[i * n + k] * b[k * n + j]
  end
  out[i * n + j] = sum
end
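For reference, the kernel's per-(i, j)-thread semantics on the CPU, in plain Python over flat row-major buffers (useful as an oracle when validating GPU output; no Quartz APIs involved):

```python
# CPU reference for the GPU kernel above: each (i, j) "thread" computes one
# dot product of row i of a with column j of b. Buffers are flat, row-major.

def matrix_mul_ref(a, b, n):
    out = [0.0] * (n * n)
    for i in range(n):
        for j in range(n):
            s = 0.0
            for k in range(n):
                s += a[i * n + k] * b[k * n + j]
            out[i * n + j] = s
    return out

# 2x2 identity times [[1, 2], [3, 4]]
assert matrix_mul_ref([1, 0, 0, 1], [1, 2, 3, 4], 2) == [1.0, 2.0, 3.0, 4.0]
```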
Phased approach:
- Now: SIMD vectorization hints (already have S.3-S.9, auto-vectorization metadata)
- Next: LLVM NVPTX backend for @gpu-annotated functions
- Later: Host-side kernel launch code generation, memory transfer intrinsics
- Stretch: Multi-vendor (AMD via AMDGPU backend), kernel fusion
Why this is viable: LLVM already has NVPTX and AMDGPU backends. We don’t need MLIR for the basic case. The annotation-based approach (write normal Quartz, mark what runs on GPU) is the most ergonomic design.
References: Mojo, MLIR HPC Kernels (SC’25), Triton, Futhark
COMMITTED: E-Graph Equality Saturation (MIR Optimization)
Replace sequential MIR optimization passes with a single, more powerful pass using e-graph equality saturation.
How it works: Instead of applying rewrite rules in a fixed order (where each pass commits to a rewrite and may block later optimizations), an e-graph represents ALL equivalent forms of an expression simultaneously. Equality saturation applies all rewrite rules exhaustively, then extracts the optimal version.
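To make this concrete, a minimal e-graph in Python: e-nodes are hashconsed into e-classes via union-find, and merging two classes records that all their forms are equivalent. This is illustrative only; a real implementation (see the egg library) also needs congruence-closure rebuilding after merges.

```python
# Minimal e-graph sketch. An e-node is (op, child_class_ids); the memo table
# hashconses canonical e-nodes to e-class ids; union-find tracks which
# e-classes have been proven equal.

class EGraph:
    def __init__(self):
        self.parent = []           # union-find over e-class ids
        self.memo = {}             # canonical e-node -> e-class id

    def find(self, a):
        while self.parent[a] != a:
            self.parent[a] = self.parent[self.parent[a]]  # path halving
            a = self.parent[a]
        return a

    def add(self, op, *kids):
        node = (op, tuple(self.find(k) for k in kids))    # canonicalize
        if node in self.memo:
            return self.find(self.memo[node])
        cid = len(self.parent)
        self.parent.append(cid)
        self.memo[node] = cid
        return cid

    def merge(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra != rb:
            self.parent[ra] = rb   # assert the two classes are equal

# The rewrite rule x * 2 -> x << 1 merges both forms into one e-class,
# instead of destructively replacing one with the other:
g = EGraph()
x, two, one = g.add("x"), g.add("2"), g.add("1")
mul = g.add("*", x, two)
shl = g.add("<<", x, one)
g.merge(mul, shl)
assert g.find(mul) == g.find(shl)
```

Extraction then walks the saturated graph picking the cheapest representative per e-class, which is where the phase-ordering problem disappears.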
Why this is baller:
- Eliminates phase-ordering problems completely
- The 3.4x slowdown vs C bootstrap could shrink significantly
- Guided Equality Saturation (POPL 2024) makes it tractable for real programs
- Slotted E-Graphs (PLDI 2025) need 20x fewer iterations
- Could be implemented in Quartz itself as a self-hosted MIR optimization pass
References: Guided Equality Saturation (POPL 2024), Slotted E-Graphs (PLDI 2025), egg library
COMMITTED: Polyhedral Loop Optimization (via LLVM Polly)
Model loop nests as integer polyhedra; apply affine transformations for cache locality, vectorization, and auto-parallelization.
Why:
- Gold standard for dense numerical computation optimization
- LLVM’s Polly pass does this automatically — just need clean loop IR
- LOOPer (2025) combines polyhedral analysis with ML-based search
- Low effort: mostly about emitting clean loop structures and adding -polly to the pipeline
References: Polly - LLVM Polyhedral Optimizer, LOOPer (2024)
COMMITTED (Needs Design Session): Language-Integrated AI / LLM Directives
This needs a long, serious design discussion before implementation. The potential is world-changing.
@ai("classify this text into positive/negative/neutral")
def sentiment(text: String): Sentiment
@ai("translate from {source} to {target}")
def translate(text: String, source: Language, target: Language): String
@ai("extract structured data matching the return type")
def parse_invoice(text: String): Invoice
The idea: Function body is a prompt. The compiler generates the API call, JSON parsing, and type validation. The type system ensures the LLM output conforms to the return type.
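One reading of "the compiler generates the API call, JSON parsing, and type validation" is a runtime library. A Python mock under that assumption; `ai`, `call_model`, and `Invoice` here are illustrative stand-ins, and `call_model` is a stub where a real LLM client would go:

```python
import json
from dataclasses import dataclass, fields

def call_model(prompt):
    # Stub: a real implementation would call an LLM API with the prompt.
    return '{"vendor": "Acme", "total": 42}'

def ai(prompt_template):
    # Decorator sketch: format the prompt, call the model, parse JSON, and
    # validate the result against the annotated return type.
    def wrap(fn):
        ret = fn.__annotations__["return"]
        def impl(**kwargs):
            data = json.loads(call_model(prompt_template.format(**kwargs)))
            kw = {}
            for f in fields(ret):
                if f.name not in data or not isinstance(data[f.name], f.type):
                    raise TypeError(f"LLM output violates {ret.__name__}.{f.name}")
                kw[f.name] = data[f.name]
            return ret(**kw)
        return impl
    return wrap

@dataclass
class Invoice:
    vendor: str
    total: int

@ai("extract structured data matching the return type from: {text}")
def parse_invoice(text: str) -> Invoice: ...

assert parse_invoice(text="Acme owes 42") == Invoice(vendor="Acme", total=42)
```

Compile-time codegen would instead emit this glue statically, which is one of the design questions below this was meant to illustrate, not answer.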
Design questions to resolve:
- How does the compiler generate the API call? Compile-time codegen? Runtime library?
- How do we handle LLM non-determinism in a typed language?
- Caching / memoization of LLM calls?
- Cost control (token limits, model selection)?
- Fallback behavior when the LLM can’t conform to the return type?
- LMQL-style constrained decoding (token masking during generation)?
- Offline / local model support?
References: LMQL, MoonBit AI-Aware Design
COMMITTED (Exploration): LLM-Driven Compiler Optimization
Use ML models to make optimization decisions in the compiler itself.
What exists:
- Google’s Iterative BC-Max (NeurIPS 2024) — ML for inlining decisions, smaller binaries
- Meta’s LLMCompiler (2024) — fine-tuned models for code optimization
- CompilerR1 (2025) — reinforcement learning for pass ordering
For Quartz: Train a model on MIR transformations. The model suggests which rewrites to apply, what to inline, how to order passes. Could complement or replace the e-graph approach.
References: Google Scalable Self-Improvement, Meta LLM Compiler
DISCUSS: Union / Intersection Types (Algebraic Subtyping)
def handle(x: Int | String): String
  match x
    Int(n) => int_to_str(n)
    String(s) => s
  end
end
We like this a lot, but it needs further discussion. Key questions:
- How does this interact with our existential type model?
- Does this require reworking type inference (biunification vs unification)?
- Can we start with just union types and add intersection later?
- How do tagged unions differ from our existing enum mechanism?
References: Boolean-Algebraic Subtyping (Parreaux, 2024), Simple Essence of Algebraic Subtyping
DISCUSS: The Existential Type Model — Strength or Limitation?
Needs a deep discussion. Core questions:
- Is the i64-everywhere model going to prevent us from reaching C-level speed?
- Where exactly do we pay the performance cost? (struct layout, cache behavior, SIMD?)
- Narrow types (I8-U32) already address hot paths — is that sufficient?
- Is anybody else doing what we’re doing? Is this radical? Foolish? Or elegant?
- How does this interact with GPU compute (where types matter for memory layout)?
- What would it take to selectively opt into “real” types where performance demands it?
DISCUSS: Quantitative Type Theory (Idris 2 Style)
Each binding annotated with usage count: 0 (erased at runtime), 1 (linear, exactly once), or unrestricted. Unifies dependent types and linear types.
Why this is interesting for Quartz: Our existential model already erases types at runtime. QTT’s “0-use = erased” aligns naturally. The “1-use = linear” gives us resource safety (file handles, connections). Unrestricted is what we have now.
Needs discussion: How much of QTT can we adopt without becoming a research language? What’s the minimum viable version?
References: Idris 2: QTT in Practice
EXPLORE: Generational References (Vale-style Memory Safety)
Each allocation gets a generation counter; dereferences check it matches. ~10.84% overhead vs unsafe (better than refcounting’s 25.29%).
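The mechanism is small enough to sketch in Python; the `Arena` name and slot-index API are illustrative, not Vale's or Quartz's:

```python
# Generational references sketch: each slot carries a generation counter; a
# reference stores the generation it was created under, and every dereference
# checks they still match. A stale reference (slot reused) is caught at runtime.

class Arena:
    def __init__(self, size):
        self.slots = [None] * size
        self.gens = [0] * size

    def alloc(self, i, value):
        self.gens[i] += 1               # bumping invalidates old references
        self.slots[i] = value
        return (i, self.gens[i])        # a "generational reference"

    def deref(self, ref):
        i, gen = ref
        if self.gens[i] != gen:
            raise RuntimeError("use after free detected")
        return self.slots[i]

a = Arena(4)
r = a.alloc(0, "hello")
assert a.deref(r) == "hello"
a.alloc(0, "world")                     # slot 0 reused: r is now stale
try:
    a.deref(r)
    assert False
except RuntimeError:
    pass
```

Since Quartz already stores values in arena slots, the delta is one counter per slot plus a compare-and-branch per checked dereference, which is where the ~10.84% figure comes from.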
Why explore for Quartz:
- Quartz already uses arena-style storage — adding generation counters per slot is natural
- Unlike borrow checking, allows mutable aliasing (observer patterns, callbacks, graphs)
- Unlike GC, no pause times
- Could be a compelling “safe by default, opt into zero-cost” story
References: Vale Memory Safety, Generational References
EXPLORE: Austral’s Linear Type Checker
Austral divides types into two universes: Free (unlimited use) and Linear (use exactly once). Resources like file handles, connections, memory are linear. The linearity checker is under 600 lines of code.
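The core of such a checker really is tiny; a toy Python version that ignores branching and moves, with invented names, just to show the shape:

```python
# Toy Austral-style linearity check: every variable in the Linear universe
# must be consumed exactly once along a straight-line statement list.
# check_linear is illustrative, not Austral's actual checker.

def check_linear(linear_vars, uses):
    """uses: ordered list of variable names as they are consumed."""
    counts = {v: 0 for v in linear_vars}
    for v in uses:
        if v in counts:
            counts[v] += 1
            if counts[v] > 1:
                return f"{v} consumed twice"     # double-free / use-after-move
    leaked = [v for v, c in counts.items() if c == 0]
    return f"{leaked[0]} never consumed" if leaked else "ok"

assert check_linear({"file"}, ["file"]) == "ok"
assert check_linear({"file"}, ["file", "file"]) == "file consumed twice"
assert check_linear({"file"}, []) == "file never consumed"
```

The real checker additionally handles borrows and control flow, but this is the invariant at its heart.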
Why explore for Quartz:
- Dead simple to implement (Austral proved it)
- Prevents resource leaks, double-free, use-after-free at compile time
- Capabilities layered on top: if you don’t have a FileSystem capability value, you can’t access files
- Could be opt-in: linear struct FileHandle { ... }
References: Austral Language, Introducing Austral
DISCUSS: Language-Integrated Queries
Could Quartz benefit from compile-time query integration (SQL, GraphQL, etc.)? Zig’s comptime ORM generates type-safe queries from struct definitions. Something similar could work with Quartz’s comptime evaluation (if implemented).
Needs discussion: Does this fit the language’s identity? Is it a library concern or a language concern?
COMMITTED: Launch Visuals & Demo Ideas
Creative assets and demo concepts for maximum launch impact. See Phase W in ROADMAP.md for the actionable items.
Gource Timelapse Video
Run gource on the git history. 743 commits exploding across the file tree in 60 seconds. Post on Twitter/X, YouTube, Reddit. The visual density of 47 days of development at this velocity is undeniable.
Commit Heatmap Wall
Visual wall of 743 commits. Each one a colored block — green for features, blue for tests, gold for fixpoint milestones. Hover for commit message. The density itself tells the story.
Feature Density Timeline
Animated timeline. Each week, features explode onto it. Week 1: self-hosting fixpoint. Week 2: generics. Week 3: concurrency. Week 4: SIMD. Week 5: const generics + narrow types. Week 6: auto-vectorization. Week 7: stdlib unification. Any one of these is a quarter’s work for a team.
The Fixpoint Proof Page
Dedicated website page. Show the pipeline: C bootstrap → gen2 → gen3 → gen4, diff them, show “0 bytes different, 353,132 lines identical.” Two columns of IR scrolling in sync. Cryptographic proof that the language works. Nobody else does this.
Benchmark Arena
Side-by-side performance charts: Quartz vs C vs Rust vs Zig on the 7 existing benchmarks. Interactive animated bar chart race. Show that a 47-day-old language competes with decades-old compilers.
Boot Sequence Landing Page
Site loads as a terminal. The compiler bootstraps itself in real-time text scroll — lexing, parsing, typechecking, codegen. Then the terminal “cracks open” and the site renders. The message: this language builds itself.
WASM Playground (Browser Compiler)
Compile the C bootstrap to WASM via Emscripten. Users type Quartz code in the browser, see it compile to LLVM IR in real-time, see the output. No install required.
The Compiler Compiling Itself — Live
Website page where you click “Build” and watch the self-hosted compiler compile itself in real-time. Function count climbing: 1… 50… 200… 455. IR line count climbing to 353,132. Rocket launch countdown in reverse.
SIMD Particle System Demo
Real-time particle physics using F32x4 SIMD intrinsics. Compile to native, record at 60fps. Thousands of particles bouncing, colliding — all vectorized. “This is what 4-wide SIMD in a 47-day-old language looks like.”
Bare Metal Boot Demo
Video of Quartz code running on bare metal via QEMU. No OS. No runtime. Just the language talking to hardware. “Quartz runs where there’s nothing else to run.”
“First Blood” Challenge
Launch with an open challenge: “Write something in Quartz and submit a PR to examples/. First 25 contributors get their name in the compiler source.” Community from minute one.
4K Demoscene Entry
Write a 4K intro (fits in 4096 bytes) in Quartz. Procedural graphics, music, the whole thing. Submit to a demoscene competition. The judges will lose their minds when they see the language credits.
Compile the Site from Source On Every Visit
Most deranged option: serve the Quartz source to the browser, compile it client-side via WASM-bootstrapped compiler, execute the result. The site doesn’t exist until the compiler builds it.
Ideas Retained for Inspiration
Comptime / Compile-Time Evaluation
Natural extension of const and def. Same language for compile-time and runtime code. Quartz’s existential type model means no Zig-style type-as-value complexity needed. Could enable generics without separate syntax.
References: Zig comptime, Comptime Zig ORM
Structured Concurrency
Scope-bound task groups where child tasks can’t outlive parents. Industry convergence (Java 25, Swift, Kotlin, Python). Sidesteps async/await function coloring.
Note on runtime: This needs a small runtime library (thread pool), similar to how we already link libc. It’s not a VM or GC — just a statically linked library of a few hundred lines. No overhead if you don’t use it.
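Python's stdlib happens to enforce the same scope discipline, which makes a concrete sketch easy (ThreadPoolExecutor as the scope; this is not a proposal for the Quartz API):

```python
from concurrent.futures import ThreadPoolExecutor

# Structured concurrency sketch: the with-block is the scope; every child
# task is joined before the block exits, so no child can outlive its parent.

def structured_sum(xs):
    with ThreadPoolExecutor() as scope:          # children bound to this scope
        futures = [scope.submit(lambda v=v: v * v) for v in xs]
    # Leaving the with-block joined every task; all results are ready.
    return sum(f.result() for f in futures)

assert structured_sum([1, 2, 3]) == 14
```

The point of the sketch is the shape of the guarantee, not threads per se; the few-hundred-line runtime library mentioned above would provide the pool behind the scope.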
References: JEP 505, Swift Structured Concurrency
Algebraic Effects / Effect Handlers
Effects as compile-time metadata that vanishes at runtime (fits our existential philosophy). Subsumes exceptions, async, generators under one mechanism. Koka and Effekt are the reference implementations.
Row Polymorphism (Structural Records)
Functions accept any record with at least certain fields. Compile-time only, zero runtime cost with monomorphization. No systems language has this.
References: Row Polymorphism Without the Jargon
Mutable Value Semantics (Hylo-style)
Rust’s safety without lifetimes. Values copy on assignment, compiler elides copies when it proves uniqueness. Aligns with const-by-default.
References: Hylo
Contracts (requires/ensures)
Precondition/postcondition annotations. Stepping stone to refinement types. Could start as runtime assertions, compiler elides what it can prove.
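The runtime-assertion starting point can be sketched as a decorator (names illustrative; a real design would be annotation syntax, with the compiler eliding checks it can prove):

```python
# Toy requires/ensures contract decorator: preconditions run before the body,
# postconditions run on the result, both as plain runtime assertions.

def contract(requires=lambda *a: True, ensures=lambda r, *a: True):
    def wrap(fn):
        def impl(*args):
            assert requires(*args), "precondition violated"
            r = fn(*args)
            assert ensures(r, *args), "postcondition violated"
            return r
        return impl
    return wrap

@contract(requires=lambda a, b: b != 0,
          ensures=lambda r, a, b: r * b + a % b == a)   # Euclidean division law
def div(a, b):
    return a // b

assert div(7, 2) == 3
```

Once refinement types land, `requires` becomes a refinement on the parameter and `ensures` a refinement on the return type, which is why contracts are the natural stepping stone.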
Quick-Reference Matrix
| # | Feature | Status | Effort | Impact |
|---|---|---|---|---|
| 1 | C backend | DONE | Medium | High |
| 2 | WASI target | DONE | Medium | High |
| 3 | Dogfooding vision | DONE (DG.1/MEM/OPQ phases complete) | Large | Strategic |
| 4 | Refinement types | COMMITTED | High | Very high |
| 5 | Full GPU | COMMITTED | High | High |
| 6 | E-graph optimization | DONE (acyclic e-graph + hashcons CSE) | Medium | High |
| 7 | Polyhedral loops | DONE (bench pipeline) | Low | Medium |
| 8 | LLM directives | COMMITTED (design needed) | Medium | Novelty |
| 9 | LLM compiler optimization | COMMITTED (explore) | Research | Unknown |
| 10 | Union/intersection types | DONE (parser + TC + codegen) | High | High |
| 11 | Existential type model | DISCUSS | N/A | Strategic |
| 12 | QTT / Idris 2 | DISCUSS | High | Medium |
| 13 | Generational references | EXPLORE | Medium | Medium |
| 14 | Austral linear types | DONE (linear struct, borrows, Drop) | Medium | Medium |
| 15 | Language-integrated queries | DISCUSS | Unknown | Unknown |
| 16 | Networking & concurrency hardening | DONE (HTTP/2, TLS, race detector) | Medium | High |
| 17 | Structured concurrency | DONE (task_group, supervisors, actors) | Medium | High |
| 18 | Custom iterators | DONE ($next protocol) | Low | Medium |
| 19 | API unification sprint | COMMITTED (Phase W) | Low | High |
| 20 | Examples gallery (12-15 programs) | COMMITTED (Phase W) | Low | High |
| 21 | Auto-generated API reference | COMMITTED (Phase W) | Medium | High |
| 22 | Website skeleton | DONE (GitHub Pages + Docker) | Medium | Strategic |
| 23 | Literate source site | COMMITTED (Phase W) | Medium | Novelty |
| 24 | VS Code extension | COMMITTED (Phase W) | Low | High |
| 25 | CLI unification | COMMITTED (Phase W) | Medium | High |
| 26 | Launch blog post | COMMITTED (Phase W) | Low | Strategic |
| 27 | Gource timelapse | FUN | Trivial | Viral |
| 28 | WASM playground | DONE (self-contained runtime) | Medium | High |
| 29 | Boot sequence landing page | FUN | Medium | Novelty |
| 30 | Demoscene 4K intro | FUN | Medium | Extreme novelty |
| 31 | Compile-site-on-every-visit | FUN | High | Unhinged flex |