Quartz Vision & Future Features
Ideas, committed features, and moonshots for Quartz’s future. For the actionable work queue, see ROADMAP.md. Last updated: April 6, 2026
Already Shipped
These were on the vision list and are now done:
| Feature | Status |
|---|---|
| C backend | codegen_c.qz — emit C from MIR |
| WASI/WebAssembly target | codegen_wasm.qz + wasm_runtime.qz (93 synthesized functions) |
| E-graph optimization | Acyclic e-graph + hashcons CSE in MIR |
| Polyhedral loops | Via LLVM Polly (-polly flag) |
| Union/intersection types | Parser + typechecker + codegen |
| Linear types + Drop/RAII | linear struct, borrows, move semantics |
| Structured concurrency | go_scope, go_supervisor, go_race, actors |
| Custom iterators | $next protocol + for-in integration |
| Networking hardening | HTTP/2, TLS, race detector |
| Dogfooding vision | Web server + marketing site in Quartz + Docker |
| WASM playground | Browser compiler with Monaco editor |
| Website skeleton | GitHub Pages + Astro + Docker |
Committed (On Roadmap)
Refinement Types
First-class, world-class implementation. Not a half-measure.
```
type NonZero = { v: Int | v != 0 }
type Positive = { v: Int | v > 0 }

def divide(a: Int, b: NonZero): Int = a / b
```
Plan: Deep research (Flux, LiquidHaskell, Thrust PLDI’25), SMT solver (Z3), gradual adoption (runtime checks first, compiler elides what it can prove).
References: Flux (PLDI’23), LiquidHaskell, Thrust (PLDI’25)
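The gradual-adoption step (runtime checks first, static elision later) can be sketched in Python. `Refined`, `NonZero`, and `divide` here are hypothetical illustrations, not Quartz APIs:

```python
# Hypothetical sketch of gradual refinement checking: each refinement
# type carries a predicate that is enforced at runtime; a compiler with
# an SMT solver (e.g. Z3) could prove the predicate and elide the check.
class Refined:
    def __init__(self, name, predicate):
        self.name = name
        self.predicate = predicate

    def check(self, value):
        # Runtime guard; statically provable call sites would skip this.
        if not self.predicate(value):
            raise ValueError(f"{value!r} violates refinement {self.name}")
        return value

NonZero = Refined("NonZero", lambda v: v != 0)

def divide(a: int, b: int) -> int:
    return a // NonZero.check(b)

print(divide(10, 2))  # 5; divide(1, 0) raises before the division runs
```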
GPU Compute
@gpu annotation + NVPTX backend. Phased:
- SIMD vectorization hints (done — S.3-S.9)
- LLVM NVPTX backend for `@gpu` functions
- Host-side kernel launch + memory transfer intrinsics
- Multi-vendor (AMD via AMDGPU)
References: Mojo, Triton, Futhark
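As a rough illustration of the annotation-plus-backend split (not the actual Quartz mechanism), a decorator can model how `@gpu` functions get registered for a separate code-generation path; `GPU_KERNELS` and `saxpy` are hypothetical names:

```python
# Hypothetical sketch: an @gpu-style annotation tags functions for a
# separate backend, mirroring how a compiler could route @gpu functions
# to NVPTX codegen while the host code keeps a launchable handle.
GPU_KERNELS = {}

def gpu(fn):
    GPU_KERNELS[fn.__name__] = fn  # a real backend would compile this to PTX
    return fn

@gpu
def saxpy(a, x, y):
    return [a * xi + yi for xi, yi in zip(x, y)]

print("saxpy" in GPU_KERNELS)  # True
```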
LLM Directives (Needs Design Session)
```
@ai("classify this text into positive/negative/neutral")
def sentiment(text: String): Sentiment
```
Function body is a prompt. Compiler generates API call + type validation. Open questions: non-determinism, caching, cost control, fallback behavior, offline models.
References: LMQL
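A minimal Python sketch of the "prompt as body, compiler validates the reply" idea, with a stub in place of a real model call; `ai`, `SENTIMENTS`, and `_model` are hypothetical illustrations:

```python
# Hypothetical sketch of an @ai-style directive: the annotation carries
# the prompt, and the wrapper validates the model's reply against the
# declared return type (here, a closed set of sentiment labels).
SENTIMENTS = {"positive", "negative", "neutral"}

def ai(prompt, allowed):
    def decorator(fn):
        # _model stands in for a real LLM API call; it takes the full
        # prompt and returns raw text.
        def wrapper(text, _model=lambda p: "positive"):
            reply = _model(f"{prompt}\n\n{text}").strip().lower()
            if reply not in allowed:  # type validation of model output
                raise ValueError(f"model returned {reply!r}, "
                                 f"expected one of {sorted(allowed)}")
            return reply
        return wrapper
    return decorator

@ai("classify this text into positive/negative/neutral", SENTIMENTS)
def sentiment(text):
    ...

print(sentiment("I love this"))  # "positive" (from the stub model)
```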
LLM Compiler Optimization (Exploration)
ML models for optimization decisions: inlining, pass ordering, rewrite selection.
References: Google Scalable Self-Improvement, Meta LLM Compiler
Discuss (Needs Design Session)
Existential Type Model — Strength or Limitation?
Core questions: Does i64-everywhere prevent C-level speed? Where do we pay the cost? Narrow types (I8-U32) address hot paths — is that sufficient? How does this interact with GPU compute?
Quantitative Type Theory (Idris 2 Style)
Each binding annotated with usage: 0 (erased), 1 (linear), or unrestricted. Unifies dependent types and linear types. QTT’s “0-use = erased” aligns naturally with our existential model.
References: Idris 2: QTT
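A toy model of the multiplicity check, assuming uses per binding have already been counted; `check_usage` is a hypothetical helper, not QTT's actual judgment form:

```python
# Toy sketch of QTT-style usage multiplicities: 0 (erased), 1 (linear),
# or "many" (unrestricted). Given a count of runtime uses for a binding,
# decide whether it respects its declared multiplicity.
def check_usage(multiplicity, uses):
    if multiplicity == 0:
        return uses == 0   # erased: must not appear at runtime
    if multiplicity == 1:
        return uses == 1   # linear: used exactly once
    return True            # unrestricted: any number of uses

print(check_usage(1, 1))   # True
print(check_usage(0, 2))   # False: an erased binding was used
```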
Language-Integrated Queries
Compile-time SQL/GraphQL integration. Type-safe queries from struct definitions. Library concern or language concern?
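A sketch of the "type-safe queries from struct definitions" direction, using Python dataclasses as a stand-in for Quartz structs; `select_all` and `User` are hypothetical:

```python
# Hypothetical sketch: derive a SELECT statement from a struct-like
# definition, so column names stay in sync with the type by construction.
from dataclasses import dataclass, fields

@dataclass
class User:
    id: int
    name: str

def select_all(cls):
    # Columns come from the type's fields; a compile-time version could
    # also typecheck WHERE clauses against the field types.
    cols = ", ".join(f.name for f in fields(cls))
    return f"SELECT {cols} FROM {cls.__name__.lower()}"

print(select_all(User))  # SELECT id, name FROM user
```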
Explore (Research Spikes)
Generational References (Vale-style)
Each allocation gets a generation counter; dereferences check it matches. ~10.84% overhead vs unsafe. Could complement our arena-style storage.
References: Vale Memory Safety
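The generation-check mechanism can be modeled in a few lines; `Heap`, `alloc`, and `deref` are hypothetical names for illustration, not Vale's or Quartz's API:

```python
# Toy model of Vale-style generational references: each slot carries a
# generation counter; a reference remembers the generation it was created
# with, and a dereference fails if the slot has since been reallocated.
class Heap:
    def __init__(self, size):
        self.values = [None] * size
        self.generations = [0] * size

    def alloc(self, slot, value):
        self.generations[slot] += 1  # invalidates all older references
        self.values[slot] = value
        return (slot, self.generations[slot])  # a generational reference

    def deref(self, ref):
        slot, gen = ref
        if self.generations[slot] != gen:
            raise RuntimeError("use-after-free caught by generation check")
        return self.values[slot]

heap = Heap(4)
ref = heap.alloc(0, "hello")
print(heap.deref(ref))     # hello
heap.alloc(0, "reused")    # slot 0 reallocated
# heap.deref(ref) would now raise: stale generation detected
```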
Automatic Parallelism via Interaction Nets
Write normal recursive code; runtime auto-parallelizes across CPU/GPU. Bend achieved 57x speedup on RTX 4090 with zero annotations. Would require a completely new backend. Moonshot, but the principle (auto-parallelize pure functions) could inform MIR analysis.
Verse-Style Transactional Computation
All computation is transactional: expressions can fail, and failure triggers automatic rollback. Unifies error handling, backtracking, and concurrency. Needs a write-ahead log, which is fundamentally at odds with LLVM's memory model.
References: Verse at SPLASH 2024
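A minimal sketch of the fail-and-roll-back semantics using a write-ahead log over a plain dict; `transact` and `write` are hypothetical names, not Verse's actual constructs:

```python
# Toy sketch of transactional expressions: mutations go through a
# write-ahead log, and any failure rolls the store back to its state
# at transaction start, in the spirit of Verse's semantics.
def transact(store, body):
    log = []  # write-ahead log of (key, old_value, key_existed)
    def write(k, v):
        log.append((k, store.get(k), k in store))
        store[k] = v
    try:
        return body(write)
    except Exception:
        for k, old, existed in reversed(log):  # rollback on failure
            if existed:
                store[k] = old
            else:
                del store[k]
        raise

store = {"x": 1}
transact(store, lambda write: write("x", 2))
print(store["x"])  # 2: committed

def bad(write):
    write("x", 99)
    raise RuntimeError("expression failed")
try:
    transact(store, bad)
except RuntimeError:
    pass
print(store["x"])  # 2: the failed write was rolled back
```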
libLLVM Integration (In-Process Backend)
Currently Quartz emits textual LLVM IR (.ll) and shells out to llc + clang. Linking against libLLVM’s C API would eliminate the text serialize→parse roundtrip and unlock:
- JIT compilation — ORC JIT for a native-speed REPL, `eval` blocks, hot-reload. This is how Julia works.
- Optimization control — run specific passes per build profile (mem2reg-only for debug, full O2 for release). Currently we get whatever `llc` defaults to.
- Faster compilation — skip text serialization of 200K+ line IR files. Build the IR graph in-memory directly from MIR.
- Better diagnostics — LLVM validates IR as you build it, catching malformed IR at the point of construction instead of a cryptic `llc` error after the fact.
Costs: ~100-500MB libLLVM dependency, version coupling (LLVM upgrades become a project), significant FFI binding surface (dozens of builder functions). Loses the elegance of a pure text-in/text-out compiler.
Phased approach: Keep textual IR as default, add --backend=libllvm as an alternative. Or start with JIT-only (link libLLVM in a separate quartz-jit binary).
Trigger: When we want JIT/REPL, or when profiling shows the llc text-parse step is a compile-time bottleneck.
References: LLVM C API, Julia’s LLVM usage, Zig’s LLVM backend
Full Dependent Types
Types that depend on values, as in Lean 4 and Idris 2. Refinement types (committed above) are the pragmatic 80/20 subset.
Inspiration (Not Committed)
Ideas retained for when the moment is right:
| Idea | Notes |
|---|---|
| Algebraic effects | Subsumes exceptions, async, generators. Koka, Effekt. |
| Mutable value semantics | Hylo-style. Rust safety without lifetimes. |
| Contracts (requires/ensures) | Stepping stone to refinement types. |
| Self-hosted WASM compiler | Compile the Quartz compiler to WASM. Browser-native compilation. |
| Full incremental LSP | Persistent project database + incremental recompilation. |
Launch Visuals & Creative
| Idea | Effort | Impact |
|---|---|---|
| Gource timelapse video | Trivial | Viral potential |
| Commit heatmap wall | Low | Visual storytelling |
| Feature density timeline | Low | Shows velocity |
| Fixpoint proof page | Low | Credibility |
| Benchmark arena (interactive) | Medium | Competitive positioning |
| Boot sequence landing page | Medium | Novelty |
| SIMD particle system demo | Medium | Technical showcase |
| “First Blood” contributor challenge | Low | Community seeding |
| 4K demoscene entry | Medium | Extreme novelty |
| Compile-site-on-every-visit | High | Unhinged flex |
History
Consolidated from funideas.md and moonshots.md (April 6, 2026).
Originals archived in archive/.