Quartz v5.25

Quartz Vision & Future Features

Ideas, committed features, and moonshots for Quartz’s future. For the actionable work queue, see ROADMAP.md. Last updated: April 6, 2026


Already Shipped

These were on the vision list and are now done:

| Feature | Status |
| --- | --- |
| C backend | codegen_c.qz — emit C from MIR |
| WASI/WebAssembly target | codegen_wasm.qz + wasm_runtime.qz (93 synthesized functions) |
| E-graph optimization | Acyclic e-graph + hashcons CSE in MIR |
| Polyhedral loops | Via LLVM Polly (-polly flag) |
| Union/intersection types | Parser + typechecker + codegen |
| Linear types + Drop/RAII | linear struct, borrows, move semantics |
| Structured concurrency | go_scope, go_supervisor, go_race, actors |
| Custom iterators | $next protocol + for-in integration |
| Networking hardening | HTTP/2, TLS, race detector |
| Dogfooding vision | Web server + marketing site in Quartz + Docker |
| WASM playground | Browser compiler with Monaco editor |
| Website skeleton | GitHub Pages + Astro + Docker |

Committed (On Roadmap)

Refinement Types

First-class, world-class implementation. Not a half-measure.

type NonZero = { v: Int | v != 0 }
type Positive = { v: Int | v > 0 }

def divide(a: Int, b: NonZero): Int = a / b

Plan: Deep research (Flux, LiquidHaskell, Thrust PLDI’25), SMT solver (Z3), gradual adoption (runtime checks first, compiler elides what it can prove).

References: Flux (PLDI’23), LiquidHaskell, Thrust (PLDI’25)
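
The "runtime checks first" phase can be sketched in Python (illustrative only; Quartz's refinement checker does not exist yet, and all names here are hypothetical): each refinement type becomes a predicate checked at construction, and a later SMT pass would elide any check it can prove.

```python
# Sketch (Python, illustrative): refinement types as runtime-checked
# predicates, modeling the "runtime checks first" adoption phase.
# NonZero/Positive mirror the Quartz example above; a refinement-aware
# compiler would later elide checks the SMT solver can discharge.

def refine(predicate, name):
    """Return a constructor that admits only values satisfying `predicate`."""
    def check(v):
        if not predicate(v):
            raise ValueError(f"{v!r} does not satisfy refinement {name}")
        return v
    return check

NonZero = refine(lambda v: v != 0, "NonZero")
Positive = refine(lambda v: v > 0, "Positive")

def divide(a, b):
    # The check is explicit here; the compiler would insert it at call
    # sites and remove it wherever the solver proves b != 0.
    return a // NonZero(b)

print(divide(10, 2))  # 5
```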

GPU Compute

@gpu annotation + NVPTX backend. Phased:

  1. SIMD vectorization hints (done — S.3-S.9)
  2. LLVM NVPTX backend for @gpu functions
  3. Host-side kernel launch + memory transfer intrinsics
  4. Multi-vendor (AMD via AMDGPU)

References: Mojo, Triton, Futhark

LLM Directives (Needs Design Session)

@ai("classify this text into positive/negative/neutral")
def sentiment(text: String): Sentiment

Function body is a prompt. Compiler generates API call + type validation. Open questions: non-determinism, caching, cost control, fallback behavior, offline models.

References: LMQL
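
What the compiler could desugar @ai into can be sketched in Python (illustrative only; `call_model` is a stand-in for a real LLM client, implemented here as a deterministic keyword heuristic so the sketch is self-contained): an API call plus validation of the result against the declared return type.

```python
# Sketch (Python, illustrative): @ai as a decorator that replaces the
# function body with a generated model call plus type validation.

SENTIMENTS = {"positive", "negative", "neutral"}

def call_model(prompt, text):
    # Placeholder for the generated API call (assumption, not a real API).
    lowered = text.lower()
    if any(w in lowered for w in ("great", "love", "excellent")):
        return "positive"
    if any(w in lowered for w in ("bad", "hate", "awful")):
        return "negative"
    return "neutral"

def ai(prompt):
    def wrap(fn):
        def generated(text):
            result = call_model(prompt, text)
            # The declared return type becomes a runtime check; what to do
            # on invalid output is one of the open questions above.
            if result not in SENTIMENTS:
                raise TypeError(f"model returned non-Sentiment value: {result!r}")
            return result
        return generated
    return wrap

@ai("classify this text into positive/negative/neutral")
def sentiment(text):
    ...  # body is the prompt; the compiler supplies the implementation

print(sentiment("I love this language"))  # positive
```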

LLM Compiler Optimization (Exploration)

ML models for optimization decisions: inlining, pass ordering, rewrite selection.

References: Google Scalable Self-Improvement, Meta LLM Compiler


Discuss (Needs Design Session)

Existential Type Model — Strength or Limitation?

Core questions: Does i64-everywhere prevent C-level speed? Where do we pay the cost? Narrow types (I8-U32) address hot paths — is that sufficient? How does this interact with GPU compute?

Quantitative Type Theory (Idris 2 Style)

Each binding annotated with usage: 0 (erased), 1 (linear), or unrestricted. Unifies dependent types and linear types. QTT’s “0-use = erased” aligns naturally with our existential model.

References: Idris 2: QTT
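
The core bookkeeping is small; a minimal sketch in Python (illustrative only, not a proposal for checker internals): each binding carries a multiplicity, and the checker compares it against the number of runtime-relevant uses.

```python
# Sketch (Python, illustrative): QTT multiplicities as a usage check.
# 0 = erased (types only), 1 = linear (exactly one use), omega = unrestricted.

OMEGA = "omega"

def check_usage(multiplicity, uses):
    """Return True iff `uses` runtime occurrences are legal for the multiplicity."""
    if multiplicity == 0:
        return uses == 0   # erased: may appear in types, never at runtime
    if multiplicity == 1:
        return uses == 1   # linear: consumed exactly once
    return True            # unrestricted

assert check_usage(0, 0)      # erased binding, never used at runtime
assert not check_usage(1, 2)  # linear binding used twice: rejected
assert check_usage(OMEGA, 5)  # unrestricted
```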

Language-Integrated Queries

Compile-time SQL/GraphQL integration. Type-safe queries from struct definitions. Library concern or language concern?
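
The "library concern" end of the spectrum can be sketched in Python (illustrative only; the dataclass and `select` helper are hypothetical): queries derived from a struct definition, with field names validated against it. A compile-time version would additionally check the predicate against column types rather than building strings.

```python
# Sketch (Python, illustrative): deriving a parameterized SQL query from
# a struct definition, with field names checked against the struct.

from dataclasses import dataclass, fields

@dataclass
class User:
    id: int
    name: str
    age: int

def select(cls, where=None):
    cols = ", ".join(f.name for f in fields(cls))
    sql = f"SELECT {cols} FROM {cls.__name__.lower()}"
    if where is not None:
        field, op, value = where
        if field not in {f.name for f in fields(cls)}:
            raise AttributeError(f"{cls.__name__} has no field {field!r}")
        return sql + f" WHERE {field} {op} ?", (value,)
    return sql, ()

print(select(User, ("age", ">", 21)))
# ('SELECT id, name, age FROM user WHERE age > ?', (21,))
```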


Explore (Research Spikes)

Generational References (Vale-style)

Each allocation gets a generation counter; every dereference checks that the reference's generation still matches. Vale reports ~10.84% overhead versus unsafe code. Could complement our arena-style storage.

References: Vale Memory Safety
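
The mechanism is simple enough to simulate; a minimal sketch in Python (illustrative only, not Vale's implementation): slots carry a generation, references remember the generation they were created with, and every dereference compares the two.

```python
# Sketch (Python, illustrative): generational references. Freeing a slot
# bumps its generation, so stale references fail the dereference check
# instead of reading freed memory.

heap = []  # slot -> [generation, value]

def alloc(value):
    heap.append([0, value])
    return len(heap) - 1, 0          # reference = (slot, generation)

def free(slot):
    heap[slot][0] += 1               # bump generation; old refs are now stale
    heap[slot][1] = None

def deref(ref):
    slot, gen = ref
    if heap[slot][0] != gen:         # roughly where the reported ~10% cost is paid
        raise RuntimeError("use after free detected")
    return heap[slot][1]

r, g = alloc("hello")
assert deref((r, g)) == "hello"
free(r)
# deref((r, g)) would now raise RuntimeError
```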

Automatic Parallelism via Interaction Nets

Write normal recursive code; runtime auto-parallelizes across CPU/GPU. Bend achieved 57x speedup on RTX 4090 with zero annotations. Would require a completely new backend. Moonshot, but the principle (auto-parallelize pure functions) could inform MIR analysis.

References: Bend, Vine

Verse-Style Transactional Computation

All computation is transactional. Expressions can fail, and failure triggers automatic rollback. Unifies error handling, backtracking, and concurrency. Requires a write-ahead log, which is fundamentally at odds with LLVM’s memory model.

References: Verse at SPLASH 2024
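
The rollback-on-failure semantics can be approximated in Python (illustrative only; snapshotting stands in for the write-ahead log, and `Fail` plays the role of a failing Verse expression): a transaction either commits all its steps or restores the prior state.

```python
# Sketch (Python, illustrative): transactional expressions with automatic
# rollback, approximated by snapshotting state before the transaction.

import copy

class Fail(Exception):
    pass

def transact(state, *steps):
    """Run steps against state; on failure, restore the snapshot."""
    snapshot = copy.deepcopy(state)
    try:
        for step in steps:
            step(state)
        return True
    except Fail:
        state.clear()
        state.update(snapshot)       # automatic rollback
        return False

account = {"balance": 100}

def withdraw(n):
    def step(state):
        state["balance"] -= n
        if state["balance"] < 0:
            raise Fail               # expression fails mid-transaction
    return step

assert transact(account, withdraw(30))       # commits: balance is now 70
assert not transact(account, withdraw(100))  # fails: rolled back to 70
assert account["balance"] == 70
```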

libLLVM Integration (In-Process Backend)

Currently Quartz emits textual LLVM IR (.ll) and shells out to llc + clang. Linking against libLLVM’s C API would eliminate the text serialize→parse roundtrip and unlock:

  1. JIT compilation — ORC JIT for a native-speed REPL, eval blocks, hot-reload. This is how Julia works.
  2. Optimization control — Run specific passes per build profile (mem2reg-only for debug, full O2 for release). Currently we get whatever llc defaults to.
  3. Faster compilation — Skip text serialization of 200K+ line IR files. Build the IR graph in-memory directly from MIR.
  4. Better diagnostics — LLVM validates IR as you build it, catching malformed IR at the point of construction instead of a cryptic llc error after the fact.

Costs: ~100-500MB libLLVM dependency, version coupling (LLVM upgrades become a project), significant FFI binding surface (dozens of builder functions). Loses the elegance of a pure text-in/text-out compiler.

Phased approach: Keep textual IR as default, add --backend=libllvm as an alternative. Or start with JIT-only (link libLLVM in a separate quartz-jit binary).

Trigger: When we want JIT/REPL, or when profiling shows the llc text-parse step is a compile-time bottleneck.

References: LLVM C API, Julia’s LLVM usage, Zig’s LLVM backend

Full Dependent Types

Types that depend on values, as in Lean 4 and Idris 2. Refinement types (committed above) are the pragmatic 80/20 subset.

References: Lean4, Idris 2


Inspiration (Not Committed)

Ideas retained for when the moment is right:

| Idea | Notes |
| --- | --- |
| Algebraic effects | Subsumes exceptions, async, generators. Koka, Effekt. |
| Mutable value semantics | Hylo-style. Rust safety without lifetimes. |
| Contracts (requires/ensures) | Stepping stone to refinement types. |
| Self-hosted WASM compiler | Compile the Quartz compiler to WASM. Browser-native compilation. |
| Full incremental LSP | Persistent project database + incremental recompilation. |

Launch Visuals & Creative

| Idea | Effort | Impact |
| --- | --- | --- |
| Gource timelapse video | Trivial | Viral potential |
| Commit heatmap wall | Low | Visual storytelling |
| Feature density timeline | Low | Shows velocity |
| Fixpoint proof page | Low | Credibility |
| Benchmark arena (interactive) | Medium | Competitive positioning |
| Boot sequence landing page | Medium | Novelty |
| SIMD particle system demo | Medium | Technical showcase |
| “First Blood” contributor challenge | Low | Community seeding |
| 4K demoscene entry | Medium | Extreme novelty |
| Compile-site-on-every-visit | High | Unhinged flex |

History

Consolidated from funideas.md and moonshots.md (April 6, 2026). Originals archived in archive/.