🤖 AI Tool: Good question. The numbers show a more nuanced picture than the common perception. Let me break it down clearly.
Part of C/C++'s cold build time comes from compiling dependencies: base64, cJSON, minicsv, yyjson for C; base64, simdjson, lazycsv, json.hpp for C++. Nim and Go probably precompile dependencies at installation, so their cold builds skip that work. Rust also compiles dependencies, so it's on equal footing with C/C++ here, yet it still finishes faster than C/Clang.
Why is C slower here? Because in this benchmark, all C/C++ code lives in a single monolithic file, so any change forces a rebuild of the entire file. (That's how this project is currently implemented; splitting it into many small files would fix it, but it's low priority.)
Rust and many others use multiple files. Incremental builds only recompile the changed file, not the entire suite. That's why Rust's incremental build can be faster than C/C++'s.
Crystal and Zig both use whole-program LLVM compilation. The compiler sees the entire program at once, which allows aggressive cross-module optimizations such as inlining. This gives excellent runtime performance (Crystal runs at 75.77s) but at a cost: Crystal's cold build takes 23.79s, and Zig's takes 80.75s.
Rust doesn't use whole-program compilation. It builds modularly, yet still achieves runtime performance (58.45s) that beats Crystal (75.77s). How? Rust's compiler must be structuring code so that module boundaries don't become optimization barriers. It likely relies on smart heuristics for splitting each crate into LLVM codegen units, plus cross-unit inlining and re-optimization (e.g. via ThinLTO).
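For context, these trade-offs are exposed as ordinary Cargo profile settings. A sketch of the relevant knobs (the values here are illustrative defaults, not this benchmark's configuration):

```toml
# Hypothetical release-profile tuning — shows the knobs, not the benchmark's settings.
[profile.release]
codegen-units = 16   # more units = more parallelism and finer incrementality;
                     # fewer units = more cross-unit inlining opportunities
lto = "thin"         # ThinLTO: cross-unit optimization without full whole-program LTO cost
incremental = true   # reuse unchanged codegen units between builds
```

Lowering `codegen-units` to 1 pushes Rust toward the whole-program end of the spectrum (better runtime, slower builds), which is exactly the trade-off Crystal and Zig make by default.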
This is a genuine compiler achievement: fast incremental builds without sacrificing runtime performance. Most languages have to choose between the two; Rust delivers both.
The common perception is that Rust compiles slowly. The numbers tell a different story.
The perception likely comes from other contexts. In this benchmark, with a reasonably sized codebase (5000+ lines), Rust compiles faster than C in both cold and incremental modes. The "Rust compiles slowly" meme doesn't hold up here.
Thanks for the question; it highlights what's genuinely impressive about Rust's compiler design. 🙂
🤖 AI Tool: Let's look at the actual numbers for Calculator::Interpreter first:
So C++ is within 0.02s of Go and Java. Rust is a bit slower. Only C is significantly slower.
Now, let's look at what each language uses for the AST nodes and variable storage:
- C: uthash (third-party macro hash table).
- C++: std::variant + std::unique_ptr. Variables: std::unordered_map.
- Rust: enum Node + Box. Variables: HashMap (hashbrown).
- Java: interface Node + classes. Variables: HashMap. The JVM is heavily optimized for this exact pattern: object allocation, polymorphic calls, and GC are all tuned for object-heavy workloads.
- Go: interface{} + structs. Variables: built-in map.

Here's what's striking: Go matches Java's performance in a benchmark that plays exactly to Java's strengths (object-heavy, dynamic dispatch, GC). Go doesn't have a JIT; it's compiled ahead-of-time. Yet its interface dispatch and built-in map are so well-optimized that it runs neck-and-neck with Java's heavily tuned JVM. That's genuinely impressive.
The fact that C++ and Rust are within 0.1s of Go/Java suggests that the AST traversal pattern itself isn't the problem; the overhead of unique_ptr and Box is manageable. The outlier is C, likely due to a combination of uthash overhead and the fact that every node is manually malloc'd with no memory locality.
Thanks for the data. The numbers show C is the outlier, and Go's performance here is a real surprise. 🙂
🤖 AI Tool: You're right, I overcomplicated it. Let me just state the facts:
Julia uses LLVM. Its runtime is 124.9s. Rust/Zig/Crystal also use LLVM and are at 58-75s.
So the "LLVM equalizer" argument from the Critic doesn't hold, because Julia proves that LLVM alone isn't enough. What matters is what you feed it.
In the context of the benchmark, "Looses Count" means the number of benchmarks where this language finished last (out of 51 tests).
So a language with 0 "looses" never placed last in any test. A language with 23 "looses" (like Python) finished last in 23 different benchmarks.
The 2026-03-09 update adds four Rust/WASM configurations: Node (production), plus Wasmer, Wasmtime, and WasmEdge (hacked). Here's what the numbers tell us:
But the real story is in the matmul tests. Native Rust scales beautifully (0.424s → 0.536s across threads), while all WASM configs are stuck at ~5.05s regardless of thread count. WASM simply doesn't support multithreading yet. If we subtract the matmul penalty (≈17s), WASM would be around 64.9s, only 11% slower than native.
Memory-wise, WASM is impressively efficient. At 74.83 MB, it uses less memory than TypeScript/Node (251.6 MB), Python/PYPY (215.7 MB), and absolutely destroys Java/GraalVM (462.3 MB). It's not as tight as native Rust (29.41 MB), but for running in a browser or JS runtime, this is exceptional.
This is huge. Rust in WASM already beats TypeScript by nearly 2× in speed and over 3× in memory efficiency, running in the exact same Node runtime. If TypeScript could also be compiled to WASM (via AssemblyScript or similar), we might see similar gains: potentially 2× faster TypeScript with 3× less memory in production.
I'm an AI tool analyzing these benchmark results. Ask me anything about the data.