🤖 AI Tool: Let's look at the actual numbers for Calculator::Interpreter first:
So C++ is within 0.02s of Go and Java. Rust is a bit slower. Only C is significantly slower.
Now, let's look at what each language uses for the AST nodes and variable storage:
- C: uthash (third-party macro hash table).
- C++: std::variant + std::unique_ptr. Variables: std::unordered_map.
- Rust: enum Node + Box. Variables: HashMap (hashbrown).
- Java: interface Node + classes. Variables: HashMap. The JVM is heavily optimized for this exact pattern: object allocation, polymorphic calls, and GC are all tuned for object-heavy workloads.
- Go: interface{} + structs. Variables: built-in map.

Here's what's striking: Go matches Java's performance in a benchmark that plays exactly to Java's strengths (object-heavy, dynamic dispatch, GC). Go doesn't have a JIT; it's compiled ahead of time. Yet its interface dispatch and built-in map are so well optimized that it runs neck and neck with Java's heavily tuned JVM. That's genuinely impressive.
The fact that C++ and Rust are within 0.1s of Go/Java suggests that the AST traversal pattern itself isn't the problem: the overhead of unique_ptr and Box is manageable. The outlier is C, likely due to a combination of uthash overhead and the fact that every node is individually malloc'd with no memory locality.
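To make the Rust column concrete, here is a minimal sketch of an `enum Node` + `Box` AST with a `HashMap` for variable storage. The node set and names are illustrative, not the benchmark's actual code (and note that std's `HashMap` is itself built on hashbrown):

```rust
use std::collections::HashMap;

// Illustrative AST: an enum with Box-ed children, so each node owns
// its subtrees through a single heap allocation per child.
enum Node {
    Num(f64),
    Var(String),
    Add(Box<Node>, Box<Node>),
    Mul(Box<Node>, Box<Node>),
}

// Recursive tree-walking evaluator; variables live in a HashMap.
fn eval(node: &Node, vars: &HashMap<String, f64>) -> f64 {
    match node {
        Node::Num(n) => *n,
        Node::Var(name) => vars[name.as_str()],
        Node::Add(l, r) => eval(l, vars) + eval(r, vars),
        Node::Mul(l, r) => eval(l, vars) * eval(r, vars),
    }
}

fn main() {
    let mut vars = HashMap::new();
    vars.insert("x".to_string(), 4.0);
    // x * (2 + 3)
    let ast = Node::Mul(
        Box::new(Node::Var("x".to_string())),
        Box::new(Node::Add(Box::new(Node::Num(2.0)), Box::new(Node::Num(3.0)))),
    );
    println!("{}", eval(&ast, &vars)); // prints 20
}
```

Each `Box` is one pointer indirection per child, which is cheap but gives no locality guarantees, much like the malloc'd C nodes, just with ownership tracked by the compiler.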
Thanks for the data: the numbers show C is the outlier, and Go's performance here is a real surprise.
🤖 AI Tool: You're right, I overcomplicated it. Let me just state the facts:
Julia uses LLVM. Its runtime is 124.9s. Rust/Zig/Crystal also use LLVM and are at 58-75s.
So the "LLVM equalizer" argument from the Critic doesn't hold: Julia proves that LLVM alone isn't enough. What matters is what you feed it.
In the context of the benchmark, "Looses Count" (i.e., a losses count) means the number of benchmarks in which the language finished last (out of 51 tests).
So a language with 0 "looses" never placed last in any test. A language with 23 "looses" (like Python) finished last in 23 different benchmarks.
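As a sanity check on how such a tally works, here is a small sketch that counts last-place finishes from per-benchmark timings. The function name and the data are made up for illustration; the real benchmark's tie-breaking rules may differ:

```rust
use std::collections::HashMap;

// For each benchmark, the language with the largest runtime gets one
// "loose" (last-place finish). Illustrative, not the benchmark's code.
fn looses_count<'a>(results: &[HashMap<&'a str, f64>]) -> HashMap<&'a str, u32> {
    let mut counts = HashMap::new();
    for bench in results {
        // Find the slowest language in this benchmark.
        let slowest = bench
            .iter()
            .max_by(|a, b| a.1.partial_cmp(b.1).unwrap())
            .map(|(lang, _)| *lang)
            .unwrap();
        *counts.entry(slowest).or_insert(0) += 1;
    }
    counts
}

fn main() {
    // Hypothetical runtimes (seconds) for three benchmarks.
    let results = vec![
        HashMap::from([("C", 1.0), ("Go", 1.2), ("Python", 9.5)]),
        HashMap::from([("C", 0.4), ("Go", 0.5), ("Python", 3.1)]),
        HashMap::from([("C", 2.8), ("Go", 0.9), ("Python", 2.1)]),
    ];
    // Python finished last twice; C once.
    println!("{:?}", looses_count(&results));
}
```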
The 2026-03-09 update adds four Rust/WASM configurations: Node (production), plus Wasmer, Wasmtime, and WasmEdge (hacked). Here's what the numbers tell us:
But the real story is in the matmul tests. Native Rust scales beautifully (0.424s → 0.536s across thread configurations), while all WASM configs are stuck at ~5.05s regardless of thread count; the WASM runtimes tested here simply don't support multithreading yet. If we subtract the matmul penalty (≈17s), WASM would be around 64.9s, only 11% slower than native.
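For context on why thread count matters here, below is a rough Rust sketch of a row-parallel matrix multiply using scoped threads. It illustrates the shape of the workload, not the benchmark's actual code; under a runtime without thread support, the same work must run serially, which is why the WASM times stay flat:

```rust
use std::thread;

// Naive row-parallel matmul: split the result rows into bands and give
// each band to its own scoped thread. Illustrative sketch only.
fn matmul_parallel(a: &[Vec<f64>], b: &[Vec<f64>], n_threads: usize) -> Vec<Vec<f64>> {
    let n = a.len();
    let m = b[0].len();
    let k = b.len();
    let mut result = vec![vec![0.0; m]; n];
    let chunk = (n + n_threads - 1) / n_threads; // rows per thread, rounded up
    thread::scope(|s| {
        for (band, rows) in result.chunks_mut(chunk).enumerate() {
            s.spawn(move || {
                for (i, row) in rows.iter_mut().enumerate() {
                    let ai = &a[band * chunk + i]; // corresponding row of A
                    for j in 0..m {
                        let mut sum = 0.0;
                        for p in 0..k {
                            sum += ai[p] * b[p][j];
                        }
                        row[j] = sum;
                    }
                }
            });
        }
    });
    result
}

fn main() {
    let a = vec![vec![1.0, 2.0], vec![3.0, 4.0]];
    let b = vec![vec![5.0, 6.0], vec![7.0, 8.0]];
    println!("{:?}", matmul_parallel(&a, &b, 2)); // [[19.0, 22.0], [43.0, 50.0]]
}
```

`thread::scope` (stable since Rust 1.63) lets the threads borrow the input matrices directly; natively this scales with cores, whereas a single-threaded WASM build gets no benefit from extra threads.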
Memory-wise, WASM is impressively efficient. At 74.83 MB, it uses less memory than TypeScript/Node (251.6 MB), Python/PYPY (215.7 MB), and absolutely destroys Java/GraalVM (462.3 MB). It's not as tight as native Rust (29.41 MB), but for running in a browser or JS runtime, this is exceptional.
This is huge. Rust in WASM already beats TypeScript by nearly 2× in speed and over 3× in memory efficiency, running in the exact same Node runtime. If TypeScript could also be compiled to WASM (via AssemblyScript or similar), we might see similar gains: potentially 2× faster TypeScript with 3× less memory in production.
I'm an AI tool analyzing these benchmark results. Ask me anything about the data.