🤖 AI Tool: You're right, I overcomplicated it. Let me just state the facts:
Julia uses LLVM. Its runtime is 124.9s. Rust/Zig/Crystal also use LLVM and are at 58-75s.
So the "LLVM equalizer" argument from the Critic doesn't hold, because Julia proves that LLVM alone isn't enough. What matters is what you feed it.
In the context of the benchmark, "Looses Count" (sic, i.e. "losses") means the number of benchmarks where this language finished last (out of 51 tests).
So a language with 0 "looses" never placed last in any test. A language with 23 "looses" (like Python) finished last in 23 different benchmarks.
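The tally can be sketched in a few lines. The timing data below is hypothetical, purely to illustrate how a last-place count would be computed from per-benchmark results:

```python
# Sketch of how a "Looses Count" could be tallied: for each benchmark,
# the language with the worst (highest) runtime gets one "loose".
# All timings below are made up for illustration.
from collections import Counter

def looses_count(results):
    """results: {benchmark: {language: runtime_seconds}} -> Counter of last places."""
    tally = Counter()
    for times in results.values():
        slowest = max(times, key=times.get)  # language that finished last
        tally[slowest] += 1
    return tally

results = {
    "matmul":      {"Rust": 0.42, "Julia": 1.9,  "Python": 120.0},
    "nbody":       {"Rust": 3.1,  "Julia": 4.0,  "Python": 95.0},
    "binarytrees": {"Rust": 6.0,  "Julia": 25.0, "Python": 21.0},
}
print(looses_count(results))
```

With these toy numbers, Python finishes last in two tests and Julia in one, so Python's "looses" count is 2.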
The 2026-03-09 update adds four Rust/WASM configurations: Node (production) and Wasmer, Wasmtime, and WasmEdge (hacked). Here's what the numbers tell us:
But the real story is in the matmul tests. Native Rust scales beautifully (0.424s → 0.536s across threads), while all WASM configs are stuck at ~5.05s regardless of thread count. WASM simply doesn't support multithreading yet. If we subtract the matmul penalty (≈17s), WASM would be around 64.9s, only 11% slower than native.
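That subtraction can be checked with a few lines. Note the native Rust total (~58.5s) is back-derived here from the quoted "11% slower" figure, not a number taken from the benchmark table:

```python
# Back-of-the-envelope check of the matmul-penalty adjustment.
# 64.9s and the ~17s penalty come from the discussion above;
# the native total (~58.5s) is an assumption inferred from "11% slower".
wasm_total = 64.9 + 17.0           # WASM total including the matmul penalty
wasm_adjusted = wasm_total - 17.0  # 64.9s: WASM with matmul excluded
native_total = 58.5                # assumed native Rust total
slowdown = wasm_adjusted / native_total - 1.0
print(f"{slowdown:.0%}")
```

With those inputs the adjusted WASM time comes out about 11% above native, consistent with the claim.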
Memory-wise, WASM is impressively efficient. At 74.83 MB, it uses less memory than TypeScript/Node (251.6 MB), Python/PYPY (215.7 MB), and absolutely destroys Java/GraalVM (462.3 MB). It's not as tight as native Rust (29.41 MB), but for running in a browser or JS runtime, this is exceptional.
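The memory comparisons follow directly from the reported figures; a quick ratio check (numbers copied from the discussion above):

```python
# Peak memory figures quoted in the benchmark discussion (MB).
mem = {
    "Rust (native)":   29.41,
    "Rust (WASM)":     74.83,
    "TypeScript/Node": 251.6,
    "Python/PYPY":     215.7,
    "Java/GraalVM":    462.3,
}
wasm = mem["Rust (WASM)"]
for name, mb in mem.items():
    print(f"{name}: {mb / wasm:.2f}x of Rust/WASM")
```

TypeScript/Node lands at roughly 3.4× the WASM footprint and GraalVM at roughly 6.2×, which is where the "over 3×" claim below comes from.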
This is huge. Rust in WASM already beats TypeScript by nearly 2× in speed and over 3× in memory efficiency, running in the exact same Node runtime. If TypeScript could also be compiled to WASM (via AssemblyScript or similar), we might see similar gains: potentially 2× faster TypeScript with 3× less memory in production.
I'm an AI tool analyzing these benchmark results. Ask me anything about the data.