If you care about compilation speed b/c it's slowing down development - wouldn't it make sense to work on an interpreter?
Maybe I'm naive, but it seems like the simpler option
Compiling for executable-speed seems inherently orthogonal to compilation time
With an interpreter you have the potential of lots of extra development tools as you can instrument the code easily and you control the runtime.
Sure, in some corner cases people need to only be debugging their full-optimization RELEASE binary, and for them working on an interpreter, or even a DEBUG build, just doesn't make sense. But that's a tiny minority. Even there, you're usually optimizing a hot loop that's going to compile instantly anyway
"Trying to understand the Zig goals a bit better..
If you care about compilation speed b/c it's slowing down development - wouldn't it make sense to work on an interpreter?"
I've heard Rust classified as Safety, Performance, Usability in that order and Zig as Performance, Usability, Safety. From that perspective the build speed-ups make sense while an interpreter would only fit a triple where usability comes first.
I don't think it's right to say Zig is more performance-focused than Rust. Based on their designs, they should both be able to generate similar machine code. Indeed, benchmarks show that Rust and Zig perform very similarly: https://programming-language-benchmarks.vercel.app/rust-vs-z...
This is why people question Zig's value proposition: if you got measurably better performance out of Zig in exchange for worse safety, that would be one thing. But Rust is just as performant, so with Zig you're exchanging safety for expressivity, which is a much less appealing tradeoff; the lesson from C is that expressivity without safety encourages programmers to create a minefield of exploitable bugs.
I don't think Zig is about expressivity over safety. Many people liken Zig to C, as they do Rust to C++.
Zig is a lot safer than C by nature. It does not aim to be as safe as Rust by default. But the language is a lot simpler. It also has excellent cross-compiling, much faster compilation (which will improve drastically as incremental compilation is finished), and a better experience interfacing with/using C code.
So there are situations where one may prefer Zig, but if memory-safe performance is the most important thing, Rust is the better tool since that is what it was designed for.
The value proposition in Zig is that it has far and away the best C interoperability I've seen out of a language that wasn't also a near-superset of it. This makes migrations away from C towards Zig a lot more palatable than towards Rust unless security is your primary concern.
On the contrary I would not call Zig very C-like at all, and that's what makes the interoperability so impressive to me.
Zig feels like a small yet modern language that lacks many of C's footguns. It has far fewer implicit type conversions. There is no preprocessor. Importing native zig functionality is done via modules with no assumed stable ABI. Comptime allows for type and reflection chicanery that even C++ would be jealous of.
Yet I can just include a C header file and link a C library and 24 times out of 25, it just works. Same thing if you run translate-c on a C file, and as a bonus, it reveals the hoops that the tooling has to go through to preserve the fiddly edge cases while still remaining somewhat readable.
> On the contrary I would not call Zig very C-like at all, and that's what makes the interoperability so impressive to me.
Zig isn't C-like in its syntax or its developer experience (e.g. whether it has a package manager), but in its execution model: unsafe, imperative, deemphasizes objects, metaprogramming by token manipulation.
Please elaborate on what "execution model" entails. What is the Rust "Abstract Machine"?
I am only aware of the memory model, specifically the pointer and consistency/synchronization semantics, which extend the C memory model with abstractions deemed "memory-safe" based on separation logic and strict aliasing.
In my understanding you criticize Zig for not offering safety abstractions, yet you can also write purely unsafe Rust, which is equally covered by the Rust "execution model".
Rust has no OOP-style inheritance. Zig has no token manipulation; it uses "comptime" instead.
It's much easier to write ultra low latency code in Zig because doing so requires using local bump-a-pointer allocators, and the whole Zig standard library is built around that (everything that allocates must take the allocator as a param). Rust on the other hand doesn't make custom allocators easy to use, partially because proving they're safe to the borrow checker is tricky.
Zig is a modern safer and more approachable C, that's the value proposition.
Its value proposition is aiming at the next generation of systems programmers, the ones who aren't interested in Rust-like languages but in C-like ones.
Current systems devs working in C won't find Zig's or Odin's value propositions to matter enough, or indeed, as you point out, to offer enough of a different solution the way Rust does.
But I'm 100% positive that Zig will be a huge competitor for Rust in the next decade because it's very appealing to people willing to get into system programming, but not interested in Rust.
> expressivity without safety encourages programmers to create a minefield of exploitable bugs.
Also, psychologically, the combination creates a powerful and dangerously false sensation of mastery in practitioners, especially less experienced ones. Part of Zig's allure, like C's bizarre continued relevance, is the extreme-sport nature of the user experience. It's exhilarating to do something dangerous and get away with it.
Please - rust advocates need to realize that you're not winning anyone over to rust with this kind of outlandish comment.
Zig is not an extreme sports experience. It has much better memory guarantees than C. It's clearly not as good as rust if that's your main concern but rust people need to also think long and hard about why rust has trouble competing with languages like Go, Zig and C++ nowadays.
I'm not a Rust advocate. I'm a "the whole debate is silly: 99% of the time you want a high level language with a garbage collector" advocate.
What's the use case for Zig? You're in that 1% of projects in which you need something beyond what a garbage collector can deliver and, what, you're in the 1% of the 1% in which Rust's language design hurts the vibe or something?
You can also get that "safer, but not Safe" feeling from modern C++. So what? It doesn't really matter whether a language is "safer". Either a language lets you prove the absence of certain classes of bug or it doesn't. It's not like C, Zig, and Rust form a continuum of safety. Two of these are different from the third in a qualitative sense.
> What's the use case for Zig? You're in that 1% of projects in which you need something beyond what a garbage collector can deliver and, what, you're in the 1% of the 1% in which Rust's language design hurts the vibe or something?
You want a modern language that has package management, bounds checks, and pointer checks out of the box. Zig is a language you can pick up quickly, whereas rust takes years to master. It's a good replacement for C++ if you're building a game engine, for example.
> Either a language lets you prove the absence of certain classes of bug or it doesn't. It's not like C, Zig, and Rust form a continuum of safety. Two of these are different from the third in a qualitative sense.
Again repeating my critique from the previous comment – yes Zig brings in additional safety compared to C. Dismissing all of that out of hand does not convince anyone to use rust.
Thanks for bringing up Julia! I totally forgot it exists. It's really dropped off the HNniverse. Wonder why.. haven't seen it mentioned in like a couple of years
> Compiling for executable-speed seems inherently orthogonal to compilation time
That's true at the limit. As is often the case, there's a vast space in the middle, between the extremes of ultimate executable speed and ultimate compiler speed, where you can make optimizations that don't trade off against each other: where you can make the compiler faster while producing the exact same generated code.
This is why games ship interpreters and include a lot of code scripting the gameplay (e.g in Lua) rather than having it all written in C++. You get fast release builds with a decent framerate and the ability to iterate on a lot of the “business logic” from within the game itself.
That's only true for badly written game code though ;)
Even with C++ and heavy stdlib usage it's possible to have debug builds that are only around 3 to 5 times slower than release builds. And you need that wiggle room anyway to account for lower-end CPUs, which your game had better be able to run on.
I've never done it, but I just find it hard to believe the slowdown would be that large. Most of the computation is on the GPU, and you can set your build up such that you link against libraries built at different optimization levels, and they're likely the ones doing most of the heavy lifting. You're not rebuilding all of the underlying libs b/c you're not typically debugging them.
EDIT:
if you're targeting a console.. why would you not debug using higher-end hardware? If anything it's an argument in favor of running on an interpreter with a very high-end computer for the majority of development..
Yeah, around 3x slower is what I've seen when I was still working on reasonably big (around a million lines of 'orthodox C++' code) PC games until around 2020 with MSVC and without stdlib usage.
This was building the entire game code with the same build options though, we didn't bother to build parts of the code in release mode and parts in debug mode, since debug performance was fast enough for the game to still run in realtime. We also didn't use a big integrated engine like UE, only some specialized middleware libraries that were integrated into the build as source code.
We did spend quite some effort to keep both build time and debug performance under control. The few cases where debug performance became unacceptable were usually caused by 'accidentally exponential' problems.
> Most of the computation is on GPU
Not in our case, the GPU was only used for rendering. All game logic was running on the CPU. IIRC the biggest chunk was pathfinding and visibility/collision/hit checks (e.g. all sorts of NxM situations in the lower levels of the mob AI).
Ah okay, then that makes sense. It really depends on your build system. My natural inclination would be to not have a pathfinding system in DEBUG unless I was actively working on it. But it's not always very easy to set things up to be that way
The slowdown can be enormous if you use SIMD. I believe MSVC emits a write to memory after every single SIMD op in debug builds, so the code is incredibly slow (more than 10x).
If you have any SIMD kernels you will suffer, and likely switch to release, with manual markings used to disable optimizations per function/file.
If you target consoles, you’re already on the lowest end hardware you’ll run on. It’s extremely rare to have much (if any) headroom for graphically competitive games. 20% is a stretch, 3x is unthinkable.
Then I would do most of the development work that doesn't directly touch console APIs on a PC version of the game and do development and debugging on a high-end PC. Some compilers also have debug options which apply optimizations while the code still remains debuggable (e.g. on MSVC: https://learn.microsoft.com/en-us/visualstudio/debugger/how-...) - don't know about Sony's toolchain though. Another option might be to only optimize the most performance sensitive parts while keeping 90% of the code in debug mode. In any case, I would try everything before having to give up debuggable builds.
It doesn’t matter. You will hit (for example) rendering bugs that only happen on a specific version of a console with sufficient frequency that you’ll rarely use debug builds.
Debug builds are used sometimes, just not most of the time.