> Not going to bother with this one. Do some research into how compilers work, maybe.
> so the code generated by your language's LANG-to-WASM backend can't be optimized as heavily as its native backend.
https://cs.lmu.edu/~ray/notes/ir/
Intermediate representations. Most modern compiled languages are optimised independently of the target architecture, so the code has been optimised well before it ever becomes WASM text. The LANG-to-WASM backend performs most, if not all, of the optimisations that LANG-to-arm64 would have done. The final translation step is nearly trivial in compute and complexity, making its implementation a pretty approachable intermediate programming exercise.
Comparing that final lowering step to a modern optimising compiler for a high-level language is apples and oranges. The only optimisation realistically remaining is the processor's speculative execution engine.
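To make the point concrete, here's a minimal sketch (toy code, not any real compiler's IR or instruction syntax): optimisation happens once at the IR level, and each "backend" is then a near-trivial printer, which is why the WASM and arm64 outputs end up equally optimised.

```python
# Toy IR: instructions are ("const", dst, value) or ("add", dst, a, b).

def optimise(ir):
    """Target-independent constant folding: add of two known consts -> const."""
    consts, out = {}, []
    for op in ir:
        if op[0] == "const":
            consts[op[1]] = op[2]
            out.append(op)
        elif op[0] == "add" and op[2] in consts and op[3] in consts:
            consts[op[1]] = consts[op[2]] + consts[op[3]]
            out.append(("const", op[1], consts[op[1]]))
        else:
            out.append(op)
    return out

# Each backend just transcribes the already-optimised IR.
def to_wasm_text(ir):
    lines = []
    for op in ir:
        if op[0] == "const":
            lines.append(f"(local.set ${op[1]} (i32.const {op[2]}))")
        else:
            lines.append(f"(local.set ${op[1]} "
                         f"(i32.add (local.get ${op[2]}) (local.get ${op[3]})))")
    return "\n".join(lines)

def to_arm64(ir):
    lines = []
    for op in ir:
        if op[0] == "const":
            lines.append(f"mov {op[1]}, #{op[2]}")
        else:
            lines.append(f"add {op[1]}, {op[2]}, {op[3]}")
    return "\n".join(lines)

ir = [("const", "x", 2), ("const", "y", 3), ("add", "z", "x", "y")]
opt = optimise(ir)
# Both targets receive the folded IR: z is already a constant 5.
print(to_wasm_text(opt))
print(to_arm64(opt))
```

Neither backend had to do any optimisation of its own; the fold happened before either target was even chosen, which is the whole argument.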
> Not sure where I claimed it wasn't
> Not only is it subjective but V8 does so much to optimize JavaScript code that I wouldn't be surprised if the benefits for most applications were negligible anyway.