What trace compilation buys you is: 1) elimination of method call boundaries in analysis; 2) elimination of data flow merges.
Dynamic languages benefit particularly from these characteristics because their semantics are replete with method calls and data flow merges (e.g. after any generic operation). But static languages can have these qualities too.
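To make those two points concrete, here is a toy sketch (in Python, with made-up names, not any real JIT's API) of how a branch inside a generic operation becomes a guard on a trace. The generic path has a type dispatch whose branches merge afterwards; the recorded trace keeps only the path actually taken, guarded by a side-exit, so there is no merge and the types are known downstream:

```python
# Illustrative sketch only: names and structure are invented for this example.

def generic_add(a, b):
    # Generic operation in the interpreter: type dispatch, and the results
    # of both branches merge afterwards. Any analysis past this point must
    # assume the union of both outcomes -- a data flow merge.
    if isinstance(a, int) and isinstance(b, int):
        return a + b
    return str(a) + str(b)

class GuardFailed(Exception):
    """Signals a side-exit from the trace back to the interpreter."""

def traced_add_int(a, b):
    # A trace records only the path that was hot when recording started.
    # The branch becomes a guard; beyond it there is no merge, the operands
    # are known to be ints, and the add can compile to a plain machine add.
    if not (isinstance(a, int) and isinstance(b, int)):
        raise GuardFailed  # fall back to generic_add
    return a + b
```

The same effect applies to calls: because the trace records through the callee's body, the call boundary disappears from the analysis entirely.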
To give credit where credit is due: the original work on trace compilation is much, much older. The paper you cited is an application of it.
The fundamental papers to hunt for are Joseph A. Fisher's publications on trace scheduling (sadly, his PhD thesis from the '70s is nowhere to be found online) and the Multiflow reports from the '90s. The Dynamo paper built on that foundation roughly a decade later, in 1999 (get the full HP report, not the short summary). A related research area is trace caches for use in CPUs, with various papers from the '90s.
AFAIK there's no up-to-date comprehensive summary of the state of research on trace compilers. Most papers don't even scratch the surface of the challenges you'll face when building a production-quality trace compiler.
> AFAIK there's no up-to-date comprehensive summary of the state of research on trace compilers. Most papers don't even scratch the surface of the challenges you'll face when building a production-quality trace compiler.
In general, there are few good, comprehensive resources for advanced compilation techniques. I would gladly fork over $$$ if you wrote a textbook on tracing compilers aimed at people with expert-level knowledge of more traditional compilation methods (i.e., ahead-of-time, not JIT).