Hacker News
Static energy consumption analysis of LLVM IR programs (arxiv.org)
65 points by drjohnson on May 27, 2014 | hide | past | favorite | 9 comments


How can they do this for IR? Wouldn't the actual hardware make a difference in power usage?


The two main inputs to their model are the IR and the ISA.


See also: Wattch[1], a framework based on a similar idea. Wattch relies on simulation of the target rather than static analysis, but the application of instruction-level cost models looks much the same.

[1] : http://www.eecs.harvard.edu/~dbrooks/isca2000.pdf
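To make the instruction-level cost-model idea concrete, here's a toy sketch (my own illustration, not the paper's model or Wattch's): each IR opcode gets a per-instruction energy cost drawn from an ISA-specific table, and a straight-line sequence is estimated by summing them. The opcode names and cost values below are hypothetical placeholders; a real model would be calibrated by measurement on the target.

```python
# Toy instruction-level energy model: sum per-opcode costs over an IR sequence.
# Costs (in nanojoules) are made up for illustration; in practice they would
# be measured per ISA/microarchitecture and supplied as an input to the model.
COST_NJ = {"add": 1.0, "mul": 3.0, "load": 6.0, "store": 6.0, "br": 0.5}

def energy_estimate(ir_opcodes, cost_table=COST_NJ):
    """Estimated energy (nJ) for a straight-line opcode sequence."""
    return sum(cost_table[op] for op in ir_opcodes)

print(energy_estimate(["load", "mul", "add", "store", "br"]))  # 16.5
```

Swapping in a different cost table for a different ISA is what lets the same IR-level analysis produce platform-specific estimates.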


Can they answer the question "Which architecture is most efficient?"


Nope, the ISA power model is an input to their work.

And on the underlying question, there's been recent academic work on this, e.g. in [1]. Common wisdom these days (among many people at least) is that the architecture (ISA) doesn't matter too much w.r.t. power, as compared to the microarchitecture. The reason is that modern implementations translate whatever quirks exist in the ISA into a fairly uniform set of underlying micro-ops; an out-of-order superscalar implementation of x86 and ARM will look pretty similar. (Go read Bob Colwell's book on P6/Pentium Pro, the first out-of-order x86, for some more insight on this.)

The conflation of ARM/low-power and x86/high-power is a historical thing: x86 started on desktops and moved downward, while ARM started in embedded systems and moved upward. It's becoming less true as each heads toward the same target (mobile). Remaining differences in efficiency are mostly functions of implementation choices and engineering quality.

Big disclaimer: I worked at Intel after doing grad school in computer architecture, so I may be slightly biased. :-)

[1] E. Blem et al. "Power struggles: Revisiting the RISC vs. CISC debate on Contemporary ARM and x86 Architectures." In HPCA-19, 2013.


Nit: ARM was originally developed for desktops as well, but very price constrained ones, and the story as I remember reading was that they couldn't afford the cost of a ceramic package. So they were very careful about power dissipation, and when the first silicon came back, they found it consumed 1/10 of their design goal.

The low cost and low power made it a natural for lots of embedded designs following that.


> Using these techniques we can automatically infer an approximate upper bound of the energy consumed when running a function under different platforms, using different compilers - without the need of actually running it.

Wouldn't a necessary first step be to solve the Halting Problem?

Sorry, I haven't read the actual paper.


Determining the runtime bounds of a function is also undecidable.[1] That doesn't mean that we can't do it in practice for the things we're interested in.

[1] https://cstheory.stackexchange.com/questions/5004/are-runtim...


They seem to approximate with recurrence relations; they cite a bunch of literature about the technique, referring to "cost relations".
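As a toy illustration of how a cost relation sidesteps actually running the code (my own example, not taken from the paper): a counted loop doing constant work per iteration gives the recurrence E(n) = E(n-1) + c with E(0) = b, which has the closed form E(n) = c*n + b, an upper bound you can evaluate statically once you bound n. The per-iteration cost c and entry/exit cost b here are hypothetical.

```python
# Toy "cost relation" for a counted loop: E(n) = E(n-1) + c, E(0) = b,
# where c is the (hypothetical) per-iteration energy and b the fixed overhead.

def loop_energy_recurrence(n, c, b):
    """Evaluate the recurrence directly, as if unrolling the loop."""
    return b if n == 0 else loop_energy_recurrence(n - 1, c, b) + c

def loop_energy_closed_form(n, c, b):
    """Closed-form solution E(n) = c*n + b: the static upper bound."""
    return c * n + b

assert loop_energy_recurrence(10, c=2.5, b=4.0) == loop_energy_closed_form(10, c=2.5, b=4.0)
```

Solving the recurrence into a closed form is what makes the bound computable without executing the function, which is how the undecidability objection is dodged in the common cases.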



