Higher-Order and Symbolic Computation, 13(3):161–178
Static and Dynamic Program Compilation by Interpreter Specialization
Scott Thibault, COMPOSE group, IRISA/INRIA, Campus de Beaulieu, 35042 Rennes Cedex, France
Charles Consel, COMPOSE group, IRISA/INRIA, Campus de Beaulieu, 35042 Rennes Cedex, France
Julia L. Lawall, Computer Science Department, Boston University, 111 Cummington St., Boston, MA 02215, USA
Renaud Marlet, COMPOSE group, IRISA/INRIA, Campus de Beaulieu, 35042 Rennes Cedex, France
Gilles Muller, COMPOSE group, IRISA/INRIA, Campus de Beaulieu, 35042 Rennes Cedex, France
Abstract: Interpretation and run-time compilation techniques
are increasingly important because they can support heterogeneous
architectures, evolving programming languages, and dynamically-loaded
code. Interpretation is simple to implement, but yields poor
performance. Run-time compilation yields better performance, but is
costly to implement. One way to preserve simplicity but obtain good
performance is to apply program specialization to an interpreter in
order to generate an efficient implementation of the program
automatically. Such specialization can be carried out at both compile
time and run time.
Recent advances in program-specialization technology have
significantly improved the performance of specialized
interpreters. This paper presents and assesses experiments applying
program specialization to both bytecode and structured-language
interpreters. The results show that for some general-purpose bytecode
languages, specialization of an interpreter can yield speedups of up
to a factor of four, while specializing certain structured-language
interpreters can yield performance comparable to that of an
implementation in a general-purpose language, compiled using an
optimizing compiler.
Keywords: partial evaluation, compilation, compiler design,
Just-In-Time compilation, run-time code generation, domain-specific
languages, bytecode languages
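
Illustration (not from the paper): the following minimal C sketch shows the idea the abstract describes. A generic bytecode interpreter is specialized with respect to a known program, so the dispatch loop and stack traffic disappear and only the computation on the dynamic input remains. The opcodes, interp(), and the residual interp_spec() are hypothetical examples, not the interpreters or specializer output studied in the paper.

    /* Minimal sketch of interpreter specialization; all names here are
       illustrative assumptions, not code from the paper. */
    #include <stdio.h>

    enum { PUSH, ADD, MUL, HALT };

    /* Generic stack-machine interpreter: the bytecode program is one
       input, the integer x is the other.  Every instruction pays the
       cost of dispatch and stack manipulation. */
    int interp(const int *code, int x) {
        int stack[16], sp = 0, pc = 0;
        stack[sp++] = x;               /* dynamic input starts on the stack */
        for (;;) {
            switch (code[pc++]) {
            case PUSH: stack[sp++] = code[pc++];         break;
            case ADD:  sp--; stack[sp - 1] += stack[sp]; break;
            case MUL:  sp--; stack[sp - 1] *= stack[sp]; break;
            case HALT: return stack[sp - 1];
            }
        }
    }

    /* Residual code one would expect a specializer to produce when
       interp() is specialized to the fixed program below (compute
       x*3 + 1): the dispatch loop and the stack are gone; only the
       arithmetic on the dynamic input x remains. */
    int interp_spec(int x) {
        return x * 3 + 1;
    }

    int main(void) {
        static const int prog[] = { PUSH, 3, MUL, PUSH, 1, ADD, HALT };
        printf("%d %d\n", interp(prog, 7), interp_spec(7));  /* both print 22 */
        return 0;
    }

In this sketch, performing the specialization before execution corresponds to compile-time (static) specialization, while doing it when the bytecode is first loaded corresponds to the run-time (dynamic) specialization discussed in the paper.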