Adaptive optimization for Self: Reconciling High Performance with Exploratory Programming
Urs Hölzle
Abstract:
Crossing abstraction boundaries often incurs a substantial run-time
overhead in the form of frequent procedure calls. Thus, pervasive use
of abstraction, while desirable from a design standpoint, may lead to
very inefficient programs. Aggressively optimizing compilers can
reduce this overhead but conflict with interactive programming
environments because they introduce long compilation pauses and often
preclude source-level debugging. Thus, programmers are caught on the
horns of two dilemmas: they have to choose between abstraction and
efficiency, and between responsive programming environments and
efficiency. This dissertation shows how to reconcile these seemingly
contradictory goals by performing optimizations lazily.
Four new techniques work together to achieve this:
Type feedback achieves high performance by allowing the
compiler to inline message sends based on information extracted from
the runtime system. On average, programs run 1.5 times faster than the
previous Self system; compared to a commercial Smalltalk
implementation, two medium-sized benchmarks run about three times
faster. This level of performance is obtained with a compiler that is
both simpler and faster than previous Self compilers.
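The idea behind type feedback can be sketched in a few lines of C++; this is purely illustrative and not the Self implementation (Self compiles Self programs, and names such as Shape, SendSiteProfile, and area_with_type_feedback are invented here). A send site records which receiver types it actually sees, and the recompiled code inlines the dominant case behind a cheap type test, falling back to a normal dynamic send otherwise.

    // Illustrative sketch only: a send-site profile gathered at run time guides
    // inlining of the dominant receiver type when the method is recompiled.
    #include <cstdio>

    struct Shape  { virtual ~Shape() {} virtual double area() const = 0; };
    struct Circle : Shape {
        double r;
        explicit Circle(double r) : r(r) {}
        double area() const override { return 3.14159265 * r * r; }
    };
    struct Square : Shape {
        double s;
        explicit Square(double s) : s(s) {}
        double area() const override { return s * s; }
    };

    // Per-call-site profile: how often each receiver type was seen.
    struct SendSiteProfile { long circles; long others; };

    // Unoptimized code: every call is a dynamic send, and the profile is updated.
    double area_unoptimized(const Shape& sh, SendSiteProfile& p) {
        if (dynamic_cast<const Circle*>(&sh)) ++p.circles; else ++p.others;
        return sh.area();                        // dynamic dispatch
    }

    // Code the recompiler would produce once the profile shows Circle dominating:
    // the common case is inlined behind a type test; the rest stays a normal send.
    double area_with_type_feedback(const Shape& sh) {
        if (const Circle* c = dynamic_cast<const Circle*>(&sh))
            return 3.14159265 * c->r * c->r;     // inlined body, no dispatch
        return sh.area();                        // uncommon receiver: dynamic send
    }

    int main() {
        SendSiteProfile profile = {0, 0};
        Circle c(2.0);
        Square s(3.0);
        printf("%.2f %.2f %.2f\n", area_unoptimized(c, profile),
               area_with_type_feedback(c), area_with_type_feedback(s));
        return 0;
    }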
Adaptive optimization achieves high responsiveness
without sacrificing performance by using a fast compiler to generate
initial code while automatically recompiling heavily used program
parts with an optimizing compiler. On a previous-generation
workstation like the SPARCstation-2, fewer than 200 pauses exceeded
200 ms during a 50-minute interaction, and 21 pauses exceeded one
second. On a current-generation workstation, only 13 pauses exceeded
400 ms.
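As a rough sketch of the recompilation mechanism, the following C++ is purely illustrative (the names Method, invoke, and RECOMPILE_THRESHOLD are invented, and the real system swaps compiled machine code rather than C++ functions): each method starts out running quickly generated, unoptimized code with an invocation counter, and once the counter crosses a threshold the runtime installs an optimized version.

    // Toy model of counter-triggered recompilation; all names are illustrative.
    #include <cstdio>
    #include <functional>

    struct Method {
        std::function<long(long)> code;   // currently installed code for this method
        long invocations;                 // how often it has been called
        bool optimized;
    };

    const long RECOMPILE_THRESHOLD = 1000;   // invented value, not the real heuristic

    long unoptimized_sum(long n) {           // quickly generated, straightforward code
        long s = 0;
        for (long i = 0; i < n; ++i) s += i;
        return s;
    }

    long optimized_sum(long n) {             // stands in for the optimizing compiler's output
        return n * (n - 1) / 2;
    }

    long invoke(Method& m, long arg) {
        if (!m.optimized && ++m.invocations >= RECOMPILE_THRESHOLD) {
            m.code = optimized_sum;          // hot method: swap in optimized code
            m.optimized = true;
        }
        return m.code(arg);
    }

    int main() {
        Method sum = {unoptimized_sum, 0, false};
        long total = 0;
        for (int i = 0; i < 2000; ++i) total += invoke(sum, 100);
        printf("total=%ld optimized=%d\n", total, sum.optimized ? 1 : 0);
        return 0;
    }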
Dynamic deoptimization shields the programmer from the
complexity of debugging optimized code by transparently recreating
non-optimized code as needed. No matter whether a program is optimized
or not, it can always be stopped, inspected, and single-stepped.
Compared to previous approaches, deoptimization allows more debugging
while placing fewer restrictions on the optimizations allowed.
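A rough sketch of the underlying idea, again in C++ and with invented structures (ScopeDesc, OptimizedFrame, deoptimize): the optimizing compiler records, for each optimized stack frame, which source-level activations it has inlined and where their variables live, so the debugger can expand one optimized frame into the equivalent non-optimized frames on demand.

    // Sketch of the deoptimization idea with invented structures: the compiler
    // records "scope descriptors" describing the source-level activations an
    // optimized frame contains, and the debugger uses them to reconstruct the
    // corresponding non-optimized frames when the program stops there.
    #include <cstdio>
    #include <map>
    #include <string>
    #include <vector>

    // Source-level view of one activation, as the programmer should see it.
    struct SourceFrame {
        std::string method;
        std::map<std::string, long> locals;
    };

    // Metadata emitted by the optimizing compiler for one optimized frame:
    // which scopes were inlined into it, and where each local value lives.
    struct ScopeDesc {
        std::string method;
        std::map<std::string, int> localToSlot;   // local name -> slot index
    };

    struct OptimizedFrame {
        std::vector<long> slots;        // flattened register/stack contents
        std::vector<ScopeDesc> scopes;  // outermost scope first, inlined callees after
    };

    // Deoptimization of one frame: expand it into its source-level frames.
    std::vector<SourceFrame> deoptimize(const OptimizedFrame& f) {
        std::vector<SourceFrame> result;
        for (const ScopeDesc& sd : f.scopes) {
            SourceFrame sf;
            sf.method = sd.method;
            for (const auto& kv : sd.localToSlot)
                sf.locals[kv.first] = f.slots[kv.second];
            result.push_back(sf);
        }
        return result;
    }

    int main() {
        // One optimized frame into which a callee was inlined (made-up names).
        OptimizedFrame f;
        f.slots = {7, 42};
        f.scopes = {{"outerMethod", {{"a", 0}}}, {"inlinedCallee", {{"b", 1}}}};
        for (const SourceFrame& sf : deoptimize(f)) {
            printf("frame %s:", sf.method.c_str());
            for (const auto& kv : sf.locals)
                printf(" %s=%ld", kv.first.c_str(), kv.second);
            printf("\n");
        }
        return 0;
    }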
Polymorphic inline caching generates type-case sequences
on-the-fly to speed up messages sent from the same call site to
several different types of objects. More significantly, these caches
collect concrete type information for the compiler.
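The mechanism can be sketched in C++ as a data structure rather than generated machine code (a real polymorphic inline cache is a small code stub created at run time, and names such as PolymorphicInlineCache and describe are invented here): the send site starts empty, each miss appends a (receiver type, target) case after a stand-in for the real message lookup, and the resulting case list is exactly the concrete type information the compiler later consumes.

    // Sketch of a polymorphic inline cache modeled as data; a real PIC is a
    // small machine-code stub generated at run time. All names are invented.
    #include <cstdio>
    #include <typeindex>
    #include <utility>
    #include <vector>

    struct Obj    { virtual ~Obj() {} };
    struct Point  : Obj { const char* describe() const { return "a point"; } };
    struct String : Obj { const char* describe() const { return "a string"; } };

    using Target = const char* (*)(const Obj&);
    using Entry  = std::pair<std::type_index, Target>;

    // One cache per send site: an ordered list of (receiver type, target) cases.
    struct PolymorphicInlineCache {
        std::vector<Entry> cases;

        const char* send(const Obj& receiver) {
            std::type_index t(typeid(receiver));
            for (const Entry& e : cases)          // the generated type-case sequence
                if (e.first == t) return e.second(receiver);
            return miss(receiver, t);             // slow path: look up, then extend cache
        }

        const char* miss(const Obj& receiver, std::type_index t) {
            // Stand-in for the real message lookup; only two classes exist here.
            Target target;
            if (t == std::type_index(typeid(Point)))
                target = [](const Obj& o) { return static_cast<const Point&>(o).describe(); };
            else
                target = [](const Obj& o) { return static_cast<const String&>(o).describe(); };
            cases.push_back(Entry(t, target));    // cache grows; this is the type feedback data
            return target(receiver);
        }
    };

    int main() {
        PolymorphicInlineCache site;
        Point p; String s;
        printf("%s\n", site.send(p));             // miss; Point case is added
        printf("%s\n", site.send(s));             // second receiver type is added
        printf("%s\n", site.send(p));             // hit via the type-case sequence
        printf("receiver types recorded: %zu\n", site.cases.size());
        return 0;
    }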
With better performance yet good interactive behavior, these
techniques reconcile exploratory programming, ubiquitous abstraction,
and high performance.
Ph.D. thesis, Computer Science Department, Stanford University.
Published as Stanford CSD Technical Report STAN-CS-TR-94-1520 and
Sun Microsystems Laboratories TR 95-35.
The thesis can be downloaded as compressed PostScript (860K)
or as PDF (700K).
For a free printed copy of the SunLabs TR,
send e-mail to Amy
Tashbook at SunLabs or snail-mail to Editor, Technical Reports,
Sun Microsystems Laboratories, 2550 Garcia Avenue, M/S
UMTV29-01, Mountain View, CA 94043-1100.