|
|
Since its introduction over 15 years ago, numerous applications of dynamic slicing aimed at improving software quality, reliability, security, and performance have been identified. For example, dynamic slicing can be used for debugging programs, detecting spyware, driving software testing, measuring module cohesion to drive code restructuring, and guiding performance-enhancing transformations by measuring the criticality of instructions and identifying instruction isomorphism. Although a number of innovative uses of dynamic slicing have been identified, there is no significant evidence of its widespread use in software development tools. The primary reason for this lack of adoption is the prohibitively high space cost of dynamic slicing: we recently showed that the dynamic dependence graphs constructed to enable the computation of dynamic slices take up gigabytes of memory for realistic program runs, causing existing algorithms to run out of memory.
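To make the idea concrete, here is a minimal sketch (not the talk's implementation) of what a dynamic slice is: during a run, each executed statement instance records the instances whose values it read, forming a dynamic dependence graph; a slice is then the backward-reachable set from a chosen instance. The trace and naming scheme below are invented for illustration.

```python
# Toy dynamic dependence graph for the run:
#   x = 1; y = 2; z = x + y; w = z * 2
# "stmt@k" names the dynamic instance of a statement at trace position k.
dynamic_deps = {
    "x@1": [],
    "y@2": [],
    "z@3": ["x@1", "y@2"],   # z read the values of x and y
    "w@4": ["z@3"],          # w read the value of z
}

def dynamic_slice(criterion, deps):
    """Backward reachability over recorded dynamic dependences."""
    seen, work = set(), [criterion]
    while work:
        node = work.pop()
        if node not in seen:
            seen.add(node)
            work.extend(deps[node])
    return seen

print(sorted(dynamic_slice("w@4", dynamic_deps)))
# the slice of w contains every instance that influenced its value
```

The space problem the talk addresses follows directly from this picture: a realistic run executes billions of statement instances, so storing one node per instance and one edge per exercised dependence quickly exhausts memory.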
In this talk I will present evidence showing, for the first time, that precise dynamic slicing may be practical. I will present two new dynamic slicing algorithms: the demand-driven algorithm (LP) and the compacted-graph algorithm (OPT). The LP algorithm saves program traces on disk and, in response to slicing requests, constructs the relevant portion of the dynamic dependence graph in memory from the traces. The OPT algorithm saves memory by using a compacted version of the dependence graph. This approach is based upon the observation that in many cases a single representative edge can be shared across multiple dynamic instances of an exercised dependence. The memory requirements of both algorithms are reasonably low. However, while the average time to compute a slice using LP ranges from 4.69 to 25.21 minutes across benchmarks, the OPT algorithm computes the same slices in 1.74 to 36.35 seconds.
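The edge-sharing observation behind the compacted graph can be illustrated with a toy loop; the statement label and counts below are made up for the sketch and are not from the talk. When the same static dependence is exercised on every iteration, a full dynamic dependence graph stores one edge per dynamic instance, whereas a compacted graph can keep a single representative edge between the static nodes.

```python
# Toy run: "s = s + a[i]" executes 1000 times; each dynamic instance of the
# statement depends on the previous instance of the same static statement.
iterations = 1000

# Full graph: one edge per dynamic instance of the exercised dependence.
full_edges = [(("s = s + a[i]", k), ("s = s + a[i]", k - 1))
              for k in range(1, iterations)]

# Compacted graph: one shared representative edge between the static nodes;
# that it holds for every dynamic instance is left implicit.
compacted_edges = [("s = s + a[i]", "s = s + a[i]")]

print(len(full_edges), "edges vs", len(compacted_edges), "shared edge")
```

This is the source of the memory savings: the number of shared edges is bounded by the static dependences actually exercised, not by the number of times the loop runs.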