
Guy E. Blelloch: Research

This page describes research that my students and I have been involved in.

Adaptive Computation

It is often the case that slightly changing the input to an algorithm changes its execution path only slightly. The idea of adaptive computation is to let the algorithm "adapt" to such small changes by updating only the parts of the computation that are affected. The idea is implemented by maintaining a dependence graph of the computation and using this graph to propagate changes. The propagation both updates the output and updates the graph itself (so that it is ready for another change).
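To make the idea concrete, here is a minimal sketch (in Python, with all names hypothetical, and not our actual implementation) of a dependence graph whose nodes recompute and propagate only when an input actually changes:

```python
class Cell:
    """A node in the dependence graph: either an input or a computed value."""
    def __init__(self, compute=None, deps=()):
        self.compute = compute          # None for input cells
        self.deps = list(deps)          # cells this cell reads
        self.readers = []               # cells that read this cell
        for d in self.deps:
            d.readers.append(self)
        self.value = None

    def set(self, value):
        """Change an input; propagate only along the dependence edges."""
        if value == self.value:
            return                      # unchanged: nothing downstream to redo
        self.value = value
        for r in self.readers:
            r.recompute()

    def recompute(self):
        new = self.compute(*(d.value for d in self.deps))
        if new != self.value:           # stop propagation where values agree
            self.value = new
            for r in self.readers:
                r.recompute()

# Example: total = a + b; doubled = 2 * total
a, b = Cell(), Cell()
a.value, b.value = 1, 2
total = Cell(lambda x, y: x + y, (a, b)); total.recompute()
doubled = Cell(lambda t: 2 * t, (total,)); doubled.recompute()
a.set(10)   # propagates through total to doubled; b's subgraph is untouched
```

The key point the sketch illustrates is that after the change only the cells reachable from `a` are re-executed, and propagation cuts off as soon as a recomputed value is unchanged.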

We are interested both in techniques for maintaining the graph, and in proving bounds for how much needs to be updated for different algorithms.

Our initial work in this area is described in the following paper.

Graph and Index Compression

Multiprocessor Garbage Collection

Thread Scheduling

Many of today's high-level parallel languages support dynamic, fine-grained parallelism. The language implementation typically requires an efficient scheduling algorithm to assign computations to processors at runtime. However, in an attempt to expose enough parallelism to keep all processors busy, schedulers often create many more parallel threads than necessary, leading to excessive memory usage. Further, the order in which the threads are scheduled can greatly affect the total size of the live data at any instant during the parallel execution, and unless the threads are scheduled carefully, the parallel execution of a program may require much more memory than its serial execution.

Our research involves finding theoretically space- and time-efficient scheduling algorithms for multithreaded programs, including programs with nested parallelism and synchronization variables. In addition to proving the space and time bounds of the resulting parallel schedules, we demonstrate that they are efficient in practice. We have implemented a runtime system that uses our algorithm to schedule threads in computations with nested parallelism. The following papers describe our results.
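The effect of scheduling order on memory can be seen even in a toy simulation. The following Python sketch (an illustration only, not one of our scheduling algorithms) schedules a complete binary fork tree from a shared task pool and measures the peak pool size, a crude proxy for live memory:

```python
from collections import deque

def peak_live_tasks(depth, order):
    """Run a complete binary fork tree of the given depth from a task pool
    and return the peak number of tasks simultaneously in the pool.
    order='lifo' takes tasks depth-first (like a serial execution);
    order='fifo' takes them breadth-first."""
    pool = deque([0])                  # tasks identified by tree depth
    peak = 1
    while pool:
        d = pool.pop() if order == 'lifo' else pool.popleft()
        if d < depth:                  # an internal task forks two children
            pool.append(d + 1)
            pool.append(d + 1)
        peak = max(peak, len(pool))
    return peak

# Depth-first keeps the pool proportional to the tree depth;
# breadth-first lets it grow proportional to the tree's width, 2^depth.
small = peak_live_tasks(10, 'lifo')
large = peak_live_tasks(10, 'fifo')
```

For depth 10 the depth-first order peaks at 11 live tasks while the breadth-first order peaks at 1024, which is the kind of gap between parallel and serial memory use that careful scheduling is meant to bound.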

Parallel Algorithms

We have studied and experimented with parallel algorithms for a variety of problems, including sorting, Delaunay triangulation, graph connectivity, N-body simulation, and set operations. The goal of our work has been to take key problems and develop the fastest algorithms we can for them, looking at both theoretical and practical consequences. We have implemented all the algorithms we describe, and in many cases our algorithms are the fastest available (at least at the time they were implemented). I've also written the chapter on parallel algorithms in the CRC Press Computer Science and Engineering Handbook (with Bruce Maggs). Our work includes the following.

The NESL Programming Language

In the early 90s we designed a parallel programming language called NESL (NESted-parallel Language) and implemented it on a wide variety of parallel machines. The goal was to experiment with ideas of nested parallelism and with language-based cost models. We have coded up a large number of algorithms in NESL, and in most cases the code is much more concise than in other parallel programming languages. Many of these algorithms are available from the NESL algorithm library page, and demos of some of them from the algorithm animation page. An online tutorial and interpreter are also available. The chapter on parallel algorithms in the CRC Computer Science and Engineering Handbook uses pseudocode based on NESL.

Perhaps the most important feature of NESL is the model it supplies for analyzing costs. In particular, it defines costs in terms of work (the total number of operations) and depth (the longest sequence of dependences, or critical path), and defines rules for composing these costs across expressions. When calls are made sequentially, both the work and the depth are summed; when calls are made in parallel, the work is summed but the depth is the maximum over the parallel calls. This idea of using work and depth as cost measures has been adopted by other languages, such as the Cilk programming language.
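The composition rules can be stated as two small combinators over (work, depth) pairs. The following Python sketch (an illustration of the cost model, not NESL itself) applies them to a balanced parallel reduction:

```python
def seq(*costs):
    """Sequential composition: both work and depth add."""
    return (sum(w for w, d in costs), sum(d for w, d in costs))

def par(*costs):
    """Parallel composition: work adds, depth is the maximum."""
    return (sum(w for w, d in costs), max(d for w, d in costs))

def reduce_cost(n):
    """Work/depth of a balanced parallel reduction over n elements
    (n a power of two): the two halves run in parallel, followed by
    one unit-cost combine step."""
    if n == 1:
        return (1, 1)
    half = reduce_cost(n // 2)
    return seq(par(half, half), (1, 1))
```

For n = 8 this gives work 15 = 2n - 1 and depth 4 = log2(n) + 1, matching the usual analysis of a reduction tree: linear work but only logarithmic depth.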

In addition to the NESL home page, the following papers describe our work on NESL.

Provably-Efficient Language Implementations

The idea of a provably-efficient language implementation is to
  1. Formally define an abstract cost model with which users of the language can analyze their code.
  2. Specify a mapping from these costs to running times on a concrete machine model.
  3. Describe an implementation and prove that it achieves this mapping.
This work was originally motivated by the need to model time in the parallel programming language NESL, but we have applied it to other parallel programming models. We formalize the cost model using a profiling semantics, which is an operational semantics extended with cost measures. The following papers describe our results.
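The flavor of a profiling semantics can be shown with a toy evaluator (a Python sketch with hypothetical syntax, not our formal semantics): each evaluation rule returns a value together with the work and depth it incurred.

```python
# Expressions: a number, ('+', e1, e2) whose operands evaluate in parallel,
# or ('seq', e1, e2) which evaluates e1 then e2 and returns e2's value.
def eval_cost(e):
    """Evaluate e, returning (value, work, depth):
    an operational semantics extended with cost measures."""
    if isinstance(e, (int, float)):
        return (e, 1, 1)                          # a constant costs one step
    op, e1, e2 = e
    v1, w1, d1 = eval_cost(e1)
    v2, w2, d2 = eval_cost(e2)
    if op == '+':                                 # parallel: sum work, max depth
        return (v1 + v2, w1 + w2 + 1, max(d1, d2) + 1)
    if op == 'seq':                               # sequential: sum both
        return (v2, w1 + w2, d1 + d2)
    raise ValueError(op)

# ('+', ('+', 1, 2), ('+', 3, 4)) has value 10, work 7, and depth 3:
# the two inner additions are charged in parallel.
result = eval_cost(('+', ('+', 1, 2), ('+', 3, 4)))
```

A proof of provable efficiency would then relate the (work, depth) pair such a semantics assigns to a program to its running time on a concrete machine model.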

Other Research


Guy.Blelloch@cs.cmu.edu. Last updated October 1998.