A full list is under construction.
Multi-dimensional Search Tree with Minimum Attribute
The study of data structures for rapid searching is a fascinating subject of both practical and theoretical interest. This paper proposes a new data structure, called the k-d-m tree, which is a multi-dimensional search tree with a minimum attribute. A k-d-m tree represents a set of N points in k-dimensional space. In addition to the insertion and deletion operations, many kinds of searching operations can be performed more efficiently on a k-d-m tree, including exact-match, partial-match, range searching, and particularly minimum searching, which has many applications in areas such as data mining.
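A minimal sketch of the idea (the Haskell representation below is illustrative, not the paper's actual definition): each node of a k-d tree additionally caches the minimum of a designated attribute over its subtree, so a minimum search can prune whole subtrees.

    -- Illustrative node shape: alongside the usual k-d tree fields, each
    -- node caches the minimum of a designated attribute over its subtree.
    data KDM = Leaf
             | Node { point       :: [Double]  -- a point in k-dimensional space
                    , splitDim    :: Int       -- discriminating dimension
                    , minAttr     :: Double    -- subtree minimum of the attribute
                    , left, right :: KDM }

    -- The minimum over the whole set is then available in constant time,
    -- and a minimum search can skip any subtree whose cached minimum is
    -- already too large.
    treeMin :: KDM -> Maybe Double
    treeMin Leaf = Nothing
    treeMin t    = Just (minAttr t)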
Diffusion after Fusion: Deriving Efficient Parallel Algorithms
Parallel skeletons are ready-made components for which efficient implementations are known to exist. Using them, programmers can develop parallel programs without concerning themselves with lower-level implementation details. However, programmers often find it difficult to choose a proper combination of parallel skeletons so as to construct efficient parallel programs. In this paper, we propose a new method, namely diffusion after fusion, which can not only guide a programmer to develop efficient parallel programs in a systematic way, but is also suitable for the optimization of skeleton parallel programs. In this method, we first remove unnecessary intermediate data structures by fusion, without being conscious of parallelism. Then we parallelize the fused program by diffusion. Using the line-of-sight problem, this paper shows that the proposed method is quite effective.
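For concreteness, the line-of-sight problem has a well-known solution in terms of a prefix-maximum scan; a sequential Haskell sketch (with scanl standing in for the parallel scan skeleton) is:

    -- A point is visible from the origin iff its angle is at least the
    -- maximum of all angles closer to the origin; scanl max plays the role
    -- of the parallel prefix (scan) skeleton.
    visible :: [Double] -> [Bool]
    visible angles = zipWith (>=) angles (scanl max (-1 / 0) angles)

A skeleton program of this shape, a scan followed by an element-wise zip, is roughly the kind of target the diffusion step is intended to produce from the fused recursive definition.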
Complete axiomatization of algebraic construction of tree decomposable graphs
It has frequently been found that many NP-hard graph problems can be solved in polynomial (or even linear) time when the range of problems is restricted to tree decomposable graphs (or, equivalently, graphs of bounded tree width). The key idea is tree decomposition, i.e., the algebraic construction of tree decomposable graphs.
In this talk, we present the complete axiomatization of an algebraic construction of tree decomposable graphs. We first show an algebraic construction of graphs and its complete (infinite) axiomatization. Next we restrict them to tree decomposable graphs. The proof is mostly based on rewriting techniques, including a new sufficient condition for showing that an extension of a reduction relation is conservative.
Paper Introduction: Fegaras & Sheard, Revisiting Catamorphisms over Datatypes with Embedded Functions, POPL'96.
Abstract of the paper: We revisit the work of Paterson and of Meijer and Hutton, which describes how to construct catamorphisms for recursive datatype definitions that embed contravariant occurrences of the type being defined. Their construction requires, for each catamorphism, the definition of an anamorphism that has an inverse-like relationship to that catamorphism. We present an alternative construction, which replaces the stringent requirement that an inverse anamorphism be defined for each catamorphism with a more lenient restriction. The resulting construction has a more efficient implementation than that of Paterson, Meijer, and Hutton, and the relevant restriction can be enforced by a Hindley-Milner type inference algorithm. We provide numerous examples illustrating our method.
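For orientation, a sketch of the setting in Haskell (the encoding below is illustrative rather than the paper's exact construction): a datatype with an embedded function has a contravariant occurrence of itself, and a fold over it must feed results back into that position, here via the Var constructor.

    -- A datatype with an embedded (contravariant) function, in the style
    -- of higher-order abstract syntax:
    data Term a = Var a
                | App (Term a) (Term a)
                | Lam (Term a -> Term a)

    -- A fold over such terms: the function under Lam is handled by feeding
    -- intermediate results back in through Var.
    foldTerm :: (b -> b -> b) -> ((b -> b) -> b) -> Term b -> b
    foldTerm app lam = go
      where
        go (Var x)   = x
        go (App f e) = app (go f) (go e)
        go (Lam f)   = lam (\b -> go (f (Var b)))

    -- Example: a rough size measure, counting each bound occurrence as 1.
    size :: Term Int -> Int
    size = foldTerm (\m n -> 1 + m + n) (\f -> 1 + f 1)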
Combining Semantics with Non-Standard Interpreter Hierarchies
Abstract:
This talk reports on results concerning the combination of
non-standard semantics via interpreters. We define what a "semantics
combination" means and identify under which conditions a combination
can be realized by computer programs (robustness, definedness
preserving). We develop the underlying mathematical theory and
examine the meaning of several non-standard interpreter towers in
computational terms. Our results suggest a novel technique for the
implementation of programming language dialects by composing a
hierarchy of simpler, more elementary non-standard interpreters.
Joint work with Sergei Abramov, Russian Academy of Sciences.
Make it Practical: A Generic Linear-Time Algorithm for Solving Maximum Weightsum Problems
Abstract:
In this talk, we propose a new method for deriving a practical
linear-time algorithm from the specification of a maximum-weightsum
problem: From the elements of a data structure $x$, find a subset
which satisfies a certain property $p$ and whose weightsum is maximum.
Previously proposed methods for automatically generating linear-time
algorithms are theoretically appealing, but the algorithms generated
are hardly useful in practice due to a huge constant factor for space
and time. The key points of our approach are to express the property
$p$ by a recursive boolean function over the structure $x$ rather than
a usual logical predicate and to apply program transformation
techniques to reduce the constant factor. We present an optimization
theorem, give a calculational strategy for applying the theorem, and
demonstrate the effectiveness of our approach through several
nontrivial examples which would be difficult to deal with when using
the methods previously available.
Joint work with Zhenjiang Hu, Masato Takeichi, Mizuhito Ogawa
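A small instance of the problem class may help fix ideas (the instance and code below are illustrative, not taken from the talk): on lists, let $p$ require that no two adjacent elements are selected; the property is a recursive boolean function, and the corresponding algorithm runs in one linear pass.

    -- The property p, "no two selected elements are adjacent", written as
    -- a recursive boolean function over the selection marks:
    indep :: [Bool] -> Bool
    indep (True : True : _) = False
    indep (_ : bs)          = indep bs
    indep []                = True

    -- A one-pass algorithm for the corresponding maximum-weightsum problem:
    -- keep the best weightsum that does / does not use the last element.
    maxIndepSum :: [Int] -> Int
    maxIndepSum = uncurry max . foldl step (0, 0)
      where
        step (incl, excl) x = (excl + x, max incl excl)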
A Solution to a Class of Longest Segment Problems
Abstract:
Much research has been devoted to the derivation of efficient
algorithms for solving the longest p-segment problems, but most of
it requires some closure property for the predicate p. There are
other interesting problems which do not enjoy such a closure
property. One example, as discussed by Zantema, is the problem of
computing a longest segment whose leftmost element is at most as
large as its rightmost element.
In this talk, we show how to generalize Zantema's idea to deal with more general relations among the elements in a segment, and demonstrate how to apply it to calculate new efficient online algorithms for solving some practical data mining problems.
Joint work with Shunsuke Hayashi, Masato Takeichi, Hideya Iwasaki
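For reference, Zantema's problem can be specified naively as follows (a quadratic Haskell specification, given here only to fix the problem statement; the talk concerns deriving efficient online algorithms from such specifications).

    import Data.List (inits, tails, maximumBy)
    import Data.Ord  (comparing)

    -- Quadratic specification of Zantema's problem: the longest contiguous
    -- segment whose leftmost element is at most as large as its rightmost.
    longestSeg :: [Int] -> [Int]
    longestSeg = maximumBy (comparing length) . filter ok . segments
      where
        segments = concatMap inits . tails
        ok s     = null s || head s <= last s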
Proof-Directed Compilation and De-Compilation
Abstract:
Parallel to the correspondence between the typed lambda calculus and
the natural deduction proof system, it is possible to establish a
correspondence between a lower-level language (such as a bytecode
language) and a proof system of intuitionistic propositional
logic. This enables us to present compilation of the lambda calculus
as a transformation from natural deduction to a proof system of a
low-level language. Moreover, the inverse transformation is equally
possible, yielding proof-directed de-compilation of a low-level
language.
In this talk, I will outline several proof systems corresponding to low-level languages and proof transformations to and from them. Since this is a relatively new topic, I will organize this talk as one on ongoing recent work rather than a tutorial on an established area, and will discuss various research issues regarding this proof-theoretical view of low-level languages.
Linear-time self-interpretation of the pure lambda calculus
Abstract:
We show that linear-time self-interpretation of the pure untyped
lambda calculus is possible, in the sense that interpretation has a
constant overhead compared to direct execution under various execution
models. The present paper shows this result for reduction to weak head
normal form under call-by-name, call-by-value and call-by-need.
We use a self-interpreter based on previous work on
self-interpretation and partial evaluation of the pure untyped lambda
calculus.
We use operational semantics to define each reduction strategy. For
each of these we show a simulation lemma that states that each
inference step in the evaluation of a term by the operational
semantics is simulated by a sequence of steps in evaluation of the
self-interpreter applied to the term (using the same operational
semantics).
By assigning costs to the inference rules in the operational
semantics, we can compare the cost of normal evaluation and
self-interpretation. Three different cost-measures are used: number of
beta-reductions, cost of a substitution-based implementation (similar
to graph reduction) and cost of an environment-based implementation.
For call-by-need we use a non-deterministic semantics, which
simplifies the proof considerably.
Sized Types and Context Specialisation
Abstract:
Many program optimisations and analyses, such as array-bound checking
and termination analysis, depend on knowing the size of a function's
input and output. However, size information can
be problematic to formulate and compute.
Accurate size computation requires detecting the
{\em size relation} between different inputs of a function.
Such a relation over sizes cannot be easily computed
using the usual ``independent attribute analysis'', such as
traditional abstract interpretation based on collecting semantics, or
the standard unification-based type inference. Consequently, we
expressed size relations in terms of Presburger arithmetic, and
employed a constraint solver called Omega Calculator to assist in
size inference. To infer the size of a recursive function, we
performed transitive closure, whenever possible, over size relations.
We also devised some generalisation heuristics to assist in obtaining
a suitable fixed point for the size, when transitive closure computation
of Omega fails.
In this talk, we describe a type system for inferring sized types, and
demonstrate how it may be used by an advanced
form of program specialisation, called context specialisation.
This specialisation propagates constraints that can be used
to eliminate unnecessary tests/checks. We explore the capabilities
of context specialisation, as well as key problems encountered in
automating the process.
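As a rough illustration (the sized-type notation below is assumed, not the talk's): the size relation inferred for list append is a Presburger formula over the input and output lengths.

    -- Ordinary append; its sized-type reading (assumed notation) would be
    --   append :: List n a -> List m a -> List r a
    -- with the Presburger size relation  r = n + m, n >= 0, m >= 0.
    append :: [a] -> [a] -> [a]
    append []     ys = ys                 -- n = 0:      r = m
    append (x:xs) ys = x : append xs ys   -- n = n' + 1: r = r' + 1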
Calculating Linear-time Algorithms from Predicate Descriptions of Problems on Recursive Data Structures
Abstract: In this talk, we shall present a new method of deriving linear-time algorithms from predicative specifications, and demonstrate the effectiveness of our method by deriving linear-time algorithms to solve the maximum sum segment problem on both lists and trees.
Digital Cities
Abstract: Digital cities have been investigated by researchers all over the world. Among these, AOL Digital Cities, European Digital Cities, Digital City Kyoto, and Digital City Shanghai seem to attract the most attention. In this presentation, I'd like to give a brief introduction to Digital City Shanghai and Digital City Kyoto.
The Universal Resolving Algorithm: Inverse Computation in a Functional Language
Abstract: We present an algorithm for inverse computation in a first-order functional language based on the notion of perfect process trees. The universal resolving algorithm (URA) introduced in this talk is sound and complete. The algorithm which we designed and implemented for TSG, a typed dialect of S-Graph, shows some remarkable results for inverse computation of programs such as pattern matching and the inverse interpretation of While-programs.
Joint work with Sergei Abramov (Russian Academy of Sciences)
C -> Haskell, or Yet Another Interface Generator
Abstract: This talk introduces an interfacing tool that simplifies Haskell access to C libraries. The tool processes existing C header files that determine data layout and function signatures on the C side, in conjunction with Haskell modules that specify Haskell-side type signatures and marshaling details. Hooks embedded in the Haskell code signal access to C structures and functions---they are expanded by the interfacing tool depending on information from the corresponding C header file. Another noteworthy property is the lightweight nature of the approach.
Fair Termination (Preliminary Results)
Abstract: The goals of this research are to define abstract notions of fair termination in the framework of term rewriting and to devise a technique for verifying fair termination of Dijkstra's guarded command programs, in which we transform the target programs into term rewriting systems. In this talk we present a method to reduce the fair termination problem of guarded command programs to another problem, termination of relative rewriting, which is known as a generalization of the termination problem of term rewriting.
The Design and Implementation of an Extensible Structure Editor
Abstract:
Program transformation systems and theorem proving tools
need a suitable user interface: a structure editor is ideal for the
purpose. We describe a design for such an editor, where the edit
actions are Haskell functions on a fairly simple data structure - the
user can easily extend the editor by writing new functions using the
combinators provided. Two guiding design principles are that the
editor should be modeless, and that entry of new tree fragments should
be no more difficult than in an ordinary text editor. We show how our
design can be combined with a graphical front end, written using the
FranTk library. The result is a compact Haskell program that is easily
adapted to serve as a user interface for various tools.
(joint work with Meurig Sage and Bernard Sufrin)
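A minimal sketch of the underlying idea (the data type and actions below are illustrative, not the editor's actual interface): documents are trees, and an edit action is just a Haskell function on trees, so users can add new actions as ordinary code.

    -- A fairly simple document representation.
    data Tree = Node String [Tree]

    -- An edit action is an ordinary function on the document tree.
    type EditAction = Tree -> Tree

    -- Two tiny example actions, written without editor-specific machinery.
    rename :: String -> EditAction
    rename new (Node _ cs) = Node new cs

    insertChild :: Tree -> EditAction
    insertChild c (Node l cs) = Node l (cs ++ [c])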
Proof methods for structured corecursive programs
Abstract:
Corecursive programs produce values of greatest fixpoint
types, in contrast to recursive programs, which consume
values of least fixpoint types. There are a number of
widely used methods for proving properties of corecursive
programs, including fixpoint induction, the take lemma, and
coinduction. However, these methods are all rather
low-level, in that they do not exploit the common structure
that is often present in corecursive definitions. We argue
for a more structured approach to proving properties of
corecursive programs. In particular, we show that by
writing corecursive programs using a simple operator that
encapsulates a common pattern of corecursive definition, we
can then use high-level algebraic properties of this
operator to conduct proofs in a purely calculational style
that avoids the use of inductive or coinductive methods.
(joint work with Graham Hutton, University of Nottingham)
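One such encapsulating operator is unfold; a small Haskell sketch (illustrative, so the operator and laws in the talk may differ) shows the pattern and the kind of algebraic law used in purely calculational proofs.

    -- The corecursion pattern as an unfold (illustrative formulation):
    unfold :: (b -> Maybe (a, b)) -> b -> [a]
    unfold step seed = case step seed of
                         Nothing         -> []
                         Just (x, seed') -> x : unfold step seed'

    -- An example corecursive program written with the operator:
    iterate' :: (a -> a) -> a -> [a]
    iterate' f = unfold (\x -> Just (x, f x))

    -- A typical algebraic law of the operator, usable calculationally
    -- without explicit induction or coinduction:
    --   map f . unfold step = unfold (fmap (\(a, b) -> (f a, b)) . step)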
Paper Introduction
Matteo Frigo. A Fast Fourier Transform Compiler. In Proceedings of the ACM PLDI'99 (SIGPLAN Notices Vol.34, No.5), pp.169--180, ACM Press, 1999.
Summary: The FFTW library for computing the discrete Fourier transform (DFT) has gained wide acceptance in both academia and industry, because it provides excellent performance on a variety of machines. In FFTW, most of the performance-critical code was generated automatically by a special-purpose compiler that outputs C code. The compiler is written in Objective Caml; it can produce DFT programs for any input length and specialize them for the real case. This paper describes the internals of this special-purpose compiler.
The PS version of this paper and further information are available at http://www.fftw.org/.
Effects of recursion removal under environments with cache and garbage collection
The basis of software development is to guarantee productivity and
safety. Programming with recursion and/or garbage collection (GC)
satisfies these goals at the cost of execution time.
Methods of recursion removal, which transform recursive programs
into iterative ones, have therefore been studied,
yet their effects have not been examined in much detail.
This report shows that recursion removal brings substantial
improvement to recursive programs, and to programs running with
garbage collection, in current computing environments with caches.
In our experiments, recursion-removed programs run up to
9.1 times faster than the originals with gcc and
5.7 times faster with Common Lisp.
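A small example of the transformation (the report's experiments use C via gcc and Common Lisp; this Haskell sketch only shows the shape of the change): the directly recursive sum builds up pending additions, while the accumulator version is effectively a loop.

    -- Directly recursive sum: each call leaves a pending addition.
    sumRec :: [Int] -> Int
    sumRec []     = 0
    sumRec (x:xs) = x + sumRec xs

    -- After recursion removal: an accumulating, tail-recursive loop.
    sumIter :: [Int] -> Int
    sumIter = go 0
      where
        go acc []     = acc
        go acc (x:xs) = go (acc + x) xs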
D. Sands. Total Correctness by Local Improvement in Program Transformation. In Proceedings of the 22nd Annual ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages (POPL), pp. 221-232, ACM Press, 1995.
Summary: This paper presents a condition for the total correctness of transformations on recursive programs, which, for the first time, deals with higher-order functional languages (both strict and non-strict) including lazy data structures. The main technical result is an improvement theorem which says that if the local transformation steps are guided by certain optimisation concerns (a fairly natural condition for a transformation), then correctness of the transformation follows. The improvement theorem makes essential use of a formalised improvement-theory; as a rather pleasing corollary it also guarantees that the transformed program is a formal improvement over the original. The theorem has immediate practical consequences: (i) It is a powerful tool for proving the correctness of existing transformation methods for higher-order functional programs, without having to ignore crucial factors such as memoization or folding. We have applied the theorem to obtain a particularly simple proof of correctness for a higher-order variant of deforestation. (ii) It yields a simple syntactic method for guiding and constraining the unfold/fold method in the general case so that total correctness (and improvement) is always guaranteed.
Danvy's type-directed partial evaluation (TDPE, a.k.a. Berger and Schwichtenberg's normalization by evaluation) is a simple and efficient way of performing strong normalization in the simply-typed lambda-calculus. In this talk, I present a completely online variant of TDPE, which is straightforwardly applicable not only to the simply-typed lambda-calculus but also to realistic functional languages such as ML and Scheme. Furthermore, I show the correspondence of our method with an online variant of Thiemann's cogen-based syntax-directed partial evaluation (SDPE), which sheds light on the relationship between TDPE and SDPE.
Well-quasi-order (WQO) is one of the most important concepts in combinatorics. In this survey, we will introduce Kruskal's theorem and its extensions (which enable us to extend a WQO to various data structures) together with their basic proof technique, called minimal bad sequence (MBS). We also briefly overview their applications, such as automatic termination detection and polynomial-time algorithm generation.
Abstract: To be given soon.
Parallel primitives (skeletons) are intended to encourage programmers
to build a parallel program from ready-made components for which
efficient implementations are known to exist, making the
parallelization process easier. However, programmers often find it
difficult to choose proper parallel primitives and to combine them
well in order to develop efficient parallel
programs. To overcome this problem, we shall propose a new
transformation, called {\em diffusion}, which can efficiently
decompose a recursive definition into several functions such that
each function can be described using parallel primitives,
allowing programmers to describe algorithms in a more natural
recursive form. We demonstrate our idea with several interesting
examples.
We propose a language mechanism for functional-style data-parallel processing of recursively defined data. The crucial idea behind the language mechanism is that a recursively defined datum is represented as a system of equations, where every node of the recursive datum is assigned a unique variable $\alpha$ and each equation $\alpha=V$ denotes that the recursive datum has the value $V$ at $\alpha$. We achieve data-parallelism on recursively defined data by applying the same function over the system of equations, simultaneously transforming each equation into another one.
We give a data-parallel programming language that integrates the mechanism into a typed higher-order functional language. Combining the mechanism with ordinary recursion, we obtain a functional-style data-parallel programming method called data-parallel recursion. A significant class of data-parallel algorithms based on the pointer-jumping technique can be expressed by data-parallel recursion, and the resulting functions are type-safe and can be used polymorphically.
(joint work with Atsushi Ohori)
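A sequential Haskell simulation of the idea (representation and names are illustrative): each equation maps a node to its current successor and an accumulated distance, and one data-parallel step rewrites every equation simultaneously, as in the pointer-jumping technique for list ranking.

    import qualified Data.Map as M
    import Data.Maybe (isNothing)

    -- The system of equations: node -> (current successor, distance so far).
    type Eqns = M.Map Int (Maybe Int, Int)

    -- One data-parallel step: every equation is rewritten simultaneously,
    -- doubling the distance each node "sees" along the list.
    jump :: Eqns -> Eqns
    jump eqns = M.map step eqns
      where
        step (Nothing, d) = (Nothing, d)
        step (Just s,  d) = let (s', d') = eqns M.! s in (s', d + d')

    -- List ranking: iterate until every node points past the end
    -- (logarithmically many parallel steps with pointer jumping).
    rank :: Eqns -> Eqns
    rank eqns
      | all (isNothing . fst) (M.elems eqns) = eqns
      | otherwise                            = rank (jump eqns)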
Formal methods have been extensively researched in academia and much progress has been made in formal specification, refinement, and verification. Applications of formal methods to safety-critical systems are increasing and some of them have produced significant benefits in reducing costs and enhancing system dependability. In order to make formal methods more effective and applicable to complex systems development at large, obstacles have yet to be overcome. For example, how can formal specifications be made easy to read and write for most engineers in industry? How can formal methods be effectively and consistently integrated into established industrial software processes? How can software systems be easily verified and validated under practical constraints of schedule, budget, and labour? How can formal methods be supported intelligently in the entire software process? How can documentation, management, cost estimation, and communication be improved in a software project by applying formal methods?
To overcome these obstacles, we propose to take an engineering approach to use formal methods. We expect formal engineering methods to offer engineering disciplines and techniques to provide practical and rigorous ways for systems development.
In this talk, I will first give a comprehensive introduction to formal methods and then describe what formal engineering methods are, comparing them with traditional formal methods. As an example of formal engineering methods, I will talk about SOFL (Structured Object-based Formal Language) and its method, and show how SOFL is applied to the development of real software systems.
Program specialization can lead to significant running-time improvements for a number of well-known and good algorithms for practical problems, but not for all; and code explosion sometimes occurs.
The talk will discuss benefits gained and new problems encountered in applying partial evaluation to speed up programs in areas outside programming languages.
Situations in which partial evaluation can be of benefit will be analyzed in general terms, and applications to computer graphics and some other problems will be described. A few types of program especially susceptible to such optimizations will be characterized. Principles that lead to speedup are emphasized, particularly for the ubiquitous "divide and conquer" paradigm, so one can see when to consider using specialization for practical problems.
A class of "oblivious" algorithms not suffering from code explosion
will also be described. Surprisingly, the speedup obtained by specializing
to small problem size sometimes speeds up the entire computation by nearly
the same amount as n goes to infinity, and avoids problems of code explosion.
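A textbook instance of the principle (not an example from the talk): specialising a divide-and-conquer power function to a statically known exponent unfolds all control that depends on the exponent, leaving a residual program over the dynamic input only.

    -- General divide-and-conquer power function: the exponent is "static".
    power :: Int -> Int -> Int
    power 0 _ = 1
    power n x
      | even n    = let y = power (n `div` 2) x in y * y
      | otherwise = x * power (n - 1) x

    -- Residual program after hand-specialising to the exponent 5: all
    -- tests and recursion on the static argument have been executed away.
    power5 :: Int -> Int
    power5 x = x * (let y = x * 1 in let z = y * y in z * z)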
How NOT to Write an Interpreter for Specialization
A partial evaluator, given a program and a known ``static'' part of its input data, outputs a residual program in which computations depending only on the static data have been performed.
Ideally the partial evaluator would be a ``black box'' able to extract nontrivial static computations whenever possible; which never fails to terminate; and which always produces residual programs of reasonable size and maximal efficiency, so all possible static computations have been done.
Practically speaking, partial evaluators fall short of this goal; they may loop, sometimes pessimize, and/or explode code size. A partial evaluator is analogous to a spirited horse: while impressive results can be obtained when used well, the user must know what he/she is doing.
Our thesis is that this knowledge can be communicated to
new users of these tools.
This paper presents a series of examples, concentrating on a quite
broad and on the whole successful application area: using specialisation
to remove interpretational overhead. The examples, both positive and
negative, illustrate the effects of program style on the efficiency
and size of the target programs obtained by specialising an interpreter
with respect to a known source program.
Innermost termination of equational rewriting
We outline recent work, based on equational rewriting, about innermost reduction. Equational rewriting is a framework that gives a suitable model in which equations such as commutativity and associativity axioms can be handled naturally in term rewriting. On the other hand, innermost reduction corresponds to the call-by-value semantics of computation, which is found at the basis of programming languages, e.g., ML, Miranda, and recently Mathematica. In this talk, we generalize the concept of innermost reduction to equational rewriting, and then we discuss its properties such as compatibility, preservation of reducibility, and persistence. We also compare several different definitions of innermost reduction and examine their validity from the viewpoint of the generalization of term rewriting.
DualNAVI: The Interactive Document Retrieval Interface
Verification based on flow analysis
Verification based on flow analysis is presented.
The method originates from Le Metayer's work (ACM PEPM'95) and
is reconstructed within the framework of abstract interpretation.
Several examples, including quick-sort, merge-sort, and format, are
also examined. We also discuss future directions, including its
implementation.
Programming Polytypic Scans
This talk reports our ongoing research, which aims to show how to generalize scan, a powerful parallel primitive, from lists to other data types; how to derive parallel programs in terms of scan from naive specifications; and how to implement scan efficiently.
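One possible generalization, to fix ideas (an illustrative choice, not necessarily the polytypic definition used in this work): on binary trees, scan can accumulate downwards along root-to-node paths, just as scanl accumulates along a list.

    -- scanl accumulates along a list; an analogous scan on binary trees
    -- accumulates along every root-to-node path (downwards accumulation).
    data BTree a = Tip a | Bin a (BTree a) (BTree a)

    scanTree :: (b -> a -> b) -> b -> BTree a -> BTree b
    scanTree f e (Tip a)     = Tip (f e a)
    scanTree f e (Bin a l r) = let e' = f e a
                               in Bin e' (scanTree f e' l) (scanTree f e' r)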
Synchronisation Analysis to Stop Tupling
The tupling transformation strategy can be used to merge loops together by combining recursive calls, and also to eliminate redundant calls for a class of programs. In the latter case, this transformation can produce super-linear speedup. Existing work on deriving safe and automatic tupling applies only to a very limited class of programs. In this talk, we present a new parameter analysis, called synchronisation analysis, to solve the termination problem for tupling. With it, we can perform tupling on functions with multiple recursion and accumulative arguments without the risk of non-termination. This significantly widens the scope of tupling, and potentially enhances its usefulness.
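For orientation, the textbook case in which tupling eliminates redundant calls (far simpler than the talk's examples with multiple recursion and accumulative arguments):

    -- Naive fib makes exponentially many overlapping calls ...
    fib :: Integer -> Integer
    fib 0 = 0
    fib 1 = 1
    fib n = fib (n - 1) + fib (n - 2)

    -- ... but tupling the two recursive calls gives a linear-time version.
    fibPair :: Integer -> (Integer, Integer)   -- (fib n, fib (n + 1))
    fibPair 0 = (0, 1)
    fibPair n = let (a, b) = fibPair (n - 1) in (b, a + b)

    fib' :: Integer -> Integer
    fib' = fst . fibPair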
Binding-Time Analysis for Specializer without Code Duplication
This paper presents a monovariant binding-time analysis for an offline specializer where no code duplication will occur. Although the standard let-insertion technique is used, our framework allows us to residualize an expression in a let-expression without making it dynamic. This is in contrast to conventional specializers, where once an expression is residualized in a let-expression, it becomes completely dynamic. The key technique here is to carry both static and dynamic values for expressions that are used in both static and dynamic contexts. The overhead of carrying paired values is minimized by identifying those expressions that are actually used as both static and dynamic. A type-based binding-time analysis for this specializer is formalized and demonstrated with an example. An extension of this framework to cope with data structures is also presented. This technique is essential for languages such as Scheme, where the identity of data structures is checked for equality and thus has to be preserved.
Imperative Program Specialization: A Case Study
Partial evaluation is an automatic program optimization technique, similar in concept to, but in several ways different from, optimizing compilers. Optimization is achieved by changing the times at which computations are performed. A partial evaluator can be used to overcome losses in performance that are due to highly parameterized, modular software.
This talk gives a short introduction to program specialization by off-line partial evaluation. We show how a binding-time analysis can be used to identify potential sources for specialization in numerical algorithms. To demonstrate the effectiveness of this approach we used an automatic partial evaluator for FORTRAN which we developed. Results for several well-known numerical algorithms show that some remarkable speedup factors can be obtained.
Joint work with R.Baier, R.Nakashige, R.Zoechling.
Generating Extensions and Program Specialization
Program specializers can divide a computation into several computation stages. Self-applicable specializers have been used successfully to automate the generation of compilers using the Futamura projections.
While the Futamura projections define the generation of compilers from interpreters, the specializer projections presented in this talk define the generation of specializers (and related transformers) from interpreters. We investigate the theoretical and practical limitations of conventional specialization tools, present multi-level generating extensions and demonstrate that, in combination with the so-called cogen approach, they are far more practical than previously supposed. The program generator which we designed and implemented for a higher-order functional language converts programs into very compact multi-level generating extensions that guarantee fast successive specialization.
Based on joint work with J.Joergensen.
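For reference, the classical Futamura projections mentioned above can be written schematically as equations over an abstract specializer mix (the Program/Input names below are placeholders, not a concrete system):

    -- mix :: Program -> Input -> Program   (an offline program specializer)
    --
    --   target   = mix interp source    -- 1st projection: a compiled program
    --   compiler = mix mix    interp    -- 2nd projection: a compiler
    --   cogen    = mix mix    mix       -- 3rd projection: a compiler generator
    --
    -- The specializer projections discussed in the talk play the analogous
    -- role for generating specializers (and related transformers) from
    -- interpreters.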
On a New Method for Dataflow Analysis of Java Virtual Machine Subroutines
Report on IFIP WG 2.1 Working Meeting #51
Nested Parallel Programming
Current main-stream data-parallel languages forbid the use of nested parallelism, i.e., parallel operations cannot be used in parallel. For example, the parallel FORALL construct of High Performance Fortran (HPF) cannot be used within another FORALL. This restriction considerably simplifies the implementation of a data-parallel language, but also severely restricts the range of algorithms that can be implemented conveniently; in particular, it complicates the processing of irregular data structures, such as sparse matrices.
This talk aims to give an overview of Blelloch's functional language Nesl, which provides nested parallelism and can be efficiently implemented using Blelloch's flattening transformation. I will demonstrate how algorithms that crucially rely on nested parallelism can be expressed in Nesl; but I will also discuss the limitations of the language and possible improvements and extensions. The material presented in this talk was partially developed together with M. Simons, G. Keller, and W. Pfannenstiel of the Software Engineering Group (SWT) of the Technical University of Berlin.
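A standard example of nested parallelism is sparse matrix-vector multiplication: a parallel loop over rows, each containing a parallel loop over that row's nonzeros. The Haskell rendering below uses list comprehensions to stand in for Nesl's parallel apply-to-each, so it is only a sequential sketch of the nested structure.

    -- Nonzeros of one row: (column index, value) pairs.
    type Row = [(Int, Double)]

    -- Outer comprehension: parallelism over rows; inner comprehension:
    -- parallelism over each row's nonzeros.
    smvm :: [Row] -> [Double] -> [Double]
    smvm rows x = [ sum [ v * (x !! c) | (c, v) <- row ] | row <- rows ]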
Adapting the Fusion/Tupling transformation to a parallel context
The flattening transformation as an implementation technique for nested data-parallel languages results in code consisting of many very fine-grained parallel operations. In the current implementations of Nesl, these operations are realized as part of a parallel vector library, which is available for a wide range of computers.
However, especially on distributed memory machines, this approach has serious performance problems. This is due to two main reasons:
Constraint-Based Parallelization
Transforming List Homomorphisms to Vector Codes
Intersection typing and partial combinatory algebras
The intersection typing discipline was introduced by Dezani-Coppo-Salle in the 1970s, and it has been applied to the study of normalization properties of lambda-terms, operational semantics, and abstract interpretation.
Although combinators are important as compiled code of lazy functional programming languages, their normalization properties have not been studied so far.
To this end, we introduce an intersection typing system which characterizes the strongly normalizable combinators. The system is based on Dezani-Hindley's intersection typing system for combinators. As a by-product of our characterization, we can characterize the strongly normalizable lambda-terms, using a translation from lambda-terms to combinators.
The system is proved to be sound and complete for the partial combinatory algebras, which are models of the call-by-value lambda-calculus, and yields a model of higher typed lambda-calculi.
Based on the above results, we will answer a question of Bethke-Klop: given a partial combinatory algebra of strongly normalizable combinators modulo weak equality, its maximal consistent extension is isomorphically embeddable into the filter domain of our typing system.
Types and trees in lambda calculus
In the Torino tradition of intersection type disciplines, type assignment systems have been developed for various variants of Boehm trees (Boehm trees, Levy-Longo trees, Berarducci trees, ...). In this talk I will try to give a uniform approach to the various intersection type assignment systems.
Partial Evaluation for Higher-Order Imperative Languages
Offline partial evaluation (pe) is an automatic program transformation technique that is strong enough to tackle realistic problems and simple enough to admit efficient implementation. While there are successful pe systems for pure functional languages, it must be observed that many programs rely on impure effects like state and exceptions for efficiency or convenience. Our work aims at extending the scope of offline pe to achieve satisfactory specialization for programs of the latter kind.
We have designed and implemented an offline pe system that applies to a large subset of the Scheme language. Its distinguishing feature is the ability to perform assignments at specialization time. It yields satisfactory specialization for programs in message-passing style, programs with cyclic data, DAG unification, and so on.
First, we briefly introduce the principles underlying the binding-time analysis in our system. Then we describe the design of the specializer and its extension for the specialization of named program points. Finally, we discuss some implementation issues of the specializer.
This work is based on collaboration with Dirk Dussart, Belgium.
Deriving Laws by Program Specialisation
Laws over user-defined functions are frequently needed in more advanced program transformation and reasoning techniques. Some examples include identity, idempotence, commutativity, associativity, distributivity, and promotion laws. While it is not unreasonable to expect programmers to provide these laws, it would be nice if there were methods to derive them systematically from the function definitions. We make a first attempt to provide a collection of techniques to help derive a repertoire of such laws. The method works by first specialising the expressions, before an attempt is made to relate the specialised code to existing functions.
Proof nets construction in linear logic
Proof nets have been defined to be the equivalent in linear logic of natural deduction in intuitionistic logic. Proof nets could therefore have applications both in linear logic programming and in automatic program generation, because of the Curry-Howard isomorphism. It thus seems important to be able to construct them, if possible automatically.
We present efficient algorithms to construct proof nets in MLL (the multiplicative fragment) and MALL (the multiplicative and additive fragment of linear logic).
Perpetuality and Uniform Normalization
(by Zurab Khasidashvili and Mizuhito Ogawa, to appear in ALP '97)
We define a perpetual one-step reduction strategy which enables one to construct minimal (w.r.t. L\'evy's ordering $\lleq$ on reductions) infinite reductions in Conditional Orthogonal Expression Reduction Systems. We use this strategy to derive two characterizations of perpetual redexes, i.e., redexes whose contractions retain the existence of infinite reductions. These characterizations generalize existing related criteria for perpetuality of redexes. We give a number of applications of our results, demonstrating their usefulness. In particular, we prove equivalence of weak and strong normalization (the uniform normalization property) for various restricted $\lambda$-calculi, which cannot be derived from previously known perpetuality criteria.
TITLE: Surveys on Term Rewriting Systems
LECTURER: Mizuhito Ogawa (NTT)
TIME: 14:00 -- 18:00 on August 14 and 15 (1997) (two afternoons)
PLACE: Room 132, Building 9 of Faculty of Engineering, Univ. of Tokyo
FEE: Free
CONTENTS: The following are the (brief) contents of the surveys on TRS.
1. Church-Rosser property and unique normal form property
2. Termination
3. Completion and inductionless induction
4. Decidability results - reachability, joinability, and higher-order unification
If time permits, a survey on WQO (well quasi-order) will be given too.
1. Definitions and various Kruskal-type theorems
2. Applications of Kruskal-type theorems
3. Overview of proofs of Kruskal-type theorems
4. Regularity and WQO
5. Future research topics
Slides for WQO are available at Mr. Ogawa's homepage (in Japanese, http://www.brl.ntt.co.jp/people/mizuhito/index-j.html).
Implementation strategies for nested parallelism
Regular computations are well suited for execution on parallel machines, since central problems like the load balancing strategy and data distribution can be determined at compile time. However, numerous computationally intensive problems like adaptive simulation or numerical problems have an irregular structure, which makes an efficient implementation on a parallel computer much harder. Consequently, compilers for programming languages that allow such computations to be expressed have to be designed carefully.
Nested parallel languages do support irregular problems. The so-called flattening technique is one way to implement nested parallelism, and several implementations demonstrate the efficiency of this technique for shared memory machines. When using flattening, nested parallel computations are compiled into equivalent flat computations without changing the degree of parallelism specified in the source language. However, on distributed memory machines, the implementation has to deal with additional problems, and there is as yet no efficient implementation on distributed memory machines proving the feasibility of the approach for those architectures.
In the talk, I will first introduce a formal framework that allows the flattening technique to be understood as a program transformation based on equational reasoning. Such a framework also provides the basis for optimizations. Second, I will propose an implementation technique that tackles many of the problems faced when implementing flattened nested parallel languages on distributed memory machines.
T. Sheard. A Type-Directed, On-Line, Partial Evaluation for a Polymorphic Language. PEPM'97.
Co-ordination in the Lambda Calculus
Goffin is an extension of the functional language Haskell with combinators from concurrent constraint programming. It allows the explicit expression of the co-ordinating behaviour that is needed to program parallel and distributed systems. An extension of the lambda calculus is proposed as the essential core of Goffin. It distinguishes computing expressions and co-ordinating agents by the type system and uses an explicit notion of names for a smooth integration of equational constraints into the calculus.
Lambda Calculus with First-class Environment and its Semantics
We present a new lambda calculus which enables us to handle first-class environments, where we can treat environments as first-class objects, like integer or boolean values. We have studied fundamental properties of the calculus in recent years, e.g., confluence, strong normalization of the simply typed version, a type inference algorithm, etc. An untyped version of this calculus and its semantics are now being investigated, and they will be the main topic of the next talk.
Relativized Stable Domains
Partial Evaluation of Call-by-value Lambda-calculus with Side-effects
"Constraints to Stop Higher Order Deforestation", By Helmuth Seidl and Morten Heine S\o{}rensen, POPL'97.
Some ideas on "Parallelizing Sequential Programs"
Towards a Categorical Foundation of HYLO
Analytic functors as a foundation of the theory of recursive path orderings
Deriving Efficient Parallel Programs from Complex Recurrences
The Geometry of Orthogonal Reduction Spaces
Metayer's PEPM'95 paper
About the HYLO Calculational System
HARL can easily be reached by the frequent Tobu-Tojo line from Ikebukuro in Tokyo. Take the express train (one every 15 minutes from Ikebukuro: xx.00, xx.15, xx.30, xx.45) to Takasaka. This takes about one hour. Then take the west exit of Takasaka station and proceed either by Hitachi bus or by taxi (approx. 15 min. and about 2000 yen, I believe). For more details, see here for the route and here for the timetable of the Hitachi bus.
A Lambda-to-CL Translation for Strong Normalization
Some Conjectures on "CR property, Unique normalizing, ..."
On Analysis of Functional Programs
Meaningless terms in rewriting
On Infinite Rewriting
Survey on the Third Homomorphism Theorem
To be announced.
BTA Algorithms to Ensure Termination of Off-line Partial Evaluation
Unique Normal Form Property of Higher-Order Rewriting Systems
Towards Practical Tupling Transformation
Survey on Set Constraints
On Full Abstraction of PCF and Game Semantics (Cont'd)
J. Hughes, L. Pareto, and A. Sabry. Proving Correctness of Reactive Systems Using Sized Types. POPL '96. (Cont'd)
Infinite Lambda Calculus
On Full Abstraction of PCF and Game Semantics (Cont'd)
J. Hughes, L. Pareto, and A. Sabry. Proving Correctness of Reactive Systems Using Sized Types. POPL '96.
Study on Abstract Domain of Computational Path Analysis
On Full Abstraction of PCF and Game Semantics
(Click here to get a postscript campus map.)
Tupling Transformation in Calculational Form
On Translations of classical logic into constructive logic
Ryu Hasegawa, Well-orderings of algebras and Kruskal's theorem, Logic Language and Computation: Festschrift in Honor of Satoru Takasu, pp.133-172, LNCS 792, 1994 (Cont'd)
Barry Jay, Shapely Types and Shape Polymorphism, Programming Languages and Systems -- ESOP'94. Proceedings, pp. 302-316, LNCS 788, 1994 (Cont'd)
(Some related papers can be accessed from Barry Jay's www home page.)
Ryu Hasegawa, Well-orderings of algebras and Kruskal's theorem, Logic Language and Computation: Festschrift in Honor of Satoru Takasu, pp.133-172, LNCS 792, 1994
On Shape Polymorphisms (Some related papers can be accessed from Barry Jay's www home page.)
An example calculation from a paper by Bird & de Moor: Relational Program Derivation and Context-free Language Recognition in 'A Classical Mind', Prentice Hall, 1994 (Cont'd)
Bananas Go Bananas
An example calculation from a paper by Bird & de Moor: Relational Program Derivation and Context-free Language Recognition in 'A Classical Mind', Prentice Hall, 1994
On Relational Calculus
Categorical Aspects of Algorithm Calculus (Cont'd)
Deriving Hylomorphisms from Recursive Definitions