Note: The documents distributed by this server have been provided by the contributing authors as a means to ensure timely dissemination of scholarly and technical work on a noncommercial basis. Copyright and all rights therein are maintained by the authors or by other copyright holders, notwithstanding that they have offered their works here electronically. It is understood that all persons copying this information will adhere to the terms and constraints invoked by each author's copyright. These works may not be reposted without the explicit permission of the copyright holder.
For a fuller list of publications, including those in Japanese (I write quite a few of them, mostly with my students and colleagues), please visit here.
[pdf] Kazunori Ueda and Norio Kato, LMNtal: a Language Model with Links and Membranes. In Proc. Fifth Int. Workshop on Membrane Computing (WMC 2004), LNCS 3365, Springer, 2005, pp.110-125.
LMNtal (pronounced "elemental") is a simple language model based on graph rewriting that uses logical variables to represent links and membranes to represent hierarchies. The two major goals of LMNtal are (i) to unify various computational models based on multiset rewriting and (ii) to serve as the basis of a truly general-purpose language covering various platforms ranging from wide-area to embedded computation. Another important contribution of the model is that it greatly facilitates programming with dynamic data structures.
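To give a flavor of the multiset-rewriting substrate that goal (i) refers to, here is a minimal sketch (the rule format is a hypothetical illustration, not LMNtal syntax; links and membranes are omitted):

```python
from collections import Counter

def rewrite(state, rules):
    """Exhaustively apply multiset-rewriting rules to a state.

    state: Counter of atom names; rules: list of (lhs, rhs) Counter pairs.
    Repeatedly applies the first applicable rule until none applies
    (order-dependent, in the style of a toy chemical abstract machine).
    """
    state = Counter(state)
    changed = True
    while changed:
        changed = False
        for lhs, rhs in rules:
            if all(state[a] >= n for a, n in lhs.items()):
                state = state - lhs + rhs   # consume lhs, produce rhs
                changed = True
                break
    return state

# Toy rule: two h2 molecules and one o2 react into two h2o.
rules = [(Counter({"h2": 2, "o2": 1}), Counter({"h2o": 2}))]
print(rewrite(Counter({"h2": 5, "o2": 2}), rules))
```

LMNtal proper goes well beyond this flat picture: atoms carry links to one another and membranes group atoms hierarchically, so rules rewrite graphs rather than bare multisets.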
Kazunori Ueda and Norio Kato, The Language Model LMNtal. In Proc. 19th Int. Conf. on Logic Programming (ICLP'03), LNCS 2916, Springer-Verlag, pp.517-518, 2003.
[pdf]
Kazunori Ueda,
A Pure Meta-Interpreter for Flat GHC, A Concurrent Constraint
Language.
In Computational Logic: Logic Programming and Beyond
(Essays in Honour of Robert A. Kowalski, Part I),
A.C. Kakas, F. Sadri (Eds.), Lecture Notes in Artificial Intelligence
2407, Springer-Verlag, 2002, pp.138-161.
This paper discusses the construction of a
meta-interpreter of Flat GHC, one of the simplest and
earliest concurrent constraint languages.
Meta-interpretation has a long history in logic programming, and has
been applied extensively to building programming systems, adding
functionalities, modifying operational semantics and evaluation
strategies, and so on.
Our objective, in contrast, is to design the pair of
(i) a representation of programs suitable for code mobility and
(ii) a pure interpreter (or virtual machine)
of the represented code,
bearing networked applications of concurrent constraint programming
in mind. This is more challenging
than it might seem; indeed,
meta-interpreters of many programming languages
achieved their objectives by adding small
primitives into the languages and exploiting their functionalities.
A meta-interpreter in a pure, simple concurrent language is
useful because it is fully amenable to theoretical support including
partial evaluation.
After much trial and error, we have arrived at treecode,
a ground-term representation of Flat GHC programs that can be easily
interpreted, transmitted over the network, and converted back to the
original syntax. The paper describes how the interpreter
works, where the subtleties lie, and what its design implies. It also
describes how the interpreter, given the treecode of a program, is
partially evaluated to the original program by the unfold/fold
transformation system for Flat GHC.
Yasuhiro Ajiro and Kazunori Ueda, Kima: an Automated Error Correction System for Concurrent Logic Programs. Automated Software Engineering, Vol.9, No.1 (2002), pp.67-94.
We have implemented Kima, an automated error correction system for
concurrent logic programs. Kima corrects near-misses such as wrong
variable occurrences in the absence of explicit declarations of program
properties.
Strong moding/typing and constraint-based analysis are turning out to
play fundamental roles in debugging concurrent logic programs as well as
in establishing the consistency of communication protocols and data
types. Mode/type analysis of Moded Flat GHC is a constraint
satisfaction problem with many simple mode/type constraints, and can be
solved efficiently. We proposed a simple and efficient technique which,
given a non-well-moded/typed program, diagnoses the "reasons" of
inconsistency by finding minimal inconsistent subsets of mode/type
constraints. Since each constraint keeps track of the symbol occurrence
in the program, a minimal subset also tells possible sources of program
errors.
Kima realizes automated correction by replacing symbol occurrences
around the possible sources and recalculating modes and types of
the rewritten programs systematically. As long as bugs are near-misses,
Kima proposes a rather small number of alternatives that include an
intended program. Search space is kept small because the minimal subset
confines possible sources of errors in advance. This paper presents the
basic algorithm and various optimization techniques implemented in Kima,
and then discusses its effectiveness based on quantitative experiments.
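The diagnosis step described above — shrinking an inconsistent constraint set to a minimal inconsistent subset — can be sketched generically; a deletion-based version might look like this (the constraint encoding and consistency check below are hypothetical stand-ins for Kima's mode/type constraints):

```python
def minimal_inconsistent_subset(constraints, consistent):
    """Shrink an inconsistent constraint set to a minimal inconsistent subset.

    consistent(cs) decides satisfiability of a list of constraints.
    Deletion-based minimization: try dropping each constraint; keep the
    drop if inconsistency survives. The result is minimal (every proper
    subset is consistent), though not necessarily of minimum size.
    """
    assert not consistent(constraints)
    core = list(constraints)
    i = 0
    while i < len(core):
        trial = core[:i] + core[i + 1:]
        if not consistent(trial):
            core = trial          # constraint i was not needed for the conflict
        else:
            i += 1                # constraint i is essential; keep it
    return core

# Toy constraints: (coef, const) encodes coef*x == const; a set is
# consistent iff all members agree on the value of x.
def consistent(cs):
    return len({c / a for a, c in cs}) <= 1

cs = [(1, 2), (2, 4), (1, 3)]     # x=2, x=2, x=3: inconsistent
print(minimal_inconsistent_subset(cs, consistent))
```

Because every constraint in the returned core traces back to a symbol occurrence, the core directly points at candidate error locations, which is what the correction phase then perturbs.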
Kazunori Ueda,
A Close Look at Constraint-Based Concurrency (a tutorial).
In Proc. 17th Int. Conf. on Logic Programming (ICLP'01),
Codognet, P. (ed.), Lecture Notes in Computer Science
2237, Springer, 2001, p.9.
PDF file of the talk slides
Constraint-based concurrency, also known as the cc
(concurrent
constraint) formalism, is a simple framework of concurrency that
features (i) asynchronous message passing, (ii) polyadicity and data
structuring mechanisms, (iii) channel mobility, and (iv) nonstrictness
(or computing with partial information). Needless to say, all these
features originate from the use of constraints and logical variables for
interprocess communication and data representation.
Another feature of constraint-based concurrency is its remarkable
stability; all the above features were available essentially in its
present form by the mid-1980s in concurrent logic programming languages.
Another well-studied framework of concurrency is name-based
concurrency, represented by the family of pi-calculi, in which
names represent both channels and tokens conveyed by channels.
Some variants of the original pi-calculus featured asynchronous
communication, and some limited the use of names in pursuit of nicer
semantical properties. These modifications are more or less related
to the constructs of constraint-based concurrency.
Integration of constraint-based and name-based concurrency can be
found in proposals of calculi such as the gamma-calculus, the
rho-calculus and the Fusion calculus, all of which incorporate
constraints or name equations into name-based concurrency.
This tutorial takes a different, analytical approach in
relating the two formalisms; we compare the roles of logical
variables and names rather than trying to integrate one into the
other. Although the comparison under their original, untyped
setting is
not easy, once appropriate type systems are incorporated into both
camps, name-based communication and constraint-based communication
exhibit more affinities.
Examples of such type systems are linear types for the
pi-calculus and the mode/linearity/capability systems for Guarded
Horn Clauses (GHC). Both are concerned with the polarity and
multiplicity of communication, and prescribe the ways in which
communication channels can be used. They help in-depth understanding
of the phenomena occurring in the course of computation.
The view of constraint-based concurrency that the tutorial intends to
provide will complement the usual, abstract view based on ask
and tell operations on a constraint store. For instance, it
reveals the highly local nature of a constraint store (both at
linguistic and implementation levels) which is often understood to be
a global, shared entity. It also brings resource-consciousness into
the logical setting.
This is expected to be a step towards the deployment of
cc languages as a common platform of
non-sequential computation including parallel, distributed, and
embedded computing.
[pdf]
Kazunori Ueda,
Resource-Passing Concurrent Programming.
In Proc. Fourth Int. Symp. on Theoretical Aspects of Computer Software,
Kobayashi, N. and Pierce, B. (Eds.), Lecture Notes in Computer Science
2215, Springer, 2001, pp.95-126.
PDF file of the talk slides
The use of types to deal with access capabilities of program
entities is becoming increasingly popular.
In concurrent logic programming, the first attempt was made in
Moded Flat GHC in 1990, which gave polarity structures (modes) to
every variable occurrence and every predicate argument. Strong
moding turned out to play fundamental roles in programming,
implementation and the in-depth understanding of constraint-based
concurrent computation.
The moding principle guarantees that each variable is written
only once and encourages capability-conscious programming.
Furthermore, it gives less generic modes to programs that discard
or duplicate data, thus providing the view of "data as
resources." A simple linearity system built upon the mode system
distinguishes variables read only once from those read possibly
many times, enabling compile-time garbage collection. Compared to
linear types studied in other programming paradigms, the primary
issue in constraint-based concurrency has been to deal with
logical variables and highly non-strict data structures they
induce.
In this paper, we put our resource-consciousness one step
forward and consider a class of `ecological' programs which
recycle or return all the resources given to them while allowing
concurrent reading of data structures via controlled aliasing.
This completely recyclic subset forces us to think more about
resources, but the resulting programs enjoy a high symmetry that we
believe has more than aesthetic implications for our programming
practice in general.
The type system supporting recyclic concurrent programming
gives a [-1,+1] capability to each occurrence of variable and
function symbols (constructors), where positive/negative values
mean read/write capabilities, respectively, and fractions mean
non-exclusive read/write paths. The capabilities are intended to
be statically checked or reconstructed so that one can tell the
polarity and exclusiveness of each piece of information handled by
concurrent processes. The capability type system refines and
integrates the mode system and the linearity system for Moded Flat
GHC. Its arithmetic formulation contributes to the simplicity.
The execution of a recyclic program proceeds so that every
variable has zero-sum capability and the resources (i.e.,
constructors weighted by their capabilities) a process absorbs
match the resources it emits. Constructors accessed by a process
with an exclusive read capability can be reused for other
purposes.
The first half of this paper is devoted to a tutorial
introduction to constraint-based concurrency in the hope that it
will encourage cross-fertilization of different concurrency
formalisms.
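The zero-sum condition on capabilities is mechanically checkable; the following toy checker (the pair encoding of variable occurrences is a hypothetical illustration, not the paper's notation) conveys the arithmetic flavor of the type system:

```python
from collections import defaultdict
from fractions import Fraction

def unbalanced_variables(occurrences):
    """Check the zero-sum capability condition per variable.

    occurrences: list of (variable, capability) pairs with capabilities
    in [-1, +1]; positive values mean read capabilities, negative values
    write capabilities, and proper fractions non-exclusive access paths.
    Returns the set of variables whose capabilities do not cancel out.
    """
    totals = defaultdict(Fraction)
    for var, cap in occurrences:
        cap = Fraction(cap)
        assert -1 <= cap <= 1, f"capability out of [-1,+1] for {var}"
        totals[var] += cap
    return {v for v, total in totals.items() if total != 0}

# One writer (-1) balanced by two non-exclusive readers (+1/2 each) for X,
# an exclusive writer/reader pair for Y, and a dangling reader for Z:
occs = [("X", -1), ("X", Fraction(1, 2)), ("X", Fraction(1, 2)),
        ("Y", -1), ("Y", 1),
        ("Z", Fraction(1, 2))]
print(unbalanced_variables(occs))   # Z lacks a balancing writer
```

Exact rational arithmetic (rather than floats) matters here, since fractional capabilities must sum to exactly zero for a program to be well typed.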
[ps.gz]
Yasuhiro Ajiro and Kazunori Ueda,
Kima -- an Automated Error Correction System for Concurrent
Logic Programs.
In Proc. Fourth International Workshop on
Automated Debugging (AADEBUG 2000).
We have implemented Kima, an automated error correction system for
concurrent logic programs. Kima corrects near-misses such as wrong
variable occurrences in the absence of explicit declarations of program
properties.
Strong moding/typing and constraint-based analysis are turning out to play
fundamental roles in debugging concurrent logic programs as well as in
establishing the consistency of communication protocols and data types.
Mode/type analysis of Moded Flat GHC is a constraint satisfaction
problem with many simple mode/type constraints, and can be solved
efficiently. We proposed a simple and efficient technique which, given
a non-well-moded/typed program, diagnoses the "reasons" of
inconsistency by finding minimal inconsistent subsets of mode/type
constraints. Since each constraint keeps track of the symbol occurrence
in the program, a minimal subset also tells possible sources of program
errors.
Kima realizes automated correction by replacing symbol occurrences
around the possible sources and recalculating modes and types of
the rewritten programs systematically. As long as bugs are near-misses,
Kima proposes a rather small number of alternatives that include an
intended program. Search space is kept small because the minimal subset
confines possible sources of errors in advance. This paper presents the
basic algorithm and various optimization techniques implemented in Kima,
and then discusses its effectiveness based on quantitative experiments.
[ps]
Kazunori Ueda, Linearity Analysis of Concurrent Logic Programs.
In Proc. International Workshop on Parallel and Distributed
Computing for Symbolic and Irregular Applications,
Ito, T. and Yuasa, T. (eds.), World Scientific, 2000, pp.253-270.
Automatic memory management and the hiding of the notion of pointers are the prominent features of symbolic processing languages. They make programming easy and guarantee the safety of memory references. For the memory management of linked data structures, copying garbage collection is most widely used because of its simplicity and desirable properties. However, if certain properties about runtime storage allocation and the behavior of pointers can be obtained by static analysis, a compiler may be able to generate object code closer to that of procedural programs. In the fields of parallel, distributed and real-time computation, it is highly desirable to be able to identify data structures in a program that can be managed without using garbage collection. To this end, this paper proposes a framework of linearity analysis for the concurrent logic language Moded Flat GHC, and proves its basic property. The purpose of linearity analysis is to distinguish between fragments of data structures that may be referenced by two or more pointers and those that cannot be referenced by two or more pointers. Data structures with only one reader are amenable to compile-time garbage collection or local reuse. The proposed framework of linearity analysis is constraint-based and involves both equality and implicational constraints. It has been implemented as part of klint v2, a static analyzer for KL1 programs.
[ps]
Kazunori Ueda, Concurrent Logic/Constraint Programming: The Next
10 Years.
In The Logic Programming Paradigm: A 25-Year Perspective,
K. R. Apt, V. W. Marek, M. Truszczynski, and D. S. Warren (eds.),
Springer-Verlag, 1999, pp.53-71.
PDF file of the talk slides
Concurrent logic/constraint programming is a simple and elegant formalism of concurrency that can potentially address many important future applications including parallel, distributed, and intelligent systems. Its basic concept has been extremely stable and has allowed efficient implementations. However, its uniqueness makes this paradigm rather difficult to appreciate. Many people consider concurrent logic/constraint programming to have rather little to do with the rest of logic programming. There is certainly a fundamental difference in the view of computation, but careful study of the differences will lead to the understanding and the enhancing of the whole logic programming paradigm by an analytic approach. As a model of concurrency, concurrent logic/constraint programming has its own challenges to share with other formalisms of concurrency as well. They are: (1) a counterpart of lambda-calculus in the field of concurrency, (2) a common platform for various non-sequential forms of computing, and (3) type systems that cover both logical and physical aspects of computation.
[ps]
Yasuhiro Ajiro, Kazunori Ueda and Kenta Cho,
Error-Correcting Source Code.
In Proc. International Conference on Principles and Practice of Constraint
Programming (CP98),
Michael Maher and Jean-Francois Puget (eds.),
Lecture Notes in Computer Science 1520, Springer-Verlag, 1998, pp.40-54.
We study how constraint-based static analysis can be applied to the
automated and systematic debugging of program errors.
Strong moding and constraint-based mode analysis are turning out to play
fundamental roles in debugging concurrent logic/constraint programs as
well as in establishing the consistency of communication protocols and
in optimization. Mode analysis of Moded Flat GHC is a constraint
satisfaction problem with many simple mode constraints, and can be
solved efficiently by unification over feature graphs. We have proposed
a simple and efficient technique which, given a non-well-moded program,
diagnoses the "reasons" of inconsistency by finding minimal
inconsistent subsets of mode constraints. Since each constraint keeps
track of the symbol occurrence in the program that imposed the
constraint, a minimal subset also tells possible sources of program
errors. The technique is quite general and can be used with other
constraint-based frameworks such as strong typing.
Based on the above idea, we study the possibility of
automated debugging in the absence of mode/type declarations. The
mode constraints are usually imposed redundantly, and the
constraints that are considered correct can be used for correcting
wrong symbol occurrences found by the diagnosis. As long as bugs
are near-misses, the automated debugger can propose a rather small
number of alternatives that include the intended program. Search space
is kept small because constraints effectively prune many irrelevant
alternatives. The paper demonstrates the technique by way of
examples.
[ps]
Kazunori Ueda and Ryoji Tsuchiyama,
Optimizing KLIC Generic Objects by Static Analysis.
In Proc. 11th International Conference on Applications of Prolog,
Prolog Association of Japan, 1998, pp.27-33.
The KLIC system has achieved both high portability and extensibility by employing C as an intermediate language and featuring generic objects that allow users to define new classes of data. It is also efficient for an untyped and unmoded language with fine-grained concurrency, but its flexibility incurs runtime overhead that could be reduced by static analysis. This paper studies how constraint-based static analysis and abstract interpretation can be used to reduce dynamic data checking and to optimize loops. We applied the proposed technique to the optimization of floating-point numbers and their arrays. The optimized KL1 programs turned out to be only 34%-70% slower than the comparable C programs.
[ps]
Kenta Cho and Kazunori Ueda, Diagnosing Non-Well-Moded Concurrent Logic Programs.
In Proc. 1996 Joint International Conference and Symposium on Logic Programming (JICSLP'96), Michael Maher (ed.), The MIT Press, 1996, pp.215-229.
Strong moding and constraint-based mode analysis are expected to play
fundamental roles in debugging concurrent logic/constraint programs
as well as in establishing the consistency of
communication protocols and in optimization.
Mode analysis of Moded Flat
GHC is a constraint satisfaction problem with many simple mode
constraints, and can be solved efficiently by unification over feature
graphs. In practice, however, it is important to be able to analyze
non-well-moded programs (programs whose mode constraints are
inconsistent) and present plausible "reasons" of inconsistency
to the programmers in the absence of mode declarations.
This paper discusses the application of strong moding to systematic
and efficient static program debugging. The basic idea, which turned
out to work well at least for small programs, is to find a minimal
inconsistent subset from an inconsistent set of mode constraints and
indicate the symbol (occurrence)s in the program text that imposed
those constraints. A bug can be pinpointed better by finding more
than one overlapping minimal subset. These ideas can be readily extended
to finding multiple bugs at once. For large programs,
stratification of predicates narrows search space and produces more
intuitive explanations. Stratification plays a fundamental role in
introducing mode polymorphism as well.
Our experiments show that the
sizes of minimal subsets are small enough for bug location and do not
depend on the program size, which means that diagnosis can be done in
almost linear time.
[ps]
Kazunori Ueda, Experiences with Strong Moding in Concurrent Logic/Constraint
Programming.
In Proc. International Workshop on Parallel Symbolic
Languages and Systems (PSLS'95), Lecture Notes in Computer Science 1068,
Springer-Verlag, Berlin, April 1996, pp.134-153.
Strong moding is turning out to play fundamental roles in concurrent logic programming (or in general, concurrent constraint programming) as strong typing does but in different respects. "Principal modes" can most naturally be represented as feature graphs and can be formed by unification. We built a mode analyzer, implementing mode graphs and their operations by means of concurrent processes and streams (rather than records and pointers). This is a non-trivial programming experience with complicated process structures and has provided us with several insights into the relationship between programming with dynamic data structures and programming with dynamic process structures. The mode analyzer was then applied to the analyzer itself to study the characteristics of the mode constraints it imposed and of the form of large mode graphs. Finally, we show how our framework based on principal moding can be extended to deal with (1) random-access data structures, (2) mode polymorphism, (3) higher-order constructs, and (4) various non-Herbrand constraint systems.
[ps (slides)]
Kazunori Ueda, Strong Moding in Concurrent Logic/Constraint Programming.
Tutorial given at the
Twelfth International Conference on Logic Programming, Kanagawa,
Japan, June 1995.
[ps]
Kazunori Ueda, I/O Mode Analysis in Concurrent Logic Programming.
In Theory and Practice of Parallel Programming, Lecture Notes in
Computer Science 907, Springer-Verlag, Berlin, pp.356-368, May 1995.
This paper briefly reviews concurrent logic programming and the I/O mode system designed for the concurrent logic language Flat GHC. The mode system plays fundamental roles both in programming and implementation in almost the same way as type systems do but in different respects. It provides us with the information on how data are generated and consumed and thus the view of "data as resources". It statically detects bugs resulting from ill-formed dataflow and advocates the "programming as wiring" paradigm. Well-modedness guarantees the safety of unification, the basic operation in concurrent logic programming. Information on the numbers of access paths to data can be obtained by slightly extending the framework, which can be used for compile-time garbage collection and the destructive update of structures.
[ps]
Kazunori Ueda, Moded Flat GHC for Data-Parallel Programming.
In Proc. FGCS'94 Workshop on Parallel Logic Programming,
ICOT, Tokyo, pp.27-35, December 1994.
Concurrent logic languages have been used mainly for the (parallel) processing of rather irregular symbolic applications. However, since concurrent logic languages are essentially general-purpose, they should be applicable to problems with regular structures and their data-parallel processing as well. This paper studies the possibility of massively parallel processing in concurrent logic programming, focusing on arrays and their data-parallel processing.
[ps]
Kazunori Ueda and Masao Morita, Moded Flat GHC and Its Message-Oriented
Implementation Technique.
New Generation Computing, Vol.13, No.1, pp.3-43, November 1994.
Concurrent processes can be used both for programming computation and for programming storage. Previous implementations of Flat GHC, however, have been tuned for computation-intensive programs, and perform poorly for storage-intensive programs (such as programs implementing reconfigurable data structures using processes and streams) and demand-driven programs. This paper proposes an optimization technique for programs in which processes are almost always suspended. The technique compiles unification for data transfer into message passing. Instead of reducing the number of process switching operations, the technique optimizes the cost of each process switching operation and reduces the number of cons operations for data buffering.
The technique is based on a mode system which is powerful enough to analyze bidirectional communication and streams of streams. The mode system is based on mode constraints imposed by individual clauses rather than on global dataflow analysis, enabling separate analysis of program modules. The introduction of a mode system into Flat GHC effectively subsets Flat GHC; the resulting language is called Moded Flat GHC. Moded Flat GHC programs enjoy two important properties under certain conditions: (1) reduction of a goal clause retains the well-modedness of the clause, and (2) when execution terminates, all the variables in an initial goal clause will be bound to ground terms. Practically, the computational complexity of all-at-once mode analysis can be made almost linear with respect to the size n of the program and the complexity of the data structures used, and the complexity of separate analysis is higher only by O(log n) time. Mode analysis provides useful information for debugging as well.
Benchmark results show that the proposed technique significantly improves the performance of storage-intensive programs and demand-driven programs compared with a conventional native-code implementation. It also improves the performance of some computation-intensive programs. We expect that the proposed technique will expand the application areas of concurrent logic languages.
[ps]
Kazunori Ueda and Masao Morita, Message-Oriented Parallel Implementation of
Moded Flat GHC.
New Generation Computing, Vol.11, No.3-4, pp.323-341, July 1993.
We proposed in [Ueda and Morita 1990] a new, message-oriented implementation technique for Moded Flat GHC that compiled unification for data transfer into message passing. The technique was based on constraint-based program analysis, and significantly improved the performance of programs that used goals and streams to implement reconfigurable data structures. In this paper we discuss how the technique can be parallelized. We focus on a method for shared-memory multiprocessors, called the shared-goal method, though a different method could be used for distributed-memory multiprocessors. Unlike other parallel implementations of concurrent logic languages which we call process-oriented, the unit of parallel execution is not an individual goal but a chain of message sends caused successively by an initial message send. Parallelism comes from the existence of different chains of message sends that can be executed independently or in a pipelined manner. Mutual exclusion based on busy waiting and on message buffering controls access to individual, shared goals. Typical goals allow last-send optimization, the message-oriented counterpart of last-call optimization. We are building an experimental implementation on Sequent Symmetry. In spite of the simple scheduling currently adopted, preliminary evaluation shows good parallel speedup and good absolute performance for concurrent operations on binary process trees.
[pdf]
Kazunori Ueda, The Fifth Generation Project: Personal Perspectives.
Commun. ACM, Vol.36, No.3, pp.65-76, March 1993.
(The paper is available from the ACM Digital Library as part of a joint article with
Kazuhiro Fuchi, Robert Kowalski, Koichi Furukawa, Ken Kahn, Takashi Chikayama and Evan Tick under the title "Launching the New Era".)
This article reviews the design process of KL1, the kernel language for the Parallel Inference Machine in the Fifth Generation Computer Systems (FGCS) project. The FGCS project chose logic programming as its central principle, but the shift to concurrent logic programming started very early in the R&D of the kernel language, which provoked much discussion and criticism. The author proposed a concurrent logic language GHC in December 1984 as a basis of KL1. GHC was recognized as a stable platform with a number of justifications, and the design of KL1 started to converge. The article focuses on the history of the design of KL1, and also presents personal principles behind the language design and perspectives on the future of GHC and KL1.
[ps]
Kazunori Ueda and Takashi Chikayama, Design of the Kernel Language for the
Parallel Inference Machine.
The Computer Journal, Vol.33, No.6, pp.494-500, December 1990.
[ps]
Kazunori Ueda,
Designing a Concurrent Programming Language.
In Proceedings of an International Conference organized by the IPSJ to
Commemorate the 30th Anniversary (InfoJapan'90), Information Processing
Society of Japan, October 1990, pp.87-94.
This paper reviews the design and the evolution of a concurrent programming language Guarded Horn Clauses (GHC). GHC was born from a study of parallelism in logic programming, but turned out to be a simple and flexible concurrent programming language with a number of nice properties. We give both an abstract view of GHC computation based on the notion of transactions and a concrete view, namely an operational semantics, based on reductions. Also, we discuss in detail the properties of GHC such as its atomic operations, which have much to do with the design of GHC.
[pdf]
Kazunori Ueda, Parallelism in Logic Programming.
In Information Processing 89, Ritter, G.X. (ed.), North-Holland,
pp.957-964, August 1989.
[ps]
Kazunori Ueda, Guarded Horn Clauses: A Parallel Logic Programming
Language with the Concept of a Guard.
ICOT Technical Report TR-208, Institute for New Generation Computer
Technology (ICOT), Tokyo, 1986.
Revised version in Programming of Future Generation Computers,
Nivat, M. and Fuchi, K. (eds.), North-Holland, Amsterdam, pp.441-456, 1988.
[ps]
Kazunori Ueda, Theory and Practice of Concurrent Systems: The Role of Kernel
Language in the FGCS Project.
In Proc. Int. Conf. on Fifth Generation Computer Systems 1988
(FGCS'88), ICOT, Tokyo, November 1988, pp.165-166.
[ps]
Kazunori Ueda and Koichi Furukawa, Transformation Rules for GHC Programs.
In Proc. Int. Conf. on Fifth Generation Computer Systems 1988
(FGCS'88), ICOT, Tokyo, November 1988, pp.582-591.
Transformation rules for (Flat) GHC programs are presented, which refine the previous rules proposed by one of the authors (Furukawa et al. 1987). The rules are based on unfolding/folding and are novel in that they are stated in terms of idempotent substitutions with preferred directions of bindings. They are more general than the old rules in that no mode system is assumed and that the rule of folding is included. The presentation of the rules suggests that idempotent substitutions with preferred directions of bindings are an appropriate tool for modeling information in (concurrent) logic programming. A semantic model is given which associates a multiset of goals with the set of possible finite sequences of transactions (via substitutions) with the multiset. A transformation preserves the set of transaction sequences that are made without the risk of the failure of unification. The model also deals with anomalous behavior such as the failure of unification and deadlock, so it can be shown with the same model that the transformation cannot introduce those anomalies. Applications of the transformation technique include the fusion of communicating parallel processes. Some familiarity with GHC is assumed.
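The notion of an idempotent substitution that the rules rest on can be made concrete; a toy rendering (the term encoding below is hypothetical, not the paper's formalism) is:

```python
def apply_subst(term, subst):
    """Apply a substitution to a term; terms are variables (plain
    strings) or (functor, list-of-argument-terms) tuples."""
    if isinstance(term, str):
        return subst.get(term, term)
    functor, args = term
    return (functor, [apply_subst(a, subst) for a in args])

def is_idempotent(subst):
    """A substitution theta is idempotent iff applying it twice equals
    applying it once, i.e. applying theta to each of its own right-hand
    sides changes nothing (no bound variable recurs on the right)."""
    return all(apply_subst(t, subst) == t for t in subst.values())

s1 = {"X": ("f", ["Y"])}   # idempotent: Y is not in the domain
s2 = {"X": ("f", ["X"])}   # not idempotent: X recurs in f(X)
print(is_idempotent(s1), is_idempotent(s2))
```

In the paper's setting, each such binding additionally carries a preferred direction (which side supplies the value), which is what lets substitutions model the flow of information between concurrent goals.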
[ps]
Kazunori Ueda, Making Exhaustive Search Programs Deterministic, Part II.
In Proc. Fourth Int. Conf. on Logic Programming, The MIT Press,
Cambridge, Mass., U.S.A., pp.356-375, May 1987.
[ps]
Kazunori Ueda, Making Exhaustive Search Programs Deterministic.
New Generation Computing, Vol.5, No.1, pp.29-44, 1987.
Kazunori Ueda and Takashi Chikayama, Concurrent Prolog Compiler on Top of Prolog.
In Proc. 1985 Symp. on Logic Programming,
IEEE Computer Society, pp.119-126, July 1985.
[ps]
Kazunori Ueda, Guarded Horn Clauses.
ICOT Technical Report TR-103,
Institute for New Generation Computer Technology (ICOT), Tokyo, 1985.
Revised version in
Proc. Logic Programming '85,
Wada, E. (ed.), Lecture
Notes in Computer Science 221, Springer-Verlag, Berlin Heidelberg New York
Tokyo, 1986, pp.168-179.
Also in Concurrent Prolog: Collected Papers,
Shapiro, E.Y. (ed.), The MIT Press, Cambridge, 1987, pp.140-156.
Kazunori Ueda, Guarded Horn Clauses.
[pdf (1013898 bytes)]
Doctoral Thesis, The University of Tokyo, 1986.
This thesis introduces the programming language Guarded Horn Clauses, abbreviated as GHC. Guarded Horn Clauses grew out of an examination of existing logic programming languages and of logic programming in general, with special attention paid to parallelism.
The main feature of GHC is its extreme simplicity compared with other parallel programming languages. GHC is a restriction of a resolution-based theorem prover for Horn-clause sentences. The restriction has two aspects: one is the restriction on the data flow caused by unification, and the other is the introduction of choice nondeterminism. The former is essential for a general-purpose language, and it also provides GHC with a synchronization primitive. The latter is required by the intended applications, which include systems interacting with the outside world. What is characteristic of GHC is that all these restrictions are imposed through the semantics given to the sole additional syntactic construct, the guard.
Although Guarded Horn Clauses belongs to the family of logic programming languages, it is closely related to other formalisms, including dataflow languages, Communicating Sequential Processes, and functional languages for multiprocessing. Except for the lack of higher-order facilities, GHC can be viewed as a generalization of these frameworks. The simplicity and generality of GHC should make it suitable as a standard not only for parallel logic programming languages but for parallel programming languages in general. Moreover, it is simple enough to be regarded as a computation model as well as a programming language.
Attention was paid throughout the design of GHC to the possibility of efficient implementation. We showed that stream merging and distribution, which are expected to be used heavily, can be implemented with the same time complexity as many-to-one communication in procedural languages. Furthermore, we made available an efficient compiler-based implementation of a subset of GHC on top of Prolog.
GHC deliberately gives up completeness as a theorem prover; this is a design decision, not a compromise. Nevertheless, GHC can be used to implement exhaustive solution search for Horn-clause programs efficiently. We showed how to automatically compile a Horn-clause program for exhaustive search into a GHC program.
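The stream merging mentioned above is a standard GHC idiom; a minimal sketch (my own illustration, not code taken from the thesis) looks like this:

```
% merge(Xs, Ys, Os): nondeterministically merge streams Xs and Ys into Os.
% A clause is selected only when its head matches the caller's arguments
% without instantiating caller variables, so each recursive clause
% suspends until the corresponding input stream is instantiated.
merge([X|Xs], Ys, Os) :- true | Os = [X|Os1], merge(Xs, Ys, Os1).
merge(Xs, [Y|Ys], Os) :- true | Os = [Y|Os1], merge(Xs, Ys, Os1).
merge([],     Ys, Os) :- true | Os = Ys.
merge(Xs,     [], Os) :- true | Os = Xs.
```

The empty guard `true` makes the synchronization discipline visible: all output bindings are made in clause bodies, never in guards, which is exactly the restriction on unification-induced data flow that the thesis describes.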
Herbert Kuchen and Kazunori Ueda (eds.), Functional and Logic Programming---5th International Symposium on Functional and Logic Programming, FLOPS 2001. Lecture Notes in Computer Science 2024, Springer-Verlag, 2001.
R. K. Shyamasundar and Kazunori Ueda (eds.), Advances in Computing Science---ASIAN'97. Lecture Notes in Computer Science 1345, Springer-Verlag, 1997.
Vijay Saraswat and Kazunori Ueda (eds.), Logic Programming: Proceedings of the 1991 International Symposium. The MIT Press, Cambridge, Mass., U.S.A., October 1991.
[pdf] Kazunori Ueda and Norio Kato, Programming with Logical Links: Design of the LMNtal language. In Proc. Third Asian Workshop on Programming Languages and Systems (APLAS 2002), pp.115-126, 2002.
We propose LMNtal, a simple language model based on the rewriting of hierarchical graphs that use logical variables to represent links. The two major goals of the model are (i) to unify various computational models based on multiset rewriting and (ii) to serve as the basis of a truly general-purpose language covering various platforms ranging from wide-area to embedded computation. Another important contribution of the model is that it greatly facilitates programming with dynamic data structures.