From clt@dec-lite.stanford.edu Mon Jul 29 08:15:23 1991
Date: Fri, 26 Jul 91 15:46:13 -0700
From: Carolyn Talcott
To: leavens@judy.cs.iastate.edu
Subject: abstracts
Reply-To: clt@sail.stanford.edu
==============================================================================
Abstracts of publications on programming language theory and formal reasoning
For information contact
Carolyn Talcott
Computer Science Department
Stanford University
clt@sail.stanford.edu
415-723-0936
============================================================================
[reverse chronological order]
@techreport{nagayama-talcott-91bmp,
author = {Nagayama, Misao and Talcott, Carolyn},
title = {An NQTHM Mechanization of
``An Exercise in the Verification of Multi-Process Programs''},
institution = {Computer Science Department, Stanford University},
number = {to appear},
year = 1991
}
This report presents a formal verification of the local correctness of
a mutex algorithm using the Boyer-Moore theorem prover. The
formalization follows closely an informal proof of Manna and Pnueli.
The proof method of Manna and Pnueli is to first extract from the
program a set of states and an induced transition system. One then
proves suitable invariants. There are two variants of the proof. In
the first (atomic) variant, compound tests involving quantification
over a finite set are viewed as atomic operations. In the second
(molecular) variant, this assumption is removed, making the details of
the transitions and proof somewhat more complicated.
The original Manna-Pnueli proof was formulated in terms of finite
sets. This led to a concise and elegant informal proof, but one
that is not easy to mechanize in the Boyer-Moore logic. In the
mechanized version we use a dual isomorphic representation of program
states based on finite sequences. Our approach was to outline the
formal proof of each invariant, making explicit the case analyses,
assumptions and properties of operations used. The outline served as
our guide in developing the formal proof. The resulting sequence of
events follows the informal plan quite closely. The main difficulties
encountered were in discovering the precise form of the lemmas and
hints necessary to guide the theorem prover.
The complete formal proofs (input to the Boyer-Moore prover) appear as
appendices. Some comments on formalization techniques, difficulties,
and alternatives are included as comments in the theorem prover input.
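
[Illustrative sketch, not part of the abstract: as a rough picture of the proof
method described above (extract a transition system from the program, then
check invariants over its reachable states), here is a small Python sketch.
The protocol, state encoding, and names are invented for illustration; they are
not the mutex algorithm or the NQTHM events of the report, and a mechanized
proof establishes the invariant deductively rather than by enumeration.]

# Toy sketch only (assumed encoding): reachable-state search over a
# strict-alternation mutex, checking the mutual exclusion invariant.

def successors(state):
    pc0, pc1, turn = state
    pcs = [pc0, pc1]
    nexts = []
    for i in (0, 1):
        if pcs[i] == 'idle' and turn == i:      # enter the critical section
            new = list(pcs); new[i] = 'crit'
            nexts.append((new[0], new[1], turn))
        if pcs[i] == 'crit':                    # leave and pass the turn
            new = list(pcs); new[i] = 'idle'
            nexts.append((new[0], new[1], 1 - i))
    return nexts

def mutual_exclusion(state):
    return not (state[0] == 'crit' and state[1] == 'crit')

reachable, frontier = set(), [('idle', 'idle', 0)]
while frontier:
    s = frontier.pop()
    if s in reachable:
        continue
    reachable.add(s)
    assert mutual_exclusion(s), s
    frontier.extend(successors(s))
print(len(reachable), "reachable states, mutual exclusion holds in all")
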
------------
@inproceedings{mason-talcott-91pepm,
author={Mason, I. A. and Talcott, C. L.},
title={Program Transformation for Configuring Components},
year={1991},
booktitle = {Symposium on Partial Evaluation and Semantics-Based Program
Manipulation, PEPM'91},
publisher = acm,
pages = {297-308}
}
In this paper we report progress in development of methods for reasoning about
the equivalence of objects with memory and the use of these methods to
describe sound operations on such objects, in terms of formal program
transformations. We also formalize three different aspects of objects: their
specification, their behavior, and their canonical representative. Formal
connections among these aspects provide methods for optimization and reasoning
about systems of objects. To illustrate these ideas we give a formal
derivation of an optimized specialized window editor from generic
specifications of its components. A new result in this paper enables one to
make use of symbolic evaluation (with respect to a set of constraints) to
establish the equivalence of objects. This form of evaluation is not only
mechanizable, it also generalizes the conditions under which partial
evaluation usually takes place.
--------------
@inproceedings{talcott-91amast,
author = {Talcott, Carolyn L.},
title = {Towards a Theory of Binding Structures},
booktitle = {Second International Conference on Algebraic Methodology
and Software Technology, {AMAST}},
year = 1991,
clt-note = {draft full version available, trees and names part in 90fest}
}
In this paper we present a theory of binding structures. Binding structures
enrich traditional abstract data types by providing support for representing
binding mechanisms and structures with holes. They are intended to facilitate
the representation and manipulation of symbolic structures such as programs,
logical formulae, derivations, specifications and specification modules. The
basic ingredients of this theory are binding operators, free variables, bound
variables, and holes. A binding operator has a finite set of binding indices
(binders), a finite set of argument selectors, and a binding relation
consisting of a set of pairs with first component a binder and second
component an argument selector. The binders allow for multiple binding, the
argument selectors specify the arity of the operator (keyword arguments), and
the binding relation specifies which binders are bound in which arguments. By
explicitly distinguishing between free and bound variables we can represent
externally bound variables. There are adjustment operations on binding
structures used to preserve external binding relations when a binding
structure is moved relative to its external binding context. In addition
there are three fundamental replacement operations on binding structures, one
for each of the three sorts of atomic structures. Substitution replaces free
variables and filling replaces holes. Both use a map from identifiers to
binding structures. Unbinding replaces externally bound variables using a map
from binding points to binding structures. These three replacement
operations, and many more, can be expressed as a simple traversal
parameterized by a function to be applied at atoms, and an operation used to
adjust the function at binding operation nodes. Compositions of the
replacement operations satisfy equations generalizing those for substitution
in a world without binding or holes. We present two alternative
representations of binding structures, one based on labeled trees and one
based on abstract syntax trees using variable names to establish binding
relations. Labeled trees make explicit the intuition about the geometry of
binding structures. The conversion map from abstract syntax trees with
binding to binding structures shows how the operations of higher-order
abstract syntax have been captured within a first-order framework.
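
[Illustrative sketch, not part of the abstract: as a concrete, entirely
hypothetical rendering of the ingredients named above, the following Python
sketch encodes a binding operator as a record of binders, argument selectors,
and a binding relation; the field names and example operators are my own, not
the paper's notation.]

# Hypothetical encoding (illustration only): a binding operator carries binding
# indices (binders), argument selectors, and a binding relation recording which
# binders are bound in which arguments.
from dataclasses import dataclass

@dataclass(frozen=True)
class BindingOp:
    name: str
    binders: frozenset      # binding indices
    selectors: frozenset    # keyword argument selectors (the arity)
    binding: frozenset      # pairs (binder, selector)

# The lambda operator: one binder 'x', one argument 'body', 'x' bound in 'body'.
LAM = BindingOp('lam', frozenset({'x'}), frozenset({'body'}),
                frozenset({('x', 'body')}))

# A let-like operator: 'x' is bound in 'body' but not in 'rhs'.
LET = BindingOp('let', frozenset({'x'}), frozenset({'rhs', 'body'}),
                frozenset({('x', 'body')}))

def bound_in(op, binder, selector):
    """Is this binder bound in this argument of the operator?"""
    return (binder, selector) in op.binding

assert bound_in(LET, 'x', 'body') and not bound_in(LET, 'x', 'rhs')
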
------------------------
@misc{mason-talcott-91ptcp,
author={Mason, I. A. and Talcott, C. L.},
year={1991},
title={Program Transformation via Constraint Propagation},
clt-note = {to appear},
}
In this paper we describe progress towards a theory of program
development by systematic refinement beginning with a clean,
functional program thought of as a specification. Transformations
include reuse of storage, and re-representation of abstract data. The
transformation rules are based on a theory of constrained equivalence
for functional languages with imperative features (i.e.
Lisp/Scheme/ML). Constrained equivalence is a relation between sets
of constraints (on computation contexts) and pairs of expressions.
The interpretation is that in all contexts satisfying the constraints,
evaluation of the expressions is either undefined or produces the same
results and has the same effect on memory (modulo garbage).
Constrained equivalence naturally allows reasoning by cases and
permits use of a variety of induction principles. An inference system
for constrained equivalence that is complete for expressions not
involving recursively defined functions has been developed by the
authors. One difficulty with constrained equivalence is that it is
not a congruence. One cannot, in general, place expressions
equivalent under some non-empty set of constraints into an arbitrary
program context and preserve equivalence. For this and other reasons,
we have developed a formal system for propagating constraints into
program contexts. In this system, it is possible to place expressions
equivalent under some non-empty set of constraints into a program
context and preserve equivalence provided that the constraints
propagate into that context. Constrained equivalence and constraint
propagation provide a basis for systematic development of program
transformation rules. In this paper we present three main rules:
subgoal induction, recursion induction, and the peephole rule. Use of
these rules is illustrated by the transformation of a simple, but
non-trivial, rewriting program, known as the Boyer Benchmark.
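
[Illustrative sketch, not part of the abstract: a loose Python illustration of
the idea of constrained equivalence described above, namely that the relation
holds only over computation contexts (here simplified to states) satisfying the
constraints. The encoding is invented and ignores memory effects and garbage.]

# Loose illustration (invented encoding): equivalence checked only over the
# states that satisfy the constraint, not over all states.

def equivalent_under(constraint, states, e0, e1):
    return all(e0(s) == e1(s) for s in states if constraint(s))

states = [{'x': 0}, {'x': 1}, {'x': 7}]
nonzero = lambda s: s['x'] != 0                  # the constraint
e0 = lambda s: 1 if s['x'] != 0 else 2
e1 = lambda s: 1

assert not equivalent_under(lambda s: True, states, e0, e1)  # not unconditionally
assert equivalent_under(nonzero, states, e0, e1)             # but under the constraint
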
------------
@article{mason-talcott-93tcs,
author={Mason, I. A. and Talcott, C. L.},
year={1993},
title={Inferring the Equivalence of Functional Programs That Mutate Data},
journal={Theoretical Computer Science},
volume={103},
clt-note={revision and extension of STAN-CS-89-1250},
}
In this paper we study the constrained equivalence of programs with effects.
In particular, we present a formal system for deriving such equivalences.
Constrained equivalence is defined via a model theoretic characterization of
operational, or observational, equivalence called strong isomorphism. Two
expressions are strongly isomorphic if in all memory states they return the
same value, and have the same effect on memory (modulo the production of
garbage). Strong isomorphism implies operational equivalence. The converse
is true for first-order languages; it is false for full higher-order
languages. Since strong isomorphism is defined by quantifying over memory
states, rather than program contexts, it is a simple matter to restrict this
equivalence to those memory states which satisfy a set of constraints. It is
for this reason that strong isomorphism is a useful relation, even in the
higher-order case.
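
[Illustrative sketch, not part of the abstract: the following Python fragment
is a crude approximation of the strong isomorphism definition above.
Expressions are modelled as functions from a memory state to a (value, memory)
pair and compared over a finite sample of states; the 'modulo garbage'
refinement is ignored, and all names are my own.]

# Crude approximation (invented, garbage ignored): expressions as functions
# from a memory state to a (value, memory) pair, compared on sample states.

def same_value_and_memory(samples, e0, e1):
    return all(e0(dict(m)) == e1(dict(m)) for m in samples)

def e0(mem):
    mem['a'] = 0
    mem['a'] = 0        # writing the cell twice ...
    return (mem['a'], mem)

def e1(mem):
    mem['a'] = 0        # ... or once: same value, same resulting memory
    return (mem['a'], mem)

print(same_value_and_memory([{'a': 1}, {'a': 2, 'b': 3}], e0, e1))   # True
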
The formal system we present defines a single-conclusion consequence relation
between finite sets of constraints and assertions. The assertions we consider
are of the following two forms: (i) an expression fails to return a value,
(ii) two expressions are strongly isomorphic. In this paper we focus on the
first-order fragment of a Scheme- or Lisp-like language, with data operations
`atom', `eq', `car', `cdr', `cons', `setcar', `setcdr', the control primitives
`let' and `if', and recursive definition of function symbols. A constraint is
an atomic or negated atomic formula in the first-order language consisting of
equality, the unary function symbols `car' and `cdr', the unary relation
`cell', and constants from the set of atoms. Constraints have the natural
first-order interpretation.
Although the formal system is computationally adequate, even for expressions
containing recursively defined functions, it is inadequate for proving
properties of recursively defined functions. We show how to enrich the formal
system by addition of induction principles. To illustrate the use of the
formal system, we give three non-trivial examples of constrained equivalence
assertions of well known list-processing programs. We also establish several
metatheoretic properties of constrained equivalence and the formal system,
including soundness, completeness, and a comparison of the equivalence
relations on various fragments.
------------
@article{mason-talcott-91fp,
author={Mason, I. A. and Talcott, C. L.},
year={1991},
title={Equivalence in Functional Languages with Effects},
journal={Journal of Functional Programming},
volume={to appear},
clt-note={extended version of mason-talcott-89icalp,
revised version sent dec90}
}
Traditionally the view has been that direct expression of control and store
mechanisms and clear mathematical semantics are incompatible requirements.
This paper shows that adding objects with memory to the call-by-value lambda
calculus results in a language with a rich equational theory, satisfying many
of the usual laws. Combined with other recent work this provides evidence
that expressive, mathematically clean programming languages are indeed
possible.
This paper presents a study of operational approximation and equivalence in
the presence of function abstractions and objects with memory. A syntactic
representation of memory is defined and used as the basis for developing tools
for reasoning about programs that operate on memory. Three equivalent
definitions of operational approximation and equivalence are given and basic
laws are presented. Two of the definitions are simple variants of the
standard definition (Plotkin). The third definition is a weak form of
extensionality and is the basis for developing methods of proving approximation
and equivalence. In particular a simple semantic criterion called strong
isomorphism is shown to imply operational equivalence using this form of
extensionality. As a consequence the laws of strong isomorphism are valid for
operational equivalence. For example, if one expression reduces to another
then the two expressions are operationally equivalent. Laws of strong
isomorphism have been studied in depth for the first-order case, and there are
well-developed inference systems for proving strong
isomorphism. Several other principles for establishing operational
equivalence are provided.
A notion of recursion operator is defined and two examples are given. In a
purely functional language recursion operators use self-application to
implement recursion. When memory is introduced recursion operators may also
use memory loops to implement recursion. Using the weak form of
extensionality it is shown that recursion operators compute the least fixed
point (with respect to operational approximation) of functionals.
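
[Illustrative sketch, not part of the abstract: to make the two kinds of
recursion operator concrete, here is a small Python sketch of my own, not the
paper's definitions; one operator built from self-application in the purely
functional style, and one that ties a memory loop by assignment.]

# Two flavours of recursion operator, rendered in Python (invented renderings).

# (1) Self-application, a call-by-value fixed-point combinator: no memory used.
def rec_selfapp(f):
    return (lambda x: f(lambda v: x(x)(v)))(lambda x: f(lambda v: x(x)(v)))

# (2) A memory loop: allocate a cell, then tie the knot by assignment.
def rec_memloop(f):
    cell = [None]                        # the cell holding the recursive function
    cell[0] = f(lambda v: cell[0](v))    # backpatch the loop through memory
    return cell[0]

fact = lambda self: lambda n: 1 if n == 0 else n * self(n - 1)
assert rec_selfapp(fact)(5) == 120
assert rec_memloop(fact)(5) == 120
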
The weak form of extensionality is used to derive a simulation induction
principle for proving equivalence of operations with local memory. To
illustrate the application of this principle we classify several presentations
of streams as such operations and prove properties of operations on such
stream presentations.
The notions of operational equivalence and strong isomorphism in various
fragments of the language are related. In particular results are presented
that essentially characterize the difference between operational equivalence
and strong isomorphism in the presence of higher-order objects.
------------
@techreport{qlisp-90primer,
author = {Mason, I. A. and Pehoushek, J. D. and Talcott, C. and Weening, J.},
title = {Programming in {QLisp}},
institution = {Stanford University, Computer Science Department},
number = {STAN-CS-90-1340},
year = 1990}
Qlisp is an extension of Common Lisp, to support parallel programming.
It was initially designed by John McCarthy and Richard Gabriel in
1984. Since then it has been under development both at Stanford
University and Lucid, Inc. and has been implemented on several
commercial shared-memory parallel computers. Qlisp is a queue-based,
shared-memory, multi-processing language. This report is a tutorial
introduction to the Stanford dialect of Qlisp.
------------
@inproceedings{mason-talcott-90plilp,
author={Mason, I. A. and Talcott, C. L.},
year={1990},
title={Reasoning About Programs with Effects},
booktitle=
{Programming Language Implementation and Logic Programming, PLILP'90},
series = {Lecture Notes in Computer Science},
volume = 456,
publisher = {Springer-Verlag},
page = {189--203}
}
In this paper we give several examples that illustrate our techniques for
reasoning about programs with effects. One technique is to define
correspondences between effect-free programs and programs with effect and
transfer results about effect-free programs to corresponding programs with
effects. We give two examples of this technique. The first involves the
correspondence between loops and tail-recursive programs, replacing assignment
to loop variables by $\qlet$ binding. The second example involves a
correspondence between mutable objects and streams, replacing updating by
explicitly passing the store. The correspondences are left informal here, but
in practice translators can be defined to insure correct correspondences (and
avoid the work of hand translation). A second technique is to lift principles
for reasoning about effect-free programs to those with effects, typically by
finding appropriate abstractions and restrictions. We present three examples
here: the use of structure copying to insure non-interference among
operations; the use of structural induction principles for establishing
program equivalence; and the use of abstract objects to encapsulate effects
and potential interference in a controlled way.
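
[Illustrative sketch, not part of the abstract: a rough Python rendering of the
two correspondences mentioned above, with examples of my own choosing; a loop
that assigns to a loop variable versus its tail-recursive counterpart, and a
mutable counter object versus the same behaviour with the store passed
explicitly.]

# First correspondence: a loop assigning to its loop variable versus the
# corresponding tail-recursive program that rebinds it.

def sum_loop(xs):
    acc = 0
    for x in xs:                  # assignment to the loop variable
        acc = acc + x
    return acc

def sum_tailrec(xs, acc=0):
    if not xs:
        return acc
    return sum_tailrec(xs[1:], acc + xs[0])   # rebinding instead of assignment

assert sum_loop([1, 2, 3, 4]) == sum_tailrec([1, 2, 3, 4]) == 10

# Second correspondence: a mutable counter object versus the same behaviour
# with the store passed (and returned) explicitly.

class Counter:
    def __init__(self): self.n = 0
    def tick(self): self.n += 1; return self.n

def tick(store):                  # store-passing version: (value, new store)
    return (store + 1, store + 1)

c = Counter()
assert c.tick() == 1 and c.tick() == 2
s = 0
v1, s = tick(s)
v2, s = tick(s)
assert (v1, v2) == (1, 2)
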
---------------
@inproceedings{talcott-weyhrauch-90ecai,
author={Talcott, C. L. and Weyhrauch, R. W.},
year={1990},
title={Towards a Theory of Mechanized Reasoning I:
FOL contexts, an extensional view},
booktitle={European Conference on Artificial Intelligence},
clt-note = {full version in preparation}
}
The FOL system was designed to test some ideas about how to build thinking
individuals by mechanizing the activity of reasoning and providing data
structures powerful enough to represent both general information and
information about reasoning itself. The logic of FOL generalizes the standard
(first order) notions of language and model to admit partial knowledge and
provides a natural basis for mechanization of mixed syntactic and semantic
reasoning. The logic of FOL provides a new way to axiomatize theories which
is particularly amenable as a basis for presenting mechanized theories. It is
more expressive than using sets of formulas in the sense that it permits more
direct and natural formal presentations of informal theories ---both
mathematical and symbolic. Formalizations in FOL reflect the intensional
aspects of theories and the FOL architecture is intended as a natural
framework for mechanization (implementation).
New notions from the point of view of logic are: simulation structures, FOL
contexts, FOL systems, and inference rules as consistency and constraint
preserving maps. Simulation structures are data structures that are
presentations of models. They function as an alternative to the use of axioms
for providing information about the intended models of a theory. They contain
explicit data objects that represent both individuals of the domain and rules
for computing functions and relations. Simulation structures also provide
explicit resource limited mechanisms for applying the computation rules. In
this way, even if we do not have enough resources to determine the value of a
function or the truth of a proposition, we will always know if we have an
answer and, if not, we will have (as a data structure) a computation state
that can be restarted if more resources become available. Simulation
structures include classical models (using infinite resources) but they also
include richer structures that allow us to formalize the intermixing of
syntactic and semantic reasoning in a very general way. FOL contexts are data
structures that are presentations of theories. A context contains a language,
a simulation structure, a connection between symbols of the language and
objects of the simulation structure, and a set of facts expressed in the
language. A model of an FOL context is a classical model that extends the
simulation structure, satisfies the facts, and preserves the connection
between symbols and objects. FOL systems are collections of FOL contexts
together with some fixed relations between the component contexts called
constraints. The mechanism for reasoning is to apply rules that preserve
these constraints (generally including the basic constraint that each theory
remain consistent). FOL systems provide natural structures for formalizing
simple theories (mathematical and common sense), meta theoretic reasoning,
states of knowledge that change over time, the use of more than one modal
operator at a time, non-monotonic principles of reasoning, etc.
In this paper we focus on simulation structures and FOL contexts.
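
[Illustrative sketch, not part of the abstract: the following Python fragment
is only my reading of the description above, not the FOL system's actual data
structures; a simulation structure with explicit individuals, computation
rules, and resource-limited application, and a context packaging a language,
a simulation structure, a symbol-to-object connection, and facts.]

# Rough sketch with invented field names; not the FOL system's representation.
from dataclasses import dataclass

@dataclass
class SimulationStructure:
    individuals: dict    # explicit data objects representing domain individuals
    functions: dict      # symbol -> computation rule (a Python callable)
    relations: dict      # symbol -> computation rule returning a truth value

    def apply(self, symbol, args, fuel=1000):
        """Resource-limited application: an answer, or a restartable state."""
        if fuel <= 0:
            return ('out-of-resources', (symbol, args))
        return ('value', self.functions[symbol](*args))

@dataclass
class FOLContext:
    language: set        # symbols of the language
    structure: SimulationStructure
    attachment: dict     # connection between symbols and objects of the structure
    facts: list          # facts expressed in the language (here, plain strings)

nat = SimulationStructure(
    individuals={'zero': 0},
    functions={'succ': lambda n: n + 1, 'plus': lambda a, b: a + b},
    relations={'less': lambda a, b: a < b},
)
ctx = FOLContext(
    language={'zero', 'succ', 'plus', 'less'},
    structure=nat,
    attachment={'zero': 0},
    facts=['forall x. less(x, succ(x))'],
)
print(nat.apply('plus', (1, 2)))    # ('value', 3)
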
---------------
@inproceedings{galbiati-talcott-90ctrs,
author = {Galbiati, L. and Talcott, C.},
title = {A Simplifier for Untyped Lambda Expressions},
booktitle = {CTRS90},
note = {to appear as {LNCS} volume, full version
Stanford University Computer Science Department Report STAN-CS-90-1337},
year = 1990
}
Many applicative programming languages are based on the call-by-value lambda
calculus. For these languages tools such as compilers, partial evaluators,
and other transformation systems often make use of rewriting systems that
incorporate some form of beta reduction. For purposes of automatic rewriting
it is important to develop extensions of beta-value reduction and to develop
methods for guaranteeing termination. This paper describes an extension of
beta-value reduction and a method based on abstract interpretation for
determining terminating rewriting strategies. The main innovations are (1)
the use of rearrangement rules in combination with beta-value conversion to
increase the power of the rewriting system and (2) the definition of a
non-standard interpretation of expressions, the generates relation, as a basis
for developing terminating strategies.
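
[Illustrative sketch, not part of the abstract: a minimal Python illustration
of the beta-value rule that the rewriting system extends. The term
representation is invented, and substitution is not capture-avoiding, which
suffices for the closed example below.]

# Toy sketch: (lam x. e) v -> e[v/x], only when the argument is already a value.

def is_value(t):
    return isinstance(t, str) or t[0] == 'lam'     # variables and abstractions

def subst(t, x, v):
    if isinstance(t, str):
        return v if t == x else t
    if t[0] == 'lam':
        _, y, body = t
        return t if y == x else ('lam', y, subst(body, x, v))
    _, f, a = t
    return ('app', subst(f, x, v), subst(a, x, v))

def beta_value_step(t):
    """Reduce a top-level beta-value redex, if there is one."""
    if not isinstance(t, str) and t[0] == 'app':
        f, a = t[1], t[2]
        if not isinstance(f, str) and f[0] == 'lam' and is_value(a):
            return subst(f[2], f[1], a)
    return t

identity = ('lam', 'x', 'x')
print(beta_value_step(('app', identity, ('lam', 'y', 'y'))))   # ('lam', 'y', 'y')
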
------------
@inproceedings{talcott-90disco,
author={Talcott, C. L.},
year=1990,
title = {A Theory for Program and Data Specification},
booktitle =
{Design and Implementation of Symbolic Computation Systems, DISCO'90},
series = {Lecture Notes in Computer Science},
volume = 429,
publisher = {Springer-Verlag},
page = {91--100},
clt-note = {full version submitted to tcs nov90}
}
This paper represents the initial stage of a project to develop a
wide spectrum formalism which will support not only reasoning about program
equivalence, but also specification of programs and data types, reasoning
about properties of computations, operations on programs, and operations on
program specifications. In this initial stage a two layered theory of
program equivalence and class membership has been developed. The lower layer
is a theory of program equivalence and definedness. Program primitives that
can be treated within this theory include functional application and
abstraction, conditional, numbers, pairing, and continuation capture and
resumption. The upper layer is the theory of class membership with a
general comprehension principle for defining classes. Many standard class
constructions and data-types can be represented naturally including
algebraic abstract data types, list-, record-, and function- type
constructions. In addition classes can be constructed satisfying systems of
equations such as [S = A x G; G = B -> S] which are useful in treating
programming concepts such as streams and object behaviors. Coroutines can
also be classified within this theory.
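
[Illustrative sketch, not part of the abstract: a rough rendering of a class
satisfying the equations [S = A x G; G = B -> S] mentioned above, written as a
Python sketch of my own; an object behaviour pairs an output in A with a
function that maps the next input in B to the next behaviour.]

# Rough rendering of S = A x G and G = B -> S as a behaviour.

def counter_behaviour(n):
    """S = int x (int -> S): emit the current count, accept an increment."""
    return (n, lambda k: counter_behaviour(n + k))

out0, next0 = counter_behaviour(0)
out1, next1 = next0(5)
out2, _ = next1(2)
assert (out0, out1, out2) == (0, 5, 7)
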
The theory presented here, IOCC (Impredicative theory of Operations, Control
abstractions, and Classes) is an essentially first-order two-sorted theory
of individuals and classes. The general idea of a two-sorted theory of
operations and classes is based on Feferman's theories for formalizing
constructive mathematics. The theory of equivalence is based on work on
program equivalence and is essentially the lambda-c theory of Moggi.
For simplicity we treat a specific language (choice of primitive constants).
The methods are quite general and work for a wide range of choices. Models
of the full theory can be uniformly constructed from models of the lower
layer (the theory of program equivalence) using methods of Feferman. We
have developed methods for constructing models of the lower layer from
models of partial algebraic theories we call computation theories. This
builds on work in denotational semantics and on work of Broy, Wirsing and
others in algebraic specification of programming languages.
------------
@inproceedings{bronstein-talcott-89fmvlsi,
author = {Bronstein, Alexandre and Carolyn Talcott},
title ={Formal Verification of Pipelines Based on String-Functional Semantics},
booktitle = {IFIP 1989 workshop on applied formal methods for correct
VLSI design},
year = 1989
}
The functional style has been gaining popularity in the field of hardware
specification, and is especially well suited for reasoning about pipelining.
Within precise model-theoretic semantics based on string functions, it is
possible to build a sound verification methodology, and to formally model
several notions of correctness for pipelined designs. On the applied side, we
have implemented this theory inside the Boyer-Moore theorem prover, in a form
general enough to prove correctness properties for standard as well as
pipelined synchronous circuits, including in the latter category, the
Saxe-Leiserson retimed correlator, a pipelined ripple adder, and an abstract
pipelined CPU. We present both the theory and the applications here.
------------
@inproceedings{mason-talcott-89icalp,
author={Mason, I. A. and Talcott, C. L.},
year={1989},
title={Programming, Transforming, and Proving with Function Abstractions
and Memories},
booktitle={Proceedings of the 16th EATCS Colloquium on Automata, Languages,
and Programming, Stresa},
series = {Lecture Notes in Computer Science},
volume = 372,
publisher = {Springer-Verlag},
page = {574--588},
clt-note = {extended version to appear in Journal of Functional Programming
under title `Equivalence in Functional Languages with Effects'}
}
Notions of program equivalence are fundamental to the process of program
specification and transformation. This paper presents a study of operational
approximation and equivalence in the presence of function abstractions and
objects with memory. A syntactic representation of memory is defined and used
as the basis for developing tools for reasoning about programs that operate on
memory. Three equivalent definitions of operational approximation and
equivalence are given and basic laws are presented.
------------
@inproceedings{bronstein-talcott-89ccube,
author = {Bronstein, Alexandre and Carolyn Talcott},
title = {Formal Verification of Synchronous Circuits Based on
String-Functional Semantics: The 7 Paillet Circuits in Boyer-Moore},
booktitle = {C-Cube Workshop on Automatic Verification Methods for
Finite State Systems, Grenoble, France, June 1989},
year = 1989
}
Since the beginning of time, the semantics of choice for synchronous circuits
has been the finite state machine (FSM). Years of research on FSMs have
provided many tools for the design and verification of synchronous hardware.
But from a mathematical manipulation point of view, FSMs have several
drawbacks, and a new hardware specification style based on the functional
approach has gained ground recently. In a previous report we described a
functional semantics for synchronous circuits based on monotonic
length-preserving functions on finite strings. We have now implemented this
theory inside the Boyer-Moore theorem prover, and proved correctness
properties for a variety of circuits. Paillet has developed a sequence of
synchronous circuits of increasing ``sequential complexity'' and given
hand-proofs of their correctness in his P-calculus. Our semantics supports a
calculus which extends his. It was therefore very tempting to investigate
mechanical proofs of those circuits in the Boyer-Moore implementation of our
theory. We present the results here.
------------
@inproceedings{mason-talcott-89lics,
author={Mason, I. A. and Talcott, C. L.},
year={1989},
title={Axiomatizing Operational Equivalence in the Presence of Side Effects},
booktitle={Fourth Annual Symposium on Logic in Computer Science},
publisher={IEEE},
clt-note = {Full version: STAN-CS-89-1250}
}
A formal system is presented for deriving assertions about programs with
effects. Programs are expressions of a first-order Scheme- or Lisp-like
language. The formal system defines a single-conclusion consequence relation
between finite sets of constraints and assertions of the form `e is undefined' or `e0
is equivalent to e1'. A constraint is an atomic or negated atomic formula in
the first-order language consisting of equality, selectors, and tests for
atomicity. The semantics of the formal system is given by a semantic
consequence relation which is defined in terms of a class of memory models for
assertions and constraints. The main results of this paper are that the
deduction system is sound and complete with respect to the semantics.
------------
@inproceedings{talcott-89amast,
author={Talcott, Carolyn L.},
year=1989,
title={Algebraic Methods in Programming Language Theory},
booktitle={First International Conference on Algebraic Methodology and
Software Technology, {AMAST'89}},
}
The point of this paper is to describe a challenging application of algebraic
methods to programming language theory. Much work on the theory of
Scheme-like languages (applicative, but not necessarily functional) has an
essentially algebraic flavor. Thus it seems appropriate to make the algebraic
aspect explicit. An algebraic setting provides a unifying framework for the
various views of programs and facilitates modular specifications of
programming languages. The use of syntactic algebras, data algebras, and
algebras of computation states provides a uniform treatment of data, textual,
and control abstraction mechanisms in languages. To illustrate some of the
issues some specific examples of algebraic specifications are given and models
of interest, relations between models, and relations between specifications
are discussed.
------------
@techreport{talcott-89wix,
author = {Talcott, C. L.},
year = 1989,
title = {Programming and Proving Function and Control Abstractions},
institution = {Stanford University Computer Science Department},
number = {{STAN-CS-89-1288}}
}
These notes are based on lectures given at the Western Institute of Computer
Science summer program, 31 July -- 1 August 1986. Here we focus on programming
and proving with function and control abstractions and present a variety of
example programs, properties, and techniques for proving these properties.
Examples include such powerful programming tools as functions as values,
streams, aspects of object oriented programming, escape mechanisms, and
coroutines. We begin with an intensional semantic theory of function and
control abstractions as computation primitives. A first order theory of
program equivalence based on this semantics is developed and used to formalize
and prove extensional properties of programs. In addition a method is
developed for transforming intensional properties of programs into extensional
properties of related programs called derived programs. This method is
applied to formalize and prove intensional properties of programs.
------------
@techreport{mason-talcott-89completeness,
author={Mason, I. A. and Talcott, C. L.},
year={1989},
title={A Sound and Complete Axiomatization of Operational Equivalence Between
Programs with Memory},
institution={Department of Computer Science, Stanford University},
number={STAN-CS-89-1250},
clt-note = {full version of mason-talcott-89lics, extended version
to appear in tcs}
}
In this paper we present a formal system for deriving assertions about
programs with memory. The assertions we consider are of the following three
forms: (i) an expression diverges (i.e. fails to reduce to a value); (ii) two
expressions reduce to the same value and have exactly the same effect on
memory; and (iii) two expressions reduce to the same value and have the same
effect on memory up to production of garbage (are strongly isomorphic). The
expressions considered are expressions of a first-order Scheme- or Lisp-like
language over a given set of atoms with the data operations `atom', `eq', `car',
`cdr', `cons', `setcar', `setcdr', the control primitives `let' and `if', and
recursive definition of function symbols.
The formal system we present defines a single-conclusion consequence
relation between finite sets of constraints and assertions. A constraint is
an atomic or negated atomic formula in the first-order language consisting
of equality, the unary function symbols `car' and `cdr', the unary relation
`atom', and constants from the set of atoms. Constraints have the natural
first-order interpretation. The semantics of the formal system is given by
a semantic consequence relation which is defined in terms of a class of
memory models for assertions and constraints.
The main results of this paper are:
(Soundness) The deduction system is sound --- if the formal system proves that an
assertion is a consequence of some set of constraints then the assertion is
a semantic consequence of that set of constraints.
(Completeness) The deduction system is complete for assertions not containing
recursively defined functions --- if such an assertion is a semantic
consequence of a set of constraints then it is derivable in the formal system.
------------
@techreport{bronstein-talcott-88string,
author = {Bronstein, Alexandre and Carolyn Talcott},
title = {String-Functional Semantics for Formal Verification of Synchronous
Circuits},
institution={Department of Computer Science, Stanford University},
number={STAN-CS-88-1210},
year = 1988
}
A new functional semantics is proposed for synchronous circuits, as a basis
for reasoning formally about that class of hardware systems.
Technically, we define an extensional semantics with monotonic
length-preserving functions on finite strings, and an intensional semantics
based on functionals on those functions. As support for the semantics,
we prove the equivalence of the extensional semantics with a simple operational
semantics, as well as a characterization of the circuits which obey the
"every loop is clocked" design rule. Also we develop the foundations in
complete detail, both to increase confidence in the theory, and as a
prerequisite to its future mechanization.
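
[Illustrative sketch, not part of the abstract: an informal illustration of the
semantic domain described above, monotonic length-preserving functions on
finite strings, written as a Python sketch of my own rather than the report's
definitions; a unit-delay register is the example component.]

# Invented rendering: a synchronous component as a length-preserving function
# on finite strings of values; here a unit-delay register with initial value v0.

def delay(v0, inputs):
    """Output at cycle t is the input at cycle t-1 (v0 at cycle 0)."""
    return [v0] + list(inputs[:-1]) if inputs else []

xs = [1, 2, 3, 4]
ys = delay(0, xs)
assert len(ys) == len(xs)                          # length-preserving
assert ys == [0, 1, 2, 3]
assert delay(0, xs + [5])[:len(xs)] == ys          # monotonic in prefixes
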
------------
@article{mason-88scp,
author={Mason, I. A.},
year={1988},
title={Verification of Programs that Destructively Manipulate Data},
journal={Science of Computer Programming},
volume={10},
page = {177--210}
}
We investigate various equivalence relations between expressions in a
first-order functional programming language augmented with the ability to
destructively alter the underlying data. To define the semantics we introduce
the notion of a memory structure. A computation theory for a lexically scoped
functional language is then defined over these structures. The equivalence
relations are then defined within this model-theoretic framework. A
distinction is made between intensional relations and extensional relations.
The former class turn out to have a much more manageable theory than the
latter. The principal intensional relation studied is strong isomorphism.
Its properties allow for elegant verification proofs in a style similar to
that of pure functional languages. In particular the relation is preserved
under many standard syntactic manipulations and transformations.
------------
@inproceedings{talcott-weyhrauch-87pe,
author = {Talcott, C. L. and Weyhrauch, R. W.},
year = 1988,
title = {Partial Evaluation, Higher-Order Abstractions,
and Reflection Principles as System Building Tools},
booktitle =
{Partial Evaluation and Mixed Computation},
editor = {Bjorner, D. and Ershov, A. P. and Jones, N. D.},
publisher = {North--Holland},
page = {507--529},
clt-note = {{IFIP} {TC2} Working Conference on
Partial Evaluation and Mixed Computation,
Gammel Avernaes, Denmark, 18-24 October, 1987}
}
In this paper we take a look at partial evaluation from the point of view of
symbolic computation systems, point out some challenging new applications for
partial evaluation in such systems, and outline some criteria for a theory of
partial evaluation. The key features of symbolic computation systems are
summarized along with work on semantics of such systems which will hopefully
aid in meeting the challenges. The new applications are illustrated by an
example based on the concept of component configuration. This is a new idea
for software development, based on the use of higher-order and reflective
computation mechanisms, that generalizes such ideas as modules, classes, and
programming in the large.
------------
@techreport{mason-87hoare,
author={Mason, I.A.},
year={1987},
title={{Hoare's Logic in the LF}},
institution={Laboratory for Foundations of Computer Science,
University of Edinburgh},
number={ECS-LFCS-87-32}
}
@inproceedings{talcott-86trento,
author = {Talcott, Carolyn L.},
year = 1988,
title = {Rum: An intensional theory of function and control abstractions},
booktitle = {Workshop on Functional and Logic
Programming, Trento, Italy, December 1986},
series = {Lecture Notes in Computer Science},
volume = 306,
publisher = {Springer-Verlag},
clt-note = {Invited talk}
}
Rum is an intensional semantic theory of function and control abstractions as
computation primitives. Properties of powerful programming tools such as
functions as values, streams, object oriented programming, escape mechanisms,
and co-routines can be represented naturally. In addition a wide variety of
operations on programs can be treated including program transformations which
introduce function and control abstractions, compiling morphisms that tranform
control abstractions into function abstractions, and operations that transform
intensional properties of programs into extensional properties. The theory
goes beyond a theory of functions computed by programs, providing tools for
treating both intensional and extensional properties of programs. This
provides operations on programs with meanings to transform as well as meanings
to preserve. This paper briefly surveys related ideas from work in symbolic
computation and semantics and gives a guided tour through Rum illustrating the
main ideas.
------------
@inproceedings{mason-86lics,
author={Mason, I. A.},
year={1986a},
title={Equivalence of First Order Lisp Programs: Proving Properties of
Destructive Programs via Transformation},
booktitle={First Annual Symposium on Logic in Computer Science},
publisher={IEEE}
}
@phdthesis{mason-thesis,
author = {Mason, I. A.},
year = 1986,
title = {The Semantics of Destructive {L}isp},
school = {Stanford University},
note={Also available as
CSLI Lecture Notes No. 5, Center for the Study of Language and Information,
Stanford University}
}
In this thesis we investigate various equivalence relations between
expressions in first order LISP. This fragment of LISP includes the
destructive operations {\it rplaca} and {\it rplacd}. To define the semantics
we introduce the notion of a memory structure. The equivalence relations are
then defined within this model theoretic framework.
A distinction is made between intensional relations and extensional relations.
The former class turn out to have a much more manageable theory than the
latter. The principal intensional relation studied is {\it strong
isomorphism}; its properties allow for elegant verification proofs in a style
similar to that of pure Lisp. In particular the relation is preserved under
many standard syntactic manipulations and transformations, and it
satisfies a Substitution Theorem: any program that is obtained from another by
replacing a portion by another strongly isomorphic one is guaranteed to be
strongly isomorphic to the original one.
A plethora of verification proofs of both simple and complex programs was
given using the intensional equivalence relation. All of these proofs were of
the transformation plus induction variety. In contrast, we gave some
verification proofs of programs, using the extensional relations. Because the
Substitution Theorem fails for these extensional relations, the proofs were
necessarily of the hand simulation variety.
In a more theoretical light, we also proved that the equivalence relations
introduced here are decidable, and used them to study the expressive powers
of certain fragments of Lisp.
-----------------------
@phdthesis{talcott-thesis,
author = {Talcott, C. L.},
year = 1985,
title = {The Essence of {Rum}:
A Theory of the Intensional and Extensional Aspects of
{L}isp-Type Computation},
school = {Stanford University}
}
Rum is a theory of applicative, side-effect free computations over an
algebraic data structure. It goes beyond a theory of functions computed by
programs, treating both intensional and extensional aspects of computation.
Powerful programming tools such as streams, object-oriented programming,
escape mechanisms, and co-routines can be represented. Intensional properties
include the number of multiplications executed, the number of context
switches, and the maximum stack depth required in a computation. Extensional
properties include notions of equality for streams and co-routines and
characterization of functionals implementing strategies for searching
tree-structured spaces. Precise definitions of informal concepts such as
stream and co-routine are given and their mathematical theory is developed.
Operations on programs treated include program transformations which introduce
functional and control abstractions; a compiling morphism that provides a
representation of control abstractions as functional abstractions; and
operations that transform intensional properties to extensional properties.
The goal is not only to account for programming practice in Lisp, but also to
improve practice by providing mathematical tools for developing programs and
building programming systems.
Rum views computation as a process of generating computation structures --
trees for context-independent computations and sequences for context-dependent
computations. The recursion theorem gives a fixed-point function that
computes computationally minimal fixed points. The context insensitivity
theorem says that context-dependent computations are uniformly parameterized
by the calling context and that computations in which context dependence is
localized can be treated like context-independent computations. Rum machine
structure and morphism are introduced to define and prove properties of
compilers. The hierarchy of comparison relations on programs ranges from
intensional equality to maximum approximation and equivalence relations that
are extensional. The fixed-point function computes the least fixed point with
respect to the maximum approximation. Comparison relations, combined with the
interpretation of programs using computation structures, provide operations on
programs both with meanings to preserve and meanings to transform.
------------------------
@techreport{mason-talcott-85memories,
author={Mason, I. A. and Talcott, C. L.},
year={1985},
title={Memories of S-expressions: Proving Properties of Lisp-Like Programs
that Destructively Alter Memory},
institution={Department of Computer Science, Stanford University},
number={STAN-CS-85-1057}
}
clt jul91