
Type Specialisation Exercises

Power play

The power function, with a static first parameter, can be defined as follows:
letrec power n x = if@ n=@ 1 then x else x*power (n-@1) x
in \x. power 3 x
But this program cannot be specialised as it stands, because n would need to be assigned several different residual types.
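For orientation: a specialised version would need one residual copy of power for each static value of n, and the residual one would hope to obtain for n equal to 3 is simply the cube function, with the static recursion unfolded. Here is a Haskell sketch of the source and of that intended residual (the names power and power3, and the sketch itself, are ours; it is a model of the idea, not input to the specialiser):

-- A Haskell model of power, with n intended static and x dynamic.
power :: Int -> Int -> Int
power n x = if n == 1 then x else x * power (n - 1) x

-- The residual one would hope for after specialising to n = 3:
-- the static recursion is unfolded and n has disappeared.
power3 :: Int -> Int
power3 x = x * (x * x)

main :: IO ()
main = print (power 3 5, power3 5)   -- both components are 125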

Variations on append

The append function can be defined and tested as follows:
letrec append xs ys =
  case xs of
    Nil: ys,
    Cons x xs: Cons x (append xs ys)
  esac
in

append (Cons (lift 1) (Cons (lift 2) Nil)) (Cons (lift 3) Nil)
So far all the list constructors are dynamic. Make the constructors of the first argument static, to create a specialised version of append for lists of length two. Now make the constructors of the second argument and result static also. Do you understand why you obtain the residual program that you do?
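To see roughly what to expect, here is a Haskell sketch of append together with the kind of residual one might hope for when the spine of the first argument has a statically known length of two; append2 is our name for it, and the exact residual the type specialiser produces (especially once the second argument and result are also static) is for you to discover.

-- A Haskell model of append: the list spine plays the role of the
-- static constructors, while the elements remain dynamic.
append :: [a] -> [a] -> [a]
append []       ys = ys
append (x : xs) ys = x : append xs ys

-- Specialising to a first argument whose spine has known length two
-- should unfold the recursion, leaving only the dynamic elements and
-- the dynamic result constructors.
append2 :: a -> a -> [a] -> [a]
append2 x1 x2 ys = x1 : x2 : ys

main :: IO ()
main = print (append [1, 2] [3 :: Int], append2 1 2 [3 :: Int])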

Experiments with Arity Raising

Experiment with the arity raiser, by giving the specialiser purely dynamic terms containing pairs in different contexts, and examining the output. For example, specialise
let f x = x in f (lift 1, lift 2)
and
let proj x = case x of Inj y: y esac
in proj (Inj (lift 1, lift 2))
Can you find a simple modification to these examples which avoids the duplication of code? Hint: insert a dynamic constructor somewhere. Why not define function composition, and a function swap which swaps the components of pairs, and try composing swap with itself?
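If you want a starting point for the last suggestion, the functions in question are ordinary composition and pair swapping; here they are in Haskell, for you to transcribe into the specialiser's metalanguage with dynamic pairs.

-- Haskell models of the functions the exercise suggests defining.
compose :: (b -> c) -> (a -> b) -> a -> c
compose f g x = f (g x)

swap :: (a, b) -> (b, a)
swap (x, y) = (y, x)

main :: IO ()
main = print (compose swap swap (1 :: Int, 2 :: Int))   -- (1,2)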

Simulating Constructor Specialisation: An Exercise in First-Class Polyvariance

One way to model data-structures in the pure lambda-calculus is as follows. Model data values by functions from the branches of a case over the datatype to the result of the case. Model case branches by functions from the components to the result of the branch. Model constructors by functions which select the appropriate branch, and apply it to the components. For example, lists can be modelled as follows:
nil c n = n
cons x xs c n = c x xs
listcase xs c n = xs c n
If we apply this idea to a datatype with one unary constructor, we obtain
inj x k = k x
caseinj x k = x k
But suppose we want to model the behaviour of the specialisable constructor In. By representing case branches as a polyvariant function, model constructor specialisation using poly and spec.
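A Haskell transcription of these encodings may make them concrete. The newtype wrappers and the fixed element type Int are our additions, needed to make the encodings typecheck in Haskell; sumL is just an example client of the list encoding.

-- Lists encoded as their own case-analysis functions, with the
-- element type fixed at Int and the case result type r.
newtype ListI r = ListI { listcase :: (Int -> ListI r -> r) -> r -> r }

nil :: ListI r
nil = ListI (\_c n -> n)

cons :: Int -> ListI r -> ListI r
cons x xs = ListI (\c _n -> c x xs)

-- Example client: summing an encoded list by case analysis.
sumL :: ListI Int -> Int
sumL xs = listcase xs (\x rest -> x + sumL rest) 0

-- The datatype with one unary constructor, in the same style.
newtype Inj r = Inj { caseinj :: (Int -> r) -> r }

inj :: Int -> Inj r
inj x = Inj (\k -> k x)

main :: IO ()
main = print (sumL (cons 1 (cons 2 nil)), caseinj (inj 5) (+ 1))   -- (3,6)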

Optimal Specialisation

A self-interpreter int for the specialiser's metalanguage can be specialised optimally if, for any program p,
mix int p = p
(up to trivialities such as renaming of variables). If the metalanguage is typed, then an optimal specialiser must specialise types, since otherwise this equation cannot hold for any p containing a type not found in int. In particular, the `universal' type used to represent values in the interpreter must be specialised to the types of those values. The type specialiser was the first specialiser able to do this for the lambda-calculus. In this exercise, you will repeat that experiment. Take the following typed interpreter for the lambda-calculus plus constants:
letrec eval env e =
  case e of
    Cn n: Num n,
    Vr x: env x,
    Lm x e: Fun (\v. 
             let env y = if x=y then v else env y
             in eval env e),
    Ap e1 e2: case eval env e1 of
                Fun f: f (eval env e2)
              esac
  esac
in

eval (\i.Wrong)

(Ap (Ap (Lm (lift "x") (Lm (lift "f") 
          (Ap (Vr (lift "f")) (Vr (lift "x"))))) (Cn (lift 3)))
    (Lm (lift "z") (Vr (lift "z"))))
The interpreter can be specialised as it stands, but since everything is dynamic, no specialisation occurs. Change the binding-times in the interpreter, step by step, along with any other changes needed to make specialisation possible, and see how the results change. Have you achieved optimal specialisation? (If not: keep trying.) What happens if you specialise this interpreter to an ill-typed lambda-term, such as
(Ap@ (Cn@ 3) (Cn@ 4))
Is this the behaviour you would expect?
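For comparison, here is a Haskell transcription of the interpreter above. The Value datatype is the `universal' type referred to earlier: every interpreted value carries a Num, Fun or Wrong tag, and it is precisely these tags that an optimal specialiser must be able to specialise away in the residual program. (The Haskell rendering is ours; the catch-all Wrong case in the application rule is added only to keep the Haskell version total.)

-- A Haskell transcription of the interpreter above.
data Exp = Cn Int | Vr String | Lm String Exp | Ap Exp Exp

-- The `universal' type: every interpreted value carries a tag.
data Value = Num Int | Fun (Value -> Value) | Wrong

eval :: (String -> Value) -> Exp -> Value
eval env e = case e of
  Cn n     -> Num n
  Vr x     -> env x
  Lm x b   -> Fun (\v -> eval (\y -> if x == y then v else env y) b)
  Ap e1 e2 -> case eval env e1 of
                Fun f -> f (eval env e2)
                _     -> Wrong             -- our addition, for totality

-- The test term from above: (\x. \f. f x) 3 (\z. z)
test :: Exp
test = Ap (Ap (Lm "x" (Lm "f" (Ap (Vr "f") (Vr "x")))) (Cn 3))
          (Lm "z" (Vr "z"))

main :: IO ()
main = case eval (\_ -> Wrong) test of
         Num n -> print n                  -- prints 3
         _     -> putStrLn "not a number"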

Transforming Polymorphism to Monomorphism

The type specialiser is not optimal for the polymorphic lambda-calculus, because both source and residual programs are simply typed (i.e. monomorphic). However, we can write an interpreter for a polymorphic language in the type specialiser's meta-language. Specialising such an interpreter to a polymorphic program will translate it into a monomorphic one. Begin by adding a case to your optimal interpreter from the previous exercise so that it also interprets a let construct:
Let x e1 e2
represents let x=e1 in e2. Test your interpreter by specialising it to
(Let@ "id" (Lm@ "x" (Vr@ "x")) (Ap@ (Vr@ "id") (Cn@ 3)))
Make sure that specialisation is still optimal --- that is, you obtain a corresponding let in the residual program. What happens if you specialise your interpreter to a program which requires polymorphism to be well-typed? For example,
(Let@ "id" (Lm@ "x" (Vr@ "x")) 
   (Ap@ (Ap@ (Vr@ "id") (Vr@ "id")) (Cn@ 3)))
Modify your interpreter so that it can be specialised to this term. You will need to generate two versions of id in the residual program, with two different monotypes --- could polyvariance be useful perhaps? Following the Hindley-Milner type system, you may wish to distinguish between lambda-bound and let-bound variables, where only the latter may be polymorphic.
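To see what the translation should achieve, here is a Haskell sketch of the second test term before and after monomorphisation; the names idP, idA and idB are ours, chosen only to stand for the single polymorphic id and its two monomorphic copies.

-- The polymorphic source: let id = \x. x in (id id) 3.
source :: Int
source = let idP x = x in (idP idP) 3

-- After translation to a monomorphic program: the residual must
-- contain two copies of id, one for each type at which it is used.
mono :: Int
mono =
  let idA :: (Int -> Int) -> (Int -> Int)   -- id at its outer use
      idA x = x
      idB :: Int -> Int                     -- id at its inner use
      idB x = x
  in (idA idB) 3

main :: IO ()
main = print (source, mono)   -- (3,3)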

Transforming Higher-Order to First-Order

Higher-order programs can be transformed to first-order ones by representing function values as data-structures called closures, consisting of a tag identifying the function, and the values of its free variables. Function calls are interpreted by calling a dispatch function which inspects the tag, and then behaves like the function that the tag identifies. The transformation to first-order form is called closure conversion or firstification, and is a little tricky in a typed language. The object of this exercise is to develop an interpreter for the lambda-calculus, which when specialised to a lambda-term produces the result of firstifying it. Start from the optimal interpreter you developed above. Can you change the representation of functions in the interpreter in such a way that residual functions will be represented by tagged tuples of their free variables? Don't forget to introduce a dispatching function, which can be specialised to produce the dispatch function in the firstified code! A suitable lambda-expression to test your firstifier on is
(Ap@ (Lm@ "ap" (Ap@ (Ap@ (Vr@ "ap") (Lm@ "z" (Vr@ "z")))
                    (Ap@ (Ap@ (Vr@ "ap") (Lm@ "w" (Cn@ 3)))
                         (Cn@ 4))))
     (Lm@ "f" (Lm@ "x" (Ap@ (Vr@ "f") (Vr@ "x")))))
You may encounter an infinite representation error from the arity raiser during this exercise. Certain recursive static functions can cause this problem. In this case, try passing the dispatch function as a parameter to eval rather than referring to it as a free variable.
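The transformation itself is easier to see on a tiny self-contained example than on the interpreter. The Haskell sketch below (our own example, not the term above) firstifies a program built from inc, double and compose: each function value becomes a tagged record of its free variables, and apply plays the role of the dispatch function.

-- Higher-order original.
inc, double :: Int -> Int
inc x    = x + 1
double x = x * 2

compose :: (Int -> Int) -> (Int -> Int) -> Int -> Int
compose f g x = f (g x)

-- First-order (firstified) version: every Int -> Int value in the
-- program is represented by a tag carrying its free variables.
data Fun = Inc | Double | Compose Fun Fun

-- The dispatch function: inspects the tag and behaves like the
-- function the tag identifies.
apply :: Fun -> Int -> Int
apply Inc           x = x + 1
apply Double        x = x * 2
apply (Compose f g) x = apply f (apply g x)

main :: IO ()
main = print (compose inc double 5, apply (Compose Inc Double) 5)   -- (11,11)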

Interpreting Imperative Programs

Below is an interpreter for a simple imperative language, supporting assignments, conditional statements, and sequencing. Variables in the interpreted language need not be declared: a variable is given a value simply by assigning to it. The interpreter given below is purely dynamic; your job is to modify it so that the program to be interpreted, and the names in the environment, are static.
let look env x = env x in
let assign env x v = 
   \i. if i=x then v else env i
in

letrec eval env e =
  case e of
    Con n: n,
    Var s: look env s,
    Add e1 e2: eval env e1 + eval env e2
  esac
in

letrec exec env p =
  case p of
    Skip: env,
    Assign x e: assign env x (eval env e),
    Seq p1 p2: let env=exec env p1 in exec env p2,
    If e p1 p2: if eval env e=lift 0 then exec env p2 
                else exec env p1
  esac
in

let run p e = let env = exec (\i.lift 0) p in
              eval env e
in

run
(Seq (Assign (lift "x") (Con (lift 3)))
     (Seq (If (Var (lift "x"))
              (Assign (lift "y") (Add (Var (lift "x")) (Con (lift 1))))
              Skip)
          (Assign (lift "z") (Var (lift "y")))))

(Add (Var (lift "x")) (Var (lift "y")))
This interpreter would be difficult to specialise with a partial evaluator, because of the dynamic conditional in the function exec, which forces the result of exec to be dynamic. But exec returns the environment, which should of course be partially static. Luckily the type specialiser allows dynamic conditionals to have partially static results, so the problem will not arise.

One solution using a partial evaluator would be to use CPS specialisation, which specialises the context of a dynamic conditional with partially static branches twice, once in each branch. In the example above, the statement following the If statement would need to be `compiled' twice (that is, two different specialisations of exec to this statement would need to be generated), since one branch of the If introduces the variable y into the environment, while the other branch does not. Thus the partially static environment would have a different shape, depending on which branch of the conditional statement was chosen, and the reference to y in
(Assign (lift "z") (Var (lift "y")))
would need to be compiled differently in each case. The type specialiser, on the other hand, `compiles' this last statement once only. Inspect the residual code: how is the problem of different variables in the environment after each branch of an If resolved?

In the interpreter given above, uninitialised variables have the value zero. Modify the interpreter to distinguish between initialised and uninitialised variables in the environment. What is the effect of making this distinction static, in the example above?
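For the last part, one way to model the initialised/uninitialised distinction is sketched below in Haskell, using Maybe (our choice of representation, not the specialiser's); in the metalanguage you would use two constructors instead, and then decide whether the case analysis on them should be static or dynamic.

-- An environment that distinguishes uninitialised variables
-- (Nothing) from initialised ones (Just v).
type Env = String -> Maybe Int

emptyEnv :: Env
emptyEnv _ = Nothing

assign :: Env -> String -> Int -> Env
assign env x v = \i -> if i == x then Just v else env i

-- Looking a variable up either finds its value or falls back to the
-- default used for uninitialised variables (0, as in the interpreter).
look :: Env -> String -> Int
look env x = case env x of
               Just v  -> v
               Nothing -> 0

main :: IO ()
main = do
  let env = assign emptyEnv "x" 3
  print (look env "x", look env "y")   -- (3,0)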
Last modified: Mon Jul 6 23:58:01 MEST 1998