Max's environment or table of definitions has four entries for each name:

    tag:  single lower case character name.
    ldef: lambda representation.
    cdef: combinator representation.
    type: type of definition.

Definitions  are  maintained  in  both lambda and combinator form. The former is
more  convenient  for  merging the equations which constitute definitions, while
the latter allows simpler reduction and much simpler type checking.

In  common with Min, the implementation of Max uses no variables. That is to say
that  once a name is assigned, its value is not thereafter changed. The only ex-
ception  to  this  is  the name '_' which is used as a sink. It may be specified
more than once, but its value is never referenced. '_' is a good name for a sink
because  it  is arguably the APL name with the lowest connotative overhead (most
boring).  The symbol is used in the functional language "Hope" for approximately
the same purpose.

Function [show] is used to display the first ⎕pw characters of a possibly infin-
ite  list.  [show] outputs list items as a side-effect of calculating the number
of  session columns remaining. This approach was adopted so that with unbuffered
⍞← output, leftmost items of a list are displayed early. This provides an indic-
ation  that  something is happening if reduction of successive items takes prog-
ressively longer, as for example, in the case of the Fibonacci sequence.

To  help  show  what's going on, the APL code is occasionally commented using an
informal  notation  to  indicate  function type. The scheme mirrors APL's header
syntax, with '::' in place of the function name. For example:

        tex env ← tex :: exp env

means the function is dyadic; takes a left argument of type "tex" (type express-
ion); a 2-item right argument of "exp" (expression) and "env" (environment); and
returns a 2-item result.

The following D-functions implement Max:

    max         process input stream, returning {shy} environment.
    parse       parse tree from char vector source.
    lambda      lambda expression from parse tree.
    local       local definitions replaced with simple lambda expressions.
    equn        argument patterns replaced with simple lambda expressions.
    show        display of reduced expression.
    defn        environment extended with new definition.
    compile     compilation of lambda, into combinator form.
    typex       type statement (::) processing.
    type        deduction (inference) of expression type.
    unify       unification of type expressions.
    reduce      expression reduction.
    extend      environment extended with primitive types and definitions.
    const       implementation constants.
    trees       graphical display of tree structures.

In  addition  to these relatively substantial functions, which are described be-
low,  there  are  a  number  of small utilities which are defined dynamically in
several  places.  This repetition is justified on the grounds that being able to
see the definition in context outweighs any reluctance to replicate code.

Dref:  "Lookup Tables" are used in several places. These take the form of a vec-
tor of names (tags) followed by a corresponding vector of values (vals). For
example:

    tab←'abc'(1 2 3)            ⍝ table of values for 'a', 'b', 'c'.

A function to "dereference" a name from the table could be coded:

    dref←{                      ⍝ Dereference from table.
        tags vals←⍺             ⍝ left arg: look-up table.
        name←⍵                  ⍝ right arg: name to look up.
        (tags⍳name)⊃vals,⊂name  ⍝ value corresponding to name.
    }

The  phrase  ,⊂name in the last line supplies a default value if the name is un-
known to the table. In this coding, the tag itself is returned:

        tab∘dref¨'a+b×c'        ⍝ translate operands, preserve operators.
    1 + 2 × 3

The above coding can be changed to use a reduction ↑{..}/ to split the left arg-
ument,  thus  avoiding  the allocation of explicit names for parts of the table.
Then,  within  the reduction operand function, tags and vals are referenced as ⍺
and ⍵ respectively.

    dref←{                      ⍝ Dereference from table.
        name←⍵                  ⍝ right arg: name to look up.
        ↑{(⍺⍳name)⊃⍵,⊂name}/⍺   ⍝ value corresponding to name.
    }

As  a  final  step, we can make this a "one-liner" by turning the inner function
into  a monadic operator, and passing [name] as its operand. [name] is then ref-
erenced as ⍺⍺.

    dref←{↑⍵{(⍺⍳⍺⍺)⊃⍵,⊂⍺⍺}/⍺}   ⍝ Dereference from table.
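
The behaviour of [dref] can be sketched in Python (a hypothetical rendering for
readers less familiar with APL; the definitions above are the real ones):

```python
def dref(table, name):
    """Look up name in a (tags, vals) table; unknown names
    default to the name itself (the ,⊂name phrase above)."""
    tags, vals = table
    if name in tags:
        return vals[tags.index(name)]
    return name

tab = ("abc", (1, 2, 3))
# translate operands, preserve operators:
print([dref(tab, ch) for ch in "a+b"])  # → [1, '+', 2]
```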

Another  idiom worthy of note is "fold from the left". APL's primitive reduction
operator  '/'  is  applied "from the right", so to reverse this, we must commute
the function operand and reverse the arguments.

To  build  a  "cons" tree, where cons (⊂) associates right, we can use primitive
reduction:

          trees ↑{'⊂'⍺ ⍵}/ 'abcd'       ⍝ tree from right.
    a ┌⊂─┐
      b ┌⊂┐
        c d

but  to build an "apply" tree, where apply associates left, we must use the com-
muted form:

          trees ↑{'@'⍺ ⍵}⍨/⌽ 'abcd'     ⍝ tree from left.
     ┌─@┐ d
    ┌@┐ c
    a b

    app←{↑{'@'⍺ ⍵}⍨/⌽⍵}                 ⍝ function application.
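
The two folds can be sketched in Python with tuple-encoded trees (a hypothetical
encoding; Max's own trees are APL nested arrays):

```python
from functools import reduce

def cons_tree(items):
    """Right fold: '⊂' associates right, giving a⊂(b⊂(c⊂d))."""
    return reduce(lambda acc, x: ("⊂", x, acc), reversed(items[:-1]), items[-1])

def app_tree(items):
    """Left fold: '@' associates left, giving ((a@b)@c)@d."""
    return reduce(lambda f, x: ("@", f, x), items)

print(cons_tree("abcd"))  # → ('⊂', 'a', ('⊂', 'b', ('⊂', 'c', 'd')))
print(app_tree("abcd"))   # → ('@', ('@', ('@', 'a', 'b'), 'c'), 'd')
```

As in the APL, the left fold is just the right fold with the operand commuted
and the argument reversed.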

Max's  right  argument  is a script for processing prior to keyboard input. This
can be a character matrix; a character vector with embedded new-lines; or a vec-
tor  of character vectors. Max returns as a shy result, an environment which may
be  re-input  via its left argument as a starting point for a later session. The
environment is a 4-column matrix:

    Name, lambda definition, combinator definition, type.

Local functions in max:

Stream:  Takes  the next complete input line. If this starts with ')', exit max,
returning  the environment as a shy result. Otherwise, process each line segment
and then the rest of the script.

Next: Returns a complete source line from the input stream.

Input: Returns the next line from the script or keyboard.

Prompt: Assembles the input prompt.

Retract:  Retracts  primitive definitions from the environment leaving only user
defined definitions. The environment is returned as a four-column matrix.

Alph: A vector of valid names for definitions.

Chars:  A  vector of valid input characters. The boolean argument to the unnamed
function  which  generates chars indicates the admission of language extensions.
For pure Max, change this to 0.

The  number  of  operators in Max makes it just complex enough to merit a parser
more  general than Min's very specific one. Parse's left argument is a vector of
operators in binding strength order, and its right argument, a source expression
to be parsed. The result is a "parse tree":

    ┌×┐ ┌×┐
    a b c d

Parse has three phases: prep, post, and tree.

Prep: Preprocess the source string performing simple lexical tasks that make the
following  pass  easier.  Firstly,  it  replaces  lambda expressions of the form
"\abc.X"  with  an infix equivalent "a\b\c\X". Secondly, it substitutes negative
integers  for local definition dot sequences: 'a.b...c..d' becomes 'a' ¯1 'b' ¯3
'c' ¯2 'd'.  Lastly, the type digraph :: is replaced by a single character equi-
valent '⊤'.
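
The dot-sequence substitution can be sketched in Python (hypothetical; prep
itself works on APL character vectors):

```python
import re

def prep_dots(src):
    """Replace runs of 'where' dots with negative integers:
    'a.b...c..d' -> ['a', -1, 'b', -3, 'c', -2, 'd']."""
    parts = re.split(r"(\.+)", src)          # keep the dot runs
    return [-len(p) if p.startswith(".") else p for p in parts]

print(prep_dots("a.b...c..d"))  # → ['a', -1, 'b', -3, 'c', -2, 'd']
```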

Post:  Converts  the  preprocessed  source  into postfix or reverse polish form.
Input tokens are taken from the right argument and appended to one of two output
arrays:  'ac',  an accumulator for output tokens and 'sk', a stack for operators
whose  binding  strength is less than that of those already output. The state of
the  process  and  current bracket appear in 'st' and 'bk'. State is one of: '-'
initial;  '∇'  operator  processed, expecting operand; or '⍵' operand processed.
A  dummy operand '?' is inserted between adjacent operators, and an explicit ap-
ply operator '@' is inserted between adjacent operands. Parentheses and brackets
are  processed  by  a  recursive call on post for a left (lb) one, and returning
current sub-expression and remaining source tokens for a right (rb) one.

Tree:  Converts  postfix form to the final nested array tree structure. The list
separator  ',' and cons operator ':' which have been kept distinct until now be-
cause  they have differing binding strengths are both translated to an applicat-
ion of the primitive prefix function '⊂'. For example, 'ht:' → '@'('@' '⊂' h)t.

Lambda  returns  a  lambda expression or definition from a parse tree. Two sub-
stantial subtasks are:

    - local definition resolution.  - see function local.
    - pattern matching in equations - see function equn.

Local  replaces  local  definitions  using  "where dots"  with  their  lambda
abstraction equivalent. In general:

    X.a=A → (\a.X)A

Subfunction  [lexp] tries to push the lambda abstraction as far down the tree as
possible. For example, if X above is a function application: fx, then the trans-
formation above becomes:

    fx.a=A → (\a.fx)A

However,  if  variable "a" is free only in x (say) then the following is prefer-
able:

    fx.a=A → f(x.a=A) → f((\a.x)A)

If variable "a" occurs only once (or not at all) in expression X, then the lamb-
da abstraction can be replaced directly with value A:

    fa.a=A → fA     / "a" occurs once.

    f0.a=A → f0     / "a" does not occur.

This optimisation is beneficial in its own right, and corresponds to the idea of
localising  names  "as tightly as possible".  However, the real reason for doing
it is to push name localisation below any pattern matching code, making it eas-
ier to merge equations such as:

    p 0 j = 0                   / product in terms of sum.
    p (+i) j = s j (p i j)
    .   s 0 j = j               / local defn of sum.
    .   s (+i) j = +(s i j)

Lambda trees for the two equations and final definition of p are:

 ┌=─┐         ┌=─┐             =merge=>     ┌=─┐
 p ┌\───┐     p ┌\──┐                       p ┌\────┐
   _ ┌──@┐      _ ┌─@───────┐                 _ ┌───@───────┐
   ┌─@─┐ ?      ┌─@┐  ┌─────@─┐               ┌─@─┐   ┌─────@─┐
  ┌@┐ ┌\┐      ┌@┐ ? ┌\─┐    ┌@┐             ┌@┐ ┌\┐ ┌\─┐    ┌@┐
  ! _ j 0      ! _   i ┌\──┐ - _             ! _ j 0 i ┌\──┐ - _
                       j  ···                          j  ···

As  the local definition code is pushed below the pattern matching code, the '?'
subtrees line up nicely, and the equations are easily merged.
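
The occurrence-counting substitution rule (inline A when "a" occurs at most
once, otherwise abstract) can be sketched in Python over tuple-encoded trees (a
hypothetical encoding; Max's trees are APL nested arrays):

```python
def count(var, tree):
    """Occurrences of var in a tuple-encoded tree."""
    if isinstance(tree, tuple):
        return sum(count(var, t) for t in tree)
    return int(tree == var)

def subst(var, val, tree):
    """Replace each occurrence of var with val."""
    if isinstance(tree, tuple):
        return tuple(subst(var, val, t) for t in tree)
    return val if tree == var else tree

def where(body, var, defn):
    """X.a=A: inline A when 'a' occurs at most once in X,
    otherwise build the application (\\a.X)A."""
    if count(var, body) <= 1:
        return subst(var, defn, body)
    return ("@", ("\\", var, body), defn)

print(where(("@", "f", "a"), "a", "A"))   # fa.a=A → ('@', 'f', 'A')
print(where(("@", ("@", "+", "a"), "a"), "a", "A"))   # two occurrences: abstract
```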

Equn  replaces  a  complex  equation  using  argument patterns with its simple
lambda  equivalent.  The  pattern  matching  appears  as  explicit  function
calls  on  ! (test for zero) and ∘ (test for null). The dummy bound variable '_'
may be used in the generated lambda abstractions. For example:

    n0=1      →  n=\_.!_1?
    n(+i)=0   →  n=\_.!_?((\i.0)(-i))
    h(x:y)=x  →  h=\_.∘_?(\xy.x)(↑_)(↓_)

The  first  lambda expression reads: "n is a function of _, where if _ is 0, the
result is 1, otherwise it's an error".

The  second  one  reads:  "n  is  a function of _ where if _ is zero then error,
otherwise the result is the application of function (\i.0) to the predecessor of
i".  This  last mechanism is how pattern variables assume their inverse values in
the  body of the expression. For example, a function f(+i)= ..i.. given an argu-
ment of 4, has i with a value of 3 in its body.

The  third  expression  reads  "h  is  a function of _, where if _ is null, then
error,  otherwise the result is the application of function (\xy.x) to the head
and tail of _".

Typically, as in the first two examples above, a function definition is compris-
ed of complementary equations where the error cases cancel out.
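
The way complementary equations cancel can be sketched in Python, modelling the
'?' error as a sentinel value (a hypothetical modelling; Max handles '?' inside
the reduction engine):

```python
ERR = "?"                        # pattern-match failure

def n_eq1(x):                    # n 0 = 1
    return 1 if x == 0 else ERR

def n_eq2(x):                    # n (+i) = 0, with i bound to x-1
    return ERR if x == 0 else (lambda i: 0)(x - 1)

def merge(*eqns):
    """Try each equation in turn; the error cases cancel out."""
    def f(x):
        for eq in eqns:
            r = eq(x)
            if r != ERR:
                return r
        return ERR
    return f

n = merge(n_eq1, n_eq2)
print(n(0), n(4))  # → 1 0
```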

Show  continually  reduces an expression until either it reaches normal form, or
⎕pw characters of output have been displayed.

Defn  extends  the  global environment with a new definition, or a refinement of
an existing definition.

Compile  translates  an  expression containing lambda abstractions into one con-
sisting  only  of  function applications. It is possible to represent any lambda
expression  using  only  the S,K and I combinators, but for non-trivial express-
ions, unmanageably large expressions result. The optimisations employ additional
combinators  to  reduce the size of the resulting tree. Max's compile is essent-
ially the same as Min's, except that the table of optimisations is passed  as  a
left argument. This is in order to restrict which optimisations are applied dur-
ing the "bootstrap" process in function 'extend'.

Typex  processes Max's :: operator. The parser translates type expressions into:

    ⊤ exp ?         / for a type query such as            [0] ::
    ⊤ name type     / for a type specification such as      s :: #→#→#

Typex  returns  a  possibly updated environment and the number of output columns
remaining.

Type deduces (infers) the type of a compiled expression. The expression may con-
tain  references  to  other objects in the environment which is passed as a left
argument. There are two cases:

If  the  expression  is  an  atom,  its type is referenced from the environment.
Otherwise,  the type is a function application: fun arg. The type of each of fun
and  arg is deduced recursively and the type of the resulting function applicat-
ion produced by unifying function type, argument type, and a wild card type var-
iable with the template for function application:

    func    argt    rslt
    ⍺→⍵     ⍺       ⍵

The  unification  algorithm is interesting in its own right, independent of Max.
Its arguments are a pair of expressions to be unified. For example, if 'a+a' and
'b+(c×d)'  are expressions with variables a, b, c and d, then we can deduce that
the  expressions  can  be  unified (are equivalent) if and only if a=b=(c×d), in
which  case  they  are  both  equivalent to '(c×d)+(c×d)'. Internally, type var-
iables: ⍺, ∊, ⍵, ··· are represented by natural numbers 0, 1, 2, ···.

      1 '+' 1 unify 2 '+' (3 '×' 4)    ⍝ a+a <≡> b+(c×d)  ≡>  (c×d)+(c×d).
 3 × 4 + 3 × 4

See also: dfns.dws/notes.unify.
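
A minimal first-order unification can be sketched in Python, with type variables
as integers and compound types as tuples (a hypothetical encoding; note that it
omits the occurs check, which is why a cycle detector like the one in [fix]
below is needed):

```python
def walk(t, s):
    """Chase a variable through the substitution table."""
    while isinstance(t, int) and t in s:
        t = s[t]
    return t

def unify(a, b, s=None):
    """Return a substitution making a and b equal, or None."""
    s = dict(s or {})
    a, b = walk(a, s), walk(b, s)
    if a == b:
        return s
    if isinstance(a, int):
        return {**s, a: b}
    if isinstance(b, int):
        return {**s, b: a}
    if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b):
        for x, y in zip(a, b):
            s = unify(x, y, s)
            if s is None:
                return None
        return s
    return None                              # constructor clash

def resolve(t, s):
    """Apply the substitution throughout an expression."""
    t = walk(t, s)
    return tuple(resolve(x, s) for x in t) if isinstance(t, tuple) else t

s = unify((1, "+", 1), (2, "+", (3, "×", 4)))   # a+a  <≡>  b+(c×d)
print(resolve((1, "+", 1), s))   # → ((3, '×', 4), '+', (3, '×', 4))
```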

Local functions in unify:

Dist: The two expressions are allocated distinct type variables.

Subs:  The  expressions are traversed in parallel building a substitution table.
For  example  if ⍺ on one side, matches #→# on the other, the substitution table
is extended with a new entry: (⍺,#→#).

Merge:  Substitutions  are  merged  into  the table. For example, if (⍺,⍵→⍵) and
(⍵,[#]), then (⍺,[#]→[#]).

Fix:  The  fixpoint or power-limit operator applies its function operand repeat-
edly until there is no difference between argument and result. The simplest form
of  this  operator is {⍵≡z←⍺⍺ ⍵:⍵ ⋄ ∇ z}. However, in this case, an extra phrase
aborts  the  process  if an infinite cycle is detected. Such a cycle would occur
trying to merge, for example: (⍺,[⍵]) and (⍵,[⍺]) which represents an expression
with  a  non-finite type. Trying to infer the type of the primitive Y combinator
leads  to  such a cycle. For this reason, Y is included in the primitive combin-
ators at "boot" time because its type cannot be inferred by this method.
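
The guarded fixpoint idea can be sketched in Python (a hypothetical version; the
APL operator above does the same with an extra cycle-detecting phrase):

```python
def fix(f, x):
    """Apply f repeatedly until a fixpoint; abort on a detected cycle."""
    seen = set()
    while True:
        if x in seen:
            raise ValueError("infinite cycle detected")
        seen.add(x)
        y = f(x)
        if y == x:
            return x
        x = y

print(fix(lambda n: n // 2, 100))   # → 0  (100, 50, 25, ..., 1, 0)
# fix(lambda n: 1 - n, 0) would raise: the argument cycles 0, 1, 0, 1, ...
```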

Max  can  accommodate  expressions which represent infinite lists. Consequently,
the process can't be allowed to attempt to reduce the expression to its simplest
or "normal" form, as this might not terminate. The reduction proceeds to a less-
er  form  called  "weak  head normal form". In this context, an expression is in
WHNF if it is:

    - An atomic value, such as '0', or '+',
    - A function applied to fewer than its defined number of arguments,
    - A list construction of the form head:tail.
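
Stopping at WHNF can be sketched in Python, modelling a cons cell as an
evaluated head plus an unevaluated tail thunk (hypothetical; Max suspends
combinator expressions rather than Python closures):

```python
def nats(n):
    """Infinite list n, n+1, n+2, ... — already in WHNF:
    a head:tail construction whose tail is an unforced thunk."""
    return (n, lambda: nats(n + 1))

def take(k, lst):
    """Force only the first k cells of a possibly infinite list."""
    out = []
    while k > 0 and lst is not None:
        head, tail = lst
        out.append(head)
        lst = tail()                # force the next cell only on demand
        k -= 1
    return out

print(take(5, nats(0)))  # → [0, 1, 2, 3, 4]
```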

Local functions in reduce:

Eval:  Takes a combinator expression and returns a reduced expression. There are
three cases:

    If  the  expression  is a function application, apply the (reduced) function
    to its argument.

    If the expression is a number or an error, return the value unchanged.

    Otherwise,  the  expression  represents  a  function.  In this case return a
    function suspension 4-tuple:

        The name of the function.
        An empty vector of function arguments (none so far).
        The argument template or function header.
        Definition if the function is a combinator.

Apply: Takes a function suspension and an argument and returns either an extend-
ed suspension or if the argument vector is complete, the result of the applicat-
ion.  In  this case the function is either primitive, and is evaluated by [numb]
or  [list], or it's a combinator and so values are substituted from the argument
vector into the combinator definition and the resulting expression returned.
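
Apply's suspension mechanism can be sketched in Python (a hypothetical
modelling, with assumed names suspend/apply_, omitting the combinator case):

```python
def suspend(name, arity, fn):
    """A function suspension: name, arguments so far, arity, code."""
    return (name, (), arity, fn)

def apply_(susp, arg):
    """Extend a suspension; evaluate once the argument vector is complete."""
    name, args, arity, fn = susp
    args = args + (arg,)
    if len(args) == arity:
        return fn(*args)            # saturated: evaluate
    return (name, args, arity, fn)  # partial: extended suspension

plus = suspend("+", 2, lambda a, b: a + b)
print(apply_(apply_(plus, 3), 4))   # → 7
```

A function applied to fewer than its defined number of arguments simply remains
a suspension, which is one of the WHNF cases above.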

Numb: Applies a numeric function to its argument(s). All numeric primitive func-
tions are "strict" in their first argument which means that this must be reduced
to a number before the evaluation can proceed.

List:  Applies  a list function to its argument(s). List primitives require that
their first argument be in WHNF.

Whnf: Iteratively reduces an expression to weak head normal form.

Extend takes the initial environment input through Max's left argument and comp-
lements it with definitions for each of the primitive functions and combinators.
The  types of the primitive functions and the definitions of the combinators are
supplied by: const'ptypes' and const'cdefs'.

Combinator  types are inferred from their definitions; a process which is satis-
fyingly circular:

    type:  Deduces  the  type of the expression from the known types of its con-
    stituent combinators. Therefore it needs the combinator form of the definit-
    ion provided by 'compile'.

    compile: Produces the combinator form of an expression from the lambda equi-
    valent.  Unfortunately,  the  optimised result may contain combinators whose
    type  has not yet been inferred, so the process will fail. Avoiding the opt-
    imisation would produce excessively large combinator definitions with a con-
    sequent delay in initialisation.

The  cycle  is broken by restricting the optimisations available to the compiler
to  those employing combinators whose type has already been determined. The pro-
cess is boot-strapped with a set of "primitive" combinator types: const'ptypes',
for  S,  K, I, and Y. As typing proceeds, compile is able to use more optimisat-
ions  and  thus  produce smaller expressions and quicker typing. Each combinator
name is displayed briefly during this process.

To  extend  Max's set of combinators, we need only add definitions to the const-
'cdefs' table. The new definitions will be typed at "boot" time.

Const produces tables of all of the constants used in the implementation.

Alph:   Alphabet of names that can be used in definitions.

Vars:   Alphabet of type variable names.

Cdefs:  Table  of  combinator definitions, each entry comprising: name, argument
        template, definition in terms of arguments.

Ptypes: Table of types of primitive functions and combinators.

Opts:   Table of Curry/Turner combinator optimisations.

Temps:  Table of pattern matching templates.

Trees  is a utility, borrowed from Min, which is not used directly in the imple-
mentation.  It  is handy for displaying intermediate expression trees that occur
during the processing of a Max script. An example is:

          trees parse's0j=j, s(+i)j=+(sij), s::, ~s'
       ┌─=┐    ┌──────⋄───┐
     ┌─@┐ j  ┌─=─┐      ┌─⋄─┐
    ┌@┐ j ┌──@┐ ┌@──┐  ┌⊤┐ ┌~┐
    s 0  ┌@─┐ j + ┌─@┐ s ? ? s
         s ┌@┐   ┌@┐ j
           + i   s i

Max syntax

    stmt  :=  defn | expr | type | stmt , stmt      / Statement.

    defn  :=  name argt* = expr                     / Definition.
    argt  :=  name | npat | lpat                    / Function argument.
    name  :=  alph                                  / Single char name.
    npat  :=  digt | ( + argt )                     / Numeric pattern.
    lpat  :=  null | ( argt : lpat )                / List pattern.

    expr  :=  prim | expr expr | ( expr ) |         / Expression.
              expr whex | list

    prim  :=  name | digt | pfun                    / Primitive object.

    whex  :=  ..* defn | whex whex                  / Where expression.

    type  :=  tdec | tenq                           / Type statement.
    tdec  :=  name :: texp                          / Type declaration.
    tenq  :=  expr ::                               / Type enquiry.

    texp  := tvar | [ texp ] | texp → texp |        / Type expression.
             ( texp )

    list  :=  [] | [ elts ] | expr : list           / List.
    elts  :=  expr | expr , elts                    / List elts.

    alph  :=  a | b | ··· | z                       / Name alphabet.
    digt  :=  0 | 1 | ··· | 9                       / Numeric digits.
    tvar  :=  ⍺ | ⍵ | ∊ | ⍳ | ⍴ | ···               / Type variable.
    pfun  :=  +                                     / Primitive function.

Where: ⍺* means 0 or more ⍺s.
      ⍺⍺* means 1 or more ⍺s.

Binding strengths and associativity

    [6]  (a b) c  / Function application.
    [5]  a:(b:c)  / List construction.
    [4]  a\(b\c)  / Lambda abstraction.
    [3]  a=(b=c)  / Definition.
    [2]  a.(b.c)  / Localisation.
    [1]  a,(b,c)  / Item separation.
    [0]  (a,b),c  / Statement separation.

1. The  example using the Sieve of Eratosthenes appears to go to sleep after the
   promising start: [2,3,5,7,11,. This came as a bit of a blow to the author who
   had hoped to get rich by selling large primes to encryption agencies :-). The
   fault is due to the reduction engine's using call-by-name as opposed to call-
   by-need  semantics,  which leads to the repeated evaluation of arguments. The
   number of reductions grows rapidly with each term: 131, 835, 11,509, 117,745,
   4,881,653, 38,063,959, 1,416,888,935, ...

   We  could ameliorate this problem to some extent by using "memoization" tech-
   niques (see dfns.dws/notes.memo). The above example goes significantly faster
   if  we  add  a simple linear cache to function [reduce]. This requires adding
   just two extra lines:

        keys vals←⍬ ⍬                       ⍝ initial cache.            |
        cache←{⊃⌽keys vals,∘⊂←⍺ ⍵}          ⍝ !extend cache!            |

   and modifying two others:

        eval←{                              ⍝ Value of @exp.
            (⊂⍵)∊keys:(keys⍳⊂⍵)⊃vals        ⍝ value in cache: done.     |
            ⍴⍴⍵:⍵ cache(∇ 1⊃⍵)apply 2⊃⍵     ⍝ function application.     |
            ~⍵∊tags:⍵                       ⍝ number, null or error.
            ⍵'',(tags⍳⍵)⊃defs               ⍝ function suspension.
        }                                   ⍝ exp ← :: exp

   Note  however,  that adding the "!extend cache!" line above would violate the
   claim that "the implementation of Max uses no variables".
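
The effect of such a cache can be sketched in Python by counting calls with and
without memoization (a hypothetical illustration; Max's cache is the two lines
above, keyed on whole expressions, and its reduction counts differ):

```python
from functools import lru_cache

calls = 0

def fib(n):                      # call-by-name style: repeats work
    global calls
    calls += 1
    return n if n < 2 else fib(n - 1) + fib(n - 2)

fib(20)
print(calls)                     # → 21891 calls

calls = 0

@lru_cache(maxsize=None)
def fibm(n):                     # call-by-need style: shares results
    global calls
    calls += 1
    return n if n < 2 else fibm(n - 1) + fibm(n - 2)

fibm(20)
print(calls)                     # → 21 calls, one per distinct argument
```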

2. The type checker currently refuses to infer the type of \h.h[h], whereas it's
   perfectly happy with ↑[↑]. For this reason, the multi-line example in the In-
   troduction starting:

        p 2 [                       / Pick 2th function from list.
            f . f x = h x,          / head,
            f . f x = h (t x),      / head of tail,

   doesn't  yet  work,  although it appears to be a correct Max expression. This
   may be due to the following bug:

3. Max assigns a local name (say "i") either:

    as a function ⍺rgument:             f i = (i +)(i 0)            [⍺]
    or as an expression ⍵here clause:   (i +)(i 0) . i x = x        [⍵]

   In case [⍺], the type-checker must infer the type of "i" solely from the con-
   text  in which it is used within the body of the function. To have any chance
   of  winding  up  with  a  reasonable typing and to catch any type errors, the
   checker  refines its notion of a name's type using unification. In [⍺] above,
   the leftmost occurrence of "i" (i +) implies a type of (#→#)→⍺ and the right-
   most  (i 0),  a type of #→⍵. As these types cannot be unified, the expression
   as  a  whole fails. Note that if the checker had been more lax and assigned a
   type  of  ⍺→⍵  in  both cases, the unification would have worked but the type
   implied  by the expression (i +)(i 0) as a whole would be just ⍵, which tells
   us nothing.

   In  case [⍵], the type-checker has a considerably easier job. The type of "i"
   can be deduced directly from its definition to be ⍺→⍺. If the checker acknow-
   ledges  this  distinction and _preserves_ rather than _refines_ this type as-
   signment  while checking the body of the function, we wind up with (i +)(i 0)
   typed as #, which is spot-on.

   Unfortunately,  this  implementation of Max is unable to distinguish [⍺] from
   [⍵] as it converts both to lambda expressions and then to combinators _prior_
   to  type  checking.  This  means  that case [⍵], which should work, reports a
   type error.

   In the literature, definition "i" in [⍺] and [⍵] is termed "lambda-bound" and
   "let-bound" (or "where-bound") respectively.  In the latter case,  the  type-
   checker  should mark "i" as "generic" and refrain from refining its type on en-
   countering references within the expression.

   Notice that a _global_ definition retains its generic type and so the following
   statement is accepted by the type-checker:

        ix=x, i+(i0)    // global i::⍺→⍺

1. For  no better reason than to explore a different technique, the original in-
   tention  was  to give Max a reduction mechanism distinct from Min's. The plan
   was  to  dispense  with  combinators altogether and reduce lambda expressions
   directly.  Extended  Max would expose these together with the set of internal
   primitive  functions  and a tracer would show the reduction of lambda, rather
   than combinator, expressions.

            z.z=0:z ;

   A  simple  eval-apply  lambda  reducer  ran into trouble with successive list
   items  forcing  the accumulation of multiple environments. This extra baggage
   seemed to slow reduction to an unacceptable degree and any attempt to "factor
   out" common environments didn't seem to pay off.

   A  version powered by a lazy (call-by-name) SECD machine (Landin 1964) worked
   surprisingly  well  and  a  call-by-need version would certainly be worth re-
   visiting.

   However,  while  at  first  showing  promise, these attempts foundered on the
   difficulty  of  inferring  the type of lambda abstractions. Ways of achieving
   this  are  well understood, but sadly not by this author. An overriding bene-
   fit  of  the  combinator implementation is that expressions are comprised ex-
   clusively  of primitive objects of known type and function applications. Such
   constructs can be typed in a single recursive pass.

   No  attempt  has  yet been made to implement a more modern engine such as STG
   (The  Spineless  Tagless G-Machine - Peyton Jones 1992) or TIM (The Three In-
   struction Machine - Peyton Jones and Lester 1992).

2. An earlier draft of Max supported "tuples".  A tuple is Max's equivalent of a
   structure or record and is distinguished from a list  by  being  enclosed  in
   parentheses: (+,0,[]). A tuple is in some sense the dual of a list: a list of
   a particular type may have a varying number of items but each must be of  the
   same type; a tuple has a fixed number of items but each may be of a  distinct
   type. The type of the above example is denoted:: (#→#,#,[⍺]).

   The construct was later removed because the extra facility it provided to the
   language  didn't  quite  seem to justify the additional complexity its imple-
   mentation incurred.

   An example of the distinction between lists and tuples is the old APL favour-
   ite,  the  "inverted"  file.  A "regular" component file where each component
   contains  many  fields  is analogous to a list-of-tuples, whereas an inverted
   file (one component per field type), is a tuple-of-lists.

3. At present there is no tracer, but if one were added, it might be pleasing to
   watch  the  type inference process. It's not clear what this should look like
   for  lambda  abstractions,  but  for  simple applicative expressions, perhaps
   something along the lines of:

            ↑[↑][↑] :: ;

Starting Over
In summary, a rewrite of Max might:
- Reintroduce tuples;
- Have the type-checker distinguish where-bound from lambda-bound definitions;
- Reduce compiled lambda expressions directly using a call-by-need machine;
- Implement a tracer;
- Perhaps implement some sort of optional tracing of the type inference process.

Local  definitions using indentation appeared in Landin's language ISWIM (If you
See  What I Mean) in 1966. He used the term "off-side rule" (borrowed from Socc-
er)  to describe the scheme. The off-side rule is employed in a number of modern
functional languages including Haskell (1999).

In  his  paper  "Basic Polymorphic Typechecking" (1987), Luca Cardelli gives the
following "bit of history":

   "Polymorphic typing of programs was envisioned by Strachey; his lecture notes
    on  fundamental  concepts  of  programming languages [Strachey 1967] already
    contain much of the notation and terminology used today.

    Polymorphic  types were already known as "type schemas" in combinatory logic
    [Curry 1958].  Extending  Curry's  work, and collaborating with him, Hindley
    introduced  the  idea  of a principal type schema, which is the most general
    polymorphic  type  of an expression, and showed that if a combinatorial term
    has  a  type,  then  it has a principal type [Hindley 1969]. In doing so, he
    used  a  result  by Robinson about the existence of most general unifiers in
    the  unification  algorithm [Robinson 1965]. These results contained all the
    germs  of  polymorphic typechecking, including the basic algorithms. The ex-
    istence of principal types means that a type inference algorithm will always
    compute  a  unique  "best"  type for a program; moreover, unification can be
    used to perform this computation. However, these results did not immediately
     influence  the  programming language community, because of their theoretical
     orientation.

    Influenced  by Strachey, and independently from Hindley, Milner rediscovered
    many of these ideas in the context of the LCF proof generation system [Gord-
    on 1979], which included the first version of the ML language [Milner 1984].
    He  introduced  a crucial extension to Hindley's work: the notion of generic
    and non-generic type variables, which is essential for handling declarations
    of polymorphic functions. Milner implemented the first practical polymorphic
    typechecker,   and  showed that the type system is sound [Milner 1978]. More
    recently,  Milner  and  Damas proved the principal-type property for the ex-
    tended system [Damas 1982] which implies that the type system is decidable."

1.  Cardelli, L., (1987) Basic Polymorphic Typechecking. Science of Computer
    Programming 8/2.

2.  Curry, H.B., Feys, R. (1958) Combinatory Logic, North-Holland, Amsterdam.

3.  Damas, L., Milner, R.,  (1982) Principal type-schemes for functional pro-
    grams, Proc. POPL 82, pp. 207-212.

4.  Gordon, M.J., Milner, R., Wadsworth, C.P., (1979) Edinburgh LCF, Lecture
    Notes in Computer Science, No 78, Springer-Verlag.

5.  Hindley, R., (1969) The principal type scheme of an object in combinatory
    logic, Transactions of the American Mathematical Society, Vol. 146.

6.  Landin, P.J., (1964) The Mechanical Evaluation of Expressions. Computer
    Journal, 6. pp 308-320.

7.  Landin, P.J., (1966). The Next 700 Programming Languages. Communications of
    the ACM 9, 3. pp 157-166.

8.  Milner, R., (1978). A Theory of type polymorphism in programming. Journal of
    Computer and System Science. Vol. 17, pp.348-75.

9.  Milner, R., (1984) A proposal for Standard ML, Proc. of the 1984 ACM Sym-
    posium on Lisp and Functional Programming.

10. Peyton Jones, S.L., (1992). Implementing lazy functional languages on stock
    hardware: the Spineless Tagless G-machine. Journal of Functional Programm-
    ing 2(2), July 1992, pp. 127-202

11. Peyton Jones, S.L. and Lester, D.R., (1992). Implementing functional lang-
    uages: a tutorial, Prentice Hall, 1992. Now out of print, but try:

12. Peyton Jones, S. and Hughes, J. (editors) (1999) Haskell 98 - A non-strict,
    purely functional language.

13. Robinson, J.A., (1965). A machine-oriented logic based on the resolution
    principle. Journal of the ACM. Vol. 12, No. 1, pp. 23-41.

14. Strachey, C. (1967) Fundamental concepts in programming languages, lecture
    notes for the International Summer School in Computer Programming, Copen-
    hagen, 1967.

Back to: contents