Joris van der Hoeven
LIX, CNRS
École polytechnique
91128 Palaiseau Cedex
France
vdhoeven@lix.polytechnique.fr
http://lix.polytechnique.fr/~vdhoeven
Overview of the Mathemagix type system
November 3, 2023
This work has been supported by the ANR-09-JCJC-0098-01 MaGiX project.
The goal of the Mathemagix project is to develop a new and free computer algebra system, based on a compiled and strongly typed language. In this paper, we give an informal overview of the underlying type system of this language, which allows for heavy overloading of notations, including parametric overloading with parameters in arbitrary categories.
Keywords: Mathemagix, type system, overloading, parametric polymorphism, language design, computer algebra
Until the mid nineties, the development of computer algebra systems tended to exploit advances in the area of programming languages, and sometimes even influenced the design of new languages. The Formac system [3], for instance, was conceived as a formula manipulation extension of Fortran. The Macsyma system [23, 26] was similarly built on top of Lisp, and the Scratchpad line of systems [12, 18, 19, 20] eventually gave rise to the strongly typed Axiom system [21, 1] and the Aldor compiler [35, 36].
After this initial period, computer algebra systems have been less keen on exploiting new ideas in language design. One important reason is that a good language for computer algebra is more important for developers than for end users. Indeed, typical end users tend to use computer algebra systems as enhanced pocket calculators, and rarely write programs of substantial complexity themselves. Another reason is specific to the family of systems that grew out of Macsyma: since these systems were implemented on top of Lisp, their user languages inherited the untyped and interpreted nature of this host language.
In our opinion, this has led to an unpleasant current situation in computer algebra: there is a dramatic lack of a modern, sound and fast programming language. The major systems on the market, such as Mathematica [37] and Maple [8], still rely on interpreted and weakly typed languages.
The absence of modern languages for computer algebra is even more critical whenever performance is required. Nowadays, many important computer algebra libraries (such as GMP [10] and MPFR [13]) implement fast arithmetic for multiple precision integers and floating point numbers. Unfortunately, these libraries are written in C or C++, and their integration into interpreted computer algebra systems remains tedious and often inefficient.
For these reasons, we have started the design of a new and free software, Mathemagix [15, 16], based on a compiled and strongly typed language.
In this paper, we will focus on the underlying type system. We present an informal overview of this system and highlight in which respect it differs from existing systems. We plan to provide a more detailed formal description of the type system in a future paper.
The central idea behind the design of the Mathemagix language is that the type system should reflect usual mathematical structures and notations as faithfully as possible: types play the role of mathematical sets, categories the role of algebraic structures such as monoids, groups and rings, and heavily overloaded mathematical notations receive a precise, statically determined meaning.
One consequence of this design philosophy is that program interfaces admit very detailed specifications: although the actual implementations are not formally proven, the combination of various components is sound as long as each of the components fulfills its specification. By contrast, dynamically typed systems can only detect violations of such specifications at run time, if at all.
Another consequence of the design philosophy is that the language provides a natural first step towards fully certified mathematical software: in the longer run, we hope that categories can be enriched with formal mathematical properties and implementations with correctness proofs (see also the end of section 3).
In order to make the above discussion about the main design philosophy
more concrete, we will consider the very simple example of computing the
square of an element in a monoid. In section 2, we will show how such a function would typically be written
in various existing languages, and compare with what we would do in Mathemagix.
In section 3, we will continue with a more complete description of the primitives of the type system which have currently been implemented in the compiler. We will also discuss various difficulties that we have encountered and some plans for extensions.
As stated before, we have chosen to remain quite informal in this paper. Nevertheless, in section 4, we will outline the formal semantics of the type system. The main difficulty is to carefully explain the possible meanings of expressions based on heavily overloaded notations, and to design a compiler which can determine these meanings automatically.
Given the declarative power of the language, it should be noticed that the compiler will not always be able to find all possible meanings of a program. However, this is not necessarily a problem as long as the compiler never assigns a wrong meaning to an expression. Indeed, given an expression whose meanings are particularly hard to detect, it is not absurd to raise an error, or even to loop forever: in such cases, it will always be possible to make the expression easier to understand by adding some explicit casts. Fortunately, most natural mathematical notations also have a semantics which is usually easy to determine; otherwise, mathematicians would have a hard time understanding each other in the first place!
We will consider a very simple example in order to illustrate the most essential differences between Mathemagix and existing languages: the computation of the square of an element of a monoid. Here we recall that a monoid is simply a set $M$ together with an associative multiplication $* : M \times M \to M$.
In Mathemagix, the corresponding category is defined as follows:
category Monoid == {
  infix *: (This, This) -> This;
}
We may now define the square of an element of a
monoid by
forall (M: Monoid) square (x: M): M == x * x;
Given an instance x of any type T with a multiplication infix *: (T, T) -> T, we may then compute the square of x using square x.
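To make this concrete, here is a minimal usage sketch (our own illustration, assuming that the standard library provides Int with its usual multiplication, as well as the output port mmout and line feed lf used later in this paper):

x: Int == 111;
mmout << square x << lf;  // Int satisfies Monoid via infix *, so M == Int; prints 12321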
In our definition of a monoid, we notice that we did not specify the multiplication to be associative. Nothing prevents the user from replacing the definition by
category Monoid == {
  infix *: (This, This) -> This;
  associative: This -> Void;
}
This allows the user to indicate that the multiplication on a
type T is associative, by implementing the
“dummy” function associative for T.
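For instance, integer multiplication could be marked as associative as follows (a hypothetical sketch; we assume that infix * on Int is provided by the standard library and that an empty block is a valid body for a Void function):

associative (x: Int): Void == {};  // dummy witness: from now on, Int satisfies the extended Monoid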
At this point, the type system provides no means for specifying the mathematical semantics of associativity: if we would like to take advantage of this kind of semantics, then we should also be able to automatically prove associativity during type conversions. This is really an issue for automatic theorem provers, which is beyond the current design goals of Mathemagix.
As stated in the introduction, a lot of the inspiration for Mathemagix came from the Axiom [21, 1] and Aldor [35, 36] systems. In Aldor, the category of monoids is defined in a similar way:
define Monoid: Category == with {
  *: (%, %) -> %;
}
However, the forall primitive inside Mathemagix has no direct equivalent in Aldor: generic functions such as square have to be wrapped inside a parameterized package:
Squarer (M: Monoid): with {
  square: M -> M;
} == add {
  square (x: M): M == x * x;
}
In order to use the template for a particular class, say Integer, one has to explicitly import the instantiation of the template for that particular class:
import from Squarer (Integer);
The necessity to encapsulate templates inside packages makes the class hierarchy in Aldor heavier than strictly necessary, and the explicit imports of all required instantiations constitute an extra burden for the user.
The C++ language [34] does provide support for the definition of templates:
template<typename M>
M square (const M& x) {
  return x * x;
}
However, as we can see from this example, the language itself does not provide a means for requiring M to be a monoid. Several C++ extensions with “signatures” [2] or “concepts” [30] have been proposed in order to add such requirements.
In the above piece of code, we also notice that the argument x is of type const M& instead of M. This kind of interference of low level details with the type system is at the source of many problems when writing large computer algebra libraries in C++.
Mainstream strongly typed functional programming languages, such as OCaml [22] and Haskell [29], do not allow for the overloading of operators such as *. In OCaml, for instance, integer and floating point multiplication are distinct operations:
# let square x = x * x;;
val square : int -> int = <fun>
# let float_square x = x *. x;;
val float_square : float -> float = <fun>
At any rate, this means that we somehow have to specify the monoid in which we want to take a square when applying the square function of our example. Nevertheless, modulo acceptance of this additional disambiguation constraint, it is possible to define the analogue of the Monoid category and the routine square:
# module type Monoid = sig
    type t
    val mul : t -> t -> t
  end;;
# module Squarer =
    functor (El : Monoid) -> struct
      let square x = El.mul x x
    end;;
As in the case of Aldor, the functor needs to be instantiated explicitly before the square function can be applied to elements of a concrete monoid:
# module Int_Monoid = struct
    type t = int
    let mul x y = x * y
  end;;
# module Int_Squarer = Squarer (Int_Monoid);;
# Int_Squarer.square 11111;;
- : int = 123454321
Haskell provides type classes, which are close analogues of our categories; our example can be programmed as follows:

class Monoid a where
  (*) :: a -> a -> a

square :: Monoid a => a -> a
square x = x * x
In order to enable the square function for a particular type, one has to create an instance of the monoid for this particular type. For instance, we may endow String with the structure of a monoid by using concatenation as our multiplication:
instance Monoid [Char] where
  x * y = x ++ y
After this instantiation, we may square the string "hello" using square "hello". In order to run this example in practice, we notice that there are a few minor problems: the operator * is already reserved for standard multiplication of numbers, and one has to use the -XFlexibleInstances option in order to allow for the instantiation of the string type.
The nice thing about the above mechanism is that we may instantiate other types as monoids as well, and share the name * of the multiplication operator among all these instantiations.
Although Coq [5] is an automated proof assistant rather than a computer algebra system, its underlying language [4, 31] shares interesting features with ours. Following [31], it is also possible to define parametric implicit conversions, such as the inclusion of $\mathsf{List}\,A$ in $\mathsf{List}\,B$ for any inclusion of $A$ in $B$. Moreover, implicit conversions can be composed into chains of implicit conversions.
One major problem with implicit conversions is that they quickly tend to become ambiguous (and this explains why they were entirely removed from Aldor). For instance, a conversion of an integer into a polynomial with rational coefficients may typically be achieved in two different ways as the succession of two implicit conversions; namely, as $\mathsf{Int} \to \mathsf{Rational} \to \mathsf{Polynomial\ Rational}$ or as $\mathsf{Int} \to \mathsf{Polynomial\ Int} \to \mathsf{Polynomial\ Rational}$. In such examples, one of the two paths is usually more efficient than the other, so we really want to enforce the most efficient solution. Currently, Mathemagix therefore forbids genuine implicit conversions altogether; section 3.6 describes how they are emulated.
Essentially, the difference between ML-style languages and Mathemagix can be understood as follows: if we allow the declaration
square x = x * x
without specifying the type of x, then the symbol * should not be too heavily overloaded in order to allow the type system to determine the type of square. In other words, no sound strongly typed system can be designed which allows both for highly ambiguous function declarations and highly ambiguous function applications.
Whether the user prefers a type system which allows for more freedom at the level of function declarations or function applications is a matter of personal taste. We may regard Mathemagix as a system which privileges freedom at the level of function applications: operators such as * can be overloaded at will, at the price of declaring the full type of every function.
We finally notice that signatures are now implemented under various names in a variety of languages. For instance, in Haskell they appear in the form of type classes, and proposals have been made for adding them to C++ [2, 30].
There are three main kinds of objects inside the Mathemagix language: ordinary values (constants, variables and functions), types and categories. Constants and mutable variables are declared as follows:
test?: Boolean == pred? x;  // constant
flag?: Boolean := false;    // mutable
In this example, test? is a constant, whereas flag? is a mutable variable which can be given new values using the assignment operator :=. The actual type of the mutable variable flag? is Alias Boolean. Functions can be declared using a similar syntax:
foo (x: Int): Int == x * x;
Functions may also be curried, in which case partial applications are allowed, and they may be recursive:

shift (x: Int) (y: Int): Int == x + y;
iterate (foo: Int -> Int, n: Int) (x: Int): Int ==
  if n = 0 then x else iterate (foo, n-1) (foo x);
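A brief usage sketch of these declarations (our own example; mmout and lf as in section 3.4 below): shift 5 is a partial application of type Int -> Int, and iterate composes a function with itself n times:

plus_five: Int -> Int == shift 5;
mmout << iterate (plus_five, 3) 0 << lf;  // plus_five (plus_five (plus_five 0)) == 15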
The return type and the types of part of the function arguments are allowed to depend on the arguments themselves. For instance:
square (x: M, M: Monoid): M == x * x;
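Under the assumption that Int satisfies Monoid (it provides infix *), one might then write the following call (a hypothetical sketch; note how the type of the first argument depends on the second one):

mmout << square (7, Int) << lf;  // the monoid is passed explicitly; prints 49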
New classes are defined using the class primitive, as in the following example:
class Point == {
  mutable {
    x: Double;
    y: Double;
  }
  constructor point (x2: Int, y2: Int) == {
    x == x2;
    y == y2;
  }
}
We usually give the constructor a name similar to that of the class itself, but the user is free to pick other names. The mutable keyword specifies that we have both read and read-write accessors postfix .x and postfix .y for the fields x and y. Contrary to C++, new accessors can be defined outside the class itself:
postfix .length (p: Point): Double == sqrt (p.x * p.x + p.y * p.y);
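A short usage sketch (our own illustration; we assume sqrt on Double from the standard library, as in the accessor above, and the widening of the Int constructor arguments to Double as in the class definition):

p: Point == point (3, 4);
mmout << p.length << lf;  // sqrt (3*3 + 4*4) == 5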
As in the case of functions, classes are allowed to depend on parameters, which may be either type parameters or ordinary values. Again, there may be dependencies among the parameters. One simple example of a class definition with parameters is:
class Num_Vec (n: Int) == {
  mutable v: Vector Double;
  constructor num_vec (c: Double) == {
    v == [ c | i: Int in 0..n ];
  }
}
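An instantiation sketch (our own example; we assume that the length parameter n is inferred from the expected type): the value parameter n becomes part of the type, so vectors of different lengths live in different types:

v: Num_Vec 3 == num_vec 1.0;  // the vector [1.0, 1.0, 1.0], of type Num_Vec 3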
Categories are the central concept for achieving genericity. We have already seen an example of the definition of a category in section 2.1. Again, categories may take parameters, with possible dependencies among them. For instance:
category Module (R: Ring) == {
  This: Abelian_Group;
  infix *: (R, This) -> This;
}
As in Aldor, the collection of all types is itself a category Type, with an empty list of requirements. Another basic example is the category To T of all types which can be converted into T:

category Type == {}
category To (T: Type) == {
  convert: This -> T;
}
Given an ordinary type T, we write x: T if x is an instance of T. In the case of a category Cat, we write T: Cat if a type T satisfies the category, that is, if all category fields are defined in the current context, after replacing This by T. Contrary to Aldor, a type thus never needs to declare explicitly that it satisfies a given category: the relation T: Cat holds automatically as soon as the required fields are present.
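For instance (a sketch; as_double is a hypothetical low level primitive standing in for the actual coercion), after the declaration

convert (x: Int): Double == as_double x;

the relation Int: To Double holds automatically, since the field convert: This -> T of To Double is defined in the current context after replacing This by Int and T by Double.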
The main strength of the Mathemagix type system is the expressiveness of its overloading mechanisms. First of all, functions can be overloaded on the number and the types of their arguments:
infix * (c: Double, p: Point): Point == point (c * p.x, c * p.y);
infix * (p: Point, c: Double): Point == point (p.x * c, p.y * c);
Contrary to most existing strongly typed languages, Mathemagix also allows for the overloading of constants and variables:
bar: Int == 11111;
bar: String == "Hello";
mmout << bar * bar << lf;
mmout << bar >< " John!" << lf;
Internally, the Mathemagix compiler considers the overloaded constant bar to be a single constant of type And (Int, String). Functions can be overloaded in a similar way; assume, for instance, that we define
foo (x: Int): Int == x + x;
foo (s: String): String == reverse s;
Then the expression foo bar will be assigned the type And (Int, String). An example of a truly ambiguous expression would be bar = bar, since it is unclear whether we want to compare the integers 11111 or the strings "Hello". True ambiguities will provoke compile time errors.
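Such true ambiguities can be resolved by adding explicit casts, using the :> notation from section 3.6 below (a sketch; equality on String is assumed from the standard library):

check: Boolean == (bar :> String) = (bar :> String);  // compare the strings, not the integers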
The second kind of parametric overloading relies on the forall keyword. The syntax is similar to that of template declarations in C++:
forall (M: Monoid) fourth_power (x: M): M == x * x * x * x;
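No explicit instantiation is required at the call site; the compiler infers the monoid M from the type of the argument (a usage sketch, assuming the standard arithmetic on Int):

mmout << fourth_power 3 << lf;  // M == Int; prints 81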
Internally, the Mathemagix compiler considers fourth_power to be a constant of the universally quantified type Forall (M: Monoid, M -> M). The forall primitive can also be applied to other kinds of declarations, such as the square bracket notation for the explicit construction of vectors:
forall (T: Type) operator [] (t: Tuple T): Vector T == vector t;
This notation in particular defines the empty vector [], which admits the universally quantified type Forall (T: Type, Vector T). Consequently, and contrary to what would have been the case in C++, the declarations
v: Vector Int == [];
w: Vector Int == [] >< [];  // concatenation
would typically be all right. On the other hand, the expression #[] (size of the empty vector) is an example of a genuine and parametric ambiguity.
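Again, an explicit cast with the :> notation from section 3.6 removes such a parametric ambiguity (our own sketch):

n: Int == # ([] :> Vector Int);  // the size of the empty integer vector, i.e. 0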
In comparison with C++ templates, Mathemagix templates are fully type checked at the point of their declaration, after which they can be instantiated transparently. For instance, with square from section 2.1, we may define:
fourth_power (x: Double): Double == square square x;
Contrary to C++, templates may also be overloaded by more specific implementations, in which case the more specific implementation is preferred:
forall (R: Number_Type) pi (): R == …;
pi (): Double == …;
One major difference between Mathemagix and most other strongly typed languages is the complete absence of genuine implicit conversions.
Nevertheless, the parametric overloading facility makes it easy to emulate implicit conversions, with the additional benefit that it can be made precise when exactly implicit conversions are permitted. Indeed, we have already introduced the To T category, defined by
category To (T: Type) == {
  convert: This -> T;
}
Here convert is the standard operator for type conversions in Mathemagix. Now consider a scalar multiplication on vectors in which the scalar may be given in any type C that converts into the coefficient type M:
forall (M: Monoid, C: To M)
infix * (c: C, v: Vector M): Vector M ==
  [ (c :> M) * x | x: M in v ];
Here c :> M stands for the application of convert to c, retaining only the results of type M (recall that c might have several meanings due to overloading). This kind of emulated “implicit” conversion is so common that Mathemagix provides special syntactic sugar for it:
forall (M: Monoid)
infix * (c :> M, v: Vector M): Vector M ==
  [ c * x | x: M in v ];
In particular, this mechanism can be used in order to define converters with various kinds of transitivity:
convert (x :> Integer): Rational == x / 1;
convert (cp: Colored_Point) :> Point == cp.p;
The first example is also called an upgrader and provides a simple way for the construction of instances of more complex types from instances of simpler types. The second example is called a downgrader and can be used in order to customize type inheritance, in a way which is unrelated to the actual representation types in memory.
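The downgrader example above presupposes a class of colored points with a field p; a minimal hypothetical definition in the style of the Point class from section 3.2 might be:

class Colored_Point == {
  mutable {
    p: Point;
    color: String;
  }
  constructor colored_point (p2: Point, c2: String) == {
    p == p2;
    color == c2;
  }
}

With the downgrader in place, cp :> Point is available for any cp: Colored_Point, so that the underlying point can for instance be measured via (cp :> Point).length, even though Colored_Point does not inherit from Point in any memory layout sense.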
The elimination of genuine implicit converters also allows for several optimizations in the compiler. Indeed, certain operations such as multiplication can be overloaded hundreds of times in non trivial applications. In the above example of scalar multiplication, the notation c :> M indicates the precise argument where a conversion may take place, so that the compiler never needs to combine the many meanings of * with arbitrary chains of implicit conversions.
Functions with several arguments use a classical tuple notation. It would have been possible to follow the currying convention from functional programming languages, and to regard a function with two arguments of types A and B and values in C as a function of type $A \to (B \to C)$. Although this convention is more systematic and eases the implementation of a compiler, it is also non standard in mainstream mathematics; there, a binary operation such as + on Int is rather regarded as a function of type $(\mathsf{Int}, \mathsf{Int}) \to \mathsf{Int}$, or as the operator $+ : \mathsf{Int}^2 \to \mathsf{Int}$.
In order to accommodate functions with an arbitrary number of arguments and lazy streams of arguments, Mathemagix provides the special types Tuple T and Generator T. For instance, the evaluation of a multivariate polynomial at an arbitrary number of points can be declared as follows:
forall (R: Ring) eval (P: MVPol R, p: Tuple R): R == …;
The syntactic sugar takes care of the necessary conversions between tuples and generators. For instance, given a polynomial P: MVPol Int, the following would be valid evaluations:
eval (P, 1, 2..8, (9, 10), 11..20);
eval (P, (i^2 | i: Int in 0..100));
Notice that the notation of function application (or evaluation) can be overloaded itself:
postfix .() (fs: Vector (Int -> Int), x: Int): Vector Int ==
  [ f x | f: Int -> Int in fs ];
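A usage sketch which combines this overloaded application with the shift function from section 3.1 and the vector notation from section 3.5 (our own example):

fs: Vector (Int -> Int) == [ shift 1, shift 2, shift 3 ];
mmout << fs (10) << lf;  // applies each function in fs to 10, yielding [11, 12, 13]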
There are various natural and planned extensions of the current type system.
One of the most annoying problems that we are currently working on concerns literal integers: the expression 1 can naturally be interpreted as a machine Int or as a long Integer. Consequently, it is natural to consider 1 to be of type And (Int, Integer). For efficiency reasons, it is also natural to implement each of the following operations:
infix =: (Int, Int) -> Boolean;
infix =: (Integer, Integer) -> Boolean;
infix =: (Int, Integer) -> Boolean;
infix =: (Integer, Int) -> Boolean;
This makes an expression such as 1 = 1 highly ambiguous. Our current solution permits the user to prefer certain operations or types over others. For instance, we would typically prefer the type Integer over Int, since Int arithmetic might overflow. However, we still might prefer infix =: (Int, Int) -> Boolean over infix =: (Int, Integer) -> Boolean. Indeed, given i: Int, we would like the test i = 0 to be executed fast.
One rather straightforward extension of the type system is to consider other “logical types”. Logical implication is already implemented using the assume primitive:
forall (R: Ring) {
  …
  assume (R: Ordered)
  sign (P: Polynomial R): Int ==
    if P = 0 then 0 else sign P[deg P];
  …
}
The implementation of existentially quantified types will allow us to write routines such as
forall (K: Field) exists (L: Algebraic_Extension K)
roots (p: Polynomial K): Vector L == …;
Similarly, we plan the implementation of union types and abstract data types, together with various pattern matching utilities similar to those found in languages such as OCaml and Haskell.
We also plan to extend the syntactic sugar. For instance, given two aliases i, j: Alias Int, we would like to be able to write (i, j) := (j, i) or (i, j) += (1, 1). A macro facility should also be included, comparable to the one that can be found in Macaulay2 [11]; this would for instance allow for the notation

R = ZZ[x,y]

for the simultaneous introduction of the polynomial ring $R = \mathbb{Z}[x, y]$ and the two coordinate functions $x$ and $y$.
In the longer future, we would like to be able to formally describe mathematical properties of categories and algorithms, and provide suitable language constructs for supplying partial or complete correctness proofs.
In order to specify the semantics of the Mathemagix type system, it is convenient to work with a simplified and more classical version of the language. Source programs can be represented in a suitable variant of typed λ-calculus, extended with special notations for categories and overloading.
We will use a few notational conventions. For the sake of brevity, we will now use superscripts for specifying types. For instance, $\mathrm{square}^{\mathsf{Int} \to \mathsf{Int}}$ denotes the function square of type $\mathsf{Int} \to \mathsf{Int}$. For the sake of readability, we will also denote types $T$, $U$, etc. using capitalized identifiers and categories $\mathbf{C}$, $\mathbf{D}$, etc. using bold capitalized identifiers. Similarly, we will use the terms “type expressions” and “category expressions” whenever an expression should be considered as a type or a category. Notice however that this terminology is not formally enforced by the language itself.
The source language contains three main components:
Given expressions $f$ and $e$, we denote function application by $f(e)$, $f\,e$, or $e.f$.
Given a variable $x$, an expression $e$ and type expressions $A$ and $B$, we denote by $\lambda x^A . e^B$ the lambda expression which sends $x$ of type $A$ to $e$ of type $B$.
We will denote by $A \to B$ the type of the above λ-expression. In the case when $B$ depends on $x$, we will rather write $\Pi x^A . B$ for this type.
Hence, all lambda expressions are typed and there are no syntactic constraints on the types $A$ and $B$. However, “badly typed” expressions such as $\lambda x^{\mathsf{Int}} . x^{\mathsf{String}}$ will admit no correct interpretation in the sense of the section below.
Given variables $x_1, \ldots, x_n$, type expressions $T_1, \ldots, T_n$ and expressions $e, e_1, \ldots, e_n$, we may form the expression $e \;\mathrm{where}\; \{x_1^{T_1} = e_1; \ldots; x_n^{T_n} = e_n\}$. The informal meaning is: the expression $e$, with mutually recursive bindings $x_1 = e_1, \ldots, x_n = e_n$.
Given variables $x_1, \ldots, x_n$ and type expressions $T_1, \ldots, T_n$, we may form the data type $\mathrm{data}\;\{x_1: T_1; \ldots; x_n: T_n\}$. For instance, a list of integers might be declared using $\mathsf{List} = \mathrm{data}\;\{\mathrm{head}: \mathsf{Int}; \mathrm{tail}: \mathsf{List}\}$. We also introduce a special variable $\mathrm{This}$, which will be the type of the declared data structure itself.
Given variables $x_1, \ldots, x_n$ and type expressions $T_1, \ldots, T_n$, we may form the category $\mathrm{cat}\;\{x_1: T_1; \ldots; x_n: T_n\}$. For instance, we might introduce the $\mathbf{Monoid}$ category using $\mathbf{Monoid} = \mathrm{cat}\;\{* : (\mathrm{This}, \mathrm{This}) \to \mathrm{This}\}$.
Given two expressions $e$ and $f$, we may form the overloaded expression $e \wedge f$.
Given type expressions $T$ and $U$, we may form the intersection type $T \wedge U$.
Given a variable $x$, a type expression $T$ and an expression $e$, we may form the parametrically overloaded expression $\Lambda x^T . e$.
Given a variable $x$, a type expression $T$ and a type expression $U$, we may form the universally quantified type expression $\forall x^T . U$.
In the last two cases, the variable $x$ is often (but not necessarily) a type variable $M$ and its type $T$ a category $\mathbf{C}$.
The source language allows us to define an overloaded function such as

  $\mathrm{foo} = (\lambda x^{\mathsf{Int}} . x + x) \wedge (\lambda s^{\mathsf{String}} . \mathrm{reverse}\; s)$.   (1)

In a context where $x$ is of type $\mathsf{Int}$, it is the job of the compiler to recognize that $\mathrm{foo}$ should be interpreted as a function of type $\mathsf{Int} \to \mathsf{Int}$ in the expression $\mathrm{foo}\; x$.
In order to do so, we first extend the source language with a few additional constructs for disambiguating overloaded expressions. The extended language will be called the target language. In a given context $\Gamma$, we next specify when a source expression $e$ can be interpreted as a non ambiguous expression $\hat e$ in the target language. In that case, we will write $\Gamma \vdash e \rightsquigarrow \hat e$, and the expression $\hat e$ will always admit a unique type.
For instance, for $\mathrm{foo}$ as above, we introduce operators $\pi_1$ and $\pi_2$ for accessing the two possible meanings, so that

  $\Gamma \vdash \mathrm{foo} \rightsquigarrow \pi_1(\mathrm{foo})^{\mathsf{Int} \to \mathsf{Int}}$ and $\Gamma \vdash \mathrm{foo} \rightsquigarrow \pi_2(\mathrm{foo})^{\mathsf{String} \to \mathsf{String}}$.
For increased clarity, we will freely annotate target expressions by their types when appropriate. For instance, we might have written $\pi_1(\mathrm{foo})^{\mathsf{Int} \to \mathsf{Int}}$ instead of $\pi_1(\mathrm{foo})$.
Given an expression $e$, we may form the expressions $\pi_1(e)$ and $\pi_2(e)$.
Given expressions $e$ and $f$, we may form the expression $e\langle f\rangle$. Here $e$ should be regarded as a template and $e\langle f\rangle$ as its specialization at $f$.
There are many rules for specifying how to interpret expressions. We list a few of them:

  $\Gamma \vdash e \rightsquigarrow \hat e^{\,T \wedge U} \;\Longrightarrow\; \Gamma \vdash e \rightsquigarrow \pi_1(\hat e)^{\,T}$
  $\Gamma \vdash e \rightsquigarrow \hat e^{\,T \wedge U} \;\Longrightarrow\; \Gamma \vdash e \rightsquigarrow \pi_2(\hat e)^{\,U}$
  $\Gamma \vdash e \rightsquigarrow \hat e^{\,\forall x^T . U}$ and $\Gamma \vdash f \rightsquigarrow \hat f^{\,T} \;\Longrightarrow\; \Gamma \vdash e \rightsquigarrow \hat e\langle \hat f\rangle^{\,U[\hat f / x]}$

Here $U[\hat f / x]$ stands for the substitution of $\hat f$ for $x$ in $U$.
Given expressions $T$, $\mathbf{C}$ and $\sigma = \{x_1 = \hat e_1; \ldots; x_n = \hat e_n\}$, we may form the expression $T \uparrow_\sigma \mathbf{C}$. The informal meaning of this expression is “the type $T$, considered as an instance of $\mathbf{C}$, through specification of the structure $\sigma$”.
Given an expression $\hat T$ of the form $T \uparrow_\sigma \mathbf{C}$, we may form $\hat T\!\downarrow$, meaning “forget the category of $\hat T$”.
Given expressions $\hat e$ and $\hat T$, we may form the expression $\hat e \uparrow^{\hat T}$, which allows us to cast $\hat e$ to a type of the form $T \uparrow_\sigma \mathbf{C}$.
Given an expression $\hat e$ of such a type, we may also form the downward cast $\hat e\!\downarrow$.
In order to cast a given type $T$ to a given category $\mathbf{C} = \mathrm{cat}\;\{x_1: T_1; \ldots; x_n: T_n\}$, all fields of the category should admit an interpretation in the current context:

  $\Gamma \vdash x_i \rightsquigarrow \hat e_i^{\,T_i[T / \mathrm{This}]}$ for $i = 1, \ldots, n$ $\;\Longrightarrow\;$ $\Gamma \vdash T \rightsquigarrow T \uparrow_\sigma \mathbf{C}$, where $\sigma = \{x_1 = \hat e_1; \ldots; x_n = \hat e_n\}$.

Assuming in addition that $\Gamma \vdash e \rightsquigarrow \hat e^{\,T}$, we also have $\Gamma \vdash e \rightsquigarrow (\hat e \uparrow^{T \uparrow_\sigma \mathbf{C}})^{\,T \uparrow_\sigma \mathbf{C}}$. There are further rules for casting down.
A target expression $\hat e$ is said to be reduced if its type is not of the form $T \wedge U$, $\forall x^T . U$, or $T \uparrow_\sigma \mathbf{C}$.
The task of the compiler is to recursively determine all reduced interpretations of all subexpressions of a source program. Since each subexpression $e$ may have several interpretations, we systematically try to represent the set of all possible reduced interpretations by a conjunction $\hat e_1 \wedge \cdots \wedge \hat e_k$ of universally quantified expressions. In case of success, this target expression $\hat e$ will be the result of the compilation of $e$ in the relevant context $\Gamma$, and we will write $\Gamma \vdash e \Rrightarrow \hat e$.
Let us illustrate this idea on two examples. With $\mathrm{foo}$ as in (1) and $\mathrm{bar} = 11111^{\mathsf{Int}} \wedge \text{"Hello"}^{\mathsf{String}}$, there are two reduced interpretations of $\mathrm{foo\; bar}$:

  $\pi_1(\mathrm{foo})(\pi_1(\mathrm{bar}))^{\mathsf{Int}}$ and $\pi_2(\mathrm{foo})(\pi_2(\mathrm{bar}))^{\mathsf{String}}$.

Hence, the result of the compilation of $\mathrm{foo\; bar}$ is given by

  $\Gamma \vdash \mathrm{foo\; bar} \Rrightarrow \pi_1(\mathrm{foo})(\pi_1(\mathrm{bar})) \wedge \pi_2(\mathrm{foo})(\pi_2(\mathrm{bar}))$.
In a similar way, the result of a compilation may be a parametrically overloaded expression, such as $\Lambda M^{\mathbf{Monoid}} . \lambda x^{M} . x *_M x$ for the generic square function from section 2.1, where $*_M$ denotes the multiplication provided by the monoid structure of $M$.
Sometimes, the result of the compilation of an expression $e$ is a conjunction which contains at least two expressions of the same type. In that case, $e$ is truly ambiguous, so the compiler should return an error message, unless we can somehow resolve the ambiguity. In order to do this, the idea is to define a partial preference relation $\preccurlyeq$ on target expressions and to keep only those expressions in the conjunction which are maximal for this relation.
For instance, assume that we have a generic function $\mathrm{pi}$ of type $\forall R^{\mathbf{Number\_Type}} . () \to R$ and a constant $\mathrm{pi}$ of type $() \to \mathsf{Double}$, as in section 3.5. There, we have seen that the second declaration is a partial specialization of the first one, but not the inverse. Consequently, we should strictly prefer $\hat\pi_2$ over $\hat\pi_1\langle \mathsf{Double}\rangle$, and $\hat\pi_2()^{\mathsf{Double}}$ over $\hat\pi_1\langle \mathsf{Double}\rangle ()^{\mathsf{Double}}$, where $\hat\pi_1$ and $\hat\pi_2$ stand for the target interpretations of the two respective declarations of $\mathrm{pi}$.
As indicated in section 3.8, we are currently investigating further extensions of the preference relation $\preccurlyeq$ via user provided preference rules.
In absence of universal quantification, the search process for all reduced interpretations can in principle be designed to be finite and complete. The most important implementation challenge for the Mathemagix compiler therefore concerns pattern matching in the presence of universal quantification.
The main idea behind the current implementation is that all pattern matching is done in two stages: at the first stage, we propose possible matches for free variables introduced during unification of quantified expressions. At a second stage, we verify that the proposed matches satisfy the necessary categorical constraints, and we rerun the pattern matching routines for the actual matches. When proceeding this way, it is guaranteed that casts of a type to a category never involve free variables.
Let us illustrate the idea on the simple example of computing a square. So assume that we have the function $\mathrm{square}$ of type $\forall M^{\mathbf{Monoid}} . M \to M$ in our context, as well as a multiplication ${*}^{(\mathsf{Int}, \mathsf{Int}) \to \mathsf{Int}}$. In order to compile the expression $\mathrm{square}\; x$ with $x^{\mathsf{Int}}$, the algorithm will attempt to match $\mathsf{Int}$ with $M$ for some free variable $M$ of type $\mathbf{Monoid}$. At a first stage, we introduce a new free variable $M_0$ and match $\mathsf{Int}$ against $M_0$. This check succeeds with the bindings $M_0 = \mathsf{Int}$ and $M = M_0 \uparrow \mathbf{Monoid}$, but without performing any type checking for these bindings. At a second stage, we have to resolve the innermost binding and cast $\mathsf{Int}$ to $\mathbf{Monoid}$. This results in the correct proposal $M = \mathsf{Int} \uparrow_\sigma \mathbf{Monoid}$ for the free variable, where $\sigma = \{{*} = {*}^{(\mathsf{Int}, \mathsf{Int}) \to \mathsf{Int}}\}$. We finally rematch $\mathsf{Int}$ with $M$ and find the return type $\mathsf{Int}$.
In practice, the above idea works very well. Apart from the more pathological theoretical problems that will be discussed below, the only practically important problem that we currently do not treat is finding a “smallest common supertype” with respect to type conversions (see also [33]).
For instance, let $f$ be a function of type $\forall M^{\mathbf{Monoid}} . (M, M) \to M$. What should be the type of $f(x, y)$, where $x^X$ and $y^Y$ are such that $X$ and $Y$ are different? Theoretically speaking, this should be the type $\forall M^{\mathbf{C}} . M$, where $\mathbf{C}$ is the category of all monoids $M$ such that both $X$ and $Y$ can be converted into $M$ (that is, $X : \mathbf{To}\; M$ and $Y : \mathbf{To}\; M$ in the terminology of section 3.6).
However, the current pattern matching mechanism in the Mathemagix compiler does not support this kind of types.
It is easy to write programs which make the compiler fail or loop forever. For instance, given a context with the To category from section 3.6 and converters whose declarations themselves depend on further conversions of the same kind, the compilation of an innocent-looking expression will loop: the compiler will successively search for converters $T_1 \to T_0$, $T_2 \to T_1$, $T_3 \to T_2$, etc. Currently, some safeguards have been integrated which will make the compiler abort with an error message when entering this kind of loops.
The expressiveness of the type system actually makes it possible to encode any first order theory directly in the system. For instance, given a binary predicate $R$ and function symbols $f$ and $g$, a statement such as $\forall x . R(f(x), g(x))$ might, roughly speaking, be encoded by the declaration of a function of type $\forall X^{\mathbf{Type}} . R(F\,X, G\,X)$, where $F$, $G$ and $R$ are type level counterparts of $f$, $g$ and $R$.
These negative remarks are counterbalanced by the fact that the type system is not intended to prove mathematical theorems, but rather to make sense out of commonly used overloaded mathematical notations. It rests upon the shoulders of the user to use the type system for defining such common notations, and not to misuse it for proving general first order statements. Since notations are intended to be easily understandable in the first place, they can usually be given a sense by following simple formal procedures. We believe that our type system is powerful enough to cover most standard notations in this sense.
The above discussion shows that we do not aim at completeness for the Mathemagix compiler.
We also notice that failure of the compiler to find the intended meaning does not necessarily mean that we will get an error message or that the compiler does not terminate. Indeed, theoretically speaking, we might obtain a correct interpretation, even though the intended interpretation should be preferred. In particular, it is important to use the overloading facility in such a way that all possible interpretations are always correct, even though some of them may be preferred.
Given an expression $e$ on which the compilation process succeeds, we finally have to show what it means to evaluate $e$. So let $\hat e$ with $\Gamma \vdash e \Rrightarrow \hat e$ be the expression in the target language which is produced by the compiler. The target language has the property that it is quite easy to “downgrade” $\hat e$ into an expression $\lfloor \hat e \rfloor$ of classical untyped λ-calculus. This reduces the evaluation semantics of Mathemagix to the standard evaluation semantics of the untyped λ-calculus. Some of the most prominent rules for rewriting $\hat e$ into a term of classical untyped λ-calculus are the following:
Overloaded expressions $\hat e \wedge \hat f$ are rewritten as pairs $(\lfloor \hat e \rfloor, \lfloor \hat f \rfloor)$.

The projections $\pi_1$ and $\pi_2$ are simply the projections on the first and second components of such pairs, e.g. $\lambda p . p\,(\lambda x . \lambda y . x)$ and $\lambda p . p\,(\lambda x . \lambda y . y)$ when pairs are Church encoded as $(a, b) = \lambda s . s\,a\,b$.

Template expressions $\Lambda x^T . \hat e$ are rewritten as λ-expressions $\lambda x . \lfloor \hat e \rfloor$.

Template instantiation $\hat e\langle \hat f\rangle$ is rewritten into function application $\lfloor \hat e \rfloor\, \lfloor \hat f \rfloor$.

Instances $T \uparrow_\sigma \mathbf{C}$ of categories are implemented as $n$-tuples $(\lfloor \hat e_1 \rfloor, \ldots, \lfloor \hat e_n \rfloor)$, where $\sigma = \{x_1 = \hat e_1; \ldots; x_n = \hat e_n\}$.
For instance, consider the template $\mathrm{square} = \Lambda M^{\mathbf{Monoid}} . \lambda x^{M} . x *_M x$. After the above rewritings, this template is transformed into the expression $\lambda M . \lambda x . (\pi_1\, M)\,(x, x)$, where $\pi_1\, M$ extracts the multiplication from the tuple $M$ which encodes the monoid structure.
One of the aims of the actual Mathemagix compiler is to produce code which is far more efficient than a naive evaluation of this underlying untyped λ-calculus.
The Axiom computer algebra system. http://wiki.axiom-developer.org/FrontPage.
G. Baumgartner and V.F. Russo. Implementing signatures for C++. ACM Trans. Program. Lang. Syst., 19(1):153–187, 1997.
E. Bond, M. Auslander, S. Grisoff, R. Kenney, M. Myszewski, J. Sammet, R. Tobey and S. Zilles. FORMAC an experimental formula manipulation compiler. In Proceedings of the 1964 19th ACM national conference, ACM '64, pages 112–101. New York, NY, USA, 1964. ACM.
T. Coquand and G. Huet. The calculus of constructions. Inf. Comput., 76(2-3):95–120, 1988.
T. Coquand et al. The Coq proof assistant. http://coq.inria.fr/, 1984.
D. Eisenbud, D.R. Grayson, M.E. Stillman and B. Sturmfels, editors. Computations in algebraic geometry with Macaulay 2. Springer-Verlag, London, UK, UK, 2002.
J.-C. Faugère. FGb: a library for computing Gröbner bases. In Komei Fukuda, Joris van der Hoeven, Michael Joswig and Nobuki Takayama, editors, Mathematical Software - ICMS 2010, volume 6327 of Lecture Notes in Computer Science, pages 84–87. Berlin, Heidelberg, September 2010. Springer Berlin / Heidelberg.
K. Geddes, G. Gonnet and Maplesoft. Maple. http://www.maplesoft.com/products/maple/, 1980.
J. Y. Girard. Une extension de l'interprétation de Gödel à l'analyse, et son application à l'élimination de coupures dans l'analyse et la théorie des types. In J. E. Fenstad, editor, Proceedings of the Second Scandinavian Logic Symposium, pages 63–92. North-Holland Publishing Co., 1971.
T. Granlund et al. GMP, the GNU multiple precision arithmetic library. http://www.swox.com/gmp, 1991.
D.R. Grayson and M.E. Stillman. Macaulay2, a software system for research in algebraic geometry. Available at http://www.math.uiuc.edu/Macaulay2/.
J.H. Griesmer, R.D. Jenks and D.Y.Y. Yun. SCRATCHPAD User's Manual. Computer Science Department monograph series. IBM Research Division, 1975.
G. Hanrot, V. Lefèvre, K. Ryde and P. Zimmermann. MPFR, a C library for multiple-precision floating-point computations with exact rounding. http://www.mpfr.org, 2000.
W. Hart. An introduction to Flint. In K. Fukuda, J. van der Hoeven, M. Joswig and N. Takayama, editors, Mathematical Software - ICMS 2010, Third International Congress on Mathematical Software, Kobe, Japan, September 13-17, 2010, volume 6327 of Lecture Notes in Computer Science, pages 88–91. Springer, 2010.
J. van der Hoeven, G. Lecerf, B. Mourrain et al. Mathemagix. http://www.mathemagix.org, 2002.
J. van der Hoeven, G. Lecerf, B. Mourrain, P. Trébuchet, J. Berthomieu, D. Diatta and A. Manzaflaris. Mathemagix, the quest of modularity and efficiency for symbolic and certified numeric computation. ACM Commun. Comput. Algebra, 45(3/4):186–188, 2012.
P. Hudak, J. Hughes, S. Peyton Jones and P. Wadler. A history of Haskell: being lazy with class. In Proceedings of the third ACM SIGPLAN conference on History of programming languages, HOPL III, pages 12–1. New York, NY, USA, 2007. ACM.
R. D. Jenks. The SCRATCHPAD language. SIGPLAN Not., 9(4):101–111, March 1974.
R.D. Jenks. Modlisp – an introduction (invited). In Proceedings of the International Symposiumon on Symbolic and Algebraic Computation, EUROSAM '79, pages 466–480. London, UK, UK, 1979. Springer-Verlag.
R.D. Jenks and B.M. Trager. A language for computational algebra. SIGPLAN Not., 16(11):22–29, 1981.
R.D. Jenks and R. Sutor. AXIOM: the scientific computation system. Springer-Verlag, New York, NY, USA, 1992.
X. Leroy et al. OCaml. http://caml.inria.fr/ocaml/, 1996.
W. A. Martin and R. J. Fateman. The MACSYMA system. In Proceedings of the second ACM symposium on Symbolic and algebraic manipulation, SYMSAC '71, pages 59–75. New York, NY, USA, 1971. ACM.
P. Martin-Löf. Constructive mathematics and computer programming. Logic, Methodology and Philosophy of Science VI, pages 153–175, 1979.
R. Milner. A theory of type polymorphism in programming. Journal of Computer and System Sciences, 17:348–375, 1978.
J. Moses. Macsyma: A personal history. Journal of Symbolic Computation, 47(2):123–130, 2012.
The Maxima computer algebra system (free version). http://maxima.sourceforge.net/, 1998.
T. Nipkow, L. Paulson and M. Wenzel. Isabelle/HOL. http://www.cl.cam.ac.uk/research/hvg/Isabelle/, 1993.
S. Peyton Jones et al. The Haskell 98 language and libraries: the revised report. Journal of Functional Programming, 13(1):0–255, Jan 2003. http://www.haskell.org/definition/.
G. Dos Reis and B. Stroustrup. Specifying C++ concepts. SIGPLAN Not., 41(1):295–308, 2006.
A. Saïbi. Typing algorithm in type theory with inheritance. In Proceedings of the 24th ACM SIGPLAN-SIGACT symposium on Principles of programming languages, POPL '97, pages 292–301. ACM, 1997.
W.A. Stein et al. Sage Mathematics Software. The Sage Development Team, 2004. http://www.sagemath.org.
R. S. Sutor and R. D. Jenks. The type inference and coercion facilities in the Scratchpad II interpreter. SIGPLAN Not., 22(7):56–63, 1987.
B. Stroustrup. The C++ programming language. Addison-Wesley, 2nd edition, 1995.
S. Watt, P.A. Broadbery, S.S. Dooley, P. Iglio, S.C. Morrison, J.M. Steinbach and R.S. Sutor. A first report on the A# compiler. In Proceedings of the international symposium on Symbolic and algebraic computation, ISSAC '94, pages 25–31. New York, NY, USA, 1994. ACM.
S. Watt et al. Aldor programming language. http://www.aldor.org/, 1994.
Wolfram Research. Mathematica. http://www.wolfram.com/mathematica/, 1988.