Let be a linear differential operator, where
is an effective algebraically closed subfield
of
. It can be shown that
the differential Galois group of
is generated
(as a closed algebraic group) by a finite number of monodromy
matrices, Stokes matrices and matrices in local exponential groups.
Moreover, there exist fast algorithms for the approximation of the
entries of these matrices.
In this paper, we present a numeric-symbolic algorithm for the computation of the closed algebraic subgroup generated by a finite number of invertible matrices. Using the above results, this yields an algorithm for the computation of differential Galois groups, when computing with a sufficient precision.
Even though there is no straightforward way to find a
“sufficient precision” for guaranteeing the correctness of
the end-result, it is often possible to check a posteriori
whether the end-result is correct. In particular, we present a
non-heuristic algorithm for the factorization of linear differential
operators.
Let be a monic linear differential operator of
order
, where
is an effective algebraically closed subfield of
. A holonomic function is a
solution to the equation
.
The differential Galois group
of
is a linear algebraic group which acts on the space
of solutions (see section 2.2 and
[Kaplansky, 1957; van der Put and Singer, 2003; Kolchin, 1973]). It
carries a lot of information about the solutions in
and on the relations between different solutions. For instance, the
existence of non-trivial factorizations of
and
the existence of Liouvillian solutions can be read off from the Galois
group. This makes it an interesting problem to explicitly compute the
Galois group of
.
A classical approach in this area is to let act
on other vector spaces obtained from
by the
constructions from linear algebra, such as symmetric powers
and exterior powers
[Beke, 1894;
Singer and Ulmer, 1993]. For a suitable such space
, the Galois group
consists precisely of those invertible
matrices
which leave a certain one-dimensional subspace of
invariant [Humphreys, 1981, chapter 11]. Invariants in
or
under
may be computed
more efficiently by considering the local solutions of
at singularities [van Hoeij and Weil, 1997; van Hoeij, 1997; van Hoeij,
1996]. More recently, and assuming (for instance) that the coefficients
of
are actually in
,
alternative algorithms have appeared which are based on the reduction of the
equation
modulo a prime number
[Cluzeau, 2004; van der Put, 1995; van der Put and Singer, 2003].
In this paper, we will study another type of “analytic
modular” algorithms, by studying the operator
in greater detail near its singularities using the theory of
accelero-summation [Écalle, 1985; Écalle, 1987;
Écalle, 1992; Écalle, 1993; Braaksma, 1991; Braaksma,
1992]. More precisely, we will use the following facts:
The differential Galois group of is
generated (as a closed algebraic group) by a finite number of
monodromy matrices, Stokes matrices and matrices in so-called local
exponential groups (see [Ramis, 1985; Martinet and Ramis, 1991] and
theorem 14 below).
There exists an algorithm for the approximation of the entries of
the above matrices (see [van der Hoeven, 1999; van der Hoeven,
2001b; van der Hoeven, 2005b] and theorem 19 below). If
is the algebraic closure of
, then
-digit
approximations can be computed in time
.
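These facts can be illustrated numerically. The sketch below is our own toy example, not taken from the paper: it transports the canonical basis of the operator z y'' + y' = 0 (solutions 1 and log z) once around the origin by RK4 integration along the unit circle, recovering the monodromy action log z → log z + 2πi from the transported initial conditions.

```python
import math

def transport(y, yp, steps=4000):
    """RK4 transport of (y, y') for z*y'' + y' = 0 along the unit circle z = e^{it}."""
    def f(t, u):
        y, yp = u
        z = complex(math.cos(t), math.sin(t))
        dz = 1j * z                          # dz/dt on the circle
        return (dz * yp, dz * (-yp / z))     # (dy/dt, dy'/dt) with y'' = -y'/z
    h = 2 * math.pi / steps
    u = (complex(y), complex(yp))
    t = 0.0
    for _ in range(steps):
        k1 = f(t, u)
        k2 = f(t + h / 2, tuple(ui + h / 2 * ki for ui, ki in zip(u, k1)))
        k3 = f(t + h / 2, tuple(ui + h / 2 * ki for ui, ki in zip(u, k2)))
        k4 = f(t + h, tuple(ui + h * ki for ui, ki in zip(u, k3)))
        u = tuple(ui + h / 6 * (a + 2 * b + 2 * c + d)
                  for ui, a, b, c, d in zip(u, k1, k2, k3, k4))
        t += h
    return u

# Canonical basis at z = 1: h1 = 1 (initial data (1,0)) and h2 = log z (data (0,1)).
print(transport(1, 0))   # the constant solution is single-valued
print(transport(0, 1))   # log z picks up 2*pi*i after one loop
```

The monodromy matrix in this basis is read off column by column from the transported initial data.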
When using these facts for the computation of differential Galois
groups, the bulk of the computations is reduced to linear algebra in
dimension with multiple-precision coefficients.
In comparison with previous methods, this approach is expected to be
much faster than algorithms which rely on the use of exterior powers. A
detailed comparison with arithmetic modular methods would be
interesting. One advantage of arithmetic methods is that they are easier
to implement in existing systems. On the other hand, our analytic
approach relies on linear algebra in dimension
(with floating coefficients), whereas modulo
methods rely on linear algebra in dimension
(with coefficients modulo
),
so the first approach might be a bit faster. Another advantage of the
analytic approach is that it is more easily adapted to coefficients
fields
with transcendental constants.
Let us outline the structure of this paper. In section 2, we start by recalling some standard terminology and we briefly review the theorems on which our algorithms rely. We start with a survey of differential Galois theory, monodromy and local exponential groups. We next recall some basic definitions and theorems from the theory of accelero-summation and the link with Stokes matrices and differential Galois groups. We finally recall some theorems about the effective approximation of the transcendental numbers involved in the whole process.
Before coming to the computation of differential Galois groups, we first
consider the simpler problem of factoring in
section 3. We recall that there exists a non-trivial
factorization of
if and only if the Galois group
of
admits a non-trivial invariant subspace. By
using computations with limited precision, we show how to use this
criterion in order to compute candidate factorizations or a proof that
there exist no factorizations. It is easy to check a posteriori
whether a candidate factorization is correct, so we obtain a
factorization algorithm by increasing the precision until we obtain a
correct candidate or a proof that there are no factorizations.
In section 4 we consider the problem of computing the
differential Galois group of .
Using the results from section 2, it suffices to show how
to compute the algebraic closure of a matrix group
generated by a finite number of given elements. A theoretical solution
for this problem based on Gröbner basis techniques has been given
in [Derksen et al., 2003]. The main idea behind the present algorithm is
similar, but more emphasis is put on efficiency (in contrast to
generality).
First of all, in our context of complex numbers with arbitrary
precisions, we may use the LLL-algorithm for the computation of linear
and multiplicative dependencies [Lenstra et al., 1982]. Secondly, the
connected component of is represented as the
exponential of a Lie algebra
given by a basis.
Computations with such Lie algebras essentially boil down to linear
algebra. Finally, we use classical techniques for finite groups in order
to represent and compute with the elements in
[Sims, 1970; Sims, 1971; Murray and O'Brien, 1995]. Moreover, we will
present an algorithm for non-commutative lattice reduction, similar to
the LLL-algorithm, for the efficient computation with elements in
near the identity.
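As an illustration of the first point, here is a bare-bones textbook LLL reduction (floating-point Gram-Schmidt, δ = 3/4) used to recover a multiplicative dependency from high-precision logarithms. The scaling factor 10^6 and the specific relation 2 · 3 = 6 are our own choices for the sketch, not data from the paper.

```python
import math

def lll(basis, delta=0.75):
    """Textbook LLL reduction of integer row vectors (floating-point Gram-Schmidt)."""
    b = [row[:] for row in basis]
    n = len(b)
    def dot(u, v):
        return sum(float(x) * float(y) for x, y in zip(u, v))
    def gram_schmidt():
        bstar, mu = [], [[0.0] * n for _ in range(n)]
        for i in range(n):
            v = [float(x) for x in b[i]]
            for j in range(i):
                mu[i][j] = dot(b[i], bstar[j]) / dot(bstar[j], bstar[j])
                v = [vi - mu[i][j] * wj for vi, wj in zip(v, bstar[j])]
            bstar.append(v)
        return bstar, mu
    k = 1
    while k < n:
        for j in range(k - 1, -1, -1):           # size-reduce b_k against b_j
            _, mu = gram_schmidt()
            q = round(mu[k][j])
            if q:
                b[k] = [x - q * y for x, y in zip(b[k], b[j])]
        bstar, mu = gram_schmidt()
        if dot(bstar[k], bstar[k]) >= (delta - mu[k][k - 1] ** 2) * dot(bstar[k - 1], bstar[k - 1]):
            k += 1                               # Lovász condition holds
        else:
            b[k], b[k - 1] = b[k - 1], b[k]      # swap and backtrack
            k = max(k - 1, 1)
    return b

# Recover the relation log 2 + log 3 - log 6 = 0 (i.e. 2 * 3 = 6).
C = 10 ** 6
xs = [math.log(2), math.log(3), math.log(6)]
rows = [[1 if i == j else 0 for j in range(3)] + [round(C * x)]
        for i, x in enumerate(xs)]
reduced = lll(rows)
relation = min(reduced, key=lambda r: sum(c * c for c in r))[:3]
print(relation)   # (up to sign) [1, 1, -1]
```

Any sufficiently short vector in the scaled lattice must encode the integer relation, which is why the shortest reduced basis vector exposes the dependency.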
The algorithms in section 4 are all done using a fixed
precision. Although we do prove that we really compute the Galois group
when using a sufficiently large precision, it is not clear a
priori how to find such a “sufficient precision”.
Nevertheless, we have already seen in section 3 that it is
often possible to check the correctness of the result a
posteriori, especially when we are not interested in the Galois
group itself, but only in some information
provided by
. Also, it might
be possible to reduce the amount of dependence on “transcendental
arguments” in the algorithm modulo a further development of our
ideas. Some hints are given in the last section.
Remark
In this section, we recall several classical results about differential Galois theory and its link with accelero-summation theory. We also recall previous work on the efficient evaluation of holonomic constants. The main result of this section is theorem 20.
Throughout this paper, we will use the following notations:
An algebraically closed field of constants.
The algebra of matrices with coefficients
in
.
The subgroup of of invertible matrices.
The vector space generated by a subset of
a larger vector space.
The algebra generated by a subset of a
larger algebra.
Vectors are typeset in bold face and we use the
following vector notations:
Matrices will also be used as mappings
. When making a base change in
, we understand that we
perform the corresponding transformations
on all
matrices under consideration. We denote
for the
diagonal matrix with entries
.
The
may either be scalars or square matrices.
Given a matrix
and a vector
, we write
for the
vector
with
for all
.
Consider a monic linear differential operator , where
is an algebraically
closed subfield of
and
. We will denote by
the
finite set of singularities of
(in the case of
, one considers the
transformation
). A
Picard-Vessiot extension of
is a
differential field
such that
is differentially generated by
and a basis of solutions
to
the equation
.
has
as its field
of constants.
A Picard-Vessiot extension always exists: given a point
and
, let
be the unique solution to
with
for
. We call
the canonical basis for the solution space of
at
,
and regard
as a column vector. Taking
, the condition
PV2 is trivially satisfied since
and the constant field of
is
.
Let be a Picard-Vessiot extension of
and let
be as in
PV1. The differential Galois group
of the extension
is the group of
differential automorphisms which leave
pointwise
invariant. It is classical [Kolchin, 1973] that
is independent (up to isomorphism) of the particular choice of the
Picard-Vessiot extension
.
Given an automorphism , any
solution
to
is sent to
another solution. In particular, there exists a unique matrix
with
for all
. This yields an embedding
of
into
and we define
. Conversely,
belongs to
if every differential
relation
satisfied by
is
also satisfied by
(with
). Since this assumption constitutes an infinite
number of algebraic conditions on the coefficients of
, it follows that
is a
Zariski closed algebraic matrix group. Whenever
is another basis, we obtain the same matrix group
up to conjugation.
Assume now that is a larger algebraically closed
subfield of
. Then the field
is again a Picard-Vessiot extension of
. Furthermore, the Ritt-Raudenbush
theorem [Ritt, 1950] implies that the perfect differential ideal of all
with
is finitely
generated, say by
. But then
is still a finite system of generators of the
perfect differential ideal of all
with
. Consequently,
(i.e. as an algebraic group over
) is determined by the same algebraic equations as
. We conclude that
.
Let be a Picard-Vessiot extension of
. Any differential field
with
naturally induces an
algebraic subgroup
of automorphisms of
which leave
fixed. Inversely, any
algebraic subgroup
of
gives rise to the differential field
with
of all elements which are invariant under the action
of
. We say that
(resp.
)
is closed if
(resp.
). In that case, the extension
is said to be normal, i.e.
every element in
is moved by an automorphism of
over
.
The main theorem from differential Galois theory states that the Galois
correspondences are bijective [Kaplansky, 1957, Theorem 5.9].
Theorem
The correspondences and
are bijective.
The group is a closed normal subgroup of
if and only if the extension
is normal. In that case,
.
Corollary . If
for all
, then
.
Consider a continuous path on
from
to
.
Then analytic continuation of the canonical basis
at
along
yields a basis
of solutions to
at
.
The matrix
with
[Equation (1)]
is called the connection matrix or transition matrix
along . In particular, if
, then we call
a monodromy matrix based in
. We clearly have
for the composition of paths, so the monodromy matrices based in form a group
which is called
the monodromy group. Given a path
from
to
,
we notice that
. Since any
differential relation satisfied by
is again
satisfied by its analytic continuation along
, we have
and
.
Remark. If one considers and
as row vectors, then (1)
has to be transposed. The roles of
and
may also be interchanged modulo inversion of
.
Now assume that admits a singularity at
(if
then we may reduce to
this case modulo a translation; singularities at infinity may be brought
back to zero using the transformation
).
It is well-known [Fabry, 1885; van Hoeij, 1996] that
admits a computable formal basis of solutions of the form
[Equation (2)]
with ,
,
and
. We will denote by
the set
of finite sums of expressions of the form (2). We may see
as a differential subring of a formal
differential field of “complex transseries”
[van der Hoeven, 2001a] with constant field
.
We recall that transseries in are infinite
linear combinations
of
“transmonomials” with “grid-based support”. The
set
of transmonomials forms a totally ordered
vector space for exponentiation by reals and the asymptotic ordering
. In particular, each
non-zero transseries
admits a unique dominant
monomial
. It can be shown
[van der Hoeven, 2001a] that there exists a unique basis
of solutions to
of the form (2), with
and
for all
. We call
the canonical basis of solutions in
and there is an algorithm which computes
as a
function of
.
Let be the subset of
of
all finite sums of expressions of the form (2) with
. Then any
can uniquely be written as a finite sum
,
where
. Let
be the group of all automorphisms
for which
there exists a mapping
with
for all
. Then every
preserves differentiation and maps the Picard-Vessiot
extension
of
into
itself. In particular, the restriction
of
to
is a subset of
.
Proposition. Assume that is fixed under
. Then
.
Proof. Assume that and
let
be a “generalized exponent” with
. Let
be a supplement of the
-vector
space
. Let
be the mapping in
which sends
to
for each
and
. Then we clearly have
.
Let be the set of generalized exponents
corresponding to the generalized exponents of the elements of the
canonical basis
. Using
linear algebra, we may compute a multiplicatively independent set
such that
for certain
and all
.
Proposition. The group is
generated by the matrices
where
is chosen arbitrarily.
Proof. Let be the group
generated by the matrices
.
Notice that each individual matrix
generates
: assuming
, the variety
is
irreducible of dimension
and
is not contained in an algebraic group of dimension
. Now any
is a diagonal
matrix
for some multiplicative mapping
. Hence
Conversely, each element
determines a multiplicative mapping which may be
further extended to
using Zorn's lemma and the
fact that
is algebraically closed. It follows
that
.
Assume that and let
be
the transformation which sends
to
,
to
and
to
.
Then
preserves differentiation, so any solution
to
of the form (2) is sent to
another solution of the same form. In particular, there exists a matrix
with
,
called the formal monodromy matrix around
. We have
.
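Since the displayed formulas around this passage were lost, the action of the formal monodromy can be recorded in generic notation (our symbols, which need not match the paper's): it is the substitution of z e^{2πi} for z, acting on the building blocks of the formal solutions (2) by

```latex
% Formal monodromy in generic notation (our symbols, not the paper's):
% the substitution z \mapsto z\,\mathrm{e}^{2\pi\mathrm{i}} acts by
\begin{aligned}
  z^{\alpha} &\longmapsto \mathrm{e}^{2\pi\mathrm{i}\alpha}\, z^{\alpha},\\
  \log z &\longmapsto \log z + 2\pi\mathrm{i},\\
  \mathrm{e}^{P(z^{-1/p})} &\longmapsto
      \mathrm{e}^{P(z^{-1/p}\,\mathrm{e}^{-2\pi\mathrm{i}/p})},
\end{aligned}
```

extended multiplicatively and linearly to all formal solutions.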
Proposition. Assume that is fixed under
and
. Then
.
Proof. We already know that . Interpreting
as a
polynomial in
with
,
we must have
since
Consequently, is of the form
and
We conclude that for every
with
, whence
.
Let be the differential
-algebra of infinitesimal Puiseux series in
for
and consider a formal
power series solution
to
. The process of accelero-summation enables one to
associate an analytic meaning
to
in a sector near the origin of the Riemann surface
of
, even
in the case when
is divergent. Schematically
speaking, we obtain
through a succession of
transformations:
[Equation (3)]
Each is a “resurgent function” which
realizes
in the “convolution model”
with respect to the
-th
“critical time”
(with
and
). In our
case,
is an analytic function which admits only
a finite number of singularities above
.
In general, the singularities of a resurgent function are
located on a finitely generated grid. Let us describe the
transformations
,
and
in more detail.
where , and extends by strong
linearity:
The result is a formal series in
which converges near the origin of
. The formal Borel transform is a morphism of
differential algebras which sends multiplication to the convolution
product, i.e.
.
where the acceleration kernel is given by
For large on an axis with
, it can be shown that
for
some constant
. Assuming that
satisfies
[Equation (5)]
it follows that the acceleration of
is well-defined for small
on
. The set
of directions
such that
admits a singularity on
is called the set of
Stokes directions. Accelerations are morphisms of differential
-algebras which preserve the
convolution product.
On an axis with
[Equation (6)]
the function is defined for all sufficiently
small
. The set
of Stokes directions is defined in a similar way as in the
case of accelerations. The Laplace transform is a morphism of
differential
-algebras which
is inverse to the Borel transform and sends the convolution product to
multiplication.
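The Borel-Laplace scheme (a single critical time, no intermediate accelerations) can be made concrete on the classic Euler series, a toy example of our own choosing: the formal Borel transform of the divergent series with coefficients n!(-1)^n is the convergent series for 1/(1+t), and the Laplace integral along the positive axis resums it.

```python
import math

def euler_series(z, terms):
    """Partial sum of the divergent series sum_n n! (-z)^n."""
    s, fact = 0.0, 1.0
    for n in range(terms):
        s += fact * (-z) ** n
        fact *= n + 1
    return s

def borel_sum(z, steps=200000, cutoff=40.0):
    """Laplace transform of the Borel transform 1/(1 + t) along the positive axis:
    f(z) = (1/z) * integral_0^cutoff exp(-t/z) / (1 + t) dt  (midpoint rule)."""
    h = cutoff / steps
    total = 0.0
    for k in range(steps):
        t = (k + 0.5) * h
        total += math.exp(-t / z) / (1.0 + t)
    return total * h / z

z = 0.1
# Optimal truncation of the divergent series (around n = 1/z = 10 terms)
# agrees with the Borel sum up to an error of order exp(-1/z).
print(borel_sum(z), euler_series(z, 10))
```

This is the degenerate one-critical-time case; genuine accelero-summation interposes acceleration kernels between several critical times, as in the scheme above.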
Remark .
Given critical times in
and directions
satisfying (5), we
say that a formal power series
is
accelero-summable in the multi-direction
if the above scheme yields an analytic function
near the origin of any axis on
satisfying (6). We denote the set of such power series by
, where
.
Inversely, given
, we denote
by
the set of all triples
such that
and so that
is
well-defined. In that case, we write
and
.
The set forms a differential subring of
and the map
for
is injective. If
and
are obtained from
and
by inserting a new critical time and an arbitrary
direction, then we have
. In
particular,
contains
, where
denotes the ring of
convergent infinitesimal Puiseux series. Let
be
sets of directions such that each
is finite
modulo
. Let
be the subset of
of multi-directions
which verify (5). We denote
,
and
.
Taking , the notion of
accelero-summation extends to formal expressions of the form (2)
and more general elements of
as follows. Given
,
,
and
, we simply define
.
It can be checked that this definition is coherent when replacing
by
for some
. By linearity, we thus obtain a natural
differential subalgebra
of accelero-summable
transseries with critical times
and in the
multi-direction
. We also
have natural analogues
and
of
and
.
The main result we need from the theory of accelero-summation is the following theorem [Écalle, 1987; Braaksma, 1991].
Theorem. Let be a formal solution to
. Then
.
Corollary. Let be the canonical basis of formal solutions to
at the origin. We have
.
Proof. Holonomy is preserved under
multiplication by elements of .
Remark
We say that is stable under Stokes
morphisms if for all
,
there exists a
with
, and if the same property is recursively satisfied
by
. We denote by
the differential subring of
which
is stable under Stokes morphisms. The mappings
will be called Stokes morphisms and we denote by
the group of all such maps.
Proposition. Assume that is fixed under
. Then
is convergent.
Proof. Assume that one of the
admits a singularity at
and choose
maximal and
minimal. Modulo the
removal of unnecessary critical times, we may assume without loss of
generality that
. Let
with
be a multi-direction
satisfying (5), such that
for all
. Then
for all sufficiently small .
Now
is obtained by integration around
along the axis
.
By classical properties of the Laplace integral [Candelberger et al.,
1993, Pré I.2], the function
cannot
vanish, since
admits a singularity in
(if the Laplace integrals corresponding to both directions
coincide, then the Laplace transform can be
analytically continued to a larger sector, which is only possible if
is analytic in a sector which contains both
directions
). We conclude
that
, so
is not fixed under
.
Remark. Let be a set of multi-directions
satisfying (5), with
for exactly
one
, and so that for all
, we have either
or
and
for
some small
. For every
, we have
By looking more carefully at the proof of proposition 12,
we observe that it suffices to assume that is
fixed under all Stokes morphisms of the form
, instead of all elements in
.
We say that are equivalent, if
for all
, where
is the denominator of
.
We notice that
is finite modulo this equivalence
relation. We denote by
a subset of
with one element in each equivalence class.
Let us now come back to our differential equation . Given
,
the map
induces an isomorphism between
and
. We denote
by
the unique matrix with
. Given a second
,
the vector
is again in
, whence
by repeating the
argument. In particular, the Stokes morphism
induces the Stokes matrix
.
We are now in a position to construct a finite set of generators for the Galois group
in a regular point
.
Algorithm Compute_generators
Input: an operator and a
regular point
Output: a set of generators
for
for each do
return
Theorem. With constructed as above, the differential Galois
group
is generated by
as
a closed algebraic subgroup of
.
Proof. Assume that is
fixed by each element of
. We
have to prove that
. Given a
singularity
, let
be the “continuation” of
along
(which involves analytic continuation
until
followed by
“decelero-unsummation”). By proposition 7, we
have
. From proposition 12 and remark 13, we next deduce that
is convergent. Indeed, since
,
its realization
in the convolution model with
critical time
is a function in
. Consequently,
whenever
and
are
equivalent. At this point we have shown that
is
meromorphic at
. But a
function which is meromorphic at all points of the Riemann sphere
is actually a rational function. It follows that
.
Remark
Remark for
. Modulo the identification of functions in the
formal model, the convolution models and the geometric model via
accelero-summation, the pointed alien derivatives commute with the usual
derivation
. Consequently, if
is a solution to
,
then we also have
. In
particular, given the canonical basis of solutions
to
, there exists a unique
matrix
with
This equation is called the bridge equation. Since admits only a finite number of singularities and the alien
derivations “translate singularities”, we have
for some
, so
the matrices
are nilpotent. More generally, if
are
-linearly
independent, then all elements in the algebra generated by
are nilpotent.
It is easily shown that the Stokes morphisms correspond to the
exponentials of directional alien derivations
. This yields a way to
reinterpret the Stokes matrices in terms of the
with
. In particular, the
preceding discussion implies that the Stokes matrices are unipotent.
The extra flexibility provided by pointwise over directional alien
derivatives admits many applications, such as the preservation of
realness [Menous, 1996]. For further details, see [Écalle, 1985;
Écalle, 1987; Écalle, 1992; Écalle, 1993].
A complex number is said to be
effective if there exists an approximation algorithm
for
which takes
on input
and which returns an
-approximation
of
for which
. The time complexity of
this approximation algorithm is the time
it
takes to compute a
-approximation
for
. It is not hard to show
that the set
of effective complex numbers forms
a field. However, given
the question whether
is undecidable. The following theorems were
proved in [Chudnovsky and Chudnovsky, 1990; van der Hoeven, 1999; van
der Hoeven, 2001b].
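The notion of an effective number can be made concrete with a toy model of our own (names and representation are ours, purely for illustration): an effective number is an approximation algorithm mapping a precision to an approximation within that precision, and the field operations combine approximation algorithms, distributing the error budget among the operands.

```python
import math

# Toy model: an effective number is a function eps -> approximation within eps.

def const(x):
    return lambda eps: x

def add(a, b):
    # an eps-approximation of a + b from eps/2-approximations of a and b
    return lambda eps: a(eps / 2) + b(eps / 2)

def mul(a, b):
    def approx(eps):
        # crude a priori magnitude bounds from 1-approximations,
        # then distribute eps so that the product error stays below eps
        Ma, Mb = abs(a(1)) + 1, abs(b(1)) + 1
        d = eps / (2 * (Ma + Mb))
        return a(d) * b(d)
    return approx

# Double-precision stand-ins (error ~1e-16, adequate for eps >= 1e-12):
sqrt2 = lambda eps: math.sqrt(2)
pi    = lambda eps: math.pi

x = add(mul(sqrt2, sqrt2), pi)     # approximates 2 + pi
print(x(1e-9))
```

The undecidability of zero-testing is visible here: no finite number of approximations of x can ever certify that x equals zero exactly.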
Theorem. Let ,
,
and
. Given a broken-line path
on
, we have
The value of the analytic continuation of
at the end-point of
is effective.
There exists an approximation algorithm of time complexity for
,
when not counting the approximation time of the input data
,
and
.
There exists an algorithm which computes an approximation
algorithm for as in (b) as a
function of
,
and
.
Theorem. Let be regular singular in
. Let
,
and
.
Then
is well-defined for all sufficiently small
on the effective Riemann surface
of
above
, and
is effective.
There exists an approximation algorithm of time complexity for
,
when not counting the approximation time of the input data
,
and
.
There exists an algorithm which computes an approximation
algorithm for as in (b) as a
function of
,
and
.
In general, the approximation of involves the
existence of certain bounds. In each of the above theorems, the
assertion (c) essentially states that there exists an algorithm
for computing these bounds as a function of the input data. This
property does not merely follow from (a) and (b)
alone.
The following theorem has been proved in [van der Hoeven, 2005b].
Theorem. Let be singular in
.
Let
be as in theorem 10 and
. Given
with
and
,
we have
is effective.
There exists an approximation algorithm of time complexity for
,
when not counting the approximation time of the input data
,
and
.
There exists an algorithm which computes an approximation
algorithm for as in (b) as a
function of
,
and
.
If we replace by an arbitrary effective
algebraically closed subfield
of
, then the assertions (a) and
(c) in the three above theorems remain valid (see [van der
Hoeven, 2005a; van der Hoeven, 2003] in the cases of theorems 17
and 18), but the complexity in (b) usually drops
back to
. Notice also that we
may replace
by the transition matrix along
in each of the theorems. The following theorem
summarizes the results from section 2.
Theorem. Let be an effective algebraically closed constant
field of
. Then there exists
an algorithm which takes
and
on input, and which computes a finite set
,
such that
The group is generated by
as a closed algebraic subgroup of
.
If , then the entries
of the matrices in
have time complexity
.
Proof. It is classical that the set of effective complex numbers forms a field. Similarly, the
set
of effective complex numbers with an
approximation algorithm of time complexity
forms
a field, since the operations
,
,
and
can all be performed in time
. In particular, the classes of matrices with
entries in
resp.
are stable under the same operations. Now in the algorithm Compute_generators,
we may take broken-line paths with vertices above
for the
. Hence (a)
and (b) follow from theorem 19(a)
resp. (b) and the above observations.
Given , we may endow
with an approximate zero-test for which
if and only if
.
We will denote this field by
.
Clearly, this zero-test is not compatible with the field structure of
. Nevertheless, any finite
computation, which can be carried out in
with an
oracle for zero-testing, can be carried out in exactly the same way in
for a sufficiently small
. Given
,
we will denote by
the “cast” of
to
and similarly for matrices
with coefficients in
.
Remark
usually come with a natural bound
for
. Then, given
with
, it is even better to
use the approximate zero-test
if and only if
. Notice that the bound
usually depends on the internal representation of
and not merely on
as a
number in
.
Let be an effective algebraically closed
subfield of
. Consider a
monic linear differential operator
,
where
. In this section, we
present an algorithm for finding a non-trivial factorization
with
whenever such a factorization
exists.
Let be a basis of solutions for the equation
, where
is an abstract differential field. We denote the Wronskian of
by
It is classical (and easy to check) that
[Equation (7)]
When expanding the determinant in terms of the
matrices
it follows that
Denoting by the logarithmic derivative of
, it can also be checked by
induction that
admits as solutions, whence
, using Euclidean division in the skew
polynomial ring
.
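The displayed identity (7) is classically Abel's identity for the Wronskian; in generic notation (our symbols, which need not match the paper's), for a monic operator it reads:

```latex
% Abel's identity in generic notation (our symbols). For a monic operator
%   L = \partial^r + L_{r-1}\,\partial^{r-1} + \cdots + L_0
% and the Wronskian W = W(h_1,\dots,h_r) of a basis of solutions,
W' \;=\; -\,L_{r-1}\,W,
\qquad\text{whence}\qquad
W(z) \;=\; W(z_0)\,
  \exp\!\Big(-\!\int_{z_0}^{z} L_{r-1}(u)\,\mathrm{d}u\Big).
```

In particular, the Wronskian of a basis of solutions never vanishes away from the singularities.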
If admits a factorization
, then
leaves
invariant.
If is an invariant subvector space of
, then
admits a factorization
with
.
Proof. Assume that
admits a factorization
.
Then, given
and
,
we have
, whence
. Conversely, assume that
is an invariant subvector space of
and let
be a basis of
.
Then we observe that
for all
. Consequently,
for all , so that
, by corollary 3.
Hence
[Equation (8)]
is a differential operator with coefficients in
which vanishes on
. But this
is only possible if
divides
.
Lemma. Let be a non-unitary algebra of nilpotent matrices
in
. Then there exists a
basis of
in which
is
lower triangular for all
.
Proof. Let be a matrix
such that
is a non-zero vector space of minimal
dimension. Given
and
, we claim that
.
Assume the contrary, so that
.
By the minimality hypothesis, we must have
.
In particular,
and
.
Again by the minimality hypothesis, it follows that
. In other words, the restriction of
to
is an isomorphism on
. Hence
admits a non-zero eigenvector in
,
which contradicts the fact that
is nilpotent.
Let us now prove the lemma by induction on . If
or
, then we have nothing to do, so assume that
and
.
We claim that
admits a non-trivial invariant
subvector space
. Indeed, we
may take
if
and
if
. Now
consider a basis
of
and
complete it to a basis
of
. Then each matrix in
is
lower triangular with respect to this basis. Let
and
be the algebras of lower dimensional
matrices which occur as upper left resp. lower right blocks
of matrices in
. We conclude
by applying the induction hypothesis on
and
.
Let be a finite set of non-zero nilpotent
matrices. If all matrices in the
-algebra
generated by
are
nilpotent, then it is easy to compute a basis for which all matrices in
are lower triangular. Indeed, setting
for all
, we
first compute a basis of
. We
successively complete this basis into a basis of
,
and so on until
.
If not all matrices in are nilpotent, then the
proof of lemma 23 indicates a method for the computation of
a matrix in
which is not nilpotent. Indeed, we
start by picking an
and set
, where
is smallest
with
. We next set
and iterate the following loop. Take a matrix
and distinguish the following three cases:
Set and continue.
Set ,
and continue.
Return the non-nilpotent matrix .
At the end of our loop, we either found a non-nilpotent matrix, or we
have for all
.
In the second case, we obtain a non-trivial invariant subspace of
as in the proof of lemma 23 and we
recursively apply the algorithm on this subspace and a complement. In
fact, the returned matrix is not even monopotent
(i.e. not of the form
,
where
is a nilpotent matrix), since it admits both
zero and a non-zero number as eigenvalues.
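When all matrices in the generated algebra are indeed nilpotent, the triangularization can be sketched with the chain of successive images: V, then the span of all products of one generator with V, then of two, and so on; ordering an adapted basis appropriately makes every generator strictly triangular. The following numpy sketch (real matrices, tolerance-based ranks; names and example are ours) implements this.

```python
import numpy as np

def nilpotent_flag(mats, tol=1e-9):
    """For real matrices generating a nilpotent algebra, return an orthonormal
    basis P such that P.T @ A @ P is strictly lower triangular for every A."""
    n = mats[0].shape[0]
    spaces = [np.eye(n)]                          # V_0 = R^n
    for _ in range(n):
        V = spaces[-1]
        cols = np.hstack([A @ V for A in mats])   # span of S . V_j
        u, s, _ = np.linalg.svd(cols, full_matrices=False)
        rank = int(np.sum(s > tol))
        if rank >= V.shape[1]:
            raise ValueError("generated algebra is not nilpotent")
        spaces.append(u[:, :rank])                # V_{j+1}, strictly smaller
        if rank == 0:
            break
    # orthonormal basis adapted to the flag, deepest space first ...
    B = np.zeros((n, 0))
    for V in reversed(spaces):
        for k in range(V.shape[1]):
            v = V[:, k] - B @ (B.T @ V[:, k])     # component outside the span
            if np.linalg.norm(v) > tol:
                B = np.hstack([B, (v / np.linalg.norm(v))[:, None]])
    return B[:, ::-1]   # ... reversed, so generators become *lower* triangular

A1 = np.array([[0., 1, 0], [0, 0, 0], [0, 0, 0]])
A2 = np.array([[0., 0, 1], [0, 0, 1], [0, 0, 0]])
P = nilpotent_flag([A1, A2])
for A in (A1, A2):
    print(np.allclose(np.triu(P.T @ A @ P), 0, atol=1e-8))
```

Each generator maps every flag space into the next smaller one, which is exactly strict triangularity in a basis adapted to the flag.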
Proposition 22 in combination with theorem 20
implies that the factorization of linear differential operators in reduces to the computation of non-trivial invariant
subvector spaces under the action of
whenever
they exist.
In this section, we will first solve a slightly simpler problem:
assuming that is an effective algebraically
closed field and given a finite set of matrices
, we will show how to compute a non-trivial
invariant subspace
of
under the action of
,
whenever such a
exists.
We now have the following algorithm for computing non-trivial -invariant subspaces of
when they exist.
Algorithm Invariant_subspace
Input: a set of non-zero matrices in
Output: an -invariant
subspace of
or fail
Step 1. [Initial -splitting]
Compute a “random non-zero element”
of
Compute an -splitting
w.r.t.
and
each
Step 2. [One dimensional components]
For every with
and
, do the following:
Pick a and compute
If then return
Otherwise, set
If for all
then return
fail
Step 3. [Higher dimensional components]
Let be such that
Let
Let
If then go to step 4 and otherwise to step 5
Step 4. [Non-triangular case]
Let be non-monopotent on
(cf. previous section)
Refine the -splitting
w.r.t.
and return to step 2
Step 5. [Potentially triangular case]
Choose and compute
If then return
Otherwise, let be in
with
Refine the -splitting
w.r.t.
If this yields a finer -splitting
then return to step 2
Otherwise, set and repeat step 5
The algorithm needs a few additional explanations. In step , we may take
to be an
arbitrary element in
.
However, it is better to take a “small random expression in the
elements of
” for
. With high probability, this
yields an
-splitting which
will not need to be refined in the sequel. Indeed, the subset of
matrices in
which yield non-maximal
-splittings is a closed algebraic subset of
measure zero, since it is determined by coinciding eigenvalues. In
particular, given an
-splitting
w.r.t.
, it will usually suffice to check that each
is monopotent on each
,
in order to obtain an
-splitting
w.r.t. the other elements in
.
Throughout the algorithm, the -splitting
gets finer and finer, so the
-splitting
ultimately remains constant. From this point on, the space
can only strictly decrease in step
, so
also remains constant,
ultimately. But then we either find a non-trivial invariant subspace in
step 5, or all components of the
-splitting
become one-dimensional. In the latter case, we either obtain a
non-trivial invariant subspace in step 1, or a proof that
for every
(and thus for every
).
Remark. Assume now that is no longer an effective algebraically
closed field, but rather a field
with an
approximate zero-test. In that case, we recall that a number which is
approximately zero is not necessarily zero. On the other hand, a number
which is not approximately zero is surely non-zero.
Consequently, in our algorithm for the computation of
, the dimension of
can
be too small, but it is never too large. In particular, if the algorithm
Invariant_subspace fails, then the approximate proof
that
for every
yields a
genuine proof that there are no non-trivial invariant subspaces.
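The behaviour of such an approximate zero-test can be modelled in a few lines; `ApproxNumber` below is our own toy model (not the paper's data structure), with the precision conventions chosen for illustration:

```python
class ApproxNumber:
    """A number known up to absolute error 2**-p: a crude model of the
    'effective numbers with approximate zero-test at precision p'."""

    def __init__(self, value, p):
        self.value = value   # best known approximation
        self.p = p           # certified: |true value - value| <= 2**-p

    def is_approx_zero(self):
        # A True answer proves nothing (the number may still be non-zero),
        # but a False answer is a genuine proof of non-vanishing.
        return abs(self.value) <= 2 ** -self.p
```

This asymmetry is exactly what the remark exploits: dimensions computed with such a test can be too small but never too large.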
Putting together the results from the previous sections, we now have the
following algorithm for finding a right factor of .
Algorithm Right_factor
Input:
Output: a non-trivial right-factor of or fail
Step 1. [Compute generators]
Choose and let
Compute a finite set of generators for
(cf. theorem 20)
Step 2. [Initial precision]
while do
Step 3. [Produce invariant subspace]
Let
If fail then return
fail
Let be a column basis of
Let
Step 4. [Produce and check guess]
Let
Divide by
,
producing
with
If then go to step 5
Reconstruct from
and
with precision
If we obtain no good approximations or then go
to step 5
Return
Step 5. [Increase precision]
Go to step 3
The main idea behind the algorithm is to use proposition 22
in combination with Invariant_subspace so as to provide
good candidate right factors of in
. Using reconstruction of coefficients in
from Laurent series in
with
increasing precisions, we next produce good candidate right factors in
. We keep increasing the
precision until we find a right factor or a proof that
is irreducible. Let us detail the different steps a bit more:
We will work with power series approximations of
terms and approximate zero-tests in
.
The degree of a rational function
is
defined by
. The
initial precisions
and
have been chosen as small as possible. Indeed, we want to take
advantage of a possible quick answer when computing with a small
precision (see also the explanations below of step 5).
If Invariant_subspace fails, then there exists no
factorization of , by
remark 24. Effective power series and Laurent series
are defined in a similar way as effective real numbers (in
particular, we don't assume the existence of an effective
zero-test). Efficient algorithms for such computations are
described in [van der Hoeven, 2002].
The reconstruction of and
from
and
contains
two ingredients: we use Padé approximation to find rational
function approximations of degree
and the
LLL-algorithm to approximate numbers
by
numbers in
.
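The first ingredient can be sketched directly. The following minimal Padé approximant computation from Taylor coefficients uses a plain linear solve; the function name and conventions are ours, and no degenerate cases are handled:

```python
import numpy as np

def pade(c, m, n):
    """[m/n] Pade approximant p/q of a power series.

    c : the first m+n+1 Taylor coefficients c[0], c[1], ...
    Returns (p, q) with deg p <= m, deg q <= n, q[0] = 1 and
    p(x) - q(x) * (c[0] + c[1] x + ...) = O(x^{m+n+1}).
    """
    c = np.asarray(c, dtype=float)
    # Equations for q[1..n]: for k = m+1, ..., m+n,
    #   sum_{j=0}^{n} q[j] * c[k-j] = 0   (with c[i] = 0 for i < 0)
    A = np.zeros((n, n))
    b = np.zeros(n)
    for i in range(n):
        k = m + 1 + i
        b[i] = -c[k]
        for j in range(1, n + 1):
            if k - j >= 0:
                A[i, j - 1] = c[k - j]
    q = np.concatenate(([1.0], np.linalg.solve(A, b))) if n > 0 else np.array([1.0])
    # p is the product q * c truncated at degree m
    p = np.array([sum(q[j] * c[k - j] for j in range(min(k, n) + 1))
                  for k in range(m + 1)])
    return p, q
```

For instance, the [1/1] approximant of exp from the coefficients 1, 1, 1/2 is (1 + x/2)/(1 - x/2).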
Doubling the precision at each step heuristically causes the computation time to grow geometrically. In particular, the unsuccessful computations at lower precisions take little time compared to the last, successful computation at the required precision. Instead of multiplying the precision by two, it would be even better to increase it by a factor which doubles the estimated computation time at each step. Of course, this would require a more precise complexity analysis of the algorithm.
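The precision-increasing driver can be sketched generically; `attempt`, `p0` and `pmax` are illustrative names of our own, not part of the algorithm Right_factor:

```python
def with_increasing_precision(attempt, p0=64, pmax=16384):
    """Run attempt(p) at geometrically increasing precisions p until it
    returns a result (anything but None) or the budget pmax is exhausted.

    With doubling, the failed low-precision attempts cost at most about as
    much as the final successful one, assuming the cost of attempt(p) grows
    at least linearly in p."""
    p = p0
    while p <= pmax:
        result = attempt(p)
        if result is not None:
            return result
        p *= 2
    return None
```

A computation that only succeeds from precision 500 onwards is thus tried at 64, 128, 256 and finally answered at 512 bits.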
The problem of reconstructing elements in from
elements in
is an interesting topic on its own.
In theory, one may consider the polynomial algebra over
generated by all coefficients occurring in
and
the number
we wish to reconstruct. We may then
apply the LLL-algorithm [Lenstra et al., 1982] on the lattice spanned by
monomials of smallest total degree (for
instance) and search for minimal
-digit
relations. If
is the algebraic closure of
, then we may simply use the
lattice spanned by the first
powers of
.
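As a toy illustration of this lattice-based reconstruction, the brute-force search below plays the role of the LLL-algorithm on a very small scale; the function `reconstruct`, its degree and coefficient bounds, and the tolerance are our own illustrative choices:

```python
import itertools
import math

def reconstruct(x, alpha, degree, bound, eps=1e-9):
    """Toy stand-in for LLL-based reconstruction: search for small integers
    (b, a_0, ..., a_{degree-1}) with b >= 1 and
        |b*x - sum_i a_i * alpha**i| < eps,
    i.e. x is (approximately) (a_0 + a_1 alpha + ...) / b."""
    powers = [alpha ** i for i in range(degree)]
    rng = range(-bound, bound + 1)
    for b in range(1, bound + 1):
        for a in itertools.product(rng, repeat=degree):
            if abs(b * x - sum(ai * p for ai, p in zip(a, powers))) < eps:
                return a, b
    return None
```

For realistic precisions one would of course use a genuine LLL implementation rather than this exponential search; only the specification of the problem is the same.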
At a sufficiently large precision ,
the LLL-algorithm will ultimately succeed for all coefficients of a
candidate factorization which need to be reconstructed. If there are no
factorizations, then the algorithm will ultimately fail at step 3. This
proves the termination of Right_factor.
Remark , it would
be nice to use more of the structure of the original problem. For
instance, a factorization of
actually yields
relations on the coefficients which we may try to use. For high
precision computations, it is also recommended to speed up the
LLL-algorithm using a dichotomic algorithm similar to the one used for
fast g.c.d. computations [Moenck, 1973; Pan and Wang, 2002].
Remark is available, using
techniques from [Bertrand and Beukers, 1985; van Hoeij, 1997; van der
Put and Singer, 2003], then one may take
instead
of
in step 5. Of course, bounds for the required
precision
are even harder to obtain. See
[Bertrand and Beukers, 1985] for some results in that direction.
Throughout this section, will stand for the
field
of effective complex numbers with the
approximate zero-test at precision
.
This field has the following properties:
We have an effective zero-test in .
There exists an algorithm which takes on input
and which computes a finite set of generators for the
-vector space of integers
with
.
There exists an algorithm which takes on input
and which computes a finite set of generators for the
-vector space of integers
with
.
is closed under exponentiation and
logarithm.
Indeed, we obtain EH2 and EH3 using
the LLL-algorithm. Some of the results in this section go through when
only a subset of the conditions is satisfied. In that case, we notice
that EH2 ⇒ EH1, EH3 ⇒ EH1 and EH4 ⇒ (EH2 ⇔ EH3).
Given a finite set of matrices ,
we give a numerical algorithm for the computation of the smallest closed
algebraic subgroup
of
which contains
. We will
represent
by a finite set
and the finite basis
of a Lie algebra
over
, such
that
and each corresponds to a unique connected
component
of
.
We will also prove that there exists a precision
such that the algorithm yields the theoretically correct result for all
.
Let be the group of invertible diagonal
matrices. Each matrix
has the form
, where
is the vector
in
of the elements on the diagonal of
. The coordinate ring
of
is the set
of Laurent polynomials in
.
Now consider the case when consists of a single
diagonal matrix
. Let
be the ideal which defines
. Given a relation
between
the
, any power
satisfies the same relation
,
whence
. Let
be the ideal generated by all
,
such that
.
Proof. We already observed that . Assuming for contradiction that
, choose
such that is minimal. If
, then
,
since otherwise
and
has
less than
terms. In particular, the vectors
with
are linearly
independent. But this contradicts the fact that
for all
.
By EH2, we may compute a minimal finite set of generators for the
-vector
space of
with
.
We may also compute a basis
for
, where
.
Then
is the connected component of
, since
for all
, and
cannot be further enlarged while conserving this property.
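In very small cases, the relation lattice of a diagonal matrix can be found by brute force instead of via EH2 and lattice reduction; the following toy sketch (our own `multiplicative_relations`, with an ad hoc search box and tolerance) enumerates the integer vectors k with ∏ λᵢ^{kᵢ} = 1:

```python
import itertools
import math

def multiplicative_relations(lams, bound=3, eps=1e-9):
    """All non-zero integer vectors k in [-bound, bound]^n with
    prod_i lams[i]**k[i] == 1 (up to eps).  These vectors span (a finite
    part of) the relation lattice which determines the closure of the
    group generated by diag(lams)."""
    n = len(lams)
    rels = []
    for k in itertools.product(range(-bound, bound + 1), repeat=n):
        if any(k) and abs(math.prod(l ** e for l, e in zip(lams, k)) - 1) < eps:
            rels.append(k)
    return rels
```

For diag(2, 4) the lattice is generated by (2, −1), reflecting the single relation 2² · 4⁻¹ = 1.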
Let . We construct a basis
of
,
by taking
to be shortest in
(if
) or
(if
), such that
. This basis determines a toric change of
coordinates
with
such
that
with respect to the new coordinates.
Similarly, we may construct a basis
of
, by taking each
to be shortest in
such that
is maximal. This basis determines a second toric change of coordinates
with
such that
(
) with respect
to the new coordinates.
After the above changes of coordinates, the ideal
is determined by the equations
.
Setting
it follows that . Rewriting
with respect to the original coordinates now
completes the computation of
.
Let us now consider the case when consists of a
single arbitrary matrix
.
Then we first compute the multiplicative Jordan decomposition of
. Modulo a change of basis of
, this means that
where and
are the
semi-simple and unipotent parts of
:
where
Proposition .
Remark. Notice that is
well-defined for power series
and nilpotent
matrices
; in this case,
and
.
Proof. The assertion is clear if , so assume
.
Let
. We clearly have
, since
is
a closed algebraic group which contains
.
Moreover, the set
is infinite, so
. Since
is irreducible
and
, we conclude that
.
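The multiplicative Jordan decomposition used in this case can be computed symbolically; here is a minimal sketch with sympy (the function name is ours, and the input matrix is assumed invertible):

```python
from sympy import Matrix, diag, eye, zeros

def multiplicative_jordan(M):
    """Multiplicative Jordan decomposition M = S * U of an invertible
    matrix: S semi-simple (diagonalizable), U unipotent, S U = U S."""
    P, J = M.jordan_form()                        # M = P * J * P**-1
    D = diag(*[J[i, i] for i in range(J.rows)])   # diagonal part of J
    S = P * D * P.inv()                           # semi-simple part of M
    U = S.inv() * M                               # unipotent part of M
    return S, U
```

For the Jordan block with eigenvalue 2 this gives S = 2·Id and U = Id + N/2, with N the nilpotent part.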
Proposition .
Proof. Since is a
commutative group, [Humphreys, 1981, Theorem 15.5] implies that
, where
and
are closed subgroups of
. Now
and
are closed subgroups of
resp.
, so
is a
closed subgroup of
. Since
, it follows that
.
Corollary , then
.
In order to compute the closure of the product of a finite number of
algebraic groups of the form ,
an important subproblem is to test whether a given matrix
belongs to
.
We first observe that implies
. After the computation of
and
with
it therefore
suffices to check that
and
. In fact, it suffices to check whether
, where
is
the unique matrix in
with
. Modulo a suitable base change, we have thus
reduced the general problem to the case when
is
a diagonal matrix whose eigenvalues are all roots of unity.
Assume that and
are such
that
. Since
and
commute, it follows that
and
can be diagonalized w.r.t. a
common basis. The elements of this basis are elements of the different
eigenspaces of
. In other
words, if
with pairwise distinct
, then
is diagonal for
some block matrix
with
for each
. It follows that
for certain
.
Without loss of generality, we may therefore replace
by the intersection of
with
.
From now on, we assume that the above two reductions have been made. Let
be a diagonal matrix in
. By lemma 27, we have
if and only if any
-linear
relation
induces a relation
, where
.
Now consider a random matrix
in
, i.e. a linear combination of the
basis elements with small random integer coefficients. We compute its
blockwise Jordan normal form
so that
and let
be the restriction of
to the diagonal. We have
. Computing a basis for the
-linear relations of the form
using EH3, the above criterion now enables us to check
whether
.
If the check whether succeeds, then we are
clearly done. Otherwise, since
was chosen in a
random way, the relation
is very likely to be
satisfied for all possible choices of
(up to
permutations of coordinates inside each block). Indeed, the
for which this is not the case lie on a countable union
of algebraic varieties of lower dimension, so
has measure
.
Heuristically speaking, we may therefore conclude that
if the check fails (at least temporarily, modulo some final checks when
the overall computation of
will be completed).
Theoretically speaking, we may perform the above computations with instead of
,
where
is a basis of
and
the
are formal parameters. We then check whether
the relation
is still satisfied for the analogue
of
.
If so, then we are sure that
.
Otherwise, we keep trying with other random elements of
.
It is likely that a more efficient theoretical algorithm can be designed
for testing -linear relations
between the eigenvalues of elements in
.
One of the referees suggested to use similar methods as in [Masser,
1988; Bertrand, 1995; Compoint and Singer, 1998]. However, we did not
study this topic in more detail, since our final algorithm for the
computation of Galois groups will be based on heuristics anyway. We also
notice that a “really good” random number generator should
actually never generate points which satisfy non-trivial
algebraic relations.
A Lie algebra is said to be algebraic,
if it is the Lie algebra of some algebraic group, i.e. if
is an algebraic subset of
. It is classical [Borel, 1991, Corollary 7.7] that
the smallest Lie algebra generated by a finite number of algebraic Lie
algebras is again algebraic. The Lie algebras we will consider in our
algorithms will always be assumed to be algebraic. Given a finite number
of algebraic Lie algebras and a basis
for
, it is
easy to enrich
so that
is a Lie algebra: as long as
for two elements
, we add
to
. By what precedes, the
computed Lie algebra
is again algebraic.
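The enrichment procedure just described can be sketched as follows; the helper names and the numerical tolerance are our own, and linear independence is tested by least squares rather than exactly:

```python
import itertools
import numpy as np

def lie_closure(gens, tol=1e-9):
    """Enrich a list of n x n matrices into a basis of a linear span that
    is closed under the commutator [X, Y] = X Y - Y X."""

    def in_span(M, basis):
        if not basis:
            return np.linalg.norm(M) < tol
        A = np.stack([B.ravel() for B in basis], axis=1)
        coef, *_ = np.linalg.lstsq(A, M.ravel(), rcond=None)
        return np.linalg.norm(A @ coef - M.ravel()) < tol

    basis = []
    for G in gens:                       # start from the given generators
        if not in_span(G, basis):
            basis.append(G)
    changed = True
    while changed:                       # add commutators until closed
        changed = False
        for X, Y in itertools.combinations(basis, 2):
            C = X @ Y - Y @ X
            if not in_span(C, basis):
                basis.append(C)
                changed = True
    return basis
```

Starting from the two standard nilpotent generators of sl₂, the procedure adds their commutator (the diagonal element h) and then stabilizes at a three-dimensional algebra.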
Putting together the ingredients from the previous sections, we now have
the following algorithm for computing the smallest closed algebraic
group which contains
.
Algorithm Closure()
Input: A subset of
Output: a numeric approximation of
Step 1. [Initialize algorithm]
Compute for each
Let (notice that
)
Let
Step 2. [Closure]
While there exists an with
set
While there exists an with
set
While there exists with
do
Compute
If then set
,
quit loop and repeat step 2
Otherwise, set
Return
The termination of this algorithm relies on a lemma, whose proof was kindly communicated to the author by J.-Y. Hée.
Lemma be a closed algebraic subgroup of
and let
be a finite number of matrices in the
normalizer of
. Denote by
the group generated by
and
. If all elements in
have finite order, then
is
finite.
Proof. In the case when , the result is classical [Dixon, 1971, Theorem
9.2]. In the general case, the normalizer
of
is a closed algebraic subgroup of
and
is a normal subgroup of
. By [Borel, 1991, Theorem 6.8 and
Proposition 1.10], it follows that
is an affine
algebraic group which is isomorphic to a closed algebraic matrix group.
This reduces the general case to the special case when
.
Theorem such that, for every
with
, the set
returned by
,
coincides with the smallest closed algebraic subgroup
of
which contains
.
Proof. Clearly, the dimension of increases throughout the execution of the algorithm, so it
remains ultimately constant. At this point, the set
will keep growing and the lemma implies that
ultimately stabilizes. When this happens,
is
closed under multiplication modulo
,
as well as under multiplicative inverses, since each element in
has finite order modulo
.
We conclude that
is indeed the smallest closed
algebraic subgroup of
which contains
, provided that the approximate
zero-test always returns the right result.
In order to prove the correctness at a sufficient precision, we assume
that we use the theoretic membership test from section 4.4
and that the random number generator successively generates the same
random numbers each time we relaunch the algorithm at a higher
precision. Now consider the trace of the execution of our algorithm when
using an infinite precision. Let be a sufficient
precision such that all zero-tests in this execution tree are still
correct when we replace the infinite precision by a precision
. Then the trace of the execution at
any finite precision
coincides with the trace of
the execution at infinite precision. This completes the proof.
Remark
Assume now that is the set of generators for
as computed in theorem 20. Assume
that we have computed a reasonable candidate
for
, expressed in the original
basis corresponding to
. We
still have to reconstruct
and
with
such that
.
In the case of , by selecting
a suitable basis of
, we may
consider
as a big
matrix
whose first
columns are linearly independent. We
compute the row-echelon form of this basis:
The entries of must be in
: provided that
is indeed
generated by a basis of matrices with entries in
, the row-echelon form of this second basis
coincides with
. It therefore
suffices to reconstruct the entries of
using the
LLL-algorithm.
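As an illustration of this step, sympy's `rref` computes the canonical row-echelon form directly; the 2 × 4 matrix below (two flattened 2 × 2 basis matrices) is a hypothetical example of ours:

```python
from sympy import Matrix

# Hypothetical basis of a 2-dimensional space of 2 x 2 matrices, each
# matrix flattened into one row.  The row-echelon form is canonical, so
# its entries lie in the base field even when the computed basis does not.
B = Matrix([[1, 2, 0, 1],
            [3, 6, 1, 4]])
E, pivots = B.rref()
```

Only the entries of E then need to be reconstructed with the LLL-algorithm, since E does not depend on the choice of the approximate basis.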
In the case of a matrix , the
set
is an algebraic variety of dimension
over
. Now
choose
close to
in such
a way that
independent coordinates of
are all in
.
Then the other coordinates of
,
considered as elements of
,
are easily found using Newton's method. Since
is
an algebraic variety, these other coordinates are actually in
, and we reconstruct them using the
LLL-algorithm.
The algorithm Closure from the previous section is quite
inefficient when the set becomes large. It is
therefore useful to seek a better computational representation of
. For finite groups
, one classical idea is to search
for a sequence of subgroups
![]() |
(9) |
such that the indices are small. Then we may
represent elements in
by sequences
with
for each
. This representation is particularly useful if
operates on a set
and if
there exist points
in
such that
is the stabilizer of the set for each
. Then the set
corresponds to the orbit of
while leaving
fixed [Sims, 1970; Sims, 1971].
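The orbit computation underlying this representation can be sketched generically; the function `orbit`, the action callback and the permutation example are our own illustrative choices:

```python
def orbit(x, gens, act):
    """Orbit of the point x under the group generated by gens, where the
    action is given by act(g, y).  Standard breadth-first saturation, as
    in the Schreier-Sims approach."""
    seen = {x}
    frontier = [x]
    while frontier:
        y = frontier.pop()
        for g in gens:
            z = act(g, y)
            if z not in seen:
                seen.add(z)
                frontier.append(z)
    return seen
```

For example, a 3-cycle encoded as the tuple (1, 2, 0), acting by indexing, has the full set {0, 1, 2} as the orbit of any point.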
In the case of matrix groups, one often takes
for
[Murray and O'Brien, 1995]. However, this
approach only yields interesting results when there exist non-trivial
invariant subspaces under the action of the group, which will usually
not be the case for us (otherwise we may factor
and consider smaller problems). A theoretical way out of this is to also
consider the action of
on exterior powers
. However, this approach is very
expensive from a computational point of view. In our more specific
context of matrices with complex entries, we will therefore combine two
other approaches: non-commutative lattice reduction and the operation of
on
via conjugations
.
The algebra admits a natural (multiplicative)
norm, given by
where stands for the Euclidean norm on
. If
is
finite, this enables us to construct
as in (9) as follows. Assuming that
have been
constructed, we consider a matrix
for which
is minimal, and let
be the
set generated by
and
in
. This construction allows us
to rapidly identify a big commutative part of
. More precisely, we have
Proposition be such that
and
. Then we have
Proof. Writing and
, we expand
and
in
.
This yields a non-commutative power series in
and
whose terms in
and
vanish. It follows that
The proposition implies that whenever
. Now take
and
with
,
where the
are as above. Then it follows that
and
commute whenever
. What is more, proposition
34 shows that taking commutators is a convenient way to
construct matrices close to identity from a set of non-commutative
generators.
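A quick numerical experiment illustrates this second-order behaviour of commutators near the identity; the dimension, random seed and use of the spectral norm are arbitrary choices of ours:

```python
import numpy as np

def commutator_distance(eps, seed=0):
    """Spectral norm of g h g^-1 h^-1 - 1 for random g, h at distance
    O(eps) from the identity; the proposition predicts that this
    distance is of order eps**2."""
    rng = np.random.default_rng(seed)
    I = np.eye(3)
    g = I + eps * rng.standard_normal((3, 3))
    h = I + eps * rng.standard_normal((3, 3))
    comm = g @ h @ np.linalg.inv(g) @ np.linalg.inv(h)
    return np.linalg.norm(comm - I, 2)
```

Dividing eps by 10 divides the returned distance by roughly 100, which is exactly why taking commutators is a convenient way to produce matrices close to the identity.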
However, from the effective point of view, we will not carry out the
exact computation of minimal representatives
in cosets of algebraic groups in detail. Rather, we will simulate such a
computation in a way which is sufficient for our purpose. If
is such that
is small, then we
will also try to use the fact that the centralizer
of
is often a big subgroup of
, so the orbit of
is
small.
Let be a closed algebraic subgroup of
with associated Lie-algebra
.
In this section, we will show how to compute efficiently with elements
of the finite group
. Until
the very end of this section, we assume that
is
included in the connected component of the normalizer of
in
. We denote
by
the Lie algebra of this connected component.
By [Humphreys, 1981, Theorem 13.3], we have
.
.
generates
.
whenever
is another
such generator.
Let be the prime-decomposition of the order of
modulo
,
with
. Let
and
for all
.
Let
be the subgroup of
of elements which commute with
modulo
, so that
. For
,
we represent elements in the quotient
by
elements in the orbit of the action
modulo
. Since
, the set
is a Lie algebra
whose normalizer contains
.
Consequently,
, and we
represent elements in
by products
, with
and
. The elements in
are
represented in a similar way as the elements in
, using recursion. The successive matrices
for
,
, etc. will be called
a basis for
. A
basis
is said to be sorted if
.
Let ,
, etc. be as above. We start by
computing the orbit of
modulo
for
. Whenever we hit an
element
(modulo
)
with
or
,
then we start the process of basis reduction. Otherwise, we obtain a
finite orbit, together with a finite number of matrices by which we have
to extend
. We keep doing
this using the same method for
until
.
At the end, we still have to show how to extend
with a new matrix
. Now
recursive application of the algorithm to
and
yields a sorted basis
. When keeping track of the corresponding powers of
during the computations, we also obtain a finite
system of generators for
.
Using g.c.d. computations we either obtain a minimal
generator
or a new element in the connected
component. In the first case, we return
if
and apply basis reduction otherwise.
Using the above base extension procedure, we may transform a raw basis
into a basis for
:
starting with
, we
successively add
. However,
it is more efficient to reduce
first. More
precisely, let us now describe a procedure which tries to replace
by a better raw basis
,
with
, and whose elements are
closer to identity. Of course, we may always return the original basis
if a better one could not be found.
We first test whether all basis elements are roots of unity modulo . If not, then we found a new
element in the connected component. We next test whether there exist
with
,
in which case we keep adding the smallest such commutator to the basis.
Whenever this stops, we write
with
and consider all lattice reductions
proposed by the LLL-algorithm in the commutative vector space
. Whenever
, for one such reduction, then we perform the
corresponding reduction
on our basis and keep
repeating the basis reduction process.
We hope to have provided convincing evidence that analytic methods can be used for the efficient computation of differential Galois groups and for related problems such as the factorization of linear differential operators.
The two main remaining challenges are the concrete implementation of the algorithms presented here (as part of a more general library for the computation with analytic functions such as [van der Hoeven et al., 2002–2005]) and the development of a priori or a posteriori methods for ensuring the correctness of the computed result. Some ideas into this direction are as follows:
Use theoretical bounds on the number of connected components of the computed Galois group and related bounds on the sizes of the basis elements in EH2 and EH3. See [Derksen et al., 2003, Section 3.2] for some results.
Use the classification theory for algebraic groups in order to
gather more information about the computed Galois group . In particular, it is useful to compute
the radical (or unipotent radical) of
,
thereby reducing the study of
to the study
of a finite group, a semisimple (or reductive) group and a solvable
(or unipotent) group [Humphreys, 1981, page 125]. We refer to [de
Graaf, 2000] for computational aspects of the corresponding Lie
algebras.
Use the classical theory of invariant subspaces in symmetric
products or exterior powers as an a posteriori correctness
check and search for an effective version of Chevalley's theorem
[Humphreys, 1981, Theorem 11.2]. One may start with generalizing
[van Hoeij and Weil, 1997; Compoint and Singer, 1998] and notice
that a better knowledge of the Galois group
helps to further restrict the number of monomials (i.e.
“generalized exponents”) to be considered. Indeed, if
is an arbitrary algebraic subgroup of
, for which the ring of
invariants is easy to compute, then the invariants for
must be searched for in this ring. Also, there are known
algorithms for computing the invariants for certain types of
algebraic groups, like linearly reductive groups [Derksen, 1999].
The representation for algebraic groups we
used in section 4 is efficient for computations (we
merely do linear algebra in dimension
,
lattice reduction and computations with small finite groups).
Nevertheless, it may be interesting to reconstruct the algebraic
equations for
and search for equations which
are particularly sparse with respect to suitably chosen coordinates.
For instance, a big cyclic group admits a particularly nice
(resp. large) Gröbner basis w.r.t.
well chosen (resp. badly chosen) coordinates.
Conversely, it may be interesting to switch back from a Gröbner
basis representation to our representation.
Carefully identify those parts of the algorithm which either prove
or disprove certain matrices to belong to the Galois group. For
instance, we know that all Stokes matrices are unipotent. Given a
non-zero transcendental number ,
we may then reliably conclude that a Stokes matrix of the form
generates the group
.
An interesting idea to get rid of the transcendental part of the
computations might be to quotient the values of the functions in our
basis of solutions by the action of the
Galois group. For instance, if
and
are close regular points in
, is it true that the orbit of
under the action of the Galois group necessarily contains a point in
? This is clearly the
case for finite Galois groups and the full Galois group, as well as
for the equations
and
. More generally, as soon as
becomes more transcendental, its orbit under the action of the
Galois group becomes larger, so the likelihood of finding a point in
the intersection with
increases.
Besides the above ideas for improving the algorithms, this paper also raises a few other interesting questions:
Are there more efficient approaches for the reconstruction of
elements in in section 3.4,
both in the cases when
and when
is more general? Also, as pointed out above, we may
want to reconstruct equations for
from the
variety.
Does there exist an efficient membership test in section 4.4 which does not rely on probabilistic arguments?
Can the approach of section 4 be adapted to the computation of a “basis” for the usual topological closure of a finitely generated matrix group?
Of course, a better mastering of the algorithms in this paper may also
lead to more efficient algorithms for other computations which rely on
differential Galois theory, like the computation of Liouvillian or other
forms of solutions. More generally, our algorithms may be used for other
computations with algebraic matrix groups over
and other fields of characteristic
.
We also expect all results to generalize to holonomic systems of linear
partial differential equations.
Beke, E., 1894. Die Irreduzibilität der homogenen linearen Differentialgleichungen. Math. Ann. 45.
Bertrand, D., 1995. Minimal heights and polarizations on group varieties. Duke Math. J. 80 (1), 223–250.
Bertrand, D., Beukers, F., 1985. Équations différentielles linéaires et majorations de multiplicités. Ann. Sci. de l'École Norm. Sup. 28 (4-5), 473–494.
Borel, A., 1991. Linear algebraic groups, 2nd Edition. Springer-Verlag.
Braaksma, B., 1991. Multisummability and Stokes multipliers of linear meromorphic differential equations. J. Diff. Eq 92, 45–75.
Braaksma, B., 1992. Multisummability of formal power series solutions to nonlinear meromorphic differential equations. Ann. Inst. Fourier de Grenoble 42, 517–540.
Candelberger, B., Nosmas, J., Pham, F., 1993. Approche de la résurgence. Actualités mathématiques. Hermann.
Chudnovsky, D., Chudnovsky, G., 1990. Computer algebra in the service of mathematical physics and number theory (Computers in Mathematics, Stanford, CA, 1986). In: Lect. Notes in Pure and Applied Math. Vol. 125. Dekker, New York, pp. 109–232.
Cluzeau, T., 2004. Algorithmique modulaire des équations différentielles linéaires. Ph.D. thesis, Université de Limoges.
Compoint, E., Singer, M., 1998. Computing Galois groups of completely reducible differential equations. JSC 11, 1–22.
de Graaf, W., 2000. Lie Algebras: Theory and Algorithms. Vol. 56 of North Holland Mathematical Library. Elsevier science.
Derksen, H., 1999. Computation of reductive group invariants. Adv. in Math. 141, 366–384.
Derksen, H., Jaendel, E., Koiran, P., 2003. Quantum automata and algebraic groups. Tech. Rep. 2003-39, LIP, École Norm. Sup. de Lyon, presented at MEGA 2003; to appear in JSC.
Dixon, J., 1971. The structure of linear groups. Van Nostrand Reinhold Company.
Écalle, J., 1985. Les fonctions résurgentes I–III. Publ. Math. d'Orsay 1981 and 1985.
Écalle, J., 1987. L'accélération des fonctions résurgentes (survol), unpublished manuscript.
Écalle, J., 1992. Introduction aux fonctions analysables et preuve constructive de la conjecture de Dulac. Hermann, collection: Actualités mathématiques.
Écalle, J., 1993. Six lectures on transseries, analysable functions and the constructive proof of Dulac's conjecture. In: Schlomiuk, D. (Ed.), Bifurcations and periodic orbits of vector fields. Kluwer, pp. 75–184.
Fabry, E., 1885. Sur les intégrales des équations différentielles linéaires à coefficients rationnels. Ph.D. thesis, Paris.
Humphreys, J., 1981. Linear algebraic groups. Graduate Texts in Math. Springer.
Kaplansky, I., 1957. An introduction to differential algebra. Hermann.
Kolchin, E., 1973. Differential algebra and algebraic groups. Academic Press, New York.
Lenstra, A., Lenstra, H., Lovász, L., 1982. Factoring polynomials with rational coefficients. Math. Ann. 261, 515–534.
Martinet, J., Ramis, J.-P., 1991. Elementary acceleration and multisummability. Ann. Inst. Henri Poincaré 54 (4), 331–401.
Masser, D., 1988. Linear relations on algebraic groups. In: Baker, A. (Ed.), New advances in transendence theory. Cambridge Univ. Press, pp. 248–262.
Menous, F., 1996. Les bonnes moyennes uniformisantes et leur application à la resommation réelle. Ph.D. thesis, Université Paris-Sud, France.
Mitschi, C., 1996. Differential Galois groups of confluent generalized hypergeometric equations: an approach using stokes multipliers. Pac. J. Math. 176 (2), 365–405.
Moenck, R., 1973. Fast computation of gcds. In: Proc. of the 5th ACM Annual Symposium on Theory of Computing. ACM Press, New York, pp. 142–171.
Murray, S. H., O'Brien, E. A., 1995. Selecting base points for the Schreier-Sims algorithm for matrix groups. J.S.C. 18, 577–584.
Pan, V. Y., Wang, X., July 2002. Acceleration of euclidean algorithm and extensions. In: Mora, T. (Ed.), Proc. ISSAC '02. Lille, France, pp. 207–213.
Ramis, J.-P., 1985. Phénomène de Stokes et filtration Gevrey sur le groupe de Picard-Vessiot. Notes aux CRAS 301 (I/5), 165–167.
Ritt, J., 1950. Differential algebra. Amer. Math. Soc., New York.
Schlesinger, L., 1895. Handbuch der Theorie der linearen Differentialgleichungen. Vol. I. Teubner, Leipzig.
Schlesinger, L., 1897. Handbuch der Theorie der linearen Differentialgleichungen. Vol. II. Teubner, Leipzig.
Sims, C., 1970. Computational problems in abstract algebra. Pergamon press, Oxford, Ch. Computational methods in the study of permutation groups, pp. 169–183.
Sims, C., 1971. Determining the conjugacy classes of permutation groups. In: Birkhoff, G., Jr., M. H. (Eds.), Computers in algebra and number theory. Vol. 4 of Proc. A.M.S. A.M.S., New York, pp. 191–195.
Singer, M., Ulmer, F., 1993. Galois groups of second and third order linear differential equations. J.S.C. 16, 9–36.
van der Hoeven, J., 1999. Fast evaluation of holonomic functions. TCS 210, 199–215.
van der Hoeven, J., 2001a. Complex transseries solutions to algebraic differential equations. Tech. Rep. 2001-34, Univ. d'Orsay.
van der Hoeven, J., 2001b. Fast evaluation of holonomic functions near and in singularities. JSC 31, 717–743.
van der Hoeven, J., 2002. Relax, but don't be too lazy. JSC 34, 479–542.
van der Hoeven, J., 2003. Majorants for formal power series. Tech. Rep. 2003-15, Université Paris-Sud, Orsay, France.
van der Hoeven, J., 2005a. Effective complex analysis. J.S.C. 39, 433–449.
van der Hoeven, J., 2005b. Efficient accelero-summation of holonomic functions. Tech. Rep. 2005-54, Université Paris-Sud, Orsay, France, submitted to JSC.
van der Hoeven, J., 2006. Computations with effective real numbers. TCS 351, 52–60.
van der Hoeven, J., et al., 2002–2005. Mmxlib: the standard library for Mathemagix. http://www.mathemagix.org/mmxweb/web/welcome-mml.en.html.
van der Put, M., 1995. Differential equations
modulo . Compositio
Mathematica 97, 227–251.
van der Put, M., Singer, M., 2003. Galois Theory of Linear Differential Equations. Vol. 328 of Grundlehren der mathematischen Wissenschaften. Springer.
van Hoeij, M., 1996. Factorization of linear differential operators. Ph.D. thesis, Univ. of Nijmegen, The Netherlands.
van Hoeij, M., 1997. Factorization of differential operators with rational functions coefficients. J.S.C. 24, 537–561.
van Hoeij, M., Weil, J., 1997. An algorithm for computing invariants of differential Galois groups. J. Pure Appl. Algebra 117–118, 353–379.