The class of reduction-based algorithms was introduced recently as a new approach towards creative telescoping. Starting with Hermite reduction of rational functions, various reductions have been introduced for increasingly large classes of holonomic functions. In this paper we show how to construct reductions for general holonomic functions, in the purely differential setting.
Let $\mathbb{K}$ be an effective field of characteristic zero and $\rho \in \mathbb{K}[z]$ a non-zero polynomial. Consider the system of differential equations

$$\rho f' = M f \tag{1}$$

where $M$ is an $n \times n$ matrix with entries in $\mathbb{K}[z]$ and $f = (f_1, \ldots, f_n)^\top$ is a column vector of $n$ unknown functions. Notice that any system of differential equations $g' = N g$ with $N \in \mathbb{K}(z)^{n \times n}$ can be rewritten in this form by taking $\rho$ to be a multiple of all denominators of the entries of $N$.
Let $f$ be a formal solution to (1) and consider the $\mathbb{K}[z, \rho^{-1}]$-module $W = \mathbb{K}[z, \rho^{-1}]\, f_1 + \cdots + \mathbb{K}[z, \rho^{-1}]\, f_n$ of linear combinations $P f$, where $P \in \mathbb{K}[z, \rho^{-1}]^{1 \times n}$ is a row vector. Then $W$ has the natural structure of a D-module for the derivation $\partial$ defined by $\partial(P f) = (P' + \rho^{-1} P M) f$. A $\mathbb{K}$-linear mapping $\operatorname{Red} : W \rightarrow W$ is said to be a reduction if $g - \operatorname{Red}(g) \in \partial W$ for all $g \in W$. Such a reduction is said to be confined if its image $\operatorname{Red}(W)$ is a finite dimensional subspace of $W$ and normal if $\operatorname{Red}(\partial g) = 0$ for all $g \in W$.
In this paper, we will show how to construct confined reductions. Such reductions are interesting for their application to creative telescoping [13, 7], as we briefly recall in section 2. The first reduction of this kind is the Hermite reduction [2, 4], which corresponds to the case of rational functions. The existence of normal confined reductions has been shown in increasingly general cases [12, 6], most noticeably for Fuchsian differential equations [5]. We refer to [4, 9] for more details and for the application to creative telescoping.
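For concreteness, the following sketch implements the classical Horowitz–Ostrogradsky variant of Hermite reduction for rational functions, which realizes a confined reduction in this simplest case. The code assumes SymPy; the helper name `hermite_reduce` and its calling convention are ours, not taken from the literature.

```python
from sympy import Poly, symbols, gcd, div, diff, linsolve, S

z = symbols('z')

def hermite_reduce(a, d):
    """Write a/d = g' + h with h having a square-free denominator.

    Assumes deg a < deg d and gcd(a, d) = 1; returns (g, h) as expressions.
    """
    A, D = Poly(a, z), Poly(d, z)
    D2 = gcd(D, D.diff(z))                 # repeated part of the denominator
    D1 = div(D, D2)[0]                     # square-free part: D = D1 * D2
    t = div(D1 * D2.diff(z), D2)[0]        # D1*D2'/D2 is an exact quotient
    nb, nc = D2.degree(), D1.degree()
    bs, cs = symbols(f'b0:{nb}'), symbols(f'c0:{nc}')
    b = sum((bs[i] * z**i for i in range(nb)), S.Zero)
    c = sum((cs[i] * z**i for i in range(nc)), S.Zero)
    # Horowitz-Ostrogradsky: a = D1*b' - t*b + D2*c, solved coefficient-wise
    eq = Poly(D1.as_expr() * diff(b, z) - t.as_expr() * b
              + D2.as_expr() * c - A.as_expr(), z)
    sol = next(iter(linsolve(eq.all_coeffs(), [*bs, *cs])))
    vals = dict(zip([*bs, *cs], sol))
    return b.subs(vals) / D2.as_expr(), c.subs(vals) / D1.as_expr()
```

For instance, `hermite_reduce(1, z**2)` yields `(-1/z, 0)`, reflecting that $1/z^2 = (-1/z)'$ is already a derivative, so its reduction vanishes.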
Our construction of confined reductions proceeds in two stages. In section 4, we first focus on the $\mathbb{K}[z]$-submodule of $W$ of linear combinations $P f$ with $P \in \mathbb{K}[z]^{1 \times n}$. We will construct a $\mathbb{K}$-linear head reduction $\operatorname{Red}_\infty$ such that $P f - \operatorname{Red}_\infty(P f) \in \partial W$ and such that $\deg \operatorname{Red}_\infty(P f)$ is bounded from above for all $P \in \mathbb{K}[z]^{1 \times n}$. Here we understand that $\deg (P f) := \deg P$ for all $P$. The construction uses a variant of Gaussian elimination that will be described in section 3.
The head reduction may also be regarded as a way to reduce the valuation of $P f$ in $z^{-1}$, that is, at the point at infinity. In section 5 we turn to tail reductions, with the aim to reduce the valuation of $P f$ at all other points, in $\mathbb{K}$ and in its algebraic closure $\bar{\mathbb{K}}$. This is essentially similar to head reduction via a change of variables, while allowing ourselves to work in algebraic extensions of $\mathbb{K}$.
In the final section 7, we show how to glue the head reduction and the tail reductions at each of the roots of $\rho$ together into a global confined reduction on $W$. Using straightforward linear algebra and suitable valuation bounds, one can further turn this reduction into a normal one, as will be shown in detail in section 7.2.
The valuation bounds that are required in section 7.2 are proved in section 6, where we also prove degree and valuation bounds for so-called head and tail choppers. The existence of head and tail choppers whose size is polynomial in the size of the original equation makes it possible to derive polynomial bounds for the complexity of creative telescoping: this follows from polynomial bounds for the dimension of the image of the reduction and for the cost of reducing an element of $W$. We intend to work out the details in a forthcoming paper.
Let $K$ be an effective subfield of $\mathbb{C}$ and let $\partial$ and $\delta$ denote the partial derivations with respect to $z$ and $u$. Consider a system of differential equations

$$\rho\, \partial f = M f, \qquad \rho\, \delta f = N f \tag{2}$$

where $\rho \in K[u, z]$ is non-zero and $M, N \in K[u, z]^{n \times n}$ are such that

$$\delta (\rho^{-1} M) - \partial (\rho^{-1} N) = \rho^{-2}\, (N M - M N).$$

Setting $\mathbb{K} = K(u)$, the first part of (2) then becomes of the form (1). Notice that any bivariate holonomic function is an entry of a solution to a system of the form (2).
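As a quick sanity check, the integrability condition above can be tested mechanically. The following SymPy snippet verifies it for the solution $f = (\mathrm{e}^{u z}, z\, \mathrm{e}^{u z})$ with $\rho = 1$; the function `compatible` is our own illustration, not part of the paper.

```python
from sympy import Matrix, symbols, simplify

u, z = symbols('u z')

def compatible(rho, M, N):
    """Check d(M/rho)/du - d(N/rho)/dz == (N*M - M*N)/rho**2."""
    A, B = M / rho, N / rho
    return simplify(A.diff(u) - B.diff(z) - (B * A - A * B)) \
        == Matrix.zeros(*M.shape)

# f = (exp(u*z), z*exp(u*z)) solves f_z = M f and f_u = N f with rho = 1:
M = Matrix([[u, 0], [1, u]])
N = Matrix([[z, 0], [0, z]])
assert compatible(1, M, N)
```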
Let $f$ be a complex analytic solution of the above system of equations and let $W$ be the $\mathbb{K}[z, \rho^{-1}]$-module generated by the entries of $f$. Notice that $W$ is stable under both $\partial$ and $\delta$. For any $h \in W$ and any non-singular contour $\gamma$ in $\mathbb{C}$ between two points $z_0$ and $z_1$, we may consider the integral

$$I(u) = \int_\gamma h(u, z)\, \mathrm{d} z,$$

which defines a function in the single variable $u$. It is natural to ask under which conditions $I$ is a holonomic function and how to compute a differential operator $L \in \mathbb{K}[\delta]$ with $L(I) = 0$.
The idea of creative telescoping is to compute a differential operator $L \in \mathbb{K}[\delta]$ and a function $g \in W$ with

$$L(h) = \partial(g). \tag{3}$$

Integrating over $\gamma$, we then obtain

$$L(I) = \int_\gamma L(h)\, \mathrm{d} z = g(u, z_1) - g(u, z_0).$$

If the contour $\gamma$ has the property that $g(u, z_0) = g(u, z_1)$ for all $u$ (where the equality is allowed to hold at the limit if necessary), then $L$ yields the desired annihilator with $L(I) = 0$. In general, we need to multiply $L$ on the left with an annihilator of $u \mapsto g(u, z_1) - g(u, z_0)$.
Now assume that we have a computable confined reduction $\operatorname{Red} : W \rightarrow W$. Then the functions in the sequence $\operatorname{Red}(h), \operatorname{Red}(\delta h), \operatorname{Red}(\delta^2 h), \ldots$ can all be computed and they belong to the finite dimensional $\mathbb{K}$-vector space $\operatorname{Red}(W)$. Using linear algebra, that means that we can compute a relation

$$L_r \operatorname{Red}(\delta^r h) + \cdots + L_0 \operatorname{Red}(h) = 0 \tag{4}$$

with $L_0, \ldots, L_r \in \mathbb{K}$, not all zero. Taking $L = L_r \delta^r + \cdots + L_0$, we thus obtain (3). If the relation (4) has minimal order and the reduction $\operatorname{Red}$ is normal, then it can be shown [9] that there exist no relations of the form (3) of order lower than $r$.
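The linear algebra step leading to (4) is straightforward to realize. The sketch below assumes a hypothetical callback `next_vector(i)` that returns the coordinate vector of $\operatorname{Red}(\delta^i h)$ with respect to a fixed basis of $\operatorname{Red}(W)$, and returns the coefficients $L_0, \ldots, L_r$ of a minimal-order relation.

```python
from sympy import Matrix

def telescoper_coeffs(next_vector, max_order=32):
    """Return [L_0, ..., L_r] with L_r Red(delta^r h) + ... + L_0 Red(h) = 0."""
    vecs = []
    for i in range(max_order + 1):
        vecs.append(list(next_vector(i)))   # coordinates of Red(delta^i h)
        ker = Matrix(vecs).T.nullspace()    # columns are the vectors so far
        if ker:                             # first dependence: order r = i
            return list(ker[0])
    raise RuntimeError("no telescoper of order <= max_order found")
```

Taking $L = L_r \delta^r + \cdots + L_0$ then gives the telescoper of (3); the certificate $g$ can be accumulated during the reductions.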
Let $T \in \mathbb{K}^{m \times n}$ be a matrix and denote the $i$-th row of $T$ by $T_i$. Assuming that $T_i \neq 0$, its leading index $\ell_i$ is the smallest index $j$ with $T_{i, j} \neq 0$. We say that $T$ is in row swept form if there exists a $k \in \{0, \ldots, m\}$ such that $\ell_1 < \cdots < \ell_k$ and $T_i = 0$ for all $i > k$. Notice that $T$ has rank $k$ in this case.
An invertible matrix $S \in \mathbb{K}^{m \times m}$ such that $S T$ is in row swept form will be called a row sweaper for $T$. We may compute such a matrix $S$ using the routine RowSweaper below, which is really a variant of Gaussian elimination. Whenever we apply this routine to a matrix $T$ such that the truncated matrix $T_{\leqslant k}$ with rows $T_1, \ldots, T_k$ is in row swept form, we notice that these first $k$ rows are left invariant by the row sweaping process. In other words, the returned row sweaper $S$ is of the form

$$S = \begin{pmatrix} \operatorname{Id}_k & 0 \\ \ast & \ast \end{pmatrix}.$$

If, in addition, the matrix $T$ has rank $k$, then $S T$ is of the form

$$S T = \begin{pmatrix} T_{\leqslant k} \\ 0 \end{pmatrix}.$$
Algorithm RowSweaper($T \in \mathbb{K}^{m \times n}$)

  $S := \operatorname{Id}_m$, $k := 0$
  for $j := 1, \ldots, n$
    if $T_{i, j} \neq 0$ for some $i > k$
      Let $i > k$ be minimal with $T_{i, j} \neq 0$, set $k := k + 1$
      Swap the rows $i$ and $k$ of $T$ and of $S$
      for $i := k + 1, \ldots, m$: subtract $(T_{i, j} / T_{k, j})$ times row $k$ from row $i$, both in $T$ and in $S$
  return $S$
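A direct transcription of RowSweaper in Python might look as follows; entries are assumed to come from an exact field, here `fractions.Fraction`. This is a sketch of the routine above, not optimized code.

```python
from fractions import Fraction

def row_sweaper(T):
    """Return an invertible S (as nested lists) with S*T in row swept form."""
    m, n = len(T), len(T[0])
    T = [[Fraction(x) for x in row] for row in T]          # work on a copy
    S = [[Fraction(int(i == j)) for j in range(m)] for i in range(m)]
    k = 0
    for j in range(n):                                     # scan columns
        piv = next((i for i in range(k, m) if T[i][j] != 0), None)
        if piv is None:
            continue
        T[k], T[piv] = T[piv], T[k]                        # swap in T ...
        S[k], S[piv] = S[piv], S[k]                        # ... and in S
        for i in range(k + 1, m):                          # eliminate below
            c = T[i][j] / T[k][j]
            T[i] = [a - c * b for a, b in zip(T[i], T[k])]
            S[i] = [a - c * b for a, b in zip(S[i], S[k])]
        k += 1
    return S
```

On input whose first rows already form a row swept block with the smallest leading indices, these rows are returned unchanged, as exploited in section 4.3.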
Let $m, n \geqslant 1$ and $H \in \mathbb{K}[z, z^{-1}]^{m \times n}$. We may regard $H$ as a Laurent polynomial with matrix coefficients $H_i \in \mathbb{K}^{m \times n}$:

$$H = \sum_i H_i\, z^i.$$

If $H \neq 0$, then we denote $\deg H := \max\, \{i : H_i \neq 0\}$ and $\operatorname{val} H := \min\, \{i : H_i \neq 0\}$. For any non-zero $H$, we also denote $H_\sharp := H_{\deg H}$. Setting

$$H^{[1]} := \rho H' + H M, \tag{5}$$

the equation (1) implies

$$\partial (C H f) = \rho^{-1} C H^{[1]} f \tag{6}$$

for any constant matrix $C \in \mathbb{K}^{1 \times m}$. The matrix $H^{[1]}$ can also be regarded as a Laurent polynomial with matrix coefficients $H^{[1]}_i \in \mathbb{K}^{m \times n}$. We say that a square matrix $H \in \mathbb{K}[z, z^{-1}]^{n \times n}$ is a head chopper for (1) if $H^{[1]}_\sharp$ is an invertible matrix.
**Lemma.** For any $H \in \mathbb{K}[z, z^{-1}]^{m \times n}$ and $k \in \mathbb{Z}$, we have

$$(z^k H)^{[1]} = z^k \left( H^{[1]} + k z^{-1} \rho H \right). \tag{7}$$

Proof. Setting $G = z^k H$, we have $G' = z^k H' + k z^{k-1} H$ and $G M = z^k H M$, so that

$$G^{[1]} = \rho G' + G M = z^k (\rho H' + H M) + k z^{k-1} \rho H.$$

In other words, $G^{[1]} = z^k (H^{[1]} + k z^{-1} \rho H)$.□
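As a sanity check, the following SymPy snippet verifies (7) on a small arbitrary example; the helper `bracket` for $H \mapsto H^{[1]}$ is our own naming.

```python
from sympy import Matrix, symbols

z = symbols('z')

def bracket(H, rho, M):
    """H^[1] = rho*H' + H*M, as in (5)."""
    return rho * H.diff(z) + H * M

rho = z**2 + 1
M = Matrix([[z, 1], [0, 2*z]])
H = Matrix([[z**3, 1], [z, z**2]])
k = 3
lhs = bracket(z**k * H, rho, M)
rhs = z**k * bracket(H, rho, M) + k * z**(k - 1) * rho * H
assert (lhs - rhs).expand() == Matrix.zeros(2, 2)   # identity (7) holds
```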
**Proposition 1.** Let $k \in \mathbb{Z}$, let $C \in \mathbb{K}^{n \times n}$ be a constant matrix, and assume that $C$ is invertible. Then $H$ is a head chopper for (1) if and only if $C z^k H$ is a head chopper for (1).

Proof. Assume that $H$ is a head chopper for (1). Setting $G = C H$ and $\delta = \deg G^{[1]}$, we have $G^{[1]} = C H^{[1]}$ and $G^{[1]}_\sharp = C H^{[1]}_\sharp$ is invertible. Similarly, setting $G = z^k H$ and $\delta = \deg G^{[1]}$, we have $G^{[1]} = z^k (H^{[1]} + k z^{-1} \rho H)$ by (7), whence $G^{[1]}_\sharp$ is invertible. The opposite directions follow by taking $C^{-1}$ and $-k$ in the roles of $C$ and $k$.□
Notice that the equations (5–7) and Proposition 1 generalize to the case when $H \in \mathbb{K}[z, z^{-1}]^{m \times n}$ for some arbitrary $m \geqslant 1$. Notice also that $\deg H^{[1]} \leqslant \deg H + \sigma$, where $\sigma := \max\, (\deg \rho - 1, \deg M)$. Given $d \in \mathbb{Z}$ and the operation $P \mapsto P^{[1]}$ on row vectors, let

$$\mathcal{A}_d := \left\{ P \in \mathbb{K}[z]^{1 \times n} : \deg P^{[1]} \leqslant d \right\}, \qquad \mathcal{A} := \mathbb{K}[z]\, \mathcal{A}_d.$$

It is easy to see that both $\mathcal{A}_d$ and $\mathcal{A}$ are modules, over $\mathbb{K}$ and over $\mathbb{K}[z]$ respectively.
Now consider a matrix $A \in \mathbb{K}[z]^{n \times n}$ with rows $A_1, \ldots, A_n$ ordered by increasing degree $\deg A_1 \leqslant \cdots \leqslant \deg A_n$. Let $d \in \mathbb{Z}$, let $A^{[1]}$ be the matrix with rows $A_1^{[1]}, \ldots, A_n^{[1]}$, and let $k$ be maximal such that $A_1, \ldots, A_k \in \mathcal{A}_d$. We say that $A$ is a $d$-head annihilator for (1) if the following conditions are satisfied:

HA1. The rows of $A$ form a basis for a $\mathbb{K}[z]$-module that contains $\mathcal{A}_d$;

HA2. The matrix $A_\star$ is invertible;

HA3. The first $k$ rows of $A^{[1]}_{\star, d}$ are $\mathbb{K}$-linearly independent.

Here $A_\star \in \mathbb{K}^{n \times n}$ denotes the matrix whose $i$-th row is the coefficient of $z^{\deg A_i}$ in $A_i$, and $A^{[1]}_{\star, d} \in \mathbb{K}^{n \times n}$ the matrix whose $i$-th row is the coefficient of $z^d$ in $A_i^{[1]}$.
The matrix $\operatorname{Id}_n$ is obviously a $d$-head annihilator for all sufficiently small $d$. If $k = n$, then we notice that HA3 implies that $A$ is a head chopper for (1). The following proposition is also easily checked:
**Proposition 3.** For any constant matrix $S \in \mathbb{K}^{n \times n}$, we have $(S A)^{[1]} = S A^{[1]}$.

**Proposition 4.** Let $A$ be a $d$-head annihilator for (1) and let $k$ and $A^{[1]}_{\star, d}$ be as in HA3. Then there exists an invertible matrix $S \in \mathbb{K}^{n \times n}$ of the form

$$S = \begin{pmatrix} \operatorname{Id}_k & 0 \\ \ast & \ast \end{pmatrix}$$

such that the last $n - r$ rows of $(S A)^{[1]}_{\star, d}$ vanish, where $r$ is the rank of $A^{[1]}_{\star, d}$, and such that $S A$ is a $d$-head annihilator for (1).
Proof. Let $S$ be the row sweaper for $A^{[1]}_{\star, d}$ as computed by the algorithm RowSweaper from section 3. By construction, $(S A)_i = A_i$ for all $i \leqslant k$. We claim that $\deg (S A)_i \leqslant \deg A_i$ for all $i$. Indeed, if this were not the case, then this would imply that some row of $A_\star$ is a non-trivial $\mathbb{K}$-linear combination of the other rows, which contradicts HA2. From our claim, it follows that the rows of $S A$ remain ordered by increasing degree and $k$ is maximal with the property that $(S A)_1, \ldots, (S A)_k \in \mathcal{A}_d$. Since the first $k$ rows of $S A$ and $A$ coincide, the first $k$ rows of $(S A)^{[1]}_{\star, d}$ are $\mathbb{K}$-linearly independent. This shows that HA3 is satisfied for $S A$. As to HA2, let $\tilde{S}$ be the invertible matrix with $(S A)_\star = \tilde{S} A_\star$. Then we notice that $\tilde{S} A_\star$ is a product of invertible matrices, whence $(S A)_\star$ is invertible. The rows of $S A$ clearly form a basis for the same $\mathbb{K}[z]$-module as the rows of $A$, since $S$ is invertible.□
**Proposition 5.** Let $A$ be a $d$-head annihilator for (1), let $k$ be as in HA3, and assume that the last $n - k$ rows of $A^{[1]}_{\star, d}$ vanish. Let $B$ be the matrix with rows

$$B_1 = A_1, \ \ldots, \ B_k = A_k, \qquad B_{k + 1} = z A_{k + 1}, \ \ldots, \ B_n = z A_n.$$

Then $B$ is a $(d + 1)$-head annihilator for (1).
Proof. We have $B_i^{[1]} = A_i^{[1]}$ for all $i \leqslant k$ and $B_i^{[1]} = z A_i^{[1]} + \rho A_i$ for all $i > k$, by (7). In particular, we have $B_1, \ldots, B_k \in \mathcal{A}_d \subseteq \mathcal{A}_{d + 1}$ and $k$ is maximal with the property that $B_1, \ldots, B_k \in \mathcal{A}_{d + 1}$. Setting $B^{[1]}_{\star, d + 1}$ as in HA3, we also observe that the $i$-th row of $B^{[1]}_{\star, d + 1}$ coincides with the $i$-th row of $A^{[1]}_{\star, d}$ for all $i \leqslant k$. Since HA3 holds for $A$ and the last $n - k$ rows of $A^{[1]}_{\star, d}$ vanish, the first $k$ rows of both $A^{[1]}_{\star, d}$ and $B^{[1]}_{\star, d + 1}$ are $\mathbb{K}$-linearly independent. In other words, HA3 is satisfied for $B$. As to HA2, we observe that $B_\star = A_\star$, whence $B_\star$ is invertible.

Let us finally show that the rows of $B$ form a basis for a $\mathbb{K}[z]$-module that contains $\mathcal{A}_{d + 1}$. So let $P \in \mathcal{A}_{d + 1}$. Then $P$ lies in the $\mathbb{K}[z]$-module spanned by the rows of $A$, so $P = Q A$ for some row vector $Q \in \mathbb{K}[z]^{1 \times n}$. Setting $Q_0 := Q(0)$, we have $P^{[1]} = \rho Q' A + Q A^{[1]}$, whence the coefficient of $z^d$ in $P^{[1]}$ involves the rows of $A^{[1]}_{\star, d}$ with coefficients $Q_{0, 1}, \ldots, Q_{0, n}$.

Since the first $k$ rows of $A^{[1]}_{\star, d}$ are $\mathbb{K}$-linearly independent and the last $n - k$ rows of $A^{[1]}_{\star, d}$ vanish, we get $z \mid Q_i$ for all $i > k$. Let $R$ be the row vector with $R_i = Q_i$ for $i \leqslant k$ and $R_i = z^{-1} Q_i$ for $i > k$. By what precedes, we have $R \in \mathbb{K}[z]^{1 \times n}$ and $R B = Q A$. Now we have $R_i B_i = Q_i A_i$ for $i \leqslant k$ and $R_i B_i = z^{-1} Q_i \cdot z A_i = Q_i A_i$ for $i > k$. In other words, $P = R B$, as desired.□
Propositions 4 and 5 allow us to compute $d$-head annihilators for (1) with arbitrarily large $d$. Assuming that we have $k = n$ in HA3 for sufficiently large $d$, this yields the following algorithm for the computation of a head chopper for (1):
Algorithm HeadChopper

  $A := \operatorname{Id}_n$ and let $d$ be small enough such that $A$ is a $d$-head annihilator
  repeat
    Compute $A^{[1]}$ and let $k$ be maximal such that $A_1, \ldots, A_k \in \mathcal{A}_d$
    if $k = n$ then return $A$
    Let $S$ be the row sweaper for $A^{[1]}_{\star, d}$ and set $A := S A$, as in Proposition 4
    Multiply the last rows of $A$ by $z$ as in Proposition 5, re-sort the rows by increasing degree, and set $d := d + 1$
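Under our reading of the algorithm, one iteration row sweeps the coefficient matrix of $z^d$ in $A^{[1]}$ and multiplies the rows that were swept to zero by $z$. The following SymPy sketch implements this loop for systems over $\mathbb{Q}[z]$; it is a plausible reconstruction of the pseudocode above rather than a faithful transcription.

```python
from sympy import Matrix, Poly, symbols

z = symbols('z')

def coeff_matrix(H, d):
    """Matrix of the coefficients of z^d in the entries of H."""
    return H.applyfunc(lambda e: Poly(e, z).nth(d))

def head_chopper(rho, M, max_iter=100):
    """Search a head chopper for rho*f' = M*f; entries of M over QQ[z]."""
    n = M.shape[0]
    A = Matrix.eye(n)
    for _ in range(max_iter):
        A1 = (rho * A.diff(z) + A * M).expand()   # A^[1] as in (5)
        d = max(Poly(e, z).degree() for e in A1 if e != 0)
        top = coeff_matrix(A1, d)
        if top.rank() == n:
            return A                  # (A^[1])_sharp invertible: done
        # Gaussian elimination on `top`, mirrored on A (the row sweaper S)
        k = 0
        for j in range(n):
            piv = next((i for i in range(k, n) if top[i, j] != 0), None)
            if piv is None:
                continue
            top.row_swap(k, piv); A.row_swap(k, piv)
            for i in range(k + 1, n):
                c = top[i, j] / top[k, j]
                top[i, :] = top.row(i) - c * top.row(k)
                A[i, :] = A.row(i) - c * A.row(k)
            k += 1
        for i in range(k, n):         # rows swept to zero: multiply by z
            A[i, :] = z * A.row(i)
    raise RuntimeError("no head chopper found within max_iter iterations")
```

Termination within a polynomial number of iterations is the content of Corollary 17 below.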
**Proposition 6.** Consider the value of $A$ at the beginning of the loop and after $t$ iterations. Then $A$ is a $(d_0 + t)$-head annihilator, where $d_0$ is the initial value of $d$.
Proof. We first observe that the rows of $A$ remain ordered by increasing degree throughout the algorithm. Let us now prove the proposition by induction over $t$. The proposition clearly holds for $t = 0$. Assuming that the proposition holds for a given $t$, let us show that it again holds at the next iteration. Consider the values of $A$ and $k$ at the beginning of the loop and after $t$ iterations. Let $r$ be maximal such that the first $r$ rows of $A^{[1]}_{\star, d_0 + t}$ are $\mathbb{K}$-linearly independent. From the induction hypothesis, it follows that the first $k$ rows of $A^{[1]}_{\star, d_0 + t}$ are $\mathbb{K}$-linearly independent, whence the row sweaper $S$ is of the form

$$S = \begin{pmatrix} \operatorname{Id}_k & 0 \\ \ast & \ast \end{pmatrix}.$$

Now Proposition 4 implies that $S A$ is still a $(d_0 + t)$-head annihilator. Since the last $n - r$ rows of $(S A)^{[1]}_{\star, d_0 + t}$ vanish, Proposition 5 also implies that the next value of $A$ is a $(d_0 + t + 1)$-head annihilator. This completes the induction. Notice also that the new value of $k$ is maximal with the property that the first $k$ rows of the new $A$ lie in $\mathcal{A}_{d_0 + t + 1}$.□
**Lemma 7.** Assume that the algorithm HeadChopper does not terminate. Then there exists a non-zero $\hat{h} \in \mathbb{K}((z^{-1}))^{1 \times n}$ with $\hat{h}^{[1]} = 0$. In particular, $\partial (\hat{h} f) = 0$.
Proof. Let $A^{(t)}$ be the value of $A$ at the beginning of the main loop after $t$ iterations. Also let $S^{(t)}$ and $k^{(t)}$ be the values of $S$ and $k$ as computed during the $t$-th iteration.

Using the observation made at the end of the above proof, we have $k^{(t)} \leqslant k^{(t + 1)}$ for all $t$, so there exist an index $k^\ast$ and a $t_0$ with $k^{(t)} = k^\ast$ for all $t \geqslant t_0$. Furthermore, the first $k^\ast$ rows of $A^{(t)}$ no longer change for $t \geqslant t_0$ and

$$A^{(t + 1)} = D^{(t)} S^{(t)} A^{(t)}, \qquad D^{(t)} = \operatorname{diag} (1, \ldots, 1, z, \ldots, z).$$

Moreover, for $t \geqslant t_0$, the row sweaper $S^{(t)}$ is even of the form

$$S^{(t)} = \begin{pmatrix} \operatorname{Id}_{k^\ast} & 0 \\ \ast & \operatorname{Id}_{n - k^\ast} \end{pmatrix}.$$

By induction on $t$, we observe that $\deg A^{(t)}_i \leqslant \deg A^{(t_0)}_i + (t - t_0)$ for all $i > k^\ast$. For $t \geqslant t_0$, we also have that the coefficients of the normalized rows $z^{t_0 - t} A^{(t)}_i$ stabilize degree by degree, again by induction. Consequently, $z^{t_0 - t} A^{(t)}_i$ converges for the $z^{-1}$-adic topology for all $i > k^\ast$, which means that the sequence of normalized matrices converges to a limit $\hat{T}$ in $\mathbb{K}[[z^{-1}]]^{n \times n}$. By construction, the first $k^\ast$ rows of $\hat{T}$ are zero, its last $n - k^\ast$ rows have rank $n - k^\ast$, and $\hat{T}^{[1]} = 0$. We conclude by taking $\hat{h}$ to be the last row of $\hat{T}$.□
**Theorem 8.** The algorithm HeadChopper terminates and returns a head chopper for (1).

Proof. We already observed that $A$ remains a head annihilator throughout the algorithm. If the algorithm terminates, then it follows from HA3 with $k = n$ that the returned matrix $A$ is indeed a head chopper for (1).

Assume for contradiction that the algorithm does not terminate and let $\hat{h} \in \mathbb{K}((z^{-1}))^{1 \times n}$ be such that $\hat{h}^{[1]} = 0$, as in Lemma 7. Let $F$ be a fundamental system of solutions to the equation $\rho F' = M F$, where the entries of $F$ lie in some differential field extension of $\mathbb{K}((z^{-1}))$ with constant field $\mathcal{C}$. From $\hat{h}^{[1]} = 0$ we deduce that $(\hat{h} F)' = \rho^{-1} \hat{h}^{[1]} F = 0$, whence $\hat{h} F \in \mathcal{C}^{1 \times n}$. More generally,

$$\left( (z^j \hat{h}) F \right)' = j z^{j - 1}\, \hat{h} F,$$

whence $(z^j \hat{h}) F - z^j (\hat{h} F) \in \mathcal{C}^{1 \times n}$ and $(z^j \hat{h}) F \in \mathcal{C}[z]^{1 \times n}$ for all $j \in \mathbb{N}$. Since the space of solutions of (1) has dimension $n$ over $\mathcal{C}$, it follows that there exists a non-zero polynomial $p$ of degree at most $n$ in $z$ such that $(p \hat{h}) F = 0$ and $p \hat{h} \neq 0$. Since $F$ is a fundamental system of solutions, we have $\det F \neq 0$. This contradicts the existence of an element $p \hat{h} \neq 0$ with $(p \hat{h}) F = 0$.□
Let $H$ be a head chopper for (1). Replacing $H$ by $z^{-\operatorname{val} H} H$ if necessary, we may assume without loss of generality that $\operatorname{val} H = 0$ and $H \in \mathbb{K}[z]^{n \times n}$. Let $\delta := \deg H^{[1]}$. Writing, in accordance with (7),

$$(z^k H)^{[1]} = z^k \left( H^{[1]} + k z^{-1} \rho H \right)$$

with $k \in \mathbb{Z}$ and $N_k := \left( H^{[1]} + k z^{-1} \rho H \right)_\delta$, let $E$ be the set of exceptional indices $k$ for which $\deg \left( H^{[1]} + k z^{-1} \rho H \right) \neq \delta$ or $N_k$ is not invertible. For any $g = \rho^{-1} P f$ with $P \in \mathbb{K}[z]^{1 \times n}$ and $k := \deg P - \delta$, let

$$C := P_{\delta + k}\, N_k^{-1}.$$

If $k \notin E$ and $k \geqslant 0$, then the matrix $N_k$ is invertible. We define the $\mathbb{K}$-linear mapping $\operatorname{Red}_k$ by

$$\operatorname{Red}_k (g) := g - \partial \left( C z^k H f \right).$$

We indeed have $g - \operatorname{Red}_k (g) \in \partial W$, since $C z^k H f \in W$. The mapping $\operatorname{Red}_k$ also induces a mapping on the numerators $P$ that we will still denote by $\operatorname{Red}_k$. Setting $\tilde{P} := P - C z^k \left( H^{[1]} + k z^{-1} \rho H \right)$, the relation (7) yields

$$\operatorname{Red}_k (g) = \rho^{-1} \tilde{P} f, \qquad \deg \tilde{P} < \deg P.$$

This shows that the mapping $\operatorname{Red}_k$ is a reduction that strictly lowers degrees. If $k \in E$ or $k < 0$, then we set $\operatorname{Red}_k (g) := g$, and the identity map is clearly a reduction as well.
Since compositions of reductions are again reductions, we also obtain a reduction $\operatorname{Red}_{k'} \circ \cdots \circ \operatorname{Red}_k$ for all $k' \leqslant k$. Now let $\operatorname{Red}_\infty$ be the unique mapping with $\operatorname{Red}_\infty (g) = \operatorname{Red}_\infty (\operatorname{Red}_k (g))$ for all $g = \rho^{-1} P f$ with $k = \deg P - \delta \geqslant 0$ and $k \notin E$, and with $\operatorname{Red}_\infty (g) = g$ otherwise. Then $\operatorname{Red}_\infty$ is clearly a reduction as well and it has a finite dimensional image on the combinations $\rho^{-1} P f$ with $P \in \mathbb{K}[z]^{1 \times n}$. For any such $g$, we call $\operatorname{Red}_\infty (g)$ the head reduction of $g$. The following straightforward algorithm allows us to compute head reductions:
Algorithm HeadReduce($g = \rho^{-1} P f$)

  repeat
    Let $k := \deg P - \delta$
    if $k < 0$ or $k \in E$ then return $\rho^{-1} P f$
    Let $C := P_{\delta + k}\, N_k^{-1}$
    $P := P - C z^k \left( H^{[1]} + k z^{-1} \rho H \right)$
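The loop of HeadReduce is easy to transcribe. The sketch below reduces the numerator $P$ (a $1 \times n$ SymPy row matrix of polynomials) against the rows of $(z^k H)^{[1]}$, returning both the reduced numerator and the accumulated certificate $G$ of Remark 9. As before this is our own illustration built on SymPy, following the conventions fixed above.

```python
from sympy import Matrix, Poly, symbols, zeros

z = symbols('z')

def bracket(Q, rho, M):
    """Q^[1] = rho*Q' + Q*M, as in (5)."""
    return (rho * Q.diff(z) + Q * M).expand()

def head_reduce(P, H, rho, M):
    """Return (P_red, G) with P = P_red + bracket(G), so that
    rho^{-1} P f = rho^{-1} P_red f + d/dz (G f)."""
    n = H.shape[0]
    G = zeros(1, n)
    H1 = bracket(H, rho, M)
    delta = max(Poly(e, z).degree() for e in H1 if e != 0)
    while True:
        p = max((Poly(e, z).degree() for e in P if e != 0), default=-1)
        k = p - delta
        if k < 0:
            return P, G          # below the reach of the head chopper
        B = bracket(z**k * H, rho, M)   # = z^k (H^[1] + k z^-1 rho H) by (7)
        if max(Poly(e, z).degree() for e in B if e != 0) != p:
            return P, G          # k exceptional: first condition defining E
        Nk = B.applyfunc(lambda e: Poly(e, z).nth(p))
        if Nk.det() == 0:
            return P, G          # k exceptional: N_k not invertible
        C = Matrix([[Poly(e, z).nth(p) for e in P]]) * Nk.inv()
        P = (P - C * B).expand()            # kills the degree-p coefficient
        G = (G + C * z**k * H).expand()     # certificate of Remark 9
```

Each pass strictly lowers $\deg P$, mirroring the termination argument of this section.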
Remark 9. The algorithm HeadReduce is easily adapted so as to also return an element $G$ with $g = \operatorname{Red}_\infty (g) + \partial (G f)$. Indeed, it suffices to start with $G := 0$ and accumulate $G := G + C z^k H$ at the end of the main loop.
Remark 10. The successive coefficients of the head reduction can be computed more efficiently in a relaxed manner [10].
Remark 11. The algorithm HeadReduce generalizes to matrices $P$ with an arbitrary number of rows $m$. This allows for the simultaneous head reduction of several elements in $W$, something that might be interesting for the application to creative telescoping.
Head reduction essentially allows us to reduce the valuation in $z^{-1}$ of elements in $W$ via the subtraction of elements in $\partial W$. Tail reduction aims at reducing the valuation in $z - \sigma$ in a similar way for any $\sigma$ in the algebraic closure $\bar{\mathbb{K}}$ of $\mathbb{K}$. More precisely, let $\sigma \in \bar{\mathbb{K}}$, $\hat{\mathbb{K}} := \mathbb{K}[\sigma]$ and $H \in \hat{\mathbb{K}}[z - \sigma, (z - \sigma)^{-1}]^{m \times n}$. We may regard $H$ as a Laurent polynomial in $z - \sigma$ with matrix coefficients $H_i \in \hat{\mathbb{K}}^{m \times n}$:

$$H = \sum_i H_i\, (z - \sigma)^i.$$
If $H \neq 0$, then we denote its valuation in $z - \sigma$ by $\operatorname{val}_\sigma H := \min\, \{i : H_i \neq 0\}$. Setting

$$H^{[1]} := \rho H' + H M, \tag{8}$$

the equation (1) implies

$$\partial (C H f) = \rho^{-1} C H^{[1]} f \tag{9}$$

for any matrix $C \in \hat{\mathbb{K}}^{1 \times m}$. The matrix $H^{[1]}$ can also be regarded as a Laurent polynomial in $z - \sigma$ with matrix coefficients $H^{[1]}_i$. We say that $H \in \hat{\mathbb{K}}[z - \sigma, (z - \sigma)^{-1}]^{n \times n}$ is a tail chopper at $\sigma$ for (1) if $H^{[1]}_\flat := H^{[1]}_{\operatorname{val}_\sigma H^{[1]}}$ is an invertible matrix. In fact, it suffices to consider tail choppers at the origin:
**Lemma 13.** Let $\tau \in \bar{\mathbb{K}}$, where $\hat{\mathbb{K}} = \mathbb{K}[\tau]$. Define $\check{\rho}(z) := \rho(z + \tau)$, $\check{M}(z) := M(z + \tau)$ and $\check{H}(z) := H(z + \tau)$. Then $H$ is a tail chopper at $\tau$ for (1) if and only if $\check{H}$ is a tail chopper at $0$ for

$$\check{\rho}\, \check{f}' = \check{M} \check{f}.$$

Proof. Setting $\check{f}(z) := f(z + \tau)$, we have $\check{\rho} \check{f}' = \check{M} \check{f}$. Consequently, $\check{H}^{[1]}(z) = H^{[1]}(z + \tau)$ and $\operatorname{val}_0 \check{H}^{[1]} = \operatorname{val}_\tau H^{[1]}$.□
There is also a direct link between head choppers and tail choppers at $0$ via the change of variables $z \rightarrow z^{-1}$.

**Lemma 14.** Let $e := \max\, (\deg \rho, \deg M + 2)$. Setting $\check{z} := z^{-1}$, we define $\check{\rho}(\check{z}) := -\check{z}^e \rho(\check{z}^{-1})$, $\check{M}(\check{z}) := \check{z}^{e - 2} M(\check{z}^{-1})$ and $\check{H}(\check{z}) := H(\check{z}^{-1})$. Then $H$ is a tail chopper at $0$ for (1) if and only if $\check{H}$ is a head chopper for

$$\check{\rho}\, \frac{\mathrm{d} \check{f}}{\mathrm{d} \check{z}} = \check{M} \check{f}.$$

Proof. Setting $\check{f}(\check{z}) := f(\check{z}^{-1})$, we have

$$\frac{\mathrm{d} \check{f}}{\mathrm{d} \check{z}} = -\check{z}^{-2} f'(\check{z}^{-1}) = -\check{z}^{-2}\, \frac{M(\check{z}^{-1})}{\rho(\check{z}^{-1})}\, \check{f}.$$

Consequently, $\check{\rho}\, \mathrm{d} \check{f} / \mathrm{d} \check{z} = \check{M} \check{f}$ and degrees in $\check{z}$ correspond to the opposites of valuations in $z$ at the origin.□
Finally, the matrix $\operatorname{Id}_n$ is a tail chopper at almost all points $\sigma \in \bar{\mathbb{K}}$, at least when $\det M$ does not vanish identically:

**Lemma 15.** Let $\sigma \in \bar{\mathbb{K}}$ be such that $\rho(\sigma) \det M(\sigma) \neq 0$. Then $\operatorname{Id}_n$ is a tail chopper at $\sigma$ for (1).

Proof. If $\sigma = 0$ and $H = \operatorname{Id}_n$, then (9) becomes $\partial (C f) = \rho^{-1} C M f$ for $C \in \hat{\mathbb{K}}^{1 \times n}$. In particular, $\operatorname{val}_0 H^{[1]} = \operatorname{val}_0 M = 0$ and $H^{[1]}_\flat = M(0)$ is invertible in $\hat{\mathbb{K}}^{n \times n}$. The general case follows using Lemma 13.□
Now consider a monic square-free polynomial $\varrho \in \mathbb{K}[z]$ and assume that we wish to compute a tail chopper for (1) at a root $\sigma$ of $\varrho$ in $\bar{\mathbb{K}}$. First of all, we have to decide how to conduct computations in $\mathbb{K}[\sigma]$. If $\varrho$ is irreducible, then we may simply work in the field $\hat{\mathbb{K}} := \mathbb{K}[x] / (\varrho(x))$ instead of $\mathbb{K}$ and take $\sigma$ to be the residue class of $x$, so that $\sigma$ becomes a generic formal root of $\varrho$. In general, factoring $\varrho$ over $\mathbb{K}$ may be hard, so we cannot assume $\varrho$ to be irreducible. Instead, we rely on the well known technique of dynamic evaluation [8].
For the convenience of the reader, let us recall that dynamic evaluation amounts to performing all computations as if $\varrho$ were irreducible and $\hat{\mathbb{K}} = \mathbb{K}[x] / (\varrho(x))$ were a field with an algorithm for division. Whenever we wish to divide by a non-zero element $a = A(\sigma)$ (with $A \in \mathbb{K}[x]$ of degree $< \deg \varrho$) that is not invertible, the computation of $\gcd(A, \varrho)$ provides us with a non-trivial factor $\varrho_1$ of $\varrho$. In that case, we launch an exception and redo all computations with $\varrho_1$ or $\varrho / \varrho_1$ in the role of $\varrho$.
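A minimal model of dynamic evaluation is easy to write down. In the sketch below, inverting a residue class modulo $\varrho$ either succeeds, or reveals a non-trivial factor of $\varrho$ through a gcd computation, which is reported by an exception so that the caller can restart over each factor; the class and function names are our own.

```python
from sympy import Poly, symbols, gcd, invert

x = symbols('x')

class Splitting(Exception):
    """Raised when the modulus is discovered to be reducible."""
    def __init__(self, factor):
        self.factor = factor        # a non-trivial factor of the modulus

def inverse_mod(a, modulus):
    """Invert a modulo the square-free modulus, or raise Splitting."""
    a, m = Poly(a, x), Poly(modulus, x)
    g = gcd(a, m)
    if g.degree() == 0:
        return invert(a, m).as_expr()   # the modulus behaves like a prime
    if g.degree() < m.degree():
        raise Splitting(g.as_expr())    # redo the computation mod g and m/g
    raise ZeroDivisionError("division by zero in K[x]/(modulus)")
```

For instance, inverting $x$ modulo $x^2 - x$ raises `Splitting(x)`, after which one redoes the computation modulo $x$ and modulo $x - 1$.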
So let $\sigma$ be a formal root of $\varrho$ and define $\hat{\mathbb{K}} := \mathbb{K}[\sigma]$, $\check{\rho}$, $\check{M}$ and $\check{f}$ by combining the transformations of Lemmas 13 and 14. Let $\check{H}$ be a head chopper for the equation $\check{\rho}\, \check{f}' = \check{M} \check{f}$, as computed using the algorithm from section 4.3. Then the corresponding matrix $H$ is a tail chopper at $\sigma$ by Lemmas 13 and 14.
Let $H$ be a tail chopper for (1) at $\sigma$. Let $\check{\rho}$, $\check{M}$, $\check{f}$ and $\check{H}$ be as above, so that $\check{H}$ is a head chopper for the equation $\check{\rho}\, \check{f}' = \check{M} \check{f}$. In particular, rewriting linear combinations $P f$ with $P \in \hat{\mathbb{K}}[(z - \sigma)^{-1}]^{1 \times n}$ as linear combinations $\check{P} \check{f}$ with $\check{P} \in \hat{\mathbb{K}}[\check{z}]^{1 \times n}$, we may head reduce $\check{P}$ as described in section 4.4. Let $\check{Q}$ be such that $\check{Q} \check{f} = \operatorname{Red}_\infty(\check{P} \check{f})$. Then we may rewrite $\check{Q} \check{f}$ as an element $Q f$ of $W$. We call $Q f$ the tail reduction of $P f$ at $\sigma$ and write $Q f = \operatorname{Red}_\sigma(P f)$.
Let $E$ be the finite set of exceptional indices for the above head reduction and $\delta := \deg \check{H}^{[1]}$. Setting

$$N_k := \left( \check{H}^{[1]} + k \check{z}^{-1} \check{\rho} \check{H} \right)_\delta \tag{10}$$

and $C := \check{P}_{\delta + k}\, N_k^{-1}$, it can be checked that the following algorithm computes the tail reduction at $\sigma$:
Algorithm TailReduce($g = P f$)

  repeat
    Rewrite $P f$ as $\check{P} \check{f}$ and let $k := \deg \check{P} - \delta$
    if $k < 0$ or $k \in E$ then return the element $Q f$ of $W$ with $Q f = \check{P} \check{f}$
    Let $C := \check{P}_{\delta + k}\, N_k^{-1}$
    $\check{P} := \check{P} - C \check{z}^k \left( \check{H}^{[1]} + k \check{z}^{-1} \check{\rho} \check{H} \right)$
In the particular case when

$$M = \begin{pmatrix} & \rho & & \\ & & \ddots & \\ & & & \rho \\ L_0 & L_1 & \cdots & L_{n - 1} \end{pmatrix} \tag{11}$$

for some operator

$$L := \rho\, \partial^n - L_{n - 1} \partial^{n - 1} - \cdots - L_0$$

of order $n$, the system (1) is equivalent to

$$L(f_1) = 0. \tag{12}$$

Given a general system (1), there always exists an element $h \in W$ such that $h, \partial h, \ldots, \partial^{n - 1} h$ are $\mathbb{K}(z)$-linearly independent. Such an element $h$ is called a cyclic vector and, with respect to the basis $h, \partial h, \ldots, \partial^{n - 1} h$ of the $\mathbb{K}(z)$-vector space $\mathbb{K}(z) f_1 + \cdots + \mathbb{K}(z) f_n$, the equation (1) transforms into an equation of the form (12). For efficient algorithms to compute cyclic vectors, we refer to [3].
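The definition of a cyclic vector translates directly into a rank test: $h = v f$ is cyclic when $v$ together with its successive derivatives along (1) spans $\mathbb{K}(z)^{1 \times n}$. The following SymPy sketch performs this test for a candidate row vector $v$; the helper `is_cyclic` is ours, and in practice a random constant $v$ almost always succeeds.

```python
from sympy import Matrix, symbols, simplify

z = symbols('z')

def is_cyclic(v, rho, M):
    """Test whether h = v*f is a cyclic vector for rho*f' = M*f."""
    n = M.shape[0]
    rows, w = [], v
    for _ in range(n):
        rows.append(list(w))
        # d(w f)/dz = (w' + rho^{-1} w M) f, so update the coefficient row:
        w = (w.diff(z) + (w * M) / rho).applyfunc(simplify)
    return simplify(Matrix(rows).det()) != 0
```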
In the remainder of this section, we focus on systems (1) that are equivalent to (12), with $M$, $L$ and $L_0, \ldots, L_{n - 1}$ as in (11).
Let us start with a quick review of some well known results about the asymptotic behaviour of solutions to (12) when $z \rightarrow \infty$. We define

$$\mathbb{T} := \bar{\mathbb{K}}[[z^{-1/\ast}]]\, [\log z]$$

to be the set of polynomials in $\log z$ whose coefficients are Puiseux series in $z^{-1}$, together with the corresponding set of monomials

$$\mathfrak{T} := \left\{ z^{-i/q} (\log z)^j : i \in \mathbb{Z},\ j \in \mathbb{N},\ q \in \mathbb{N}^{>} \right\}.$$

The set $\mathfrak{T}$ is asymptotically ordered by $\preccurlyeq$ and elements of $\mathbb{T}$ can be written as series

$$g = \sum_{\mathfrak{m} \in \mathfrak{T}} g_{\mathfrak{m}}\, \mathfrak{m}$$

with $g_{\mathfrak{m}} \in \bar{\mathbb{K}}$. We call $\operatorname{supp} g := \{\mathfrak{m} : g_{\mathfrak{m}} \neq 0\}$ the support of $g$. If $g \neq 0$, then the maximal element $\mathfrak{d}_g$ of the support is called the dominant monomial of $g$. We may also regard elements of $\mathbb{T}$ as series in $z^{-1/q}$ (for a suitable ramification index $q$) with coefficients in $\bar{\mathbb{K}}[\log z]$. If $g \neq 0$, then we denote by $v_z(g)$ the corresponding valuation of $g$ in $z^{-1/q}$. Notice that we have $\mathfrak{d}_g = z^{-v_z(g)/q} (\log z)^j$ for some $j \in \mathbb{N}$.
Let $\mathfrak{G}$ be a set of monomials. We write $\bar{\mathbb{K}}\, \mathfrak{G}$ for the set of finite $\bar{\mathbb{K}}$-linear combinations of elements of $\mathfrak{G}$. The sets $\mathbb{T}\, \mathfrak{G}$ and $\mathfrak{T}\, \mathfrak{G}$ are defined likewise. Consider the formal exponential monomial group

$$\mathfrak{E} := \left\{ z^{\alpha}\, \mathrm{e}^{Q(z^{1/q})} : \alpha \in \bar{\mathbb{K}},\ q \in \mathbb{N}^{>},\ Q \in \bar{\mathbb{K}}[x],\ Q(0) = 0 \right\}.$$

It is well known [1, 11] that the equation (12) admits a basis of formal solutions of the form

$$h_i = \mathfrak{e}_i\, g_i \qquad (i = 1, \ldots, n)$$

with $\mathfrak{e}_i \in \mathfrak{E}$, $g_i \in \mathbb{T} \setminus \{0\}$ and such that the monomials $\mathfrak{e}_i\, \mathfrak{d}_{g_i}$ are pairwise distinct. We will write

$$v_i := v_z(g_i).$$

Notice that this result generalizes to expansions at an arbitrary point $\lambda \in \bar{\mathbb{K}}$ via a straightforward change of variables $z \rightarrow (z - \lambda)^{-1}$ with $\lambda \in \bar{\mathbb{K}}$.
Let us now consider a linear differential operator $L = L_n \partial^n + \cdots + L_0$ with coefficients $L_k \in \mathbb{T}$. Such an operator can also be expanded with respect to $z^{-1/q}$; we denote by $v_z(L)$ the valuation of $L$ in $z^{-1/q}$ and by $L_{(v)}$ the coefficient of $z^{-v/q}$ in this expansion. From the valuative point of view it is more convenient to work with linear differential operators $\Lambda = \Lambda_n \theta^n + \cdots + \Lambda_0$, where $\theta := z \partial$ is the Euler derivation. Such operators can be expanded with respect to $z^{-1/q}$ in a similar way and $v_z(\Lambda)$ and $\Lambda_{(v)}$ are defined as above.
For $L = L_n \partial^n + \cdots + L_0$, let $\Lambda$ be the corresponding operator in $\theta$. If $L$ has order $n$, then

$$v_z(\Lambda) \geqslant \min_k \left( v_z(L_k) + k q \right).$$

For $\Lambda = \Lambda_n \theta^n + \cdots + \Lambda_0$, let $L$ be the corresponding operator in $\partial$. If $\Lambda$ has order $n$, then

$$v_z(L) \geqslant \min_k \left( v_z(\Lambda_k) - k q \right).$$
Given $\mathfrak{e} = z^{\alpha}\, \mathrm{e}^{Q(z^{1/q})} \in \mathfrak{E}$, we notice that its logarithmic Euler derivative $\psi_{\mathfrak{e}} := \theta(\mathfrak{e}) / \mathfrak{e} = \alpha + \theta(Q(z^{1/q}))$ belongs to $\bar{\mathbb{K}}[z^{1/q}]$. Let $\Lambda_{\mathfrak{e}}$ be the operator obtained from $\Lambda$ by substituting $\theta + \psi_{\mathfrak{e}}$ for $\theta$. Then

$$\Lambda(\mathfrak{e}\, g) = \mathfrak{e}\, \Lambda_{\mathfrak{e}}(g) \tag{13}$$

for all $g \in \mathbb{T}$. If $\Lambda$ has order $n$, then $\Lambda_{\mathfrak{e}}$ again has order $n$. We call $\Lambda_{\mathfrak{e}}$ the twist of $\Lambda$ by $\mathfrak{e}$.
Now consider an operator $\Lambda \in \mathbb{T}[\theta]$ associated to (12) and $\mathfrak{e} \in \mathfrak{E}$. If $\mathfrak{e} \in \{\mathfrak{e}_1, \ldots, \mathfrak{e}_n\}$, then it can be shown that the twisted equation $\Lambda_{\mathfrak{e}}(g) = 0$ admits a non-zero solution $g \in \mathbb{T}$. Otherwise, $\Lambda_{\mathfrak{e}}(g) \neq 0$ for all non-zero $g \in \mathbb{T}$, whence $\Lambda_{\mathfrak{e}}$ is injective on $\mathbb{T}$. In other words,

$$\ker \Lambda_{\mathfrak{e}} \cap \mathbb{T} \neq \{0\} \iff \mathfrak{e} \in \{\mathfrak{e}_1, \ldots, \mathfrak{e}_n\}.$$

More generally, if $\mathfrak{e} = \mathfrak{e}_i$ and $g \in \ker \Lambda_{\mathfrak{e}} \cap \mathbb{T}$ is non-zero, then

$$v_z(g) \in \{v_j : \mathfrak{e}_j = \mathfrak{e}_i\}.$$

Let

$$d := \max\, (\deg L_0, \ldots, \deg L_{n - 1}, \deg \rho).$$

If $g \in \mathbb{T}$ is a non-zero solution of such a twisted equation, then it can be shown using the Newton polygon method that $v_z(g)$ is bounded in terms of $n$, $d$ and $q$.

**Theorem 16.** Let $v := \max\, (v_1, \ldots, v_n)$ and notice that $v_i \leqslant v$ for all $i$. Then $v \leqslant n\, (d + n)\, q$.
Proof. Let $c$ be a transcendental constant over $\bar{\mathbb{K}}$ with $\partial(c) = 0$ and let $h := h_i + c\, h_j$, where $i$ is such that $v_i = v$ and $j \neq i$. Then $h$ satisfies $L(h) = 0$, $\Lambda(h) = 0$, and

$$h = \mathfrak{e}_i\, g_i + c\, \mathfrak{e}_j\, g_j.$$

We may rewrite $\Lambda(\mathfrak{e}_i g_i) = \mathfrak{e}_i \Omega_i(g_i)$ for the linear differential operator $\Omega_i := \Lambda_{\mathfrak{e}_i}$. Notice that $v_z(\Omega_i) \geqslant -d q$. Similarly, we may rewrite $\Lambda(\mathfrak{e}_j g_j) = \mathfrak{e}_j \Omega_j(g_j)$ for some operator $\Omega_j := \Lambda_{\mathfrak{e}_j}$ with $v_z(\Omega_j) \geqslant -d q$ and order at most $n$. By construction,

$$\Omega_i(g_i) = \Omega_j(g_j) = 0.$$

Since $\Omega_i$ has order at most $n$, there exists a monomial $\mathfrak{m} \in \mathfrak{T}$ with $v_z(\mathfrak{m}) \leqslant (d + n)\, q$ and $\Omega_i(\mathfrak{m}) \neq 0$. Now the solutions of $\Omega_i(g) = \Omega_i(\mathfrak{m})$ are spanned by the solutions to $\Omega_i(g) = 0$ and any particular solution to the inhomogeneous equation $\Omega_i(g) = \Omega_i(\mathfrak{m})$. In [11, Chapter 7] it is shown that the latter equation admits a so called distinguished solution $g^{\operatorname{d}}$ with $\mathfrak{m} \in \operatorname{supp} g^{\operatorname{d}}$ and $v_z(g^{\operatorname{d}}) \leqslant n\, (d + n)\, q$. Since $c$ is transcendental over $\bar{\mathbb{K}}$, it follows that the dominant monomial of $g_i$ must occur among the dominant monomials of $\mathfrak{m}$ and $g^{\operatorname{d}}$. (Here we use (13) and the fact that the monomials $\mathfrak{e}_{\ell}\, \mathfrak{d}_{g_{\ell}}$ are pairwise distinct.) Now $\Omega_i(g_i) = 0$ implies

$$v = v_z(g_i) \leqslant \max\, \left( v_z(\mathfrak{m}),\ v_z(g^{\operatorname{d}}) \right),$$

whence

$$v \leqslant n\, (d + n)\, q.$$

We already noticed before that $v_i \leqslant v$ for all $i$.□
We notice that our definition of degrees coincides with the one from section 4.2, since the degree in $z$ coincides with the opposite of the valuation in $z^{-1}$. Theorem 16 immediately yields a bound for the number of iterations that are necessary in HeadChopper in order to obtain a head chopper. More precisely:

**Corollary 17.** The value $A$ in HeadChopper is a head chopper for (1) after at most $t^\ast := n\, (d + n) + n$ iterations.

Proof. Let $H$ be the head chopper as computed by HeadChopper and let $h$ be its last row. Then $\deg h \leqslant t$ and the degree drop $\deg h + \sigma - \deg h^{[1]}$ grows with the number $t$ of iterations, where $\sigma$ is as in section 4.2. Now the theorem implies that this drop is bounded by $n\, (d + n)$, whence $t \leqslant n\, (d + n) + n$. We conclude that $A$ is a head chopper within $t^\ast$ iterations.□
Let $d \geqslant 0$, let $H$ range over the head choppers for (1), and let $\mathfrak{d}_\infty$ be the smallest number for which there exists a head chopper $H$ with

$$\operatorname{val} H \geqslant 0, \qquad \deg H \leqslant \mathfrak{d}_\infty. \tag{14}$$

We will call $\mathfrak{d}_\infty$ the defect of (1) at infinity. Corollary 17 shows that $\mathfrak{d}_\infty$ is finite and effectively bounded. In a similar way, given $\sigma \in \bar{\mathbb{K}}$, let $H$ range over the tail choppers at $\sigma$ and let $\mathfrak{d}_\sigma$ be the smallest number for which there exists a tail chopper $H$ at $\sigma$ with

$$\operatorname{val}_\sigma H \geqslant -\mathfrak{d}_\sigma. \tag{15}$$

We call $\mathfrak{d}_\sigma$ the defect of (1) at $\sigma$. Defining $\check{\mathfrak{d}}_\infty$ in a similar way as $\mathfrak{d}_\infty$, but for the equation in $\check{z} = z^{-1}$ from Lemma 14, one has $\mathfrak{d}_0 = \check{\mathfrak{d}}_\infty$.
Let $\lambda \in \bar{\mathbb{K}}$ and $f_\lambda(z) := f(z + \lambda)$, so that $\rho_\lambda f_\lambda' = M_\lambda f_\lambda$ with $\rho_\lambda(z) := \rho(z + \lambda)$ and $M_\lambda(z) := M(z + \lambda)$. Let $\mathfrak{d}_\lambda$ denote the defect of this shifted equation at the origin. The proof technique from Theorem 16 can also be used for studying $\mathfrak{d}_\lambda$ as a function of $\lambda$:
**Theorem 18.** There exists a computable constant $D \in \mathbb{N}$, depending only on $n$ and $d$, such that $\mathfrak{d}_\lambda \leqslant D$ for all $\lambda \in \bar{\mathbb{K}}$; moreover, $\mathfrak{d}_\lambda$ agrees with the largest valuation of the formal solutions at $\lambda$ up to a constant of the same type.

Proof. Let $\lambda \in \bar{\mathbb{K}}$ and rewrite $\rho$ and $M$ as $\rho_\lambda$ and $M_\lambda$ with respect to the variable $z - \lambda$. Then we have

$$\rho_\lambda f_\lambda' = M_\lambda f_\lambda.$$

We may rewrite the formal solutions at the origin as $h_i = \mathfrak{e}_i g_i$ and $h_j = \mathfrak{e}_j g_j$ for some $\mathfrak{e}_i, \mathfrak{e}_j \in \mathfrak{E}$ with $g_i, g_j \in \mathbb{T}$, after the change of variables $z \rightarrow (z - \lambda)^{-1}$ from section 6.1. Let $\Omega_i$ and $\Omega_j$ be the operators given by the corresponding twists, so that $\Omega_i(g_i) = 0$ and $\Omega_j(g_j) = 0$. In a similar way as in the proof of Theorem 16, we deduce that

$$v_z(g_i) \leqslant n\, (d + n)\, q.$$

Since $\Omega_i$ has order at most $n$, there exists a monomial $\mathfrak{m}$ with $v_z(\mathfrak{m}) \leqslant (d + n)\, q$ and $\Omega_i(\mathfrak{m}) \neq 0$. The distinguished solution to the equation $\Omega_i(g) = \Omega_i(\mathfrak{m})$ with $\mathfrak{m}$ in its support has valuation bounded in terms of $n$, $d$ and $q$, so that the valuations of the formal solutions at $\lambda$ are uniformly bounded. Since these valuations control the smallest possible choice of $\operatorname{val}_0$ for a tail chopper of the shifted equation, it follows that $\mathfrak{d}_\lambda$ is uniformly bounded, whence we may take a computable $D$ as stated. In a similar way as in the proof of Theorem 16, we obtain

$$\mathfrak{d}_\lambda \leqslant D.$$

The inequality in the other direction is straightforward.□
**Proposition 19.** We can compute an integer $N \in \mathbb{N}$ such that for all non-zero $g \in W$ and $\lambda \in \bar{\mathbb{K}}$ with $\varrho(\lambda) = 0$, we have

$$\operatorname{val}_\lambda g \geqslant \operatorname{val}_\lambda \partial g + 1 - N.$$

Proof. Let $\check{H}$ be a head chopper for the equation at $\lambda$ obtained via Lemmas 13 and 14, such that $\deg \check{H}$ satisfies the bound $D$ of Theorem 18. Let $k_0$ be sufficiently small such that $k \notin E$ and $N_k$ is defined and invertible for all $k \geqslant k_0$; Theorem 18 allows us to choose $k_0$ uniformly in $\lambda$. We take $N := k_0 + \delta + 1$.

Assume for contradiction that $\operatorname{val}_\lambda g < \operatorname{val}_\lambda \partial g + 1 - N$. Let $k$ and $C$ be as in the tail reduction of $g$ at $\lambda$, so that $k = -\operatorname{val}_\lambda g - \delta \geqslant k_0$ and $C \neq 0$. By construction, $k \notin E$, whence $N_k$ is invertible, and the dominant coefficient of $\partial g$ at $\lambda$ equals $C N_k$ up to a non-zero factor. But then $\operatorname{val}_\lambda \partial g = \operatorname{val}_\lambda g - 1$, which contradicts our assumption. This is where Theorem 18 intervenes, since the uniform choice of $k_0$ requires the exceptional indices at every $\lambda$ to be bounded.□
**Corollary 20.** We can compute an integer $N_\infty \in \mathbb{N}$ such that for all non-zero $g \in W$ and $h \in W$ with $h = \partial g$, we have

$$\deg g \leqslant \deg h + N_\infty.$$

Proof. Follows from the previous proposition via the change of variables $z \rightarrow z^{-1}$.□
Let us now study how head and tail reductions can be glued together into a global confined reduction on $W$. More generally, we consider the case when $W = \mathbb{K}[z, \varrho^{-1}]\, f_1 + \cdots + \mathbb{K}[z, \varrho^{-1}]\, f_n$, where $\varrho$ is a monic square-free polynomial such that $\rho$ divides $\varrho^{\ell}$ for some $\ell \in \mathbb{N}$. We assume that we have computed a head chopper for (1) and tail choppers for (1) at each of the roots $\sigma$ of $\varrho$ in $\bar{\mathbb{K}}$. In particular, we may compute the corresponding head and tail reductions. Given an element $\tau$ of the Galois group of $\bar{\mathbb{K}}$ over $\mathbb{K}$, we may also assume without loss of generality that the tail choppers were chosen such that $H_{\tau(\sigma)} = \tau(H_\sigma)$ for all roots $\sigma$ of $\varrho$.
Partial fraction decomposition yields $\mathbb{K}$-linear mappings $\Pi_\infty : W \rightarrow W$ and $\Pi_\sigma : W \rightarrow W$ with

$$g = \Pi_\infty(g) + \sum_{\varrho(\sigma) = 0} \Pi_\sigma(g)$$

for all $g \in W$. This allows us to define a global reduction $\operatorname{Red}$ of $W$ by

$$\operatorname{Red}(g) := \operatorname{Red}_\infty(\Pi_\infty(g)) + \sum_{\varrho(\sigma) = 0} \operatorname{Red}_\sigma(\Pi_\sigma(g)).$$

By our assumption that the tail choppers were chosen in a way that is compatible with the action of the Galois group, we have $\operatorname{Red}(g) \in W$ whenever $g \in W$. Furthermore, the restriction of the reduction on $\hat{\mathbb{K}} \otimes_{\mathbb{K}} W$ to $W$ is still a reduction.
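Schematically, the glue of this section is just "split, reduce each part, and add up". The sketch below states this composition; `split_at_infinity_and_poles`, `head_reduce_part` and `tail_reduce_part` are hypothetical callbacks standing for the partial fraction decomposition and for the reductions of sections 4 and 5.

```python
def global_reduce(g, split_at_infinity_and_poles, head_reduce_part,
                  tail_reduce_part, roots):
    """Glue head and tail reductions into one confined reduction on W.

    split_at_infinity_and_poles(g) -> (g_inf, {sigma: g_sigma}) implements
    the partial fraction decomposition g = Pi_inf(g) + sum Pi_sigma(g).
    """
    g_inf, parts = split_at_infinity_and_poles(g)
    out = head_reduce_part(g_inf)           # Red_inf composed with Pi_inf
    for sigma in roots:                     # one tail reduction per root
        out = out + tail_reduce_part(sigma, parts[sigma])
    return out
```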
Given the confined reduction $\operatorname{Red}$ from section 7.1, let us show how to construct a normal confined reduction $\operatorname{Red}^{\operatorname{N}}$. For each root $\sigma$ of $\varrho$ and for the point at infinity, assume that we have computed constants $N_\sigma$ and $N_\infty$ as in Proposition 19 and Corollary 20, with the property that for all non-zero $g \in W$ with $h = \partial g$, we have $\operatorname{val}_\sigma g \geqslant \operatorname{val}_\sigma h + 1 - N_\sigma$ and $\deg g \leqslant \deg h + N_\infty$.

Consider the finite dimensional $\mathbb{K}$-vector space $V := \operatorname{Red}(W)$ and let $m_\sigma := \min\, \{\operatorname{val}_\sigma h : h \in V \setminus \{0\}\}$ and $m_\infty := \max\, \{\deg h : h \in V \setminus \{0\}\}$ for all roots $\sigma$ of $\varrho$. Let $U$ be the $\mathbb{K}$-subvector space of $W$ of all $g$ with $\operatorname{val}_\sigma g \geqslant m_\sigma + 1 - N_\sigma$ for all $\sigma$ and $\deg g \leqslant m_\infty + N_\infty$. This space is finite dimensional and our assumptions imply that we cannot have $\partial g \in V$ for $g \notin U$. In other words, for any $h \in V$ with $h \in \partial W$, we have $h \in \partial U$.
Now let $V_0 := V \cap \partial U$ and let $V_1$ be a supplement of $V_0$ in $V$, so that $V = V_0 \oplus V_1$. We may compute bases of $V_0$ and $V_1$ using straightforward linear algebra. The canonical $\mathbb{K}$-linear projections $\pi_0 : V \rightarrow V_0$ and $\pi_1 : V \rightarrow V_1$ with $\pi_0 + \pi_1 = \operatorname{Id}_V$ are also computable. We claim that we may take

$$\operatorname{Red}^{\operatorname{N}}(g) := \pi_1(\operatorname{Red}(g))$$

for every $g \in W$.

**Proposition 21.** The mapping $\operatorname{Red}^{\operatorname{N}} = \pi_1 \circ \operatorname{Red}$ defines a computable normal confined reduction on $W$.
Proof. The mapping $\operatorname{Red}^{\operatorname{N}}$ is clearly a computable confined reduction on $W$. It remains to be shown that $\operatorname{Red}^{\operatorname{N}}(\partial g) = 0$ for all $g \in W$. Now $\partial g - \operatorname{Red}(\partial g) \in \partial W$, so $\operatorname{Red}(\partial g) \in \partial W$ and there exists a $u \in U$ with $\operatorname{Red}(\partial g) = \partial u$. Since $\operatorname{Red}(\partial g) \in V$, it follows that $\operatorname{Red}(\partial g) \in V_0$ and $\pi_1(\operatorname{Red}(\partial g)) = 0$. In other words, $\operatorname{Red}^{\operatorname{N}}(\partial g) = 0$ and $\operatorname{Red}^{\operatorname{N}}$ is normal.□
Acknowledgments. We would like to thank Pierre Lairez for a helpful remark on an earlier version of this paper.
G.D. Birkhoff. Singular points of ordinary differential equations. Trans. Am. Math. Soc., 10:436–470, 1909.
A. Bostan, S. Chen, F. Chyzak, and Z. Li. Complexity of creative telescoping for bivariate rational functions. In Proc. ISSAC '10, pages 203–210, New York, NY, USA, 2010. ACM.
A. Bostan, F. Chyzak, and É. de Panafieu. Complexity estimates for two uncoupling algorithms. In Proc. ISSAC '13, pages 85–92, New York, NY, USA, 2013. ACM.
S. Chen. Some applications of differential-difference algebra to creative telescoping. PhD thesis, École Polytechnique, 2011.
S. Chen, M. van Hoeij, M. Kauers, and C. Koutschan. Reduction-based creative telescoping for Fuchsian D-finite functions. Technical Report, ArXiv, 2016. http://arxiv.org/abs/1611.07421.
S. Chen, M. Kauers, and C. Koutschan. Reduction-based creative telescoping for algebraic functions. In Proc. ISSAC '16, pages 175–182. New York, NY, USA, 2016. ACM.
F. Chyzak. The ABC of Creative Telescoping — Algorithms, Bounds, Complexity. Habilitation, École polytechnique, 2014.
J. Della Dora, C. Dicrescenzo, and D. Duval. A new method for computing in algebraic number fields. In G. Goos and J. Hartmanis, editors, Eurocal'85 (2), volume 174 of Lect. Notes in Comp. Science, pages 321–326. Springer, 1985.
L. Dumont. Efficient algorithms for the symbolic computation of some contour integrals depending on one parameter. PhD thesis, École Polytechnique, 2016.
J. van der Hoeven. Relax, but don't be too lazy. JSC, 34:479–542, 2002.
J. van der Hoeven. Transseries and real differential algebra, volume 1888 of Lecture Notes in Mathematics. Springer-Verlag, 2006.
P. Lairez. Periods of rational integrals: algorithms and applications. PhD thesis, École polytechnique, Nov 2014.
D. Zeilberger. The method of creative telescoping. JSC, 11(3):195–204, 1991.