This work has been supported by the ANR-10-BLAN 0109 LEDA project.
Abstract
In this paper, we will present several algorithms for computing with D-algebraic power series. Such power series are specified by one or more algebraic differential equations and a sufficient number of initial conditions. The emphasis is not on the efficient computation of coefficients of such power series (various techniques are known for that), but rather on the ability to decide whether expressions involving D-algebraic power series are zero. We will consider both univariate and multivariate series and, besides the usual ring operations and differentiation, we will also consider composition, implicitly determined power series and monomial transformations.
Keywords: D-algebraic power series, algorithm, zero test, implicit function
Let $\mathbb{K}$ be a field of characteristic zero. A power series $f \in \mathbb{K}[[z]]$ is said to be D-algebraic if it satisfies a non-trivial differential equation $P(f, f', \ldots, f^{(r)}) = 0$, where $P$ is a polynomial with coefficients in $\mathbb{K}$. The set of D-algebraic power series contains many classical transcendental functions, such as $\exp z$, $\log(1+z)$, $\tan z$, etc., and it is closed under the ring operations, restricted division, differentiation and composition. This makes the differential ring of D-algebraic power series suitable as a framework for exact computations with mathematical expressions that involve transcendental functions.
The notion of D-algebraic power series admits a straightforward generalization to the multivariate context. In this case, we require the satisfaction of a non-trivial algebraic differential equation with respect to each of the partial derivatives. The multivariate context allows for some additional operations, such as the resolution of implicit power series equations and general monomial transformations with rational powers. Again, the set of D-algebraic power series is stable under such operations.
There are two main aspects to computations with formal power series. On the one hand, we need fast algorithms for the computation of coefficients. There is an extensive literature on this subject, and the asymptotically fastest methods rely either on Newton's method [2, 1, 9] or on relaxed power series evaluation [8, 6, 10].
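Although coefficient computation is not the topic of this paper, a minimal Python sketch may help fix ideas about the Newton-style approach (this is our own toy code, not the algorithms of [2, 1, 9]): it computes the truncated inverse $1/f$ modulo $z^n$ by doubling the working precision at every step.

```python
from fractions import Fraction

def mul_trunc(f, g, n):
    """Product of two truncated power series (lists of coefficients), modulo z^n."""
    h = [Fraction(0)] * n
    for i, fi in enumerate(f[:n]):
        if fi:
            for j, gj in enumerate(g[:n - i]):
                h[i + j] += fi * gj
    return h

def inverse_trunc(f, n):
    """Compute 1/f mod z^n by the Newton iteration g <- g*(2 - f*g), doubling the precision."""
    assert f[0] != 0, "constant coefficient must be invertible"
    g = [Fraction(1, 1) / f[0]]          # precision 1
    prec = 1
    while prec < n:
        prec = min(2 * prec, n)
        fg = mul_trunc(f, g, prec)
        t = [-c for c in fg]             # 2 - f*g, truncated
        t[0] += 2
        g = mul_trunc(g, t, prec)
    return g

# Example: 1/(1 - z) = 1 + z + z^2 + ...
f = [Fraction(1), Fraction(-1)] + [Fraction(0)] * 6
print(inverse_trunc(f, 8))   # all eight coefficients equal 1, as expected
```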
On the other hand, there is the problem of deciding whether a given
power series is zero. This problem is hard in the sense that we need to
check the cancellation of an infinite number of coefficients. Therefore,
a related question is how to represent power series in such a way that
we can design such zero tests. We also notice the asymmetric aspect of
the problem: given a non-zero series $f$, it is usually easy to prove that $f \neq 0$: it suffices to compute a non-zero coefficient. However, if $f$ vanishes, then it is potentially difficult to establish a formal proof of this fact.
In this paper, we will focus on the second aspect. We will consider various representations for D-algebraic power series, show how to perform common operations on power series when using these representations, and also present several zero tests. All representations are based on a combination of differential equations satisfied by the power series and initial conditions. However, depending on additional properties of these equations, some representations are more suitable for performing common operations and zero testing.
For global computations with algebraic differential equations, it is convenient to use the classical framework of differential algebra [17, 14]. In addition, we need some technology in order to deal with initial conditions. One key ingredient is the determination of the number of initial conditions which are needed in order to guarantee that a power series solution of a system of differential equations is unique. For this, we will use a technique similar to the one introduced by Denef and Lipshitz in [4, 5], and develop this technique in further detail.
Apart from a first section 2 with some reminders from
differential algebra, the paper is subdivided into three main parts. In
section 3, we first focus on the univariate case and the
representation of a D-algebraic series $f$ by a
single differential polynomial that annihilates $f$,
together with a sufficient number of initial conditions. In section 4, we remain in the univariate setting, but switch to more
flexible representations of D-algebraic series as solutions to systems
of algebraic differential equations with sufficiently many initial
conditions. In section 5, we generalize our results to the
multivariate setting and also consider the additional operations of
solving implicit equations, composition, and monomial transformations.
In section 3, we effectively represent D-algebraic power
series by a pair $(f, P)$, where $f$
is a computable power series (meaning that the
function $n \mapsto f_n$
is computable) and $P$ a non-zero
differential polynomial such that $P(f) = 0$. Specializing a more general result from [4, 5], we will show how to compute a number $\sigma$
(called a root separation bound for $P$ at $f$) with the property that
the equation $P(g) = 0$ admits no other solutions $g$
with $v(g - f) \geq \sigma$
(where $v$ denotes the usual valuation in $\mathbb{K}[[z]]$).
Moreover, if $f$ is a “non-degenerate
root” of $P$ (in the sense that $S_P(f) \neq 0$, where $S_P$ is a simpler
non-zero differential polynomial, called the separant of $P$), then we actually obtain an explicit
recurrence relation for the coefficients $f_n$ in terms of $f_0, \ldots, f_{n-1}$, for all sufficiently large $n$.
In section 3.3, we will exploit the existence of such a
recurrence relation in the non-degenerate case, by giving a first zero
test for series in the differential field generated by $f$. We will next
strengthen the root separation bound by not only looking for other
solutions of $P(g) = 0$ in $\mathbb{K}[[z]]$,
but also in the ring of logarithmic power series. In section 3.4, this allows us to simplify the zero test (along similar
lines as in [7]) and also widen its scope to power series
that depend on a finite number of parameters (Remark 10).
We finally consider the case when $f$ is specified as a degenerate root of $P$.
In sections 3.5 and 3.6, we give algorithms
for computing root separation bounds in this case, as well as
non-degenerate annihilators.
In principle, annihilators of complex D-algebraic series (such as large expressions in other D-algebraic series) can be computed using brute force (Proposition 14). However, this technique is, in general, very inefficient. For this reason, we introduce the more flexible framework of D-domains in section 4. In this framework, we express D-algebraic series as rational functions in a finite number of D-algebraic series that satisfy a special kind of system of algebraic differential equations with initial conditions. We show how to adapt the major results from section 3 to this setting.
In section 5, we turn our attention to multivariate
D-algebraic series. We will start by showing how to interpret a
multivariate D-algebraic series in $z_1, \ldots, z_n$ as a
D-algebraic series in $z_n$ whose coefficients are
D-algebraic series in $z_1, \ldots, z_{n-1}$. We
next generalize the notion of a D-domain and show that the above
reduction to the univariate case can be done at the level of D-domains.
We conclude by giving some algorithms for some typical multivariate
operations: the resolution of implicit equations, composition, and
monomial transformations. In each of these cases, we will show how to
avoid the computation of differential annihilators as much as possible,
by remaining in the framework of multivariate D-domains.
There are several approaches to the zero test problem for D-algebraic power series [16, 4, 5, 13, 18, 19, 15, 12] and we refer to [7] for a brief discussion. From a logical point of view, the most important decision problems for power series solutions to algebraic differential equations with initial conditions were settled in [4, 5]. One essential tool in this paper is the computation of a generalization of root separation bounds for more general decision problems. In a sense, this paper merely consists of specializations of these results to more specific problems. Nevertheless, we think that we introduced some noteworthy improvements, which we will now point out.
It should first be emphasized that the papers [4, 5] are based on a more general decision procedure for testing whether systems of differential equations and inequations with initial conditions admit solutions. Efficiency is not the major concern in those papers. The authors also do not attempt to state their results in terms of classical differential algebra, even though they are aware of this possibility. From our point of view, one main contribution of this paper is to isolate the part of the problem that can be dealt with using classical differential algebra techniques from the part where initial conditions and root separation bounds come in (notice that [15] provides an interesting alternative way to achieve this goal). This allows us to explicitly state our zero test algorithms, which we also believe to be more efficient than the ones from [4, 5].
Our approach also contains a few theoretical improvements. First of all,
we mainly work over a so-called “effective power series
domain” instead of the constant field $\mathbb{K}$. For instance, we may take a domain that contains the differentially transcendental power series involved in the
Euler–Maclaurin formula for the $\Gamma$-function.
Similarly, the conditions on the constant field $\mathbb{K}$
are slightly weaker: we merely require an effective constant field
with an algorithm for the computation of all positive
integer roots of univariate polynomials with coefficients in $\mathbb{K}$. The improved zero test from
section 3.4 also allows for power series that depend on
parameters.
These theoretical improvements were actually introduced in [7], but the current presentation is simpler and more systematic. In particular, we only need to consider logarithmic power series instead of logarithmic transseries in the correctness proof of the improved zero test from section 3.4. Furthermore, we included algorithms for the computation of non-degenerate annihilators and root separation bounds in the degenerate case.
Section 4 contains no theoretical improvements, but we expect the more flexible framework of D-domains to be most suitable for practical computations. It is interesting to see that the root separation bounds and both zero tests from section 3 can be generalized to this setting. In particular, for the computation of root separation bounds, we introduce the dominant Hermite normal form, which seems interesting in its own right.
As we show in section 5, a multivariate D-algebraic series
in $z_1, \ldots, z_n$ can always be regarded as a D-algebraic
series in $z_n$ whose coefficients are D-algebraic
series in $z_1, \ldots, z_{n-1}$. From the logical
point of view, decision problems for such series therefore reduce to
their univariate counterparts. However, there are a few additional
operations, such as solving implicit equations, extraction of
coefficients and monomial transformations. Not only do we present
algorithms for carrying out such operations, but we also discuss ways to
make this efficient, in the framework of multivariate D-domains.
Let us recall some standard notations from differential algebra. Let
be a field of characteristic zero and let
be a differential
-algebra
that is also an integral domain. We will mainly work with respect to a
single derivation
. Given a
finite number of indeterminates
,
we will denote by
or simply by
the differential ring of differential polynomials in
and by
or
its fraction
field.
We will assume an admissible ranking on the set
. For instance, we may take
whenever
or
and
. Given
such a ranking, the leader of a differential polynomial
is the highest variable
occurring in
when
is
considered as a polynomial in
.
We will denote by
the leader of
. Considering
as a
polynomial in
, the leading
coefficient
is called the initial of
,
the
separant, and we will denote
.
If
has degree
in
, then the pair
is called the Ritt rank of
and such
pairs are ordered lexicographically. We will also denote
in that case.
Given , we say that
is reducible with respect to
if there exists an
such that
or
and
.
The process of Ritt reduction provides us with a relation of
the form
where ,
,
and where
is reduced with respect to
.
We will denote
.
Given , we recall that
stands for the differential ideal generated by
. If
,
then we also denote
and recall that
forms a differential ideal. We say that forms an
autoreduced sequence if
Ritt reduces to
itself with respect to
for each
. In that case the set of differential
polynomials that reduce to zero with respect to
coincides with the differential ideal
.
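To make the classical notions of leader, initial and separant concrete, here is a small Python sketch for differential polynomials in a single differential indeterminate $F$ over $\mathbb{Q}$; the dictionary encoding (exponent tuples for $F, F', F'', \ldots$) is ours and is not taken from the differential algebra references [17, 14].

```python
from fractions import Fraction

# A differential polynomial in one indeterminate F is encoded as a dict mapping exponent
# tuples (e0, e1, ..., er) -- exponents of F, F', ..., F^(r) -- to non-zero coefficients.

def leader_index(P):
    """Index r of the leader F^(r): the highest derivative occurring in P."""
    return max(max((i for i, e in enumerate(m) if e), default=-1) for m in P)

def degree_in_leader(P, r):
    """Degree of P in its leader F^(r)."""
    return max(m[r] if r < len(m) else 0 for m in P)

def initial(P):
    """Coefficient (of lower Ritt rank) of the highest power of the leader."""
    r, d = leader_index(P), degree_in_leader(P, leader_index(P))
    I = {}
    for m, c in P.items():
        if r < len(m) and m[r] == d:
            mm = list(m); mm[r] = 0
            I[tuple(mm)] = I.get(tuple(mm), Fraction(0)) + c
    return I

def separant(P):
    """Formal partial derivative of P with respect to its leader."""
    r, S = leader_index(P), {}
    for m, c in P.items():
        e = m[r] if r < len(m) else 0
        if e:
            mm = list(m); mm[r] = e - 1
            S[tuple(mm)] = S.get(tuple(mm), Fraction(0)) + c * e
    return S

# Example: P = F'^2 - F (an annihilator of f = z^2/4).  Its leader is F', its Ritt rank
# is (1, 2), its initial is 1 and its separant is 2*F'.
P = {(0, 2): Fraction(1), (1,): Fraction(-1)}
print(leader_index(P), initial(P), separant(P))
```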
In section 5, we will also consider differential rings and
differential polynomials with respect to a finite number of pairwise
commuting derivations . In
that case,
has to be replaced with the set of
all expressions
with
and
. The notion of admissible
rankings and Ritt reduction can be extended to this setting and we refer
to classical textbooks on differential algebra [17, 14] for more details.
In order to explicitly write down a differential polynomial , it is convenient to use vector notation. We
will index differential monomials by vectors
where each
is itself a finite sequence
that may be padded with zeros whenever necessary. We
denote
after which we may write
with . For a fixed degree
, it will be convenient to
denote by
the homogeneous component of
of degree
:
The largest with
will be
called the degree of
and we will denote
it by
. The smallest
with
will be called the
differential valuation of
and we denote
it by
. It will also be
convenient to denote
and
.
Given a differential polynomial and a
“point”
, it is
often convenient to consider the additive conjugate of
by
, which
is defined to be the unique differential polynomial
with
for all . The coefficients
of
can be expressed
directly by using a Taylor series expansion:
In particular, we get .
Assume now that and
. Given
,
we will denote by
its valuation in
. This valuation naturally extends to
differential polynomials in
via the
inclusion
. Notice that
should not be confused with
.
Assume now that and let
be a homogeneous differential polynomial in a single variable of degree
. Then we will denote by
the polynomial with
For any series , we then have
We will also denote by the largest root of
in
, while
taking
if no such root exists.
For some purposes, we will occasionally consider logarithmic power
series . Such series can
still be considered as power series
in
and we will still denote by
the
valuation of
in
.
The coefficients
are polynomials in
, and we will write
. Notice that
maps
into itself for each
.
Proposition and
write
with
and
. Then there exists a unique
operator
with
for every
.
Proof. Let us first prove the existence of . If
,
then we may write
with
, and the equation
admits
the solution
since for every
.
For general
, we may write
with
and take
with
This proves the existence of .
For any
of degree
in
, we also notice that
. This implies the uniqueness of
.
Let be a field. Let
be a
differential
-subalgebra of
for
with the property
that for all
and
such
that
, we have
. We will also call
a
power series domain.
A series is said to be D-algebraic over
if it satisfies a non-trivial differential
equation
with
.
In positive characteristic $p$,
any series $f \in \mathbb{K}[[z]]$ satisfies the linear differential
equation $f^{(p)} = 0$, whence any series is D-algebraic (indeed, $\delta^p$
annihilates all series in $\mathbb{K}[[z]]$).
From now on, we will therefore assume that $\mathbb{K}$ has
characteristic zero.
Proposition is D-algebraic if and only if
admits finite transcendence degree over
.
Proof. Assuming that is
D-algebraic over
, pick
of minimal Ritt rank
with
. Then
and
is stable under the derivation
, whence
and
. Conversely, assume that
. Then
satisfy a non-trivial algebraic relation, whence
is D-algebraic.
Proposition 3. The set of D-algebraic series over $\mathbb{A}$ forms a power series domain.
Proof. Let .
Then
, whence
and
.
Similarly,
, whence
and
. Clearly,
, so
and
. Assume finally that
. Then
, whence
,
so that
.
Assume now that is an effective power series
domain. The most obvious way to effectively represent a D-algebraic
power series in
is to represent it by a pair
where
is a computable
series and
a non-trivial annihilator with
. We define the
multiplicity of
as an annihilator of
to be
.
We also say that the annihilator
is
non-degenerate if
,
and notice that the multiplicity of a non-degenerate annihilator is one.
In order to make Proposition 3 effective, we will need a
way to compute a non-degenerate annihilator as a function of an
arbitrary annihilator. This is not completely immediate, and we will
postpone the presentation of an algorithm for doing so to the end of
this section.
Let be D-algebraic over
with annihilator
. Assume
that there exists a number
such that for any
with
and
, we have
.
Then the smallest such number
will be denoted by
and we call it the root separation of
at
.
It corresponds to the number of initial conditions that should be known
in order to determine
in a unique way as a root
of
. In fact
always exists and, as we will see in the next section, we can give an
algorithm to compute it under suitable assumptions.
Proposition 4. Assume that $f$ is D-algebraic over $\mathbb{A}$ with annihilator $P$. Then the
following root separation bound holds:
(1)
Proof. Let and
. Given
with
, we have
Now assume that . Then
whence
Since , we get
, which entails
.
The following proposition also provides us with a partial converse.
Proposition and
.
Assume that
and that
, with
.
Then there exists a unique root
of
with
.
Proof. Notice that implies that
, so that
is finite. Let
. We have to
show the existence of a unique series
with
and
. We
may decompose
Extracting the coefficient of in the relation
now yields
For all , we have
and
only depends on
. In other words, the relation (3)
actually provides us with a recurrence relation for the computation of
.
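As a toy instance of such a recurrence (our choice of example, not one from the text), the series $f = \tan z$ is a non-degenerate root of $F' - 1 - F^2$ with $f_0 = 0$; extracting the coefficient of $z^n$ in $f' = 1 + f^2$ yields $(n+1)\, f_{n+1} = [z^n](1 + f^2)$, which the following Python sketch turns into a coefficient-by-coefficient computation.

```python
from fractions import Fraction

def tan_coefficients(n):
    """Coefficients f_0, ..., f_{n-1} of f = tan(z), computed from f' = 1 + f^2, f_0 = 0.
    Extracting the coefficient of z^k gives (k+1)*f_{k+1} = [k == 0] + sum_i f_i*f_{k-i}."""
    f = [Fraction(0)]
    for k in range(n - 1):
        rhs = Fraction(1 if k == 0 else 0)
        rhs += sum(f[i] * f[k - i] for i in range(k + 1))
        f.append(rhs / (k + 1))
    return f

print(tan_coefficients(8))
# tan z = z + z^3/3 + 2 z^5/15 + 17 z^7/315 + ...
```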
We say that $\mathbb{K}$ is effective if its
elements can be represented effectively and if all field operations can
be carried out by algorithms. We will call $\mathbb{K}$ an
effective diophantine field if all positive integer roots of
polynomials over $\mathbb{K}$ can be determined by algorithm.
In particular, this means that $\mathbb{K}$ admits an
effective zero test, i.e. there exists an algorithm which
takes an element $c$ of $\mathbb{K}$ on
input and which returns true if $c = 0$ and false otherwise.
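As a hedged illustration of the diophantine requirement in the special case $\mathbb{K} = \mathbb{Q}$ (our code, assuming exact rational input): after clearing denominators and stripping a power of $x$, every positive integer root divides the trailing coefficient, so it suffices to test finitely many candidates.

```python
from fractions import Fraction
from functools import reduce
from math import gcd

def positive_integer_roots(coeffs):
    """Positive integer roots of p(x) = sum coeffs[i] * x^i with rational coefficients, p != 0."""
    coeffs = [Fraction(c) for c in coeffs]
    # Clear denominators so that the coefficients become integers.
    den = reduce(lambda a, b: a * b // gcd(a, b), (c.denominator for c in coeffs), 1)
    ints = [int(c * den) for c in coeffs]
    # Strip the factor x^v, so that the trailing coefficient becomes non-zero.
    v = next(i for i, c in enumerate(ints) if c != 0)
    ints = ints[v:]
    a0 = abs(ints[0])
    # Any positive integer root divides the (non-zero) trailing coefficient.
    candidates = [d for d in range(1, a0 + 1) if a0 % d == 0]
    return [d for d in candidates if sum(c * d**i for i, c in enumerate(ints)) == 0]

print(positive_integer_roots([6, -5, 1]))   # x^2 - 5x + 6 -> [2, 3]
```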
A power series $f \in \mathbb{K}[[z]]$ is said to be
computable if there exists an algorithm for computing the coefficient $f_n$ as a function of $n$.
The power series domain $\mathbb{A}$ will be said to be
effective if its elements are all computable power series and
if the differential $\mathbb{K}$-algebra
operations can be carried out by algorithms. We notice that the
differential $\mathbb{K}$-algebra
of all computable series is effective, although it
does not admit an effective zero test.
Assume now that we are given an effective power series domain with an effective zero test over an effective diophantine
field
. Assume also that we
are given an effective D-algebraic power series
and an annihilator
for
. Assume finally that the annihilator
has multiplicity one, so that we may compute
and
by expanding the power series
coefficients of
. In other
words, the bound (1) from Proposition 4
provides us with an effective upper bound for
.
Given a polynomial , we will
now give an algorithm ZeroTest (or
when we want to make the dependency on
and
explicit) for testing whether
. In particular, this shows that the
-algebra
is again an
effective power series domain.
Algorithm ZeroTest
1 |
If |
2 |
If |
3 |
If |
4 |
If |
5 |
Let |
6 |
Return the result of the test |
Proof. We first notice that recursive calls only occur for differential polynomials of a strictly smaller Ritt rank. This guarantees termination of the algorithm.
As to its correctness, we clearly have at line 2
whenever
.
Assume now that we reach line 3 with .
Then the degree of
in its leader cannot be one,
since this would imply
, and
we know that
. Consequently,
is a constant multiple of
, whence
for some
and
, and
.
Assume now that we reach line 4. We have for
some
and
.
Since
and
,
it follows that
.
Assume finally that we reach step 5. If ,
then we clearly have
, so
assume that
. Applying
Proposition 5, we obtain a unique power series
with
and
. It follows that
,
,
and
. The relation
therefore implies
.
Applying Proposition 4 to
,
we obtain the bound
. Since
, we conclude that
.
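The final step of such zero tests can be summarized as follows: once a number $N$ is known to be a root separation bound for the relevant annihilator, vanishing of the first $N$ coefficients already certifies that the whole series vanishes. The Python sketch below only illustrates this last step under the assumption that a valid bound is supplied; it does not reproduce the Ritt reductions of the algorithm.

```python
def certified_zero(coefficient, bound):
    """Decide whether a series vanishes, given a function n -> n-th coefficient and a
    number `bound` that is assumed to be a valid root separation bound.  If some
    coefficient below the bound is non-zero, the series is certainly non-zero; if all of
    them vanish, the root separation property guarantees that the series is zero."""
    return all(coefficient(n) == 0 for n in range(bound))

# Toy usage with explicitly known coefficient functions and a (trivially valid) bound.
print(certified_zero(lambda n: 0, 10))                     # True
print(certified_zero(lambda n: 1 if n == 7 else 0, 10))    # False
```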
Remark for
more than one
at the same time. In that case, we
keep reducing until we find a
with
and such that
for all
.
Remark in
. Indeed, this would require a root
separation bound that is uniform in these parameters. Unfortunately, the
largest integer root of a simple polynomial such as
can become arbitrarily large, so the best uniform root separation bounds
are usually
.
In practical applications, the series is often
the solution of a classical initial value problem, in which case
. One disadvantage of the zero test
from section 3.3 is that
still
depends on
in quite an unpredictable way. In
particular, even for simple
,
this quantity might a priori become arbitrarily large. In this
section, we will give an improved version of our algorithm that does not
have this drawback. The idea is to not only consider ordinary power
series solutions to our differential equations, but also logarithmic
power series solutions in
,
as introduced in section 2.5.
Let be D-algebraic over
with annihilator
. Assume
that there exists a number
such that for any
with
,
we have
. Then the smallest
such number
will be denoted by
and we call it the strong root separation of
at
. Proposition 4
naturally strengthens to this setting:
Proposition is D-algebraic over
with annihilator
. Then the
following strong root separation bound holds:
(4)
Proof. The proof is similar to the proof of Proposition
4 with the following change. Writing
with
, we now have
instead of (2), and where stands
for a polynomial of degree at most
in
.
The consideration of logarithmic solutions leads to a better bound for the existence part of Proposition 5.
Proposition and
.
Assume that
and that
, with
.
Then there exists a root
of
with
.
Proof. The proof is analogous to the proof of Proposition 5, with the exception that (3) should be replaced by
For all , the right hand side
again only depends on
, but the constant term
of
the differential operator
may vanish if
. Yet, Proposition 1
still implies that the equation
admits a
solution in
for any
, which is sufficient for the existence of a
solution
to the equation
.
In the proof of the algorithm ZeroTest, we only needed
the existence of the solution with
and
. In view
of what precedes, we may thus improve the algorithm as follows:
Algorithm ZeroTest
1 |
If |
2 |
If |
3 |
If |
4 |
If |
5 |
Let |
6 |
Return the result of the test |
Remark or
depends
on a finite number of parameters
in
. One interesting aspect of the improved zero
test is that we no longer require any root separation bounds which
depend on
, so the zero test
still works if
depends on parameters (when using
the technique of dynamic evaluation [3] for examining the
finite number of branches that can occur depending on algebraic
conditions on the parameters). In fact, the original equation
may also depend on parameters, as long as we have a
uniform bound for
.
Assume that we are given an effective power series domain with an effective zero test over an effective diophantine
field
. Assume also that we
are given an effective D-algebraic power series
and an annihilator
for
. Let us show how the zero test algorithm from the
previous section can be used in order to compute
, thereby providing an effective bound for
via Proposition 4.
Algorithm RootSeparationBound
and
with
for
.
1 |
Let |
2 | Repeat the following |
3 |
Let |
4 |
If |
5 |
Let |
6 |
Let |
7 |
If |
8 |
Let |
9 |
If |
10 |
Let |
Proof. We first notice that
strictly decreases at every iteration of the main loop, which implies
the termination of our algorithm. As a loop invariant, we also notice
that
(whence
)
whenever we are at line 3, which means that we indeed have an algorithm
for the computation of
. If
, then the correctness of
line 4 follows from Proposition 4. Otherwise, we construct
such that
and
, whence the computability of
at line 6. If
at line 7, then
Proposition 5 implies the existence and uniqueness of
at line 8, and the relation (3) actually
provides us with an algorithm to compute the coefficients of
. Moreover
and
satisfy the assumptions for applying the
algorithm
to
and its
derivatives at line 9.
Now if , then in particular
and
by the uniqueness of
. Consequently, the zero tests
will indeed all succeed at line 9 and we will
return a correct bound
by Proposition 4.
Conversely, if
holds for all
with
and
,
then in particular
. Since
, we also have
and
, whence
by Proposition 4 and
. If
,
then this means that we will reach line 10 and find an index
with
and
.
Remark
Proposition and
with
, computes an annihilator
for
of multiplicity one.
Proof. We may use a variant of the algorithm
RootSeparationBound. Indeed, it suffices to return instead of
in step 4, and
instead of
in step
9.
Proposition and
with
, computes a non-degenerate annihilator
for
.
Proof. Using the previous proposition, we may assume
without loss of generality that has multiplicity
one. In particular, we have a zero test for elements in
. Now let
and keep
replacing
as long as
. Then we will end up with a
such that
and
.
We are now in a position to make Proposition 3 effective. We first need a general algorithm for computing algebraic dependencies.
Proposition 14. Let $R$ be an effective integral domain with an
effective zero test. There exists an algorithm which takes
polynomials $Q_1, \ldots, Q_{k+1} \in R[x_1, \ldots, x_k]$ on input, and which
produces a non-trivial relation $A(Q_1, \ldots, Q_{k+1}) = 0$ with $0 \neq A \in R[y_1, \ldots, y_{k+1}]$.
Proof. Let .
Given
, the set of power
products
with
and
contains at most
elements.
The degree of any polynomial in
is bounded by
, and the space of
polynomials in
of degree
has rank
as a free
-module.
Taking
minimal such that
, it follows that the set
contains a non-trivial
-linear
dependency
, which we may
compute using linear algebra.
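The linear-algebra argument of this proof can be mimicked on truncated power series: list the power products up to a given total degree, regard their truncated coefficient vectors as the rows of a matrix over $\mathbb{Q}$, and compute a kernel vector. The Python sketch below is only a heuristic variant (truncation may produce spurious relations, and the precise degree bookkeeping of the proof is not reproduced); all names are ours.

```python
from fractions import Fraction
from itertools import product

def mul_trunc(f, g, n):
    h = [Fraction(0)] * n
    for i, fi in enumerate(f[:n]):
        if fi:
            for j, gj in enumerate(g[:n - i]):
                h[i + j] += fi * gj
    return h

def power_products(series, deg, n):
    """All products s_1^{a_1} * ... * s_k^{a_k} with a_1 + ... + a_k <= deg, truncated mod z^n."""
    exps, prods = [], []
    for a in product(range(deg + 1), repeat=len(series)):
        if sum(a) <= deg:
            p = [Fraction(1)] + [Fraction(0)] * (n - 1)
            for s, e in zip(series, a):
                for _ in range(e):
                    p = mul_trunc(p, s, n)
            exps.append(a)
            prods.append(p)
    return exps, prods

def kernel_vector(rows):
    """A non-trivial rational vector c with sum_i c_i * rows[i] = 0, or None."""
    m, n = len(rows), len(rows[0])
    # Augment with an identity block in order to track the row operations.
    aug = [list(r) + [Fraction(int(i == j)) for j in range(m)] for i, r in enumerate(rows)]
    row = 0
    for col in range(n):
        piv = next((r for r in range(row, m) if aug[r][col] != 0), None)
        if piv is None:
            continue
        aug[row], aug[piv] = aug[piv], aug[row]
        for r in range(m):
            if r != row and aug[r][col] != 0:
                factor = aug[r][col] / aug[row][col]
                aug[r] = [a - factor * b for a, b in zip(aug[r], aug[row])]
        row += 1
    for r in range(row, m):          # rows whose series part became identically zero
        return aug[r][n:]
    return None

# Heuristic relation between f = 1/(1 - z) and g = f' = 1/(1 - z)^2: expect g - f^2 = 0,
# i.e. non-zero kernel entries at the exponents (0, 1) and (2, 0).
n = 12
f = [Fraction(1)] * n
g = [Fraction(k + 1) for k in range(n)]
exps, prods = power_products([f, g], 2, n)
c = kernel_vector(prods)
print([(e, ci) for e, ci in zip(exps, c) if ci != 0])
```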
If is a D-algebraic power series with
non-degenerate annihilator
of order
, then the
-algebra
is contained in the
-algebra
, which
is stable under
.
Proposition 15. The set of D-algebraic series over $\mathbb{A}$ forms an effective power series domain.
Proof. Let be represented by
pairs
and
with
. Applying Proposition 13,
we may assume without loss of generality that the annihilators
and
are non-degenerate, of orders
and
.
Then
is an effective -algebra
which is stable under
. For
, we may use Proposition 14 in order to compute an
-algebraic
relation between
. Similarly,
assuming that
with
,
the algebra
is stable under
, so we may compute an
-algebraic relation between
.
Of course, the algorithm from the proof of Proposition 14 uses brute force for finding algebraic relations, so any algorithm that relies on this method is bound to be quite inefficient. In section 4 below, we will discuss algorithms that avoid relying on Proposition 14 for computations with D-algebraic series.
In the algorithms from the previous sections, we essentially represent
D-algebraic power series by elements of ,
where
is a computable root of a differential
polynomial
. Since
is itself an effective power series domain with an
effective zero test, we may also form towers
and
represent D-algebraic power series by elements of such towers. This
generalization is useful for representing expressions involving
, the
-algebra
operations, and other D-algebraic operations such as
,
,
etc. Indeed, differential polynomials that annihilate such expressions
can quickly become quite large. In this section, we will introduce an
even more convenient representation based on differential algebra, which
generalizes the construction of towers and provides more flexibility for
representing solutions to implicit equations at the end of section 5.5.
An abstract D-domain over is a
differential algebra
over
of finite transcendence degree
over
together with a differential
-algebra
morphism
which we will call the evaluation
mapping. A second abstract D-domain
with
evaluation mapping
is said to be
equivalent to
if
admits the same image as
.
Assuming that
is an effective power series
domain with an effective zero test over an effective diophantine field
, we say that
is effective if
is
computable and
is computable for each
; in that case, we also say that
is an effective D-domain.
A D-domain is an abstract D-domain of
the form
where are such that
and where the leaders of are
for certain
. It will be
convenient to lift the evaluation mapping
to
using
.
Then
becomes simply the evaluation mapping at
. In particular,
is effective as soon as
is a tuple
of computable power series. We will call the D-domain
unmixed if
for each
. We will call the D-domain
Pfaffian if
is of the form
with
for all
.
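For intuition about such systems with initial conditions, a Pfaffian-style system $y_i' = P_i(y_1, \ldots, y_k)$ with polynomial right-hand sides determines the coefficients of its power series solution by a straightforward recurrence. The Python sketch below (our toy code, with the right-hand sides passed as callables acting on truncated series) computes them for $y_1' = y_1$, $y_2' = y_2^2$ with $y_1(0) = y_2(0) = 1$, i.e. for $y_1 = \exp z$ and $y_2 = 1/(1-z)$.

```python
from fractions import Fraction

def mul_trunc(f, g, n):
    h = [Fraction(0)] * n
    for i, fi in enumerate(f[:n]):
        if fi:
            for j, gj in enumerate(g[:n - i]):
                h[i + j] += fi * gj
    return h

def solve_system(rhs, initial, n):
    """Coefficients up to order n of the solution of y_i' = rhs[i](y), y_i(0) = initial[i].
    Each rhs[i] maps the list of truncated solution series to a truncated series; since the
    right-hand sides are polynomial, the coefficient of z^k of rhs[i](y) only involves the
    coefficients of order <= k, which are already known when y_{i,k+1} is computed."""
    y = [[Fraction(c)] + [Fraction(0)] * (n - 1) for c in initial]
    for k in range(n - 1):
        values = [r(y) for r in rhs]
        for i in range(len(y)):
            y[i][k + 1] = values[i][k] / (k + 1)   # (k+1) y_{i,k+1} = [z^k] rhs_i(y)
    return y

rhs = [lambda y: y[0],                        # y1' = y1      (y1 = exp z)
       lambda y: mul_trunc(y[1], y[1], 8)]    # y2' = y2^2    (y2 = 1/(1-z))
print(solve_system(rhs, [1, 1], 8))
```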
Given any D-domain over
and
, the series
is D-algebraic over
. If
is effective,
then we may effectively compute an annihilator for
.
Any D-algebraic series over
is represented by an element of an unmixed D-domain
over
.
If
is effective and
, then we may effectively compute such a
.
Proof. Given a D-algebraic domain
over
and
,
the sequence
contains non-trivial algebraic
dependencies which can be computed using Proposition 14.
This proves a). Inversely, given
,
we may compute a non-degenerate annihilator
for
using Proposition 13. Then
with
defines an unmixed
D-domain in which
represents
.
Proposition is equivalent to an unmixed D-domain.
If
is effective, then this reduction is
effective.
Proof. Let be generators of the
-algebra
and let
be the transcendence degree of
over
. For each
, we may compute a
non-degenerate annihilator
for
using Proposition 13. Then
with
defines an unmixed D-domain that is equivalent
to
.
Proposition is equivalent to a Pfaffian D-domain.
If
is effective, then this reduction is
effective.
Proof. Let us first show that
is equivalent to a D-domain with orders
.
Modulo the replacement of
by
, we may assume without loss of generality that
and
for all
. Now consider formal variables
with
and
. Let
,
and consider the
-algebra
morphism
, with
for
and
. Consider the set
of all
polynomials
with
if
and
. Let
be the ranking on
.
We define a ranking on
by setting
whenever
or
and
. Then the D-domain
with
is equivalent to
.
Assuming that and
for
all
, let us now show that
is equivalent to a Pfaffian D-domain. We may
assume that we ordered the variables
such that
. In particular, this implies
that
with
.
Let us prove by induction over
that we may
replace
by a differential polynomial of the form
with
.
So assume that the induction hypothesis is satisfied for all smaller
. We have
with
. Let
be the maximum of the degrees of
and
, and
.
Then substitution of
for each
with
in
and
yields two polynomials
and
in
such that
. This means that we may replace
by
.
Before we generalize the zero test algorithms from section 3, we will need a way to asymptotically normalize systems of linear differential equations with power series coefficients. The normalization that we will use is an asymptotic variant of the Hermite normal form.
Let us first consider a square matrix
. We say that
is in Hermite normal form if
is upper
triangular and there exist integers
such that
whenever
or
.
for all
.
If has maximal rank
, then we must have
for each
. It can be shown that there
exists a unique unimodular matrix
such that
is in Hermite normal form. Moreover,
and there is an algorithm for the computation of
if
is an effective field.
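For readers less familiar with Hermite normal forms, the following Python sketch computes the classical row-style Hermite normal form of an integer matrix by unimodular row operations; this is only the commutative analogue over $\mathbb{Z}$ of the normal form used in the text and does not implement the dominant variant introduced below.

```python
def hermite_normal_form(A):
    """Row Hermite normal form of an integer matrix: row echelon shape, positive pivots,
    and entries above each pivot reduced modulo the pivot.  All operations used (swaps,
    adding integer multiples of a row, negating a row) are unimodular."""
    A = [row[:] for row in A]
    m, n = len(A), len(A[0])
    r = 0
    for c in range(n):
        # Zero out the entries below position r in column c via gcd-style row operations.
        for i in range(r + 1, m):
            while A[i][c] != 0:
                if A[r][c] == 0 or abs(A[i][c]) < abs(A[r][c]):
                    A[r], A[i] = A[i], A[r]
                q = A[i][c] // A[r][c]
                A[i] = [a - q * b for a, b in zip(A[i], A[r])]
        if A[r][c] != 0:
            if A[r][c] < 0:                      # normalize the pivot sign
                A[r] = [-a for a in A[r]]
            for i in range(r):                   # reduce the entries above the pivot
                q = A[i][c] // A[r][c]
                A[i] = [a - q * b for a, b in zip(A[i], A[r])]
            r += 1
        if r == m:
            break
    return A

print(hermite_normal_form([[2, 4, 4], [-6, 6, 12], [10, 4, 16]]))
# [[2, 0, 120], [0, 2, 20], [0, 0, 156]]
```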
Let us now consider a matrix of rank
. Recall that
commutes with
following the law
. For each
we will
denote by
the
-th
row of
and by
its “skew dominant coefficient”, where
If , then we understand that
. The matrix
with rows
will be called the row dominant matrix
of
. We say that
is in dominant Hermite normal form if
is in Hermite normal form. Any matrix
with an inverse in
and such that
is in dominant Hermite normal form will be called a
normalization matrix. We claim that such a matrix always
exists. What is more, if the entries of
are
computable power series, then we may use the following algorithm to
compute
.
Algorithm NormalizationMatrix
with
computable coefficients and rank
for
1 |
Let |
2 |
Let |
3 |
Let |
4 |
If |
5 |
Otherwise return |
Proof. If ,
then
is a normalization matrix of
by construction. If
,
and assuming that
is a normalization matrix of
, then
is a normalization matrix of
,
since
is always invertible. This proves the
correctness of the algorithm.
As to its termination, let and let
be minimal such that the matrix with rows
has rank
for each
.
We claim that
can only increase and that the
-tuple
can only decrease for the lexicographical ordering, once
stabilizes.
Indeed, let ,
,
and let
be minimal such that the matrix with rows
has rank
for each
.
Then the rows
are precisely the first
rows of the Hermite normal form
, and therefore form a matrix of rank
. This shows that
and
for each
.
Having proved our claims, we may assume without loss of generality that
and
have stabilized.
Since the degrees of the polynomial entries of
can only decrease, we may also assume that
whence
have stabilized. Furthermore, we may
assume that our rows were ordered in such a way that
for all
. Modulo division of
by powers of
,
we may finally assume that
.
Then
and
for all
. This is repeatedly possible only
if
lies in the
module
spanned by
for each
. Since
has rank
, this means that we must have
, which completes the
termination proof.
Given an abstract D-domain ,
we claim that there exists a number
such that
for any alternative evaluation mapping
on
, we have
whenever
for all
.
The minimal such number will be called the root separation for
and we denote it by
. Our claim clearly holds when
is an unmixed D-domain. Indeed, in this case, we have
with the notations from above. The general case reduces to this
particular case by applying Proposition 17. However, since
the computation of univariate differential polynomials that annihilate
given elements of may be quite expensive, we
would like to have a more direct bound. We first need a few
preliminaries.
Consider linear differential polynomials
. Any such polynomial can formally
be viewed as an element of
.
Let
be the matrix with
. Assuming that this matrix has rank
over
, we may
use the method from section 4.2 to compute a normalization
matrix
for
.
We will call such a
a normalization matrix for
. Applying
to the column vector with entries
we obtain a
new column vector with “dominant reduced” entries
. By construction, the dominant
coefficient
of
is a
polynomial of the form
with . We will denote by
the polynomial with
.
We also denote by
the largest root of
in
, or
if no such root exists.
Now consider a D-domain with evaluation mapping
. Let
. Since
,
each
is non-zero and has the same leader
as
. In
particular, the polynomials
are linearly
independent over
.
Consequently, there exists a normalization matrix
for
, whence
and
are well defined for all
.
Proposition be a D-domain with evaluation mapping
and
. Let
be a normalization matrix for
and
. Then
Proof. Let for each
. Given
with
, there exists a largest
index
with
.
We have
Assuming that , we also have
whence
Since , we get
, which entails
,
and
.
In order to generalize the zero test from section 3.3 to the setting of D-domains, we first need a suitable counterpart of Proposition 5 in addition to Proposition 19.
Assume that we are given a differential ring
where
are such that
for
certain
. Given
and
, we would
like to solve the system of equations
for
with
.
Assuming that
for each
, we may define
and
as in the previous section. Since
is a normalization matrix, there also exists a matrix
with
. Then we have
and
. Now
consider
We have the following analogue of Proposition 5 for solving
the equation for sufficiently small
.
Proposition and
, then there exists a unique
with
and
.
Proof. By what precedes, it suffices to show that the
equation admits a unique solution with
. Let
for
each
. Recall that we may
write
for each , with
. We now decompose each
as
Extracting the coefficient of in the relation
now yields
For all , we have
and
only depends on coefficients
with
and coefficients
with
.
Hence (7) provides us with a recurrence relation for the
computation of
.
Algorithm ZeroTest
1 |
If |
2 |
If |
3 |
If |
4 |
If |
5 |
Let |
6 |
Let |
7 |
Let |
8 |
Return the result of the test |
Proof. The proof is analogous to the proof of the zero
test algorithm from section 3.3.
The zero test from the previous section may be further improved along
similar lines as what we did in section 3.4. Given an
abstract D-domain , the
strong root separation for
is the
smallest number
such that for any alternative
“evaluation” mapping
,
we have
whenever
for all
. The existence of such a
number is shown in the same way as before and for D-domains we have the
usual explicit bound:
Proposition be a D-domain with evaluation mapping
and
. Let
be a normalization matrix for
and
. Then
Proof. The proof is similar to the proof of Proposition
19. This time, ,
we again pick
to be the largest index
with
, so that
we may write
with
.
In the same way as before, we now get
For , we again get
,
,
and
.
For the analogue of Proposition 20, we define
Proposition and
, then there exists an
with
and
.
Proof. The proof is similar to the proof of Proposition 20, except that (7) should be replaced by
where is the operator inverse of
from Proposition 1.
In the algorithm ZeroTest from the previous section, it
is now possible to replace by
in the definition of
.
Given a subring , we will
denote by
the intersection of
with the fraction field of
.
We say that
is a power series domain if
is a differential ring with respect to the
derivations
such that
and
is stable under the substitutions
.
A power series domain is said to be
effective, if the differential
-algebra
operations can be carried out by algorithms and for each
we have a computable mapping which takes
on input and returns
as a computable power
series in
. In particular,
this means that every
can be regarded as an
computable power series in the sense that there exists an
algorithm which takes
on input and returns the
coefficient
of
in
on output. We also notice that the quotient field of
an effective power series domain is an effective differential field,
when representing fractions in the usual way. In particular,
is an effective differential
-algebra.
The above definitions can be generalized to countable dimension as
follows. We let , where we
regard each
as being naturally included into
. A subset
is said to be a power series domain if
is a power series domain for each
.
We say that
is effective if each
is and if we have an algorithm for computing an upper
bound for the dimension
of any given series
.
Given , we let
denote the evaluation of
at
. Given
and
with
,
we define the composition
of
and
to be the unique power series
with
We say that a power series domain is stable
under composition if
for any
and
with
. If we also have an algorithm for the computation
of
, then we say that
is effectively stable under composition.
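As a concrete univariate illustration (our code; the text treats the general multivariate case), the truncated composition $f \circ g$ modulo $z^n$ with $g_0 = 0$ can be computed by a Horner scheme on truncated series:

```python
from fractions import Fraction
from math import factorial

def mul_trunc(f, g, n):
    h = [Fraction(0)] * n
    for i, fi in enumerate(f[:n]):
        if fi:
            for j, gj in enumerate(g[:n - i]):
                h[i + j] += fi * gj
    return h

def compose_trunc(f, g, n):
    """Coefficients of f(g(z)) mod z^n, assuming that g has zero constant term (so only
    the first n coefficients of f can contribute below order n)."""
    assert not g[0], "composition requires g(0) = 0"
    result = [Fraction(0)] * n
    for c in reversed(f[:n]):        # Horner: ((f_{n-1} g + f_{n-2}) g + ...) + f_0
        result = mul_trunc(result, g, n)
        result[0] += c
    return result

# exp(log(1 + z)) = 1 + z: compose the truncations of exp and log(1 + z).
n = 8
exp_ = [Fraction(1, factorial(k)) for k in range(n)]
log1p = [Fraction(0)] + [Fraction((-1) ** (k + 1), k) for k in range(1, n)]
print(compose_trunc(exp_, log1p, n))   # 1, 1, 0, 0, ...
```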
Let with
and
. Assume that the matrix formed by
the first
columns of the scalar matrix
is invertible. Then the implicit function theorem implies that there
exist unique power series ,
such that the completed vector
with
satisfies
. We
say that a power series domain
satisfies the
implicit function theorem if
for the above
solution of
, whenever
. We say that
effectively satisfies the implicit function theorem if we also
have an algorithm to compute
as a function of
.
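For a toy univariate instance of this implicit function theorem (our choice of example), the equation $y = z + y^2$ has a unique power series solution with $y(0) = 0$; its truncation can be obtained by a simple fixed-point iteration, each pass of which gains at least one further correct coefficient.

```python
from fractions import Fraction

def mul_trunc(f, g, n):
    h = [Fraction(0)] * n
    for i, fi in enumerate(f[:n]):
        if fi:
            for j, gj in enumerate(g[:n - i]):
                h[i + j] += fi * gj
    return h

def implicit_solution(n):
    """Truncation of the unique y with y(0) = 0 and y = z + y^2.  Since y has valuation 1,
    each fixed-point pass y <- z + y^2 gains at least one correct coefficient, so n passes
    determine the first n coefficients."""
    y = [Fraction(0)] * n
    for _ in range(n):
        y2 = mul_trunc(y, y, n)
        y = [y2[k] + (Fraction(1) if k == 1 else Fraction(0)) for k in range(n)]
    return y

print(implicit_solution(8))   # 0, 1, 1, 2, 5, 14, 42, 132  (shifted Catalan numbers)
```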
Consider an invertible matrix
with rational coefficients. Then the transformation
is called a monomial transformation, where we consider
as a column vector. For a power series
whose
support
satisfies
,
we may apply the monomial transformation to
as
well:
A power series domain is said to be stable
under monomial transformations if for any
and invertible matrix
with
, we have
.
We say that
is effectively stable under
monomial transformations if we also have an algorithm to compute
as a function of
and
. Notice that we do
not require the existence of a test whether
in this case (the behaviour of the algorithm being unspecified whenever
).
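The following Python sketch (our encoding) applies a monomial transformation to a Laurent polynomial represented by its support, restricting to integer matrices so that the transformed exponents remain integral.

```python
def monomial_transform(poly, M):
    """Apply z^e -> z^(M e) to a (Laurent) polynomial given as {exponent tuple: coefficient},
    for an integer matrix M given as a list of rows.  If M is invertible, distinct exponent
    vectors stay distinct, so no coefficients get merged."""
    result = {}
    for exp, c in poly.items():
        new_exp = tuple(sum(M[i][j] * exp[j] for j in range(len(exp))) for i in range(len(M)))
        result[new_exp] = result.get(new_exp, 0) + c
    return result

# The unimodular matrix M sends z1 -> z1*z2 and z2 -> z1.
p = {(1, 0): 3, (0, 2): -1}          # 3*z1 - z2^2
M = [[1, 1], [1, 0]]
print(monomial_transform(p, M))      # {(1, 1): 3, (2, 0): -1}, i.e. 3*z1*z2 - z1^2
```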
Given an effective power series domain that effectively satisfies each of the above closure properties (composition, implicit functions, and monomial transformations), it can be shown that an effective version of the Weierstrass preparation theorem holds. We refer to [11] for details.
Let be a multivariate power series domain. Given
a power series
and
,
we may consider
as a power series
in with coefficients in
, and also as a power series in
with coefficients in the fraction field of
.
If
is D-algebraic over
for this latter interpretation of
,
then we say that
is D-algebraic in
(or with respect to
).
We say that
is D-algebraic over
if
is D-algebraic in each of the variables
.
Proposition is D-algebraic over
and
, each coefficient
of the power series expansion of
in
is D-algebraic over
.
Proof. Given ,
let
be a non-trivial differential equation
satisfied by
in
.
Let
denote the valuation of
in
. Modulo division of
by
, we
may assume without loss of generality that
.
Now
satisfies the non-trivial equation
. This shows that
is D-algebraic over
.
For , we will prove by
induction that
is D-algebraic over
. So assume that
are
D-algebraic over
, whence
over
. Given
, the series
is D-algebraic in over
, by Proposition 3. By what precedes,
it follows that
is D-algebraic in
.
Proposition of D-algebraic power series over
forms a multivariate power series domain.
Proof. The stability of under
the differential ring operations and division (when defined) directly
follows from Proposition 3. The stability under the
projections
follows from Proposition 23.
From now on, denotes the set of differential
polynomials with respect to the pairwise commuting derivations
. Proposition 2 also
admits a natural generalization:
Proposition is D-algebraic if and only if
admits finite transcendence degree over
.
Proof. Assuming that is
D-algebraic over
, let
be of minimal Ritt rank
with
, for each
. Let
.
Then
for each
and
is stable under the derivations
. Consequently,
and
. Conversely, assume that
. Then
satisfy a non-trivial algebraic relation for each
, whence
is
D-algebraic.
Assume now that is an effective multivariate
power series domain. We may effectively represent a D-algebraic series
over
by a tuple
where
is a computable power
series in
and
an
annihilator for
with respect to
, for each
.
Proposition is an effective multivariate power series
domain over an effective diophantine field
with
an effective zero test. Then
is an effective
multivariate power series domain with an effective zero test. Moreover,
for any
, there exists an
algorithm for computing the coefficients
of the
power series expansion of a given
with respect
to
.
Proof. We prove the proposition by induction over . For
,
the result is trivial, so assume that
and that
we proved the result for all smaller
.
Given , let us first show how
to compute the coefficients
of the power series
expansion of a given
with respect to
. The induction hypothesis provides
us with a zero test in
. In
the proof of Proposition 23, we thus have an algorithm for
the computation of the valuation
,
and the remainder of this proof is constructive.
Let denote the quotient field of
. We claim that
is an
effective diophantine field. Indeed, given a polynomial
, an integer
is a root
of
if and only if
is a
root of
multiplied by the denominators of its
coefficients. Without loss of generality, we may therefore assume that
. After this reduction,
is a root of
if and only if
is a root of the coefficient
of
in
for all
. Let
be
such that this coefficient
is non-zero and let
be its largest root in
(or
if no such root exists). We may now check
whether
for
and compute
the largest root of
in
.
Any series in
may be
regarded as a univariate series in
with
coefficients in
. By what
precedes,
is an effective diophantine field and
we have an algorithm for the computation of the coefficients of
. The zero tests from section 3 can therefore be used as zero tests for
.
Let be a multivariate power series domain. An
abstract multivariate D-domain over
is a
differential
-algebra
for
together with a
differential
-algebra
morphism
. A multivariate
D-domain over
is an abstract multivariate
D-domain
over
of the
form
where are such that
and where each only involves derivations of the
form
and admits a leader of the form
for certain
.
We will denote by
those elements
of the fraction field of
such that
. We will also write
for
. We
say that
is effective if
is an effective power series domain and
is computable for each
. We
say that
is unmixed if
for all
and
.
We say that
is Pfaffian if
is of the form
with
for all
.
The proof of the following proposition is analogous to the proof of Proposition 16:
Given any multivariate D-domain over
and any
,
the series
is D-algebraic over
. If
is effective, then we may effectively compute
.
Corollary be an effective multivariate D-domain over
. Then there exists an algorithm
for testing whether
, for
given
.
Proof. This follows from Proposition 27
and the existence of a zero test in (Proposition
26).
The proofs of the following propositions are analogous to the proofs of Propositions 17 and 18:
Proposition
Proposition
Consider with
and
. Let
be
such that
for all
and
assume that
. Then we recall
from differential algebra [17, 14] that the
-polynomial
of
and
is defined by
Given a D-domain as above, we say that
is coherent if
for all
. Given an arbitrary
effective D-domain
, we may
compute an equivalent effective coherent D-domain using the algorithm
below, where we make use of the effective zero test from Corollary 28:
Algorithm MakeCoherent
1 |
Let |
2 | Repeat the following |
3 |
Let |
4 |
If there exists |
5 |
Else if |
6 |
Else if |
7 |
Else if |
8 |
Else if |
9 |
Else return |
Proof. The main loop invariant states that only contains annihilators for the point
, and each of the original differential
polynomials
Ritt reduces to zero with respect to
. The termination follows
from the fact that we only add new elements of smaller and smaller Ritt
ranks to
. At the end, the
set
is necessarily coherent and autoreduced, and
such that
. Since each
original polynomial
reduces to zero modulo
, there must exist a corresponding
polynomial
with leader
and
. This shows that
is indeed a D-domain.
By Proposition 26, there exists an algorithm for extracting
the coefficients of multivariate D-algebraic power series with respect
to a single variable . In the
case of power series that are represented by elements of a D-domain
, it would be nice to have a
more efficient and systematic algorithm. In particular, given
and
, we would
like to effectively construct a D-domain
such
that for any
and any
, we can produce a
with
. For this, it suffices that
contains all coefficients of the form
with
and
.
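At the level of truncated series, extracting a coefficient with respect to the last variable is a purely combinatorial operation on the support; the short sketch below (our encoding, with series given as dictionaries from exponent tuples to coefficients) illustrates the operation that the D-domain construction is designed to support.

```python
def coefficient_in_last_variable(series, j):
    """Given a (truncated) multivariate series {(i1, ..., in): c}, return the coefficient
    of z_n^j as a series in the remaining variables z_1, ..., z_{n-1}."""
    return {exp[:-1]: c for exp, c in series.items() if exp[-1] == j}

# f = 1/(1 - z1*z2) truncated: sum over k of (z1*z2)^k; the coefficient of z2^2 is z1^2.
f = {(k, k): 1 for k in range(5)}
print(coefficient_in_last_variable(f, 2))   # {(2,): 1}
```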
Theoretically speaking, we may construct using
Proposition 26. As a first optimization, we claim that
there exists a computable finite set
such that
we can take
whenever
. Indeed, for any
,
we may regard
as a power series in
with coefficients in the quotient field of
. For sufficiently large
, the coefficients of this power series are
determined by a recurrence relation of type (3).
As a second optimization, assume that is
Pfaffian and regular in
, meaning that
the valuations
of the
in
all vanish. We will take
with for all
,
and where the differential ideal
is constructed
as follows. Given
and
, let
Since , extracting the
coefficient of
in the equation
yields a relation
for some . Now let
be the set of all differential polynomials of the form
. Then we may take
. Notice that
is again Pfaffian if
is Pfaffian. However,
regularity in the other variables
is not
necessarily preserved. Nevertheless, if
for all
, then the recursive
extraction of coefficients only yields regular Pfaffian D-domains.
Remark , it can be shown
that the reduction to an equivalent Pfaffian D-domain used in the proof
of Proposition 30 actually yields a regular Pfaffian
D-domain in
.
From what precedes, it follows in particular that there exists a
constant such that we may take
for all
. Hence for any
and any
,
we have
. We may then
reinterpret the D-domain
as a univariate
D-domain by only retaining the relations
and
using coefficients of the form
with
. This allows us to replace the theoretical
zero test from Corollary 28 by one of the more efficient
zero tests from section 4.
Let with
and denote
Let be the submatrix spanned by the first
columns of
and
the submatrix spanned by the last
columns. Assume that
is invertible. Then the
implicit function theorem implies that there exist unique power series
, such that the completed
vector
with
satisfies
(9)
Forming the Jacobian matrix of
, we let
and
denote the submatrices spanned by the first
resp. last
rows.
Differentiating the relation (9), we obtain
Since , this yields
Given any , consider the row
matrices
and
.
Then
Let ,
and
. Let
and
diagonal matrices with entries
resp.
.
Denoting by
the row vector of derivations with
the above relation implies
Notice that maps power series to row vectors of
power series.
Assume now that for some effective multivariate
D-domain
of dimension
over
with
Assume also that the coordinate functions are
among the
. We will make the
additional assumption that
for all
. In particular, setting
, we have
.
We introduce a new evaluation mapping
on
by taking
We may formally extend into an evaluation from
into
.
This evaluation is not compatible with the original derivations
, but we do have
for all , and for each of the
derivations
. Since
is finite dimensional as a
-algebra,
Proposition 14 implies that
satisfy
a computable non-trivial
-algebraic
relation
for each
and
. Using Proposition 13,
we may assume without loss of generality that these relations are
non-degenerate. We now define a new multivariate D-domain
for the derivations
,
by taking
and taking the evaluation mapping as in (10). By construction,
for each
, so
is
D-algebraic.
The additional assumption that for all
is quite benign. Indeed, for any index
with
, postcomposition of
with
means in particular
sending
to
.
This can also be achieved by extracting the coefficient of
in
.
Consequently, modulo a finite number of coefficient extractions, we
may assume without loss of generality that
for
all
. We have proved:
Proposition effectively satisfies the implicit function theorem.
Proposition is effectively stable under composition.
Proof. Given and
with
, consider
the power series
. The above
algorithm allows us to compute
with
and
, as well
as the result of the composition
(when
considering
as an element of
), which coincides with
.
The construction of is somewhat unsatisfactory
since the computation of individual
-algebraic
relations
using Proposition 14 is
usually quite expensive. In the frequent situation that
for all
, we may use the
following more efficient technique. Using Proposition 30,
we may assume without loss of generality that the original relations
are of the form
with
. By assumption, we have
for all
.
With the above notations, let
denote the
horizontal concatenation of the matrices
and
, so that
for all
and
has
coefficients in
. Modulo the
relations
, we may then write
For each and
,
this means that there exists a power product
of
the
and a polynomial
with
This allows us to take in the construction of
the multivariate D-domain
instead of the
relation found using Proposition 14.
Proposition be an effective power series domain over an effective
diophantine field
. Then
is effectively stable under monomial
transformations.
Proof. Let be an effective
D-domain over
, let
and
. Given an
invertible matrix
with
, we have to show how to compute
. Given a row vector
, we will denote
.
Then for any power series
and any
, we notice that
Since , the elements
satisfy an
-algebraic
dependency which can be computed using Proposition 14.
Applying this result for all
of the form
, we obtain D-algebraic relations
over
for
in the
individual derivations
.
The above proposition could in principle be used for computing with
monomial transforms of elements in an effective D-domain . However, in a similar way as it is more
efficient to keep fractions in their symbolic forms when computing with
fractions in
, it is often
more efficient to keep monomial transforms in their symbolic form
when computing with them.
More precisely, we first notice that monomial transformations with
are somewhat easier, since
they can be computed using our algorithms for composition. Now two
monomial transformations
and
are said to be compatible if there exist invertible matrices
with
.
Given
with
and
, let
,
and
.
Since
and
have
coefficients in
, we may
construct a D-domain
with
. Now
and
. In particular, any
can be represented by
for some
.
From the geometrical point of view, the notion of compatibility can be
interpreted as follows. Let be an open subset of
and
.
Let
be the subset of
of
series
with
.
Given a monomial transformation
,
we call
its associated cone. Given two monomial
transformations
and
with
associated cones
and
, it is not hard to check that these transformations
are compatible if and only if
.
It should be emphasized that zero testing of power series is quite an asymmetric problem, in the sense that it is usually easy to prove that a non-zero series is indeed non-zero (it suffices to find a non-zero coefficient), whereas it is often hard to prove that a vanishing series is indeed zero. Since exact zero tests are so slow, it is often preferable to use heuristic zero tests instead. In fact, heuristic zero tests can be combined with genuine zero tests: for a complex computation that involves many zero tests, we first perform all zero tests heuristically. At the end of the computation, we collect all series that were heuristically assumed to be zero during the computation of the result, and we apply an exact zero test to these series.
The most obvious heuristic zero test for a univariate D-algebraic series
is to compute all coefficients up to a fixed order. Even for large
orders, these coefficients can be computed efficiently using Newton's
method or relaxed power series evaluation [2, 8,
1, 9, 6, 10]. In the
case of multivariate D-algebraic series ,
one simple idea is to generate a random scalar vector
and to test whether the univariate power series
vanishes. The composition
can be computed
efficiently using the algorithm(s) from section 5.5. Here
we notice the remarkable fact that the derivation
for which
does not depend on . This
makes it actually possible to design another exact zero test for
multivariate D-algebraic series: compute a univariate D-domain capable
of representing
where
is
treated as a formal parameter, and test whether
.
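The sketch below (our code) combines the two heuristics just described: evaluate along a random rational line $z_i = \lambda_i t$ and inspect the first coefficients of the resulting univariate series; a negative answer is a certificate of non-vanishing, whereas a positive answer is only heuristic evidence.

```python
import random
from fractions import Fraction

def substitute_line(series, lam, order):
    """Coefficients up to `order` of t -> f(lam_1 t, ..., lam_n t), for a truncated
    multivariate series given as {exponent tuple: coefficient}."""
    coeffs = [Fraction(0)] * order
    for exp, c in series.items():
        total = sum(exp)
        if total < order:
            term = Fraction(c)
            for l, e in zip(lam, exp):
                term *= Fraction(l) ** e
            coeffs[total] += term
    return coeffs

def heuristic_is_zero(series, nvars, order=20, trials=2):
    """Heuristic zero test: evaluate along a few random rational lines and inspect the
    first `order` coefficients.  False certifies non-vanishing; True is not a proof."""
    for _ in range(trials):
        lam = [Fraction(random.randint(1, 100)) for _ in range(nvars)]
        if any(c != 0 for c in substitute_line(series, lam, order)):
            return False
    return True

# (z1 + z2)^2 - z1^2 - 2*z1*z2 - z2^2 expands to the zero series.
f = {(2, 0): 1 - 1, (1, 1): 2 - 2, (0, 2): 1 - 1}
print(heuristic_is_zero(f, 2))   # True
```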
A. Bostan, F. Chyzak, F. Ollivier, B. Salvy, É. Schost, and A. Sedoglavic. Fast computation of power series solutions of systems of differential equations. In Proceedings of the 18th ACM-SIAM Symposium on Discrete Algorithms, pages 1012–1021, New Orleans, Louisiana, U.S.A., January 2007.
R.P. Brent and H.T. Kung. Fast algorithms for manipulating formal power series. Journal of the ACM, 25:581–595, 1978.
J. Della Dora, C. Dicrescenzo, and D. Duval. A new method for computing in algebraic number fields. In G. Goos and J. Hartmanis, editors, Eurocal'85 (2), volume 174 of Lect. Notes in Comp. Science, pages 321–326. Springer, 1985.
J. Denef and L. Lipshitz. Power series solutions of algebraic differential equations. Math. Ann., 267:213–238, 1984.
J. Denef and L. Lipshitz. Decision problems for differential equations. The Journ. of Symb. Logic, 54(3):941–950, 1989.
M. J. Fischer and L. J. Stockmeyer. Fast on-line integer multiplication. Proc. 5th ACM Symposium on Theory of Computing, 9:67–72, 1974.
J. van der Hoeven. A new zero-test for formal power series. In Teo Mora, editor, Proc. ISSAC '02, pages 117–122, Lille, France, July 2002.
J. van der Hoeven. Relax, but don't be too lazy. JSC, 34:479–542, 2002.
J. van der Hoeven. Newton's method and FFT trading. JSC, 45(8):857–878, 2010.
J. van der Hoeven. Faster relaxed multiplication. Technical report, HAL, 2012. http://hal.archives-ouvertes.fr/hal-00687479.
J. van der Hoeven. Effective power series computations. Technical report, HAL, 2014. http://hal.archives-ouvertes.fr/.
J. van der Hoeven and J.R. Shackell. Complexity bounds for zero-test algorithms. JSC, 41:1004–1020, 2006.
A.G. Khovanskii. Fewnomials, volume 88 of Translations of Mathematical Monographs. A.M.S., Providence RI, 1991.
E.R. Kolchin. Differential algebra and algebraic groups. Academic Press, New York, 1973.
A. Péladan-Germa. Tests effectifs de nullité dans des extensions d'anneaux différentiels. PhD thesis, Gage, École Polytechnique, Palaiseau, France, 1997.
R.H. Risch. Algebraic properties of elementary functions in analysis. Amer. Journ. of Math., 4(101):743–759, 1975.
J.F. Ritt. Differential algebra. Amer. Math. Soc., New York, 1950.
J. Shackell. A differential-equations approach to functional equivalence. In Proc. ISSAC '89, pages 7–10, Portland, Oregon, A.C.M., New York, 1989. ACM Press.
J. Shackell. Zero equivalence in function fields defined by differential equations. Proc. of the A.M.S., 336(1):151–172, 1993.