Until now, the area of symbolic computation has mainly focused on the manipulation of algebraic expressions. It would be interesting to apply a similar spirit of “exact computations” to the field of mathematical analysis. One important step for such a project is the ability to compute with computable complex numbers and computable analytic functions. Such computations include effective analytic continuation, the exploration of Riemann surfaces and the study of singularities. This paper aims at providing some first contributions in this direction, both from a theoretical point of view (such as precise definitions of computable Riemann surfaces and computable analytic functions) and a practical one (how to compute bounds and analytic continuations in a reasonably efficient way).
We started to implement some of the algorithms in the
Although the field of symbolic computation has given rise to several software packages for mathematically correct computations with algebraic expressions, similar tools for analytic computations are still largely nonexistent.
Of course, a large amount of software for numerical analysis does exist, but the user generally has to make several error estimates by hand in order to guarantee the applicability of the method being used. There are also several systems for interval arithmetic, but the vast majority of them work only at fixed precision. Finally, several systems have been developed for certified arbitrary precision computations with polynomial systems. However, such systems cannot cope with transcendental functions or differential equations.
The first central concept of a systematic theory for certified computational analysis is the notion of a computable real number. Such a number $x \in \mathbb{R}$ is given by an approximation algorithm which takes $\varepsilon \in \mathbb{D}^{>}$ with $\mathbb{D} = \mathbb{Z} 2^{\mathbb{Z}}$ on input and which produces an $\varepsilon$-approximation $\tilde{x} \in \mathbb{D}$ for $x$ with $|\tilde{x} - x| \leqslant \varepsilon$. One defines computable complex numbers in a similar way.
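To make the definition concrete, here is a small Python sketch (ours, not from the paper) of an approximation algorithm for the computable real $\sqrt{2}$; dyadic numbers are modelled as `Fraction` values with power-of-two denominators.

```python
from fractions import Fraction
from math import isqrt

def sqrt2(eps: Fraction) -> Fraction:
    """Approximation algorithm for sqrt(2): given a dyadic eps > 0,
    return a dyadic x with |x - sqrt(2)| <= eps."""
    # Pick a scale 2^k with 2^-k <= eps.
    k = 0
    while Fraction(1, 2 ** k) > eps:
        k += 1
    # isqrt(2 * 4^k) = floor(2^k * sqrt(2)), so the error is < 2^-k <= eps.
    return Fraction(isqrt(2 * 4 ** k), 2 ** k)

x = sqrt2(Fraction(1, 1024))   # a 2^-10-approximation of sqrt(2)
```

Operations on computable reals are then implemented by combining approximation algorithms, requesting the arguments at a precision fine enough that the accumulated error stays below the requested $\varepsilon$.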
The theory of computable real numbers and functions goes back to Turing [Tur36] and has been developed further from a theoretical point of view [Grz55, Alb80, BB85, Wei00]. It should be noticed that computable real and complex numbers are a bit tricky to manipulate: although they can easily be added, multiplied, etc., there exists no test for deciding whether a computable real number is identically zero. Nevertheless, possibly incomplete zero-tests do exist for interesting subclasses of the real numbers [Ric97, MP00, vdH01b]. In section 2.5, we will also introduce the concept of semi-computable real numbers, which may be useful if a zero-test is really needed.
The subject of computable real numbers also raises several practical and
complexity issues. At the ground level, one usually implements a library
for the evaluation of basic operations $+$, $-$, $\times$, etc. and special functions $\exp$, $\log$, $\sin$, etc. Using fast
multiplication methods like the FFT [KO63, CT65,
SS71], this raises the question of how to do this in an
asymptotically efficient way [Bre76a, Bre76b,
CC90, Kar91, vdH99a, vdH01a,
vdH05b]. At an intermediate level, one needs a software
interface for certified operations with arbitrary precision numbers.
Several implementations exist [FHL+05, GPR03,
Mül00, vdH99b, vdH06b],
which are mostly based on correct rounding or interval arithmetic [Moo66, AH83, Neu90, JKDW01,
BBH01, Bla02]. At the top level, one may
finally provide a data type for real numbers [MM96, Mül00, Lam06, O'C05, vdH06a,
vdH06b]. Given the real number result of a complex
computation, an interesting question is to globally optimize the cost of
determining a given number of digits of the result, by automatically
adjusting the precisions for intermediate computations [vdH06a,
vdH06b].
The next major challenge for computational analysis is the efficient resolution of more complicated problems, like differential or functional equations. In our opinion, it is important to consider this problem in the complex domain. There are several reasons for this:
Most explicitly stated problems admit analytic (or semi-analytic) solutions.
The locations of the singularities of the solutions in the complex plane give important information on the optimal step-size for numerical algorithms.
The behaviour of the solutions near singularities gives important information on the nature of these solutions.
Analytic functions are very rigid in the sense that they are entirely determined by their power series expansion at a point, using the process of analytic continuation.
This paper aims at providing a basic theoretical framework for computations with computable analytic functions and effective analytic continuation. When possible, our study is oriented to efficiency and concrete implementability.
The history of analytic continuation of solutions to complex dynamical systems goes back to the 19th century [BB56]. Although interval arithmetic and Taylor models have been widely used for certified numeric integration of dynamical systems [Moo66, Loh88, MB96, Loh01, MB04], most implementations currently use a fixed precision [Ber98]. Some early work on effective analytic continuation in the multiple precision context was done in [CC90, vdH99a, vdH01a, vdH05b]; see also [vdH07] for some applications. Of course, fast arithmetic on formal power series [BK75, BK78, vdH02b] is an important ingredient from the practical point of view. Again, the manipulation of computable analytic functions is very tricky. For instance, even for convergent local solutions to algebraic differential equations with rational coefficients and initial conditions, there exists no general algorithm for determining the radius of convergence [DL89]. Of course, one also inherits the zero-test problem from computable complex numbers.
Let us detail the structure and the main results of this paper. In section 2, we start by recalling some basic definitions and results from the theory of computable real numbers. In particular, we recall the concepts of left computable and right computable real numbers, which correspond to computable lower resp. upper bounds of real numbers.
In section 3, we introduce the concept of a computable
Riemann surface. In a similar way as computable real numbers are approximated by “digital numbers” in $\mathbb{D} = \mathbb{Z} 2^{\mathbb{Z}}$, we will approximate computable Riemann surfaces by so-called “digital Riemann surfaces”, which are easier to manipulate from an effective point of view. For instance, in section 3.2, we will see how to identify two branches in a digital Riemann surface. However, from a conceptual point of view, it is not always convenient to see Riemann surfaces as limits of sequences of digital approximations. In sections 3.4 and 3.5, we will therefore discuss two equivalent ways to represent computable Riemann surfaces. Notice that all Riemann surfaces in this paper are above $\mathbb{C}$.
The next section 4 deals with constructions of several
kinds of computable Riemann surfaces. We start with the definition of
computable coverings (which can be thought of as morphisms of computable
Riemann surfaces) and the construction of the limit of a sequence of
coverings. We proceed with the definition of disjoint unions, covering
products, quotients and joins at a point. For instance, if $\mathcal{R}_f$ and $\mathcal{R}_g$ are the Riemann surfaces of two analytic functions $f$ resp. $g$, then $f + g$ and $f g$ are defined on the covering product $\mathcal{R}_f \sqcap \mathcal{R}_g$ of $\mathcal{R}_f$ and $\mathcal{R}_g$. In section 4.4, we consider Riemann
surfaces which admit a distinguished point, the root. This allows for
the definition of a smallest “organic” Riemann surface which
contains a prescribed set of “broken line paths”. Universal
covering spaces and so called convolution products of rooted Riemann
surfaces are special cases of organic Riemann surfaces.
In section 5, we come to the main subject of computable
analytic functions. In [vdH05a], a first definition was
proposed. Roughly speaking, the idea was to see a computable analytic
function as an instance of an abstract data type, with methods for computing
the coefficients of $f$;
a lower bound for the radius of convergence $r_f$ of $f$;
an upper bound for $|f|$ on any closed disk of radius $r < r_f$;
the analytic continuation of $f$ from $0$ to $\delta$, with $|\delta| < r_f$.
This point of view is very natural from a computational point of view if we want to solve a differential or more general functional equation, since it is often possible to locally solve such equations. However, the computed bounds are usually not sharp, so we need some additional global conditions in order to ensure that analytic continuation can be carried out effectively at all points where the solutions are defined.
Now the more systematic theory of computable Riemann surfaces of this
paper makes it possible to directly define the concept of a computable
analytic function on a given computable Riemann surface. Although this
makes definitions easier, one still has to show how to construct the
Riemann surface of a computable analytic function. Using the results
from section 4, we will do this for many classical
operations, like $+$, $-$, $\times$, $/$, $\circ$, $\exp$, $\log$, $\int$, algebraic and differential equations, convolution products, etc.
Especially in the case of convolution products, the global knowledge of
an underlying Riemann surface is very important. What is more, we will
show that it is possible to construct the Riemann surfaces
incrementally, on explicit demand by the user. Also, whereas all
underlying Riemann surfaces from [vdH05a] were simply
connected, the present theory enables us to identify certain branches
where the function takes identical values. Nevertheless, the local
approach from [vdH05a] remains useful, because any
“locally computable analytic function” induces a natural
“globally computable analytic function” (see theorem 5.7).
During the implementation of some of the algorithms from [vdH05a], it turned out that the mere computability of bounds is not sufficient in practice: inaccurate bounds could lead to extremely inefficient algorithms. Therefore, it is essential to have
algorithms for the efficient computation of accurate bounds. In section
6, we will study this problem in a systematic way. Our
leitmotiv is to work with truncated power series expansions at a given order $n$, together with a bound for the remainder. On the
one hand, we will study how such expansions and bounds can be computed
efficiently and accurately (sections 6.3 and 6.4).
On the other hand, we will show how to use them for computing the
absolute value of the smallest zero of an analytic function (section 6.1) and for computing extremal values on a compact disk
(section 6.2). Several of the ideas behind our algorithms
already occur in the literature about Taylor models and polynomial root
finding. However, the context is a bit different, so our exposition may
have some interest for its own sake.
For the sake of simplicity, we have limited ourselves to the study of
univariate analytic functions. It should be possible to generalize to
the multivariate case along the same lines. The main extra difficulty we
foresee is integration, because it requires an automatic algorithm for
the deformation of paths. Nevertheless, in sections 4.8 and
5.5, we study convolution products, and a similar approach
might be used for integration. Some of the algorithms in this paper have
been implemented in the
We assume that the reader is familiar with basic notions of the theory
of Turing machines. We recall that a Turing machine
computes a function
, where
if the Turing machine does not halt on the input
. A function
is said to be computable if
for some
Turing machine
. A subset
of
is said to be
recursively enumerable, or shortly enumerable, if
or if there exists a Turing machine
with
. We say that
is computable if both
and
are enumerable. Denoting by
the set of Turing machines, there exists a bijection
, whose inverse encodes each Turing machine by
a unique natural number.
More generally, an encoding (or effective representation) of a set $X$ is a partial surjective function $\mathrm{enc} : \mathbb{N} \dashrightarrow X$, which is not necessarily injective. In that case, we call $X$ (or more precisely the pair $(X, \mathrm{enc})$) an effective set. If $\operatorname{dom} \mathrm{enc}$ is computable or enumerable, then we call $X$ an abstract computable resp. enumerable set. For instance, the set of Turing machines which halt on all inputs is an effective set, but not an abstract computable set, because of the halting problem. If $X$ and $Y$ are effective sets, then so is $X \times Y$, for the encoding $\mathrm{enc}_{X \times Y}$ induced by $\mathrm{enc}_X$ and $\mathrm{enc}_Y$ via a computable pairing function $\mathbb{N}^2 \to \mathbb{N}$. By induction, $X^n$ is an effective set for each $n \in \mathbb{N}$.
Many other classical sets, like finite sequences or trees over an
effective set admit straightforward encodings, which will not be
detailed in what follows.
A function $f : X \to Y$ between two effective sets $X$ and $Y$ is said to be computable if there exists a Turing machine $M$ such that $f(\mathrm{enc}_X(n)) = \mathrm{enc}_Y(f_M(n))$ for all $n \in \operatorname{dom} \mathrm{enc}_X$. In that case, each $M$ with this property provides an encoding for $f$, and we denote by $Y^X$ the effective set of all computable functions from $X$ to $Y$. A partial function $f : X \dashrightarrow Y$ is said to be computable if there exists a Turing machine $M$ with $f(\mathrm{enc}_X(n)) = \mathrm{enc}_Y(f_M(n))$ for all $n \in \operatorname{dom} \mathrm{enc}_X$ with $\mathrm{enc}_X(n) \in \operatorname{dom} f$.
Sometimes, it is convenient to allow for generalized encodings , where
is
another encoded set. Indeed, in that case, the composition
yields an encoding in the usual sense. For instance,
, where
encodes each function
by the Turing machine
which computes it. Given
, we
will write
and
.
To each object
, given by its
encoding
with
,
we may naturally associate its representation
in
. However, this
association does not lead to a mapping
,
since we do not necessarily have
.
In particular, in order to implement a computable function
via a computable function
,
using
, one has to make sure
that
whenever
.
An -ary relation
on an effective set
is said to be
computable, if there exists a computable subset
of
, with
. Equivalently, we may require the
existence of a computable function
with
for all
.
Similarly,
is enumerable, if there
exists an enumerable subset
of
, with
.
This is equivalent to the existence of a computable function
with
for all
. Here
denotes the set
of increasing computable functions
,
divided by the equivalence relation
with
. Notice that
and
are equal as sets, but not as
effective sets. A computable function
will be
called an ultimate test. Notice that the equality relation on
an effective set
is not necessarily computable
or enumerable, even if
is an abstract computable
set.
Since a subset is also a unary relation on
, the above definition in
particular yields the notions of computable and enumerable subsets of
. We also define
to be a sequentially enumerable subset of
if
or if there exists a
computable function
with
. Similarly, we say that
is
sequentially computable if both
and
are sequentially enumerable. If
is sequentially enumerable and
admits a
computable equality test, then
is enumerable. If
is enumerable and
is an
abstract enumerable set, then
is sequentially
enumerable. If
is sequentially computable, then
is an abstract enumerable set.
There are several other interesting notions which deserve further study,
but which will not be used in what follows. For instance, we may define
a subset of an effective set
to be pseudo-computable, if there exists a computable function
with
,
where
is defined similarly as
. For instance, given a Turing machine
, the set
is a pseudo-computable subset of
.
Let $\mathbb{D} = \mathbb{Z} 2^{\mathbb{Z}} = \{ m 2^{e} : m, e \in \mathbb{Z} \}$ be the set of digital or dyadic numbers. Given an ordered ring $R$, we denote $R^{>} = \{ x \in R : x > 0 \}$, $R^{\geqslant} = \{ x \in R : x \geqslant 0 \}$, etc. Given $\varepsilon \in \mathbb{D}^{>}$ and $x \in \mathbb{R}$, we say that $\tilde{x} \in \mathbb{D}$ is an $\varepsilon$-approximation of $x$ if $|\tilde{x} - x| \leqslant \varepsilon$.
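As an illustration (ours, not the paper's), both notions translate directly into Python, with $\mathbb{D}$ modelled by `Fraction` values whose reduced denominator is a power of two:

```python
from fractions import Fraction

def is_dyadic(x: Fraction) -> bool:
    """A reduced rational is dyadic iff its denominator is a power of two."""
    d = x.denominator
    return d & (d - 1) == 0

def is_eps_approx(xt: Fraction, x: Fraction, eps: Fraction) -> bool:
    """xt is an eps-approximation of x if |xt - x| <= eps."""
    return abs(xt - x) <= eps
```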
An approximator for $x \in \mathbb{R}$ is a computable function $\check{x} : \mathbb{D}^{>} \to \mathbb{D}$ which sends $\varepsilon \in \mathbb{D}^{>}$ to an $\varepsilon$-approximation of $x$. If $x$ admits such an approximator, then we call $x$ a computable real number and encode $x$ by $\check{x}$. We denote by $\mathbb{R}^{\mathrm{app}}$ the set of approximators and by $\mathbb{R}^{\mathrm{com}}$ the effective set of computable real numbers. Given $x \in \mathbb{R}^{\mathrm{com}}$, both the problems of testing whether $x = 0$ resp. $x > 0$ are undecidable.
The usual topologies on and
naturally induce topologies on
and
. Given an open subset
of
, an element of
is called a computable real function. Notice that
such a function admits a natural encoding by an element
, where
.
Many classical functions like $+$, $-$, $\times$, $/$, $\exp$, $\log$, $\sin$, $\cos$ are easily seen to be computable. It can be
shown (see [Grz55, Grz57, Wei00]
and theorem 2.3 below) that a computable real function is
necessarily continuous. Consequently, the step and stair functions are
not computable. Intuitively speaking, this stems from the fact that the
sign function cannot be computed effectively for computable real
numbers.
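The following sketch (ours) shows what can be salvaged in practice: the sign of $x \neq 0$ is eventually found by refining approximations, but for $x = 0$ the search never succeeds, so any implementation must give up after a finite amount of work. The function name and fuel bound are ours.

```python
from fractions import Fraction

def sign_or_timeout(approx, max_steps=64):
    """Try to determine sign(x) from an approximation algorithm
    approx: eps -> eps-approximation of x.  Returns -1 or 1 once an
    approximation separates x from 0; returns None on 'timeout',
    reflecting that no zero-test terminates on all inputs."""
    eps = Fraction(1, 2)
    for _ in range(max_steps):
        a = approx(eps)
        if abs(a) > eps:           # then 0 lies outside [a - eps, a + eps]
            return 1 if a > 0 else -1
        eps /= 2                   # refine and retry
    return None                    # x may be 0, or merely very small
```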
It is convenient to express part of the semantics of computations with
computable real numbers by providing a signature for the available
operations. For instance, the class comes with
two main functions
Similarly, the class provides operations
However, we take care not to provide functions for comparisons.
There exist many equivalent definitions for computable real
numbers and several alternative encodings [Wei00, Chapter
4]. A particularly interesting alternative encoding is to define an approximator (or two-sided approximator) of $x \in \mathbb{R}$ to be a computable function $\check{x} : \mathbb{N} \to \mathbb{D}^{2};\, n \longmapsto (\underline{x}_n, \overline{x}_n)$ with $\underline{x}_n \leqslant x \leqslant \overline{x}_n$ and $\overline{x}_n - \underline{x}_n \to 0$. This definition admits two variants: a left approximator (resp. right approximator) of $x$ is a computable increasing (resp. decreasing) function $\check{x} : \mathbb{N} \to \mathbb{D};\, n \longmapsto x_n$, with $x_n \to x$. A real number is said to be left computable (resp. right computable) if it admits a left (resp. right) approximator.
Intuitively speaking, a left (resp. right) computable real
number corresponds to a computable lower (resp. upper)
bound. Indeed, in what follows, it will frequently occur that we can
compute sharper and sharper lower or upper bounds for certain real
numbers, without being able to compute an optimal bound. We denote by $\mathbb{R}^{\mathrm{lapp}}$, $\mathbb{R}^{\mathrm{rapp}}$, $\mathbb{R}^{\mathrm{lcom}}$ and $\mathbb{R}^{\mathrm{rcom}}$ the left and right analogues of $\mathbb{R}^{\mathrm{app}}$ and $\mathbb{R}^{\mathrm{com}}$.
Remark. It is convenient to also consider the set $\bar{\mathbb{D}} = \mathbb{D} \cup \{ -\infty, +\infty \}$ of extended digital numbers. This leads to natural counterparts $\bar{\mathbb{R}}^{\mathrm{com}}$, $\bar{\mathbb{R}}^{\mathrm{lcom}}$, $\bar{\mathbb{R}}^{\mathrm{rcom}}$, etc.
Remark. In practice, it is convenient to let the argument $n$ of approximators correspond to the estimated cost of the computation of the requested approximation (see also [vdH06b]).
We also notice that left, right and two-sided approximators can be implemented by a common class real with a method approximate, which returns a bounding interval $[\underline{x}_n, \overline{x}_n] \ni x$ as a function of $n$. In the case of left (resp. right) approximators, we would have $\overline{x}_n = +\infty$ (resp. $\underline{x}_n = -\infty$).
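A minimal version of such a common class might look as follows in Python (our sketch; the names `Real` and `approximate` mirror the text, everything else is ours). A left approximator simply reports $+\infty$ as its upper bound:

```python
from fractions import Fraction
from math import inf

class Real:
    """Common interface for two-sided, left and right approximators:
    approximate(n) returns an interval (lo, hi) containing x."""
    def __init__(self, lower=None, upper=None):
        self.lower = lower   # n -> increasing lower bounds, or None
        self.upper = upper   # n -> decreasing upper bounds, or None

    def approximate(self, n):
        lo = self.lower(n) if self.lower else -inf
        hi = self.upper(n) if self.upper else inf
        return (lo, hi)

# Two-sided approximator for x = 1/3, with dyadic bounds of width 2^-n.
x = Real(lambda n: Fraction((2 ** n) // 3, 2 ** n),
         lambda n: Fraction((2 ** n) // 3 + 1, 2 ** n))
# Left approximator only: lower bounds increasing to 1, upper bound +oo.
y = Real(lower=lambda n: Fraction(2 ** n - 1, 2 ** n))
```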
Let $\Omega$ be an open subset of $\mathbb{R}^{n}$ or $(\mathbb{R}^{\mathrm{com}})^{n}$. A function $f : \Omega \to \mathbb{R}$ is said to be lower continuous (resp. upper continuous), if for every $x \in \Omega$ and every $y < f(x)$ (resp. $y > f(x)$), there exists a neighbourhood $V$ of $x$, such that $f(x') > y$ (resp. $f(x') < y$) for all $x' \in V$. We have [Grz55, Grz57, Wei00]:
Theorem 2.3. Let $\Omega$ be an open subset of $(\mathbb{R}^{\mathrm{com}})^{n}$. Then
(a) Any computable function $f : \Omega \to \mathbb{R}^{\mathrm{com}}$ is continuous.
(b) Any lower computable function $f : \Omega \to \mathbb{R}^{\mathrm{lcom}}$ is lower continuous.
(c) Any upper computable function $f : \Omega \to \mathbb{R}^{\mathrm{rcom}}$ is upper continuous.
Proof. We will prove (b); the other two
assertions are proved in a similar way. The function
admits an encoding
. Let
with approximator
Let be a left approximator for
. Given
,
there exists a
with
. Now the computation of
by
only depends on
for some
finite
. Increasing
if necessary, we may assume without loss of generality
that
Let , with approximator
. For a certain
, we have
.
Now consider the alternative approximator
of
with
for
and
otherwise. Then, by
construction,
satisfies
. We conclude that
.
The “lower step function” $\sigma : \mathbb{R} \to \mathbb{R}$, defined by $\sigma(x) = 0$ if $x \leqslant 0$ and $\sigma(x) = 1$ otherwise, is lower computable in the sense that it maps $\mathbb{R}^{\mathrm{com}}$ into $\mathbb{R}^{\mathrm{lcom}}$. Indeed, given an approximator $n \longmapsto (\underline{x}_n, \overline{x}_n)$ of $x$, we may take $\sigma(x)_n = \sigma(\underline{x}_n)$.
Similarly, the function $x \longmapsto \lceil x \rceil$ is lower computable, while $x \longmapsto \lfloor x \rfloor$ is upper computable. In particular, this shows that lower (resp. upper) computable real numbers are not necessarily computable. Besides the projections, typical lower computable functions on $\mathbb{R}^{\mathrm{lcom}}$ are:
Here the dot in indicates the
argument of the function
.
Left computable numbers are turned into right computable numbers and
vice versa by the following operations:
More generally, increasing computable real functions induce both increasing lower and upper computable real functions, while decreasing computable real functions turn left computable real numbers into right computable real numbers and vice versa.
The complexification $\mathbb{C}^{\mathrm{com}} = \mathbb{R}^{\mathrm{com}}[\mathrm{i}]$ of $\mathbb{R}^{\mathrm{com}}$ provides a natural definition for the set of computable complex numbers. Typical operations on $\mathbb{C}^{\mathrm{com}}$ include the field operations $+$, $-$, $\times$, $/$, as well as $\exp$, $\log$, etc. The complexification $\mathbb{D}[\mathrm{i}]$ of $\mathbb{D}$ also provides a natural encoding for $\mathbb{C}^{\mathrm{com}}$, and the approximation function for numbers in $\mathbb{R}^{\mathrm{com}}$ extends componentwise to $\mathbb{C}^{\mathrm{com}}$.
Clearly, functions like $\sqrt{\cdot}$, $\log$, etc. can only be defined on simply connected subsets of $\mathbb{C} \setminus \{0\}$. On the other hand, $\mathbb{C}^{\mathrm{com}}$ is effectively algebraically closed in the sense that there exists an algorithm which takes a polynomial $P \in \mathbb{C}^{\mathrm{com}}[z]$ of degree $d > 0$ on input and which returns its set of roots in $\mathbb{C}^{\mathrm{com}}$.
For many applications, the absence of computable comparisons
for computable real or complex numbers can be a big problem. One
solution to this problem is to systematically consider all possible
answers of zero tests or sign computations and to use these answers as
hypotheses during subsequent tests. For instance, if we assume that
, then a subsequent test
should return
.
The above approach can be formalized as follows. A system of real
constraints is a pair $\Sigma = (\check{x}, \sigma)$ with $\check{x} = (\check{x}_1, \ldots, \check{x}_n) \in (\mathbb{R}^{\mathrm{app}})^{n}$ and $\sigma \in \{-1, 0, 1\}^{n}$ for some $n \in \mathbb{N}$. We say that $\Sigma$ is satisfied if $\operatorname{sign} x_i = \sigma_i$ for $i = 1, \ldots, n$. We denote by $\mathcal{S}$ the set of systems of real constraints. A semi-computable real number is encoded by a computable function $\check{x} : D \to \mathbb{R}^{\mathrm{app}}$, where $D$ is a finite subset of $\mathcal{S}$ such that at least one element of $D$ is satisfied, and such that $\check{x}(\Sigma)$ and $\check{x}(\Sigma')$ approximate the same number whenever both $\Sigma$ and $\Sigma'$ are satisfied. We denote by $\mathbb{R}^{\mathrm{scom}}$ the set of semi-computable real numbers. A semi-computable function is a function $f : \mathbb{R}^{\mathrm{scom}} \to \mathbb{R}^{\mathrm{scom}}$. Such a function naturally induces a function $\mathbb{R}^{\mathrm{com}} \to \mathbb{R}^{\mathrm{scom}}$.
Indeed, given
, encoded by
, we may take
and
, whenever
.
Example is semi-computable. Indeed, given
, we first compute an
-approximation
of
with
(e.g.
) and
.
If
, then we let
and take . Otherwise, we let
and take with
and
.
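The branching mechanism behind such examples can be sketched in Python as follows (our code, not the paper's; the representation of constraints as dictionaries is ours). A branch is a pair (constraints, value); when an approximation cannot separate $x$ from $0$, one branch per sign hypothesis is produced, to be pruned later by a constraint checker:

```python
from fractions import Fraction

def semi_sign(approx, eps=Fraction(1, 2)):
    """Sketch of a semi-computable sign function.  approx maps eps to an
    eps-approximation of x.  When the approximation decides the sign, a
    single unconditional branch is returned; otherwise one branch per
    hypothesis 'sign(x) = s'."""
    a = approx(eps)
    if abs(a) > eps:                       # the sign of x is certain
        return [({}, 1 if a > 0 else -1)]
    # Undecided: branch on the three possible constraints.
    return [({'sign(x)': s}, s) for s in (-1, 0, 1)]
```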
From a practical point of view, computations with semi-computable
numbers can be implemented using non-deterministic evaluation and we
point to the similarity with the computation with parameterized
expressions [vdH97, Chapter 8]. Each branch of the
non-deterministic computation process comes with a system of real constraints. A constraint checker is used in order to eliminate branches whose constraints are contradictory.
In many applications, the numbers under consideration belong to a polynomial algebra over $\mathbb{Q}$, and one may use classical algorithms from real algebraic geometry to check the consistency of systems of real constraints [BPR03]. Modulo further progress in
automatic proofs of identities [Ric92, Zei90,
vdH02a], we hope that more and more powerful constraint
checkers will be constructed for increasingly general classes of
constants (like algebraic exp-log expressions). This would allow for the automatic elimination of
a large number of inconsistent branches. Notice also that it is
recommended to spend a roughly equivalent time in trying to prove and
disprove constraints. Of course, proving $x \neq 0$ for a computable real number $x$ is easy, since it suffices to find a non-zero digit of $x$.
As in the case of computations with parameterized expressions, many algorithms for computable real numbers naturally generalize to semi-computable real numbers. This is due to the fact that all numbers involved often belong to a fixed polynomial algebra, whose Noetherianity may be used in termination proofs. We refer to [vdH97] for examples.
Remark , for a certain number
of fixed subsets of the real numbers. However, we have not yet found any
practical use for such a generalization.
A classical Riemann surface (above $\mathbb{C}$) is a topological space $\mathcal{R}$, together with a projection $\pi : \mathcal{R} \to \mathbb{C}$, so that every $\zeta \in \mathcal{R}$ admits a neighbourhood $V$ for which $\pi|_{V}$ is a homeomorphism of $V$ onto an open ball of $\mathbb{C}$. A Riemann surface with border is defined similarly, except that each $\zeta \in \mathcal{R}$ now admits a neighbourhood $V$ for which $\pi|_{V}$ is a homeomorphism of $V$ onto a subset of $\mathbb{C}$ which is homeomorphic to the closed half-disk $\{ z \in \mathbb{C} : |z| \leqslant 1, \operatorname{Im} z \geqslant 0 \}$. A classical covering is a local homeomorphism $\varphi : \mathcal{R} \to \mathcal{R}'$ between two Riemann surfaces, which commutes with the projections, i.e. $\pi' \circ \varphi = \pi$. Throughout this paper, coverings are not required to be surjective.
An encoding of a digital Riemann surface is a tuple $(\lambda, A, \pi, \square)$, where $A$ is a finite set of nodes, $\lambda \in \mathbb{D}^{>}$ a scale, $\pi : A \to \mathbb{Z}[\mathrm{i}]$ a projection and $\square \subseteq A \times A$ a symmetric adjacency relation, such that
DR1. If $a \mathrel{\square} b$, then $\pi(a) - \pi(b) \in \{ 1, -1, \mathrm{i}, -\mathrm{i} \}$.
DR2. If $a \mathrel{\square} b$ and $a \mathrel{\square} b'$ are such that $\pi(b) = \pi(b')$, then $b = b'$.
DR3. Let $a_1, a_2, a_3, a_4 \in A$ be such that $\pi(a_{k+1}) = \pi(a_k) + \mathrm{i}^{k-1}$ for $k = 1, 2, 3$ (so that the corresponding four squares turn around a common corner), and such that three relations among $a_1 \mathrel{\square} a_2$, $a_2 \mathrel{\square} a_3$, $a_3 \mathrel{\square} a_4$ and $a_4 \mathrel{\square} a_1$ hold. Then the fourth relation holds as well.
The conditions DR2 and DR3 are
illustrated in figure 3.1 below. In the case when with pairwise distinct projections
and
satisfy
,
then we will also write
. Notice that
.
[Figure 3.1]
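In code, such an encoding is just a finite combinatorial object. The following Python sketch (ours) represents nodes, the projection onto grid squares, and the adjacency relation, and checks DR1 together with DR2; the precise form of the conditions is our reading of the definition:

```python
# Nodes are labels; pi maps each node to a grid square (x, y) in Z x Z;
# adj is a set of frozensets {a, b} of adjacent nodes.
def check_dr1_dr2(nodes, pi, adj):
    """DR1: adjacent nodes occupy horizontally or vertically neighbouring
    grid squares.  DR2: a node has at most one neighbour above each
    neighbouring grid square."""
    for a, b in (tuple(e) for e in adj):
        (xa, ya), (xb, yb) = pi[a], pi[b]
        if abs(xa - xb) + abs(ya - yb) != 1:
            return False                 # DR1 violated
    for a in nodes:
        seen = set()
        for e in adj:
            if a in e:
                (b,) = e - {a}
                if pi[b] in seen:        # two neighbours, same projection
                    return False         # DR2 violated
                seen.add(pi[b])
    return True

# Two squares side by side: a valid encoding.
nodes = {'a', 'b'}
pi = {'a': (0, 0), 'b': (1, 0)}
adj = {frozenset({'a', 'b'})}
```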
Let us show how to associate a Riemann surface in the classical sense to an encoding $(\lambda, A, \pi, \square)$ as above. To each $a \in A$, we may associate a compact square $Q_a = \lambda (\pi(a) + [0, 1]^{2}) \subseteq \mathbb{C}$. We now consider the disjoint union of copies $Q_a \times \{a\}$ of these squares, one for each $a \in A$. Whenever $a \mathrel{\square} b$, the squares $Q_a$ and $Q_b$ admit a common edge in $\mathbb{C}$. Gluing the corresponding copies $Q_a \times \{a\}$ and $Q_b \times \{b\}$ together according to this edge determines a new topological space $\bar{\mathcal{R}}$.
The space $\bar{\mathcal{R}}$ is a Riemann surface with border, whose projection on $\mathbb{C}$ is naturally determined by the projections of the $Q_a$ on $\mathbb{C}$. Indeed, DR2 (resp. DR3) implies that points on the edges (resp. vertices) of the $Q_a$ are either in the interior of $\bar{\mathcal{R}}$ or on its border. The interior $\mathcal{R}$ of $\bar{\mathcal{R}}$, endowed with its natural projection on $\mathbb{C}$, is a Riemann surface in the classical sense; we call it the digital Riemann surface associated to the encoding $(\lambda, A, \pi, \square)$.
It will be convenient to write
and denote the
interior of
by
.
More generally, if
, then we
write
and
for its
interior.
Example. If $\pi$ is injective, then the induced projection $\pi : \mathcal{R} \to \mathbb{C}$ is a homeomorphism onto its image. In that case, we will identify $\mathcal{R}$ with the subset $\pi(\mathcal{R})$ of $\mathbb{C}$, and call $\mathcal{R}$ a digital subset of $\mathbb{C}$. Conversely, any $\lambda \in \mathbb{D}^{>}$
together with a finite subset
determines a
natural digital subset
,
encoded by
with
.
Example. A digital Riemann surface for which $\pi$ is not injective is shown in figure 3.2. Formally speaking, this surface is encoded by
Consider an encoding $(\lambda, A, \pi, \square)$ of a digital Riemann surface $\mathcal{R}$ at scale $\lambda$. This encoding induces a natural “doubled encoding” $(\lambda / 2, A', \pi', \square')$, by associating four nodes $(a, \delta)$ with $\delta \in \{0, 1\}^{2}$ to each $a \in A$, where $\pi'((a, \delta)) = 2 \pi(a) + \delta_1 + \delta_2 \mathrm{i}$. Given nodes $(a, \delta)$ and $(b, \varepsilon)$, we set $(a, \delta) \mathrel{\square'} (b, \varepsilon)$ if and only if $\pi'((a, \delta)) - \pi'((b, \varepsilon)) \in \{ \pm 1, \pm \mathrm{i} \}$ and either $a = b$, or $a \mathrel{\square} b$. The doubled encoding encodes the same digital Riemann surface $\mathcal{R}$, but at the smaller scale $\lambda / 2$. By induction, it is possible to obtain encodings at any scale $\lambda / 2^{n}$ with $n \in \mathbb{N}$.
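The doubling operation is easy to implement on the combinatorial encoding. The sketch below is ours (adjacency is stored as a set of ordered pairs, kept in both directions); it splits every node into four quarter squares at half the scale:

```python
from fractions import Fraction

def double(scale, nodes, pi, adj):
    """Replace each node by four sub-squares at scale/2.  Two sub-squares
    are adjacent when their grid squares touch and their parents are
    equal or adjacent (a sketch of the 'doubled encoding')."""
    nodes2 = {(a, dx, dy) for a in nodes for dx in (0, 1) for dy in (0, 1)}
    pi2 = {(a, dx, dy): (2 * pi[a][0] + dx, 2 * pi[a][1] + dy)
           for (a, dx, dy) in nodes2}
    adj2 = set()
    for u in nodes2:
        for v in nodes2:
            if u == v:
                continue
            related = u[0] == v[0] or (u[0], v[0]) in adj
            (xu, yu), (xv, yv) = pi2[u], pi2[v]
            if related and abs(xu - xv) + abs(yu - yv) == 1:
                adj2.add((u, v))
    return scale / 2, nodes2, pi2, adj2

# One unit square doubles into four quarter squares sharing four edges.
s2, n2, p2, a2 = double(Fraction(1), {'a'}, {'a': (0, 0)}, set())
```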
Given a digital Riemann surface ,
the above argument shows that there exists a maximal scale
, such that
admits an
encoding at scale
for every
. Inversely, the encoding
of
at a given scale
is
essentially unique (up to bijections
).
Indeed, given
, the center
of each
(
) corresponds to a unique point in
. Furthermore, given
with
, we have
if and only if the segment
lifts
to a segment
on
.
If the scale
is clear from the context, then it
will be convenient to denote “the” encoding of
by
. If
is the result of some computation, then we will
denote by
the scale of the corresponding
representation.
Remark
instead of
. Each element of
is a pair
with
,
and
. A scaled node corresponds to
nodes
in
with
and
for all
. For simplicity, we will directly
work with nodes in
in what follows.
Nevertheless, with some additional effort, our algorithms can be adapted
to work with scaled nodes.
Let be a digital Riemann surface, with a fixed
encoding
. We write
for the set of digital points on .
Such a point
can be encoded by a pair
with
and
such that
. This encoding is
unique, except when
lies on the border of two
squares. Notice that
is an abstract computable
set. Similarly, we write
for the set of computable points on . Such a point
can be
encoded by a pair
with
and
, such that the distance
between
and
is bounded
by
. Hence, we have
, or
with
, or
with
. In particular,
admits a computable open neighbourhood
, such that
is a
homeomorphism onto a rectangle
with corners in
. Notice that there exists no
algorithm for testing whether
for given
and
.
During actual computations with digital Riemann surfaces, the conditions DR2 and DR3 are not always met a priori. In that case, we need an automatic procedure to identify nodes when necessary. This will be the main objective of this section.
Consider a tuple as in the previous section
which only satisfies DR1. Then the construction of the
previous section still yields a topological space
with a projection
, even
though
may now contain points which are not
locally homeomorphic to open subsets of
.
We will call
a digital Riemann pasting
with encoding
. For instance,
taking
,
,
and
, we obtain the digital Riemann pasting shown
at the upper left hand side of figure 3.1.
A quotient structure on is a pair
, where
is
an equivalence relation on
and
an adjacency relation on
,
such that
.
.
In that case, when setting for all
, the tuple
again
encodes a digital Riemann pasting.
The intersection of two quotient structures
and
is again a quotient
structure. Moreover, if both
and
encode digital Riemann surfaces, then so does
. Consequently,
generates a smallest quotient structure
for
which
encodes a digital Riemann surface. We call
the normalization of
and the corresponding digital Riemann surface
the normalization of
.
It can be checked that normalization commutes with the doubling operator, so that this definition indeed does not depend on the chosen scale. Two examples of normalizations are shown in figure 3.1.
Example be a digital Riemann pasting and let
be an equivalence relation on
with
. Given an encoding
, we may define an equivalence relation
on
by
. Then
encodes a digital
Riemann surface, which we will denote by
.
In order to compute the normalization of a digital Riemann pasting , it is convenient to maintain
The map which associates to each
its preimage
.
The map with
.
Given a subset of nodes such that
consists of a singleton
,
we may then glue all nodes in
together using
for all
and
for some
.
for all
.
.
In order to normalize , we
now keep on doing the following operations until both
DR2 and DR3 are satisfied:
If DR2 is not satisfied, then there exist and
such that
contains more than one element. In that case, we glue all elements
in
together using the above procedure.
If DR2 is satisfied, but not DR3,
then there exists an and a permutation
of
with
, but
.
In that case, we add
to
and
to
.
The normalization procedure finishes, because the size of strictly decreases for the first operation and the number
of
strictly increases for the second operation.
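A simplified version of the gluing loop can be written with a union–find structure. The sketch below is ours and only performs the DR2 repair (gluing distinct neighbours of a node that lie above the same grid square); the DR3 repair, which merely adds adjacencies, is omitted:

```python
def normalize(nodes, pi, adj):
    """Repeatedly glue together the distinct neighbours of a node that
    project onto the same grid square, until DR2 holds."""
    parent = {a: a for a in nodes}
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]   # path halving
            a = parent[a]
        return a
    edges = {(a, b) for (a, b) in adj} | {(b, a) for (a, b) in adj}
    changed = True
    while changed:
        changed = False
        for a in list(nodes):
            ra = find(a)
            seen = {}
            for (u, v) in list(edges):
                if find(u) != ra:
                    continue
                rv = find(v)
                if rv == ra:
                    continue
                q = pi[v]
                if q in seen and seen[q] != rv:
                    parent[rv] = seen[q]    # glue two neighbours
                    changed = True
                else:
                    seen[q] = rv
    classes = {find(a) for a in nodes}
    glued = {(find(u), find(v)) for (u, v) in edges if find(u) != find(v)}
    return classes, glued

# Node 'a' has two neighbours 'b', 'c' above the same square: glue them.
classes, glued = normalize({'a', 'b', 'c'},
                           {'a': (0, 0), 'b': (1, 0), 'c': (1, 0)},
                           {('a', 'b'), ('a', 'c')})
```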
A digital covering is a covering in the
classical sense between two digital Riemann surfaces. Let
be a digital covering and let
and
be encodings of
and
at the same scale
.
Then
induces a mapping
, which sends
to the unique
with
,
where
stands for the center of
. This mapping satisfies
Inversely, given a mapping which satisfies (3.1) and (3.2), we obtain a covering in the
classical sense by sending
to the unique point
with
.
In other words, the digital covering
may be
encoded by the triple
. We
will denote by
the set of all digital
coverings.
Example be a digital Riemann surface encoded by
. Consider the equivalence relation
on
and the projection
. Then the tuple
with
,
and
encodes the digital complex subset
of
and
is
a digital covering.
Example of a digital Riemann
pasting comes with a natural digital covering
. In particular, given an equivalence relation
on
with
, we obtain a natural covering
. This generalizes the previous example, by
taking
. Moreover, any
digital covering
induces a digital covering
which commutes with
.
Given digital Riemann surfaces $\mathcal{R}_0, \mathcal{R}_1, \ldots$ and coverings $\varphi_i : \mathcal{R}_i \to \mathcal{R}_{i+1}$, we call
$$\mathcal{R}_0 \xrightarrow{\varphi_0} \mathcal{R}_1 \xrightarrow{\varphi_1} \mathcal{R}_2 \xrightarrow{\varphi_2} \cdots \tag{3.3}$$
a digital covering sequence. Such a sequence admits a natural limit
$$\mathcal{R} = \varinjlim \mathcal{R}_i = \Big( \coprod_{i} \mathcal{R}_i \Big) / \sim \tag{3.4}$$
where $\sim$ is the smallest equivalence relation with $\zeta \sim \varphi_i(\zeta)$ for each $i$ and $\zeta \in \mathcal{R}_i$, and $\mathcal{R}$ has the natural structure of a Riemann surface. We will write $\varphi_{i \to j} = \varphi_{j-1} \circ \cdots \circ \varphi_i$ for the composed covering $\mathcal{R}_i \to \mathcal{R}_j$ ($i \leqslant j$) and $\varphi_{i \to} : \mathcal{R}_i \to \mathcal{R}$ for the covering which sends $\zeta \in \mathcal{R}_i$ to its equivalence class in $\mathcal{R}$.
We say that the covering sequence (3.3) is computable, if the mapping $i \longmapsto \varphi_i$ is computable. In that case, we call $\mathcal{R}$ a
computable Riemann surface. Notice that coverings are not
necessarily injective. This corresponds to the idea that better
knowledge of the Riemann surface of a function may lead to the detection
of identical leaves.
Example be an open rectangle with corners in
or an open disk with center in
and radius in
. For each
, let
By example 3.1, and
determine a digital subset
of
. The limit of the sequence
of embeddings
is a computable digital Riemann
surface, which is homeomorphic to
.
More generally, the limit of a computable sequence
of embeddings of digital subsets of
is called a
computable open subset of
.
Example encoded by:
Given , the restriction
of
to those
with
determines a digital Riemann surface
in the usual sense. The natural inclusions determine
a digital covering sequence
whose limit
corresponds to
. Notice that
is isomorphic to the universal covering space of
; see also section 4.7.
Let be a fixed computable Riemann surface. We
denote by
the sets of digital and computable points on . A digital point
(and
similarly for a computable point
)
may be encoded by a partial sequence
such that
and
for all
. We notice that
is an
abstract enumerable set. We have natural computable mappings
As in the case of digital Riemann surfaces, each
admits a computable open neighbourhood
,
such that
is a homeomorphism onto a rectangle
with corners in
.
Instead of using the digital representation of
computable Riemann surfaces (i.e. as limits of digital
covering sequences), we may also try to mimic more classical
representations of Riemann surfaces. For instance, a computable
atlas representation of a Riemann surface
with projection
is a tuple
, where
is an abstract enumerable set.
is a computable map which sends
to a computable open subset
of
.
is a computable partial map such that
is an immersion for every
, with
for all
. Here
is a computable function such that
for all
.
An enumerable relation with
Proof. Let be the limit
of a digital covering sequence
of digital
Riemann surfaces
and define
Given , let
be an encoding of
. We have
already noticed that
for
,
with
or
with
.
We may thus take
.
Conversely, given
, the
composition of
and the restriction of
to
determines an immersion
of
into
. Finally, given pairs
, we may ultimately check whether
: given
,
we check whether
and
.
Proof. Let be an
enumeration of
and
an
enumeration of all pairs
with
.
Let us first assume that each is a digital
subset of
. Consider the
disjoint union
, together
with the smallest equivalence relation
for which
corresponding squares in
and
are equivalent if and only if
.
Setting
, we obtain a natural
computable digital covering sequence
.
We claim that
is isomorphic to the limit
of this sequence.
Indeed, the construction implies natural coverings
which pass to the limit
.
Inversely,
naturally immerses into
, with inverse
.
Gluing these immersions together for all
,
we obtain a covering
with
(since every
is contained in
), proving that
.
In the general case, each is the computable
limit of a sequence
of immersions. We may now
construct another computable atlas representation of
, by taking
,
, etc. We conclude by
applying the above argument to this new computable atlas
representation.
Remark to be of
a prescribed type, like open rectangles with corners in
or open balls with centers in
and radii in
.
Let be a classical Riemann surface
above
and denote
(3.5)
Given a point , let
be the largest radius such that there exists an open disk
for which
admits an open
neighbourhood
so that the restriction
of
to
is a
homeomorphism between
and
. Given
with
, we denote by
or
the unique point in
with
. In particular, the
notations (3.5) naturally generalize to the case of balls
and
in
, for
(resp.
).
A computable intrinsic representation of
is a tuple
such that
is an encoding for
.
is a computable function with
.
is a sequential enumeration of the elements
of
.
is a computable function with
.
is a computable function with
for all
and
.
is an enumerable relation with
Simplifying notations ,
and
, we
thus have the following signature:
admits a computable intrinsic representation.
Proof. Let be the limit
of a digital covering sequence
.
For
, we take the set of
encodings of points
and we already have a
computable mapping
.
Let be the encoding of a point
. The distance
of to
the border
is easily computed for each
. Since
and
, the sequence
encodes
.
Similarly, given
with
, it is easy to compute
in
for each sufficiently large
. Then the sequence
encodes
.
Finally, yields an enumeration
of
. Given
with encodings
and
,
we may ultimately check whether
holds: given
, we test whether
and
.
be a Riemann surface with a
computable intrinsic representation. Then
is a
computable Riemann surface.
Proof. Let be the
enumeration of
and
an
enumeration of all pairs
such that
holds.
For each , we may compute a
square
with corners in
, for some
such that
. Now let
, where
is the smallest
equivalence relation induced by identifying matching squares in
and
for pairs
. We claim that the limit
of the induced digital covering sequence
is
isomorphic to
.
Indeed, we have natural coverings for each
, which pass to the limit
. Inversely, for each
, the set
can be
immersed in some
, where
is large enough such that all pairs
with
are among
.
Gluing these immersions together, we thus obtain an immersion
with
, proving
that
.
Let be a computable Riemann surface. In certain
cases, we may design an algorithm
to compute the
distance of a point
to the border. In that case,
we say that
is a delimited computable
Riemann surface.
Remark .
Therefore, we will stick to the original definition.
Assume that and consider two points
. Even under the assumption that
, we notice that there exists no test in order
to decide whether
. Indeed,
given encodings
and
of
resp.
,
we do not know whether there exists an index
with
. Nevertheless, we
naturally do have such a test in the case when the coverings
are embeddings. In this case, we say that
has computable branches. Conversely, assume that we have a
conditional equality test
where returns the result of the test
, provided that we are given the
answer
to the test
.
Equivalently, one may assume a predicate
such that holds if and only if
, provided that
.
Then we may associate a new digital Riemann surface
to each
, by identifying all
squares
with
whose
centers are equal (using the normalization algorithm from section 3.2). This leads to a new representation
of
, for which the induced
coverings
are embeddings. When using the atlas
representation,
has computable branches if and
only if we have a computable test for deciding whether
.
Consider two computable Riemann surfaces and
. A covering
is said to be computable if its restriction to
is a computable mapping
. A
digital representation of such a covering is a triple
, such that
represents
,
represents
and
is a
computable sequence of digital coverings
,
such that
(4.1)
commutes and for any
. If each
is an immersion,
then we call
a computable immersion (of
representations). If
is also surjective, then we
call
a computable subdivision (of
representations), and
is said to be a
subdivision of
.
be the representation of a
computable Riemann surface. Then we may compute a computable
subdivision
of
,
such that there exist
with
for all
and
.
Proof. Without loss of generality, we may assume
that the are encoded at scales
. Given a digital Riemann surface
encoded by
,
let
stand for its restriction to the subset of
inner nodes
which admit four distinct
neighbours
. Taking
,
and
, the inclusion mappings
determine a computable immersion of the Riemann surface
represented by
into
. Since
, this immersion is actually a subdivision and
we have
for all
.
be the limit of a computable
covering sequence
and
a digital Riemann surface such that
is
compact. Then we may compute an
and a digital
Riemann surface
with
.
Proof. The set forms an
open covering of
. Since
is compact, it follows that there exists an
with
. Since
and
are both digital
Riemann surfaces, we may actually check whether
, and therefore compute the first
for which this holds.
be a computable
covering. Let
and
be
representations for
resp.
. Modulo subdividing
and reindexing
,
the covering
admits a computable digital
representation of the form
.
Proof. By lemma 4.1, we may assume
without loss of generality that there exist with
for all
and
. In particular,
is a
compact subset of
for all
. By lemma 4.2, we may compute a
digital Riemann surface
with
. We next increase
further until there exists a digital covering
which commutes with
. On the
one hand, the digital coverings
,
whose incarnations at a suitable scale are finite in number, can easily
be computed. Using the predicate
,
we also have an ultimate test for checking whether
. Trying all values of
in parallel, we know that one of these tests will ultimately succeed.
Increasing
still further so as to ensure that
, this completes the
construction of the digital representation of
.
Remark of a computable Riemann surface
is said to be proper if there exist
with
for all
and
. From the proof of proposition 4.3, it follows that it is not necessary to subdivide
, provided that
is proper.
A computable covering sequence is a computable sequence
(4.2)
where each is a computable Riemann surface and
each
a computable covering. Let
be a proper representation of
for each
. By induction over
, and modulo reindexation of
, we may construct a digital representation
for
,
such that we have the following commutative diagram:
In particular, we obtain a new computable Riemann surface
We call the limit of the computable
covering sequence (4.2). This limit satisfies the following
universal property:
Let and
be two digital
Riemann surfaces which are encoded at the same scale
. We define their disjoint union
by
It is not hard to verify that this construction does not depend on and that
is indeed a digital
Riemann surface. We have natural inclusions
and
. The disjoint union
satisfies the following universal property:
Similarly, we define the covering product of
and
by
taking
We have natural digital coverings and
which are not necessarily surjective. The covering product
does satisfy the following universal property:
Let and
be computable
Riemann surfaces represented by
resp.
. The
disjoint union of
and
is the computable Riemann surface represented by the sequence
. The sequences
and
determine computable immersions
and
and the universal properties
for
pass to the limit. Similarly, the
covering product
of
and
is the computable Riemann surface
represented by the sequence
.
Again, we have natural computable coverings
and
which satisfy the universal property for
products.
and
be
computable Riemann surfaces.
If and
are
delimited, then so are
and
.
If and
have
computable branches, then so do
and
.
Proof. All properties are easy. For instance,
given , we have
Let be a computable Riemann surface and
a sequentially enumerable relation with
. In particular, we may compute a computable
sequence
, where each
is a pair
such that
is the
-th pair
in the enumeration of
.
For each , let
be the smallest equivalence relation on
generated by the relations
for
and
. Setting
, we have natural computable coverings
and
. Let
be the limit of
.
The mappings
induce a computable surjective
covering
. For every
we have
.
It is not hard to verify the following universal property of
:
Let us now consider two computable Riemann surfaces
and
. Given
and
with
,
consider the relation
on
which is reduced to the singleton
.
We call
the join of
and
at
. If
and
are not important, or clear from the context, then we also write
for
. We
will denote the natural coverings
and
by
resp.
.
and
are connected. Then
is connected.
Proof. Assume for contradiction that is not connected and let
,
where
is the connected component of
. Then we may define an equivalence relation
on
by
. The quotient set
has
a natural structure of a Riemann surface and there exists a natural
covering
. By the universal
property of
, it follows that
, which is impossible.
The proposition ensures in particular that we may apply the following classical theorem:
and
be path-connected topological spaces, such that
is non-empty and path connected. Denote by
and
the natural inclusions of
in
resp.
. Then the homotopy group of
is given by
where is the normal subgroup of the free
product
of
and
generated by elements
with
.
A broken line path is a finite sequence and we write
Intuitively speaking, corresponds to a path
. We write
for the set of broken line paths and denote by
the subsets of digital and computable paths. The empty
path is denoted by . We say
that
is a truncation of
and write
if
for some
. Given two paths
, we write
. When no confusion is possible, paths of
length
will be identified with numbers. If
, then we will also write
for the path
.
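Concretely, a broken line path can be represented by its finite sequence of complex steps; the following sketch (our own notation) implements the endpoint, truncation and concatenation operations just described:

```python
def endpoint(path, start=0j):
    """Endpoint of the broken line path with steps delta_1, ..., delta_n:
    the starting point plus the sum of the steps."""
    z = start
    for step in path:
        z += step
    return z

def is_truncation(p, q):
    """p is a truncation of q iff p is an initial segment of q."""
    return q[:len(p)] == p

# Concatenation of paths is just tuple concatenation.
p = (1 + 0j, 1j)
q = p + (-1 + 0j,)
```

Paths of length one are identified with their single step, matching the convention in the text.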
A Riemann surface is said to be rooted
if it admits a special element
called
the root of
. If
is also computable and
, then we call
a computable
rooted Riemann surface. Unless explicitly stated otherwise, we will
always assume that rooted Riemann surfaces are connected. A
root-preserving covering between two rooted Riemann surfaces will be
called a rooted covering. We denote by
the class of computable rooted Riemann surfaces. Given
, we have an additional method
in the signature of
.
Let be a computable rooted Riemann surface. We
define the path domain
of
to be the set of
,
so that
are all well-defined. We will also write .
The digital and computable path domains of
are defined by
We notice that is an abstract computable set
with a computable equality test, whereas
is only
an effective set. A broken line path
naturally
induces a continuous path
by setting
for and
.
This path is rooted in the sense that
.
and
be computable rooted Riemann surfaces. Then there exists at most one
rooted covering
. Such a
covering is necessarily computable and computable as a function of
and
.
Proof. Assume that there exists a covering . By continuity, it suffices to
show how to compute
for all
. Since
is connected,
there exists a path
with
. Given
,
we claim that we may compute such a path
.
Indeed, the set
is enumerable and, given
, we may ultimately test whether
. We perform these ultimate
tests in parallel, for all
,
until one of them succeeds. Since
is connected,
we have
, so our claim
implies
.
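The "parallel ultimate tests" in this proof are an instance of dovetailing: interleave countably many semi-decidable tests so that a test which eventually succeeds is found after finitely many steps, even though no single test is guaranteed to terminate. A minimal sketch, with an interface assumed by us (each test maps a precision to True or "undecided"):

```python
from itertools import count

def dovetail(tests):
    """Dovetail countably many 'ultimate tests'.  tests(i) is a
    function n -> True or None, where None means 'undecided at
    precision n' and a test that ultimately succeeds answers True for
    all sufficiently large n.  At stage s we run test i at precision
    s - i for i = 0, ..., s, so every (test, precision) pair is
    eventually tried."""
    for stage in count():
        for i in range(stage + 1):
            if tests(i)(stage - i) is True:
                return i
```
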
be a computable
rooted Riemann surface and assume that
is
given the natural topology of
.
Then
,
and
are open subsets of
,
resp.
.
is a dense subset of
and
is a dense subset of both
and
.
Proof. Let us prove the proposition by induction
over for each of the subspaces
,
,
etc. The assertions are clear for
.
Assume that
is open, with
as a dense subset. We have
where . Now the restriction
is computable, so
is
lower continuous, by theorem 2.3. Assume that
and let
. Then
admits an open neighbourhood
with
for all
.
Consequently,
is an open neighbourhood of
. This proves that
,
and
are open subsets of
,
resp.
.
In order to show that
is a dense subset of
, it suffices to prove that any
open ball
intersects
. Now
is an open ball of
, which intersects
, say in
. Furthermore,
is a disk
with radius
. Taking
with
, we
thus have
. The other density
statements are proved similarly.
be the limit of a
computable covering sequence
.
If are all connected, then so is
.
If are all simply connected, then so is
.
Proof. Assume that where
and
are non-empty open
sets. Then
both intersects
and
for sufficiently large
. Consequently,
is not
connected. This proves (a). As to (b), assume that
are simply connected and consider a loop
with
.
Then
is compact, so
for
a sufficiently large
. In a
similar way as in lemma 4.2, we may find a
such that the restriction of
to
is a homeomorphism. But then
is a loop in
which may be contracted to a point. Composing with
, we obtain a contraction of
into a point.
, we may compute
the connected component
of the root.
Proof. Let .
Modulo taking a subsequence, we may assume without loss of generality
that
contains a point
with
. It is easy to compute
the connected component
of
in
for each
.
By proposition 4.15(a), the limit of the sequence
yields
.
Assume now that we are given an enumerable set of paths
and a computable mapping
such that, given
and
, we
have
if and only if
. Reordering terms when necessary, we may assume
that
is presented as an enumeration
such that
for all
. Assume that we are also given a number
; we call
an organic triple.
Let us define a computable rooted covering sequence , such that
for all
and
with
. We proceed by induction over
. Denote by
the
computable ball with center
and radius
. We start with
and
. Assume that
has been constructed. Then the path
necessarily occurs before
in our enumeration,
whence
, so that
and
are well-defined. Now we take
with root and
.
By construction,
for all
and
with
.
Indeed, if
, then
. If
,
then
. This completes the
construction of our covering sequence. Its limit
is called the organic Riemann surface associated to
. Organic Riemann surfaces
are always simply connected, by corollary 4.12 and
proposition 4.15. They satisfy two universal properties:
with
and
,
there exists a unique rooted covering
.
Moreover, if
is computable, then
is computable and computable as a function of
.
Proof. Let us show by induction over that there exists a unique rooted covering
, where
This is clear for . Assume
that the assertion holds for a given
.
There exists a covering
By the universal property of joins, it follows that there exists a
rooted covering with
and
. We obtain
by proposition 4.5 and we conclude by proposition 4.13.
and
be organic triples with
.
Then there exists a unique rooted covering
, which is computable and computable as a function
of
and
.
Proof. Notice that for
all
. Denote the counterparts
of
,
, etc. in the construction of
by
,
, etc. For each
, there exists a computable
such that
. By a similar
induction as in the proof of proposition 4.17, one shows
that there exists a rooted covering
for every
. Passing to the limit, we
obtain
.
Remark such that
for any
and
with
, then we may still
define
, where
is an enumerable set, which fulfills the stronger requirement that if and only if
.
Let be a computable rooted Riemann
surface. The construction of organic Riemann surfaces may in particular
be applied for
,
and
. In that
case, we denote
and it can be proved that
. In the construction of
, each
is
naturally isomorphic to the ball
.
By induction over
, each
therefore comes with a natural rooted covering
. Taking limits, we obtain a
natural rooted covering
and it is
readily verified that
for all
. The universal computable covering
space
admits the following universal
properties:
with
, there exists a
unique rooted covering
and
satisfies
. If
is computable, then
is
computable and computable as a function of
.
Proof. With as in the
proof of proposition 4.17, the universal property of joins
implies that
for all
. Taking limits for
,
we conclude that
.
, there exists a unique rooted covering
and we have
.
Moreover,
is computable and computable as a
function of
.
Proof. The existence, uniqueness and
computability properties of follow from
proposition 4.18. The rooted coverings
and
are identical by proposition 4.13.
be a root-preserving computable
covering between two rooted computable Riemann surfaces
and
with
. Then any path
with
can be lifted uniquely to a path
with
.
Proof. Let .
Since
is uniformly continuous, we may
approximate
by a broken line path
with
Since , we have
. Consequently,
lifts
to a path
on
.
Since
, we also have
for all
.
Consequently,
, so that
lifts to the path
.
Let be a rooted digital Riemann
surface, encoded by
. Assume
that
is such that
.
In this section, we will then show that the universal covering space
can be constructed in a more explicit way.
A digital path is a tuple with
. We denote by
the set of digital paths
on
, for which
.
Given
, we write
. The set
comes with a
natural projection
and a natural adjacency
relation:
if and only if
or
for some
.
Let be the subset of
of
paths of lengths
. Then
is a Riemann pasting and we denote by
its associated digital Riemann surface. The root
can be lifted to a root
of
for
and the natural
inclusions
induce natural rooted coverings
for
.
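A purely combinatorial analogue of this construction is the universal cover of a graph, whose stage i consists of the reduced (non-backtracking) paths of length at most i from the root; each such path is one lifted copy of its endpoint. The sketch below (our own simplification to the adjacency graph of the squares) illustrates the idea:

```python
def cover_stage(root, neighbours, i):
    """Stage i of the universal cover of a graph: the set of
    non-backtracking paths of length <= i starting at the root.
    neighbours(u) returns the vertices adjacent to u."""
    paths = {(root,)}
    for _ in range(i):
        new = set(paths)
        for p in paths:
            for v in neighbours(p[-1]):
                if len(p) < 2 or v != p[-2]:   # forbid immediate backtracking
                    new.add(p + (v,))
        paths = new
    return paths

# Toy surface: four squares around a hole (a 4-cycle); its universal
# cover is an infinite line, so stage i contains 2*i + 1 lifted points.
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
```
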
of
is isomorphic to the universal covering
space
of
.
Proof. In view of proposition 4.13,
it suffices to prove that there exist rooted coverings
and
. Since
, we have natural rooted coverings
. This yields a rooted covering
when passing to the limit. Conversely, any path
can naturally be approximated by a digital path
, in the sense that
, after possible reparameterization of
. Setting
, we then have
,
which shows the existence of a rooted covering
.
Proof. The theoretical definition of the
normalization of a Riemann pasting also applies in the case when
is infinite and one has
resp.
for
if and only if these relations hold for a
sufficiently large
with
. For each
there exists a digital path
of smallest length with
and we
denote this length by
. Let
for each
,
so that
. For every
, the path
induces an element
of
, which shows that the natural rooted covering
is surjective. Since
is
obtained by gluing a finite number of squares to
, corollary 4.12 implies that
is simply connected, by induction over
. Consequently,
is
isomorphic to
for each
, and
,
as desired.
be a rooted digital
Riemann surface and let
be such that
. Then there exists an algorithm
to decide whether
and
are homotopic.
Proof. Since has
computable branches by proposition 4.25, we have a test for
deciding whether
. Now this
is the case if and only if
and
are homotopic.
Remark :
Let ,
,
,
let
be the restriction of
to
.
Let .
If then return
.
Pick an element of minimal length
and set
.
If , then go to step 3.
Let be obtained by gluing a new square above
to
.
If there exists a with
, then set
and
identify
with
inside
.
Replace by
and go to
step 2.
The above algorithm returns a set of digital paths
whose elements correspond to generators in
.
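On the level of the adjacency graph of the squares, the generators produced by the algorithm above correspond to the non-tree edges of a spanning tree (the fundamental group of a graph is free on such edges). This suggests the following graph-only sketch, with all names ours:

```python
from collections import deque

def homotopy_generators(root, neighbours):
    """Breadth-first search computes a spanning tree of the adjacency
    graph; every edge not in the tree closes exactly one independent
    loop, mirroring the identification step of the algorithm above."""
    parent, tree = {root: None}, set()
    gens, queue = [], deque([root])
    while queue:
        u = queue.popleft()
        for v in neighbours(u):
            if v not in parent:
                parent[v] = u
                tree.add(frozenset((u, v)))
                queue.append(v)
            elif frozenset((u, v)) not in tree and u < v:
                gens.append((u, v))   # one generator per non-tree edge
    return gens

# Toy surface: four squares around a hole (a 4-cycle), which has a
# single homotopy generator.
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
```
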
Let and
be
two computable Riemann surfaces with roots above
. Organic Riemann surfaces are also useful for the
construction of a new Riemann surface
such that
the convolution product of analytic functions
and
on
resp.
will be defined on
.
A digital folding on is a computable
mapping
such that
for
all
and
with
and
. We denote
by
the enumerable set of digital foldings on
. We also write
for the subset of rooted digital foldings
with
. Given
, we define
by
We define to be the set of all foldings
with
,
such that
and
lift to
rooted foldings on
resp.
. We notice that
is enumerable.
Now any induces a path
by
, where
. By construction, we have
and
. Let
be the enumerable set of all paths which are induced by foldings in
. Given
, we let
(4.3)
Given with
,
we claim that
. Indeed, let
be such that
and define
with
by
By construction, we have and
. In view of remark 4.19, we may
now define the convolution product of
and
by
.
be a continuous function with
, such that
and its mirror
lift into
functions
and
on
resp.
with
and
.
Then the path
can be lifted to a path
on
. In
particular, given
and
on
resp.
, the convolution product
can be analytically continued along
:
where , whenever
.
Proof. We first observe that a digital folding
induces a natural continuous mapping
by
Let
Since is uniformly continuous, we may
approximate it by a digital folding
with
Moreover, we may take such that
and
. By our choice of
, the foldings
and
lift to
resp.
, so that
. Moreover, the broken line
path
satisfies
for all
, again by the choice of
. Consequently,
and its associated continuous path
lifts to a path
on
with
the same endpoints as
. Once
more by the choice of
, we
have
for all
and
. Consequently,
lifts to the path
on
.
The convolution product comes with
natural computable rooted coverings
and
, since any
in particular induces a path
with
. The following universal property follows from
proposition 4.18:
In [vdH05a], a computable analytic function was defined locally as a “computable
germ” with a computable method for analytic continuation. In
section 5.1, we recall an improved version of this
definition. In section 5.3, we define the new concepts of
globally and incrementally computable analytic functions. These concepts
allow for computations with analytic functions on computable Riemann
surfaces as studied in the previous sections. A locally computable
analytic function in the sense of section 5.1 will
naturally give rise to a globally computable analytic function on an
organic Riemann surface. However, common operations on globally analytic
functions, as studied in sections 5.4 and 5.5,
may give rise to computable Riemann surfaces which are not necessarily
simply connected. Our new definition therefore has the advantage that
identical branches may be detected effectively in many cases.
Let be a convergent power series at
the origin. We will write
for its
radius of convergence. Given
with
, we also define
Finally, given , we will
denote by
the analytic continuation
of
along the straightline segment
, so that
for small
.
A locally computable analytic function
is an object encoded by a quadruple
where
is a computable power series.
is a computable partial function,
which yields an upper bound
for every
.
is a computable partial function, which
yields the analytic continuation
of
as a function of
with
.
We denote by the set of locally
computable analytic functions. Given
,
we call
its computable radius of
convergence. Usually,
is smaller than the
genuine radius of convergence of
.
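The quadruple encoding of a locally computable analytic function can be sketched as a small class: a coefficient method, a computable radius, a bound method and a continuation method. The interface names below are ours, and the example restricts continuation to rational shifts for simplicity:

```python
from fractions import Fraction

class LCAF:
    """Sketch of a locally computable analytic function."""
    def __init__(self, coeff, radius, bound, continue_along):
        self.coeff = coeff                    # n -> n-th Taylor coefficient
        self.radius = radius                  # computable lower bound for
                                              # the true radius of convergence
        self.bound = bound                    # r -> upper bound for |f| on
                                              # the disk of radius r
        self.continue_along = continue_along  # delta -> germ of f at delta

def one_over_a_minus_z(a=Fraction(1)):
    """The function 1/(a - z): coefficients a**-(n+1), radius |a|,
    bound 1/(|a| - r) for 0 < r < |a|; continuation along delta
    simply replaces a by a - delta."""
    return LCAF(
        coeff=lambda n: Fraction(1) / a**(n + 1),
        radius=abs(a),
        bound=lambda r: Fraction(1) / (abs(a) - r),
        continue_along=lambda delta: one_over_a_minus_z(a - delta),
    )

f = one_over_a_minus_z()                  # germ of 1/(1 - z) at 0
g = f.continue_along(Fraction(1, 2))      # germ of the same function at 1/2
```

Here the closed form makes continuation exact; for a general function, the continuation method would instead return a germ computed from the recursive data.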
Remark is recursive,
because of the method
for analytic continuation.
Such recursive quadruples can in their turn be encoded by terms in a
suitable
-calculus, and
thereby make the definition fit into the setting introduced in section
2.1.
Example centered at a given
. We may implement it as a function
with
The constructor can be implemented in a similar
way.
Example can easily be implemented in a
recursive manner. For instance, the addition of
may be computed by taking
In sections 3 and 4 of [vdH05a], algorithms were given for several other basic operations and for the resolution of differential equations. Modulo minor modifications, these algorithms remain valid. In particular, we have implementations for the following operations:
(5.1)
In the cases of and
, the second argument specifies the value of the
function at
.
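In a lazy power-series model, the recursive implementation of such ring operations reduces to coefficientwise formulas. A minimal sketch, with a series encoded (by our own convention) as a function from an index to the corresponding coefficient:

```python
import math
from fractions import Fraction

def series_add(f, g):
    """(f + g)_n = f_n + g_n."""
    return lambda n: f(n) + g(n)

def series_mul(f, g):
    """Cauchy product: (f g)_n = sum_{k=0}^{n} f_k g_{n-k}."""
    return lambda n: sum(f(k) * g(n - k) for k in range(n + 1))

# The exponential series exp(z) = sum z^n / n! as a coefficient function.
exp_series = lambda n: Fraction(1, math.factorial(n))
```

Each coefficient of a sum or product is computed on demand from finitely many coefficients of the arguments, which is what makes the recursive definitions effective.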
It is instructive to rethink the definition of
in terms of signatures. First of all, we have a class of computable
power series
with a method for the extraction of
coefficients
Then the class of locally computable analytic
functions is determined by the methods
(5.2)
For the last two methods, we understand that and
are defined if and only if
resp.
.
The recursive definition of raises the question
when two elements
should be considered
identical. In what follows, we will use the criterion that
if and only if the signature (5.2) does not
allow for the distinction of
and
. In other words, whenever
and
are such that
and
are both defined and
, we require that
,
and
.
We warn that we may have
for two different
elements
.
Remark and
to be
only left resp. right computable. In [vdH05a],
we also required a few additional global consistency conditions in our
definition of computable analytic functions. The homotopy condition will
no longer be needed because of theorem 5.7 below, even
though its satisfaction may speed up certain algorithms. The continuity
condition also becomes superfluous because of theorem 2.3.
Given , we have
already noticed that the computable radius of convergence
of
does not necessarily coincide
with its theoretical radius of convergence
.
This raises a problem when we want to analytically continue
, because we are not always able to effectively
continue
at all points where
is theoretically defined. By contrast, bad upper bounds for
on compact disks only raise an efficiency problem. Indeed,
we will now show how to improve bad bounds into exact bounds.
Let us first introduce some new concepts. The path domain of
is the set of
such that
for every
. Given
, we denote
and
. The digital path domain
of
, which is defined by
, is enumerable. Given
, we say that
improves
, and we
write
, if
,
and
for all
and
.
Assume now that we are given with
and let us show how to compute an
-approximation for
.
Approximating
sufficiently far, we first compute
an
with
.
Now let
and choose
(5.3)
sufficiently large such that
(5.4)
Using an algorithm which will be specified in section 6.2,
we next compute an -approximation
for
,
where
. Then
is the desired
-approximation
of
. We have proved:
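The bound-improvement step can be illustrated as follows: from a crude bound M for |f| on a disk of radius R, Cauchy's inequality |f_n| <= M / R^n yields an explicit tail estimate on any smaller radius r < R. All names in this sketch are our own:

```python
from fractions import Fraction

def disk_bound(coeff, r, R, M, N):
    """Upper bound for |f| on the closed disk of radius r < R, given a
    crude bound M for |f| on radius R: bound the first N terms exactly
    and the tail by the geometric series
        sum_{n >= N} M (r/R)^n = M (r/R)^N / (1 - r/R)."""
    head = sum(abs(coeff(n)) * r**n for n in range(N))
    tail = M * (r / R)**N / (1 - r / R)
    return head + tail
```

For f(z) = 1/(1 - z), whose supremum on radius 1/2 is 2, the crude bound M = 4 on radius 3/4 already yields a certified bound slightly above 2; increasing N tightens it further.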
Another situation which frequently occurs is that the radius of
convergence can be improved via the process of analytic continuation and
that we want to compute bounds on larger disks, corresponding to the new
radii of convergence. This problem may be reformulated by introducing
the class of weak locally
computable analytic functions. The signatures of
and
are the same except that
comes with a second radius function
with
; given
, we only require computable bounds
for
. We have a natural
inclusion
and the notions of path domain and
improvement naturally extend to
.
Assume now that and that we want to compute a
bound for
on
for a given
. We have an algorithm for
computing a
with
from
. Consider any computable
sequence
with
Since is compact and the function
is continuous (by theorem 2.3), it follows
that there exists an
with
for all
. In particular, the
balls
form an open covering of
, whence the sequence
is necessarily finite. Let
be its last term.
Then
is a computable upper bound for .
We have proved:
The bound (5.4) may also be used in order to provide a
default analytic continuation method of a computable power series inside a given computable radius of convergence
, assuming that we have an
algorithm for the computation of upper bounds
,
.
Indeed, let
,
and
be such that
and assume that we want to compute an
-approximation
of
. Now choose
with
and let
. Then the majoration [vdH05a, vdH03]
yields the majoration
so that
Taking in a similar way as in (5.3),
we thus have
Let be an
-approximation
of
. Then
is also an
-approximation of
.
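A default evaluation routine along these lines truncates the series at an order where the Cauchy tail bound drops below the requested precision. The interface below (a coefficient function together with a crude bound M on radius R) is our own hypothetical rendering of the scheme:

```python
from fractions import Fraction

def eval_with_bound(coeff, delta, R, M, eps):
    """Approximate f(delta) for |delta| < R.  Cauchy's inequality gives
    |f_n| <= M / R**n, so truncating the series at order N leaves a
    tail of at most M q**N / (1 - q), with q = |delta| / R.  Half of
    the precision budget eps is spent on this truncation tail (the
    other half would cover coefficient approximation errors)."""
    q = abs(delta) / R
    N, tail = 1, M * q / (1 - q)
    while tail > eps / 2:
        N, tail = N + 1, tail * q
    approx = sum(coeff(n) * delta**n for n in range(N))
    return approx, tail
```

For the geometric series at delta = 1/2 with R = 3/4 and M = 4, the routine returns an approximation of f(1/2) = 2 with a certified error bound below eps/2.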
Let us now consider an analytic function
on a computable rooted Riemann surface
.
We say that
is a globally computable
analytic function on
,
if there exists a computable function
,
which maps
to a locally computable analytic
function
, such that
for all ,
and
. We denote by
the set of such functions. We also denote by
the set of all computable analytic functions
on some connected computable (and
computable as a function of
)
rooted Riemann surface
. In
other words, the signature of
is given by
Here the projection is required to
satisfy
. The signature for
can also be given in a more intrinsic way:
Notice that the method has been replaced by an
exact method
with codomain
, in view of proposition 5.5. For
the intrinsic representation, the compatibility conditions become
and
,
where
denotes the Riemann surface
with its root moved by
.
Using analytic continuation, let us now show how to improve a locally
computable analytic function into a globally
computable analytic function
.
on input and produces an improvement
of
on output. Given any other improvement
of
, we have
.
Proof. For the underlying Riemann surface of
, we take
, with
for all
. By the construction of
, we may compute a sequence
of open balls
with
,
and
, such that
Given , we may therefore
compute an
with
(which
may depend on the encoding
)
and take
Given , we may also compute
Given with
,
we may finally compute
By propositions 5.6 and 5.5, we may therefore
compute for every
with
. Since
, proposition 4.14 and its
adaptation to
imply
, whence
.
The universal property of
(proposition 4.17)
implies that
for any other improvement
of
.
In practice, it is not very convenient to compute with global computable
analytic functions , because
we have no control over the regions where we wish to investigate
first. An alternative formalization relies on the
incremental extension of the Riemann surface on which
is known. Consider the class
with the
following signature:
Given and
,
where
, the method
returns an extension
of
on a Riemann surface
with
(in particular, there exists a computable rooted
covering
). For simplicity,
it will be convenient to assume that
.
For consistency, we also assume that successive calls of
for paths
and
with
yield extended surfaces
and
for which there exists a rooted covering
. This ensures the following:
of
. Then the limit
of the computable rooted covering sequence
does not depend on the particular ordering of the enumeration (up to
isomorphism).
on input and produces an improvement
of
on output. Given any other improvement
of
, we have
.
Any locally computable analytic function
naturally determines an incrementally computable analytic function
: starting with
, each successive call of
joins a ball with radius
to
at the end of
, just like in
the construction of organic Riemann surfaces. However, as we will see in
the next section, the method
may also be used to
identify identical branches in the Riemann surface of a function.
In this section, we improve the implementations of the
operations (5.1) so as to identify branches of the
underlying computable Riemann surface of an analytic function , whenever we know that
takes the same values on both branches. We will also
consider several other operations on computable analytic functions.
where stands for the rooted covering product,
i.e.
is the connected component of
the root of
. This root is
computed by applying the universal property of computable covering
products to the immersions of a small ball
into
neighbourhoods of the roots of
and
. As to the method
, we may simply take
The consistency condition for successive applications of is naturally met, because of the universal property of
covering products. The cases of subtraction, multiplication,
exponentiation and precomposition with any other computable entire
functions in several variables can be dealt with in a similar way.
It remains to be shown that is a computable
rooted Riemann surface. It suffices to show this in the case when
is a digital Riemann surface. Indeed,
Now for every point above
, we will show in section 6.1 how to
compute a maximal disk
on which
does not vanish. For the
-th
approximation
of
,
it suffices to take the union of all
with
(starting with an
for which
contains the root of
).
Let be the limit of a covering sequence
of digital Riemann surfaces. Given
, we have sketched in remark 4.27
how to compute generators
for the homotopy group
of
.
The relations
induce a computable equivalence
relation
on
.
Setting
, the covering
gives rise to a natural covering
. We take
However, in this particular case, the integrals of
over the above generators
are always multiples
of
, so they can be computed
exactly. More precisely, let
be the limit of
. We now replace
by
, where
is the equivalence relation defined by
Given , we may check whether
, by computing
-approximations
and
of
resp.
and testing whether
. The covering
induces a
natural covering
and we take
Let be the digital Riemann surface with
and with an adjacency relation defined as
follows. Solving the equation
at the center
of
yields
solutions
which we attach arbitrarily to the
with
.
We set
if
and if the
analytic continuation of
from
to
coincides with
.
This can be tested effectively, since there are no multiple roots,
so all branches are bounded away from each other when computing with sufficient precision. By a similar argument, the root
of
may be lifted to
, if
has a prescribed value
, and the rooted covering
may be lifted to a rooted covering
. We now take
Denoting , we also take
(5.8)
where is a vector of indeterminates,
a polynomial in
and
. Any algebraic differential equation can be
rewritten in this form. In section 6.3 below, we will
discuss techniques for computing the power series solution to (5.8)
at the origin, as well as bounds
and
. Given
with
for all
,
we have
By what has been said at the end of section 5.2, we may
compute . Consequently, the
equations (5.8) and (5.9) have the same form,
and the analytic continuation process may be used recursively, so as to
yield a solution
. Since we
do not have any a priori knowledge about identical branches in
the Riemann surfaces of the
,
we simply solve (5.8) in
by taking
. Notice that the
decomposition of the integral in (5.9) may be used for more
general implicit equations involving integration, even if they are not
given in normal form (5.8).
and set
In section 6.2, we will show how to compute , so that
and
.
Assume now that are such that
, so that
is
well-defined by what precedes. Let
be the
canonical Riemann surface of
,
as in theorem 5.7, and let
be the
subspace induced by paths
for which
. A digital folding
on
is said to be a digital homotopy
between the paths associated to
and
if
and
is
constant. The set of pairs
such that
determine digitally homotopic paths on
is enumerable. We take
,
where
stands for digital homotopy on
. We also take
Remark
can be done more efficiently, by directly working with digital
approximations and using corollary 4.26.
In order to determine , it
suffices to compute
until several successive
values
are small with respect to
and approximate
.
A similar approximation may be used for the analytic continuation to a
point
with
.
Finally, one may determine
by heuristically
identifying branches in
where the germs of
above the same point coincide up to a given order and
at a given accuracy.
Remark , assume that we want
to obtain a lower bound for
with “expected
relative error”
. Then
we may keep producing better and better lower bounds
and heuristic bounds
(at expansion order
), until
.
Let . The
convolution product of
and
is locally defined by
(5.10)
If we want to evaluate up to many digits at a
small
, then we may simply
compute the Taylor series expansions of
and
at
and evaluate the primitive
of
. Assuming that the series
expansions of
and
are
given, this algorithm requires a time
for the
computation of a
-approximation
of
. More generally, if
is a broken-line path with
, then
Modulo the replacement of each by
for a sufficiently large
,
we may thus compute a
-approximation
of
using the above method, in time
.
In order to obtain a complete power series expansion of
at
or
,
it is convenient to consider the Borel and Laplace transforms
Then we have
These formula allow for the efficient (and possibly relaxed) computation
of the coefficients , since
the Borel and Laplace transforms can be computed in essentially linear
time.
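Since the paper's formulas are not reproduced here, the following minimal sketch only illustrates the coefficientwise action of the formal transforms, under the usual convention (an assumption on our part, not taken from the text) that the formal Borel transform divides the k-th coefficient by k! and the formal Laplace transform multiplies it by k!; each clearly costs a linear number of coefficient operations.

```python
from math import factorial

def borel(coeffs):
    # formal Borel transform: a_k -> a_k / k!
    return [a / factorial(k) for k, a in enumerate(coeffs)]

def laplace(coeffs):
    # formal Laplace transform (inverse of the above): a_k -> a_k * k!
    return [a * factorial(k) for k, a in enumerate(coeffs)]
```
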
More precisely, let ,
and
.
Assume that
and
for all
and that we are given
-approximations
of
and
-approximations
of
.
Then the naive evaluation of the formula (5.12) using
interval arithmetic yields
-approximations
of
.
In order to use (5.11) and fast multiplication methods
based on the FFT, one may subdivide the multiplication of
and
into squares like in relaxed
multiplication method from [vdH02b, Figure 3]. For each
square, one may then apply the scaling technique from [vdH02b,
Section 6.2.2], so as to allow for FFT-multiplication without precision
loss. This yields an
algorithm for the
computation of
-approximations
for
. Notice that this
algorithm is relaxed.
If we want the power series expansion of at a
path
with
,
then consider the formula
(5.13)
Assuming that the and
are sufficiently small, we also have
for all . Now if
is sufficiently small, we may compute the series expansion
of
at
as a function of
the series expansion of the same function at the origin, using a variant
of [vdH02b, Section 3.4.1]. This yields
-digit expansions for
coefficients of
at
in
time
.
Let us now define and
by
induction over
, in such a
way that
implies
and
. Assuming that
, we take
(5.15)
Clearly, for with
,
we have
and
.
Given
, we take
This completes the induction and the construction of . If
,
then we have
, since (5.15) reduces to (4.3). If
, then we suspect that
,
but we have not tried to check this in detail.
In practice, if we want to analytically continue
along a path
which is known to belong to
, it can be quite expensive to
“randomly” compute a part of
which
contains
. During the
analytic continuation of
along
, it is therefore recommended to progressively
compute equivalent paths for
which avoid
singularities as well as possible. These paths may then be used for the
computation of better bounds and in order to accelerate the computation
of a part of
which contains
.
More precisely, let and assume that
is fixed. Let
By construction, we have and
is the limit of a sequence
with
. Let
be fixed and
consider the set
of all paths
with
. Given
, let
Here denotes the distance between
and the border of
and we recall
that
stands for the continuous path on associated to
. Using Dijkstra's shortest
path algorithm, we may enumerate
such that
. As soon as we find an
with
then we stop (this condition can be checked ultimately by computing a
sufficiently precise digital approximation of
using the techniques from section 4.7). If
is small enough, this yields a path
which is
homotopic to
on
and for
which
is small. The idea is now to replace
by
in the right-hand side of
(5.15) resp. (5.16), if this
yields a better bound.
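The shortest path idea can be sketched as follows (a toy illustration with our own names and cost function, not the paper's: Dijkstra's algorithm on a digital grid, where edge costs blow up near known singularities, so the returned broken-line path steers clear of them):

```python
import heapq, math

def safest_path(grid_w, grid_h, step, start, goal, singularities):
    # Dijkstra on a digital grid; the cost of a node grows as it approaches
    # a singularity, so the resulting broken-line path avoids singularities.
    def cost(node):
        x, y = node[0] * step, node[1] * step
        d = min(math.hypot(x - sx, y - sy) for sx, sy in singularities)
        return step * (1.0 + 1.0 / max(d, 1e-9))
    dist, prev, seen = {start: 0.0}, {}, set()
    heap = [(0.0, start)]
    while heap:
        d, u = heapq.heappop(heap)
        if u in seen:
            continue
        seen.add(u)
        if u == goal:
            break
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                if dx == dy == 0:
                    continue
                v = (u[0] + dx, u[1] + dy)
                if not (0 <= v[0] < grid_w and 0 <= v[1] < grid_h):
                    continue
                nd = d + math.hypot(dx, dy) * (cost(u) + cost(v)) / 2
                if nd < dist.get(v, float('inf')):
                    dist[v] = nd
                    prev[v] = u
                    heapq.heappush(heap, (nd, v))
    path = [goal]
    while path[-1] != start:
        path.append(prev[path[-1]])
    return path[::-1]
```
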
The above approach raises several subtle problems. First of all, the
computed path depends on the number .
When computing a
-th
approximation for
, one
possibility is to take
. A
second problem is that the choice of
depends on
and
,
so we no longer have
.
Nevertheless, it should be possible to adapt the theory to the weaker
condition that
whenever
, where we notice that our change can only lead to
improved bounds. Finally, if
becomes small, then
the shortest path algorithm may become inefficient. One approach to this
problem would be to use the shortest path at a larger scale for an
accelerated computation of the shortest path at a smaller scale. As a
first approximation, one may also try to continuously deform
as a function of
.
We wish to come back to these issues in a forthcoming paper.
For actual implementations of computable analytic functions, it is very important that bound computations (i.e., lower bounds for convergence radii and upper bounds for the norm on compact disks) can be carried out both accurately and efficiently.
A first problem is to find a good balance between efficiency and accuracy: when bounds are needed during intermediate computations, rough bounds are often sufficient and faster to obtain. However, bad bounds may lead to pessimistic estimates and force the computation of more terms in power series expansions in order to achieve a given precision for the end result. Therefore, it is important that cheap bounds are also reasonably accurate.
Another point is that it is usually a good idea to use different algorithms for rough and high precision bound computations. Indeed, only once sufficient knowledge about the function has been gathered using rough bound computations is it usually possible to fulfill the conditions for applying a high precision method, such as Newton's method. Furthermore, such asymptotically fast methods may only pay off when large precisions are required, so the trade-off between the different methods needs to be studied.
In this section, we will present several techniques for efficient and/or
accurate bound computations. Some of the algorithms have been
implemented in
Let with
and
. The problem of computing a
lower bound for the radius of convergence of
reduces to the computation of a
such that
has no zeros on
.
We may start with the simpler problem of computing a lower bound for
where with
has been
fixed. A natural approach is to approximate the problem by a root
finding problem of complex polynomials.
More precisely, we may approximate real and complex numbers by elements
of the sets and
of real
intervals with endpoints in
resp.
complex balls with centers in
and radii in
[vdH06b]. Let
for some
with
.
We start by picking
, computing complex ball approximations
for
, as well as a bound for the
remainder
The bound may be integrated into the constant
coefficient
by setting
. Now we compute a lower bound for the norm of the
smallest root of the polynomial
using some classical numerical method and interval/ball arithmetic. The
result will then be presented as an interval and
yields the desired lower bound for
.
We have implemented two experimental versions of the above method, based on the numerical method from [Car96] and on a variant of [Pan96, Appendix A], respectively. The first method is based on repeated
squaring in the ring .
However, it is cumbersome to adapt to the case when there exist almost
multiple roots. Also, we observed a lot of precision loss in our context
of certified computations with complex balls. This might be due to the
divisions. The second method is based on Graeffe transforms and rapidly
provided us with rough lower bounds for
of an
acceptable quality. Let us quickly explain this method.
First of all, we recall that Graeffe's transform sends a polynomial of degree
with roots
to another polynomial
with
roots
. Such a polynomial can
be computed efficiently using FFT-squaring:
Given a monic polynomial with
, we also observe that the norm of the largest
root of
lies in the interval
. Indeed, if
,
then
, whence
. Similarly, if
is such
that
for all
,
then
for all
.
Now let be a polynomial of degree
and assume that we want an upper bound for the largest
root of
with a relative accuracy
. If we rather want a lower bound, then we
replace
by
.
We start by making
monic by setting
. We next let
be
smallest such that
. Starting
with
and
,
we now repeat the following:
Compute .
Scale .
Replace .
If , then return
.
Set and
.
Consider the factorizations and
, where
denotes the
original. Then we observe that
,
each time when we arrive at step 4. When the approximations
were sufficiently precise, it follows that we obtain an
-approximation of the largest
root of
on exit.
Remark
Remark of maximal
modulus, then after a few steps, we have
,
with
, whence a good
approximation of
can be read off from
. Now if
, then either
or
. Going the way back up, we may
thus compute a good approximation of
.
At a second stage, this approximation may be further improved using
Newton's method.
Remark admits a single
root
of multiplicity
. In that case, each iteration typically gives rise
to a precision loss of
binary digits, when using
a fast algorithm for multiplication.
Let us now come back to the original problem of computing a lower bound
for the radius of convergence of
. Given
,
we thus have to find an
-th
lower approximation
for
with
and
.
We start by computing the
-th
lower approximation
of
. For
,
we may now take
if
and
otherwise (alternatively, one may choose
as a function of a heuristic approximation of the
radius of convergence of
;
see remark 5.11). Using the above algorithm, we may now
compute a lower bound
for
, using an expansion of
at
order
(or an order like
which makes the total computation time more or less proportional to
) and
. We may then take
if
and
otherwise.
Let and
be
such that
. By definition, we
have a method for computing an upper bound
for
. Since this bound may be
pessimistic, we will now show how to compute better approximations for
.
We start by computing an with
and picking an expansion order
.
If we want an
-approximation
of
, then we may take
as in (5.3) and (5.4). We
next compute approximations
for the first
coefficients
of the series
and set
.
We now have to approximate
Let be a power of two with
and
. We may
efficiently approximate the vector
using the FFT
and compute
More generally, we may efficiently approximate
using the FFT for small values of
.
Let
. Then
for , and where
for polynomials
of degree
. In other words,
(6.1)
We also have
where . We may thus compute
an approximation
using one FFT of order
. More generally, for a fixed
, and modulo choosing a larger
, we may compute an
approximation
using one FFT of order
.
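The FFT evaluation step can be illustrated by the following sketch (a hedged toy with NumPy; the paper's choice of the order and of the precise quantities being approximated is not reproduced): one FFT of size N evaluates the truncated series at N equispaced points on the circle of radius r, and the maximum sampled modulus approximates the max-norm of the function on that circle.

```python
import numpy as np

def circle_max_estimate(coeffs, r, N=256):
    # evaluate the truncated series at the N-th roots of unity on the circle
    # |z| = r using one FFT, and take the maximal sampled modulus as an
    # approximation of the max-norm on that circle
    a = np.zeros(N, dtype=complex)
    a[:len(coeffs)] = [c * r ** k for k, c in enumerate(coeffs)]
    vals = np.fft.fft(a)  # values f(r w^j) for an N-th root of unity w
    return np.abs(vals).max()
```
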
In practice, the above method performs even better. Indeed, if is a truncated power series, then the right-hand side of
(6.1) is usually of the order
for a
small
. Also, in the
favorable but frequent case when the maximal value of
is obtained near a unit
which “clearly
dominates the others” (this case typically occurs when we approach
an isolated singularity), one may consider the shifted polynomial
and apply Newton's method near
in order to efficiently find high precision approximations of
. If the upper bound for
was pessimistic, one may also directly recompute the
Taylor expansion of
at order
and apply Newton's method for this series. This allows us to use a much
sharper bound for the tail of the expansion of
on
than (5.4). Alternatively, one
may investigate the use of a steepest descent method. Notice that the
method may still be applied in the slightly less favorable case of a
small number of units
which dominate the others.
Remark
Indeed, it suffices to replace and
by the corresponding
,
and
,
. The efficient computation
of
and
is interesting in
order to compute upper bounds for
resp.
on compact disks. In the case
of
, one needs to require
that
has no roots on
, so that
.
Remark
where is a more general continuous and
real-valued function which can be evaluated efficiently. However,
suitable analogues of (6.1) are harder to obtain in that
case.
In sections 6.1 and 6.2, an
important ingredient of the algorithms is the computation of a bound
for the tail
of the
power series expansion of
on a compact disk
. Until now, sharp bounds for
the tail were obtained by computing a rough bound
on a slightly larger disk and using Cauchy's formula. However, if
is pessimistic, then we will have to choose
quite large in order to reduce the bound for
. This raises the questing of finding more
direct ways for bounding
on
. In this section, we will see how to adapt the
strategies of lazy and relaxed computations with formal power series in
order to directly take into account error bounds for the tails.
Assuming algorithms for the computation of bounds
and
for
resp.
on
, we will also denote by
the
resulting bound for
on
. Finally, in the case when
, then we will abbreviate
,
,
etc. by
,
and so on.
In the case of relaxed computation [vdH02b], additional
tricks are used so as to accelerate these computations using
FFT-multiplication. This enables us to compute
coefficients in time
, where
corresponds to the complexity of multiplication
of polynomials of degree
.
The lazy and relaxed strategies have the big advantage that the
resolution of a functional equation can be done in approximately the
same time as the evaluation of the defining implicit equation.
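The lazy strategy can be illustrated by a minimal sketch (our own toy implementation, not the relaxed algorithm of [vdH02b]): coefficients are computed on demand and memoized, so a power series may be defined recursively through the very equation it satisfies.

```python
class LazySeries:
    # power series whose coefficients are computed on demand and memoized
    def __init__(self, gen):
        self.gen = gen        # gen(n, self) -> n-th coefficient
        self.memo = {}
    def __getitem__(self, n):
        if n not in self.memo:
            self.memo[n] = self.gen(n, self)
        return self.memo[n]

def lazy_mul(f, g):
    # naive lazy product: c_n = sum_i f_i g_{n-i}, so the n-th coefficient
    # costs O(n) operations (the relaxed algorithm improves on this)
    return LazySeries(lambda n, _: sum(f[i] * g[n - i] for i in range(n + 1)))

# the solution of f = 1 + int(f), i.e. f = exp, defined through its own equation
exp_series = LazySeries(lambda n, s: 1.0 if n == 0 else s[n - 1] / n)
```
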
One disadvantage of FFT-multiplication is that it increases numerical
instability in the case when the coefficients do
not have the same orders of magnitude. Using transformations of the kind
, where
is the “numerical” radius of convergence of
, it has been shown in [vdH02b,
Section 6.2] how to reduce this numerical instability. In our case, we
are rather interested in the computation of
-approximations of
for
. Assume that
is the solution of some implicit equation using the operations
,
,
,
,
,
and
.
Using the rules
we may then construct an implicit equation for
which can be evaluated as efficiently as
itself.
Without loss of generality, we may thus assume that
and compute
-approximations
for the coefficients
for an
which does not depend on
. If
we need
coefficients,
usually suffices. This trick therefore reduces the general case to fixed
point arithmetic and FFT-multiplication of degree
polynomials only accounts for a precision loss of
digits.
where
can be computed in time . One
may also compute a bound
for
using automatic differentiation. For especially nice postcompositions,
one may take:
For more general postcompositions with ,
with
,
and
, one may use
The case of convolution products will be discussed below.
In order to find the smallest fixed-point of
, we may use the secant
method:
If for some
or if
exceeds a given threshold, then the method fails and
we set
. Otherwise,
converges quadratically to
.
As soon as
, for some given
, we check whether
for
, in which
case we stop. The resulting
is an approximation
of
with relative accuracy
.
The above technique generalizes to systems of
implicit equations. In this case, the hypothesis
and the bound
are vectors and the secant method
becomes:
where
We may also consider systems such that
is recursively built up using the standard operations
,
,
,
, etc., together with
extra operations like
and
which involve the recursive resolution of other systems of implicit
equations. Indeed, theoretically speaking, such a system may be
rewritten as one big system
of the above kind.
In practice however, we also want to preserve the lazy computation
paradigm, which can be achieved by storing the hypotheses
and the corresponding bounds
in a
hash table, which is passed as a reference to the bound computation
method.
Let us now consider the dependence of the computation of for a solution to
as a function of
(assuming that we perform the necessary scalings
for each
). When the implicit
equation was constructed using
,
,
,
and recursive solutions to
implicit equations of the same kind, then it can be checked that
(6.7)
for . Consequently, the
function
indeed does have a fixed point for
sufficiently small
, and our
algorithm yields a computable lower bound for
. In particular, our technique can be used as an
alternative for the classical majorant method [vK75, vdH03]. Moreover, it easily adapts to slightly more general
functional equations, which involve composition or other operations: it
suffices to check that (6.7) holds for
.
Assuming that lower bounds for radii of convergence are computed as
above, we claim that coincides with the largest
theoretical simply connected Riemann surface
on
which
and
are defined.
In order to see this, we first observe that the algorithm for computing
may theoretically be applied to arbitrary paths
and
with
. Since
was constructed
using the common operations
,
,
,
,
etc., we have
whenever
and
depends continuously on
and
.
Consequently, the supremum
is lower continuous in . Now
assume for contradiction that
and take
Setting , there exists a
neighbourhood
of
with
for all
.
Taking
with
,
we thus obtain
. This
contradiction completes the proof of our claim. Notice the analogy with
[vdH05a, Theorem 3].
(6.8)
Assuming that the equation admits a solution at the origin, its analytic
continuation to requires the prior analytic
continuation of
to
for
any
and
.
Naive implementations may therefore lead to infinite loops.
One solution to this problem is to introduce a “freezing”
operator . Given
, the function
is the
restriction of
to its current Riemann surface
. In particular,
for all
. Then
we may replace (6.8) by
This approach avoids infinite loops, by handing over to the user the
responsibility of ensuring that all values with
are already defined. Of course, this may be
automated by trying brute-force continuations in all directions. One may
also consider delayed freezing operators
,
which only freeze
after
postcompositions.
In the very particular case when the generate a
finite group
for the composition operator, we
notice that (6.8) may be rewritten as a system of
equations in the unknowns
with
. After a local resolution at
the origin, these equations do no longer involve composition. A
particularly important special case of this situation is when
and
with
.
Nevertheless, in the equation (5.14), the functions and
are known except when
resp.
.
Modulo one subdivision of the path, we may also assume without loss of
generality that
. This
reduces the resolution of the convolution equation to the problem of
determining the coefficients of
at a small
as a function of the coefficients of
at
in a relaxed manner, assuming that the
coefficients of
at
are
already known. Now we may again write
(6.9)
The coefficients of may be computed in a relaxed
manner by what precedes. The second member may be expanded in
using
(6.10)
However, the evaluation of each
at a precision of
digits still requires a time
, which is not very
convenient if we want to evaluate up to order
. On the other hand, if the power series expansion
of
has convergence radius
, then the translated expansion of
still has convergence radius
.
The idea is now to use (6.9) and (6.10) for
the computation of good bounds
and not for the
expansion of
itself, using the formulas
If is close to
,
then
may typically remain finite even for
. In that case, we have a method to
analytically continue
beyond
.
Remark expansion of
the solution
to a convolution equation at a path
, one generally needs an
order
expansion of
at
the origin, where
is more or less proportional
to
(it also depends on the positions of the
singularities of
). It
remains an interesting question whether the order
can be reduced.
(6.11)
Indeed, the fixed-point method yields
The denominator is unnecessarily pessimistic:
even if
exceeds
,
the function
itself might be bounded away from
. This is particularly
annoying in the case when
for large values of
. Indeed, when using the
fixed-point method in a direct way on this example, the computable
radius of convergence of
would be
instead of
.
For this reason, it is good to treat the case of division (6.11)
in an ad hoc manner. When rewriting (6.11) in
terms of , we obtain the
solution
Now we may compute a lower bound for
on
using the technique from
section 6.2. Consequently, we may take
(6.12)
we obtain a bound
Although this bound is a bit better than the bound for division
(roughly speaking, we effectively “see” the part of with
), we
again obtain a better ad hoc bound by solving (6.12)
in terms of
:
Section 6.2 again yields an efficient algorithm for
computing order bounds
and
for
and
on
. We may
then take
Composition is treated in a way similar to integration. Applying the
above rules to , we obtain
We now replace the equation by
and compute bounds and
as in the previous section with the above improvement for the final
division by
. In the case of
possibly nested systems of implicit equations
, subexpressions
are
decomposed as
where is a vector and
stands for the vector product.
Example
(6.13)
For , we have
and
for the polynomial with
. Then (6.13) is equivalent to
(6.14)
where is an expression which is also a power
series in
. We may then
rewrite (6.14) as
We next set
for polynomials of degree
and numbers
and
which
are approximated at successive stages using the secant method. Then (6.15) admits an explicit solution
Now order upper bounds for
and
can be computed using the method from
section 6.2. Then we may take
With some more work, this method can be adapted to the case of systems
of ordinary differential equations (6.14), with and
. The case
when
is polynomial can also be treated with the
majorant technique [vdH03, Section 5].
One approach to this problem is to use Taylor models [MB96,
Ber98] in which we consider as
formal parameters, with
.
Instead of computing with coefficients in
,
we now compute with coefficients in the jet space
For and
with
, we take
. Given
,
the constant coefficient
is stored at a
precision which is one or a few words higher than the precision of the
.
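Such jet arithmetic can be illustrated by a one-parameter toy Taylor model (our own simplified sketch, not the implementation of [MB96, Ber98]: plain floats instead of balls, a single parameter eps, and truncated terms swept into an interval remainder valid on |eps| <= dom):

```python
class TaylorModel:
    # degree-d jet in one parameter eps together with a remainder bound,
    # valid on the domain |eps| <= dom
    def __init__(self, coeffs, rem, dom):
        self.c = list(coeffs)  # c[0] + c[1]*eps + ... + c[d]*eps^d
        self.rem = rem         # bound on the truncation error on |eps| <= dom
        self.dom = dom
    def __mul__(self, other):
        d = len(self.c) - 1
        prod = [0.0] * (2 * d + 1)
        for i, a in enumerate(self.c):
            for j, b in enumerate(other.c):
                prod[i + j] += a * b
        # terms of degree > d are swept into the remainder bound
        tail = sum(abs(prod[k]) * self.dom ** k for k in range(d + 1, 2 * d + 1))
        ns = sum(abs(a) * self.dom ** i for i, a in enumerate(self.c))
        no = sum(abs(b) * self.dom ** i for i, b in enumerate(other.c))
        rem = tail + ns * other.rem + no * self.rem + self.rem * other.rem
        return TaylorModel(prod[:d + 1], rem, self.dom)
```
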
Taylor models can be used in many variants. For instance, each of the
coefficients with
may be
taken to be finite precision floating point numbers instead of balls, in
which case rounding errors are incorporated into the error of
. If
,
then one may also take
(
), which allows for the computations of bounds for
the derivatives in the parameters
.
If
is continued analytically from
to
, then we
also notice that the initial conditions
at
may again be taken in the jet space
for the errors
at
.
This is useful for the computation of return maps and limit cycles. When
the constant coefficients of such jets become less precise than
, it may sometimes be useful to
unjettify
and replace each
by
. We next
rejettify the vector
by replacing each
by
.
Remark on initial conditions. For
instance, return maps for limit cycles may be computed in such a way.
Remark
Given an initial condition
at , integration of the
equation from
to
yields
with
Now if is given by an enclosing rectangle, then
left multiplication by
turns this rectangle by
, so that we lose
bit of precision when enclosing the result by a new
rectangle.
Now a similar problem is encountered when using complex interval
arithmetic in order to compute the -th
power of
using
.
Therefore, one possible remedy is to adapt ball arithmetic to matrices
and represent transition matrices such as
by
balls
, where
is an exact matrix and
denotes the
space of matrices
with norm
(i.e.
for all vectors
). In this representation, taking the (naive)
-th power of the matrix
only gives rise to a precision loss of
bits. Although the above idea applies well to matrices
whose eigenvalues are of approximately the same order of magnitude, the
error bounds may again be pessimistic in other cases. In addition, one
may therefore adapt the norms in an incremental manner using numerical
preconditioning. Equivalently, one may adapt the coordinate system as a
function of
[Loh88, MB04].
We notice that the divide and conquer technique may also be used to reduce the wrapping effect. Indeed, in the case of linear equations, the transition matrices verify the relation
Instead of computing in the naive way, using
one may use binary splitting:
Even in the case when ordinary interval arithmetic is used in order to
represent the matrices , the
computation of
using this technique gives rise
to a precision loss of only
bits. With some more
work, this technique also applies to non-linear equations.
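The gain can be demonstrated with a toy interval matrix product (our own sketch with entrywise center–radius arithmetic over NumPy, ignoring rounding errors): multiplying many rotation matrices one by one inflates the radii exponentially in the number of factors, while binary splitting keeps the inflation polylogarithmic.

```python
import numpy as np

def rot(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

def imul(A, Ra, B, Rb):
    # entrywise center-radius product: |AB - A0 B0| <= |A0| Rb + Ra |B0| + Ra Rb
    return A @ B, np.abs(A) @ Rb + Ra @ np.abs(B) + Ra @ Rb

def product_naive(cs, rs):
    # left-to-right product: radii are amplified at every single step
    C, R = cs[0], rs[0]
    for c, r in zip(cs[1:], rs[1:]):
        C, R = imul(C, R, c, r)
    return C, R

def product_binary(cs, rs):
    # binary splitting: multiply the two halves recursively, so the radii
    # are amplified only along O(log N) levels
    if len(cs) == 1:
        return cs[0], rs[0]
    m = len(cs) // 2
    C1, R1 = product_binary(cs[:m], rs[:m])
    C2, R2 = product_binary(cs[m:], rs[m:])
    return imul(C1, R1, C2, R2)
```
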
Encoding for an effective set
4
Effective set of computable functions from
into
4
Encoding of
5
Set of digital numbers 5
Set of strictly positive elements in
5
Set of positive or zero elements in
5
Set of approximators of real numbers 5
Set of computable real numbers
5
Set of left computable real numbers 6
Set of right computable real numbers 6
Dot notation for arguments
7
Scale of the encoding of a digital Riemann
surface 9
Nodes of the encoding of a digital Riemann
surface 9
Projection of
on
9
Adjacency relation on
9
Circular adjacency relation
10
Square associated to a node
10
Set of digital points on
11
Set of computable points on
12
Normalization of a digital Riemann pasting
12
Set of digital coverings
13
Open ball of radius
and center
resp.
16
Closed ball of radius
and center
resp.
16
Disjoint union of
and
20
Covering product of
and
20
Join of
at
and
at
21
Set of broken line paths
22
Root of a Riemann surface
22
Set of rooted Riemann surfaces
22
Path domain of
22
Organic Riemann surface associated to
24
Covering space of
25
Convolution product of
and
28
Radius of convergence of
28
Maximum of
on disk
of radius
28
Analytic continuation of
along the straightline segment
28
Effective lower bound for
28
Effective upper bound for
28
Set of locally computable analytic functions
29
Path domain of
30
improves
30
Set of weak locally computable functions
31
Set of computable analytic functions 32
Projection on
32
Set of incrementally computable Riemann
surfaces 33
Extension mapping for incrementally
computable Riemann surfaces 33
The head
42
The tail
42
The restriction
42
G. Alefeld and J. Herzberger. Introduction to interval analysis. Academic Press, 1983.
C.A. Briot and J.C. Bouquet. Propriétés des fonctions définies par des équations différentielles. Journal de l'École Polytechnique, 36:133–198, 1856.
E. Bishop and D. Bridges. Foundations of constructive analysis. Die Grundlehren der mathematische Wissenschaften. Springer, Berlin, 1985.
J. Blanck, V. Brattka, and P. Hertling, editors. Computability and complexity in analysis, volume 2064 of Lect. Notes in Comp. Sc. Springer, 2001.
M. Berz. COSY INFINITY version 8 reference manual. Technical Report MSUCL-1088, Michigan State University, 1998.
R.P. Brent and H.T. Kung.
algorithms for composition and reversion of power series. In
J.F. Traub, editor, Analytic Computational Complexity,
1975. Proc. of a symposium on analytic computational complexity
held by Carnegie-Mellon University.
R.P. Brent and H.T. Kung. Fast algorithms for manipulating formal power series. Journal of the ACM, 25:581–595, 1978.
J. Blanck. General purpose exact real arithmetic. Technical Report CSR 21-200, Luleå University of Technology, Sweden, 2002. http://www.sm.luth.se/~jens/.
S. Basu, R. Pollack, and M.F. Roy. Algorithms in real algebraic geometry, volume 10 of Algorithms and computation in mathematics. Springer-Verlag, 2003.
R.P. Brent. The complexity of multiprecision arithmetic. In Complexity of Computational problem solving, 1976.
R.P. Brent. Fast multiple-precision evaluation of elementary functions. Journal of the ACM, 23(2):242–251, 1976.
J. P. Cardinal. On two iterative methods for approximating the roots of a polynomial. In J. Renegar, M. Shub, and S. Smale, editors, Proc. AMS-SIAM Summer Seminar on Math. of Numerical Analysis, volume 32 of Lectures in Applied Math., pages 165–188, Providence, 1996. American Mathematical Society Press. Park City, Utah.
D.V. Chudnovsky and G.V. Chudnovsky. Computer algebra in the service of mathematical physics and number theory (computers in mathematics, stanford, ca, 1986). In Lect. Notes in Pure and Applied Math., volume 125, pages 109–232, New-York, 1990. Dekker.
J.W. Cooley and J.W. Tukey. An algorithm for the machine calculation of complex Fourier series. Math. Computat., 19:297–301, 1965.
J. Denef and L. Lipshitz. Decision problems for differential equations. The Journ. of Symb. Logic, 54(3):941–950, 1989.
J. Écalle. Les fonctions résurgentes I–III. Publ. Math. d'Orsay 1981 and 1985, 1985.
L. Fousse, G. Hanrot, V. Lefèvre, P. Pélissier, and P. Zimmermann. MPFR: A multiple-precision binary floating-point library with correct rounding. Technical Report RR-5753, INRIA, 2005.
M. Grimmer, K. Petras, and N. Revol. Multiple precision interval packages: Comparing different approaches. Technical Report RR 2003-32, LIP, École Normale Supérieure de Lyon, 2003.
A. Grzegorczyk. Computable functionals. Fund. Math., 42:168–202, 1955.
A. Grzegorczyk. On the definitions of computable real continuous functions. Fund. Math., 44:61–71, 1957.
L. Jaulin, M. Kieffer, O. Didrit, and E. Walter. Applied interval analysis. Springer, London, 2001.
E.A. Karatsuba. Fast evaluation of transcendental functions. Problems of Information Transmission, 27:339–360, 1991.
A. Karatsuba and J. Ofman. Multiplication of multidigit numbers on automata. Soviet Physics Doklady, 7:595–596, 1963.
B. Lambov. The RealLib project. http://www.brics.dk/~barnie/RealLib, 2001–2006.
R. Lohner. Einschließung der Lösung gewöhnlicher Anfangs- und Randwertaufgaben und Anwendungen. PhD thesis, Universität Karlsruhe, 1988.
R. Lohner. On the ubiquity of the wrapping effect in the computation of error bounds. In U. Kulisch, R. Lohner, and A. Facius, editors, Perspectives of enclosure methods, pages 201–217. Springer, 2001.
K. Makino and M. Berz. Remainder differential algebras and their applications. In M. Berz, C. Bischof, G. Corliss, and A. Griewank, editors, Computational differentiation: techniques, applications and tools, pages 63–74, SIAM, Philadelphia, 1996.
K. Makino and M. Berz. Suppression of the wrapping effect by Taylor model-based validated integrators. Technical Report MSU Report MSUHEP 40910, Michigan State University, 2004.
N. Müller. iRRAM, exact arithmetic in C++. http://www.informatik.uni-trier.de/iRRAM/, 2000.
V. Ménissier-Morain. Arbitrary precision real arithmetic: design and algorithms. http://www-calfor.lip6.fr/~vmm/documents/submission_JSC.ps.gz, 1996.
R.E. Moore. Interval analysis. Prentice Hall, Englewood Cliffs, N.J., 1966.
M.N. Minh and M. Petitot. Lyndon words, polylogarithms and the Riemann ζ function. Discrete Math., 217(1–3):273–292, 2000.
A. Neumaier. Interval methods for systems of equations. Cambridge university press, Cambridge, 1990.
R. O'Connor. A monadic, functional implementation of real numbers. Technical report, Institute for Computing and Information Science, Radboud University Nijmegen, 2005.
V.Y. Pan. On approximating complex polynomial zeros: Modified quadtree (Weyl's) construction and improved Newton's iteration. Technical Report RR-2894, INRIA Sophia, 1996.
D. Richardson. The elementary constant problem. In Proc. ISSAC '92, pages 108–116, 1992.
D. Richardson. How to recognise zero. JSC, 24:627–645, 1997.
A. Schönhage and V. Strassen. Schnelle Multiplikation grosser Zahlen. Computing, 7:281–292, 1971.
A. Turing. On computable numbers, with an application to the Entscheidungsproblem. Proc. London Math. Soc., 2(42):230–265, 1936.
J. van der Hoeven. Automatic asymptotics. PhD thesis, École polytechnique, France, 1997.
J. van der Hoeven. Fast evaluation of holonomic functions. TCS, 210:199–215, 1999.
J. van der Hoeven. GMPX, a C-extension library for gmp. http://www.math.u-psud.fr/~vdhoeven/, 1999. No longer maintained.
J. van der Hoeven. Fast evaluation of holonomic functions near and in singularities. JSC, 31:717–743, 2001.
J. van der Hoeven. Zero-testing, witness conjectures and differential diophantine approximation. Technical Report 2001-62, Prépublications d'Orsay, 2001.
J. van der Hoeven. A new zero-test for formal power series. In Teo Mora, editor, Proc. ISSAC '02, pages 117–122, Lille, France, July 2002.
J. van der Hoeven. Relax, but don't be too lazy. JSC, 34:479–542, 2002.
J. van der Hoeven. Majorants for formal power series. Technical Report 2003-15, Université Paris-Sud, Orsay, France, 2003.
J. van der Hoeven. Effective complex analysis. JSC, 39:433–449, 2005.
J. van der Hoeven. Efficient accelero-summation of holonomic functions. Technical Report 2005-54, Univ. Paris-Sud, Orsay, France, 2005. Accepted for JSC.
J. van der Hoeven. Computations with effective real numbers. TCS, 351:52–60, 2006.
J. van der Hoeven. Effective real numbers in Mmxlib. In D. Saunders, editor, Proc. ISSAC '06, Genova, Italy, July 2006.
J. van der Hoeven. Around the numeric-symbolic computation of differential Galois groups. JSC, 42(1–2):236–264, 2007.
S. von Kowalevsky. Zur Theorie der partiellen Differentialgleichungen. J. Reine und Angew. Math., 80:1–32, 1875.
K. Weihrauch. Computable analysis. Springer-Verlag, Berlin/Heidelberg, 2000.
D. Zeilberger. A holonomic systems approach to special functions identities. Journal of Comp. and Appl. Math., 32:321–368, 1990.