It is well known that the operation of integration may lead to
divergent formal expansions like
A natural way to solve a differential equation like
(1.1)
for large is to rewrite it in integral form
(1.2)
and recursively replace the left-hand side by the right-hand side. This
yields a convergent expansion for as an
“integral transseries”
(1.3)
More generally such infinite sums can occur recursively in the exponents. The aim of this paper is to develop a systematic calculus for integral transseries.
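To fix ideas, here is a classical example of the phenomenon alluded to at the beginning of this introduction (given purely as an illustration, and not necessarily the example originally displayed there): repeated integration by parts yields

\[
\int \mathrm{e}^x x^{-1}\,\mathrm{d}x
= \mathrm{e}^x x^{-1} + \int \mathrm{e}^x x^{-2}\,\mathrm{d}x
= \cdots
= \mathrm{e}^x \bigl( x^{-1} + x^{-2} + 2!\, x^{-3} + \cdots + (k-1)!\, x^{-k} \bigr)
+ k! \int \mathrm{e}^x x^{-k-1}\,\mathrm{d}x ,
\]

so that iterating indefinitely produces the divergent formal expansion $\mathrm{e}^x \sum_{k \geqslant 1} (k-1)!\, x^{-k}$, even though each partial rewriting, viewed as a finite expression built from integrals, remains perfectly meaningful.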
We will work in the context of complex grid-based transseries [vdH01], which is briefly recalled in section 2. An “integral transseries” is an infinite linear combination of “integral transmonomials”, which are finite expressions formed from certain “elementary monomials” using multiplication and integration. The elementary monomials are exponentials of integrals of “simpler” integral transseries. The representation of a complex transseries by an integral transseries is far from being unique, which provides a lot of flexibility for the computation with integral transseries.
A major aim of the theory is to lift the usual operations on complex
transseries, such as differentiation, truncation, division,
etc., to their representations by integral transseries.
Moreover, we want these operations to preserve “combinatorial
convergence”. For instance, even though the transseries
represented by in (1.3) is not
convergent, the expansion is said to be combinatorially convergent as an
integral transseries. Indeed, the main raison d'être of integral
transseries is that they allow us to maintain a formal notion of
convergence during our computations, even when the represented
transseries themselves are divergent.
Before introducing integral transseries in their full generality, we first introduce the simpler notion of integral series in sections 3, 4 and 5. A first technical difficulty is to impose suitable conditions on the supports of integral series. Since we work in the grid-based context, we need a suitable analogue of the grid-based finiteness property. This involves two ingredients: finite generation (cf. regular language specifications in section 5.1) and an asymptotic descent condition (cf. the cycle condition in section 5.2), which states, roughly speaking, that later terms in the expansion become smaller and smaller from an asymptotic point of view. Moreover, because integral monomials represent series and not merely monomials, we need this descent condition to be sufficiently uniform. This motivates the introduction of the span of an integral series in section 4.3 and frameworks in section 4.4. These notions allow us to obtain quick and rough bounds for the support of the transseries represented by an integral series or monomial.
In order to lift computations with classical complex transseries to integral transseries, the key step is a mechanism for rewriting integral transseries in a form with a clear asymptotically dominant part. For instance, in order to compute a fraction
it is necessary to first expand the denominator in such a way that it can be inverted. Using integration by parts, one has
whence
Now admits a natural “integral
transseries” inverse
.
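The inversion step used here relies on the usual geometric series: assuming that the denominator has been rewritten as a dominant factor times $1 + \varepsilon$ with $\varepsilon \prec 1$ (which is what the expansion by integration by parts achieves), one takes

\[
(1 + \varepsilon)^{-1} = \sum_{k \geqslant 0} (-1)^k\, \varepsilon^k .
\]

Since each power $\varepsilon^k$ is a sum of finite products of integral monomials, the right-hand side is again an integral transseries.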
A more systematic procedure for obtaining expansions like (1.4)
and (1.5) will be the object of sections 7 and
8. In section 6, we prepare this material by
introducing the integral transseries analogue of transbases and
differentiation. Putting all techniques together, we finally construct
the field of integral transseries in section 9.
Integral transseries can be seen as a natural generalization of Écalle's arborified moulds [EV04]. One advantage of the approach in this paper is that a systematic calculus for integral transseries avoids the process of “arborification”. Even though the latter technique also has a large degree of generality, our technique works in a context as general as that of complex transseries, for which a satisfactory theory of accelero-summation is not even known yet (but is under development). Furthermore, with more work, we think that our technique may be generalized so as to include other types of operators, like infinite summation, and parameterized transseries. Unfortunately, we also have to pay the price of a certain technicality in sections 7 and 8. It remains an interesting question how far the ideas in this paper may be further simplified. A few ideas in this direction will be mentioned in the conclusion. We plan to further develop the topic of integral transseries and its link with the theory of accelero-summation in a forthcoming paper.
The field of complex grid-based transseries has been constructed and
studied in [vdH01]. Below we will quickly present a
classical variant of the construction. We first endow
with the following total ordering:
This gives the structure of a totally ordered
-vector space, although the
ordering is not compatible with the multiplication.
Remark
and vary the orderings during the construction [vdH01].
However, in this paper, we will assume the above ordering, for
simplicity.
Now consider the totally ordered monomial groups
and corresponding fields
,
which are inductively defined by
We call
the field of purely exponential complex transseries. Setting
we have , because of the
grid-based property. For each
,
let
denote the field obtained from
when replacing
by
. We call
the field of complex transseries. Setting
we again have , because of
the grid-based property.
Remark
under exponentiation and next under logarithm. In [vdH01],
we proceed exactly the other way around.
We recursively define a strong derivation on , by setting
for monomials and extending by strong linearity.
Next, we set
for . It is classical to
verify that
is a strong derivation which
satisfies
(2.1)
For all , the logarithmic
derivative of
is denoted by
.
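The precise recursive formula is part of the construction in [vdH01]; as a rough indication of its shape (an assumed reconstruction, not a verbatim quotation), one may think of a purely exponential transmonomial as $\mathfrak{m} = x^{\alpha} \mathrm{e}^{\varphi}$ with $\varphi$ a purely large series over the previous monomial group, and set

\[
\mathfrak{m}' = \Bigl( \frac{\alpha}{x} + \varphi' \Bigr)\, \mathfrak{m},
\qquad
\mathfrak{m}^{\dagger} = \frac{\mathfrak{m}'}{\mathfrak{m}} = \frac{\alpha}{x} + \varphi' ,
\]

after which the derivation extends to all transseries by strong linearity.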
Using general strongly linear algebra, it follows from (2.1)
that admits a distinguished strong right inverse
, i.e.
for all
. More
specifically, one has
(2.2)
for all transmonomials . It
can also be checked that
and
are stable under integration for all
(and
similarly for
and
).
In particular, we have
in the inductive construction of .
In this paper, the flatness relations ,
,
and
are defined in an
-expo-linear
way:
For instance, and
.
A subset
of
is said to
be flat if
for all
. We denote by
the set
of all flat subsets. We have
and
is stable under arbitrary intersections.
Remark , where
is a partially ordered monomial group which admits powers
in a totally ordered ring
with
. In that case, we set
The definitions remain valid if is only a
partially ordered monomial monoid which can be embedded into a partially
ordered group.
Given , we define the
flattened dominance and neglection relations
,
,
and
by
We also define the derived relations ,
,
and
by
Notice that and
The differentiation and the distinguished
integration
on
satisfy
for all (the first relation is easily checked
and the second one follows from (2.2)).
We say that
and
are
flat.
Remark -expo-linear
way using
Equivalently, we may define them using
When replacing and
by
the relations
and
defined by
it is also possible to recover the -expo-linear
case.
Classically, a transbasis is a tuple of
transseries with
and such that
for some
.
for all
.
The integer is called the level of the
transbasis and
is said to be a plane
transbasis if
. We recall
that the flatness relations
and
were defined in an
-expo-linear
way, whence
. We denote by
the set of power products
with
. The level of
a transseries
is the highest
such that
.
In this paper, it will be convenient to consider a variant of the
concept of transbases. Given ,
consider the differentiation
A differential transbasis of level is a
tuple
of transseries with
and such that
for all
.
We will sometimes denote . We
notice that
is stable under
for all
and it has been proved in [vdH01]
that
is stable under
.
In what follows, all transbases will be considered to be differential. The following incomplete transbasis theorem is proved in a similar way to the usual theorem for non-differential transbases [vdH97]:
be a transbasis of level
and
a transseries of
level
. Then there exists a
transbasis
of level
such that
.
Let be a constant field,
a monomial set and let
be a strong
-module of formal power series. Consider a set
and a strong
-module
which admits
as a strong
basis. We will call elements of
monomials, even
though we do not necessarily have an ordering
on
. Assume that we have a
strongly linear mapping
such that is regular for all
. Then we call
a
representation module and we say that
represents
for each
. We denote by
the
equivalence relation on
defined by
. If
and
are monoids and
preserves
multiplication, then we call
a representation
algebra.
Example and a mapping
such
that
is regular for all
, let
be the set of all
mappings
such that
is a
summable family in
. A family
is said to be summable if
is a summable family in
, in
which case we set
The strong summation is well-defined, since is
summable for all
and
. The mapping
extends to
by strong linearity, giving
the structure of a representation module. Any other representation
module with the same
and
on
can be embedded into
.
Example be the strong
-algebra
of grid-based series over a monomial group
.
Let
be infinitesimal monomials in
and consider the formal group
. We have a natural multiplicative mapping
, which extends to
according to the previous example. This gives
the structure of a representation algebra.
An is called a Cartesian representation
of
. If
is generated by its infinitesimal elements, then each series in
admits a Cartesian representation for a suitable
.
Example be a representation module with representation
mapping
. A support
function is a mapping
with
for all
. Given
, we set
,
so that
.
A family is said to be
-summable
if
is finite for all
. The subset
of
of all
such that
is
-summable is a
representation module for the
-summability
relation. We have
for the trivial support
function with
for all
.
Let be a representation module with
representation mapping
and a support function as
in example 3.3. Given
and
, we denote
A strongly linear operator is said to be a
truncation operator w.r.t.
,
if
.
For all , we have
.
For all , we have
.
Two truncation operators and
are said to be compatible if they commute. In that case,
is again a truncation operator.
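To fix ideas, here is a small sketch (in Python, with hypothetical names and a drastically simplified finite setting) of one natural family of truncation operators: the projection onto the terms whose support is contained in a given set. In the toy model such projections are linear and idempotent, and any two of them commute, their composition being the projection attached to the intersection of the two sets.

```python
# Toy model: a series is a finite dict {monomial: coefficient}, a support
# function sends each monomial to a finite set, and truncate(f, support, S)
# keeps exactly the terms of f whose support is contained in S.

def truncate(series, support, S):
    return {m: c for m, c in series.items() if support(m) <= S}

# Example: monomials are words and the support of a word is its set of letters.
support = lambda m: set(m)
f = {"a": 1, "ab": 2, "b": 5, "bc": 3}
S, T = {"a", "b"}, {"b", "c"}

# The two truncations commute, and their composition is the truncation for S & T.
assert (truncate(truncate(f, support, S), support, T)
        == truncate(truncate(f, support, T), support, S)
        == truncate(f, support, S & T)
        == {"b": 5})
```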
Let be a set of subsets of
, which is closed under complements and finite
intersections. Then we say that
is
-truncation-closed if for each
, there exists a truncation
operator
for each
,
and
for all
.
In what follows, given
, we
will sometimes use the notations
,
, etc.
Example
for all subsets of
,
where
Let be a strong differential
-algebra for the derivation
. Assume that
admits a regular and distinguished right inverse
. Let
be a monomial group
together with a multiplicative mapping
such that
is regular for all
.
In what follows, we will assume that
for
.
We denote by the free formal structure generated
by
,
and
. In other words, each
element
is a tree whose leaves are labeled by
elements in
, whose unary
nodes are labeled by
, and
whose binary nodes are labeled by
.
For instance, if
, then the
following tree is an element of
:
We will denote by the size of
, i.e. the number of leaves plus
the number of integral nodes of the tree
.
We denote by
the total size of
, i.e. when we also count
multiplicative nodes. For instance, the size of the above example tree
is
and its total size
. We denote by
the finite
subset of
of leaves of
. Elements of
are called
integral monomials. It may sometimes be useful to assume the
existence of a special integral node monomial
of
size
.
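To make these conventions concrete, here is a small sketch (in Python, with hypothetical names) of the underlying data structure and of the two size functions: an integral monomial is a finite tree whose leaves carry elementary monomials, whose unary nodes represent integration and whose binary nodes represent multiplication; the size counts leaves and integral nodes, while the total size also counts the multiplicative nodes.

```python
from dataclasses import dataclass
from typing import Union

@dataclass
class Leaf:            # an elementary monomial
    label: str

@dataclass
class Int:             # unary node: integration
    arg: "Monomial"

@dataclass
class Mul:             # binary node: multiplication
    left: "Monomial"
    right: "Monomial"

Monomial = Union[Leaf, Int, Mul]

def size(m: Monomial) -> int:
    """Number of leaves plus number of integral nodes."""
    if isinstance(m, Leaf):
        return 1
    if isinstance(m, Int):
        return 1 + size(m.arg)
    return size(m.left) + size(m.right)

def total_size(m: Monomial) -> int:
    """Also count the multiplicative nodes."""
    if isinstance(m, Leaf):
        return 1
    if isinstance(m, Int):
        return 1 + total_size(m.arg)
    return 1 + total_size(m.left) + total_size(m.right)

def leaves(m: Monomial) -> set:
    """The finite set of elementary monomials occurring in m."""
    if isinstance(m, Leaf):
        return {m.label}
    if isinstance(m, Int):
        return leaves(m.arg)
    return leaves(m.left) | leaves(m.right)

# Example: the monomial  a * int(b * int(c))  has size 5 and total size 7.
m = Mul(Leaf("a"), Int(Mul(Leaf("b"), Int(Leaf("c")))))
assert size(m) == 5 and total_size(m) == 7 and leaves(m) == {"a", "b", "c"}
```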
Each integral monomial induces a natural element
. Indeed, this was already
assumed if
. If
resp.
,
then we recursively set
We also recursively define a non-trivial support function by
We let denote the (non-associative)
representation algebra from example 3.3. Elements of
are called integral series and we call
a representation algebra of integral series
with underlying monomial group
.
For each
, we denote
.
Remark is neither associative nor
commutative. In the sequel of this paper, this will not be a problem,
since integral transseries are mainly used for the purpose of
representation. Nevertheless, it is possible to define an associative
(and/or commutative) variant of
.
Let be the equivalence relation on
generated by all relations of the form
. This relation is compatible with the
representation mapping
as well as the size
function
. Assuming that
, it follows that
has the structure of a multiplicative monoid and
has the structure of an associative representation
algebra. The mapping
naturally factors through
.
The mapping naturally extends by strong
linearity to
. Indeed, given
there is at most one
with
. Similarly, the
multiplication
extends to a strongly bilinear
mapping
. Given
, we denote by
and
the strongly linear operators on
with
The operators ,
and
(for monomials
) admit strongly linear left inverses
,
and
whose action on monomials is given by
Let be such that
.
Then we claim that
is a well-defined multiplicative inverse of
modulo
. Indeed,
. Moreover, given
and
with
,
we must have
.
Consider an exponential transmonomial with
. Then
For arbitrary transmonomials it follows that
for a sufficiently large .
For such an
, we define the
integral span of
by
We have
Example ,
and
. Notice that
Assume now that . For
integral monomials
, we
recursively define the span of
by
We have
For such that
is finite
(we say that
has finitely generated
support), we define
.
Example .
A framework is a set of subsets of
which is closed under arbitrary intersections and
such that
for each
.
The elements of
will be called frames.
For each subset
there exists a smallest frame
which contains
and we denote it by
.
Example is a monomial group and define the flatness
relations as in remark 2.3. Then the set
is a framework. Indeed, if is a family of
elements in
with
,
then
for any
.
If
is only a monomial monoid which can be
embedded in a monomial group
,
then
is again a framework.
A framework function on is a function
which associates a frame
to each integral monomial
,
so that
In particular, for all
. We call
the frame for
. More generally, given a set
of integral monomials, we call
the frame for . Given a
representation algebra
of integral series with a
framework function
as above, we call
a framed representation algebra.
Example . Taking
for all , we define a
framework function on
.
Given an integral series ,
the number of integral monomials
with
is finite for each fixed size
. Therefore, we may define the majorating
series
for
by
We say that is combinatorially
convergent if
is convergent. We will denote
the set of combinatorially convergent series in
by
.
Proof. This follows immediately from the facts
that for all , we have
Here for
if
for all
.
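As an indication of how the majorating series may be computed in practice, the following sketch (in Python, with hypothetical names) assumes one natural reading of the definition, namely that its $n$-th coefficient is the sum of the absolute values of the coefficients of the integral monomials of size $n$; combinatorial convergence then means that these coefficients grow at most geometrically.

```python
from collections import defaultdict

def majorant_coefficients(series, size, n_max):
    """Assumed reading: the n-th coefficient is the sum of |coefficient| over
    the monomials of size n occurring in the series."""
    coeffs = defaultdict(float)
    for monomial, c in series.items():
        coeffs[size(monomial)] += abs(c)
    return [coeffs[n] for n in range(n_max + 1)]

# Example with words as monomials and the length as size:
majorant_coefficients({"a": 2.0, "ab": -1.0, "cd": 3.0}, len, 3)  # [0.0, 2.0, 4.0, 0.0]
```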
A family of elements in
is said to be summable if
is a summable
family of power series in
.
In other words,
is finite for each
and
. The set
is a strong
-module
for this infinite summation operator.
Consider a strongly linear mapping .
For each
, let
. We call
the
majorant series for
.
Notice that
is uniquely determined by the
restriction of
to
.
We may also regard
as a mapping on
by setting
. We
have
for all . If
maps
into itself, then
maps
into
,
and we say that
is uniformly strong.
This is the case if and only if
is a summable
family in
for each
.
In particular, if
and there exist constants
with
for all
, then
is uniformly
strong.
A uniformly strong mapping is said to be a
rewriting if
for all
. In that case
is said
to be a rewriting of
and we write
. If
maps
into itself, then we call
a
monomial rewriting. In particular, consider a mapping
such that there exist constants
with
,
and
for all
.
Then
extends uniquely to a monomial rewriting.
Example by
multiplications
determines a monomial rewriting.
Assume that is the constant field for the
derivation on
. Then for any
there exists a constant
with
Let be the subset of
of
integral monomials of the form
with . We recursively define
a product
on
by
for all and
and
extension by linearity. Here we understand that the products
are taken in
.
We recursively define the mouldification
of an
integral monomial
by
This definition extends to a strongly linear mapping .
For several purposes, it is more convenient to work with moulds in than with general integral series in
. However, the mapping
generally destroys combinatorial convergence, i.e.
is not a rewriting. The convergence can often be
restored using the process of arborification [EV04], which
corresponds more or less to inverting the mapping
in a nice way. In this paper, we will show that most important
operations can be carried out directly in
,
while preserving combinatorial convergence.
Throughout this section, we assume that is a
framed representation algebra of integral series with underlying
monomial group
and framework function
.
A regular language specification is a finite set
of formal language symbols, together with a rule of
one of the following forms for each
:
, with
;
, with
;
, with
;
, with
.
The language symbols may be regarded as subsets of
as follows. Consider the set
of mappings
, such that for
all
we have
,
,
, resp.
,
if
is specified by R1, R2,
R3 resp. R4. Then
is again in
. We will regard
as the subset
of
. Subsets of this kind are called regular
languages.
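As an illustration, the following sketch (in Python, with hypothetical names and assumed rule shapes: an elementary monomial, a finite union, a product of two languages, and integration applied to a language, which match the case analysis used for derivation trees below) enumerates the monomials of each language up to a prescribed size by a bottom-up fixed-point iteration.

```python
# Hypothetical rule shapes: ("mono", m), ("union", [l1, ...]),
# ("prod", l1, l2) and ("int", l1).  Monomials are nested tuples
# ("leaf", m), ("mul", a, b), ("int", a); the size counts leaves and
# integral nodes, as in section 4.

def msize(m):
    if m[0] == "leaf":
        return 1
    if m[0] == "int":
        return 1 + msize(m[1])
    return msize(m[1]) + msize(m[2])

def languages_up_to(spec, max_size):
    """For each language symbol, the monomials of size <= max_size that it
    generates, computed by a bottom-up fixed-point iteration (this terminates,
    since there are finitely many such monomials over finitely many leaves)."""
    langs = {l: set() for l in spec}
    changed = True
    while changed:
        changed = False
        for l, rule in spec.items():
            if rule[0] == "mono":
                new = {("leaf", rule[1])}
            elif rule[0] == "union":
                new = set().union(*(langs[s] for s in rule[1]))
            elif rule[0] == "prod":
                new = {("mul", a, b) for a in langs[rule[1]] for b in langs[rule[2]]}
            else:  # ("int", l1)
                new = {("int", a) for a in langs[rule[1]]}
            new = {m for m in new if msize(m) <= max_size} - langs[l]
            if new:
                langs[l] |= new
                changed = True
    return langs

# Example: L generates m, m*int(m), m*int(m*int(m)), ...
spec = {"M": ("mono", "m"), "L": ("union", ["M", "P"]),
        "P": ("prod", "M", "I"), "I": ("int", "L")}
print(sorted(languages_up_to(spec, 5)["L"], key=msize))
```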
Example of
,
the set
generated by
,
and
is a regular language. Notice that
consists of
the
with
.
Inversely, given a regular language specification
, let
be the finite set of
monomials
, such that
for some
.
Then each
is contained in
and the set
is finite for each
.
Given a regular language specification ,
we say that
directly depends on
, and we write
, if
for some
, or
.
The transitive closure of the direct dependency relation
is denoted by
;
we say that
depends on
, if
.
The dependency relation
is reflexive and
transitive, but not necessarily anti-symmetric, since distinct language
symbols may mutually depend on each other.
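Continuing with the rule shapes assumed in the previous sketch, the direct dependency relation and its reflexive-transitive closure can be read off from a specification as follows (hypothetical names again):

```python
def direct_dependencies(spec, l):
    """The language symbols occurring in the rule for l."""
    rule = spec[l]
    if rule[0] == "mono":
        return set()
    if rule[0] == "union":
        return set(rule[1])
    return set(rule[1:])  # "prod" or "int"

def depends_on(spec, l):
    """All symbols on which l depends: the reflexive-transitive closure."""
    seen, todo = {l}, [l]
    while todo:
        for s in direct_dependencies(spec, todo.pop()):
            if s not in seen:
                seen.add(s)
                todo.append(s)
    return seen

# With the specification of the previous sketch, "L", "P" and "I" mutually
# depend on each other, while "M" only depends on itself.
```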
Example
where we may take ,
and the natural embedding of
into
for
.
We will show later that all integral monomials which occur in the
expansion (1.3) belong to
.
Remark from example
may simply be specified by
Remark and a language
symbol
such that
for no
with
for some
. Then
as a
subset of
. A language
specification which contains no symbols
of the
above type is said to be well-rooted.
Let be a regular language specification. An
arc is a sequence
(5.1)
such that and for all
, we have
,
and either
or
and
.
or
and
.
and
.
If , then we call (5.1)
a cycle in
. A cycle
(5.1) is said to be minimal if
. Notice that any two language symbols
and
in a cycle mutually
depend on each other.
We say that satisfies the cycle
condition, if
for every cycle (5.1).
In that case, we say that
is a grid-based
language specification and the elements of
are called grid-based languages. A grid-based subset of
is an arbitrary subset of a grid-based language.
Series in
with grid-based support are called
grid-based integral series. We denote by
the representation algebra of such grid-based series.
Remark to be replaced by one-step arcs
in
the case when the languages
are unique with the
properties that
and
for
all
.
Example
It is easily verified that and
for all
. Consequently,
and
,
whence
. If we set
then we notice that the cycle condition is no longer satisfied.
Example is also a grid-based subset
of
.
Let be a regular language specification and
define
Consider a -labeled tree
with children
.
Assume that the roots of
are labeled by
. We say that
is a derivation tree if this is recursively the case for
and either
,
and
.
or
for some
,
and
.
,
and
.
,
and
.
We denote by the set of all such derivation
trees. Given
, we define
,
and
as follows:
is the product of all
where
ranges over all labels of
.
is obtained from
by
substituting each label
by
,
or
, depending on whether
,
resp.
, and
by eliminating all nodes
with
.
, where
is the label of the root of
.
Example from example 5.2,
the tree
is a derivation tree for the triple
with
,
and
.
Proof. We recursively construct
as follows:
If , then
is reduced to its root labeled by
.
If , then we choose
with
,
and set
If ,
and
with
, then
If ,
and
with
and
, then
It is easily verified by recursion that ,
and
.
If is a derivation tree constructed as in the
proof of the proposition, then we say that
is a
derivation tree for the triple
.
For instance,
is a derivation tree for
in example 5.8.
of a derivation
tree
with respect to a grid-based language
specification
. If
, then
.
Proof. The derivation tree
is of the form
(5.2)
where the expanded subtrees are .
Denoting by
the label of the root of
and
for all
, we have a cycle
(5.3)
since . We conclude that
.
Let be a grid-based language specification. We
say that a derivation tree
is
cycle-free, if it does not contain a subtree
of the form (5.2) with
.
Given
, let
be the set of monomials
such that
for some cycle-free derivation tree
. Clearly,
is finite.
Now consider with
and
. For
, we define the set
by
If or
,
then
.
If or
,
then
.
If , then
.
Clearly, for every , we have
a cycle (5.1), so
Let be the finite union of all
, where
are as above.
Proof. Let .
We will prove that
by induction over the minimal
size of a derivation tree
for
. This is clear if
is
cycle-free. Otherwise,
admits a subtree
of the form (5.2) with
. Modulo the replacement of
by a subtree, we may assume without loss of generality that the
and
are all cycle-free. By
the definition of
, we now
have
. Now consider the
derivation tree
which is obtained from
when replacing
by
. By the induction hypothesis, we have
. We conclude that
.
of integral series
over a grid-based differential algebra
and
assume that
is grid-based on
. Then all strong linear combinations over
grid-based subsets of
are summable.
Proof. Let be a
grid-based subset of
, so
that
for
some grid-based
language specification
. Let
and
be as in the
previous section and notice that
is grid-based.
Now give the natural ordering
and order
by Higman's imbrication ordering [vdH04, Section 1.4], with the additional requirement that the
imbrication preserves roots. Then Kruskal's theorem implies that the
set of
-labeled trees is
well-quasi-ordered for the opposite ordering of
. We claim that the mapping
preserves the ordering .
So assume that and let us prove by induction
over the size of
that
. Write
Since the imbrication of into
preserves roots, we have
, so that
,
and
for all
.
Each admits a subtree of the form
with
.
Now the induction hypothesis implies that
for all
. By lemma 5.10,
we also have
for all
. It follows that
This completes the proof of our claim.
Now let and consider a family
with
. Let
be the set of triples
with
and
. Then the family
refines
.
By proposition 5.9, each triple
admits a (distinct) derivation tree
.
By what precedes, it follows that
is a
well-based family. From lemma 5.11, we conclude that
is a grid-based family.
Theorem 5.12 implies in particular that
is grid-based for all
. If
, then we denote
. A truncation operator
on
is said to be compatible with the grid-based
structure, if for every grid-based language
, there exists a grid-based language
so that
maps
into
.
Throughout this section, we assume that is a
representation algebra of grid-based series with underlying monomial
group
and framework function
.
Proof. Consider a grid-based and a regular
language specification resp.
. In view of remark 5.3,
we may assume that each language symbol
resp.
is specified by a rule of the
form
Let be the regular language specification, whose
symbols are formal intersections
with
and
as above, and so that each
is specified by
Since any cycle in
induces a cycle
in
,
we conclude that
is a regular language
specification.
Proof. Consider a grid-based and a regular
language specification resp.
, where each language symbol
resp.
is
specified by a rule of the form (6.1) resp.
(6.2). Let
be the regular language
specification, whose symbols are formal differences
with
as in (6.1) and where
is a finite union, where each
is specified by
Each formal symbol is specified by
where stands for the set of pairs
with
and
. Since any cycle
in
induces a cycle
in
, we conclude that
is a regular language specification.
Consider ,
,
,
and a relation
among
,
,
,
,
,
,
,
,
,
,
and
. Then we define
These relations generalize to grid-based integral series , by requiring that they hold uniformly for all
monomials in a grid-based language
.
More precisely, denoting by
the set of all
grid-based languages, we define
Notice that these definitions indeed extend the case when is a monomial, since
is a
grid-based language for every
.
We say that
is regular, if
for certain
,
and
.
In that case, we denote
.
Given a subset of
and
, we recall that
. More generally, if
is
a subset of
, then we denote
. These notations are
particularly useful if
is one of the sets
Restrictions on the support combine in a natural way. For instance, we have
because of proposition 6.1.
and
. Given
with
, we define
Then extends by strong linearity into an
operator
.
Proof. Given ,
there exists a grid-based language specification
and
with
and
. Then the support of
is included in
,
so
is well-defined.
Remark .
Let be an algebra of grid-based integral series.
A logarithmic derivation on
is a
mapping
such that
, for all
.
for all
with
.
is regular for all
with
.
for all
with
.
for all
and
.
for all
with
.
In that case, and in virtue of the next sections, we say that is a differential representation algebra of
grid-based series. We say that
is finitely based
if
is finite for all finite .
Given , we will also denote
. Assuming that
, let
,
and
be such that
. Then
admits a natural inverse
modulo
given by
If is a grid-based language with
,
and
, then
is a grid-based
language with
,
and
. It
follows that
is also a regular grid-based
series. In the sequel, we will denote
and
. If
,
then we set
and
.
If
, then we take
.
Assume now that for some finite set
, that
for some plane
differential transbasis
of transmonomials, and
that
for certain
.
In that case we say that the differential representation algebra
is triangular and the above notations may be
extended to more general monomials
:
given
with
and
, we let
and
(if
).
Furthermore, given
, consider
. Then for all
with
, we have
,
and
. Similarly, with
as above, we have
,
and
.
Setting
and
,
we may therefore assume that
and
for all
with
. If
,
then we take
.
Let us now define a strong derivation .
We first define the derivative of each monomial in
:
(𝔵∈𝔊ℝ)
Clearly, for all
.
extends to a strongly
linear mapping
, which
represents the derivation on
.
Proof. Consider a grid-based language
specification . For each
, we define a new language
symbol
by
If , then
.
If , then
.
If , then
.
If , then
.
Clearly, if with
,
then
. Now consider a cycle
which involves one of the languages of the form
with
.
Since none of the
and none of the
depend on
, it
follows that the cycle has the form
for certain
and modulo remark 5.5. But then
is also a cycle, whence
. We conclude that the
with
are grid-based languages.
Given , the above discussion
shows that
belongs to the grid-based language
. In order to prove that the
mapping
extends by strong linearity, we still
need to show that for every
there exists only a
finite number of
with
. Indeed, by induction over the size of
, it is easily seen that
is necessarily obtained from
through the
replacement of a subtree
of
by
or the replacement of a subtree of the form
with
by
.
Assume that is triangular, with
.
Proof. It suffices to prove the support bounds
for monomials .
If , then there exists an
with
.
Consequently, we may take
.
If , then
is of the form
and
.
Consequently,
, so we may
take
. We notice that
is a finite set.
Let us finally show that we may take ,
where
is a finite union up to duplicates. We use
induction over the size of
.
If
, then we have
. If
,
then by the induction hypothesis
.
Finally, if
, then
.
Proof. Let be a
grid-based language specification with
and
for all
.
Given
, let us prove by
induction over the size of
that
. If
,
then
implies
.
If
, then
implies
. Finally, if
, then
.
Proof. Given a finite subset of
of
, let
For all with
,
we claim that
Indeed, using induction over ,
we have
By strong linearity, we conclude that
for all with
,
whence
. We conclude by
recalling that
is finite for each
.
Throughout this and the next section, we assume that is a triangular differential representation algebra
of grid-based series with underlying monomial group
, framework function
and logarithmic derivation
.
The objective of this and the next section is to construct truncation
operators
on
for all
,
and
.
In this section, we start with the construction of
on
, where
We assume that is fixed and perform the
construction simultaneously for all possible values of
.
Given , we define
,
and
by induction over the size of
. We always take
If , then we take
If , then it will be
convenient to denote
and
. We distinguish the following cases:
[]. We take
[]. We take
[]. If
, then we set
If , then we take
Remark ,
and
coincide with those
from proposition 6.3 in the case when
. This ensures that
is
well-defined. We will show below (proposition 7.7) that
. From proposition 6.7,
it follows that
. This
ensures that
is well-defined. Finally, it can be
shown that
, which justifies
the simplification
Consider a grid-based language .
Given a grid-based language specification
with
, then we notice that
only depends on languages
with
. Modulo removing all
other languages, we may thus assume without loss of generality that
for all
.
In order to show that the definitions of ,
and
for monomials
extend by strong linearity to
, we first have to specify regular languages
,
and
with respect to which
,
and
can be expanded for
. We proceed along similar
lines as for the construction of
,
and
.
Since , we take
. If
,
then we take
If , then we abbreviate
,
and
distinguish the following cases:
[]. We take
[]. We take
[]. We take
[]. If
, then we set
If , then we take
By induction over the size of ,
it is straightforward to verify that
and
for all
.
Remark for all
,
then case 4 with
never occurs, and it is easily
checked that
and
for all
. We will show in proposition
7.3 below that
.
It follows that
and
are
well-defined grid-based languages.
Proof. The inclusions are clear if . So assume that
and
let
. Let us prove by
induction over the minimal size of a derivation tree for
with
that
and
. We distinguish the
following cases:
We must have . Now
and
.
Let be such that
. Then we must have
, whence
.
By the induction hypothesis, we get
and
.
We obtain with
and
. By the induction
hypothesis, we have
,
and
.
It follows that
and
.
If , then
with
. By
the induction hypothesis, we have
and
. Since
, it follows that
and
. If
, then
with
and
.
By the induction hypothesis, we get
and
. Since
, we also have
and
. It follows that
and
.
The other inclusions are proved in a similar way.
Proof. The inclusions are clear if . Assume therefore that
. If
,
then also
, whence
and
.
Otherwise, we must have
,
since
and
.
We conclude by proposition 7.3.
,
and
are
grid-based.
Proof. For languages of the form there is nothing to prove. The case when
for all
has also been dealt with in remark 7.2.
Consider a cycle which involves a language of
the form
. Then none of the
can be in
,
since none of the languages in
depend on
. Consequently, the cycle is of the
form
with for all
.
For all
, let
if
and
otherwise. Then, using proposition 7.4,
is again a cycle with .
Let us next consider a cycle which involves a
language of the form
. Then
none of the
can be a language of the form
,
,
or
,
with
. Consequently, the
cycle is of the form
with for all
.
For all
with
or
and
or
, let
.
For the other
, let
. Then, using proposition 7.4,
is again a cycle with .
,
and
extend to
by strong linearity.
Proof. The proposition is clear for . For
with
, we also have
and
. Given
with
, let
We also define
Let us prove by induction over the size of that
and
are finite. Now
By the induction hypothesis, the sets at the right hand sides are
finite.
,
,
on
are projections with
Also, the restriction of to
is a projection on
.
Proof. The operator is
clearly a projection with
.
By construction, we also have
and
for
resp.
. Given
with
, it follows that
and similarly for
.
We conclude by proposition 7.3.
Proof. The result is clear when
or
. Assume therefore that
. If
, then
.
If
, then
and
.
Let us denote by the flattened support of
in
. By
proposition 6.6, there exist sets
,
and
of
with
Proof. It suffices to prove both bounds for
monomials . The bound (7.2) directly follows from the fact that
. Taking
we prove (7.3) by induction over the size of . In the case when
, we have
and we are done,
so assume that
. If
, then
and
we are again done. If
, then
Assume now that . If
, then we denote
, so that
and
. Let
.
We distinguish three cases:
[ and
]
Since
, we have
. This shows that
, since
and
.
[ and
]
Since
, we have
It follows that with
and
. Since
, we also have
with
and
.
Hence
.
[] Let
,
,
,
,
,
and
be as in the
following diagram:
with and
,
so that
. If
, then
,
whence
and
.
If
, then
, whence
and
. Again, we obtain
.
Proof. Let be such that
and
for all with
.
We claim that for all
, we
have
Indeed, using induction over the size of ,
we are always in one of the following cases:
By strong linearity it follows that for all
. Using a similar induction
it can be checked that
for all . Indeed, in the
product case, we have
We deduce that for all
. Setting
,
with
as in the proof of proposition 6.8,
we finally claim that
for all . Indeed, using
induction, we have
in the multiplicative case and
in the integral case. We conclude that for all
.
Let be a grid-based subset of
and
. We will denote by
the set of all series
for
which there exists a grid-based language specification
with
and
for some
.
for all
.
Given and
,
let us now define the grid-based operator
We proceed by induction over
The case has been dealt with in the previous
section. Assume therefore that
.
Let
be the maximal element of
. Then
.
Moreover,
, so we already
have constructed
on
. Given
we now define
Similarly, given a grid-based language ,
we define
so that for all
.
Since
, we notice that
and
are well-defined. We also
notice that the definitions extend the previous definitions on
if
resp.
, since
and
are respectively the zero and the identity operator on
. By strong linearity, we
thus obtain the desired operator
on
, which extends its previous definition on
. Since any
belongs to
for some
and
, it follows that
is defined on
.
Proof. Assume that .
Let us show by induction over
that
. We have already treated the case
in proposition 7.10. In particular,
, with
as
above. Now
(♦∈{≍,≺})
We conclude by the induction hypothesis.
,
and
on
are projections with
Furthermore, the restriction of to
is a projection on
.
Proof. Given ,
let us prove by induction over
that
implies
. If
, then we are done by
proposition 7.7. Otherwise, we have either
or
. In the first case, we
obtain
and
.
In the second case, we get
and
, by induction. Similarly,
implies
and
implies
. It follows that
,
and
are the identity operators on
,
resp.
.
Given a grid-based language ,
let us now prove by induction over
that
,
,
and
whenever
. If
,
then we are done by proposition 7.7. Otherwise, we have
(where
is the maximal
element of
), so the
induction hypothesis implies
and if
.
Since
, we conclude that
.
Proof. Without loss of generality, we may assume
that . We prove that
on
by induction over
. If
,
then we are done by proposition 7.8. So assume that
. Let
if
and
if
. Setting
,
we have
. The induction
hypothesis combined with strong linearity implies
(8.1)
Now we distinguish three cases:
If , then
If , then we get
If , then
If , then
, so that
If , then
, whence
Let if
and
if
,
so that
. Then
and similarly . Now
and
commute, since
we have either
or
or
. From (8.1),
we conclude that
.
We finally have to extend ,
and
from
to
. By
induction, we may assume that we have done this for
with
and that propositions 8.5 and
8.6 below already hold for
and
instead of
and
. In particular,
is a
projection of
on
.
Now consider a monomial which is not in
. Then we set
Similarly, given a language but
, we set
so that for each
and
. Notice that the defining
relations remain valid if
or
, by the induction hypotheses. By strong
linearity, the operators
,
and
on
therefore extend to
.
Proof. If is the largest
element of
, then we are done
by proposition 8.1. Otherwise, we use induction and assume
that we have proved the assertion for all larger
. Given
,
it follows that
. Now
(♦∈{≻,≺})
We conclude by proposition 8.1.
,
and
on
are projections with
Furthermore, the restrictions of and
to
are projections on
resp.
.
Proof. If ,
then we are done by proposition 8.2. If
, then the induction hypotheses at the
beginning of this section imply that the above properties are already
satisfied for the operators
,
and
.
Assume now that is such that
and
. If
, then proposition 8.2 implies
. Otherwise, the induction
hypothesis implies
and
. This shows that
.
In a similar way, if
is such that
and
, then
. Finally, if
is such that
and
, then
,
so
, by proposition 8.2.
Given a grid-based language ,
the induction hypothesis and proposition 8.2 also imply
Finally, if , then we obtain
This completes the proof.
Proof. If ,
then we are done by proposition 8.3. If
, then the induction hypotheses at the
beginning of this section imply that we have commutation when
is replaced by
.
Notice that we do not necessarily have
contrary
to before, but merely
.
Let us first assume that .
Let
and denote
if
and
otherwise. Then
, so the induction hypothesis
implies
Moreover, by proposition 8.3, and
commute on the image of
. We conclude that
and
commute everywhere.
Assume now that . Let
and denote
if
and
otherwise. If
, then
,
and the induction hypothesis implies
Assume now that . We have
already shown above that
(or
) and
(or
or
or
) commute. We also have
. Consequently,
This completes the proof.
Let us now consider an extension of
the transbasis
with a new element
between
and
. Likewise, assume that we have extensions
of
,
of
with
and
of
,
together with a logarithmic derivation on
which
extends the one on
.
Denote the truncation operators on as
constructed above by
for all
,
and
. If
and
, then we want to show that the restriction of
to
coincides with
. Denoting by
the canonical injection of
into
, we thus have to show that
.
In the cases when or
and
, the definitions of
and
clearly coincide. Assume
therefore that
. If
, then
Now for we have
and
, whence
and
. It follows that
and
,
whence
. If
, then
Indeed, in a similar way as above, we have
Furthermore, for we have
, since
.
In this section, we will construct the representation algebra of integral exponential transseries, with
. Using induction over
, we first construct a differential representation
algebra
of grid-based integral series, with
underlying monomial group
,
and such that
For each , there exists
a “privileged”
with
.
For all , there exists
a triangular differential representation subalgebra
of grid-based series with
.
In IH2, we call a
triangular set for
.
We notice that the union of two triangular sets is again a triangular
set.
Given with
,
we claim that the operators
with
are naturally defined on
.
Indeed, given
, there exists
a triangular set
for
,
and
. In sections 7 and 8, we
have shown that
is a well-defined element of
. Moreover, because of
section 8.5, the value of
does not
depend on the choice of
.
Taking and
,
the induction hypotheses IH1 and IH2
are clearly satisfied for
.
Assume that
has been constructed and let
The set
is clearly a monomial group with -powers
and we have a natural mapping
defined by
We define a logarithmic derivative on as
follows. Given
with
, we may write
with
,
and
. Now we set
Since , we notice that
.
We extend the mapping into a mapping
as follows. If
is such that there
exists an
with
,
then we arbitrarily take such an
and set
. Otherwise, we simply take
.
Let us prove IH2, which will complete the induction.
Given , the set
is finite. By the induction hypothesis, there exists a
triangular differential representation subalgebra
of grid-based series with
.
Taking
, we then obtain a
triangular differential representation subalgebra
of grid-based series with
.
We clearly have . The set
is called the set of exponential integral transseries. For , the smallest
with
is called the exponential height of
. Setting
, the finiteness property implies that
and
.
By induction over the exponential height ,
let us now construct a strongly linear mapping
which maps integral monomials to integral monomials. So assume that
or that
and that we already
have a mapping
. Given a
monomial
, we recursively
define
by
(f=0∨f∈𝔼
This definition has a natural analogue for language specifications:
Hence, extends by strong linearity to
. Now for each
, the representation algebra
of with derivation
is
formally isomorphic to the representation algebra
of
with derivation
.
By what precedes, we may therefore embed each
into
(for instance,
is
identified with
), so that
The set
is called the set of integral transseries. Setting
we have .
Proof. In the case when ,
,
we may take
.
Similarly, if
and
,
then we take
. The general
case is reduced to one of the above cases modulo a sufficient number of
upward shiftings.
with
admits a multiplicative inverse modulo
.
Proof. Writing with
and
, we
may take
where is such that
.
An integral transseries in
is said to be combinatorially convergent if
is combinatorially convergent as an integral series and if
is recursively combinatorially convergent for every
. The inclusions
together with proposition 4.6 imply:
Proof. Given ,
let us prove by induction over the exponential height
of
that
.
For each , we have either
and
,
or
and
,
where
has exponential height
. In the second case, the induction hypothesis
implies that
. Since
it follows that for all
.
Furthermore, the postcomposition with sends
monomials
to monomials
with
. It follows that
This shows that .
Proposition 9.4 implies that and
more generally
We call
the set of combinatorially convergent integral transseries. We denote
,
, etc. the images of
,
,
etc. under
. The
following is an immediate consequence of proposition 9.3:
Proof. We prove the proposition by induction
over the maximal exponential height of
. If
,
then the result is clear, so assume
.
The set
is finite. By the induction hypothesis,
there exists a triangular set
with
and such that
maps
into
. Taking
, we have
. Let us show that
maps
into
.
So let
with
.
Writing
, where
, we have
By the construction of , we
have
. Since
, proposition 8.4 also implies
that
.
and
with
. Then
maps
into itself.
Proof. Given ,
let
be a triangular set with
and such that
maps
into
. By proposition 8.4,
we now have
.
In a similar way as proposition 9.1 and its corollary, one may now prove:
,
and
.
Then
There exists a with
.
There exists an with
.
with
admits a multiplicative inverse modulo
.
In this paper, we have laid the foundations for the formal calculus with integral transseries. We intend to further develop this theory in a forthcoming paper. Let us briefly mention a few points which still have to be investigated in more detail.
First of all, from the foundational point of view, we have chosen to work with “uniform” arcs and cycles. It is also possible to consider “individual” arcs and cycles of the form
such that and for all
, we have
,
and either
or
,
and
.
or
,
resp.
and
.
,
and
.
We expect that languages which verify the weaker “individual” cycle condition can always be rewritten into languages which verify the usual uniform cycle condition, using the technique of “loop unrolling”.
More generally, given an arbitrary regular language , one may define its “descending
part”
: it consists of
those integral monomials
such that any
“cycle in the derivation tree of
”
satisfies the individual cycle condition. Again,
may be computed using the process of loop unrolling. Another application
of loop unrolling in combination with truncation is to rewrite an
arbitrary grid-based language specification
into
an equivalent language specification
modulo
, such that for every
, we have
for all
. In particular, in
the cycle condition, this implies that
for all
.
Another interesting topic from the foundational point of view is to systematically work with operators of either one of the forms
instead of usual integration. These operators have the advantage of
being closer to arborified moulds and may make it easier to develop the
theory of accelero-summation. Secondly, operators of the form with
are naturally
“infinitesimal” on suitable frames, and it should be
possible to rewrite arbitrary grid-based languages as a tree whose leaves
are languages which are constructed using products, infinitesimal
operators and repetition.
Finally, it remains to be shown that the set of integral transseries is stable under many other operations, such as composition and functional inversion (when defined), formal alien differentiation, the resolution of quasi-linear differential equations, and so on. Of course, the consideration of additional operators besides integration, such as infinite summation, is another interesting topic.
J. Écalle and B. Vallet. The arborification-coarborification transform: analytic, combinatorial, and algebraic aspects. Technical Report 2004-30, Univ. Paris-Sud, 2004.
J. van der Hoeven. Automatic asymptotics. PhD thesis, École polytechnique, France, 1997.
J. van der Hoeven. Complex transseries solutions to algebraic differential equations. Technical Report 2001-34, Univ. d'Orsay, 2001.
J. van der Hoeven. Transseries and real differential algebra. Technical Report 2004-47, Université Paris-Sud, Orsay, France, 2004. Submitted to Springer-Verlag's series of Lecture Notes in Mathematics.