We present a probabilistic Las Vegas algorithm for solving sufficiently generic square polynomial systems over finite fields. We achieve a nearly quadratic running time in the number of solutions, for densely represented input polynomials. We also prove a nearly linear bit complexity bound for polynomial systems with rational coefficients. Our results are obtained through a combination of the Kronecker solver and a new, improved algorithm for fast multivariate modular composition.
Let be an effective field. This means that we have a data structure for representing elements of , together with algorithms for performing the field operations. Consider a homogeneous system of polynomial equations
where . We are interested in the exact resolution of such a system through the computation of a parametrization of its zero-set by a so-called primitive element; see section 4.2 for a precise definition.
Throughout this paper, we assume that are given in standard dense representation. The total degree of each is written , and we let . Our algorithm requires the following regularity conditions:
The intermediate ideals are absolutely radical, which means radical over the algebraic closure of , for ;
Conditions
We prove new probabilistic complexity bounds for computing all the solutions in terms of a univariate parametrization by a primitive element. For finite fields , our bound is arbitrarily close to quadratic in the number of solutions whenever : see Corollary 5.5. For rational coefficients of bounded height, we deduce a complexity bound that is nearly linear in the expected bit size of the output of the algorithm: see Theorem 6.11. These results improve upon the best previously known complexity bounds.
We set . In order to simplify the presentation of complexity bounds, we use soft-Oh notation: means that ; see [21, chapter 25, section 7] for technical details. The least integer greater than or equal to is written ; the largest one smaller than or equal to is written .
The -vector space of polynomials of degree is written . The remainder of the division of by is written . In order to express that an expression is explicitly computed modulo , we write .
The Galois ring of characteristic and cardinality , written , is defined as
with monic and irreducible modulo .
The complexity of polynomial system solving is a central question in effective algebraic geometry, which is still open. Except in some very special cases, no softly linear time algorithms are known. In order to simplify the discussion, we assume that for .
One known favorable special case concerns systems of bounded degrees over that satisfy the regularity conditions. Assume that the input polynomials are given as a straight-line program of size over without division (see the definition in [13] for instance), and that we use the computation tree model for our complexity measures. If the cardinality of is sufficiently large and the above conditions hold, then the solutions can be computed using
(1.1)
operations in and with a uniformly bounded probability of failure. The complexity bound (1.1) essentially comes from [27, Theorem 1] by taking into account that ; this is detailed in section 5 below. If the are all bounded, then we notice that , so the bound (1.1) is essentially quadratic. In the other extreme case when is bounded, we rather have , so the bound (1.1) becomes essentially cubic. In this paper, we show how to obtain a nearly quadratic bound for this case as well.
Gröbner bases constitute another cornerstone for polynomial system solving. Algorithms in this setting are usually deterministic, but their complexities are highly intricate in general: they depend on both the system and the admissible ordering attached to monomials. Complexities of unfavorable cases are doubly exponential in ; see a summary of known families in [21, chapter 21]. The first simply exponential bounds of the form for zero dimensional systems are due to Giusti, Lazard and Lakshman in the eighties [23, 45-48].
The constant hidden behind the latter “” may be related to a feasible exponent for linear algebra, in the sense that any two square matrices of size may be multiplied with operations in their coefficient ring. Basically, one may perform Gaussian elimination on Macaulay matrices: for instance, under
At present, it is not known how to exploit the structure of Macaulay matrices to perform Gaussian elimination in time quadratic in or even in . One contribution to this problem, due to Canny, Kaltofen, and Yagati [14], relies on sparse polynomial interpolation and the Wiedemann solver. Further important results based on structured linear algebra have been established by Mourrain, Pan, and Ruatta [56]: they proved that the number of roots and of real roots can be computed with floating point operations, and that all the (real) roots in a given box up to the error can be computed with floating point operations, where and depend linearly on as well as on the conditioning of the system. Yet another important advance here is Faugère's F5 algorithm, which avoids computing useless rows of Macaulay matrices. However, its worst case complexity remains essentially cubic in [3, 4]. Other methods derived from or inspired by Macaulay's work have been developed by Mourrain and Trébuchet [57-59].
For Gröbner bases of zero dimensional systems, monomial orderings may be changed quite conveniently by means of linear algebra. This is known as the FGLM algorithm (named after the initials of its authors, Faugère, Gianni, Lazard and Mora [20]) and involves a cost . Recently, Faugère, Gaudry, Huot and Renault [19] decreased it to . Nevertheless one must keep in mind that the best practical values for are only about , which is far from the best known theoretical bound , due to Le Gall [49].
Another aspect of the present paper concerns multivariate modular composition, that is, the computation of where , has degree , and are in . When , the best known complexity bound for any field is due to Brent and Kung [11]. Their algorithm even yields a sub-quadratic cost when using fast linear algebra; see [40, p. 185]. Here the constant is such that a matrix over may be multiplied with another rectangular matrix with operations in . Huang and Pan proved that ; see [38, Theorem 10.1]. In [35, section 3] we extended Brent and Kung's result to the multivariate setting. A major breakthrough for the modular composition problem is due to Kedlaya and Umans [41, 42], in the case when is a finite field and, more generally, a finite ring of the form for any integer and monic of degree . For any fixed rational value , they showed that can be computed modulo in time whenever the partial degrees of are , and under the condition ; see [42, Theorem 7.1]. Extensions to compositions modulo triangular sets can be found in [63].
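As a concrete illustration of Brent and Kung's baby-step/giant-step idea for univariate modular composition, here is a minimal Python sketch over a small prime field. The helper routines use naive quadratic arithmetic; the complexity claims in the text rely on fast polynomial multiplication and fast linear algebra instead, so this is only a specification-level sketch.

```python
# Sketch of Brent-Kung modular composition f(g) mod h over F_P.
# Polynomials are coefficient lists, lowest degree first.
from math import isqrt

P = 101  # an arbitrary small prime, for illustration only

def polymul(a, b):
    out = [0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] = (out[i + j] + x * y) % P
    return out

def polymod(a, h):
    a, dh = a[:], len(h) - 1
    inv = pow(h[-1], -1, P)
    while len(a) - 1 >= dh:
        if a[-1]:
            c = a[-1] * inv % P
            for i in range(dh + 1):
                a[len(a) - 1 - dh + i] = (a[len(a) - 1 - dh + i] - c * h[i]) % P
        a.pop()                       # the leading coefficient is now zero
    while len(a) > 1 and a[-1] == 0:
        a.pop()
    return a

def padd(a, b):
    if len(a) < len(b):
        a, b = b, a
    return [(x + (b[i] if i < len(b) else 0)) % P for i, x in enumerate(a)]

def brent_kung(f, g, h):
    n = len(f)                        # number of coefficients of f
    m = isqrt(n - 1) + 1              # block size ~ sqrt(deg f)
    pows = [[1]]                      # baby steps: g^0, ..., g^m mod h
    for _ in range(m):
        pows.append(polymod(polymul(pows[-1], g), h))
    res = [0]
    for k in range((n + m - 1) // m - 1, -1, -1):  # giant steps: Horner in g^m
        block = [0]
        for i, c in enumerate(f[k * m:(k + 1) * m]):
            block = padd(block, [(c * y) % P for y in pows[i]])
        res = padd(polymod(polymul(res, pows[m]), h), block)
    return polymod(res, h)
```

The sub-quadratic cost mentioned in the text comes from organizing the giant steps as a matrix product, which this sketch does not attempt.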
The first technical but important contribution of this paper concerns the refinement of Kedlaya and Umans' complexity result on modular composition [41, 42]. In fact, in Corollary 3.6 we achieve a sharper complexity bound in terms of the total degree of , that is softly linear in the bit size of the coefficients, and that holds without the assumption . We also handle fast evaluations of in a larger algebra of the form , where and is monic in of degree : this is a particular case of composition modulo triangular sets studied in [63], for which we also improve upon the dependency in the coefficient size. These new results are based on our recent advances on multivariate multi-point evaluation algorithms [36]. At a more abstract level, our algorithm applies to any field over which fast multi-point multivariate polynomial evaluation exists. We recall that fast multivariate multi-point evaluation remains an important open problem: no efficient algorithms are known over a general abstract field and we are not aware of any efficient implementations of Kedlaya and Umans' method; for some recent progress on special cases, we refer to [33, 34].
Our main contributions concern the complexity of polynomial system solving. We first prove a Las Vegas type probabilistic bit complexity bound over the finite field , for the dense representation of input polynomials, and where is any fixed rational value: see Corollary 5.5. Whenever , this bound is arbitrarily close to quadratic in the number of roots of the system. This improves upon previously known bounds. For example, the complexity bound (1.1) simplifies to whenever . With input polynomials given in dense representation, the quantity is of the order . If all the remain bounded and if tends to infinity, then we have
Consequently, the expected complexity of the Kronecker solver becomes , in terms of the number of arithmetic operations in . However, if the are no longer bounded, then may grow with and the worst case complexity of the solver becomes .
Our next main result concerns the case . We assume that have integer coefficients of bit size , and we achieve expected time . Whenever , this time turns out to be nearly linear in the bit size of the output: see the precise bound in Theorem 6.11. This complexity bound admits major consequences for symbolic real solving, especially for algorithms based on the Kronecker solver; see for instance [2] and references therein. Once a Kronecker parametrization is known, it is also possible to appeal to univariate numerical solvers to deduce complex or real approximations of the roots of the system; see for instance the book [55].
Let us briefly describe the structure of this paper. Section 2 is devoted to prerequisites about the complexity model and fast multi-point polynomial evaluation. Section 3 deals with various particular cases of multivariate modular composition involved in the Kronecker solver. Section 4 concerns the algebraic data structures needed by the solver, along with general position and some of the randomness aspects. The solver itself is recalled in section 5, which ends with our main result over finite fields. Our main complexity bound over the rational numbers is stated in the final section 6.
It may be helpful to notice that the probability analysis of our algorithms contains two main stages. On the one hand, we need to put the system in general position and prepare the main data structures. This part is handled in section 4. On the other hand, for the sake of efficiency, some subalgorithms of the Kronecker solver involve additional random parameters that must be taken outside certain proper closed subvarieties (see Propositions 5.2 and 5.3), or integer parameters that must be coprime with a certain set of bad primes (see Lemma 6.7). The probability of picking suitable random parameters of this kind will be analyzed separately.
Throughout this paper, we will be working in the Turing machine model [61] with sufficiently many tapes (seven tapes are usually sufficient in order to implement basic routines on polynomials, series, and matrices in a natural way). The Kronecker algorithm is randomized, so it also requires a special instruction to generate a random symbol in one cell within constant time.
In some cases, algebraic structures have a natural bit size (e.g. modular integers, finite fields); in other cases the size is variable (e.g. arrays, polynomials). In both cases elements are represented on tapes as sequences of symbols, followed by a specific termination symbol “#”. The reader must keep in mind that the heads of the machine can only move one cell left or right at each time step. Algorithms must take care to use data in as contiguous a way as possible, and loop counters cannot be used for free.
The rest of the section gathers standard data types and elementary operations needed in the next sections. We freely use well known complexity bounds on polynomials and matrices from [21]; details about Turing machine implementations of these algorithms can be found in [65].
We use binary representation for integers. A modular integer in is represented by its natural preimage in . Integers can be added in linear time and multiplied in softly linear time ; see [30] and historical references therein. Integer divisions and extended gcds in bit size also take softly linear time [64]. We will also use truncated -adic integers, for which we refer the interested reader to [6] for practical algorithms.
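The extended gcd mentioned above is what underlies modular inversion; as a reminder of the specification (though not of the softly linear "half-gcd" algorithm of [64], which is considerably more involved), here is the classical quadratic version:

```python
# Classical extended Euclidean algorithm and the modular inverse built on it.
# This is the naive quadratic variant; the text's softly linear bound uses
# the asymptotically fast half-gcd instead.
def ext_gcd(a, b):
    # returns (g, u, v) with u*a + v*b == g == gcd(a, b)
    u0, v0, u1, v1 = 1, 0, 0, 1
    while b:
        q, r = divmod(a, b)
        a, b = b, r
        u0, u1 = u1, u0 - q * u1
        v0, v1 = v1, v0 - q * v1
    return a, u0, v0

def inv_mod(a, n):
    g, u, _ = ext_gcd(a % n, n)
    if g != 1:
        raise ValueError("not invertible")
    return u % n
```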
One dimensional arrays are sequences of elements terminated by “#”. For example the vector is stored as .
For bidimensional arrays we use column-major representation. This means that an array of size ( rows and columns) is stored as the vector of its columns, i.e. . Such arrays are allowed to contain elements of different types and sizes. For example the matrix over is stored as . In the Turing machine model, we recall that the transposition of a bidimensional array can be achieved in softly linear time.
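The column-major convention and the effect of transposition on the flat storage can be illustrated in a few lines (the function names are ours; this sketch of course ignores the tape-locality issues that make the softly linear Turing machine bound non-trivial):

```python
# Column-major serialization of a 2-D array onto flat storage, and a
# transposition that merely permutes that storage.
def to_column_major(rows):
    r, c = len(rows), len(rows[0])
    return [rows[i][j] for j in range(c) for i in range(r)], r, c

def transpose_flat(flat, r, c):
    # column-major (r x c) storage -> column-major (c x r) storage
    return [flat[j * r + i] for i in range(r) for j in range(c)], c, r
```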
For univariate polynomials we use dense representation, which means that a polynomial of degree is stored as the vector of its coefficients from degrees to . Additions and subtractions take linear time in . Multiplying two polynomials over finite fields may be achieved in softly linear time [15, 29, 31].
In the present paper we consider finite rings of the form with and monic of degree . Elements of are stored as their natural preimage in with coefficients in . Additions and subtractions in take linear time, products take time ; see for instance [21, part II].
For a polynomial we use the recursive dense representation, by regarding as an element of . In particular, admits the same representation as its expansion as a univariate polynomial in . In our algorithms, the number of variables is not part of the representation of , so it must be supplied as a separate parameter.
The support of is defined as the set of monomials with non-zero coefficients and we write for its cardinality. Assuming that, apart from the mandatory trailing “#” symbol, the representations of coefficients in do not involve other “#” symbols (this can always be achieved through suitable renamings), we denote the number of “#” symbols involved in the representation of by . We notice that . For the remainder of this subsection, we assume that the size of elements in is bounded by a constant .
Example
Proof. In order to compute the partial derivative in , we run over all the coefficients of in time by Lemma 2.6.
If we are interested in the derivative in with , then we reinterpret as an element of and its derivative requires time .
Proof. We use Horner's method to evaluate univariate polynomials over at in softly linear time. In total, the machine runs over the entire representation of and the time spent in arithmetic operations is at most .
where is the cardinality of the support of in the variables .
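The recursive dense representation and the Horner evaluation used in the proof can be sketched as follows, viewing a polynomial in n variables as a nested coefficient list, with the outermost level indexed by powers of the last variable (the function name is ours):

```python
# Evaluation in the recursive dense representation: Horner's rule at every
# nesting level, mirroring K[x1,...,xn] = K[x1,...,x_{n-1}][xn].
def eval_recursive(f, point, n):
    # f: nested coefficient lists; point: (a1, ..., an); n: number of variables
    if n == 0:
        return f                       # a constant of the base ring
    acc = 0
    for coeff in reversed(f):          # Horner in the outermost variable xn
        acc = acc * point[n - 1] + eval_recursive(coeff, point, n - 1)
    return acc
```

For instance, 1 + 2·x1 + 3·x1·x2 is stored as [[1, 2], [0, 3]]: the inner lists are the coefficients in x1 of the powers of x2.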
At several places we will use the following bound on the cardinality of the support of a polynomial in variables and total degree :
(2.1)
For the sake of efficiency, a homogeneous polynomial in the variables has a dedicated representation made of and . In this way, can be recovered as .
The problem called multivariate multi-point evaluation over an effective ring is the following: given and points in , compute . The first statement below concerns the cost of the naive evaluation algorithm in terms of the total degree of .
In terms of the partial degrees of , we will need the following better complexity bounds that refine previous work by Kedlaya and Umans [41, 42]. The first theorem takes partial degrees into account, while the second one is in terms of the total degree.
The Kronecker solver needs to apply a linear change of variables, given by a matrix , to the input homogeneous polynomials. Let be such a polynomial in of total degree , and let be a matrix over . We need to compute
A natural idea is to interpolate from its values at , where is a subset of of cardinality . Such an interpolation is known to be computable in time ; see Lemma A.8 of the appendix. On the other hand the values of at may be obtained in time by [36, Proposition 5.7], for any fixed rational value . However, this approach suffers from two drawbacks: it takes neither the homogeneity of nor the structure of into account. At the end of Appendix A, we show the following sharper complexity result:
This section is devoted to multivariate modular composition, that is, the evaluation of a multivariate polynomial in at a point in , where with monic and . As recalled in the introduction, no fast algorithm is known for this task over a general ring or field . For the Kronecker solver presented in this paper, the following particular instances of are needed: and , along with tangent numbers over these rings. In this section, we design fast algorithms for these cases, on the basis of Kedlaya and Umans' ideas, and new variants from [36].
If , then is the finite field , whereas corresponds to the ring of the -adic integers truncated to precision . Until the end of the section we set
(3.1)
and
where is a monic polynomial of degree in .
Let represent a cost function for the multi-point evaluation of -variate polynomials over with partial degrees , total degree , and for evaluation points. The following algorithm reduces modular composition to multi-point evaluation.
Algorithm
Write
for the canonical preimage of in , where and belong to and have partial degrees in and in . In a similar manner write
for the preimage of in .
Build points whose reductions modulo are pairwise distinct.
For and for , compute
For compute and .
For and for , compute
Interpolate the polynomials and in , of partial degrees in and in , and such that and for all and .
Return reduced modulo .
Proof. First we verify that
Therefore, for all and we obtain
Since has partial degrees in and in , we deduce
whence the correctness of the algorithm.
The rewriting in step 1 takes linear time. The construction of the points in step 2 costs . In step 3 we appeal to Lemma 2.2 in order to obtain
in time , and then to obtain
for in time . Notice that the required data reorganizations can be done in softly linear time thanks to Lemma 2.1.
The cost of step 4 is provided by Lemma 2.11,
and simplifies to by (2.1) and Lemma 2.6.
The derivations in step 5 incur by Lemma 2.9. Taking into account data reorganizations via Lemma 2.1, the total cost of step 5 is
In step 6, the interpolations amount to by Lemma 2.3. Finally step 7 takes additional time .
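The evaluation-interpolation principle behind this reduction can be demonstrated on a toy bivariate instance over a prime field. This is not the algorithm above verbatim (which works with preimages over the base ring and per-coefficient interpolation), but the same idea, assuming the field has enough points; the quadratic Lagrange interpolation stands in for the fast routines the actual complexity bounds require.

```python
# Toy modular composition over F_P: to compute f(a1, a2) mod mu, evaluate
# a1, a2 at many points, evaluate f at the resulting pairs, interpolate the
# unreduced composition, and finally reduce modulo mu.
P = 97

def polymul(a, b):
    out = [0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] = (out[i + j] + x * y) % P
    return out

def polymod(a, h):
    a, dh = a[:], len(h) - 1
    inv = pow(h[-1], -1, P)
    while len(a) - 1 >= dh:
        if a[-1]:
            c = a[-1] * inv % P
            for i in range(dh + 1):
                a[len(a) - 1 - dh + i] = (a[len(a) - 1 - dh + i] - c * h[i]) % P
        a.pop()
    while len(a) > 1 and a[-1] == 0:
        a.pop()
    return a

def poly_eval(a, x):
    acc = 0
    for c in reversed(a):
        acc = (acc * x + c) % P
    return acc

def lagrange(xs, ys):
    res = [0] * len(xs)
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        num, den = [1], 1
        for j, xj in enumerate(xs):
            if j != i:
                num = polymul(num, [(-xj) % P, 1])
                den = den * (xi - xj) % P
        c = yi * pow(den, -1, P) % P
        for k, v in enumerate(num):
            res[k] = (res[k] + c * v) % P
    return res

def modular_composition(f, a1, a2, mu):
    # f: dict {(i, j): coeff} for x1^i * x2^j; a1, a2, mu: coefficient lists
    d = max(i + j for (i, j) in f)          # total degree of f
    N = d * (len(mu) - 2)                   # degree bound for f(a1(z), a2(z))
    xs = list(range(N + 1))                 # needs P > N distinct points
    ys = [sum(c * pow(poly_eval(a1, x), i, P) * pow(poly_eval(a2, x), j, P)
              for (i, j), c in f.items()) % P for x in xs]
    return polymod(lagrange(xs, ys), mu)
```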
Our main complexity bound for modular composition is presented in the next theorem, under the condition that sufficiently many evaluation points are at our disposal. For convenience we recall the following lemma.
(3.2)
Let be of total degree and let be in , with monic in of degree . Then can be computed in time
Proof. Let be such that
First we examine the case . Lemma 3.2 and the fact that the function
is nondecreasing yield
so we have . By adapting Lemma 2.12 to , the cost of the naive modular evaluation is , since .
From now on, we assume that holds. We set . If is bounded then so is and the above adaptation of Lemma 2.12 may again be used to achieve a naive evaluation cost in . Consequently, we may further assume that is sufficiently large to satisfy
(3.3)
We perform the multivariate Kronecker segmentation of into which has variables and partial degrees as follows. We let
and introduce the Kronecker substitution map
We set and . Observe that we have and . From
and
we deduce that
Still by allowing to be sufficiently large we may further assume that
(3.4)
On the one hand, by [36, Proposition 5.1] and (3.3), the computation of takes time
On the other hand, for we compute
by binary powering, in time . We may now compute as
Since (3.4) ensures
the condition (3.2) allows us to combine Proposition 3.1 and Theorem 2.13. This yields the complexity bound
(using (3.4))
for a suitable constant . If , then this bound further simplifies to
Otherwise we have which implies that . In this case, the condition (3.2) allows us to combine Proposition 3.1 and Theorem 2.14, in order to obtain in time
which simplifies to thanks to .
Overall we have proved a complexity bound . Taking into account that , from (3.3), it follows that , which concludes the proof.
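The Kronecker segmentation used in this proof is easiest to see in its univariate form: a polynomial of degree below b^k in x is rewritten as a polynomial in k variables with partial degrees below b, by splitting each exponent into its base-b digits, so that substituting x^(b^j) for the j-th new variable recovers the original polynomial. A sketch, with hypothetical helper names:

```python
# Univariate Kronecker segmentation: exponents are split into base-b digits.
def segment(coeffs, b, k):
    # coeffs: univariate polynomial of degree < b**k, lowest degree first;
    # returns a dict mapping exponent tuples (e0, ..., e_{k-1}) to coefficients
    g = {}
    for e, c in enumerate(coeffs):
        if c:
            digits = []
            for _ in range(k):          # base-b digits of the exponent e
                e, r = divmod(e, b)
                digits.append(r)
            g[tuple(digits)] = c
    return g

def unsegment(g, b, x):
    # evaluate the segmented polynomial at y_j = x**(b**j): recovers f(x)
    return sum(c * x ** sum(d * b ** j for j, d in enumerate(ds))
               for ds, c in g.items())
```

The proof applies the same digit-splitting to one variable of a multivariate polynomial in order to trade a large partial degree for more variables of small partial degree.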
In order to generalize Theorem 3.3 to small residue fields, namely whenever inequality (3.2) does not hold, we extend the cardinality of the ground ring. We will rely on the following construction.
for some polynomials and in . The polynomials , and can be computed in time . Each application of and takes time .
Proof. The polynomial can be obtained by using Ben-Or's randomized algorithm [21, Theorem 14.42]. Since and are coprime, a natural candidate for is the composed product of and (usually written ),
(3.5)
It is well known that is irreducible of degree . Let , that can be computed in time . We construct as the monic part of the resultant in and we write for the corresponding subresultant of degree in . For subresultants over Galois rings, we need to compute the necessary minors of the Sylvester matrix by means of Berkowitz' algorithm [5]. In this manner , , and can be obtained in time .
It is well known that is invertible modulo , so we compute and in time .
Let us consider an element represented by , and
where . Equivalently, we have
which corresponds to the usual Chinese remaindering formula associated to the factorization (3.5) of .
We observe that for all roots of . Consequently, for all roots of with , we have . It follows that actually represents . It can be computed in time .
In the other direction, given a preimage of , we obtain as , in time .
Remark
The following corollary aims at completing Theorem 3.3 for small residue fields. It will not be used in the sequel because the extension of the residue field will instead be handled at the top level of the Kronecker solver.
Proof. Notice that the total degree of may be obtained in time by Lemmas 2.6, 2.8 and inequality (2.1). It remains to handle the case when (3.2) does not hold in Theorem 3.3. In other words, assume that
We first compute the smallest integer such that
that is
and we set
so is coprime to and
We next apply Lemma 3.4, from which we borrow the notation: of degree is obtained in expected time ; the computation of , , and requires time .
With this construction in hand we set
and
and we proceed as follows for the modular composition problem:
We cast into in time ;
We cast into in time ;
We cast into in time ;
By Theorem 3.3, we can evaluate in in time
We cast into in time .
Adding up, we obtain the claimed complexity bound—up to replacing by from the outset.
Consider a polynomial system over an effective field , which satisfies the regularity assumptions
Under our regularity assumptions, and letting , the coordinate ring has dimension and degree by the Bézout theorem. In particular the system admits distinct solutions in , which are all regular. After applying a generic affine change of coordinates, the Kronecker solver first solves the system then …, and so on until . We first study how to perform this change of variables. Next we explain how positive dimensional solution sets of the intermediate systems are represented.
A homogeneous ideal of is said to be in Noether position when and is an integral ring extension of , where . Let be the column vector with entries . If is an invertible matrix over , then we write
If is generated by a proper regular sequence , then we say that a matrix puts the intermediate ideals in simultaneous Noether position if is in Noether position for all . Let us now examine the probability that this happens for an upper triangular matrix with ones on the diagonal and random entries above the diagonal.
where the are taken at random in . Then the probability that puts into simultaneous Noether position is at least .
Proof. We use the incremental method described in the proof of [18, Theorem 1.9] as follows (for historical references, we also refer to [18]). The values are taken such that the coefficient of in is non-zero. For such values the ideal is in Noether position.
Then we consider the determinant of the multiplication endomorphism by in : it belongs to and has degree . The values such that the coefficient of in is non-zero put into Noether position.
By induction we consider the determinant of the multiplication endomorphism by in : it belongs to and has degree . The values such that the coefficient of in is non-zero put into Noether position. By taking into account the fact that , the Schwartz–Zippel lemma [66, 69] leads to a probability of success at least
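The Schwartz–Zippel bound invoked at the end of this proof can be checked exhaustively on a toy example: a non-zero n-variate polynomial of total degree d has at most d·q^(n-1) zeros in F_q^n, so a uniformly random point avoids its zero set with probability at least 1 - d/q. The specific polynomial below is our own arbitrary choice.

```python
# Exhaustive check of the Schwartz-Zippel bound over a small field.
from itertools import product

q, n, d = 7, 2, 3

def f(x, y):                    # an arbitrary non-zero polynomial, total degree 3
    return (x * y * y + 2 * x * x + 3 * y + 1) % q

zeros = sum(1 for pt in product(range(q), repeat=n) if f(*pt) == 0)
# Schwartz-Zippel guarantees zeros <= d * q**(n - 1)
```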
Let be an absolutely radical ideal of such that the coordinate ring is zero dimensional—here we consider the affine case when is not necessarily homogeneous. A polynomial is said to be a primitive element for if the projections of its powers generate as a -vector space. Let represent the degree of (i.e. its dimension as a -vector space).
Proof. The minimal polynomial of in actually belongs to when the are regarded as parameters, and the total degree in the is ; see for instance [18, Corollary 3.4]. The points to be avoided are the zeros of the discriminant of , seen as a polynomial of total degree in . We conclude by the aforementioned Schwartz–Zippel lemma.
For a primitive element there exist unique polynomials such that , is monic and separable, , and
The polynomials are called the parametrization of by . Equivalently, we may define for , whence
These polynomials are called the Kronecker parametrization of by ; such a parametrization is uniquely determined by .
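As a toy illustration (not taken from the text), consider the system x + y - 3 = 0, xy - 2 = 0, whose two solutions are (1, 2) and (2, 1). Taking u = x as primitive element, a Kronecker parametrization reads:

```latex
q(T) = T^2 - 3T + 2, \qquad q'(T) = 2T - 3, \qquad
v_y(T) = q'(T)\,(3 - T) \bmod q(T) = 3T - 5,
```

so the solutions are (T, v_y(T)/q'(T)) for the roots T = 1, 2 of q: at T = 1 we get y = (3 - 5)/(2 - 3) = 2, and at T = 2 we get y = (6 - 5)/(4 - 3) = 1. Multiplying the coordinate functions by q' is precisely what keeps the v_i polynomial and makes the parametrization uniquely determined.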
Proof. Each has dimension as a -vector space [18, section 2.4 and the relationship to the geometric definition of the degree of an algebraic variety in Corollary 3.10]. By the previous lemma and the fact that , the probability that is primitive for all is at least
Let satisfy
for . Thanks to Noether positions, the system has all its roots in the affine space. More generally given a point all the roots of are in the affine space. A point is called a lifting point if is absolutely radical.
Proof. Without loss of generality we may assume that is algebraically closed, so we may consider a primitive element that is common to all , thanks to Lemma 4.3. The minimal polynomial of in is homogeneous in of degree , and is monic and separable in . Any point such that the specialization of at is separable ensures that is absolutely radical; see for instance [18, Proposition 3.6]. The probability of picking such a point at random in is at least by the Schwartz–Zippel lemma. The probability that satisfies the requested property is thus at least
again by using .
Assume that we are given a polynomial system that satisfies ; then
Proof. We first pick a matrix at random as in Lemma 4.1 in order to obtain simultaneous Noether positions. Then comes the choice of the lifting point, for which we use the strategy presented in the proof of Lemma 4.4.
At the end of the resolution, unless the execution has failed, we expect to obtain an invertible matrix over and polynomials such that is separable of degree , the have degree , and
On the other hand, being absolutely radical, its roots are all regular, is separable, and the value of the Jacobian matrix of in at is invertible modulo ; see for instance [18, section 4.2].
One natural question at the end of a probabilistic resolution process is to determine whether the computed solution set is correct or not. In the case of sufficiently generic systems, namely satisfying and
is separable of degree , the have degree ,
,
for all ,
the Jacobian matrix in of is invertible at modulo .
Then,
Let be a fixed rational value, and assume that ; then the above conditions can be checked in time .
Proof. The assumptions ensure that the system admits distinct isolated regular solutions in the affine subspace defined by . By Bézout's theorem, this system therefore has no solutions at infinity, namely in the hyperplane defined by .
For , let
denote the variety of zeros of .
Let us first show that
whence . Therefore the number of isolated roots of would be
which contradicts the fact that admits distinct isolated regular solutions.
Let be in . We consider a non-zero homogeneous polynomial in . We write into with primitive as a polynomial of and a monomial in . In particular is monic in and vanishes at all the points of , so it belongs to . This shows that the class of in is integral over when seen as an element of . We deduce that is in Noether position; for instance by using [18, Theorem 1.12].
Now let us show that R is satisfied. Consider an irreducible component of for some index and then a point in , which is non-empty in the projective setting. This point is a solution of , and as such, the value of the Jacobian matrix of at has full rank by hypothesis. Therefore, the Jacobian matrix of has full rank over a Zariski dense subvariety of . It follows that the primary component associated to is necessarily prime. On the other hand, R implies that is unmixed by [53, chapter 7, section 17]; in other words, has no embedded primary ideal. Consequently is absolutely radical.
Assume now that , and let us examine complexities. Testing the separability of boils down to computing its gcd with , which takes time . We compute in time by Proposition 2.15. Then we compute the Jacobian matrix in of in time by Lemma 2.9. The evaluations of all the and of at in can be done in time by means of Theorem 3.3—recall that these homogeneous polynomials to be evaluated are already represented by their specialization at . Finally the determinant of the latter value of is obtained without division thanks to Berkowitz' algorithm [5] in time . Testing the invertibility of this determinant simply requires additional time .
Let be an effective field and let be homogeneous polynomials in . Throughout this section, we assume that conditions hold and that a parametrization of by a primitive element is given, so that
We will build primitive elements for all intermediate ideals from a unique tuple . In order to obtain a similar parametrization for , we apply the two following steps.
The parametrization of is lifted into a parametrization of
of the form
where is a monic polynomial of degree in , and the are in . It turns out that actually belong to and have total degrees ; see for instance [18, Corollary 3.4]. We will compute them by a variant of the Newton–Hensel lifting strategy over .
Geometrically speaking represents the affine curve . The intersection of this curve with the hypersurface corresponds to . The minimal polynomial of modulo will be computed as a resultant, and the rest of the parametrization of is completed by means of a suitable deformation technique. We define to be such that this parametrization is of the form
The lifting step essentially relies on a multivariate variant of the Hensel lemma (or, equivalently, on a multivariate power series analogue of Newton's method). The Jacobian matrix of in is written
The identity matrix of size is written .
Algorithm
Initialize , , ,…, .
Initialize with the inverse of modulo .
While do the following:
update
and over ;
set ;
compute
over ;
compute ;
update computed over ;
for from to ,
update computed over .
For from to , compute over .
Return truncated to order and then regarded in .
then it takes time .
Proof. This algorithm directly comes from [27, section 4]. We do not repeat the correctness proof but focus on the complexity analysis in the Turing model for the dense representation of polynomials.
The partial evaluations of at are achieved in time by Lemma 2.10 and (2.1)—recall that the homogeneous polynomials are actually represented by their specialization at and their total degree. All the partial derivatives needed for the Jacobian matrix can be obtained in time by Lemma 2.9.
In step 2, we compute in time
by Theorem 3.3. Then the inversion of is performed by Berkowitz' algorithm in time .
In step 3 we regard and the entries of as polynomials in : their evaluations at take time by Theorem 3.3. The remaining computations in steps 3 to 5 take time .
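The quadratic Newton–Hensel iteration that drives this lifting step is easiest to see in its simplest univariate p-adic form, lifting a simple root of f modulo p to a root modulo p^(2^k). The algorithm above lifts a whole Kronecker parametrization over power series, but the update rule x ← x - f(x)/f'(x) and the doubling of the precision at every step are the same; this sketch (with names of our own choosing) is only the scalar analogue.

```python
# Univariate p-adic Newton-Hensel lifting of a simple root.
def hensel_lift(f, df, x, p, k):
    # x: root of f mod p with df(x) invertible mod p;
    # returns a root of f modulo p**(2**k)
    prec = p
    for _ in range(k):
        prec = prec * prec                      # double the p-adic precision
        x = (x - f(x) * pow(df(x), -1, prec)) % prec
    return x
```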
For the intersection step we follow the method designed in [27, section 6]. We first compute a deformation of the parametrization of by the primitive element
where the are new variables satisfying for all . Let and let be a shorthand for . The extension of to is parametrized by as follows
where is monic in of degree , and the have degree in , for . Of course, respectively coincide with modulo . In addition we know that the total degrees of in and are ; see [18, section 3].
Algorithm
Compute .
Compute .
For from to do the following:
compute ;
compute ;
compute .
Return truncated to and regarded in .
Proof. This algorithm corresponds to [27, Algorithm 8]. Its correctness is explained in [27, section 6.4]. We adapt the complexity analysis to our context. In steps 2 and 3.a the inverse of modulo is only required modulo . If we were performing this modular inversion over then we would have to handle rational functions of degrees growing with . In order to avoid this coefficient swell we use truncated power series instead of rational functions. In fact, we compute the inverse of modulo for specialized to , then we use classical Hensel lifting to recover . This method works fine whenever the resultant of and does not vanish at . This resultant has degree in .
Algorithm 5.2 needs to shift by in the input and by in the output, which can be done in time using a divide and conquer algorithm; see for instance [7, Lemma 7]. Then is obtained in time . The rest of the algorithm performs polynomial products of cost .
If is a polynomial in of total degree in and , and assuming , then we write
for the polynomial in obtained after replacing by . This substitution can be performed efficiently as follows:
Compute with a softly linear number of operations in ;
Compute . This substitution is applied to each homogeneous component of seen in . In fact if is the homogeneous component of degree then we are led to compute , which basically reduces to computing , and that corresponds to a univariate polynomial shift. Such a shift is known to take softly linear time, as mentioned above.
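The univariate polynomial shift invoked above can be organized by divide and conquer, splitting f into a low half and a high half. The sketch below (our own naming; it uses naive quadratic products, so substituting a softly linear multiplication routine is what yields the softly linear bound cited from [7, Lemma 7]) computes the coefficients of f(x + a).

```python
def poly_mul(a, b):
    """Naive product of coefficient lists (low degree first)."""
    c = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            c[i + j] += ai * bj
    return c

def taylor_shift(f, a):
    """Coefficients of f(x + a), by divide and conquer on f = f0 + x^m * f1."""
    n = len(f)
    if n == 1:
        return f[:]
    m = n // 2
    g0 = taylor_shift(f[:m], a)
    g1 = taylor_shift(f[m:], a)
    p, base, e = [1], [a, 1], m         # p = (x + a)^m by binary powering
    while e:
        if e & 1:
            p = poly_mul(p, base)
        base = poly_mul(base, base)
        e >>= 1
    g = poly_mul(p, g1)                 # (x + a)^m * f1(x + a)
    res = [0] * max(len(g0), len(g))
    for i, c in enumerate(g0):
        res[i] += c
    for i, c in enumerate(g):
        res[i] += c
    return res

# (x + 3)^2 + 1 = x^2 + 6*x + 10
```

The recursion performs two half-size shifts plus one full-size product, so with fast multiplication the total cost stays softly linear in the degree.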
If then this substitution in takes time . We are now ready to complete the intersection step.
Algorithm
Compute .
Compute and .
For from to compute:
;
.
Let .
Compute with in .
Return truncated modulo and seen as polynomials in .
then Algorithm 5.3 takes time .
Proof. The algorithm is borrowed from [27, Algorithm 7]. Again we only focus on the complexity and probability analyses in our context.
First we need to compute along with the corresponding Bézout relation that yields . We wish to apply a fast subresultant algorithm, namely either the one of [21, chapter 11] or the one of [51]. For this purpose, we need to ensure the following properties:
the leading coefficient (in degree ) of is non-zero modulo ,
the leading coefficient, in degree , of is non-zero modulo ,
the non-zero subresultant coefficients from degree to are non-zero modulo .
In fact, these coefficients are the only quantities that need to be inverted during the execution of the aforementioned fast subresultant algorithms.
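For illustration, here is a naive Euclidean-remainder computation of the resultant over the rationals; the fast subresultant algorithms referenced above realize the same recursion with half-gcd techniques, and the leading coefficients inverted at each division step are exactly the quantities listed above. Function names and the quadratic organization are ours.

```python
from fractions import Fraction

def poly_mod(f, g):
    """Remainder of f by g; coefficient lists, low degree first, g non-constant."""
    f = [Fraction(c) for c in f]
    while len(f) >= len(g):
        q = f[-1] / g[-1]               # inverts the leading coefficient of g
        s = len(f) - len(g)
        for i, gi in enumerate(g):
            f[s + i] -= q * gi
        f.pop()
        while len(f) > 1 and f[-1] == 0:
            f.pop()
    return f

def resultant(f, g):
    """res(f, g) by the classical Euclidean recursion."""
    m, n = len(f) - 1, len(g) - 1
    if n == 0:
        return Fraction(g[0]) ** m
    r = poly_mod(f, g)
    if all(c == 0 for c in r):
        return Fraction(0)
    k = len(r) - 1
    return (-1) ** (m * n) * Fraction(g[-1]) ** (m - k) * resultant(g, r)

# res(x^2 - 1, x - 2) = f(2) = 3; res(x^2 + 1, x^2 - 2) = (2 + 1)^2 = 9
```
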
We claim that these properties are met for sufficiently generic values of the . In order to prove and quantify this claim, we introduce
where the are new independent variables, and set . The extension of to is parametrized by as follows
where
actually belong to ,
is monic, separable, and of degree in ,
have degree in ,
have total degree in and ,
have total degree in .
For the proofs of these facts we refer the reader to [18, section 3]. It follows that are the respective specializations of at . We further introduce
By the specialization property, the subresultants of and , seen as polynomials of respective degrees and in , are the specializations of those of and at . Such a subresultant coefficient of and is a specific minor of the Sylvester matrix of and , of size at most . Therefore such a subresultant coefficient is a (non-reduced) rational function with denominator and numerator in of total degree in . Consequently the product of all these non-zero numerators yields a non-zero polynomial of total degree
such that implies that the non-zero subresultant coefficients of and are invertible; in particular the resultant is invertible and so are the leading coefficients of and .
The fast computation of raises the same issue, and requires the same kind of probability analysis. We thus introduce
which belongs to , has degree in , in , and total degree in . In this way
is the specialization of at .
A subresultant coefficient of and , seen as polynomials of respective degrees and , is a (non-reduced) rational function with denominator and numerator in of total degree in . Consequently there exists a non-zero polynomial of total degree
such that implies that all the non-zero subresultant coefficients of and and the leading coefficients of and are invertible.
As in the proof of Lemma 4.2, there exists a non-zero polynomial of total degree such that implies that is a primitive element for . This ensures the correctness of the output.
Now assume that . This happens with probability
whenever is taken at random in .
The subresultant coefficients of and have degree in , so except for values of , we can compute modulo in time , by means of the fast subresultant algorithm. A subresultant coefficient of and has degree in , so except for values of , can be computed by means of the fast subresultant algorithm modulo in time . Consequently a suitable value for is found at random in with probability of success
All the substitutions take time according to the above discussion. All the shifts by and in totalize time .
The steps 3.b take time in total. In step 4 we first specialize the variables to in time , after which the rest of the evaluation of
is decomposed into evaluations in for . More precisely, for the evaluation in we ignore the variables for and we may thus appeal to Theorem 3.3 in order to achieve time
Putting together the propositions of this and the previous sections, we obtain the following complexity bound.
Then an invertible matrix , a primitive element and a Kronecker parametrization of can be computed in time , with a probability of failure that is bounded by .
Proof. This result is a consequence of the Kronecker solver that was recalled at the beginning of this section. First we pick an invertible matrix of size with entries at random in as described in Lemma 4.5, in order to ensure that after replacing by conditions
Computing takes time by Proposition 2.15.
On the other hand the probability that are suitable for all the intersection steps is at least
by Proposition 5.3. The probability that a random value of is suitable for all stages of the solver is at least
Overall the probability of failure is bounded by . Notice that the constant is not intended to be optimal. The total complexity follows by combining Propositions 5.1, 5.2 and 5.3.
Then the Kronecker algorithm computes an integer , the finite field , the images of into , an invertible matrix with entries in , a primitive element and a Kronecker parametrization of over in time with any arbitrarily small fixed probability of failure, or in expected time with a Las Vegas algorithm.
Proof. The case is already handled in Theorem 5.4, and corresponds to . So we focus on the opposite case , for which . We compute the first integer such that
that is
Then we set
so is coprime to and we have . Now we appeal to Lemma 3.4: we construct in time with probability of failure ; then the casts between and can be achieved in time .
With this construction at hand we proceed as follows:
We cast into in time ;
We call the Kronecker solver over . In view of Theorem 5.4, the resolution over takes time with a probability of failure .
Using Proposition 4.6, the correctness of the result can be verified in time as well. We are thus able to reach any arbitrarily small fixed probability of failure by repeating the resolution sufficiently many times. Altogether, this leads to a Las Vegas algorithm that satisfies the claimed expected complexity bound—of course, needs to be replaced by from the outset to conclude the proof.
Remark
a permutation of ,
, linear forms ,
homogeneous polynomials of total degrees ,
such that for and for . In this way we are reduced to solving a “smaller” system with all degrees . This simplification of the input system can be achieved in time by means of Proposition 2.15. We leave out the details for the sake of conciseness.
From now on we turn to solving polynomial systems with coefficients in , still under assumptions
In order to formalize this approach, we first need to estimate the bit size of the Kronecker parametrizations in terms of the input size. Then we design a probabilistic algorithm that generates suitable values for with high probability.
For a scalar, a vector, a matrix, or a polynomial over we write for the maximal norm of its entries or coefficients. Then the height of , written , is defined as
Until the end of the text we set
In section 5.1 we have described an algorithm for lifting power series in a Kronecker parametrization. In the following paragraph we adapt the method to lift -adic integers instead. The algorithm originates from [27, section 4.6]. The Jacobian matrix of in is still written
Algorithm
Initialize , , ,…, .
Initialize with the inverse of modulo .
While do the following:
update and , over ;
set ;
compute over ;
compute over ;
replace computed over ;
for from to ,
replace computed over .
For from to , compute over .
Return .
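The loop above is the p-adic counterpart of the power series lifting of section 5.1. A univariate Python sketch of the same precision-doubling Newton iteration (our own minimal version, lifting a single root rather than a whole Kronecker parametrization) reads as follows.

```python
def hensel_lift(f, df, x0, p, n):
    """Lift a simple root x0 of f modulo p to a root modulo p^n, doubling precision."""
    k, x = 1, x0 % p
    while k < n:
        k = min(2 * k, n)
        pk = p ** k
        # Newton step: x <- x - f(x) / f'(x), computed modulo p^k
        x = (x - f(x) * pow(df(x), -1, pk)) % pk
    return x

# Square root of 2 in the 7-adic integers, to precision 7^4: start from 3^2 ≡ 2 (mod 7).
r = hensel_lift(lambda x: x * x - 2, lambda x: 2 * x, 3, 7, 4)
# r^2 - 2 is divisible by 7^4
```

The hypothesis that the Jacobian (here, the derivative) is invertible modulo p is what makes each Newton step well defined.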
then Algorithm 6.1 takes time , where .
Proof. This algorithm directly comes from [27, section 4.6]. We do not repeat the correctness proof but focus directly on its bit complexity for the dense representation of input polynomials.
First of all, we may compute the partial derivatives of the in time by Lemma 2.9 to any precision . In step 2, we compute in time
by Theorem 3.3. Then the inversion of modulo is performed by Berkowitz' algorithm in time .
In step 3 the evaluations of and at at a given precision take time , again in view of Theorem 3.3. Since we keep doubling until we reach the precision , the total cost is bounded by . The rest of the computations in steps 3 and 4 take time .
Let be a zero-dimensional subvariety of . “The” Chow form of is the polynomial that satisfies
where the are taken in . The Chow form is defined up to a scalar factor in , and is homogeneous of total degree .
From now on we assume that is the variety of , where is an invertible integer matrix such that is in Noether position. In this setting, is included in the affine subspace defined by , and we have . We then define the normalized Chow form of as the -multiple of having the coefficient of equal to . The normalized Chow form belongs to .
In order to bound bit sizes of Kronecker parametrizations of , we first need to bound the bit size of the coefficients of . For this purpose we follow a construction due to Philippon [62], and introduce
where denotes the -Mahler measure of defined as
where is the usual Haar measure of mass 1 over the sphere
So is to be understood as a real -dimensional integral.
On the other hand, the usual Mahler measure of is defined as an -dimensional complex contour integral
The relationship between these two measures is given in [52, Theorem 4]:
which implies
(6.1) |
Then we appeal to [43, Corollary 2.10, p. 553], which ensures the existence of a rational number such that has all its coefficients in and
(6.2) |
To relate the Mahler measure to the actual bit size of we use [43, Lemma 1.1, p. 529]:
(6.3) |
Combining these three inequalities leads to
Let be a Kronecker parametrization of by the primitive element taken in . It relates to the Chow form of as follows:
Since belongs to , all the coefficients of and of for also belong to , and their bit size can be bounded as follows. By and inequality (2.1), the Chow form admits at most terms, whence
where is a shorthand for . Now we estimate how the coefficients of increase because of the change of variables .
Proof. The proof is immediate by using and inequality (2.1).
Proof. This is a well-known unconditional variant of Stirling's formula:
In addition, for a Kronecker parametrization of , and with as above, we have
Proof. From Lemmas 6.2 and 6.3 we obtain
Consequently,
Combining this bound with (6.4)–(6.7) and using , we deduce
Finally we use to bound by , which concludes the proof.
Given an input polynomial system satisfying
Proof. The inequality is an equality for . For we have
whence by induction on .
and for any that does not divide the sequence satisfies
Proof. By Lemma 4.5 there exists an invertible matrix with integer entries such that and is in Noether position. On the other hand by Lemma 4.2 there exists a primitive element for with integer coefficients and such that . Let represent the corresponding Kronecker parametrization, and let and be as above. Let be a non-zero coefficient in of a term of degree for , let be the Jacobian matrix of with respect to , and
By construction, , and thus are positive integers.
Now if does not divide , then for , and the system admits
isolated regular roots modulo . Consequently satisfy
R, R, and
To conclude the proof it remains to bound . First, we may further rewrite a bound for the quantity introduced in Lemma 6.4, by making use of :
By applying Lemma 6.6 to the Sylvester matrix of and , we obtain
whence
As to , we deduce from Lemma 6.4 that
On the one hand, we have
hence
On the other hand, Lemma 6.5 gives us
From Lemma 6.4 and , it follows that
Let denote the group of permutations of . Then we have
Again by the Hadamard bound we deduce that
whence
The conclusion follows from adding the right-hand sides of (6.8), (6.9), and .
Remark
is monic of degree and for ;
is separable modulo ;
;
is a root of modulo ;
The value of the Jacobian matrix of in at is invertible modulo .
Then satisfy
Proof. By Proposition 4.6, the residues modulo of form a Kronecker parametrization of . We may use Algorithm 6.1 from a theoretical point of view in order to prove the existence of univariate polynomials over the -adic numbers , satisfying the following properties:
is monic of degree and for ;
have nonnegative valuations in and coincide with modulo ;
;
is a root of modulo ;
The value of the Jacobian matrix of in at is invertible modulo .
Consequently, Proposition 4.6 ensures that is a Kronecker parametrization of over by the primitive element . By uniqueness, it coincides with the image over of the Kronecker parametrization of over by . Using again Proposition 4.6 with the parametrization over the rationals concludes the proof.
Now we address the question of obtaining a suitable prime number.
Proof. This is a consequence of [21, Theorem 18.10, part (i)]: we just appeal to the deterministic primality test designed in [1] in order to ensure that the returned integer is always prime. This test runs in polynomial time.
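For illustration, drawing random candidates and filtering them through a primality test can be sketched as follows. We substitute a fixed-witness Miller–Rabin test, which is known to be deterministic for inputs below roughly 3.3·10^24, for the polynomial-time test of [1]; the latter would be used to make the procedure unconditional, and the names below are our own.

```python
import random

WITNESSES = (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37)

def is_prime(n):
    """Miller-Rabin with a fixed witness set (deterministic for n < 3.3e24)."""
    if n < 2:
        return False
    for p in WITNESSES:
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for a in WITNESSES:
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = x * x % n
            if x == n - 1:
                break
        else:
            return False                    # a witnesses compositeness
    return True

def random_prime(lo, hi):
    """Draw candidates uniformly from [lo, hi] until one passes the test."""
    while True:
        n = random.randrange(lo, hi + 1)
        if is_prime(n):
            return n
```

By the prime number theorem, the expected number of draws is logarithmic in the endpoints, so the whole procedure remains polynomial time.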
The idea behind the Kronecker solver over is as follows. For a suitable prime , we use the Kronecker algorithm to solve the input system over . If the algorithm finishes with success, then we appeal to the a posteriori verification criterion of Lemma 6.9. If the criterion passes, then we may lift the Kronecker parametrization over the -adic integers to arbitrary precision by using Algorithm 6.1, and we know that the -adic parametrization is the image of the rational one. Consequently, as soon as is sufficiently large, we can reconstruct the parametrization over .
Let us recall that a rational number , with and , is uniquely determined by its -adic expansion to precision whenever and ; see for instance [68, section 4]. Using the fast Euclidean algorithm from [21, Theorem 5.26] this reconstruction takes . The choices of and the precision needed by the solver are made precise in the following algorithm. We recall that is a fixed positive rational number, thought to be small.
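The rational reconstruction step can be sketched with a half-extended Euclidean algorithm that stops as soon as the remainder drops below the square root of m/2; the function name and the exact stopping bound below are our choices.

```python
import math

def rational_reconstruction(a, m):
    """Recover n/d from a ≡ n/d (mod m), assuming |n|, d <= sqrt(m/2)."""
    bound = math.isqrt(m // 2)
    r0, r1, t0, t1 = m, a % m, 0, 1
    while r1 > bound:                       # stop the Euclidean scheme half-way
        q = r0 // r1
        r0, r1, t0, t1 = r1, r0 - q * r1, t1, t0 - q * t1
    # invariant: r1 ≡ t1 * a (mod m), so the candidate fraction is r1 / t1
    if abs(t1) > bound or math.gcd(r1, abs(t1)) != 1:
        raise ValueError("no reconstruction at this precision")
    return (r1, t1) if t1 > 0 else (-r1, -t1)

# Recover 1/3 from its residue modulo the prime 10^9 + 7.
p = 10 ** 9 + 7
# rational_reconstruction(pow(3, -1, p), p) == (1, 3)
```
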
Algorithm
For each from to , compute the maximum bit size of the coefficients of .
Compute and a positive integer such that .
Compute such that
Compute a prime number at random in via Lemma 6.10.
Call the Kronecker algorithm underlying Theorem 5.4 with the images of in . If the resolution fails then go to step 2.
Let be the data returned by the Kronecker algorithm in step 3. If , or if is not separable, or if , or if one of the does not vanish at modulo , or if the value of the Jacobian matrix of in at is not invertible modulo , then go to step 2.
Let (resp. ) be the canonical preimage of (resp. of ) with entries in . Let be the first integer such that
Compute over , and use Algorithm 6.1 to obtain the Kronecker parametrization by of over .
By rational reconstruction, compute the unique polynomials in that coincide with modulo .
Notice that is whenever all the have their coefficients in . Therefore in the following complexity estimates we shall write in order to make complexity bounds valid in this particular case.
Proof. By Lemma 6.10 the prime determined in step 2 does not divide the integer defined in Lemma 6.7 with probability
. In other words, the sequence satisfies properties
Once the algorithm passes step 4, Lemma 6.9 ensures that the parametrization modulo may be lifted to any arbitrary precision, and that the parametrization over may be recovered whenever the precision is sufficiently large. According to the preceding discussion on rational reconstruction, we need that
with defined in Lemma 6.4; it satisfies
with and being . This concludes the proof of correctness.
The computation of needs time by Lemmas 2.6 and 2.8. Then , and can be computed in time , which is negligible since
The cost of step 2 is by Lemma 6.10, which is again negligible.
By Theorem 5.4 step 3 takes time
since . Proposition 4.6 then provides us with the cost
for step 4. Computing over in step 5 takes by Proposition 2.15. The cost of the -adic Hensel lifting is given in Proposition 6.1:
where . We verify that
Step 6 requires additional time .
The version of the Kronecker solver summarized in Algorithm 6.2 can be optimized by using ideas previously developed for the case of input polynomials given by straight-line programs [27]; see implementation details in the
A first optimization consists in minimizing the height of the parametrization over the rationals. For this purpose it is worth finding a matrix and a primitive element of small height. This search may be achieved efficiently modulo , and we refer the reader to [50] for the description of the underlying algorithms.
A second optimization concerns the early termination of the lifting stage over the -adic integers. In fact one may try to perform rational reconstruction before the bound on the precision is actually reached. If all the reconstructions succeed, then it suffices to verify that the Kronecker parametrization actually describes the solution set. A probabilistic method to do this consists in evaluating the equations at the parametrization modulo an independent random prime number with a bit size similar to the one of . This verification may also be done deterministically over the rationals, but at a higher cost.
For an input system in variables of degrees , the quantity is of the order . Therefore with , Algorithm 6.2 involves primes of bit size close to 52. In fact, the bounds used in the design of Algorithm 6.2 have been simplified and overestimated for the sake of the presentation. Sharper bounds should be considered in practice, in order to ensure that fits into 32 bits for small and moderate systems, and 64 bits for larger ones.
Let us finally mention that the present bound for the bit size of the Kronecker parametrization over the rationals turns out to be essentially sharp for generic systems, according to [54, Theorem 3.1]. Consequently the complexity reached within Theorem 6.11 is nearly linear in the “expected” size of the output, from the asymptotic point of view. This fact is illustrated by the following example.
Example
where and . Let be a root of
and set
so that is clearly a zero of . For , we observe that
It follows that is also a zero of . The value of the Jacobian matrix of in at is
By expanding with respect to the last column, we obtain
Consequently is invertible. From
Proposition 4.6 thus ensures that satisfies R and R, that it is in Noether position, and that it is parametrized by the primitive element , as follows:
The Kronecker solver was already known to be competitive both in theory and practice for polynomial systems represented by straight-line programs; see for instance timings in [2, 27]. Under suitable genericity assumptions and for suitable ground fields, we have shown in this paper that this solver also leads to improved probabilistic asymptotic complexity bounds for the dense representation of input polynomials.
Our main results concern finite fields and rational numbers, but our methods can in principle be extended to any ground field over which fast multivariate modular composition is available. Unfortunately, it is still a major open problem to design an algorithm for modular composition over arbitrary ground fields.
Another major problem concerns the practical efficiency of algorithms for modular composition. This holds in particular for the recent fast algorithms for finite fields designed by Kedlaya and Umans [42], and revisited in [36]; we refer the reader to the concluding section of [36] for a quantitative discussion. Consequently we do not expect our Theorem 3.3 to be of practical use, and we cannot claim our new main complexity bounds for polynomial system solving to be relevant in practice.
Nevertheless, it is important to point out that Theorem 3.3 uses modular composition as a blackbox. Whenever faster algorithms for this task do become available, they can directly be used in our algorithms. As one possible research direction, it is worthwhile to investigate the replacement of generic modular compositions by specific ones, such as those studied in [33, 34], which feature more promising performance.
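As a hedged reference point for what the blackbox improves upon, Horner's rule composes g with h modulo (f, p) using one modular product per coefficient of g, which is the quadratic baseline that the fast algorithms of [42] and [36] beat. The names and the naive modular arithmetic below are ours.

```python
def pmulmod(a, b, f, p):
    """(a * b) modulo (f, p); coefficient lists, low degree first, f monic."""
    c = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            c[i + j] = (c[i + j] + ai * bj) % p
    while len(c) >= len(f):                 # reduce modulo the monic f
        q, s = c[-1], len(c) - len(f)
        for i, fi in enumerate(f):
            c[s + i] = (c[s + i] - q * fi) % p
        c.pop()
    while len(c) > 1 and c[-1] == 0:
        c.pop()
    return c

def modcomp(g, h, f, p):
    """g(h) modulo (f, p) by Horner's rule: deg(g) modular products."""
    r = [g[-1] % p]
    for c in reversed(g[:-1]):
        r = pmulmod(r, h, f, p)
        r[0] = (r[0] + c) % p
    return r

# (x + 1)^2 = x^2 + 2x + 1 ≡ 2x modulo (x^2 + 1, 7)
```
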
The Kronecker solver admits different extensions to the case when genericity assumptions
Yet another important question for systems over the rational numbers concerns the complexity of finding numerical approximations of the roots. In generic cases we have proved in Theorem 6.11 how to compute Kronecker representations of solution sets in quasi-optimal time. This implies that numerical roots may also be computed in quasi-linear time by means of fast univariate polynomial solvers, provided that the requested precision is “sufficiently large”. It would be interesting to quantify this “sufficiently large”, and to compare with the state of the art of numerical polynomial solvers.
This appendix is devoted to subjecting a multivariate polynomial to a linear change of variables. More precisely, given and an matrix over a commutative ring , then we wish to compute
The fast algorithms that we propose below do not seem to be available in the literature. They are suitable for any coefficient ring with sufficiently many elements, and they are also well suited for homogeneous polynomials.
In this subsection we focus on the algebraic model (computation trees, for instance); we let be an effective commutative ring, and we let be a cost function such that two polynomials in may be multiplied with cost . The evaluation of a multivariate polynomial at points in a block of points , where is a finite subset of , is usually achieved by the successive use of fast univariate evaluations, as recalled in the following lemma.
Proof. We interpret as a univariate polynomial in ,
We evaluate at recursively. Then, for each , we evaluate at all the points of , with a total cost . Denoting by the cost of the algorithm in terms of operations in , we thus obtain
By induction over , it follows that
which implies the claimed bound.
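The recursive scheme of the proof can be sketched as follows, with naive Horner evaluation standing in for fast univariate multi-point evaluation (so the cost here is not softly linear, but the recursion over the variables is the one described above); the nested-list representation and names are our own.

```python
from itertools import product

def horner(coeffs, x):
    """Evaluate a univariate polynomial (low degree first) at x."""
    r = 0
    for c in reversed(coeffs):
        r = r * x + c
    return r

def eval_grid(f, grids):
    """Evaluate f on the grid grids[0] x ... x grids[n-1].

    f is given by nested lists: the outer index is the power of the last
    variable, and each coefficient is a nested list in the remaining ones.
    Returns a dict mapping point tuples to values.
    """
    if not grids:
        return {(): f}
    *rest, last = grids
    coeff_vals = [eval_grid(c, rest) for c in f]    # recursive multi-point evaluation
    out = {}
    for pt in product(*rest):
        coeffs = [cv[pt] for cv in coeff_vals]      # univariate slice above pt
        for x in last:
            out[pt + (x,)] = horner(coeffs, x)
    return out

# f(x, y) = 1 + 2*x + 3*x*y, nested in y: coefficient of y^0 is [1, 2], of y^1 is [0, 3]
vals = eval_grid([[1, 2], [0, 3]], [[0, 1], [0, 2]])
# vals[(1, 2)] == f(1, 2) == 9
```
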
The next lemma, also well known, concerns the corresponding interpolation problem.
Proof. Again we interpret as a univariate polynomial in ,
(A.1) |
For all we interpolate the values with operations in . We then recursively interpolate and form as in (A.1). The total cost is obtained as in the proof of the previous lemma.
The aim of the following proposition is the fast evaluation of at a set of points of the form , for any matrix and any vector .
Proof. We write . We first assume that is upper triangular, and we partition into
where and . We compute
for using operations in . For , we then evaluate at by induction. The base case takes constant time . Consequently, for any , the total number of operations in is , by the same argument as in the proof of Lemma A.1. We recover with operations in by Lemma A.2.
If is lower triangular then we may reverse the order of the variables in and the columns of in order to reduce to the upper triangular case. Alternatively, we may adapt the latter decomposition of the set of points, as follows:
where and . So we compute
and evaluate at by induction, for .
Finally if is general, then it suffices to use the given LU-decomposition, where is lower triangular with on the diagonal, and is upper triangular. In fact we have , so we compute and then and .
In the next lemma the same technique is adapted to homogeneous polynomials.
Proof. Assume first that is lower triangular and let . We are led to compose with
by means of Proposition A.3. If is upper triangular then it suffices to reverse the variables in , and the columns of , in order to reduce to the lower triangular case. Alternatively, we may set and compose with
in order to obtain . Finally, for any , it suffices to use the given LU-decomposition.
Proof. The total number of coefficients in is by inequality (2.1). We decompose
(A.2) |
where is made of the terms of which are multiples of , then is made of the terms of which are multiples of ,…, and finally is made of the terms of which are multiples of (that is a -multiple of a power of ). In this way, we are led to compute for , with of degree ; this requires operations in , by Lemma A.4. Then can be recovered with further operations.
Remark
For the purpose of the present paper, we need to adapt the results of the previous subsection to the case when is the Galois ring , and in the context of Turing machines. In the next lemmas we use the lexicographic order on , written , defined by
In terms of Turing machines, we need the following variants of Lemmas A.1 and A.2.
Proof. The proof follows the one of Lemma A.1 while taking data reorganizations into account. More precisely, using one matrix transposition, we reorganize the values of the after the recursive calls into the sequence of
for running over in the lexicographic order . Then, after the multi-point evaluations of , we need to transpose the array made of the values of , in order to ensure the lexicographic ordering in the output. The cost of these transpositions is by Lemma 2.1, which is negligible.
Proof. The proof follows the one of Lemma A.2, by doing the data reorganizations in the opposite direction from the one in the proof of Lemma A.7.
From now on, for convenience, we discard the case . In this way, whenever , we may use .
Proof. We first generate a subset of of cardinality in time ; this ensures the invertibility of for . The proof then follows the one of Proposition A.3 while taking data reorganizations into account. When is upper triangular, the computation of requires the multi-point evaluation of regarded in : we may simply appeal to the fast univariate algorithm because it only involves additions, subtractions and products by elements of over the ground ring . Consequently may be obtained in time , by Lemma 2.5. In addition, the array of values of the must be transposed at the end, in order to guarantee the lexicographic ordering necessary to interpolate .
When is lower triangular, the data reorganization costs essentially the same, except that the computation of takes time by Lemmas 2.11 and 2.5.
Before completing the proof of Proposition 2.15, we further need the following lemma in order to change the representation of a homogeneous polynomial.
Proof. For simplicity the proof is done for , but it extends in a coefficientwise manner to any . A sparse representation of is made of a sequence of pairs of coefficients and vector exponents. More precisely, if then a sparse representation of it is the sequence of the pairs , for all the non-zero coefficients . The bit size of a vector exponent is , and therefore the bit size of a sparse representation of is by (2.1).
In order to prove the lemma, we first convert , given in dense representation, into a sparse representation. When the sparse representation of may be obtained in time . Otherwise and we regard in ,
and recursively compute the sparse representation of for . These representations may naturally be glued together into a sparse representation of , in time , by adding the exponent of into each exponent vector. A straightforward induction leads to a total time for the change of representation of . Then the sparse representation of may be deduced with additional time by appending the exponent of needed for homogenization.
Second, from the latter sparse representation of we may simply discard the exponents of and multiply the coefficients with the corresponding powers of , in order to obtain a sparse representation of in time .
Finally it remains to construct the dense representation of from its sparse representation. To this aim we sort the sparse representation in increasing lexicographic order on the exponent vectors in time . We next compute the dense representation by induction over . Writing
the sparse representations of are computed by induction, after removal of the powers of . The induction ends when , in which case the conversion to dense representation requires time . In total, the dense representation of can be computed in time .
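As a minimal illustration of the conversions used in the proof (our own representation choices: nested dense lists for the dense form, and dictionaries keyed by exponent vectors for the sparse form; the specialization drops the homogenizing variable as described above):

```python
def dense_to_sparse(f, n):
    """Nested dense representation in n variables -> {exponent tuple: coefficient}."""
    if n == 0:
        return {(): f} if f else {}
    out = {}
    for e, c in enumerate(f):
        for exp, v in dense_to_sparse(c, n - 1).items():
            out[(e,) + exp] = v              # prepend the exponent of this variable
    return out

def dehomogenize(sparse, value=1):
    """Drop the last (homogenizing) variable, scaling coefficients by value^exponent."""
    out = {}
    for exp, c in sparse.items():
        key = exp[:-1]
        out[key] = out.get(key, 0) + c * value ** exp[-1]
    return out

# x^2 + 3*x*z + z^2, nested as a polynomial in x whose coefficients are dense in z
s = dense_to_sparse([[0, 0, 1], [0, 3], [1]], 2)
# s == {(0, 2): 1, (1, 1): 3, (2, 0): 1}; setting z = 1 gives x^2 + 3*x + 1
```

Sorting the exponent dictionary in lexicographic order, as in the proof, is what allows the dense representation to be rebuilt by a single recursive pass.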
Proof of Proposition 2.15. We follow the proofs of Lemma A.4 and Proposition A.5, again taking into account the cost of data reorganizations.
In the proof of Lemma A.4, the cost of obtaining and is given by Lemma A.10, that is .
In the proof of Proposition A.5 we first need to compute the decomposition (A.2) of . The polynomial
is represented by
and . Consequently may be easily obtained in time . Then the rest of the decomposition is obtained from , recursively. The total cost for obtaining all the is therefore bounded by .
For any , any , and any , the computations of and of take time since their supports have cardinality by (2.1).
Finally, from
we obtain the representation of as
using additional time . The cost of the data reorganizations in the proof of Proposition A.5 is negligible.
M. Agrawal, N. Kayal, and N. Saxena. PRIMES is in P. Ann. Math., pages 781–793, 2004.
B. Bank, M. Giusti, J. Heintz, G. Lecerf, G. Matera, and P. Solernó. Degeneracy loci and polynomial equation solving. Found. Comput. Math., 15(1):159–184, 2015.
M. Bardet. Étude des systèmes algébriques surdéterminés. Applications aux codes correcteurs et à la cryptographie. PhD thesis, Université Pierre et Marie Curie - Paris VI, 2004. https://tel.archives-ouvertes.fr/tel-00449609.
M. Bardet, J.-C. Faugère, and B. Salvy. On the complexity of the Gröbner basis algorithm. J. Symbolic Comput., 70:49–70, 2015.
S. J. Berkowitz. On computing the determinant in small parallel time using a small number of processors. Inform. Process. Lett., 18:147–150, 1984.
J. Berthomieu, J. van der Hoeven, and G. Lecerf. Relaxed algorithms for p-adic numbers. J. Théor. Nombres Bordeaux, 23(3), 2011.
J. Berthomieu, G. Lecerf, and G. Quintin. Polynomial root finding over local rings and application to error correcting codes. Appl. Alg. Eng. Comm. Comp., 24(6):413–443, 2013.
A. Bostan, F. Chyzak, M. Giusti, R. Lebreton, G. Lecerf, B. Salvy, and É. Schost. Algorithmes Efficaces en Calcul Formel. Frédéric Chyzak (self-published), Palaiseau, 2017. Electronic version available from https://hal.archives-ouvertes.fr/AECF.
A. Bostan, Ph. Flajolet, B. Salvy, and É. Schost. Fast computation of special resultants. J. Symbolic Comput., 41(1):1–29, 2006.
A. Bostan and É. Schost. Polynomial evaluation and interpolation on special sets of points. J. Complexity, 21(4):420–446, 2005.
R. P. Brent and H. T. Kung. Fast algorithms for manipulating formal power series. J. ACM, 25(4):581–595, 1978.
W. D. Brownawell. Bounds for the degrees in the Nullstellensatz. Annal. of Math., 126(3):577–591, 1987.
P. Bürgisser, M. Clausen, and M. A. Shokrollahi. Algebraic complexity theory, volume 315 of Grundlehren der Mathematischen Wissenschaften. Springer-Verlag, 1997.
J. F. Canny, E. Kaltofen, and L. Yagati. Solving systems of nonlinear polynomial equations faster. In Proceedings of the ACM-SIGSAM 1989 International Symposium on Symbolic and Algebraic Computation, ISSAC '89, pages 121–128, New York, NY, USA, 1989. ACM.
D. G. Cantor and E. Kaltofen. On fast multiplication of polynomials over arbitrary algebras. Acta Inform., 28:693–701, 1991.
J.-M. Couveignes and R. Lercier. Fast construction of irreducible polynomials over finite fields. Israel J. Math., 194(1):77–105, 2013.
C. D'Andrea, A. Ostafe, I. E. Shparlinski, and M. Sombra. Reduction modulo primes of systems of polynomial equations and algebraic dynamical systems. Trans. Amer. Math. Soc., 371(2):1169–1198, 2019.
C. Durvye and G. Lecerf. A concise proof of the Kronecker polynomial system solver from scratch. Expo. Math., 26(2):101–139, 2008.
J.-C. Faugère, P. Gaudry, L. Huot, and G. Renault. Sub-cubic change of ordering for Gröbner basis: A probabilistic approach. In Proceedings of the 39th International Symposium on Symbolic and Algebraic Computation, ISSAC '14, pages 170–177, New York, NY, USA, 2014. ACM.
J.-C. Faugère, P. Gianni, D. Lazard, and T. Mora. Efficient computation of zero-dimensional Gröbner bases by change of ordering. J. Symbolic Comput., 16(4):329–344, 1993.
J. von zur Gathen and J. Gerhard. Modern computer algebra. Cambridge University Press, New York, 3rd edition, 2013.
N. Giménez and G. Matera. On the bit complexity of polynomial system solving. J. Complexity, 51:20–67, 2019.
M. Giusti. Some effectivity problems in polynomial ideal theory. In J. Fitch, editor, EUROSAM 84: International Symposium on Symbolic and Algebraic Computation, Cambridge, England, July 9–11, 1984, pages 159–171, Berlin, Heidelberg, 1984. Springer Berlin Heidelberg.
M. Giusti, K. Hägele, J. Heintz, J. L. Montaña, J. E. Morais, and L. M. Pardo. Lower bounds for Diophantine approximations. J. Pure Appl. Algebra, 117/118:277–317, 1997.
M. Giusti, J. Heintz, J. E. Morais, J. Morgenstern, and L. M. Pardo. Straight-line programs in geometric elimination theory. J. Pure Appl. Algebra, 124(1-3):101–146, 1998.
M. Giusti, J. Heintz, J. E. Morais, and L. M. Pardo. When polynomial equation systems can be “solved” fast? In Applied algebra, algebraic algorithms and error-correcting codes (Paris, 1995), volume 948 of Lecture Notes in Comput. Sci., pages 205–231. Springer-Verlag, 1995.
M. Giusti, G. Lecerf, and B. Salvy. A Gröbner free alternative for polynomial system solving. J. Complexity, 17(1):154–211, 2001.
B. Grenet, J. van der Hoeven, and G. Lecerf. Deterministic root finding over finite fields using Graeffe transforms. Appl. Alg. Eng. Comm. Comp., 27(3):237–257, 2016.
D. Harvey and J. van der Hoeven. Faster polynomial multiplication over finite fields using cyclotomic coefficient rings. J. Complexity, 54:101404, 2019.
D. Harvey and J. van der Hoeven. Integer multiplication in time $O(n \log n)$. Technical report, HAL, 2019. http://hal.archives-ouvertes.fr/hal-02070778.
D. Harvey and J. van der Hoeven. Polynomial multiplication over finite fields in time $O(n \log n)$. Technical report, HAL, 2019. http://hal.archives-ouvertes.fr/hal-02070816.
J. Heintz. Definability and fast quantifier elimination in algebraically closed fields. Theor. Comput. Sci., 24(3):239–277, 1983.
J. van der Hoeven and G. Lecerf. Modular composition via complex roots. Technical report, CNRS & École polytechnique, 2017. http://hal.archives-ouvertes.fr/hal-01455731.
J. van der Hoeven and G. Lecerf. Modular composition via factorization. J. Complexity, 48:36–68, 2018.
J. van der Hoeven and G. Lecerf. Accelerated tower arithmetic. J. Complexity, 55:101402, 2019.
J. van der Hoeven and G. Lecerf. Fast multivariate multi-point evaluation revisited. J. Complexity, 56:101405, 2020.
J. van der Hoeven, G. Lecerf, B. Mourrain, et al. Mathemagix. From 2002. http://www.mathemagix.org.
X. Huang and V. Y. Pan. Fast rectangular matrix multiplication and applications. J. Complexity, 14(2):257–299, 1998.
G. Jeronimo and J. Sabia. Effective equidimensional decomposition of affine varieties. J. Pure Appl. Algebra, 169(2–3):229–248, 2002.
E. Kaltofen and V. Shoup. Fast polynomial factorization over high algebraic extensions of finite fields. In Proceedings of the 1997 International Symposium on Symbolic and Algebraic Computation, ISSAC '97, pages 184–188, New York, NY, USA, 1997. ACM.
K. S. Kedlaya and C. Umans. Fast modular composition in any characteristic. In FOCS'08: IEEE Conference on Foundations of Computer Science, pages 146–155, Washington, DC, USA, 2008. IEEE Computer Society.
K. S. Kedlaya and C. Umans. Fast polynomial factorization and modular composition. SIAM J. Comput., 40(6):1767–1802, 2011.
T. Krick, L. M. Pardo, and M. Sombra. Sharp estimates for the arithmetic Nullstellensatz. Duke Math. J., 109(3):521–598, 2001.
L. Kronecker. Grundzüge einer arithmetischen Theorie der algebraischen Grössen. J. reine angew. Math., 92:1–122, 1882.
Y. N. Lakshman. On the complexity of computing a Gröbner basis for the radical of a zero dimensional ideal. In Proceedings of the Twenty-second Annual ACM Symposium on Theory of Computing, STOC '90, pages 555–563, New York, NY, USA, 1990. ACM.
Y. N. Lakshman. A single exponential bound on the complexity of computing Gröbner bases of zero dimensional ideals. In T. Mora and C. Traverso, editors, Effective Methods in Algebraic Geometry, pages 227–234, Boston, MA, 1991. Birkhäuser Boston.
Y. N. Lakshman and D. Lazard. On the complexity of zero-dimensional algebraic systems. In T. Mora and C. Traverso, editors, Effective Methods in Algebraic Geometry, pages 217–225, Boston, MA, 1991. Birkhäuser Boston.
D. Lazard. Gröbner bases, Gaussian elimination and resolution of systems of algebraic equations. In J. A. van Hulzen, editor, Computer Algebra: EUROCAL'83, European Computer Algebra Conference, London, England, March 28–30, 1983, Proceedings, pages 146–156. Springer Berlin Heidelberg, 1983.
F. Le Gall. Powers of tensors and fast matrix multiplication. In K. Nabeshima, editor, ISSAC'14: International Symposium on Symbolic and Algebraic Computation, pages 296–303, New York, NY, USA, 2014. ACM.
G. Lecerf. Computing the equidimensional decomposition of an algebraic closed set by means of lifting fibers. J. Complexity, 19(4):564–596, 2003.
G. Lecerf. On the complexity of the Lickteig–Roy subresultant algorithm. J. Symbolic Comput., 92:243–268, 2019.
P. Lelong. Mesure de Mahler et calcul de constantes universelles pour les polynômes de $n$ variables. Math. Ann., 299(1):673–695, 1994.
H. Matsumura. Commutative ring theory, volume 8 of Cambridge Studies in Advanced Mathematics. Cambridge University Press, 1989.
D. McKinnon. An arithmetic analogue of Bezout's theorem. Compos. Math., 126(2):147–155, 2001.
J. M. McNamee and V. Y. Pan. Numerical Methods for Roots of Polynomials, Part II, volume 16 of Studies in Computational Mathematics. Elsevier, 2013.
B. Mourrain, V. Y. Pan, and O. Ruatta. Accelerated solution of multivariate polynomial systems of equations. SIAM J. Comput., 32(2):435–454, 2003.
B. Mourrain and Ph. Trébuchet. Solving projective complete intersection faster. In Proceedings of the 2000 International Symposium on Symbolic and Algebraic Computation, ISSAC '00, pages 234–241, New York, NY, USA, 2000. ACM.
B. Mourrain and Ph. Trébuchet. Generalized normal forms and polynomial system solving. In Proceedings of the 2005 International Symposium on Symbolic and Algebraic Computation, ISSAC '05, pages 253–260, New York, NY, USA, 2005. ACM.
B. Mourrain and Ph. Trébuchet. Border basis representation of a general quotient algebra. In Proceedings of the 37th International Symposium on Symbolic and Algebraic Computation, ISSAC '12, pages 265–272, New York, NY, USA, 2012. ACM.
A. K. Narayanan. Fast computation of isomorphisms between finite fields using elliptic curves. In L. Budaghyan and F. Rodríguez-Henríquez, editors, Arithmetic of Finite Fields. 7th International Workshop, WAIFI 2018, Bergen, Norway, June 14–16, 2018, Revised Selected Papers, volume 11321 of Lecture Notes in Comput. Sci., pages 74–91. Springer, Cham, 2018.
C. H. Papadimitriou. Computational Complexity. Addison-Wesley, 1994.
P. Philippon. Sur des hauteurs alternatives. I. Math. Ann., 289(1):255–283, 1991.
A. Poteaux and É. Schost. On the complexity of computing with zero-dimensional triangular sets. J. Symbolic Comput., 50:110–138, 2013.
A. Schönhage. Schnelle Berechnung von Kettenbruchentwicklungen. Acta Inform., 1(2):139–144, 1971.
A. Schönhage, A. F. W. Grotefeld, and E. Vetter. Fast algorithms: A multitape Turing machine implementation. B. I. Wissenschaftsverlag, Mannheim, 1994.
J. T. Schwartz. Fast probabilistic algorithms for verification of polynomial identities. J. ACM, 27(4):701–717, 1980.
V. Shoup. New algorithms for finding irreducible polynomials over finite fields. Math. Comp., 54(189):435–447, 1990.
P. S. Wang. A p-adic algorithm for univariate partial fractions. In Proceedings of the Fourth ACM Symposium on Symbolic and Algebraic Computation, SYMSAC '81, pages 212–217, New York, NY, USA, 1981. ACM.
R. Zippel. Probabilistic algorithms for sparse polynomials. In Proceedings of EUROSAM '79, number 72 in Lect. Notes Comput. Sci., pages 216–226. Springer-Verlag, 1979.