
Denote by any of the sets or .

The **discriminant** of the polynomial

coincides with the resultant of this polynomial and its derivative:
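This relation between the discriminant and the resultant can be checked in a computer algebra system. A minimal sympy sketch follows; the concrete polynomial $x^2+x+1$ is an illustrative choice, and the normalization $D(f)=(-1)^{n(n-1)/2}R(f,f')/a_n$ is the commonly used one (an assumption, since the displayed formula is not preserved above):

```python
from sympy import symbols, resultant, discriminant

x = symbols('x')
f = x**2 + x + 1                    # illustrative quadratic

# Resultant of f and its derivative
r = resultant(f, f.diff(x), x)      # -> 3
# Discriminant; for n = 2, a_n = 1 it equals (-1)^1 * r
d = discriminant(f, x)              # -> -3

print(r, d)
```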


**Theorem.** *If* *denote the zeros of* *counted with their multiplicities, then*

**Corollary.**

The discriminant vanishes iff the polynomial possesses a multiple zero.


**Example.** For the quadratic polynomial $f(x)=ax^2+bx+c$ one has $D(f)=b^2-4ac$.

**Exercise.**

Prove the following:

**a)** ;

**b)** ;

**c)** .
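The basic discriminant computations of this section can be reproduced symbolically. A short sympy sketch for the general quadratic and for the reduced cubic; both closed-form answers are the standard ones, stated here as assumptions rather than read off the elided formulas:

```python
from sympy import symbols, discriminant, expand

x, a, b, c, p, q = symbols('x a b c p q')

# Discriminant of the general quadratic: b^2 - 4ac
d2 = discriminant(a*x**2 + b*x + c, x)

# Discriminant of the reduced cubic x^3 + p*x + q: -4p^3 - 27q^2
d3 = discriminant(x**3 + p*x + q, x)

print(d2, d3)
```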


**Example.**

Here

stand for the invariants of the quartic polynomial.

1.

here stands for a constant and .

2.

here denotes the resultant of the polynomials and ; it is assumed that and .

3.

4. If then

5.

here , stand for the zeros of , and the leading coefficients of and are assumed to be equal .
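One standard identity in this circle of properties, $D(fg)=D(f)\,D(g)\,R^2(f,g)$ for monic $f$ and $g$, can be verified numerically; whether it is the elided item is an assumption, and the concrete factors below are illustrative choices, not from the source:

```python
from sympy import symbols, discriminant, resultant

x = symbols('x')
f = x**2 - 1          # illustrative monic factors
g = x**2 - 4

Df, Dg = discriminant(f, x), discriminant(g, x)   # 4, 16
R = resultant(f, g, x)                            # 9
Dfg = discriminant(f*g, x)                        # 5184

print(Dfg == Df * Dg * R**2)                      # -> True
```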

**Exercise.**

Express
**a)** ; **b)** ; **c)** via . Here

According to the definition, the discriminant can be represented as the -th order determinant:

Using elementary transformations of its rows, one can reduce it to the -th order determinant:

The last determinant can be obtained from an alternative definition of the discriminant. Consider a homogeneous bivariate polynomial (form) in and :

Compute its partial derivatives

The discriminant of is taken to be equal to the resultant of and .

By definition, the discriminant is a homogeneous polynomial over in the coefficients of the polynomial :

one has and this polynomial in contains the term .


**Theorem.** *If* *possesses a unique multiple zero * *and its multiplicity equals* *then*


**Example.** Deduce the general formula for the double zero for
under the assumption of its uniqueness.

**Solution.** Using the formula for the discriminant of the quartic polynomial (see above), one gets:

For one has , and the formula from the above theorem yields . ♦
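Independently of the theorem's formula, a unique double zero can always be isolated as the zero of $\gcd(f,f')$; a sympy sketch with an assumed quartic $(x-2)^2(x^2+1)$, not the one of the example:

```python
from sympy import symbols, gcd, expand

x = symbols('x')
f = (x - 2)**2 * (x**2 + 1)     # illustrative quartic with the unique double zero 2

# The gcd of f and f' isolates the multiple zero
h = gcd(f, f.diff(x))
print(h)                        # -> x - 2
```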

**Remark.**

The discriminant is an invariant of the polynomial (strictly speaking, an invariant of the homogeneous polynomial (form) ).

The determinant obtained from

by deleting its first rows and its last rows, its first columns and its last columns
will be called the -th **subdiscriminant** of the discriminant and denoted by . For the convenience of presentation of some results we
take the **zero subdiscriminant** to be equal to the determinant itself, i.e.

**Remark.**

In the classical and contemporary sources I could not find a common name for this object. In [2]
a similar determinant is called *aperiodical* or *bigradient*.


**Theorem.** *The polynomial* *possesses exactly* *common zeros with its derivative*
(*or, more precisely, *
) *iff*

**Corollary.**

*If* *possesses a unique multiple zero * *and its multiplicity equals* *then this zero can be expressed as a rational function of the coefficients of the polynomial*

here stands for the determinant obtained from by deleting its first and its last row, and its first and its *last-but-one* column (thus differs from only in its last column).


**Example.** Find all the values of the parameter for which the polynomial possesses a unique multiple zero; compute this zero.

**Solution.** Compute the determinant :

This polynomial in vanishes iff . Deleting from its boundary rows and columns one gets

Substituting the values discovered above yields:

Consequently possesses a unique double zero iff , while for it has either several multiple zeros or a zero of multiplicity higher than . For the evaluation of the multiple zero, compute the determinant :

and substitute into the formula

the obtained values for :

♦

For the polynomial its -th **Newton sum** is defined as the sum of -th powers of its zeros

Newton sums can be expressed as rational functions of the coefficients of with the aid of the following recursive **Newton formulas**:

Explicit expressions for the Newton sums via are given by the Waring formula.
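The recursion can be sketched in a few lines of Python; the monic normalization $x^n+a_1x^{n-1}+\dots+a_n$ and the test polynomial $x^3-3x+1$ are assumptions made for illustration:

```python
def newton_sums(a, kmax):
    """Newton sums s_0..s_kmax of the monic polynomial
    x^n + a[0]*x^(n-1) + ... + a[n-1], via Newton's recursion:
      s_k + a_1 s_{k-1} + ... + a_{k-1} s_1 + k a_k = 0   for k <= n,
      s_k + a_1 s_{k-1} + ... + a_n s_{k-n}        = 0   for k > n."""
    n = len(a)
    s = [n]                                   # s_0 = number of zeros
    for k in range(1, kmax + 1):
        if k <= n:
            t = sum(a[i - 1] * s[k - i] for i in range(1, k)) + k * a[k - 1]
        else:
            t = sum(a[i - 1] * s[k - i] for i in range(1, n + 1))
        s.append(-t)
    return s

# x^3 - 3x + 1:  s_0,...,s_4 = 3, 0, 6, -3, 18
print(newton_sums([0, -3, 1], 4))
```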

Compute the Newton sums for and compose the Hankel matrix

Denote by its leading principal minors.


**Theorem.** *The following formula connects the minors of the matrix* *with the subdiscriminants of the polynomial* :

*In particular,*

**Proof** follows from the representation for in the Kronecker form.
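The particular case (for a monic polynomial the determinant of the full Hankel matrix equals the discriminant) can be checked with sympy. Here the Newton sums are obtained as traces of powers of the companion matrix, $s_k=\operatorname{tr}(C^k)$, and the cubic $x^3-3x+1$ is an illustrative choice:

```python
from sympy import Matrix, symbols, discriminant

x = symbols('x')
f = x**3 - 3*x + 1          # illustrative monic cubic

# Companion matrix of f; Newton sums are traces of its powers: s_k = tr(C^k)
C = Matrix([[0, 0, -1],
            [1, 0,  3],
            [0, 1,  0]])
s = [(C**k).trace() for k in range(5)]     # s_0,...,s_4 = 3, 0, 6, -3, 18

# Hankel matrix H = [s_{i+j}], i, j = 0, 1, 2
H = Matrix(3, 3, lambda i, j: s[i + j])

print(H.det(), discriminant(f, x))         # both equal 81
```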

**Remark.**

The discriminant is responsible for the closeness of the zeros of : the more congested they are, the smaller it is, and vice versa. The discriminant value can be used for estimating the distance between the zeros.


**Theorem.** *The following estimates are valid*

From the school course in algebra, the following formula is known expressing the zeros of a **quadratic** polynomial $f(x)=ax^2+bx+c$ via its coefficients:

$$
x_{1,2}=\frac{-b\pm\sqrt{b^2-4ac}}{2a} \ .
$$

Here $b^2-4ac$ is the discriminant of the quadratic polynomial. Although initially introduced for quadratics with real coefficients and for the case $b^2-4ac\ge 0$, the formula remains valid also in the case of imaginary coefficients.

For the cubic equation , there also exists a formula for the explicit representation of the polynomial zeros via the coefficients, namely **Cardano's formula**. In the particular case of the polynomial $x^3+px+q$, this formula reads

$$
x=\sqrt[3]{-\frac{q}{2}+\sqrt{\frac{q^2}{4}+\frac{p^3}{27}}}+\sqrt[3]{-\frac{q}{2}-\sqrt{\frac{q^2}{4}+\frac{p^3}{27}}} \ .
$$

There is a special convention [3] for combining the values of the cubic roots in the above sum (in general the radicands are imaginary numbers even in the case of real $p$ and $q$). One may notice that the radicand of the square root coincides, up to a numerical factor, with the discriminant $-4p^3-27q^2$ of the polynomial $x^3+px+q$:

$$
\frac{q^2}{4}+\frac{p^3}{27}=-\frac{1}{108}\left(-4p^3-27q^2\right) \ .
$$

A polynomial with real coefficients may possess both real and non-real (imaginary) zeros . Although these zeros cannot be expressed as «good» functions of the polynomial coefficients, the conditions for the existence of a prescribed number of real zeros can be expressed in terms of polynomial inequalities imposed on . The «most essential» of these inequalities is the one imposed on the sign of the discriminant .


**Example.** The necessary and sufficient condition for the reality of all the zeros of the polynomial

**a)** is ;

**b)** is .

For a polynomial of degree , the nonnegativity of the discriminant is not a necessary and sufficient condition for the reality of all the polynomial zeros.
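A standard counterexample (an assumption here, not necessarily the one intended by the source) is $x^4+x^2+1=(x^2+x+1)(x^2-x+1)$: its discriminant is positive, yet it has no real zeros at all:

```python
from sympy import symbols, discriminant, real_roots

x = symbols('x')
f = x**4 + x**2 + 1        # positive discriminant, but no real zeros

print(discriminant(f, x))  # -> 144
print(real_roots(f))       # -> []
```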


**Theorem.** *For the reality of all the zeros of* *it is necessary that* .

**Proof** evidently follows from the representation of via the zeros of .

A more general result connects the number of distinct zeros of with the signs of the subdiscriminants.


**Theorem.** *Let* . *If the sequence of subdiscriminants*

*does not contain two consecutive zeros, then all the zeros of the polynomial* *are distinct and the number of real zeros equals*

*Here* *and* *stand, respectively, for the number of permanences and the number of variations of signs in the considered sequence.*

The previous theorem is just a reformulation of the following result, based on the representation of the discriminant as the determinant of the Hankel matrix .


**Theorem [Jacobi].** *The number of distinct zeros of a polynomial* *equals the rank, while the number of distinct real zeros of * *equals the signature of the matrix* .

The constructive computation of the rank and the signature of a symmetric matrix is possible via evaluation of the signs of its leading principal minors .

**Corollary.**

Let

Then and the number of distinct real zeros of equals

**Corollary.**

For the reality of all the zeros of it is necessary and sufficient that all the leading principal minors of the matrix be positive:


**Example.** Find the number of real zeros of .

**Solution.** Newton sums:

Compose the Hankel matrix:

and compute its leading principal minors:

Since , all the zeros of are distinct.

**Answer.** Three real zeros.
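The same procedure can be sketched in code. The cubic $x^3+x+1$ below (with exactly one real zero) is an illustrative substitute for the elided polynomial of the example; its Newton sums are hard-coded:

```python
from sympy import Matrix

# Newton sums s_0..s_4 of x^3 + x + 1 (Newton's recursion)
s = [3, 0, -2, -3, 2]
H = Matrix(3, 3, lambda i, j: s[i + j])
minors = [H[:k, :k].det() for k in (1, 2, 3)]       # 3, -6, -31

# Count permanences P and variations V of signs in the sequence 1, D_1, D_2, D_3
seq = [1] + minors
P = sum(1 for i in range(3) if seq[i] * seq[i + 1] > 0)
V = sum(1 for i in range(3) if seq[i] * seq[i + 1] < 0)
print(P - V)     # -> 1: the number of distinct real zeros
```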


**Example.** Find the number of real zeros of the polynomial depending on the parameter .

**Solution.** The discriminant of the polynomial and its first subdiscriminant have already been computed above:

Analyze the signs of and as the parameter varies; the critical values of the latter are those annihilating at least one of the subdiscriminants:

**Answer.** The polynomial possesses one real zero if and if ; it possesses three real zeros if .

**Remark.**

Note that the conditions in the answer to the previous example have the following structure: the endpoints of the intervals of parameter values providing the prescribed number of real zeros of the polynomial turn out to be zeros of the discriminant . This is a manifestation of a more general principle: among all the inequalities imposed on the subdiscriminants, the most crucial one is that on the sign of the discriminant itself.


**Theorem.** *In the* *-dimensional parameter space* , *the domains corresponding to the polynomials* *with an equal number of real zeros are separated by the* **discriminant manifold**, *i.e. the surface defined by*


**Example.**
For the polynomial the discriminant manifold becomes a curve in the -plane: ;
it separates the domain of parameter values corresponding to polynomials with three real zeros
(shown in yellow) from the domain of values defining the polynomials with precisely one real zero (shown in blue).

**Remark.**

I failed to establish the authorship of the previous theorem. In [3],
at p. 252, a reference is given to a work by Brill of 1877, as well as to Kronecker's works on the theory of characteristics; the object was there referred to as the *Diskriminantenfläche*.

**Problem.** For the polynomial with real coefficients , evaluate its critical values. In particular, for an even and for , find the absolute maximum


**Theorem.** *The critical values of the polynomial* *are the real zeros of the polynomial*

*Here the discriminant is computed with respect to the variable* , *while* *is treated as a numerical parameter.*

**Corollary.**

For an even and , the maximal value of coincides with the maximal real zero of the polynomial provided that this zero is not a multiple one.


**Example.** For one has


**Example.** Find the maximum of the polynomial .

**Solution.** One has

the maximal real zero of the last polynomial coincides with the value of at a zero of its derivative: . ♦
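The corollary can be tried on a toy polynomial with sympy; $f(x)=-x^4+4x$ is an assumed example (even degree, negative leading coefficient, a unique real critical point), not the polynomial of the example above:

```python
from sympy import symbols, discriminant, real_roots, Poly

x, z = symbols('x z')
f = -x**4 + 4*x          # illustrative: maximum 3 attained at x = 1

F = Poly(discriminant(f - z, x), z)   # discriminant w.r.t. x; a polynomial in z
crit = real_roots(F)                  # its real zeros are the critical values of f
print(max(crit))                      # -> 3, the absolute maximum of f
```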

**Resume.** The standard algorithm for finding the maximal value of a polynomial consists in first finding all the real zeros of its derivative , then substituting them into and ordering the obtained values to pick the maximal one. Instead of this approach, we suggest evaluating only one zero of the constructed polynomial , namely the maximal one. The latter problem, i.e. the search for a particular zero of a polynomial, is often easier to solve. Thus, for instance, one may try to find the maximal zero of via the Bernoulli method. If this procedure converges to a *positive* value, then this value is
the maximum of .

**Exercise.**

Construct the polynomial for

**(a)** ,

**(b)** ,

**(c)**

and establish that is attained at two stationary points of .

The importance of the simplicity condition for the maximal zero of is clarified by the following example:


**Example.** [5]. For one gets:

possesses the maximal zero ; however, the latter corresponds to non-real zeros of the derivative . The maximum of is attained at the zero and equals . ♦

Let us generalize the problem from the previous section:

**Problem.** Find the critical values of the function defined implicitly by the algebraic equation . Here is a polynomial in and with real coefficients.


**Theorem.** *The critical values of the implicit function are among the real zeros of the polynomial*

*Here the discriminant is computed with respect to* , *while* *is treated as a numerical parameter.*

**Proof.** The necessary condition for the existence of a stationary point of at is the vanishing of the derivative . Differentiating the identity with respect to ,

we conclude that at the following conditions have to be fulfilled:

♦


**Example.** Find the minimal value of the implicit function given by

for

**Solution.** We ignore here the questions of existence and representation of this function, and apply the theorem as it stands:

The real zeros of this polynomial are:

The minimal zero is ; given its value, one can restore the corresponding as a multiple zero of . It lies within the demanded interval , and one can additionally verify that for the real zeros of are greater than .

**Answer.** .
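A minimal sanity check of the theorem with sympy, using the unit circle $x^2+y^2=1$ as an assumed implicit curve (the extreme values of $y(x)$ are $\pm 1$):

```python
from sympy import symbols, discriminant, solve, expand

x, y = symbols('x y')
Phi = x**2 + y**2 - 1        # assumed implicit curve: the unit circle

D = discriminant(Phi, x)     # discriminant w.r.t. x; y is a parameter
print(expand(D))             # -> 4 - 4*y**2
print(solve(D, y))           # candidate critical values of y(x): -1 and 1
```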


**Theorem**[6,7]. *The square of the distance to the quadric* *given by the equation*

*from the point* *not lying on the quadric (i.e.* ) *equals the minimal positive zero of the distance equation*

*provided that the mentioned zero is not a multiple one. Here*

*while* *stands for the discriminant of the polynomial (treated w.r.t.* ), *and* *is the identity matrix of order* .

**Corollary.**

The square of the distance from to the quadric in given by the equation

equals the minimal positive zero of the distance equation

provided that this zero is simple. Here is the characteristic polynomial of the matrix , while is the adjoint matrix of .

**Corollary.**

For the particular case (i.e. the quadric centered at the origin), one has:

and the distance from the origin to the quadric equals , where stands for the maximal eigenvalue of the matrix .
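The last corollary is easy to check numerically; the two central ellipses below are illustrative choices, not from the source:

```python
import numpy as np

# Central quadric x^T A x = 1; distance from the origin is 1/sqrt(lambda_max)
A1 = np.diag([0.25, 1.0])          # x^2/4 + y^2 = 1, nearest points (0, +-1)
A2 = np.diag([1.0 / 9, 0.25])      # x^2/9 + y^2/4 = 1, nearest points (0, +-2)

d1 = 1.0 / np.sqrt(np.linalg.eigvalsh(A1).max())
d2 = 1.0 / np.sqrt(np.linalg.eigvalsh(A2).max())
print(d1, d2)                      # -> 1.0 2.0
```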

**Remark.** These and other applications of the discriminant to the problems of distance evaluation are treated in [6,7].

Consider a smooth planar curve . At every point of raise the normal and take the points on it lying at the distance
from . These points constitute two curves, each of them called an
**equidistant curve** for ; we will denote them and .
Th

**Theorem.** *The equidistant curve for* *where*
*is a polynomial with real coefficients, is given by the equation*

*Here the discriminant is taken w.r.t. * , *while other variables are treated as parameters.*


**Example.** Find the equidistant curve for the parabola .

**Solution.** Compute the discriminant, discard a numerical factor, and order the resulting polynomial in powers of :

The resulting equation provides (implicitly) the equidistant curves and for the parabola . In the figure they are shown for the choice
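The construction can be repeated with sympy. The sketch below assumes the standard squared-distance function $(x-t)^2+(y-t^2)^2-d^2$ for the parabola $y=x^2$ and verifies that the points $(0,\pm d)$ on its axis lie on the equidistant curves (the normal at the origin is the $y$-axis):

```python
from sympy import symbols, discriminant, expand

x, y, t, d = symbols('x y t d')

# Squared distance from (x, y) to the point (t, t^2) of the parabola y = x^2,
# minus d^2; its discriminant w.r.t. t gives (implicitly) both equidistant curves
g = (x - t)**2 + (y - t**2)**2 - d**2
Phi = discriminant(g, t)

# The points (0, +-d) must lie on the equidistant curves:
print(expand(Phi.subs({x: 0, y: d})), expand(Phi.subs({x: 0, y: -d})))   # -> 0 0
```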

Consider now the family of planar curves depending on the parameter ; the latter takes values from the interval . If there exists a curve which is tangent at each of its points to some curve of the given family but does not coincide with any of these curves along any segment, then this curve is called the **envelope** of the family .

Let the family be given implicitly by the equation

where is a continuously differentiable function of its variables. The locus of points satisfying the conditions

is called the **discriminant curve** of the family .


**Theorem.** *The discriminant curve of the family contains the envelope of the family and, possibly, the set of critical points, i.e. the points which are real solutions of the system*


**Example.** Find the envelope for the family of ellipses

for

**Solution.** Here the equation of the discriminant curve is obtained by elimination of the parameter from the system

Resolve the second equation w.r.t. :

(here the restriction is essential) and substitute the result into the first equation:

The obtained curve is known as the **astroid**.
♦
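Assuming the family has the form $x^2/t^2+y^2/(1-t)^2=1$, $0<t<1$ (an assumption consistent with the astroid answer, since the elided equations are not preserved), the envelope condition can be verified at a sample point with sympy:

```python
from sympy import symbols, Rational, sqrt, simplify

x, y, t = symbols('x y t', positive=True)
# Assumed family of ellipses: x^2/t^2 + y^2/(1-t)^2 = 1 for 0 < t < 1
F = x**2/t**2 + y**2/(1 - t)**2 - 1

# For t = 1/2 the tangency point of the envelope is (2**(-3/2), 2**(-3/2));
# it must satisfy F = 0 and dF/dt = 0 ...
pt = {x: sqrt(2)/4, y: sqrt(2)/4, t: Rational(1, 2)}
v1 = simplify(F.subs(pt))
v2 = simplify(F.diff(t).subs(pt))

# ... and lie on the astroid x^(2/3) + y^(2/3) = 1
astro = simplify(2 * (sqrt(2)/4)**Rational(2, 3))
print(v1, v2, astro)     # -> 0 0 1
```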


**Theorem.** *If* *is a polynomial in* * then the discriminant curve is given by the equation*

*Here the discriminant is taken w.r.t. * , *while other variables are treated as parameters.*


**Example.** On rewriting the equation for the family of ellipses from the previous example in the form

one can obtain a representation for the discriminant curve with the aid of the theorem:

The cases or correspond to the values of the parameter lying at the endpoints of the given interval. The remaining factor in the last equality defines an astroid; this fact can be confirmed by the substitution . ♦


**Example.** It can easily be verified that the equidistant curves for the curve introduced in the previous section are the envelopes of the family of circles of radius centered at the points of the given curve. For the parabola one gets:

with treated as the parameter generating the family:

[1]. **Kalinina E.A., Uteshev A.Yu.** *Elimination Theory* (in Russian). SPb., NII Khimii, 2002.

[2]. **Jury E.I.** *Inners and Stability of Dynamic Systems.* J.Wiley & Sons, New York, NY, 1974.

[3]. **Uspensky J.V.** *Theory of Equations.* New York, McGraw-Hill, 1948.

[4]. *Encyklopädie der Mathematischen Wissenschaften mit Einschluss ihrer Anwendungen*. Bd. I. *Arithmetik und Algebra*. Ed. **Meyer W.F.** Leipzig, Teubner, 1898-1904.

[5]. **Uteshev A.Yu., Cherkasov T.M.** *The search for the maximum of a polynomial.* J. Symbolic Computation. 1998. Vol. 25, № 5. P. 587-618.

[6]. **Uteshev A.Yu., Yashina M.V.** *Distance Computation from an Ellipsoid to a Linear or a Quadric Surface in* . Lect. Notes Comput. Sci. 2007. Vol. 4770. P. 392-401.

[7]. **Uteshev A.Yu., Yashina M.V.** *Metric Problems for Quadrics in Multidimensional Space.* J.Symbolic Computation, 2015, Vol. **68**, Part I, P. 287-315.