
The main motivator that pushed me to attempt this investigation was quadratic equations. When I was in early high school, one of the main challenges I faced was factorization. When I was factorizing quadratics like \(x^2-5x+6=0\) as \((x-\cdots)(x-\cdots)=0\), I was taught to think of two numbers whose product was 6 and sum was 5, to find that the two missing roots were 2 and 3. I used that technique for a while on similar quadratics, but I missed a fact: there was a relationship between the roots, 2 and 3, and the coefficients, 5 and 6, of the quadratic. Even if it was not obvious to me then, the coefficients gave me a way to find the roots.

I ignored this fact for a while, until it came back to me in my Math HL course when I realized that I can use the first, second, and last (constant) coefficients of any polynomial to find the sum and product of its roots. Even without solving for the roots, I could know their sum and product. This fascinated me as it was in a sense beautiful: the ability to know something, even in the slightest, about the unknown. This concept was extremely astounding to me, but some questions lingered in my head: why is this fact true, if it even is? Also, if I can use the first, second, and last coefficients, what about the rest (third, fourth, ..., second-to-last)? After some research, I found they are related to the roots through “Vieta’s formulae,” which will be the topic of my exploration.

The aim is to explore “Vieta’s formulae.” The method I’ll be using will involve studying the sum and product of roots of quadratics then building up to general polynomials. The tools that will be utilized will involve polynomial theorems and proof by mathematical induction.

The most common applications for polynomials with integer coefficients are coding theory and cryptography; where factorization (and related operations like determining if a polynomial can be factorised) forms the foundational framework on which systems are developed or destroyed.

In this day and age, coding is fundamental because it is utilised for so many different things, like digital communication (satellites, telephones, and video). The use of cryptography in daily computer use and business has also grown commonplace.

For polynomials with complex numbers as coefficients, factoring is essentially the same as determining and calculating the numerical roots; since they are essentially factorised into linear factors.

There are several engineering problems where a system's behaviour depends on where a polynomial's complex roots are located. Whether all the roots lie inside the unit circle, or have negative real parts, for instance, can indicate whether the system is stable or unstable. The modelling of cruise control or "autopilot," which uses the coefficients of such polynomials to assure proper operation and safety prior to use, is an example of this in practice.

It can thus be said, with confidence, that polynomials (whether their coefficients are complex or not) and their numerical roots are immensely useful in day-to-day applications, as they are the base of many cryptographic procedures. Being an aspiring programmer and hoping to pursue a career related to computer science, this topic piqued my interest due to its complexity and subtle presence in a multitude of immensely important technological procedures.

Before setting out to find the sum and product of roots of any polynomial, it is important to know the fundamental math theorems, notation, and terminology that will be used throughout the exploration. Beginning with terminology and notation; a polynomial is any function that can be expressed as in equation (1),

\(f(x) = a_nx^n+a_{n-1}x^{n-1}+...+a_1x+a_0\)

Where \(a_n,\,a_{n-1},...,a_1\) are all of complex nature (denoted by C) and are called the coefficients of the polynomial; where \(a_0\) ∈ C is the constant term of the polynomial; and where n is a natural number (denoted by N) called the degree, or highest power, of the polynomial.

N.B - When the coefficients are complex, *f*(*x*) is preferably written as *f*(*z*), but for consistency, this won’t be used in this paper.

As a means to summarize, *f*(*x*) may be written by summation notation as in equation (2),

\(f(x)=Σ^n_{m=0}\,a_mx^m\)
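To make this notation concrete, a polynomial stored as a list of coefficients \(a_0, a_1, ..., a_n\) can be evaluated directly from the sum. This is a small Python sketch of my own (the function name and coefficient ordering are my choices, not standard notation):

```python
# Evaluate f(x) = sum of a_m * x^m for m = 0..n, with coefficients
# stored lowest power first: coeffs = [a_0, a_1, ..., a_n].
def evaluate_poly(coeffs, x):
    return sum(a_m * x**m for m, a_m in enumerate(coeffs))

# f(x) = x^2 - 5x + 6, i.e. a_0 = 6, a_1 = -5, a_2 = 1
f = [6, -5, 1]
print(evaluate_poly(f, 2))  # → 0, since 2 is a root
print(evaluate_poly(f, 3))  # → 0, since 3 is a root
```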

The main theorems that will be used are shown below; the reasoning for their inclusion will be explained later. The proofs of these theorems are however not relevant to the topic of exploration and thus they will not be included.

**Theorem 1 (Equality of polynomials).** If there are two polynomials *f*(*x*) and *g*(*x*) which are always equal, then their coefficients are equal (Farin, 2022). That is, if

\(Σ^n_{m=0}\,a_mx^m=Σ^n_{m=0}\,b_mx^m\)

Then the coefficients *a _{m} = b_{m}* for all integers m ∈ [0, n]. \((e.g.\,x^2+1≠x^2+x)\)

**Theorem 2 (Factor theorem)**. The polynomial *f*(*x*) has root r, meaning that *f*(*r*) = 0, if and only if (*x* − *r*) is a factor of *f*(*x*), or that if *f*(*x*) has degree n, then *f*(*x*) = *g*(*x*)(*x* − *r*) for some polynomial *g*(*x*) of degree n − 1 (Spector, 2021). (e.g. *x*^{2} − 1 = (*x* + 1)(*x* − 1) as 1^{2} − 1 = 0)

**Theorem 3 (Fundamental Theorem of Algebra or FTA).** Any polynomial *f*(*z*) of degree n has n (not necessarily distinct) complex roots, meaning that \(a_nz^n+a_{n-1}z^{n-1}+...+a_1z+a_0\) has roots denoted as \(r_1, r_2, r_3, ..., r_n\) ∈ C (Britannica, 2022).

What I initially learned about finding the roots of cubic polynomials in my IB Math AA HL course was a way to solve cubics such as \(x^3-7x+6=0\) by either splitting terms and factorizing as

\(x^3-7x+6=x^3-x-6x+6=x(x^2-1)-6(x-1)\)

\(=x(x+1)(x-1)-6(x-1)\)

\(=(x^2+x-6)(x-1)\)

Or by guessing that *x* = 1 is a root and, by the factor theorem, dividing \(x^3-7x+6\) by *x* − 1 with synthetic division. Although both ways are pertinent, the point is that cubics weren’t factorized as \(a_3(x-r_1)(x-r_2)(x-r_3)\) the way quadratics were factorized as \(a_2(x-r_1)(x-r_2)\). However, in this section, since \(a_3(x-r_1)(x-r_2)(x-r_3)\) features all the roots together, I can relate the roots of the cubic to its coefficients.

Being inspired by showing that quadratics factor as \(a_2(x-r_1)(x-r_2)\), it only makes sense to see if cubics also factor as \(a_3(x-r_1)(x-r_2)(x-r_3)\); all of which will be illustrated in lemma 2.

**Lemma 2 (All cubics are factorable).** Given any cubic *f*(*x*) = *a*_{3}*x*^{3} + *a*_{2}*x*^{2} + *a*_{1}*x* + *a*_{0}, where all coefficients are complex, there exist complex roots *r*_{1}, *r*_{2}, and *r*_{3} such that *f*(*x*) = *a*_{3}(*x* − *r*_{1})(*x* − *r*_{2})(*x* − *r*_{3}).

Proof. By FTA (theorem 3), *f*(*x*) has three roots *r*_{1}, *r*_{2}, and *r*_{3}. Hence, by the factor theorem (theorem 2), *f*(*x*) = *g*(*x*)(*x* − *r*_{1})(*x* − *r*_{2})(*x* − *r*_{3}) for some polynomial *g*(*x*) of degree 3 − 3, or 0. That means *g*(*x*) is a constant. To satisfy theorem 1 (coefficient of \(x^3\) on LHS = RHS), *g*(*x*) = *a*_{3}, which implies lemma 2.
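Lemma 2 can also be sanity-checked numerically: starting from the roots of a cubic, re-expanding \(a_3(x-r_1)(x-r_2)(x-r_3)\) should recover the original coefficients. Below is a small sketch of my own (the helper names and coefficient ordering are my choices), using the cubic \(x^3-7x+6\) studied later in this paper:

```python
# Multiply a polynomial p (coefficients highest power first) by (x - r).
def mul_linear(p, r):
    res = p + [0]
    for i, c in enumerate(p):
        res[i + 1] -= r * c
    return res

# Expand a_n * (x - r_1)(x - r_2)...(x - r_n) into a coefficient list.
def expand_from_roots(a_n, roots):
    p = [a_n]
    for r in roots:
        p = mul_linear(p, r)
    return p

# x^3 - 7x + 6 has roots -3, 1, and 2; expanding should give it back.
print(expand_from_roots(1, [-3, 1, 2]))  # → [1, 0, -7, 6]
```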

This time, the proof was straightforward, which is great as it suggests I can generalize it for higher degree polynomials. I thus considered expanding *f*(*x*) as \(a_3(x-r_1)(x-r_2)(x-r_3)\) as in equation (11).

\(a_3(x-r_1)(x-r_2)(x-r_3)=a_3(x^2-(r_1+r_2)x+r_1r_2)(x-r_3)\)

\(=a_3x^3-a_3(r_1+r_2+r_3)x^2+a_3(r_1r_2+r_2r_3+r_3r_1)x-a_3r_1r_2r_3\)

\(=a_3x^3+a_2x^2+a_1x+a_0\)

Then, by theorem 1, I compared the coefficients, yielding equations (12)-(14).

\(a_2=-a_3(r_1+r_2+r_3)\)

\(⇒r_1+r_2+r_3=-\frac{a_2}{a_3}\)

\(a_1=a_3(r_1r_2+r_2r_3+r_3r_1)\)

\(⇒r_1r_2+r_2r_3+r_3r_1=\frac{a_1}{a_3}\)

\(a_0=-a_3r_1r_2r_3\)

\(⇒r_1r_2r_3=-\frac{a_0}{a_3}\)

Looking at these three equations, the first thing I noticed is that (12) gives the sum of roots of the cubic polynomial and (14) gives the product, but (13) somehow involves a mixture of sums and products (of two roots). The second thing is that the sum seems to depend on the coefficients of the highest power of *x* and second highest power of *x*, as expected from the formula booklet. The third thing is that the product depends on the constant term and the coefficient of the highest power, but this time the product has a minus sign for the cubic, while it didn’t for the quadratic. All of this is still consistent with the formula booklet.

To see this with an example: the cubic *x*^{3} − 7*x* + 6 has roots −3, 1, and 2.

Checking equation (12), −3 + 1 + 2 = 0 and \(-\frac{a_2}{a_3}=-\frac{0}{1}=0.\)

Checking equation (13), (−3) × 1 + 1 × 2 + 2 × (−3) = −7 and \(\frac{a_1}{a_3}=\frac{-7}{1}=-7.\)

Checking equation (14), (−3) × 1 × 2 = −6 and \(-\frac{a_0}{a_3}=-\frac{6}{1}=-6.\)
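The three checks above can also be scripted. A minimal Python sketch of my own, with the coefficients and roots hard-coded from the example:

```python
# Verify equations (12)-(14) for x^3 - 7x + 6
# (a_3 = 1, a_2 = 0, a_1 = -7, a_0 = 6), whose roots are -3, 1, and 2.
a3, a2, a1, a0 = 1, 0, -7, 6
r1, r2, r3 = -3, 1, 2

assert r1 + r2 + r3 == -a2 / a3           # equation (12): sum of roots
assert r1*r2 + r2*r3 + r3*r1 == a1 / a3   # equation (13): pairwise products
assert r1 * r2 * r3 == -a0 / a3           # equation (14): product of roots
print("equations (12)-(14) all hold")
```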

This thus concludes the cubic section for now. Not only was I able to relate the sum and product of roots to the coefficients of the cubic, but I was able to learn how the roots were related to the *a*_{1} coefficient as well, which was a new discovery to me. Although I could not fully explain the origins of the expression \(r_1r_2+r_2r_3+r_3r_1\), I kept researching, focusing on the sum and product formulae instead for now.

I could have continued next with the following degrees of polynomials: the quartic, then the quintic, and so on, but these degrees would only be special cases. It would thus be more fitting to check that the pattern continues for a polynomial of any degree, which will be done in the next section.

Setting out to find the sum and product of roots of any polynomial, I used the techniques used in sections 3 and 4. Since lemmas 1 and 2 proved useful in relating the sum and product of roots to the polynomial’s coefficients, I began by generalizing both of them into Theorem 4.

**Theorem 4 (All polynomials are factorable).** Given any polynomial *f*(*x*) of degree *n*, which can be represented as

\(f(x)=Σ^n_{m=0}a_mx^m\)

where all coefficients are complex, there exist n (not necessarily distinct) complex roots \(r_1, r_2, ..., r_n\) such that \(f(x)=a_n(x-r_1)(x-r_2)\cdots(x-r_n)\).

Proof. Since *f*(*x*) is of degree *n*, then by theorem 3 it has n complex roots \(r_1, r_2, ..., r_n\). Hence, by theorem 2 applied repeatedly, one can write \(f(x)=g(x)(x-r_1)(x-r_2)\cdots(x-r_n)\) for some polynomial *g*(*x*) of degree n − n = 0, i.e. a constant. To satisfy theorem 1 (coefficient of \(x^n\) on LHS = RHS), \(g(x)=a_n\), which implies theorem 4.

Now that I have shown the relation between *f*(*x*) and its roots, I thought of generalizing the formulae of the sum and product of roots for all polynomials, to prove that the formula booklet is true. A major problem was, alas, faced while executing this. For the quadratic and cubic, I had to expand \(a_n(x-r_1)(x-r_2)\cdots(x-r_n)\), but since I now didn’t know how many brackets there were, there was no hope in expanding the entire product. Another approach was needed.

The other approach was: The Principle of Mathematical Induction. The motivation behind using it is that when I expanded the cubic in equation (11), I expanded the first two brackets first.

Expanding those two brackets was similar to expanding the quadratic in equation (7); if I were to expand four brackets, I would expand the first three first, and so on. The capacity to expand a cubic via a quadratic, a quartic via a cubic, a quintic via a quartic, etc. was the primary reason I used mathematical induction. This approach was already studied in my IB Math AA HL course.

First, I used mathematical induction to generalize the sum of roots in lemma 3.

**Lemma 3 (Sum of roots in expansion).** When expanding the brackets in theorem 4, namely \(a_n(x-r_1)(x-r_2)\cdots(x-r_n)\), the term in \(x^{n-1}\) is \(-a_n(r_1+r_2+...+r_n)x^{n-1}\).

Proof. I began the proof by mathematical induction on n. Base case (*n* = 2): this is true by equation (7), \(a_2x^2-a_2(r_1+r_2)x+a_2r_1r_2\). Inductive case: we may assume that lemma 3 is true for *n* = *k*; then, considering the case of *n* = *k* + 1, we have equation (15).

\(a_{k+1}(x-r_1)(x-r_2)...(x-r_k)(x-r_{k+1})=\frac{a_{k+1}}{a_k}\,a_k(x-r_1)(x-r_2)...(x-r_k)\,(x-r_{k+1})\)

By the inductive hypothesis, theorem 4, and some algebra, we have equation (16) below.

\(\frac{a_{k+1}}{a_k}\,a_k(x-r_1)(x-r_2)...(x-r_k)\,(x-r_{k+1})\)

\(=\frac{a_{k+1}}{a_k}\big(a_kx^k-a_k(r_1+r_2+...+r_k)x^{k-1}+...\big)(x-r_{k+1})\)

\(=a_{k+1}\big(x^k-(r_1+r_2+...+r_k)x^{k-1}+...\big)(x-r_{k+1})\)

\(=a_{k+1}\big(x^{k+1}-(r_1+r_2+...+r_k)x^k-r_{k+1}x^k+...\big)\)

\(=a_{k+1}\big(x^{k+1}-(r_1+r_2+...+r_k+r_{k+1})x^k+...\big)\)

\(=a_{k+1}x^{k+1}-a_{k+1}(r_1+r_2+...+r_k+r_{k+1})x^{(k+1)-1}+...\)

Thus, since the truth of the statement for* n* =* k* implies the truth of the statement for *n* = *k* + 1 and the base case is true; lemma 3 is true.

By proving lemma 3, the sum of roots formula in corollary 1 was heavily fortified.

**Corollary 1 (Sum of roots formula)**. For any polynomial of degree *n*, the sum of its roots is given by \(-\frac{a_{n-1}}{a_n}\)

Proof. By lemma 3, the term in \(x^{n-1}\) is both \(a_{n-1}x^{n-1}\) and \(-a_n(r_1+r_2+...+r_n)x^{n-1}\). As a result, we have \(-a_n(r_1+r_2+...+r_n)=a_{n-1}\), and \(r_1+r_2+...+r_n=-\frac{a_{n-1}}{a_n}\), which is the sum.

The formula for the sum of roots has already been derived. The proof for the product of roots is similar and will be shown now.

**Lemma 4 (Product of roots in expansion).** When expanding the brackets in theorem 4, the constant term is \((-1)^na_nr_1r_2...r_n\).

**Corollary 2 (Product of roots formula).** For any polynomial of degree *n*, the product of its roots is given by \((-1)^n\frac{a_0}{a_n}\).

*Proof.* By lemma 4, the constant term is both \(a_0\) and \((-1)^na_nr_1r_2...r_n\). As a result, we have \((-1)^na_nr_1r_2...r_n=a_0,\) and

\(r_1r_2...r_n=\frac{a_0}{(-1)^na_n}=(-1)^n\frac{a_0}{a_n}\)which is the product.

By expanding out \(a_n(x-r_1)(x-r_2)\cdots(x-r_n)\), the formulas for the sum and product of roots of any polynomial were successfully derived. These formulas match the ones in the IB Math AA formula booklet, which further fortifies their validity and accuracy.
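The two general formulae can be checked on a polynomial with complex roots, as the theorems allow. A quick sketch of my own, using the cubic \((x-i)(x+i)(x-2)=x^3-2x^2+x-2\) (a polynomial I chose for illustration):

```python
# Check the sum formula (corollary 1) and product formula on a cubic
# with complex roots i, -i, and 2: x^3 - 2x^2 + x - 2.
roots = [1j, -1j, 2]
a_n, a_n_minus_1, a_0 = 1, -2, -2   # coefficients of x^3, x^2, and constant
n = 3

product = 1
for r in roots:
    product *= r

assert sum(roots) == -a_n_minus_1 / a_n   # sum = -a_{n-1}/a_n = 2
assert product == (-1)**n * a_0 / a_n     # product = (-1)^3 * (-2)/1 = 2
print("sum and product formulae hold for complex roots too")
```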

Nevertheless, there were still other ways to expand \(a_n(x-r_1)(x-r_2)\cdots(x-r_n)\). Rather than looking at the first, second, and last coefficients, I can expand the brackets to see the rest of the coefficients. Executing this would be enough to answer my inquiry about the connection between the roots and coefficients of any polynomial. The problem that remained was the expansion of \(a_n(x-r_1)(x-r_2)\cdots(x-r_n)\) using induction: I had only discovered patterns for the coefficient of \(x^{n-1}\) and the constant term, not for the coefficients in between.

Without a consistent pattern, a proper hypothesis that could be proven using induction could not be formulated, so a coherent pattern had to be discovered first. I thus returned to cubic polynomials.

Before, I derived equations (12)-(14) that related the roots of a cubic to its coefficients.

Equation (12), \((r_1 + r_2 + r_3 = −\frac{a_2}{a_3})\), gave the sum of roots.

Equation (14), \((r_1r_2r_3 = −\frac{a_0}{a_3} )\) gave the product of roots.

Interestingly, equation (13), \((r_1r_2 + r_2r_3 + r_3r_1 = \frac{a_1}{a_3})\), gave a mixture of products and sums. This was unusual, and thus I thought to look back at the expansion stated in equation (18), as it might help.

\(a_3(x-r_1)(x-r_2)(x-r_3)\)

\(=a_3x^3-a_3(r_1+r_2+r_3)x^2+a_3(r_1r_2+r_2r_3+r_3r_1)x-a_3r_1r_2r_3\)

The RHS appeared to yield a consistent pattern. The fourth term, \(-a_3r_1r_2r_3\), has three roots multiplied; the third term, \(a_3(r_1r_2+r_2r_3+r_3r_1)\), has two roots multiplied; and the second term, \(-a_3(r_1+r_2+r_3)\), only has “one root multiplied.” Lastly, the first term vacuously has “no roots multiplied” (as \(a_3\) doesn’t feature any roots).

I thought about why such a pattern was true, but reached no conclusion. Then, I thought about how I could expand \(a_3(x-r_1)(x-r_2)(x-r_3)\) to only find the term in *x*, i.e. \(a_3(r_1r_2+r_2r_3+r_3r_1)x\). Previously, I thought of expanding \(a_3(x-r_1)(x-r_2)(x-r_3)\) by expanding \(a_3(x-r_1)(x-r_2)\) first, but then I recalled a more efficient way to do so: combinations. Using combinations, I was able to expand \(a_3(x-r_1)(x-r_2)(x-r_3)\) in another way to find the term in *x*, as follows.

I only want one *x* term, which I can achieve by multiplying the *x* from \((x-r_1)\) by the \(-r_2\) and \(-r_3\) from \((x-r_2)\) and \((x-r_3)\) respectively. In other words,

\(a_3(x-r_1)(x-r_2)(x-r_3)=a_3(...+xr_2r_3+...)\)

Or I can achieve one x term by choosing the second x from the second bracket instead as such

\(a_3(x-r_1)(x-r_2)(x-r_3)=a_3(...+xr_1r_3+...)\)

Or the third bracket,

\(a_3(x-r_1)(x-r_2)(x-r_3)=a_3(...+xr_1r_2+...)\)

And I claim that the three terms *a*_{3}(*xr*_{2}*r*_{3}), *a*_{3}(*xr*_{1}*r*_{3}), and *a*_{3}(*xr*_{1}*r*_{2}) are the only terms in *x* when expanding the cubic. This is because I can select one *x* from three brackets in three different ways, which correspond to the three terms respectively. But if I were to choose, let’s say, two *x*’s, I would get a term in \(x^2\), not a term in *x* as wanted.

N.B: The expansion of brackets this way is algebraically correct and the argument is formal.

Hence, this means the term in *x* is \(a_3(xr_2r_3)+a_3(xr_1r_3)+a_3(xr_1r_2)\), which can be simplified as \(xa_3(r_2r_3)+xa_3(r_1r_3)+xa_3(r_1r_2)=a_3(r_1r_2+r_2r_3+r_3r_1)x\). Comparing this with equation (18), I see that I have the same term in *x*.

Ultimately, this means rather than expanding \(a_3(x-r_1)(x-r_2)(x-r_3)\) by expanding \(a_3(x-r_1)(x-r_2)\) first, I can expand by choosing m number of *x*’s and 3 − m number of different roots (as 3 is the degree of the cubic) from the brackets to get a term in \(x^m\).

Meaning, to get the term in \(x^0\), I choose 0 *x*’s and 3 roots. For \(x^1\), I choose 1 *x* and 2 roots. For \(x^2\), I choose 2 *x*’s and 1 root. For \(x^3\), I choose 3 *x*’s and 0 roots. Actually, this is the reason behind the pattern seen before (the fourth term has three roots, etc.).

This way of expansion reminded me of combinations. This is because I had to choose 1 *x* from 3 brackets, and the combination corresponding to that is 3*C*_{1} = 3, which corresponds to the fact that there are three terms in *x* before adding, namely *a*_{3}(*xr*_{2}*r*_{3}), *a*_{3}(*xr*_{1}*r*_{3}), and *a*_{3}(*xr*_{1}*r*_{2}). As astounding as combinations are, I did not use them significantly for the rest of the investigation, but rather for simple checking.

To further validate the procedure, I decided to extend this method to the polynomial of the next degree up: quartics.

Suppose I want to find the term in *x* for a quartic. I can do that by choosing one x and three roots. Note that because I am technically multiplying by the negative of the roots (like, −*r*_{1} instead of just *r*_{1}), I also have to account for the sign. Since I am multiplying three negatives of the roots, the term in *x* will also have to be negative.

\(-a_4(r_1r_2r_3+r_2r_3r_4+r_3r_4r_1+r_4r_1r_2)x\)

Checking the combination 4*C*_{1} = 4, there are indeed four terms added together in the bracket, and if I am to expand *a*_{4} (*x* −* r*_{1} )(*x* − *r*_{2} )(*x* − *r*_{3})(*x* − *r*_{4}), I would get that it equals

\(a_4(...+(r_1r_2+r_2r_3+r_3r_1)x-r_1r_2r_3)(x-r_4)\)

\(=a_4(...+(r_1r_2+r_2r_3+r_3r_1)x^2-xr_1r_2r_3-r_4(r_1r_2+r_2r_3+r_3r_1)x+r_1r_2r_3r_4 )\)

\(=a_4(...-xr_1r_2r_3-(r_4r_1r_2+r_2r_3r_4+r_3r_4r_1)x)\)

\(=a_4(...-(r_1r_2r_3+r_2r_3r_4+r_3r_4r_1+r_4r_1r_2)x)\)

Since the terms are the same, the math checks out. Like that, I automatically get the quartic coefficients in terms of the roots and \(a_4\). Apart from the sum and product, I have equations (23) and (24).

\(a_1=-a_4(r_1r_2r_3+r_2r_3r_4+r_3r_4r_1+r_4r_1r_2)\)

\(a_2=a_4(r_1r_2+r_2r_3+r_3r_4+r_4r_1+r_1r_3+r_2r_4)\)

If I isolate the roots alone, I get equations (25) and (26).

\(r_1r_2r_3+r_2r_3r_4+r_3r_4r_1+r_4r_1r_2=-\frac{a_1}{a_4}\)

\(r_1r_2+r_2r_3+r_3r_4+r_4r_1+r_1r_3+r_2r_4=\frac{a_2}{a_4}\)

Equations (25) and (26) look crowded, but at least now I know what they mean. The left hand side in equation (25) is the sum of all possible 4*C*_{3 }= 4 products of three different roots chosen from four roots. And in equation (26), the LHS is the sum of all possible 4*C*_{2} = 6 products of two different roots chosen from four roots.
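The “sum of all possible products of k different roots chosen from n roots” can be computed mechanically with combinations. A sketch of my own (the helper `sym_sum` is a name I made up for illustration):

```python
from itertools import combinations
from math import comb, prod

# Sum of all possible products of k different roots chosen from the list,
# i.e. the LHS of equations (25) and (26).
def sym_sum(roots, k):
    return sum(prod(c) for c in combinations(roots, k))

roots = [1, 2, 3, 4]  # hypothetical quartic roots, just for illustration
print(comb(4, 3), sym_sum(roots, 3))  # → 4 50 (four triple products)
print(comb(4, 2), sym_sum(roots, 2))  # → 6 35 (six pairwise products)
```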

When understood this way, I see how this easily derives the sum and product formulae. For *a*_{3}, the LHS should be the sum of all possible products of one root chosen from four roots. This is simply the same as the sum, as the product of one root is just *r*_{1} or just *r*_{2}, and all the possible products are all the roots.

The same thing applies for *a*_{0}: the LHS is the sum of all possible products of four roots chosen from four roots. There is only one such product, which is the product of all roots. Now I had a pattern for the LHS. Before generalizing, I also needed a pattern for the RHS, specifically the signs. When there was one root multiplied (i.e. the sum), the sign was negative. When there were two roots multiplied, the sign was positive. For three, it was negative. For four roots, it was positive. To capture this, I thought it was better to write equations (25) and (26) as

\(r_1r_2r_3+r_2r_3r_4+r_3r_4r_1+r_4r_1r_2=(-1)^3\frac{a_{4-3}}{a_4}\)

\(r_1r_2+r_2r_3+r_3r_4+r_4r_1+r_1r_3+r_2r_4=(-1)^2\frac{a_{4-2}}{a_4}\)

It is easy to check that these equations are the same as before, but I chose to encapsulate the sign as \((-1)^{\text{no. of roots multiplied}}\) and the fraction as \(\frac{a_{\text{degree}\,-\,\text{no. of roots multiplied}}}{a_{\text{degree}}}\), as this is the pattern that I believe is true.

**Theorem 5 (Vieta’s Formulae). **The coefficients *a*_{0}, *a*_{1}, ... , *a*_{n} of a degree n polynomial are related to its complex roots *r*_{1}, *r*_{2}, ... , *r*_{n} through equation (29).

\((-1)^k\frac{a_{n-k}}{a_n}=\) sum of all possible products of k different roots chosen from the n roots.

Proof. Suppose the sum on the RHS is referred to as \(s_k\). By expanding the relation from theorem 4, \(f(x)=a_n(x-r_1)(x-r_2)\cdots(x-r_n)\), and comparing it with \(f(x)=a_nx^n+a_{n-1}x^{n-1}+...+a_0\) using theorem 1, the coefficient of \(x^{n-k}\) is both \(a_{n-k}\) and \((-1)^ka_ns_k\), so

\(s_k=\frac{a_{n-k}}{a_n(-1)^k}\cdot\frac{(-1)^k}{(-1)^k}=(-1)^k\frac{a_{n-k}}{a_n}\)

And the result is yielded.

Example of Theorem 5 - Consider the polynomial \(x^5-15x^4+85x^3-225x^2+274x-120\), which has roots 1, 2, 3, 4, and 5.

I can see that the coefficient of* x*^{2} is -225, and here *k* = 5 − 2 = 3. Using theorem 5,

\( (−1)^3\frac{-225}{1}= 1 × 2 × 3 + 1 × 2 × 4 + 1 × 2 × 5 + 1 × 3 × 4 + 1 × 3 × 5 \)

+1 × 4 × 5 + 2 × 3 × 4 + 2 × 3 × 5 + 2 × 4 × 5 + 3 × 4 × 5

225 = 6 + 8 + 10 + 12 + 15 + 20 + 24 + 30 + 40 + 60

225 = 225
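The same check can be run for every k at once. A sketch of my own that verifies theorem 5 on this example for k = 1 to 5 (coefficients stored highest power first, so `coeffs[k]` holds \(a_{n-k}\)):

```python
from itertools import combinations
from math import prod

# Verify theorem 5 on x^5 - 15x^4 + 85x^3 - 225x^2 + 274x - 120,
# whose roots are 1, 2, 3, 4, 5.
coeffs = [1, -15, 85, -225, 274, -120]   # a_5, a_4, ..., a_0
roots = [1, 2, 3, 4, 5]
n = len(roots)

for k in range(1, n + 1):
    # LHS of (29): sum of all products of k different roots
    s_k = sum(prod(c) for c in combinations(roots, k))
    # RHS of (29): (-1)^k * a_{n-k} / a_n
    assert s_k == (-1)**k * coeffs[k] / coeffs[0]
print("theorem 5 verified for k = 1 to 5")
```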

The relation between the roots and coefficients is thus easily visible. The only nuance still present is in mathematically representing “the sum of all possible products of k different roots chosen from n roots,” but I believe it does not matter as long as it is easy to use and calculate with. Theorem 5 is referred to as Vieta’s formulae; it was essentially first discovered by two mathematicians, **Albert Girard and François Viète**, and was named after **François Viète** (Weisstein, 2022).

The *n*th root of unity (Rowland & Weisstein, 2023) is a complex number *z* such that

*z ^{n}* = 1

If n is odd, the only real solution is *z* = 1, and if it is even, then the only real solutions are *z* = 1 and *z* = −1. By De Moivre’s theorem, all possible values of *z* are of the form \(cos(\frac{2πk}{n})+i\,sin(\frac{2πk}{n})\) where *k* = 1, 2, ..., *n*. However, all *z* values can be viewed as roots of the nth degree polynomial \(z^n-1\), which has \(a_n=1\), \(a_0=-1\), and every other coefficient equal to 0, so by theorem 5,

the product of roots \(=(-1)^n\cdot\frac{-1}{1}=(-1)^{n+1}\)

and every other sum of products of roots = 0.

Checking the fourth roots of unity (*z*^{4} = 1), which are 1, −1, i, −i, both equations above are true. The product is 1(−1)(i)(−i) = −1 = \((−1)^ {4+1} \), and the sum of roots is 1 − 1 + i − i = 0. Moreover, the sum of product of two roots is

1(−1) + 1(i) + 1(−i) + (−1)(i) + (−1)(−i) + (i)(−i)

= −1 + i − i − i + i + 1 = 0

as expected. As such, all roots of unity have the special properties described in equations (33) and (34). Furthermore, the sum of roots is zero when *n ≥* 2, since equation (34) does not apply for *n* *=* 1. Since the sum is zero for *n ≥* 2, this means

\(∑^n_{k=1}\bigg(cos(\frac{2πk}{n})+i\,sin\,(\frac{2πk}{n})\bigg)=0\)

And by comparing the real and imaginary parts, the specific sums of sine and cosine are each zero:

\(∑^n_{k=1}cos(\frac{2πk}{n})=∑^n_{k=1}sin(\frac{2πk}{n})=0\)

The application of Vieta’s formulae, in this case, allowed me to derive a trig identity about the sums of sine and cosine.
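These roots-of-unity properties can be confirmed numerically. A short sketch of my own for n = 4, using Python's built-in complex arithmetic:

```python
import cmath

# Check the derived properties for the nth roots of unity (n = 4):
# product = (-1)^(n+1), and the sum (hence the cos and sin sums) is 0.
n = 4
roots = [cmath.exp(2j * cmath.pi * k / n) for k in range(1, n + 1)]

product = 1
for z in roots:
    product *= z

assert abs(product - (-1) ** (n + 1)) < 1e-9   # product = (-1)^{n+1} = -1
assert abs(sum(roots)) < 1e-9                  # sum of roots = 0 (n >= 2)
assert abs(sum(z.real for z in roots)) < 1e-9  # sum of cosines = 0
assert abs(sum(z.imag for z in roots)) < 1e-9  # sum of sines = 0
print("roots-of-unity properties verified for n = 4")
```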

In sections 1-5, I derived the sum and product formulae found in the IB Math AA Formula Booklet for all polynomials using the Fundamental Theorem of Algebra, the Factor Theorem, and induction. In sections 6-8, I related the roots of any arbitrary polynomial to its coefficients in a straightforward formula by factoring and differently expanding the polynomial. In the end, the entire exploration led to exploring Vieta’s formulae for polynomials. The exploration was meaningful not in the ultimate or penultimate results, but in the mathematical tools and critical thinking I engaged with.

**11.1.1 Strengths - **The key strengths of this exploration were the facts that

- The formulae for the sum and product are widely applicable for all polynomials.
- A generalization and proof were discovered for the sum and product formulae, thus highlighting their validity.
- The proofs of the sum, product, and Vieta’s formulae all heavily relied on identifying patterns in a multitude of different examples in order to formulate inductive hypotheses, as in lemmas 3 and 4.

**11.1.2 Limitations and Extensions -**

The only main concern for this paper is the assumption that all polynomials have complex roots such that the Fundamental Theorem of Algebra holds. Without this assumption, I believe this paper would go beyond the scope of an investigation I could do. One possible extension can thus be the exploration of the sum and product of roots of a polynomial factored over the real numbers, rational numbers, or integers. Discriminants could then be utilized to properly check whether a specific polynomial has a sum and product of roots or not. The main extension for this investigation is to explore how Vieta’s formulae might exist elsewhere in math: how they can be used, for example, to calculate the sum of cubes of the roots of some cubic polynomial, or how they might apply to equations such as \(z^n=1\) to discover properties about the roots of unity; i.e. seeing the capabilities of Vieta’s formulae.

Britannica, T. Editors of Encyclopaedia (2022, June 2). *Fundamental theorem of algebra.* Encyclopedia Britannica. https://www.britannica.com/science/fundamental-theorem-of-algebra

Farin, L. (2022, March 21). *Equality of polynomials.* ProofWiki. https://proofwiki.org/wiki/Equality_of_Polynomials

Rowland, Todd and Weisstein, Eric W. "Root of Unity." From* MathWorld*--A Wolfram Web Resource. https://mathworld.wolfram.com/RootofUnity.html

Spector, L. (2021). *Roots of polynomials.* Roots or zeros of polynomials of degree greater than 2 - Topics in precalculus. https://www.themathpage.com/aPreCalc/factor-theorem.htm

Weisstein, Eric W. "Vieta's Formulas." From *MathWorld*--A Wolfram Web Resource. https://mathworld.wolfram.com/VietasFormulas.html

**Important terminology**

Lemma - a statement that is true and is used in proving other true statements.

Corollary - statement that is true and is derived / deduced from a theorem or a specific claim.

Proof - the justification of why a statement is valid.

Conjecture - a statement that is considered true but has not yet been proven.