Derivation of the Routh array


The Routh array is a tabular method permitting one to establish the stability of a system using only the coefficients of the characteristic polynomial. Central to the field of control systems design, the Routh–Hurwitz theorem and Routh array emerge by using the Euclidean algorithm and Sturm's theorem in evaluating Cauchy indices.



The Cauchy index

Given a system with characteristic polynomial:


\begin{align}
 f(x) & {} = a_0x^n+a_1x^{n-1}+\cdots+a_n & {} \quad (1) \\
      & {} = (x-r_1)(x-r_2)\cdots(x-r_n) & {} \quad (2) \\
\end{align}


Assuming no roots of f(x) = 0\, lie on the imaginary axis, and letting


N\, = The number of roots of f(x) = 0\, with negative real parts, and
P\, = The number of roots of f(x) = 0\, with positive real parts


then we have


N+P=n  \quad (3) \,
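
For example, the polynomial f(x) = (x+1)(x-2) = x^2-x-2\, has one root with negative real part and one with positive real part, so N = 1\,, P = 1\,, and N+P=n=2\,.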


Expressing f(x)\, in polar form, we have


f(x) = \rho(x)e^{j\theta(x)}    \quad (4) \,


where


\rho(x) = \sqrt{\mathfrak{R}E^2[f(x)]+\mathfrak{I}M^2[f(x)]}   \quad (5)


and


\theta(x) = \tan^{-1}\big(\mathfrak{I}M[f(x)]/\mathfrak{R}E[f(x)]\big)  \quad (6)


From (2), note that


\theta(x) = \theta_{r_1}(x)+\theta_{r_2}(x)+\cdots+\theta_{r_n}(x)  \quad (7)\,


where


\theta_{r_i}(x) = \angle(x-r_i)  \quad (8)\,


Now if the i\,th root of f(x) = 0\, has a positive real part, then (using the notation y=(\mathfrak{R}E[y],\mathfrak{I}M[y]))


\begin{align}
\theta_{r_i}(x)\big|_{x=j\infty} & = \angle(x-r_i)\big|_{x=j\infty} \\
                                 & = \angle(0-\mathfrak{R}E[r_i],\infty-\mathfrak{I}M[r_i]) \\
                                 & = \angle(-\mathfrak{R}E[r_i],\infty) \\
                                 & = \lim_{\phi \to -\infty}\tan^{-1}\phi=-\frac{\pi}{2}  \quad (9)\\
\end{align}


and


\theta_{r_i}(x)\big|_{x=-j\infty} = \angle(-\mathfrak{R}E[r_i],-\infty) = \lim_{\phi \to \infty}\tan^{-1}\phi=\frac{\pi}{2}  \quad (10)\,


Similarly, if the ith root of f(x)=0\, has a negative real part,


\theta_{r_i}(x)\big|_{x=j\infty} = \angle(-\mathfrak{R}E[r_i],\infty) = \lim_{\phi \to \infty}\tan^{-1}\phi=\frac{\pi}{2}\,  \quad (11)


and


\theta_{r_i}(x)\big|_{x=-j\infty} = \angle(-\mathfrak{R}E[r_i],-\infty) = \lim_{\phi \to -\infty}\tan^{-1}\phi=-\frac{\pi}{2}\,  \quad (12)


Therefore, \theta_{r_i}(x)\Big|_{x=-j\infty}^{x=j\infty} = -\pi\, when the ith root of f(x)\, has a positive real part, and \theta_{r_i}(x)\Big|_{x=-j\infty}^{x=j\infty} = \pi\, when the ith root of f(x)\, has a negative real part. Hence, from (7),


\theta(x)\big|_{x=j\infty} = \angle(x-r_1)\big|_{x=j\infty}+\angle(x-r_2)\big|_{x=j\infty}+\cdots+\angle(x-r_n)\big|_{x=j\infty} = \frac{\pi}{2}N-\frac{\pi}{2}P  \quad (13)\,


and


\theta(x)\big|_{x=-j\infty} = \angle(x-r_1)\big|_{x=-j\infty}+\angle(x-r_2)\big|_{x=-j\infty}+\cdots+\angle(x-r_n)\big|_{x=-j\infty} = -\frac{\pi}{2}N+\frac{\pi}{2}P  \quad (14)\,


So, if we define


\Delta=\frac{1}{\pi}\theta(x)\Big|_{-j\infty}^{j\infty}  \quad (15)\,


then we have the relationship


N - P = \Delta  \quad (16)\,


and combining (3) and (16) gives us


N = \frac{n+\Delta}{2}\, and P = \frac{n-\Delta}{2}    \quad (17)\,


Therefore, given the equation f(x) = 0\, of degree n\,, we need only evaluate \Delta\, to determine N\,, the number of roots with negative real parts, and P\,, the number of roots with positive real parts.
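
For example, f(x) = x^2+2x+2 = (x+1-j)(x+1+j)\, has both roots in the left half plane, so N = 2\, and P = 0\,; by (16) we must find \Delta = 2\,, and (17) then correctly returns N = (2+2)/2 = 2\, and P = (2-2)/2 = 0\,. The remainder of the derivation is concerned with computing \Delta\, without knowing the roots.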


Figure 1: \tan(\theta)\, versus \theta\,


Equations (13) and (14) show that \theta(x)\, evaluated at x=\pm j\infty\, is an integer multiple of \pi/2\,. Note now, from (6) and the graph of \tan(\theta)\, versus \theta\, in Figure 1, that if x\, is varied over an interval (ja,jb)\, such that \theta_a=\theta(x)|_{x=ja}\, and \theta_b=\theta(x)|_{x=jb}\, are integer multiples of \pi\,, and \theta(x)\, increases by \pi\, over that interval, then in travelling from point a to point b, \tan[\theta(x)]\, has "jumped" from +\infty\, to -\infty\, one more time than it has jumped from -\infty\, to +\infty\,. Similarly, if \theta(x)\, decreases by \pi\, over such an interval (with \theta\, again a multiple of \pi\, at both x = ja\, and x = jb\,), then \tan \theta (x) = \mathfrak{I}M[f(x)]/\mathfrak{R}E[f(x)]\, has jumped from -\infty\, to +\infty\, one more time than it has jumped from +\infty\, to -\infty\, as x\, was varied over the said interval.


Thus, \theta(x)\Big|_{-j\infty}^{j\infty}\, is \pi\, times the difference between the number of points at which \mathfrak{I}M[f(x)]/\mathfrak{R}E[f(x)]\, jumps from +\infty\, to -\infty\, and the number of points at which \mathfrak{I}M[f(x)]/\mathfrak{R}E[f(x)]\, jumps from -\infty\, to +\infty\, as x\, ranges over the interval (-j\infty,+j\infty)\,, provided that \tan[\theta(x)]\, is defined at x=\pm j\infty\,.
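
Continuing the example f(x) = x^2+2x+2\,, we have f(j\omega) = (2-\omega^2)+j2\omega\,, so \mathfrak{I}M[f]/\mathfrak{R}E[f] = 2\omega/(2-\omega^2)\,. As \omega\, increases through -\sqrt{2}\,, the denominator changes sign from negative to positive while the numerator is negative, so the quotient jumps from +\infty\, to -\infty\,; the same happens at \omega = +\sqrt{2}\,. Two jumps from +\infty\, to -\infty\, and none in the opposite direction give \theta(x)\Big|_{-j\infty}^{j\infty} = 2\pi\, and \Delta = 2\,, as expected for N = 2\,, P = 0\,.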


Figure 2: -\cot(\theta)\, versus \theta\,


In the case where the starting point is on an incongruity (i.e. \theta_a=\pi/2 \pm i\pi\,, i = 0, 1, 2, ...), the ending point will be on an incongruity as well, by equation (16) (since N\, and P\, are integers, \Delta\, will be an integer). In this case, we can achieve this same index (difference in positive and negative jumps) by shifting the axes of the tangent function by \pi/2\,, through adding \pi/2\, to \theta\,. Thus, our index is now fully defined for any combination of coefficients in f(x)\, by evaluating \tan[\theta]=\mathfrak{I}M[f(x)]/\mathfrak{R}E[f(x)]\, over the interval (a,b) = (-j\infty, +j\infty)\, when our starting (and thus ending) point is not on an incongruity, and by evaluating


\tan[\theta'(x)]=\tan[\theta + \pi/2] = -\cot[\theta(x)] = -\mathfrak{R}E[f(x)]/\mathfrak{I}M[f(x)]  \quad (18)\,


over said interval when our starting point is at an incongruity.


This difference, \Delta\,, of negative-going and positive-going jumps encountered while traversing x\, from -j\infty\, to +j\infty\, is called the Cauchy index of the tangent of the phase angle, the phase angle being \theta(x)\, or \theta'(x)\,, according as \theta_a\, is an integer multiple of \pi\, or not.
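
As a simple illustration of the incongruity case, take f(x) = x+1\, (n = 1\,, N = 1\,, P = 0\,). Here \theta(j\omega) = \tan^{-1}\omega\,, so \theta_a = -\pi/2\, is an incongruity, and \tan\theta = \mathfrak{I}M[f]/\mathfrak{R}E[f] = \omega\, has no jumps at all. Shifting by \pi/2\, gives \tan\theta' = -\cot\theta = -1/\omega\,, which jumps once from +\infty\, to -\infty\, (at \omega = 0\,), so \Delta = 1 = N - P\, as required.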


The Routh criterion

To derive Routh's criterion, first we will use a different notation to differentiate between the even and odd terms of f(x)\,:


f(x) = a_0x^n + b_0x^{n-1} + a_1x^{n-2} + b_1x^{n-3} + \cdots  \quad (19)\,


Now we have:


\begin{align}
 f(j\omega) & = a_0(j\omega)^n+b_0(j\omega)^{n-1}+a_1(j\omega)^{n-2}+b_1(j\omega)^{n-3}+\cdots & {}  \quad (20)\\
            & = a_0(j\omega)^n+a_1(j\omega)^{n-2}+a_2(j\omega)^{n-4}+\cdots & {} \quad (21)\\
            & + b_0(j\omega)^{n-1}+b_1(j\omega)^{n-3}+b_2(j\omega)^{n-5}+\cdots \\
\end{align}


Therefore, if n\, is even,


\begin{align}
 f(j\omega) & = (-1)^{n/2}\big[a_0\omega^n-a_1\omega^{n-2}+a_2\omega^{n-4}-\cdots \big] & {}  \quad (22)\\
            & + j(-1)^{(n/2)-1}\big[b_0\omega^{n-1}-b_1\omega^{n-3}+b_2\omega^{n-5}-\cdots \big] & {} \\
\end{align}


and if n is odd:


\begin{align}
 f(j\omega) & = j(-1)^{(n-1)/2}\big[a_0\omega^n-a_1\omega^{n-2}+a_2\omega^{n-4}-\cdots \big] & {}  \quad (23)\\
            & + (-1)^{(n-1)/2}\big[b_0\omega^{n-1}-b_1\omega^{n-3}+b_2\omega^{n-5}-\cdots \big] & {}\\
\end{align}
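
For instance, for f(x) = x^2+2x+2\, (n = 2\,, a_0 = 1\,, b_0 = 2\,, a_1 = 2\,), equation (22) gives f(j\omega) = -\big[\omega^2-2\big]+j\big[2\omega\big]\,, and for f(x) = x^3+2x^2+3x+1\, (n = 3\,, a_0 = 1\,, b_0 = 2\,, a_1 = 3\,, b_1 = 1\,), equation (23) gives f(j\omega) = -j\big[\omega^3-3\omega\big]-\big[2\omega^2-1\big]\,; both agree with direct substitution of x = j\omega\,.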


Now observe that if n\, is an odd integer, then by (3) N+P\, is odd. If N+P\, is an odd integer, then N-P\, is odd as well. Similarly, this same argument shows that when n\, is even, N-P\, will be even. Equation (13) shows that if N-P\, is even, \theta\, at x=\pm j\infty\, is an integer multiple of \pi\,. Therefore, \tan(\theta)\, is defined when n\, is even, and is thus the proper index to use in that case, while \tan(\theta') = \tan(\theta+\pi/2) = -\cot(\theta)\, is defined when n\, is odd, making it the proper index in this latter case.


Thus, from (6) and (22), for n\, even, and recalling that \Delta\, counts the jumps of \tan\theta\, from +\infty\, to -\infty\, less those from -\infty\, to +\infty\, (so that \Delta\, is the Cauchy index of -\tan\theta\,), we have:


\Delta = I_{-\infty}^{+\infty}\frac{-\mathfrak{I}M[f(x)]}{\mathfrak{R}E[f(x)]}= I_{-\infty}^{+\infty}\frac{b_0\omega^{n-1}-b_1\omega^{n-3}+\cdots}{a_0\omega^n-a_1\omega^{n-2}+\ldots}  \quad (24)\,


and from (18) and (23), for n\, odd:


\Delta = I_{-\infty}^{+\infty}\frac{\mathfrak{R}E[f(x)]}{\mathfrak{I}M[f(x)]}= I_{-\infty}^{+\infty}\frac{b_0\omega^{n-1}-b_1\omega^{n-3}+\ldots}{a_0\omega^n-a_1\omega^{n-2}+\ldots}  \quad (25)\,


In either case, then, we are evaluating the same Cauchy index:


\Delta = I_{-\infty}^{+\infty}\frac{b_0\omega^{n-1}-b_1\omega^{n-3}+\ldots}{a_0\omega^n-a_1\omega^{n-2}+\ldots}    \quad (26)\,
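
Returning to the example f(x) = x^2+2x+2\,, equation (26) gives \Delta = I_{-\infty}^{+\infty}\frac{2\omega}{\omega^2-2}\,. The quotient jumps from -\infty\, to +\infty\, at both \omega = -\sqrt{2}\, and \omega = +\sqrt{2}\,, so \Delta = 2\,, matching the value found from the phase angle above.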


Sturm's theorem

Sturm gives us a method for evaluating \Delta = I_{-\infty}^{+\infty}\frac{f_2(x)}{f_1(x)}\,. His theorem is stated as follows:


Given a sequence of polynomials f_1(x),f_2(x), \dots, f_m(x)\, where:


1) If f_k(x) = 0\, then f_{k-1}(x) \neq 0\,, f_{k+1}(x) \neq 0\,, and  \operatorname{sign}[f_{k-1}(x)] = - \operatorname{sign}[f_{k+1}(x)]\,


2) f_m(x) \neq 0 \, for -\infty < x < \infty\,


and we define V(x)\, as the number of changes of sign in the sequence f_1(x),f_2(x), \dots, f_m(x)\, for a fixed value of x\,, then:


\Delta = I_{-\infty}^{+\infty}\frac{f_2(x)}{f_1(x)}= V(-\infty) - V(+\infty)    \quad (27)\,


A sequence satisfying these requirements is obtained using the Euclidean algorithm, which is as follows:


Starting with f_1(x)\, and f_2(x)\,, and denoting the negative of the remainder of f_1(x)/f_2(x)\, by f_3(x)\,, and similarly denoting the negative of the remainder of f_2(x)/f_3(x)\, by f_4(x)\,, and so on, we obtain the relationships:


\begin{align}
&f_1(x)= q_1(x)f_2(x) - f_3(x)    \quad (28)\\
&f_2(x)= q_2(x)f_3(x) - f_4(x) \\
&     \ldots \\
&f_{m-1}(x)= q_{m-1}(x)f_m(x) \\
\end{align}


or in general


f_{k-1}(x)= q_{k-1}(x)f_k(x) - f_{k+1}(x)\,


where the last non-zero remainder, f_m(x)\,, will therefore be the highest common factor of f_1(x),f_2(x), \dots, f_{m-1}(x)\,. It can be observed that the sequence so constructed will satisfy the conditions of Sturm's theorem, and thus an algorithm for determining the stated index has been developed.


It is in applying Sturm's theorem (27) to (26), through the use of the Euclidean algorithm (28) above, that the Routh matrix is formed.


We get


f_3(\omega) = \frac {a_0}{b_0}\omega f_2(\omega) - f_1(\omega)   \quad (29) \,


and identifying the coefficients of this remainder as c_0\,, -c_1\,, c_2\,, -c_3\,, and so forth, gives the remainder the form


f_3(\omega) = c_0\omega^{n-2} - c_1\omega^{n-4} + c_2\omega^{n-6} - \cdots   \quad (30)\,


where


c_0 = a_1 - \frac{a_0}{b_0}b_1 = \frac{b_0a_1 - a_0b_1}{b_0}; \quad c_1 = a_2 - \frac{a_0}{b_0}b_2 = \frac{b_0a_2 - a_0b_2}{b_0};\quad\ldots   \quad (31)\,


Continuing with the Euclidean algorithm on these new coefficients gives us


f_4(\omega) = \frac {b_0}{c_0}\omega f_3(\omega) - f_2(\omega)   \quad (32)\,


where we again denote the coefficients of the remainder f_4(\omega)\, by d_0\,, -d_1\,, d_2\,, -d_3\,,


so that the remainder takes the form


f_4(\omega) = d_0\omega^{n-3} - d_1\omega^{n-5} + d_2\omega^{n-7} - \cdots   \quad (33)\,


and giving us


d_0 = b_1 - \frac{b_0}{c_0}c_1 = \frac{c_0b_1 - b_0c_1}{c_0}; \quad d_1 = b_2 - \frac{b_0}{c_0}c_2 = \frac{c_0b_2 - b_0c_2}{c_0};\quad\ldots   \quad (34)\,


The rows of the Routh array are determined exactly by this algorithm when applied to the coefficients of (19). An observation worthy of note is that in the regular case the polynomials f_1(\omega)\, and f_2(\omega)\, have f_{n+1}(\omega)\, (a non-zero constant) as their highest common factor, and thus there will be n+1\, polynomials in the chain f_1(x),f_2(x), \dots, f_{n+1}(x)\,.
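
As an illustration, take f(x) = x^3+2x^2+3x+1\,, so that a_0 = 1\,, b_0 = 2\,, a_1 = 3\,, b_1 = 1\,. The chain is then

f_1(\omega) = \omega^3 - 3\omega, \qquad f_2(\omega) = 2\omega^2 - 1, \qquad f_3(\omega) = \tfrac{1}{2}\omega f_2(\omega) - f_1(\omega) = \tfrac{5}{2}\omega, \qquad f_4(\omega) = \tfrac{4}{5}\omega f_3(\omega) - f_2(\omega) = 1

and the first column of the Routh scheme is a_0 = 1\,, b_0 = 2\,, c_0 = 5/2\,, d_0 = 1\,.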


Note now that, in determining the signs of the members of the sequence of polynomials f_1(x),f_2(x), \dots,f_m(x)\,, at \omega = \pm \infty\, the dominating power of \omega\, is the first term of each of these polynomials. Thus only the coefficients corresponding to the highest powers of \omega\, in f_1(x),f_2(x), \dots, f_m(x)\,, which are a_0\,, b_0\,, c_0\,, d_0\,, ..., determine the signs of f_1(x)\,, f_2(x)\,, ..., f_m(x)\, at \omega = \pm\infty\,.


So we get V(+\infty)=V(a_0, b_0, c_0, d_0, \dots)\,; that is, V(+\infty)\, is the number of changes of sign in the sequence a_0\infty^n\,, b_0\infty^{n-1}\,, c_0\infty^{n-2}\,, ..., which is the number of sign changes in the sequence a_0\,, b_0\,, c_0\,, d_0\,, .... Likewise, V(-\infty)=V(a_0, -b_0, c_0, -d_0, ...)\,; that is, V(-\infty)\, is the number of changes of sign in the sequence a_0(-\infty)^n\,, b_0(-\infty)^{n-1}\,, c_0(-\infty)^{n-2}\,, ..., which is the number of sign changes in the sequence a_0\,, -b_0\,, c_0\,, -d_0\,, ...


Since our chain a_0\,, b_0\,, c_0\,, d_0\,, ... will have n+1\, members, it is clear that V(+\infty) + V(-\infty) = n\,: within V(a_0, b_0, c_0, d_0, \dots)\,, if going from a_0\, to b_0\, a sign change has not occurred, then within V(a_0, -b_0, c_0, -d_0, \dots)\,, going from a_0\, to -b_0\,, one has, and likewise for all n\, transitions (there will be no terms equal to zero), giving us n\, total sign changes.


As \Delta = V(-\infty) - V(+\infty)\, and n = V(+\infty) + V(-\infty)\,, and from (17) P = (n - \Delta)/2\,, we have that P = V(+\infty) = V(a_0, b_0, c_0, d_0, \dots)\,, and so we have derived Routh's theorem:


The number of roots of a real polynomial f(z)\, which lie in the right half plane \mathfrak{R}E(r_i) > 0\, is equal to the number of changes of sign in the first column of the Routh scheme.
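
For the cubic example above, V(+\infty) = V(1, 2, 5/2, 1) = 0\, and V(-\infty) = V(1, -2, 5/2, -1) = 3\,, so that \Delta = 3\, and, with n = 3\,, equation (17) gives N = 3\, and P = 0\,: all three roots lie in the left half plane.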


For the stable case, where P = 0\,, we require V(a_0, b_0, c_0, d_0, \dots) = 0\,, by which we have Routh's famous criterion:


In order for all the roots of the polynomial f(z)\, to have negative real parts, it is necessary and sufficient that all of the elements in the first column of the Routh scheme be different from zero and of the same sign.
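
The construction above translates directly into a short computation. The following is a minimal Python sketch (an illustration, not part of the original derivation) that builds the first column of the Routh scheme from the coefficients a_0, a_1, ..., a_n of (1) and counts its sign changes; it assumes the regular case, in which no entry of the first column vanishes.

def routh_first_column(coeffs):
    """First column of the Routh scheme for a polynomial given by its
    coefficients in descending powers of x, as in equation (1).
    Assumes the regular case: no first-column entry is zero."""
    # The two leading rows of the scheme, following the relabelling in (19):
    # row1 holds a0, a1, a2, ... and row2 holds b0, b1, b2, ...
    row1 = [float(c) for c in coeffs[0::2]]
    row2 = [float(c) for c in coeffs[1::2]]
    column = [row1[0]]
    while row2:
        column.append(row2[0])
        # Next row from (31)/(34): c_k = a_{k+1} - (a0/b0) * b_{k+1}
        nxt = []
        for k in range(len(row2)):
            upper = row1[k + 1] if k + 1 < len(row1) else 0.0
            lower = row2[k + 1] if k + 1 < len(row2) else 0.0
            nxt.append(upper - (row1[0] / row2[0]) * lower)
        while nxt and nxt[-1] == 0.0:  # drop trailing zeros so the chain shortens
            nxt.pop()
        row1, row2 = row2, nxt
    return column

def rhp_root_count(coeffs):
    """P = number of sign changes in the first column (Routh's theorem)."""
    col = routh_first_column(coeffs)
    return sum(1 for u, v in zip(col, col[1:]) if u * v < 0)

# Example: x^3 + 2x^2 + 3x + 1 gives the column 1, 2, 5/2, 1 and P = 0 (stable).
print(routh_first_column([1, 2, 3, 1]), rhp_root_count([1, 2, 3, 1]))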


References

  • Hurwitz, A., "On the Conditions under which an Equation has only Roots with Negative Real Parts", Rpt. in Selected Papers on Mathematical Trends in Control Theory, Ed. R. Bellman et al. New York: Dover, 1964
  • Routh, E. J., A Treatise on the Stability of a Given State of Motion. London: Macmillan, 1877. Rpt. in Stability of Motion, Ed. A. T. Fuller. London: Taylor & Francis, 1975
  • Gantmacher, F.R., Applications of the Theory of Matrices. Trans. J. L. Brenner et al. New York: Interscience, 1959