Transfer matrix

In applied mathematics, the transfer matrix is a formulation, in terms of a block-Toeplitz matrix, of the two-scale equation that characterizes refinable functions. Refinable functions play an important role in wavelet theory and finite element theory.

For the mask h, which is a vector with component indexes from a to b, the transfer matrix of h, here denoted by T_h, is defined as

(T_h)_{j,k} = h_{2j-k}.

More verbosely

T_h = \begin{pmatrix}
h_a     &         &         &         &         &         \\
h_{a+2} & h_{a+1} & h_a     &         &         &         \\
h_{a+4} & h_{a+3} & h_{a+2} & h_{a+1} & h_a     &         \\
\ddots  & \ddots  & \ddots  & \ddots  & \ddots  & \ddots  \\
        & h_b     & h_{b-1} & h_{b-2} & h_{b-3} & h_{b-4} \\
        &         &         & h_b     & h_{b-1} & h_{b-2} \\
        &         &         &         &         & h_b
\end{pmatrix}.

The effect of T_h can be expressed in terms of the downsampling operator \downarrow:

T_h \cdot x = (h * x) \downarrow 2.
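
To make the definition concrete, here is a small numerical sketch in Python with NumPy (the helper name transfer_matrix and the example mask are illustrative choices, not taken from the literature). It builds T_h from a mask given as an array of its coefficients and checks the downsampling identity above; only relative indexes enter the definition, so the sketch works with 0-based offsets.

    import numpy as np

    def transfer_matrix(h):
        """Build T_h with (T_h)[j, k] = h[2*j - k].

        The mask h is passed as the array of its coefficients h_a, ..., h_b;
        since only relative indexes matter, 0-based offsets are used.
        """
        n = len(h)
        T = np.zeros((n, n))
        for j in range(n):
            for k in range(n):
                if 0 <= 2 * j - k < n:
                    T[j, k] = h[2 * j - k]
        return T

    # Example mask: the hat-function mask (1, 2, 1) / 2.
    h = np.array([1.0, 2.0, 1.0]) / 2.0
    x = np.array([3.0, -1.0, 4.0])
    T = transfer_matrix(h)
    # T is the banded matrix [[0.5, 0, 0], [0.5, 1, 0.5], [0, 0, 0.5]].

    # Downsampling identity: T_h . x equals the convolution h * x
    # with every second coefficient kept.
    assert np.allclose(T @ x, np.convolve(h, x)[::2])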

Properties

  • T_h \cdot x = T_x \cdot h.
  • If one drops the first and the last column and moves the odd-indexed columns to the left and the even-indexed columns to the right, one obtains a transposed Sylvester matrix.
  • The determinant of a transfer matrix is essentially a resultant.
More precisely:
Let h_{\mathrm{e}} be the even-indexed coefficients of h ((h_{\mathrm{e}})_k = h_{2k}) and let h_{\mathrm{o}} be the odd-indexed coefficients of h ((h_{\mathrm{o}})_k = h_{2k+1}).
Then \det T_h = (-1)^{\lfloor \frac{b-a+1}{4} \rfloor} \cdot h_a \cdot h_b \cdot \mathrm{res}(h_{\mathrm{e}}, h_{\mathrm{o}}), where \mathrm{res} denotes the resultant.
This connection allows for fast computation using the Euclidean algorithm; several of the listed properties, including this one, are checked numerically in a sketch after this list.
  • For the trace of the transfer matrix of convolved masks it holds that
\mathrm{tr}~T_{g*h} = \mathrm{tr}~T_g \cdot \mathrm{tr}~T_h.
  • For the determinant of the transfer matrix of convolved masks it holds that
\det T_{g*h} = \det T_g \cdot \det T_h \cdot \mathrm{res}(g_-, h),
where g_- denotes the mask with alternating signs, i.e. (g_-)_k = (-1)^k \cdot g_k.
  • If T_h \cdot x = 0, then T_{g*h} \cdot (g_- * x) = 0.
This makes the determinant property above concrete: from the determinant property one knows that T_{g*h} is singular whenever T_h is singular, and this property also shows how null-space vectors of T_h can be converted into null-space vectors of T_{g*h}.
  • If x is an eigenvector of T_h with respect to the eigenvalue \lambda, i.e.
T_h \cdot x = \lambda \cdot x,
then x * (1,-1) is an eigenvector of T_{h*(1,1)} with respect to the same eigenvalue, i.e.
T_{h*(1,1)} \cdot (x*(1,-1)) = \lambda \cdot (x*(1,-1)).
  • Let \lambda_a, \dots, \lambda_b be the eigenvalues of T_h, which implies \lambda_a + \dots + \lambda_b = \mathrm{tr}~T_h and more generally \lambda_a^n + \dots + \lambda_b^n = \mathrm{tr}(T_h^n). This sum is useful for estimating the spectral radius of T_h. There is an alternative possibility for computing the sum of eigenvalue powers, which is faster for small n.
Let C_n h be the periodization of h with respect to the period 2^n - 1. That is, C_n h is a circular filter, which means that the component indexes are residue classes with respect to the modulus 2^n - 1. Then with the upsampling operator \uparrow it holds that
\mathrm{tr}(T_h^n) = \left(C_n h * (C_n h \uparrow 2) * (C_n h \uparrow 2^2) * \cdots * (C_n h \uparrow 2^{n-1})\right)_{[0]_{2^n-1}}.
In fact, not n-2 convolutions are necessary but only about 2 \cdot \log_2 n of them, when applying the strategy of efficient computation of powers (binary exponentiation); the approach can be sped up further using the fast Fourier transform. A sketch after this list implements the periodization formula directly.
  • From the previous statement we can derive an estimate of the spectral radius \varrho(T_h). It holds that
\varrho(T_h) \geq \frac{a}{\sqrt{\#h}} \geq \frac{1}{\sqrt{3 \cdot \#h}},
where \#h is the size of the filter and a = \Vert C_2 h \Vert_2. If all eigenvalues are real, it is also true that
\varrho(T_h) \leq a.
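
The following sketch (Python with NumPy; the helpers transfer_matrix and sylvester_resultant are illustrative implementations, and the example masks are arbitrary choices) checks several of the listed properties numerically. Because the sign convention of the resultant varies between texts, the determinant formula is only checked up to sign here.

    import numpy as np

    def transfer_matrix(h):
        """(T_h)[j, k] = h[2*j - k], with 0-based offsets as in the first sketch."""
        n = len(h)
        return np.array([[h[2 * j - k] if 0 <= 2 * j - k < n else 0.0
                          for k in range(n)] for j in range(n)])

    def sylvester_resultant(p, q):
        """Resultant of the polynomials with coefficient lists p, q (low degree first),
        computed as the determinant of their Sylvester matrix."""
        m, n = len(p) - 1, len(q) - 1
        if m + n == 0:
            return 1.0
        S = np.zeros((m + n, m + n))
        for i in range(n):                      # n shifted rows of p-coefficients
            S[i, i:i + m + 1] = p[::-1]
        for i in range(m):                      # m shifted rows of q-coefficients
            S[n + i, i:i + n + 1] = q[::-1]
        return np.linalg.det(S)

    g = np.array([1.0, 3.0, 3.0, 1.0]) / 8.0    # example masks, each summing to 1
    h = np.array([1.0, 2.0, 1.0]) / 4.0

    # T_h . x = T_x . h  (for x indexed like h).
    x = np.array([2.0, -1.0, 5.0])
    assert np.allclose(transfer_matrix(h) @ x, transfer_matrix(x) @ h)

    # Trace of the transfer matrix of convolved masks:
    # tr T_{g*h} = tr T_g * tr T_h  (each trace is just the coefficient sum of the mask).
    assert np.isclose(np.trace(transfer_matrix(np.convolve(g, h))),
                      np.trace(transfer_matrix(g)) * np.trace(transfer_matrix(h)))

    # Determinant versus resultant of the even- and odd-indexed sub-masks,
    # compared in absolute value because of differing resultant sign conventions.
    T_h = transfer_matrix(h)
    res = sylvester_resultant(h[0::2], h[1::2])
    assert np.isclose(abs(np.linalg.det(T_h)), abs(h[0] * h[-1] * res))

    # Eigenvector property: if T_h x = lambda x, then x*(1,-1) is an
    # eigenvector of T_{h*(1,1)} for the same eigenvalue.
    w, V = np.linalg.eig(T_h)
    T2 = transfer_matrix(np.convolve(h, [1.0, 1.0]))
    for lam, v in zip(w, V.T):
        y = np.convolve(v, [1.0, -1.0])
        assert np.allclose(T2 @ y, lam * y)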
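
The trace-of-powers formula can likewise be checked directly. The sketch below (again Python with NumPy; periodize, circular_upsample, circular_convolve and trace_of_power are naive helper implementations, without the binary-exponentiation or FFT speed-ups mentioned above) compares tr(T_h^n) computed from the matrix with the value obtained from the circular filter C_n h.

    import numpy as np

    def transfer_matrix(h):
        """(T_h)[j, k] = h[2*j - k], with 0-based offsets as in the first sketch."""
        n = len(h)
        return np.array([[h[2 * j - k] if 0 <= 2 * j - k < n else 0.0
                          for k in range(n)] for j in range(n)])

    def periodize(h, N):
        """C h: wrap the mask around the period N; component indexes become residues mod N."""
        c = np.zeros(N)
        for i, v in enumerate(h):
            c[i % N] += v
        return c

    def circular_upsample(c, m):
        """Upsampling of a circular filter by m: the coefficient at index j moves to m*j mod N."""
        N = len(c)
        out = np.zeros(N)
        for j, v in enumerate(c):
            out[(m * j) % N] += v
        return out

    def circular_convolve(x, y):
        """Plain O(N^2) cyclic convolution."""
        N = len(x)
        out = np.zeros(N)
        for i in range(N):
            for j in range(N):
                out[(i + j) % N] += x[i] * y[j]
        return out

    def trace_of_power(h, n):
        """tr(T_h^n) via the circular filter C_n h of period 2^n - 1."""
        N = 2 ** n - 1
        c = periodize(h, N)
        prod = c.copy()
        for i in range(1, n):
            prod = circular_convolve(prod, circular_upsample(c, 2 ** i))
        return prod[0]            # coefficient at the residue class [0]

    h = np.array([1.0, 2.0, 3.0, 4.0])      # an arbitrary example mask
    T = transfer_matrix(h)
    for n in range(1, 6):
        assert np.isclose(np.trace(np.linalg.matrix_power(T, n)), trace_of_power(h, n))

    # The case n = 2 gives tr(T_h^2) = ||C_2 h||_2^2, the quantity behind the
    # spectral-radius bounds in the last property above.
    assert np.isclose(trace_of_power(h, 2), np.sum(periodize(h, 3) ** 2))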

References

  • Strang, Gilbert (1996). "Eigenvalues of (\downarrow 2) H and convergence of the cascade algorithm". IEEE Transactions on Signal Processing. 44: 233–238.
  • Thielemann, Henning (2006). Optimally matched wavelets (PhD thesis). (Contains proofs of the above properties.)