Jack function

In mathematics, the Jack function, introduced by Henry Jack, is a homogeneous, symmetric polynomial which generalizes the Schur and zonal polynomials, and is in turn generalized by the Macdonald polynomials.

Definition

The Jack function J_\kappa^{(\alpha )}(x_1,x_2,\ldots,x_m) of an integer partition κ, with parameter α and arguments x_1,x_2,\ldots,x_m, can be defined recursively as follows:

  • For m = 1:
J_{(k)}^{(\alpha )}(x_1)=x_1^k(1+\alpha)\cdots (1+(k-1)\alpha)
  • For m > 1:
J_\kappa^{(\alpha )}(x_1,x_2,\ldots,x_m)=\sum_\mu J_\mu^{(\alpha )}(x_1,x_2,\ldots,x_{m-1}) x_m^{|\kappa /\mu|}\beta_{\kappa \mu},
where the summation is over all partitions μ such that the skew partition κ / μ is a horizontal strip, namely
\kappa_1\ge\mu_1\ge\kappa_2\ge\mu_2\ge\cdots\ge\kappa_{m-1}\ge\mu_{m-1}\ge\kappa_m (μ_m must be zero, as otherwise J_\mu(x_1,\ldots,x_{m-1})=0) and
\beta_{\kappa\mu}=\frac{  \prod_{(i,j)\in \kappa} B_{\kappa\mu}^\kappa(i,j) }{ \prod_{(i,j)\in \mu} B_{\kappa\mu}^\mu(i,j) },
where

B_{\kappa\mu}^\nu(i,j)=\begin{cases} \nu_j'-i+\alpha(\nu_i-j+1) & \text{if } \kappa_j'=\mu_j',\\ \nu_j'-i+1+\alpha(\nu_i-j) & \text{otherwise.}\end{cases}

The expressions κ', μ' and ν' refer to the conjugate partitions of κ, μ and ν, respectively. The notation (i,j)\in\kappa means that the product is taken over all coordinates (i,j) of boxes in the Young diagram of the partition κ.
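The recursion translates directly into a short program. The following is a minimal Python sketch (the names jack, beta, horizontal_strips and conjugate are ours, not from any library); passing α as a Fraction keeps every coefficient exact:

from fractions import Fraction

def conjugate(kappa):
    # Conjugate partition: kappa'_j = #{i : kappa_i >= j}.
    if not kappa:
        return ()
    return tuple(sum(1 for k in kappa if k >= j) for j in range(1, kappa[0] + 1))

def B(nu, kappa, mu, i, j, alpha):
    # B^nu_{kappa mu}(i, j) for the 1-indexed box (i, j) of nu.
    kc, mc, nc = conjugate(kappa), conjugate(mu), conjugate(nu)
    kc_j = kc[j - 1] if j <= len(kc) else 0
    mc_j = mc[j - 1] if j <= len(mc) else 0
    if kc_j == mc_j:
        return nc[j - 1] - i + alpha * (nu[i - 1] - j + 1)
    return nc[j - 1] - i + 1 + alpha * (nu[i - 1] - j)

def beta(kappa, mu, alpha):
    # beta_{kappa mu}: product over boxes of kappa over product over boxes of mu.
    num = den = 1
    for i in range(1, len(kappa) + 1):
        for j in range(1, kappa[i - 1] + 1):
            num *= B(kappa, kappa, mu, i, j, alpha)
    for i in range(1, len(mu) + 1):
        for j in range(1, mu[i - 1] + 1):
            den *= B(mu, kappa, mu, i, j, alpha)
    return num / den

def horizontal_strips(kappa):
    # All mu with kappa_1 >= mu_1 >= kappa_2 >= mu_2 >= ... (trailing zeros dropped).
    def rec(i):
        if i == len(kappa):
            yield ()
            return
        lo = kappa[i + 1] if i + 1 < len(kappa) else 0
        for part in range(lo, kappa[i] + 1):
            for rest in rec(i + 1):
                yield (part,) + rest
    for mu in rec(0):
        yield tuple(p for p in mu if p > 0)

def jack(kappa, x, alpha):
    # J_kappa^{(alpha)}(x_1, ..., x_m) via the recursion above.
    kappa = tuple(p for p in kappa if p > 0)
    m = len(x)
    if not kappa:
        return 1
    if len(kappa) > m:
        return 0                      # more parts than variables (see Properties)
    if m == 1:
        c = 1
        for i in range(1, kappa[0]):  # (1 + alpha)(1 + 2 alpha) ... (1 + (k-1) alpha)
            c *= 1 + i * alpha
        return c * x[0] ** kappa[0]
    return sum(jack(mu, x[:-1], alpha)
               * x[-1] ** (sum(kappa) - sum(mu))
               * beta(kappa, mu, alpha)
               for mu in horizontal_strips(kappa))

As a check, jack((2,), (Fraction(1), Fraction(3)), Fraction(1)) returns 26, in agreement with J_{(2)}^{(\alpha)} = (1+\alpha)(x_1^2+x_2^2)+2x_1x_2 at \alpha = 1.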

C normalization

The Jack functions form an orthogonal basis in a space of symmetric polynomials. This orthogonality property is unaffected by normalization. The normalization defined above is typically referred to as the J normalization. The C normalization is defined as

C_\kappa^{(\alpha)}(x_1,x_2,\ldots,x_n) = \frac{\alpha^{|\kappa|}(|\kappa|)!} {j_\kappa} J_\kappa^{(\alpha)}(x_1,x_2,\ldots,x_n),

where

j_\kappa=\prod_{(i,j)\in \kappa} (\kappa_j'-i+\alpha(\kappa_i-j+1))(\kappa_j'-i+1+\alpha(\kappa_i-j)).
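For example, for κ = (2,1), whose conjugate is κ' = (2,1), the boxes (1,1), (1,2) and (2,1) contribute the factors (1+2\alpha)(2+\alpha), \alpha and \alpha respectively, so

j_{(2,1)}=\alpha^2(1+2\alpha)(2+\alpha).

At \alpha=1 both factors attached to a box reduce to its hook length, so j_\kappa = H_\kappa^2 (with H_\kappa as in the next section); here j_{(2,1)} = 9 = 3^2.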

For \alpha=2,\; C_\kappa^{(2)}(x_1,x_2,\ldots,x_n), often denoted simply C_\kappa(x_1,x_2,\ldots,x_n), is known as the zonal polynomial.

Connection with the Schur polynomial

When α = 1, the Jack function is a scalar multiple of the Schur polynomial:

J^{(1)}_\kappa(x_1,x_2,\ldots,x_n) = H_\kappa s_\kappa(x_1,x_2,\ldots,x_n),

where

H_\kappa=\prod_{(i,j)\in\kappa} h_\kappa(i,j)= \prod_{(i,j)\in\kappa} (\kappa_i+\kappa_j'-i-j+1)

is the product of all hook lengths of κ.
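For example, the hooks of κ = (2,1) have lengths h(1,1) = \kappa_1+\kappa_1'-1-1+1 = 3, h(1,2) = 1 and h(2,1) = 1, so H_{(2,1)} = 3 and J^{(1)}_{(2,1)} = 3\,s_{(2,1)}.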

Properties

If the partition has more parts than the number of variables, then the Jack function is 0:

J_\kappa^{(\alpha )}(x_1,x_2,\ldots,x_m)=0, \mbox{ if }\kappa_{m+1}>0.
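In the Python sketch above this is the early return in jack: for instance jack((1, 1), (x,), alpha) is 0 for any x, because the partition (1,1) has two parts but only one variable is supplied.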

Matrix argument

In some texts, especially in random matrix theory, authors find it more convenient to use a matrix argument for the Jack function. The connection is simple: if X is a matrix with eigenvalues x_1,x_2,\ldots,x_m, then

J_\kappa^{(\alpha )}(X)=J_\kappa^{(\alpha )}(x_1,x_2,\ldots,x_m).
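With the Python sketch from the definition section, this convention amounts to evaluating the Jack function at the spectrum; a minimal example, using NumPy's symmetric eigenvalue routine (floats here rather than exact rationals):

import numpy as np

X = np.array([[2.0, 1.0],
              [1.0, 2.0]])                            # eigenvalues 1 and 3
print(jack((2,), tuple(np.linalg.eigvalsh(X)), 1.0))  # 26.0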

References

  • James Demmel and Plamen Koev, "Accurate and efficient evaluation of Schur and Jack functions", Math. Comp. 75 (2006), no. 253, 223–239.
  • Henry Jack, "A class of symmetric polynomials with a parameter", Proc. Roy. Soc. Edinburgh Sect. A 69 (1970/1971), 1–18.
  • I. G. Macdonald, Symmetric Functions and Hall Polynomials, 2nd ed., Oxford University Press, New York, 1995.
  • Richard P. Stanley, "Some combinatorial properties of Jack symmetric functions", Adv. Math. 77 (1989), no. 1, 76–115.
