Hodge index theorem

In mathematics, the Hodge index theorem for an algebraic surface V determines the signature of the intersection pairing on the algebraic curves C on V. It says, roughly speaking, that the space spanned by such curves (up to linear equivalence) has a one-dimensional subspace on which the pairing is positive definite (this subspace is not uniquely determined), and decomposes as a direct sum of one such one-dimensional subspace and a complementary subspace on which the pairing is negative definite.

In a more formal statement, specify that V is a non-singular projective surface, and let H be the divisor class on V of a hyperplane section of V in a given projective embedding. Then the intersection

H·H = d

where d is the degree of V (in that embedding). Let D be the vector space of rational divisor classes on V, up to algebraic equivalence; it is of finite dimension, usually denoted by ρ(V). Then there is a complementary subspace to <H>, the subspace spanned by H in D, on which the intersection pairing is negative definite. Therefore the signature (often also called index) is (1, ρ(V) − 1).
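
As an illustration, the following is a short worked example, written as a self-contained LaTeX sketch; it uses only the standard facts about the smooth quadric surface P1 × P1 (Segre-embedded in projective 3-space), for which the theorem can be checked by hand.

\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}
% Worked example: the smooth quadric V = P^1 x P^1 in P^3.
Let $V=\mathbb{P}^1\times\mathbb{P}^1\subset\mathbb{P}^3$ (Segre embedding).
Its divisor classes modulo algebraic equivalence are spanned by the two
rulings $f_1,f_2$, so $\rho(V)=2$, with intersection numbers
\[
  f_1\cdot f_1 \;=\; f_2\cdot f_2 \;=\; 0, \qquad f_1\cdot f_2 \;=\; 1 .
\]
The hyperplane class is $H=f_1+f_2$, so $H\cdot H = 2 = \deg V$, and the
intersection matrix in the basis $(f_1,f_2)$ is
\[
  \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix},
\]
with eigenvalues $+1$ and $-1$: the pairing is positive definite on the
line spanned by $H$ and negative definite on its orthogonal complement,
spanned by $f_1-f_2$ (indeed $(f_1-f_2)\cdot(f_1-f_2)=-2$), giving
signature $(1,\rho(V)-1)=(1,1)$.
\end{document}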

The abelian group of divisor classes up to algebraic equivalence is now called the Néron–Severi group; it is known to be a finitely generated abelian group, and the result is about its tensor product with the field of rational numbers. Therefore ρ(V) is equally the rank of the Néron–Severi group (which can, on occasion, have a non-trivial torsion subgroup).
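
In symbols, the relationship just described can be recorded as follows; this is a minimal LaTeX sketch restating the paragraph above, writing NS(V) for the Néron–Severi group.

\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}
% Relation between D, the N\'eron--Severi group and the Picard number.
Writing $\operatorname{NS}(V)$ for the N\'eron--Severi group of $V$,
\[
  D \;=\; \operatorname{NS}(V)\otimes_{\mathbb{Z}}\mathbb{Q},
  \qquad
  \rho(V) \;=\; \operatorname{rank}\operatorname{NS}(V) \;=\; \dim_{\mathbb{Q}} D ,
\]
since tensoring with $\mathbb{Q}$ kills the finite torsion subgroup of
$\operatorname{NS}(V)$ while preserving its rank.
\end{document}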

This result was proved in the 1930s by W. V. D. Hodge, for varieties over the complex numbers, after it had for some time been a conjecture of the Italian school of algebraic geometry (in particular, Francesco Severi, who in this case showed that ρ < ∞). Hodge's methods were the topological ones brought in by Lefschetz. The result holds over general (algebraically closed) fields.