Descent (category theory)
From Wikipedia, the free encyclopedia
In mathematics, the idea of descent has come to stand for a very general idea, extending the intuitive idea of 'gluing' in topology. Since the topologists' glue is actually the use of equivalence relations on topological spaces, the theory starts with some ideas on identification.
At a fundamental level, the passage to a quotient space is not very well-behaved in topology. A current theory that addresses the issue is non-commutative geometry. Alexander Grothendieck encountered related issues at the end of the 1950s, in laying the modern foundations of algebraic geometry. There the situation on passing to a quotient space is much worse, because (roughly speaking) polynomials are less 'flexible' than general continuous functions. Grothendieck's paradigm is simply stated as this: pass from a space X to the category Sh(X) of all sheaves of sets on it. This doesn't lose much (see soberification). Then try to understand continuous mappings f from Y to X in terms of 'descending' sheaves from Y to X, in other words the image of the inverse image functor f*. (The intuition is that Y lies above X as a sort of covering space, and the 'down' direction is from Y to X.)
A sophisticated theory resulted. It was a tribute to the efforts to use category theory to get around the alleged 'brutality' of imposing equivalence relations within geometric categories. One out-turn was the eventual definition adopted in topos theory of geometric morphism, to get the correct notion of surjectivity.
Descent of vector bundles
The case of the construction of vector bundles from data on a disjoint union of topological spaces is a straightforward place to start.
Suppose X is a topological space covered by open sets Xi. Let Y be the disjoint union of the Xi, so that there is a natural mapping
- p : Y → X.
We think of Y as 'above' X, with the Xi projecting 'down' onto X. With this language, the descent data is a vector bundle on Y (that is, a bundle Vi given on each Xi), and our concern is to 'glue' those bundles Vi into a single bundle V on X. What we mean is that V should, when restricted to Xi, give back Vi, up to a bundle isomorphism.
The data needed is then this: on each overlap
- Xij,
intersection of Xi and Xj, we'll require mappings
- fij
to use to identify Vi and Vj there, fiber by fiber. Further the fij must satisfy conditions based on the reflexive, symmetric and transitive properties of an equivalence relation (gluing conditions). For example the composition
- fij ∘ fjk = fik
for transitivity (and choosing apt notation). The fii should be identity maps; symmetry then becomes the invertibility of fij (so that fij is fiberwise an isomorphism).
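The three gluing conditions can be checked concretely. Below is a minimal sketch in Python with hypothetical constant transition matrices (a real bundle's fij would vary over the overlap; these particular matrices are illustrative, not taken from the article):

```python
# Sketch: checking the gluing (cocycle) conditions for hypothetical
# transition matrices fij acting fiberwise on R^2. The matrices are
# constant over the overlaps purely for illustration.
import numpy as np

def compose(f, g):
    """Fiberwise composition of transition functions (matrix product)."""
    return f @ g

# Hypothetical transition functions on the overlaps X12 and X23.
f12 = np.array([[0.0, 1.0], [1.0, 0.0]])   # swaps the fiber coordinates
f23 = np.array([[2.0, 0.0], [0.0, 0.5]])   # rescales the fiber coordinates

# Transitivity forces the value on the triple overlap X13:
f13 = compose(f12, f23)                     # fij ∘ fjk = fik

# Reflexivity: fii is the identity map on each fiber.
f11 = np.eye(2)

# Symmetry: fji is the fiberwise inverse of fij.
f21 = np.linalg.inv(f12)

assert np.allclose(compose(f12, f21), f11)  # f12 ∘ f21 = identity
assert np.allclose(compose(f12, f23), f13)  # transitivity holds by construction
```

With data satisfying these conditions, a bundle V on X is obtained by identifying points of the Vi over the overlaps according to the fij.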
These are indeed standard conditions in fiber bundle theory (see transition function). One important application to note is change of fiber: if the fij are all you need to make a bundle, then there are many ways to make an associated bundle. That is, we can take essentially the same fij, acting on various different fibers.
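The change-of-fiber point can be illustrated as follows: push a scalar cocycle through a representation to obtain transition matrices for an associated bundle with a different fiber. The cocycle values and the representation rho below are hypothetical choices for illustration:

```python
# Sketch: the same transition data acting on a different fiber.
# A (hypothetical) scalar cocycle gij in GL(1, R) is pushed through a
# representation rho : GL(1, R) -> GL(2, R) to give transition matrices
# for an associated bundle with fiber R^2.
import numpy as np

def rho(g):
    """A representation: rho(g * h) = rho(g) @ rho(h) since it is diagonal."""
    return np.array([[g, 0.0], [0.0, g * g]])

# Scalar transition functions on the overlaps (constants for illustration).
g12, g23 = -1.0, 2.0
g13 = g12 * g23                     # cocycle condition in GL(1, R)

# The associated transition matrices satisfy the same cocycle condition,
# because rho respects composition.
F12, F23, F13 = rho(g12), rho(g23), rho(g13)
assert np.allclose(F12 @ F23, F13)
```

Because rho is a homomorphism, the gluing conditions for the Fij follow automatically from those for the gij; this is the mechanism behind associated bundles.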
Another major point is the relation with the chain rule: the discussion there of the way of constructing tensor fields can be summed up as: once you learn to descend the tangent bundle, for which transitivity is the Jacobian chain rule, the rest is just 'naturality' of tensor constructions.
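The claim that transitivity for the tangent bundle is the chain rule can be verified numerically. The sketch below uses invented 1-dimensional coordinate changes (any smooth maps would do) and checks that the Jacobian of a composite coordinate change is the product of the individual Jacobians:

```python
# Sketch: for coordinate changes phi12 and phi23, the tangent bundle's
# transition functions are their Jacobians, and the cocycle condition
# f12 ∘ f23 = f13 is exactly the chain rule. The maps here are
# hypothetical examples, chosen only to be smooth near the test point.

def d(f, x, h=1e-6):
    """Central-difference derivative, standing in for the Jacobian in 1D."""
    return (f(x + h) - f(x - h)) / (2 * h)

phi12 = lambda t: 1.0 / t            # e.g. a change between two chart coordinates
phi23 = lambda t: t ** 3 + t
phi13 = lambda t: phi12(phi23(t))    # the composite coordinate change

x = 0.7
J23 = d(phi23, x)                    # Jacobian of phi23 at x
J12 = d(phi12, phi23(x))             # Jacobian of phi12, evaluated at the moved point
J13 = d(phi13, x)                    # Jacobian of the composite

# Chain rule = transitivity of the transition functions:
assert abs(J12 * J23 - J13) < 1e-5
```

Note that J12 is evaluated at phi23(x), not at x: the transition functions live over the overlap, and composition must track where points are sent.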
To move closer towards the abstract theory we need to interpret the disjoint union of the
- Xij
now as
- Y ×X Y,
the fiber product (here the kernel pair of p) of two copies of the projection p. The bundles on the Xij that we must control are actually V′ and V″, the pullbacks of the bundle on Y along the two projection maps from Y ×X Y to Y.
Therefore by going to a more abstract level one can eliminate the combinatorial side (that is, leave out the indices) and get something that makes sense for p not of the special form of covering with which we began. This then allows a category theory approach: what remains to do is to re-express the gluing conditions.
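In this index-free form, the re-expressed gluing conditions take the following standard shape (a summary of the usual formulation, not verbatim from this article):

```latex
% Index-free descent data for p : Y \to X.
% Let p_1, p_2 : Y \times_X Y \to Y be the two projections, and let
% p_{12}, p_{13}, p_{23} : Y \times_X Y \times_X Y \to Y \times_X Y
% be the three partial projections from the triple fiber product.
%
% A descent datum is a bundle V' on Y together with an isomorphism
%   \varphi : p_1^* V' \xrightarrow{\ \sim\ } p_2^* V'
% over Y \times_X Y, satisfying the cocycle condition
%   p_{13}^*(\varphi) = p_{23}^*(\varphi) \circ p_{12}^*(\varphi)
% over Y \times_X Y \times_X Y.
%
% Descent holds for p when every such pair (V', \varphi) arises,
% up to isomorphism, as the pullback p^* V of a bundle V on X.
```

For the covering p with which we began, this recovers exactly the fij and their three gluing conditions, with the indices absorbed into the fiber products.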
History
The ideas here flourished in the period 1955-1965 (which was roughly the time at which the requirements of algebraic topology were met but those of algebraic geometry were not). From the point of view of abstract category theory, Beck's work on comonads was a summation of those ideas; see Beck's monadicity theorem.
The difficulties of algebraic geometry with passage to the quotient are acute: it is like doing the non-commutative geometry of Connes, to mention the currently-fashionable theory in the area of 'bad quotients', but with polynomials to separate points, rather than general continuous functions. The urgency (to put it that way) of the problem for the geometers accounts for the title of the 1959 Grothendieck seminar TDTE on theorems of descent and techniques of existence (see FGA) connecting the descent question with the representable functor question in algebraic geometry in general, and the moduli problem in particular.
As with a number of the more abstract flights of the Grothendieck school, later work relied on some of this and bypassed other parts (to the extent that the papers, published only in mimeographed form, may have already become hard to find). There were six TDTE Bourbaki seminars given by Grothendieck, which were incorporated in the FGA collection. That collection is now available online in PDF form.
The work a few years later of David Mumford in his geometric invariant theory spectacularly mixed scheme and categorical techniques with more concrete geometry, to construct moduli spaces for curves and abelian varieties (for the first time, in the required technical sense of 'moduli').