Wikipedia:Reference desk/Archives/Mathematics/2007 January 15

From Wikipedia, the free encyclopedia

Mathematics desk
Welcome to the Wikipedia Mathematics Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.


[edit] January 15

[edit] conversions

want to convert this problem 5.78ft to cm —The preceding unsigned comment was added by 69.221.13.170 (talk) 01:50, 15 January 2007 (UTC).

Is this a homework problem? This page on Wikibooks explains one way of doing unit conversions. Remember that 1 inch = 2.54 cm. Dave6 03:53, 15 January 2007 (UTC)
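The conversion chain Dave6 describes (feet → inches → centimetres) can be sketched in a few lines of Python; the 2.54 cm/inch factor is exact by definition:

```python
# Convert feet to centimetres: 1 ft = 12 in, and 1 in = 2.54 cm exactly.
def feet_to_cm(feet):
    inches = feet * 12
    return inches * 2.54

print(feet_to_cm(5.78))  # about 176.17 cm
```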
Just want to convert : google it! (When I was a young scout boy, I dreamed of the ultimate swiss knife.) -- DLL .. T 17:07, 15 January 2007 (UTC)
Just type in "5.78 ft to cm". I use it all the time!! X [Mac Davis] (DESK|How's my driving?) 00:33, 19 January 2007 (UTC)

wow you are lazy Rya Min 19:54, 15 January 2007 (UTC)

[edit] Hi!

hello!

what is the square root of minus one? My math teacher told us to find out. My mom said there isn't one but my teacher must know more than she does.

Thanks, My Username is... 18:45, 15 January 2007 (UTC)

Oh, the memories... To be young and discover that everything you were ever taught was a lie...
Seriously, though, when most people talk about numbers, they actually mean what mathematicians call real numbers, and those are the only numbers taught throughout most of grade school. Your mother is right in the sense that there is no real number which is the square root of minus 1.
However, you can also consider a larger collection of numbers, called the complex numbers, where there is a number called i, which has the property i² = −1, so it is a square root of −1. The number (−i) also has the same property, but usually the phrase "the square root of −1" is used to address i specifically.
So, depending primarily on which grade you are in, your teacher may have wanted you to come to the conclusion that there is no square root for -1 in the system of real numbers, or that the complex number i is its square root. -- Meni Rosenfeld (talk) 19:11, 15 January 2007 (UTC)
Agreed, although numbers which only contain an imaginary component are often called "imaginary numbers", like "i" or "-3i", while numbers which have both real and imaginary components, like "2 - 3i", are called complex numbers. Also, "j" is sometimes used instead of "i". StuRat 21:54, 15 January 2007 (UTC)
To be even more precise, real and imaginary numbers are also complex numbers; the set of complex numbers contains both of them as subsets. As a mathematical structure the purely imaginary numbers are not very useful: a real number times a real number is a real number, a complex number times a complex number is a complex number, but an imaginary number times an imaginary number is in general not an imaginary number.  --LambiamTalk 22:26, 15 January 2007 (UTC)
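These facts are easy to poke at with Python's built-in complex type, where 1j plays the role of i (a quick illustrative check, not tied to any poster's notation):

```python
i = 1j                 # Python spells the imaginary unit as 1j
print(i * i)           # (-1+0j): i is a square root of -1
print((-i) * (-i))     # (-1+0j): so is -i

# Lambiam's point: a purely imaginary number times a purely
# imaginary number is real, not imaginary
print(2j * 3j)         # (-6+0j)
```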
Nice teacher. This is a very provocative question, and (as we see here) can lead to fun and important mathematics. It surely raises one fundamental question. What is a number?
If a number is something with which we count, namely 0, 1, 2, …, then most numbers do not have square roots. (The square root of a number n is a number a such that a×a = n.) However, these numbers do not include −1, so we cannot properly ask the question! If a number is a difference of counting numbers, namely …, −2, −1, 0, 1, 2, …, such as we would use for, say, bookkeeping, then we can ask the question. Yet now we find that none of the negative numbers have square roots. Using ratios of integers does not help. We can define "algebraic numbers" to solve any such problem, but that feels like cheating. A more interesting, and valuable, ploy is to change our definition of equality. For example, we can start with integers, but say that two integers are equal modulo 17 if their difference is a multiple of 17. In this example we find that 4 and 13 are both square roots of −1, modulo 17. However, half of our numbers still have no square root, which leads us to the remarkable theory of quadratic reciprocity.
The usual development of numbers takes us from counting (natural numbers) to differences (integers) to fractions (rational numbers) and then to "filling in the gaps", the real numbers. Technically, this is much more complicated; but geometrically, it is quite natural. For example, a square whose side is one unit long has a diagonal whose squared length must be 2, by the Pythagorean theorem. So a square root of two naturally occurs in geometry, though it cannot be expressed as a ratio of integers. Or consider the circumference of a circle whose diameter is one unit long; its length is π, which is also not a rational number. Yet, although real numbers give us square roots for zero and all positive real numbers, they fail completely for negative numbers.
No worries; we use geometry more cleverly. Arrange the real numbers on a line, with zero in the middle, positive numbers increasing as we go right, and negative numbers going left. Negating a number is a 180° reversal. If we include turning in our numbers, the square root of −1 requires a 90° turn! Thus the "real line" becomes the "complex plane", and our numbers are, in a sense, two-dimensional. And these complex numbers have a remarkable property, expressed in the fundamental theorem of algebra. It works as follows. The square root of two is a number, x, that causes the formula x²−2 to evaluate to zero. This formula is a polynomial, a sum of terms, each term being a number times a natural power of x (where x⁰ is defined to be 1). The theorem says that for every polynomial we can find a complex number that produces a zero.
Are we done? For many practical purposes, yes. Complex numbers are the culmination of thousands of years of mathematical development, and support a rich, full theory of numbers. However, mathematicians were too curious and creative to stop there. The great Irish mathematician and theoretical physicist William Rowan Hamilton felt that if complex numbers were so natural a fit to the plane, then perhaps a more sophisticated kind of number would be natural for three-dimensional space. Fifteen years of failure did not stop him from trying; and on the evening of the sixteenth of October in 1843, as he and his wife were crossing the Broom Bridge in Dublin on the way to a meeting, inspiration struck him like a jolt of lightning. What he discovered was a vast new freedom in the definition of numbers and algebra, and mathematics was changed forever. For example, in 1858 Cayley published Memoir on the theory of matrices, which contained the first abstract definition of a matrix, a pivotal use of the new ideas that influences almost every application of mathematics today.
One example is irresistible. Consider the following pair of 2×2 matrices:
\begin{align}
 I &{}= \begin{bmatrix}1&0\\0&1\end{bmatrix} \\
 J &{}= \begin{bmatrix}0&-1\\1&0\end{bmatrix}
\end{align}
Using the standard rules of matrix multiplication, the product of I times any 2×2 matrix, A, from either side, has no effect: IA = AI = A. Thus I acts like 1 does for ordinary numbers. Its negative,
 -I = \begin{bmatrix}-1&0\\0&-1\end{bmatrix} ,
acts like −1. The product of J with itself produces −I, so we can view J as a square root of −1. Now take weighted sums of I and J,
 a I + b J = \begin{bmatrix}a&-b\\b&a\end{bmatrix} ,
and call these our "numbers". These add, subtract, multiply, and divide exactly like complex numbers! (Using 4×4 matrices we can reproduce Hamilton's breakthrough quaternion numbers.) And in terms of analytic geometry, J is the matrix for a 90° rotation, a satisfying conclusion. --KSmrqT 05:43, 16 January 2007 (UTC)
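KSmrq's 2×2 construction can be checked directly; this sketch multiplies the matrices by hand (plain Python, no libraries) and confirms that aI + bJ behaves exactly like the complex number a + bi:

```python
def mul(A, B):
    # product of two 2x2 matrices
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

I = [[1, 0], [0, 1]]
J = [[0, -1], [1, 0]]

def num(a, b):
    # the "number" aI + bJ
    return [[a, -b], [b, a]]

print(mul(J, J))   # [[-1, 0], [0, -1]], i.e. -I: J is a square root of -1

# (1 + 2i)(3 + 4i) = -5 + 10i, and the matrix product agrees:
print(mul(num(1, 2), num(3, 4)) == num(-5, 10))  # True
```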
Now that is very cool. So simple! X [Mac Davis] (DESK|How's my driving?) 00:32, 19 January 2007 (UTC)
That is one of the coolest things I've ever read! You put it well enough that even I, a 9th grader, could understand it. It's amazing how math always works out. Imaninjapiratetalk to me 22:00, 26 January 2007 (UTC)

[edit] Class operators

My friend and I have been thinking about this problem for about an hour or so, and we're stumped (and disappointed in ourselves). We need a class \mathsf{K} for which \mathsf{PH}(\mathsf{K}) is properly contained in \mathsf{HP}(\mathsf{K}). In other words, we need algebras A and B such that A is a homomorphic image of a product of copies of B, but is not a product of homomorphic images of B. Any help is appreciated, as we fear that we might have to go to the bar for a bit to think about the problem. =) –King Bee (TC) 21:29, 15 January 2007 (UTC)

Working in the variety of sets without additional structure, let B have 2 elements and let A have 3? Melchoir 01:18, 16 January 2007 (UTC)
Well, sets are a variety; i.e., closed under H, S, and P. With your example, you could take B, map that set surjectively to a one element set, then take the threefold product of it (to get something set isomorphic to A). Thanks for taking the time to look, however; do you have any other ideas? I thought that starting off with a simple algebra might be a good idea (one without any homomorphic images). –King Bee (TC) 02:46, 16 January 2007 (UTC)
Uh… no product of 1- or 2-element sets is ever going to yield a 3-element set, yes? It seems to me that if K = {B} where B has 2 elements, then PH(K) contains only sets whose sizes are powers of 2, whereas HP(K) contains all nonempty sets. Melchoir 04:41, 16 January 2007 (UTC)
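Melchoir's size argument can be mimicked with a finite brute-force sketch (a toy model only: it looks at finite products of at most four factors, and uses the fact that for bare sets every surjection is a homomorphism, so an image of B can have size 1 or 2):

```python
from itertools import product

BASE = 2  # |B| = 2

def ph_sizes(max_factors=4):
    # P(H({B})): sizes of finite products of homomorphic images of B;
    # each image has size 1 or 2, so every product size is a power of 2
    sizes = set()
    for k in range(1, max_factors + 1):
        for choice in product((1, BASE), repeat=k):
            p = 1
            for c in choice:
                p *= c
            sizes.add(p)
    return sizes

def hp_sizes(max_factors=4):
    # H(P({B})): a product of k copies of B has 2**k elements, and a
    # surjection onto any nonempty smaller set is a homomorphism of bare sets
    return set(range(1, BASE ** max_factors + 1))

print(sorted(ph_sizes()))                 # [1, 2, 4, 8, 16]
print(3 in hp_sizes(), 3 in ph_sizes())   # True False
```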
Ah, I understand you now; I don't know what I was saying. However, sets have no structure on them whatsoever; I wonder if they even classify as algebras. Even if they do, this solution is a bit unsatisfying. Thank you for responding, however, I appreciate your input. –King Bee (TC) 17:22, 16 January 2007 (UTC)
I know what you mean, but I can assure you at least that sets are algebras in the sense of universal algebra: they are algebras with no operations and no identities. (There's only one identity that can be enforced, x = y, which restricts one to the subvariety of 0- and 1-element sets.) If you really want to get technical, the variety of sets is the variety of Ω-algebras where Ω is itself the empty set.
Also, if you'd like a variant with a bit more structure, you could probably adapt my example to, say, the varieties of pointed sets, or of magmas, or of dynamical systems. Are those a little more satisfying? Melchoir 20:02, 16 January 2007 (UTC)
Thinking more about your solution, I am actually surprised by its cleverness. Instead of taking sets, take the algebra B to be a two element set with a unary operation that fixes every element. Take A to be the 3 element set with the same unary operation (fixing every element). Then PH contains only sets whose sizes are powers of 2, and HP can contain sets of pretty much any size. I like it a lot more when it's worded in this fashion.
I also am wondering if there's some example of this using groups/rings. I want an example to have that can be brought up in an undergraduate abstract algebra course, so using algebras that such students would be familiar with would be very nice. Thanks again, sorry I wrote you off so quickly before. –King Bee (TC) 21:49, 16 January 2007 (UTC)
Well, I'm glad I helped! I think groups have the same problem as rings: that all homomorphic images are given by quotients. Maybe one can get past that using the non-commutativity of the group operation, as opposed to ring addition? Melchoir 22:52, 16 January 2007 (UTC)
(So that others can follow along, the general subject matter is universal algebra.) Do not strip structure; add it. Consider two rings (with multiplicative identity). Their product is a ring with injections from neither, because we cannot preserve the multiplicative identity. So homomorphisms from this product can give us rings we cannot get from products of homomorphisms. (I have not worked through a specific example, but I think this attack should work.) --KSmrqT 08:28, 16 January 2007 (UTC)
The goal of this problem is to construct a pathological homomorphism, and adding structure makes homomorphisms harder to find, almost by definition. In your case, you'll have to find an ideal in the product ring that projects badly enough to make
\frac{A\times B}{I} \neq \frac{A}{I_A}\times\frac{B}{I_B}.
Even if this turns out to be possible, and I doubt it, was it worth the trouble? Melchoir 17:17, 16 January 2007 (UTC)
I think Melchoir is right here. I still feel as though starting off with an algebra that is simple is the place to be; I just can't make it work. Maybe something like the omega-fold direct product of a field with itself has some interesting homomorphic image; I don't know. –King Bee (TC) 17:22, 16 January 2007 (UTC)

[edit] Sample size vs. extreme observations

For a one-variable statistic with an infinite, standard-normally-distributed population, what is the relationship between the sample size and the expected highest value observed? NeonMerlin 23:49, 15 January 2007 (UTC)

Given a single measurement of a random variable with a cumulative distribution function F, the probability of getting a result that does not exceed s is F(s). For N measurements, the probability of not exceeding s in any of the measurements is F(s)^N. The derivative of F(s)^N with respect to s will thus tell you the probability of getting no result larger than s+ds in any of them, but a result of at least s in at least one of the measurements. That is thus the probability that your highest observed value was s. In other words, F(s)^N would be the cumulative distribution function of the highest observed value. I made that up from scratch, but a quick googling confirms it, see for example the abstract of [1] --mglg(talk) 00:51, 16 January 2007 (UTC)
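mglg's claim that F(s)^N is the CDF of the sample maximum is easy to test by simulation (a quick Monte Carlo sketch; the seed, sample size, and threshold are arbitrary choices):

```python
import math
import random

def Phi(x):
    # standard normal CDF, built from the error function
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

random.seed(1)
N, trials, s = 5, 200_000, 1.0

# empirical P(max of N standard normals <= s)
hits = sum(max(random.gauss(0, 1) for _ in range(N)) <= s
           for _ in range(trials))
empirical = hits / trials
predicted = Phi(s) ** N   # about 0.4216

print(empirical, predicted)
```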
That gives me, for the standard normal distribution, a CDF of
\Phi(x)^n = \left[ \frac{1}{\sqrt{2\pi}} \int_{-\infty}^x \exp\left(-\frac{u^2}{2}\right) \, du \right]^n
and a PDF of d/dx that. I imagine someone who knows integral calculus will be able to simplify these, but how do I turn either of them into the expected value? NeonMerlin 01:40, 16 January 2007 (UTC)
In general, given a PDF you can calculate the expectation value as Integral[x PDF(x) dx]. I don't think you can get any analytic answer in this case. There is probably a useful approximation for large N, though, but I don't know it. By the way, (essentially) your integral above has a name: the Error function. --mglg(talk) 03:40, 16 January 2007 (UTC)
Let Z denote the highest of n independent random variables having the standard normal distribution. It is possible to give the following lower bound on the expected value of Z: E(Z) > Φ−1(n/(n+1)). This can be seen as follows. Φ is a monotonic function, so it commutes with max. So Φ(Z) is the highest of the Φ-values of n random variables whose (cumulative) distribution function is Φ. But these Φ-values are then simply uniformly distributed on the interval (0,1), so Φ(Z) has distribution function F(x) = x^n (as per above), and E(Φ(Z)) is now easily found to be n/(n+1). Since the function Φ−1 is convex in the area of interest (for n > 1), E(Z) = E(Φ−1(Φ(Z))) > Φ−1(E(Φ(Z))) = Φ−1(n/(n+1)). I have not tried to estimate how tight this is, but I wouldn't be too surprised if asymptotically E(Z) ~ Φ−1(n/(n+1)).  --LambiamTalk 05:21, 16 January 2007 (UTC)
Nice, Lambiam. But shouldn't it be "convex" and "lower bound", i.e. E(Z) > Φ−1(n/(n+1)) ? --mglg(talk) 19:12, 16 January 2007 (UTC)
You're right. In the meantime I have done some calculations, which suggest that the difference is not a runaway one; putting E(Z) = Φ−1(n/(n+δn)), the quantity δn appears to decrease very slowly as n increases:
      n  delta_n
  -----  -------
      1  1.00000
      2  0.80231
      4  0.71501
      8  0.67000
     16  0.64408
     32  0.62783
     64  0.61688
    128  0.60909
    256  0.60326
    512  0.59874
   1024  0.59511
   2048  0.59213
   4096  0.58965
   8192  0.58755
  16384  0.58578
It looks like δn will remain above 0.5.  --LambiamTalk 21:51, 16 January 2007 (UTC)
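As a sanity check on the table, for n = 2 the expected maximum of two independent standard normals is known in closed form, E(Z) = 1/√π, and solving E(Z) = Φ−1(n/(n+δn)) for δ2 reproduces the tabulated value (a sketch using only the standard library; Φ is built from math.erf):

```python
import math

def Phi(x):
    # standard normal CDF
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# classical fact: E(max of two iid standard normals) = 1/sqrt(pi)
E2 = 1.0 / math.sqrt(math.pi)

# invert E2 = Phi^{-1}(n / (n + delta)) with n = 2
delta2 = 2.0 / Phi(E2) - 2.0
print(delta2)   # close to the tabulated 0.80231
```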

[edit] integration

In some of my physics lectures and my maths A-level course, my teachers/lecturers would make references to how it's not "proper" to split dy/dx when doing separation-of-variables integration, or any other maths that needs the dy/dx on separate sides. So a) why is it "wrong", and b) what's the "proper" way to do it?--137.205.79.218 00:56, 16 January 2007 (UTC)

You're probably not doing it wrong; he may have made reference to it being "improper" because "dy/dx" is a symbol that means "derivative of y with respect to x." Even though it looks like a fraction, you shouldn't be able to treat it like one. However, you can under certain circumstances; he just wouldn't tell you why (probably to save you the horror of a "boring" proof). –King Bee (TC) 02:39, 16 January 2007 (UTC)
If you give us an example of a disapproved-of instance of how you were doing it, we might be able to say whether it was proper or not, and, if not, why not. Without such an example we can only guess.  --LambiamTalk 04:41, 16 January 2007 (UTC)
I think it's when you do something like splitting variables for integration. For example, the "wrong" way:
\frac{dy}{dx}=\frac{f(x)}{g(y)}
g(y) \frac{dy}{dx}=f(x)
g(y) \, dy=f(x) \, dx
\int g(y) \, dy=\int f(x) \, dx
And the "right" way:
\frac{dy}{dx}=\frac{f(x)}{g(y)}
g(y) \frac{dy}{dx}=f(x)
\int g(y) \frac{dy}{dx} \, dx=\int f(x) \, dx
\int g(y) \, dy=\int f(x) \, dx
Note that the first way isn't justified, because the derivative is treated as a fraction. It may be useful to think of it as a fraction, yes, but you shouldn't write it down because it is wrong. x42bn6 Talk 21:14, 16 January 2007 (UTC)
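Whichever justification one prefers, both derivations give the same answer, and that can be checked numerically on a concrete case. Taking f(x) = x and g(y) = y with y(0) = 1, separating variables gives y = √(x² + 1); a crude Euler integration of dy/dx = x/y agrees (an illustrative sketch, not from the thread):

```python
import math

def closed_form(x):
    # from  y dy = x dx  =>  y^2/2 = x^2/2 + 1/2  (using y(0) = 1)
    return math.sqrt(x * x + 1.0)

def euler(x_end, steps=100_000):
    # forward Euler on dy/dx = x / y, starting from y(0) = 1
    h = x_end / steps
    x, y = 0.0, 1.0
    for _ in range(steps):
        y += h * (x / y)
        x += h
    return y

print(euler(1.0), closed_form(1.0))  # both about 1.41421
```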
It is possible to give a precise and rigorous meaning to infinitesimals like dx and dy so that the "wrong" way can be justified and becomes perfectly right.  --LambiamTalk 22:12, 16 January 2007 (UTC)

[edit] Twisted loop

You know the trick where you twist a strip of paper and glue the ends, so that you can draw a line along the length of the strip and arrive at where you begin without lifting your pen? What's it called and where can I read more about it? Thanks. Xiner (talk, email) 03:36, 16 January 2007 (UTC)

You mean a Möbius strip? --mglg(talk) 03:43, 16 January 2007 (UTC)
I believe you are referring to a Möbius strip. − Twas Now 03:44, 16 January 2007 (UTC)
Ah, yes. Thank you both! Xiner (talk, email) 03:55, 16 January 2007 (UTC)

[edit] Stock Options Question

Let me use microsoft as an example: http://www.marketwatch.com/tools/quotes/options1.asp?symb=MSFT&sid=3140

When I posted this, MSFT was at $31.21. My question is, why are the call options for 32.50 and up so cheap? Maybe I am reading the charts wrong, but it seems like the premium for all out-of-the-money calls is merely 5 cents... Isn't this a worthy gamble? 140.180.1.250 05:41, 16 January 2007 (UTC)

The call options that you are looking at are January 2007 options, so they have next to no time value, and no intrinsic value (because they are out of the money). Presumably 5 cents is a minimum premium threshold - you can buy these call options, but they will cost you next to nothing because they will almost certainly be worthless when they expire in a couple of weeks' time.
On the other hand, if you look at the July 2007 call options, the premiums are higher - a premium of $1.48 for a strike price of $32.50, for example. These options have longer to run, so they have a time value, which is reflected in the premium. Roughly speaking, there is a higher probability of Microsoft stock going above $32.50 sometime before the end of July than there is of this happening before the end of January. Gandalf61 11:37, 16 January 2007 (UTC)
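Gandalf61's point about time value can be illustrated with the standard Black–Scholes formula (a sketch only, not how MarketWatch prices anything; the 20% volatility and 5% risk-free rate are invented for illustration):

```python
import math

def Phi(x):
    # standard normal CDF
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_call(S, K, T, r, sigma):
    # Black-Scholes price of a European call option
    d1 = (math.log(S / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S * Phi(d1) - K * math.exp(-r * T) * Phi(d2)

S, K, r, sigma = 31.21, 32.50, 0.05, 0.20   # assumed parameters
near = bs_call(S, K, 2 / 52, r, sigma)      # ~2 weeks to expiry
far = bs_call(S, K, 26 / 52, r, sigma)      # ~6 months to expiry
print(round(near, 2), round(far, 2))        # the longer-dated call is worth far more
```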
Gandalf61, did you mean to toss a pound sign in there ? StuRat 19:34, 17 January 2007 (UTC)
Oops. Fixed. Gandalf61 20:24, 17 January 2007 (UTC)

[edit] Steinitz Replacement Theorem

One of my maths courses contained the "Steinitz Replacement Theorem" which in essence says that, for a vector space V, if {e1, ..., en} is a basis for V and {v1, ..., vm} is a set of linearly independent vectors in V, then m ≤ n.

Understandably this is a very important result. However, I have a couple of problems:

  • The guy didn't prove it properly. Proof went along the lines of "add v1 to the set of es, then delete any es (at least one) that are now no longer linearly independent (this makes sense); then continue and at the end you will have a set containing all the vs and zero or more es" (in my view this assumes there were fewer vs than es which is what we were trying to prove).
  • Very few places mention the Steinitz Replacement Theorem, and none of them actually mean the same theorem.

So, does anyone here either know of this as the Steinitz Replacement Theorem, or know of another result that shows that at most dim(V) vectors in V can be linearly independent?

Rawling 12:31, 16 January 2007 (UTC)

Steinitz himself (E. Steinitz, Bedingt konvergente Reihen und konvexe Systeme, J. reine angew. Math. 143 (1913) 128–175) states on p.133 (translated into modern language):
Let M be a vector space which is generated by p elements. If M contains r linearly independent elements β1,…,βr, there is a system of generators of M consisting of p elements and containing the βi. In particular, a vector space generated by p elements cannot contain more than p linearly independent elements.
The proof is roughly as you stated it: If some βi is not among the p generators αj, it can be expressed as a linear combination of them, and if the coefficient of some αk is nonzero, this αk can be replaced by βi. The fact that r ≤ p is a consequence, not a prerequisite.--.7g. 13:27, 16 January 2007 (UTC)
Cheers for the reply. I've managed to salvage the proof myself without using r ≤ p as a prerequisite to prove r ≤ p, so I'm satisfied. Thanks again :) Rawling 13:57, 16 January 2007 (UTC)
We can approach this topic in several ways. Consider a finite ordered list of vectors, (v)k = (v1,…,vn) in a vector space V. We may define three properties as follows:
  1. (v)k spans V if every vector in V can be expressed in at least one way as a linear combination of these vectors.
  2. (v)k is linearly independent if every vector in V can be expressed in at most one way as a linear combination of these vectors.
  3. (v)k is a basis for V if every vector in V can be expressed in exactly one way as a linear combination of these vectors.
We have three important theorems:
  1. If (v)k spans V, it can be reduced to a basis by removing zero or more vectors.
  2. If (v)k is linearly independent, it can be expanded to a basis by appending zero or more vectors.
  3. If (v)k and (u)k are both bases for V, then they contain the same number of vectors.
The theorem in the question is a helpful intermediate result:
  • Theorem. Suppose V is spanned by a list of n vectors, and that it also contains a linearly independent list, (v)k, of m vectors; then m ≤ n.
  • Proof. The idea of the proof is to proceed by induction on m. In the process we will build a spanning list that begins with (v)k.
    Base. If m is zero, the claim is trivially true.
    Induction. Assume the claim is true for any independent list of m vectors, so that V is spanned by a list (v1,…,vm,u1,…,un−m), with n−m ≥ 0. This implies that vm+1 can be expressed as a linear combination of these vectors. If m equals n, then there is no ui in the list, implying that (v)k is not independent, contrary to hypothesis. Now insert vm+1 into the list to produce (v1,…,vm+1,u1,…,un−m). This larger list still spans; however, we have shown it cannot be independent, so we can remove a vector without affecting the span. Go through the list from the beginning, and remove the first vector that is in the span of its predecessors. Since we have stipulated that the (v)k are linearly independent, the removed vector must be some ui. ∎
Quite likely this is what was presented in the course, or, at least, what was intended. --KSmrqT 14:07, 16 January 2007 (UTC)
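The induction step above is effectively an algorithm, and can be run on a small example. This sketch (my own illustration, working over the rationals with Gaussian elimination) inserts each v and then deletes the first vector lying in the span of its predecessors:

```python
from fractions import Fraction

def rank(vectors):
    # rank via Gaussian elimination over the rationals
    if not vectors:
        return 0
    m = [[Fraction(x) for x in v] for v in vectors]
    r = 0
    for c in range(len(m[0])):
        piv = next((i for i in range(r, len(m)) if m[i][c] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(len(m)):
            if i != r and m[i][c] != 0:
                f = m[i][c] / m[r][c]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

def exchange(spanning, independent):
    # insert each v_j after the previous v's; the list still spans but is
    # now dependent, so delete the first vector that lies in the span of
    # its predecessors (by independence of the v's it is always some u)
    lst = list(spanning)
    for j, v in enumerate(independent):
        lst.insert(j, v)
        for i in range(len(lst)):
            if rank(lst[:i + 1]) == rank(lst[:i]):
                lst.pop(i)
                break
    return lst

e1, e2, e3 = (1, 0, 0), (0, 1, 0), (0, 0, 1)
v1, v2 = (1, 1, 0), (0, 1, 1)
result = exchange([e1, e2, e3], [v1, v2])
print(result)   # a spanning list of 3 vectors that begins with v1, v2
```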

[edit] Probability of retaining adjacent elements after randomization of a sequence

Take a sequence of numbers a1, a2, ... , an and produce a random permutation of the sequence. What is the probability that there exists no k such that ak is adjacent to ak+1 in the permuted sequence? What about the probability that there exists exactly one k, or exactly m k's (0<=m<=n-1) which satisfy that condition? (Note that the term "adjacent" implies that elements may appear in either order)

I am familiar with probability and combinatorics to undergraduate level (a few years ago) but don't really know where to start. Thanks.

Darkhorse06 15:15, 16 January 2007 (UTC)

The number of permutations of n elements destroying all original adjacencies is (sequence A002464 in OEIS). With 1 remaining it is (sequence A086852 in OEIS), and 2 (sequence A086853 in OEIS). That should give you a start.  --LambiamTalk
[edit conflict] First let me caution that the count of adjacencies isn't your m precisely (although they surely match at 0, and the OEIS has a formula).
Consider the simpler problem with a permuted ring (so that the first and last elements are adjacent) and ask whether either neighbor of a given a'_k was a neighbor previously. Suppose not — then a'_{k+1} (remembering that a'_{n+1}\equiv a'_1) is not either of the two original neighbors, with probability \frac{n-3}{n-1}. a'_{k-1} (remembering that a'_0\equiv a'_n) is also not a neighbor, but one non-neighbor has already been taken as a'_{k+1}, so this has probability \frac{n-4}{n-2}. So the probability that a'_k kept a neighbor is just 1-\frac{(n-3)(n-4)}{(n-1)(n-2)}. Unfortunately, whether each of the various a'_k kept any neighbors is not independent; a neighbor a_k kept of course kept a_k as a neighbor (we have one answer right there: P(m = 1) = 0; on the ring, similarly P(m = n − 1) = 0). But as a very rough approximation, we may take P(m=0)=\left(\frac{(n-3)(n-4)}{(n-1)(n-2)}\right)^n.
More precisely, we could proceed iteratively. The second element is a neighbor of the first with probability \left(1-\frac1n\right)\frac{n-3}{n-1}+\frac1n\frac{n-2}{n-1}=\frac{n^2-3n+1}{n(n-1)} because that first element may have been an edge element before or not. To handle the third element we'd have to consider whether the second element might have been an edge element, and whether it was in fact adjacent to the first element or not (if we allow m\neq0), and both of these cases would be coupled to the initial decision about the first element being on an end.
Perhaps simpler (at least for m = 0) is to consider the dimer that we might find in the permuted sequence: there are n − 1 such dimers, and having picked one there are 2(n − 1)! ways to permute the sequence treating the dimer as one element (the 2 comes from flipping it). So the probability for a given dimer is just \frac2n. The dimers, however, are not independent either so we can't just say that P(m=0)=\left(1-\frac2n\right)^{n-1} (consider n = 3).
Checking these ideas with a brute-force check shows that the independent-ring formula always dramatically underestimates P(m = 0), and that the independent-dimer formula overestimates but for n\in[6,10] is within 6%; it may actually be asymptotically correct. (The actual probabilities for n\in[2,10] are \left\{0, 0, \frac1{12}, \frac7{60}, \frac18, \frac{323}{2520}, \frac{2621}{20160}, \frac{7937}{60480}, \frac{239653}{1814400}\right\}, but beyond that they become prohibitively expensive to calculate directly.) Sorry I don't have more elegant help, but perhaps someone can extend these ideas usefully. --Tardis 23:17, 16 January 2007 (UTC)
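The exact probabilities Tardis quotes can be reproduced directly by brute force (a small sketch; identifying the sequence with 0,…,n−1 so that "originally adjacent" means values differing by 1):

```python
import math
from fractions import Fraction
from itertools import permutations

def p_no_original_adjacency(n):
    # fraction of permutations of 0..n-1 in which no two values that
    # differ by 1 end up in adjacent positions (in either order)
    good = sum(
        all(abs(p[i] - p[i + 1]) != 1 for i in range(n - 1))
        for p in permutations(range(n))
    )
    return Fraction(good, math.factorial(n))

print([str(p_no_original_adjacency(n)) for n in range(2, 8)])
# ['0', '0', '1/12', '7/60', '1/8', '323/2520'] — matching the list above
```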