Wikipedia:Reference desk/Archives/Mathematics/2007 November 21

Mathematics desk
Welcome to the Wikipedia Mathematics Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.


November 21

topological spaces where all countable intersections of open sets are open

Is there a common designation for these spaces, and/or have they been studied? Examples are the cocountable and the cofinite topologies on uncountable sets. Thanks, Rich Peterson 130.86.14.86 (talk) 04:02, 21 November 2007 (UTC)

Another topology would be the power set topology, in which every subset is an open set. In this case, all countable intersections of open sets are open; in fact, any arbitrary intersection is open. To answer your question, I haven't ever heard of such a property being named and mentioned specifically. My guess is that it is not as significant as one might at first think. A Real Kaiser (talk) 05:56, 21 November 2007 (UTC)
The "power set topology" is also known as the discrete topology. Tesseran (talk) 19:33, 21 November 2007 (UTC)
I thought I'd seen that referred to as a Gδ topology, but I could be mistaken. Certainly, Gδ sets in any topology form a base for a topology, but I'm not sure what other characteristics it might have, and I'm not sure that that derived topology has the property in question. — Arthur Rubin | (talk) 23:48, 21 November 2007 (UTC)
Counterexamples in Topology defines a Gδ space to be a space in which every closed set is Gδ, rather than one in which every Gδ set is open. Algebraist 15:34, 22 November 2007 (UTC)

prime number question

Brun proved that the sum of the reciprocals of the twin primes is finite. What about the sum of the reciprocals of all primes that differ by four from another prime; is that finite? Thanks, Rich Peterson 130.86.14.86 (talk) 04:09, 21 November 2007 (UTC)

I think the Cousin prime article addresses this, but it is not clear whether this is a proved result or an empirical one. Apparently Viggo Brun has a whole family of constants, not just Brun's constant. A quick check in MathSciNet revealed only empirical papers and applications, modifications, and replacements for Brun's sieve. JackSchmidt (talk) 05:15, 21 November 2007 (UTC)
Yes, it's finite. http://mathworld.wolfram.com/BrunsConstant.html says:
"Segal (1930) proved that Brun-type sums Bd of 1/p over consecutive primes separated by d are convergent (Halberstam and Richert 1983, p. 92)".
I guess the proof also works for non-consecutive primes, but that is not needed for cousin primes, which are always consecutive above 7. PrimeHunter (talk) 12:06, 21 November 2007 (UTC)
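Out of curiosity, one can watch how slowly the partial sums grow; a small Python sketch (trial division, summing 1/p over primes within 4 of another prime — only Segal's theorem, not any computation, establishes that the full sum converges):

def is_prime(n):
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    d = 3
    while d * d <= n:
        if n % d == 0:
            return False
        d += 2
    return True

# Partial sum of 1/p over primes p with p - 4 or p + 4 also prime.
total = sum(1.0 / p for p in range(2, 10**6)
            if is_prime(p) and (is_prime(p - 4) or is_prime(p + 4)))
print(total)  # a partial sum only; it grows extremely slowly with the bound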

Continuum hypothesis

According to the continuum hypothesis, \aleph_0 < |S| < 2^{\aleph_0}, thus 2^{\aleph_0}=\aleph_1.

How then is {\aleph_0}^{\aleph_0} defined, and similarly how are \aleph_{\aleph} and \aleph_{\aleph}^{\aleph_{\aleph}} defined?

71.100.5.134 (talk) 14:54, 21 November 2007 (UTC)

Perhaps you meant that there is no set S such that  \aleph_0 < |S| < 2^{\aleph_0}.
You can find a definition for cardinal exponentiation in our article Cardinal number. I think it is true in general that \aleph_0^{\aleph_0}=2^{\aleph_0}; in particular, assuming CH, this is equal to \aleph_1. You can also find the definition of aleph numbers in Aleph number. As for just \aleph without any subscript, it is sometimes used to denote 2^{\aleph_0}. -- Meni Rosenfeld (talk) 16:00, 21 November 2007 (UTC)
As applied to logical states and variables, could (or would) {\aleph_0}^{\aleph_0} be used to indicate multiple-state variables, whereas 2^{\aleph_0} would be used to indicate binary-state variables? Or is this application to logical states and variables a misunderstanding, or only fictional? 71.100.5.134 (talk) 18:24, 21 November 2007 (UTC)
"Sometimes" in the sense of "essentially never". Abraham Fraenkel used that notation in a book published around 1930 or so, I think. Other than that I have never seen it in the literature. Not even once. --Trovatore (talk) 17:53, 21 November 2007 (UTC)
There is a saying in Aramaic, roughly transliterated "girsa deyankuta bal nishtakakhat" and roughly translated as "what one has learned in his youth is not forgotten". My first introduction to cardinalities was from a book of the Open University of Israel, where this notation is used. While I do not recall encountering it anywhere else, I have only recently realized how rare it really is. Still, I know of no other meaning of \aleph, so we might as well interpret it as 2^{\aleph_0}. -- Meni Rosenfeld (talk) 18:10, 21 November 2007 (UTC)
In my opinion it's an especially unfortunate notation that I would truly like to see forgotten entirely. The "aleph" notation is specifically connected with wellordering, and the collection of functions from \aleph_0 into 2 is an especially difficult thing to wellorder. Because the axiom of choice is platonistically true, we know that it must have some wellordering, but we don't know where it fits in the larger scale, and it may have no definable wellordering at all. --Trovatore (talk) 18:17, 21 November 2007 (UTC)
Maybe I should write them a letter or something (the books in question are getting pretty dated as it is). I will certainly keep in mind to avoid using it.
Returning to the original question: What do you mean by "indicate state variables"? 2^{\aleph_0} can represent the number of possible states of a system with countably infinitely many binary variables. \aleph_0^{\aleph_0} can represent the number of possible states of a system with countably infinitely many variables, each of which can take a countably infinite number of different states. These two numbers happen to be the same. -- Meni Rosenfeld (talk) 19:13, 21 November 2007 (UTC)
The idea is to make the distinction that the number of states is not limited to two. The importance of this would be in describing two different types of counting equipment, method, or capability. For instance, a digital computer might be defined with 2^{\aleph_0}, whereas a human might be described with {\aleph_0}^{\aleph_0} capability, although the results of an arithmetic computation (but not a logical equation) might be the same. 71.100.5.134 (talk) 19:56, 21 November 2007 (UTC)
I don't think they make computers with infinite memory just yet. I also don't know of any human with infinitely many neurons. I think neither 2^{\aleph_0} nor {\aleph_0}^{\aleph_0} can be related in a meaningful way to any capability of computers or humans. Also, keep in mind that those two are the same, so they certainly cannot represent different capabilities (at best, only superficially distinct but ultimately equivalent capabilities). -- Meni Rosenfeld (talk) 20:18, 21 November 2007 (UTC)
Okay, discarding infinite capability, what symbolic notation would you use then to distinguish a computer's base-and-exponent processing limit from a human's? In other words, if we were talking about a 2-position light switch with each position representing a separate state (on and off), then the notation to describe it might be 2^1, and for a 3-position switch, 3^1. How then would one describe a human? x^n, where x and n would be some maximum finite numbers? 71.100.5.134 (talk) 20:47, 21 November 2007 (UTC)
I'm fairly sure nobody knows how to model human processing capability, except very crudely. If they did, though, it wouldn't look like that at all. Black Carrot (talk) 20:59, 21 November 2007 (UTC)
Human processing capability depends upon what you mean. The normal healthy human brain is certainly capable of processing enough information to allow us to cross a busy street without a mishap and to perform many other similarly demanding tasks. I am sure many tests have been run to determine the maximum processing capability of both individual human beings and of human beings as a group. In fact, some exist as games like chess and pinball. \aleph_0^{\aleph_0} (talk) (email) 09:29, 23 November 2007 (UTC)
More to the point of your question, though, the second model you describe (maximum finite number) can be modeled by the first (binary), and vice versa. They're not actually different. Black Carrot (talk) 21:04, 21 November 2007 (UTC)
While there may be no numerical difference in the results of equivalent equations expressed in different bases, there may however not be equivalence in terms of logical results. Ask me and I will explain. 71.100.5.134 (talk) 21:19, 21 November 2007 (UTC)
Does anybody have a proof or proof sketch that \aleph_0^{\aleph_0}=2^{\aleph_0}? I'm interested to see it. SamuelRiv (talk) 23:05, 21 November 2007 (UTC)
Clearly 2^{\aleph_0} \le \aleph_0^{\aleph_0}, and \aleph_0^{\aleph_0} \le \left(2^{\aleph_0}\right)^{\aleph_0} = 2^{\aleph_0 \cdot \aleph_0} = 2^{\aleph_0}. -- Meni Rosenfeld (talk) 23:18, 21 November 2007 (UTC)
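Spelled out, the squeeze uses only the exponent law (\kappa^\lambda)^\mu = \kappa^{\lambda \mu} and the Cantor–Schröder–Bernstein theorem:

2^{\aleph_0} \le \aleph_0^{\aleph_0} \quad \text{(every function from } \omega \text{ to } \{0,1\} \text{ is a function from } \omega \text{ to } \omega\text{)}

\aleph_0^{\aleph_0} \le \left(2^{\aleph_0}\right)^{\aleph_0} = 2^{\aleph_0 \cdot \aleph_0} = 2^{\aleph_0} \quad \text{(since } \aleph_0 \le 2^{\aleph_0} \text{ and } \aleph_0 \cdot \aleph_0 = \aleph_0\text{)}

and the two inequalities together give equality.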
So while the cardinality of binary-state and multiple-state logic may be the same, out of curiosity, what would be the proper symbolic notation, apart from cardinality, to distinguish multiple-state logic from binary-state logic? Simply x^n, where x is greater than 2? 71.100.5.134 (talk) 05:18, 22 November 2007 (UTC)
Why do you think that either binary or multiple-state logic can be described by some x^n? (That was rhetorical; I sincerely doubt it can.) -- Meni Rosenfeld (talk) 10:06, 22 November 2007 (UTC)
I do not know what notation to use to distinguish a binary logic system from a multiple-state logic system, except that if I were going to describe the difference to a friend, I would merely say that in a multiple-state logic system the number of states can be more than two, represented as the value of x in the expression x^n with x greater than two, while in a binary logic system x is limited to 2. What I am asking is whether there is a symbolic notation which distinguishes the two. —Preceding unsigned comment added by 71.100.8.2 (talk) 19:33, 22 November 2007 (UTC)
My answer is "not as such", but there could be some symbolic representation for one feature or another of those systems. I suggest you stick with the verbal description. -- Meni Rosenfeld (talk) 12:21, 23 November 2007 (UTC)
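For what it's worth, the finite counting behind the x^n notation is easy to verify: n variables with x states each give exactly x^n joint configurations, whatever x is. A small Python sketch:

import itertools

def count_states(x, n):
    # Enumerate all assignments of n variables, each with x states.
    return sum(1 for _ in itertools.product(range(x), repeat=n))

assert count_states(2, 5) == 2**5   # binary: 32 configurations
assert count_states(3, 5) == 3**5   # three-state: 243 configurations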

Balls in a bag

Imran has 1 blue ball in a bag. What are the odds of him taking a blue ball out of the bag by the 7th draw if each time he takes out a ball it is put back in the bag? —Preceding unsigned comment added by 86.137.244.45 (talk) 20:36, 21 November 2007 (UTC)

Homework? 71.100.5.134 (talk) 20:49, 21 November 2007 (UTC)
If he has exactly one ball in the bag, and that ball is blue, how is this a question of probability? Black Carrot (talk) 20:57, 21 November 2007 (UTC)
(edit conflict). If the only thing the bag contains is 1 blue ball, and every time he draws it he puts it back in, what else could he possibly keep on drawing but that same blue ball? It's still a question of probability, though - a probability of 1 (certainty). I seriously doubt this is the answer your teacher is expecting, so I seriously doubt you've given us the whole question. -- JackofOz (talk) 21:00, 21 November 2007 (UTC)
Not quite. A probability of 1 still leaves open the possibility that he'll draw a red ball. It's one of the drawbacks of forbidding infinitesimals - "almost 1" has to actually equal 1. So, for instance, the probability of tossing a coin infinitely many times and getting all heads is "almost 0", therefore 0, but still possible. Drawing a blue ball is guaranteed here, though. Black Carrot (talk) 21:18, 21 November 2007 (UTC)
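Concretely, for every n the all-heads event is contained in the event "the first n tosses are heads", so

P(\text{all heads}) \le P(\text{first } n \text{ tosses are heads}) = 2^{-n} \to 0,

hence the probability is exactly 0, even though the all-heads sequence is one of the possible outcomes.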
Red ball? So the blue ball changes color when he puts it back in? Interesting. Why didn't he tell us that? 71.100.5.134 (talk) 21:36, 21 November 2007 (UTC)
What Black Carrot said is that if all we know is that the probability of the drawn ball being blue is 1, it is still possible (in some sense of the word) that the ball will be red. He also said that if we know the entire description of the problem, then we know that drawing a red ball is impossible. -- Meni Rosenfeld (talk) 21:41, 21 November 2007 (UTC)
What Black Carrot said is basically true but I'd quibble with the stuff about infinitesimals. I have actually never seen a workable account in which the probability of tossing a coin ω times and having it come up all heads is positive but infinitesimal. It definitely doesn't work in nonstandard analysis (the genuine ω can't be an element of your nonstandard model, so throwing a coin ω times just isn't possible -- you could recast the problem in terms of throwing the coin ω times in the sense of the model, but then the probability of all heads is again exactly zero). I can imagine that it might be possible to make sense of the statement in some way that makes use of the surreal numbers, but as I say, I've never actually seen it successfully done. --Trovatore (talk) 00:17, 22 November 2007 (UTC)
Tossing a coin X times includes the possibilities that heads will come up at least once and tails will come up at least once. That's because a coin cannot have only 1 side. But the question talks only about 1 ball, which happens to be blue. I'm sure in my own mind the question was supposed to mention other-coloured balls in the bag, which the questioner didn't tell us about. But that's the thing: he didn't tell us about them, so how can we just assume them, mathematically speaking? -- JackofOz (talk) 00:40, 22 November 2007 (UTC)
No, that wasn't BC's point, I think. The distinction is between "there is exactly one ball in the bag, and it's blue" and "there is exactly one ball in the bag, and with probability 1, it's blue". --Trovatore (talk) 00:44, 22 November 2007 (UTC)
There seem to be different interpretations of what BC said. Meni Rosenfeld put it in terms of "if" we know the probability is 1, but surely that's assuming the very thing we're being asked to work out. If there really is only 1 ball in the bag, regardless of its colour, then the probability of drawing it each and every single time ad infinitum is 1, and the probability of drawing a ball of any other colour is 0. No? -- JackofOz (talk) 01:03, 22 November 2007 (UTC)
Yes, that's true. The concern, I think, was that you might have been saying approximately the converse -- that if the probability that the ball is blue is one, then the ball is definitely blue. But perhaps you weren't saying that at all. --Trovatore (talk) 01:14, 22 November 2007 (UTC)
Not at all. I don't believe I ever talked in terms of "if the probability of X is Y". I made the point that, in my estimation, it was very likely we weren't given the whole question. But going on just the information we were given, I said that it was 100% certain = guaranteed = probability of 1, that any draw, whether it be the first, seventh or the seven millionth, would result in a blue ball, and not just any blue ball but the same blue ball over and over and over, for obvious reasons. (I'm really undecided whether this is a fascinating discussion or just a silly one by now.) -- JackofOz (talk) 03:53, 22 November 2007 (UTC)
You're doing it again. What Black Carrot tried to complain about was your words "probability of 1 (certainty)". The idea was to emphasize that probability of 1 is not the same as certainty (that is, something can have a probability of 1 and still not be certain; again, this depends on what you mean by "certain"). It was not intended to analyze the deep meaning of what we know about the ball. -- Meni Rosenfeld (talk) 09:54, 22 November 2007 (UTC)

I think we are messing things up. First of all, infinitesimals in probability arise from measure-theoretic concerns, but they do not affect calculation, in the sense that the seventh consecutive ball being blue is still an event of probability unity.

Deep down, I get the feeling that "86" has pointed at a Laplacian probability theory task (homework?) without having given us all the information. If I am correct, please let 86 forget about sigma algebras. Pallida Mors 02:55, 22 November 2007 (UTC)

Related to this, I've been thinking about the article Almost surely, which covers these measure-zero probabilities. There's an example of throwing a dart at a board with a line drawn on it, and the probability of the dart hitting the line. Both the dart and the line are mathematical in that they have zero width. In this situation it is possible for the dart to hit the line, but with zero probability. The question which occurred to me was whether such probabilities occur in physically realizable situations. No physical dart will have a point of zero radius, nor will a line have zero width, so for a physical dart and board the chance will be small and positive. Likewise, the other example involves an infinite number of coin tosses, again not physically realizable. So my question is whether it's possible to get possible events with zero probability in physically realizable situations. --Salix alba (talk) 08:22, 22 November 2007 (UTC)

0 probability, and the distinction between it and impossibility, are mathematical abstractions. It could be possible to describe a 0-probability, possible event in some mathematical model of reality. But you can never know about a real physical event that its probability is 0, if for no other reason than that we can't model reality with enough confidence. For example, if I put a single blue ball in a bag, I will not stake my soul on it being blue when I draw it, as there could be some unknown physical/chemical process that will turn it red. -- Meni Rosenfeld (talk) 10:04, 22 November 2007 (UTC)
Well, consider any physical system that can be in an infinite number of states that are all approximately equally likely. Then the probability of the system being in any given state is 1/∞ = 0. Nonetheless the system must, in fact, be in some state, even though the a priori probability of it being in that specific state is zero. (Quantum mechanics may complicate this somewhat, but I'm pretty certain some variant of this "paradox" can be formulated there as well — all it really takes is some conceptual way of dividing a finite probability mass into infinitely many slices.) —Ilmari Karonen (talk) 17:03, 22 November 2007 (UTC)
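In measure-theoretic terms, for X uniform on [0, 1] the same thing reads

P(a \le X \le b) = \int_a^b 1 \, dt = b - a, \qquad P(X = x) = \int_x^x 1 \, dt = 0,

so each individual outcome has probability zero, yet some outcome certainly occurs.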
Meni, I read your post 2 above and, with the greatest of respect, isn't that argument just going to the nth degree for the sake of going to the nth degree? If there were an election and there was only 1 candidate, the probability of him/her being elected would be 1. Would you say the outcome wasn't certain? Would you introduce the possibility of someone influencing the electoral officials to declare a different result, or whatever? Yes, in real life, sometimes unexpected things do happen; people sometimes do break the rules. But if we're discussing the mathematical probability of a given event occurring under defined circumstances, such as the odds of drawing a pink ball out of a bag that contains only pink balls and nothing else, what's the real value of considering that the lint might have formed itself into a rigid spherical grey object, or the fairies have conspired to turn one of the balls green? Maybe I'm being obtuse here, but I promise you I'm not being deliberately so. I really don't get what making these abstruse, theoretical, hypothetical but unrealistic distinctions is all about. -- JackofOz (talk) 21:32, 22 November 2007 (UTC)
At the risk of pointing out the obvious - we are told there is 1 blue ball in the bag; we are not told that this is the only object in the bag. Unless we are told what else is in the bag we don't have enough information to answer the question. Gandalf61 (talk) 22:17, 22 November 2007 (UTC)
That underlines the very point in my first post way back at the top. We almost certainly weren't given the whole question, but what we've been discussing ever since is based on what we were given, assuming it were the whole question. -- JackofOz (talk) 22:42, 22 November 2007 (UTC)
I think you may have missed the entire point of my comments. First, I only ever mentioned this issue to explain Black Carrot's post, which seems to have confused many (and apparently it still does). I'll reiterate: That comment has nothing to do with the problem stated by the OP (and I apologize for, in trying to explain it, giving the impression that it does). It was only intended to dispel the common myth that prob. 1 = certainty and prob. 0 = impossibility, reflected in your words "probability 1 (certainty)".
Is this distinction "abstruse, theoretical, hypothetical but unrealistic"? I tend to agree that it is, and this is exactly the idea I was trying to convey in my response to Salix. For all practical purposes, probability of 0 is the same as impossibility. If by some divine inspiration we know that the probability of some physical event is 0, we might as well assume that it will not happen no matter the circumstances.
But this touches another myth I tried to dispel with my comment - that we have any chance of associating a probability of 0 with some real, physical event. People seem to think that our mathematical models of reality actually describe reality itself. But the truth is that we don't know squat about reality. Our mathematical models are just that, and we only keep them around as long as they seem to have some resemblance to what we can observe. But taking some statement about our mathematical model, and treating it as if it represents some truth about reality, is plain foolish. And having a probability of 0 for some event in our model really says nothing about the actual universe.
Taking Ilmari's comment as an example, we can't know that there is "any physical system that can be in an infinite number of states that are all approximately equally likely". At best, we can describe some model in which there is such a system. But again, having a probability of 0 for some event in this model has very little to do with the physical universe.
Returning to the problem of the OP (assuming there are no other balls in the bag, which is pretty much implied): For the physical problem of putting balls in a bag, the probability of drawing a red ball is positive, as balls can change color. For some mathematical model of the problem we construct, including such assumptions as that balls cannot change color, drawing a red ball is impossible, which is even stronger than merely stating that the probability is 0. Since we are talking about a mathematical abstraction anyway, we might as well keep this distinction in mind. -- Meni Rosenfeld (talk) 12:10, 23 November 2007 (UTC)
Wow. Toss a snowball, start an avalanche. All I said was a known fact, that probability 1 (in standard probability theory, as standardly used) can indicate both certainty and near-certainty; we don't have different symbols for the two situations. It didn't have anything to do with the original question, which wasn't worth answering. It wasn't supposed to be deep. It has application all of never. It's just a quirk of the theory that I happen to like. Black Carrot (talk) 03:20, 25 November 2007 (UTC)
Meni, thanks for the long explanation. I really do appreciate it. Just one question while I ponder further: In which particular version of reality is it possible that balls of a certain known colour, placed into a bag, will come out 10 seconds later a different colour? Sure, colours fade over time, but how much time are we allowing for here? Or what other explanation could account for the change of colour? Thanks. -- JackofOz (talk) 09:33, 27 November 2007 (UTC)
I'm not a chemistry expert, but there are all sorts of possibilities. I know of some materials which change color according to temperature, so if the bag has a significantly different temperature than the room, we might be able to observe a fairly rapid change of color. Or it could be that the ball is experiencing a chemical reaction (a stretch if it is solid) while in the bag, and when we take it out it is composed of a different material. Or it could be that the color we have seen is a thin layer of some material which evaporates in the time the ball spends in the bag. And there's also the fact that there is an extremely small probability that the ball will undergo a transformation which makes no sense thermodynamically - for example, if we take a piece of coal, the carbon atoms only need to change their position (and the impurities need to move aside) for it to turn into diamond. The probability that this will happen spontaneously in a split second is minuscule - but it could be 1 / Graham's number for all I care, it is still positive.
That's not the point, though. The point wasn't to provide examples of processes I think I know happen in reality which can change the color of the ball. The point was that I don't know enough about reality to conclude decisively that no such process exists, hence the probability of the event, given what I know, is positive. -- Meni Rosenfeld (talk) 10:20, 27 November 2007 (UTC)
I understand what you say, Meni, and thanks again for indulging me on this. Re-reading the question, I still cannot get past the overwhelming feeling that the questioner never remotely contemplated these sorts of considerations, but wanted a simple answer to what appeared to be a simple question (incomplete as it was), in which all manner of assumptions were implicit and did not even need to be identified, let alone taken into account in framing the answer. The questioner seems to have long since abandoned this discussion (as have most sane people by now, probably), but if not, Hi there, and can you please confirm the truth of this. -- JackofOz (talk) 13:00, 28 November 2007 (UTC)
I doubt the OP has even read any of the replies, otherwise he should have responded when doubts about the presentation of the question were raised. Of course the question was not meant to be deep (though it itself has shed some light on the way we perceive problems presented to us - "The probability is clearly 1 so OBVIOUSLY the question is wrong"), and this can be deduced from the usage of the word "odds" (which is not used in any serious discussion). -- Meni Rosenfeld (talk) 15:06, 28 November 2007 (UTC)
Yeah, this was a simple question for someone not doing university-level stuff. The answer was 1.

Hyperreal

In the nonstandard analysis article, it says that "From every nonstandard real number one can construct canonically a subset of the interval [0, 1], which is not Lebesgue measurable." How do you construct this subset? Black Carrot (talk) 20:57, 21 November 2007 (UTC)

Question

Wayne Riddock buys 14 cans of Stella and drinks them all outside an off-licence. If he averages 2:13 minutes per can, how long is he drinking outside the off-licence? —Preceding unsigned comment added by 86.137.244.45 (talk) 21:00, 21 November 2007 (UTC)

Long enough to refill them and give them back. Interesting tidbit: Google's calculator can do this. [1] Black Carrot (talk) 21:09, 21 November 2007 (UTC)
A better question: since the measurement 2:13 must be rounded to the nearest second, the actual time might be anywhere within a narrow range. What's that range? Black Carrot (talk) 21:12, 21 November 2007 (UTC)
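The arithmetic itself, for the record (the interval below assumes 2:13 was rounded to the nearest second, as suggested above):

# 14 cans at an average of 2 min 13 s = 133 s per can.
cans = 14
avg_seconds = 2 * 60 + 13          # 133
total = cans * avg_seconds         # 1862 s
print(divmod(total, 60))           # (31, 2) -> 31 min 2 s

# If 2:13 is rounded to the nearest second, the true average lies in
# [132.5, 133.5) seconds, so the total lies in [1855, 1869) seconds,
# i.e. between 30 min 55 s (inclusive) and 31 min 9 s (exclusive).
print(cans * 132.5, cans * 133.5)  # 1855.0 1869.0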

infinite function

I have a function defined as f(x)=\sqrt{x+\sqrt{x+\sqrt{x+\sqrt{x\cdot\cdot\cdot}}}}

Is there an equivalent function that does not do this infinitely?

Or, in other words: what is a function g(x) such that g(x)=f(x)=\sqrt{x+\sqrt{x+\sqrt{x+\sqrt{x\cdot\cdot\cdot}}}}, but g(x) is different from f(x)? —Preceding unsigned comment added by Yanwen (talkcontribs) 22:40, 21 November 2007 (UTC)

You've worded that very strangely, but I'm going to take a guess that what you're asking is "does f have an expression that doesn't require evaluation of an infinite number of square roots?" To which my answer is ... maybe, but I doubt it. Confusing Manifestation (Say hi!) 22:43, 21 November 2007 (UTC)
[edit conflict]You are not looking for a function g which is different from f. Rather, you are looking to represent f in a different way, as a closed formula which does not allude to limits, performing an operation infinitely many times, and so on.
And such a formula exists. Note that f(x)=\sqrt{x+f(x)}, so f(x)^2=x+f(x), so f(x)=\frac{1 + \sqrt{1+4x}}{2}. -- Meni Rosenfeld (talk) 22:50, 21 November 2007 (UTC)
I want to point out some subtleties in the proof that may not be obvious to the reader. When you squared both sides, you introduced new solutions, so you need to also apply the constraint that f(x) \ge 0 (since it originally equaled a square root). There are two roots to the quadratic: f(x)=\frac{1 \pm \sqrt{1+4x}}{2}. When x > 0, one of the solutions is positive and the other is negative, so we choose only the positive one, as above. When -1/4 < x \le 0, there are two non-negative solutions. That is, there are two values y that satisfy y=\sqrt{x+y}. For example, take x = 0; y = 0 works, since 0=\sqrt{0+0}; but y = 1 also works, since 1=\sqrt{0+1}. You probably need a more rigorous definition of your infinite operation to tell which one is the one you want. For x = -1/4, there is one solution. And for x < -1/4, you get complex solutions, which is probably not what you want, because the principal square root is not defined for complex numbers. --Spoon! (talk) 05:40, 22 November 2007 (UTC)
The obvious thing to do is to define a sequence y_0, y_1, y_2, ... depending on x, by the recurrence relation:
\begin{array}{lcl}
 y_0     & = & 0 \\
 y_{n+1} & = & \sqrt{x+y_n}
\end{array}
so that
y_0 = 0,\quad y_1 = \sqrt{x},\quad y_2 = \sqrt{x+\sqrt{x}},\quad y_3 = \sqrt{x+\sqrt{x+\sqrt{x}}},\quad ...
and define f(x) as the limit (if it exists) of y_i for i tending to infinity. For x = 0 this gives f(x) = 0; for other x (including complex values) this appears to converge to the solution given by Meni.  --Lambiam 08:46, 22 November 2007 (UTC)
There is no problem in defining a principal branch for the square root function over the complex numbers. However, it is not continuous, so there might be some values for which Lambiam's process does not converge, or converges to something unexpected (the standard proof that if y_{n+1} = g(y_n) then \lim y_n=g(\lim y_n) relies on continuity). In most cases, this should converge to either \frac{1+\sqrt{1+4x}}{2} or \frac{1-\sqrt{1+4x}}{2}, and it can depend on the initial condition; note that for x = 0, it converges to 0 if y_0 = 0 but to 1 if y_0 > 0. -- Meni Rosenfeld (talk) 09:49, 22 November 2007 (UTC)
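A quick numerical check of all this, in Python (floating point only, nothing rigorous):

import math

def closed_form(x):
    # The positive branch derived above.
    return (1 + math.sqrt(1 + 4 * x)) / 2

def iterate(x, y0=0.0, steps=100):
    # Lambiam's recurrence y_{n+1} = sqrt(x + y_n).
    y = y0
    for _ in range(steps):
        y = math.sqrt(x + y)
    return y

print(iterate(2.0), closed_form(2.0))  # both 2.0
print(iterate(6.0), closed_form(6.0))  # both 3.0
# The dependence on the initial condition at x = 0 noted above:
print(iterate(0.0, y0=0.0))            # 0.0
print(iterate(0.0, y0=0.5))            # approaches 1.0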
Yea, I didn't know how I should've worded that. Thanks.--Yanwen (talk) 22:52, 21 November 2007 (UTC)
See our article on nested radicals. Gandalf61 (talk) 22:08, 22 November 2007 (UTC)