Talk:Big O notation

From Wikipedia, the free encyclopedia

WikiProject Mathematics
This article is within the scope of WikiProject Mathematics, which collaborates on articles related to mathematics.
Mathematics rating: B-Class, Mid-priority; Field: Analysis
One of the 500 most frequently viewed mathematics articles.
This article is within the scope of WikiProject Computer science, which aims to create a comprehensive computer science reference for Wikipedia. Visit the project page for more information and to join in on related discussions.
Rated B-Class and Top-importance on the assessment scale.


Missing data

I notice no one has added O(log log n) in the section "Orders of common functions".

It's cool, I added it, but I don't know its name; does anyone?

...Are you serious?

"Big O notation"? It sounds like the name was created with the intention of teaching it to eight-year-olds... I can't believe that this is the accepted terminology. Unless I'm mistaken, "big O" stands for order (or some foreign equivalent), and in my experience in maths it's always been referred to as such. When I heard order notation formally introduced as "Big O notation" for the first time in computer science, I burst out laughing... You people are all mad :p. --203.206.183.160 14:38, 14 September 2006 (UTC)

You say that like madness is a bad thing :-). However, mad or not, we professionals call it "big oh" more than anything else, especially when we are talking to each other. McKay 06:16, 15 September 2006 (UTC)
Meet "Wikiality" my friend. Big-O is the most commonly used term for it, no matter how ridiculous it sounds. It's taught in college's around the country and most professionals use that term for it. Thus, if the populous says it's right, it's right. --20:18, 18 September 2006 (UTC)
I agree "big O" is a common term. However, I think (to write) "big Oh" goes too far. I don't think there is a serious reference for that (once again, for writing "Oh" - of course an O is (may be) pronounced "oh", and those who cannot distinguish a 0 from an O will have that problem elsewhere and should change their fonts or have the phrase read aloud by their navigator, so there are no excuses...). So I'll delete "big oh" from the 1st line, but leave "big o". — MFH:Talk 16:30, 12 February 2007 (UTC)
Oops - just saw the other "oh" thread below, so I'll leave it (against my conviction - it's for "Order", not astonishment...) — MFH:Talk 16:41, 12 February 2007 (UTC)

Correctness of section "Related asymptotic notations"

Are the lim-sup and lim-inf definitions correct? I haven't seen them before. Reference this in the article, perhaps. Connelly 00:15, 9 August 2005 (UTC)

The lim-sup and lim-inf based definitions have vanished (in the table with the definitions). I do not know why (no comment was given with the relevant changes), but the present lim-based definitions of O, Ω, Θ are clearly wrong. Example: consider the function f(n) := 1 for even n and f(n) := 2 for odd n. Clearly, we should have f ∈ O(1), f ∈ Ω(1), f ∈ Θ(1). But lim |f/1| does not exist. Further, the definition of ω does not consider the absolute value of f/g. Then e.g. -n^2 ∉ ω(n) but n^2 ∈ ω(n) (in contrast to the other definitions, where negating f does not matter).

In the older revision link everything is correct. So unless there was a reason for the change I do not see, I think the table should be reverted to the old state. --DniQ 11:06, 11 March 2006 (UTC)
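DniQ's counterexample above can be checked numerically. The sketch below is illustrative only (plain Python, not from the article): f stays between the constant bounds 1 and 2, so f ∈ Θ(1) by the constant-bound definition, while f(n)/1 keeps oscillating and therefore has no limit.

```python
# Counterexample sketch: f(n) = 1 for even n, f(n) = 2 for odd n.
# f is Theta(1) (bounded between the constants 1 and 2 for all n),
# but lim f(n)/1 does not exist because the ratio oscillates forever.

def f(n):
    return 1 if n % 2 == 0 else 2

ratios = [f(n) / 1 for n in range(1, 1001)]

# Constant bounds hold everywhere, so f is Theta(1) ...
assert all(1 <= r <= 2 for r in ratios)

# ... yet the tail still oscillates between 1 and 2: no limit.
tail = ratios[-10:]
assert min(tail) == 1 and max(tail) == 2
```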

(the below moved from "Incorrect use of limits in table?" which is discussing the same thing):

Is the use of limits in the table of "related asymptotic notations" correct? Knuth's definitions in the cited paper don't use them, and instead invoke "there exist C and x_0 such that |f(x)| < C g(x) for all x > x_0". The "formal definition" section of this article (and linked articles) agrees with Knuth, but the limit in the table (it seems to me) doesn't say the same thing. Beyond that, Knuth's definitions are easier to read:
f(n) ∈ Θ(g(n)) ⟺ there exist C, C', n_0 such that for all n > n_0: C g(n) ≤ f(n) ≤ C' g(n)
He wrote these on a typewriter, but it's a good chance to exercise my LaTeX skills.
I propose changing the table to match Knuth's definitions, or else justifying the use of limits with at least a reference. 203.4.250.143 15:16, 5 September 2006 (UTC)
I didn't think about your proposal, but I'll agree that the current defs have problems. Consider f(n) = 1/n for odd n and f(n) = 0 for even n, and g(n) = 1 for odd n and g(n) = 0 for even n. Then f(n) = o(g(n)) by the usual definition but f(n)/g(n) fails to exist half the time so its limsup is undefined. McKay 04:04, 6 September 2006 (UTC)
So, when does it become appropriate to actually correct the article? (same user as quoted Knuth)
Surely the definitions involving limits are a bit dodgy anyway, when most of the functions that computer scientists describe with big-O are defined on the natural numbers, not the reals (so that no limit exists anyway)? Randomity 21:04, 19 July 2007 (UTC)
Actually, infinite limits don't require continuous number spaces. Remember the mathematical definition of lim_{n→∞} f(n) = L: for all ε > 0, there exists some Δ such that for all n > Δ, |f(n) - L| < ε. Obviously, if this is to be a non-trivial statement, it requires that f(n) have a continuous range, but the domain can be any infinite ordered set. Continuity is not required. 71.41.210.146 (talk) 02:36, 16 April 2008 (UTC)

Algorithms and their Big O performance

I'd like to put in some mention of computer algorithms and their Big O performance: selection sort being N^2, merge sort N log N, travelling salesman, and so on, and implications for computing (faster computers don't compensate for big-O differences, etc). Think this should be part of this write-up, or separate but linked?

I think separate would be better, to increase the article count :-). Then you can have links from the Complexity, Computation and Computer Science pages. Maybe you can call it "Algorithm run times" or something like that. --AxelBoldt
Or something like analysis of algorithms or Algorithmic Efficiency since you may sometimes choose based on other factors as well. --loh
I'd recommend putting it under computational complexity, which earlier I made into a redirect to complexity theory. It should be a page of its own, but I didn't want to write it ;-) --BlckKnght
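The N^2 versus N log N gap mentioned at the top of this thread can be illustrated by counting comparisons. This is only a sketch with helper functions invented for the purpose; the merge sort count depends on the input, but the quadratic/linearithmic contrast is robust.

```python
# Illustrative comparison counts (helper functions invented for this sketch):
# selection sort always performs n(n-1)/2 comparisons, while merge sort
# performs at most about n*log2(n), so the gap widens quickly with n.

import random

def selection_sort_comparisons(n):
    # Selection sort compares every remaining pair regardless of input.
    return n * (n - 1) // 2

def merge_sort_comparisons(xs):
    # Returns (number of comparisons, sorted list) for this particular input.
    if len(xs) <= 1:
        return 0, list(xs)
    mid = len(xs) // 2
    cl, left = merge_sort_comparisons(xs[:mid])
    cr, right = merge_sort_comparisons(xs[mid:])
    merged, i, j, c = [], 0, 0, 0
    while i < len(left) and j < len(right):
        c += 1
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return cl + cr + c, merged

random.seed(0)
n = 1024
data = random.sample(range(10 * n), n)
mc, out = merge_sort_comparisons(data)
sc = selection_sort_comparisons(n)

assert out == sorted(data)
assert sc > 40 * mc   # the quadratic count dwarfs the n log n count
```

This is also why a faster computer doesn't rescue a quadratic algorithm: at n = 1024 the counts already differ by a factor of about fifty, and the factor itself keeps growing with n.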

Amortized complexities

How about a note on amortized complexities? For instance, inserts and lookups in hashtables are O(1) amortized, in contrast with array lookup, which is always O(1).
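A minimal sketch of how an amortized O(1) bound works, using a doubling dynamic array in a unit-cost model (the class and its cost accounting are invented for illustration): the occasional O(n) copy is paid for by the many cheap appends, so n appends cost at most about 3n units in total.

```python
# Sketch of amortized O(1) appends: a doubling array in a unit-cost model
# (class and accounting invented for illustration). Occasional O(n) copies
# are paid for by the cheap appends: total cost over n appends is <= 3n.

class DoublingArray:
    def __init__(self):
        self.capacity = 1
        self.size = 0
        self.cost = 0  # unit-cost model: 1 per element written or copied

    def append(self, x):
        if self.size == self.capacity:
            self.cost += self.size   # copy all elements into a bigger array
            self.capacity *= 2
        self.cost += 1               # write the new element
        self.size += 1

arr = DoublingArray()
n = 10_000
for i in range(n):
    arr.append(i)

# Total cost stays linear, so each append is O(1) amortized.
assert arr.cost <= 3 * n
```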

Special name for O(n log n)

doesn't O(n log n) have some special name? --Taw

"linearithmic" was the term coined in Sedgewick, and adopted in some places. --Robert Merkel
I've also heard it called "log-linear" --Stormix

I've heard (and taught) "quasilinear" (because f = O(x^a) for all a > 1, as close to 1 as desired), and think this is quite standard and also reasonable. (I plan to add this term on the main page if there is no protest.) MFH: Talk 13:02, 24 May 2005 (UTC)

Quasi-linear is true, but quasi-linear includes, for example, O(n log log n) as well. Temur 20:02, 15 February 2007 (UTC)

I added loglinear, as that's how I've seen it in several books, and at least one other user has seen that (Stormix). Chris Connett 21:02, 21 December 2005 (UTC)

Well, you can also write it in soft-O, as in Õ(n). Is that "special"? --Rovenhot 00:46, 23 June 2006 (UTC)

But Õ(n) doesn't specifically mean O(n log n). Logarithmic factors are ignored, but it's not implied that there was one in the first place. --203.206.183.160 14:16, 14 September 2006 (UTC)
Whew, talk about beating a dead horse...I just found this:
f(n) = Õ(g(n)) is shorthand for f(n) = O(g(n) log^k g(n)) for some k.
Thus, apparently, Õ(n) === O(n log n). The reason is that big-O is an upper bound, so inherently, it does not imply that the function has a logarithmic factor. --Rovenhot 20:57, 31 January 2007 (UTC)
But n log² n is Õ(n) even though it isn't O(n log n). Your formula should instead be O(g(n) log^k g(n)) for some fixed k. CRGreathouse (t | c) 17:26, 11 March 2007 (UTC)

Etymology

The letter "O" comes from the word "order".

I remember it as coming from the Latin word ordo. It makes a bit more sense as Landau and Bachmann, who brought this notation to us, were both German. Obviously, it means the same, but I can see the difference. :) --Chexum

The German word for order/ordo would be Ordnung, so that's another possibility... but as Chexum said, the point is moot and these words are probably cognates anyway -- Magnus N.

Could the ordo disambiguation page link to this page? --I hate to register 11:58, 10 March 2007 (UTC)

Page moves, "Big O" vs other names

Please do not move pages

--Eloquence 15:12 29 May 2003 (UTC)

  1. I believe big oh notation is more common in formal writing. Please refer to any CS textbook. You should find big oh, not big O.
  2. Because some people may be against the new name, like you, I have to take some time to wait and see.

-- Taku 15:16 29 May 2003 (UTC)

Google shows that "Big O" is twice as common, if you claim that "Big Oh" is more common in textbooks, collect a sample of at least ten random textbooks and demonstrate that more than 5 of them use "Big Oh". --Eloquence 15:24 29 May 2003 (UTC)

Sure. -- Taku 15:26 29 May 2003 (UTC)

I also vote against the move to "Big oh" until / unless cites are presented to show that it is now the common use: maybe my CS training is old-fashioned, but I recall the use of "big O" throughout. However, show me the evidence, and I'll be willing to change my mind. The Anome 15:27 29 May 2003 (UTC)

Of course, people use big O because it's quicker to write than big oh. My claim is that big oh notation is more common as formal notation. Anyway, give me some time. I think I can prove that. -- Taku 15:37 29 May 2003 (UTC)

Here is the result of my research. I couldn't find ten books containing either big O notation or big oh notation, but the common usage seems apparent.

  1. Paul Walton Purdom, Jr. and Cynthia A. Brown, "The Analysis of Algorithms", uses Big O notation as the title of a chapter.
  2. Horowitz, "Fundamentals of Computer Algorithms" - in a section Asymptotic Notation: "... One of these is the O-notation"
  3. Herbert S. Wilf, "Algorithms and Complexity": "... are the following five: 'o' (read 'is little oh of'), 'O' (read 'is big oh of'), ..."
  4. Donald E. Knuth, "The Art of Computer Programming": "The O-notation. ... This is the 'big oh' notation, ..."
  5. B.M.E. Moret and H.D. Shapiro, "Algorithms from P to NP": "2. Mathematical techniques: 2.1. Big Oh, Big Omega, Big Theta Notations"
  6. Robert Sedgewick, "Algorithms", 2nd ed.: "... The mathematical artifact for making this notion precise is called the O-notation, or 'big oh notation', ..."
  7. Steven C. Althoen and Robert J. Bumcrot, "Introduction to Discrete Mathematics": "... f(n) is said to be of order of magnitude g(n), written f(n) = O(g(n)) and read 'f(n) is big oh g(n)'"
  8. Richard Johnsonbaugh, Discrete Mathematics, 5th ed., Macmillan, New Jersey - "An expression of the form f(n) = O(g(n)) is sometimes referred to as a big oh notation for f."

Except for two books, all the books I grabbed at random above use big oh notation. -- Taku 19:11 29 May 2003 (UTC)

I think some of them may be saying how to pronounce big-O. Still, Knuth is usually pretty canonical. Can we call the article "O-notation", and direct both big-O and big-oh here? The Anome 19:23 29 May 2003 (UTC)

Isn't that the same case with omega and theta? My impression is that the O-notation or the big-O notation is in the same line with big-Θ and big-Ω, because O is typically in italic (as in big-O notation) while you cannot italicize characters on the Internet.
Besides, I actually want to suggest naming this article asymptotic notations, because we certainly want to cover little oh, big-theta and big-omega as well as big-oh.

-- Taku 19:33 29 May 2003 (UTC)

Actually no,

  1. Big O
  2. O
  3. O pronounced as "big oh"
  4. O pronounced as "big oh"
  5. Big Oh
  6. O-notation pronounced as "big oh notation"
  7. Big Oh
  8. Big Oh

Only three out of 8 refer exclusively to "Big Oh". The others clarify that O-notation is pronounced "Big Oh"; this is to avoid confusion with a zero. We already avoid this by having the text "with a capital letter O, not a zero". I see no reason to change the title from "Big O notation", since this is what Google likes the most, it is perfectly correct, and it does not require us to fix any double redirects (the way I know Taku's moves, he probably won't do it himself). However, a redirect here is of course acceptable.

The page should only be moved to asymptotic notation (note singular) if it actually covers other types of this notation, not in the expectation that it will at some point. --Eloquence 19:37 29 May 2003 (UTC)

I don't think so. From the excerpts Taku posted, it seemed to me that the 2nd and 3rd books were saying the *symbols* o and O stand for little-oh and big-Oh, hence those books used big-Oh as the principal name, not as a note on how to pronounce them. Really, I have not seen any CS book that includes pronunciation directives. The 6th book furthermore indicates both names are used, but seems to prefer the O-notation. Score: 6 vs 8. In my opinion, if anyone cares, it should be big-Oh. The whole pronunciation argument does not make sense at all, since the English pronunciations of O and Oh are identical. Furthermore, Wikipedia is the first source in which I have seen big-Oh spelled as big-O. 131.211.23.78 12:48, 14 September 2007 (UTC)
"the way I know Taku's moves, he probably won't do it himself". What do you mean by this? If you were suggesting I am lazy, it is not the case. I leave old redirects deliberatly so that it's easy to revert my new move. I think it is preferable that move the page and wait for a while to see what other think, while some disagree with this though.

-- Taku 19:46 29 May 2003 (UTC)

No, in case of a much-linked page, it's preferable to

  1. Propose the move on the talk page with arguments. Announce that you will move the page within a few days if there are no objections.
  2. If a consensus is reached, move the page and at the very least, fix double redirects (in this case, without editing the redir at Big O, 45 links would suddenly become broken).

It is not desirable to leave Wikipedia in an inconsistent state (in this case, 45 links that suddenly show the user a confusing redirect page, because of the double redir caused by your move) because your move would then be "easier to revert". It should not have to be reverted, because it was properly discussed first.

You have the annoying habit of simply moving pages around without prior discussion, in the expectation that people will complain if they disagree. That's correct, they will complain, but they will also get pissed. If you want to avoid that, discuss first, move later. --Eloquence 19:52 29 May 2003 (UTC)

Show me actual examples that annoyed you. If I remember correctly, most of the time I discuss first and correct redirects if needed. I usually post a proposal on the talk page first. What case are you talking about? Besides, you think this time I should have discussed first, but actually you can regard moving as a kind of suggestion. You can think of it as me suggesting a new name. If you disagree, you can just revert it. This is the same process used to achieve NPOV in articles. It is part of the discussion. You don't have to be pissed off at all. -- Taku 02:27 30 May 2003 (UTC)

Sorry, I think the comment above is not good. Hostility is the last thing we want. I can see that I often piss you off, for example, the last time with the data dump. We should be able to have fun, not fight. So my apologies for moving this article suddenly, and I have decided that I won't move any articles at all, because I don't want to risk annoying people. You may think that is extreme, but I think it is safe for me because I am often careless. You can see the statement about this on my user page. -- Taku 03:10 30 May 2003 (UTC)

For what it's worth, the Dasgupta, Papadimitriou, and Vazirani Algorithms uses "Big O" rather than "Big Oh". CRGreathouse (t | c) 23:06, 21 September 2006 (UTC)

Anyway, the above is completely off-topic. Eloquence, you claim that Google's number of hits for "Big-O notation" is twice that of "Big-oh notation". That is true (by the way, "big o-notation" gets 6,450 while "big oh-notation" gets 2,990). But my impression is that although big O outweighs big-oh, pages with good, formal writing seem to use big-oh notation. First, it is consistent with the other notations, big-omega and big-theta. Omega and theta are pronunciations of each letter, but so is big-O. Besides, see the list above: only one textbook uses Big-O notation. I think it is logical to think that a sentence like

O, Ω and Θ (big-Oh, big-Omega, big-Theta) are used in CS. And this is called big-oh notation. I don't mean to be sarcastic, but I think I am trying to discuss, though maybe I am wrong. -- Taku 15:17 31 May 2003 (UTC)

Big-Oh notation is used in "Big Java, 2nd Edition" by Cay Horstmann on page 712. Superslacker87 11:49, 15 July 2006 (UTC)

[edit] "is an element of" vs "equals"

(Forgive the lack of mathematical notation...) Technically speaking, isn't it "f(x) is an element of O(g(x))"? In the article, "equals" is used. I think big O is the set of functions with the appropriate property. P3d0 13:06, 1 Aug 2003 (UTC)

Technically that is correct. The thing is that the "=" notation is long established; I would say most authors I have seen use "=" when writing about asymptotic bounds. I know that when I initially studied this, I would change the "=" to "∈" in my head, since by context the latter is correct. However, I, apparently like others, became accustomed to the "=" notation that is (I think) more widely used.

In light of this, I notice that the article mixes the two freely. I think we should pick one or the other, and I nominate "=". If no one objects I'll change it, and I'll add an explanation similar to the one above to the appropriate article (since this applies to all asymptotic notation, not just big-O). Chris Connett 20:59, 21 December 2005 (UTC)

On the other hand, mixing the two could accustom the reader to recognize both forms, since they are both used in practice, and there are cases (certain math classes, for example) in which one notation or the other is preferred or even enforced. Also, I must personally argue that the \in notation is very logical and unambiguous. --Rovenhot 21:08, 31 January 2007 (UTC)

My interpretation is that O(g(x)) denotes an unspecified (or anonymous) function that is asymptotically bounded by a multiple of g, rather than the class of all such functions. This is similar to how we use the constant of integration (+C) in calculus. We say that the most general antiderivative of cos(x) is sin(x)+C, but almost nobody says that C denotes the set of all constant functions -- rather, it is an "arbitrary constant". So what is wrong with saying that O(1) is an "arbitrary bounded function" and writing sin(x) = O(1)? David Radcliffe (talk) 22:30, 17 March 2008 (UTC)

Possible Correction

I noticed the statement that if

f(n,m) = n^2 + m^2 + O(n+m)

that this is like saying there exist N, C s.t. for all n, m >= N

f(n,m) <= n^2 + m^2 + C(n+m)

I would have thought that the O(n+m) is to be bounded in absolute value, i.e.

|f(n,m) - n^2 - m^2| <= C|n+m|

which leads to n^2 + m^2 - C|n+m| <= f(n,m) <= n^2 + m^2 + C|n+m|

Is this correct?

Steffan.

Little o

What does

e^x = 1 + x + x^2 + o(x^3) as x → 0

where o is little o, mean?

Essentially it means that the "missing terms" go to zero "much faster" than x^3 does, not just "at the same rate". Thus, for example, it could be that e^x = 1 + x + x^2 + x^4 (not true, of course, but it would satisfy your "little-o" statement above). This works because as x goes to 0, the fraction x^4/x^3 doesn't just remain finite (big-O), it actually goes to 0 (little-o). - dcljr 05:14, 9 Jul 2004 (UTC)
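A small numeric sketch of dcljr's point (illustrative only): the ratio x^4/x^3 equals x itself, so it tends to 0 as x → 0, which is exactly what little-o demands.

```python
# Numeric sketch: as x -> 0, the ratio x^4 / x^3 = x itself goes to 0,
# which is exactly what the little-o statement demands of the missing terms.

xs = [10.0 ** -k for k in range(1, 8)]
ratios = [x**4 / x**3 for x in xs]

# The ratio shrinks toward 0 (little-o), not merely staying bounded (big-O).
assert all(a > b for a, b in zip(ratios, ratios[1:]))
assert ratios[-1] < 1e-6
```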

O vs. Θ

The question is, should Wikipedia itself use Θ instead of O when both are correct, and the former is intended? (e.g. in discussing topics like the FFT and sorting.)

I have a computer-science background, and I know the difference between these two...however, I also know that Θ(...) is essentially unknown outside of computer-science circles, while O(...) is widely used and, informally, more-or-less understood to mean Θ. At a certain point, when usage becomes widespread enough, it ceases to be "incorrect" (it's only notation, after all).

One argument is that, since Θ is so little-known, its appearance will only confuse most people... thus, one should use O as long as it is correct (even if it is weaker than necessary) except in cases where one wishes to explicitly distinguish between the two. On the other hand, one could use Θ wherever possible, making the symbol a link to this page... requiring an extra page of reading for people not familiar with it, but hoping that most people will simply guess that the symbol looks like O so it means what they expect.

Steven G. Johnson 07:00, 28 Nov 2003 (UTC)

I think that using Θ linked here should address most concerns. It does have the advantage of looking like O -- before I learned the precise difference I glossed it as 'something like O', and I imagine I wasn't alone. When the distinction matters it's best to be precise. CRGreathouse (t | c) 04:12, 20 September 2006 (UTC)

Name for n^n function

I just added O(n^n) to the Common orders of functions table, mainly because I know it's often discussed in calculus classes. But I've never been able to find a name for this function (i.e., n^n or x^x). Does anyone have a name and source? - dcljr 04:53, 9 Jul 2004 (UTC)

Your edit seems to have been reverted, I don't know why (a comment could have been left here). I think this could indeed be mentioned, but maybe we should wait for a name. I think for anything that grows faster than exponential, one eventually takes the log and discusses the growth class of that, i.e.:

  • c^n = exp( (log c)·n ) = exp( linear )
  • n! ~ (n/e)^n = exp( n·(log n-1) ) = "exp( quasilinear )" (sorry I can't get used to linearithmic or so)
  • n^n = exp ( n·log n ) = "exp( quasilinear )" , i.e. still the same ("exp-reduced") class.

MFH: Talk 21:52, 21 Jun 2005 (UTC)

It sounds like a special case of a polynomial. I've never encountered a name for this, but you could call it an nth degree polynomial. --Colonel panic 04:21, 24 September 2005 (UTC)

I do not think so. Polynomials have a fixed degree. What's more confusing: n^n grows faster than exponential, whereas polynomials grow slower than exponential. --NeoUrfahraner

If I recall correctly, it can be proven that O(n^n) = O(n!), using Stirling's approximation. So you could just call it "factorial order." Pmdboi 02:52, 25 May 2006 (UTC)

You recall incorrectly, sorry. McKay 01:30, 11 August 2006 (UTC)
n! < n^n < e^n·n! for n at least 2. (This can be made more precise but messier: n^n < n!·exp(12n/(12n+1) - (1/2)log(n))/√(2π).) Thus n! is O(n^n). But I believe that n! < n^(n+ε) for ε < ½ and all n > f(ε), so n^n is not O(n!). CRGreathouse (t | c) 04:34, 20 September 2006 (UTC)
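A numeric sketch of the claim above that n^n is not O(n!) (illustrative only; by Stirling the ratio behaves like e^n/√(2πn), which is unbounded):

```python
# Numeric sketch: n^n / n! grows like e^n / sqrt(2*pi*n) by Stirling,
# so the ratio is unbounded and n^n is not O(n!).

from math import factorial

ratios = [n**n / factorial(n) for n in range(2, 20)]

# Strictly increasing and already past 10^6: no constant bound exists.
assert all(a < b for a, b in zip(ratios, ratios[1:]))
assert ratios[-1] > 1e6
```

(The ratio increases at every step because r(n+1)/r(n) = (1 + 1/n)^n > 1.)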


Other superexponential names

I thought I saw a name for O(c_1^(n^c_2)) -- that is, exp(polynomial). Has anyone seen a name for this, or a name for O(c_1^(c_2^n)) for that matter? CRGreathouse (t | c) 04:34, 20 September 2006 (UTC)

exp(exp) or double exp Temur 20:58, 15 February 2007 (UTC)

Italicisation

I know this is a really minor point, but can we standardize the italicization of O? Is it ''O''(n) or O(n)? It's displayed both ways on this page. — Caesura 19:17, 20 Nov 2004 (UTC)

In the TeX markup for the TeXbook, Knuth uses $O(n)$. That's just a big O in "math mode", but it would be rendered in italics, like a function name in TeX's math mode. The equivalent wiki <math> markup would be <math>O(n)</math>, which is rendered as O(n), or it could be approximated by ''O(n)'', which is rendered as O(n). —AlanBarrett 16:46, 15 Dec 2004 (UTC)
There remains the difference in math mode: O(n) versus \mathcal{O}(n). I'm not sure which is more conventional. I think many scientific articles just use the capital O instead of \mathcal{O}, probably because it's shorter. —145.120.10.71 12:12, 14 April 2007 (UTC)

Big O and little o

What do the formal properties mean? In particular, what is the meaning of O(fg) = O(f)O(g)? --NeoUrfahraner 14:11, 18 Mar 2005 (UTC)

The only possible meaning of this would be that if h=O(fg), f'=O(f), g'=O(g), then h = O(f' g'), but the previous notation is indeed not completely well defined/correct. (The other way round, i.e. O(f) O(g) = O(fg) would be correct, however.) MFH: Talk 13:46, 24 May 2005 (UTC)

Thank you --NeoUrfahraner 06:25, 27 May 2005 (UTC)

Big O and little o again

What I am missing on this page are remarks on the relation between the various notions, in particular between Big-O and little-o. Intuitively, it seems that f=o(g) iff f=O(g) and g!=O(f). The "only if" direction of this claim is clearly true, but I am not so sure about the "if" direction. Is it true? If yes, it should be noted in the section discussing big O and little o. If it is not true, there should be a brief remark on why this is the case.

After a bit more thinking, I see now that the converse of the above does not hold. Take the following functions:

f(n) = 2^n if n is even, and n if n is odd

g(n) = 0.5 n

Then f=O(g), g!=O(f), and f!=o(g). I guess the converse holds only for monotonic functions.

In fact g=O(f) in that example. You need f(n)=n, g(n)=mixture of n or 2^n. McKay 14:48, 29 June 2006 (UTC)
You're correct as far as I can tell. One way of fixing this might be to say that f=o(g) iff f=O(g) and f≠Θ(g). Deco 11:41, 29 June 2006 (UTC)
No, put f(n)=g(n) for even n and f=g(n)/n for odd n. I don't think there is a simple rule like this. McKay 14:48, 29 June 2006 (UTC)
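McKay's counterexample can be checked numerically. The sketch below takes g(n) = n (an arbitrary illustrative choice) and f(n) = g(n) for even n, g(n)/n for odd n: then f = O(g) and f ≠ Θ(g), yet f ≠ o(g), so the proposed rule fails.

```python
# McKay's counterexample to "f = o(g) iff f = O(g) and f != Theta(g)":
# take g(n) = n and f(n) = g(n) for even n, f(n) = g(n)/n for odd n.

def g(n):
    return n

def f(n):
    return g(n) if n % 2 == 0 else g(n) / n

ratios = [f(n) / g(n) for n in range(1, 2001)]

assert max(ratios) <= 1     # f = O(g): the ratio is bounded above
assert min(ratios) < 1e-3   # ratio dips toward 0, so f is not Theta(g)
assert ratios[-1] == 1      # ratio is exactly 1 on every even n: f is not o(g)
```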

Move

This should really be moved to order of growth, asymptotic growth of a function, or something to that effect. Describing orders of growth in an article called "Big O notation" makes about as much sense as describing limits in an article called "limit notation", addition in an article called "summation notation", or for that matter mathematics under "mathematical notation". Of course, the notation must be described, but that's not the primary purpose of the article. Fredrik | talk 11:40, 10 November 2005 (UTC)

There is already an article at Asymptotic notation that duplicates much of this information as well, and a shorter one at asymptotic analysis that treats the basic concept without reference to the O-notation. I agree that there should be some consolidation and rationalization of titles and content. Perhaps this should all be merged into asymptotic analysis, which will discuss the concepts, and mention the notation where appropriate? E.g. there is a concept "asymptotic dominance", and little-omega is a common way this is written. --Delirium 02:37, 18 November 2005 (UTC)

Equation error?

In the "Multiple Variables" section, shouldn't the second equation,

   for all n, m > N: f(n, m) <= n^2 + m^3 + C(n+m).

end in just "+ C.", since C is just some constant and doesn't depend on n and m?

The ambiguous notation suggests a function called C, but C times (n+m) is intended. I'll fix it. Deco 02:28, 16 December 2005 (UTC)

Õ vs. Θ

You find Õ instead of Θ (theta) in many places in the article. Is this correct? --Abdull 12:50, 3 March 2006 (UTC)

No, that's wrong. Õ is soft-O, which means that logarithmic factors are ignored. Big-Theta, Θ, means that the function is guaranteed to have that particular order--no more, no less. --Rovenhot 00:52, 23 June 2006 (UTC)

Notation

Two comments on notation.

  1. As someone stated above, "=" is much more common than "is" or "∈", so it should be used throughout, with the other notations given only as alternatives.
  2. The explanation given for "f(x) = h(x) + O(g(x))" is inadequate. Strictly speaking it is correct, but it does not allow for slightly more complex statements like "f(x) + O(j(x)) = h(x) + O(g(x))" or even "f(x) = h(x) + O(g(x)) + O(j(x))". Such patterns are commonplace in asymptotic analysis. Also consider "n^O(1) = O(e^n)". I'll try to formulate a general rule.

McKay 11:36, 15 June 2006 (UTC)

Agreed. Frankly I'm not entirely certain how those are intended to be formally interpreted, but I think the rearrangement trick should work. Deco 12:43, 15 June 2006 (UTC)

I'm a mathematician, and I can honestly say I have never seen the "element of" notation used in connection with the big O notation. Is this something that's common in computer science? If not, it should be deleted from the article.--209.43.9.143 19:19, 19 June 2006 (UTC)

No, it isn't common in computer science either. There are some popular textbooks that use it but it has not caught on amongst the professionals. Despite the oddities of how "=" is used with O( ), it is what the vast majority of mathematicians and computer scientists use, so we should use it too. McKay 03:18, 20 June 2006 (UTC)
Everyone in practice seems to use the "=", but I think it's important to at least mention the element-of syntax. While few texts stick to it, the majority of those that introduce Big O (in my memory, at least) usually give a sentence or two about it (usually with a word or two about how the world might be a better place if it were used instead).

Merge

I see a merge suggestion on the top of the page but I don't think there is a case for it? Should it be removed then? -- Evanx(tag?) 05:54, 22 June 2006 (UTC)

Seems to me that the two pages are on almost exactly the same topic, so the case for merging seems pretty good. McKay 00:11, 29 June 2006 (UTC)
I disagree. I think a page with big O and little o notation on it would be too long and it's worth having separate pages for them. Meekohi 05:38, 6 July 2006 (UTC)
However Landau notation includes both big and little O so that's not a good argument. McKay 08:53, 6 July 2006 (UTC)
What's the point of having big O and little o in two separate articles? Seems artificial to me. 141.20.53.108 18:05, 7 September 2006 (UTC)

Definitely merge. —Steven G. Johnson 13:37, 6 July 2006 (UTC)

Definitely merge under the title "Landau notation" and make this page a redirect. 141.20.53.108 18:00, 7 September 2006 (UTC)

I think that this article should stay where it is. Merging content from Landau notation can be done as needed. CRGreathouse (t | c) 13:01, 20 September 2006 (UTC)

Sublinear

Should not the small o be used in the example where sublinear is written? helohe (talk) 14:23, 28 July 2006 (UTC)

It isn't clear. Sometimes "sublinear" is used to mean O(n), with the "sub" deriving from the fact this is an upper bound. I wouldn't accept the terminology in a paper without a definition because of this ambiguity. McKay 04:43, 29 July 2006 (UTC)

Pronunciation

So how do you pronounce "O(n)"? [1] says it's pronounced big oh of n. Maybe we should add that? -- Felix Wiemann 18:35, 9 August 2006 (UTC)

Every computer scientist I've ever heard would say O(n) as "linear complexity with respect to n". Every mathematician I've ever heard would say "(of the) order of n". --203.206.183.160 14:07, 14 September 2006 (UTC)
I've heard "O of n" and "the order of n" and occasionally the theoretically-ambiguous "linear". CRGreathouse (t |

c) 04:37, 20 September 2006 (UTC)

I've heard most commonly "Big Oh of n", "Oh of n", and "Order of n". Of course, I've never heard anybody say that something "equals" O(n), which seems to be the preferred notation on here.

Constants (Addition, Multiplication, Exponentiation)

This claim is wrong:

O(k + g(n)) = O(g(n)) unless g(n) = o(1), in which case it is O(1).

A counterexample is to take g(n)=1/n if n is odd, and g(n)=n if n is even. A correct sufficient condition would be g(n)=Ω(1) but Ω has not been introduced in the article yet. McKay 00:58, 11 August 2006 (UTC)


Already covered:

O(k) = O(1) where k is a constant
O(k f(x)) = O(f(x))

I have yet to see this answered, and answering it would solve many questions I have when reading this article:

O(k^(cn)) =? O(k^n)

My gut reaction is, "No." But is there some formal answer as to why this may or may not be the case?

Your gut is correct: for k > 1 and c > 1, O(k^(cn)) ≠ O(k^n). CRGreathouse (t | c) 23:02, 29 March 2008 (UTC)
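A quick numeric sketch (my addition, not part of the original exchange) of why no constant can bridge the gap: with k = c = 2 the ratio k^(cn)/k^n = 2^n doubles at every step, so it eventually exceeds any fixed M from the big-O definition.

```python
# Sketch: for k = 2, c = 2 the ratio k**(c*n) / k**n equals 2**n,
# which doubles at every step and is therefore unbounded; no
# constant M can witness 2**(2n) = O(2**n).
def ratio(n, k=2, c=2):
    return k**(c * n) // k**n  # exact integer arithmetic

assert ratio(10) == 2**10
assert all(ratio(n + 1) == 2 * ratio(n) for n in range(1, 30))
```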

[edit] Table

Are there any objections, corrections, or suggestions to this modification of the table?

Notation Name Example
O(1) constant Determining if a number is even or odd
O(log* n) iterated logarithmic The find algorithm of Hopcroft and Ullman on a disjoint set
O(log n) logarithmic Finding an item in a sorted list
O((log n)^c) polylogarithmic Deciding if a number is prime with the AKS primality test
O(n) linear Finding an item in an unsorted list
O(n log n) linearithmic, loglinear, or quasilinear Sorting a list with heapsort
O(n^2) quadratic Sorting a list with insertion sort
O(n^c), c > 1 polynomial, sometimes called algebraic Finding the shortest path on a weighted digraph with the Floyd-Warshall algorithm
O(c^n) exponential, sometimes called geometric Finding the (exact) solution to the traveling salesperson problem
O(n!) factorial, sometimes called combinatorial Determining if two logical statements are equivalent [2]
O(2^(c^n)) double exponential Finding a complete set of AC-unifiers [3]

Changes:

  • I removed supralinear from n log n. I've never seen it with this particular meaning outside of Wikipedia, but I have seen it used to mean "superlinear". In any case it's uncommon.
  • I removed an incorrect entry with n^n as "exponential".
  • I added double exponential to the table.
  • I added examples of all complexities.
  • I removed the sublinear row. This should be mentioned somewhere, but it doesn't really fit on the table (all others could be expressed with Θ, for example).

Things that might need work:

  • I'm not happy with my example for factorial complexity; is there a more standard example? Ideally it would be ω(c^n) for all c.
  • AKS is usually thought of as polynomial (in the size of the number rather than in the number itself); should I use a different example then for polylogarithmic complexity?
  • Should I specify both constants in the double exponential, or should I leave one fixed for simplicity's sake?
  • Is there a term for the 'epsilon complexities'? I'm thinking in particular of O(n^(1+ε)) = O(n^(1+o(1))) (the first is for all ε > 0), but others also come up. This is a wider class than just soft-O of n; it would include, for example, n (log n)^(log log n).
  • Is there a term for sublinear complexities of the form O(n^c), 0 < c < 1, like baby-step giant-step?
  • I'm not terribly happy with the wording of the examples in general; if you have a better idea, please suggest it.

General notes:

  • The purpose of the examples is to give an idea of the kinds of problems these classes represent. I'm less worried about specifying them precisely than I am about keeping them simple; the entries on the individual classes can iron out any wrinkles. As such, I don't intend to mention, for example, that oddness and evenness can be tested for in constant time only when the number is represented in an even base not dependent on the number.

CRGreathouse (t | c) 07:31, 20 September 2006 (UTC)

I noticed that, too, and I think it would occur to a fair number of readers familiar with computing but not the notation in particular. Perhaps zero or non-zero? This still depends on representation, but it's O(1) even in some fairly silly notations (e.g., unary). -Dmh 19:44, 4 January 2007 (UTC)
What's your definition of a "logical statement"? If it's just a Boolean expression, then deciding whether two are equivalent takes exponential, not factorial time (it would be equivalent to SAT). Also, it might be useful to mention that these are the asymptotic bounds of the best known algorithms, not the complexity of the problems themselves (what if P=NP and there's a linear-time TSP solution? :P)

F.R., April 3 2008:

  • In my opinion the DFT example is confusing and it's OK to drop it. Isn't the FFT actually an efficient way to compute a DFT? —Preceding unsigned comment added by 84.221.201.63 (talk) 04:01, 3 April 2008 (UTC)

[edit] Rename to asymptotic notation?

I merged asymptotic notation and Landau notation into this page, following the longstanding merge tags, as those pages were (almost) complete subsets of the information on this page, and this page had received by far the most attention of the three (and thus should have its history preserved).

However, now that they are merged, there is an argument for moving this page and its history to asymptotic notation, as that title seems broader.

On the other hand, since this page had received the most attention of the different titles, it seems that editors have already voted with their feet. Also, in common usage, the term "Big O notation" seems to be a synecdoche for asymptotic notation in general.

What do people think?

—Steven G. Johnson 15:40, 26 September 2006 (UTC)

I like it as it is now -- Big O notation as the main page, with other pages like Hardy notation and Vinogradov notation explaining variants on it. Because Big O is biggest by far, it gets all the basics of asymptotic notation, but doesn't have to define other less common terms (which can be done at their own pages). CRGreathouse (t | c) 06:36, 27 September 2006 (UTC)

[edit] Notation mess - let's clean it up

The article at the moment uses a real mess of different notations. This won't do. We should choose a single notation and use it throughout except for a section that describes alternative notations. The only suitable default notation is the one using "=" because that is what is used 99.9% of the time in professional mathematics and CS. Anyone disagree? McKay 08:17, 16 November 2006 (UTC)

Much as I dislike that abuse of notation, I must agree. It is the only standard (although I wouldn't say 99.9%, probably more like 99.5%). CRGreathouse (t | c) 08:57, 16 November 2006 (UTC)

[edit] Removed polylogarithmic

Reinstated. My VERY bad, missed a bracket.

[edit] Changing Units

In the properties section there is a paragraph on the effects of changing units. This implies that doing so can affect the order of the algorithm, which sounds counter-intuitive.

It also says (in part): "If, however, an algorithm runs in the order of 2^n, replacing n with cn gives 2^(cn) = (2^c)^n. This is not equivalent to 2^n (unless, of course, c=1)." I thought the actual base of the exponent was irrelevant, and we simply used base 2 as a convenience for computer nerds who are used to thinking in binary. (2^c)^n is still a constant base, and so it's still O(2^n).

Any comments? Am I right? Gazimon 11:07, 5 December 2006 (UTC)

No, the article is correct. E.g. 2^n is o(3^n), i.e. asymptotically smaller. Just look at the ratio (2^n) / (3^n) = (2/3)^n, which goes to zero for large n.
Possibly you are confusing this with logarithms. In a logarithm, changing from log_a(n) to log_b(n) is just a constant factor log(a)/log(b) that is irrelevant to big-O notation.
—Steven G. Johnson 18:01, 5 December 2006 (UTC)
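A small numeric sketch (mine, not Steven's) contrasting the two cases: the exponential-base ratio (2/3)^n collapses to zero, while the logarithm-base ratio is the same constant for every n.

```python
import math

# Exponential bases matter: 2**n / 3**n = (2/3)**n -> 0 as n grows.
exp_ratio = (2 / 3) ** 1000
assert exp_ratio < 1e-100

# Logarithm bases don't: log_2(n) / log_10(n) is the same constant
# log(10)/log(2) for every n, so the change is absorbed by big O.
log_ratios = [math.log(n, 2) / math.log(n, 10) for n in (10, 100, 10**6)]
assert all(abs(r - log_ratios[0]) < 1e-9 for r in log_ratios)
```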

[edit] Comparative Usefulness

With the way O(n) is defined, it would almost seem to confuse comparisons of the running time of algorithms. Suppose we have two functions, T1(n) = n and T2(n) = n^2. I could, quite rightly according to the definition and the examples presented, claim that T1(n) = O(e^n) and that T2(n) = O(n^2). Now, when deciding which algorithm is the fastest, you would think that T2 would be the best to implement, when, clearly, this is not actually the case.

Perhaps we could add some comment about usefulness -- namely that in order to compare two functions, say T1 and T2, we find minimal, 'monic' expressions (i.e. without any arbitrary constants -- for the sake of cleanliness: it is easily seen that T1 = O(5n)) for f(n) and g(n) before proclaiming that T1 = O(f(n)) and T2 = O(g(n)). Then, we can come to the conclusion that f(n) > g(n) if and only if T1 > T2.

I hope this makes sense. 137.205.139.149 16:24, 12 January 2007 (UTC)

Use Θ(n) if you're concerned about misstatements of that sort. CRGreathouse (t | c) 04:25, 13 January 2007 (UTC)
Perhaps we could make it clearer in the article that this is the function most people think they are using and that this is what it means? 137.205.139.149 22:48, 13 January 2007 (UTC)
The article says
Informally, the O notation is commonly employed to describe an asymptotic tight bound, but tight bounds are more formally and precisely denoted by the Θ (capital theta) symbol as described below.
in the opening section now; do you think this needs to be expanded? CRGreathouse (t | c) 23:18, 13 January 2007 (UTC)
Yes, I'm often concerned about this use of big-O also. It could be expanded to explain the potential problems and include the above example. --Rovenhot 20:43, 31 January 2007 (UTC)

[edit] Trying to figure this out

I'm not a mathematician, but I like math. I also like Unix, and the big O notation comes up a lot in Unix. My understanding of it is: for a function f(n), O(f(n)) is the maximum number of iterations that may be required to compute f(n).

It seems to match the examples given in the article:

Notation Name Example
O(1) constant Determining if a number is even or odd
O(log n) logarithmic Finding an item in a sorted list with the binary search algorithm
O(n) linear Finding an item in an unsorted list
  • It takes one operation to tell whether a number is odd or even.
  • 14 people in a high school have signed up for yearbook. It takes 14 operations ("reads") at most to tell whether Michelle is one of them. (A sign-up sheet is an unsorted list, right?)
  • It takes at most about log n operations to find Ms. Lucy McGillicuddy in the phone book. That seems about right. For a natural logarithm, that's 14 operations to find her in the phone book of a city with 3 million people, and 6 operations in a small town of 1,000. For a base-10 logarithm, it drops to 6 and 3 respectively. Both seem in the correct range. (Though I assume that here log means natural logarithm, which makes a lot more sense.)

Did I get this right?

If I did, what the £%*µ! does THIS mean?

\limsup_{n \to \infty} \left|\frac{f(n)}{g(n)}\right| < \infty

And this?

\exists \;x_0,\exists \;M>0\mbox{ such that } |f(x)| \le \; M |g(x)|\mbox{ for all }x>x_0.

1. What's g? I assume that g(x) is the same as O(f(x))

2. Same thing for M. What is M? It makes sense in the mathematical example, but not in the practical ones, the ones in the Common orders of functions section.

The second definition, I can sort of make out:

  • There exists a specific value of x called x0.
  • There also exists a positive constant M.
  • Let's define a new function g(x).
  • After the function f(x) has passed x0, that is, for all values of x GREATER than x0, the absolute value of f(x) will always be smaller than the absolute value of g(x), after g(x) is multiplied by the constant M.
  • Big O of f(x) is that g(x).

But I don't understand how this relates to the examples I gave above. (If they are indeed right.)

If that vulgarisation of the mathematical definition is correct, how can I apply it to the sign-up sheet or the phone book examples? Or for that matter to the odd and even problem? How can I apply the two mathematical definitions to those problems?

The closest I can get is with the sign-up sheet example. f(x) is defined as follows. There are n names on a list. Each x is a person looking for someone on the sign-up sheet, reading from top to bottom. In the best-case scenario, the person they're looking for is the first, and they only have to read one name. In the worst case, they are looking for the last name and have to read all of the names on the list. The maximum number of operations required is n. Thus Big O of f(x) is O(n). Would x0 be the first person to read all of the names? If there are 14 names on the list, f(x) will always be less than or equal to 14. But in my example, how do x and n relate to the mathematical definition? In the mathematical definition, both f and g use the same argument.

Eje211 17:31, 12 January 2007 (UTC)

Your understanding is a little off. Given some problem P, which involves an integer n, let f(n) be the number of operations it takes to solve problem P for n. To say that for some f(n) and g(n), f(n) = O(g(n)), it means that there exists some integer x_0, and some constant c (not depending on n) such that when n > x_0, it is true that f(n) <= c * g(n). For example, depending on what you count as an 'operation', checking whether an integer is even or odd may take 1 operation, or it may take 50. However, f(n) = O(1) means that there exists an x_0, and a constant c (maybe 1, maybe 50, maybe 100, maybe 1000000) such that when n >= x_0, it is true that f(n) <= c. The x_0 is like a starting point. For example, the function f(x) = {x^2 if x < 1000, 123 if x >= 1000} is O(1), since eventually it is O(1) (here x_0 would be any number greater than 999). Does this help? (Note that Wikipedia is not a place to teach this stuff, but to discuss the article, so after we are done talking, delete this section) Rob 00:44, 13 January 2007 (UTC)
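Rob's piecewise example can be spot-checked mechanically. This is only an illustrative sketch (the helper name and the finite range are my own, and a finite check is of course not a proof):

```python
# Rob's example: f(x) = x**2 for x < 1000, and 123 afterwards.
# It is O(1), with witnesses x_0 = 999 and any c >= 123.
def f(x):
    return x**2 if x < 1000 else 123

def holds_beyond(f, g, c, x0, upto=10_000):
    """Spot-check |f(x)| <= c*|g(x)| for all integers x0 < x <= upto."""
    return all(abs(f(x)) <= c * abs(g(x)) for x in range(x0 + 1, upto + 1))

assert holds_beyond(f, lambda x: 1, c=123, x0=999)    # definition satisfied
assert not holds_beyond(f, lambda x: 1, c=123, x0=0)  # but only past x_0
```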
Thanks, Rob. you do make it a bit clearer. My point about asking this here, specifically, is that I think that Wikipedia can be about making this understandable to people like me. I think (but I could be wrong) that rather than delete this, the sort of definition you've just given me should actually go into the article. If I'm completely wrong, just tell me and I won't insist, but I think that including a vulgarised explanation would draw people to understand the whole and that's what Wikipedia is about. Also, asking on the Talk page may get people to use what you just wrote to figure this out too. Eje211 20:09, 14 January 2007 (UTC)


I think I got it. If f(x) is seeking in an unsorted list of x elements (any element, a random number between 1 and x in a shuffled series of numbers from one to x) and returns the position of the sought item (which is also the number of items previously checked). Let's say f(x) belongs to O(1) (which is false). Then, as x goes to infinity, \limsup_{x \to \infty} \left|\frac{f(x)}{g(x)}\right| < \infty can't always be right, because f(x) can reach infinity and g(x) is always 1.
And, no matter what M or x_0 are, \exists \;x_0,\exists \;M>0\mbox{ such that } |f(x)| \le \; M |g(x)|\mbox{ for all }x>x_0., f(x) can get to infinity and g(x) cannot. However, if g(x) = x, then, both make sense.
And, if a function changes values of O as it progresses, we only want the last one. In the example given in the article, for f(x) = 6x^4 + 2x^3 + 5, g(x) = x^4 because it's its fastest growing possible speed. (It's also its first. x_0, here, is 0, right?)
And if a function can only return two real values (like in the odd and even example), O(1) works with M being the largest value, and x_0 the LAST occurrence of the largest value.
I've also found this from Dr. Math (http://mathforum.org/library/drmath/view/51904.html):
Big-O notation is just a way of being able to compare expected run times
without getting bogged down in detail.
Suppose you need to run an algorithm on a set of n inputs. Perhaps the actual
number of 'operations' (however you choose to define that term) that would 
need to be carried out is something like:
    2n^2 + 3n - 5
This is really only interesting when n gets large, in which case the only term
that really matters is the n^2 term, which dwarfs the  others. And in fact, the
constant coefficient isn't really that important, either. So we just say that
the algorithm runs in 'order n squared' time, or O(n^2)."
From this, what I get is: g(x) is the fastest growing element, the one that "trumps all others", as Dr. Math puts it. Also, if there are several, we only look at the last one (which starts at x_0) within the given range to seek, or the very last one if our range is infinite. Am I closer now? Eje211 21:06, 14 January 2007 (UTC)
Yes, if f(x) is a polynomial in x, then for x->infinity, f(x)=O( x^d ) where d is the degree of the polynomial. But not all functions f grow like some polynomial... (you can have terms like x^3 * log(x) or exp( 5 x ) etc.). In general, if you have a sum of several terms and one will grow faster than all others, the function is O( this term). If you have a product of terms (like x^2*log(x)) you have to keep all factors, except if one factor goes to a constant (e.g. x^5 exp(-x) = O(x^5) since exp(-x) -> 0). (I don't know what would be the meaning of x_0 in general). — MFH:Talk 20:56, 12 February 2007 (UTC)
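As a concrete sketch of the dominant-term rule above (my numbers, not part of the original reply): for f(x) = 6x^4 + 2x^3 + 5 the ratio f(x)/x^4 settles toward the leading coefficient 6, so for example c = 7 and x_0 = 3 witness f(x) = O(x^4).

```python
# The lower-order terms 2x**3 + 5 are dwarfed by 6x**4, so the
# ratio f(x)/x**4 is squeezed toward 6 and f(x) = O(x**4).
def f(x):
    return 6 * x**4 + 2 * x**3 + 5

assert all(6 < f(x) / x**4 < 7 for x in (10, 100, 1000))
assert all(f(x) <= 7 * x**4 for x in range(3, 10_000))
```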

[edit] "Algebraic integer"?

Is "algebraic integer" really used as a term for O(n^k), 0<k<1? If not, what term should be used? I like the recent addition, assuming it's otherwise correct, but the term doesn't look right at all. Help? CRGreathouse (t | c) 00:21, 19 January 2007 (UTC)

It is a bad choice of name. n^c is indeed a rational integer if c is a rational number, but c need not be rational, nor are all rational integers like that. In fact rational integers can be greater than 1, which spoils the point of this table entry. I changed it to "fractional power". McKay 03:46, 19 January 2007 (UTC)
Thanks, that's much better. 05:25, 19 January 2007 (UTC)

May I ask, McKay: In what context is n^c (n an integer, c a rational number) called a "rational integer"? Certainly not in any part of mathematics that I've ever encountered. Daqu (talk) 17:06, 3 June 2008 (UTC)

[edit] A Question involving \Omega

what would mean that  f(x) \ne \Omega (g(x))  ?

and the fact that for big x then  f(x) \ne \Omega (1)  ?

could we say something about f(x) in both cases? --Karl-H 23:02, 25 January 2007 (UTC)

The negation of the definition of Omega is: for all M > 0, there exists arbitrarily large x such that  |f(x)| \leq M|g(x)|
When g(x) = 1, for all M > 0, there exists arbitrarily large x such that  |f(x)| \leq M . Rob 03:58, 26 January 2007 (UTC)
Hope you don't mind me correcting your typos. McKay 05:54, 29 January 2007 (UTC)
This is an interesting concept. Would it then be true that 2^{-x} \ne \Omega(1)? Also, isn't little-o the inverse of Omega? --Rovenhot 20:37, 31 January 2007 (UTC)
Yep, I don't think you negated the inequality correctly. The negation of ≥ is <, not ≤, so it would be \forall M > 0: \exists x: |f(x)| < M|g(x)|, which is the definition of little-o.
To answer your question, then, Karl, if  f(x) \ne \Omega (g(x)) , we can say that f(x) = o(g(x)).--Rovenhot
No, no, no, that's completely wrong. You are confusing "there exists arbitrarily large x" with "for all sufficiently large x". Consider the function f(n) such that f(n)=n for even n and f(n)=1/n for odd n. This function is neither Ω(1) nor o(1). McKay 07:36, 1 February 2007 (UTC)
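McKay's counterexample is easy to probe numerically; this sketch (mine) shows the function dipping below and climbing above any fixed bound, so it is neither Ω(1) nor o(1):

```python
# f(n) = n for even n, 1/n for odd n: along odd n the values tend
# to 0 (so f is not Omega(1)), while along even n they tend to
# infinity (so f is not o(1)).
def f(n):
    return n if n % 2 == 0 else 1 / n

odd_vals = [f(n) for n in range(1, 1001, 2)]
even_vals = [f(n) for n in range(2, 1001, 2)]
assert min(odd_vals) < 0.01    # drops below any fixed M
assert max(even_vals) > 100    # exceeds any fixed M
```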

[edit] Quick question about the Common Orders of Functions table

The table currently contains the functions in this order:

Notation Name
O(log n) logarithmic
O(n) linear
O(n log n) linearithmic
O(n^2) quadratic
O(n^c), c > 1 polynomial

Is it in fact the case that O(n^c), 1 < c < 2, grows faster than O(n^2)?
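For what it's worth, no: for 1 < c < 2 the ratio n^c / n^2 = n^(c−2) tends to 0, so such an n^c grows more slowly than n^2. A tiny sketch (my addition):

```python
# For c = 1.5 the ratio n**c / n**2 = n**(c - 2) shrinks toward 0,
# so n**1.5 grows more slowly than n**2.
ratios = [n**1.5 / n**2 for n in (10, 10**4, 10**8)]
assert ratios[0] > ratios[1] > ratios[2]   # monotonically shrinking
assert ratios[-1] < 1e-3
```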

[edit] \,f\in O(g(n)) vs f = O(g(n))

There is a legitimate reason why we use f\in O(g(n)) instead of f = O(g(n)): there is normally more than one function that is Big O of another function. Consider:

f1(n) = 2n^2 + 1 and f2(n) = 3n^2 − 12

Both f1 and f2 are Big O of n^2.

If we write f1(n) = O(n^2) and f2(n) = O(n^2), are we then going to say f1(n) = f2(n)? --CBKAtTopsails 16:13, 25 April 2007 (UTC)
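The point above can be phrased as set membership, which a small sketch (helper names are mine, and a finite check is only a spot-check) makes concrete: both functions pass the same membership test, yet they are different functions.

```python
# Treat O(n**2) as a membership test: |f(n)| <= c * n**2 beyond x_0.
# Both f1 and f2 are members, but f1 != f2, so reading "=" as
# genuine equality would be wrong.
def in_O_n_squared(f, c, x0, upto=10_000):
    """Finite spot-check of membership in O(n**2), not a proof."""
    return all(abs(f(n)) <= c * n**2 for n in range(x0 + 1, upto + 1))

f1 = lambda n: 2 * n**2 + 1
f2 = lambda n: 3 * n**2 - 12

assert in_O_n_squared(f1, c=3, x0=1)
assert in_O_n_squared(f2, c=3, x0=1)
assert f1(10) != f2(10)   # membership does not imply equality
```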

[edit] Abuse Of Notation

There appears to be a lot of abuse of notation in this article that needs to be cleaned up to make it easier to read particularly for people who are not familiar with the subject. I've done a few. Who is going to volunteer to finish the rest? --CBKAtTopsails 16:27, 25 April 2007 (UTC)

[edit] Summary

Even though this is about a 'scientific' or 'mathematical' subject, the summary should be much more approachable. I believe it should be changed to something like:

Big O notation roughly estimates the runtime, a relative measure of computer instructions, of a function in terms of n where n is the size of the input.

This is much clearer and less cryptic. No one without significant education is going to understand the current summary, let alone the page. It's like someone is trying to validate their extended education by making this simple topic more complex than it actually is. This should be simplified to meet Wikipedia standards. --65.189.189.23 22:40, 5 March 2007 (UTC)

I agree completely that we should be as clear as possible for beginners. Unfortunately, your suggested text won't work.

First of all, the use in computer science is just a specialization of its use in mathematical analysis. But even assuming you're only talking about its application in complexity theory and the analysis of algorithms, it has several serious problems:

  • Big O can be used to characterize any characteristic of an algorithm, whether it's runtime, amount of storage needed, number of times it calls a given functional argument (or oracle), or even the numeric precision of the result.
  • Though it is true that Big O can be used to characterize the upper bounds of execution time for "functions" in complexity theory, it is also commonly used to characterize specific algorithms. There is a huge difference. There are many different algorithms that embody the "sort" function, for example: there are things you can say about the complexity of sorting in general, but there are also things you can say about the concrete complexity of particular algorithms.
  • The number of instructions executed is often a good proxy for runtime, but even if you're only interested in runtime, things like cache behavior mean that instruction count and actual running time can diverge.
  • The arguments of Big O can refer to any property of the input, not just its size. For example, in polynomial factorization, they may refer to the input degree.

In sum, though I agree completely that we need to make an effort to be simple, we also have to be accurate. I'll be happy to work with you and other editors to improve the article. --Macrakis 00:14, 6 March 2007 (UTC)

What about this:
Big O notation is commonly used to estimate a program's runtime, as a function of n where n is the size of the input.
It brings up the most common use, without limiting Big O to just that.
CRGreathouse (t | c) 02:55, 6 March 2007 (UTC)
Well, the n isn't essential. How about:
Big O notation is often used to characterize an algorithm's running time as a function of the size of the input.
I'm not completely happy with that, but... --Macrakis 19:22, 6 March 2007 (UTC)

How about: Big O notation is often used to estimate an algorithm's performance, usually speed or memory usage, typically as a function of the size of the input. Big O notation is also used in many other scientific and mathematical fields to provide similar estimations.

[edit] Possible mistake in "Formal Definition"

In "Formal Definition" it says "for all Xo". Shouldn't it be "exists Xo"? Italo Tasso 22:11, 27 May 2007 (UTC)

Yes, you're right. Exists x_0 such that for all x > x_0. CRGreathouse (t | c) 05:50, 28 May 2007 (UTC)

[edit] Order of Addition/Multiplication

I believe that, in general, both addition and multiplication are of order log(N). This is surely worthy of mention, as these are rather commonly used operations (and people tend to assume that they take constant time). Pog 16:06, 1 November 2007 (UTC)

Actually, in that sense, multiplication is order log(N) loglog(N). But I don't know if it's worthy of mention here. — Arthur Rubin | (talk) 17:27, 1 November 2007 (UTC)
Actually, the best known multiplication algorithm has bit-complexity slightly better than O(n log n log log n) for n-bit numbers (see Schonhage-Strassen algorithm) or equivalently O(log N log log N log log log N) where N is the larger of the two numbers. In any case I agree that that's not terribly pertinent to this article except perhaps as an example. Dcoetzee 19:42, 1 November 2007 (UTC)
Wasn't there a recent algorithm at something like O(n log n 2^(log* n))? CRGreathouse (t | c) 20:12, 1 November 2007 (UTC)

[edit] Big Theta

Hello,

in the beginning of the article it is said that capital theta notation is "described below". It is not. I could remove this "described below" phrase, but it would be nice if someone actually describes it. 87.228.109.83 16:22, 10 November 2007 (UTC)

I think someone might have forked the content off to a new article. I'm not sure why. Surely this was described in the article earlier. CRGreathouse (t | c) 16:32, 10 November 2007 (UTC)
I've returned the table of other asymptotic notations until a complete article can be written. It would be nice to have an article that covers all the notations, with a proper introduction and detail, and refocus this article on the use of Big O only. Most of the redirects into this article (asymptotic notation, Landau notation, etc.) probably would want to point to the more general article, rather than this one. A worthy project for someone other than me. ---- CharlesGillingham (talk) 09:49, 8 December 2007 (UTC)


Wouldn't it be simpler to define big Omega and big Theta as follows?

  • f is Omega(g) iff g is O(f)
  • f is Theta(g) iff f is O(g) and f is Omega(g)

Is that correct? —Preceding unsigned comment added by 84.221.201.63 (talk) 04:40, 3 April 2008 (UTC)

[edit] Needs a simple computer science explanation

I would guess that many (most?) readers who visit this page want information about big O notation from a computer, not maths, perspective.

Can some kind, clever person give a simple, authoritative explanation with examples?

Sam Dutton 22:16, 14 November 2007 (UTC)

Perhaps we should include a link to the Wikibooks article on this page?
http://en.wikibooks.org/wiki/Data_Structures/Asymptotic_Notation#Big-O_Notation Sean22190 (talk) 04:40, 19 January 2008 (UTC)

[edit] fancy mathcal O

More than a year ago, someone changed the notation

\hbox{O}(n)

into

\mathcal{O}(n)

and nobody complained (I think). I want to complain now, as almost no serious publications in mathematics or CS use \mathcal{O}(n). In fact the most common usage is O(n). Who will object to changing it now? McKay (talk) 20:49, 14 April 2008 (UTC)

You're quite right, but I think, like the fancy zero in Knuth's Concrete Mathematics, it's a reasonable stylistic variation to make the letter-O-ness clearer. I wouldn't try to veto to a change back, but I think it looks nice and is within the realm of acceptable font variation. It doesn't need to be changed back. 71.41.210.146 (talk) 02:48, 16 April 2008 (UTC)
I also prefer the mathcal O. But looking through a few papers, McKay is quite right -- the plain O is far more common. CRGreathouse (t | c) 13:47, 16 April 2008 (UTC)
I prefer the normal O. I've never seen it in mathcal font. Oleg Alexandrov (talk) 15:28, 16 April 2008 (UTC)

[edit] It takes *way* too long to get to a clear definition

I find this article maddening because until one gets to the "formal definition" section -- which is admirably clear -- there is nothing that can be even called an informal definition. Before there is even an informal definition, I don't want to know anything else about big-oh notation. (The rough description in the opening sentence is accurate, but much too vague to be of any help. All the intervening stuff up to the "formal definition" is just plain annoying, absent any clear definition of what is being discussed.) Daqu (talk) 17:00, 3 June 2008 (UTC)