Wikipedia:Reference desk/Archives/Mathematics/2007 August 5

From Wikipedia, the free encyclopedia

Mathematics desk
Welcome to the Wikipedia Mathematics Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.



August 5

Calculating the Difference Between Two Dates

I wasn't sure what Help Desk to use for this question ... so I am going with Math (here). Given two dates, I want to be able to compute the amount of time elapsed between the two dates. So, for example, if I have May 3, 1962 ... and January 17, 1981 ... I want to know that the time elapsed between these two dates is 18 years and 259 days. So, here are my questions. (Question #1) Does anyone know of any web sites that perform this calculation? Important: I need the output to be in the format of 18 years and 259 days ... and not in the format of 6,834 days ... and not in the format of 18 years and 8 months and 14 days. I have found web sites that will give me the latter two formats (which I don't need), but I was not able to find the former format (which I do need). (Question #2) Does anyone know how to perform the above calculation in Excel 2007 (such that the result is in the desired format)? (Question #3) In Excel 2007, I cannot seem to work with dates prior to 1/1/1900. That is, the date of January 1, 1900 -- and all subsequent dates -- are valid. Any date prior to that (December 31, 1899 and earlier) seems to be an invalid date that triggers an error message. Does anyone know a way around this limitation in Excel 2007? Thanks. (Joseph A. Spadaro 03:21, 5 August 2007 (UTC))

Isn't this ambiguous? What is the amount of time, in your format, elapsed between March 1, 2003 and March 1, 2004? And what is it for February 28, 2003 and February 28, 2004? What about February 28, 2003 and February 29, 2004? And March 1, 2003 and February 29, 2004? And, finally, February 28, 2003 and March 1, 2004? Somehow you cannot avoid anomalies if you reckon this way, and you have to be precise how you want them resolved.  --Lambiam 04:10, 5 August 2007 (UTC)
What is the ambiguity? How much more precise can I be? I want the format in Y years and D days ... as opposed to simply D days ... or as opposed to Y years, M months, and D days. What is unclear / ambiguous about my question? (Joseph A. Spadaro 05:31, 5 August 2007 (UTC))
Just break it up into two parts. First calculate the number of full years between May 1962 and January 1981. Then calculate the distance from May 3, 1980 to January 17, 1981. You can just use the Julian day calculators that you've found for that. --JayHenry 05:12, 5 August 2007 (UTC)
Yes, I know that. Have you read my question? I want a program to do this for me. I have many, many, many to do ... and I would like to simplify the matter. Either with a website / program ... or with Excel. (Joseph A. Spadaro 05:31, 5 August 2007 (UTC))
Find out which day of the year the first date was on, and the day of the year the second was on, and subtract modulo 365. This ought to do the trick? Finding the difference in years is trivial.
  • Using excel, if you can get the year numbers, month numbers and day numbers into 6 separate cells then the calculations can be done as follows:
  • A1=later year, B1=later month, C1=later day, A2=earlier year, B2=earlier month, C2=earlier day
  • B3="=IF((B2*100+C2)>(B1*100+C1),1,0)"
  • A3="=A1-A2-B3"
  • Where A3 is the number of years difference.
  • Then to find the number of days difference, you can use (forcing valid Excel years) something like "=date(2007,B1,C1)-date(2007,B2,C2)", possibly +365 or +366 depending upon whether the years are leap or not. Note that this isn't easy because, if say your dates run from leapyear-Feb-14 to nonleapyear-March-5, then the year difference is easy, but are the days those at the start of the period (i.e. a leap number of days) or at the end of the period (i.e. a non-leap number of days)? -- SGBailey 07:14, 5 August 2007 (UTC)
If you had bothered to try to answer Lambiam's questions, you would have found the ambiguity/anomaly. The ambiguity is "what is the difference between February 29 2004 and March 1 2005?". The anomaly is that however you answer this question, you will get something weird. We can help you more if you explain what you need this for. As it is, I think your best bet will be to find the number of days (easy, there should be a function for this in Excel) and divide it by 365.2425 - the quotient will be the number of years, and the remainder (rounded) will be the number of days. -- Meni Rosenfeld (talk) 12:44, 5 August 2007 (UTC)
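Meni's divide-and-round suggestion is only a few lines in most languages; a minimal Python sketch (365.2425 is the mean Gregorian year length, and as the caveat above warns, the rounding can put the day count off by one):

```python
from datetime import date

def approx_years_days(start, end):
    """Approximate elapsed time as (years, days): divide the total day
    count by the mean Gregorian year length and round the remainder."""
    total = (end - start).days
    years = int(total // 365.2425)
    days = round(total - years * 365.2425)
    return years, days

# The example from the question: the exact answer is 18 years and 259 days;
# this approximation lands one day off, illustrating the caveat.
print(approx_years_days(date(1962, 5, 3), date(1981, 1, 17)))  # (18, 260)
```

Note that the total day count itself (6,834 here) is exact; only the split into years and days is approximate.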
There is a YEARFRAC function in Excel, which requires the Analysis ToolPak; if you're not too fussed about leap years, this may do for you. --Salix alba (talk) 14:43, 5 August 2007 (UTC)
As to dates before 1900 (1904 on a Mac), Excel's too dense to be able to work with them. OpenOffice is a bit better, going back to 1583, when the calendars got changed about. If you need dates before that, you may need to look at something like Java; Perl seems to have some advanced packages.[1] --Salix alba (talk) 15:04, 5 August 2007 (UTC)
The HP48 series of calculators can calculate the number of days between any 2 dates. The newer 49 series and the current HP50G also do it. If one does not have an actual HP calc, there are free HP48-49-50 emulators available online. Zeno333 19:32, 8 August 2007 (UTC)


Clarification of My Question

Thanks to everyone for the input. As was suggested by Meni Rosenfeld (above), I will provide some context and clarification for my questions. As it may be helpful, please take a look at the following Wikipedia page: List of Best Actor winners by age at win. As you see, that list includes the individuals’ ages. I am proofreading that list (and many others) and checking for errors. And I am also involved in several projects of a similar nature. Ultimately, there will be many, many, many ages that I need to compute – with the above link merely being an example. I certainly know how to calculate these figures manually. That is, I know the mathematical / numerical algorithm that will achieve the answer. However, because I have so many dates to calculate, I am trying to make the task easy / manageable / as painless as possible. I don’t want to manually do all this … I want a computer / calculator to do it for me.

That being said, I would like to be able to merely enter the two dates (start date and end date) and have the results calculated for me automatically. Thus, as my original question states … I would like to do this with either (a) a web site that can do so or (b) Excel if possible. I do not want to do these manually. So, I will indeed go through the time / expense of typing in a lot of dates for the data input. But I am not willing to manually calculate each and every result.

To be more specific: Let's use the first person on the list as an example. Adrien Brody is 29 years and 343 days old. So, if I entered the beginning and ending dates, I want that as my result. It is not helpful to me to have output that merely says "Adrien Brody is 10,935 days old." It is equally unhelpful to have output that says "Adrien Brody is 29 years, 11 months, and 9 days old." The chart in the above link is formatted with an individual's age reported as Y years old and D days old. Thus, that is the format of the data output that I would like to receive. If the results are given simply in terms of D days old … or even in terms of Y years, M months, and D days old – that will not be useful to me. All of the websites that I have found give me the output in one of those two ways that are not helpful to me. I am looking for a website (or function in Excel) that will output the result in the format I need (age expressed as Y years and D days).

To address another concern: There really is no anomaly or ambiguity in the question I have posed. The question is: Is there a program out there that does what I want? The question is not: how does one, like myself, manually calculate these results? That being said, a year is simply one date to the exact same date a year later: example = June 12, 1965 to June 12, 1966. Sure, whenever you are doing calculations with dates / calendars, the issue of leap years and February 29th comes up. Nonetheless, that does not convolute or obfuscate my original question. The anomaly or ambiguity would lie in the algorithm, which is not my original question. In any event, the February 29th "issue" (if you will) will be negligible at best and will probably hardly ever surface. If it does, I would posit that it works like this: if we start out at February 5, 2004 … a year later will be February 5, 2005 – regardless of the fact that there is an extra day (Feb. 29th) wedged in there. In other words, in that case, the term "year" simply means 366 as opposed to 365 days.

So, to address Lambiam's point - it is outside the scope of the question. I am asking if there is a computer / calculator out there that will allow me to input two dates and receive as output the age I need. How this computer / calculator does this calculation is of no interest to me. So can anyone help? Thanks.

Also – back to Excel for a moment. It seems hard to believe that a behemoth like Microsoft would not allow us to account for dates pre-1900. Dates pre-1900 would be relatively common for various uses. (How many days was George Washington in office? How many days did the American Revolution or the Civil War last?) Is there any way in Excel to get around the limitation that December 31, 1899 (and earlier dates) are considered "errors" or invalid? Thanks. (Joseph A. Spadaro 20:42, 5 August 2007 (UTC))

Before you rush off to finding a program that calculates what you want, you need to know what you want. It astonishes me how you have avoided answering a question that was posed twice. I have no problem with "a year is simply one date to the exact same date a year later" whenever this exists. This is why we have asked about February 29 (2004, say) to March 1 2005. The problem is that there is no such thing as "the exact same date a year later", as there is no such thing as February 29 2005. So what will the answer be? 1 year, or 1 year and a day? 1 year and a day will be weird, as there is no date defined as 1 year after 2.29.04. Another peculiarity is that both 2.29.04 to 3.1.05 and 2.28.04 to 3.1.05 are 1y1d. If it is only 1 year, you also get things such as both 2.29.04 to 3.1.05 and 3.1.04 to 3.1.05 being 1y. So, I repeat - you need to specify a convention for when the starting date is February 29, and be prepared for anomalies whatever choice you make.
Now that we have seen why the year & day format is problematic, we need to ask - why do we even need days for pages like this? Is it not better to simply delete the days column?
In case there are places where you have no say in the chosen format, the question is whether you are willing for the results to sometimes be a day off. If so, my suggestion of dividing days by 365.2425 is good.
Otherwise, you should follow JayHenry's suggestion - he didn't say you need to do this manually. Have your spreadsheet software do the calculation in two parts.
As for the 1900 problem, it shouldn't be too difficult to do this with the right formulae in excel, but why should you - there are arguably better alternatives.
Last but not least, do you know any programming languages? It shouldn't be hard to write a program which can do as much as take the wiki or html code of a page and automatically calculate the date differences and check the consistency of the data. -- Meni Rosenfeld (talk) 21:19, 5 August 2007 (UTC)
(After edit conflict) If you're wanting an age in years and days, that is a better-defined problem than a duration. In words: find the closest birthday on or before your target date, and then find the number of days between that birthday and the target date.
To avoid the vagaries of Excel, just put the years, months and days in separate columns, say A1,B1,C1,D1,E1,F1 for (year1,month1,day1,year2,month2,day2). To find the number of years difference use G1=IF(B1<E1,D1-A1,IF(AND(B1=E1,C1<=F1),D1-A1,D1-A1-1)); the birthday will then be H1=DATE(A1+G1,B1,C1), and if the birthday is after 1900 you can then subtract I1=DATE(D1,E1,F1)-H1. For Feb 29 you can do it by hand, taking either Feb 28 or March 1 for non-leap years. If you need to work pre-1901 then you'll need to correct for leap years.
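The same "last birthday" logic is short in Python, whose date type is not limited to post-1900 dates; a sketch (here a February 29 birthday is mapped to February 28 in non-leap years, one of the two conventions just mentioned):

```python
from datetime import date

def age_years_days(birth, target):
    """Exact (years, days): full years to the last birthday on or before
    the target date, then days from that birthday to the target."""
    years = target.year - birth.year
    if (target.month, target.day) < (birth.month, birth.day):
        years -= 1                     # this year's birthday not reached yet
    try:
        last_birthday = birth.replace(year=birth.year + years)
    except ValueError:                 # Feb 29 birth, non-leap anniversary year
        last_birthday = date(birth.year + years, 2, 28)
    return years, (target - last_birthday).days

print(age_years_days(date(1962, 5, 3), date(1981, 1, 17)))   # (18, 259)
print(age_years_days(date(2004, 2, 29), date(2005, 3, 1)))   # (1, 1)
```

The first example reproduces the "18 years and 259 days" figure from the original question exactly, with no rounding.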
Alternatively, get OpenOffice and free yourself from the MS behemoth; it has a better selection of date functions than Excel.
As to why, it's probably historical. Excel was initially for financial applications, and they didn't think it would be needed, and it saved space. They haven't changed it since then, to ensure compatibility. Indeed it might be Lotus 1-2-3's fault, as I think the functions were chosen to be compatible with that. Many other computer systems start counting quite recently; see Epoch (reference date). --Salix alba (talk) 22:10, 5 August 2007 (UTC)
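On the epoch point: general-purpose languages don't share Excel's limit. Python's datetime.date, for example, covers years 1 through 9999 in the proleptic Gregorian calendar, so the pre-1900 examples raised above are no trouble (a sketch; these dates fall after the 1752 calendar switch in Britain and America, so the proleptic calendar matches history here):

```python
from datetime import date

# How many days was George Washington in office?
# April 30, 1789 (first inauguration) to March 4, 1797 -- both dates are
# well before Excel's 1900 epoch, but fine for Python's date type.
term = date(1797, 3, 4) - date(1789, 4, 30)
print(term.days)  # 2865
```

For dates before 1582 the proleptic Gregorian calendar diverges from the Julian dates actually in use, so some care is still needed there.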
To Salix alba: What exactly is the difference between an "age" and a "duration" ...? Aren't they both simply an elapse of time from Date 1 to Date 2? I am confused. Thanks. (Joseph A. Spadaro 17:06, 9 August 2007 (UTC))

Thanks to all for the input. (Joseph A. Spadaro 22:22, 6 August 2007 (UTC))

There is a simple way of doing it, here on Wikipedia. There is a template: Template:Age in years and days. I've not tried it to see how it deals with leap years, or, say, the fact that 1900 wasn't a leap year. Richard B 09:37, 7 August 2007 (UTC)
<edit> I've just checked, and it seems fine with leap years. Richard B 09:42, 7 August 2007 (UTC)
Yes, that is exactly what I am looking for. Thanks. I guess I didn't have to look too far, huh? Thanks! (Joseph A. Spadaro 17:37, 7 August 2007 (UTC))

Convergence of a Taylor series

I'm implementing (in python) code to calculate sine using a Taylor series (for "fun", I appreciate there are plenty of other ways to do this on a computer). The implementation continues until the value of the nth term is below a certain threshold, at which point the algorithm returns; so I can count how many terms I need to evaluate for a given degree of accuracy. There are marked optimisations to be had in exploiting the symmetries of the sine function: identities that move values into the 0..2π range, and further ones that move those from π..2π into 0..π, make for clear improvements - the series converges much faster in the 0..π range than for other values. I've coded a further (trivial) symmetric optimisation that flips values in π/2..π down into 0..π/2, but that doesn't produce any improvement. Am I correct in thinking that the convergence of the Taylor series for sine between 0 and π/2 is not much faster than between π/2 and π? -- Finlay McWalter | Talk 14:02, 5 August 2007 (UTC)
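For concreteness, an implementation along the lines described might look like this (a sketch: term-by-term summation with a stopping threshold, plus the 0..2π and π..2π reductions; the further π/2..π flip is left out):

```python
import math

def taylor_sine(x, threshold=1e-16):
    """sin(x) from its Taylor series; stops once a term's magnitude
    drops below the threshold. Returns (value, number of terms summed)."""
    x = x % (2 * math.pi)        # reduce into 0..2*pi
    sign = 1.0
    if x > math.pi:              # reduce pi..2*pi to 0..pi: sin(x) = -sin(x - pi)
        x -= math.pi
        sign = -1.0
    total, term, n, count = 0.0, x, 1, 0
    while abs(term) >= threshold:
        total += term
        count += 1
        term *= -x * x / ((n + 1) * (n + 2))   # next odd-order term
        n += 2
    return sign * total, count

# Convergence is visibly faster for small arguments: taylor_sine(0.1)
# needs far fewer terms than taylor_sine(3.0).
```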

That depends mostly on your definition of "much". When evaluating the sine by putting the variable in the Taylor series and calculating n terms, the number of correct digits you get is roughly 2n(\log_{10}(2n) - \log_{10}(xe)) (assuming your internal precision is high enough). For x = \tfrac{\pi}{2}, this is n(0.87\ln n - 0.66), and for x = \pi this is n(0.87\ln n - 1.26). Once again, it is up to you and your desired accuracy to decide if the difference is great. If what you're asking is about the largest term in the series (which bounds the greatest precision you can achieve with given internal precision), it is roughly 1.5 and 5, respectively. -- Meni Rosenfeld (talk) 14:29, 5 August 2007 (UTC)
Interesting. Any source pointer for that? —Bromskloss 22:04, 5 August 2007 (UTC)
Are you asking about my justification for the above estimates? This is just a simple application of Stirling's approximation. -- Meni Rosenfeld (talk) 07:26, 6 August 2007 (UTC)
Thank you. —Bromskloss 07:53, 6 August 2007 (UTC)
To quantify the effect: if you stop when your term drops below 2^{-54} in absolute value, then for x = π you will stop at x^{29}/29!, while for x = π/2 you can stop "already" at x^{23}/23!. That is, indeed, not a big deal. However, for x = π−3 you only need to go to x^{11}/11!, which is a considerable gain compared to the case x = 3 (which is like x = π). The reduction also gives you better accuracy, and the effect is stronger when you get close to π.  --Lambiam 17:53, 5 August 2007 (UTC)
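Those stopping points are easy to confirm (a sketch: find the first odd n whose term x^n/n! falls below 2^{-54}):

```python
import math

def stop_index(x, threshold=2.0 ** -54):
    """Smallest odd n for which x**n / n! is below the threshold."""
    n = 1
    while x ** n / math.factorial(n) >= threshold:
        n += 2
    return n

print(stop_index(math.pi))      # 29
print(stop_index(math.pi / 2))  # 23
print(stop_index(math.pi - 3))  # 11
```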
There be dragons! As I mentioned in response to a question about associativity, floating point addition rounds to a fixed precision. If you add the terms from largest to smallest, the contributions of the small terms can disappear. Either add from small to large, or use something like the Kahan summation algorithm (compensated summation), or use double precision to get an accurate single precision result. Otherwise, your "theoretical" accuracy will be meaningless.
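The compensated summation just mentioned is only a few lines; a sketch of the standard Kahan algorithm:

```python
def kahan_sum(terms):
    """Sum floats while carrying the rounding error of each addition
    in a separate compensation variable."""
    total, c = 0.0, 0.0
    for t in terms:
        y = t - c              # apply the stored correction
        s = total + y          # low-order bits of y can be lost here...
        c = (s - total) - y    # ...but are recovered algebraically
        total = s
    return total

# Adding a million tiny terms to 1.0, largest first: naive summation
# drops them entirely, Kahan keeps them.
vals = [1.0] + [1e-16] * 10 ** 6
print(sum(vals))        # 1.0
print(kahan_sum(vals))  # ~1.0000000001
```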
Since you're doing this for fun, here's an amusing observation on range reduction. First, observe that
 \sin(\theta) = \sin\left(- \tfrac\theta 3\right) \left( 4 \sin^2\left(- \tfrac\theta 3\right) - 3 \right) .
Then recall that as θ approaches zero, sin(θ) approaches θ. Thus we can use repeated range reductions instead of a series.
What do professionals use? In software, try fdlibm. Hardware implementations take a different approach; the CORDIC algorithm is a traditional favorite. --KSmrqT 18:08, 5 August 2007 (UTC)
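The repeated-reduction idea can be sketched as a short recursion, using the identity above in the equivalent form sin t = 3 sin(t/3) − 4 sin³(t/3) and bottoming out with sin t ≈ t:

```python
def sine_by_reduction(theta, eps=1e-8):
    """sin(theta) via repeated range reduction with the triple-angle
    identity; no series needed. Base-case error is O(eps**3)."""
    if abs(theta) < eps:
        return theta                 # sin(t) ~ t for small t
    s = sine_by_reduction(theta / 3.0, eps)
    return 3.0 * s - 4.0 * s ** 3    # sin(3t) = 3 sin t - 4 sin**3 t
```

Each level multiplies the base-case error by roughly 3 (the derivative of the triple-angle map near 0), which is why the cubic accuracy of the sin t ≈ t base case survives the unwinding.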
Hmmm, \sin(\pi) = \sin\left(-\tfrac\pi 3\right)\left(4\sin^2\left(-\tfrac\pi 3\right) - 3\right) \approx \left(-\tfrac\pi 3\right)\left(4\left(\tfrac\pi 3\right)^2 - 3\right) \neq 0. ;-) —Bromskloss 22:07, 5 August 2007 (UTC)

Prove or disprove

Consider the following LHS and RHS.

 LHS = \sum_{q=0}^N \frac {(ikR_{pq} + 1) \left| x_{pq} \right| e^{-ik(R_{pq} + R_{qr})}} {R_{pq}^3 R_{qr}}

 RHS = \sum_{q=0}^N \frac {(ikR_{qr} + 1) \left| x_{qr} \right| e^{-ik(R_{pq} + R_{qr})}} {R_{pq} R_{qr}^3}

Given that

R_{pq} = R_{qp}
x_{pq} = -x_{qp}

 i  = \sqrt {-1}

 k  \geq 0

I want to know whether LHS = RHS or not. As a background, R(p,q) is the Euclidean distance between points P and Q in a meshed unit cube (in all there are N = n^3 regularly spaced points), and x(p,q) is just the x-component of that distance. Also, N can be arbitrarily increased, so that the points are really close to each other, and the summation can be replaced by an integration (if required). Hopefully there is a way to prove that these two are (not) equal. Any help will be highly appreciated. deeptrivia (talk) 18:13, 5 August 2007 (UTC)

Fun. I haven't come up with anything great, just noticed that you can multiply both sides with R_{pq}R_{qr} to get slightly simpler expressions. Have you tried it numerically with some values? That way you could prove that there are cases where the equality does not hold, but perhaps that is not good enough for you. Anyway, I find myself trying to figure out more about what your problem really is about. ;-) That k looks like a wave number. —Bromskloss 19:10, 5 August 2007 (UTC)


That's correct. k is the wave number. This comes from an acoustics problem. Taking this further, I guess even the exponential can be eliminated, so that we have
 LHS = \sum_{q=0}^N \frac {(ikR_{pq} + 1) \left| x_{pq} \right|} {R_{pq}^2}
 RHS = \sum_{q=0}^N \frac {(ikR_{qr} + 1) \left| x_{qr} \right|}  {R_{qr}^2}
Is that correct? deeptrivia (talk) 19:45, 5 August 2007 (UTC)
I may be mistaken, but it seems to me both of those removals are only valid if we break the two sums over q into their component terms and equate those simultaneously, i.e.:

\frac {(ikR_{pq} + 1) \left| x_{pq} \right| e^{-ik(R_{pq} + R_{qr})}} {R_{pq}^3 R_{qr}} =
\frac {(ikR_{qr} + 1) \left| x_{qr} \right| e^{-ik(R_{pq} + R_{qr})}} {R_{pq} R_{qr}^3} \ \forall q \in \{ 0, 1, \ldots, N \}
which is a stronger claim than simply the equality of the sums. While both sides of this equation can certainly be divided by \frac {e^{-ik(R_{pq} + R_{qr})}} {R_{pq} R_{qr}}, I think what's left reduces down to just \left| x_{pq} \right| = \left| x_{qr} \right| and R_{pq} = R_{qr}\, \forall q \in \{ 0, 1, \ldots, N \}, which, I think, implies p = r. So I don't think this approach looks particularly helpful. —Ilmari Karonen (talk) 20:37, 5 August 2007 (UTC)
Ahrgg, you are right! I was going to object to deeptrivia's simplification for that very reason, but apparently I made the same mistake myself. —Bromskloss 21:22, 5 August 2007 (UTC)
Yes, I see there's a problem with the simplification. Any other ideas? To me it appears that it is more likely to be false (LHS <> RHS). Any way to demonstrate this? I know plugging in a few numbers should do, but any better ideas? Thanks, deeptrivia (talk) 22:40, 5 August 2007 (UTC)
One interesting observation is that any difference will have to be entirely due to the restriction of q into the unit cube: if the sum was taken over all q \in \mathbb{R}^3 (or \mathbb{Z}^3), then for each point q there would be another point s = r + p − q such that |x_{pq}| = |x_{sr}|, R_{pq} = R_{sr}, |x_{ps}| = |x_{qr}| and R_{ps} = R_{qr}. (In particular, this implies that the equality will hold if r = (1,1,1) − p, since then s = (1,1,1) − q will stay within the unit cube for any q.) —Ilmari Karonen (talk) 10:29, 6 August 2007 (UTC)
Note that p and r are free variables in RHS. Defining the matrix T by T_{pr} := LHS, equality of LHS and RHS is equivalent to the statement that the transformation turning R into T preserves symmetry. I see no immediate reason why that should be the case. Did you try disproving the claim by computing the two for some small randomly chosen symmetric input matrix R?  --Lambiam 17:42, 6 August 2007 (UTC)
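A sketch of that numerical experiment in Python (assumptions beyond the original post: k = 1, and the singular terms q = p and q = r, where a denominator vanishes, are skipped):

```python
import cmath, itertools, math

def lhs(p, r, pts, k=1.0):
    """LHS of the claimed identity, summed over the grid points q.
    Terms with a zero denominator (q == p or q == r) are skipped --
    an assumption, since the original post doesn't say how to treat them."""
    total = 0j
    for q in pts:
        R_pq, R_qr = math.dist(p, q), math.dist(q, r)
        if R_pq == 0.0 or R_qr == 0.0:
            continue
        total += ((1j * k * R_pq + 1) * abs(p[0] - q[0])
                  * cmath.exp(-1j * k * (R_pq + R_qr))
                  / (R_pq ** 3 * R_qr))
    return total

n = 3
grid = [i / (n - 1) for i in range(n)]
pts = list(itertools.product(grid, repeat=3))

# RHS(p, r) is just LHS(r, p), so LHS = RHS for all p, r would mean the
# matrix T_{pr} = LHS is symmetric. A generic pair breaks the symmetry:
p, r = (0.0, 0.0, 0.0), (0.5, 0.0, 0.5)
print(abs(lhs(p, r, pts) - lhs(r, p, pts)))      # generally nonzero

# ...while the special case r = (1,1,1) - p noted by Ilmari Karonen above
# maps the grid onto itself and gives equality:
p, r = (0.0, 0.0, 0.5), (1.0, 1.0, 0.5)
print(abs(lhs(p, r, pts) - lhs(r, p, pts)))      # ~0
```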
Okay, here are my results. Consider a unit cube, each of whose edges is meshed with n nodes (so there are in all N = n^3 points.) Then, we get:
n  ‖Re(LHS-RHS)‖/N²        ‖Im(LHS-RHS)‖/N²
2       0                         0
3       0.2672                    0.1263
4       0.3554                    0.1951
5       0.4033                    0.2542
6       0.4329                    0.3001
7       0.4529                    0.3355
8       0.4672                    0.3631

I divided the norms by N^2 because both the LHS and RHS matrices have N^2 elements. If, as Ilmari Karonen said, LHS should become equal to RHS as N -> inf (i.e., we consider all points in R^3), then shouldn't this "error" keep decreasing with increasing n? The result for n = 2 makes sense because in that case, we have 8 points, which all have three neighbouring points "filled" and the other three "empty." deeptrivia (talk) 19:11, 6 August 2007 (UTC)

My previous comment wasn't about the number of points (indeed, in the continuous limit, \left]0,1\right[^3 has just as many points as \mathbb{R}^3), but about the restrictions placed on R_{pq} and x_{pq} by the requirement that both p and q must lie within the unit cube. For example, consider the case p = (0,0,0), q = (1,0,0) and r = (0.5,0,0): clearly |x_{pq}| = 1, but there's no s in the unit cube that could give such a high value of |x_{sr}|. If points outside the unit cube were allowed, however, a number of them, including s = (−0.5,0,0), would do the trick. —Ilmari Karonen (talk) 21:10, 6 August 2007 (UTC)
Oh yes, I get what you're saying now. Thanks a ton, it was really helpful. deeptrivia (talk) 18:13, 7 August 2007 (UTC)

Problem Solving 3

Hi. I'm back again with a new problem I think I've solved, which I again would like you to check for me.

Find the minimum value of  3x^2+{1 \over 7x^2}

First I differentiated it, leading to

 6x - {2 \over7x^3}

 {42x^4 - 2 \over 7x^3}

 {2(21x^4 -1) \over 7x^3} (eqn fixed 41 ->21 2007-08-06)

 {2 \over 7} * {21x^4-1 \over x^3}

 {2 \over 7} *[{21x^4 \over x^3} - {1 \over x^3}]

 {2 \over 7} *[21x - {1 \over x^3}]

Therefore [21x - {1 \over x^3}] must be equal to zero.

So

21x - {1 \over x^3} = 0

21x  =  {1 \over x^3}

21x^4  =  1\,\!

x^4  =  {1 \over 21}\,\!

x  =  {1 \over \sqrt[4]{21}}\,\!

Subbing this in, we get

 3({1 \over \sqrt[4]{21}})^2 + {1 \over 7 ({1 \over \sqrt[4] {21}})^2}\,\!

 3({1 \over \sqrt {21}}) + {1 \over ({7 \over \sqrt {21}})}\,\!

 {3 \over \sqrt {21}} + {\sqrt {21} \over 7}\,\!

 {3 \sqrt{21} \over {21}} + {3 \sqrt{21} \over 21}\,\!

 {6 \sqrt{21} \over {21}}\,\!

Therefore  {6 \sqrt{21} \over {21}}\,\! is the minimum value.

As ever I am only looking to be told if I am correct or not. Thank you in advance. 172.188.191.126 20:42, 5 August 2007 (UTC)

You are correct. Keep in mind for future reference, though, that there are actually two real solutions (four, if you count the imaginary ones) to x^4  =  {1 \over 21}\,\! and the other solution(s) could have resulted in a smaller answer. (And the imaginary solutions do give you a smaller value, so if you count those your answer is wrong.) Gscshoyru 20:51, 5 August 2007 (UTC)
OK. The real solutions must be x  = \pm {1 \over \sqrt[4]{21}}\,\! and the imaginary solutions x  = \pm  {1 \over i \sqrt[4]{21}}\,\!
Taking the imaginary roots would lead to
 3({1 \over i\sqrt[4]{21}})^2 + {1 \over 7 ({1 \over i \sqrt[4] {21}})^2}\,\!
 3({1 \over - \sqrt {21}}) + {1 \over ({7 \over - \sqrt {21}})}\,\!
 -{3 \over \sqrt {21}} - {\sqrt {21} \over 7}\,\!
 -{3 \sqrt {21} \over 21} - {3\sqrt {21} \over 21}\,\!
 - {6 \sqrt{21} \over {21}}\,\!
Is this right and if yes, as I'm sure it isn't coincidence that it is my previous answer times minus one, could I have arrived at it without doing any of this working, i.e. just using my first answer and the fact that the square of an imaginary number is minus one? 172.188.191.126 21:02, 5 August 2007 (UTC)
Yep, that's right. And to be honest, I have no idea if you can simply do that every time without doing it out... the fact that it's equal to x^4 might be why such a thing is the case, but to be honest I'm not entirely sure -- someone else will have to answer that. Gscshoyru 21:17, 5 August 2007 (UTC)
This is, indeed, not exactly a coincidence, but isn't very general either. It results from the facts that:
  • Your function involved only powers of x which are 2 modulo 4,
  • i raised to a power which is 2 modulo 4 is -1, and
  • The ratio between the roots is i (which itself comes from the fact that they are all fourth roots of the same number).
Anyway, note that finding a minimum does not amount to simply finding where the derivative is 0. The minimum can either be at such a point, or at the endpoints of the domain of the function (the problem specification should also describe what the domain is). Sometimes a minimum does not exist at all. In your case, the domain is probably (0,+\infty), so you should have checked the limits of the function as x goes to 0 and to infinity. Since the value of the function is positive infinity in both limits, which is greater than the value you have found, this is indeed the minimum. -- Meni Rosenfeld (talk) 21:30, 5 August 2007 (UTC)
Thanks but I'm an amateur mathematician so what you said about limits means little to me. I have heard about minimums, and maximums, existing at 'end points' before. Could you show me where to find out more about them? Thanks 172.188.191.126 21:35, 5 August 2007 (UTC)
Oh, that's easy! Let's find the minimum of x², where x is a real number. The derivative is zero when x = 0, so that's our minimum. But suppose now that instead of allowing x to be any real number, we wanted it to lie in the range [1,2]. Now our minimum occurs at x = 1 even though the derivative isn't zero there. Get it? —Bromskloss 21:51, 5 August 2007 (UTC)
OH! Yes I'm with you 100%. Thanks. So it's only when you have selected a specific range - in which the derivative never equals zero? 172.188.191.126 21:57, 5 August 2007 (UTC)
Not quite: consider x³, where we restrict x to lie in the range [-1,1] as Bromskloss did above. Let's try to find the minimum in this domain; differentiating f(x) = x³, we get f'(x) = 3x², which is zero at x = 0. But this is not the minimum, since f(0) = 0 while f(-1) = -1. (In fact, since x < 0 implies x³ < 0, this point x = 0 is not even a local minimum.) Remember that the derivative is always 0 at a maximum or minimum (excepting boundary points), but not every point where the derivative is 0 is necessarily a maximum or minimum. Tesseran 22:09, 5 August 2007 (UTC)
Remember that 1 / i = − i, so 1/(i\sqrt[4]{21})=-i/\sqrt[4]{21}. —Bromskloss 21:37, 5 August 2007 (UTC)
Let's throw in a dash of geometric intuition. If a small change in x can produce a change in y = f(x), then we are surely not at a minimum, even locally. This is the source of the "derivative equals zero" condition. (Caution: note that |x| has a minimum at zero, even though the derivative does not exist.) We have no general method to decide if a local minimum is also a global minimum. And as others have noted, a minimum can also occur at a boundary.
But we have another problem. If the derivative of f(x) has a zero at x, the same is true for −f(x); yet a minimizing x for f(x) is maximizing for its negative. We must also look at the second derivative, which may be positive, negative, or zero. For a minimum, we require it to be positive, so that as we move away from x the slope increases. --KSmrqT 23:36, 5 August 2007 (UTC)
I have to disagree: it is entirely possible to have a minimum where a small change in x can produce a change in y = f(x). For example, considering f(x) = x² at x = 0, any change in x yields a change in f(x). [Perhaps we could say "produces a proportional change"?] I believe the real reason for the "derivative equals zero" condition is more like the following (you should have the definition of derivative in mind):
If x = t (for example) is a minimum, then on both sides of t the function is greater than at t. (This is true if you restrict to a small enough area around t.) Thus we have that f(t+h)-f(t) is positive whenever h is nonzero and small. Going to the definition of derivative, we have that \frac{f(t+h)-f(t)}{h} is positive whenever h > 0; it follows from properties of limits that the "limit from above", \lim_{h\to 0^+}\frac{f(t+h)-f(t)}{h}, is zero or positive. Similarly, since \frac{f(t+h)-f(t)}{h} is negative whenever h < 0, we conclude that the limit from below, \lim_{h\to 0^-}\frac{f(t+h)-f(t)}{h}, is zero or negative. But if f is differentiable, then these two limits must be equal, and so we conclude that f'(t)=\lim_{h\to 0}\frac{f(t+h)-f(t)}{h} is zero. Tesseran 05:40, 6 August 2007 (UTC)
Sorry, you're mistaken, and the x² example should make that clear. The very meaning of the derivative at x being zero is that a small change in x produces no change in f(x). Small here is obviously taken in the sense of infinitesimal, and zero in the sense of neglecting effects beyond first order. This is standard calculus-speak. Certainly any finite change in x will produce a change in a non-constant function, but that's why we have calculus. --KSmrqT 06:29, 6 August 2007 (UTC)
Well, not mistaken, but perhaps not interpreting terminology correctly. I understood "small" to mean "infinitesimal", but not that "produces no change" meant "produces change of second order or higher". If this is standard language in calculus, could you give some other examples of how "X happens" is used for "X is a first-order approximation"? (I'm not doubting you, I'm actually interested in this phenomenon.) This is why I often hesitate to question you--it seems to always turn out either that I'm wrong, or that there was a miscommunication. Oh well. =) Also, why do you emphasize "finite" in your last sentence? I would expect "nonzero"... Tesseran 02:05, 7 August 2007 (UTC)
In somewhat old-fashioned parlance of analysis, a finite quantity is larger (in absolute magnitude) than any infinitesimal, and thus excludes zero.[2] This ought to be noted in our article on Finite.  --Lambiam 03:34, 7 August 2007 (UTC)
Let's look at the x² example. If we add a quantity δ to x, the output changes from x² to x² + 2xδ + δ². The first-order change is the 2xδ, but we also have a second-order change, δ². One way to look at the derivative is to subtract, divide, and take the limit as δ goes to zero. Instead of taking limits, we may ignore the second-order change; the result is the same. If we try again with x³, the output becomes x³ + 3x²δ + 3xδ² + δ³. Notice that in the ratio of output change to input change (which is δ), namely 3x² + 3xδ + δ², the higher-order terms retain a factor of δ. This explains why they disappear in the derivative.
Following Abraham Robinson, we can create a model of the reals extended with infinitesimals. In this approach, δ is taken to be such a nonstandard quantity, a genuine infinitesimal, and the result of the subtraction and division is coerced back to a standard quantity. Another approach is to extend the reals with a quantity ε defined to satisfy ε² = 0, but ε ≠ 0. If this is used for δ, the higher-order terms automatically disappear.
The general idea of discarding higher-order terms has many practical applications. For example, we may have a physical problem where the full description is quite complicated. However, if we can write the effect of a small perturbation as a series of terms of increasing order, we can often simplify the problem greatly by discarding the higher-order terms. Depending on the size of the perturbation and on the accuracy required and on the problem itself, we may wish to keep second-order terms as well (or more); but a great deal of applied mathematics works with a first-order approximation. We then apply the powerful machinery of linear algebra to the "linearized problem". This is such a habit that folks may forget that nature itself often is not linear, and there can be a great deal of interesting behavior in the nonlinearity. --KSmrqT 06:28, 7 August 2007 (UTC)
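The second approach above, adjoining an ε with ε² = 0, is the dual number construction, and it mechanizes exactly this dropping of higher-order terms; a minimal Python sketch (just enough arithmetic to differentiate a polynomial automatically):

```python
class Dual:
    """Numbers a + b*eps with eps**2 == 0; the eps coefficient of
    f(Dual(x, 1)) is automatically f'(x)."""
    def __init__(self, a, b=0.0):
        self.a, self.b = a, b          # real part, eps coefficient
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.a + o.a, self.b + o.b)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        # (a1 + b1 eps)(a2 + b2 eps) = a1*a2 + (a1*b2 + b1*a2) eps
        return Dual(self.a * o.a, self.a * o.b + self.b * o.a)
    __rmul__ = __mul__

x = Dual(2.0, 1.0)     # x = 2, with dx = 1
y = x * x * x          # f(x) = x**3
print(y.a, y.b)        # 8.0 12.0  ->  f(2) = 8, f'(2) = 12
```

Note how the ε² and ε³ terms of the x³ expansion never appear: the multiplication rule discards them, exactly as in the hand computation above.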
An alternative approach to the original problem by completing the square:
\left(\sqrt{3}x - \frac{1}{\sqrt{7}x} \right)^2=3x^2 - \frac{2\sqrt{3}}{\sqrt{7}} + \frac{1}{7x^2}
\Rightarrow 3x^2+\frac{1}{7x^2}= \left(\sqrt{3}x - \frac{1}{\sqrt{7}x} \right)^2 + \frac{2\sqrt{3}}{\sqrt{7}}
so 3x^2+\frac{1}{7x^2} takes a minimum value of \frac{2\sqrt{3}}{\sqrt{7}} when \sqrt{3}x - \frac{1}{\sqrt{7}x}=0 i.e. when x^2=\frac{1}{\sqrt{21}}. Same answer, different method. Gandalf61 09:38, 6 August 2007 (UTC)
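Both routes to the minimum can be sanity-checked numerically (a sketch; the expected value 6√21/21 = 2√3/√7 ≈ 1.30931):

```python
import math

def f(x):
    return 3 * x ** 2 + 1 / (7 * x ** 2)

# crude grid search over the positive reals
best = min(f(i / 10000.0) for i in range(1, 100000))
expected = 6 * math.sqrt(21) / 21   # equals 2*sqrt(3)/sqrt(7)

print(round(best, 6), round(expected, 6))  # 1.309307 1.309307
```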

A complex-valued function does not have a minimum, because a < b is undefined for complex numbers. For real values of x, the real function x² has a minimum value at x = 0. For complex values of x, the complex function x² does not have a minimum value at x = 0. Note that x = i gives x² = −1 and −1 < 0, so even when the value x² is real, it is not a minimum. Bo Jacoby 12:41, 6 August 2007 (UTC).

A minor remark. The problem is equivalent to finding the minimum value of  3z+{1 \over 7z} subject to the constraint that z be positive (where z = x^2). This results in simpler computations.  --Lambiam 18:07, 6 August 2007 (UTC)
Actually not so minor, really. This sort of substitution makes life easier at many places, not just in this problem, but in many related problems.
It's also worth noting that in finding the zeroes of a rational function, \frac{f(x)}{g(x)}, we really are only concerned about the zeroes of f(x) (we use g(x) to find values of x where the rational function is undefined, which may rule out some of the zeroes of f(x)). It's often easy to forget the stuff you learned in junior high when looking at things from the perspective of calculus and beyond. You want to avoid that temptation. Donald Hosek 18:58, 6 August 2007 (UTC)