Wikipedia:Reference desk/Archives/Mathematics/2007 July 31

Mathematics desk
Welcome to the Wikipedia Mathematics Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.


July 31

Degree notation in integration

Is it considered acceptable to use degree notation for the argument and the boundary expressions of functions actually worked in radian form? E.g.,

\begin{align}\sin(\pi)&=\int_{0}^{\pi}\cos(\theta)\,d\theta\\
\sin(180^\circ)&=\int_{0}^{180^\circ}\cos(\theta)\,d\theta\quad{???}\end{align}

I would think that F(180^\circ) is okay, since it is acceptable to express the arguments of (in this case) trigonometric functions in degree form, with the conversion of degrees to radians implied. But what about the boundaries of the integral, since a definite integral equals the average integrand times the boundary difference:

\int_{\theta_s}^{\theta_f} F'(\theta)\,d\theta=\sum_{T=1}^{UT=\infty} \frac{F'(\theta_T)}{UT}\Big(\theta_f-\theta_s\Big)=\overline{F'(\theta)}\Big(\theta_f-\theta_s\Big);

No matter what F is (excluding zero), \overline{F'(\theta)}\cdot\pi\;\ne\;\overline{F'(\theta)}\cdot 180^\circ.
Is the same implied conversion apparent for the boundaries, too (perhaps with a conversion caveat/reminder)? Thus,

\int_{0}^{180^\circ}F'(\theta)\,d\theta=\overline{F'(\theta)}\cdot\pi\quad{???}

 ~Kaimbridge~15:35, 31 July 2007 (UTC)

I don't think using degrees is acceptable anywhere in serious mathematics. However, a degree is just a mathematical constant, defined as ° = π/180 ≈ 0.017453. So 180° is, in fact, equal to π, and there is no problem with the formula above. -- Meni Rosenfeld (talk) 15:41, 31 July 2007 (UTC)
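A quick numeric illustration of the "degree is just a constant" point (an editorial sketch, not part of the original thread; the constant name DEG is chosen here):

 import math

 DEG = math.pi / 180  # one degree, treated as a mathematical constant

 # With this convention, 180 degrees literally equals pi, so the two forms agree:
 print(math.sin(180 * DEG))  # ~0 up to rounding, same as...
 print(math.sin(math.pi))    # ...sin(pi)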
Of course, it is not clear what you mean by \sum_{T=1}^{UT=\infty} \frac{F'(\theta_T)}{UT}\Big(\theta_f-\theta_s\Big). Perhaps you meant \lim_{n \to \infty} \sum_{k=1}^n F'\left(\theta_s+\frac{k}{n}(\theta_f-\theta_s)\right)\frac{\theta_f-\theta_s}{n}? (This is not really a conventional way to refer to it.) -- Meni Rosenfeld (talk) 15:48, 31 July 2007 (UTC)
Yes, that's exactly what I meant, though \frac{k}{n} should be \frac{k-1}{n-1}:
\theta_k=\theta_s+\frac{k-1}{n-1}\Big(\theta_f-\theta_s\Big)\quad(\theta_s=\theta_1,\;\theta_f=\theta_n).
(I just didn't think it was necessary to define \theta_k! P=)  ~Kaimbridge~17:45, 31 July 2007 (UTC)
The main problem wasn't with defining \theta_T, but rather with \sum_{T=1}^{UT=\infty}; it's not conventional to assign a value to a variable inside the sigma notation, and UT isn't equal to ∞, it tends to it.
And no, we have n summands, so we should divide by n. Taking either \frac{k-1}{n} or \frac{k}{n} is good. -- Meni Rosenfeld (talk) 18:05, 31 July 2007 (UTC)
The integrand is divided by n, but in calculating \theta_k, \frac{k-1}{n-1} needs to be used to land on the boundaries. For example, let n equal 3:
\frac{1-1}{3-1}=0\quad(\theta_1=\theta_s), \frac{2-1}{3-1}=0.5\quad(\theta_2=\frac{\theta_s+\theta_f}{2}) (the midpoint), and \frac{3-1}{3-1}=1\quad(\theta_3=\theta_f). Using \frac{k-1}{n} gives 0,\;\frac{1}{3},\;\frac{2}{3} and \frac{k}{n} gives \frac{1}{3},\;\frac{2}{3},\;1. Remember now, this is the basic, evenly spaced average... Gaussian and other quadratures are another story.  ~Kaimbridge~19:56, 31 July 2007 (UTC)
Well, you can do that, but it is unintuitive. It is more natural to divide the entire interval into n sub-intervals, with n+1 endpoints, x_0 to x_n, where x_k = x_s+\frac{k}{n}(x_f-x_s) (so x_0=x_s and x_n=x_f). Then, in each interval, you need to choose a point at which to evaluate the integrand. I have chosen the right end of each interval, but any other choice is also okay (I'm assuming the function is continuous). In your calculation, you essentially take your point to have a different position in every interval - the left end of the first interval, the right end of the last interval, and somewhere in the middle for the other intervals. This is okay, but as I said, not very standard and not very intuitive. -- Meni Rosenfeld (talk) 20:08, 31 July 2007 (UTC)
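An editorial sketch of this point (not part of the original thread): with n equal subintervals, any fixed choice of sample position inside each subinterval gives a Riemann sum converging to the same integral.

 import math

 def riemann_sum(f, a, b, n, where):
     """Riemann sum over n equal subintervals; `where` in [0, 1] picks the
     sample position inside each subinterval (0 = left end, 1 = right end)."""
     h = (b - a) / n
     return h * sum(f(a + (k + where) * h) for k in range(n))

 # The integral of cos over [0, pi/2] is exactly 1; every choice approaches it:
 for where in (0.0, 0.5, 1.0):
     print(where, riemann_sum(math.cos, 0.0, math.pi / 2, 1000, where))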

component in one direction of a scalar quantity, spherically distributed

Say I have many particles, all moving with speed V. Each particle's direction is equally likely to be any direction (in 3D space).

I want to calculate the average velocity in one direction, e.g. along the 'x' axis or across the 'yz' plane.

I'm excluding particles that move in the opposite direction, i.e. I only consider particles that have positive values of 'x' velocity.

My calculations seem to show that the 'average velocity' is V/2 (the mean).

Can someone point to a page that considers questions like this, or just confirm I have/have not made a mistake? Thanks. 83.100.252.241 18:55, 31 July 2007 (UTC)

I got V/2 as well, but I think the quantity you are trying to find is not as useful as, say, the root mean square speed in one direction, which is V/√3. Perhaps you will find further discussion of this by taking a look at Kinetic theory and following links. -- Meni Rosenfeld (talk) 19:57, 31 July 2007 (UTC)
Yes: rms V/√3 (for kinetic energy calculations) and V/2 for average momentum; the rms speed is the physically significant one (pressure). Thanks. 83.100.252.241 20:31, 31 July 2007 (UTC)
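A Monte Carlo sketch of both figures (added editorially, not from the thread). For a direction uniform on the sphere, the cosine of the angle to any fixed axis is uniform on [-1, 1] (Archimedes' hat-box theorem), so restricting to positive-x particles gives mean V/2 and rms V/√3:

 import math
 import random

 V, N = 1.0, 1_000_000
 positive_vx = []
 for _ in range(N):
     c = random.uniform(-1.0, 1.0)  # cos(theta) to the x-axis, uniform on [-1, 1]
     if c > 0:                      # keep only particles moving in the +x direction
         positive_vx.append(V * c)

 mean = sum(positive_vx) / len(positive_vx)
 rms = math.sqrt(sum(v * v for v in positive_vx) / len(positive_vx))
 print(mean)  # ~0.500 = V/2
 print(rms)   # ~0.577 = V/sqrt(3)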

antiderivative of 1/sin x

I'm looking at forces in semicircular arches, and the integral of 1/sin x seems to come into it.

Hence I'd like to be able to get the antiderivative. Integrating by parts gives an expression of the form x/sin x + x^2 trig(x) + x^3 trig2(x) + etc., which looks like it always converges (I'm assuming this; I haven't checked, since integration by Simpson's rule will be just as good in terms of the calculation). (Here trig(x) means an expression involving sines and cosines of x.)

My question is: is there a simpler expression, and a way to get to it? Thanks 83.100.252.241 19:07, 31 July 2007 (UTC)

1/sin x = csc x. The indefinite integral of that is -ln|csc x + cot x| + C. See Table_of_integrals. Gscshoyru 19:10, 31 July 2007 (UTC)
thanks, I just found that too. Is there a way to 'find' this integral other than spending a long time in a darkened room looking at trig identities and spotting that this function works? (this might be a rhetorical question)
Also, does anyone know who found it first? (trivia question, not important) 83.100.252.241 19:19, 31 July 2007 (UTC)
Yes. You spend a short time with your table of integrals, or you learn them. There is no easy way to deduce them, I was told in school.--SpectrumAnalyser 19:40, 31 July 2007 (UTC)
There's not always an easy way to deduce them. But someone must have deduced them sometime, right?
Finding antiderivatives of functions is, in general, a rather difficult problem - but there are some techniques which are known to be useful in certain situations. The article Trigonometric substitution mentions the transformation u=\tan{\frac{x}{2}}, which can help when the integrand involves trigonometric functions.
This particular example is not deep enough for the question of originator to be meaningful. -- Meni Rosenfeld (talk) 19:43, 31 July 2007 (UTC)
Thanks everyone.83.100.252.241 19:48, 31 July 2007 (UTC)

This one is not very difficult. Translate into complex exponential functions in order to avoid difficult trigonometry. Do the integration, then translate back to trigonometric notation in order to avoid complex functions in a real-number problem. The constant log(i) is absorbed into the integration constant. It is not obvious that this result, log(tan(x/2)), is the same as the result in the table of integrals. (I may have made an error.)


\begin{align}
\int\frac{dx}{\sin x}
&=\int\frac{dx}{\left(\frac{e^{i\cdot x}-e^{-i\cdot x}}{2\cdot i}\right)}
=\int\frac{2\cdot i\cdot dx}{e^{i\cdot x}-e^{-i\cdot x}}
=\int\frac{2\cdot i\cdot e^{i\cdot x}\cdot dx}{(e^{i\cdot x})^2-1}
=\int\frac{2}{(e^{i\cdot x}-1)\cdot(e^{i\cdot x}+1)}\cdot d(e^{i\cdot x})\\
&=\int\left(\frac{1}{e^{i\cdot x}-1}-\frac{1}{e^{i\cdot x}+1}\right)\cdot d(e^{i\cdot x})
=\int\left(\frac{d(e^{i\cdot x}-1)}{e^{i\cdot x}-1}-\frac{d(e^{i\cdot x}+1)}{e^{i\cdot x}+1}\right)\\
&=\log(e^{i\cdot x}-1)-\log(e^{i\cdot x}+1)
=\log\frac{e^{i\cdot x}-1}{e^{i\cdot x}+1}
=\log\frac{e^{i\cdot x/2}-e^{-i\cdot x/2}}{e^{i\cdot x/2}+e^{-i\cdot x/2}}
=\log\frac{2\cdot i\cdot\sin(x/2)}{2\cdot\cos(x/2)}\\
&=\log(\tan(x/2))+\log(i)
=\log(\tan(x/2))+C
\end{align}

Bo Jacoby 21:32, 31 July 2007 (UTC).
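A numeric spot-check of this result (an editorial sketch, not from the thread): the derivative of log(tan(x/2)), estimated by a central difference, should match 1/sin x.

 import math

 def F(x):
     return math.log(math.tan(x / 2))  # Bo Jacoby's antiderivative, 0 < x < pi

 x, h = 1.2, 1e-6
 numeric_derivative = (F(x + h) - F(x - h)) / (2 * h)  # central difference
 print(numeric_derivative)  # ~1.0729...
 print(1 / math.sin(x))     # same value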

More thanks - yes, I should have tried that - that's an obvious way. 83.100.138.237 13:55, 1 August 2007 (UTC)
...or, if you're lazy, just enter "1/sin(x)" into The Integrator. —Ilmari Karonen (talk) 22:22, 31 July 2007 (UTC)

Try this:

\int\frac{dx}{\sin x}
=\int\frac{\sin x\,dx}{1-\cos^{2}x}
=\int\frac{-du}{1-u^2},

where u=\cos x. Use partial fractions to finish the job. Is it acceptable? Twma 02:46, 1 August 2007 (UTC)

Or

\int\frac{dx}{\sin x}
=\int\frac{du}{u},

where u=\tan(x/2), after some basic trig and differentiation. 81.154.107.185 12:05, 2 August 2007 (UTC)
  • While all the ideas above are fine, if the OP wants to know how someone might have obtained the entry in the table of integrals, the reason is that they noticed that 1/sin x = csc x, and so by cleverly multiplying the numerator and denominator of (csc x)/1, we get:
\int 1/\sin x \ dx = \int \csc x \ dx = \int \frac{\csc x}{1}\cdot\frac{\csc x + \cot x}{\csc x + \cot x}\ dx = \int \frac{\csc^2x + \csc x \cot x}{\csc x + \cot x}\ dx

and now a substitution of u = csc x + cot x will finish the job. –King Bee (τγ) 12:56, 2 August 2007 (UTC)
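The thread now has antiderivatives in several forms; here is an editorial numeric check (not from the thread) that the table's -ln|csc x + cot x| and the half-angle form log(tan(x/2)) agree on (0, π). In fact csc x + cot x = 1/tan(x/2) there, so their difference is identically zero:

 import math

 def F_table(x):
     # -ln|csc x + cot x|, the table-of-integrals form
     return -math.log(abs(1 / math.sin(x) + 1 / math.tan(x)))

 def F_half_angle(x):
     # log(tan(x/2)), from the complex-exponential and u = tan(x/2) routes
     return math.log(math.tan(x / 2))

 for x in (0.3, 1.0, 2.5):
     print(x, F_table(x) - F_half_angle(x))  # ~0 (rounding only) at each point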

Non-analytic function and smoothness...

Hi. The book I'm reading is talking about analytic functions, and one of the footnotes asks:

f(x) = e^{-1/x^2}: Show that f(x) is C^\infty, but not analytic at the origin.

I think it's obvious it's not analytic at the origin, since the coefficients of the power series expansion of f(x) about the origin are all 0. But how can I go on to show that it can be indefinitely differentiated? 86.137.43.18 23:03, 31 July 2007 (UTC)

Well, everywhere except at 0, it's pretty obvious, right? So you just need to treat the case x=0. First try to show that the first derivative exists at x=0 (hint: you'll have to go back to the definition of derivative; just applying formulas isn't going to be enough). Then generalize. --Trovatore 23:57, 31 July 2007 (UTC)
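A numeric illustration of that hint (an editorial sketch, not a proof): exp(-1/h^2) vanishes faster than any power of h, which is what makes every difference quotient at 0, and hence every derivative f^{(n)}(0), equal to 0.

 import math

 def f(x):
     return math.exp(-1.0 / (x * x)) if x != 0 else 0.0  # f(0) defined as 0

 # f(h)/h^k -> 0 as h -> 0 for every fixed k; shown here for k = 1 and k = 5:
 for h in (0.5, 0.2, 0.1, 0.05):
     print(h, f(h) / h, f(h) / h**5)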
A question, which may be my own obtuseness -- how can you find the power series expansion of a function without computing its derivatives? Are you applying some sort of substitution to the power series for e^x? Tesseran 02:35, 1 August 2007 (UTC)
Well, substituting -1/z^2 into the power series for the exponential gives you the Laurent series for f at 0. Since it has infinitely many negative-power terms, that means f is not only not analytic at 0, but also has an essential singularity there. nadav (talk) 05:56, 1 August 2007 (UTC)

Since the complex function f(z) and the zero function g agree on a small negative segment of the x-axis, they are identical by the identity theorem, but f\ne g on any small positive segment of the x-axis. Hence f is not analytic. To show that f is smooth, use induction on n in the general form of f^{(n)}(0). Hope this would help. Twma 02:53, 1 August 2007 (UTC)

Where exactly do they agree, apart from at z = 0 (assuming we define f(0) = 0)?  --Lambiam 04:15, 1 August 2007 (UTC)
They agree on a small neighborhood of z=0 by identity theorem if f is analytic. Twma 00:32, 2 August 2007 (UTC)
But in this case f is not analytic.  --Lambiam 01:10, 2 August 2007 (UTC)
If f is analytic, then we have a contradiction. Hence f cannot be analytic. Twma 01:59, 3 August 2007 (UTC)
See my comment below. (I'm confused as to your statement about them agreeing on a neighborhood of zero.) –King Bee (τγ) 03:06, 3 August 2007 (UTC)
Even still, they agree on no neighborhood of zero. –King Bee (τγ) 12:59, 2 August 2007 (UTC)
Let me reword this argument. f(z) and the zero function have the same power series at 0. If two analytic functions have the same power series at a point, then they agree in a neighborhood of that point. Since f(z) does not agree with the zero function on any neighborhood of 0, these functions cannot both be analytic. Since the zero function is analytic, we conclude that f is not. Tesseran 23:00, 3 August 2007 (UTC)

Your problem is treated in Taylor_series#Properties. Bo Jacoby 14:18, 1 August 2007 (UTC).