Wikipedia:Reference desk/Archives/Mathematics/2008 February 17

Mathematics desk
Welcome to the Wikipedia Mathematics Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.



February 17

Functions with different arguments

Is there a standard notation to distinguish different arguments/formulations of a given function that is usually written with a single argument? For example:

\begin{align}
\phi &= \phi(\beta) = \arctan(\sec(o\!\varepsilon)\tan(\beta)) \\
     &= \phi(\widehat{\sigma}) = \arctan(\cos(\widehat{\alpha})\tan(\widehat{\sigma}));
\end{align}

If you are given \phi(37^\circ), it could be either \phi(\beta) or \phi(\widehat{\sigma})! Would you incorporate the constant as a co-argument, e.g., \phi(o\!\varepsilon;37^\circ) or \phi(\widehat{\alpha}\backslash{37}^\circ), or something else?
In the case where the function itself is an argument of another function, expressed either with a single variable or with two variables, it would seem perfectly proper to express it likewise:

\begin{align}
\phi &= \phi(\widehat{\Alpha},\widehat{\sigma}) = \arcsin(\cos(\widehat{\Alpha})\sin(\widehat{\sigma})); \\
M    &= M(\phi) = M(\widehat{\Alpha},\widehat{\sigma});
\end{align}

 ~Kaimbridge~ 00:44, 17 February 2008 (UTC)

I really can't understand what this question is asking. If a = b, then f(a) = f(b). That's the substitution property of equality. Introducing a system of notation that breaks this rule is an awful idea. If you want \phi(\beta) to have one definition and \phi(\widehat{\sigma}) to have a different definition, then you're really talking about two different functions, and you need to give them two different names. —Keenan Pepper 01:19, 17 February 2008 (UTC)
I think Kaimbridge is working with some already awful notation commonly used in some subject, and trying to modify it to make more sense. If so, it would help to see a reference where this notation is used. -- Meni Rosenfeld (talk) 15:18, 17 February 2008 (UTC)
Using functions like that is an abuse of notation. It's quite common in science, where the difference between a physical value and the function for calculating that value is rather blurred (hence the common shorthand "r = r(t)" for "position is determined by time": the first r is a physical value, the second r is a function). The idea is that it's obvious from context what you mean; if it isn't, you need a better notation. In formal mathematics, you should give different functions different names, so you should have a φ₁ and a φ₂ or something. Two functions are equal if and only if they give the same result for any given input, which your two forms of φ clearly don't, so they aren't the same function, so they shouldn't have the same name. (You can get away with using the same name if you can distinguish the functions by number of variables, since that removes the ambiguity, but it's often best to distinguish them more clearly.) --Tango (talk) 15:51, 17 February 2008 (UTC)
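
For what it's worth, the "two different names" advice translates directly into code. Below is a minimal Python sketch (the function names, the placeholder constant, and the sample values are invented for illustration, not taken from the thread): the two formulations of \phi become two separately named functions, so there is no ambiguity about which one \phi(37^\circ) refers to.

import math

O_EPS = 0.0818  # placeholder constant standing in for o-epsilon; value invented for illustration

def phi_from_beta(beta):
    """phi computed from the reduced latitude beta: arctan(sec(o_eps) * tan(beta))."""
    return math.atan(math.tan(beta) / math.cos(O_EPS))

def phi_from_sigma(sigma_hat, alpha_hat):
    """phi computed from the transverse colatitude and azimuth: arctan(cos(alpha_hat) * tan(sigma_hat))."""
    return math.atan(math.cos(alpha_hat) * math.tan(sigma_hat))

# The same 37 degrees can be fed to either function without ambiguity,
# because the two formulations carry different names.
x = math.radians(37)
print(phi_from_beta(x))
print(phi_from_sigma(x, math.radians(50)))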

Okay, let me give you the actual scenario.
For Earth, \phi is the geographic latitude and \beta is the reduced latitude. At the same time, \widehat{\alpha} is the spherical/geographical azimuth and \widehat{\sigma} is the transverse geographical colatitude (likewise, \tilde{\alpha} and \tilde{\sigma} are the elliptical equivalents, based on \beta). My particular question has to do with derivatives. With respect to the latitude relationship,

\begin{align}
\beta(\phi)  &= \arctan(\cos(o\!\varepsilon)\tan(\phi)); \\
\beta'(\phi) &= \sqrt{m'(\phi)\,n'(\phi)} = \frac{\cos(o\!\varepsilon)}{1-(\sin(\phi)\sin(o\!\varepsilon))^2};
\end{align}

But in terms of the \sigma-to-latitude conversion,

\phi'(\widehat{\sigma}) = \cos(\widehat{\alpha}); \quad \beta'(\tilde{\sigma}) = \cos(\tilde{\alpha});

While the functions equate, the derivatives obviously don't! In the particular situation I am dealing with, the elliptical values are localized with respect to \widehat{\sigma}:

\tilde{\sigma}(\widehat{\sigma}) = \lim_{\Delta\widehat{\sigma}\to 0}\tilde{\sigma}; \quad \tilde{\alpha}(\widehat{\sigma}) = \lim_{\Delta\widehat{\sigma}\to 0}\tilde{\alpha};

So, \beta = \beta(\tilde{\sigma}) = \beta(\phi(\widehat{\sigma})) = \beta(\tilde{\sigma}(\widehat{\sigma})).

The ultimate purpose of all of this is this equation:

o'(\phi(\widehat{\sigma})) = C'\Big(\beta(\tilde{\sigma}(\widehat{\sigma}))\Big)\tilde{\sigma}'(\widehat{\sigma}) = C'\Big(\beta(\phi(\widehat{\sigma}))\Big)\beta'(\phi(\widehat{\sigma}))\frac{\phi'(\widehat{\sigma})}{\beta'(\tilde{\sigma}(\widehat{\sigma}))};

or

C'\Big(\beta(\tilde{\sigma}(\widehat{\sigma}))\Big)\beta'(\tilde{\sigma}(\widehat{\sigma}))\tilde{\sigma}'(\widehat{\sigma}) = C'\Big(\beta(\phi(\widehat{\sigma}))\Big)\beta'(\phi(\widehat{\sigma}))\phi'(\widehat{\sigma});


Thus \beta'(\tilde{\sigma}(\widehat{\sigma})) is azimuthal in nature while \beta'(\phi(\widehat{\sigma})) is elliptical. Is it just the author's responsibility to effectively distinguish the two via some indexing, e.g., \beta'_{1}(\tilde{\sigma}(\widehat{\sigma})) and \beta'_{2}(\phi(\widehat{\sigma})), or \beta'(\tilde{\sigma}(\widehat{\sigma}))_1 and \beta'(\phi(\widehat{\sigma}))_2, or is there some established special notation?  ~Kaimbridge~ 16:22, 17 February 2008 (UTC)
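
As a side illustration of why the two derivatives differ (and hence why, as the earlier replies suggest, they are best treated as different functions), here is a small SymPy sketch; epsilon is just a placeholder constant and phi is left as an abstract function of sigma, so this only shows the chain-rule point, not the full geodesic setup.

import sympy as sp

phi, sigma, eps = sp.symbols('phi sigma epsilon')

# beta expressed as a function of phi (the latitude relationship above)
beta_of_phi = sp.atan(sp.cos(eps) * sp.tan(phi))

# derivative of beta with respect to phi
dbeta_dphi = sp.simplify(sp.diff(beta_of_phi, phi))

# now regard phi itself as an (unspecified) function of sigma and compose
phi_of_sigma = sp.Function('phi')(sigma)
beta_composed = beta_of_phi.subs(phi, phi_of_sigma)

# differentiating the composition with respect to sigma gives beta'(phi) * phi'(sigma),
# a different object from dbeta_dphi even though both are written "beta prime"
dbeta_dsigma = sp.diff(beta_composed, sigma)

print(dbeta_dphi)
print(dbeta_dsigma)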

Empty set

One of my fellows asked me a simple question about the empty set. As I understand it, we call something a set if there are things in it, e.g. people, numbers, etc., but an empty set is still called a "set" even though there is nothing in it, according to the textbook definition. I am not a mathematician, so please simplify this for me. —Preceding unsigned comment added by Usmanzia1 (talk • contribs) 02:26, 17 February 2008 (UTC)

Yes, an empty set is a set with nothing in it. It is a collection of objects, which makes it a set - just that this collection of objects is empty. x42bn6 Talk Mess 13:55, 17 February 2008 (UTC)
Try reading our article, Empty set; it should help. It's a rather strange concept to get used to, but it's common throughout maths - the lack of something is itself an example of something (the empty product equals 1, the empty sum equals 0, the empty union is the empty set, the empty intersection is the universe, etc.). --Tango (talk) 14:37, 17 February 2008 (UTC)
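
Those empty-aggregate conventions can be checked mechanically; a quick Python illustration (the math, of course, does not depend on any programming language):

import math
from functools import reduce

print(sum([]))                        # empty sum: 0
print(math.prod([]))                  # empty product: 1
print(reduce(set.union, [], set()))   # empty union (no sets to combine): set(), the empty set
print(all(x % 2 == 0 for x in []))    # "every element of the empty set is even" is vacuously True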

\pm and \mp

What is the difference between \mp and \pm? Thanks, Zrs 12 (talk) 03:21, 17 February 2008 (UTC)

When they are separate, there is no difference. However, if they are used in the same equation, such as "a \pm b \mp c", it often denotes that if you use + in the first instance, you match it with - in the second. The previous example would generally be used to indicate two possibilities: "a + b - c" and "a - b + c". 134.173.92.17 (talk) 03:53, 17 February 2008 (UTC)
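
To spell out the pairing convention concretely, here is a small Python sketch (the numbers are arbitrary):

# "a ± b ∓ c" means the top signs go together and the bottom signs go together,
# so only two of the four sign combinations are intended.
a, b, c = 10, 3, 1

top = a + b - c      # take + with b and the paired - with c
bottom = a - b + c   # take - with b and the paired + with c

print(top, bottom)   # 12 8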
Also see Plus-minus sign#Minus-plus sign. --hydnjo talk 03:58, 17 February 2008 (UTC)
Thanks, Zrs 12 (talk) 05:17, 17 February 2008 (UTC)

Stochastic matrix Question 4

 \mathbf{p}_k = \mathbf{v}P^k.
We want to find the probability that the system is in a given state after a given number of time steps. The set of probabilities for each state after k time steps is given by the probability vector p_k. The purpose of the formula is that it gives an expression for the probability vector after k time steps in terms of the initial state vector v and the stochastic matrix P - so if we know v and P we can find the probability vector at any subsequent time. The "mathematical induction" part just means that we can derive the general formula for p_k by looking at the formulae for p_1, p_2 etc. and then generalising the pattern that we see to k time steps. Can you see where the formulae that I give above for p_1, p_2 come from? Can you see how they lead to a general formula for p_k? Gandalf61 (talk) 09:35, 12 February 2008 (UTC)

I am assuming the formulae \mathbf{p}_k = \mathbf{v}P^k that you gave above for p_1, p_2 came from Summation? If this is true, then can the formula be put in sigma notation format, \sum_{i=m}^n x_i = x_m + x_{m+1} + x_{m+2} + \cdots + x_{n-1} + x_n? --Obsolete.fax (talk) 05:28, 17 February 2008 (UTC)
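
For reference, \mathbf{p}_k = \mathbf{v}P^k is a repeated matrix product rather than a single sigma-style sum (though each entry of a product like vP is itself a sum over states, which is where sigma notation would appear). Below is a small NumPy sketch, with a transition matrix and initial vector invented purely for illustration:

import numpy as np
from numpy.linalg import matrix_power

# Example 2-state stochastic matrix P (rows sum to 1) and initial distribution v.
P = np.array([[0.9, 0.1],
              [0.5, 0.5]])
v = np.array([1.0, 0.0])   # start in state 0 with certainty

k = 3
p_k = v @ matrix_power(P, k)   # p_k = v P^k in one shot

# The same thing built up step by step: p_1 = vP, p_2 = p_1 P, ..., p_k = p_{k-1} P.
p_step = v.copy()
for _ in range(k):
    p_step = p_step @ P

print(p_k)       # [0.844 0.156]
print(p_step)    # identical (up to rounding)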

Indefinite integral of the Gamma function

Is there any way to integrate Gamma(x)? (It doesn't have to be in closed form.)

--wj32 t/c 10:34, 17 February 2008 (UTC)

Well, gamma is continuous, so yes, it has an antiderivative. The Integrator gives nothing, so I suspect the antiderivative can't be expressed in terms of functions anyone's bothered naming, but I could be wrong, and don't know anything like enough differential Galois theory to prove such a result. Algebraist 14:50, 17 February 2008 (UTC)
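
Whether or not a closed form exists, definite integrals of Gamma are easy to compute numerically, which gives a perfectly usable antiderivative up to a constant. A minimal SciPy sketch (the lower limit of 1 is an arbitrary choice):

from scipy.integrate import quad
from scipy.special import gamma

def gamma_antiderivative(x, a=1.0):
    """F(x) = integral of Gamma(t) dt from a to x, evaluated numerically."""
    value, _error = quad(gamma, a, x)
    return value

# F'(x) = Gamma(x) by the fundamental theorem of calculus; F(1) = 0 by construction.
print(gamma_antiderivative(3.0))
print(gamma_antiderivative(0.5))   # quad returns a signed result when the upper limit is below the lower one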

Factorials

How do you prove, or what's the mathematical proof, that 0! = 1? I've asked this of several math teachers and they always answer something like: "You just take it for granted", "It's just a rule", etc. —Preceding unsigned comment added by 201.167.101.193 (talk) 21:58, 17 February 2008 (UTC)

See Factorial#Definition. Basically, it's because it turns out that whenever you come across 0! in some setting, you want it to be equal to 1. For example, n! is the number of ways you can arrange n different objects in a line. How many ways can you arrange zero objects in a line? Well, one—there is one way to arrange zero objects, but that's it. —Bkell (talk) 22:12, 17 February 2008 (UTC)
You can arrange 0 objects? Without any objects, how can you do this? I'm confused. Zrs 12 (talk) 23:38, 18 February 2008 (UTC)
Okay, here I go. Are you watching? There, I did it. I arranged zero objects on my desk just now. I didn't have to do anything but stare at my desk, but I certainly did arrange zero objects on my desk. There they are, all zero of them, all in their correct places. (Note that I didn't have any choices to make while arranging these zero objects, so there was only one way to do it.) It sounds very strange to talk about arranging zero objects, but things like this are useful concepts in mathematics.
Sometimes similar statements are made about the empty set. For example, it is a true statement that "All elements of the empty set are even numbers"—if that weren't true, then you should be able to show me an element of the empty set which is not an even number in order to disprove the statement, but you can't. You can't show me any elements of the empty set at all, because it doesn't have any elements in the first place. Another way to say this is that the statement "All elements of the empty set are even numbers" is true because there are no exceptions. (Of course, it is also a true statement that "All elements of the empty set are odd numbers," or "All elements of the empty set are invisible pink unicorns.") —Bkell (talk) 14:43, 19 February 2008 (UTC)
Yeah, I get the concept. The wording is what doesn't work for me. By your logic, I would have to prove that they can't be arranged and you would have to prove that they could. Since neither can be proven, they are both correct? Anyway, I was just trying to bring attention to maybe getting a better wording. Zrs 12 (talk) 15:19, 19 February 2008 (UTC)
Well, if we're going to try to prove something, then we need to have rigorous definitions instead of vague terms like "arrange". If the rigorous definition of "arrange" is given in terms of permutations of the elements of a set, as is common, then it can be shown that there is exactly one way to arrange zero objects, because there is exactly one permutation of the elements of the empty set. This is true whether a permutation is defined as a sequence of elements listing each element exactly once (there is exactly one empty sequence) or as a bijection from a set to itself (there is exactly one function whose domain and codomain are the empty set). —Bkell (talk) 15:46, 19 February 2008 (UTC)
It's not something that can be proved, it's simply part of the definition of the function. -mattbuck (Talk) 22:50, 17 February 2008 (UTC)
Sure, but the reason it's defined that way is because it's the "right" definition to make. Defining 0! to be anything but 1 would break a lot of elegant formulas and identities. —Bkell (talk) 23:16, 17 February 2008 (UTC)
A better question than "why does 0! equal 1" is "why does 0! exist in the first place", and for that matter, "why does factorial exist in the first place?" It exists, by that I mean it is given its own name and used all over the place, because it is a convenient way of arranging information. 0! is included because in many applications the input winds up as zero in some special cases, and those special cases might as well be dealt with officially. It equals 1 because that's what you'd put as the output in each of those individual cases anyway. (Contrast this with division by zero, where the most convenient answer varies and no single definition would do the job.) At each step, the point is to summarize things as efficiently as possible. To do that, you introduce a structure, and make refinements of it. Only once you have a structure to describe can you start proving things about it. Black Carrot (talk) 23:26, 17 February 2008 (UTC)
When you're working with natural numbers, it's best to think of the factorial function as counting permutations. That is, n! is the number of distinct ways of arranging n different objects in a line. If you have zero objects, there's only one way to arrange them (don't do anything), so 0! = 1. More formally, if n! is the number of bijections from an n-element set to itself, then 0! = 1 because there is precisely one bijection from the empty set to itself. Michael Slone (talk) 23:39, 17 February 2008 (UTC)
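
Those counting claims are easy to check mechanically; a short Python illustration:

import math
from itertools import permutations

print(math.factorial(0))               # 1
print(list(permutations([])))          # [()] -- exactly one arrangement of zero objects: the empty one
print(len(list(permutations([]))))     # 1
print(len(list(permutations('abc'))))  # 6 = 3!, for comparison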
Another way to think of it is by observing that n! = (n+1)!/(n+1) for all n ≥ 1, so extend that to n = 0 and you get 0! = 1. --Tango (talk) 23:45, 17 February 2008 (UTC)
And there's the fact that the Gamma function, a beautiful extension of the factorial, takes the value 1 at 1 (Gamma(t+1) = t! for t a non-negative integer). Pallida  Mors 03:10, 18 February 2008 (UTC)