Wikipedia:Reference desk/Archives/Mathematics/2008 March 16

From Wikipedia, the free encyclopedia

Mathematics desk
Welcome to the Wikipedia Mathematics Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.


March 16

grasping math to an intermediate level

hi, I like to get to an intermediate level of understanding of different subjects and I want to get a decent grasp of mathematics, but I'm having a hard time with it. I have a BS in math and an MS in statistics. I feel as if I'm wandering blindly in a mathematical universe, feeling the obvious, stumbling over the less-obvious, but without confidence in my travels. When I started to learn typography, say, I had that blind feeling at first but with study soon found the fundamentals and understood the basic ideas and questions in that field.

I want to get to that level of confidence with math, but I'm not sure how to get there. Obviously it's a big world out there, but is there a field that I should concentrate study on (analysis maybe?) to get an intermediate grasp (i.e., not expert and not blind beginner)? Thanks 72.150.136.11 (talk) 14:24, 16 March 2008 (UTC)

In all honesty, it's pretty much impossible to get an intermediate grasp of everything, since there is so much. I mean, I am doing an MMath, and I'd say that by doing so I was intermediate in the subjects I do, but there are others I simply have no clue about - topology, fluid mechanics, relativity, anything related to statistics... maths is so vast, and a lot of it is so remote from anything you do at school as to be an entirely different subject. Group theory is probably a useful thing to try and understand, as is basic vector calculus and mathematical analysis. I personally like complex analysis and combinatorics. Of course, combinatorial game theory is also fun to try. -mattbuck (Talk) 15:00, 16 March 2008 (UTC)
Linear algebra, while rather boring, is also a useful subject to have a working knowledge of - it seems to pop up all over the place. However, if you've got a BS in Maths I would expect you've already covered the basics of all these areas. If you want to do more Maths, you need to specialise, there's no way to have a better than undergraduate understanding of more than one or two areas. --Tango (talk) 15:19, 16 March 2008 (UTC)
I disagree with the last statement, there are many researchers who are active in several quite distinct areas. It's not easy, of course.
Mathematical logic is very important to know the basics of (say, up to the level of understanding Gödel's theorems). Likewise for topology, which at the very least one needs to know to the level of feeling the difference between metric features and topological ones. I recall that when I was applying for a math MSc, virtually all schools frowned upon my not having taken a topology course in my BA. Fortunately I had learnt the material independently; my peers who hadn't had some trouble with many of the courses.
Of course, there's the obvious stuff, like analysis (real, complex, functional, differential geometry...), algebra (groups, rings, fields, Galois theory...), discrete (combinatorics, graphs, ...), and of course computer science (computational complexity, information theory, machine learning, ...) and applied math (numerical methods, mathematical physics, game theory...). Disclaimer: This categorization is based on my own personal view and the subjects are not equally divided. -- Meni Rosenfeld (talk) 17:37, 16 March 2008 (UTC)
Well, I guess you run into trouble defining an "area" of maths. There are plenty of people that do research in what would usually be considered distinct areas, but they're usually working in some kind of overlap between them - pretty much all branches of maths overlap somewhere. Are there many people that have a solid understanding of the whole of several areas? I would say most understand just the parts of those areas that are relevant to their research. It does depend very much on how large you consider an area to be (is algebra a single area or is ring theory a distinct area to Galois theory? - obviously, you can understand both ring theory and Galois theory to a high level, but understanding the whole of algebra to a high level is pretty challenging without even trying to start on any non-algebraic areas). Put simply: I think we're both right for appropriate definitions of "area". --Tango (talk) 18:12, 16 March 2008 (UTC)

Thanks for these answers. I thought it might be impossible for a not-particularly-gifted person to walk with confidence in the math spaces. I just hate that feeling of flailing around blindly, holding only to those proofs I really really understand. Sometimes I feel like I'm missing a sense that some others have; which I suppose is true for everyone on some level. I'll explore mathematical logic and topology and see how that goes. thanks everyone 72.150.136.11 (talk) 19:00, 16 March 2008 (UTC)

While we all strive for perfection, nobody gets there. However far you go, there's always going to be more you don't understand, so you'll probably always be walking blindly, you'll just be walking in more advanced parts of the world of maths. I think the important thing is to enjoy it - study the areas of maths that interest you and that you find fun. There's little point studying anything else. --Tango (talk) 19:31, 16 March 2008 (UTC)
From what you say, I'm not sure that what you're looking for is so much a subject as a teaching style. You need some reintroductions to subjects you already know, from a different viewpoint. At the moment, I don't know of any better way to find good sources than to read things randomly and hope to hit gold (as I have occasionally), but someone else on the board might. If you want to grasp the material, try some popularizers, avoiding as best you can those who dumb it down instead of actually making it clearer. I was just reading Indra's Pearls, for instance, which gives some solid visual meaning to some pretty abstract techniques from several fields (analysis, group theory, linear algebra, complex analysis, maybe some others). If you want to understand the motivation, that is, the goals, behind something, look into its history. Even the original papers on the subject, if you can get them. For instance, I've heard over and over again this same fable about complex numbers being the solution to the equation x^2+1=0, which ties in nicely to some other equations, like x^2-2=0 and x+1=0. It wasn't until fairly recently that I found out that complex numbers were motivated for the first time, after thousands of years of people intentionally ignoring their effects on quadratic equations, by the solution to the cubic equation. There are a lot of nice books that give the story behind that, not least the Ars Magna itself. Also, something that will always help you grasp a subject is to use it. Set your sights on a number of small, random goals, and see what you can do with them. Abstraction is bad on an empty stomach; you need a lot of bulk to soak it up. Once you can navigate real situations with confidence, the patterns you find among them will motivate and support abstraction. The exercises in textbooks are a start, but don't rely on them for originality; follow your own curiosity.
I'm trying to keep my studies broad-based, since I like them that way, but if you want to focus on a single subject for a while, there's certainly plenty to choose from. It doesn't sound like you plan to use this study immediately, so focus your decision on how much you can learn about learning. It may be best to relearn something concrete as an introduction. Euclid's geometry, for instance, moving through some historical works like the Arithmetica Infinitorum or Newton's papers, then up through the crisis a century ago with Cauchy and Weierstrass et al, to get to analysis. An interesting stop you might take on the way is in Algebraland: most of calculus, as it applies to polynomials, can be derived while hardly touching limits. Black Carrot (talk) 02:28, 17 March 2008 (UTC)
I would have to agree with Black Carrot there. For me, my confidence in my mathematical ability comes from my thorough understanding of the basics (that is, HS-level and some college-level mathematics). I'm so comfortable with these rules that I have little trouble knowing whether or not they apply in a situation I've never encountered in a text. Memorization of the basic rules to a very exacting degree helps with that a lot. And curiosity is definitely the best motivator; I have learned more from my own curiosity than from those who have taught math to me. (Mainly due to the fact that I wasn't challenged in HS math class, even AP.) A math-wiki (talk) 23:49, 21 March 2008 (UTC)
I wouldn't recommend memorising the basic rules; it's much better to make sure you fully understand them and why they work. Once you've done that, you'll find you never forget them without having to make any actual effort to memorise them. (I frequently make mistakes in basic integration [as you may have seen lower down!], and that's because I didn't understand it when I first learnt it, so tried to just memorise the methods, and ended up memorising them incorrectly, and now get confused between my incorrect memory and my correct understanding and end up making mistakes. Had I just taken the time to work through it and understand it, I wouldn't have the problems I do now.) --Tango (talk) 00:37, 19 March 2008 (UTC)
Just out of curiosity (which we have already established to be important), care to elaborate on what it is that you have incorrectly memorized? -- Meni Rosenfeld (talk) 16:55, 19 March 2008 (UTC)
Well yes, Tango, of course you should understand it. I had intended to mean that you should not only know the rule but thoroughly understand it as well. I would argue one should never use a theorem they don't fully understand, and I try to observe that whenever reasonably possible. A math-wiki (talk) 23:25, 21 March 2008 (UTC)

Local Sidereal Time Conversion

How do I convert local sidereal time to something that is more commonly used? Is this actually a measure of time? I read the article about sidereal time and it appeared to be an angle measurement, a measure of place. So when someone tells me "do this action at 13:20" local sidereal time, is this something that makes sense, or is it just astrological mumbo jumbo? And if it makes sense, how do I convert it to Eastern Standard Time or Hawaii Standard Time or GMT or anything else that is more commonly used? —Preceding unsigned comment added by RastaNancy (talk • contribs) 20:45, 16 March 2008 (UTC)

Sidereal time is a measure of time, but it's not easy to convert to normal time. Regular time is determined by watching the Sun move across the celestial sphere; sidereal time is determined by watching the stars move. The stars move only due to Earth's rotation on its axis, while the Sun moves because of that and because of Earth orbiting it. Therefore, a solar day differs from a sidereal day (by about 4 minutes). This means that to convert from solar to sidereal time you need to know the date in order to work out how much they differ by. The easiest way to convert would be to find a converter online; I'm sure there are plenty. --Tango (talk) 21:42, 16 March 2008 (UTC)
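As a rough illustration of the date-dependence Tango describes, here is a sketch of computing local mean sidereal time from a UTC instant. The constants are the standard linear GMST approximation referenced to the J2000.0 epoch; the function name and the east-positive longitude convention are illustrative choices, not from the discussion above.

```python
from datetime import datetime, timezone

def local_sidereal_hours(utc, longitude_deg):
    """Approximate local mean sidereal time in hours.

    Uses the standard linear GMST formula (degrees):
        GMST = 280.46061837 + 360.98564736629 * d
    where d is days since the J2000.0 epoch (2000-01-01 12:00 UTC).
    longitude_deg is east-positive.
    """
    j2000 = datetime(2000, 1, 1, 12, tzinfo=timezone.utc)
    d = (utc - j2000).total_seconds() / 86400.0  # days since J2000.0
    gmst_deg = (280.46061837 + 360.98564736629 * d) % 360.0
    lst_deg = (gmst_deg + longitude_deg) % 360.0
    return lst_deg / 15.0  # 15 degrees of rotation per hour

# Example: Greenwich at the J2000.0 epoch itself
print(local_sidereal_hours(datetime(2000, 1, 1, 12, tzinfo=timezone.utc), 0.0))
# about 18.70 hours
```

Going the other way (finding the civil time at which a given sidereal time occurs on a given date) means inverting this, which is why online converters ask for both the date and the location.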

Possibility of solving this equation

I have the following equation:

\lambda_i = c \int_0^L \left| -1 + R\theta'(s) \cos(\phi_i) \right| \, ds

I can choose any values for φi, and get the corresponding values of λi through an experiment. The aim is to find the function θ(s) as accurately as possible. I am wondering if it is possible at all to do this. I was trying to expand θ(s) in a Fourier series with n terms (and n undetermined coefficients) and then using n different values of φi to get a system of n equations. However, it turns out that all these equations are linearly dependent. Note that φi is not a function of s. Any ideas on how best we can find θ(s) will be highly appreciated. Regards, deeptrivia (talk) 21:40, 16 March 2008 (UTC)

Do you have any additional knowledge about θ'? If it is known to be monotonic then I have an idea which might work in some cases. The essence is to start with φ ≈ π/2, for which the absolute value can be solved; then decrease φ until you see a deviation from normal. The rate of deviation should give you the ratio θ' / θ'' at the x-intercept. This might then be solvable. -- Meni Rosenfeld (talk) 22:56, 16 March 2008 (UTC)
Thanks, Meni. Although I can choose any values for φi, I can't choose too many. Practically, n is two or three, at most six. θ' is smooth but not monotonic. Also, I don't need to find θ exactly, but just as best as I can. How best do you think I can find it by finding λi for, say, n wisely chosen values of φi? Regards, deeptrivia (talk) 23:13, 16 March 2008 (UTC)
PS: I think in my case it is safe to assume that
\lambda_i = c \int_0^L \left( 1 - R\theta'(s) \cos(\phi_i) \right) \, ds
Then, since R and φi are not functions of s, we can directly integrate and get:
\lambda_i = c \left( L - R \cos(\phi_i) \left( \theta(L) - \theta(0) \right) \right)
Consequently, it appears that no matter how we vary φ, all we can find about θ is the difference between its values at 0 and L. Now, if we remove the assumption I just made regarding the absolute value, we can still break the integral up into a countable number of parts that can be directly integrated, and using a similar logic, we can argue that we cannot know anything more than θ(L) - θ(0) by varying φ. Am I right? Thanks, deeptrivia (talk) 23:33, 16 March 2008 (UTC)
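As a quick numerical sanity check of the closed form in this thread, one can pick an arbitrary smooth θ and compare the integral against c(L − R cos(φ)(θ(L) − θ(0))). All concrete values below (θ = sin, and the numbers for c, R, L, φ) are invented purely for illustration:

```python
import math

# Made-up parameters; any smooth theta with theta' not too large works.
c, R, L, phi = 2.0, 0.1, 1.0, math.pi / 3
theta = math.sin   # a smooth choice of theta(s)
dtheta = math.cos  # its derivative theta'(s)

# Midpoint-rule integration of c * (1 - R*theta'(s)*cos(phi)) over [0, L]
n = 100_000
h = L / n
integral = c * sum((1 - R * dtheta(i * h + h / 2) * math.cos(phi)) * h
                   for i in range(n))

# The closed form obtained by integrating directly
closed_form = c * (L - R * math.cos(phi) * (theta(L) - theta(0)))

print(integral, closed_form)  # these agree, up to discretization error
```

As the thread notes, the result depends on θ only through θ(L) − θ(0), so varying φ alone cannot pin down more of θ in this no-sign-change regime.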
It's certainly very hard to deduce information about θ in the general case, but it's not theoretically impossible. If you break up the interval, in some segments you will have θ(a) − θ(b) and in some θ(b) − θ(a), and when you add them up they don't cancel. Since changing θ can change the integral, knowledge of the integral can at least eliminate some possibilities. -- Meni Rosenfeld (talk) 00:15, 17 March 2008 (UTC)
Thanks. Am I at least right for the case where there is no sign change, because R\theta'(s) \cos(\phi_i) \ll 1 and there is no need to break the interval? Regards, deeptrivia (talk) 00:47, 17 March 2008 (UTC)
Yes, indeed. That is what I called the "normal" case, where the integral is reduced to a multiple of θ(L) − θ(0). Deviations from this tell you something about θ, but how to extract anything useful from them, especially with limited observations, is a mystery. -- Meni Rosenfeld (talk) 06:32, 17 March 2008 (UTC)
Am I missing something? If θ(s) solves the equation, then so does θ(s) + C, so you can only determine solutions up to an additive constant.  --Lambiam 17:17, 17 March 2008 (UTC)
Certainly, but it appears that it's rather difficult, if not impossible, to even determine it that far. --Tango (talk) 17:28, 17 March 2008 (UTC)