Wikipedia:Reference desk archive/Mathematics/2006 October 12

The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions at one of the pages linked to above.


PROPOSED CHANGES TO THE REFERENCE DESK

If you haven't been paying attention to Wikipedia talk:Reference desk, you may not know that a few users are close to finishing a proposal (with a bot, now in testing and very close to completion) which, if approved by consensus, will be a major change for the Reference Desk.

Please read the preamble here, and I would appreciate it if you signed your name after the preamble, outlining how you feel about what we are thinking.

This notice has been temporarily announced on all of the current desks.  freshofftheufoΓΛĿЌ  06:58, 12 October 2006 (UTC)

For convenience, I propose that any reactions to this announcement be limited to Wikipedia:Reference_desk/Miscellaneous#PROPOSED_CHANGES_TO_THE_REFERENCE_DESK. DirkvdM 07:59, 12 October 2006 (UTC)

Bounded polynomials

I'm looking for some polynomials $p_n(x)$ such that $p_n(0) = p_n(1) = 0$ and $|p_n(x)| \le 1$ for $0 \le x \le 1$, and, most importantly, for some fixed real $\theta$, have $|p_n(e^{i\theta})| > 1$ when $n$ is large enough. In a sense, I want $|p_n(e^{i\theta})|$ to be "as large as possible". Such polynomials are easy to find for $\theta = -\pi$, but I'm having trouble dreaming something up for small values of $\theta$. linas 15:28, 12 October 2006 (UTC)

Idea: take $\frac{x(1-x)^2}{1-2x\cos\theta+x^2}$ and approximate by using the geometric series.--gwaihir 15:47, 12 October 2006 (UTC)
Excellent! That's exactly what I needed! And its so ...obvious, yet I would have scratched my head for days :-( Thanks! linas 02:39, 13 October 2006 (UTC)
Let's see $p(x)=3x(1-x)$: it is zero for $x$ equal to 0 or 1, it does not exceed $3/4<1$ for $x\in[0,\,1]\subset\mathbb{R}$, and for $z=e^{i\pi/2}=i$ it is $3i(1-i) = 3(i+1)$, which gives an absolute value equal to $3\sqrt{2}$.
Is $\theta=\pi/2$ small enough and $3\sqrt{2}$ big enough for your needs? --CiaPan 16:46, 12 October 2006 (UTC)
Well, I needed it to work for any θ, no matter how small, and to be unbounded as n got large. gwaihir's answer was spot-on. linas 02:39, 13 October 2006 (UTC)
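To see how gwaihir's suggestion plays out numerically, here is a minimal sketch assuming NumPy (the angle $\theta = 0.3$ and the truncation degree $N$ are arbitrary illustrative choices, not part of the thread above). It uses the fact that the Taylor coefficients of $\frac{1}{1-2x\cos\theta+x^2}$ are $\sin((k+1)\theta)/\sin\theta$, multiplies by $x(1-x)^2$, and truncates. The truncations stay uniformly bounded on $[0,1]$ (though not necessarily by 1, so a constant rescaling may be needed to enforce the exact bound $|p_n|\le 1$), while $|p_N(e^{i\theta})|$ grows as $N$ increases.

    import numpy as np

    theta = 0.3        # an arbitrary small angle for illustration
    N = 200            # truncation degree; increase it to watch |p_N(e^{i theta})| grow

    # Taylor coefficients of 1/(1 - 2*cos(theta)*x + x^2): sin((k+1)*theta)/sin(theta)
    k = np.arange(N + 1)
    u = np.sin((k + 1) * theta) / np.sin(theta)

    # Multiply by the numerator x*(1 - x)^2 = x - 2x^2 + x^3 and truncate to degree N
    num = np.array([0.0, 1.0, -2.0, 1.0])
    coeffs = np.convolve(num, u)[:N + 1]          # coefficients of p_N(x), low degree first

    # Size on the real interval [0, 1]: stays bounded as N grows
    xs = np.linspace(0.0, 1.0, 2001)
    print("max |p_N| on [0,1]:", np.abs(np.polynomial.polynomial.polyval(xs, coeffs)).max())

    # Size at x = e^{i*theta}, where the rational function has its pole: grows with N
    z = np.exp(1j * theta)
    print("|p_N(e^{i theta})|:", abs(np.polynomial.polynomial.polyval(z, coeffs)))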

Weird question

If x + 1/x = y, then what is x^3 + 1/x^3? Can anyone help? Even my math teacher can't figure it out. (This is NOT a HW problem.) John 16:52, 12 October 2006 (UTC)

You'll need to do something to the first equation to make it look like the second. Try the most obvious thing; it won't work perfectly, but it'll get you halfway there. Melchoir 17:00, 12 October 2006 (UTC)
You can also just solve the corresponding quadratic equation for x (multiply by x to get rid of 1/x), no real problems there. But Melchoir's solution is definitely shorter.--gwaihir 17:53, 12 October 2006 (UTC)
    • It is not a weird question. You are within a millimeter of reinventing Chebyshev polynomials. Rich 19:18, 12 October 2006 (UTC)

y³ ? --RedStaR 20:53, 12 October 2006 (UTC)

Hm, well, if you cube x + 1/x, i.e. (x + 1/x)^3 = x^3 + 1/x^3, you have multiplied both sides by (x + 1/x)^2, but since x + 1/x = y I guess you could substitute y in for it on the other side, giving x^3 + 1/x^3 = y(y^2) = y^3. So yeah, I'd go with y³. Philc TECI 22:28, 12 October 2006 (UTC)
(x + 1/x)³ is not x³ + 1/x³!!!! When you exponentiate a binomial (i.e. a + b), you use the binomial theorem. See my answer below. Oskar 22:30, 12 October 2006 (UTC)
(edit conflict) Yeah, that's not how math works.
I tried it, and actually it's very simple (I'm surprised that your teacher couldn't solve it). We begin, as expected, with cubing (note: Wikipedia's TeX engine seems very flaky, so I'll use ASCII):
x + 1/x = y
(x + 1/x)³ = y³
Thanks to Messrs. Pascal and Newton, this kind of exponentiation is easily expanded (using the third row of Pascal's triangle and the binomial theorem, we get (a + b)³ = a³ + 3a²b + 3ab² + b³):
x³ + 3x²*1/x + 3x*1/x² + 1/x³ = y³
x³ + 3x + 3*1/x + 1/x³ = y³
So then we really have our answer don't we? We can just say
x³ + 1/x³ = y³ - 3x - 3*1/x
and that'd be correct. Kind of an anti-climax? Maybe there's something we're missing... Hmm, that 3 looks factorable!
x³ + 1/x³ = y³ - (3x + 3*1/x)
x³ + 1/x³ = y³ - 3(x + 1/x)
Wait, wait! I recognize that expression! It's the very first equation we had! Remember, x + 1/x = y. Let's substitute our little hearts out!
x³ + 1/x³ = y³ - 3y
Voila! We have an answer. I believe that is how you were supposed to solve it; the trick was remembering that y = x + 1/x and realising that you could substitute it. Mini-exercise: to confirm our answer, expand y³ - 3y in terms of x, and make sure that we do, in fact, end up with x³ + 1/x³. Cheers! Oskar 22:28, 12 October 2006 (UTC)
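If you want to double-check that algebra by machine, here is a quick sketch using SymPy (the use of SymPy and the symbol name are just illustrative choices, not part of the thread above):

    import sympy as sp

    x = sp.symbols('x', nonzero=True)
    y = x + 1/x
    # Claim from the thread: x**3 + 1/x**3 equals y**3 - 3*y once y = x + 1/x.
    print(sp.simplify((y**3 - 3*y) - (x**3 + 1/x**3)))   # prints 0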

dy by dx

Why is $\frac{dy}{dx}$ often pronounced "dy by dx"? — Matt Crypto 21:19, 12 October 2006 (UTC)

I suppose that it has to do with the basic idea of differentiation. You take y = f(x), you increase x by dx, and you see what increase you get in y. I.e., y + dy = f(x+dx), dy = f(x+dx) - f(x) and finally dy/dx = (f(x+dx) - f(x))/dx. What I'm saying is that dx is the "controlled" variable, the one that you limit down, and dy is the "result". You get dy by changing dx.
In reality, I have no idea, I'm just guessing. It's just something people say, just accept it :) Oskar 22:39, 12 October 2006 (UTC)
On second thought, maybe it's a historical brevity thing. Leibniz, who invented the notation, used infinitesimals, not limits. To get the derivative he divided dy by dx. Maybe we today just dropped the "divided", since that's no longer the way we think about calculus. Oskar 22:42, 12 October 2006 (UTC)
I often pronounce it just "dy dx"... when no confusion will result®. Melchoir 22:47, 12 October 2006 (UTC)
I say "the derivative of y with respect to x". StuRat 00:59, 13 October 2006 (UTC)
For the same reason why 3/5 is pronounced "3 by 5", I guess. deeptrivia (talk) 02:14, 13 October 2006 (UTC)
I pronounce that "three fifths". I also pronounce .6 "three fifths", though for less-obvious decimals I just read off the place value instead of simplifying. Oh, and I say "dy dx". --frothT C 18:58, 17 October 2006 (UTC)
Perhaps it originates from the symbol for small changes, the Greek letter δ, which eventually became "d"? x42bn6 Talk 18:55, 13 October 2006 (UTC)
Thanks for your answers! — Matt Crypto 19:59, 14 October 2006 (UTC)

de Bruijn sequence

Hey y'all!

I'm after a very specific number or equation, used to crack numbered locks. You use it by repeatedly entering its digits; after you keep entering the number, the door, lock, or other password-protected device opens.

I know that this number exists from an X-Files fan book, which covered series 1 and 2. In one of the episodes in series one and two, Scully and Mulder used this number/equation/code to get through a number-lock door. They had to repeatedly enter the number, and, eventually, they opened the door.

I can't say more than that, unfortunately. However, if anyone has any other useful number/equation that does something similar to that, I'd love to hear about it. Scalene 22:03, 12 October 2006 (UTC)

Um, The X-Files is fiction. If you don't have any information about the combination, you have to try every possibility. There's no way around that. The "number/equation/code" you're looking for is called a brute force attack, also known as counting. Am I missing something? —Keenan Pepper 22:56, 12 October 2006 (UTC)
I found it, eventually: the de Bruijn sequence. I searched for an hour on the net and found it by luck. So, that leads me to the second part of my question:
What is the basic algorithm for a de Bruijn sequence with base 2 and length 2 (as in the length of the code you're trying to crack)? And how can this be changed to handle different bases and lengths of the required result?
Essentially, I need an equation for base 10, with a length of 4.
Scalene 23:39, 12 October 2006 (UTC)
See the de Bruijn sequence page again - it describes the way to find your sequence in detail.
Probably there is no compact equation that would give what you need. You'll have to build a proper graph and traverse it with a Hamiltonian cycle. The graph will have 10^4 nodes (one for each possible 4-digit code), connected with edges (an oriented edge leads from node A to node B when you can derive code B from code A by removing the first digit and appending one at the end). That graph has a Hamiltonian cycle, which defines the de Bruijn sequence for your problem. --CiaPan 06:35, 13 October 2006 (UTC)
Oh, I see. I assumed the lock had something like an enter key and would accept the correct sequence only in isolation. Still, it only speeds up the process by a constant factor. The sequence produced by a maximal linear feedback shift register is quite close to a de Bruijn sequence, but not exactly, and I'm not sure how to convert from one to the other. Anyone have a reference? —Keenan Pepper 07:02, 13 October 2006 (UTC)
I must have misunderstood, then – I assumed the lock accepts any input stream (key sequence), and opens immediately when you enter the proper digits, ignoring any wrong input before. And yes, the de Bruijn sequence reduces the time by a constant factor only. It is the best you can get, though. It's obvious that every two different codes must begin at different positions, so you cannot pack 10^4 codes into a string of fewer than 10^4 digits.
On the other hand, I'm afraid the only way to break a lock with an enter key is trying all 10^4 possible codes. A lock does not give any Mastermind-like feedback which you could use to refine your guess in subsequent attempts. --CiaPan 09:29, 13 October 2006 (UTC)
Some combination locks actually do – a slight but perceptible difference in how much you can move things. I've occasionally succeeded in opening a lock whose combination was lost using just that feedback, and some luck. Also, a decimal Gray code sequence may help to reduce the physical effort, but the gain is marginal and typically not worth the complication.  --LambiamTalk 12:20, 13 October 2006 (UTC)
The Hamiltonian path problem is, unfortunately, NP-Complete, making the de Bruijn sequence a really tricky thing to calculate. - Rainwarrior 17:08, 15 October 2006 (UTC)
You can make do with an Eulerian cycle over the directed graph whose vertices are the (n–1)-length words over the alphabet (here, digits), with an edge from uσ to σv for every pair of symbols u and v and every (n–2)-length word σ. This is an easy problem. Since every vertex has two incoming and two outgoing edges, such a cycle exists. This link gives an extensive account.  --LambiamTalk 18:32, 15 October 2006 (UTC)
I'm afraid you can't. The Eulerian cycle visits each edge in the graph. To do that you would have to visit each node N times for an N-digit alphabet (N>1), whilst we need each node to be visited (i.e. each code to be generated) exactly once. Also, the last part of your comment is false: in general we have more than two edges starting and more than two edges ending at every node (unless we consider binary codes). It's true, however, that finding the Eulerian cycle is easy, because at each vertex there is the same number of incoming and outgoing edges (so you can leave every vertex you visit, until you come back to the starting node). --CiaPan 05:49, 17 October 2006 (UTC)
Read what I wrote carefully again. How many edges are there? What is the length of a de Bruijn sequence? Do you see a similarity? The elements of the sequence are not represented by the vertices but by the edges. Look at the last diagram in our article.  --LambiamTalk 06:29, 17 October 2006 (UTC)
Ahh, so can the Euler path be constructed in a greedy fashion? (Or at least in polynomial time?) I can't think of a good way offhand. - Rainwarrior 07:01, 17 October 2006 (UTC)
I should have just looked at Euler path. So apparently Fleury's algorithm is what Scalene should be using to find this sequence. - Rainwarrior 07:22, 17 October 2006 (UTC)
Testing whether removal of an edge results in loss of connectivity is expensive, and no incremental method is known. A faster way is to represent the path as a doubly-linked list, take at each vertex just any not yet traversed edge, and when coming to a premature dead end, backtrack – without destroying the tail of the path – to a vertex that still has outgoing edges, cut the path open there, and continue. When you come again to that vertex, connect the loose tail and if necessary backtrack again, etc.  --LambiamTalk 08:40, 17 October 2006 (UTC)
That'd do it nicely. - Rainwarrior 04:03, 18 October 2006 (UTC)
Though, I'm wondering how exactly this "Fleury's Algorithm" is actually implemented... I can't seem to find an adequate description. All of the pages I've found basically say to avoid bridges when possible, but neglect to explain how to maintain the list of bridges. What you suggest does the bridge test, functionally, but while also finishing up the cycle adjacent to the bridge. Hmm... Perhaps instead of your suggestion, we could first build the unconnected cycles, keeping a list of the edges that could not be added, then at the end break these cycles open to add the "bridge" edges in this list, which should occur in pairs (or else there would be no Euler path), thus avoiding backtracking entirely? (Both methods should be linear time anyway, I think, so this would be a minor difference.) Perhaps this is what is meant by Fleury's algorithm: identify the bridges by forming cycles, then add the bridges? -- Rainwarrior 04:17, 18 October 2006 (UTC)
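For reference, here is a minimal Python sketch of the Euler-cycle approach along the lines Lambiam describes above (follow any not-yet-used edge out of the current vertex, and splice each detour back in when it dead-ends; this is essentially Hierholzer's algorithm rather than Fleury's). The function name, the all-zero starting vertex, and the alphabet 0..k-1 are arbitrary illustrative choices.

    def de_bruijn(k, n):
        """Cyclic de Bruijn sequence B(k, n) over the alphabet 0..k-1, built by
        walking an Eulerian circuit of the graph whose vertices are the
        (n-1)-digit words and whose edges are the n-digit words."""
        start = (0,) * (n - 1)
        unused = {}                    # vertex -> outgoing symbols not yet traversed
        stack = [start]
        circuit = []                   # vertices of the Euler circuit, built in reverse
        while stack:
            v = stack[-1]
            if v not in unused:
                unused[v] = list(range(k))
            if unused[v]:
                s = unused[v].pop()    # follow any not-yet-used edge out of v ...
                stack.append(v[1:] + (s,))
            else:
                circuit.append(stack.pop())   # ... and retire v once it is a dead end
        circuit.reverse()
        return [v[-1] for v in circuit[1:]]   # edge labels along the circuit: k**n symbols

    # 4-digit decimal lock: the cyclic sequence plus its first 3 digits covers every code.
    cyclic = de_bruijn(10, 4)
    keys = cyclic + cyclic[:3]
    print(len(cyclic), len(keys))      # 10000 10003

As discussed above, this gives a cyclic sequence of 10^4 digits; appending its first 3 digits again yields a linear key sequence of 10^4 + 3 presses that contains every 4-digit code.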