User talk:NickyMcLean
From Wikipedia, the free encyclopedia
Welcome!
Hello, NickyMcLean, and welcome to Wikipedia! Thank you for your contributions. I hope you like the place and decide to stay. Here are a few good links for newcomers:
- The five pillars of Wikipedia
- How to edit a page
- Help pages
- Tutorial
- How to write a great article
- Manual of Style
I hope you enjoy editing here and being a Wikipedian! Please sign your name on talk pages using four tildes (~~~~); this will automatically produce your name and the date. If you need help, check out Wikipedia:Questions, ask me on my talk page, or place {{helpme}}
on your talk page and someone will show up shortly to answer your questions. Again, welcome! Cheers, Tangotango 05:18, 27 April 2006 (UTC)
Floating point example
Greetings. I have made significant changes to your pi-as-computed-by-Archimedes example. It's a really good example, but I think that doing it for both the inscribed and circumscribed polygons is somewhat redundant and confusing. (It wasn't redundant for Archimedes -- he needed error bounds. But we already know the answer.) The fact that the numbers get close to pi and then veer away is a nice touch. I also used exact 64-bit precision, since that's "standard".
Anyway, I thought I'd give you a 'heads up' on this; I don't know whether this is on your watchlist. Feel free to discuss this on the floating point talk page, or my talk page. William Ackerman 00:54, 27 July 2006 (UTC)
Floating point -- edit conflict!!!!!
I just made a major edit to reorganize the page. (Basically, I moved the hideous "accuracy and misconceptions" section down next to the equally hideous "problems" section, so that they can all be cleaned up / butchered together.) Unfortunately, I got a notice of an editing conflict with you, apparently covering your last 2 changes: 22:03 30 Aug ("re-order for flow") and 22:08 30 Aug ("accuracy and common misconceptions"). Since my changes were much more extensive than yours, I took the liberty of saving my changes, thereby blowing yours away. I will now look at yours and attempt to repair the damage. Sorry. Someday this page (which, after all, is an extremely important subtopic of computers) will look respectable. :-) William Ackerman 23:07, 30 August 2006 (UTC)
It looks as though I didn't break anything after all. I got a false alarm. William Ackerman 00:16, 31 August 2006 (UTC)
No worries! I got into a bit of a tangle with the browser (Firefox), realising that I had forgotten a typo-level change (naturally, this is observed as one activates "post", not after activating "preview"), and used the back arrow. I had found via unturnut uxplurur a few days earlier that the back arrow from a preview lost the entire text being edited, so I didn't really think that back-arrowing to add a further twiddle would work, but it seemed worth a try as a quick fix. But no. On restarting properly, the omitted twiddle could be added.
I agree that floating-point arithmetic is important! I recall a talk I attended in which colleagues presented a graph of probabilities (of whatever), and I asked what was the significance of the Y-axis's highest annotation being not 1 but 1.01? Err... ahem... On another occasion I thought to use a short-cut for deciding how many digits to allow for a number when annotating a graph, via Log10(MaxVal), and learnt, ho ho, that on an IBM360 descendant, Log10(10.0000000) came out as 0.99999blah which when truncated to an integer was 0, not 1. Yes, I shouldn't have done it that way, but I was spending time on the much more extensive issues of annotation and layout and thought that a short cut would reduce distraction from this. In the end, a proper integer-based computation with pow:=pow*10; type stepping was prepared. And there are all the standard difficulties in computation with limited precision as well.
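The log10 shortcut's failure mode can be sketched in Python (function names are mine, not from the discussion; note that on IEEE-754 doubles math.log10(10.0) happens to come out as exactly 1.0, so the IBM 360-style hexadecimal-float failure may not reproduce there, but the integer stepping version is safe regardless):

```python
import math

def digits_via_log(max_val):
    # Tempting shortcut: digit count from a truncated logarithm.
    # On hexadecimal floats (e.g. IBM 360 descendants), Log10(10.0) could
    # come out as 0.99999..., truncating to 0 and giving the wrong count.
    return int(math.log10(max_val)) + 1

def digits_via_stepping(max_val):
    # The robust integer-based alternative: pow := pow * 10 stepping.
    count, pow10 = 1, 10
    while pow10 <= max_val:
        count += 1
        pow10 *= 10
    return count

print(digits_via_stepping(10))    # 2
print(digits_via_stepping(9999))  # 4
```

The stepping version uses only exact integer arithmetic, so it cannot be undermined by the floating-point representation of the logarithm.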
Floating Point cancellation
Nicky,
- Why not align the calculation? And, despite JakeVortex's earlier statement, the round is AFTER the subtraction.
It would be fine for the example to cancel further (if that is what you are asking). I was trying to tie into an earlier example. Maybe we need to name the values. But I was trying to show that if you compute z := x + y (rounded to 7 digits), then w := z - x doesn't give you y back. And I don't understand your comment about the round being AFTER the subtraction. The subtraction is exact, so the "rounding step" doesn't alter the value of the result. Thanks for the continued help with the page. ---Jake 21:16, 16 October 2006 (UTC)
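Jake's point can be sketched in Python with 64-bit doubles (the particular values are mine, chosen so the effect shows):

```python
# With y tiny relative to x, the rounding in z := x + y discards y entirely,
# so w := z - x cannot give y back.
x = 1.0e16   # near the top of double precision's ~16 decimal digits
y = 1.0
z = x + y    # rounds (ties-to-even) back to exactly 1.0e16
w = z - x    # this subtraction is exact -- but z has already lost y
print(w)     # 0.0, not the original y == 1.0
```

The damage is done entirely by the rounding inside the addition; the subtraction afterwards introduces no error of its own.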
Jake, Earlier I had appended an extension
- Nearly all of the digits of the normalized result are meaningless. This is cancellation. It occurs when nearly equal numbers are subtracted, or numbers of opposite sign but nearly equal magnitude are added. Although the trailing digits are zero, their value could be anything.
- This is the point that we need to tease apart. From the point of view of the floating-point subtraction operation, the trailing digits aren't meaningless at all. They have to be zero, since the result of subtracting the two inputs is exactly representable. The issue you are getting at shows up when trying to analyze the error introduced in an algorithm that involves a subtraction after some earlier operation which required rounding. And this is why I say it isn't the subtraction that is at fault: if you can prove that the inputs to the subtraction are exact, then the result is also exact and no error has been introduced. What is happening with cancellation is that if you have some absolute errors in the inputs, you might have up to twice the absolute error (or, more tightly, the sum of the absolute errors) in the result. Why this can be alarming is that the relative error can become very much larger, up to the point of being 100% when the returned answer is zero but the desired value is not.
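This relative-error blow-up can be illustrated in Python (the decimal values are mine, standing in for the 7-digit example above):

```python
# Inputs rounded to 7 significant digits before the subtraction; the
# "true" values here are illustrative, not from the discussion.
a_true, b_true = 1.2355851234, 1.2345678901
a = round(a_true, 6)               # 1.235585 -- the rounding happened HERE
b = round(b_true, 6)               # 1.234568
diff = a - b                       # the subtraction itself is exact:
                                   # operands are within a factor of two
true_diff = a_true - b_true
abs_err = abs(diff - true_diff)    # bounded by the sum of the input errors
rel_err = abs_err / abs(true_diff) # but relatively the error has exploded
print(abs_err, rel_err)            # absolute error stays tiny; relative
                                   # error is thousands of times that of
                                   # the inputs
```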
- The approach you introduce here is close to significant-figures analysis, or could be done more formally with interval arithmetic, but neither of these is what floating-point arithmetic actually does.
- My adjustment was to append "Although the trailing digits [of the result] are zero" etc, not wanting to mess with the previous author's words. I should have added the [...] part, but the example calculation with the ??? seemed to me to be a clear demonstration irrespective of any word tangles. NickyMcLean 20:14, 17 October 2006 (UTC)
- The numbers entering the calculation are presumably not known to be exact values, so the calculation might have been described as
    e=1;  s=3.141600??????...
  - e=1;  s=3.141593??????...
  ----------------
    e=1;  s=0.000007??????...
    e=-5; s=7.??????...
which someone has whacked. Your text goes
    e=5; s=1.235585
  - e=5; s=1.234567
  ----------------
    e=5; s=0.001018   (true difference)
    e=2; s=1.018000   (after rounding/normalization)
In this, clearly the subtraction has been performed first, and then comes the rounding/normalisation. Your edit comment "It is not the subtraction which is the problem, it is the earlier round" is unintelligible unless "earlier" is replaced by "later" (thus my remark). Though I was wondering if you meant to put the blame on the rounding that went into the formation of the input numbers (the 1.235585 and 1.234567), which, had they been held with more accuracy, would mean that the cancellation would be less damaging - except of course that only seven digits are allowed.
- Exactly; I was meaning to put the blame on the rounding that went into the formation of the input numbers.
In these examples, there is no rounding after the subtraction, only the shifting due to normalisation. Thus I erred in saying that the round was after the subtraction, since there is no round. After an operation there is the renormalisation step, which in general may involve a round first; thus the order of my remark. With subtraction, only if there were a shift for alignment would there be rounding of the result, and if there is shifting, the two values can't be close enough to cause cancellation! A further example would have operands that required alignment for the subtraction (as in the earlier example demonstrating subtraction), and then rounding could result as well as cancellation. (Thinks) But no.
    11.23457               (both seven digits, and not nearly equal)
  -  1.234561
    11.234570              (an eighth digit appended for alignment)
  -  1.234561
    10.000009              (exact subtraction)
    10.00001               (rounded to seven digits)
So, cancellation doesn't involve rounding. Loss of significance (which does involve rounding, but that's not the problem) and cancellation are thus two separate phenomena. Clearly, cancellation occurs when the high-order digits match (which requires the same exponent value), while rounding works away at the low-end digit.
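The claim that close operands subtract exactly is Sterbenz's lemma for binary floating point: if y/2 <= x <= 2y, then x - y is exactly representable. A small Python check (operand values are mine):

```python
from fractions import Fraction

x, y = 3.1416, 3.1415926   # nearly equal doubles (illustrative values)
fp_diff = x - y            # massive cancellation...
# ...but no rounding: operands within a factor of two of each other
# subtract exactly (Sterbenz's lemma), as exact rational arithmetic
# on the very same binary operands confirms:
exact_diff = Fraction(x) - Fraction(y)
assert Fraction(fp_diff) == exact_diff
```

Fraction(float) recovers the exact value of each double, so the comparison shows the subtraction introduced no error at all; the damage, if any, was already in the operands.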
- The one exception to the usual finding that a subtraction which results in cancellation does not involve rounding is if you have an operation which can accept inputs of wider precision than the result (or, put another way, can round the result to a shorter precision than the inputs). In such a case you can have both cancellation and a genuine rounding. The Intel x87 does this if you operate with short (single-precision) rounding but load 80-bit values onto the stack (or change the precision control with values already on the stack). But this is a nuance best left for a different page, I would think.
- The provenance of the input numbers is no concern of the subtraction operation, and indeed would open up a large discussion belonging to numerical analysis. The action of loading high-precision numbers and then rounding to a smaller precision would to me be an additional operation separate from the subtraction, effected in pseudocode by Single(x) - Single(y) rather than, say, Single(x - y), though the compound operation would be an example of both cancellation and rounding, as you have supplied. (I do precision abandonment in programs that store data in vast volume as real*4, because the source data is known to be good to only about one part in 10,000, but work in real*8 - the rule is of course to round only the final result, not the intermediate calculations.)
- Undiscussed (as belonging elsewhere?) is the practice of "guard bits" in the hardware performing arithmetic. In the case of the 8087 et seq., which has three guard bits, it has never been clear to me where these might be stored (as on a context switch), but I've never had the patience to chase this bunny over the horizon when there are so many others to pursue. If those bits are not saved, then a computation will vary depending on what should be irrelevant: the execution of other tasks while yours is running. If they are saved, then there is a new floating-point format, of 83 bits.
- We are agreed over what is happening (once we've been through the details!); the difficulty is to phrase matters so that there will be clear, unambiguous, not misleading and misconception-crushing communication of that understanding to persons not already having it, all in the one sentence. Misunderstanding is easy; I misunderstood your remark's usage of "round" because it was the subtraction operation under discussion, not the origin of the numbers going into it. NickyMcLean 20:14, 17 October 2006 (UTC)
The bit about z:=x + y; w:=z - x; relates more to the Kahan Summation algorithm, which I have also messed with though to less disfavour: perhaps WA hasn't noticed. I seem to have upset him.
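The z := x + y; w := z - x step is indeed the heart of the compensated step in Kahan summation; a minimal Python sketch (mine, not taken from the article text):

```python
def kahan_sum(values):
    # Compensated summation: c carries the low-order part lost when each
    # addend is folded into the running total.
    total = 0.0
    c = 0.0
    for v in values:
        y = v - c             # apply the carried correction
        t = total + y         # big + small: the low bits of y are lost...
        c = (t - total) - y   # ...and recovered here -- exactly the
                              # (x + y) - x failure, put to work
        total = t
    return total

vals = [1.0] + [1e-16] * 100000
# Naive left-to-right summation never moves off 1.0, because each 1e-16
# addend is below half an ulp of 1.0; Kahan recovers the 1e-11 total.
print(kahan_sum(vals))   # ~1.00000000001
```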
- This certainly is a step in the Kahan Summation algorithm, but my point was intended to clarify the more elementary observation that in floating point we do not have the distributive law: (x + y) - x == y
- --Jake
- Yep. I had even refined some of the details on the violation of axioms of real arithmetic (like, introducing the text x(a + b) = xa + xb for those who don't quite recall what the distributive axiom is, or maybe it was for the associative axiom), though as I type now, I don't know exactly what survives the current ferment. NickyMcLean 20:14, 17 October 2006 (UTC)
Interprocedural optimization
Hi. I noticed you were the original author of the Interprocedural optimization page. I didn't see any external references or sources to other websites. Perhaps you could put a note on the talk page if it is your original work. Thanks -Hyad 23:36, 16 November 2006 (UTC)
As I recall, there was some reference to interprocedural optimisation in some article I had come to, that led to nothing - a red link, I think. So (feeling co-operative), to fill a hole, I typed a simple short essay on the spot. The allusion is to the Halting Problem (and Turing's answer to the Entscheidungsproblem), which I didn't elaborate upon, but otherwise the text is mine. A much larger article could be written on the details, especially along the lines of "invariants" or other properties that a procedure might require or not, create or not, with advantage taken or not in the code surrounding the procedure invocation, but I stuck with brevity and the basics. NickyMcLean 01:51, 18 November 2006 (UTC)
- Thanks so much for your response and for writing the article. It seems like an interesting topic. Too bad no one has touched it in months; oh well. -Hyad 08:11, 18 November 2006 (UTC)