Talk:Euclidean distance
From Wikipedia, the free encyclopedia
Is that formula for approximating the distance using only integers wrong? Surely if dx and dy are the same (e.g., at a 45-degree angle), then dx + dy − 2×min(dx, dy) will always be 0, and the approximation will always have 100% error?
- Yea, it was wrong and has been changed. StuRat 16:40, 21 September 2005 (UTC)
I think it is worth noting that the Euclidean metric used to be called the Pythagorean metric. At the least, there should be a page titled Pythagorean metric that redirects here.
- Sure, go ahead and add it. StuRat 20:07, 21 October 2005 (UTC)
fast 2d calculation values
A fast approximation of 2D distance based on an octagonal boundary can be computed as follows. Let dx = |px − qx| (absolute value) and dy = |py − qy|. If dy > dx, the approximated distance is 0.41dx + 0.941246dy.
- where do 0.41 and 0.941246 come from? --Abdull 12:59, 21 February 2006 (UTC)
- Well, 0.41 sounds like an approximation of sqrt(2) − 1. I don't know where the other coefficient comes from, or why it needs six decimal places of accuracy (while 0.41 has only two). StuRat 19:57, 21 February 2006 (UTC)
- Those coefficients come from a certain optimal interpolation, which I calculated some years ago. It goes like this: dy >= dx, so write dy = a*dx (where a is greater than or equal to 1). The distance d is then equal to dx * sqrt(1 + a^2). When you plot the square-root expression for a >= 1, it strongly resembles a plain straight line, and those mysterious coefficients are just the description of the optimal interpolating line (b = 0.41 + 0.94*a). Then d =~ dx * (0.41 + 0.94*a). The last step is realizing that a is in fact dy/dx and performing the appropriate simplification. (One point is missing here: the criterion of interpolation optimality. Frankly, I have forgotten which condition I actually used: least area between the curve and the line, or minimization of the maximum distance. I will have to find the paper and recall it :-)). --BB 09:54, 22 February 2006 (UTC)
- One more addition: IIRC, the optimality criterion comes from the error-factor formula sqrt(1 + a^2)/(0.41 + 0.941246*a). --BB
- Some years ago I developed a similar distance approximation algorithm using three terms instead of just two, which is much more accurate, and which, because it uses power-of-two denominators for the coefficients, can be implemented without division hardware. The formula is: 1007/1024 max(|x|,|y|) + 441/1024 min(|x|,|y|) − ( 40/1024 max(|x|,|y|) if max(|x|,|y|) < 16 min(|x|,|y|), else 0 ). It is also possible to implement a distance approximation without using either multiplication or division when you have very limited hardware: ((( max << 8 ) + ( max << 3 ) - ( max << 4 ) - ( max << 1 ) + ( min << 7 ) - ( min << 5 ) + ( min << 3 ) - ( min << 1 )) >> 8 ). This is just like the two-coefficient min/max algorithm presented earlier, but with the coefficients 123/128 and 51/128. I have an article about it at http://web.oroboro.com:90/rafael/docserv.php/index/programming/article/distance --Rafael
- Perhaps we need a whole new article on distance approximations. Can you give me an example of a platform where min and max functions are available but multiplication and division are not? StuRat 16:10, 18 May 2006 (UTC)
Comparing against a distance
If you want to know whether objects A and B are at distance c or less, you can compare ((Ax - Bx)^2 + (Ay - By)^2 + (Az - Bz)^2) with c^2, and similarly for other numbers of dimensions. This avoids the square root entirely. If c is a constant, then c^2 can be precomputed as well. -- Myria 06:18, 21 July 2006 (UTC)
- True, but this is already implied by the section that talks about how distances can be compared while skipping the square root operation. StuRat 05:37, 29 July 2006 (UTC)