Talk:Corner detection
Possible Infobox
I have created an infobox for all pages related to corner/edge/blob/feature/interest point detection: User:Keeganmann/Corner Detection Navagation. I think it would help with navigation.
Should I add it and is it a good idea? Keeganmann (talk) 00:46, 17 February 2008 (UTC)
I have looked at the infobox that you created. It contains some information that may be informative. However, it unfortunately has a strong bias towards certain parts of the information in the feature detection pages (edge detection, corner detection, blob detection, ridge detection). In particular, it gives overemphasis to the Harris affine and Hessian affine detectors while omitting other well-known feature detectors, such as the regular Harris operator and the Shi and Tomasi corner detector. The notion of ridge detection is also missing. If an information box is to be created, it has to be more balanced with regard to the technical contents. The information in the table of feature detectors in the page Feature detection (computer vision) is more balanced in this regard. Concerning the last item "Miscellaneous" in the table, it would be more informative to rename it "Affine invariant feature detectors" and to put the Harris affine and Hessian affine detectors under such a header. Furthermore, the link to the Laplacian operator should instead point to the section on the Laplacian of Gaussian operator within the blob detection page. Similarly, the link to the determinant of the Hessian operator should point to the section on this topic within the blob detection page. Tpl (talk) 10:35, 18 February 2008 (UTC)
Please feel free to modify the infobox if you feel that you can make it better. Keeganmann (talk) 04:13, 21 February 2008 (UTC)
Thanks for the invitation. Now, I have updated the infobox along the directions I had in mind. I've also included it in the feature detection (computer vision), interest point detection, corner detection and blob detection pages. Probably one should give some further thought to how the figure illustrations from the different pages can be used appropriately. Please, give your suggestions. Tpl (talk) 13:38, 23 February 2008 (UTC)
Recent changes
Did some minor editing here and there, but most importantly squared the trace term in the expressions for Mc to make them consistent with the literature. --KYN 13:22, 6 May 2006 (UTC)
Oops, my bad... it was a typo, the previous step, I had the square. Retardo 20:07, 6 May 2006 (UTC)
Requested move
I suggest that the article is moved to the heading "Interest point". The reason is that it is generally accepted that all the methods described here detect general interest points rather than corners specifically. There is no reason why Wikipedia must add to this confusion by presenting these methods under the heading "Corner detection". Also, the heading "Interest point" is more general than "Interest point detection" and can include aspects of interest points other than detection, e.g., tracking or other applications which use the detected points. --KYN 18:04, 31 August 2006 (UTC)
I agree RE: interest points. I think there should be a short page on corner detection explaining the slightly confused terminology, but otherwise redirecting to interest points. In practice, even brand new papers call it corner detection. Serviscope Minor 20:35, 31 August 2006 (UTC)
There are other interest point operators than those that can be referred to as corner detectors
In the computer vision literature, there are several blob detectors, for example the scale normalized Laplacian, the scale-normalized determinant of the Hessian as well as a hybrid operator "Hessian-Laplace" which uses the determinant of the Hessian for spatial selection and the scale-normalized Laplacian for scale selection. The appropriate section to put these operators would be under the heading "blob detection" which is referred to from the page "scale-space". This page has, however, not been written yet.
If the current page on "corner detection" is moved to "interest point", then the scope of the article would have to be extended substantially from the current scope based on the Harris operator. Tpl 13:47, 2 September 2006 (UTC)
- That is my intention. This article started with only the Harris operator, but given its current content it seems more appropriate to present a discussion about what interest points are (there is something on this already) and how they are used. In particular, a new heading can cover different ways of extracting the image coordinate (x,y) for an interest point. Also, the difference between point and blob can be discussed. The aspect of transformation from image gray values (typically) to a set of image coordinates should be discussed. For example, if we only threshold the response from Harris, we get a blob of pixels. If we instead try to estimate the local maxima we get a pixel coordinate, perhaps even with sub-pixel accuracy if certain measures have been taken. Non-max suppression should also be mentioned (see the sketch after this comment). Then there can also be a list of detection methods, more or less like in the current article. Alternatively, there could be one (shorter) article for each specific method and only links from the new page. --KYN 17:23, 2 September 2006 (UTC)
- Note that some of the now mentioned methods have applications in different areas. For example, the Tomasi-Kanade or Shi-Tomasi stuff was originally used for stereo image registration, but has also been used for tracking a region in an image sequence, and can of course also be used for finding interest points in one single image. From that perspective, it could make sense to develop each individual method on a page of their own, describing various details and their applications. There can also be survey articles, like "interest point" which describes the concept from a general point of view, presents a list of methods which can be used, and refer the reader to the page of each specific method for the details. --KYN 17:23, 2 September 2006 (UTC)
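To illustrate the distinction made above between thresholding a corner response (which yields blobs of pixels) and extracting single coordinates via local maxima with non-max suppression, here is a minimal sketch in Python/NumPy. The function names, the window size and the threshold are only illustrative assumptions, not part of any of the referenced detectors.

<syntaxhighlight lang="python">
import numpy as np
from scipy.ndimage import maximum_filter

def threshold_response(response, thresh):
    """Thresholding alone: returns a binary mask, i.e. connected 'blobs' of pixels."""
    return response > thresh

def local_maxima_points(response, thresh, window=3):
    """Non-max suppression: keep only pixels that are the maximum of their local
    neighbourhood and above the threshold, yielding one (row, col) coordinate
    per interest point."""
    local_max = maximum_filter(response, size=window)
    mask = (response == local_max) & (response > thresh)
    return np.argwhere(mask)
</syntaxhighlight>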
Thanks for your reply. I could start writing on an article on blob detection that describes a number of the main blob detectors in the literature, in order to clear up a number of common misunderstandings and also to show how they are respectively related and differ. Then, that material could be a better starting point for a discussion on if the articles on corner detection and blob detection should be merged or not. I think that I could do the writing during next week, not today however. Tpl 06:23, 3 September 2006 (UTC)
I think MSER is the best candidate for blob detection. Hessian interest points look pretty similar to Harris interest points, in practice. Also, the LoG/DoG detector is arguably a blob detector (it's matched filtering for LoG-shaped blobs), but in practice, it's still referred to as a corner or interest point detector. Also, there aren't any corner detectors I know of which aren't really interest point detectors. There are some genuine corner detectors which detect corners (i.e. sharp bends) in detected edges, but I haven't seen them referenced (other than in surveys) in recent work.
Serviscope Minor 15:20, 2 September 2006 (UTC)
Now, there is a first outline of an article about blob detection. Four commonly used blob detectors based on differential expressions are described in sufficient detail, and headers have been added for two other important notions of blobs based on local extrema with extent (including MSER). Tpl 17:17, 4 September 2006 (UTC)
This description has now been complemented by brief descriptions of two extremum-based blob detection methods. Tpl 18:16, 4 September 2006 (UTC)
Now, I think that it should be easier to make an informed decision whether the articles on corner detection and blob detection should be merged and transferred to an article on interest point detection, or whether they should be kept separate. From my point of view, a division into two articles is more informative provided that cross-references are kept and explanatory comments are given on the notion of interest points.
There is still room for extending these articles with additional corner and/or blob detectors. Regarding the area of feature detection, there are also articles on edge detection and ridge detection. Tpl 08:03, 5 September 2006 (UTC)
Affine invariance (or not)
I don't want to contaminate the article with my views before discussion has taken place on this.
With the typical implementation of the Affine adapted interest points, especially Harris-affine points, the resulting detector is not affine invariant. This is because a search through affine space (unlike scale space) is too expensive.
Any successfully detected points are invariant to affine transformations, in that the affine ellipse which can be drawn around them will more or less cover the same part of the image even after affine transformations. However, the implementation relies on multiscale feature detection, followed by iterative affine adaption. The normal Harris detector is not particularly invariant (or repeatable) under affine transformations of the image. Since this is the first step, it puts an upper bound on the `affine invariantness' of the overall algorithm. That is, under affine transformations, many points will not be detected repeatably. Serviscope Minor 15:51, 5 September 2006 (UTC)
You are right in the observation that the commonly used Euclidean and scale invariant preprocessing stage to affine shape adaptation is not invariant to the full affine group. The correct statement of the affine shape adaptation is that if a fixed point can be found for the affine shape adaptation algorithm, then the resulting image features are affine invariant. This statement is also made explicitly in the original reference (Lindeberg and Garding 1994, 1997). In practice this implies that affine transformations with moderate deviations from the similarity group will give reasonably high repeatability of the image features, while almost degenerate affine transformations will imply substantial problems. Nevertheless, the overall approach is highly useful for applications such as wide baseline stereo matching. Tpl 18:09, 5 September 2006 (UTC)
Since the text on affine shape adaptation is much more general than the scope of this article, I moved it to a separate article affine shape adaptation. Besides, corner detection and blob detection, affine shape adaptation also applies to texture segmentation, texture classification and texture recognition. Tpl 10:08, 6 September 2006 (UTC)
Implementation
Do you think it's reasonable on a page like this to have some external links to implementations?
Here's my thoughts, since I'm not in the business of endorsing anyone's code in particular.
Some detectors have sample implementations by the authors, eg SUSAN, DoG (in SIFT), Harris-Laplace. These take precedence, since they may have details not exactly present in the paper and all results _should_ be reproducible with these implementations.
Other detectors (eg Harris, Shi-Tomasi) have very stable implementations in certain libraries, eg Intel's OpenCV, and these libraries are sufficiently widely used that they're not going to be disappearing anytime soon (see the sketch after this comment).
If you concur that this section is reasonable, then I'll start adding links, noting whether they are the authors' sample implementations or not. Serviscope Minor 16:56, 8 September 2006 (UTC)
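As a usage note for the OpenCV implementations mentioned above, here is a minimal sketch using the modern Python bindings. cv2.cornerHarris and cv2.goodFeaturesToTrack are existing OpenCV functions for the Harris and Shi-Tomasi detectors; the file name and parameter values below are only placeholders.

<syntaxhighlight lang="python">
import cv2
import numpy as np

img = cv2.imread("input.png", cv2.IMREAD_GRAYSCALE)  # placeholder file name

# Harris corner response map (window size 2, Sobel aperture 3, k = 0.04)
harris_response = cv2.cornerHarris(np.float32(img), blockSize=2, ksize=3, k=0.04)

# Shi-Tomasi ("good features to track"): returns up to 100 corner coordinates
corners = cv2.goodFeaturesToTrack(img, maxCorners=100,
                                  qualityLevel=0.01, minDistance=10)
</syntaxhighlight>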
Move, etc
I marked this article some time ago for moving its title to something like "interest point". Given that there now is an article also on "blob detection", I would like to bring some order to the overall presentation. Here are my proposals:
- Parts of the content of this article (Corner detection) are moved to a new article "Interest point" which is intended to give a general introduction to this topic, describe applications for interest points and also provide a list of methods for detecting interest points. This list would probably include most of those which are now found in the "Corner detection" article. My proposal is that they are presented at a general level; technical details are not presented in this list of methods.
- The technical details of each of the interest point detection methods are put into separate articles, one article per method.
- The relation between "blobs" and "interest points" needs to be sorted out. Personally, I don't know if they should be kept separate or if they should be presented in the same article. Any ideas? Either way, the distinctions or similarities need to be discussed.
- I also propose that the current article "Blob detection" is renamed to "Blob (computer vision)". Detection is only one aspect of blobs which should be discussed in that article. Applications and general rationale for why we should worry about blobs are other aspects which also should be presented. Detection of blobs should rather be a section in that article.
--KYN 20:50, 14 September 2006 (UTC)
Relationship between blobs and interest points: Well, there's definitely an intersection there. I've never heard of MSER referred to as interest points, or Harris points as blobs, but DoG/LoG features fall happily into either camp. Maybe the place to cover this is in a generic "Features" article. Features of interest include edges (1D), interest points (0D or 2D depending on your inclination), blobs and regions. The thing is that all of these features share the same roles (eg matching them for various reasons), so it might be worth dealing with all of them together. As well as sharing similar uses, they should all have the same kind of properties (eg repeatability). That also sidesteps the issue of "is a given feature detector a corner detector or a blob detector".
One could then have a list under each of the headings (interest point/corner, blob, etc), pointing to the relevant article. That kind of implies that I agree on having each detector in its own article. One can then have detectors under multiple headings.
Serviscope Minor 21:34, 14 September 2006 (UTC)
There is definitely a clash in terminology here. The old terminology divides feature detection into corner detection, blob detection, edge detection, ridge detection, etc. The terminology "interest point" is more recent, but the notion of "regions of interest" has been used for a much longer period of time. To have a long and general article on "feature detection" that replaces the current articles on corner detection, blob detection, edge detection and ridge detection would, however, not be a good idea from my viewpoint, since such an article would cover too much, and one would easily lose the overview (unless one already has a good internal picture of the overview). The area "feature detection" is general and could from the viewpoint of Wikipedia easily be decomposed into several articles as it is today. However, I am not inclined to putting each individual feature detector in its own article either, since several of the blob detectors and several of the corner detectors have similar mechanisms in common. It would be hard to navigate between one article on the Moravec detector, one on the original Harris, one on the multi-scale Harris, one on the Harris-Laplace operator, one on the Laplacian, one on the difference of Gaussians, one on the determinant of the Hessian, one on the mixed Laplacian-Hessian operator etc. In particular, it would be hard for a new reader to get the overview.
From my viewpoint, the current division into corner detection, blob detection, edge detection and ridge detection seems to be the best compromise between overview and level of detail. One could easily write a short meta article on "interest points" that refers to corner detection and blob detection. Similarly, one could write a meta overview article on "feature detection" that refers to interest points, edge detection and ridge detection as well as other uses of the term "feature detection". If there is more support for such an approach, I could make a first outline for these articles. In such articles, one could also describe common notions of why these image features are detected, including the notion of repeatability.
By the way, concerning the choice of interest points, the best choice today seems to be the blob detector based on the determinant of the Hessian (DoH). In the recent article on the SURF descriptor, this detector is reported to have better properties than the LoG/DoG operators (see the article on blob detection for a reference).
Concerning the occasional naming of the LoG/DoG operators as "corner detectors", I still think that this terminology is not correct. The LoG/DoG operators measure the similarity between the local image pattern and a circularly symmetric filter with a bright/dark center and a dark/bright surrounding. A better way of expressing the property is that the original LoG/DoG operators should be referred to as blob detectors, while an additional filtering stage on the eigenvalues of the Hessian will filter away spurious responses for which there are not significant variations in two directions (a rough sketch of such a filtering stage is given after this comment). Still, however, I will not complain about the short reference that is made to these operators in the current article on corner detection. I think that it would be much worse to reorganise the two articles completely with the major aim of addressing this minor problem.
In addition, I do not think that it would be a good idea to rename "blob detection" into "blob (computer vision)". The term "blob detection" is well established in the field, and in close analogy with the even more established notion of "edge detection". Tpl 05:42, 15 September 2006 (UTC)
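To make the filtering stage mentioned above concrete, here is a rough sketch (not taken from any of the cited papers; the function name and the threshold ratio are assumptions) of how a candidate LoG/DoG response can be rejected when the local 2x2 Hessian indicates significant variation in only one direction, using the trace/determinant ratio of its eigenvalues:

<syntaxhighlight lang="python">
def has_two_dimensional_structure(Dxx, Dyy, Dxy, r=10.0):
    """Accept a candidate response only if the eigenvalues of the local 2x2
    Hessian [[Dxx, Dxy], [Dxy, Dyy]] have comparable magnitude, i.e. the image
    varies significantly in two directions; edge-like responses are rejected."""
    tr = Dxx + Dyy
    det = Dxx * Dyy - Dxy * Dxy
    if det <= 0:
        return False  # eigenvalues of opposite sign: reject
    return tr * tr / det < (r + 1.0) ** 2 / r
</syntaxhighlight>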
I think maybe a general overview in feature detection, including the kinds of usage these things undergo would be useful. It can then point to blob, interest point/corner etc detection. I agree with your point about the problems with putting all the detectors in different articles. If that is done, then they will have to have very good see-also sections.
I don't think interest points refer to blob detection, though (I have the feeling that they're synonymous with the rather poorly named "corners"). Feature would cover those two, though. But yes, I support your writing of the meta-article, except I think that it should just be "feature detection", not interest point (edges and blobs aren't really point like). But I think a meta-article is worthwhile.
Anyway, since we're now on to ECCV 2006 feature detectors, I think FAST (link in Corner_detection: "Machine learning for..." ECCV 2006) looks a better bet than DoH. FAST has considerably better repeatability than DoG, whereas DoH seems pretty similar. FAST is also about 50x faster. Though, the implementation of the standard detectors in the DoH paper seems somewhat slower than in the FAST paper.
Finally, about DoG corners and blobs: if the radius is small enough, then it detects point-like features (aka corners); if the radius is large, it detects blobs. I think the definitions are sufficiently woolly that it could happily fit into either. Perhaps when the dust settles on this, I'll produce a table of all the described feature detectors and the classification to which they belong (multiple classifications allowed). That would disambiguate it somewhat. Serviscope Minor 16:37, 15 September 2006 (UTC)
Thanks for your support. As the articles on corner detection and blob detection are today, I think that they are more valuable than a set of individual articles for each feature detector. With this organization, the current articles indeed provide added value compared to the original research papers, since the interrelations between the different feature detectors are made explicit (and provide a clarification against common misunderstandings in certain research articles).
Concerning an overview article, I could start writing a brief meta article on feature detection. Concerning the interpretation of "interest points", whether to include or exclude blobs, my view is that a blob descriptor should indeed be interpreted as more than just a point -- a blob at least contains an attribute of scale, which implies a support region, which in turn can be either rotationally symmetric, elliptic or have a more complex shape. But still, since the notion of "interest point" has become more popular among certain people than the previous terminology "corner", and in addition many of the more recent interest point operators also include an estimate of scale, I still think that it would be wrong to exclude blob descriptors from the class of interest points. The center (or maximum/minimum) of a blob descriptor is definitely a point, and usually satisfies similar criteria as one would impose on other interest points. Although this terminology may not have spread to all practitioners in the field, I do not find it wrong to make this generalization here, in particular since blob descriptors have previously been used precisely as regions of interest for further processing.
Yes, I agree that many blob descriptors will also respond to corners at fine scales, although with a less precise localization. As you suggest, a table may be illustrative. Tpl 13:24, 16 September 2006 (UTC)
Summary and more arguments
I summarize the discussion so far as follows:
- Regarding moving a larger part of this article to a new name "Interest point", I haven't seen any major objections. I will make this move shortly, unless someone can provide a good reason not to. The motivation for the move is that, even if the corresponding methods are classically referred to as "corner detectors", this label is not correct since they also detect other types of "interest points". This realization is also reflected in some of the recent literature. However, I don't know if all of the recently added methods in the "Corner detection" article can be referred to as methods for interest point detection or if they should go into blob detection.
- The issue of moving the more technical content of the various methods into separate articles appears not to be approved at this stage. I may advocate for this strategy later on.
- About the relation between interest points and blobs, this needs to be explained in the corresponding articles. Right now, the "Corner detection" (soon to be "Interest point") article does provide some intuitive and conceptual definition of an interest point. I would like to see a similar presentation of a "blob" in the "Blob detection" article. This should hopefully shed some light on the difference between the two concepts.
- The question of renaming the "Blob detection" is related to the proposed extension (see previous point) of that article. However, I remove this issue from the discussion on this talk page and move it to the "Blob detection" Talk page.
My contributions to this discussion are as follows:
- An (interest) point can only be characterized in terms of a position or image coordinate, possibly together with the specific feature on which the detection method is based. A blob, on the other hand, consists in general of a region, i.e., a set of points. This set may be small, in some cases even consisting of a single point. This observation implies that a blob detection method must provide a set of points as output rather than a single point, which is what we get from an interest point detector. This is the difference between the two concepts.
- The resulting point set from a blob detector can often be condensed into a single point, e.g., by computing its center of gravity (mean). This is then a type of interest point (a small sketch of this is given after this comment). Also, in some cases we can get an interest point by finding a local maximum/minimum of the corresponding "feature strength" function that is used for blob detection. Here we have some relations between the two concepts.
- About an overview article on features, this is a good idea, but please note the existing article Feature (Computer vision). That article could benefit from additional work.
--KYN 21:49, 17 September 2006 (UTC)
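As a concrete illustration of the condensation step described above, here is a minimal sketch (the function name is a hypothetical placeholder) that turns the point set returned by a blob detector into a single interest point via its center of gravity:

<syntaxhighlight lang="python">
import numpy as np

def blob_to_interest_point(blob_mask):
    """blob_mask: boolean image where True marks the pixels belonging to one blob.
    Returns the center of gravity (mean row, mean column), possibly with
    sub-pixel accuracy, as a single interest point coordinate."""
    coords = np.argwhere(blob_mask)   # the set of (row, col) points in the blob
    return coords.mean(axis=0)
</syntaxhighlight>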
Objections ...
Well, I have a major objection against moving this article to "interest point". Moreover, I do not agree on the distinction between interest points and blob detectors that you present. If you define an "interest point" as a point in the image domain which has a clear (mathematical and operational) definition and can be robustly detected (which I think is a good definition both in terms of usefulness and the interest point operators that exist today), then each one of the five blob detectors defined in the article on blob detection also satisfies the requirement of an "interest point operator". From this viewpoint, I would find it better to have an overview article on interest point detectors which then refers to the two articles on corner detection and blob detection concerning specific approaches. The overview article could also describe why these features are used at all, for which there are many similarities between corner detectors and blob detectors. A main distinction I see between a corner detector and a blob detector is if you look at these concepts from the viewpoint of CAD-like object models, which are to be tracked over time and/or recognized in complex scenes. If one aims at using the physical corners of the CAD model as features, then only a corner detector will satisfy the relevant requirements, since a corner detector will respond to the physical corners while a blob detector will respond to the object as a whole or parts of the object. Another main distinction is in terms of structure from motion. If one wants to compute point correspondences over time or in a stereo pair, then the expected better localization of corner features may be an advantage in many situations. Notwithstanding this, blob features can also be used for tracking and recognition, although the localization may not always be as precise. Tpl 12:49, 18 September 2006 (UTC)
Differences and similarities between interest points, corners and blobs
In an attempt to clarify how I think that the notion of interest points should be defined, I have written an outline of an article on interest point detection that includes criteria that an interest point should satisfy as well as how this notion relates to previous notions of corner detectors, blob detectors and regions of interest. I do not claim that this article (which has been written up rather fast) is an ultimate solution. My opinion is however that it makes it easier to have an informed discussion. Intentionally, I chose a different name for the article than for the suggestion for a move. My personal suggestion is that it would be better to rename the new article interest point detection to interest point than the current article on corner detection. Tpl 14:42, 18 September 2006 (UTC)
Feature vs. Interest point
I think that the new interest point detection article better describes features in general (it would be more complete with the addition of edges). I think interest point refers to point-like features as opposed to blobs, ridges or regions of interest. Serviscope Minor 17:29, 18 September 2006 (UTC)
In many respects you are right. But also for the most common interest point operators that have a mechanism for scale selection, there will be an additional attribute of scale, which implies a region around the interest point and thereby less of a distinction from a blob as obtained from LoG or DoH blob detection. Sorry to repeat this argument ... I have partly addressed your comment on general features by adding a brief reference to edge detection as well as a link to Feature (Computer vision). But I agree that there is a large overlap between the article on interest point detection and Feature (Computer vision).
One of the good features of Wikipedia is that it is often good at describing opposing views in situations when there are conflicting views. I hope that we can find a good way to resolve this without sacrificing either generality or clarity. Tpl
True, though re: scale selection, one can always build an image pyramid and run a simple (non scale-selective) feature detector such as SUSAN or FAST at each level (a sketch of this is given after this comment). Are these now blob detectors? Wikipedia is good at addressing differences in point of view. Fortunately, I don't think any of us here have any particular vested interest in one point of view (it's just terminology), but we all desire to see Wikipedia consistent with itself and the literature (somewhat harder, given its own lack of consistency).
Anyway, I like the improvements to Feature (Computer vision). Serviscope Minor 17:29, 19 September 2006 (UTC)
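A minimal sketch of the pyramid idea mentioned above, assuming a single-scale detector function is passed in (the detector function, the number of levels and the use of OpenCV's pyrDown for downscaling are illustrative assumptions): run the detector at each pyramid level and map the detected coordinates back to the resolution of the original image.

<syntaxhighlight lang="python">
import cv2

def detect_over_pyramid(img, detect_single_scale, levels=4):
    """Run a non scale-selective detector (e.g. a FAST- or SUSAN-like function
    returning (row, col) coordinates) on each level of an image pyramid and
    scale the coordinates back to the original image resolution."""
    points = []
    current = img
    for level in range(levels):
        scale = 2 ** level
        for (y, x) in detect_single_scale(current):
            points.append((y * scale, x * scale, scale))
        current = cv2.pyrDown(current)  # halve the resolution for the next level
    return points
</syntaxhighlight>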
Glad that you like the modifications of Feature (Computer vision). Personally, I have not used SUSAN or FAST to analyse their response properties at coarse scales. Intuitively, however, I would not count them as blob detectors, in a similar way as I would not count any of the versions of the Harris operator as a blob detector either. Concerning terminology, I'm rather satisfied with the feature detection articles as they are now; in particular, as you write, the current Wikipedia articles are more consistent and more clear today than certain parts of the research literature. Clearly, the current status is not a 100 % perfect situation. However, it is a much better compromise than other alternatives. Tpl 18:22, 19 September 2006 (UTC)
I can speak more about FAST (re: blob detection) than SUSAN. FAST responds strongly to a light spot on a dark background (i.e. a roughly LoG-like feature), so it will respond to blobs at coarse scales. However, the point only needs to look LoG-like for a little over half of the feature. Some interest point detectors (e.g. detect + chain edges (Canny?) and look for "corners" in the chained edges) definitely aren't blob detectors. Some, like MSER, are blob detectors and definitely not interest point detectors. However, the in-between ones (like FAST, SUSAN and Harris) will respond to things at coarse scales which are blobby at fine ones. Are those blobs, and does that make them blob detectors? By this point I'm into semantics, and I'm not convinced it's a particularly helpful distinction to make. I also don't know if it's worth mentioning in the article. It's touched on with the LoG detector, but if you dig further, you'll probably reach your own unique conclusion about what is a blob and what isn't. So I think it's probably worth avoiding the issue as much as possible. Serviscope Minor 22:50, 19 September 2006 (UTC)
Concerning "blob responses" from the Harris operator at coarse scales, these responses will typically arise as side effects of large amounts of smoothing at coarse scales. Besides this minor detail, I agree with your view and I can buy the double listing of the LoG, DoG and DoH detectors. To make it clear that the topic we are addressing is "feature detection", I have also moved the specific material from the article Feature (Computer vision) to a new article on feature detection. In this way, we avoid the conflict with regard to feature map approaches that may also be relevant for a more general article on "feature". As the feature detection article is right now, I'm rather satisfied with the scope and the contents. Tpl 17:42, 20 September 2006 (UTC)
Hessian or Autocorrelation matrix?
In this article, the Hessian matrix and the Autocorrelation matrix (=Harris matrix) are mixed up: In section "The Harris & Stephens / Plessey corner detection algorithm", the matrix A is denoted as Hessian. However, it is written with I_x^2 (square of the first derivative) and not I_xx (second derivative). 129.143.13.82 17:42, 4 June 2007 (UTC)
A is the Hessian of S. It turns out that the Hessian of S only has first derivatives of I in it. If you find the second derivative of S with respect to (x, y), you get A. If you compute the autocorrelation which is:
C(x,y) = Σ_u Σ_v I(u,v) I(u−x, v−y)
and then find the Hessian (the second derivative of C w.r.t. x and y), you do not get A. This mistake is a common one to make because Harris and Stephens made it in their paper and it is frequently copied.
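For reference, a short derivation (assuming the standard weighted sum-of-squared-differences definition of S used in the article) of why the Hessian of S at the origin contains only first derivatives of I:

<math>S(x,y) = \sum_{u,v} w(u,v)\,\bigl(I(u+x,\,v+y) - I(u,v)\bigr)^2</math>

<math>\frac{\partial^2 S}{\partial x^2}\bigg|_{(0,0)} = \sum_{u,v} 2\,w(u,v)\,I_x(u,v)^2, \qquad \frac{\partial^2 S}{\partial x\,\partial y}\bigg|_{(0,0)} = \sum_{u,v} 2\,w(u,v)\,I_x(u,v)\,I_y(u,v)</math>

The terms involving second derivatives of I are multiplied by I(u+x, v+y) − I(u,v), which vanishes at (x,y) = (0,0). The second-order Taylor expansion therefore gives S(x,y) ≈ (x y) A (x y)^T with A equal to one half of this Hessian, i.e. a matrix built from weighted products of first derivatives of I, and not the Hessian of C or of I.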
Second derivatives in the Harris corner description
It's not clear that the components of A come from the second derivatives of S. Following the derivation in the paper, it looks like the components of A come from a first-order Taylor Series approximation of S. Sancho 06:56, 19 October 2007 (UTC)
- A comes from the second order Taylor term of S, but it is constructed from first order derivatives of I (squared). This is because S has a specific second order dependency on I! --KYN 10:14, 19 October 2007 (UTC)
- Ah. I phrased my question wrong also though... in the paper, they come to A through a first order Taylor expansion of only the terms in the brackets: I(u+x, v+y) − I(u,v) ≈ x I_x(u,v) + y I_y(u,v).
- So I see how to come to A through this derivation, but it's not clear in this article that the second derivatives of S give the specific elements of A; it's just asserted. I like your changes that you made to the article, but this one point is still a bit confusing. I can see that this should be proportional to the A that is pulled out of the second order expansion of S. I wonder which derivation should be in the article. From the one you added, people can see that A comes out of the second order expansion of S. From this one in the paper, people can see how the terms arise. Sancho 16:10, 19 October 2007 (UTC)
- As far as I can see, you can do it either way. I don't mind if it is changed to the original derivation from the Harris & Stephens paper, but since it appears to be a debate on whether or not A can be called a Hessian, the current (perhaps incomplete) derivation shows that this statement is correct if we also say that it is the Hessian of S and not of I. --KYN 18:36, 20 October 2007 (UTC)
- I don't like the new derivation of the Harris detector. A is still the second derivative of S w.r.t. x and y, but now it's justified via a Taylor series. This is a standard result so I don't think it's necessary: this appears to be a fairly roundabout way of saying that A is the Hessian of S w.r.t. x and y. I think it's better to replace text with links to the relevant sections. And I agree: A is the Hessian of S, very much not the Hessian of I.
- How about something more along the lines of this:
S(x,y) = blah
- The Harris matrix A is defined as the second derivative of S with respect to x and y (the Hessian matrix of S), taken around (0,0). According to the Taylor series, this can be used to approximate S(x,y) for small x,y, since the lower order terms are zero. Since...
- I think that this shortens it without removing useful information (rather it relies on it already being elsewhere on the wiki).
Serviscope Minor 20:40, 5 November 2007 (UTC)
- I think that it should be the first derivative. Why would the first derivative be 0? This makes no sense... See for example the lecture at: http://www.wisdom.weizmann.ac.il/~deniss/2004-03_invariant_features/InvariantFeatures.ppt RobWijnhoven (talk) 16:16, 27 March 2008 (UTC)
- No, A is defined as the second derivative (Hessian) of S. This is clear from the equations in the article.
- Q: Why would the first derivative be 0? A: We are talking about the first derivatives of S with respect to x & y. With S(x,y) = Σ_{u,v} w(u,v) (I(u+x, v+y) − I(u,v))², the derivative w.r.t. x becomes
- ∂S/∂x = Σ_{u,v} w(u,v) · 2 (I(u+x, v+y) − I(u,v)) · I_x(u+x, v+y)
- Evaluate this for (x,y) = (0,0): the factor I(u,v) − I(u,v) = 0 makes every term vanish, so the derivative is 0.
- Same thing for the derivative w.r.t. y. This fact is perhaps not obvious from the derivation in the article but it does make sense. Otherwise the approximation of S near (0,0) would have to contain a first order term in addition to the second order term given by A!
- The reference to the Frolova/Simakov PowerPoint is fine and gives an intuitive motivation for the Harris-Stephens operator but it does not explain (1) WHY we can approximate S (called E in their PowerPoint) as a second order (bilinear) expression in (x,y), i.e., why the zero and first order terms vanish, or (2) WHY the matrix A (M in the PowerPoint) is given as the weighted mean of the outer products of the gradient of I. These facts are only obvious if we look at the details of the Taylor expansion of S.
- Over time various readers have had a quick look at the presentation on the Harris-Stephens operator and been led to the conclusion that A must be the Hessian of I (which is wrong) and/or tried to change its definition accordingly. This has happened on several occasions, and to avoid these mistakes I tried to make a derivation (although not completely rigorous, some details are missing) which describes how S, I and A are related. This can possibly be made in a better way, but changing to the presentation used in the PowerPoint will, I am afraid, not stop the casual reader from making these mistakes again. --KYN (talk) 21:26, 27 March 2008 (UTC)