Talk:Radial basis function
Does this have something to do with basis functions? --Abdull 17:08, 21 February 2006 (UTC)
- Yes. Radial basis functions are basis functions of a particular form. They depend on the distance to a center associated with each basis function. Unlike many sets of basis functions, these sets are not complete. Complexica 18:52, 23 February 2006 (UTC)
Overview comments
Hi,
This is one of my first contributions; I hope I won't do anything that conflicts with Wikipedia policy.
1) Neural networks are not always composed of three layers: there could be no hidden layer, one, two, or even more for exotic cases.
- I found an answer to this question here; it says "In principle, they could be employed in any sort of model (linear or nonlinear) and any sort of network (single-layer or multi-layer). However, since Broomhead and Lowe's 1988 seminal paper [3], radial basis function networks (RBF networks) have traditionally been associated with radial functions in a single-layer network" Paskari 21:06, 1 December 2006 (UTC)
2) Would it be relevant to add to the overview that there is a strong relationship between radial basis functions and kernels? Nicogla 14:54, 18 March 2006 (UTC)
- Good idea about the kernels. I am planning an extensive extension of the article. I will add it, unless you want to. Perhaps something also should be added about layers. Please feel free to add what you think. Also, it is good to sign your comments with four tildes (~~~~). This puts a date stamp and an identifier on your comment. Complexica 20:14, 31 March 2006 (UTC)
- Ok, thanks for the tips. I will think about something for the NN layers. Nicogla 12:32, 3 April 2006 (UTC)
The overview is really complicated: 'multidimensional space', 'distance criterion with respect to a center', 'sigmoidal transfer function'... Also, regarding 'RBF networks have the advantage of not being locked into local minima as do feedforward networks': I thought RBFs were feedforward; they are listed as feedforward on the Artificial neural network page. Paskari 19:48, 1 December 2006 (UTC)
Query
I'm not sure how to go about this, but I was thinking of contributing something about using radial basis functions to interpolate scattered data samples. The current page focuses on their application in neural networks, which is a significant application, but not their only use.
- Perhaps the best place for this is as an example, maybe the first example. Complexica 20:20, 17 August 2006 (UTC)
Wrong focus, inappropriate domain-specific jargon
I'm very familiar with RBFs as basis functions for meshless multivariate interpolation and the contents of this web page are totally unfamiliar. This page presents their application to neural networks as what RBFs are all about and also uses jargon specific to that application ("streams of data", "learning") to describe general properties, which is very confusing and misleading. There is almost nothing here which is from Holger Wendland's "Scattered Data Approximation", which is probably the most authoritative book on RBFs.
- Please feel free to add what you think is missing. That is what the wiki is about, isn't it? Complexica 17:29, 14 September 2006 (UTC)
- I agree that the use of the terms 'streams of data' and 'complete data sets' is misleading. I think the introduction to any page should be as basic as possible, so that all readers, not just scientists and mathematicians, can understand it. Paskari 19:43, 1 December 2006 (UTC)
Intro and Overview
I admire the fact that you have spent such a long time organizing all the formulas, but you have to simplify at least the intro and the overview, otherwise no one will be able to follow. Paskari 19:50, 1 December 2006 (UTC)
Split
I suggest that the page be split into two: radial basis function would describe the actual functions, and radial basis network would describe the type of neural network that has radial basis activation functions. This would clarify the page and take care of the neural network bias mentioned in the comments above. Any comments? AnAj 15:12, 18 February 2007 (UTC)
- I made the split. The new page is radial basis function network. AnAj 19:23, 22 February 2007 (UTC)
- I wrote the "wrong focus, inappropriate domain-specific jargon" comment. I think that your changes are a large step in the right direction. Thanks for making these changes. - Andrew
Least Squares vs Solving a system of linear equations
In reply to the undo from Jheald at 16:43, 2 October 2007: The reason I made the change is not that the matrix has to be positive definite. The difference I wanted to point out in my commentary was that, for interpolation, the points to interpolate usually coincide with the centers of the basis functions. Then the number of interpolated data points equals the number of RBFs being added, and the matrix is square, so least squares methods are inapplicable: those methods find an approximation where no exact solution is possible, namely when the matrix is not square. As you surely know, a square matrix also does not have to define a solvable system of linear equations, and that's where the positive definiteness comes in: if the matrix is positive definite, Ax=b has a unique solution. For that reason I undid your undo, no offense. - Frank
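As a minimal sketch of the square system described here, assuming a Gaussian kernel on 1-D data (both choices are illustrative, not from the discussion): the centers coincide with the data points, so the kernel matrix is square, and the Cholesky solve succeeds precisely because that matrix is symmetric positive definite.

import numpy as np
from scipy.linalg import cho_factor, cho_solve

def rbf_interpolate(x, y, eps=1.0):
    # Centers coincide with the data points, so the kernel matrix is square.
    r = np.abs(x[:, None] - x[None, :])   # pairwise distances, N x N
    A = np.exp(-(eps * r) ** 2)           # Gaussian kernel matrix
    # For distinct points the Gaussian kernel matrix is symmetric positive
    # definite, so A w = y has a unique solution, found here by Cholesky.
    return cho_solve(cho_factor(A), y)

x = np.linspace(0.0, 1.0, 10)
w = rbf_interpolate(x, np.sin(2 * np.pi * x))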
- It's perfectly accurate to call N linear equations in N unknowns a least squares problem -- it's just that we happen to be hoping that the LS solution will have all the residuals zero, a special case.
- But in any case, interpolating between N points with N radial basis functions is a basically stupid thing to do. It will give a crazily overfitted approximation function, with wild oscillations between points. What RBFs are good for is approximation rather than interpolation, using m centres, where m is rather less than N. If you really do want to go through all N points, then you should use a lot more than N RBFs, with some sort of regularisation function to select the smoothest or least wild interpolant, e.g. a quadratic cost function on the weights. Either way, it becomes a least squares type problem. The word which probably should have been corrected/removed was "interpolant". It's not an "interpolant" usually; it's an "approximant" we want with an RBF fit.
- Finally, we shouldn't be encouraging people to use naive gradient descent. It's almost always a stupid way to go about things. For nonlinear functions, conjugate gradient (or a variable metric method) is invariably going to be far more efficient. For a linear system like an RBF fit, unless it's very sparse (which RBF fits typically won't be), a direct linear method like QR will typically be the way to go. Jheald 15:06, 5 October 2007 (UTC)
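A hedged sketch of the regularised fit described above, with m centres for N > m data points and a quadratic (ridge) penalty on the weights; the kernel, centre placement, and value of lam are illustrative assumptions.

import numpy as np

def rbf_fit(x, y, centres, eps=1.0, lam=1e-3):
    # N x m design matrix of Gaussian basis functions.
    A = np.exp(-(eps * np.abs(x[:, None] - centres[None, :])) ** 2)
    # Minimise ||A w - y||^2 + lam ||w||^2 (quadratic cost on the weights)
    # via the regularised normal equations.
    m = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(m), A.T @ y)

x = np.linspace(0.0, 1.0, 50)
y = np.sin(2 * np.pi * x) + 0.05 * np.random.randn(50)
w = rbf_fit(x, y, centres=np.linspace(0.0, 1.0, 10))   # m = 10 << N = 50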
- But the section now refers just to the application to neural networks, which is, as stated by others above, just _one_ application.
- By the way, RBF interpolation is a standard method for scattered data interpolation. I don't know if there are any industrial applications of it, but RBFs show up in every book about scattered data approximation, used for approximation as well as for interpolation. I mean, will you erase the article about polynomial 1-D interpolation just because no one interpolates with polynomials for practical reasons? - Frank —Preceding unsigned comment added by 80.171.114.204 (talk) 18:32, 5 October 2007 (UTC)
- There's a difference between putting a surface close to the points on the one hand, and insisting the surface goes through the points on the other. A naive RBF fit is a very good way of doing the former, but a very bad way to do the latter. In the applications cited - eg for making delay-space models for timeseries - it is the former that is wanted.
- It's got nothing to do with whether you want to think of the thing as a neural network or not - an RBF fit is an RBF fit. Jheald 18:40, 5 October 2007 (UTC)
- I agree with you now that my version was also incomplete, but RBFs are definitely used for interpolation (yes, I know what that word means, thank you); see e.g. H. Wendland, “Scattered Data Approximation”, or A. Iske, “Multiresolution Methods in Scattered Data Modelling”, or even here: Radial_basis_function_network.
- Also, I don't understand how the words “The weights could thus be learned using any of the standard iterative methods for neural networks.” can be seen as independent of the application of RBFs as a neural network. - Frank —Preceding unsigned comment added by 80.171.114.242 (talk) 08:00, 6 October 2007 (UTC)
- Neural networks aren't an application of Radial Basis Functions. They're merely one form of language for talking about them. The point the article needs to make is that even though RBF approximants are sometimes talked about as neural networks, they are actually just linear fits, so nothing more than linear matrix methods are required. That was the point I was trying to communicate.
- As to interpolation, doesn't that just follow as a special case of the function approximation? I suppose we could make the point explicitly that with N rbfs it is (generically) possible to exactly pass through N points. But if we did, IMO it should come with a heavy health warning. Jheald 09:14, 7 October 2007 (UTC)
- Radial basis functions have been used for the past 15 years in the field of computational fluid dynamics to interpolate, as was mentioned before, scattered data. It isn't stupid to use a fully populated NxN matrix with an Nx1 vector of coefficients to calculate the value of any point in a given field. All it requires is a little thought about node placement (this is for mapping a fluid system, where the position of points is crucial to the accuracy of the matrix), proper conditioning, and an understanding that with improved accuracy comes ill-conditioning. As a result, it should be mentioned that Gaussian elimination, QR decomposition, or singular value decomposition will quite happily find you your coefficient vector (see the sketch below). In short, you can use full NxN matrices; this is not just a neural network problem, and the reference to it makes the entire article very ambiguous, as I have never run into and will never need to encounter neural networks in my study of these functions. Finally, some thought should be dedicated to its applications. - Red 86.21.195.46 (talk) 04:37, 10 December 2007 (UTC)
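A sketch of the QR route mentioned above for the full NxN system; the kernel matrix A is assumed to have been built as in the earlier sketches.

import numpy as np
from scipy.linalg import qr, solve_triangular

def solve_full_system_qr(A, y):
    # A = Q R with Q orthogonal and R upper triangular, so the system
    # A w = y becomes the triangular system R w = Q^T y.
    Q, R = qr(A)
    return solve_triangular(R, Q.T @ y)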
- I suspect the key words in the above paragraph are "proper conditioning", i.e. there's some kind of regularization going on, to rein in the directions in which the linear fit is worst conditioned; so that, strictly speaking, you're then doing a fit rather than an interpolation. So long as regularization is in place, it can make sense to use even more than N radial basis functions (which reduces the issue of where you put them).
(de-indenting) I'm not sure that you are understanding each other. What Jheald is saying, as I understand it, has nothing to do with radial basis functions. His point is that in practice, the interpolation values are contaminated by noise. If the interpolation points are close to each other (relative to the noise), then the noise will cause oscillations in the interpolant. In some applications this is a problem, and thus you shouldn't do interpolation. However, there are also situations in which interpolation is the correct thing to do. In my opinion, the article should mention both interpolation and fitting, and explain how to compute the weights in both problems. I'm not sure whether we should discuss in this article when to do interpolation and when to do fitting. -- Jitse Niesen (talk) 17:40, 10 December 2007 (UTC)
- In practice you use exactly the same maths either way; that's not really an issue. But encouraging people to do exact interpolation through N data points using N rbfs is irresponsible -- the coefficients obtained corresponding to the smallest singular values are likely to be total garbage (whether or not the data points are in fact noisy). Jheald (talk) 18:19, 10 December 2007 (UTC)
- Can you give any evidence that the same method is used to compute the weights for interpolation and fitting? Or that (some of) the weights are garbage if the data is not noisy? What value of N are you thinking of (what order of magnitude)? -- Jitse Niesen (talk) 19:17, 10 December 2007 (UTC)
- Sure. Either way, you would use QR or SVD, as per linear least squares. For the N equations with N unknowns, Red (above) suggests you could also use Gaussian elimination, but you would have to be very sure the design matrix wasn't ill-conditioned (as it probably would be). For an example of an RBF fit getting increasingly wild as the regularization is turned down, see e.g. chapter 2 of David MacKay's PhD thesis, [1] page 17, fig. 2.4, using 60 RBFs.
- Even with no noise on the data, the shape of the singular vector corresponding to the least significant singular value becomes highly oscillatory, with its exact shape depending rather closely on the placement of the RBFs. But this is the singular vector which gets magnified hugely when that singular value gets inverted to do an exact interpolation. Thus, in the exact interpolation case, i.e. when there is no truncation or regularisation, the fitted function can vary wildly with small changes in the RBF placements. The contributions from those last singular vectors are simply not stable. Jheald (talk) 00:34, 11 December 2007 (UTC)
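The conditioning behaviour described here is easy to observe; a small sketch under the same Gaussian-kernel assumption as above, where the shape width 0.1 is an illustrative choice:

import numpy as np

# The condition number of the square Gaussian kernel matrix grows rapidly
# as the data points become denser, so the smallest singular values (and
# their oscillatory singular vectors) are amplified enormously on inversion.
for n in (10, 20, 40):
    x = np.linspace(0.0, 1.0, n)
    A = np.exp(-((x[:, None] - x[None, :]) / 0.1) ** 2)
    print(n, np.linalg.cond(A))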
- Garbage singular values can be modified, re-evaluated, or even just truncated, allowing a full ill-conditioned matrix with very small singular values to still be used with a high degree of convergence (10^-9). A full numerical analysis of a couple of relevant methods will be finished in a few months' time. Red 128.243.220.41 (talk) 13:38, 13 December 2007 (UTC)
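The truncation described here corresponds to a pseudo-inverse with a singular-value cutoff; a minimal sketch, where the rcond threshold is an illustrative assumption:

import numpy as np

def solve_truncated(A, y, rcond=1e-9):
    # Singular values below rcond * sigma_max are discarded rather than
    # inverted, which regularises the ill-conditioned system.
    return np.linalg.pinv(A, rcond=rcond) @ y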
RBFs: Functions of scalars or vectors?
User:Orie0505 objects to the writing of statements like φ(x) = φ(||x||), on the basis that the function φ can't act both on a scalar and a vector.
I think that's being unreasonably picky. The point of an RBF is to define a mapping from vectors to scalars which depends only on the norm ||x|| of the vector; in fact a family of mappings, indexed by the centres {c}, depending only on the norms ||x-c||. But the fact that these are mappings from V to R is important.
In view of how closely related they are, it is natural (and common practice) to use the same letter φ for φ(x,c), φ(x) and φ(||x||). Yes, these are three different functions, because they act on three different types of inputs. But such operator overloading, using the same letter for different but closely related functions identified by their different types of inputs, is not unusual. In computer science it's a commonplace, of course. But even in the physical sciences and applied maths, it's not unusual to write things like ψ(r,θ)=ψ(r) to indicate there is no theta dependence.
In this case it would be much more confusing to use different letters for the different functions, because they are so closely related. It is also, as I wrote, the standard practice in the field, and the article should reflect that.
At the end of the day RBFs depend only on a scalar. But the whole name, "basis function", refers to a mapping of all the points in a space to scalars. The article should reflect that. Jheald 15:39, 5 October 2007 (UTC)
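In symbols, the overloading described above amounts to the following (a notational restatement, nothing new):

\varphi(\mathbf{x}, \mathbf{c}) = \varphi(\mathbf{x} - \mathbf{c})
  = \varphi(\lVert \mathbf{x} - \mathbf{c} \rVert),
\qquad
\varphi(\mathbf{x}) = \varphi(\lVert \mathbf{x} \rVert),
\qquad
\varphi \colon V \to \mathbb{R},

where the rightmost \varphi in each chain acts on the scalar r = \lVert \mathbf{x} - \mathbf{c} \rVert, and the same letter names all three closely related maps.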