Talk:Stochastic tunneling
I have to admit that I don't understand this approach, and if I (supposedly an expert in optimization) can't, I doubt the average reader can. A couple of questions: The article defines it as an "approach". Is it an algorithm, or is it a conditioning method that transforms the objective function so an existing algorithm works better, or something else? Also, has anything been proven about this approach? You say it's a method of global optimization; is it proven to deterministically find the global optimum, and if so, in every case or only under certain conditions? If it really does find the global optimum every time, that would be a breakthrough indeed. Thanks. moink 17:26, 31 Mar 2004 (UTC)
I added some details in response to your questions. Nevertheless, the first sentence already pointed to the Monte Carlo (MC) technique, so we are talking about a stochastic algorithm, like genetic algorithms for example. STUN is indeed a method to improve the effectiveness of MC by accelerating the process (by attenuating kinetic barriers). Think of it as another simulated annealing (SA): while SA has a time-dependent temperature, STUN has a location-dependent one (one can see this through a Taylor expansion). A rough sketch of the idea follows after this post.
I learned from your questions that there seem to be very different points of view on GO. I got the impression that, from your engineering point of view, GO algorithms are only those that behave deterministically. (Am I correct?) I, as a physicist, think very differently about GO (more in terms of success rate, computational time required, and thermodynamics).
STUN, due to its stochastic nature, cannot find global optima all the time. It can only increase the probability of reaching a good solution within a fixed computational time. In some applications we confirmed that STUN needs only roughly 1/80 of the steps that simulated annealing needs to reach the same optimum.
KaHa242 08:33, 2 Apr 2004 (UTC)
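To make the "location-dependent temperature" picture concrete, here is a rough Python sketch of one way a Metropolis step could use the STUN transformation f_STUN(x) = 1 - exp(-gamma*(f(x) - f_best)). The function names, the Gaussian proposal, and the parameter values are purely illustrative assumptions, not the reference implementation discussed above.

 import math, random

 def stun(f_x, f_best, gamma):
     # STUN transformation: values of f above the best minimum found so far
     # are compressed towards 1, which attenuates the barriers between wells
     return 1.0 - math.exp(-gamma * (f_x - f_best))

 def stun_step(f, x, fx, f_best, gamma=1.0, beta=5.0, width=0.1):
     # one Metropolis move on the transformed surface (x is one-dimensional here)
     x_new = x + random.gauss(0.0, width)
     f_new = f(x_new)
     delta = stun(f_new, f_best, gamma) - stun(fx, f_best, gamma)
     if delta <= 0.0 or random.random() < math.exp(-beta * delta):
         x, fx = x_new, f_new
     return x, fx, min(f_best, fx)

 # toy usage: a tilted double-well with minima near x = -1 and x = +1
 f = lambda x: (x * x - 1.0) ** 2 + 0.2 * x
 x, fx = 3.0, f(3.0)
 f_best = fx
 for _ in range(20000):
     x, fx, f_best = stun_step(f, x, fx, f_best)
 print(x, f_best)

Because the transformed surface is nearly flat wherever f(x) lies well above f_best, the walker can cross high barriers at a fixed inverse temperature beta, which is the sense in which the effective temperature depends on location.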
- Thanks! It's a little clearer now, though I may continue thinking about ways to make it clearer still. As far as global optimization goes, I tend to avoid using the term at all, since there is no existing algorithm that is guaranteed to find the global optimum; so there are no "global optimization" methods according to my definition. There are local optimization methods (normally gradient-based methods, also Nelder-Mead simplex, etc.) and there are stochastic optimization methods (genetic algorithms, simulated annealing, particle swarm, and from what I can tell stochastic tunneling fits into this category). There are a couple that fit into neither category, like tabu search. [[User:Moink|moink]] 18:11, 2 Apr 2004 (UTC)
- Just go ahead if you can improve the description. Perhaps you want to check TRUST ("Terminal Repeller Un........", a paper in Science from 1997, I guess) and see how similar it is to tabu search.
- Nevertheless, I think there is always at least one global optimization algorithm for discrete problem domains: enumerative search (see the toy sketch after this post). You may, however, find it more suitable to avoid that ;-))
KaHa242 11:27, 5 Apr 2004 (UTC)
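To illustrate the point about discrete domains, here is a toy Python sketch; the binary design space and the function names are assumptions made up for illustration, and the exponential cost is of course the catch.

 from itertools import product

 def enumerative_minimum(f, n_vars):
     # visit every point of the discrete domain {0, 1}^n_vars; this is
     # guaranteed to return a global optimum, at a cost of 2**n_vars evaluations
     return min(product((0, 1), repeat=n_vars), key=f)

 # e.g. minimize the number of ones over 10 binary variables
 print(enumerative_minimum(sum, 10))

For 10 binary variables this is 1024 evaluations; for the 43 design variables mentioned below it would already be about 8.8 trillion.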
- Good point, though I could argue that even that's not really true for continuous problems. But does enumerative search even count as an algorithm? The problem I'm working on right now has 43 design variables, and that's just my exploratory version of a problem with thousands. Enumerative search would be... errr... fun!!! :) moink 15:16, 5 Apr 2004 (UTC)
- Hmmm, it might count if you bring in additional information, like the minimal distance between local minima. If this is known, you can span a lattice over the continuous domain and start a conjugate-gradient run at every lattice point (roughly as in the sketch after this post).
- If I were you, I would really start to work on the enumerative-search part. It proves your devotion to the topic of your thesis. Your thesis adviser might like this ;-)))))))) KaHa242 11:54, 6 Apr 2004 (UTC)
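Something like this Python sketch, with the grid spacing standing in for the known minimal distance between local minima; SciPy's "CG" local optimizer, the box-shaped domain, and all names here are illustrative assumptions, not part of the discussion above.

 import numpy as np
 from scipy.optimize import minimize

 def lattice_search(f, bounds, spacing):
     # lay a lattice with the given spacing over a box-shaped domain and
     # start a conjugate-gradient run from every lattice point
     axes = [np.arange(lo, hi + spacing, spacing) for lo, hi in bounds]
     starts = np.stack(np.meshgrid(*axes), axis=-1).reshape(-1, len(bounds))
     best = min((minimize(f, x0, method="CG") for x0 in starts),
                key=lambda res: res.fun)
     return best.x, best.fun

 # e.g. a two-dimensional double-well on the box [-2, 2] x [-2, 2]
 f = lambda x: (x[0] ** 2 - 1.0) ** 2 + (x[1] - 0.5) ** 2
 print(lattice_search(f, [(-2.0, 2.0), (-2.0, 2.0)], spacing=0.5))

If the spacing is no larger than the minimal distance between local minima, every basin of attraction contains at least one start point, so the best of the local runs is the global minimum; the number of lattice points still grows exponentially with the dimension.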