Hill climbing
From Wikipedia, the free encyclopedia
- This article is about the mathematical algorithm. For other meanings, such as the branch of motorsport, see Hillclimbing (disambiguation).
Hill climbing is an optimization technique that belongs to the family of local search algorithms.
Hill climbing attempts to maximize (or minimize) a function f(x), where x ranges over discrete states. These states are typically represented by vertices in a graph, whose edges encode the nearness or similarity of the states they connect. Hill climbing follows the graph from vertex to vertex, always locally increasing (or decreasing) the value of f, until a local maximum xm is reached. Hill climbing can also operate on a continuous space: in that case, the algorithm is called gradient ascent (or gradient descent if the function is minimized).
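For the continuous case, a minimal gradient ascent sketch might look like the following (the objective f(x) = -(x - 3)^2, the step size, and the iteration count are illustrative assumptions, not part of the article):

```python
def gradient_ascent(df, x0, step=0.1, iterations=100):
    """Repeatedly move x a small step in the direction of increasing f."""
    x = x0
    for _ in range(iterations):
        x += step * df(x)
    return x

# Illustrative objective: f(x) = -(x - 3)^2 has derivative f'(x) = -2(x - 3)
# and its maximum at x = 3.
x_max = gradient_ascent(lambda x: -2 * (x - 3), x0=0.0)
```

Here the fixed step size stands in for a more careful line search; in practice the step must be small enough that each move actually increases f.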
Hill climbing is used widely in artificial intelligence for reaching a goal state from a starting node. The choice of the next node and of the starting node can be varied to give a family of related algorithms.
In simple hill climbing, the first closer node found is chosen, whereas in steepest ascent hill climbing all successors are compared and the one closest to the solution is chosen. Both forms fail if there is no closer node, which may happen when there are local maxima in the search space that are not solutions. Steepest ascent hill climbing is similar to best-first search, but the latter tries all possible extensions of the current path in order, whereas hill climbing only tries one.
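A sketch of the difference between the two forms (the integer states, neighbor function, and objective here are illustrative assumptions, not from the article):

```python
def neighbors(x):
    # Neighbors of an integer state: one step left or right.
    return [x - 1, x + 1]

def score(x):
    # Illustrative objective with a single maximum at x = 5.
    return -(x - 5) ** 2

def simple_step(x):
    """Simple hill climbing: take the first neighbor that improves."""
    for n in neighbors(x):
        if score(n) > score(x):
            return n
    return None  # no closer node: stuck at a (possibly local) maximum

def steepest_step(x):
    """Steepest ascent: compare all successors, take the best improving one."""
    best = max(neighbors(x), key=score)
    return best if score(best) > score(x) else None

def climb(step, x):
    """Apply a step rule until no closer node exists."""
    while (n := step(x)) is not None:
        x = n
    return x
```

On this simple landscape both rules reach x = 5 from x = 0; they differ only in how much of the neighborhood they examine before moving.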
Random-restart hill climbing is a meta-algorithm built on top of the hill climbing algorithm. It is also known as shotgun hill climbing.
Random-restart hill climbing simply runs an outer loop over hill climbing. Each iteration of the outer loop chooses a random initial condition x0 from which to start hill climbing. The best xm is kept: if a new run of hill climbing produces a better xm than the stored state, it replaces the stored state.
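A sketch of this outer loop (the one-dimensional landscape, its local maxima, and the inner steepest-ascent climber are illustrative assumptions, not part of the article):

```python
import random

# A 1-D landscape with a small local maximum every 10 steps and a single
# global maximum at index 73.
landscape = [i % 10 + (20 if i == 73 else 0) for i in range(100)]

def hill_climb(start):
    """Inner loop: steepest-ascent climbing over indices of `landscape`."""
    x = start
    while True:
        options = [n for n in (x - 1, x + 1) if 0 <= n < len(landscape)]
        best = max(options, key=lambda n: landscape[n])
        if landscape[best] <= landscape[x]:
            return x, landscape[x]  # a local maximum xm and its value
        x = best

def random_restart(runs=50):
    """Outer loop: restart from a random x0, keep the best xm found."""
    best = None
    for _ in range(runs):
        result = hill_climb(random.randrange(len(landscape)))
        if best is None or result[1] > best[1]:
            best = result
    return best

xm, value = random_restart()
```

A single climb from most starting points gets stuck on a value-9 local maximum; with enough restarts, some run is likely to start near index 73 and find the global maximum.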
Random-restart hill climbing is a surprisingly effective algorithm in many cases. It turns out that it is often better to spend CPU time exploring the space, rather than carefully optimizing from an initial condition.
Problems
Local Maxima
A problem with hill climbing is that it will find only local maxima: unless the heuristic is smooth, it need not reach the global maximum. Other local search algorithms, such as stochastic hill climbing, random walks and simulated annealing, try to overcome this problem. One way to mitigate it with standard hill climbing is to iterate the climb from many starting points, as in random-restart hill climbing.
Ridges
A ridge is a curve in the search space that leads to a maximum, but whose orientation relative to the available moves is such that every move leads to a lower point. In other words, to the algorithm each point on a ridge looks like a local maximum, even though the point is part of a curve leading to a better optimum.
Plateau
Another problem with hill climbing is called the plateau problem. This occurs when the search reaches a "flat" part of the search space, i.e. a region where the heuristic values are all very close together, so the search ends up wandering aimlessly.
Pseudocode
Algorithm: Hill Climbing

    currentNode = startNode
    bestEval = -INF
    bestNode = NULL
    for MAX times:
        if EVAL(currentNode) > bestEval:
            bestEval = EVAL(currentNode)
            bestNode = currentNode
        L = NEIGHBORS(currentNode)
        nextEval = -INF
        for all x in L:
            if EVAL(x) > nextEval:
                currentNode = x
                nextEval = EVAL(x)
    return bestNode
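A runnable Python sketch of this pseudocode, returning the best node seen over the run (the example objective and neighbor function are illustrative assumptions):

```python
import math

def hill_climbing(start_node, neighbors, evaluate, max_steps=100):
    """Sketch of the pseudocode above; returns the best node seen."""
    current, best_node, best_eval = start_node, None, -math.inf
    for _ in range(max_steps):
        if evaluate(current) > best_eval:
            best_eval, best_node = evaluate(current), current
        # Move to the highest-scoring neighbor, even if it is no better than
        # the current node: the loop runs for a fixed number of steps, and
        # only the best node encountered is remembered.
        current = max(neighbors(current), key=evaluate)
    return best_node

# Illustrative problem: maximize f(x) = -(x - 7)^2 over the integers,
# where each state's neighbors are x - 1 and x + 1.
best = hill_climbing(0, lambda x: [x - 1, x + 1], lambda x: -(x - 7) ** 2)
```

Because the loop does not stop at a local maximum, the walk may oscillate around one; tracking bestNode separately from currentNode is what makes the result correct.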
See also

Contrast genetic algorithms and random optimization.
References
- Stuart Russell and Peter Norvig, Artificial Intelligence: A Modern Approach, chapter 4, Prentice Hall, 1995. ISBN 0-13-103805-2
This article was originally based on material from the Free On-line Dictionary of Computing, which is licensed under the GFDL.