Hyper-heuristic
A hyper-heuristic is a heuristic search method that seeks to automate, often by the incorporation of machine learning techniques, the process of selecting, combining, generating or adapting several simpler heuristics (or components of such heuristics) to efficiently solve computational search problems. Hyper-heuristics can be thought of as "heuristics to choose heuristics". One of the motivations for studying hyper-heuristics is to build systems which can handle classes of problems rather than solving just one problem.[1][2]
There might be multiple heuristics from which one can choose for solving a problem, and each heuristic has its own strengths and weaknesses. The idea is to automatically devise algorithms by combining the strengths and compensating for the weaknesses of known heuristics. In a typical hyper-heuristic framework there is a high-level methodology and a set of low-level heuristics (either constructive or perturbative heuristics). Given a problem instance, the high-level method selects which low-level heuristic should be applied at any given time, depending on the current problem state or search stage.[2]
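A minimal Python sketch of this selection loop is given below; the low-level heuristics, the cost function and the "accept non-worsening moves" rule are illustrative placeholders rather than part of any published hyper-heuristic framework.
    import random

    def selection_hyper_heuristic(initial_solution, low_level_heuristics,
                                  evaluate, iterations=1000):
        # High-level loop: repeatedly pick a low-level heuristic, apply it to
        # the current solution, and keep the result if it is not worse
        # (a simple acceptance rule, used here purely for illustration).
        current = initial_solution
        current_cost = evaluate(current)
        for _ in range(iterations):
            heuristic = random.choice(low_level_heuristics)   # high-level selection
            candidate = heuristic(current)                    # low-level move
            candidate_cost = evaluate(candidate)
            if candidate_cost <= current_cost:                # acceptance criterion
                current, current_cost = candidate, candidate_cost
        return current
In practice the random choice above would be replaced by a learning mechanism, as discussed in the sections that follow.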
Hyper-heuristics vs. metaheuristics
The fundamental difference between metaheuristics and hyper-heuristics is that most implementations of metaheuristics search within a search space of problem solutions, whereas hyper-heuristics always search within a search space of heuristics. Thus, when using hyper-heuristics, we are attempting to find the right method or sequence of heuristics in a given situation rather than trying to solve a problem directly. Moreover, we are searching for a generally applicable methodology rather than solving a single problem instance.
Hyper-heuristics could be regarded as "off-the-peg" methods as opposed to "made-to-measure" metaheuristics. They aim to be generic methods, which should produce solutions of acceptable quality, based on a set of easy-to-implement low-level heuristics.
Motivation
Despite the significant progress in building search methodologies for a wide variety of application areas, such approaches still require specialists to integrate their expertise in a given problem domain. Many researchers from computer science, artificial intelligence and operational research have acknowledged the need for automated systems to take over the role of a human expert in such situations. One of the main ideas for automating the design of heuristics is to incorporate machine learning mechanisms into algorithms so that the search is guided adaptively. Both learning and adaptation processes can be realised on-line or off-line, and can be based on constructive or perturbative heuristics.
A hyper-heuristic usually aims at reducing the amount of domain knowledge in the search methodology. The resulting approach should be cheap and fast to implement, requiring less expertise in either the problem domain or heuristic methods, and (ideally) it would be robust enough to effectively handle a range of problem instances from a variety of domains.
Origins
The term hyper-heuristics was introduced in the early 2000s to describe the idea of "heuristics to choose heuristics" in the context of combinatorial optimisation. The first journal paper to use the term appeared in 2003,[3] whilst the first refereed conference paper was presented in 2000.[4] The origin of the idea can, however, be traced back to the early 1960s[5][6] and was independently re-discovered and extended several times during the 1990s.[7][8][9] In the domain of job shop scheduling, the pioneering work by Fisher and Thompson (1961, 1963) hypothesised, and experimentally demonstrated using probabilistic learning, that combining scheduling rules (also known as priority or dispatching rules) was superior to any of the rules taken separately. Although the term was not then in use, this was the first "hyper-heuristic" paper. Another root inspiring the concept of hyper-heuristics comes from the field of artificial intelligence, more specifically from work on automated planning systems and its eventual focus on the problem of learning control knowledge. The so-called COMPOSER system, developed by Gratch et al. (1993, 1996), was used for controlling satellite communication schedules involving a number of earth-orbiting satellites and three ground stations. The system can be characterised as a hill-climbing search in the space of possible control strategies.
Classification of Approaches
Hyper-heuristic approaches developed so far can be classified into two main categories:
- Heuristics to choose heuristics: discover good combinations of fixed, human-designed, well-known low-level heuristics.
  - Based on constructive heuristics
  - Based on perturbative heuristics
- Heuristics to generate heuristics: generate new heuristic methods using basic components of previously existing heuristic methods.
  - Based on basic components of constructive heuristics
  - Based on basic components of perturbative heuristics
In the first class, captured by the phrase "heuristics to choose heuristics", the hyper-heuristic framework is provided with a set of pre-existing, generally widely known heuristics for solving the target problem. The task is to discover a good sequence of applications of these heuristics for efficiently solving the problem. In the second class, "heuristics to generate heuristics", the key idea is to "evolve new heuristics by making use of the components of known heuristics".[10] The process requires, as in the first class of hyper-heuristics, the selection of a suitable set of heuristics known to be useful in solving the target problem. However, instead of supplying these directly to the framework, the heuristics are first decomposed into their basic components.
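As a loose illustration of this second class (and not a reproduction of any published generation hyper-heuristic), the Python sketch below assembles new bin-packing scoring heuristics from a small set of hypothetical primitive components; genetic programming approaches search over such compositions far more systematically.
    import random

    # Hypothetical primitive components of bin-packing scoring heuristics.
    # Each takes the item size and the free capacity of a candidate bin.
    PRIMITIVES = [
        lambda item, free: free - item,            # residual space after packing
        lambda item, free: item / (free + 1e-9),   # how well the item fills the bin
        lambda item, free: -free,                  # prefer bins that are already fuller
    ]

    def generate_heuristic(n_terms=2):
        # A "new" heuristic is a random weighted sum of primitive components;
        # a search method such as genetic programming would evolve such compositions.
        terms = [(random.uniform(-1.0, 1.0), random.choice(PRIMITIVES))
                 for _ in range(n_terms)]
        def score(item, free):
            return sum(weight * component(item, free) for weight, component in terms)
        return score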
These two main broad types can be further categorised according to whether they are based on constructive or perturbative search. An additional orthogonal classification of hyper-heuristics considers the source providing feedback during the learning process, which can be either one instance (on-line learning) or many instances of the underlying problem studied (off-line learning).
In on-line learning hyper-heuristics, the learning takes place while the algorithm is solving an instance of a problem; therefore, task-dependent local properties can be used by the high-level strategy to determine the appropriate low-level heuristic to apply. Examples of on-line learning approaches within hyper-heuristics are: the use of reinforcement learning for heuristic selection (Nareyek, 2003; Pisinger and Ropke, 2007) and, more generally, the use of metaheuristics as high-level search strategies over a search space of heuristics (Fang et al., 1993; Dorndorf and Pesch, 1995; Burke et al., 2003, 2007).
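A minimal sketch of on-line selection in this spirit is given below; the epsilon-greedy choice and the reward values are illustrative assumptions and do not reproduce the cited reinforcement-learning schemes.
    import random

    def online_rl_selection(solution, heuristics, evaluate,
                            iterations=1000, epsilon=0.1):
        # Keep a utility score per low-level heuristic, reward a heuristic when
        # it improves the solution, and select the best-scoring one most of the
        # time, with occasional random exploration.
        scores = [0.0] * len(heuristics)
        cost = evaluate(solution)
        for _ in range(iterations):
            if random.random() < epsilon:
                index = random.randrange(len(heuristics))                     # explore
            else:
                index = max(range(len(heuristics)), key=lambda i: scores[i])  # exploit
            candidate = heuristics[index](solution)
            candidate_cost = evaluate(candidate)
            if candidate_cost < cost:
                scores[index] += 1.0      # reward an improving heuristic
                solution, cost = candidate, candidate_cost
            else:
                scores[index] -= 0.1      # mild penalty otherwise
        return solution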
In off-line learning hyper-heuristics, the idea is to gather knowledge, in the form of rules or programs, from a set of training instances that will hopefully generalise to the process of solving unseen instances. Examples of off-line learning approaches within hyper-heuristics are: learning classifier systems (Ross et al., 2002), case-based reasoning (Burke et al., 2006) and genetic programming (Burke et al., 2006, 2007).
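The sketch below conveys the off-line idea with a deliberately simple stand-in: pairs of (problem-state features, best heuristic) gathered from training instances are turned into a nearest-neighbour selection rule. Actual systems such as learning classifier systems or genetic programming learn far richer rules; the data format here is a hypothetical one.
    def learn_selection_rule(training_examples):
        # training_examples: list of (state_features, heuristic_index) pairs,
        # gathered off-line from solved training instances (hypothetical format).
        def choose_heuristic(state_features):
            # 1-nearest-neighbour lookup over the training examples.
            def squared_distance(example):
                features, _ = example
                return sum((a - b) ** 2 for a, b in zip(features, state_features))
            _, heuristic_index = min(training_examples, key=squared_distance)
            return heuristic_index
        return choose_heuristic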
Applications
Hyper-heuristics have been applied across many different problems. Indeed, one of the motivations of hyper-heuristics is to be able to operate across different problem types. The following list is a non-exhaustive selection of some of the problems and fields in which hyper-heuristics have been explored:
- bin packing problem [11][12]
- Boolean satisfiability problem [10]
- educational timetabling [3][13]
- job shop scheduling [5][6][7][8][9]
- multi-objective problem solving and space allocation [14]
- nurse rostering [3]
- personnel scheduling [4]
- traveling salesman problem [15]
- vehicle routing problem [16]
Related Areas
Hyper-heuristics are not the only approach being investigated in the quest for more general and applicable search methodologies. Many researchers from computer science, artificial intelligence and operational research have already acknowledged the need for developing automated systems to replace the role of a human expert in the process of tuning and adapting search methodologies. The following list outlines some related areas of research:
- adaptation and self-adaptation of algorithm parameters
- adaptive memetic algorithm
- adaptive large neighbourhood search
- algorithm configuration
- algorithm control
- algorithm portfolios
- genetic programming
- indirect encodings in evolutionary algorithms
- variable neighbourhood search
- reactive search
See also
- genetic algorithms
- genetic programming
- evolutionary algorithms
- local search (optimization)
- machine learning
- memetic algorithms
- metaheuristics
- no free lunch in search and optimization
- particle swarm optimization
- reactive search
Notes
- ^ E. K. Burke, E. Hart, G. Kendall, J. Newall, P. Ross, and S. Schulenburg, Hyper-heuristics: An emerging direction in modern search technology, 2003
- ^ P. Ross, Hyper-heuristics, 2005.
- ^ E. K. Burke, G. Kendall, and E. Soubeiga, A tabu-search hyperheuristic for timetabling and rostering, 2003.
- ^ P. Cowling, G. Kendall, E. Soubeiga, A Hyperheuristic Approach to Scheduling a Sales Summit, 2001.
- ^ H. Fisher and G. L. Thompson, 1961, 1963.
- ^ W. B. Crowston et al., 1963.
- ^ R. H. Storer et al., 1992.
- ^ H. L. Fang, P. Ross, and D. Corne, 1993
- ^ U. Dorndorf and E. Pesch, 1995
- ^ M. Bader-El-Den and R. Poli, 2008
- ^ P. Ross et al., 2002
- ^ E. K. Burke, M. R. Hyde, and G. Kendall, 2006
- ^ E. K. Burke, B. McCollum, A. Meisels, S. Petrovic, and R. Qu, 2007
- ^ E. K. Burke, J. D. Landa-Silva, E. Soubeiga, 2005
- ^ R. E. Keller, R. Poli, 2008
- ^ D. Pisinger and S. Ropke, 2007
External links
Recent activities
- Workshop on Hyper-heuristics, to be held in conjunction with PPSN X, 10th International Conference on Parallel Problem Solving From Nature, Dortmund, Germany
References
Introductory chapters
- E. K. Burke, E. Hart, G. Kendall, J. Newall, P. Ross, and S. Schulenburg, Hyper-heuristics: An emerging direction in modern search technology, Handbook of Metaheuristics (F. Glover and G. Kochenberger, eds.), Kluwer, 2003, pp. 457–474.
- P. Ross, Hyper-heuristics, Search Methodologies: Introductory Tutorials in Optimization and Decision Support Techniques (E. K. Burke and G. Kendall, eds.), Springer, 2005, pp. 529-556.
Other sources
- M. Bader-El-Den and R. Poli, Generating SAT local-search heuristics using a GP hyper-heuristic framework, Artificial Evolution, 8th International Conference, Evolution Artificielle, EA 2007, Tours, France, October 29-31, 2007, Revised Selected Papers, Lecture Notes in Computer Science, vol. 4926, Springer, 2008, pp. 37-49.
- E. K. Burke, M. R. Hyde, and G. Kendall, Evolving bin packing heuristics with Genetic Programming, Parallel Problem Solving from Nature - PPSN IX, LNCS, vol. 4193, Springer-Verlag, 2006, pp. 860–869.
- E. K. Burke, M. R. Hyde, G. Kendall, and J. Woodward, Automatic heuristic generation with genetic programming: evolving a jack-of-all-trades or a master of one, GECCO ’07: Proceedings of the 9th annual conference on Genetic and evolutionary computation (London), vol. 2, ACM Press, 2007, pp. 1559–1565.
- E. K. Burke, G. Kendall, and E. Soubeiga, A tabu-search hyperheuristic for timetabling and rostering, Journal of Heuristics, 9(6), 2003, 451–470.
- E. K. Burke, J. D. Landa-Silva, E. Soubeiga, Multi-objective Hyper-heuristic Approaches for Space Allocation and Timetabling, Meta-heuristics: Progress as Real Problem Solvers, Selected Papers from the 5th Metaheuristics International Conference (MIC 2003), Springer, 2005, pp. 129-158.
- E. K. Burke, B. McCollum, A. Meisels, S. Petrovic, and R. Qu, A graph-based hyper-heuristic for educational timetabling problems, European Journal of Operational Research, 176, 2007, 177–192.
- E. K. Burke, S. Petrovic, and R. Qu, Case based heuristic selection for timetabling problems, Journal of Scheduling, 9(2), 2006, 115–132.
- P. Cowling, G. Kendall, E. Soubeiga, A Hyperheuristic Approach to Scheduling a Sales Summit, Practice and Theory of Automated Timetabling III: Third International Conference, PATAT 2000, Konstanz, Germany, August 2000, selected papers (E. K. Burke and W. Erben, eds.), Lecture Notes in Computer Science, vol. 2079, Springer, 2001, pp. 176-190.
- W. B. Crowston, F. Glover, G. L. Thompson, and J. D. Trawick. Probabilistic and parametric learning combinations of local job shop scheduling rules. ONR Research memorandum, GSIA, Carnegie Mellon University, Pittsburgh, 1963.
- U. Dorndorf and E. Pesch, Evolution based learning in a job shop scheduling environment, Computers and Operations Research, 22(1), 1995, 25–40.
- H. L. Fang, P. Ross, and D. Corne, A promising genetic algorithm approach to job shop scheduling, rescheduling, and open-shop scheduling problems, Fifth International Conference on Genetic Algorithms (San Mateo) (S. Forrest, ed.), Morgan Kaufmann, 1993, pp. 375–382.
- H. Fisher and G. L. Thompson, Probabilistic learning combinations of local job-shop scheduling rules, Factory Scheduling Conference (Carnegie Institute of Technology), May 10-12, 1961.
- H. Fisher and G. L. Thompson, Probabilistic learning combinations of local job-shop scheduling rules, Industrial Scheduling (New Jersey) (J. F. Muth and G. L. Thompson, eds.), Prentice-Hall, Inc., 1963, pp. 225–251.
- J. Gratch and S. Chien, Adaptive problem-solving for large-scale scheduling problems: a case study, Journal of Artificial Intelligence Research, 4, 1996, 365–396.
- J. Gratch, S. Chien, and G. DeJong, Learning search control knowledge for deep space network scheduling, Proceedings of the Tenth International Conference on Machine Learning (Amherst, MA), 1993, pp. 135–142.
- E. Hart, J. Nelson, and P. Ross, Solving a Real-World Problem Using an Evolving Heuristically Driven Schedule Builder. Evolutionary Computation 6(1), 1998, 61-80.
- E. Hart and P. Ross, A Heuristic Combination Method for Solving Job-Shop Scheduling Problems, Parallel Problem Solving from Nature - PPSN V, Lecture Notes in Computer Science 1498, 1998, 845-854.
- R. E. Keller, R. Poli, Cost-Benefit Investigation of a Genetic-Programming Hyperheuristic, Artificial Evolution, 8th International Conference, Evolution Artificielle, EA 2007, Tours, France, October 29-31, 2007, Revised Selected Papers, Lecture Notes in Computer Science, vol. 4926, Springer, 2008, pp. 13-24.
- J. Mockus and L. Mockus, Bayesian approach to global optimization and applications to multi-objective constrained problems, Journal of Optimization Theory and Application, 70(1), 1991, 155-171.
- A. Nareyek, Choosing search heuristics by non-stationary reinforcement learning, Metaheuristics: Computer Decision-Making (M. G. C. Resende and J. P. de Sousa, eds.), Kluwer, 2003, pp. 523–544.
- E. Özcan, B. Bilgin, E. E. Korkmaz, Hill Climbers and Mutational Heuristics in Hyperheuristics, Lecture Notes in Computer Science, Springer-Verlag, The 9th International Conference on Parallel Problem Solving From Nature, 2006, pp. 202-211.
- E. Özcan, B. Bilgin, E. E. Korkmaz, A Comprehensive Analysis of Hyper-heuristics, Intelligent Data Analysis, 12:1, 2008, pp. 3-23.
- D. Pisinger and S. Ropke, A general heuristic for vehicle routing problems, Computers and Operations Research, 34, 2007, 2403–2435.
- P. Ross, S. Schulenburg, J. G. Marin-Blazquez, and E. Hart, Hyper-heuristics: learning to combine simple heuristics in bin-packing problems, Proceedings of the Genetic and Evolutionary Computation Conference, GECCO’02, Morgan Kaufmann, 2002, pp. 942-948.
- R. H. Storer, S. D. Wu, and R. Vaccari, New search spaces for sequencing problems with application to job shop scheduling, Management Science, 38 (10), 1992, 1495–1509.