Genetic algorithm in economics
The genetic algorithm in economics is an algorithm used to model the learning behaviour of economic agents. The term "genetic algorithm" is often abbreviated as GA. The genetic algorithm is a particular class of evolutionary algorithm inspired by evolutionary biology. A genetic algorithm is defined as basic if it contains only methods for reproduction and experimentation; it is defined as augmented if it also contains a selection operator.
Genetic algorithms are implemented as a computer simulation in which a population of abstract representations (called chromosomes or strings) of candidate solutions (called individuals, or agents) to an optimization problem evolves toward better solutions. Traditionally, solutions are represented in binary as strings of 0s and 1s, but other encodings are also possible. The evolution usually starts from a population of randomly generated individuals and happens in generations. In each generation, the fitness of every individual in the population is evaluated, multiple individuals are stochastically selected from the current population (based on their fitness), and modified (mutated or recombined) to form a new population. The new population is then used in the next iteration of the algorithm.
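As an illustration of this generational loop, the following is a minimal sketch in Python, not tied to any particular economic model; the one-max fitness function, population size, and crossover and mutation rates are arbitrary stand-ins.

```python
import random

# Minimal generational GA sketch (illustrative only): binary strings,
# tournament selection, one-point crossover, bit-flip mutation.
# The fitness function (count of 1-bits) is an arbitrary stand-in.
STRING_LEN = 16
POP_SIZE = 30
GENERATIONS = 50
CROSS_PROB = 0.6
MUT_PROB = 0.01

def fitness(string):
    return sum(string)

def tournament(pop, k=2):
    # Pick k individuals at random and keep the fittest.
    return max(random.sample(pop, k), key=fitness)

def crossover(a, b):
    # Recombine two parents around a random cut point.
    point = random.randint(1, STRING_LEN - 1)
    return a[:point] + b[point:]

def mutate(string):
    # Flip each bit independently with a small probability.
    return [bit ^ 1 if random.random() < MUT_PROB else bit for bit in string]

population = [[random.randint(0, 1) for _ in range(STRING_LEN)]
              for _ in range(POP_SIZE)]

for gen in range(GENERATIONS):
    offspring = []
    for _ in range(POP_SIZE):
        parent1, parent2 = tournament(population), tournament(population)
        child = crossover(parent1, parent2) if random.random() < CROSS_PROB else parent1[:]
        offspring.append(mutate(child))
    population = offspring

print(max(fitness(s) for s in population))
```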
The genetic algorithm has increasingly been applied to economics over the last two decades. It has been used to characterize a variety of models, including the cobweb model, the overlapping generations model, game-theoretic models and asset pricing models.
Design
The genetic algorithm generally consists of a population of n agents with m strings. These strings are often randomly generated initially and are then updated every g periods. Each string is assigned a fitness value through a defined method, which is used as a measure of performance. The strings are updated through a series of operators. The basic genetic algorithm generally consists of three operators: the reproduction operator, which attempts to imitate successful agents, and the two experimentation operators, crossover and mutation, which are implemented to bring diversity into the system. The augmented genetic algorithm additionally includes an election operator, which adds a selection criterion.
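A structural sketch of this setup, assuming binary strings and each agent holding its own pool of m strings (as in the individual-learning case described below); the names (Agent, n, m, l, g) and the placeholder performance measure are illustrative, not taken from a specific model.

```python
import random
from dataclasses import dataclass, field

# Structural sketch only: n agents, each holding m binary strings of length l,
# with fitness re-evaluated every g periods. All names and values are illustrative.
n, m, l, g = 20, 5, 10, 1

@dataclass
class Agent:
    strings: list = field(default_factory=lambda: [
        [random.randint(0, 1) for _ in range(l)] for _ in range(m)])
    fitness: list = field(default_factory=lambda: [0.0] * m)

def evaluate(agent, period, performance):
    # 'performance' maps a string to a fitness value; in an economic model it
    # would typically decode the string into a decision and score its payoff.
    if period % g == 0:
        agent.fitness = [performance(s) for s in agent.strings]

agents = [Agent() for _ in range(n)]
for t in range(10):
    for a in agents:
        evaluate(a, t, performance=lambda s: sum(s) / l)
```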
Reproduction
The first operator, reproduction, works by imitation. In general, each agent selects another agent and observes that agent's fitness value. If the observed fitness is greater than its own, the agent adopts the other agent's string; otherwise, it preserves its own. These strings are then placed into an offspring pool to undergo the experimentation operators, crossover and mutation. Most selection functions are stochastic and designed so that a small proportion of less fit solutions are also selected. This helps keep the diversity of the population large, preventing premature convergence on poor solutions. Popular and well-studied selection methods include roulette wheel selection and tournament selection.
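A sketch of this imitation-style reproduction step, assuming the fitness value of each agent's string has already been computed elsewhere; tournament selection is shown alongside as one of the stochastic alternatives mentioned above.

```python
import random

def reproduce_by_imitation(strings, fitnesses):
    # Each agent observes one randomly chosen peer and copies that peer's
    # string only if the peer's fitness is strictly higher; otherwise the
    # agent keeps its own string. The results form the offspring pool.
    offspring_pool = []
    for i, own in enumerate(strings):
        j = random.randrange(len(strings))
        offspring_pool.append(strings[j][:] if fitnesses[j] > fitnesses[i] else own[:])
    return offspring_pool

def tournament_select(strings, fitnesses, k=2):
    # Stochastic alternative: pick k candidates at random and keep the fittest.
    contenders = random.sample(range(len(strings)), k)
    best = max(contenders, key=lambda idx: fitnesses[idx])
    return strings[best][:]
```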
Crossover
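Crossover recombines two parent strings from the offspring pool by choosing a cut point at random and swapping the segments on either side of it, so that each child mixes material from both parents. A minimal single-point crossover sketch, with binary strings represented as lists of bits:

```python
import random

def single_point_crossover(parent_a, parent_b):
    # Choose a cut point strictly inside the string and swap the tails.
    point = random.randint(1, len(parent_a) - 1)
    child_a = parent_a[:point] + parent_b[point:]
    child_b = parent_b[:point] + parent_a[point:]
    return child_a, child_b
```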
Mutation
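Mutation injects further diversity by flipping individual bits of a string with a small, fixed probability, so that values absent from the current population can still be reached. A minimal bit-flip sketch, where the mutation rate is an arbitrary stand-in:

```python
import random

def bit_flip_mutation(string, rate=0.01):
    # Flip each bit independently with a small probability (rate is illustrative).
    return [bit ^ 1 if random.random() < rate else bit for bit in string]
```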
Election
These processes ultimately result in an offspring pool of strings that differs from the initial parent pool. The election operator then compares the fitness of each parent string with the potential fitness of its offspring string. If the offspring string has a higher fitness value, it replaces the parent string in the population; otherwise, the parent string stays. This procedure generally increases the population's average fitness, since only the better strings are retained.
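A sketch of this election step, assuming the parents' fitness values and the offspring pool's hypothetical fitness values have already been computed elsewhere.

```python
def election(parents, parent_fitness, offspring, offspring_fitness):
    # An offspring string replaces its parent only if its (hypothetical)
    # fitness is strictly higher; otherwise the parent string is kept.
    return [o if of > pf else p
            for p, pf, o, of in zip(parents, parent_fitness, offspring, offspring_fitness)]
```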
Genetic algorithm in the cobweb model
The cobweb model is a simple supply and demand model for a good over t periods. Firms (agents) make a production quantity decision in a given period, but their output is not produced until the following period. Firms must therefore forecast the future price in some way, and the GA is used to model this learning behaviour. Initially their production decisions are random, but each period they learn a little more. The result is that the agents converge to a neighbourhood of the rational expectations (RATEX) equilibrium in both the stable and the unstable case. If the election operator is used, the GA converges exactly to the RATEX equilibrium.
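For concreteness, the RATEX benchmark the agents converge towards can be written down once specific functional forms are chosen. The sketch below assumes linear demand P = A - B*Q and identical quadratic costs C(q) = x*q + (y/2)*q^2, a common specification in this literature; the parameter values are arbitrary.

```python
# Illustrative only: the rational-expectations (RATEX) benchmark for a cobweb
# market, assuming linear inverse demand P = A - B*Q and identical quadratic
# costs C(q) = x*q + 0.5*y*q**2 for each of n firms. These functional forms
# and parameter values are assumptions, not taken from the text above.
A, B = 100.0, 1.0      # demand intercept and slope
x, y = 2.0, 1.5        # marginal-cost parameters
n = 10                 # number of firms

# Profit maximisation given an expected price p_e implies q = (p_e - x) / y.
# At the RATEX equilibrium the expected price equals the realised price:
p_star = (A * y + B * n * x) / (y + B * n)
q_star = (p_star - x) / y

print(f"RATEX price {p_star:.3f}, per-firm quantity {q_star:.3f}")
```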
There are two types of learning with which these agents can be deployed: social learning and individual learning. In social learning, each firm is endowed with a single string, which is used as its quantity production decision, and compares this string against other firms' strings. In individual learning, each agent is endowed with a pool of strings, which are compared against the other strings within the agent's own pool. The individual case can be thought of as competing ideas within a firm, whereas the social case can be thought of as firms learning from more successful firms. Note that in the social case, and in the individual learning case with identical cost functions, the solution is homogeneous: all agents' production decisions are identical. If the cost functions are not identical, however, the result is a heterogeneous solution in which firms produce different quantities (they are still locally homogeneous, that is, within a firm's own pool all the strings are identical).
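The two population structures can be sketched as follows, with sizes and string lengths chosen only for illustration: in the social case the genetic operators act across the single population of firm-level strings, while in the individual case they act separately within each firm's own pool.

```python
import random

n_firms, pool_size, string_len = 20, 15, 10

# Social learning: one string per firm; reproduction, crossover and mutation
# act on this single population of firm-level strings.
social_population = [[random.randint(0, 1) for _ in range(string_len)]
                     for _ in range(n_firms)]

# Individual learning: each firm carries its own pool of strings, and the
# genetic operators are applied within each firm's pool separately.
individual_population = [
    [[random.randint(0, 1) for _ in range(string_len)] for _ in range(pool_size)]
    for _ in range(n_firms)
]
```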
After all agents have made a quantity production decision, the quantities are aggregated and substituted into a demand function to obtain a price. Each firm's profit is then calculated, and fitness values are computed as a function of profits. After the offspring pool is generated, hypothetical fitness values are calculated for it. These hypothetical values are based on an estimate of the price level, often simply the previous period's price.
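A sketch of one such market period, continuing the assumed linear demand and quadratic cost specification from the RATEX sketch above; the binary decoding of strings into quantities and the use of raw profit as the fitness measure are also assumptions made for illustration.

```python
# Same assumed parameters as in the RATEX sketch above.
A, B = 100.0, 1.0          # linear inverse demand P = A - B*Q
x, y = 2.0, 1.5            # quadratic cost C(q) = x*q + 0.5*y*q**2
Q_MAX = 20.0               # assumed upper bound for the decoded quantity

def decode(string):
    # Map a binary string to a quantity in [0, Q_MAX] (encoding is illustrative).
    value = int("".join(map(str, string)), 2)
    return Q_MAX * value / (2 ** len(string) - 1)

def cost(q):
    return x * q + 0.5 * y * q ** 2

def market_period(strings):
    # Decode every firm's string, aggregate quantities, and clear the market.
    quantities = [decode(s) for s in strings]
    price = max(A - B * sum(quantities), 0.0)
    profits = [price * q - cost(q) for q in quantities]
    return price, profits   # profits serve directly as fitness values here

def hypothetical_fitness(offspring_strings, previous_price):
    # Potential fitness of offspring strings, evaluated at last period's price.
    return [previous_price * decode(s) - cost(decode(s)) for s in offspring_strings]
```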
See also
References
- J. Arifovic, "Genetic Algorithm Learning and the Cobweb Model", Journal of Economic Dynamics and Control, vol. 18, issue 1 (January 1994), pp. 3-28.