In machine learning, instance-based learning or memory-based learning is a family of learning algorithms that, instead of performing explicit generalization, compare new problem instances with instances seen in training, which have been stored in memory. Instance-based learning is a kind of lazy learning.
It is called instance-based because it constructs hypotheses directly from the training instances themselves.[1] This means that the hypothesis complexity can grow with the data:[1] in the worst case, a hypothesis is a list of n training items, and the computational complexity of classifying a single new instance is O(n). One advantage that instance-based learning has over other methods of machine learning is its ability to adapt its model to previously unseen data. Whereas other methods generally require the entire set of training data to be re-examined when one instance is changed, instance-based learners may simply store the new instance or throw an old instance away.
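These properties can be made concrete with a minimal Python sketch (the class and method names here are hypothetical, chosen for illustration only): the "model" is just stored memory, classification is a single O(n) scan over the stored instances, and the learner is updated by storing or discarding individual instances rather than retraining.

```python
import math

class InstanceMemory:
    """Minimal memory-based learner: stores instances, generalizes only at query time."""

    def __init__(self):
        self.instances = []  # list of (feature_vector, label) pairs; grows with the data

    def add_instance(self, x, y):
        # Updating the hypothesis is just storing one more instance -- no retraining.
        self.instances.append((x, y))

    def remove_instance(self, x, y):
        # Forgetting an instance is likewise a local edit to memory.
        self.instances.remove((x, y))

    def classify(self, query):
        # Lazy generalization: one O(n) scan over all stored instances
        # to find the nearest neighbor of the query point.
        nearest = min(self.instances, key=lambda xy: math.dist(query, xy[0]))
        return nearest[1]

# Usage: the hypothesis is exactly the stored instances.
memory = InstanceMemory()
memory.add_instance((1.0, 1.0), "a")
memory.add_instance((5.0, 5.0), "b")
print(memory.classify((1.2, 0.9)))  # -> "a"
```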
A simple example of an instance-based learning algorithm is the k-nearest neighbor algorithm. Daelemans and Van den Bosch describe variations of this algorithm for use in natural language processing (NLP), claiming that memory-based learning is both more psychologically realistic than other machine-learning schemes and more effective in practice.[2]
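For concreteness, the following is a minimal k-nearest neighbor classifier in Python, sketched under generic assumptions (Euclidean distance and majority voting over numeric feature vectors); it illustrates the basic algorithm rather than the NLP-specific variants described by Daelemans and Van den Bosch.

```python
import math
from collections import Counter

def knn_classify(training, query, k=3):
    """Classify `query` by majority vote among its k nearest training instances.

    `training` is a sequence of (feature_vector, label) pairs.
    """
    # Rank every stored instance by Euclidean distance to the query (an O(n) scan).
    neighbors = sorted(training, key=lambda xy: math.dist(query, xy[0]))[:k]
    # Take a majority vote over the labels of the k closest instances.
    votes = Counter(label for _, label in neighbors)
    return votes.most_common(1)[0][0]

# Example: two of the three nearest stored instances are labeled "neg".
data = [((0.0, 0.0), "neg"), ((0.5, 0.5), "neg"), ((3.0, 3.0), "pos")]
print(knn_classify(data, (0.2, 0.1), k=3))  # -> "neg"
```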