tf–idf

From Wikipedia, the free encyclopedia

The tf–idf weight (term frequency–inverse document frequency) is a statistical measure often used in information retrieval and text mining to evaluate how important a word is to a document in a collection or corpus. The importance increases proportionally to the number of times a word appears in the document but is offset by the frequency of the word in the corpus. Variations of the tf–idf weighting scheme are often used by search engines as a central tool in scoring and ranking a document's relevance given a user query.


Mathematical details

The term frequency in a given document is simply the number of times a given term appears in that document. This count is usually normalized to prevent a bias towards longer documents (which may have a higher raw count for a term regardless of its actual importance in the document), giving a measure of the importance of the term t_i within the particular document d_j:

 \mathrm{tf_{i,j}} = \frac{n_{i,j}}{\sum_k n_{k,j}}

where n_{i,j} is the number of occurrences of the considered term t_i in document d_j, and the denominator is the total number of occurrences of all terms in document d_j.
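The normalized term frequency above can be sketched in Python. This is not from the article; whitespace tokenization and the function name are illustrative assumptions:

```python
from collections import Counter

def term_frequency(term, document):
    """tf_{i,j}: count of `term` in the document divided by total token count."""
    tokens = document.lower().split()
    counts = Counter(tokens)
    return counts[term] / len(tokens)

doc = "the cow jumped over the moon the cow"
print(term_frequency("cow", doc))  # 2 occurrences out of 8 tokens -> 0.25
```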

The inverse document frequency is a measure of the general importance of the term (obtained by dividing the number of all documents by the number of documents containing the term, and then taking the logarithm of that quotient).

 \mathrm{idf_{i}} =  \log \frac{|D|}{|\{d_{j}: t_{i} \in d_{j}\}|}

with

  • |D| : total number of documents in the corpus
  • |\{d_{j} : t_{i} \in d_{j}\}| : number of documents in which the term t_i appears (that is, n_{i,j} \neq 0).
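A minimal sketch of the idf definition, assuming whitespace tokenization and that the term appears in at least one document (so the denominator is nonzero); names are illustrative, not from the article:

```python
import math

def inverse_document_frequency(term, corpus):
    """idf_i = log(|D| / |{d_j : t_i in d_j}|) over a list of documents."""
    containing = sum(1 for doc in corpus if term in doc.lower().split())
    return math.log(len(corpus) / containing)

corpus = ["the cow says moo", "the sky is blue", "the moon is full"]
print(inverse_document_frequency("cow", corpus))  # log(3/1) ~ 1.0986
```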

Then

 \mathrm{tfidf_{i,j}} = \mathrm{tf_{i,j}} \cdot \mathrm{idf_{i}}

A high weight in tf–idf is reached by a high term frequency (in the given document) and a low document frequency of the term in the whole collection of documents; the weights hence tend to filter out common terms.
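Putting the two factors together, the filtering effect on common terms can be seen in a short sketch (illustrative names and whitespace tokenization assumed, as before):

```python
import math
from collections import Counter

def tfidf(term, document, corpus):
    """tfidf_{i,j} = tf_{i,j} * idf_i for one document within a corpus."""
    tokens = document.lower().split()
    tf = Counter(tokens)[term] / len(tokens)
    containing = sum(1 for d in corpus if term in d.lower().split())
    return tf * math.log(len(corpus) / containing)

corpus = ["the cow says moo",
          "the sky is blue and the sky is wide",
          "the moon is full"]
# "cow" is frequent in its document and rare in the corpus -> high weight
print(tfidf("cow", corpus[0], corpus))
# "the" appears in every document -> idf = log(3/3) = 0 -> weight 0
print(tfidf("the", corpus[0], corpus))
```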

Example

There are many different formulas used to calculate tf–idf. The term frequency (tf) is the number of times the word appears in a document divided by the total number of words in that document. If a document contains 100 words and the word cow appears 3 times, then the term frequency of cow in that document is 0.03 (3/100). One way of calculating document frequency (df) is to divide the number of documents containing the word cow by the total number of documents in the collection. So if cow appears in 1,000 documents out of a total of 10,000,000, then the document frequency is 0.0001 (1,000/10,000,000). The simplest tf–idf score is then the term frequency divided by the document frequency; for our example, the score for cow in the collection would be 300 (0.03/0.0001). An alternative, as above, is to take the logarithm of the inverse document frequency, with the natural logarithm being a common choice. In this example we would have idf = ln(10,000,000/1,000) ≈ 9.21, so tf–idf = 0.03 × 9.21 ≈ 0.28.
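The arithmetic of the cow example can be checked directly; the variable names here are only for illustration:

```python
import math

tf = 3 / 100              # "cow" appears 3 times in a 100-word document
df = 1_000 / 10_000_000   # fraction of documents containing "cow"

print(tf / df)            # plain-ratio variant -> 300
idf = math.log(10_000_000 / 1_000)  # natural-log variant, ~ 9.21
print(tf * idf)           # ~ 0.28
```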

Applications in Vector Space Model

The tf-idf weighting scheme is often used in the vector space model together with cosine similarity to determine the similarity between two documents.
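Once each document is represented as a vector of tf-idf weights over a shared vocabulary, document similarity reduces to the cosine of the angle between two vectors. A minimal sketch, with hypothetical weight vectors:

```python
import math

def cosine_similarity(u, v):
    """Cosine of the angle between two equal-length weight vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Hypothetical tf-idf vectors over a 4-term shared vocabulary
doc_a = [0.0, 0.27, 0.0, 0.13]
doc_b = [0.0, 0.27, 0.05, 0.0]
print(cosine_similarity(doc_a, doc_b))  # ~ 0.89: the documents share one heavy term
```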
