Concept mining

Introduction

Concept mining is a discipline at the nexus of data mining, text mining, and linguistics that draws on artificial intelligence and statistics. Its aim is to extract concepts from documents. Because documents at face value consist of words and other symbols rather than concepts, the problem is nontrivial, but solving it can provide powerful insights into the meaning, provenance and similarity of documents.

Methods

Traditionally, the conversion of words to concepts has been performed using a thesaurus, and computational techniques tend to do the same. The thesauri used are either created specifically for the task or are pre-existing language models, usually related to Princeton's WordNet.
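
As an illustration of this kind of lookup, the sketch below maps a word to its candidate concepts using the WordNet interface of the NLTK Python library; the choice of NLTK is an assumption made for illustration, and any thesaurus with a programmatic interface would serve equally well.

    # Minimal sketch: list the candidate WordNet concepts (synsets) for a word.
    # Assumes NLTK and its WordNet data are installed
    # (pip install nltk; python -m nltk.downloader wordnet).
    from nltk.corpus import wordnet as wn

    def candidate_concepts(word):
        """Return the WordNet synsets that the word could map to."""
        return wn.synsets(word)

    for synset in candidate_concepts("bank"):
        print(synset.name(), "-", synset.definition())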

The mappings of words to concepts are often ambiguous: typically, each word in a given language relates to several possible concepts. Humans use the surrounding context, where available, to disambiguate the various meanings of a piece of text. Machine translation systems cannot easily infer context, which gives rise to the notorious mistranslations such systems produce.

For the purposes of concept mining, however, these ambiguities tend to matter less than they do in machine translation: over large documents the ambiguities tend to even out, much as is the case in text mining.

There are many disambiguation techniques that may be used. Examples include linguistic analysis of the text and the use of word and concept association frequency information inferred from large text corpora.
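
For example, a simple frequency-based heuristic picks the sense whose lemmas occur most often in sense-tagged corpora, while the Lesk algorithm scores senses by the overlap between their dictionary definitions and the surrounding words. The sketch below, which assumes NLTK's WordNet interface and its built-in Lesk implementation, shows both heuristics side by side.

    # Minimal sketch of two disambiguation heuristics, assuming NLTK:
    # a corpus-frequency baseline and definition/context overlap (Lesk).
    from nltk.corpus import wordnet as wn
    from nltk.wsd import lesk

    def most_frequent_sense(word):
        """Choose the sense whose lemmas are most frequent in tagged corpora."""
        synsets = wn.synsets(word)
        if not synsets:
            return None
        return max(synsets, key=lambda s: sum(l.count() for l in s.lemmas()))

    context = "I deposited the cheque at the bank on the corner".split()
    print(most_frequent_sense("bank"))     # frequency-based choice
    print(lesk(context, "bank", pos="n"))  # context-based choice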

Applications

  • Detecting and indexing similar documents in large corpora

One of the spin-offs of calculating document statistics in the concept domain, rather than the word domain, is that concepts form natural tree structures based on hypernymy and meronymy. These structures can be used to produce simple tree-membership statistics, which locate any document in a Euclidean concept space. If the size of a document is also treated as another dimension of this space, an extremely efficient indexing system can be created. This technique is currently in commercial use to locate similar legal documents in a corpus of 2.5 million documents.
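
A minimal sketch of this idea follows, again assuming NLTK's WordNet: each document is reduced to counts of the hypernym ancestors reachable from its words, giving a point in a Euclidean concept space, with document length appended as an extra dimension. The details (first-sense lookup, the ancestor vocabulary, plain Euclidean distance) are illustrative choices, not a description of the commercial system mentioned above.

    # Minimal sketch: place documents in a Euclidean concept space built from
    # hypernym tree membership, with document size as an extra dimension.
    from collections import Counter
    from math import sqrt
    from nltk.corpus import wordnet as wn

    def concept_counts(words):
        """Count every hypernym ancestor reachable from each word's first sense."""
        counts = Counter()
        for word in words:
            synsets = wn.synsets(word)
            if synsets:
                for path in synsets[0].hypernym_paths():
                    counts.update(s.name() for s in path)
        return counts

    def concept_vector(words, vocabulary):
        counts = concept_counts(words)
        return [counts[c] for c in vocabulary] + [len(words)]  # size dimension

    def euclidean(a, b):
        return sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    doc_a = "the court dismissed the appeal".split()
    doc_b = "the judge rejected the petition".split()
    vocab = sorted(set(concept_counts(doc_a)) | set(concept_counts(doc_b)))
    print(euclidean(concept_vector(doc_a, vocab), concept_vector(doc_b, vocab)))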

  • Clustering documents by topic

Standard numeric clustering techniques may be used in the "concept space" described above to locate and index documents by their inferred topic. They are numerically far more efficient than their text mining counterparts and tend to behave more intuitively, in that they map better to the similarity measures a human would produce.
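
As a sketch, concept vectors such as those outlined above can be fed straight into an off-the-shelf clustering algorithm; the example below assumes scikit-learn's k-means implementation and a small set of hypothetical concept vectors.

    # Minimal sketch: cluster documents by topic directly in concept space.
    # The vectors are hypothetical; in practice they would come from a
    # concept-counting step such as the one sketched above.
    from sklearn.cluster import KMeans

    concept_vectors = [   # one row per document, one column per concept
        [3, 0, 1, 0],
        [2, 1, 0, 0],
        [0, 0, 4, 2],
        [0, 1, 3, 2],
    ]

    kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(concept_vectors)
    print(kmeans.labels_)  # cluster assignment (inferred topic) per document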

Benefits

Text mining models tend to be very large. A model that classifies, for instance, news stories using support vector machines or the naïve Bayes algorithm can run to megabytes and is therefore slow to load and evaluate. Concept mining models can be minute in comparison, on the order of hundreds of bytes.

For some applications, such as plagiarism detection, concept mining offers new possibilities. Where a plagiariser has been cunning enough to perform thesaurus-based substitution that fools text comparison algorithms, the concepts in the document remain relatively unchanged. So 'the cat sat on the mat' and 'the feline squatted on the rug' appear very different to text mining algorithms but nearly identical to concept mining algorithms.
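
The sketch below illustrates the contrast on this example, assuming NLTK's WordNet: the two sentences share few literal words, but the sets of hypernym ancestors reached from their words overlap heavily, so a concept-level measure (Jaccard overlap here, purely for illustration) rates them as considerably more similar than a word-level one does.

    # Minimal sketch: word-level vs concept-level overlap for the two sentences.
    from nltk.corpus import wordnet as wn

    def jaccard(a, b):
        return len(a & b) / len(a | b) if a | b else 0.0

    def concepts(words):
        """Collect hypernym ancestors of each word's first WordNet sense."""
        found = set()
        for word in words:
            synsets = wn.synsets(word)
            if synsets:
                for path in synsets[0].hypernym_paths():
                    found.update(s.name() for s in path)
        return found

    a = "the cat sat on the mat".split()
    b = "the feline squatted on the rug".split()
    print(jaccard(set(a), set(b)))            # word overlap: low
    print(jaccard(concepts(a), concepts(b)))  # concept overlap: higher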

Software

Concept mining is very much in a state of flux, but a few commercial products exist:

A live demo of detecting plagiarised blog stories is available on the blog plagiarism demo page.

  • ConceptNet - a project attempting to extract concept relationships from a large text corpus.
  • PolyAnalyst - a commercial data mining/text mining tool that uses WordNet and supports generalizing keywords via hypernymy, among other features.
  • DrugSense NewsBot - a specialized concept mining/text mining application that uses over 200 hand-crafted concepts to categorize and classify news articles relating to illegal drugs. The concepts guide 24/7 back-end spider processes that discover around 800 breaking drug news articles per day, and they drive the creation of the site's HTML pages, news feeds and more. A proprietary "Concept Server" engine is used to find concepts in documents efficiently, and inferences generated from the concepts detected in articles feed various drug news analyses and products, including concept-based automated propaganda analysis.

See also