Text Retrieval Conference

The Text REtrieval Conference (TREC) is an ongoing series of workshops focusing on different information retrieval (IR) research areas, or tracks. It is co-sponsored by the National Institute of Standards and Technology (NIST) and the Intelligence Advanced Research Projects Activity (part of the Office of the Director of National Intelligence), and began in 1992 as part of the TIPSTER Text program. Its purpose is to support and encourage research within the information retrieval community by providing the infrastructure necessary for large-scale evaluation of text retrieval methodologies, and to speed the transfer of technology from laboratories to products.

Each track has a challenge in which NIST provides participating groups with data sets and test problems. Depending on the track, test problems might be questions, topics, or target extractable features. Uniform scoring is performed so that the systems can be evaluated fairly. After the results are evaluated, a workshop gives participants a venue to share ideas and to present current and future research work.
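The uniform scoring described above is typically done with NIST's trec_eval tool, which compares a system's ranked results against relevance judgments ("qrels") using measures such as average precision. As an illustrative sketch only (the function names and data layout here are assumptions, not trec_eval's actual interface), average precision for one topic can be computed like this:

```python
def average_precision(ranked_ids, relevant_ids):
    """Average precision for one topic: the mean of precision@k
    taken at each rank k where a relevant document appears."""
    relevant_ids = set(relevant_ids)
    if not relevant_ids:
        return 0.0
    hits = 0
    precision_sum = 0.0
    for k, doc_id in enumerate(ranked_ids, start=1):
        if doc_id in relevant_ids:
            hits += 1
            precision_sum += hits / k
    return precision_sum / len(relevant_ids)

def mean_average_precision(runs, qrels):
    """Mean average precision (MAP) over all judged topics.
    runs: topic id -> ranked list of doc ids (a system's submission)
    qrels: topic id -> set of relevant doc ids (the judgments)"""
    topics = list(qrels)
    return sum(average_precision(runs[t], qrels[t]) for t in topics) / len(topics)
```

For example, a ranking ["d1", "d2", "d3"] judged against relevant set {"d1", "d3"} scores (1/1 + 2/3) / 2 ≈ 0.833. Because every group is scored against the same judgments with the same measure, results are directly comparable across systems.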

Tracks

Current Tracks

New tracks are added as new research needs are identified; the list below is current as of TREC 2014.

Past Tracks

In 2003, the video retrieval track became its own independent evaluation, named TRECVID.

Related Events

In 1997, a Japanese counterpart of TREC was launched (first workshop in 1999), called NTCIR (NII Test Collection for IR Systems), and in 2000, a European counterpart was launched, called CLEF (Cross Language Evaluation Forum).

Conference Contributions

NIST claims that within the first six years of the workshops, the effectiveness of retrieval systems approximately doubled.[1] The conference was also the first to hold large-scale evaluations of non-English documents, speech, video and retrieval across languages. Additionally, the challenges have inspired a large body of publications. Technology first developed in TREC is now included in many of the world's commercial search engines. An independent report by RTII found that "about one-third of the improvement in web search engines from 1999 to 2009 is attributable to TREC. Those enhancements likely saved up to 3 billion hours of time using web search engines. ... Additionally, the report showed that for every $1 that NIST and its partners invested in TREC, at least $3.35 to $5.07 in benefits were accrued to U.S. information retrieval researchers in both the private sector and academia."[2][3]

While one study suggests that the state of the art for ad hoc search did not advance substantially in the preceding decade,[4] that finding applies only to searching for topically relevant documents in small news and web collections of a few gigabytes. There have been advances in other types of ad hoc search over the same period. For example, test collections created for known-item web search showed improvements from the use of anchor text, title weighting, and URL length, techniques that were not useful on the older ad hoc test collections. In 2009, a new billion-page web collection was introduced, and spam filtering was found to be a useful technique for ad hoc web search, unlike in past test collections.

The test collections developed at TREC are useful not only for (potentially) helping researchers advance the state of the art, but also for allowing developers of new (commercial) retrieval products to evaluate their effectiveness on standard tests. In the past decade, TREC has created new tests for enterprise e-mail search, genomics search, spam filtering, e-discovery, and several other retrieval domains.

TREC systems often provide a baseline for further research. Examples include:

Participation

The conference brings together a varied, international group of researchers and developers.[9][10][11] In 2003, 93 groups from academia and industry, representing 22 countries, participated.

References

External links