Content-based image retrieval

Content-based image retrieval (CBIR), also known as query by image content (QBIC) and content-based visual information retrieval (CBVIR), is the application of computer vision to the image retrieval problem, that is, the problem of searching for digital images in large databases.

"Content-based" means that the search will analyze the actual contents of the image. The term 'content' in this context might refer colors, shapes, textures, or any other information that can be derived form the image itself. Without the ability to examine image content, searches must rely on metadata such as captions or keywords. Such metadata must be generated by a human and stored alongside each image in the database.

A content-based image retrieval system (CBIRS) simply refers to a piece of software that implements CBIR.

History

The term CBIR seems to have originated in 1992, when it was used by T. Kato to describe experiments into automatic retrieval of images from a database, based on the colors and shapes present. Since then, the term has been used to describe the process of retrieving desired images from a large collection on the basis of syntactical image features. The techniques, tools and algorithms that are used originate from fields such as statistics, pattern recognition, signal processing, and computer vision.

Technical progress

There is growing interest in CBIR because of the limitations inherent in metadata-based systems, as well as the large range of possible uses for efficient image retrieval. Textual information about images can be easily searched using existing technology, but requires humans to describe every image in the database by hand. This is impractical for very large databases, or for images that are generated automatically, e.g. from surveillance cameras. Keyword searches can also miss images whose descriptions use different synonyms. Systems that categorize images into semantic classes, with "cat" as a subclass of "animal" for example, avoid the synonym problem but still face the same scaling issues.

Potential uses for CBIR include:

  • Art collections
  • Photograph archives
  • Retail catalogs
  • Medical records

CBIR software systems and techniques

Query techniques

Different implementations of CBIR make use of different types of user queries.

Query by example

Query by example is a query technique that involves providing the CBIRS with an example image that it will then base its search upon. The underlying search algorithms may vary depending on the application, but result images should all share common elements with the provided example.

Options for providing example images to the system include:

  • A preexisting image may be supplied by the user or chosen from a random set.
  • The user draws a rough approximation of the image they are looking for, for example with blobs of color or general shapes.[1]

This query technique removes the difficulties that can arise when trying to describe images with words.
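
At its core, query by example reduces to comparing feature vectors: the system extracts a fixed-length descriptor from the example image and ranks database images by how close their descriptors are. The sketch below is a minimal, illustrative Python version only; the extract_features placeholder (here just the mean color of the image) stands in for whichever real descriptors a given system uses, such as the color, texture, and shape features described later in this article.

    import numpy as np

    def extract_features(image):
        # Hypothetical placeholder descriptor: the mean RGB color of the image.
        # Real CBIR systems use richer features (color histograms, texture,
        # shape descriptors, and so on).
        return image.reshape(-1, 3).mean(axis=0)

    def query_by_example(example, database, k=5):
        # Rank database images by Euclidean distance between feature vectors;
        # smaller distance means a closer match to the example.
        q = extract_features(example)
        distances = [np.linalg.norm(q - extract_features(img)) for img in database]
        return np.argsort(distances)[:k]  # indices of the k best matches

    # Toy usage with random "images" (H x W x 3 arrays of 8-bit values).
    rng = np.random.default_rng(0)
    db = [rng.integers(0, 256, (64, 64, 3)) for _ in range(20)]
    print(query_by_example(db[3], db, k=3))  # db[3] itself ranks first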

An example of a system that allows users to draw their search criteria can be found under External links below.

Semantic retrieval

The ideal CBIR system from a user perspective would involve what is referred to as semantic retrieval, where the user makes a request like "find pictures of dogs" or even "find pictures of Abraham Lincoln". This type of open-ended task is very difficult for computers to perform: pictures of chihuahuas and Great Danes look very different, and Lincoln may not always be facing the camera or in the same pose. Current CBIR systems therefore generally make use of lower-level features like texture, color, and shape, although some systems take advantage of very common higher-level features like faces (see facial recognition system). Not every CBIR system is generic; some systems are designed for a specific domain, e.g. shape matching can be used for finding parts inside a CAD-CAM database.

Other query methods

Other methods include specifying the proportions of colors desired (e.g. "80% red, 20% blue") and searching for images that contain an object given in a query image (as at [2]).

CBIR systems can also make use of relevance feedback, where the user progressively refines the search results by marking images in the results as "relevant", "not relevant", or "neutral" to the search query, then repeating the search with the new information.
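
A common way to implement such feedback, assumed here purely for illustration, is a Rocchio-style update: the query's feature vector is pulled toward the average of the images marked relevant and pushed away from those marked not relevant, and the search is then rerun with the adjusted vector. The weights below are illustrative, not standard values.

    import numpy as np

    def refine_query(query_vec, relevant, non_relevant,
                     alpha=1.0, beta=0.75, gamma=0.25):
        # Rocchio-style update: move toward relevant examples, away from
        # non-relevant ones. alpha/beta/gamma are illustrative weights.
        new_q = alpha * np.asarray(query_vec, dtype=float)
        if relevant:
            new_q += beta * np.mean(relevant, axis=0)
        if non_relevant:
            new_q -= gamma * np.mean(non_relevant, axis=0)
        return np.clip(new_q, 0, None)  # keep histogram-like features non-negative

    # Toy usage with 4-dimensional feature vectors.
    q = np.array([0.4, 0.3, 0.2, 0.1])
    relevant = [np.array([0.5, 0.3, 0.1, 0.1])]
    non_relevant = [np.array([0.0, 0.1, 0.6, 0.3])]
    print(refine_query(q, relevant, non_relevant))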

Content comparison techniques

The sections below describe common methods for extracting content from images so that they can be easily compared. The methods outlined are not specific to any particular application domain.

Color

Examining images based on the colors they contain is one of the most widely used techniques because it does not depend on image size or orientation. Color searches will usually involve comparing color histograms, though this is not the only technique in practice.
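
A minimal sketch of this idea, assuming 8-bit RGB images stored as NumPy arrays: each image is reduced to a normalized, coarsely binned color histogram, and two histograms are compared with histogram intersection, which yields 1.0 for identical color distributions and approaches 0 when they share no colors. Bin counts, color space, and distance measure all vary between real systems.

    import numpy as np

    def color_histogram(image, bins=8):
        # Coarse joint RGB histogram, normalized so the bins sum to 1; the
        # normalization is what makes the comparison independent of image size.
        hist, _ = np.histogramdd(image.reshape(-1, 3), bins=(bins,) * 3,
                                 range=((0, 256),) * 3)
        return hist.ravel() / hist.sum()

    def histogram_intersection(h1, h2):
        # Similarity in [0, 1]: total overlap between the two histograms.
        return float(np.minimum(h1, h2).sum())

    rng = np.random.default_rng(1)
    a = rng.integers(0, 256, (64, 64, 3))
    b = rng.integers(0, 256, (48, 96, 3))  # different size, same comparison
    print(histogram_intersection(color_histogram(a), color_histogram(a)))  # 1.0
    print(histogram_intersection(color_histogram(a), color_histogram(b)))  # < 1.0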

Texture

Texture measures look for visual patterns in images and how they are spatially defined. Textures are represented by texels, which are then placed into a number of sets depending on how many textures are detected in the image. These sets not only define the texture, but also where in the image the texture is located.
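
The texel grouping described above is one formulation; a simpler and widely used family of texture descriptors is based on gray-level co-occurrence statistics, which summarize how often pairs of gray levels appear next to each other. The sketch below (illustrative only, assuming an 8-bit grayscale image) computes two classic co-occurrence features, contrast and energy.

    import numpy as np

    def cooccurrence(gray, levels=8):
        # Quantize to `levels` gray levels, count horizontally adjacent pairs,
        # then normalize the counts into a joint probability matrix.
        q = gray.astype(np.int64) * levels // 256
        glcm = np.zeros((levels, levels))
        np.add.at(glcm, (q[:, :-1].ravel(), q[:, 1:].ravel()), 1)
        return glcm / glcm.sum()

    def texture_features(gray):
        p = cooccurrence(gray)
        i, j = np.indices(p.shape)
        return {
            "contrast": float(((i - j) ** 2 * p).sum()),  # local intensity variation
            "energy": float((p ** 2).sum()),              # uniformity of the pattern
        }

    rng = np.random.default_rng(2)
    noisy = rng.integers(0, 256, (64, 64))   # unstructured texture
    stripes = np.tile([0, 255], (64, 32))    # strongly structured texture
    print(texture_features(noisy))
    print(texture_features(stripes))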

Shape

Shape does not refer to the shape of an image but to the shape of a particular region that is being sought out. Shapes will often be determined by first applying segmentation or edge detection to an image. In some cases accurate shape detection will require human intervention, because methods like segmentation are very difficult to automate completely.
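
As a concrete illustration of the edge-detection step, the sketch below applies Sobel filters implemented directly in NumPy and thresholds the gradient magnitude into a binary edge map; real systems typically rely on an image-processing library and build segmentation and shape descriptors on top of such a map. The threshold value here is arbitrary.

    import numpy as np

    def sobel_edges(gray, threshold=100.0):
        # Horizontal and vertical Sobel kernels.
        kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
        ky = kx.T
        g = gray.astype(float)
        h, w = g.shape
        gx = np.zeros((h - 2, w - 2))
        gy = np.zeros((h - 2, w - 2))
        # Correlate each 3x3 neighborhood with the kernels (no padding).
        for dy in range(3):
            for dx in range(3):
                patch = g[dy:dy + h - 2, dx:dx + w - 2]
                gx += kx[dy, dx] * patch
                gy += ky[dy, dx] * patch
        magnitude = np.hypot(gx, gy)
        return magnitude > threshold  # boolean edge map

    # Toy usage: a dark square on a bright background yields edges at its border.
    img = np.full((32, 32), 200.0)
    img[8:24, 8:24] = 50.0
    print(int(sobel_edges(img).sum()), "edge pixels found")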

Controversy in content sorting

Some software producers are aggressively trying to push CBIR-based applications into the filtering and law enforcement markets for the purpose of identifying and censoring images with skin tones and shapes that could indicate the presence of nudity, with controversial results.

External links

CBIR software systems

[edit] Free & Open Source

  • GIFT - The GNU Image Finding Tool - an open source query by example CBIRS. License: GNU GPL.
    • Viper Demo - an online demonstration of the GIFT
    • Perl MRML Client - another GIFT demo, using a different client, and combining textual annotation with visual features
    • EyeVisionBot - An interface for eye-tracking based image search using the GIFT
  • imgSeek - open source photo collection manager and viewer with content-based search and many other features. License: GPL. Windows, Linux and MacOS versions available.

Closed Source

  • imgSeek server - Server-side implementation of a content-based image database. Employs the same wavelet-based algorithms as the open source imgSeek application.
  • MFIRS - Multi-Features Image Retrieval System, by Abdol Hamid Pilevar.
  • PIRIA - CBIR tool developed at CEA-LIST, LIC2M (Multimedia Multilingual Knowledge Engineering Laboratory).
  • CIRES - developed by the University of Texas at Austin.
  • Tiltomo - image visual search engine that uses proprietary subject, color and texture recognition algorithms to analyze image composition.
  • IKONA - Online demonstration - Generic CBIR system - INRIA - IMEDIA
  • Cortina - Content Based Image Retrieval for 3 Million Images. From UCSB.
  • Behold Image Search - A search engine that combines automatic image annotation and content based image retrieval for over 1 million images from university websites.
  • Retrievr - search and explore in a selection of Flickr images by drawing a rough sketch or uploading an image.
  • MUVIS - Image and Video Retrieval CBIR system at Tampere University of Technology (TUT).
  • WebSEEk, 'A Content-Based Image and Video Search and Catalog Tool for the Web' at Columbia University using structured hierarchies and colour matching to search over 650,000 images and videos on the web.
  • Multimedia Analysis and Retrieval Systems at the University of Illinois at Urbana-Champaign, using a combination of user feedback and salient points analysis:
    • UIUC CBIR on the WEB is a web-based demo.
    • ImageGrouper is a related Java interface to a CBIR system which lets users select a group of images and use it as the basis of a query, or bulk-annotate the group.
  • CBIR applied in museum and heritage contexts:
  • xcavator - an interactive image search demo integrated with Flickr. Powered by technology developed by CogniSign.

Footnotes

  1. Shapiro, Linda; Stockman, George. Computer Vision. ISBN 0-13-030796-3.
