Automatic image annotation

Automatic image annotation (also known as automatic image tagging or linguistic indexing) is the process by which a computer system automatically assigns metadata in the form of captioning or keywords to a digital image. This application of computer vision techniques is used in image retrieval systems to organize and locate images of interest from a database.

This method can be regarded as a type of multi-class image classification with a very large number of classes, as large as the vocabulary size. Typically, image analysis in the form of extracted feature vectors and the training annotation words are used by machine learning techniques to attempt to automatically apply annotations to new images. The first methods learned the correlations between image features and training annotations; later techniques used machine translation to try to translate the textual vocabulary into a 'visual vocabulary' of clustered regions known as blobs. Work following these efforts has included classification approaches, relevance models, and others.
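One simple family of such methods transfers annotations from visually similar training images to a new image. The sketch below is a minimal, illustrative nearest-neighbour label-transfer scheme (in the spirit of baseline approaches from the literature, not any specific published system): the feature vectors, keywords, and parameter choices are hypothetical toy values.

```python
from collections import Counter
import math

# Toy training set: hypothetical 3-D feature vectors (e.g. tiny colour
# histograms) paired with human-assigned annotation keywords.
training = [
    ([0.9, 0.1, 0.2], {"sky", "water", "boat"}),
    ([0.8, 0.2, 0.1], {"sky", "cloud"}),
    ([0.1, 0.9, 0.8], {"grass", "tree"}),
    ([0.2, 0.8, 0.9], {"grass", "flower", "tree"}),
]

def annotate(feature, k=2, n_words=2):
    """Annotate a new image by transferring the most frequent keywords
    among its k nearest training images (Euclidean distance)."""
    neighbours = sorted(training, key=lambda t: math.dist(feature, t[0]))[:k]
    counts = Counter(word for _, words in neighbours for word in words)
    return [word for word, _ in counts.most_common(n_words)]

# A "sky-like" feature vector: both nearest neighbours contain "sky",
# so it is the top transferred keyword.
print(annotate([0.85, 0.15, 0.15]))
```

Real systems replace the toy vectors with learned features and the unweighted vote with trained models (translation models, relevance models, per-word classifiers), but the core idea of mapping image features to vocabulary words is the same.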

The advantage of automatic image annotation over content-based image retrieval (CBIR) is that queries can be specified more naturally by the user. CBIR at present generally requires users to search by image concepts such as color and texture, or to provide example queries. Certain image features in example images may override the concept the user is actually focusing on. Traditional methods of image retrieval, such as those used by libraries, have relied on manually annotated images, which is expensive and time-consuming, especially given the large and constantly growing image databases in existence.

Some annotation engines are available online, including ALIPR.com, a real-time tagging engine developed by Pennsylvania State University researchers, and Behold.

Some major work

Y Mori, H Takahashi, and R Oka (1999). "Image-to-word transformation based on dividing and vector quantizing images with words". Proceedings of the International Workshop on Multimedia Intelligent Storage and Retrieval Management.
P Duygulu, K Barnard, N de Freitas, and D Forsyth (2002). "Object recognition as machine translation: Learning a lexicon for a fixed image vocabulary". Proceedings of the European Conference on Computer Vision. pp. 97–112.
J Li and J Z Wang (2006). "Real-time Computerized Annotation of Pictures". Proc. ACM Multimedia. pp. 911–920.
J Z Wang and J Li (2002). "Learning-Based Linguistic Indexing of Pictures with 2-D MHMMs". Proc. ACM Multimedia. pp. 436–445.
J Li and J Z Wang (2008). "Real-time Computerized Annotation of Pictures". IEEE Trans. on Pattern Analysis and Machine Intelligence.
J Li and J Z Wang (2003). "Automatic Linguistic Indexing of Pictures by a Statistical Modeling Approach". IEEE Trans. on Pattern Analysis and Machine Intelligence. pp. 1075–1088.
K Barnard, D A Forsyth (2001). "Learning the Semantics of Words and Pictures". Proceedings of International Conference on Computer Vision. pp. 408–415.
D Blei, A Ng, and M Jordan (2003). "Latent Dirichlet allocation". Journal of Machine Learning Research. pp. 3:993–1022.
G Carneiro, A B Chan, P Moreno, and N Vasconcelos (2006). "Supervised Learning of Semantic Classes for Image Annotation and Retrieval". IEEE Trans. on Pattern Analysis and Machine Intelligence. pp. 394–410.
R W Picard and T P Minka (1995). "Vision Texture for Annotation". Multimedia Systems.
C Cusano, G Ciocca, and R Schettini (2004). "Image Annotation Using SVM". Proceedings of Internet Imaging IV.
R Maree, P Geurts, J Piater, and L Wehenkel (2005). "Random Subwindows for Robust Image Classification". Proceedings of the IEEE International Conference on Computer Vision and Pattern Recognition. pp. 1:34–40.
J Jeon, R Manmatha (2004). "Using Maximum Entropy for Automatic Image Annotation". Int'l Conf on Image and Video Retrieval (CIVR 2004). pp. 24–32.
J Jeon, V Lavrenko, and R Manmatha (2003). "Automatic image annotation and retrieval using cross-media relevance models". Proceedings of the ACM SIGIR Conference on Research and Development in Information Retrieval. pp. 119–126.
V Lavrenko, R Manmatha, and J Jeon (2003). "A model for learning the semantics of pictures". Proceedings of the 16th Conference on Advances in Neural Information Processing Systems NIPS.
R Jin, J Y Chai, L Si (2004). "Effective Automatic Image Annotation via A Coherent Language Model and Active Learning". Proceedings of MM'04.
D Metzler and R Manmatha (2004). "An inference network approach to image retrieval". Proceedings of the International Conference on Image and Video Retrieval. pp. 42–50.
S Feng, R Manmatha, and V Lavrenko (2004). "Multiple Bernoulli relevance models for image and video annotation". IEEE Conference on Computer Vision and Pattern Recognition. pp. 1002–1009.
J Y Pan, H-J Yang, P Duygulu and C Faloutsos (2004). "Automatic Image Captioning". Proceedings of the 2004 IEEE International Conference on Multimedia and Expo (ICME'04).
J Fan, Y Gao, H Luo and G Xu (2004). "Automatic Image Annotation by Using Concept-Sensitive Salient Objects for Image Content Representation". Proceedings of the 27th annual international conference on Research and development in information retrieval. pp. 361–368.
A Oliva and A Torralba (2001). "Modeling the shape of the scene: a holistic representation of the spatial envelope". International Journal of Computer Vision. pp. 42:145–175.
A Yavlinsky, E Schofield and S Rüger (2005). "Automated Image Annotation Using Global Features and Robust Nonparametric Density Estimation". Int'l Conf on Image and Video Retrieval (CIVR, Singapore, Jul 2005).
N Vasconcelos and A Lippman (2001). "Statistical Models of Video Structure for Content Analysis and Characterization". IEEE Transactions on Image Processing. pp. 1–17.
Ilaria Bartolini, Marco Patella, and Corrado Romani (2010). "Shiatsu: Semantic-based Hierarchical Automatic Tagging of Videos by Segmentation Using Cuts". 3rd ACM International Multimedia Workshop on Automated Information Extraction in Media Production (AIEMPro10).
Yohan Jin, Latifur Khan, Lei Wang, and Mamoun Awad (2005). "Image annotations by combining multiple evidence & wordNet". 13th Annual ACM International Conference on Multimedia (MM 05). pp. 706–715.
Changhu Wang, Feng Jing, Lei Zhang, and Hong-Jiang Zhang (2006). "Image annotation refinement using random walk with restarts". 14th Annual ACM International Conference on Multimedia (MM 06).
Changhu Wang, Feng Jing, Lei Zhang, and Hong-Jiang Zhang (2007). "Content-based Image Annotation Refinement". IEEE Conference on Computer Vision and Pattern Recognition (CVPR 07).
Ilaria Bartolini and Paolo Ciaccia (2007). "Imagination: Exploiting Link Analysis for Accurate Image Annotation". Springer Adaptive Multimedia Retrieval.
Ilaria Bartolini and Paolo Ciaccia (2010). "Multi-dimensional Keyword-based Image Annotation and Search". 2nd ACM International Workshop on Keyword Search on Structured Data (KEYS 2010).
Emre Akbas and Fatos Y. Vural (2007). "Automatic Image Annotation by Ensemble of Visual Descriptors". Intl. Conf. on Computer Vision (CVPR) 2007, Workshop on Semantic Learning Applications in Multimedia.
Ameesh Makadia and Vladimir Pavlovic and Sanjiv Kumar (2008). "A New Baseline for Image Annotation". European Conference on Computer Vision (ECCV).
Chong Wang and David Blei and Li Fei-Fei (2009). "Simultaneous Image Classification and Annotation". Conf. on Computer Vision and Pattern Recognition (CVPR).
Matthieu Guillaumin and Thomas Mensink and Jakob Verbeek and Cordelia Schmid (2009). "TagProp: Discriminative Metric Learning in Nearest Neighbor Models for Image Auto-Annotation". Intl. Conf. on Computer Vision (ICCV).
Yashaswi Verma and C. V. Jawahar (2012). "Image Annotation Using Metric Learning in Semantic Neighbourhoods". European Conference on Computer Vision (ECCV).
