Pinar Duygulu, Kobus Barnard, Nando de Freitas, and David Forsyth, "Object recognition as machine translation: Learning a lexicon for a fixed image vocabulary", Seventh European Conference on Computer Vision, pp IV:97-112, 2002 (Awarded ECVision best paper in cognitive computer vision).
The copyright for the above paper has been transferred to Springer-Verlag. As part of the agreement, Springer-Verlag kindly allows authors to keep the text of their papers on their personal web pages, so the text versions of the paper will remain available after publication. The paper will be published in the appropriate Lecture Notes in Computer Science (LNCS) volume.
Data now available
We describe a model of object recognition as machine translation. In this model, recognition is a process of annotating image regions with words. First, images are segmented into regions, which are classified into region types using a variety of features. A mapping between region types and the keywords supplied with the images is then learned, using a method based around EM. This process is analogous to learning a lexicon from an aligned bitext. For the implementation we describe, these words are nouns taken from a large vocabulary. On a large test set, the method can predict numerous words with high accuracy. Simple methods identify words that cannot be predicted well. We show how to cluster words that individually are difficult to predict into clusters that can be predicted well; for example, we cannot predict the distinction between train and locomotive using the current set of features, but we can predict the underlying concept. The method is trained on a substantial collection of images. Extensive experimental results illustrate the strengths and weaknesses of the approach.
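The lexicon-learning step described above follows the shape of EM for statistical translation (in the style of IBM Model 1): treat each image's region types as one "language" and its keywords as the other, then alternate between computing fractional region-word alignment counts and renormalizing them into translation probabilities. A minimal sketch under assumed toy data (the region-type and keyword tokens below are invented for illustration, not the paper's dataset):

```python
from collections import defaultdict

# Hypothetical toy corpus: each image contributes region-type tokens
# (from segmentation + classification) paired with its keywords.
corpus = [
    (["sky-blob", "water-blob"], ["sky", "sea"]),
    (["sky-blob", "grass-blob"], ["sky", "field"]),
    (["water-blob", "grass-blob"], ["sea", "field"]),
]

blobs = {b for bs, _ in corpus for b in bs}
words = {w for _, ws in corpus for w in ws}

# Initialize p(word | region type) uniformly.
p = {b: {w: 1.0 / len(words) for w in words} for b in blobs}

for _ in range(50):  # EM iterations
    counts = defaultdict(lambda: defaultdict(float))
    totals = defaultdict(float)
    # E-step: distribute each keyword fractionally over the image's
    # region types, in proportion to the current translation table.
    for bs, ws in corpus:
        for w in ws:
            z = sum(p[b][w] for b in bs)
            for b in bs:
                frac = p[b][w] / z
                counts[b][w] += frac
                totals[b] += frac
    # M-step: renormalize fractional counts into probabilities.
    for b in blobs:
        for w in words:
            p[b][w] = counts[b][w] / totals[b]

# Annotation: label each region type with its most probable word.
for b in sorted(blobs):
    print(b, "->", max(p[b], key=p[b].get))
```

On this toy corpus the co-occurrence ambiguity resolves after a few iterations: each region type settles on the keyword it always appears with, which is the sense in which annotation here is lexicon learning from an aligned bitext.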
Keywords: Object recognition, correspondence, EM algorithm.
Full text    (pdf)    (gzipped ps)