AnnoSearch: Image Annotation by Search
Although image annotation has been studied for years by both the computer vision and machine learning communities, it remains far from practical and satisfactory. In this paper, we present AnnoSearch, a new approach to image annotation that leverages search technology and the huge number of images on the Web. We reformulate annotation as a search problem: find semantically and visually similar images on the Web, then label the target image with key phrases extracted from the descriptions of those images. On one hand, to ensure that the retrieved images are semantically similar, at least one accurate annotation of the target image is required in addition to the image itself. This requirement is less restrictive than it may seem: for most Web images such an annotation, e.g. the filename, URL, or alt text, is readily available. Based on this annotation, a text-based search filters out semantically irrelevant images. On the other hand, to ensure that the retrieved images are visually similar, visual features are used to rank the semantically relevant candidates. For efficiency, a hash coding-based algorithm maps the high-dimensional visual features to compact hash codes, speeding up the similarity search. As a result of this search-based formulation, annotating with an unlimited vocabulary becomes possible, which no existing approach supports. Experimental results on real Web images show the effectiveness and efficiency of the proposed algorithm.
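To illustrate the hash coding idea mentioned above, the sketch below shows one common way to map high-dimensional feature vectors to short binary codes and rank candidates by Hamming distance. This uses random-hyperplane hashing with made-up toy data; it is an assumption for illustration, not the paper's exact algorithm or features.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_hasher(dim, n_bits):
    """Sample n_bits random hyperplanes; each sign bit of the
    projection becomes one bit of the hash code."""
    planes = rng.standard_normal((n_bits, dim))
    return lambda x: (planes @ x > 0).astype(np.uint8)

def hamming(a, b):
    """Number of differing bits between two binary codes."""
    return int(np.count_nonzero(a != b))

# Toy database of feature vectors (stand-ins for visual features).
dim, n_bits = 64, 16
hasher = make_hasher(dim, n_bits)
db = [rng.standard_normal(dim) for _ in range(100)]
codes = [hasher(v) for v in db]

# A query that is a slightly perturbed copy of database item 7:
# its hash code should lie at a small Hamming distance from item 7's,
# so ranking by Hamming distance surfaces it near the top.
query = db[7] + 0.01 * rng.standard_normal(dim)
qcode = hasher(query)
ranked = sorted(range(len(db)), key=lambda i: hamming(qcode, codes[i]))
print(ranked[:3])
```

Because comparing short binary codes is much cheaper than computing distances between raw high-dimensional feature vectors, this kind of coding lets similarity search scale to very large Web image collections.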
Wang, X., Zhang, L., Jing, F., and Ma, W. 2006. Image annotation using search and mining technologies. In Proceedings of the 15th International Conference on World Wide Web (Edinburgh, Scotland, May 23 - 26, 2006). WWW '06. ACM Press, New York, NY, 1045-1046.