MA2VICMR – Improving Access, Analysis and Visibility of Multilingual and Multimedia Information and Content on the Web for the Community of Madrid
Principal investigator: Abraham Duarte
Funding entities: Comunidad de Madrid and Fondo Social Europeo (S2009/TIC-1542)
Duration: 01/01/2010 – 31/12/2013
Abstract:
Multimedia information access systems that work on image collections usually have access to two types of data: the textual descriptors and the visual content of the images. Traditionally, these systems have approached image retrieval either by analyzing the associated textual information (Text-Based Information Retrieval, TBIR) or by analyzing the visual content (Content-Based Information Retrieval, CBIR). Until a few years ago, mixed approaches did not improve the results and were, in addition, rather inefficient.
On the one hand, researchers from NLP&IRUNED and the Vision Team group at the University of Valencia combined their previous experience in text-based and content-based image retrieval. The result of this collaboration is an approach that not only exploits the synergy between visual features and textual annotations, but also provides a computationally efficient method for searching annotated images in large collections from a multimedia query consisting of text and/or one or several images. Besides participation in evaluation campaigns such as ImageCLEF and MediaEval, this work has produced several publications in conference proceedings, an article in IEEE Transactions on Multimedia, and a PhD thesis in the NLP&IRUNED group entitled Late Semantic Multimedia Fusion Applied to Multimedia Information Retrieval.
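The general idea of late fusion described above can be sketched as follows: each subsystem (text-based and content-based) scores the collection independently, and the normalized scores are merged into a single ranking. This is a minimal illustrative sketch only; the function names, normalization scheme, and weights are assumptions, not the project's actual method.

```python
def min_max_normalize(scores):
    """Rescale a {doc_id: score} map to the [0, 1] range."""
    lo, hi = min(scores.values()), max(scores.values())
    if hi == lo:
        return {d: 0.0 for d in scores}
    return {d: (s - lo) / (hi - lo) for d, s in scores.items()}

def late_fusion(text_scores, visual_scores, alpha=0.5):
    """Weighted sum of normalized text and visual scores.

    alpha weights the textual evidence; (1 - alpha) the visual one.
    Documents missing from one modality contribute 0 for it.
    (Both the linear combination and alpha are illustrative choices.)
    """
    t = min_max_normalize(text_scores)
    v = min_max_normalize(visual_scores)
    docs = set(t) | set(v)
    fused = {d: alpha * t.get(d, 0.0) + (1 - alpha) * v.get(d, 0.0)
             for d in docs}
    return sorted(fused.items(), key=lambda kv: kv[1], reverse=True)

# Example: two subsystems rank three annotated images differently;
# fusion produces a single combined ranking.
ranking = late_fusion({"img1": 0.9, "img2": 0.2, "img3": 0.5},
                      {"img1": 0.1, "img2": 0.8, "img3": 0.6},
                      alpha=0.6)
# -> [("img1", 0.6), ("img3", 0.542857...), ("img2", 0.4)]
```

Because each subsystem only needs to return a scored list, this style of fusion keeps the two retrieval pipelines independent, which is part of what makes the combined search computationally tractable on large collections.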
On the other hand, another mixed team, formed by members of NLP&IRUNED and GAVABURJC, integrated previous technologies to build a hybrid image search system. The proposal, which combined visual content features with rich text analysis based on linguistic resources such as WordNet, participated in two editions of the ImageCLEF Photo Annotation task.
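One way a lexical resource like WordNet can enrich the textual side of such a hybrid system is by expanding annotation and query terms with synonyms before matching. The sketch below illustrates only that general idea: the tiny hand-built synonym table stands in for WordNet, and all names are hypothetical rather than taken from the project's code.

```python
# Toy synonym table standing in for WordNet synsets (illustrative only).
SYNONYMS = {
    "car": {"automobile", "auto"},
    "dog": {"canine"},
    "sea": {"ocean"},
}

def expand(terms):
    """Return the term set plus its synonyms from the resource."""
    expanded = set(terms)
    for t in terms:
        expanded |= SYNONYMS.get(t, set())
    return expanded

def annotation_match(query_terms, annotation_terms):
    """Jaccard overlap between the expanded query and annotation."""
    q = expand(query_terms)
    a = expand(annotation_terms)
    return len(q & a) / len(q | a) if q | a else 0.0

# A query for "car" now matches an image annotated "automobile",
# which plain term overlap would miss.
score = annotation_match({"car"}, {"automobile"})  # > 0
```

In a real system the toy table would be replaced by lookups into WordNet synsets, and the overlap score would be one textual feature combined with the content-based ones.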