In this work we describe the URJC&UNED participation in the ImageCLEF 2013 Photo Annotation Task. We use visual information to find similar images and textual information extracted from the training set to label the test images. We propose two additional visual features apart from those provided by the organization, as well as a method to expand the available textual information. The new visual features describe the images in terms of color and texture, and the textual expansion method uses WordNet to obtain synonyms and hypernyms of the provided textual information. The score of each concept is obtained from a co-occurrence matrix that matches concepts with the textual information of the training images. The experimental results show that the proposal obtains competitive results on all the performance measures used.
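As an illustration of the textual expansion step described above, the following is a minimal sketch of how synonyms and hypernyms could be gathered with NLTK's WordNet interface; the function name and the use of NLTK are assumptions for illustration, not the authors' actual implementation.

```python
from nltk.corpus import wordnet as wn

def expand_term(term):
    """Collect WordNet synonyms and hypernyms of a term (illustrative sketch)."""
    expanded = set()
    for synset in wn.synsets(term):
        # Synonyms: all lemma names belonging to the synset.
        expanded.update(lemma.name() for lemma in synset.lemmas())
        # Hypernyms: lemma names of each direct hypernym synset.
        for hyper in synset.hypernyms():
            expanded.update(lemma.name() for lemma in hyper.lemmas())
    expanded.discard(term)
    return expanded

# Example: expand_term("dog") would return terms such as "domestic_dog" and "canine".
```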