In this paper we propose a novel approach to combining information from multiple high-dimensional feature spaces, which reduces the computational time required for image retrieval tasks. Each image is represented in a "(dis)similarity space", where each component is computed in one of the low-level feature spaces as the (dis)similarity of the image to a reference image. In this new representation, the distances between images belonging to the same class are smaller than in the original feature spaces. In addition, similarities between images can be computed by taking multiple characteristics of the images into account, yielding more accurate retrieval results. Reported results show that the proposed technique attains good performance not only in terms of precision and recall, but also in terms of execution time, compared to techniques that combine retrieval results from different feature spaces.
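
The core idea can be sketched as follows. This is a minimal illustration, not the paper's implementation: the feature extractors, the choice of per-space distance functions, and the single-reference setup are all assumptions made for the example.

```python
import numpy as np

def dissimilarity_representation(img_feats, ref_feats, metrics):
    """Map an image to the (dis)similarity space: one component per
    low-level feature space, computed as the distance between the
    image and the reference image in that space."""
    return np.array([metric(f, r)
                     for f, r, metric in zip(img_feats, ref_feats, metrics)])

# Hypothetical per-space distance functions (assumed for illustration).
euclidean = lambda a, b: float(np.linalg.norm(a - b))
histogram_l1 = lambda a, b: float(np.abs(a - b).sum())

# One reference image, described in two low-level feature spaces
# (e.g. a colour descriptor and a texture histogram).
ref = [np.array([0.2, 0.4]),
       np.array([0.1, 0.3, 0.6])]

# A query image described in the same two feature spaces.
img = [np.array([0.5, 0.1]),
       np.array([0.2, 0.2, 0.6])]

vec = dissimilarity_representation(img, ref, [euclidean, histogram_l1])
# vec is the image's coordinates in the (dis)similarity space; retrieval
# then compares these low-dimensional vectors directly, so a single
# distance accounts for several characteristics of the images at once.
```

Because each image is reduced to one scalar per feature space, comparing two images in the new space is far cheaper than combining separate rankings computed in each high-dimensional feature space.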