Observing the paradox of visual polysemy and concept polymorphism, this paper proposes a new representation called "Vicept" to associate elementary visual features with cognitive concepts. First, a carefully prepared large image dataset with associated concepts is established. Second, we extract local interest points as the elementary visual features, cluster them into visual words, and use Fuzzy Concept Membership Updating (FCMU) to link the visual codebook to concept membership distributions. This lowest-level feature is called a "Vicept word". Global-level Vicept features are then built to correlate concepts with whole or partial images. Finally, we validate the Vicept approach and show its effectiveness on a concept detection task. Our approach does not depend on case-specific training data and can therefore be extended to web-scale scenarios.
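The abstract does not specify the FCMU algorithm itself, but the aggregation step it describes — mapping an image's local features through a visual codebook whose words carry concept membership distributions — can be sketched as follows. All names (`assign_word`, `vicept_histogram`) and the toy data are illustrative assumptions, not the paper's actual implementation.

```python
def assign_word(descriptor, codebook):
    """Hard-assign a local descriptor to its nearest visual word (index),
    using squared Euclidean distance."""
    dists = [sum((d - c) ** 2 for d, c in zip(descriptor, word))
             for word in codebook]
    return min(range(len(codebook)), key=lambda i: dists[i])

def vicept_histogram(descriptors, codebook, word_concept_membership):
    """Aggregate the per-word concept memberships of all local descriptors
    in an image into one normalized concept distribution (a stand-in for
    the global-level Vicept feature)."""
    n_concepts = len(word_concept_membership[0])
    hist = [0.0] * n_concepts
    for d in descriptors:
        w = assign_word(d, codebook)
        for k in range(n_concepts):
            hist[k] += word_concept_membership[w][k]
    total = sum(hist) or 1.0
    return [h / total for h in hist]

# Toy example: 2 visual words in 2-D, 2 concepts.
codebook = [[0.0, 0.0], [1.0, 1.0]]
# Hypothetical concept memberships per word (rows sum to 1).
memberships = [[0.9, 0.1], [0.2, 0.8]]
descriptors = [[0.1, 0.0], [0.9, 1.0]]
print(vicept_histogram(descriptors, codebook, memberships))  # → [0.55, 0.45]
```

In the paper's pipeline the memberships would come from FCMU rather than being fixed by hand; the sketch only shows how word-level distributions roll up into an image-level concept signature.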