
Laplacian Deep Hashing for Image Retrieval

EasyChair Preprint 222

4 pages
Date: June 2, 2018

Abstract

Hashing methods have been widely applied to large-scale image retrieval to improve search speed and reduce storage cost. Traditional hashing methods rely on handcrafted features and shallow models, mapping massive image collections to hash codes via visual features. Although these methods preserve the similarity relationships of the original images during the mapping, their retrieval performance is not excellent. Owing to the strong performance of CNNs in classification and retrieval, feature extraction has become a key component of image retrieval; consequently, deep CNN frameworks have been introduced into hashing methods as the feature extraction component to improve retrieval performance. However, deep CNNs are prone to over-fitting. In this work, we propose the Laplacian Deep Hashing method, which combines the strengths of deep CNNs and shallow hashing models to optimize hashing for image retrieval. Robust and discriminative semantic features are obtained with deep CNNs to improve hash code generation, and a shallow hashing model is applied as the hash function to reduce over-fitting. Finally, the effectiveness and efficiency of the method are demonstrated by comparative experimental results.
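The abstract describes a two-stage pipeline: deep CNN features feed a shallow, Laplacian-based hash function. The sketch below illustrates one plausible instantiation in Python, assuming a pretrained torchvision ResNet-18 as the feature extractor and a spectral-relaxation Laplacian hashing step (sign-thresholded eigenvectors of the graph Laplacian). It is an illustration of the general idea only, not the authors' exact formulation; `image_batch` and the parameter choices are hypothetical.

```python
# Hypothetical sketch: deep CNN features + shallow graph-Laplacian hash function.
# Not the paper's exact method; parameters and names are illustrative.
import numpy as np
import torch
import torchvision.models as models

def extract_cnn_features(images: torch.Tensor) -> np.ndarray:
    """Extract semantic features with a pretrained CNN (here: ResNet-18)."""
    backbone = models.resnet18(weights="IMAGENET1K_V1")  # downloads ImageNet weights
    backbone.fc = torch.nn.Identity()          # drop the classifier head
    backbone.eval()
    with torch.no_grad():
        feats = backbone(images)               # (n, 512) feature vectors
    return feats.numpy()

def laplacian_hash_codes(features: np.ndarray, n_bits: int = 32,
                         sigma: float = 1.0) -> np.ndarray:
    """Shallow Laplacian hashing: spectral relaxation + sign thresholding."""
    # Pairwise Gaussian affinities preserve neighbourhood similarity.
    d2 = ((features[:, None, :] - features[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2.0 * sigma ** 2))
    # Unnormalised graph Laplacian L = D - W.
    L = np.diag(W.sum(axis=1)) - W
    # Smallest non-trivial eigenvectors give the smooth low-dimensional embedding.
    eigvals, eigvecs = np.linalg.eigh(L)       # ascending eigenvalues
    Y = eigvecs[:, 1:n_bits + 1]               # skip the constant eigenvector
    return (Y > 0).astype(np.uint8)            # binarise to n_bits-bit hash codes

# Usage (image_batch: a preprocessed (n, 3, 224, 224) tensor, n > n_bits):
# codes = laplacian_hash_codes(extract_cnn_features(image_batch))
```

In this reading, the deep network supplies discriminative features while the hash function itself stays shallow (an eigen-decomposition plus thresholding), which is one way to limit the over-fitting the abstract attributes to fully deep hashing models.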

Keyphrases: Artificial Intelligence, deep learning, hashing, image retrieval

BibTeX entry
BibTeX does not have the right entry for preprints. This is a hack for producing the correct reference:
@booklet{EasyChair:222,
  author    = {Chunzhi Wang and Fangyu Zhou and Lingyu Yan and Zhiwei Ye and Pan Wu and Hanlin Lu},
  title     = {Laplacian Deep Hashing for Image Retrieval},
  doi       = {10.29007/khmh},
  howpublished = {EasyChair Preprint 222},
  year      = {EasyChair, 2018}}