
Vanilla Probabilistic Autoencoder

11 pages. Published: November 2, 2021

Abstract

The autoencoder, a well-known neural network model, is usually fitted using a mean squared error loss or a cross-entropy loss. Both losses have a probabilistic interpretation: they are equivalent to maximizing the likelihood of the dataset under a normal distribution or a categorical distribution, respectively. We trained autoencoders on image datasets using different distributions and observed differences from the initial autoencoder: if a mixture of distributions is used, the quality of the reconstructed images may increase and the dataset can be augmented; one can often visualize the reconstructed image along with the variances corresponding to each pixel. The code which implements this method can be found at https://github.com/aciobanusebi/vanilla-probabilistic-ae.
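The probabilistic interpretation mentioned in the abstract can be made concrete with a minimal sketch: a decoder that outputs a per-pixel mean and variance is trained by minimizing the Gaussian negative log-likelihood rather than plain MSE. The function name `gaussian_nll` and the toy values below are illustrative assumptions, not taken from the paper's repository.

```python
import numpy as np

def gaussian_nll(x, mu, var):
    """Negative log-likelihood of pixels x under N(mu, var), summed over pixels."""
    return 0.5 * np.sum(np.log(2 * np.pi * var) + (x - mu) ** 2 / var)

# Toy example: two "pixels", their reconstructed means, and unit variances.
x = np.array([0.2, 0.8])
mu = np.array([0.25, 0.7])
nll = gaussian_nll(x, mu, np.ones_like(x))

# With all variances fixed to 1, minimizing this NLL is equivalent
# (up to an additive constant) to minimizing the squared error,
# which recovers the usual autoencoder loss.
```

Letting the network predict `var` per pixel, instead of fixing it, is what allows visualizing a variance alongside each reconstructed pixel, as the abstract describes.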

Keyphrases: autoencoder, deep learning, machine learning, matrix normal distribution, normal distribution, probabilistic distributions

In: Yan Shi, Gongzhu Hu, Quan Yuan and Takaaki Goto (editors). Proceedings of ISCA 34th International Conference on Computer Applications in Industry and Engineering, vol 79, pages 71-81.

BibTeX entry
@inproceedings{CAINE2021:Vanilla_Probabilistic_Autoencoder,
  author    = {Sebastian Ciobanu},
  title     = {Vanilla Probabilistic Autoencoder},
  booktitle = {Proceedings of ISCA 34th International Conference on Computer Applications in Industry and Engineering},
  editor    = {Yan Shi and Gongzhu Hu and Quan Yuan and Takaaki Goto},
  series    = {EPiC Series in Computing},
  volume    = {79},
  publisher = {EasyChair},
  bibsource = {EasyChair, https://easychair.org},
  issn      = {2398-7340},
  url       = {/publications/paper/R2l1},
  doi       = {10.29007/s1mx},
  pages     = {71-81},
  year      = {2021}}