
Binary Autoencoder for Text Modeling

EasyChair Preprint 2031

12 pages · Date: November 25, 2019

Abstract

Variational Autoencoders play an important role in text generation tasks where a semantically consistent latent space is needed. However, training a VAE for text is not trivial due to the mode collapse issue. In this paper, an autoencoder with a binary latent space, trained using the straight-through estimator, is shown to have advantages over the VAE on the text modeling task. In our model, a Bernoulli distribution is used in place of the Gaussian usually assumed by the VAE. The model can be trained with the reconstruction objective alone, without any additional terms such as KL divergence. Experiments reported in this paper show that the binary autoencoder retains the main features of the VAE, namely semantic consistency and good latent space coverage, while not suffering from mode collapse and being much easier to train than the VAE.
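To make the mechanism described above concrete, the following is a minimal sketch of a binary latent layer with a straight-through estimator in PyTorch. The module name `BinaryLatent` and the surrounding encoder/decoder are illustrative assumptions, not the authors' released code; the sketch only shows how hard Bernoulli samples can be used in the forward pass while gradients flow through the sigmoid probabilities in the backward pass.

```python
import torch
import torch.nn as nn

class BinaryLatent(nn.Module):
    """Illustrative binary latent layer (not the authors' implementation).

    Forward pass: sample a hard {0, 1} code from Bernoulli(sigmoid(logits)).
    Backward pass (straight-through): gradients are taken as if the hard
    sampling were the identity on the probabilities.
    """
    def forward(self, logits: torch.Tensor) -> torch.Tensor:
        probs = torch.sigmoid(logits)      # Bernoulli parameters in (0, 1)
        hard = torch.bernoulli(probs)      # hard binary sample, no gradient
        # Straight-through trick: value equals `hard`, gradient flows via `probs`.
        return hard + probs - probs.detach()
```

With such a layer between an encoder and a decoder, the model can in principle be trained with only a token-level reconstruction loss (e.g. cross-entropy), with no KL term, which matches the training setup described in the abstract.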

Keyphrases: Autoencoder, Kullback-Leibler divergence, Latent Representation, Latent Vector, Natural Language Processing, Straight-Through Estimator, VAE, Bernoulli autoencoder, binary autoencoder, binary latent space, binary VAE, Gumbel-Softmax autoencoder, Gumbel-Softmax VAE, language model, latent collapse, latent variable, text modeling, variational autoencoder, word dropout

BibTeX entry
BibTeX does not have the right entry for preprints. This is a hack for producing the correct reference:
@booklet{EasyChair:2031,
  author       = {Ruslan Baynazarov and Irina Piontkovskaya},
  title        = {Binary Autoencoder for Text Modeling},
  howpublished = {EasyChair Preprint 2031},
  year         = {EasyChair, 2019}}