Publication Details
Learning Document Embeddings Along With Their Uncertainties
Kesiraju Santosh
Plchot Oldřich, Ing., Ph.D. (DCGM)
Burget Lukáš, doc. Ing., Ph.D. (DCGM)
Gangashetty Suryakanth V
Bayesian methods, embeddings, topic identification.
Most text modeling techniques yield only point estimates of document embeddings and fail to capture the uncertainty of those estimates. These uncertainties give a notion of how well the embeddings represent a document. We present the Bayesian subspace multinomial model (Bayesian SMM), a generative log-linear model that learns to represent documents in the form of Gaussian distributions, thereby encoding the uncertainty in its covariance. Additionally, in the proposed Bayesian SMM, we address a commonly encountered problem of intractability that arises during variational inference in mixed-logit models. We also present a generative Gaussian linear classifier for topic identification that exploits the uncertainty in document embeddings. Our intrinsic evaluation using the perplexity measure shows that the proposed Bayesian SMM fits unseen test data better than the state-of-the-art neural variational document model on speech (Fisher) and text (20Newsgroups) corpora. Our topic identification experiments show that the proposed systems are robust to over-fitting on unseen test data. The topic ID results show that the proposed model outperforms state-of-the-art unsupervised topic models and achieves results comparable to state-of-the-art fully supervised discriminative models.
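The abstract describes documents being modeled through a linear subspace of log unigram parameters, with each document embedding represented as a Gaussian distribution rather than a point estimate. The sketch below is only an illustrative Python/NumPy toy (not the authors' implementation; the variable names, dimensions, and the diagonal-covariance assumption are ours) of how the expected document log-likelihood under such a Gaussian embedding posterior could be approximated with reparameterized Monte Carlo samples.

# Hypothetical sketch: subspace multinomial likelihood with a Gaussian
# embedding posterior q(w) = N(mu, diag(exp(log_sigma2))).
import numpy as np

rng = np.random.default_rng(0)

V, K = 1000, 50                           # vocabulary size, embedding dimension
m = np.zeros(V)                           # background (universal) log-unigram parameters
T = 0.01 * rng.standard_normal((V, K))    # subspace (bases) matrix

def log_softmax(a):
    # Numerically stable log-softmax over the vocabulary.
    a = a - a.max()
    return a - np.log(np.exp(a).sum())

def mc_expected_loglik(counts, mu, log_sigma2, n_samples=8):
    # Monte Carlo estimate of E_q[log p(counts | w)] for one document,
    # using the reparameterization trick to sample w from q(w).
    total = 0.0
    for _ in range(n_samples):
        eps = rng.standard_normal(mu.shape)
        w = mu + np.exp(0.5 * log_sigma2) * eps   # sampled document embedding
        log_theta = log_softmax(m + T @ w)        # document-specific unigram log-probs
        total += counts @ log_theta
    return total / n_samples

# Toy document: bag-of-words counts over the vocabulary.
counts = rng.integers(0, 3, size=V).astype(float)
mu, log_sigma2 = np.zeros(K), np.zeros(K)         # variational parameters of q(w)
print(mc_expected_loglik(counts, mu, log_sigma2))

In the paper's framing, the covariance of the embedding posterior is what encodes the uncertainty that the proposed Gaussian linear classifier subsequently exploits for topic identification.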
@article{BUT168164,
author="Santosh {Kesiraju} and Oldřich {Plchot} and Lukáš {Burget} and Suryakanth V {Gangashetty}",
title="Learning Document Embeddings Along With Their Uncertainties",
journal="IEEE/ACM TRANSACTIONS ON AUDIO, SPEECH AND LANGUAGE PROCESSING",
year="2020",
volume="2020",
number="28",
pages="2319--2332",
doi="10.1109/TASLP.2020.3012062",
issn="2329-9290",
url="https://ieeexplore.ieee.org/document/9149686"
}