- Under the constraint that we consider a finite set of documents, as in pLSA, these are the training documents and there may be up to 1M of them. Since the vocabulary size is also large, P(w|d) has huge dimensionality and requires far too much data to collect sufficient statistics for estimation.
- In practice, we prefer to acknowledge that documents come from an effectively infinite set, as modeled by LDA. Without latent topics, estimating P(w|d) would be intractable both theoretically and practically; the factorization through latent topics is what makes estimation feasible (see the sketch after this list).
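To make the dimensionality argument concrete, here is a rough back-of-the-envelope sketch in Python. The corpus and topic sizes are purely illustrative assumptions, and the factorization P(w|d) = sum_z P(w|z) P(z|d) is the standard pLSA/LDA decomposition rather than anything specific to this post.

```python
# Illustrative sizes only (assumptions, not numbers from this post).
n_docs, vocab_size, n_topics = 1_000_000, 100_000, 200

# Estimating P(w|d) directly needs one vocab_size-dimensional
# distribution per training document.
direct_params = n_docs * vocab_size                            # 1e11 parameters

# Factorizing P(w|d) = sum_z P(w|z) * P(z|d) only needs the
# topic-word table plus a small topic mixture per document.
factored_params = n_topics * vocab_size + n_docs * n_topics    # ~2.2e8 parameters

print(direct_params, factored_params)   # 100000000000 vs 220000000
```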
In practice, latent topic modeling is particularly valuable when data are sparse. For instance, consider the problem of understanding the meaning of a Web search query. A query usually contains only a few words. Without latent topics, if we treat the query (as a document) as a word distribution, it corresponds to only a few non-zero values in a V-dimensional distribution. That is too little information to understand the document; we cannot even be confident that some of those few words are not noise.
In contrast, with latent topic modeling, each word in the document can be associated with a topic distribution via Bayesian inference. Note that each topic is a set of weighted words learned from a potentially huge training corpus, so sufficient statistics are available. This means latent topic modeling understands a document through knowledge collected from the training data, rather than through a word distribution with only a few non-zero values.
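As a minimal sketch of that inference step, the snippet below scores topics for a short query given an already-learned topic-word matrix. The function name, the matrix phi, the topic prior, and the word-independence assumption are all illustrative choices of mine, not details from the post.

```python
import numpy as np

def query_topic_posterior(query_word_ids, phi, topic_prior):
    """Posterior over topics for a short query.

    phi[k, w]      -- learned P(w | z=k), shape (K, V)
    topic_prior[k] -- P(z=k), e.g. corpus-level topic proportions
    Assumes query words are conditionally independent given the topic,
    which is a simplification for illustration.
    """
    log_post = np.log(topic_prior) + np.log(phi[:, query_word_ids]).sum(axis=1)
    log_post -= log_post.max()           # numerical stability
    post = np.exp(log_post)
    return post / post.sum()
```

Even a two-word query then maps to a full K-dimensional topic distribution, which carries far more information than a V-dimensional vector with two non-zero entries.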
We have recently been using the idea of latent topic modeling to identify domain-specific words. In this setting, a domain is similar to a super-document. Using latent topic factorization, we design a hierarchical latent topic model (see the figure to the right) to estimate P(w|z), P(z|d) and P(d|c), where w denotes a word, z a topic, d a document and c a domain (category).
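One plausible generative reading of this factorization is the chain domain -> document -> topic -> word. The sketch below is only my illustration of that chain with hypothetical variable names; it is not the exact model in the figure.

```python
import numpy as np

rng = np.random.default_rng(0)

def generate_word(c, P_d_given_c, P_z_given_d, P_w_given_z):
    """Sample one word for domain c via the chain c -> d -> z -> w.

    P_d_given_c: (C, D) rows of P(d | c)
    P_z_given_d: (D, K) rows of P(z | d)
    P_w_given_z: (K, V) rows of P(w | z)
    """
    d = rng.choice(P_d_given_c.shape[1], p=P_d_given_c[c])
    z = rng.choice(P_z_given_d.shape[1], p=P_z_given_d[d])
    w = rng.choice(P_w_given_z.shape[1], p=P_w_given_z[z])
    return w
```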
Note that the model to the right can handle the case where a document is labeled as belonging to multiple domains. An extension using ideas from Pachinko allocation could further handle the case where domains themselves are organized into a hierarchy.
Given the learning result, we can use Gibbs sampling to infer the word distribution of a given domain.
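Once point estimates of the three distributions are available (for example, averaged over Gibbs samples), the word distribution of a domain follows by marginalizing out documents and topics: P(w|c) = sum_d P(d|c) sum_z P(z|d) P(w|z). The code below is a sketch of that final marginalization step with hypothetical names and shapes; it is not the Gibbs sampler itself.

```python
import numpy as np

def domain_word_distribution(P_w_given_z, P_z_given_d, P_d_given_c, c):
    """P(w|c) = sum_d P(d|c) * sum_z P(z|d) * P(w|z).

    P_w_given_z: (K, V), P_z_given_d: (D, K), P_d_given_c: (C, D);
    assumed to be point estimates, e.g. averaged over Gibbs samples.
    """
    P_w_given_d = P_z_given_d @ P_w_given_z     # (D, V): P(w|d) per document
    return P_d_given_c[c] @ P_w_given_d         # (V,):   P(w|c) for domain c
```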