%T Fast Dictionary Learning with a Smoothed Wasserstein Loss
%B Proceedings of the 19th International Conference on Artificial Intelligence and Statistics
%C Proceedings of Machine Learning Research
%X We consider in this paper the dictionary learning problem when the observations are normalized histograms of features. This problem can be tackled using non-negative matrix factorization approaches, using typically Euclidean or Kullback-Leibler fitting errors. Because these fitting errors are separable and treat each feature on equal footing, they are blind to any similarity the features may share. We assume in this work that we have prior knowledge on these features. To leverage this side-information, we propose to use the Wasserstein (a.k.a. earth mover's or optimal transport) distance as the fitting error between each original point and its reconstruction, and we propose scalable algorithms to do so. Our methods build upon Fenchel duality and entropic regularization of Wasserstein distances, which improves not only speed but also computational stability. We apply these techniques on face images and text documents. We show in particular that we can learn dictionaries (topics) for bag-of-word representations of texts using words that may not have appeared in the original texts, or even words that come from a different language than that used in the texts.
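The entropic regularization of Wasserstein distances mentioned in the abstract is commonly computed with Sinkhorn fixed-point iterations. As an illustrative sketch only (the function name, parameters, and example data below are our own, not taken from the paper), here is a minimal entropic-regularized transport cost between two normalized histograms:

```python
import numpy as np

def sinkhorn_distance(a, b, C, reg=0.1, n_iter=200):
    """Entropic-regularized Wasserstein cost between histograms a and b.

    a, b : nonnegative histograms summing to 1.
    C    : ground cost matrix between histogram bins.
    reg  : entropic regularization strength (smaller = closer to exact OT).
    """
    K = np.exp(-C / reg)                 # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(n_iter):              # Sinkhorn scaling iterations
        v = b / (K.T @ u)
        u = a / (K @ v)
    P = u[:, None] * K * v[None, :]      # approximate optimal transport plan
    return np.sum(P * C)                 # transport cost <P, C>

# Example: two histograms over 4 bins, with |i - j| as ground cost.
a = np.array([0.5, 0.5, 0.0, 0.0])
b = np.array([0.0, 0.0, 0.5, 0.5])
C = np.abs(np.subtract.outer(np.arange(4), np.arange(4))).astype(float)
d = sinkhorn_distance(a, b, C)           # close to the exact W1 value of 2.0
```

The paper's actual contribution is a faster dual (Fenchel) formulation for using such a loss inside dictionary learning; the plain Sinkhorn loop above only illustrates the smoothed distance itself.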