A copy of this work was available on the public web and has been preserved in the Wayback Machine. The capture dates from 2022; you can also visit the original URL.
The file type is application/pdf.
Probabilistic Compositional Embeddings for Multimodal Image Retrieval
[article] · 2022 · arXiv · pre-print
To learn an informative embedding that can flexibly encode the semantics of various queries, we propose a novel multimodal probabilistic composer (MPC). ...
Given an arbitrary number of query images and (or) texts, our goal is to retrieve target images containing the semantic concepts specified in multiple multimodal queries. ...
Given an arbitrary number of multimodal queries (e.g. images and texts), the model is trained to learn a feature embedding for retrieving images that contain the composite set of semantic concepts specified ...
arXiv:2204.05845v1
fatcat:ls2rtlso5rfhloopwxcsh4hvcq
Learning User Embeddings from Temporal Social Media Data: A Survey
[article] · 2021 · arXiv · pre-print
The temporal nature of user-generated data on social media has largely been overlooked in much of the existing user embedding literature. ...
In this paper, we survey representative work on learning a concise latent user representation (a.k.a. user embedding) that can capture the main characteristics of a social media user. ...
dynamics of the embedding at any arbitrary timestamp. ...
arXiv:2105.07996v1
fatcat:6elhasieuzce5ogddsl7uukv64
Mixtures of Deep Neural Experts for Automated Speech Scoring
2020 · Interspeech 2020
models, several word embeddings, and two bag-of-word models. ...
Different deep neural network architectures (both feed-forward and recurrent) are specialized over diverse representations of the texts in terms of: a reference grammar, the outcome of probabilistic language ...
of an arbitrary number of feature-specific DNNs. ...
doi:10.21437/interspeech.2020-1055
dblp:conf/interspeech/PapiTGMF20
fatcat:roiv5qtj5nbe7ksff2ry6ccmnq
Renal Cancer Cell Classification Using Generative Embeddings and Information Theoretic Kernels
[chapter] · 2011 · Lecture Notes in Computer Science
We conclude that the proposed hybrid approach achieves higher accuracy, revealing itself as a promising approach for this class of problems. ...
In particular we use probabilistic latent semantic analysis (pLSA) as a generative model to perform generative embedding onto the free energy score space (FESS). ...
In [15], these IT kernels were successfully used for text categorization, based on multinomial (bag-of-words type) text representations. ...
doi:10.1007/978-3-642-24855-9_7
fatcat:ddpq7tj7k5bwpfctxyhn26axla
CLAREL: Classification via retrieval loss for zero-shot learning
[article] · 2020 · arXiv · pre-print
On top of that, we provide a probabilistic justification for a metric rescaling approach that solves a very common problem in the generalized zero-shot learning setting, i.e., classifying test images from ...
We address the problem of learning fine-grained cross-modal representations. We propose an instance-based deep metric learning approach in joint visual and textual space. ...
Each batch consists of randomly sampled instances, i.e. pairs of images and their corresponding texts. Images are embedded via ResNet and texts are embedded via a CNN/LSTM stack. ...
arXiv:1906.11892v3
fatcat:6xt4awflxrbhbdyk7gu77irh7a
Probabilistic frequent subtrees for efficient graph classification and retrieval
2017 · Machine Learning
We also present different fast techniques for computing the embedding of unseen graphs into (probabilistic frequent) subtree feature spaces. ...
We also consider partial embeddings, i.e., when only a part of the feature vector has to be calculated. ...
Acknowledgements The authors thank the anonymous reviewers for their careful reviews and useful suggestions. ...
doi:10.1007/s10994-017-5688-7
fatcat:i6qa3j62cvhk5g5nu3mabqgjse
A probabilistic framework for multi-view feature learning with many-to-many associations via neural networks
[article] · 2018 · arXiv · pre-print
PMvGE is a probabilistic model for predicting new associations via graph embedding of the nodes of data vectors with links of their associations. ...
A simple framework Probabilistic Multi-view Graph Embedding (PMvGE) is proposed for multi-view feature learning with many-to-many associations so that it generalizes various existing multi-view methods ...
Therefore, in this paper, we propose a nonlinear framework for multi-view feature learning with many-to-many associations. We name the framework Probabilistic Multi-view Graph Embedding (PMvGE). ...
arXiv:1802.04630v2
fatcat:fu5vb7etsja37d635eopi6hxai
Spherical Paragraph Model
[article] · 2017 · arXiv · pre-print
To address these problems, we introduce the Spherical Paragraph Model (SPM), a probabilistic generative model based on BoWE, for text representation. ...
of texts. ...
All these vMF-based methods treat the text as a single object (i.e., a normalized feature vector), and successfully integrate a directional measure of similarity into a probabilistic setting for text modeling ...
arXiv:1707.05635v1
fatcat:7ngupuuw6bcydibc7om3mjhd4q
Learning Continuous User Representations through Hybrid Filtering with doc2vec
[article] · 2017 · arXiv · pre-print
using doc2vec prove to be highly valuable features in supervised machine learning models for look-alike modeling. ...
In order to maximize the predictive performance of our look-alike modeling algorithms, we propose two novel hybrid filtering techniques that utilize the recent neural probabilistic language model algorithm ...
The great popularity of neural text embeddings in NLP and the conceptual similarity of generating embeddings from sequences of words and generating embeddings from arbitrary sequences have motivated researching ...
arXiv:1801.00215v1
fatcat:avcszeivnjbjxkem6nfe4ctjci
Spherical Paragraph Model
[chapter] · 2018 · Lecture Notes in Computer Science
To address these problems, we introduce the Spherical Paragraph Model (SPM), a probabilistic generative model based on BoWE, for text representation. ...
of texts. ...
For example, in the popular TF-IDF scheme [11], each text is represented by tf-idf values of selected feature-words. ...
doi:10.1007/978-3-319-76941-7_22
fatcat:d2ablbdvqzbsrdvlyuz6z73lku
CompLex: A New Corpus for Lexical Complexity Prediction from Likert Scale Data
[article] · 2020 · arXiv · pre-print
With a few exceptions, previous studies have approached the task as a binary classification task in which systems predict a complexity value (complex vs. non-complex) for a set of target words in a text ...
Predicting which words are considered hard to understand for a given target population is a vital step in many NLP applications such as text simplification. ...
Acknowledgements We would like to thank the anonymous reviewers for their valuable feedback. ...
arXiv:2003.07008v3
fatcat:ldjiks6crfcdpou2q3fnbb5h6e
Combining Probabilistic Logic and Deep Learning for Self-Supervised Learning
[article] · 2021 · arXiv · pre-print
These are either added directly (a form of structured self-training) or verified by a human expert (as in feature-based active learning). ...
We first present deep probabilistic logic (DPL), which offers a unifying framework for task-specific self-supervision by composing probabilistic logic with deep learning. ...
The latter subsumes feature-based active learning [15] with arbitrary features expressible using probabilistic logic. ...
arXiv:2107.12591v1
fatcat:dl2rrtgllvcnhjznpiuerwdynq
Notes on Coalgebras in Stylometry
[article] · 2021 · arXiv · pre-print
In this paper, we discuss how coalgebras are used to formalise the notion of behaviour by embedding syntactic features of a given text into probabilistic transition systems. ...
By introducing the behavioural distance, we are then able to quantitatively measure differences between points in these systems and thus, comparing features of different texts. ...
Thus, for every feature, we obtain a behavioural value for this text. ...
arXiv:2010.02733v2
fatcat:jxjrcdg3pjet5gmyvx732kdmgy
Using the Naive Bayes as a discriminative classifier
[article] · 2021 · arXiv · pre-print
For classification tasks, probabilistic models can be categorized into two disjoint classes: generative or discriminative. ...
Moreover, this observation also discusses the notion of Generative-Discriminative pairs, linking, for example, Naive Bayes and Logistic Regression, or HMM and CRF. ...
In this way, the Naive Bayes can be applied for any classification tasks and consider arbitrary observation features. ...
arXiv:2012.13572v3
fatcat:ec22g7qmfnhlzmvjoq666tyi7q
CompLex - A New Corpus for Lexical Complexity Prediction from Likert Scale Data
2020 · International Conference on Language Resources and Evaluation
With a few exceptions, previous studies have approached the task as a binary classification task in which systems predict a complexity value (complex vs. non-complex) for a set of target words in a text ...
Predicting which words are considered hard to understand for a given target population is a vital step in many NLP applications such as text simplification. ...
Acknowledgements We would like to thank the anonymous reviewers for their valuable feedback. ...
dblp:conf/lrec/ShardlowZC20
fatcat:44bf6arkdvg43kygvhs4ojhzda
Showing results 1 — 15 out of 16,771 results