Showing results 1–15 of 27,397 hits

A Deep Recurrent Collaborative Filtering Framework for Venue Recommendation

Jarana Manotumruksa, Craig Macdonald, Iadh Ounis
2017 Proceedings of the 2017 ACM on Conference on Information and Knowledge Management - CIKM '17  
In particular, GRMF and MLRP learn to model complex structures of user-venue interactions using element-wise and dot products as well as the concatenation of latent factors.  ...  Building upon this momentum, various approaches for recommendation have been proposed in the literature to enhance the effectiveness of MF-based approaches by exploiting neural network models such as: word  ...  [5] proposed a Wide & Deep learning approach for a mobile application recommendation system that exploits both linear models and DNNs to incorporate various features of users and items.  ... 
doi:10.1145/3132847.3133036 dblp:conf/cikm/ManotumruksaMO17 fatcat:nbkqgavhbbgxteaplyc6dmqe5u
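The interaction operators this snippet names (element-wise product, dot product, and concatenation of latent factors) are easy to make concrete. Below is a minimal NumPy sketch with toy, untrained weights and an arbitrary latent dimensionality; it illustrates the three combination schemes in general, not the DRCF architecture itself.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8                      # latent dimensionality (illustrative choice)
p_u = rng.normal(size=d)   # user latent factor
q_v = rng.normal(size=d)   # venue latent factor

# 1) Generalised-MF style: element-wise product, then a learned linear layer.
w_gmf = rng.normal(size=d)
score_gmf = w_gmf @ (p_u * q_v)

# 2) Classical MF: plain dot product of the two latent factors.
score_mf = p_u @ q_v

# 3) MLP style: concatenate the factors and pass them through a small network.
W1, b1 = rng.normal(size=(16, 2 * d)), np.zeros(16)
W2, b2 = rng.normal(size=16), 0.0
hidden = np.maximum(0.0, W1 @ np.concatenate([p_u, q_v]) + b1)  # ReLU
score_mlp = W2 @ hidden + b2

print(score_gmf, score_mf, score_mlp)
```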

MANTRA: Minimum Maximum Latent Structural SVM for Image Classification and Ranking

Thibaut Durand, Nicolas Thome, Matthieu Cord
2015 IEEE International Conference on Computer Vision (ICCV)
(resp. negative) evidence for a given output y.  ...  Firstly, we introduce a new structured output latent variable model, Minimum mAximum lateNt sTRucturAl SVM (MANTRA), whose prediction relies on a pair of latent variables: h+ (resp. h−) provides positive  ...  For a correct output, it prevents large negative values for any region; thus h− acts as a latent-space regularizer exploiting contextual information.  ... 
doi:10.1109/iccv.2015.311 dblp:conf/iccv/DurandTC15 fatcat:yoq2ofptj5grhembof7bo5ywhi
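The paired latent variables described in the snippet can be illustrated directly: given per-region scores for a candidate output, h+ is the highest-scoring region and h− the lowest-scoring one, and their sum is the prediction score, so strong negative evidence drags a candidate down. The sketch below shows only this scoring rule with made-up scores, not the structured SVM training procedure.

```python
import numpy as np

def mantra_score(region_scores: np.ndarray) -> float:
    """Score an (image, output) pair from its per-region scores:
    the max region (h+) carries positive evidence, the min region (h-)
    penalises the output when strong negative evidence is present."""
    h_plus = int(np.argmax(region_scores))
    h_minus = int(np.argmin(region_scores))
    return float(region_scores[h_plus] + region_scores[h_minus])

# Toy example: the first candidate has one strongly negative region,
# so it scores lower than the second even though its best region is comparable.
print(mantra_score(np.array([2.1, 0.3, -3.0, 1.0])))
print(mantra_score(np.array([1.9, 0.2, 0.1, 0.5])))
```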

A Path Towards Quantum Advantage in Training Deep Generative Models with Quantum Annealers [article]

Walter Vinci, Lorenzo Buffoni, Hossein Sadeghi, Amir Khoshaman, Evgeny Andriyash, Mohammad H. Amin
2019 arXiv   pre-print
We also provide evidence that our setup is able to exploit large latent-space (Q)BMs, which develop slowly mixing modes.  ...  QVAE consists of a classical auto-encoding structure realized by traditional deep neural networks to perform inference to, and generation from, a discrete latent space.  ...  Rolfe for useful discussions during the preparation of this work, Kelly Boothby for providing the figures for Pegasus and Chimera architectures, and Fiona Hanington for editing the manuscript.  ... 
arXiv:1912.02119v1 fatcat:jtwj6c5545hz5pyhif2k4ivdle
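As a purely classical illustration of the "discrete latent space" idea, the sketch below runs the forward pass of an auto-encoder with Bernoulli latent units: the encoder outputs probabilities, a binary code is sampled, and the decoder reconstructs from it. Weights are random and untrained, and nothing quantum or Boltzmann-machine-like is simulated; it only shows where a discrete latent configuration sits in the pipeline.

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

x = rng.normal(size=32)                  # toy input vector
W_enc = rng.normal(size=(16, 32)) * 0.1  # encoder weights (untrained)
W_dec = rng.normal(size=(32, 16)) * 0.1  # decoder weights (untrained)

# Inference network: probabilities of 16 binary latent units.
q = sigmoid(W_enc @ x)

# Sample a discrete latent configuration z in {0, 1}^16.
z = (rng.random(16) < q).astype(float)

# Generation network: reconstruct from the discrete code.
x_hat = W_dec @ z
print(q.round(2), z, x_hat.shape)
```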

A deep learning approach to predict inter-omics interactions in multi-layer networks

Niloofar Borhani, Jafar Ghaisari, Maryam Abedi, Marzieh Kamali, Yousof Gheisari
2022 BMC Bioinformatics  
The lack of sufficient experimental evidence on interactions is more significant for heterogeneous molecular types.  ...  Results: Here, as a novel nonlinear deep learning method, Data Integration with Deep Learning (DIDL) was proposed to predict inter-omics interactions.  ...  Details of the network structure and model training are given in the following: Encoder: As a first stage, the encoder extracts the best latent features for representing each biomolecule.  ... 
doi:10.1186/s12859-022-04569-2 pmid:35081903 pmcid:PMC8793231 fatcat:7y66f4qlvvev3azpapkhlcw4gy
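The snippet only sketches the architecture, so the following is a generic, hypothetical stand-in rather than the published DIDL network: two small encoders map the raw features of two molecular types into a shared latent space, and a bilinear decoder scores the probability that the pair interacts. All sizes, names, and the bilinear form are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def relu(x):
    return np.maximum(0.0, x)

# Untrained toy encoders for two molecular types (e.g. genes and metabolites).
W_a = rng.normal(size=(8, 20)) * 0.1   # encoder for omics layer A (20 raw features)
W_b = rng.normal(size=(8, 50)) * 0.1   # encoder for omics layer B (50 raw features)
R = rng.normal(size=(8, 8)) * 0.1      # bilinear interaction matrix (decoder)

def interaction_probability(feat_a, feat_b):
    """Encode both biomolecules, then score the link with a bilinear decoder."""
    z_a = relu(W_a @ feat_a)
    z_b = relu(W_b @ feat_b)
    return sigmoid(z_a @ R @ z_b)

print(interaction_probability(rng.normal(size=20), rng.normal(size=50)))
```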

Non-Negative Matrix Factorization Framework for Multi-View Clustering

Raphael K.M. Ahiaklo-Kuz, Fidel Essuan Nyameke, Nathanael Okoe Larsey
2022 Zenodo  
A deep matrix factorization framework for MVC is presented in this paper, where semi-non-negative matrix factorization is engaged to learn the hierarchical semantics of multi-view data in a layer-by-layer  ...  To exploit the shared information from every view, the non-negative depiction of every view in the output layer is required to be the same.  ...  learning from deep structure.  ... 
doi:10.5281/zenodo.6360680 fatcat:2kxdddjpqnguboiguprd27djqa
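The shared-representation constraint described in the snippet can be shown with a deliberately simplified NumPy sketch: each view X_v is factorised as W_v · H with a mixed-sign basis W_v per view and one shared non-negative representation H. The paper stacks several such layers and uses semi-NMF update rules; here a single layer with plain least squares and clipping is used only to expose the alternating structure.

```python
import numpy as np

rng = np.random.default_rng(3)
n_samples, k = 40, 5
views = [rng.random(size=(d_v, n_samples)) for d_v in (30, 20)]  # two toy views

H = rng.random(size=(k, n_samples))   # shared non-negative representation

for _ in range(50):
    # Per-view mixed-sign basis: ordinary least squares given the current H.
    Ws = [X @ np.linalg.pinv(H) for X in views]
    # Shared representation: average the per-view least-squares solutions,
    # then clip to keep H non-negative (crude stand-in for semi-NMF updates).
    H = np.mean([np.linalg.pinv(W) @ X for W, X in zip(Ws, views)], axis=0)
    H = np.maximum(H, 1e-9)

Ws = [X @ np.linalg.pinv(H) for X in views]
print([np.linalg.norm(X - W @ H) / np.linalg.norm(X) for W, X in zip(Ws, views)])
```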

Graph-incorporated Latent Factor Analysis for High-dimensional and Sparse Matrices [article]

Di Wu, Yi He, Xin Luo
2022 arXiv   pre-print
However, most existing LFA-based models perform such embeddings on a HiDS matrix directly without exploiting its hidden graph structures, thereby resulting in accuracy loss.  ...  To address this issue, this paper proposes a graph-incorporated latent factor analysis (GLFA) model.  ...  It has a deep structure that plays a regularization-like role, improving BLF's representation learning ability. NLF: a non-negative LFA model (2014).  ... 
arXiv:2204.07818v1 fatcat:d5a6lycph5g43dblhqwdwxfc4m
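The snippet does not give GLFA's exact update rules, so the sketch below shows the generic idea instead: factorise only the observed entries of a sparse matrix with SGD, and add a graph-style regulariser that pulls the latent factors of linked users together. The toy graph, learning rate, and loss are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
n_users, n_items, k = 50, 60, 8
P = rng.normal(scale=0.1, size=(n_users, k))   # user latent factors
Q = rng.normal(scale=0.1, size=(n_items, k))   # item latent factors

# Observed entries of the HiDS matrix: (user, item, rating) triples.
observed = [(rng.integers(n_users), rng.integers(n_items), rng.uniform(1, 5))
            for _ in range(500)]
# Toy user-user graph: pairs of users assumed to be linked.
edges = [(rng.integers(n_users), rng.integers(n_users)) for _ in range(200)]

lr, lam, gamma = 0.01, 0.05, 0.05
for _ in range(20):
    for u, i, r in observed:                    # SGD over observed entries only
        err = r - P[u] @ Q[i]
        P[u] += lr * (err * Q[i] - lam * P[u])
        Q[i] += lr * (err * P[u] - lam * Q[i])
    for u, v in edges:                          # graph regulariser: linked users stay close
        diff = P[u] - P[v]
        P[u] -= lr * gamma * diff
        P[v] += lr * gamma * diff

print(np.mean([(r - P[u] @ Q[i]) ** 2 for u, i, r in observed]))
```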

Joint alignment and reconstruction of multislice dynamic MRI using variational manifold learning [article]

Qing Zou, Abdul Haseeb Ahmed, Prashant Nagpal, Sarv Priya, Rolf F Schulte, Mathews Jacob
2021 arXiv   pre-print
The proposed scheme jointly learns the parameters of the deep network as well as the latent vectors for each slice, which capture the motion-induced dynamic variations, from the k-t space data of the specific  ...  In addition to being unable to exploit inter-slice redundancies, existing approaches need manual intervention or sophisticated post-processing methods to align the images post-recovery for quantification.  ...  Recently, unsupervised deep generative models that exploit the manifold structure of images were shown to outperform the classical manifold methods [5, 6, 7].  ... 
arXiv:2111.10889v1 fatcat:zhlkmys2nfgs3pgxyd4prgdxle

Exploration-Exploitation Motivated Variational Auto-Encoder for Recommender Systems [article]

Yizi Zhang, Meimei Liu
2021 arXiv   pre-print
A hierarchical latent space model is utilized to learn the personalized item embedding for a given user, along with the population distribution of all user subgraphs.  ...  for exploration.  ...  In XploVAE, we couple the proposed hierarchical latent space model with deep neural networks to improve its efficiency in dealing with complex data.  ... 
arXiv:2006.03573v4 fatcat:4jeixnidqna7be27e67qzwri6i

Collaborative Memory Network for Recommendation Systems

Travis Ebesu, Bin Shen, Yi Fang
2018 The 41st International ACM SIGIR Conference on Research & Development in Information Retrieval - SIGIR '18  
We propose Collaborative Memory Networks (CMN), a deep architecture to unify the two classes of CF models capitalizing on the strengths of the global structure of latent factor model and local neighborhood-based  ...  However, existing methods compose deep learning architectures with the latent factor model ignoring a major class of CF models, neighborhood or memory-based approaches.  ...  The architecture allows for the joint nonlinear interaction of the specialized local structure of neighborhood-based methods and the global structure of latent factor models.  ... 
doi:10.1145/3209978.3209991 dblp:conf/sigir/EbesuSF18 fatcat:dixbrvd6o5gd5kqziruxyn2hb4
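A rough NumPy sketch of the "joint nonlinear interaction" described above, with untrained toy weights: attention over a memory of neighbour-user embeddings supplies the local, neighbourhood-based signal, a user-item latent factor term supplies the global signal, and a small nonlinear layer combines the two. This is an illustration of the idea, not the published CMN architecture.

```python
import numpy as np

rng = np.random.default_rng(5)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

d = 8
user = rng.normal(size=d)                 # target user embedding
item = rng.normal(size=d)                 # target item embedding
neighbors = rng.normal(size=(5, d))       # embeddings of users who rated the item

# Local component: attention over the neighbour "memory",
# keyed by how compatible each neighbour is with the (user, item) query.
attn = softmax(neighbors @ (user + item))
memory_out = attn @ neighbors

# Global component: plain latent-factor interaction.
latent_out = user * item

# Joint nonlinear combination of the two components.
W = rng.normal(size=2 * d) * 0.1
score = np.maximum(0.0, W @ np.concatenate([latent_out, memory_out]))
print(attn.round(2), float(score))
```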

Shared Generative Latent Representation Learning for Multi-View Clustering

Ming Yin, Weitian Huang, Junbin Gao
2020 PROCEEDINGS OF THE THIRTIETH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE AND THE TWENTY-EIGHTH INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE CONFERENCE  
Specifically, benefitting from the success of the deep generative learning, the proposed model can not only extract the nonlinear features from the views, but render a powerful ability in capturing the  ...  The motivation is based on the fact that the multi-view data share a common latent embedding despite the diversity among the various views.  ...  Instead, deep generative models were built to better handle the rich latent structures within data (Jiang et al. 2017) .  ... 
doi:10.1609/aaai.v34i04.6146 fatcat:beozidgdindwtmngokbv6ptr7a

WELDON: Weakly Supervised Learning of Deep Convolutional Neural Networks

Thibaut Durand, Nicolas Thome, Matthieu Cord
2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
In this paper, we introduce a novel framework for WEakly supervised Learning of Deep cOnvolutional neural Networks (WELDON).  ...  Firstly, WELDON leverages recent improvements on the Multiple Instance Learning paradigm, i.e. negative evidence scoring and top instance selection.  ...  On the contrary, red regions incorporate negative evidence for the class, i.e. are the lowest scoring areas. Our deep WSL model is detailed in section 3.  ... 
doi:10.1109/cvpr.2016.513 dblp:conf/cvpr/DurandTC16 fatcat:2mqnc6l7jzhmhgjr55vispq4he
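The two ingredients named in the snippet, top-instance selection and negative evidence scoring, amount to a simple aggregation over per-region class scores: average the k highest-scoring regions and add the average of the k lowest-scoring ones, instead of taking a single max. A minimal sketch with illustrative values of k and the scores:

```python
import numpy as np

def weldon_aggregate(region_scores: np.ndarray, k: int = 3) -> float:
    """Aggregate per-region scores for one class: mean of the k top instances
    plus mean of the k lowest-scoring regions (the negative evidence)."""
    s = np.sort(region_scores)          # ascending
    return float(s[-k:].mean() + s[:k].mean())

scores = np.array([0.2, 1.5, -0.4, 2.3, -2.8, 0.9, 1.1, -1.0])
print(weldon_aggregate(scores, k=3))
```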

Lossless compression with state space models using bits back coding [article]

James Townsend, Iain Murray
2021 arXiv   pre-print
We generalize the 'bits back with ANS' method to time-series models with a latent Markov structure.  ...  We provide experimental evidence that our method is effective for small scale models, and discuss its applicability to larger scale settings such as video compression.  ...  In this work we present a generalization of BB-ANS to sequences which are not modeled as independent, by 'interleaving' bits-back steps with the time-steps in a model, exploiting latent Markov structure  ... 
arXiv:2103.10150v3 fatcat:btqd2r5dy5fjplnexdeynznusu

Structured Bayesian Gaussian process latent variable model [article]

Steven Atkinson, Nicholas Zabaras
2018 arXiv   pre-print
We introduce a Bayesian Gaussian process latent variable model that explicitly captures spatial correlations in data using a parameterized spatial kernel and leveraging structure-exploiting algebra on the model covariance matrices for computational tractability.  ...  The insight to exploit Kronecker product structure in Gaussian process latent variable models was first shown in [18].  ... 

Complex event recognition by latent temporal models of concepts

Ehsan Zare Borzeshi, Afshin Dehghan, Massimo Piccardi, Mubarak Shah
2014 2014 IEEE International Conference on Image Processing (ICIP)  
In this paper we argue that concepts in an event tend to articulate over a discernible temporal structure and we exploit a temporal model using the scores of concept detectors as measurements.  ...  over concepts for complex event recognition.  ...  This result gives evidence to the benefit of exploiting temporal structure over the concept detector scores.  ... 
doi:10.1109/icip.2014.7025481 dblp:conf/icip/BorzeshiDPS14 fatcat:fcgxzizyjvdo7b4ohn3sifeqoa

Unpaired Deep Image Deraining Using Dual Contrastive Learning [article]

Xiang Chen, Jinshan Pan, Kui Jiang, Yufeng Li, Yufeng Huang, Caihua Kong, Longgang Dai, Zhentao Fan
2022 arXiv   pre-print
Specifically, BTB exploits full advantage of the circulatory architecture of adversarial consistency to generate abundant exemplar pairs and excavates latent feature distributions between two domains by  ...  method performs favorably against existing unpaired deraining approaches on both synthetic and real-world datasets, and generates comparable results against several fully-supervised or semi-supervised models  ...  Since the ground truth labeled data is not fully available, how to model the latent-space representation by exploring the relationship between the rainy inputs and clean outputs is important for the deep  ... 
arXiv:2109.02973v4 fatcat:i5rmbjewuvgcxnklm6nfaadbmq