3,649 Hits in 0.65 sec

Are Nearby Neighbors Relatives?: Testing Deep Music Embeddings [article]

Jaehun Kim, Julián Urbano, Cynthia C. S. Liem, Alan Hanjalic
2019 arXiv   pre-print
In this paper, we therefore propose a systematic way to test the trustworthiness of deep music representations, considering musical semantics.  ...  Then, we examine within- and between-space distance consistencies, both considering audio space and latent embedded space, the latter either being a result of a conventional feature extractor or a deep  ...  Are Nearby Neighbors Relatives? As depicted in Figure 7, substantial inconsistencies emerge in L when compared to A.  ...
arXiv:1904.07154v3 fatcat:xj4alam5tnfg3d3f3cytmndkra

Are Nearby Neighbors Relatives? Testing Deep Music Embeddings

Jaehun Kim, Julián Urbano, Cynthia C. S. Liem, Alan Hanjalic
2019 Frontiers in Applied Mathematics and Statistics  
In this paper, we therefore propose a systematic way to test the trustworthiness of deep music representations, considering musical semantics.  ...  Then, we examine within- and between-space distance consistencies, both considering audio space and latent embedded space, the latter either being a result of a conventional feature extractor or a deep  ...  Are Nearby Neighbors Relatives? As depicted in Figure 7, substantial inconsistencies emerge in L when compared to A.  ...
doi:10.3389/fams.2019.00053 fatcat:auyqjsad5zcj3htpdpahza6viy
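
The distance-consistency test both versions of this abstract describe can be illustrated in a few lines: given an audio-space feature matrix and a latent embedding matrix (here random stand-ins named `A` and `L` after the snippet's notation, not the authors' data or code), measure how often the two spaces agree on an item's k nearest neighbors. This is a minimal sketch of the idea, not the paper's exact protocol:

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def knn_overlap(A, L, k=10):
    """Mean fraction of each item's k nearest neighbors in audio
    space (A) that are also its k nearest neighbors in latent space (L)."""
    ia = NearestNeighbors(n_neighbors=k + 1).fit(A).kneighbors(A, return_distance=False)
    il = NearestNeighbors(n_neighbors=k + 1).fit(L).kneighbors(L, return_distance=False)
    # column 0 of each index row is the item itself, so skip it
    return float(np.mean([len(set(a[1:]) & set(b[1:])) / k for a, b in zip(ia, il)]))

rng = np.random.default_rng(0)
A = rng.normal(size=(500, 40))   # stand-in audio-space features
L = rng.normal(size=(500, 16))   # stand-in latent embeddings
print(knn_overlap(A, L))         # ~k/n for unrelated spaces, near 1.0 for consistent ones
```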

Second-Order Word Embeddings from Nearest Neighbor Topological Features [article]

Denis Newman-Griffis, Eric Fosler-Lussier
2017 arXiv   pre-print
Furthermore, second-order embeddings are able to handle highly heterogeneous data better than first-order representations, though at the cost of some specificity.  ...  Due to variance in the random initializations of word embeddings, utilizing nearest neighbor features from multiple first-order embedding samples can also contribute to downstream performance gains.  ...  The consistency of second-order embedding performance relative to first-order embeddings, typically only differing by 1 to 2 points absolute in both the deep and linear models, suggests that the nearest  ... 
arXiv:1705.08488v1 fatcat:cp2btkurozg6xck27ouqq7e2ia
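
One way to read "second-order embeddings from nearest neighbor topological features" is: re-embed each word by the shape of its neighborhood graph rather than by its raw coordinates. The sketch below is an illustrative reconstruction under that reading (binary k-NN indicator rows factorized with truncated SVD), not the paper's actual pipeline:

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.decomposition import TruncatedSVD

def second_order(E1, k=10, dim=32):
    """Re-embed each word by its k-NN graph topology: build a binary
    neighbor-indicator matrix over the vocabulary, then factorize it."""
    n = E1.shape[0]
    idx = NearestNeighbors(n_neighbors=k + 1).fit(E1).kneighbors(E1, return_distance=False)
    adj = np.zeros((n, n))
    for i, row in enumerate(idx):
        adj[i, row[1:]] = 1.0        # skip self at column 0
    return TruncatedSVD(n_components=dim).fit_transform(adj)

E1 = np.random.default_rng(1).normal(size=(300, 100))  # stand-in first-order embeddings
print(second_order(E1).shape)                          # (300, 32)
```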

Large-Scale Classification of Structured Objects using a CRF with Deep Class Embedding [article]

Eran Goldman, Jacob Goldberger
2017 arXiv   pre-print
The visual features are computed by convolutional layers, and the class embeddings are learned by factorizing the CRF pairwise potential matrix.  ...  This paper presents a novel deep learning architecture to classify structured objects in datasets with a large number of visually similar categories.  ...  They are composed of the visual features of the CNN h_t and the learned neighbor embeddings R y_{t−1} (see Fig. 4).  ...
arXiv:1705.07420v2 fatcat:pjcw534dsnbdjmgdrbjoduucsa
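
The snippet's key trick, learning class embeddings by factorizing the CRF pairwise potential matrix, can be shown in miniature: store two small factors instead of a dense C × C transition table. A hypothetical sketch (the dimension choices are ours, not the paper's):

```python
import numpy as np

C, d = 1000, 64                        # number of classes, embedding dim
rng = np.random.default_rng(2)
E = 0.01 * rng.normal(size=(C, d))     # class embeddings
R = 0.01 * rng.normal(size=(d, d))     # learned transition map

def pairwise_potential(prev_cls, next_cls):
    """Score a class transition through embeddings: the dense C x C
    CRF potential matrix is implicitly E @ R @ E.T, but only
    O(C*d) parameters are ever stored or learned."""
    return E[next_cls] @ (R @ E[prev_cls])

print(pairwise_potential(3, 7))
```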

Short Text Embedding Autoencoders with Attention-based Neighborhood Preservation

Chao Wei, Lijun Zhu, Jiaoxiang Shi
2020 IEEE Access  
The words discovered by PTM and BTM are confusing, like "claim evid arm game true" of PTM and "tennis music ski buy movie" of BTM.  ...  which means those similar low-dimensional embeddings from nearby short texts may have similar discriminative dimensions yet those distinct low-dimensional embeddings from non-neighbor short texts may have  ...
doi:10.1109/access.2020.3042778 fatcat:k4rn6rwurzcpheet74zudnfwfu

Audio-Based Activities of Daily Living (ADL) Recognition with Large-Scale Acoustic Embeddings from Online Videos

Dawei Liang, Edison Thomaz
2019 Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies  
We propose a framework for audio-based activity recognition that makes use of millions of embedding features from public online video sound clips.  ...  Based on the combination of oversampling and deep learning approaches, our framework does not require further feature processing or outlier filtering as in prior work.  ...  The embedding features are generated from the embedding layer of a VGG-like deep neural network (DNN) architecture trained on the YouTube-100M dataset [23].  ...
doi:10.1145/3314404 fatcat:nzcbsxwwdfhvvoyxe2xbyc3mqm

CAME: Content- and Context-Aware Music Embedding for Recommendation

Dongjing Wang, Xin Zhang, Dongjin Yu, Guandong Xu, Shuiguang Deng
2020 IEEE Transactions on Neural Networks and Learning Systems  
CAME seamlessly combines deep learning techniques, including convolutional neural networks and attention mechanisms, with the embedding model to capture the intrinsic features of music pieces as well as  ...  Then, a novel method called content- and context-aware music embedding (CAME) is proposed to obtain the low-dimension dense real-valued feature representations (embeddings) of music pieces from HIN.  ...  Since users' preferences are relatively fixed especially during a short period of time, music pieces that are close in the music listening sequences generally have common styles or features.  ... 
doi:10.1109/tnnls.2020.2984665 pmid:32305946 fatcat:ody4ay2swfepvhmofyvgv36n6q

Thin-Slicing for Pose: Learning to Understand Pose without Explicit Pose Estimation

Suha Kwak, Minsu Cho, Ivan Laptev
2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)  
The embedding function is built on a deep convolutional network, and trained with triplet-based rank constraints on real image data.  ...  We address the problem of learning a pose-aware, compact embedding that projects images with similar human poses to be placed close-by in the embedding space.  ...  If the nearest neighbors are from the same action class as the query, they are colored in blue, otherwise in red.  ...
doi:10.1109/cvpr.2016.534 dblp:conf/cvpr/KwakCL16 fatcat:mve2ynmwg5g2hhqtlcdmyhqwp4
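
The "triplet-based rank constraints" mentioned here are the standard triplet hinge loss; a minimal NumPy version (the margin value is an assumption, not taken from the paper):

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Hinge rank constraint: the anchor must be closer to the
    positive (similar pose) than to the negative by at least `margin`."""
    d_pos = np.sum((anchor - positive) ** 2, axis=-1)
    d_neg = np.sum((anchor - negative) ** 2, axis=-1)
    return np.maximum(0.0, d_pos - d_neg + margin).mean()

rng = np.random.default_rng(5)
a, p, n = (rng.normal(size=(8, 128)) for _ in range(3))
print(triplet_loss(a, p, n))
```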

PACELA

Thanh-Nam Doan, Ee-Peng Lim
2018 Proceedings of the 26th Conference on User Modeling, Adaptation and Personalization - UMAP '18  
PACELA also includes a deep learning neural network to combine both embedding and latent features to predict if a user performs check-in on a location.  ...  PACELA learns the embedding space for the user and venue data as well as the latent attributes of both users and venues.  ...  features of users and venues into a deep neural network for training and testing.  ...
doi:10.1145/3209219.3209231 dblp:conf/um/DoanL18 fatcat:pdhnnpqihvatjbcydgdwrqjjdm

Unsupervised Latent Behavior Manifold Learning from Acoustic Features: audio2behavior [article]

Haoqi Li, Brian Baucom, Panayiotis Georgiou
2017 arXiv   pre-print
We hypothesize that nearby segments of speech share the same behavioral context and hence share a similar underlying representation in a latent space.  ...  Specifically, we propose a Deep Neural Network (DNN) model to connect behavioral context and derive the behavioral manifold in an unsupervised manner.  ...  [13, 14] have proposed an embedding that ties one-hot word representations of nearby words via an intermediate hidden vector representation.  ...
arXiv:1701.03198v1 fatcat:lgzuzxt6prajzavnqbsvqibspe

Extending Multi-Sense Word Embedding to Phrases and Sentences for Unsupervised Semantic Applications [article]

Haw-Shiuan Chang, Amol Agrawal, Andrew McCallum
2021 arXiv   pre-print
We introduce an end-to-end trainable neural model that directly predicts the set of cluster centers from the input text sequence during test time.  ...  The codebook embeddings can be viewed as the cluster centers which summarize the distribution of possibly co-occurring words in a pre-trained word embedding space.  ...  [table residue: evaluation datasets SemEval 2013, Turney2012, BiRD, WikiSRS, HypeNet with Sim/Rel/Val/Test splits]
arXiv:2103.15330v2 fatcat:qv7rz7kl2ndovfrgcra2lixz6y

VAE-SNE: a deep generative model for simultaneous dimensionality reduction and clustering [article]

Jacob M. Graving, Iain D. Couzin
2020 bioRxiv   pre-print
Here we introduce a method for both dimension reduction and clustering called VAE-SNE (variational autoencoder stochastic neighbor embedding).  ...  Scientific datasets are growing rapidly in scale and complexity.  ...  We then computed the proportion of the neighbors that are assigned to the correct local neighborhood in the low-dimensional embedding, which ranges from 0 (no neighbors preserved) to 1 (all neighbors preserved)  ...
doi:10.1101/2020.07.17.207993 fatcat:x4g6qwa62bcdznfxfjoeorv4zy
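
The evaluation this snippet describes, the proportion of k nearest neighbors preserved in the low-dimensional embedding, is the same k-NN overlap idea as in the first sketch above, here applied to a dimension-reduction output. PCA stands in for VAE-SNE purely for illustration:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import NearestNeighbors

def preserved_fraction(X, Z, k=15):
    """Share of each point's k high-dimensional neighbors (X) that
    remain among its k neighbors in the embedding (Z): 0 none, 1 all."""
    ix = NearestNeighbors(n_neighbors=k + 1).fit(X).kneighbors(X, return_distance=False)
    iz = NearestNeighbors(n_neighbors=k + 1).fit(Z).kneighbors(Z, return_distance=False)
    return float(np.mean([len(set(a[1:]) & set(b[1:])) / k for a, b in zip(ix, iz)]))

X = np.random.default_rng(4).normal(size=(400, 50))
Z = PCA(n_components=2).fit_transform(X)   # PCA as a stand-in for VAE-SNE
print(preserved_fraction(X, Z))
```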

The Swiss army knife of time series data mining: ten useful things you can do with the matrix profile and ten lines of code

Yan Zhu, Shaghayegh Gharghabi, Diego Furtado Silva, Hoang Anh Dau, Chin-Chia Michael Yeh, Nader Shakibay Senobari, Abdulaziz Almaslukh, Kaveh Kamgar, Zachary Zimmerman, Gareth Funning, Abdullah Mueen, Eamonn Keogh
2020 Data Mining and Knowledge Discovery  
The recently introduced data structure, the Matrix Profile, annotates a time series by recording the location of and distance to the nearest neighbor of every subsequence.  ...  Clearly, sorting can be a bottleneck for some applications, but these are rare enough that we think our claim is self-evident.  ...  We can see deep valleys in the vicinity of all the embedded earthquake patterns, as they all have close matches from the same source.  ... 
doi:10.1007/s10618-019-00668-6 fatcat:ahq7vze57fgc3kghnnpng5zyla
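
For readers unfamiliar with the Matrix Profile, here is a deliberately naive O(n²) sketch of the definition in the snippet; production algorithms from this line of work (e.g., STOMP, SCRIMP) are far faster:

```python
import numpy as np

def matrix_profile(ts, m):
    """Naive matrix profile: for each length-m subsequence, the
    z-normalized distance to, and index of, its nearest neighbor,
    with an exclusion zone to ignore trivial self-matches."""
    n = len(ts) - m + 1
    subs = np.array([ts[i:i + m] for i in range(n)])
    subs = (subs - subs.mean(axis=1, keepdims=True)) / (subs.std(axis=1, keepdims=True) + 1e-12)
    mp = np.full(n, np.inf)
    mpi = np.zeros(n, dtype=int)
    excl = m // 2
    for i in range(n):
        d = np.linalg.norm(subs - subs[i], axis=1)
        d[max(0, i - excl):i + excl + 1] = np.inf   # mask trivial matches
        mp[i], mpi[i] = d.min(), d.argmin()
    return mp, mpi

rng = np.random.default_rng(3)
ts = np.sin(np.linspace(0, 20, 400)) + 0.1 * rng.normal(size=400)
mp, mpi = matrix_profile(ts, m=50)
print(mp.argmin(), mpi[mp.argmin()])   # a motif pair: low profile values = close matches
```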

Using Cross-Loss Influence Functions to Explain Deep Network Representations [article]

Andrew Silva, Rohit Chopra, Matthew Gombolay
2020 arXiv   pre-print
Despite the rise in unsupervised learning, self-supervised learning, and model pre-training, there are currently no suitable technologies for estimating influence of deep networks that do not train and test on the same objective.  ...  While WEAT C tests for relative career-family alignment, encouraging "male" to move closer to "music" is also a viable  ...  [table residue: WEAT set / example / cause columns]
arXiv:2012.01685v1 fatcat:pvavx6o7vvdabb4cdelszcsbge
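
The WEAT score referenced in the snippet has a standard closed form (the effect size of Caliskan et al.); a self-contained sketch on random stand-in vectors:

```python
import numpy as np

def cosine(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

def weat_effect_size(X, Y, A, B):
    """WEAT effect size: differential association of target word sets
    X and Y with attribute sets A and B, normalized like Cohen's d."""
    s = lambda w: np.mean([cosine(w, a) for a in A]) - np.mean([cosine(w, b) for b in B])
    sx = [s(x) for x in X]
    sy = [s(y) for y in Y]
    return (np.mean(sx) - np.mean(sy)) / np.std(sx + sy)

rng = np.random.default_rng(6)
X, Y, A, B = (rng.normal(size=(5, 50)) for _ in range(4))
print(weat_effect_size(X, Y, A, B))   # near 0 for random, unbiased vectors
```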

A Tutorial on Network Embeddings [article]

Haochen Chen, Bryan Perozzi, Rami Al-Rfou, Steven Skiena
2018 arXiv   pre-print
We first discuss the desirable properties of network embeddings and briefly introduce the history of network embedding algorithms.  ...  Then, we discuss network embedding methods under different scenarios, such as supervised versus unsupervised learning, learning embeddings for homogeneous networks versus for heterogeneous networks, etc  ...  Skip-gram [29] is a highly efficient method for learning word embeddings. Its key idea is to learn embeddings which are good at predicting nearby words in sentences.  ... 
arXiv:1808.02590v1 fatcat:ramuqdavczfabb4o7r42kice7q
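
The skip-gram idea the snippet summarizes, embeddings that are good at predicting nearby words, reduces at the data level to generating (center, context) pairs within a window. A minimal sketch of that pair generation (the sample sentence is ours):

```python
def skipgram_pairs(sentence, window=2):
    """Generate the (center, context) pairs skip-gram trains on:
    each word should predict the words within `window` positions."""
    pairs = []
    for i, center in enumerate(sentence):
        lo, hi = max(0, i - window), min(len(sentence), i + window + 1)
        pairs.extend((center, sentence[j]) for j in range(lo, hi) if j != i)
    return pairs

print(skipgram_pairs(["network", "embeddings", "borrow", "skip", "gram", "from", "text"]))
```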
Showing results 1 — 15 out of 3,649 results