
Forward Compatible Training for Large-Scale Embedding Retrieval Systems [article]

Vivek Ramanujan, Pavan Kumar Anasosalu Vasu, Ali Farhadi, Oncel Tuzel, Hadi Pouransari
2022 arXiv pre-print
In this work, we propose a new learning paradigm for representation learning: forward compatible training (FCT). ... To avoid the cost of backfilling, BCT modifies training of the new model to make its representations compatible with those of the old model. ... Acknowledgements: We would like to thank Floris Chabert and Vinay Sharma for their advice and discussions, and Jason Ramapuram and Dan Busbridge for their help with self-supervised training. ...
arXiv:2112.02805v2 fatcat:bsvhotuiwfht7m4btk7kmzfyqe
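The backfilling problem this snippet refers to can be made concrete with a small sketch. Everything below (the random linear "encoders", the gallery, the `retrieve` helper) is a hypothetical illustration, not the paper's method: it only shows what compatibility buys you, namely that queries encoded by a new model can still be matched against a gallery indexed once by the old model.

```python
import numpy as np

rng = np.random.default_rng(0)

def l2_normalize(x, axis=-1):
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

# Hypothetical "old" and "new" encoders: random linear maps standing in for
# trained networks.  A compatible new model stays close to the old one in
# embedding space, so the gallery never has to be re-encoded (backfilled).
W_old = rng.standard_normal((16, 32))
W_new = W_old + 0.01 * rng.standard_normal((16, 32))

gallery_raw = rng.standard_normal((50, 16))
gallery_old = l2_normalize(gallery_raw @ W_old)   # indexed once, with the old model

def retrieve(query_raw, W):
    """Nearest gallery item by cosine similarity against the OLD embeddings."""
    q = l2_normalize(query_raw @ W)
    return int(np.argmax(gallery_old @ q))

# A noisy view of gallery item 7, encoded by either model, should hit item 7
# without the gallery ever being re-encoded.
query = gallery_raw[7] + 0.05 * rng.standard_normal(16)
```

Note the design difference the snippet hints at: BCT achieves this by constraining the new model during training, whereas FCT prepares the old model's features so a learned transformation can map them forward later; the sketch only illustrates the retrieval-side consequence.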

Compatible Learning for Deep Photonic Neural Network [article]

Yong-Liang Xiao, Rongguang Liang, Jianxin Zhong, Xianyu Su, Zhisheng You
2020 arXiv pre-print
Compatible learning opens a promising window for deep photonic neural networks. ... Compatibility indicates that the matrix representation in complex space covers its real counterpart, which could enable single-channel mingled training in real and complex space as a unified model. ... In the following, we implement compatible learning for deep photonic neural network training. ...
arXiv:2003.08360v1 fatcat:24q46orbnnahpkuamoy25ysgh4

Learning Compatible Embeddings [article]

Qiang Meng, Chixiang Zhang, Xiaoqiang Xu, Feng Zhou
2021 arXiv pre-print
To address these issues, we propose a general framework called Learning Compatible Embeddings (LCE), which is applicable to both cross-model compatibility and compatible training in direct/forward/backward ... Achieving backward compatibility when rolling out new models can greatly reduce costs or even bypass feature re-encoding of existing gallery images for in-production visual retrieval systems. ... Feature representation transfer learning aims at transforming each original feature into a new feature representation for knowledge transfer [57]. Pan et al. ...
arXiv:2108.01958v1 fatcat:uqftmbuvbjhfhi4os4v7hn2z6a

Modeling kinematic forward model adaptation by modular decomposition

Laura Patane, Alessandra Sciutti, Bastien Berret, Valentina Squeri, Lorenzo Masia, Giulio Sandini, Francesco Nori
2012 2012 4th IEEE RAS & EMBS International Conference on Biomedical Robotics and Biomechatronics (BioRob)
... In particular, we tested the prediction that, in the presence of modular control, perturbations not compatible with the existing modules should be learned with more difficulty than compatible perturbations ... [13], to avoid any visually guided movement correction during training. This choice allowed us to measure, for each trial, the learning of the feed-forward model. ...
doi:10.1109/biorob.2012.6290827 fatcat:ro5vkulukrci5eumifqxdtpdou

Learning Fashion Compatibility from In-the-wild Images [article]

Additya Popli, Vijay Kumar, Sujit Jos, Saraansh Tandon
2022 arXiv pre-print
In this work, we propose to learn representations for compatibility prediction from in-the-wild street fashion images through self-supervised learning, by leveraging the fact that people often wear compatible ... Most existing approaches learn representations for this task using labeled outfit datasets containing manually curated compatible item combinations. ... Learning representations for the compatibility prediction task is straightforward if a labeled outfit dataset is available. ...
arXiv:2206.05982v1 fatcat:j4pzourxmrgzjnwlo4fwfbaeua

Self-Attention with Relative Position Representations [article]

Peter Shaw, Jakob Uszkoreit, Ashish Vaswani
2018 arXiv pre-print
Relying entirely on an attention mechanism, the Transformer introduced by Vaswani et al. (2017) achieves state-of-the-art results for machine translation. ... Instead, it requires adding representations of absolute positions to its inputs. ... For any sequence and head, this requires sharing the same representation for each position across all compatibility function applications (dot products) with other positions. ...
arXiv:1803.02155v2 fatcat:kinupbmvtrh4zbts5regr2fxgi
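As a hedged illustration of the mechanism this snippet describes, here is a minimal NumPy sketch of single-head self-attention with learned relative-position key embeddings, clipped to a maximum offset as in Shaw et al. (2018). All weights are random stand-ins and the dimensions are arbitrary, not the paper's configuration.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def relative_attention(X, Wq, Wk, Wv, rel_k, max_dist):
    """Single-head self-attention with learned relative-position keys.

    rel_k[d + max_dist] is the embedding for relative offset d, with d
    clipped to [-max_dist, max_dist].  The logit for (i, j) combines the
    usual content-content dot product with a content-position term.
    """
    n, _ = X.shape
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    logits = Q @ K.T                                      # content-content term
    for i in range(n):
        for j in range(n):
            d = np.clip(j - i, -max_dist, max_dist)
            logits[i, j] += Q[i] @ rel_k[d + max_dist]    # content-position term
    logits /= np.sqrt(Q.shape[-1])
    return softmax(logits) @ V

rng = np.random.default_rng(1)
n, d_model, d_head, max_dist = 5, 8, 4, 2
X = rng.standard_normal((n, d_model))
Wq, Wk, Wv = (rng.standard_normal((d_model, d_head)) for _ in range(3))
rel_k = rng.standard_normal((2 * max_dist + 1, d_head))   # offsets -2..+2
out = relative_attention(X, Wq, Wk, Wv, rel_k, max_dist)
```

The double loop is written for clarity; the paper vectorizes the content-position term so the same `rel_k` rows are shared across all query positions, which is exactly the sharing the snippet mentions.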

Unitary Learning for Deep Diffractive Neural Network [article]

Yong-Liang Xiao
2020 arXiv pre-print
In particular, a compatible condition on how to select the nonlinear activations in complex space is unveiled, encapsulating the fundamental sigmoid, tanh and quasi-ReLU in complex space. ... The temporal-space evolution characteristic in unitary learning is formulated and elucidated. ... For simplicity, we utilize a 4×4 Exclusive-OR (XOR) logic training task using compatible learning [15]. ...
arXiv:2009.08935v1 fatcat:dzl4qlwgmbc2xluwquj6nrfrdy

A Vertical Federated Learning Method For Multi-Institutional Credit Scoring: MICS [article]

Yusuf Efe
2021 arXiv pre-print
However, data privacy regulations and compatibility issues for different data representations are major obstacles to cooperative model training. ... Also, different companies within the same industrial sector carry similar kinds of data about the customers, with different data representations. ... By forward propagating the data in each industry encoder, industry-embedding vectors are produced for the customers. ...
arXiv:2111.09038v1 fatcat:2j4c4k4qqvb7fpo2wc3gqvbtoy

CLOUD: Contrastive Learning of Unsupervised Dynamics [article]

Jianren Wang, Yujie Lu, Hang Zhao
2020 arXiv pre-print
In this work, we propose to learn forward and inverse dynamics in a fully unsupervised manner via contrastive estimation. ... Specifically, we train a forward dynamics model and an inverse dynamics model in the feature space of states and actions with data collected from random exploration. ... Predictive Model: We train a predictive model, as proposed by Agrawal et al. [9], that jointly learns a forward and inverse dynamics model for intuitive physics. ...
arXiv:2010.12488v1 fatcat:2efpjwvaqrcjthesxj5c4otmna
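The contrastive estimation idea in this snippet can be sketched with an InfoNCE-style objective: the forward model's prediction for a (state, action) pair should score the true next state above the other next states in the batch. The linear "forward model" and all dimensions below are illustrative assumptions, not the authors' architecture.

```python
import numpy as np

def info_nce(pred, positives, temperature=0.1):
    """InfoNCE-style loss: each predicted next-state feature should score
    its true next state (same row) above all other states in the batch."""
    logits = pred @ positives.T / temperature              # (B, B) similarities
    logits -= logits.max(axis=1, keepdims=True)            # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))

rng = np.random.default_rng(0)
B, d_s, d_a = 32, 16, 4
s = rng.standard_normal((B, d_s))                          # state features
a = rng.standard_normal((B, d_a))                          # action features

# Hypothetical linear forward model: s_next = [s, a] @ W.  A trained model
# would minimize the contrastive loss; here we only show that predictions
# matching the true next states score far better than random predictions.
W_true = rng.standard_normal((d_s + d_a, d_s))
s_next = np.concatenate([s, a], axis=1) @ W_true

good = info_nce(np.concatenate([s, a], axis=1) @ W_true, s_next)
bad = info_nce(rng.standard_normal((B, d_s)), s_next)
```

An inverse dynamics model would be trained symmetrically, scoring the true action for a (state, next-state) pair against the other actions in the batch.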

Contextual modulation of mirror and countermirror sensorimotor associations

Richard Cook, Anthony Dickinson, Cecilia Heyes
2012 Journal of Experimental Psychology: General
Evidence that the MNS develops through associative learning comes from previous research showing that automatic imitation is attenuated by counter-mirror training, in which the observation of one action ... In Experiment 1 we found less residual automatic imitation when human participants were tested in their counter-mirror training context. ... Because the trained S-R mappings were open to top forwards and close to bottom forwards, the size of each SRC effect was calculated by subtracting the mean RT on compatible trials (open to top forwards ...
doi:10.1037/a0027561 pmid:22428612 fatcat:pboga4evwnhazmhsmkhwsgnfn4

FashionNet: Personalized Outfit Recommendation with Deep Neural Network [article]

Tong He, Yang Hu
2018 arXiv pre-print
Our system, dubbed FashionNet, consists of two components: a feature network for feature extraction and a matching network for compatibility computation. ... To achieve personalized recommendation, we develop a two-stage training strategy, which uses the fine-tuning technique to transfer a general compatibility model to a model that embeds personal preference ... We explored the use of deep neural networks, which jointly perform representation learning and compatibility modeling, for this task. ...
arXiv:1810.02443v1 fatcat:hhbrgmeez5cl7df3mj6grhfqke

Learning Fashion Compatibility with Bidirectional LSTMs

Xintong Han, Zuxuan Wu, Yu-Gang Jiang, Larry S. Davis
2017 Proceedings of the 2017 ACM on Multimedia Conference - MM '17
Further, we learn a visual-semantic space by regressing image features to their semantic representations, aiming to inject attribute and category information as a regularization for training the LSTM. ... Given the fashion items in an outfit, we train a bidirectional LSTM (Bi-LSTM) model to sequentially predict the next item conditioned on previous ones, to learn their compatibility relationships. ... In this paper, we propose to jointly train a Bi-LSTM model and a visual-semantic embedding for fashion compatibility learning. ...
doi:10.1145/3123266.3123394 dblp:conf/mm/HanWJD17 fatcat:5unebunwwratxfq77fn4zu7xc4

Decoding Brain Representations by Multimodal Learning of Neural Activity and Visual Features [article]

Simone Palazzo, Concetto Spampinato, Isaak Kavasidis, Daniela Giordano, Joseph Schmidt, Mubarak Shah
2020 arXiv pre-print
... manifold that maximizes a compatibility measure between visual features and brain representations. ... After verifying that visual information can be extracted from EEG data, we introduce a multimodal approach that uses deep image and EEG encoders, trained in a siamese configuration, for learning a joint ... Martina Platania for supporting the data acquisition phase, Dr. Demian Faraci for the experimental results, and NVIDIA for the generous donation of two Titan X GPUs. ...
arXiv:1810.10974v2 fatcat:pe5fwfsjrzbbdeyakcpid6ryqy

A Temporally and Spatially Local Spike-based Backpropagation Algorithm to Enable Training in Hardware [article]

Anmol Biswas, Vivek Saraswat, Udayan Ganguly
2022 arXiv pre-print
Although signed gradient values are a challenge for spike-based representation, we tackle this by splitting the gradient signal into positive and negative streams. ... A major advancement toward native spike-based learning has been the use of approximate backpropagation using spike-time-dependent plasticity (STDP) with phased forward/backward passes. ... The simultaneous forward and backward propagation ensures that no external memory is needed for learning, as is the case for a phased forward and backward pass with information exchange. ...
arXiv:2207.09755v1 fatcat:agp4qo4tofeubfasx6ywul2r4e
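The positive/negative stream splitting mentioned in the snippet is simple to state: any signed gradient g can be carried by two non-negative streams with g = pos − neg, which is what makes it representable by spike counts. A minimal sketch (illustrative only, not the paper's full algorithm):

```python
import numpy as np

def split_streams(grad):
    """Split a signed gradient into two non-negative streams so that
    grad == pos - neg; each stream can then be carried by spike counts."""
    pos = np.maximum(grad, 0.0)
    neg = np.maximum(-grad, 0.0)
    return pos, neg

g = np.array([0.5, -1.25, 0.0, 2.0])
pos, neg = split_streams(g)
```

By construction at most one stream is nonzero per element, so no information is lost and the original signed value is recovered exactly as pos − neg.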

Accurate Deep Representation Quantization with Gradient Snapping Layer for Similarity Search [article]

Shicong Liu, Hongtao Lu
2016 arXiv pre-print
... loss and also propel the learned representations to be accurately quantized. ... The proposed framework is compatible with various existing vector quantization approaches. ... We implement GSL on the open-source Caffe deep learning framework. We employ the AlexNet architecture (Krizhevsky, Sutskever, and Hinton 2012) and use pre-trained weights ...
arXiv:1610.09645v1 fatcat:txz33dcwabfcpmbdw4rjrma7ru
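For context on the quantization side of this snippet: vector quantization assigns each representation to its nearest codeword, and gradient snapping additionally steers learning so that representations land close to those codewords. The assignment step alone can be sketched as follows (random codebook and hypothetical shapes, not the paper's setup):

```python
import numpy as np

def quantize(x, codebook):
    """Assign each row of x to its nearest codeword (vector quantization).

    Returns the quantized vectors and the codeword indices.  Gradient
    snapping would, during training, also pull each representation toward
    its assigned codeword; only the assignment is shown here.
    """
    d2 = ((x[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)  # (N, K) sq. distances
    idx = d2.argmin(axis=1)
    return codebook[idx], idx

rng = np.random.default_rng(3)
codebook = rng.standard_normal((8, 4))        # K = 8 codewords in R^4
# Two queries that are tiny perturbations of codewords 2 and 5.
x = codebook[[2, 5]] + 0.01 * rng.standard_normal((2, 4))
q, idx = quantize(x, codebook)
```

Accurate quantization in the snippet's sense means the learned representations end up close to their assigned codewords, so replacing `x` with `q` at search time loses little retrieval accuracy.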