9,121 Hits in 3.9 sec

Learning to Pivot with Adversarial Networks [article]

Gilles Louppe, Michael Kagan, Kyle Cranmer
2017 arXiv   pre-print
In this work, we introduce and derive theoretical results for a training procedure based on adversarial networks for enforcing the pivotal property (or, equivalently, fairness with respect to continuous  ...  The method includes a hyperparameter to control the trade-off between accuracy and robustness. We demonstrate the effectiveness of this approach with a toy example and examples from particle physics.  ...  We introduce a flexible learning procedure based on adversarial networks (Goodfellow et al., 2014) for enforcing that f(X) is a pivot with respect to Z.  ... 
arXiv:1611.01046v3 fatcat:odyvxfcjcbcf3muzl4425rssca
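
A minimal sketch of the kind of adversarial pivot training this abstract describes: a classifier f is trained against an adversary r that tries to recover the nuisance variable Z from f's output, so that f(X) becomes approximately independent of Z. PyTorch is assumed; the layer sizes, the value of lam, and all variable names are illustrative, not taken from the paper.

    import torch
    import torch.nn as nn

    # Classifier f(X) -> score; adversary r tries to predict the nuisance Z
    # from f's output. Training f to fool r pushes f(X) toward a pivot, i.e.
    # a quantity whose distribution does not depend on Z.
    f = nn.Sequential(nn.Linear(10, 64), nn.ReLU(), nn.Linear(64, 1))
    r = nn.Sequential(nn.Linear(1, 64), nn.ReLU(), nn.Linear(64, 1))

    opt_f = torch.optim.Adam(f.parameters(), lr=1e-3)
    opt_r = torch.optim.Adam(r.parameters(), lr=1e-3)
    bce, mse = nn.BCEWithLogitsLoss(), nn.MSELoss()
    lam = 10.0  # accuracy vs. robustness trade-off (the paper's hyperparameter)

    def training_step(x, y, z):
        # x: (B, 10) inputs, y: (B, 1) binary labels, z: (B, 1) nuisance values
        # 1) update the adversary to predict Z from f(X)
        with torch.no_grad():
            s = f(x)
        loss_r = mse(r(s), z)
        opt_r.zero_grad()
        loss_r.backward()
        opt_r.step()

        # 2) update the classifier: fit y while making the adversary's job hard
        s = f(x)
        loss_f = bce(s, y) - lam * mse(r(s), z)
        opt_f.zero_grad()
        loss_f.backward()
        opt_f.step()
        return loss_f.item(), loss_r.item()

Alternating the two updates approximates a minimax objective of the form L_f - lam * L_r, with lam playing the role of the accuracy/robustness trade-off hyperparameter mentioned in the abstract.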

Cascade Adversarial Machine Learning Regularized with a Unified Embedding [article]

Taesik Na, Jong Hwan Ko, Saibal Mukhopadhyay
2018 arXiv   pre-print
To address this challenge, we first show that iteratively generated adversarial images transfer easily between networks trained with the same strategy.  ...  We train a network from scratch by injecting iteratively generated adversarial images crafted from already defended networks, in addition to one-step adversarial images from the network being trained.  ...  We train cascade networks with/without pivot loss. We also train networks with ensemble adversarial training (Tramèr et al., 2017) with/without pivot loss for comparison.  ... 
arXiv:1708.02582v3 fatcat:737ja6tba5fnrmug4v6ye2qajq
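
A hedged sketch of the one-step part of the adversarial training described above: adversarial images are crafted from the network being trained (FGSM-style) and injected back into its training batches. The cascade images crafted from already-defended networks and the unified-embedding (pivot) loss are omitted; PyTorch, the eps value, pixel values in [0, 1], and the 1:1 clean/adversarial mix are all assumptions of this sketch.

    import torch
    import torch.nn.functional as F

    def fgsm_examples(model, x, y, eps=8 / 255):
        # One-step adversarial images from the network being trained.
        # Assumes pixel values in [0, 1].
        x = x.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(x), y)
        grad = torch.autograd.grad(loss, x)[0]
        return (x + eps * grad.sign()).clamp(0, 1).detach()

    def adversarial_training_step(model, optimizer, x, y):
        # Train on clean and one-step adversarial images in the same batch;
        # iteratively generated images from other (already defended) networks
        # would be appended here in the cascade setup.
        x_adv = fgsm_examples(model, x, y)
        loss = F.cross_entropy(model(torch.cat([x, x_adv])), torch.cat([y, y]))
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()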

Adversarial Examples Detection in Features Distance Spaces [chapter]

Fabio Carrara, Rudy Becarelli, Roberto Caldelli, Fabrizio Falchi, Giuseppe Amato
2019 Computer Vision – ECCV 2018 Workshops (Lecture Notes in Computer Science)  
We train an LSTM network that analyzes the sequence of deep features embedded in a distance space to detect adversarial examples.  ...  We argue that the representations of adversarial inputs follow a different evolution with respect to genuine inputs, and we define a distance-based embedding of features to efficiently encode this information  ...  However, it is known to the research community that machine learning models, and specifically deep neural networks, are vulnerable to adversarial examples.  ... 
doi:10.1007/978-3-030-11012-3_26 fatcat:5m7krqpivfajle6haokl7heska
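
A rough sketch of the detection idea in this abstract: embed each layer's deep features as distances to per-class reference points and let an LSTM classify the resulting layer-by-layer sequence as genuine or adversarial. PyTorch is assumed; the use of class centroids as reference points, the hidden size, and all names are illustrative rather than the authors' exact construction.

    import torch
    import torch.nn as nn

    class DistanceSequenceDetector(nn.Module):
        # Each layer's features are mapped to a vector of distances to class
        # centroids; the sequence over layers is classified by an LSTM.
        def __init__(self, num_classes, hidden=64):
            super().__init__()
            self.lstm = nn.LSTM(input_size=num_classes, hidden_size=hidden,
                                batch_first=True)
            self.head = nn.Linear(hidden, 1)

        def forward(self, feats, centroids):
            # feats: list of (batch, dim_l) tensors, one per layer
            # centroids: list of (num_classes, dim_l) tensors, same layers
            dists = [torch.cdist(f, c) for f, c in zip(feats, centroids)]
            seq = torch.stack(dists, dim=1)   # (batch, num_layers, num_classes)
            out, _ = self.lstm(seq)
            return self.head(out[:, -1])      # logit: adversarial vs. genuine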

SATNet: Symmetric Adversarial Transfer Network Based on Two-Level Alignment Strategy towards Cross-Domain Sentiment Classification (Student Abstract)

Yu Cao, Hua Xu
2020 Proceedings of the Thirty-Fourth AAAI Conference on Artificial Intelligence (AAAI-20)  
In this paper, we propose a novel domain adaptation method called Symmetric Adversarial Transfer Network (SATNet). Experiments on the Amazon reviews dataset demonstrate the effectiveness of SATNet.  ...  Recently, many researchers apply the unlabeled data for training to learn representations shared across domains, i.e., Neural Network with Auxiliary Task (AuxNN) (Yu and Jiang 2016). Recently, some existing  ...  Symmetric Adversarial Transfer Network: In this section, we first present an overview of our proposed SATNet model. Then we detail the model with three related works.  ... 
doi:10.1609/aaai.v34i10.7153 fatcat:xasak35fpzgj7abfvqe33r6ubq

Hierarchical Attention Generative Adversarial Networks for Cross-domain Sentiment Classification [article]

Yuebing Zhang and Duoqian Miao and Jiaqi Wang
2019 arXiv   pre-print
In recent years, many researchers have used deep neural network models for the cross-domain sentiment classification task, many of which use a Gradient Reversal Layer (GRL) to design an adversarial network structure  ...  Different from those methods, we propose Hierarchical Attention Generative Adversarial Networks (HAGAN), which alternately trains a generator and a discriminator in order to produce a document representation  ...  With the development of deep learning, many neural network models were proposed for CDSC.  ... 
arXiv:1903.11334v1 fatcat:goob4kttnvgaze3oks22snz5ou
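
The Gradient Reversal Layer (GRL) mentioned in this abstract, and used by several of the adversarial domain-adaptation results in this listing, fits in a few lines. A minimal PyTorch-style sketch; the scaling factor lam is an assumed hyperparameter.

    import torch

    class GradReverse(torch.autograd.Function):
        # Identity in the forward pass; multiplies the incoming gradient by
        # -lam in the backward pass, so the feature extractor below this layer
        # is trained to fool the domain classifier above it.
        @staticmethod
        def forward(ctx, x, lam):
            ctx.lam = lam
            return x.view_as(x)

        @staticmethod
        def backward(ctx, grad_output):
            return -ctx.lam * grad_output, None

    def grad_reverse(x, lam=1.0):
        return GradReverse.apply(x, lam)

Placing a domain classifier on top of grad_reverse(features) then trains the feature extractor toward domain-invariant representations with a single backward pass.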

End-to-End Adversarial Memory Network for Cross-domain Sentiment Classification

Zheng Li, Yu Zhang, Ying Wei, Yuxiang Wu, Qiang Yang
2017 Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence  
To address the problem, we introduce an end-to-end Adversarial Memory Network (AMN) for cross-domain sentiment classification.  ...  Recently, deep learning methods have been proposed to learn a representation shared by domains. However, they lack the interpretability to directly identify the pivots.  ...  The goal of the joint learning is to minimize L_total with respect to the model parameters except for the adversarial training part.  ... 
doi:10.24963/ijcai.2017/311 dblp:conf/ijcai/LiZWWY17 fatcat:plx3bcgvifbbbjqw25bpzqr2nm

Systematic aware learning

Victor Estrade, Cécile Germain, Isabelle Guyon, David Rousseau
2019 EPJ Web of Conferences  
Experimental science often has to cope with systematic errors that coherently bias data.  ...  Systematics-aware learning should create an efficient representation that is insensitive to perturbations induced by the systematic effects.  ...  Pivot Adversarial Network.  ... 
doi:10.1051/epjconf/201921406024 fatcat:gxgzr2yl3vdzbld5vavk2vfhiu

Graph Domain Adversarial Transfer Network for Cross-Domain Sentiment Classification

Hengliang Tang, Yuan Mi, Fei Xue, Yang Cao
2021 IEEE Access  
Therefore, from a new perspective, this paper proposes the Graph Domain Adversarial Transfer Network (GDATN) based on the idea of adversarial learning, which uses the labeled source domain data to predict  ...  Although current deep learning models have already achieved good performance through their powerful feature learning capabilities, there are serious deficiencies in dealing with the above problem.  ...  [30] proposed the Pivot Based Language Model (PBLM), which combines pivot-based models with neural networks in a structure-aware manner.  ... 
doi:10.1109/access.2021.3061139 fatcat:fzyscc7ytvdyros34ombxjljpi

Learning Invariant Representations across Domains and Tasks [article]

Jindong Wang, Wenjie Feng, Chang Liu, Chaohui Yu, Mingxuan Du, Renjun Xu, Tao Qin, Tie-Yan Liu
2021 arXiv   pre-print
In addition to learning transferable features via domain-adversarial training, we propose a novel task semantic adaptor that uses the learning-to-learn strategy to adapt the task semantics.  ...  In this paper, we propose a novel Task Adaptation Network (TAN) to solve this unsupervised task transfer problem.  ...  The pivot data are the data with high confidence scores during the learning process, so they can serve as representatives for the domain-adversarial training.  ... 
arXiv:2103.05114v1 fatcat:ykgrogsyuvcgddqepdcxwvazaq

SR-GAN: Semantic Rectifying Generative Adversarial Network for Zero-shot Learning [article]

Zihan Ye and Fan Lyu and Linyan Li and Qiming Fu and Jinchang Ren and Fuyuan Hu
2019 arXiv   pre-print
First, we pre-train a Semantic Rectifying Network (SRN) to rectify the semantic space with a semantic loss and a rectifying loss.  ...  Then, a Semantic Rectifying Generative Adversarial Network (SR-GAN) is built to generate plausible visual features of unseen classes from both the semantic feature and the rectified semantic feature.  ...  semantic rectifying network (SRN).  ... 
arXiv:1904.06996v1 fatcat:hstqbmn6gbd3ffctcewen4hvey

Adversarial Soft-detection-based Aggregation Network for Image Retrieval [article]

Jian Xu, Chunheng Wang, Cunzhao Shi, Baihua Xiao
2019 arXiv   pre-print
Therefore, it is important to extract discriminative representations that contain regional information about the pivotal small object.  ...  Our trainable adversarial detector generates semantic maps based on an adversarial erasing strategy to preserve more discriminative and detailed information.  ...  Compared with the semantic map generated without adversarial learning, as shown in Fig. 3(b), the adversarial detectors capture more discriminative and detailed patterns.  ... 
arXiv:1811.07619v3 fatcat:cctz2l6ognbejg6xx3b555ebei

Metadata-conscious anonymous messaging

Giulia Fanti, Peter Kairouz, Sewoong Oh, Kannan Ramchandran, Pramod Viswanath
2016 IEEE Transactions on Signal and Information Processing over Networks  
Recent advances in network analysis have revealed that such diffusion processes are vulnerable to author deanonymization by adversaries with access to metadata, such as timing information.  ...  Anonymous messaging platforms like Whisper and Yik Yak allow users to spread messages over a network (e.g., a social network) without revealing message authorship to other users.  ...  For instance, in Figure 2 (right), we can use spies 7 and 8 to learn that node 2 is a pivot with level m_2 = 2. Estimation hinges on the minimum-level pivot across all spy nodes.  ... 
doi:10.1109/tsipn.2016.2605761 fatcat:rugyvh2ojfb3xa7jrnddmzehy4

Crown Jewels Analysis using Reinforcement Learning with Attack Graphs [article]

Rohit Gangupantulu, Tyler Cody, Abdul Rahman, Christopher Redino, Ryan Clark, Paul Park
2021 arXiv   pre-print
In our experiment, CJA-RL identified ideal entry points, choke points, and pivots for exploiting a network with multiple crown jewels, exemplifying how CJA-RL and reinforcement learning for penetration  ...  This paper presents a novel method for crown jewel analysis termed CJA-RL that uses reinforcement learning to identify key terrain and avenues of approach for exploiting crown jewels.  ...  In this paper, Deep Q-learning (DQN) is used to approximate Q* with a neural network Q(s, a; θ), where θ are the parameters of the neural network [17], [18].  ... 
arXiv:2108.09358v1 fatcat:aj7kjrsld5eslaoyvign56ae6q
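
The DQN approximation mentioned in the last snippet is typically trained by minimizing a temporal-difference loss against a slowly updated copy of the Q-network (the target network). A minimal sketch assuming PyTorch and replay-buffer tensors with the shapes noted in the comments; all names are illustrative.

    import torch
    import torch.nn.functional as F

    def dqn_loss(q_net, target_net, s, a, r, s_next, done, gamma=0.99):
        # s, s_next: (B, state_dim); a: (B,) long; r, done: (B,) float
        # Q(s, a; theta) for the actions actually taken
        q_sa = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)
        # Bellman target r + gamma * max_a' Q(s', a'), cut off at episode end
        with torch.no_grad():
            target = r + gamma * (1 - done) * target_net(s_next).max(dim=1).values
        return F.smooth_l1_loss(q_sa, target)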

Toward anonymity in Delay Tolerant Networks: Threshold Pivot Scheme

Rob Jansen, Robert Beverly
2010 2010 - MILCOM 2010 MILITARY COMMUNICATIONS CONFERENCE  
Delay Tolerant Networks (DTNs) remove traditional assumptions of end-to-end connectivity, extending network communication to intermittently connected mobile, ad-hoc, and vehicular environments.  ...  We develop a novel Threshold Pivot Scheme (TPS) for DTNs to address these challenges and provide resistance to traffic analysis, source anonymity, and sender-receiver unlinkability.  ...  The pivot learns no information about the location of the source, and the pivot location leaks no information to the destination. However, pivoting is inefficient in practice.  ... 
doi:10.1109/milcom.2010.5680442 fatcat:w6xantb6fbe4fpgf6gmdldtrdi

Neural Unsupervised Domain Adaptation in NLP—A Survey [article]

Alan Ramponi, Barbara Plank
2020 arXiv   pre-print
Deep neural networks excel at learning from labeled data and achieve state-of-the-art results on a wide array of Natural Language Processing tasks.  ...  We outline methods, from early traditional non-neural methods to pre-trained model transfer.  ...  This research is supported by a visit grant to Alan supported by COSBI and a research leader sapere aude grant to Barbara by the Independent Research Fund Denmark (Danmarks Frie Forskningsfond, grant number  ... 
arXiv:2006.00632v2 fatcat:n4g3yelofzdqpoacta3agjahde
Showing results 1 — 15 out of 9,121 results