5,561 Hits in 6.9 sec

Aggregated Learning: A Vector-Quantization Approach to Learning Neural Network Classifiers [article]

Masoumeh Soflaei, Hongyu Guo, Ali Al-Bashabsheh, Yongyi Mao, Richong Zhang
2021 arXiv   pre-print
We consider the problem of learning a neural network classifier.  ...  Such an approach, assisted with some variational techniques, results in a novel learning framework, "Aggregated Learning", for classification with neural network models.  ...  The superiority of vector quantizers over scalar quantizers then motivates us to develop a vector-quantization approach to IB learning, which we call Aggregated Learning, or Agr-Learn for short.  ...
arXiv:2001.03955v3 fatcat:culwpeqtmzfbrffvr7agsoghcu

Aggregated Learning: A Vector-Quantization Approach to Learning Neural Network Classifiers

Masoumeh Soflaei, Hongyu Guo, Ali Al-Bashabsheh, Yongyi Mao, Richong Zhang
2020 Proceedings of the Thirty-Fourth AAAI Conference on Artificial Intelligence (AAAI-20)
We consider the problem of learning a neural network classifier.  ...  Such an approach, assisted with some variational techniques, results in a novel learning framework, "Aggregated Learning", for classification with neural network models.  ...  The superiority of vector quantizers over scalar quantizers then motivates us to develop a vector-quantization approach to IB learning, which we call Aggregated Learning, or Agr-Learn for short.  ...
doi:10.1609/aaai.v34i04.6038 fatcat:ndnsb3jqmnf3fa565ylbtevg3y

Aggregated Learning: A Deep Learning Framework Based on Information-Bottleneck Vector Quantization [article]

Hongyu Guo, Yongyi Mao, Ali Al-Bashabsheh, Richong Zhang
2019 arXiv   pre-print
Such a deficiency then inspires us to develop a novel learning framework, AgrLearn, that corresponds to vector IB quantizers for learning with neural networks.  ...  Based on the notion of the information bottleneck (IB), we formulate a quantization problem called "IB quantization". We show that IB quantization is equivalent to learning based on the IB principle.  ...  Recognizing the theoretical inferiority of scalar quantizers to vector quantizers, we devise a novel neural-network learning framework, AgrLearn, that is equivalent to vector IB quantizers. We empirically  ...
arXiv:1807.10251v3 fatcat:7opuatzfknfh5ld2qat5y4qzuq
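
The three AgrLearn entries above share one mechanism: rather than classifying one sample at a time, n samples are aggregated into a single wide network input and the model predicts all n labels jointly, mirroring a vector (rather than scalar) IB quantizer. A minimal Python sketch of that batch-folding step, with illustrative shapes and no claim to match the papers' architectures:

import numpy as np

def aggregate_batch(X, Y, n):
    """Fold a batch into 'aggregated' examples: n inputs are
    concatenated into one wide input, and the model must predict
    all n matching labels jointly (the AgrLearn idea)."""
    m = (len(X) // n) * n                     # drop the remainder
    Xa = X[:m].reshape(-1, n * X.shape[1])    # each row = n concatenated inputs
    Ya = Y[:m].reshape(-1, n)                 # each row = the n matching labels
    return Xa, Ya

# toy usage: 8 four-dimensional samples, aggregation factor n=2
X = np.random.randn(8, 4)
Y = np.random.randint(0, 3, size=8)
Xa, Ya = aggregate_batch(X, Y, n=2)
print(Xa.shape, Ya.shape)   # (4, 8) (4, 2)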

Federated Learning with Quantum Secure Aggregation [article]

Yichi Zhang, Chao Zhang, Cai Zhang, Lixin Fan, Bei Zeng, Qiang Yang
2022 arXiv   pre-print
This article illustrates a novel Quantum Secure Aggregation (QSA) scheme that is designed to provide highly secure and efficient aggregation of local model parameters for federated learning.  ...  It was empirically demonstrated that the proposed QSA can be readily applied to aggregate different types of local models including logistic regression (LR), convolutional neural networks (CNN) as well  ...  vector machines and neural networks etc  ... 
arXiv:2207.07444v1 fatcat:eneq3tis6jalhdssdzd4reozn4
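
The quantum machinery of QSA does not reduce to a few lines of code; as a classical analogue of what secure aggregation achieves, the sketch below uses pairwise additive masks that cancel in the sum, so the server recovers the aggregate without seeing any individual update. The masking scheme and all names are illustrative, not the paper's protocol:

import numpy as np

rng = np.random.default_rng(0)
dim, n_clients = 5, 3
updates = [rng.normal(size=dim) for _ in range(n_clients)]

# pairwise masks: client i adds mask(i, j), client j subtracts it
masks = {(i, j): rng.normal(size=dim)
         for i in range(n_clients) for j in range(i + 1, n_clients)}

def masked_update(i):
    u = updates[i].copy()
    for (a, b), m in masks.items():
        if a == i: u += m
        if b == i: u -= m
    return u

# the server only ever sees masked vectors; the masks cancel in the sum
agg = sum(masked_update(i) for i in range(n_clients))
assert np.allclose(agg, sum(updates))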

Learning Local Feature Aggregation Functions with Backpropagation [article]

Angelos Katharopoulos, Despoina Paschalidou, Christos Diou and Anastasios Delopoulos
2017 arXiv   pre-print
To achieve that, we compose the local feature aggregation function with the classifier cost function and we backpropagate the gradient of this cost function in order to update the local feature aggregation  ...  Bag of Words, Fisher Vectors and VLAD, by a large margin.  ...  J(x, y; W) = -\log \frac{\exp(W_y^\top x)}{\sum_{\hat{y}} \exp(W_{\hat{y}}^\top x)} \quad (10)  We could have used any classifier whose training is equivalent to minimizing a differentiable cost function, such as Neural Networks.  ...
arXiv:1706.08580v1 fatcat:5c736eu4rfatjfdcr6x4vmpdbu

Learning Local Feature Aggregation Functions With Backpropagation

Anastasios Delopoulos, Christos Diou, Angelos Katharopoulos, Despoina Paschalidou
2018 Zenodo  
This allows us to jointly learn a classifier and a feature aggregation function by solving the optimization problem of equation 4.  ...  J(x, y; W) = -\log \frac{\exp(W_y^\top x)}{\sum_{\hat{y}} \exp(W_{\hat{y}}^\top x)} \quad (10)  We could have used any classifier whose training is equivalent to minimizing a differentiable cost function, such as Neural Networks.  ...
doi:10.5281/zenodo.1159842 fatcat:seqxq3tf7factgtsauzo4b6zfi
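
Equation (10) above is the standard softmax cross-entropy loss on the aggregated feature vector. A small numpy sketch, using mean pooling as an illustrative stand-in for the learned aggregation function (the papers learn that function by backpropagating this cost through it):

import numpy as np

def cost(x, y, W):
    """Softmax cross-entropy of equation (10):
    J(x, y; W) = -log( exp(W_y^T x) / sum_yhat exp(W_yhat^T x) )."""
    logits = W.T @ x                       # one score per class
    logits -= logits.max()                 # numerical stability
    return -(logits[y] - np.log(np.exp(logits).sum()))

rng = np.random.default_rng(0)
local = rng.normal(size=(7, 16))           # 7 local descriptors, 16-d each
x = local.mean(axis=0)                     # aggregated global vector
W = rng.normal(size=(16, 4))               # classifier weights, 4 classes
print(cost(x, y=2, W=W))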

Learning and aggregating deep local descriptors for instance-level recognition [article]

Giorgos Tolias, Tomas Jenicek, Ondřej Chum
2020 arXiv   pre-print
We achieve state-of-the-art performance, in some cases even with a backbone network as small as ResNet18.  ...  We propose an efficient method to learn deep local descriptors for instance-level recognition.  ...  A metric learning approach is used to train the network.  ...
arXiv:2007.13172v1 fatcat:ql5gxzcz4zdezhian3d36koszm

Learning Private Neural Language Modeling with Attentive Aggregation [article]

Shaoxiong Ji, Shirui Pan, Guodong Long, Xue Li, Jing Jiang, Zi Huang
2019 arXiv   pre-print
Federated learning (FL) provides a promising approach to learning private language modeling for intelligent personalized keyboard suggestion by training models in distributed clients rather than training  ...  To solve these problems, we propose a novel model aggregation with an attention mechanism that considers the contribution of clients' models to the global model, together with an optimization technique during  ...  To facilitate learning in a language model and reduce the number of trainable parameters, Inan et al. proposed tying word vectors and word classifiers [17].  ...
arXiv:1812.07108v2 fatcat:jkkadqzf6vhpzlvzpr56lmg57q
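
A sketch of attention-weighted aggregation in the spirit of this snippet: the server scores each client model by its distance to the current global model, turns the scores into softmax weights, and steps the global parameters toward the weighted combination. The scoring rule and step size are illustrative assumptions, not the paper's exact formulation:

import numpy as np

def attentive_aggregate(global_w, client_ws, eps=1.0):
    # score each client by closeness to the current global model
    d = np.array([np.linalg.norm(global_w - w) for w in client_ws])
    att = np.exp(-d) / np.exp(-d).sum()        # softmax over negative distance
    # move the global model toward the attention-weighted client models
    step = sum(a * (w - global_w) for a, w in zip(att, client_ws))
    return global_w + eps * step

rng = np.random.default_rng(0)
g = rng.normal(size=10)
clients = [g + 0.1 * rng.normal(size=10) for _ in range(5)]
g_new = attentive_aggregate(g, clients)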

Broadband Analog Aggregation for Low-Latency Federated Edge Learning (Extended Version) [article]

Guangxu Zhu and Yong Wang and Kaibin Huang
2019 arXiv   pre-print
To leverage the data and resources, a new machine learning paradigm, called edge learning, has emerged where learning algorithms are deployed at the edge for providing fast and intelligent services to  ...  We consider a popular framework, federated edge learning (FEEL), where edge-server and on-device learning are synchronized to train a model without violating user-data privacy.  ...  , where the AI model is based on a neural network and a real image dataset.  ...
arXiv:1812.11494v3 fatcat:cwrjujujnvfx7cma7k2javbloi
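
A toy simulation of over-the-air (analog) aggregation as described in the snippet: each device pre-scales its update by the inverse of its channel gain, so the superposed signal at the server equals the desired sum up to noise. The real-valued channel model and scaling are simplifying assumptions:

import numpy as np

rng = np.random.default_rng(1)
K, dim = 4, 8
grads = [rng.normal(size=dim) for _ in range(K)]
h = rng.uniform(0.5, 1.5, size=K)              # per-device channel gains

# channel-inversion power control: device k transmits x_k = g_k / h_k,
# so the channel multiplies it back to g_k at the receiver
tx = [g / hk for g, hk in zip(grads, h)]
noise = 0.01 * rng.normal(size=dim)
received = sum(hk * x for hk, x in zip(h, tx)) + noise   # superposition

estimate = received / K                        # aggregated (averaged) gradient
print(np.linalg.norm(estimate - np.mean(grads, axis=0)))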

Simultaneous Feature Aggregating and Hashing for Compact Binary Code Learning [article]

Thanh-Toan Do, Khoa Le, Tuan Hoang, Huu Le, Tam V. Nguyen, Ngai-Man Cheung
2019 arXiv   pre-print
This global vector is then subjected to a hashing function to generate a binary hash code. In previous works, the aggregating and the hashing processes are designed independently.  ...  When the data label is available, the framework can be adapted to learn binary codes which minimize the reconstruction loss w.r.t. label vectors.  ...  The binary codes are learned such that they not only encourage the aggregating property but also optimize for a linear classifier.  ... 
arXiv:1904.11820v1 fatcat:22dl2zp2zfeito6lqx3x4uyaoa
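
A minimal sketch of the aggregate-then-hash pipeline the abstract describes, with a random projection plus sign(.) standing in for the learned hashing function, and a reconstruction loss on the aggregated vector (label vectors could take its place when labels are available, per the snippet):

import numpy as np

rng = np.random.default_rng(2)
local = rng.normal(size=(20, 64))     # 20 local features, 64-d each
x = local.mean(axis=0)                # aggregated global vector

P = rng.normal(size=(64, 16))         # stand-in for a learned hash projection
b = np.sign(P.T @ x)                  # 16-bit binary code in {-1, +1}

# reconstruction loss w.r.t. the aggregated vector
recon = P @ b / 16.0
loss = np.linalg.norm(x - recon) ** 2
print(b, loss)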

Layer-wised Model Aggregation for Personalized Federated Learning [article]

Xiaosong Ma, Jie Zhang, Song Guo, Wenchao Xu
2022 arXiv   pre-print
Meanwhile, a parameterized mechanism is introduced to update the layer-wised aggregation weights to progressively exploit the inter-user similarity and realize accurate model personalization.  ...  to optimize the personalized model aggregation for clients with heterogeneous data.  ...  Hypernetworks Hypernetworks [11] are used to generate parameters of other neural networks, e.g., a target network, by mapping the embeddings of the target tasks to corresponding model parameters.  ... 
arXiv:2205.03993v1 fatcat:if2tjdfdsbfjhepz4dkk5fr3g4
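
A sketch of the layer-wise idea: each client blends peers' models with one weight per layer rather than one weight per model, so different layers can be borrowed from different peers. The weight matrix below is fixed for illustration; in the paper such weights are produced and updated by client-specific hypernetworks:

import numpy as np

rng = np.random.default_rng(3)
n_clients, layers = 3, ["conv1", "conv2", "fc"]
models = [{l: rng.normal(size=4) for l in layers} for _ in range(n_clients)]

# per-layer aggregation weights for one client (rows: layers, cols: clients)
W = np.array([[0.6, 0.2, 0.2],     # conv1 leans on the client itself
              [0.2, 0.6, 0.2],     # conv2 borrows mostly from client 1
              [0.4, 0.3, 0.3]])    # fc blends more evenly

personalized = {
    l: sum(W[i, k] * models[k][l] for k in range(n_clients))
    for i, l in enumerate(layers)
}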

Privacy-Preserving Aggregation in Federated Learning: A Survey [article]

Ziyao Liu, Jiale Guo, Wenzhuo Yang, Jiani Fan, Kwok-Yan Lam, Jun Zhao
2022 arXiv   pre-print
Practical PPFL typically allows multiple participants to individually train their machine learning models, which are then aggregated to construct a global model in a privacy-preserving manner.  ...  This survey aims to fill the gap between a large number of studies on PPFL, where PPAgg is adopted to provide a privacy guarantee, and the lack of a comprehensive survey on the PPAgg protocols applied  ...  POSEIDON [141] extends the supported ML models of SPINDLE [140] from linear models to neural networks, and comes up with a distributed bootstrapping protocol for training deep neural networks in an  ...
arXiv:2203.17005v2 fatcat:nlsi6g2fzbgwnhpgj2z5tgmqmq

One-Bit Over-the-Air Aggregation for Communication-Efficient Federated Edge Learning: Design and Convergence Analysis [article]

Guangxu Zhu, Yuqing Du, Deniz Gunduz, Kaibin Huang
2020 arXiv   pre-print
To address this issue, we propose in this work a novel digital version of broadband over-the-air aggregation, called one-bit broadband digital aggregation (OBDA).  ...  In the FEEL framework, edge devices periodically transmit high-dimensional stochastic gradients to the edge server, where these gradients are aggregated and used to update a global model.  ...  MNIST dataset, as illustrated in Fig. 3 , the classifier model is implemented using a 6-layer convolutional neural network (CNN) that consists of two 5×5 convolution layers with ReLU activation (the first  ... 
arXiv:2001.05713v2 fatcat:jvjltw46k5d7vpohvqq64nix6a
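
A toy digital counterpart of OBDA's signal flow: devices transmit only the signs of their gradients, and the server aggregates by majority vote (the sign of the sum) before updating the model. Fading, modulation, and the actual over-the-air superposition are omitted here:

import numpy as np

rng = np.random.default_rng(4)
K, dim = 5, 8
grads = [rng.normal(size=dim) for _ in range(K)]

one_bit = [np.sign(g) for g in grads]   # one bit per coordinate per device
vote = np.sign(sum(one_bit))            # majority vote at the server

lr = 0.01
w = np.zeros(dim)
w -= lr * vote                          # signSGD-style model update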

Quantum Bootstrap Aggregation [chapter]

David Windridge, Rajagopal Nagarajan
2017 Lecture Notes in Computer Science  
We set out a strategy for quantizing attribute bootstrap aggregation to enable variance-resilient quantum machine learning.  ...  We achieve a linear performance advantage, O(d), in addition to the existing O(log(n)) advantages of quantization as applied to Support Vector Machines.  ...  Acknowledgment The first author would like to acknowledge financial support from the Horizon 2020 European Research project DREAMS4CARS (no. 731593).  ... 
doi:10.1007/978-3-319-52289-0_9 fatcat:r6zaxepu3vgqhek6gmstoapkoi
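
The classical procedure being quantized here is attribute bootstrap aggregation (attribute bagging). A sketch with scikit-learn SVMs, where each base classifier is trained on a bootstrap sample restricted to a random attribute subset and predictions are combined by majority vote; ensemble size and subset size are illustrative:

import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC

rng = np.random.default_rng(5)
X, y = make_classification(n_samples=200, n_features=20, random_state=0)

ensemble = []
for _ in range(11):
    rows = rng.integers(0, len(X), size=len(X))     # bootstrap sample
    cols = rng.choice(20, size=8, replace=False)    # random attribute subset
    clf = SVC().fit(X[rows][:, cols], y[rows])
    ensemble.append((clf, cols))

def predict(x):
    votes = [clf.predict(x[cols].reshape(1, -1))[0] for clf, cols in ensemble]
    return np.bincount(votes).argmax()              # majority vote

print(predict(X[0]), y[0])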

Facial Expression Recognition Using a Hybrid CNN–SIFT Aggregator [chapter]

Tee Connie, Mundher Al-Shabi, Wooi Ping Cheah, Michael Goh
2017 Lecture Notes in Computer Science  
The proposed method is motivated by the success of Convolutional Neural Networks (CNN) on the face recognition problem.  ...  This paper describes a novel approach towards the facial expression recognition task.  ...  In this paper, a hybrid Convolutional Neural Network and Scale Invariant Feature Transform aggregator approach is proposed to recognize facial expression.  ...
doi:10.1007/978-3-319-69456-6_12 fatcat:2ztgfabjpfcevbefnqa52c7iiy
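
A sketch of the fusion step such a hybrid aggregator implies: an (assumed precomputed) CNN embedding is concatenated with an aggregate of local SIFT descriptors before classification. Mean pooling and all dimensions are illustrative assumptions, not the paper's design:

import numpy as np

rng = np.random.default_rng(6)
cnn_feat = rng.normal(size=128)           # assumed precomputed CNN embedding
sift = rng.normal(size=(50, 128))         # 50 SIFT descriptors for one face

# simple aggregation of the local SIFT descriptors (mean pooling as a
# stand-in for the aggregator studied in the paper)
sift_feat = sift.mean(axis=0)

hybrid = np.concatenate([cnn_feat, sift_feat])   # fused representation
print(hybrid.shape)                              # (256,) fed to a classifier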
Showing results 1 — 15 out of 5,561 results