Continual learning with hypernetworks
[article]
2022
arXiv
pre-print
Continual learning (CL) is less difficult for this class of models thanks to a simple key feature: instead of recalling the input-output relations of all previously seen data, task-conditioned hypernetworks ...
We provide insight into the structure of low-dimensional task embedding spaces (the input space of the hypernetwork) and show that task-conditioned hypernetworks demonstrate transfer learning. ...
Continual learning with hypernetwork output regularization. ...
arXiv:1906.00695v4
fatcat:xtdourohoza4livlpio75iitka
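The snippets above describe the paper's core mechanism: a hypernetwork takes a low-dimensional task embedding and emits the weights of a target network, so only one weight realization per task has to be revisited rather than all past input-output data. A minimal PyTorch sketch of that idea follows; the class name, layer sizes, and target architecture are illustrative assumptions, not the authors' code.

```python
# Minimal sketch (assumptions, not the authors' code): a task-conditioned
# hypernetwork maps a learned task embedding e(t) to the weights of a small
# target network f(x; theta).
import torch
import torch.nn as nn

class TaskConditionedHypernetwork(nn.Module):
    def __init__(self, num_tasks, emb_dim=8, target_in=2, target_hidden=16, target_out=1):
        super().__init__()
        # one learnable embedding per task (the hypernetwork's input space)
        self.task_emb = nn.Embedding(num_tasks, emb_dim)
        # parameter shapes of the target network: W1, b1, W2, b2
        self.shapes = [(target_hidden, target_in), (target_hidden,),
                       (target_out, target_hidden), (target_out,)]
        n_params = sum(torch.Size(s).numel() for s in self.shapes)
        self.body = nn.Sequential(nn.Linear(emb_dim, 64), nn.ReLU(),
                                  nn.Linear(64, n_params))

    def forward(self, task_id):
        # generate the full flat parameter vector for the given task
        flat = self.body(self.task_emb(torch.as_tensor([task_id]))).squeeze(0)
        params, i = [], 0
        for s in self.shapes:
            n = torch.Size(s).numel()
            params.append(flat[i:i + n].view(s))
            i += n
        return params  # [W1, b1, W2, b2] of the target network

def target_forward(x, params):
    W1, b1, W2, b2 = params
    return torch.relu(x @ W1.t() + b1) @ W2.t() + b2

hnet = TaskConditionedHypernetwork(num_tasks=3)
y = target_forward(torch.randn(5, 2), hnet(task_id=0))
```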
Continual Model-Based Reinforcement Learning with Hypernetworks
[article]
2021
arXiv
pre-print
We argue that this is too slow for lifelong robot learning and propose HyperCRL, a method that continually learns the encountered dynamics in a sequence of tasks using task-conditional hypernetworks. ...
the state transition experience; second, it uses fixed-capacity hypernetworks to represent non-stationary and task-aware dynamics; third, it outperforms existing continual learning alternatives that rely ...
Algorithm 1: Continual Reinforcement Learning via Hypernetworks (HyperCRL). Input: T tasks, each with its own dynamics S × A → S and reward r(s, a). ...
arXiv:2009.11997v2
fatcat:nwf3ximskrhytj3pbvxqz33xsa
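The Algorithm 1 snippet above specifies per-task dynamics S × A → S. A minimal sketch of that model signature is given below, with the per-task parameters θ_t standing in for what HyperCRL's task-conditioned hypernetwork would generate; the linear form and dimensions are assumptions.

```python
# Hedged sketch of the per-task dynamics signature s' = f(s, a; theta_t).
# In HyperCRL, theta_t would be emitted by a task-conditioned hypernetwork;
# here it is a random placeholder and the model is linear for brevity.
import torch

def dynamics_step(s, a, theta_t):
    A, B = theta_t                      # per-task transition model S x A -> S
    return s @ A.t() + a @ B.t()

s, a = torch.randn(1, 4), torch.randn(1, 2)
theta_task0 = (torch.randn(4, 4), torch.randn(4, 2))   # generated for task 0
s_next = dynamics_step(s, a, theta_task0)               # predicted next state
```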
Continual Learning in Recurrent Neural Networks
[article]
2021
arXiv
pre-print
While a diverse collection of continual learning (CL) methods has been proposed to prevent catastrophic forgetting, a thorough investigation of their effectiveness for processing sequential data with recurrent ...
Our study shows that established CL methods can be successfully ported to the recurrent case, and that a recent regularization approach based on hypernetworks outperforms weight-importance methods, thus ...
Related Work. Continual learning with sequential data. As in Parisi et al. ...
arXiv:2006.12109v3
fatcat:dy7xrnm2nvfr7iwre6hqihatpa
Continual learning with hypernetworks
2020
Continual learning (CL) is less difficult for this class of models thanks to a simple key feature: instead of recalling the input-output relations of all previously seen data, task-conditioned hypernetworks ...
[Figure residue: task embedding e^(t) mapped by the hypernetwork f_h to target-network weights Θ_trgt^(t,1), ..., Θ_trgt^(t,Nc).] Continual learning with hypernetwork output regularization. ...
doi:10.5167/uzh-200390
fatcat:nchjgzvcs5hxfgo6do7bxrblmq
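The output regularization mentioned in the snippets penalizes drift in the weights the hypernetwork generates for earlier tasks while a new task is being learned. A hedged LaTeX reconstruction of its general form follows; notation loosely follows the figure residue above, and the exact parameterization is in the paper.

```latex
% f_h(e^{(t)}, \Theta_h): target-network weights generated for task t;
% \Theta_h^*: hypernetwork parameters stored before learning task T.
\[
  \mathcal{L}_{\mathrm{output}}
  = \frac{\beta_{\mathrm{output}}}{T-1}
    \sum_{t=1}^{T-1}
    \bigl\lVert f_h\bigl(e^{(t)}, \Theta_h^{*}\bigr)
              - f_h\bigl(e^{(t)}, \Theta_h\bigr) \bigr\rVert^{2}
\]
```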
Continual Learning from Demonstration of Robotic Skills
[article]
2022
arXiv
pre-print
Our results show that hypernetworks outperform other state-of-the-art regularization-based continual learning approaches for learning from demonstration. ...
To this end, we propose an approach for continual learning from demonstration using hypernetworks and neural ordinary differential equation solvers. ...
With this, we compute the continual learning metrics shown in Tab. III. ...
arXiv:2202.06843v2
fatcat:hblan6epvrax3atomvei3lx3uu
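The approach described above pairs a hypernetwork with a neural ODE: each demonstrated skill becomes a vector field whose parameters the hypernetwork generates, and the motion is reproduced by integrating that field. A minimal sketch, using an explicit Euler step as a stand-in for the ODE solver and random placeholders where the hypernetwork output would go:

```python
# Minimal sketch (assumptions, not the authors' code): a skill is a vector
# field f(x; theta) whose parameters theta a task-conditioned hypernetwork
# would emit; the trajectory is reproduced by integrating dx/dt = f(x; theta).
import torch

def vector_field(x, theta):
    W1, b1, W2, b2 = theta              # a tiny MLP as the ODE right-hand side
    return torch.tanh(x @ W1 + b1) @ W2 + b2

def rollout(x0, theta, dt=0.01, steps=200):
    # explicit-Euler stand-in for a neural-ODE solver
    traj, x = [x0], x0
    for _ in range(steps):
        x = x + dt * vector_field(x, theta)
        traj.append(x)
    return torch.stack(traj)

d = 2                                    # e.g. a 2-D end-effector position
theta_skill = (torch.randn(d, 32), torch.zeros(32),
               torch.randn(32, d) * 0.1, torch.zeros(d))
trajectory = rollout(x0=torch.tensor([1.0, 0.0]), theta=theta_skill)
```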
Hypernetworks: A Molecular Evolutionary Architecture for Cognitive Learning and Memory
2008
IEEE Computational Intelligence Magazine
Here we review the principles underlying human learning and memory, and identify three of them, i.e., continuity, glocality, and compositionality, as the most fundamental to human-level machine learning ...
The chemically-based massive interaction for information organization and processing in the molecular hypernetworks, referred to as hyperinteractionism, is contrasted with the symbolist, connectionist, ...
Cognitive Learning with Hypernetworks. We are now in a better position to examine the properties of the hypernetwork model. ...
doi:10.1109/mci.2008.926615
fatcat:dr23kuxnsnb2hayxjhf4r3hxbe
Hypernetworks for Continual Semi-Supervised Learning
[article]
2021
arXiv
pre-print
Learning from data that arrives sequentially, possibly in a non-i.i.d. way and with a task distribution that changes over time, is called continual learning. ...
We consolidate the knowledge of sequential tasks in the hypernetwork, and the base network learns the semi-supervised learning task. ...
MCSSL: Meta-Consolidation for Continual Semi-Supervised Learning. This section starts with the problem set-up of Continual Semi-Supervised Learning. ...
arXiv:2110.01856v1
fatcat:ckic6sbibnbqzl5c6fbhm2cgia
Evolving hypernetwork models of binary time series for forecasting price movements on stock markets
2009
2009 IEEE Congress on Evolutionary Computation
In particular, the hypernetwork approach outperforms other machine learning methods such as support vector machines, naive Bayes, multilayer perceptrons, and k-nearest neighbors. ...
Applied to the Dow Jones Industrial Average Index and the Korea Composite Stock Price Index data, the experimental results show that the proposed method effectively learns and predicts the time series ...
The quality of the hypernetwork, assessed with the feedback connection, provides good insight into the quality of the patterns learned by the hypernetwork from the dataset. ...
doi:10.1109/cec.2009.4982944
dblp:conf/cec/BautuKBLZ09
fatcat:ba2orail55f4vltqcnf34imqay
Stochastic Hyperparameter Optimization through Hypernetworks
[article]
2018
arXiv
pre-print
Machine learning models are often tuned by nesting optimization of model weights inside the optimization of hyperparameters. ...
We show that our technique converges to locally optimal weights and hyperparameters for sufficiently large hypernetworks. ...
Sufficiently powerful hypernetworks can learn continuous best-response functions, which minimize the expected loss for all hyperparameter distributions with convex support. ...
arXiv:1802.09419v2
fatcat:sntuz2kwcnfpxhpxwibeeha3ri
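The best-response idea in the snippets can be sketched directly: a small hypernetwork w_φ(λ) maps a hyperparameter to model weights and is trained by sampling λ and minimizing the regularized training loss, after which λ can be tuned by querying w_φ. The toy data, sizes, and sampling range below are assumptions.

```python
# Hedged sketch of hyper-training: a hypernetwork maps a hyperparameter
# (here an L2 penalty) to the weights of a linear model.
import torch
import torch.nn as nn

X, y = torch.randn(128, 5), torch.randn(128, 1)           # toy regression data
hyper = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 5))
opt = torch.optim.Adam(hyper.parameters(), lr=1e-2)

for step in range(500):
    lam = torch.rand(1, 1) * 3.0                           # sample hyperparameter
    w = hyper(lam).t()                                     # weights as a function of lambda
    loss = ((X @ w - y) ** 2).mean() + lam.squeeze() * (w ** 2).sum()
    opt.zero_grad()
    loss.backward()
    opt.step()

# Hyperparameters can then be tuned by querying the trained hypernetwork;
# in practice this search would use a held-out validation loss.
best_lam = min((torch.tensor([[l]]) for l in (0.1, 0.5, 1.0, 2.0)),
               key=lambda l: ((X @ hyper(l).t() - y) ** 2).mean().item())
```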
Evolutionary hypernetworks for learning to generate music from examples
2009
2009 IEEE International Conference on Fuzzy Systems
Evolutionary hypernetworks (EHNs) are recently introduced models for learning higher-order probabilistic relations of data by an evolutionary self-organizing process. ...
Short-term and long-term sequential patterns can be extracted and combined to generate music with various styles by our method. ...
Music completion as validation. With the trained hypernetwork and a given melody fragment as a cue, we generate music by continuously predicting notes. ...
doi:10.1109/fuzzy.2009.5277047
dblp:conf/fuzzIEEE/KimKZ09
fatcat:g3nli2kr6jcubmi4xy6hzq247q
Evolutionary concept learning from cartoon videos by multimodal hypernetworks
2013
2013 IEEE Congress on Evolutionary Computation
Two key ideas of evolutionary concept learning are representing concepts as a large collection (population) of hyperedges, i.e. a hypergraph, and incrementally learning from video streams based on an evolutionary ...
Previous research on concept learning has focused on unimodal data, usually on linguistic domains in a static environment. ...
The multimodal hypernetwork incrementally learns higher-order concept relations from the visual and textual sets with a subsampling-based evolutionary method. ...
doi:10.1109/cec.2013.6557700
dblp:conf/cec/LeeHKZ13
fatcat:q4trwieypnctzebnojfde7bfcq
Hypernetwork functional image representation
[article]
2019
arXiv
pre-print
We use a hypernetwork to automatically generate a continuous functional representation of images at test time without any additional training. ...
Since the obtained representation is continuous, we can easily inspect the image at various resolutions. ...
Hypernetwork. The hypernetwork is a convolutional neural network with some modifications (see Figure 5). We created an eight-layer network with one residual connection. ...
arXiv:1902.10404v2
fatcat:to2tsv6mnzb7viamqenot3cn5e
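The "continuous functional representation" in the snippets is a coordinate network: a function from (x, y) to pixel values that can be sampled on a grid of any resolution. A minimal sketch follows; in the paper the coordinate network's weights are produced by the hypernetwork from the input image, here they are random placeholders.

```python
# Hedged sketch of a continuous functional image representation: a coordinate
# MLP maps (x, y) in [0, 1]^2 to an RGB value and can be rendered at any
# resolution. Weights are random stand-ins for hypernetwork output.
import torch
import torch.nn as nn

img_fn = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 3))  # (x, y) -> RGB

def render(img_fn, h, w):
    ys, xs = torch.meshgrid(torch.linspace(0, 1, h), torch.linspace(0, 1, w),
                            indexing="ij")
    coords = torch.stack([xs, ys], dim=-1).reshape(-1, 2)
    return img_fn(coords).reshape(h, w, 3)

low_res = render(img_fn, 32, 32)       # same representation, different
high_res = render(img_fn, 256, 256)    # sampling resolutions
```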
HyperInvariances: Amortizing Invariance Learning
[article]
2022
arXiv
pre-print
Providing invariances in a given learning task conveys a key inductive bias that can lead to sample-efficient learning and good generalisation, if correctly specified. ...
However, invariance learning is expensive and data intensive for popular neural architectures. We introduce the notion of amortizing invariance learning. ...
A_i^train: HyperInvariance train accuracy with continuous invariance; A_i^*: HyperInvariance test accuracy with continuous invariance; A_MTL^train: multi-task baseline train accuracy. ...
arXiv:2207.08304v1
fatcat:pryxbpkzybbv3dri7ed3l5uk2a
Layered Hypernetwork Models for Cross-Modal Associative Text and Image Keyword Generation in Multimodal Information Retrieval
[chapter]
2010
Lecture Notes in Computer Science
Here, we propose a novel text and image keyword generation method by cross-modal associative learning and inference with multimodal queries. ...
We use a modified hypernetwork model, i.e. layered hypernetworks (LHNs), which consist of a first (lower) layer and a second (upper) layer that has more than two modality-dependent hypernetworks and ...
As learning continues, the structure of a hypernetwork fits the distribution of the given data more closely. ...
doi:10.1007/978-3-642-15246-7_10
fatcat:yvileqmhfnf6jhmvjwfuzzgmsm
Hypernetwork Dismantling via Deep Reinforcement Learning
[article]
2022
arXiv
pre-print
In this work, we formulate the hypernetwork dismantling problem as a node sequence decision problem and propose a deep reinforcement learning (DRL)-based hypernetwork dismantling framework. ...
Then trial-and-error dismantling tasks are conducted by an agent on these synthetic hypernetworks, and the dismantling strategy is continuously optimized. ...
L = L_Q + α L_E (Eq. 15). By repeatedly gathering experiences and learning from them, the agent continuously updates its hypernetwork dismantling strategy. ...
arXiv:2104.14332v2
fatcat:ob7txofxzffgvlbed6c3r3ekm4
Showing results 1 — 15 out of 697 results