373 Hits in 3.0 sec

HoloQuantum HyperNetwork Theory [article]

Alireza Tavanfar
2018 arXiv   pre-print
and formulated out of a unique system of nine principles.  ...  The fundamental, general, kinematically-and-dynamically complete quantum many body theory of the entirely-quantized HyperNetworks, namely HoloQuantum HyperNetwork Theory, 'M', is axiomatically defined  ...  Now, for the general HoloQuantum HyperNetwork to satisfy principle four, we directly solve the quantum constraints (17).  ... 
arXiv:1801.05286v3 fatcat:woisf73zxnewbfsjoppx2cawrq

Stochastic Hyperparameter Optimization through Hypernetworks [article]

Jonathan Lorraine, David Duvenaud
2018 arXiv   pre-print
We show that our technique converges to locally optimal weights and hyperparameters for sufficiently large hypernetworks.  ...  We compare this method to standard hyperparameter optimization strategies and demonstrate its effectiveness for tuning thousands of hyperparameters.  ...  ACKNOWLEDGMENTS We thank Matthew MacKay, Dougal Maclaurin, Daniel Flam-Shepard, Daniel Roy, and Jack Klys for helpful discussions.  ... 
arXiv:1802.09419v2 fatcat:sntuz2kwcnfpxhpxwibeeha3ri

Stochastic Maximum Likelihood Optimization via Hypernetworks [article]

Abdul-Saboor Sheikh, Kashif Rasul, Andreas Merentitis, Urs Bergmann
2018 arXiv   pre-print
A hypernetwork initializes the weights of another network, which in turn can be employed for typical functional tasks such as regression and classification.  ...  This work explores maximum likelihood optimization of neural networks through hypernetworks.  ...  We use a softplus layer to scale (initially 1.0) the input to the hypernetwork in (2).  ... 
arXiv:1712.01141v2 fatcat:l6fmprhazbghldbsygeysckwnm

Fast Conditional Network Compression Using Bayesian HyperNetworks [article]

Phuoc Nguyen, Truyen Tran, Ky Le, Sunil Gupta, Santu Rana, Dang Nguyen, Trong Nguyen, Shannon Ryan, Svetha Venkatesh
2022 arXiv   pre-print
We employ a hypernetwork to parameterize the posterior distribution of weights given conditional inputs and minimize a variational objective of this Bayesian neural network.  ...  We introduce a conditional compression problem and propose a fast framework for tackling it.  ...  We initialize the hypernetwork such that it initially generates the pretrained weight W_0 with noise, as follows.  ... 
arXiv:2205.06404v1 fatcat:g7msgqpuzndp3gizovfhyeb2hq

Hypernetworks based Radio Spectrum Profiling in Cognitive Radio Networks

Shah Nawaz Khan, Andreas Mitschele-Thiel
2015 EAI Endorsed Transactions on Cognitive Communications  
An overlay spectrum sharing approach is assumed in this paper and the evolutionary hypernetworks are used for the realization of the radio spectrum profiling concept.  ...  This paper presents a novel concept of active radio spectrum profiling for Cognitive Radio (CR) networks using evolutionary hypernetworks.  ...  The initial weight of each hyperedge is set to the same value, i.e. 1, unless otherwise specified at the time of initialization.  ... 
doi:10.4108/cogcom.1.2.e5 fatcat:3ihfpr7rrnc6fl22unyytrasrm

HyperNetworks [article]

David Ha, Andrew Dai, Quoc V. Le
2016 arXiv   pre-print
This work explores hypernetworks: an approach of using one network, also known as a hypernetwork, to generate the weights for another network.  ...  The focus of this work is to make hypernetworks useful for deep convolutional networks and long recurrent networks, where hypernetworks can be viewed as a relaxed form of weight-sharing across layers.  ...  ACKNOWLEDGMENTS We thank Jeff Dean, Geoffrey Hinton, Mike Schuster and the Google Brain team for their help with the project.  ... 
arXiv:1609.09106v4 fatcat:z7uccdomdbcszn3dldt7vzaozi
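The core mechanism described in this abstract, one network emitting the weights of another, can be sketched in a few lines. A minimal sketch, assuming illustrative sizes and a single linear hypernetwork; the names (`generate_weights`, `EMBED_DIM`, the per-layer embeddings) are assumptions for this example, not the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Target network: two layers, each needing a (4, 4) weight matrix.
TARGET_SHAPE = (4, 4)
EMBED_DIM = 3  # size of the per-layer embedding fed to the hypernetwork (assumed)

# Hypernetwork: one linear map from a layer embedding to that layer's
# flattened weights. All target layers share these parameters, which is
# the "relaxed weight-sharing" view mentioned in the abstract.
W_hyper = rng.normal(0.0, 0.1, size=(EMBED_DIM, TARGET_SHAPE[0] * TARGET_SHAPE[1]))
layer_embeddings = rng.normal(size=(2, EMBED_DIM))  # one embedding per target layer

def generate_weights(z):
    """Map a layer embedding z to a full weight matrix for the target network."""
    return (z @ W_hyper).reshape(TARGET_SHAPE)

def target_forward(x, embeddings):
    """Run the target network using only hypernetwork-generated weights."""
    for z in embeddings:
        x = np.tanh(x @ generate_weights(z))
    return x

x = rng.normal(size=(5, 4))   # batch of 5 inputs
y = target_forward(x, layer_embeddings)
print(y.shape)                # (5, 4)
```

During training, gradients would flow through `generate_weights` into `W_hyper` and the embeddings, so only the hypernetwork's parameters are learned.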

Random Hypergraph Models of Learning and Memory in Biomolecular Networks: Shorter-Term Adaptability vs. Longer-Term Persistency

Byoung-Tak Zhang
2007 2007 IEEE Symposium on Foundations of Computational Intelligence  
We study this issue in a probabilistic hypergraph model called the hypernetwork.  ...  The hypernetwork architecture consists of a huge number of randomly sampled hyperedges, each corresponding to higher-order micromodules in the input.  ...  ACKNOWLEDGEMENTS The author would like to thank Joo-Kyung Kim, Sun Kim, and Ha-Young Jang for performing simulations.  ... 
doi:10.1109/foci.2007.371494 dblp:conf/foci/Zhang07 fatcat:yvt2gd3k5zdujcrwrol4rcsxty

Evolving hypernetwork models of binary time series for forecasting price movements on stock markets

Elena Bautu, Sun Kim, Andrei Bautu, Henri Luchian, Byoung-Tak Zhang
2009 2009 IEEE Congress on Evolutionary Computation  
The paper proposes a hypernetwork-based method for stock market prediction through a binary time series problem.  ...  In the paper, we describe two methods for assessing the prediction quality of the hypernetwork approach.  ...  The hyperedge is then associated with the tag t_E = b_{i+m} and inserted into the hypernetwork, with the initial weight w_E = 1.  ... 
doi:10.1109/cec.2009.4982944 dblp:conf/cec/BautuKBLZ09 fatcat:ba2orail55f4vltqcnf34imqay

Delta-STN: Efficient Bilevel Optimization for Neural Networks using Structured Response Jacobians [article]

Juhan Bae, Roger Grosse
2020 arXiv   pre-print
We demonstrate empirically that our Δ-STN can tune regularization hyperparameters (e.g. weight decay, dropout, number of cutout holes) with higher accuracy, faster convergence, and improved stability compared  ...  Based on these observations, we propose the Δ-STN, an improved hypernetwork architecture which stabilizes training and optimizes hyperparameters much more efficiently than STNs.  ...  For ResNet18, we used an entropy weight of τ = 0.0001. The number of Cutout holes was initialized to 1, the Cutout length was initialized to 4, and all other hyperparameters were initialized to 0.05.  ... 
arXiv:2010.13514v1 fatcat:7ehq5fhzmvadxbcvoyt5perydy

Meta-Learning over Time for Destination Prediction Tasks [article]

Mark Tenzer, Zeeshan Rasheed, Khurram Shafique, Nuno Vasconcelos
2022 arXiv   pre-print
In our case, the weights responsible for destination prediction vary with the metadata, in particular the time, of the input trajectory.  ...  We propose an approach based on hypernetworks, a variant of meta-learning ("learning to learn") in which a neural network learns to change its own weights in response to an input.  ...  Distribution Statement "A" (Approved for Public Release, Distribution Unlimited).  ... 
arXiv:2206.14801v1 fatcat:vtrms5hncfgldm7ecsrl7yhlnm

On Infinite-Width Hypernetworks [article]

Etai Littwin, Tomer Galanti, Lior Wolf, Greg Yang
2021 arXiv   pre-print
Hypernetworks are architectures that produce the weights of a task-specific primary network.  ...  In these scenarios, the hypernetwork learns a representation corresponding to the weights of a shallow MLP, which typically encodes shape or image information.  ...  A principled technique for weight initialization in hypernetworks is then developed.  ... 
arXiv:2003.12193v7 fatcat:6sqfcokb4baulbe3xgi6dmwj5y

Continual learning with hypernetworks [article]

Johannes von Oswald and Christian Henning and Benjamin F. Grewe and João Sacramento
2022 arXiv   pre-print
Continual learning (CL) is less difficult for this class of models thanks to a simple key feature: instead of recalling the input-output relations of all previously seen data, task-conditioned hypernetworks  ...  To overcome this problem, we present a novel approach based on task-conditioned hypernetworks, i.e., networks that generate the weights of a target model based on task identity.  ...  Cervera and Jannes Jegminat for discussions, helpful pointers to the CL literature and for feedback on our paper draft.  ... 
arXiv:1906.00695v4 fatcat:xtdourohoza4livlpio75iitka
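The task-conditioned idea in this abstract, one hypernetwork generating per-task target weights from a learned task embedding, plus a penalty that keeps the weights generated for old tasks stable, can be sketched roughly as follows. The sizes, the dot-product hypernetwork, and the names (`target_weights`, `H`, `task_embeddings`) are illustrative assumptions, not the paper's formulation:

```python
import numpy as np

rng = np.random.default_rng(1)

EMBED_DIM, N_WEIGHTS = 4, 8  # assumed sizes for this sketch

# Shared hypernetwork parameters and one learned embedding per task.
H = rng.normal(0.0, 0.1, size=(EMBED_DIM, N_WEIGHTS))
task_embeddings = {t: rng.normal(size=EMBED_DIM) for t in range(3)}

def target_weights(task_id, H_params):
    """Generate the target model's weight vector for a given task."""
    return task_embeddings[task_id] @ H_params

# Continual-learning regularizer: while training on a new task (here task 2),
# penalize drift of the weights the hypernetwork generates for earlier tasks,
# instead of replaying any stored data from those tasks.
H_old = H.copy()                            # snapshot before the new task
H_new = H + rng.normal(0.0, 0.01, H.shape)  # stand-in for the updated hypernetwork

reg = sum(
    np.sum((target_weights(t, H_new) - target_weights(t, H_old)) ** 2)
    for t in range(2)  # previously seen tasks 0 and 1
)
print(reg >= 0.0)  # True
```

The appeal for continual learning is that only the old task embeddings and one hypernetwork snapshot need to be kept, not any past data.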

Continual Model-Based Reinforcement Learning with Hypernetworks [article]

Yizhou Huang, Kevin Xie, Homanga Bharadhwaj, Florian Shkurti
2021 arXiv   pre-print
We argue that this is too slow for lifelong robot learning and propose HyperCRL, a method that continually learns the encountered dynamics in a sequence of tasks using task-conditional hypernetworks.  ...  learning sessions that do not revisit training data from previous tasks, so it only needs to store the most recent fixed-size portion of the state transition experience; second, it uses fixed-capacity hypernetworks  ...  Acknowledgement We thank Qiyang Li (UofT), Krishna Murthy (Mila), Ali Kuwajerwala (UofT), and Roger Grosse (UofT) for providing feedback on the manuscript, and Compute Canada for compute support.  ... 
arXiv:2009.11997v2 fatcat:nwf3ximskrhytj3pbvxqz33xsa

Evolutionary concept learning from cartoon videos by multimodal hypernetworks

Beom-Jin Lee, Jung-Wo Ha, Kyung-Min Kim, Byoung-Tak Zhang
2013 2013 IEEE Congress on Evolutionary Computation  
Concepts have been widely used for categorizing and representing knowledge in artificial intelligence.  ...  We evaluate the proposed method on a suite of children's cartoon videos for 517 minutes of total playing time.  ...  ACKNOWLEDGMENT The authors thank Jae Kyung Jeon for supporting simulation.  ... 
doi:10.1109/cec.2013.6557700 dblp:conf/cec/LeeHKZ13 fatcat:q4trwieypnctzebnojfde7bfcq

Evolutionary hypernetwork models for aptamer-based cardiovascular disease diagnosis

JungWoo Ha, JaeHong Eom, SungChun Kim, ByoungTak Zhang
2007 Proceedings of the 2007 GECCO conference companion on Genetic and evolutionary computation - GECCO '07  
to error correction for model learning.  ...  Since medical applications require both competitive performance and understandability of results, the hypernetwork models are suitable for this kind of application.  ...  According to the procedure explained in Section 2.2, make a hypernetwork H and initialize the weights to W_c. 3.  ... 
doi:10.1145/1274000.1274073 dblp:conf/gecco/HaEKZ07 fatcat:lulc57dbbfc5ndyzlo7vvtioam
Showing results 1 — 15 out of 373 results