
Continual Learning with Node-Importance based Adaptive Group Sparse Regularization [article]

Sangwon Jung, Hongjoon Ahn, Sungmin Cha, Taesup Moon
2021 arXiv   pre-print
We propose a novel regularization-based continual learning method, dubbed Adaptive Group Sparsity based Continual Learning (AGS-CL), using two group-sparsity-based penalties.  ...  Our method selectively applies the two penalties to each node based on its importance, which is adaptively updated after learning each new task.  ...  Inspired by this connection, we propose a new regularization-based continual learning method, dubbed Adaptive Group Sparsity based Continual Learning (AGS-CL), that can adaptively control the plasticity  ... 
arXiv:2003.13726v4 fatcat:6wws447lhje4xlttfzrobuh6r4
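
As a rough illustration of the two-penalty idea described in this abstract (a sketch, not the authors' implementation: the mu/rho weights and the binary importance mask are assumptions), a node-wise group-sparsity term plus a drift term might look like:

    import torch

    def ags_style_penalty(weight, prev_weight, importance, mu=0.1, rho=0.3):
        # Treat each output node's incoming weights (one row) as a group.
        # The group-lasso term pushes currently unimportant nodes to zero;
        # the drift term anchors important nodes to their previous values.
        unimportant = (importance == 0).float()
        group_lasso = (unimportant * weight.norm(p=2, dim=1)).sum()
        drift = ((1.0 - unimportant)
                 * (weight - prev_weight).norm(p=2, dim=1)).sum()
        return mu * group_lasso + rho * drift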

SpaceNet: Make Free Space For Continual Learning [article]

Ghada Sokar, Decebal Constantin Mocanu, Mykola Pechenizkiy
2021 arXiv   pre-print
The continual learning (CL) paradigm aims to enable neural networks to learn tasks continually in a sequential fashion.  ...  The adaptive training of the sparse connections results in sparse representations that reduce the interference between the tasks.  ...  We finally show that the representations learned by SpaceNet are highly sparse and that the adaptive sparse training redistributes the sparse connections toward the neurons that are important for each task.  ... 
arXiv:2007.07617v2 fatcat:c5olxdxoszfinne7fintanq3ty
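
A hypothetical sketch of the kind of adaptive sparse training loop this abstract alludes to (the drop fraction and random regrowth rule are assumptions; SpaceNet's actual redistribution rule may differ):

    import torch

    def drop_and_grow(weight, mask, drop_frac=0.1):
        # Drop the smallest-magnitude active connections, then grow the
        # same number of new connections at random inactive positions,
        # keeping the overall sparsity level fixed.
        k = max(1, int(drop_frac * int(mask.sum())))
        vals = weight[mask.bool()].abs()
        thresh = vals.kthvalue(k).values
        new_mask = mask.clone()
        new_mask[mask.bool() & (weight.abs() <= thresh)] = 0
        inactive = (new_mask == 0).nonzero(as_tuple=False)
        grow = inactive[torch.randperm(inactive.size(0))[:k]]
        new_mask[grow[:, 0], grow[:, 1]] = 1
        return new_mask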

Sparse methods for biomedical data

Jieping Ye, Jun Liu
2012 SIGKDD Explorations  
Therefore, finding sparse representations is fundamentally important for scientific discovery.  ...  Sparse methods based on the ℓ1 norm have attracted a great amount of research effort in the past decade due to the norm's sparsity-inducing property, convenient convexity, and strong theoretical guarantees.  ...  One important issue that has not been well addressed is how to adapt sparse methods to deal with missing data [70; 96].  ... 
doi:10.1145/2408736.2408739 pmid:24076585 pmcid:PMC3783968 fatcat:z4axej6w6vfmbmgg72spx2neya
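
The sparsity-inducing mechanism of the ℓ1 norm mentioned here is its proximal operator, soft-thresholding; a minimal ISTA sketch (step size and iteration count are illustrative):

    import numpy as np

    def soft_threshold(x, t):
        # Proximal operator of the l1 norm: shrink each entry toward zero
        # and zero out entries with |x_i| <= t; this is the mechanism
        # behind the "sparsity-inducing property" of the l1 norm.
        return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

    def ista(A, b, lam, step, n_iter=200):
        # Iterative shrinkage-thresholding for
        # min_x 0.5 * ||A x - b||^2 + lam * ||x||_1 (illustrative only;
        # step should be <= 1 / ||A||_2^2 for convergence).
        x = np.zeros(A.shape[1])
        for _ in range(n_iter):
            x = soft_threshold(x - step * (A.T @ (A @ x - b)), step * lam)
        return x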

Learning Local Structured Correlation Filters for Visual Tracking via Spatial Joint Regularization

Chenggang Guo, Dongyi Chen, Zhiqi Huang
2019 IEEE Access  
In this paper, we introduce tree-structured group sparsity regularization into the DCF-based formulation. The correlation filter to be learned is divided into hierarchical local groups.  ...  Moreover, a local response consistency term is incorporated together with the structured sparsity so that each local filter group contributes equally to the final response.  ...  Furthermore, we introduce a tree-structured group sparse constraint to perform adaptive hierarchical feature group selection for correlation template learning.  ... 
doi:10.1109/access.2019.2906508 fatcat:3dqg45gfsbdjvpmgj4his3yrda
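
In its generic textbook form (not necessarily the exact term used in this paper), such a tree-structured penalty sums weighted ℓ2 norms over the nodes g of a group tree T, where groups are nested or disjoint:

    \Omega(\mathbf{h}) \;=\; \sum_{g \in \mathcal{T}} \omega_g \,\lVert \mathbf{h}_g \rVert_2,
    \qquad g \subseteq g' \ \text{or}\ g' \subseteq g \ \text{or}\ g \cap g' = \emptyset
    \quad \forall\, g, g' \in \mathcal{T},

so that zeroing a parent group zeroes all of its descendant groups as well.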

Flexible co-data learning for high-dimensional prediction

Mirrelijn M. van Nee, Lodewyk F.A. Wessels, Mark A. van de Wiel
2021 Statistics in Medicine  
Available group-adaptive methods primarily target settings with few groups, and therefore likely overfit for non-informative, correlated, or many groups, and do not account for known structure on the groups  ...  These are then used to estimate adaptive multi-group ridge penalties for generalized linear and Cox models.  ...  Second, GRridge only uses the leaf groups of Figure 4 to represent the continuous co-data, whereas ecpc is able to represent the lfdrs with an adaptive sparse hierarchical model.  ... 
doi:10.1002/sim.9162 pmid:34438466 pmcid:PMC9292202 fatcat:l63ra6a2unb7fd33ryq4cwgahu
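
A minimal numpy sketch of multi-group ridge with one penalty per co-data group (the closed form below is standard ridge algebra; the function and variable names are assumptions, not the ecpc API):

    import numpy as np

    def multigroup_ridge(X, y, group_of_feature, lambdas):
        # Solve min_b ||y - X b||^2 + sum_j lambda_{g(j)} * b_j^2 via the
        # closed form b = (X'X + D)^{-1} X'y, where D is diagonal and
        # carries each feature's group-specific penalty.
        d = np.array([lambdas[g] for g in group_of_feature], dtype=float)
        return np.linalg.solve(X.T @ X + np.diag(d), X.T @ y)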

SHELS: Exclusive Feature Sets for Novelty Detection and Continual Learning Without Class Boundaries [article]

Meghna Gummadi, David Kent, Jorge A. Mendez, Eric Eaton
2022 arXiv   pre-print
The resulting approach uses OOD detection to perform class-incremental continual learning without known class boundaries.  ...  Inspired by natural learners, we introduce a Sparse High-level-Exclusive, Low-level-Shared feature representation (SHELS) that simultaneously encourages learning exclusive sets of high-level features and  ...  Across-datasets experiments with CIFAR10 as the ID classes use VGG16 pretrained on ImageNet, replacing the final fully-connected layer with a cosine normalization layer.  ... 
arXiv:2206.13720v1 fatcat:xnz25vwyibc2ndcoty3pnu6cam
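
The cosine normalization layer mentioned in the snippet can be sketched as follows (the scale parameter s is an assumption; implementations vary):

    import torch
    import torch.nn.functional as F

    class CosineLinear(torch.nn.Module):
        # Output layer whose logits are cosine similarities between
        # normalized features and normalized class weight vectors,
        # multiplied by a fixed scale s.
        def __init__(self, in_features, n_classes, s=10.0):
            super().__init__()
            self.weight = torch.nn.Parameter(torch.randn(n_classes, in_features))
            self.s = s

        def forward(self, x):
            return self.s * F.linear(F.normalize(x, dim=1),
                                     F.normalize(self.weight, dim=1))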

RelEx: A Model-Agnostic Relational Model Explainer [article]

Yue Zhang, David Defazio, Arti Ramesh
2020 arXiv   pre-print
This is essential, as complex deep learning models with millions of parameters produce state-of-the-art results, but it can be nearly impossible to explain their predictions.  ...  In this work, we develop RelEx, a model-agnostic relational explainer that explains black-box relational models with access only to the outputs of the black-box.  ...  The ℓ2,1 norm is a group-sparsity measure; here we treat each row of the adjacency matrix as one group, and thus pursue sparsity on both edges and nodes.  ... 
arXiv:2006.00305v1 fatcat:roqfurwfrng4tdq2akzovltawi
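
For concreteness, the row-wise ℓ2,1 measure described in the snippet is simply:

    import numpy as np

    def l21_norm(adj):
        # Sum of the l2 norms of the rows of the adjacency matrix: each
        # row (node) is one group, so minimizing this encourages whole
        # rows, and hence nodes as well as edges, to become zero.
        return np.sqrt((adj ** 2).sum(axis=1)).sum()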

You Only Search Once: Single Shot Neural Architecture Search via Direct Sparse Optimization [article]

Xinbang Zhang, Zehao Huang, Naiyan Wang
2018 arXiv   pre-print
Instead of applying an evolutionary algorithm or reinforcement learning as in previous works, this paper proposes a Direct Sparse Optimization NAS (DSO-NAS) method.  ...  Next, we impose sparse regularization to prune useless connections in the architecture. Lastly, we derive an efficient and theoretically sound optimization method to solve it.  ...  With the adaptive FLOPs technique, the weight of the sparse regularization for each block is changed adaptively according to Eqn. 9.  ... 
arXiv:1811.01567v1 fatcat:2uwklco3izaohedh2j4lfxz5hq
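
A hypothetical sketch of pruning via sparse regularization in the spirit of this entry (the threshold and names are assumptions; DSO-NAS itself derives its update from a proximal method rather than simple thresholding):

    import torch

    def l1_scale_penalty(scales, lam=1e-4):
        # l1 penalty on per-connection scaling factors; driving a scale
        # to zero removes that connection from the architecture.
        return lam * scales.abs().sum()

    def prune_connections(scales, threshold=1e-3):
        # Keep only connections whose learned scale stays above a small
        # threshold after sparse-regularized training.
        return scales.abs() > threshold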

GROWN: GRow Only When Necessary for Continual Learning [article]

Li Yang, Sen Lin, Junshan Zhang, Deliang Fan
2021 arXiv   pre-print
Building on this learnable sparse-growth method, we then propose GROWN, a novel end-to-end continual learning framework that dynamically grows the model only when necessary.  ...  To address this issue, continual learning has been developed to learn new tasks sequentially and transfer knowledge from old tasks to new ones without forgetting.  ...  Related work: continual learning; regularization-based methods.  ... 
arXiv:2110.00908v1 fatcat:m42vs5zkjjc3fd25dc76uwofzq

A Functional Model for Structure Learning and Parameter Estimation in Continuous Time Bayesian Network: An Application in Identifying Patterns of Multiple Chronic Conditions [article]

Syed Hasib Akhter Faruqui, Adel Alaeddini, Jing Wang, Carlos A. Jaramillo
2021 arXiv   pre-print
We also propose an adaptive regularization method with an intuitive early-stopping feature based on density-based clustering for efficient learning of the structure and parameters of the proposed network  ...  Here, we propose a continuous time Bayesian network with conditional dependencies, represented as Poisson regressions, to model the impact of exogenous variables on the conditional dependencies of the network  ...  The model also utilizes an adaptive group regularization method to learn a sparse representation of the system.  ... 
arXiv:2007.15847v2 fatcat:wx6s5ggdrbc55bwupnz4cyf2kq

PASNet: pathway-associated sparse deep neural network for prognosis prediction from high-throughput data

Jie Hao, Youngsoon Kim, Tae-Kyung Kim, Mingon Kang
2018 BMC Bioinformatics  
The predictive performance of PASNet was evaluated with multiple cross-validation experiments.  ...  Moreover, it is challenging to develop robust computational solutions with high-dimensional, low-sample-size data.  ...  A group LASSO-based approach associated genes with pathways and characterized them based on biological pathways [10].  ... 
doi:10.1186/s12859-018-2500-z fatcat:dlf2fcmijfgqrnca3hwabbal5i
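
A minimal sketch of a pathway-associated sparse layer in the spirit of this entry (the mask shape and activation are assumptions): a fixed binary gene-to-pathway membership mask zeroes out all connections that lack biological support.

    import numpy as np

    def pathway_layer(x, W, membership_mask, bias):
        # x: (batch, n_genes); W, membership_mask: (n_genes, n_pathways).
        # The binary mask keeps only gene-to-pathway connections that are
        # supported by pathway membership, making the layer sparse by design.
        return np.tanh(x @ (W * membership_mask) + bias)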

Sparse Representations and Efficient Sensing of Data (Dagstuhl Seminar 11051)

Stephan Dahlke, Michael Elad, Yonina Eldar, Gitta Kutyniok, Gerd Teschke, Marc Herbstritt
2011 Dagstuhl Reports  
First, we wanted to elaborate the state of the art in the field of sparse data representation and corresponding efficient data sensing methods.  ...  This report documents the program and the outcomes of Dagstuhl Seminar 11051 "Sparse Representations and Efficient Sensing of Data". The scope of the seminar was twofold.  ...  Learning problems, be it dictionary learning for compressed sensing or smoothing kernels (point spread functions) for sampling, also play an important role in making sparse methods practical.  ... 
doi:10.4230/dagrep.1.1.108 dblp:journals/dagstuhl-reports/DahlkeEEKT11 fatcat:mbqcjw2h7zg4tmdi4owq2gx7ou

A Functional Model for Structure Learning and Parameter Estimation in Continuous Time Bayesian Network: An Application in Identifying Patterns of Multiple Chronic Conditions

Syed Hasib Akhter Faruqui, Adel Alaeddini, Jing Wang, Carlos A. Jaramillo, Mary Jo Pugh
2021 IEEE Access  
We also propose an adaptive group regularization method with an intuitive early-stopping feature based on Gaussian mixture model clustering for efficient learning of the structure and parameters of the  ...  Here, we propose a continuous time Bayesian network with conditional dependencies represented as regularized Poisson regressions to model the impact of exogenous variables on the conditional intensities  ...  The model also utilizes an adaptive group regularization method to learn a sparse representation of the system.  ... 
doi:10.1109/access.2021.3122912 pmid:35371895 pmcid:PMC8975131 fatcat:v4usqdpgyberjk2lsvuhtritya

Hierarchical Region-Network Sparsity for High-Dimensional Inference in Brain Imaging [chapter]

Danilo Bzdok, Michael Eickenberg, Gaël Varoquaux, Bertrand Thirion
2017 Lecture Notes in Computer Science  
Hierarchical region-network priors are shown to better classify and recover 18 psychological tasks than other sparse estimators.  ...  Varying the relative importance of region and network structure within the hierarchical tree penalty captured complementary aspects of the neural activity patterns.  ...  (Sparse) group sparsity imposes a structured ℓ1/ℓ2 block norm (with an additional ℓ1 term), with a known region atlas of voxel groups, onto the statistical estimation process.  ... 
doi:10.1007/978-3-319-59050-9_26 pmid:29743804 pmcid:PMC5937695 fatcat:dta6hxhsufbwbehgq5mtf5ehri
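
The structured ℓ1/ℓ2 block norm with an additional ℓ1 term referenced in this snippet is, in its generic sparse-group-lasso form (the weights λ1, λ2 and group sizes p_g follow the usual textbook convention, not necessarily the paper's exact notation):

    \Omega(w) \;=\; \lambda_1 \lVert w \rVert_1
      \;+\; \lambda_2 \sum_{g \in \mathcal{G}} \sqrt{p_g}\; \lVert w_g \rVert_2,

where each group g collects the voxels of one atlas region, so the ℓ2 part selects whole regions while the ℓ1 part sparsifies within them.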

Genomic prediction using machine learning: A comparison of the performance of regularized regression, ensemble, instance-based and deep learning methods on synthetic and empirical data [article]

Vanda Milheiro Lourenço, Joseph Ochieng Ogutu, Rui Pimenta Rodrigues, Hans-Peter Piepho
2022 bioRxiv   pre-print
Thus, despite their greater complexity and computational burden, neither the adaptive nor the group-regularized methods clearly improved upon the results of their simple regularized counterparts.  ...  However, such studies are crucial for (i) identifying groups of methods with superior genomic predictive performance and (ii) assessing the merits and demerits of such groups of methods relative to each  ...  The ensemble, instance-based and deep learning methods did not improve upon the results of the regularized or the group-regularized methods (Tables 10 & 11).  ... 
doi:10.1101/2022.06.09.495423 fatcat:esn7pawqy5b35hbsxu2eeqhd3m