23,760 Hits in 5.8 sec

Effective Pruning Method for a Multiple Classifier System Based on Self-Generating Neural Networks [chapter]

Hirotaka Inoue, Hiroyuki Narihisa
2003 Lecture Notes in Computer Science  
Self-generating neural networks (SGNN) are well-suited base classifiers for MCS because of their simple setup and fast learning.  ...  Experiments have been conducted to compare the pruned MCS with an unpruned MCS, an MCS based on C4.5, and the k-nearest neighbor method.  ...  Self-generating neural networks (SGNN) [6] have a simple network design and high-speed learning.  ... 
doi:10.1007/3-540-44989-2_2 fatcat:jqlpjsnmijcnzkzlt7cag7kx5a
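
This entry (and the related Inoue & Narihisa entries below) describes pruning a multiple classifier system (MCS) built from fast, simple base classifiers. As a rough illustration of the general idea of off-line ensemble pruning only, here is a minimal sketch; it is not the authors' SGNN-specific algorithm. The scikit-learn decision-tree base classifiers, the greedy backward-elimination criterion, and the helper `ensemble_accuracy` are all my assumptions for illustration.

```python
# Sketch of off-line ensemble pruning in the spirit of MCS pruning.
# NOT the SGNN algorithm from the paper: decision stumps stand in for
# the base classifiers, and pruning is greedy backward elimination
# against a held-out validation set.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

def ensemble_accuracy(members, X, y):
    """Majority-vote accuracy of a list of fitted classifiers."""
    votes = np.stack([m.predict(X) for m in members])
    majority = np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)
    return (majority == y).mean()

X, y = make_classification(n_samples=600, random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, random_state=0)

# Train a pool of base classifiers on bootstrap samples (bagging).
rng = np.random.default_rng(0)
pool = []
for _ in range(15):
    idx = rng.integers(0, len(X_tr), len(X_tr))
    pool.append(DecisionTreeClassifier(max_depth=3).fit(X_tr[idx], y_tr[idx]))

# Off-line pruning: drop members while validation accuracy does not fall.
kept = list(pool)
while len(kept) > 1:
    base = ensemble_accuracy(kept, X_val, y_val)
    scores = [ensemble_accuracy(kept[:i] + kept[i + 1:], X_val, y_val)
              for i in range(len(kept))]
    best = int(np.argmax(scores))
    if scores[best] < base:
        break                     # removing any member hurts; stop pruning
    kept.pop(best)

print(f"pruned ensemble: {len(kept)}/{len(pool)} members kept")
```

The on-line half of the authors' method (pruning while the classifiers are being generated) is not captured here; this sketch only shows the post-hoc, validation-driven variant.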

Improving Performance of a Multiple Classifier System Using Self-generating Neural Networks [chapter]

Hirotaka Inoue, Hiroyuki Narihisa
2003 Lecture Notes in Computer Science  
The pruning method is constructed from an on-line pruning method and an off-line pruning method. We implement the pruned MCS with two sampling methods.  ...  Self-generating neural networks (SGNN) are well-suited base classifiers for MCS because of their simple setup and fast learning.  ...  Self-generating neural networks (SGNN) [6] have a simple network design and high-speed learning.  ... 
doi:10.1007/3-540-44938-8_26 fatcat:k6idnjaownhi5afb3x26r3mds4

Adaptive Neural Network Structure Optimization Algorithm Based on Dynamic Nodes

Miao Wang, Xu Yang, Yunchong Qian, Yunlin Lei, Jian Cai, Ziyi Huan, Xialv Lin, Hao Dong
2022 Current Issues in Molecular Biology  
Experimental results show that, compared with traditional neural network topology optimization algorithms, GA-DNS can generate neural networks with higher construction efficiency and lower structural complexity  ...  We propose a Dynamic Node-based neural network Structure optimization algorithm (DNS) to handle these issues. DNS consists of two steps: the generation step and the pruning step.  ...  In 2015, Han S. proposed a new method for compressing neural networks based on this pruning approach [17].  ... 
doi:10.3390/cimb44020056 pmid:35723341 fatcat:rff6k47l4rdr5b3v6jz7kpi3ba
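
The snippet cites Han et al.'s 2015 work on compressing networks by pruning. For orientation, here is a minimal PyTorch sketch of that cited baseline idea, magnitude pruning: zero out the smallest-magnitude weights per layer. This illustrates the reference only, not GA-DNS itself; the `magnitude_prune` helper and the 50% fraction are my own illustrative choices.

```python
# Minimal sketch of magnitude-based weight pruning (the Han et al.
# baseline cited in the snippet). Per layer, the smallest fraction of
# weights by absolute value is set to zero.
import torch
import torch.nn as nn

def magnitude_prune(model: nn.Module, fraction: float = 0.5) -> None:
    """Zero out the smallest `fraction` of weights in every Linear layer."""
    with torch.no_grad():
        for module in model.modules():
            if isinstance(module, nn.Linear):
                w = module.weight
                k = int(w.numel() * fraction)
                if k == 0:
                    continue
                threshold = w.abs().flatten().kthvalue(k).values
                mask = w.abs() > threshold
                w.mul_(mask)      # masked weights stay zero until retraining

model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
magnitude_prune(model, fraction=0.5)
total = sum(p.numel() for p in model.parameters())
zeros = sum((p == 0).sum().item() for p in model.parameters())
print(f"sparsity after pruning: {zeros / total:.1%}")
```

In the full Han et al. pipeline the pruned network is then retrained with the mask held fixed; that retraining step is omitted here for brevity.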

Self-organizing Neural Grove [chapter]

Hirotaka Inoue
2014 Lecture Notes in Computer Science  
The pruning method is constructed from an on-line pruning method and an off-line pruning method. We implement the SONG with two sampling methods.  ...  The self-generating neural tree (SGNT) is a well-suited base classifier for MCS because of its simple setup and fast learning.  ...  In general, the base classifiers of the MCS use traditional models such as neural networks (backpropagation networks and radial basis function networks) [4] and decision trees (CART and C4.5) [5].  ... 
doi:10.1007/978-3-319-12637-1_18 fatcat:hk5s6ifwxbd2lec3xgquxjifva

Compact Neural Networks via Stacking Designed Basic Units [article]

Weichao Lan, Yiu-ming Cheung, Juyong Jiang
2022 arXiv   pre-print
To this end, this paper presents a new method termed TissueNet, which directly constructs compact neural networks with fewer weight parameters by independently stacking designed basic units, without requiring  ...  Unstructured pruning is limited by the sparse and irregular weight patterns it produces.  ...  To this end, we propose TissueNet from a new perspective that directly constructs compact neural networks by independently stacking designed basic units.  ... 
arXiv:2205.01508v1 fatcat:uqhxhwqek5gmdgegg5wemwc4zi
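
To make the "stacking designed basic units" idea concrete, here is a hedged PyTorch sketch of building a compact network by repeating one cheap reusable block. The `BasicUnit` design below (grouped 3x3 convolution plus 1x1 mixing, with a residual connection) is a hypothetical stand-in of my own; TissueNet's actual unit designs differ.

```python
# Sketch of constructing a compact network by stacking a small reusable
# "basic unit". The unit itself is an illustrative stand-in, not the
# paper's design.
import torch
import torch.nn as nn

class BasicUnit(nn.Module):
    """A cheap building block: grouped 3x3 conv followed by 1x1 mixing."""
    def __init__(self, channels: int, groups: int = 4):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1, groups=groups, bias=False),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 1, bias=False),
        )

    def forward(self, x):
        return x + self.body(x)   # residual connection around the unit

def build_network(depth: int = 6, channels: int = 32) -> nn.Module:
    layers = [nn.Conv2d(3, channels, 3, padding=1)]
    layers += [BasicUnit(channels) for _ in range(depth)]   # stack the units
    layers += [nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(channels, 10)]
    return nn.Sequential(*layers)

net = build_network()
params = sum(p.numel() for p in net.parameters())
print(f"stacked network parameters: {params:,}")
print(net(torch.randn(1, 3, 32, 32)).shape)   # torch.Size([1, 10])
```

The point of the construction, as the abstract argues, is that compactness comes from the unit design itself, so no post-hoc pruning of a large trained network is required.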

Research and Application of Improved AGP Algorithm for Structural Optimization Based on Feedforward Neural Networks

Ruliang Wang, Huanlong Sun, Benbo Zha, Lei Wang
2015 Mathematical Problems in Engineering  
The simulation results show that, compared with the AGP algorithm, the improved method (IAGP) can quickly and accurately predict traffic capacity.  ...  The adaptive growing and pruning algorithm (AGP) has been improved, and the network pruning is based on the sigmoidal activation value of the node and all the weights of its outgoing connections.  ...  Acknowledgments This work was jointly supported by the Guangxi Key Laboratory Foundation of University and Guangxi Department of Education Foundation.  ... 
doi:10.1155/2015/481919 fatcat:y7assx767ff7jgdbmcwbkdsxfq
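
The snippet states the pruning criterion directly: a node is judged by its sigmoidal activation value and all the weights of its outgoing connections. Below is a simplified NumPy reading of that criterion on a hand-rolled two-layer MLP; the exact IAGP scoring formula and schedule are not given in the snippet, so the product form, the L1 norm, and the "drop 4 nodes" choice are my assumptions.

```python
# Sketch of the quoted pruning criterion: score each hidden node by its
# mean sigmoidal activation times the size of its outgoing weights,
# then delete the weakest nodes. A simplified reading, not IAGP itself.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))           # some input batch

# A toy "trained" MLP: 8 inputs -> 16 sigmoid hidden units -> 3 outputs.
W1, b1 = rng.normal(size=(8, 16)), np.zeros(16)
W2 = rng.normal(size=(16, 3))

hidden = 1.0 / (1.0 + np.exp(-(X @ W1 + b1)))   # sigmoidal activations

# Importance of node j: mean activation * L1 norm of outgoing weights.
importance = hidden.mean(axis=0) * np.abs(W2).sum(axis=1)

keep = np.argsort(importance)[4:]       # drop the 4 weakest nodes
W1p, b1p, W2p = W1[:, keep], b1[keep], W2[keep, :]
print(f"hidden nodes: {W1.shape[1]} -> {W1p.shape[1]}")
```

Note how pruning a hidden node means slicing both the incoming weight matrix (a column of W1) and the outgoing one (a row of W2), so the smaller network remains consistent.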

Fast Neural Architecture Construction using EnvelopeNets [article]

Purushotham Kamath, Abhishek Singh, Debo Dutta
2018 arXiv   pre-print
Fast Neural Architecture Construction (NAC) is a method to construct deep network architectures by pruning and expansion of a base network.  ...  NAC exploits this finding to construct convolutional neural nets (CNNs) with close to state-of-the-art accuracy in under 1 GPU day, faster than most current neural architecture search methods.  ...  The EnvelopeNet construction method lies between bottom-up incremental construction and top-down reduction methods, providing a reasonable compromise that allows generation of a network without  ... 
arXiv:1803.06744v3 fatcat:oo7dx5gh3fdvrdtzms4qlog43a
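
The snippet describes the construction loop at a high level: repeatedly prune weak parts of a base network and expand it elsewhere. The structural sketch below shows such a loop in PyTorch; the utility statistic (mean absolute activation per filter on a probe batch), the shrink/expand schedule, and the `filter_utility` helper are all stand-ins of mine, not EnvelopeNets' actual featuremap statistics.

```python
# Structural sketch of a construct-by-prune-and-expand loop. Utility is
# a stand-in statistic; EnvelopeNets' real criterion and schedule differ.
import torch
import torch.nn as nn

def filter_utility(conv: nn.Conv2d, probe: torch.Tensor) -> torch.Tensor:
    """Mean activation per output filter on a probe batch."""
    with torch.no_grad():
        fmap = torch.relu(conv(probe))          # (N, C, H, W)
        return fmap.mean(dim=(0, 2, 3))         # one score per filter

channels = [16, 16, 16]                 # widths of the base "envelope"
for step in range(3):                   # a few construction iterations
    convs, probe = [], torch.randn(8, 3, 32, 32)
    in_ch = 3
    for c in channels:                  # rebuild the network per iteration
        convs.append(nn.Conv2d(in_ch, c, 3, padding=1))
        in_ch = c
    # Prune: shrink the stage whose filters look least useful ...
    utils, x = [], probe
    for conv in convs:
        utils.append(filter_utility(conv, x).mean().item())
        x = torch.relu(conv(x))
    worst = min(range(len(channels)), key=lambda i: utils[i])
    channels[worst] = max(channels[worst] - 4, 4)
    # ... and expand: deepen the network by appending a new stage.
    channels.append(16)
    print(f"step {step}: widths = {channels}")
```

The real method trains between iterations and measures statistics on trained feature maps; this sketch only shows the shape of the prune-then-expand control flow.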

Constructive Autoassociative Neural Network for Facial Recognition

Bruno J. T. Fernandes, George D. C. Cavalcanti, Tsang I. Ren, Gennady Cymbalyuk
2014 PLoS ONE  
The technique proposed by Ma and Khorasani outperforms methods using fixed neural network structures in computational efficiency, generalization, and recognition performance.  ...  Ma and Khorasani [17] proposed a model with a constructive feedforward neural network for facial expression recognition.  ...  Acknowledgments This work was partially supported by the Brazilian agencies CNPq, CAPES, and Facepe. Author Contributions Conceived and designed the experiments: BJTF GDCC TIR.  ... 
doi:10.1371/journal.pone.0115967 pmid:25542018 pmcid:PMC4277427 fatcat:pq4fplb4dbcsznwr54u6u6tahm

Why Lottery Ticket Wins? A Theoretical Perspective of Sample Complexity on Pruned Neural Networks [article]

Shuai Zhang, Meng Wang, Sijia Liu, Pin-Yu Chen, Jinjun Xiong
2021 arXiv   pre-print
We show that the convex region near a desirable model with guaranteed generalization enlarges as the neural network model is pruned, indicating the structural importance of a winning ticket.  ...  With a fixed number of samples, training a pruned neural network enjoys a faster convergence rate to the desired model than training the original unpruned one, providing a formal justification of the improved  ...  Network (http://ibm.biz/AIHorizons).  ... 
arXiv:2110.05667v1 fatcat:gqydts6asjcm5c6jmhcdu37yym
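
This paper analyzes why winning tickets generalize; as background, here is a minimal sketch of the standard lottery-ticket procedure it builds on: train, magnitude-prune, rewind the surviving weights to their initialization, and retrain the sparse subnetwork. The toy data, the 20%-per-round pruning rate, and the training loop are arbitrary illustrative choices, not taken from the paper.

```python
# Sketch of the lottery-ticket procedure (Frankle & Carbin style):
# iterative magnitude pruning with rewinding to the initialization.
import copy
import torch
import torch.nn as nn

torch.manual_seed(0)
X, y = torch.randn(256, 10), torch.randint(0, 2, (256,))
model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))
init_state = copy.deepcopy(model.state_dict())   # the "ticket" initialization

def train(model, masks, steps=200):
    opt = torch.optim.SGD(model.parameters(), lr=0.1)
    for _ in range(steps):
        opt.zero_grad()
        nn.functional.cross_entropy(model(X), y).backward()
        opt.step()
        with torch.no_grad():                    # keep pruned weights at zero
            for name, p in model.named_parameters():
                if name in masks:
                    p.mul_(masks[name])

masks = {}
for r in range(3):                               # iterative magnitude pruning
    train(model, masks)
    with torch.no_grad():
        for name, p in model.named_parameters():
            if "weight" not in name:
                continue
            mask = masks.get(name, torch.ones_like(p))
            alive = p[mask.bool()].abs()
            masks[name] = mask * (p.abs() > alive.quantile(0.2))
        model.load_state_dict(init_state)        # rewind to initialization
        for name, p in model.named_parameters():
            if name in masks:
                p.mul_(masks[name])              # re-apply the sparsity mask
    kept = sum(m.sum().item() for m in masks.values())
    total = sum(m.numel() for m in masks.values())
    print(f"round {r}: {kept / total:.1%} of weights remain")
```

The paper's claim, in these terms, is that the masked model trained from `init_state` sits in an enlarged benign region of the loss landscape and needs fewer samples to converge than the dense original.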

DI-ANN Clustering Algorithm for Pruning in MLP Neural Network

P. Monika, D. Venkatesan
2015 Indian Journal of Science and Technology  
The performance results obtained from the proposed method show that it reduces the error rate and improves the efficiency and accuracy of the MLP network.  ...  Pruning in an MLP (multilayer perceptron) neural network removes unwanted neurons based on their corresponding weights; as a result, it improves the accuracy and speed of the  ...  As a result, it constructs a better topology and improves the accuracy and computational complexity. The main drawback of this method is that it takes more time to train the network. Thuan Q.  ... 
doi:10.17485/ijst/2015/v8i16/62540 fatcat:qmulay4h7jdatohyzupmomk7o4

Optimizing a Multiple Classifier System [chapter]

Hirotaka Inoue, Hiroyuki Narihisa
2002 Lecture Notes in Computer Science  
Self-generating neural networks (SGNN) are well-suited base classifiers for MCS because of their simple setup and fast learning.  ...  In this paper, we propose a novel optimization method for the structure of the SGNN in the MCS. We compare the optimized MCS with two sampling methods.  ...  Self-generating neural networks (SGNN) [7] have a simple network design and high-speed learning.  ... 
doi:10.1007/3-540-45683-x_32 fatcat:4kgvwjav65fndiw7cyyonjmcsi

Sparse Flows: Pruning Continuous-depth Models [article]

Lucas Liebenwein, Ramin Hasani, Alexander Amini, Daniela Rus
2021 arXiv   pre-print
Our empirical results suggest that pruning improves generalization for neural ODEs in generative modeling.  ...  Moreover, pruning finds efficient neural ODE representations with up to 98% fewer parameters than the original network, without loss of accuracy.  ...  Acknowledgments This research was sponsored by the United States Air Force Research Laboratory and the United States Air Force Artificial Intelligence Accelerator and was accomplished under Cooperative  ... 
arXiv:2106.12718v2 fatcat:zmb6kyshzrd7hehjsutdf3glce

Juvenile state hypothesis: What we can learn from lottery ticket hypothesis researches? [article]

Di Zhang
2021 arXiv   pre-print
A new winning-ticket sub-network with a deeper structure, better generalization ability, and better test performance can be obtained in this recursive manner.  ...  Therefore, we propose a strategy that combines the idea of neural network structure search with a pruning algorithm to alleviate this problem.  ...  Introduction After the proposition of the lottery ticket hypothesis (Frankle and Carbin 2019), more effective methods for neural network pruning appeared, along with new methods for neural network  ... 
arXiv:2109.03862v1 fatcat:wp7a7iph4bf2tcsbioxasvq3za

Text Categorization Using Neural Networks Initialized with Decision Trees

Nerijus Remeikis, Ignas Skučas, Vida Melninkaitė
2004 Informatica  
We present results comparing the accuracy of this approach with a multilayer neural network initialized by the traditional random method and with decision tree classifiers.  ...  As a result, the neural networks constructed from these decision trees are often larger and more complex than necessary.  ...  Decision Tree Mapping to Neural Network. Decision Tree Construction Algorithm: the algorithm constructs the decision tree with a divide-and-conquer strategy.  ... 
doi:10.15388/informatica.2004.078 fatcat:fup5sqrpkbhnbns3ae6g4eeozy
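
To illustrate the tree-to-network mapping this entry is about, here is a sketch of one classical variant: each internal split of a trained decision tree becomes a hidden unit testing "feature > threshold", giving a non-random initialization for the first MLP layer. This follows the well-known entropy-net idea; the paper's exact mapping may differ, and the `SHARPNESS` gain is a hypothetical parameter of mine.

```python
# Sketch: initialize an MLP's first layer from a trained decision tree.
# Each internal split node becomes one sigmoid hidden unit.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

t = tree.tree_
internal = np.where(t.children_left != -1)[0]     # internal (split) node ids

n_features, n_hidden = X.shape[1], len(internal)
W1 = np.zeros((n_features, n_hidden))             # first-layer weights
b1 = np.zeros(n_hidden)

SHARPNESS = 5.0   # hypothetical gain so the sigmoid approximates the step
for j, node in enumerate(internal):
    W1[t.feature[node], j] = SHARPNESS            # unit watches one feature
    b1[j] = -SHARPNESS * t.threshold[node]        # fires when x_f > threshold

hidden = 1.0 / (1.0 + np.exp(-(X @ W1 + b1)))     # tree-informed features
print(f"{n_hidden} hidden units initialized from tree splits")
print("hidden activations shape:", hidden.shape)
```

Starting from these tree-derived weights rather than a random initialization is what lets the network inherit the tree's decision boundaries before fine-tuning, which is the premise the abstract's accuracy comparison rests on.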

Rule Extraction using Artificial Neural Networks

S. M. Kamruzzaman, Ahmed Ryadh Hasan
2010 arXiv   pre-print
Extensive experimental results on several benchmark problems demonstrate the effectiveness of the proposed approach and its good generalization ability.  ...  Artificial neural networks have been successfully applied to a variety of business application problems involving classification and regression.  ...  The method does not require network pruning, and hence no network retraining is necessary. R. Setiono [14] presents MofN3, a new method for extracting M-of-N rules from neural networks.  ... 
arXiv:1009.4984v1 fatcat:3l7qgyix6zhn3mqyg66q44t4pe
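
The snippet mentions M-of-N rules ("if at least M of these N conditions hold, the unit fires"). As a toy illustration of what extracting such a rule means, the sketch below enumerates all binary inputs of one small sigmoid unit and searches for the M-of-N description that best matches its behavior. The cited MofN3 algorithm itself is more involved; the brute-force search and the toy weights here are my own simplification.

```python
# Sketch of fitting an M-of-N rule to a single sigmoid unit by
# exhaustive search over small binary input patterns.
from itertools import product
import numpy as np

rng = np.random.default_rng(1)
n = 6
w = rng.normal(size=n) + 1.0        # a toy "trained" unit's weights
b = -0.5 * w.sum()                  # bias near the decision middle

patterns = np.array(list(product([0, 1], repeat=n)))
unit_out = (patterns @ w + b) > 0   # the unit's true binary behavior

# Candidate antecedents: the N inputs with the largest |weights|.
order = np.argsort(-np.abs(w))
best = None
for N in range(1, n + 1):
    subset = order[:N]
    for M in range(1, N + 1):
        rule_out = patterns[:, subset].sum(axis=1) >= M   # "M of N fire"
        agreement = (rule_out == unit_out).mean()
        if best is None or agreement > best[0]:
            best = (agreement, M, N)

agreement, M, N = best
print(f"best rule: at least {M} of the top-{N} inputs -> unit fires")
print(f"fidelity over all {len(patterns)} input patterns: {agreement:.1%}")
```

The agreement score printed at the end is the rule's fidelity to the network, which is the standard quality measure in this line of rule-extraction work.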
Showing results 1 — 15 out of 23,760 results