23,582 Hits in 4.6 sec

XNOR-Net: ImageNet Classification Using Binary Convolutional Neural Networks [chapter]

Mohammad Rastegari, Vicente Ordonez, Joseph Redmon, Ali Farhadi
2016 Lecture Notes in Computer Science  
The classification accuracy with a Binary-Weight-Network version of AlexNet is only 2.9% less than the full-precision AlexNet (in top-1 measure).  ...  Our binary networks are simple, accurate, efficient, and work on challenging visual tasks. We evaluate our approach on the ImageNet classification task.  ...  Shallow networks: Estimating a deep neural network with a shallower model reduces the size of a network.  ... 
doi:10.1007/978-3-319-46493-0_32 fatcat:n757ualbzzc5pof4hcv2e3tema
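
The entry above describes Binary-Weight-Networks, in which each real-valued filter is approximated by a binary filter plus a scaling factor. Below is a minimal NumPy sketch of that approximation, using the sign of the weights and their mean absolute value as the scale; the toy 3x3 filter and the error check are illustrative additions, not taken from the paper.

```python
import numpy as np

def binarize_weights(W):
    """Approximate a real-valued filter W by alpha * B, where B = sign(W)
    and alpha is the mean absolute value of W (Binary-Weight-Network style)."""
    alpha = np.mean(np.abs(W))          # per-filter scaling factor
    B = np.sign(W)
    B[B == 0] = 1                       # map zeros to +1 so B is strictly binary
    return alpha, B

# Toy usage: binarize one 3x3 convolution filter and check the approximation error.
rng = np.random.default_rng(0)
W = rng.normal(size=(3, 3))
alpha, B = binarize_weights(W)
print("reconstruction error:", np.linalg.norm(W - alpha * B))
```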

XNOR-Net: ImageNet Classification Using Binary Convolutional Neural Networks [article]

Mohammad Rastegari, Vicente Ordonez, Joseph Redmon, Ali Farhadi
2016 arXiv   pre-print
The classification accuracy with a Binary-Weight-Network version of AlexNet is only 2.9% less than the full-precision AlexNet (in top-1 measure).  ...  Our binary networks are simple, accurate, efficient, and work on challenging visual tasks. We evaluate our approach on the ImageNet classification task.  ...  XNOR-Net: ImageNet Classification Using Binary Convolutional Neural Networks  ... 
arXiv:1603.05279v4 fatcat:dsfl3pwi55fxzcicde65yh4s3y

Statistical Analysis Driven Optimized Deep Learning System for Intrusion Detection [article]

Cosimo Ieracitano, Ahsan Adeel, Mandar Gogate, Kia Dashtipour, Francesco Carlo Morabito, Hadi Larijani, Ali Raza, Amir Hussain
2018 arXiv   pre-print
In this work, we propose an innovative statistical analysis driven optimized deep learning system for intrusion detection.  ...  Furthermore, a huge amount of data produced by large networks has made the recognition task even more complicated and challenging.  ...  The authors of [22] proposed a Recurrent Neural Network (RNN) for anomaly detection using the same benchmark, claiming accuracies of 83.28% and 81.29% for binary and multiclass classification, respectively.  ... 
arXiv:1808.05633v1 fatcat:2driqk5pfjcrbjhhod7yukzspm

Accurate EEG-based Emotion Recognition on Combined Features Using Deep Convolutional Neural Networks

J. X. Chen, P. W. Zhang, Z. J. Mao, Y. F. Huang, D. M. Jiang, Y. N. Zhang
2019 IEEE Access  
were used to run binary emotion classification experiments on the DEAP dataset in the valence and arousal dimensions.  ...  INDEX TERMS EEG, emotion recognition, convolutional neural network, combined features, deep learning.  ...  The aim of our experiments was to evaluate the binary classification performance of four shallow baseline classifiers and three CNN models on the valence and arousal dimensions to find the optimal classification  ... 
doi:10.1109/access.2019.2908285 fatcat:y4bfmeo7gjagvj7ld54ycovk2u

A scoping review of transfer learning research on medical image analysis using ImageNet [article]

Mohammad Amin Morid, Alireza Borjali, Guilherme Del Fiol
2020 arXiv   pre-print
Objective: Employing transfer learning (TL) with convolutional neural networks (CNNs) pre-trained on the non-medical ImageNet dataset has shown promising results for medical image analysis in recent years.  ...  AlexNet for brain studies (42%) and DenseNet for lung studies (38%) were the most frequently used models.  ...  Research gaps: deep networks should be explored for other X-ray anatomical sites. Tooth, most prevalent: shallow networks with small kernel sizes.  ... 
arXiv:2004.13175v5 fatcat:wqghyqq4wfgpnpatvftty4vzx4
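
The recipe this review surveys is the standard one: load an ImageNet-pretrained backbone, freeze its features, and retrain only a new classification head on the medical task. The sketch below assumes PyTorch/torchvision (>= 0.13 for the `weights` string), a resnet18 backbone, a hypothetical normal-vs-abnormal binary task, and arbitrary hyperparameters; none of these choices come from the review itself.

```python
import torch
import torch.nn as nn
from torchvision import models

# Load an ImageNet-pretrained backbone (weights argument depends on torchvision version).
backbone = models.resnet18(weights="IMAGENET1K_V1")

# Freeze the pretrained feature extractor ("feature extraction" transfer learning).
for p in backbone.parameters():
    p.requires_grad = False

# Replace the 1000-class ImageNet head with a 2-class head for the medical task.
backbone.fc = nn.Linear(backbone.fc.in_features, 2)

optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One dummy training step on random tensors, just to show the loop shape.
x = torch.randn(4, 3, 224, 224)
y = torch.randint(0, 2, (4,))
loss = criterion(backbone(x), y)
loss.backward()
optimizer.step()
```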

Food vs Non-Food Classification

Francesco Ragusa, Valeria Tomaselli, Antonino Furnari, Sebastiano Battiato, Giovanni M. Farinella
2016 Proceedings of the 2nd International Workshop on Multimedia Assisted Dietary Management - MADiMa '16  
Existing approaches for food vs non-food classification have used both shallow and deep representations, in combination with multi-class or one-class classification approaches.  ...  In this paper, we consider the most recent classification approaches employed for food vs non-food classification, and compare them on a publicly available dataset.  ...  Fine-tuning the networks and using the output of the softmax layer directly for classification (third column for each model in Figure 2) brings some minor improvements over the binary SVM in the case  ... 
doi:10.1145/2986035.2986041 fatcat:ct3lnrbcfnevxdhtomh5svqbgu
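
One of the pipelines compared in this entry feeds deep representations to a binary SVM. A scikit-learn sketch of that pattern follows; random vectors stand in for real CNN features, and the feature dimension, class sizes, and linear kernel are assumptions for illustration only.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

# Stand-in for CNN features: in the paper these come from a pretrained network;
# random vectors keep the sketch self-contained.
rng = np.random.default_rng(0)
food = rng.normal(loc=1.0, size=(200, 128))
non_food = rng.normal(loc=-1.0, size=(200, 128))
X = np.vstack([food, non_food])
y = np.array([1] * 200 + [0] * 200)   # 1 = food, 0 = non-food

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = SVC(kernel="linear").fit(X_tr, y_tr)
print("food vs non-food accuracy:", clf.score(X_te, y_te))
```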

An Empirical Investigation into Deep and Shallow Rule Learning [article]

Florian Beck, Johannes Fürnkranz
2021 arXiv   pre-print
Our experiments on both artificial and real-world benchmark data indicate that deep rule networks outperform shallow networks.  ...  In this paper, we empirically compare deep and shallow rule learning with a uniform general algorithm, which relies on greedy mini-batch based optimization.  ...  Acknowledgments We are grateful to Eneldo Loza Mencía, Michael Rapp and Eyke Hüllermeier for inspiring discussions, fruitful pointers to related work, and for suggesting the evaluation with artificial  ... 
arXiv:2106.10254v1 fatcat:ntcstnpa7nevffppzzb7ot5lfi

An Empirical Investigation Into Deep and Shallow Rule Learning

Florian Beck, Johannes Fürnkranz
2021 Frontiers in Artificial Intelligence  
In this paper, we therefore take a different approach: we empirically compare deep and shallow rule sets that have been optimized with a uniform general mini-batch based optimization algorithm.  ...  are able to outperform shallow networks, even though the latter are also universal function approximators.  ...  The network is designed for binary classification problems and produces a single prediction output that is true if and only if an input sample is covered by any of the rules in the rule set.  ... 
doi:10.3389/frai.2021.689398 pmid:34746767 pmcid:PMC8570245 fatcat:suwlfammszdkrhenzhunjtb7cy
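
The snippet notes that the rule network outputs true if and only if any rule in the set covers the sample, i.e. a disjunction of conjunctive rules. A small, self-contained sketch of that semantics is given below; the dictionary-based rule encoding and the example rules are illustrative choices, not the paper's implementation.

```python
from typing import Dict, List

# A rule is a conjunction of (feature, required value) conditions; the rule set
# predicts positive iff ANY rule covers the sample (a "shallow" one-layer rule set).
Rule = Dict[str, int]

def covers(rule: Rule, sample: Dict[str, int]) -> bool:
    return all(sample.get(f) == v for f, v in rule.items())

def predict(rule_set: List[Rule], sample: Dict[str, int]) -> bool:
    return any(covers(rule, sample) for rule in rule_set)

# Hypothetical binary features and two hand-written rules.
rules = [{"a": 1, "b": 0}, {"c": 1}]
print(predict(rules, {"a": 1, "b": 0, "c": 0}))  # True: the first rule fires
print(predict(rules, {"a": 0, "b": 0, "c": 0}))  # False: no rule covers the sample
```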

Limiting Network Size within Finite Bounds for Optimization [article]

Linu Pinto, Dr. Sasi Gopalan
2019 arXiv   pre-print
For binary classification tasks, shallow networks are used because they have the universal approximation property, so it is enough to size the hidden-layer width of such networks.  ...  The paper provides a theoretical justification of the required attribute size and the corresponding hidden-layer dimension for a given sample set that give optimal binary classification results with  ...  The work is limited in its network architecture, binary classification problem, and activation functions.  ... 
arXiv:1903.02809v1 fatcat:buocisrqzza6lj4w73phgnmoke
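
The practical question behind this entry is how wide a single hidden layer must be for a given binary problem. A scikit-learn sketch of that sizing experiment follows; the synthetic two-moons data, the candidate widths, and the solver settings are assumptions for illustration, not the paper's construction.

```python
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# One hidden layer suffices in principle (universal approximation); the question
# studied is how wide it needs to be for a given sample set.
X, y = make_moons(n_samples=500, noise=0.2, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

for width in (2, 8, 32):                       # candidate hidden-layer widths
    clf = MLPClassifier(hidden_layer_sizes=(width,), max_iter=2000,
                        random_state=0).fit(X_tr, y_tr)
    print(f"width={width:3d}  test accuracy={clf.score(X_te, y_te):.3f}")
```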

Multi-Classification of Fetal Health Status Using Extreme Learning Machine

Ömer KASIM
2021 ICONTECH INTERNATIONAL JOURNAL  
This result shows that high classification accuracy was obtained by analysing the CTG data with both binary and multi-class classification.  ...  As a result of the experiments, a binary classification accuracy of 99.29% was obtained, with only one false positive.  ...  Although certain accuracy and other metrics have been achieved in studies with machine learning and shallow artificial neural networks, multi-class and binary classification is needed with highly stable  ... 
doi:10.46291/icontechvol5iss2pp62-70 fatcat:ppe2sapijvf35khgnbmalrt6ca
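
For readers unfamiliar with the Extreme Learning Machine used here: hidden-layer weights are random and fixed, and only the output weights are solved in closed form via the pseudoinverse. A minimal NumPy sketch follows, with synthetic data standing in for the CTG records; the hidden width and tanh activation are assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification

# Minimal Extreme Learning Machine on synthetic binary data (not CTG records).
rng = np.random.default_rng(0)
X, y = make_classification(n_samples=400, n_features=20, random_state=0)

n_hidden = 100
W = rng.normal(size=(X.shape[1], n_hidden))     # random, fixed input weights
b = rng.normal(size=n_hidden)                   # random, fixed biases
H = np.tanh(X @ W + b)                          # hidden activations
beta = np.linalg.pinv(H) @ y                    # closed-form output weights

pred = (H @ beta > 0.5).astype(int)             # binary decision
print("training accuracy:", (pred == y).mean())
```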

Theoretical issues in deep networks

Tomaso Poggio, Andrzej Banburski, Qianli Liao
2020 Proceedings of the National Academy of Sciences of the United States of America  
In approximation theory, both shallow and deep networks are known to approximate any continuous function at an exponential cost.  ...  This result is especially relevant because it has recently been shown that, for overparameterized models, selection of a minimum norm solution optimizes cross-validation leave-one-out stability and thereby  ...  Recent results by ref. 27 illuminate the apparent absence of "overfitting" in the special case of linear networks for binary classification.  ... 
doi:10.1073/pnas.1907369117 pmid:32518109 fatcat:ezfitsiwlze6tlezaapggq4fwi

Meta Learning for Few-Shot One-class Classification [article]

Gabriel Dahia, Maurício Pamplona Segundo
2020 arXiv   pre-print
We formulate the learning of meaningful features for one-class classification as a meta-learning problem in which the meta-training stage repeatedly simulates one-class classification, using the classification  ...  We show how the Support Vector Data Description method can be used with our method, and also propose a simpler variant based on Prototypical Networks that obtains comparable performance, indicating that  ...  optimizing one-class classification.  ... 
arXiv:2009.05353v2 fatcat:u2fea5hwxjggjagzauikbv3lfq
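
The simpler variant mentioned in this entry is Prototypical-Networks-style: average a few embeddings of the single known class into a prototype and score queries by distance to it. The sketch below uses random embeddings and arbitrary offsets as stand-ins; in the paper the embeddings come from a meta-learned feature extractor and the decision threshold is tuned.

```python
import numpy as np

# Prototypical-style one-class scoring: larger distance = more anomalous.
rng = np.random.default_rng(0)
support = rng.normal(loc=0.0, size=(5, 16))   # few-shot examples of the known class
prototype = support.mean(axis=0)

def anomaly_score(query: np.ndarray) -> float:
    return float(np.linalg.norm(query - prototype))

in_class = rng.normal(loc=0.0, size=16)       # drawn from the same distribution
outlier = rng.normal(loc=3.0, size=16)        # shifted, so it should score higher
print("in-class score:", anomaly_score(in_class))
print("outlier score: ", anomaly_score(outlier))
```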

An Ameliorated Multiattack Network Anomaly Detection in Distributed Big Data System-based Enhanced Stacking Multiple Binary Classifiers

AbdAllah A. AlHabshy, Bashar I. Hameed, Kamal A. ElDahshan
2022 IEEE Access  
a network.  ...  EMBAM ensembles multiple binary classifiers into a single model by stacking. The core binary model is a decision tree classifier with hyperparameters optimized using grid search.  ...  [Flattened comparison-table fragment: ref. [23]; network anomaly / intrusion detection; UNSW-NB15; distributed approach; SVM, RF, DT, NB; binary classification; fast; simple; multi-classification; hyperparameter optimization.]  ... 
doi:10.1109/access.2022.3174482 fatcat:xfbsqwyxarhzfnhwaq7hv3rubq
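
A scikit-learn sketch of the general pattern described here: several binary base classifiers stacked into one meta-model, with the decision tree's hyperparameters tuned by grid search. The synthetic data, the particular base learners, the logistic-regression meta-learner, and the hyperparameter grid are assumptions, not the EMBAM configuration.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import LinearSVC
from sklearn.tree import DecisionTreeClassifier

# Synthetic binary data stands in for an intrusion-detection benchmark.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Decision tree tuned by grid search, then stacked with other binary learners.
tree = GridSearchCV(DecisionTreeClassifier(random_state=0),
                    {"max_depth": [3, 5, 10], "min_samples_leaf": [1, 5]},
                    cv=3)
stack = StackingClassifier(
    estimators=[("dt", tree), ("nb", GaussianNB()), ("svm", LinearSVC(max_iter=5000))],
    final_estimator=LogisticRegression(),
)
stack.fit(X_tr, y_tr)
print("stacked accuracy:", stack.score(X_te, y_te))
```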

A three-branch 3D convolutional neural network for EEG-based different hand movement stages classification

Tianjun Liu, Deling Yang
2021 Scientific Reports  
of 'easy-hard' examples; when trained with the focal loss, the three-branch 3D-CNN achieves good performance (relatively more balanced binary classification accuracy) on the  ...  Experimental results indicated that there is also a problem of differing classification difficulty across classes in motor-stage classification tasks; we introduce the focal loss to address this problem  ...  Table 10 compares the classification results of our proposed network with other state-of-the-art networks; these networks cannot solve the problem of class imbalance in binary MI classification (the accuracy  ... 
doi:10.1038/s41598-021-89414-x pmid:34031436 fatcat:tl23pvt5qjfe7a5qjpnuzdssny
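
For reference, a common formulation of the binary focal loss in PyTorch, which down-weights easy, well-classified examples so training focuses on hard ones. The gamma and alpha defaults follow the commonly used focal-loss settings, and the toy logits are illustrative; this is a generic sketch, not the paper's exact loss for the three-branch network.

```python
import torch
import torch.nn.functional as F

def binary_focal_loss(logits, targets, gamma=2.0, alpha=0.25):
    """Focal loss for binary classification: scales the cross-entropy of each
    example by (1 - p_t)^gamma, shrinking the contribution of easy examples."""
    bce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    p = torch.sigmoid(logits)
    p_t = targets * p + (1 - targets) * (1 - p)          # prob of the true class
    alpha_t = targets * alpha + (1 - targets) * (1 - alpha)
    return (alpha_t * (1 - p_t) ** gamma * bce).mean()

# Toy check: two confident-correct predictions and one confident-wrong one.
logits = torch.tensor([4.0, -4.0, 4.0])
targets = torch.tensor([1.0, 0.0, 0.0])
print(binary_focal_loss(logits, targets))
```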

Is deep learning necessary for simple classification tasks? [article]

Joseph D. Romano, Trang T. Le, Weixuan Fu, Jason H. Moore
2020 arXiv   pre-print
Our observations suggest that AutoML outperforms simple DL classifiers when trained on similar datasets for binary classification, but integrating DL into AutoML improves classification performance even  ...  Here, we seek to address both of these issues, by (1) providing a head-to-head comparison of AutoML and DL in the context of binary classification on 6 well-characterized public datasets, and (2) evaluating  ...  However, for standard binary classification on simple datasets, smaller DL architectures can still substantially outperform shallow learners, both in terms of error and training time (Auer et al., 2002  ... 
arXiv:2006.06730v1 fatcat:xa5sa3ixqzc6nncpvssfm3piji
Showing results 1–15 of 23,582.