3,376 Hits in 7.2 sec

Towards Ignoring Backgrounds and Improving Generalization: a Costless DNN Visual Attention Mechanism [article]

Pedro R.A.S. Bassi, Andrea Cavalli
2022 arXiv   pre-print
By focusing on lungs and ignoring sources of bias in the background, the ISNet reduced the problem.  ...  Thus, it improved generalization to external (out-of-distribution) test datasets in the biomedical classification problems, surpassing a standard classifier, a multi-task DNN (performing classification  ...  of Cyber Infrastructure and Computational Biology (OCICB) in Bethesda, MD.  ... 
arXiv:2202.00232v4 fatcat:tnsjygctz5hdhcscjpfknrawsm

What You See is What You Classify: Black Box Attributions [article]

Steven Stalder, Nathanaël Perraudin, Radhakrishna Achanta, Fernando Perez-Cruz, Michele Volpi
2022 arXiv   pre-print
An important step towards explaining deep image classifiers lies in the identification of image regions that contribute to individual class scores in the model's output.  ...  These attributions are in the form of masks that only show the classifier-relevant parts of an image, masking out the rest.  ...  Unlike existing approaches, we optimize to balance low multi-label binary cross entropy for the unmasked portion of the input image, and high entropy for areas of the image masked out by the Explainer.  ... 
arXiv:2205.11266v1 fatcat:hvrwqh5tirdnxc5ccr5kf3lkju
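The snippet above describes the Explainer's training objective: low multi-label binary cross-entropy on the unmasked portion of the image, and high (uninformative) prediction entropy on the masked-out remainder. Below is a minimal PyTorch sketch of that kind of objective; the names `classifier` and `explainer`, the sigmoid mask, and the simple subtraction of the two terms are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def explainer_objective(classifier, explainer, image, labels):
    """Hypothetical sketch: reward masks that keep only classifier-relevant pixels."""
    mask = torch.sigmoid(explainer(image))            # soft attribution mask in [0, 1]
    kept = classifier(image * mask)                   # logits for the unmasked portion
    dropped = classifier(image * (1.0 - mask))        # logits for the masked-out portion

    # Low multi-label binary cross-entropy on the kept (unmasked) part ...
    bce_kept = F.binary_cross_entropy_with_logits(kept, labels)

    # ... and high entropy (maximally uncertain predictions) on the masked-out part.
    p = torch.sigmoid(dropped)
    entropy_dropped = -(p * p.clamp_min(1e-8).log()
                        + (1 - p) * (1 - p).clamp_min(1e-8).log()).mean()

    return bce_kept - entropy_dropped                 # quantity to minimize
```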

Spatial Consistency Loss for Training Multi-Label Classifiers from Single-Label Annotations [article]

Thomas Verelst, Paul K. Rubenstein, Marcin Eichner, Tinne Tuytelaars, Maxim Berman
2022 arXiv   pre-print
We aim to train multi-label classifiers from single-label annotations only.  ...  We show that adding a consistency loss, ensuring that the predictions of the network are consistent over consecutive training epochs, is a simple yet effective method to train multi-label classifiers in  ...  We train a multi-label classifier from a dataset of single-label images. In this example, only the zebra is annotated.  ... 
arXiv:2203.06127v1 fatcat:slydvs3upbcq5jghn7jgmjoqbm
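The snippet explains the core idea of the consistency loss: predictions for the same image should not drift between consecutive training epochs. Below is a toy PyTorch sketch of an epoch-to-epoch consistency term under that reading; the MSE penalty, the caching scheme, and the weighting factor are illustrative assumptions, not the paper's spatial consistency loss itself.

```python
import torch
import torch.nn.functional as F

def epoch_consistency_loss(current_logits, previous_logits):
    """Toy sketch: penalize drift of multi-label predictions for the same image
    between consecutive epochs (previous predictions are treated as constants)."""
    current = torch.sigmoid(current_logits)
    previous = torch.sigmoid(previous_logits).detach()
    return F.mse_loss(current, previous)

# Hypothetical training-loop usage: combine the partial-label BCE with the
# consistency term against predictions cached from the previous epoch.
# loss = F.binary_cross_entropy_with_logits(logits, single_label_targets) \
#        + 0.1 * epoch_consistency_loss(logits, cached_logits[image_id])
```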

Learning to Classify DWDM Optical Channels from Tiny and Imbalanced Data

Paweł Cichosz, Stanisław Kozdrowski, Sławomir Sujecki
2021 Entropy  
Their predictive performance is compared with that of four one-class classification algorithms: one-class SVM, one-class naive Bayes classifier, isolation forest, and maximum entropy modeling.  ...  Applying machine learning algorithms for assessing the transmission quality in optical networks is associated with substantial challenges.  ...  isolation forest (IF): the implementation provided by the isotree R package [54]; maximum entropy modeling (ME): applying L2 regularization on leaf values with a regularization coefficient  ...
doi:10.3390/e23111504 pmid:34828202 pmcid:PMC8623617 fatcat:vltiwpyzzvhvdjyp4kdkzcjnvu
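The snippet lists the one-class baselines (one-class SVM, one-class naive Bayes, isolation forest, maximum entropy modeling) used for the tiny, imbalanced setting. The sketch below shows the general pattern of fitting one-class detectors on the majority class only, using scikit-learn stand-ins rather than the paper's isotree/R setup; the synthetic data and hyper-parameters are assumptions.

```python
import numpy as np
from sklearn.svm import OneClassSVM
from sklearn.ensemble import IsolationForest

# Fit one-class models on "good" samples only, then flag outliers at test time.
rng = np.random.default_rng(0)
X_good = rng.normal(0.0, 1.0, size=(200, 5))           # synthetic majority-class data
X_test = np.vstack([rng.normal(0.0, 1.0, size=(10, 5)),
                    rng.normal(4.0, 1.0, size=(10, 5))])

ocsvm = OneClassSVM(nu=0.05, kernel="rbf").fit(X_good)
iforest = IsolationForest(random_state=0).fit(X_good)

print(ocsvm.predict(X_test))    # +1 = inlier, -1 = outlier
print(iforest.predict(X_test))
```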

Development of LSTM&CNN Based Hybrid Deep Learning Model to Classify Motor Imagery Tasks [article]

Caglar Uyulan
2020 bioRxiv   pre-print
The performance criteria given in the BCI Competition IV dataset A are estimated. 10-fold cross-validation (CV) results show that the proposed method outperforms in classifying electroencephalogram (  ...  In this paper, a hybrid method, which fuses the one-dimensional convolutional neural network (1D CNN) with the long short-term memory (LSTM), was applied to classify four different MI tasks, i.e.  ...  In light of the above basic information, in this paper, the PCA method was preferred to remove artefacts. PCA also speeds up the fitting of the classifier through dimensionality reduction.  ...
doi:10.1101/2020.09.20.305300 fatcat:6dyemk26nvbu5fompqot2h7bfa
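The snippet notes that PCA was preferred for artefact removal and to speed up classifier fitting through dimensionality reduction. A minimal scikit-learn sketch of that preprocessing step is shown below; the feature dimensions and the 95% variance threshold are assumptions, not the paper's settings.

```python
import numpy as np
from sklearn.decomposition import PCA

# Project high-dimensional EEG feature vectors onto a few principal components
# before feeding them to the classifier.
rng = np.random.default_rng(0)
X = rng.normal(size=(288, 1000))                 # hypothetical trials x flattened EEG features

pca = PCA(n_components=0.95, svd_solver="full")  # keep components explaining 95% of variance
X_reduced = pca.fit_transform(X)
print(X.shape, "->", X_reduced.shape)
```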

Predicting Alzheimer's disease by classifying 3D-Brain MRI images using SVM and other well-defined classifiers

S Matoug, A Abdel-Dayem, K Passi, W Gross, M Alqarni
2012 Journal of Physics, Conference Series  
We assessed the performance of the classifiers by using results from the clinical tests.  ...  Finally, a classifier is trained to differentiate between normal and AD brain tissues.  ...  $w$ and $b$ indicate, respectively, the weight and the bias; $\phi(\cdot)$ are predefined functions of $x$ that map it into a higher-dimensional space; $C > 0$ is the regularization parameter.  ...
doi:10.1088/1742-6596/341/1/012019 fatcat:niplou5avndppjjswtyeka2qfq
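The equation fragment in the snippet matches the standard soft-margin SVM formulation, reproduced below for context. This is the textbook form with weight $w$, bias $b$, feature map $\phi(\cdot)$ and regularization parameter $C$, not a formula copied from the paper.

```latex
\min_{w,\,b,\,\xi}\; \frac{1}{2}\lVert w \rVert^{2} + C \sum_{i=1}^{n} \xi_i
\quad \text{s.t.} \quad
y_i\bigl(w^{\top}\phi(x_i) + b\bigr) \ge 1 - \xi_i,\qquad \xi_i \ge 0 .
```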

Classifying Component Function in Product Assemblies with Graph Neural Networks [article]

Vincenzo Ferrero, Kaveh Hassani, Daniele Grandi, Bryony DuPont
2021 arXiv   pre-print
These design repositories are a collection of expert knowledge and interpretations of function in product design bounded by function-flow and component taxonomies.  ...  We support automated function classification by learning from repository data to establish the ground truth of component function assignment.  ...  The model's failures could be attributed to subjectivity in the function definitions and to data imbalance caused by the OSDR's overall embedded bias toward defining tier 1 functions and solid flows.  ...
arXiv:2107.07042v1 fatcat:2dzhajwfwbfvrdvovigyr3oqra

Classifying Operational Events in Cable Yarding by a Machine Learning Application to GNSS-Collected Data: A Case Study on Gravity-Assisted Downhill Yarding

S.A. Borz, M. Cheta, M. Birda, A.R. Proto
2022 Series II - Forestry • Wood Industry • Agricultural Food Engineering  
This study evaluates the possibility of using GNSS data and machine learning techniques to classify important cable yarding events in the time domain.  ...  Data collected by a consumer-grade GNSS unit were processed to extract differential parameters, which were coupled with GNSS motorial and geometric features to feed a Multi-Layer Perceptron Neural Network  ...  When dealing with multi-modal signals or with the possibility of fusing data [5], supplementary questions may arise regarding the potential contribution of the information carried by the respective  ...
doi:10.31926/but.fwiafe.2022.15.64.1.2 fatcat:gixzv37ts5brtaaih76hzgec64
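The snippet describes the pipeline at a high level: GNSS-derived differential, motorial and geometric features feeding a Multi-Layer Perceptron. The scikit-learn sketch below illustrates that pattern only; the feature set, class count and network size are assumptions, not the study's configuration.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 6))          # hypothetical GNSS feature vectors (speed, heading change, ...)
y = rng.integers(0, 3, size=500)       # hypothetical operational event classes

model = make_pipeline(StandardScaler(),
                      MLPClassifier(hidden_layer_sizes=(32, 16),
                                    max_iter=500, random_state=0))
model.fit(X, y)
print(model.predict(X[:5]))
```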

User Clustering for MIMO NOMA via Classifier Chains and Gradient-Boosting Decision Trees

Chaouki Ben Issaid, Carles Anton-Haro, Xavier Mestre, Mohamed-Slim Alouini
2020 IEEE Access  
Even if the cost function in the optimization problem (9) is multi-modal in all cases, the sum-rate difference between maxima and minima is larger for increasing K (and so is the loss when performing a random  ...  Notice that the minimization of the cross-entropy loss, the score function adopted in Section IV-B for each individual classifier, along with the CC framework, results in the minimization of the Hamming loss (HL).  ...
doi:10.1109/access.2020.3038490 fatcat:k2kg35uaw5hhpo34dmbmx5p7py
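The snippet connects per-classifier cross-entropy minimization within the classifier-chain (CC) framework to minimizing the Hamming loss (HL). The sketch below shows the chain pattern with gradient-boosted trees using scikit-learn equivalents; the synthetic data and estimator settings are assumptions, not the paper's MIMO-NOMA setup.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.multioutput import ClassifierChain
from sklearn.metrics import hamming_loss

# Each link in the chain sees the input features plus the previous links' predictions;
# boosting minimizes a log (cross-entropy) loss, and Hamming loss scores the joint output.
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 8))                        # hypothetical channel features
Y = (rng.random(size=(300, 4)) < 0.3).astype(int)    # hypothetical multi-label targets

chain = ClassifierChain(GradientBoostingClassifier(random_state=0), random_state=0)
chain.fit(X, Y)
print("Hamming loss:", hamming_loss(Y, chain.predict(X)))
```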

Comparison of machine learning methods for classifying mediastinal lymph node metastasis of non-small cell lung cancer from 18F-FDG PET/CT images

Hongkai Wang, Zongwei Zhou, Yingci Li, Zhonghua Chen, Peiou Lu, Wenzhi Wang, Wanyu Liu, Lijuan Yu
2017 EJNMMI Research  
However, CNN does not make use of the important diagnostic features, which have been proven more discriminative than the texture features for classifying small-sized lymph nodes.  ...  The present study shows that the performance of CNN is not significantly different from the best classical methods and human doctors for classifying mediastinal lymph node metastasis of NSCLC from PET/  ...  Our CNN also incorporated L2 normalization, the ReLU activation function, dropout regularization, the categorical cross-entropy loss function, and the Adadelta learning method.  ...
doi:10.1186/s13550-017-0260-9 pmid:28130689 pmcid:PMC5272853 fatcat:3uug5wfc6rgjlcbpuqfenn2zqu
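The snippet lists the CNN's ingredients (an L2 penalty, ReLU activations, dropout, categorical cross-entropy, Adadelta). The Keras sketch below assembles those pieces as an illustration only; the input size, layer widths and the use of L2 weight regularization as a stand-in for the snippet's "L2 normalization" are assumptions rather than the paper's architecture.

```python
import tensorflow as tf
from tensorflow.keras import layers, models, regularizers

model = models.Sequential([
    layers.Input(shape=(32, 32, 1)),                         # hypothetical PET/CT patch size
    layers.Conv2D(16, 3, activation="relu",
                  kernel_regularizer=regularizers.l2(1e-4)),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu",
                  kernel_regularizer=regularizers.l2(1e-4)),
    layers.Flatten(),
    layers.Dropout(0.5),
    layers.Dense(2, activation="softmax"),                   # metastasis vs. benign
])
model.compile(optimizer=tf.keras.optimizers.Adadelta(),
              loss="categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```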

On Convergence of Nearest Neighbor Classifiers over Feature Transformations [article]

Luka Rimanic, Cedric Renggli, Bo Li, Ce Zhang
2020 arXiv   pre-print
In this paper, we take a first step towards bridging this gap. We provide a novel analysis of the convergence rates of a kNN classifier over transformed features.  ...  However, it is well known that it suffers from the curse of dimensionality, which is why in practice one often applies a kNN classifier on top of a (pre-trained) feature transformation.  ...  A kNN classifier assigns a label to an unseen point based on its k closest neighbors from the training set using the maximal vote [1].  ...
arXiv:2010.07765v1 fatcat:iz62nehekzfpnetoruwdlme3q4
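The snippet describes the setting analysed in the paper: a kNN classifier applied on top of a (pre-trained) feature transformation, voting among the k closest training points. The sketch below uses PCA as a stand-in for such a transformation; the dimensionality, k, and synthetic labels are assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 50))
y = (X[:, 0] > 0).astype(int)            # synthetic labels for illustration

# Transform features first, then let kNN vote among the 5 nearest transformed neighbors.
knn = make_pipeline(PCA(n_components=10), KNeighborsClassifier(n_neighbors=5))
knn.fit(X, y)
print(knn.predict(X[:5]))
```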

Spatial Global Sensitivity Analysis of High Resolution classified topographic data use in 2D urban flood modelling

Morgan Abily, Nathalie Bertrand, Olivier Delestre, Philippe Gourbesville, Claire-Marie Duluc
2016 Environmental Modelling & Software  
To introduce uncertainty, a Probability Density Function and a discrete spatial approach have been applied to generate 2,000 DEMs.  ...  This paper presents a spatial Global Sensitivity Analysis (GSA) approach in a 2D shallow-water-equations-based High Resolution (HR) flood model.  ...  Acknowledgements: The photogrammetric and photo-interpreted dataset used for this study has been kindly provided by Nice Côte d'Azur Metropolis for research purposes.  ...
doi:10.1016/j.envsoft.2015.12.002 fatcat:xlojk7yiqvaxjj77s6syglch3e
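The snippet summarizes the uncertainty-injection step: vertical errors drawn from a probability density function, applied with a discrete spatial approach, yield 2,000 perturbed DEMs for the sensitivity analysis. The NumPy sketch below illustrates that Monte-Carlo pattern only; the zone layout, error standard deviations and grid size are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
base_dem = rng.uniform(0.0, 10.0, size=(100, 100))    # hypothetical elevations (m)
zones = rng.integers(0, 3, size=base_dem.shape)       # hypothetical land-use classes
sigma_per_zone = {0: 0.05, 1: 0.15, 2: 0.30}          # assumed vertical-error std dev per class (m)

# Build an ensemble of perturbed DEMs by adding zone-dependent random errors.
ensemble = []
for _ in range(2000):
    noise = np.zeros_like(base_dem)
    for zone, sigma in sigma_per_zone.items():
        mask = zones == zone
        noise[mask] = rng.normal(0.0, sigma, size=mask.sum())
    ensemble.append(base_dem + noise)
print(len(ensemble), "DEM realizations")
```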

Information-theoretic semantic multimedia indexing

João Magalhães, Stefan Rüger
2007 Proceedings of the 6th ACM international conference on Image and video retrieval - CIVR '07  
In our approach, for each possible query keyword we estimate a maximum entropy model based exclusively on preprocessed continuous features.  ...  The unique continuous feature-space of text and visual data is constructed by using a minimum description length criterion to find the optimal feature-space representation (optimal from an information  ...  The simplest approach to multi-modal analysis is to design a classifier per modality and combine the output of these classifiers. Westerveld et al.  ...
doi:10.1145/1282280.1282368 dblp:conf/civr/MagalhaesR07 fatcat:buqjdgf4dfdf5l76i7xyivjwou
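The snippet contrasts the paper's joint feature space with the simpler baseline of training one classifier per modality and combining their outputs. The sketch below shows that baseline as a weighted average of per-modality keyword probabilities; the weights and example scores are illustrative assumptions.

```python
import numpy as np

def late_fusion(prob_text, prob_visual, weight_text=0.5):
    """Toy late-fusion baseline: combine per-modality keyword probabilities."""
    return weight_text * prob_text + (1.0 - weight_text) * prob_visual

# e.g. keyword "beach": the text model says 0.2, the visual model says 0.8
print(late_fusion(np.array([0.2]), np.array([0.8])))   # -> [0.5]
```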

Emergent Communication in a Multi-Modal, Multi-Step Referential Game [article]

Katrina Evtimova, Andrew Drozdov, Douwe Kiela, Kyunghyun Cho
2018 arXiv   pre-print
Inspired by previous work on emergent communication in referential games, we propose a novel multi-modal, multi-step referential game, where the sender and receiver have access to distinct modalities of  ...  The multi-modal multi-step setting allows agents to develop an internal communication significantly closer to natural language, in that they share a single set of messages, and that the length of the conversation  ...  This work is done by KE as a part of the course DS-GA 1010-001 Independent Study in Data Science at the Center for Data Science, New York University.  ... 
arXiv:1705.10369v4 fatcat:6xljjjxnqzgw7i5nr6qx5fw5ue

Integration of Entropy Maximization and Quantum Behaved Particle Swarm Algorithm for Unsupervised Change Detection of MR Skull Bone Lesions

Ankita Mitra, Arunava De, Anup Kumar Bhattacharjee
2015 International Journal of Computer Applications  
Entropy is the measure of randomness in a system, whereas the entropy maximization procedure leads to the most probable state of a system's behaviour. Entropy maximization using an optimization algorithm is  ...  A Quantum Particle Swarm algorithm together with entropy maximization helps us to obtain the most probable threshold value, which correctly segments the lesions from the background in MR images of the brain.  ...  RELATED WORK: The histogram of multi-modal images has multiple peaks, as opposed to two peaks for a bi-modal image. A. De et al. [3] proposed an entropy-maximization-based MR segmentation method.  ...
doi:10.5120/20617-3321 fatcat:g27hiiax3ncs7ndtabcvao3j5a
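The snippet explains that entropy maximization yields the most probable threshold separating lesions from background, with a quantum-behaved particle swarm used to search for it. The sketch below computes a Kapur-style maximum-entropy threshold by exhaustive scan instead of QPSO, purely to illustrate the objective; it is not the paper's algorithm.

```python
import numpy as np

def max_entropy_threshold(image):
    """Pick the gray level maximizing the summed entropies of background and foreground."""
    hist, _ = np.histogram(image, bins=256, range=(0, 256))
    p = hist / hist.sum()
    best_t, best_h = 0, -np.inf
    for t in range(1, 255):
        p_bg, p_fg = p[:t], p[t:]
        w_bg, w_fg = p_bg.sum(), p_fg.sum()
        if w_bg == 0 or w_fg == 0:
            continue
        q_bg = p_bg[p_bg > 0] / w_bg
        q_fg = p_fg[p_fg > 0] / w_fg
        h = -(q_bg * np.log(q_bg)).sum() - (q_fg * np.log(q_fg)).sum()
        if h > best_h:
            best_t, best_h = t, h
    return best_t

# Hypothetical usage on an 8-bit MR slice:
# mask = mr_slice > max_entropy_threshold(mr_slice)
```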
Showing results 1–15 out of 3,376 results