18 Hits in 2.8 sec

Improving Deep Network Robustness to Unknown Inputs with Objectosphere

Akshay Raj Dhamija, Manuel Günther, Terrance E. Boult
2019 Computer Vision and Pattern Recognition  
The mis-classification of these unknown inputs as one of the known classes highlights the need for more robust deep networks.  ...  Deep Neural Networks trained on academic datasets often fail when applied to the real world. These failures generally arise from unknown inputs that are not of interest to the system.  ...  Rather than rejecting unknown samples x ∈ D u , the two novel loss functions develop deep feature representations that are more robust to unknown inputs.  ... 
dblp:conf/cvpr/DhamijaGB19 fatcat:b35i4nnw3rf4xm67zyonyy4k2q
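The Objectosphere idea sketched in this abstract pushes known-class feature magnitudes above a target threshold while driving unknown-sample magnitudes toward zero. A minimal NumPy sketch of that magnitude penalty is below; the threshold value `xi` and the function shape are illustrative assumptions, not the authors' exact implementation (in the paper this term is combined with an entropic loss on the softmax outputs):

```python
import numpy as np

def objectosphere_penalty(features, is_known, xi=10.0):
    """Magnitude term of an Objectosphere-style loss (sketch).

    For known samples, penalize deep-feature magnitudes that fall
    below the margin xi; for unknown samples, penalize any nonzero
    magnitude, pushing their features toward the origin.
    """
    mag = np.linalg.norm(features, axis=-1)
    return np.where(is_known,
                    np.maximum(xi - mag, 0.0) ** 2,  # knowns: want ||f|| >= xi
                    mag ** 2)                        # unknowns: want ||f|| -> 0
```

A known sample whose feature magnitude already exceeds `xi` incurs no penalty, while an unknown sample is penalized quadratically in its magnitude.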

Reducing Network Agnostophobia [article]

Akshay Raj Dhamija, Manuel Günther, Terrance E. Boult
2018 arXiv   pre-print
Experiments on networks trained to classify classes from MNIST and CIFAR-10 show that our novel loss functions are significantly better at dealing with unknown inputs from datasets such as Devanagari,  ...  These novel losses are designed to maximize entropy for unknown inputs while increasing separation in deep feature space by modifying magnitudes of known and unknown samples.  ...  Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright annotation thereon.  ... 
arXiv:1811.04110v2 fatcat:5yw56hdxmrdnzcc2oh6kfkvwdu
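The entropy-maximizing objective this abstract refers to (the Entropic Open-Set loss) reduces to ordinary cross-entropy for known samples and a uniform-target cross-entropy for unknown ones, which is minimized when the softmax is maximally entropic. A minimal NumPy sketch, assuming unknown samples are marked with label `-1` (a labeling convention chosen here for illustration):

```python
import numpy as np

def log_softmax(logits):
    # numerically stable log-softmax over the last axis
    z = logits - logits.max(axis=-1, keepdims=True)
    return z - np.log(np.exp(z).sum(axis=-1, keepdims=True))

def entropic_openset_loss(logits, labels):
    """Per-sample loss: cross-entropy for knowns (label >= 0);
    for unknowns (label == -1), the mean negative log-probability
    over all classes, minimized by a uniform softmax."""
    ls = log_softmax(logits)
    known = labels >= 0
    loss = np.empty(len(logits))
    loss[known] = -ls[known, labels[known]]
    loss[~known] = -ls[~known].mean(axis=-1)
    return loss
```

For an unknown sample with `C` classes, the minimum achievable loss is `log(C)`, attained exactly at the uniform distribution.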

Improved Robustness to Open Set Inputs via Tempered Mixup [article]

Ryne Roady, Tyler L. Hayes, Christopher Kanan
2020 arXiv   pre-print
Here, we propose a simple regularization technique easily applied to existing convolutional neural network architectures that improves open set robustness without a background dataset.  ...  However, real-world classifiers must handle inputs that are far from the training distribution including samples from unknown classes.  ...  with a confidence loss framework to reduce the network activation towards these unknown samples.  ... 
arXiv:2009.04659v1 fatcat:bvvztda72nbpbgxkeivtxfjiu4
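The regularization above builds on mixup, which trains on convex combinations of input pairs and their labels. A plain (untempered) mixup step can be sketched as follows; the Beta parameter `alpha` and the fixed RNG seed are generic conventions for illustration, not the paper's tempered variant:

```python
import numpy as np

def mixup_batch(x, y_onehot, alpha=0.2, rng=None):
    """Standard mixup (sketch): convex-combine each sample with a
    randomly permuted partner, mixing inputs and one-hot labels
    with the same Beta-distributed coefficient."""
    rng = rng or np.random.default_rng(0)
    lam = rng.beta(alpha, alpha)
    perm = rng.permutation(len(x))
    return lam * x + (1 - lam) * x[perm], lam * y_onehot + (1 - lam) * y_onehot[perm]
```

Because the mixed labels are convex combinations of one-hot vectors, each label row still sums to one, yielding the soft targets that the open-set regularizer exploits.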

Fast and Automatic Object Registration for Human-Robot Collaboration in Industrial Manufacturing [article]

Manuela Geiß, Martin Baresch, Georgios Chasparis, Edwin Schweiger, Nico Teringl, Michael Zwick
2022 arXiv   pre-print
Moreover, we present a new loss, the intraspread-objectosphere loss, to tackle the problem of open world recognition.  ...  The intervention of a human operator reduces to providing the new object together with its label and starting the training process.  ...  In this project, we use the Deep Neural Network Development Kit (DNNDK) 3 from Xilinx to transfer our trained Faster R-CNN model to an FPGA.  ... 
arXiv:2204.00597v1 fatcat:2egtvqucnjecfjltokdpij52am

Towards Accurate Open-Set Recognition via Background-Class Regularization [article]

Wonwoo Cho, Jaegul Choo
2022 arXiv   pre-print
, or complicated network architectures.  ...  In open-set recognition (OSR), classifiers should be able to reject unknown-class samples while maintaining high closed-set classification accuracy.  ...  Since deep neural networks (DNNs) have robust classification performance by learning high-level representations of data, OSR methods for DNNs have received great attention.  ... 
arXiv:2207.10287v1 fatcat:oxk25vy3krhc7cwpol2kqrudlm

Deep Learning and Open Set Malware Classification: A Survey [article]

Jingyun Jia
2020 arXiv   pre-print
The dramatic increase of malware has led to a research area that uses cutting-edge machine learning techniques not only to classify malware into known families, but also to recognize unknown ones.  ...  When unknown training samples are unavailable, the OSR system should not only correctly classify the known classes, but also recognize the unknown class.  ...  Recurrent neural networks Another popular deep neural network architecture is the Recurrent Neural Network (RNN), which handles sequential inputs such as speech and language.  ... 
arXiv:2004.04272v1 fatcat:332sfs7davh2hkxoehiyjta2y4

Are Out-of-Distribution Detection Methods Effective on Large-Scale Datasets? [article]

Ryne Roady, Tyler L. Hayes, Ronald Kemker, Ayesha Gonzales, and Christopher Kanan
2019 arXiv   pre-print
For convolutional neural networks, there have been two major approaches: 1) inference methods to separate knowns from unknowns and 2) feature space regularization strategies to improve model robustness  ...  However, deployed classifiers often require the ability to recognize inputs from outside the training set as unknowns.  ...  The goal of different feature space regularization strategies is to build robustness into the deep feature space by separating knowns from potential unknowns.  ... 
arXiv:1910.14034v1 fatcat:d63xwwy3yffhnf3g6khofors3i

A Wholistic View of Continual Learning with Deep Neural Networks: Forgotten Lessons and the Bridge to Active and Open World Learning [article]

Martin Mundt, Yong Won Hong, Iuliia Pliushch, Visvanathan Ramesh
2020 arXiv   pre-print
This poses a massive challenge as neural networks are well known to provide overconfident false predictions on unknown instances and break down in the face of corrupted data.  ...  Based on these forgotten lessons, we propose a consolidated view to bridge continual learning, active learning and open set recognition in deep neural networks.  ...  This should ultimately lead to improved, more robust and simpler machine learning systems.  ... 
arXiv:2009.01797v2 fatcat:nxsj7utiqjehlkecknpadryd2i

Recognition of Unknown Radar Emitters with Machine Learning

Sabine Apfeld, Alexander Charlish
2021 IEEE Transactions on Aerospace and Electronic Systems  
The results show that unknown emitters that do not use known waveforms are reliably recognised even with corrupted data, while unknown emitters that are more similar to known ones are harder to detect.  ...  We consider two general approaches: the "memoryless" Markov chain and the Long Short-Term Memory recurrent neural network, which is specially designed to "remember" the past.  ...  ACKNOWLEDGEMENT Sabine Apfeld would like to thank Isabel Schlangen for the helpful discussions that improved the paper.  ... 
doi:10.1109/taes.2021.3098125 fatcat:fwbbqh77s5hwbfspfmjoowhiae

Open Set Medical Diagnosis [article]

Viraj Prabhu, Anitha Kannan, Geoffrey J. Tso, Namit Katariya, Manish Chablani, David Sontag, Xavier Amatriain
2019 arXiv   pre-print
Further, we extend our study to a setting where training data is distributed across several healthcare sites that do not allow data pooling, and experiment with different strategies of building open-set  ...  Across both settings, we observe consistent gains from explicitly modeling unseen conditions, but find the optimal training strategy to vary across settings.  ...  deep networks under various frameworks.  ... 
arXiv:1910.02830v1 fatcat:jcdcm3wcnbe7bcn4xhyzbjp4ei

Unified Probabilistic Deep Continual Learning through Generative Replay and Open Set Recognition

Martin Mundt, Iuliia Pliushch, Sagnik Majumder, Yongwon Hong, Visvanathan Ramesh
2022 Journal of Imaging  
Modern deep neural networks are well known to be brittle in the face of unknown data instances and recognition of the latter remains a challenge.  ...  These bounds are shown to serve a dual purpose: unseen unknown out-of-distribution data can be distinguished from already trained known tasks towards robust application.  ...  Similarly, the objectosphere loss [40] defines an objective that explicitly aims to maximize entropy for upfront available unknown inputs.  ... 
doi:10.3390/jimaging8040093 pmid:35448220 pmcid:PMC9028364 fatcat:gz6u2rxcjzebrifmrzxylla6pe

A Unified Survey on Anomaly, Novelty, Open-Set, and Out-of-Distribution Detection: Solutions and Future Challenges [article]

Mohammadreza Salehi, Hossein Mirzaei, Dan Hendrycks, Yixuan Li, Mohammad Hossein Rohban, Mohammad Sabokrou
2022 arXiv   pre-print
Detecting OOD samples is challenging due to the intractability of modeling all possible unknown distributions.  ...  Failure to recognize an out-of-distribution (OOD) sample, and consequently assign that sample to an in-class label significantly compromises the reliability of a model.  ...  (c) shows the last layer when the network is trained with the objectosphere loss.  ... 
arXiv:2110.14051v4 fatcat:clurccogknbs5f2s52oakndhju

Watchlist Adaptation: Protecting the Innocent

Manuel Günther, Akshay Raj Dhamija, Terrance E Boult
2020
the deep features of the Objectosphere network, which works much better than the direct prediction of the very same network.  ...  Thus we argue that for dealing with unknown subjects in a watchlist, we ideally want to learn deep features that separate the known subjects from unknown inputs.  ... 
doi:10.5167/uzh-190793 fatcat:siwtgknmm5fvbi5lamfbuomrie

Adversarial Motorial Prototype Framework for Open Set Recognition [article]

Ziheng Xia, Penghui Wang, Ganggang Dong, Hongwei Liu
2021 arXiv   pre-print
and unknown classes with the adversarial motion of the margin constraint radius.  ...  Open set recognition is designed to identify known classes and to reject unknown classes simultaneously.  ...  et al. evaluated the open space risk of trained deep neural networks at the earliest, and they proposed a novel objectosphere loss function to reduce the open space risk by maximizing the entropy of unknown  ... 
arXiv:2108.04225v1 fatcat:fcnfhh7thvbtpjjvs52ywvqjxq

Entropic Out-of-Distribution Detection: Seamless Detection of Unknown Examples [article]

David Macêdo, Tsang Ing Ren, Cleber Zanchettin, Adriano L. I. Oliveira, Teresa Ludermir
2021 arXiv   pre-print
Our experiments showed that IsoMax loss works as a seamless SoftMax loss drop-in replacement that significantly improves neural networks' OOD detection performance.  ...  In this paper, we argue that the unsatisfactory out-of-distribution (OOD) detection performance of neural networks is mainly due to the SoftMax loss anisotropy and propensity to produce low entropy probability  ...  The Entropic Open-Set loss and the Objectosphere loss were proposed in [28] . These two losses used background samples to improve the performance of detecting unknown inputs.  ... 
arXiv:2006.04005v3 fatcat:noxs7ektcbavlingmy3p533fxu
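The isotropy argument in this abstract replaces the usual dot-product logits with scaled negative Euclidean distances to learnable class prototypes. A minimal NumPy sketch of that logit computation; the entropic scale value is an illustrative assumption, not the paper's tuned setting:

```python
import numpy as np

def isomax_logits(features, prototypes, entropic_scale=10.0):
    """IsoMax-style logits (sketch): negative Euclidean distance from
    each feature vector to each class prototype, multiplied by an
    entropic scale that sharpens the softmax during training."""
    # pairwise distances: (batch, 1, dim) vs (1, classes, dim) -> (batch, classes)
    d = np.linalg.norm(features[:, None, :] - prototypes[None, :, :], axis=-1)
    return -entropic_scale * d
```

The class whose prototype lies closest to a feature vector receives the largest logit, so a plain softmax over these distance-based logits yields an isotropic, distance-aware classifier.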
Showing results 1 — 15 out of 18 results