
SPot-the-Difference Self-Supervised Pre-training for Anomaly Detection and Segmentation [article]

Yang Zou, Jongheon Jeong, Latha Pemula, Dongqing Zhang, Onkar Dabeer
2022 arXiv   pre-print
Visual anomaly detection is commonly used in industrial quality inspection. In this paper, we present a new dataset as well as a new self-supervised learning method for ImageNet pre-training to improve anomaly detection and segmentation in 1-class and 2-class 5/10/high-shot training setups. We release the Visual Anomaly (VisA) Dataset consisting of 10,821 high-resolution color images (9,621 normal and 1,200 anomalous samples) covering 12 objects in 3 domains, making it the largest industrial anomaly detection dataset to date. Both image- and pixel-level labels are provided. We also propose a new self-supervised framework, SPot-the-difference (SPD), which regularizes contrastive self-supervised pre-training methods such as SimSiam, MoCo and SimCLR to be more suitable for anomaly detection tasks. Our experiments on the VisA and MVTec-AD datasets show that SPD consistently improves these contrastive pre-training baselines and even supervised pre-training. For example, SPD improves the Area Under the Precision-Recall curve (AU-PR) for anomaly segmentation by 5.9% and 6.8% over SimSiam and supervised pre-training, respectively, in the 2-class high-shot regime. We open-source the project at .
arXiv:2207.14315v1 fatcat:lczrzy3j4vgo5o7av5um7yejqm
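The SPD idea sketched in the abstract — a standard contrastive objective plus a term that keeps a locally perturbed "spot-the-difference" view distinguishable from the original — can be illustrated with a minimal loss function. The margin formulation and all names below are assumptions for illustration, not the paper's exact objective.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def spd_regularized_loss(z1, z2, z_local, margin=0.5):
    """SimSiam-style similarity loss between two global views (z1, z2),
    plus an SPD-style hinge term that penalizes the embedding of a locally
    augmented 'spot-the-difference' view (z_local) for being too similar
    to the original. The hinge/margin form is an assumption for this sketch."""
    sim_loss = -cosine(z1, z2)  # pull the two global views together
    # Penalize cos(z1, z_local) only once it exceeds (1 - margin):
    spd_loss = max(0.0, cosine(z1, z_local) - (1.0 - margin))
    return sim_loss + spd_loss
```

With identical global views and an orthogonal local view, only the similarity term contributes; a local view that collapses onto the original incurs the extra penalty.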

Radio Transformer Networks: Attention Models for Learning to Synchronize in Wireless Systems [article]

Timothy J O'Shea, Latha Pemula, Dhruv Batra, T. Charles Clancy
2016 arXiv   pre-print
We introduce learned attention models into the radio machine learning domain for the task of modulation recognition by leveraging spatial transformer networks and introducing new radio domain appropriate transformations. This attention model allows the network to learn a localization network capable of synchronizing and normalizing a radio signal blindly with zero knowledge of the signals structure based on optimization of the network for classification accuracy, sparse representation, and
more » ... arization. Using this architecture we are able to outperform our prior results in accuracy vs signal to noise ratio against an identical system without attention, however we believe such an attention model has implication far beyond the task of modulation recognition.
arXiv:1605.00716v1 fatcat:7cjqev6npjgd5glkwvo6elwoxu
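The normalization step the abstract describes — a localization network predicts synchronization parameters, and a fixed transform applies them to the raw signal — can be sketched for the transform half. Here the parameters are plain function arguments rather than network outputs, and the specific correction (timing shift, then frequency/phase de-rotation) is an assumed canonical form, not the paper's exact transformer.

```python
import numpy as np

def apply_radio_transform(iq, time_offset, phase, freq_offset):
    """Normalize a complex baseband signal given predicted sync parameters.
    In the paper these parameters come from a learned localization network;
    here they are explicit arguments (an assumption for this sketch)."""
    shifted = iq[time_offset:]                 # coarse timing correction
    n = np.arange(len(shifted))
    # Remove residual carrier frequency and phase offsets by de-rotation:
    return shifted * np.exp(-1j * (2 * np.pi * freq_offset * n + phase))
```

Applied to a tone with known timing, phase, and frequency impairments, the transform recovers a constant (synchronized) baseband signal.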

Towards Total Recall in Industrial Anomaly Detection [article]

Karsten Roth, Latha Pemula, Joaquin Zepeda, Bernhard Schölkopf, Thomas Brox, Peter Gehler
2022 arXiv   pre-print
Being able to spot defective parts is a critical component in large-scale industrial manufacturing. A particular challenge that we address in this work is the cold-start problem: fit a model using nominal (non-defective) example images only. While handcrafted solutions per class are possible, the goal is to build systems that work well simultaneously on many different tasks automatically. The best-performing approaches combine embeddings from ImageNet models with an outlier detection model. In this paper, we extend on this line of work and propose PatchCore, which uses a maximally representative memory bank of nominal patch-features. PatchCore offers competitive inference times while achieving state-of-the-art performance for both detection and localization. On the challenging, widely used MVTec AD benchmark, PatchCore achieves an image-level anomaly detection AUROC score of up to 99.6%, more than halving the error compared to the next best competitor. We further report competitive results on two additional datasets and also find competitive results in the few-samples regime. ^* Work done during a research internship at Amazon AWS. Code:
arXiv:2106.08265v2 fatcat:eopc2iydijeozoen3gw2gyhb7m
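The scoring scheme the abstract describes — a memory bank of nominal patch features, with test patches scored by nearest-neighbor distance — can be sketched in a few lines. This sketch substitutes random subsampling for PatchCore's greedy coreset selection and omits its score re-weighting; the function names are illustrative, not from the paper's code.

```python
import numpy as np

def build_memory_bank(nominal_patches, subsample=0.1, seed=0):
    """Keep a subset of nominal patch features as the memory bank.
    (PatchCore uses greedy coreset subsampling; random selection is a
    simplification for this sketch.)"""
    rng = np.random.default_rng(seed)
    n = max(1, int(len(nominal_patches) * subsample))
    idx = rng.choice(len(nominal_patches), size=n, replace=False)
    return nominal_patches[idx]

def anomaly_score(test_patches, memory_bank):
    """Image-level score: the largest distance from any test patch to its
    nearest nominal patch in the memory bank."""
    # Pairwise distances: (num_test_patches, num_bank_patches)
    dists = np.linalg.norm(
        test_patches[:, None, :] - memory_bank[None, :, :], axis=-1
    )
    per_patch = dists.min(axis=1)  # nearest-neighbor distance per patch
    return float(per_patch.max())
```

An image whose patches all lie near the nominal distribution scores low; a single far-off patch dominates the max and flags the image.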


Nina Ariesta, Risansyah Rifansyah, Dian Arrisujaya, Mamay Maslahat
2018 Jurnal Sains Natural  
as well as mangosteen peel (Zein, Suhaili, Earnestly, Indrawati, & Munaf, 2010), and geomaterial-based adsorbents such as clay and perlite (Dyer, Tangkawanit, & Rangsriwatananon, 2004; and Prakash, Latha  ...  This research was funded by the National Competitive Research program under the Beginner Lecturer Research (Penelitian Dosen Pemula, PDP) scheme, implementation year 2018, under contract number 0802/K4/KM/2018 dated 12 February 2018, by  ... 
doi:10.31938/jsn.v8i2.157 fatcat:3t2ctptiyncp5blyje5s5lytga

Visual Dialog [article]

Abhishek Das, Satwik Kottur, Khushi Gupta, Avi Singh, Deshraj Yadav, José M. F. Moura, Devi Parikh, Dhruv Batra
2017 arXiv   pre-print
Acknowledgements We thank Harsh Agrawal, Jiasen Lu for help with AMT data collection; Xiao Lin, Latha Pemula for model discussions; Marco Baroni, Antoine Bordes, Mike Lewis, Marc'Aurelio Ranzato for helpful  ... 
arXiv:1611.08669v5 fatcat:iylbi6zjsrbhbiudq6h6tkqtzq

Anomaly Clustering: Grouping Images into Coherent Clusters of Anomaly Types [article]

Kihyuk Sohn, Jinsung Yoon, Chun-Liang Li, Chen-Yu Lee, Tomas Pfister
2021 arXiv   pre-print
Distinctive image features from scale-invariant keypoints.  ...  [50] Karsten Roth, Latha Pemula, Joaquin Zepeda, Bernhard  ... 
arXiv:2112.11573v1 fatcat:swvhwaq6tjcppo77wg2qtmi5bq