







34 results

Explaining Deep Learning Models using Causal Inference [article]

Tanmayee Narendra, Anush Sankaran, Deepak Vijaykeerthy, Senthil Mani
2018-11-11 · arXiv · pre-print
Although deep learning models have been successfully applied to a variety of tasks, due to the millions of parameters, they are becoming increasingly opaque and complex. In order to establish trust for their widespread commercial use, it is important to formalize a principled framework to reason over these models. In this work, we use ideas from causal inference to describe a general framework to reason over CNN models. Specifically, we build a Structural Causal Model (SCM) as an abstraction over a specific aspect of the CNN. We also formulate a method to quantitatively rank the filters of a convolution layer according to their counterfactual importance. We illustrate our approach with popular CNN architectures such as LeNet5, VGG19, and ResNet32.
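The counterfactual ranking this abstract describes can be sketched by ablation: zero out one filter's activations and measure how much a model score drops. This is an illustrative reading, not the paper's SCM construction; the function name, the toy activations, and scoring-by-zeroing are all assumptions.

```python
def rank_filters_by_ablation(score_fn, activations):
    """Rank filters by the score drop observed when each one is zeroed out.

    activations: dict mapping filter id -> activation vector (list of floats)
    score_fn:    maps an activation dict to a scalar model score
    """
    baseline = score_fn(activations)
    importance = {}
    for fid in activations:
        ablated = dict(activations)
        ablated[fid] = [0.0] * len(activations[fid])  # counterfactual: filter removed
        importance[fid] = baseline - score_fn(ablated)
    # A larger drop means the filter mattered more to the score.
    return sorted(importance, key=importance.get, reverse=True)

# Toy usage: the "model score" is simply the sum of all activations.
toy = {0: [1.0, 1.0], 1: [3.0, 3.0], 2: [0.0, 0.0]}
score = lambda acts: sum(sum(v) for v in acts.values())
ranking = rank_filters_by_ablation(score, toy)
```

With the toy score, filter 1 causes the biggest drop when removed, so it ranks first.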
arXiv:1811.04376v1 · fatcat:q3t2jhgq4fhonoinwjaq2frv2y
Fulltext PDF: https://web.archive.org/web/20200829051234/https://arxiv.org/pdf/1811.04376v1.pdf

mAnI: Movie Amalgamation using Neural Imitation [article]

Naveen Panwar, Shreya Khare, Neelamadhav Gantayat, Rahul Aralikatte, Senthil Mani, Anush Sankaran
2017-08-16 · arXiv · pre-print
Cross-modal data retrieval has been the basis of various creative tasks performed by Artificial Intelligence (AI). One such highly challenging task for AI is to convert a book into its corresponding movie, which most creative filmmakers do today. In this research, we take the first step towards it by visualizing the content of a book using its corresponding movie visuals. Given a set of sentences from a book, or even a fan-fiction written in the same universe, we employ deep learning models to visualize the input by stitching together relevant frames from the movie. We studied and compared three different settings to match the book with the movie content: (i) Dialog model: using only the dialog from the movie, (ii) Visual model: using only the visual content from the movie, and (iii) Hybrid model: using both the dialog and the visual content from the movie. Experiments on the publicly available MovieBook dataset show the effectiveness of the proposed models.
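The retrieval step behind all three settings can be sketched as nearest-neighbour matching in a shared embedding space. This is a minimal illustration assuming sentence and frame embeddings already exist; all names here are illustrative, not the paper's models.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def best_frame(sentence_vec, frame_vecs):
    """Index of the movie frame whose embedding is closest to the sentence."""
    return max(range(len(frame_vecs)), key=lambda i: cosine(sentence_vec, frame_vecs[i]))

# Toy usage: three frame embeddings, one sentence embedding.
frames = [[1.0, 0.0], [0.0, 1.0], [0.7, 0.7]]
idx = best_frame([0.0, 2.0], frames)  # points in the same direction as frame 1
```

Stitching a visualization then amounts to running `best_frame` per sentence and concatenating the selected frames in order.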
arXiv:1708.04923v1 · fatcat:ylsyxjijlffaroppm32tx57lmu
Fulltext PDF: https://web.archive.org/web/20200824202006/https://arxiv.org/pdf/1708.04923v1.pdf

Reducing Overlearning through Disentangled Representations by Suppressing Unknown Tasks [article]

Naveen Panwar, Tarun Tater, Anush Sankaran, Senthil Mani
2020-05-20 · arXiv · pre-print
Existing deep learning approaches for learning visual features tend to overlearn and extract more information than what is required for the task at hand. From a privacy preservation perspective, the input visual information is not protected from the model, enabling the model to become more intelligent than it is trained to be. Current approaches for suppressing additional task learning assume the presence of ground truth labels for the tasks to be suppressed during training time. In this research, we propose a three-fold novel contribution: (i) a model-agnostic solution for reducing model overlearning by suppressing all the unknown tasks, (ii) a novel metric to measure the trust score of a trained deep learning model, and (iii) a simulated benchmark dataset, PreserveTask, having five different fundamental image classification tasks to study the generalization nature of models. In the first set of experiments, we learn disentangled representations and suppress overlearning of five popular deep learning models: VGG16, VGG19, Inception-v1, MobileNet, and DenseNet on the PreserveTask dataset. Additionally, we show results of our framework on the color-MNIST dataset and practical applications of face attribute preservation on the Diversity in Faces (DiF) and IMDB-Wiki datasets.
arXiv:2005.10220v1 · fatcat:fn62wlzyfjgzdprgd7qreghm3e
Fulltext PDF: https://web.archive.org/web/20200527074334/https://arxiv.org/pdf/2005.10220v1.pdf

Coverage Testing of Deep Learning Models using Dataset Characterization [article]

Senthil Mani, Anush Sankaran, Srikanth Tamilselvam, Akshay Sethi
2019-11-17 · arXiv · pre-print
Deep Neural Networks (DNNs), with their promising performance, are being increasingly used in safety-critical applications such as autonomous driving, cancer detection, and secure authentication. With the growing importance of deep learning, there is a requirement for a more standardized framework to evaluate and test deep learning models. The primary challenges involved in automated generation of extensive test cases are: (i) neural networks are difficult to interpret and debug, and (ii) the availability of human annotators to generate specialized test points. In this research, we explain the necessity to measure the quality of a dataset and propose a test case generation system guided by the dataset properties. From a testing perspective, four different dataset quality dimensions are proposed: (i) equivalence partitioning, (ii) centroid positioning, (iii) boundary conditioning, and (iv) pair-wise boundary conditioning. The proposed system is evaluated on well known image classification datasets such as MNIST, Fashion-MNIST, CIFAR10, CIFAR100, and SVHN against popular deep learning models such as LeNet, ResNet-20, and VGG-19. Further, we conduct various experiments to demonstrate the effectiveness of the systematic test case generation system for evaluating deep learning models.
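Two of the four dimensions above lend themselves to a short sketch: picking test points near a class centroid ("centroid positioning") and near the midpoint between two class centroids (one crude stand-in for "boundary conditioning"). The selection rules here are assumptions for illustration, not the paper's exact definitions.

```python
def centroid(points):
    """Component-wise mean of a list of equal-length vectors."""
    dim = len(points[0])
    return [sum(p[d] for p in points) / len(points) for d in range(dim)]

def dist2(p, q):
    """Squared Euclidean distance."""
    return sum((a - b) ** 2 for a, b in zip(p, q))

def pick_near(target, points, k):
    """The k points closest to `target` -- e.g. near a class centroid, or
    near the midpoint of two class centroids for boundary-style test cases."""
    return sorted(points, key=lambda p: dist2(p, target))[:k]

# Toy usage: two tiny classes, pick 2 points nearest the inter-centroid midpoint.
class_a = [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]]
class_b = [[4.0, 4.0], [5.0, 4.0], [4.0, 5.0]]
mid = [(a + b) / 2 for a, b in zip(centroid(class_a), centroid(class_b))]
boundary_tests = pick_near(mid, class_a + class_b, 2)
```

Equivalence partitioning and pair-wise boundary conditioning would layer further grouping on top of the same distance primitives.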
arXiv:1911.07309v1 · fatcat:7mepxnianfamvdqu75hqnismjm
Fulltext PDF: https://web.archive.org/web/20200902203046/https://arxiv.org/pdf/1911.07309v1.pdf

DeepTriage: Exploring the Effectiveness of Deep Learning for Bug Triaging [article]

Senthil Mani, Anush Sankaran, Rahul Aralikatte
2018-01-04 · arXiv · pre-print
For a given software bug report, identifying an appropriate developer who could potentially fix the bug is the primary task of a bug triaging process. A bug title (summary) and a detailed description are present in most bug tracking systems. An automatic bug triaging algorithm can be formulated as a classification problem, with the bug title and description as the input, mapping it to one of the available developers (classes). The major challenge is that the bug description usually contains a combination of free unstructured text, code snippets, and stack traces, making the input data noisy. The existing bag-of-words (BOW) feature models do not consider the syntactical and sequential word information available in the unstructured text. We propose a novel bug report representation algorithm using an attention-based deep bidirectional recurrent neural network (DBRNN-A) model that learns syntactic and semantic features from long word sequences in an unsupervised manner. Instead of BOW features, the DBRNN-A based bug representation is then used for training the classifier. Using an attention mechanism enables the model to learn the context representation over a long word sequence, as in a bug report. To provide a large amount of data to train the feature learning model, the unfixed bug reports (~70% of bugs in an open source bug tracking system) are leveraged, which were completely ignored in previous studies. Another contribution is to make this research reproducible by making the source code available and creating a public benchmark dataset of bug reports from three open source bug tracking systems: Google Chromium (383,104 bug reports), Mozilla Core (314,388 bug reports), and Mozilla Firefox (162,307 bug reports). Experimentally, we compare our approach with the BOW model and machine learning approaches and observe that DBRNN-A provides a higher rank-10 average accuracy.
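The attention step that builds one context vector from a long word sequence can be sketched in plain Python: a softmax over per-word scores, then a weighted average of the hidden states. This is a generic attention-pooling sketch, not the DBRNN-A implementation; how the scores are produced is assumed given.

```python
import math

def attention_pool(hidden_states, scores):
    """Softmax-weighted average of RNN hidden states: the pooling an attention
    mechanism uses to form one context vector for a long bug report.

    hidden_states: list of equal-length vectors (one per word)
    scores:        one relevance score per word
    """
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]   # numerically stable softmax
    total = sum(exps)
    weights = [e / total for e in exps]
    dim = len(hidden_states[0])
    return [sum(w * h[d] for w, h in zip(weights, hidden_states)) for d in range(dim)]

# With equal scores, the context vector is just the mean of the states.
ctx = attention_pool([[1.0, 0.0], [3.0, 2.0]], [0.5, 0.5])
```

The resulting context vector would then feed a classifier over the developer classes.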
arXiv:1801.01275v1 · fatcat:lrgolviacbhflbzqu5bax56e3e
Fulltext PDF: https://web.archive.org/web/20200903112430/https://arxiv.org/pdf/1801.01275v1.pdf

On Causal Inference for Data-free Structured Pruning [article]

Martin Ferianc, Anush Sankaran, Olivier Mastropietro, Ehsan Saboori, Quentin Cappart
2021-12-19 · arXiv · pre-print
Acknowledgements: This work was completed while Martin Ferianc was an intern and Anush Sankaran was a research scientist at Deeplite Inc. This research was supported by MITACS/IT26487. ... For example, by compression and subsequent fine-tuning, ResNet-18's compute operations' count can be reduced by 7× and its memory footprint by 4.5× (Sankaran et al. 2021). ...
arXiv:2112.10229v1 · fatcat:q2wpysboxfeojhohabzy3ewxdu
Fulltext PDF: https://web.archive.org/web/20211226173311/https://arxiv.org/pdf/2112.10229v1.pdf

AuthorGAN: Improving GAN Reproducibility using a Modular GAN Framework [article]

Raunak Sinha, Anush Sankaran, Mayank Vatsa, Richa Singh
2019-11-26 · arXiv · pre-print
Generative models are becoming increasingly popular in the literature, with Generative Adversarial Networks (GAN) being the most successful variant yet. With this increasing demand and popularity, it is becoming equally difficult and challenging to implement and consume GAN models. A qualitative user survey conducted across 47 practitioners shows that expert-level skill is required to use a GAN model for a given task, despite the presence of various open source libraries. In this research, we propose a novel system called AuthorGAN, aiming to achieve true democratization of GAN authoring. A highly modularized, library-agnostic representation of a GAN model is defined to enable interoperability of GAN architectures across different libraries such as Keras, TensorFlow, and PyTorch. An intuitive drag-and-drop based visual designer is built using the Node-RED platform to enable custom architecture design without the need for writing any code. Five different GAN models are implemented as part of this framework, and their performance is shown using the benchmark MNIST dataset.
arXiv:1911.13250v1 · fatcat:ja3v7megqrfmrkz6jtfbgqahuq
Fulltext PDF: https://web.archive.org/web/20200907163509/https://arxiv.org/pdf/1911.13250v1.pdf

Multisensor Optical and Latent Fingerprint Database

Anush Sankaran, Mayank Vatsa, Richa Singh
2015 · IEEE Access
Large-scale fingerprint recognition involves capturing ridge patterns at different time intervals using various methods, such as live-scan and paper-ink approaches, introducing intraclass variations in the fingerprint. The performance of existing algorithms is significantly affected when fingerprints are captured with diverse acquisition settings such as multisession, multispectral, multiresolution, with slap, and with latent fingerprints. One of the primary challenges in developing a generic and robust fingerprint matching algorithm is the limited availability of large data sets that capture such intraclass diversity. In this paper, we present the multisensor optical and latent fingerprint database of more than 19,000 fingerprint images with different intraclass variations during fingerprint capture. We also showcase the baseline results of various matching experiments on this database. The database is aimed to drive research in building robust algorithms toward solving the problem of latent fingerprint matching and handling intraclass variations in fingerprint capture. Some potential applications for this database are identified, and the research challenges that can be addressed using this database are also discussed. INDEX TERMS: Image databases, fingerprint recognition, forensics, feature extraction.
doi:10.1109/access.2015.2428631 · fatcat:yt3yzpituzfx3dm5oppkzcwvzm
Fulltext PDF: https://web.archive.org/web/20180724042013/https://ieeexplore.ieee.org/ielx7/6287639/7042252/07098322.pdf?tp=&arnumber=7098322&isnumber=7042252

On Matching Faces with Alterations due to Plastic Surgery and Disguise [article]

Saksham Suri, Anush Sankaran, Mayank Vatsa, Richa Singh
2018-11-18 · arXiv · pre-print
Plastic surgery and disguise variations are two of the most challenging covariates of face recognition. The state-of-the-art deep learning models are not sufficiently successful due to the availability of limited training samples. In this paper, a novel framework is proposed which transfers fundamental visual features learnt from a generic image dataset to supplement a supervised face recognition model. The proposed algorithm combines an off-the-shelf supervised classifier and a generic, task-independent network which encodes information related to basic visual cues such as color, shape, and texture. Experiments are performed on the IIITD plastic surgery face dataset and the Disguised Faces in the Wild (DFW) dataset. Results showcase that the proposed algorithm achieves state-of-the-art results on both datasets. Specifically, on the DFW database, the proposed algorithm yields over 87% verification accuracy at 1% false accept rate, which is 53.8% better than baseline results computed using VGGFace.
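The reported metric, verification accuracy at a fixed false accept rate, can be sketched from genuine and impostor similarity scores. The threshold rule below is one common convention, not necessarily the paper's; all data here are toy values.

```python
def tar_at_far(genuine, impostor, far=0.01):
    """True accept rate at a given false accept rate.

    Sets the accept threshold so that at most `far` of the impostor
    scores would be accepted, then measures the fraction of genuine
    scores at or above that threshold.
    """
    k = int(far * len(impostor))                 # impostor accepts tolerated
    ranked = sorted(impostor, reverse=True)
    # If no impostor accepts are allowed, place the threshold above the max.
    thr = ranked[k - 1] if k >= 1 else ranked[0] + 1e-9
    return sum(s >= thr for s in genuine) / len(genuine)

# Toy usage: 100 impostor scores, 3 genuine scores, FAR = 1%.
impostor = [i / 100 for i in range(100)]         # 0.00 .. 0.99
genuine = [0.99, 1.0, 0.5]
tar = tar_at_far(genuine, impostor, far=0.01)
```

In the toy data the 1%-FAR threshold lands at 0.99, accepting two of the three genuine comparisons.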
arXiv:1811.07318v1 · fatcat:45mqmty6tvhvflagl3cm6yw7gy
Fulltext PDF: https://web.archive.org/web/20200911162518/https://arxiv.org/pdf/1811.07318v1.pdf

DLPaper2Code: Auto-generation of Code from Deep Learning Research Papers [article]

Akshay Sethi, Anush Sankaran, Naveen Panwar, Shreya Khare, Senthil Mani
2017-11-09 · arXiv · pre-print
Implementing research papers takes at least a few days of effort for software engineers, assuming that they have limited knowledge in DL (Sankaran et al. 2011). ...
arXiv:1711.03543v1 · fatcat:ecsgaxn2kzbqbax3xfwbwucsvu
Fulltext PDF: https://web.archive.org/web/20191027031049/https://arxiv.org/pdf/1711.03543v1.pdf

On matching latent to latent fingerprints

Anush Sankaran, Tejas I. Dhamecha, Mayank Vatsa, Richa Singh
2011 · International Joint Conference on Biometrics (IJCB)
This research presents a forensics application of matching two latent fingerprints. In crime scene settings, it is often required to match multiple latent fingerprints. Unlike matching latent with inked or live fingerprints, this research problem is very challenging and requires proper analysis and attention. The contribution of this paper is three-fold: (i) a comparative analysis of existing algorithms is presented for this application, (ii) fusion and context switching frameworks are proposed to improve the identification performance, and (iii) a multi-latent fingerprint database is prepared. The experiments highlight the need for improved feature extraction and processing methods and exhibit large scope for improvement in this important research problem.
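The score-level fusion mentioned in contribution (ii) is commonly built from min-max normalization followed by a weighted sum rule. A minimal sketch of that standard baseline, not the paper's context-switching framework; matcher names and scores are illustrative.

```python
def minmax(scores):
    """Rescale a matcher's scores to [0, 1]."""
    lo, hi = min(scores), max(scores)
    return [(s - lo) / (hi - lo) if hi > lo else 0.0 for s in scores]

def sum_rule_fusion(score_lists, weights=None):
    """Min-max normalize each matcher's scores, then take a (weighted)
    sum per comparison: a common score-level fusion baseline."""
    norm = [minmax(s) for s in score_lists]
    if weights is None:
        weights = [1.0] * len(norm)
    n = len(norm[0])
    return [sum(w * m[i] for w, m in zip(weights, norm)) for i in range(n)]

# Toy usage: two matchers scoring the same three fingerprint comparisons.
matcher_a = [10.0, 20.0, 30.0]   # e.g. a minutiae-based matcher
matcher_b = [0.2, 0.9, 0.4]      # e.g. a texture-based matcher
fused = sum_rule_fusion([matcher_a, matcher_b])
```

A context-switching scheme would instead pick which matcher (or weight set) to trust per comparison based on input quality.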
doi:10.1109/ijcb.2011.6117525 · dblp:conf/icb/SankaranDVS11 · fatcat:6d4mdrw4pvdopptr4642jyntiy
Fulltext PDF: https://web.archive.org/web/20170516165335/http://www.iab-rubric.org:80/papers/IJCB11-Latent.pdf

Adaptive latent fingerprint segmentation using feature selection and random decision forest classification

Anush Sankaran, Aayush Jain, Tarun Vashisth, Mayank Vatsa, Richa Singh
2017 · Information Fusion
Anush Sankaran is partly supported by the TCS PhD research fellowship. The authors acknowledge Prof. C.-C. Jay Kuo and Dr. ... A recent survey by Sankaran et al. ...
doi:10.1016/j.inffus.2016.05.002 · fatcat:7av3fvetpfbodkxnarul3lbt7y
Fulltext PDF: https://web.archive.org/web/20170227015110/http://iab-rubric.org/papers/INFFUS16-Segment.pdf

On latent fingerprint minutiae extraction using stacked denoising sparse AutoEncoders

Anush Sankaran, Prateekshit Pandey, Mayank Vatsa, Richa Singh
2014 · IEEE International Joint Conference on Biometrics
Acknowledgement: Sankaran was partially supported through the TCS Research Fellowship. ...
doi:10.1109/btas.2014.6996300 · dblp:conf/icb/SankaranPVS14 · fatcat:sobja5ve2jdvbk2354nl72xkyu
Fulltext PDF: https://web.archive.org/web/20170829164510/http://www.iab-rubric.org/papers/Latent-SDAE.pdf

Is gender classification across ethnicity feasible using discriminant functions?

Tejas I. Dhamecha, Anush Sankaran, Richa Singh, Mayank Vatsa
2011 · International Joint Conference on Biometrics (IJCB)
Over the years, automatic gender recognition has been used in many applications. However, limited research has been done on analyzing gender recognition in cross-ethnicity scenarios. This research aims at studying the performance of discriminant functions including Principal Component Analysis, Linear Discriminant Analysis, and Subclass Discriminant Analysis with the availability of a limited training database and unseen ethnicity variations. The experiments are performed on a heterogeneous database of 8112 images that includes variations in illumination, expression, minor pose, and ethnicity. Contrary to existing literature, the results show that PCA provides comparable but slightly better performance compared to PCA+LDA, PCA+SDA, and PCA+SVM. The results also suggest that linear discriminant functions provide good generalization capability even with a limited number of training samples and principal components, and with cross-ethnicity variations.
doi:10.1109/ijcb.2011.6117524 · dblp:conf/icb/DhamechaSSV11 · fatcat:e5dldc77fvbxnkhhva5dc3np3u
Fulltext PDF: https://web.archive.org/web/20170829083947/http://www.iab-rubric.org/papers/IJCB11-Gender.pdf

Hierarchical fusion for matching simultaneous latent fingerprint

Anush Sankaran, Mayank Vatsa, Richa Singh
2012 · IEEE Fifth International Conference on Biometrics: Theory, Applications and Systems (BTAS)
Sankaran and M. Vatsa are partly supported through the TCS fellowship and DST, India (under FAST track), respectively. ...
doi:10.1109/btas.2012.6374604 · dblp:conf/btas/SankaranVS12 · fatcat:k5wnf6b24bb3fkgon2c47zkasy
Fulltext PDF: https://web.archive.org/web/20170227015329/http://iab-rubric.org/papers/SimultanouePaper_finalSubmission.pdf
Showing results 1 to 15 of 34.