Constrained Image Generation Using Binarized Neural Networks with Decision Procedures [article]

Svyatoslav Korneev, Nina Narodytska, Luca Pulina, Armando Tacchella, Nikolaj Bjorner, Mooly Sagiv
2018 arXiv   pre-print
However, embedding a PDE solver into the search procedure is computationally expensive. We use a binarized neural network to approximate a PDE solver.  ...  We consider the problem of binary image generation with given properties.  ...  The generative neural network approach: One approach to tackle the constrained image generation problem is to use generative adversarial networks (GANs) [6, 11].  ...
arXiv:1802.08795v1 fatcat:pabarbc2drglrhd2u5agp5uzt4

Formal Analysis of Deep Binarized Neural Networks

Nina Narodytska
2018 Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence  
Then we focus on Binarized Neural Networks that can be represented and analyzed using well-developed means of Boolean Satisfiability and Integer Linear Programming.  ...  We also discuss how we can take advantage of the structure of neural networks in the search procedure.  ...  We were able to generate images for a small dataset of 16 × 16 pixel images, given a 3-layer neural network.  ...
doi:10.24963/ijcai.2018/811 dblp:conf/ijcai/Narodytska18 fatcat:npesszoud5cv5in7v5guj3lxyq

Portfolio solver for verifying Binarized Neural Networks

Gergely Kovásznai, Krisztián Gajdár, Nina Narodytska
2021 Annales Mathematicae et Informaticae  
We focus on an important family of deep neural networks, the Binarized Neural Networks (BNNs), that are useful in resource-constrained environments, like embedded devices.  ...  We also report on experiments on network equivalence with promising results.  ...  These networks have a number of features that make them useful in resource-constrained environments, like embedded devices or mobile phones [25, 28].  ...
doi:10.33039/ami.2021.03.007 fatcat:pvigltjexzgftkdscfi6zojrjm

Texture for Colors: Natural Representations of Colors Using Variable Bit-Depth Textures [article]

Shumeet Baluja
2021 arXiv   pre-print
The approach uses deep neural networks and is entirely self-supervised; no examples of good vs. bad binarizations are required.  ...  Binarization techniques, such as half-toning, stippling, and hatching, have been widely used for modeling the original image's intensity profile.  ...  Neural Architectures: Finding the appropriate neural network for this task required numerous architecture decisions and extensive empirical testing.  ...
arXiv:2105.01768v1 fatcat:tqse3l7vofh7telxldrx7iagye

Architecture Agnostic Neural Networks [article]

Sabera Talukder, Guruprasad Raghavan, Yisong Yue
2020 arXiv   pre-print
In summation, we create an architecture manifold search procedure to discover families of architecture agnostic neural networks.  ...  This contrast begs the question: Can we build artificial architecture agnostic neural networks? To ground this study we utilize sparse, binary neural networks that parallel the brain's circuits.  ...  Sparse Binarized Neural Network: Throughout this paper, a binarized neural network refers to networks with weights constrained to (-1, 0, +1).  ...
arXiv:2011.02712v2 fatcat:wm7gml465bfefhg7f2zdtalbnu
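The definition quoted above (weights constrained to (-1, 0, +1)) can be illustrated with a simple magnitude-threshold ternarization. This is a generic sketch, not the authors' procedure; the threshold value is an arbitrary choice for illustration.

```python
import numpy as np

def ternarize(weights: np.ndarray, delta: float = 0.05) -> np.ndarray:
    """Map real-valued weights onto {-1, 0, +1}.

    Weights with magnitude below `delta` are pruned to 0 (sparsity);
    the rest keep only their sign. `delta` is an illustrative constant,
    not a value taken from the paper.
    """
    ternary = np.sign(weights)
    ternary[np.abs(weights) < delta] = 0
    return ternary

w = np.random.randn(4, 4) * 0.1
print(ternarize(w))
```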

XNOR-Net: ImageNet Classification Using Binary Convolutional Neural Networks [chapter]

Mohammad Rastegari, Vicente Ordonez, Joseph Redmon, Ali Farhadi
2016 Lecture Notes in Computer Science  
We propose two efficient approximations to standard convolutional neural networks: Binary-Weight-Networks and XNOR-Networks.  ...  We compare our method with recent network binarization methods, BinaryConnect and BinaryNets, and outperform these methods by large margins on ImageNet, more than 16% in top-1 accuracy.  ...  Binary-Weight-Networks: In order to constrain a convolutional neural network ⟨I, W, ∗⟩ to have binary weights, we estimate the real-value weight filter W ∈ W using a binary filter B ∈ {+1, −1}^{c×w×h} and a  ...
doi:10.1007/978-3-319-46493-0_32 fatcat:n757ualbzzc5pof4hcv2e3tema
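The truncated sentence above describes the Binary-Weight-Networks approximation: a real-valued filter W is replaced by a binary filter B together with a scaling factor. A minimal sketch using the commonly cited closed-form solution (B = sign(W), scale = mean of |W|) follows; the array shape and variable names are illustrative.

```python
import numpy as np

def binarize_filter(W: np.ndarray):
    """Approximate a real-valued filter W by alpha * B with B in {+1, -1}.

    The closed-form minimizer of ||W - alpha * B||^2 is B = sign(W) and
    alpha = mean(|W|), the approximation used by Binary-Weight-Networks.
    """
    B = np.sign(W)
    B[B == 0] = 1              # map sign(0) to +1 so B stays strictly binary
    alpha = np.abs(W).mean()
    return alpha, B

W = np.random.randn(3, 3, 3)           # one c x w x h filter (illustrative shape)
alpha, B = binarize_filter(W)
print(np.linalg.norm(W - alpha * B))   # reconstruction error of the approximation
```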

XNOR-Net: ImageNet Classification Using Binary Convolutional Neural Networks [article]

Mohammad Rastegari, Vicente Ordonez, Joseph Redmon, Ali Farhadi
2016 arXiv   pre-print
We propose two efficient approximations to standard convolutional neural networks: Binary-Weight-Networks and XNOR-Networks.  ...  We compare our method with recent network binarization methods, BinaryConnect and BinaryNets, and outperform these methods by large margins on ImageNet, more than 16% in top-1 accuracy.  ...  XNOR-Net: ImageNet Classification Using Binary Convolutional Neural Networks  ... 
arXiv:1603.05279v4 fatcat:dsfl3pwi55fxzcicde65yh4s3y

Verifying Properties of Binarized Deep Neural Networks

Nina Narodytska, Shiva Kasiviswanathan, Leonid Ryzhyk, Mooly Sagiv, Toby Walsh
2018 Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence and the Thirtieth Innovative Applications of Artificial Intelligence Conference
Using this encoding, we leverage the power of modern SAT solvers along with a proposed counterexample-guided search procedure to verify various properties of these networks.  ...  For this property, our experimental results demonstrate that our approach scales to medium-size deep neural networks used in image classification tasks.  ...  Our main contribution is a procedure for constructing a SAT encoding of a binarized neural network.  ... 
doi:10.1609/aaai.v32i1.12206 fatcat:3xq3swqvgna7dckk2uuz67fp64
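The SAT encoding referred to here exploits the fact that, for ±1 activations and ±1 weights, a binarized neuron's thresholded inner product is equivalent to a counting (cardinality) condition, which SAT solvers handle well. The small numerical check below illustrates that equivalence; it is only a sketch of the underlying observation, not the paper's full encoding.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 16
a = rng.choice([-1, 1], size=n)        # binarized activations
w = rng.choice([-1, 1], size=n)        # binarized weights
b = 3                                  # bias / threshold term

# Direct evaluation of the neuron: fires iff <a, w> + b >= 0.
direct = (a @ w + b) >= 0

# Counting view: <a, w> = n - 2*d, where d = #positions with a_i != w_i,
# so the neuron fires iff d <= (n + b) / 2 -- a pure cardinality constraint.
d = int(np.sum(a != w))
counting = d <= (n + b) / 2

assert direct == counting
print(direct, counting, d)
```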

Controlling Information Capacity of Binary Neural Network

Dmitry Ignatov, Andrey Ignatov
2020 Pattern Recognition Letters  
While binary convolutional networks can alleviate these problems, the limited bitwidth of weights often leads to significant degradation of prediction accuracy.  ...  The results of experiments conducted on SVHN, CIFAR and ImageNet datasets demonstrate that the proposed approach can statistically significantly improve the accuracy of binary networks.  ...  Binary Neural Networks: Training and inference with deep neural networks usually consume a large amount of computational power, which makes them hard to deploy on mobile devices.  ...
doi:10.1016/j.patrec.2020.07.033 fatcat:eel2e7xx6nbyjfrjlohrfwyd2m

Verifying Properties of Binarized Deep Neural Networks [article]

Nina Narodytska, Shiva Prasad Kasiviswanathan, Leonid Ryzhyk, Mooly Sagiv, Toby Walsh
2018 arXiv   pre-print
Using this encoding, we leverage the power of modern SAT solvers along with a proposed counterexample-guided search procedure to verify various properties of these networks.  ...  For this property, our experimental results demonstrate that our approach scales to medium-size deep neural networks used in image classification tasks.  ...  Our main contribution is a procedure for constructing a SAT encoding of a binarized neural network.  ... 
arXiv:1709.06662v2 fatcat:dirxvig4kzbo7fhjy4vhkiadsi

Discretization based Solutions for Secure Machine Learning against Adversarial Attacks [article]

Priyadarshini Panda, Indranil Chakraborty, Kaushik Roy
2019 arXiv   pre-print
Intuitively, constraining the dimensionality of inputs or parameters of a network reduces the 'space' in which adversarial examples exist.  ...  Furthermore, we find that Binary Neural Networks (BNNs) and related variants are intrinsically more robust than their full precision counterparts in adversarial scenarios.  ...  Similarly, discretizing the parameter space (as in binarized neural networks (BNNs) [6]) will introduce discontinuities and quantization in the manifold (that is non-linear by nature).  ...
arXiv:1902.03151v2 fatcat:nboatf4h4fdsdly2q65zjetnhm
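As a rough illustration of the input-discretization idea discussed above, the sketch below reduces 8-bit pixel values to a coarser grid; the 2-bit setting is an arbitrary example, not necessarily the configuration studied in the paper.

```python
import numpy as np

def discretize_input(x_uint8: np.ndarray, bits: int = 2) -> np.ndarray:
    """Quantize 8-bit pixel values down to `bits` bits per channel.

    Small perturbations that do not cross a quantization boundary are
    removed; `bits=2` is an illustrative choice.
    """
    levels = 2 ** bits
    step = 256 // levels
    quantized = (x_uint8 // step) * step + step // 2   # mid-rise levels
    return quantized.astype(np.uint8)

img = np.random.randint(0, 256, size=(32, 32, 3), dtype=np.uint8)
print(np.unique(discretize_input(img)))                # only 4 distinct values remain
```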

Discretization based Solutions for Secure Machine Learning against Adversarial Attacks

Priyadarshini Panda, Indranil Chakraborty, Kaushik Roy
2019 IEEE Access  
INDEX TERMS: Adversarial robustness, deep learning, discretization techniques, binarized neural networks.  ...  Intuitively, constraining the dimensionality of inputs or parameters of a network reduces the "space" in which adversarial examples exist.  ...  Galloway et al. simply analysed the effect of attacking binarized neural networks (without input discretization).  ...
doi:10.1109/access.2019.2919463 fatcat:5ynk63pcwnchhkkz7j4i2wm6p4

From Seq2Seq Recognition to Handwritten Word Embeddings

George Retsinas, Giorgos Sfikas, Christophoros Nikou, Petros Maragos
2021 British Machine Vision Conference  
Additionally, we show how to further process these embeddings/representations with a binarization scheme to provide compact and highly efficient descriptors, suitable for Keyword Spotting.  ...  In this work, we propose a system for automatically extracting handwritten word embeddings, using the encoding module of a Sequence-to-Sequence (Seq2Seq) recognition network.  ...  Given the trained system and an input image, the recognized character sequence can be generated by a CTC decoding procedure [10].  ...
dblp:conf/bmvc/RetsinasSNM21 fatcat:m3tzkrkeobfx5c6bg7tfj4gov4
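The binarization of embeddings mentioned here can be illustrated generically by sign-thresholding a real-valued embedding and comparing bit vectors with Hamming distance; the specific scheme in the paper may differ, and the dimensions below are made up.

```python
import numpy as np

def binarize_embedding(v: np.ndarray) -> np.ndarray:
    """Turn a real-valued embedding into a packed bit vector (1 bit per dimension)."""
    return np.packbits(v > 0)

def hamming_distance(a: np.ndarray, b: np.ndarray) -> int:
    """Hamming distance between two packed bit vectors."""
    return int(np.unpackbits(a ^ b).sum())

q = np.random.randn(256)                    # query word embedding (illustrative size)
w = q + 0.1 * np.random.randn(256)          # a near-duplicate word
print(hamming_distance(binarize_embedding(q), binarize_embedding(w)))
```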

Training Multi-bit Quantized and Binarized Networks with A Learnable Symmetric Quantizer

Phuoc Pham, Jacob A. Abraham, Jaeyong Chung
2021 IEEE Access  
Quantizing weights and activations of deep neural networks is essential for deploying them on resource-constrained devices or cloud platforms for at-scale services.  ...  As a result, recent quantization methods do not provide binarization, thus losing the most resource-efficient option, and quantized and binarized networks have been distinct research areas.  ...  While Zhou [57] considered binarized neural networks together with multi-bit quantization, binarized neural networks have been a distinct research topic from the quantized models.  ...
doi:10.1109/access.2021.3067889 fatcat:okciggt7lvb2pfdxgx3gg2w2zq
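A symmetric uniform quantizer of the kind discussed here maps values onto a signed grid controlled by a single (potentially learnable) scale. The sketch below shows only the forward mapping under that assumption; how the scale is trained, and the paper's exact formulation, are not reproduced here.

```python
import numpy as np

def symmetric_quantize(x: np.ndarray, scale: float, bits: int) -> np.ndarray:
    """Uniform symmetric quantization with a single step size `scale`.

    Multi-bit case: round to integer multiples of `scale` and clip to the
    signed range. The 1-bit (binary) case reduces to sign(x) * scale.
    """
    if bits == 1:
        return np.where(x >= 0, scale, -scale)
    qmax = 2 ** (bits - 1) - 1
    q = np.clip(np.round(x / scale), -qmax, qmax)
    return q * scale

x = np.random.randn(8)
print(symmetric_quantize(x, scale=0.25, bits=4))   # 4-bit grid
print(symmetric_quantize(x, scale=1.0, bits=1))    # binarized values in {-1, +1}
```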

BreakingBED – Breaking Binary and Efficient Deep Neural Networks by Adversarial Attacks [article]

Manoj Rohit Vemparala, Alexander Frickenstein, Nael Fasfous, Lukas Frickenstein, Qi Zhao, Sabine Kuhn, Daniel Ehrhardt, Yuankai Wu, Christian Unger, Naveen Shankar Nagaraja, Walter Stechele
2021 arXiv   pre-print
In this paper, we thoroughly study the robustness of uncompressed, distilled, pruned and binarized neural networks against white-box and black-box adversarial attacks (FGSM, PGD, C&W, DeepFool, LocalSearch  ...  Experimental results are shown for distilled CNNs, agent-based state-of-the-art pruned models, and binarized neural networks (BNNs) such as XNOR-Net and ABC-Net, trained on CIFAR-10 and ImageNet datasets  ...  Binarization: Binarization represents the most aggressive form of quantization, where the network weights W and activations are constrained to ±1 values.  ...
arXiv:2103.08031v1 fatcat:b3yvnuenofe3rnq5ufasmokk3e
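Of the attacks listed, FGSM is the simplest to state: perturb the input by ε in the direction of the sign of the loss gradient. A self-contained sketch against a toy linear classifier (not one of the paper's CNN/BNN models) looks like this:

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
model = torch.nn.Linear(3 * 32 * 32, 10)       # toy stand-in for a CIFAR-10 classifier

def fgsm(x: torch.Tensor, y: torch.Tensor, eps: float) -> torch.Tensor:
    """Fast Gradient Sign Method: x_adv = x + eps * sign(dL/dx)."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    x_adv = x + eps * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()       # keep pixels in the valid range

x = torch.rand(1, 3 * 32 * 32)                  # a flattened "image"
y = torch.tensor([3])                           # its true label
x_adv = fgsm(x, y, eps=8 / 255)
print((x_adv - x).abs().max())                  # perturbation bounded by eps
```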
Showing results 1 — 15 out of 1,720 results