SOTERIA: In Search of Efficient Neural Networks for Private Inference (arXiv pre-print, 2020)
ML-as-a-service is gaining popularity where a cloud server hosts a trained model and offers prediction (inference) service to users. ...
We use neural architecture search algorithms with the dual objective of optimizing the accuracy of the model and the overhead of using cryptographic primitives for secure inference. ...
Efficiency is one of the key factors while designing a client-server application such as a neural network inference service on the cloud. ...
arXiv:2007.12934v1
fatcat:tdch7v4uu5e3dbokfrtl27mgum
AESPA: Accuracy Preserving Low-degree Polynomial Activation for Fast Private Inference (arXiv pre-print, 2022)
The hybrid private inference (PI) protocol, which combines multi-party computation (MPC) and homomorphic encryption, is one of the most prominent techniques for PI. ...
Although a standard non-linear activation function can generate higher model accuracy, it must be processed via a costly garbled-circuit MPC primitive. ...
to a service provider or ii) the trained model of a service provider to a client. ...
arXiv:2201.06699v2
fatcat:7lzbejka35a2hopzghgyaigwfq
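The substitution the AESPA abstract describes, replacing a costly non-linear activation with a low-degree polynomial, can be sketched in a few lines. The coefficients below are illustrative placeholders, not AESPA's actual trained values:

```python
def relu(x):
    # Standard non-linear activation; under hybrid PI this requires
    # an expensive garbled-circuit evaluation.
    return max(x, 0.0)

def poly_act(x, a=0.25, b=0.5, c=0.0):
    # Degree-2 replacement a*x^2 + b*x + c; a polynomial like this can
    # be evaluated with cheap additions and multiplications instead.
    # The coefficients here are hypothetical, for illustration only.
    return a * x * x + b * x + c

for v in (-2.0, 0.0, 2.0):
    print(relu(v), poly_act(v))
```

Because a degree-2 polynomial uses only additions and multiplications, it maps directly onto homomorphic or secret-sharing primitives, which is why this family of papers trades exact ReLU behavior for polynomial approximations.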
Circa: Stochastic ReLUs for Private Deep Learning (arXiv pre-print, 2021)
The simultaneous rise of machine learning as a service and concerns over user privacy have increasingly motivated the need for private inference (PI). ...
The key observation is that the stochastic fault behavior is well suited for the fault-tolerant properties of neural network inference. ...
XONN [18] enables private inference using only GCs for binarized neural networks and leverages the fact that XORs can be computed for free in the GC protocol to achieve speedups. ...
arXiv:2106.08475v1
fatcat:gsbz5hur6zehvb6a37r4sf47xe
Privacy-preserving Cloud-based DNN Inference (arXiv pre-print, 2021)
Although some privacy-preserving deep neural network (DNN) inference techniques have been proposed by composing cryptographic primitives, the challenges of computational efficiency have not been ...
Deep learning as a service (DLaaS) has been intensively studied to facilitate the wider deployment of the emerging deep learning applications. ...
The authors would like to thank the anonymous reviewers for their constructive comments. ...
arXiv:2102.03915v2
fatcat:zr4vgfbsu5h6lmed4is53qkage
AutoPrivacy: Automated Layer-wise Parameter Selection for Secure Neural Network Inference (arXiv pre-print, 2020)
emerging Machine Learning as a Service (MLaaS). ...
In this paper, for fast and accurate secure neural network inference, we propose an automated layer-wise parameter selector, AutoPrivacy, that leverages deep reinforcement learning to automatically determine ...
Acknowledgments: The authors would like to thank the anonymous reviewers for their valuable comments and helpful suggestions. ...
arXiv:2006.04219v2
fatcat:fnip7ikk5fgpnl3poze3aobweu
Selective Network Linearization for Efficient Private Inference (arXiv pre-print, 2022)
Private inference (PI) enables inference directly on cryptographically secure data. While promising to address many privacy issues, it has seen limited use due to extreme runtimes. ...
To complement empirical results, we present a "no free lunch" theorem that sheds light on how and when network linearization is possible while maintaining prediction accuracy. ...
At a high level, the vision of private inference is to enable a user to (efficiently) perform inference of their data on a model owned by a cloud service provider. ...
arXiv:2202.02340v1
fatcat:mt2ckfxdefggrd4sgpo2p6fjnu
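The core idea of network linearization, keeping only a chosen subset of ReLUs non-linear and replacing the rest with the identity, can be sketched as follows. The indices passed in `keep_relu` are arbitrary examples, not the paper's learned selection:

```python
def relu(v):
    # Non-linear activation: the expensive operation in PI protocols.
    return [max(x, 0.0) for x in v]

def identity(v):
    # "Linearized" activation: free under linear secret sharing.
    return v

def build_activations(num_layers, keep_relu):
    # keep_relu holds the indices of layers whose ReLU survives;
    # every other ReLU is replaced by the identity.
    return [relu if i in keep_relu else identity for i in range(num_layers)]

acts = build_activations(4, keep_relu={0, 2})
x = [-1.0, 2.0]
print(acts[0](x))  # ReLU kept: negatives clipped to zero
print(acts[1](x))  # linearized: values pass through unchanged
```

The practical question these papers study is which ReLUs can be dropped this way: the "no free lunch" result quoted above concerns when such linearization can preserve prediction accuracy.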
CryptoNite: Revealing the Pitfalls of End-to-End Private Inference at Scale (arXiv pre-print, 2021)
The privacy concerns of providing deep learning inference as a service have underscored the need for private inference (PI) protocols that protect users' data and the service provider's model using cryptographic ...
Paired with recent optimizations that tailor networks for PI, these protocols have achieved performance levels that are tantalizingly close to being practical. ...
ACKNOWLEDGEMENTS This work was supported in part by the Applications Driving Architectures (ADA) Research Center, a JUMP Center co-sponsored by SRC and DARPA. ...
arXiv:2111.02583v1
fatcat:w5cft4qgvrcuhhcfo4nrk57xye
Accelerating 2PC-based ML with Limited Trusted Hardware (arXiv pre-print, 2020)
This paper describes the design, implementation, and evaluation of Otak, a system that allows two non-colluding cloud providers to run machine learning (ML) inference without knowing the inputs to inference ...
An implementation and evaluation of Otak demonstrates that its CPU and network overhead converted to a dollar amount is 5.4-385× lower than state-of-the-art 2PC-based works. ...
Acknowledgments We thank Ishtiyaque Ahmad, Alvin Glova, Rakshith Gopalakrishna, Arpit Gupta, Abhishek Jain, Srinath Setty, Jinjin Shao, Tim Sherwood, Michael Walfish, and Rich Wolski for feedback and comments ...
arXiv:2009.05566v1
fatcat:fbh6spwmcjhkxiura3aexuyjsi
Enhanced Security in Cloud Computing Using Neural Network and Encryption (IEEE Access, 2021)
This technique allows the computations to be performed directly on floating-point data within a neural network with a minor computational overhead. ...
To address this problem, we propose a new security design using Artificial Neural Networks (ANN) and encryption to confirm a safe communication system in the cloud environment, by letting the third parties ...
The authors of [9] design, implement, and evaluate DELPHI, a secure prediction scheme that performs neural network inference between two parties without disclosing either party's data. ...
doi:10.1109/access.2021.3122938
fatcat:jpnki543zncbnij37pivanhbvi
Sphynx: ReLU-Efficient Network Design for Private Inference (arXiv pre-print, 2021)
We focus on private inference (PI), where the goal is to perform inference on a user's data sample using a service provider's model. ...
Existing PI methods for deep networks enable cryptographically secure inference with little drop in functionality; however, they incur severe latency costs, primarily caused by non-linear network operations ...
These modifications, compared to Appendix E.1, allow designing ReLU-efficient networks with existing NAS cells. ...
arXiv:2106.11755v1
fatcat:hmmviv2sujafjpqqlezxn6jnbm
Fusion: Efficient and Secure Inference Resilient to Malicious Server and Curious Clients (arXiv pre-print, 2022)
On the basis of this method, Fusion can be used as a general compiler for converting any semi-honest inference scheme into a maliciously secure one. ...
Without leveraging expensive cryptographic techniques, a novel mix-and-check method is designed to ensure that the server uses a well-trained model as input and correctly performs the inference computations ...
A. Neural Network Inference: The convolutional neural network (CNN) is one of the most popular neural network architectures today. ...
arXiv:2205.03040v1
fatcat:67n5nydnn5glzpcwl6p6jc3lm4
SoK: Privacy-Preserving Computation Techniques for Deep Learning (Proceedings on Privacy Enhancing Technologies, 2021)
Deep Learning (DL) is a powerful solution for complex problems in many disciplines such as finance, medical research, or social sciences. ...
Due to the high computational cost of DL algorithms, data scientists often rely upon Machine Learning as a Service (MLaaS) to outsource the computation onto third-party servers. ...
Acknowledgments We thank the anonymous reviewers and our shepherd, Phillipp Schoppmann, for their valuable feedback. We also thank Alberto Di Meglio, Marco Manca ...
doi:10.2478/popets-2021-0064
fatcat:hb3kdruxozbspnowy63gynuapy
CryptoSPN: Privacy-preserving Sum-Product Network Inference (arXiv pre-print, 2020)
In this paper, we present CryptoSPN, a framework for privacy-preserving inference of sum-product networks (SPNs). ...
Using cryptographic techniques, it is possible to perform inference tasks remotely on sensitive client data in a privacy-preserving way: the server learns nothing about the input data and the model predictions ...
So far, efforts were focused on deep/convolutional neural networks, see [38] for a recent systematization of knowledge. ...
arXiv:2002.00801v1
fatcat:ni36djroi5fufknem3seyflpou
Privacy-Preserving Machine Learning: Methods, Challenges and Directions (arXiv pre-print, 2021)
challenges and a research roadmap for future research in the PPML area. ...
A trained ML model may also be vulnerable to adversarial attacks such as membership, attribute, or property inference attacks and model inversion attacks. ...
[139] recently proposed the Delphi framework for a cryptographic inference service for neural networks. ...
arXiv:2108.04417v2
fatcat:pmxmsbs2gvh6nd4jadcz4dnsrq
Sisyphus: A Cautionary Tale of Using Low-Degree Polynomial Activations in Privacy-Preserving Deep Learning (arXiv pre-print, 2021)
In this work, we ask: Is it feasible to substitute all ReLUs with low-degree polynomial activation functions for building deep, privacy-friendly neural networks? ...
Privacy concerns in client-server machine learning have given rise to private inference (PI), where neural inference occurs directly on encrypted inputs. ...
ACKNOWLEDGEMENTS This work was supported in part by the Applications Driving Architectures (ADA) Research Center, a JUMP Center co-sponsored by SRC and DARPA. ...
arXiv:2107.12342v2
fatcat:mjmlfmwmvjdallugcw7otxarku
Showing results 1 — 15 out of 49 results