58,549 Hits in 2.9 sec

Privacy-Preserving Personal Model Training [article]

Sandra Servia-Rodriguez, Liang Wang, Jianxin R. Zhao, Richard Mortier, Hamed Haddadi
2018 arXiv   pre-print
Many current Internet services rely on inferences from models trained on user data.  ...  We explore how to provide for model training and inference in a system where computation is pushed to the data in preference to moving data to the cloud, obviating many current privacy risks.  ...  [16], where privacy-preserving models are learned locally from disjoint datasets and then combined in a privacy-preserving fashion.  ...
arXiv:1703.00380v3 fatcat:7xridmsatfgelecuv5x7vvx6ia

Privacy-Preserving Personal Model Training

Sandra Servia-Rodriguez, Liang Wang, Jianxin R. Zhao, Richard Mortier, Hamed Haddadi
2018 2018 IEEE/ACM Third International Conference on Internet-of-Things Design and Implementation (IoTDI)  
Many current Internet services rely on inferences from models trained on user data.  ...  We explore how to provide for model training and inference in a system where computation is pushed to the data in preference to moving data to the cloud, obviating many current privacy risks.  ...  [16], where privacy-preserving models are learned locally from disjoint datasets and then combined in a privacy-preserving fashion.  ...
doi:10.1109/iotdi.2018.00024 dblp:conf/iotdi/RodriguezWZMH18 fatcat:azl5r4mmfzcyvedlxmwpc3nlpi

VirtualIdentity: Privacy preserving user profiling

Sisi Wang, Wing-Sea Poon, Golnoosh Farnadi, Caleb Horst, Kebra Thompson, Michael Nickels, Anderson Nascimento, Martine De Cock
2016 2016 IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM)  
...  pictures with trained support vector machine models in a privacy-preserving manner.  ...  In this paper we show that it is possible to build user profiles without ever accessing the user's original data, and without exposing the trained machine learning models used for user profiling.  ...
doi:10.1109/asonam.2016.7752438 dblp:conf/asunam/WangPFHTNNC16 fatcat:xitdqdkrfnh3nn4wipsqmjcqnq

A review of privacy-preserving human and human activity recognition

Im Y. Jung
2020 International Journal on Smart Sensing and Intelligent Systems  
This paper analyzes the cutting-edge research trends, techniques, and issues of privacy-preserving human and human activity recognition.  ...  privacy infringement issues.  ...  Table 4. Privacy-preserving approaches.  ...
doi:10.21307/ijssis-2020-008 fatcat:fmrjiyw63jdxroj3ssd74iliq4

Privacy Preservation in Federated Learning: An insightful survey from the GDPR Perspective [article]

Nguyen Truong, Kai Sun, Siyao Wang, Florian Guitton, Yike Guo
2021 arXiv   pre-print
Conventionally, data is collected and aggregated in a data centre on which machine learning models are trained.  ...  This centralised approach has induced severe privacy risks of personal data leakage, misuse, and abuse.  ...  The privacy-preservation advantage of FL compared to traditional centralised ML approaches is undeniable: it enables training an ML model whilst retaining personal training data on end-users' devices  ...
arXiv:2011.05411v5 fatcat:ossykehtlzaalbr34bequrhpeq
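The snippet above captures the core idea of federated learning: the model travels to the data rather than the reverse, and the server only ever sees model weights. A minimal sketch of a federated-averaging round on a toy linear model (all function names and hyperparameters here are illustrative, not taken from the surveyed paper):

```python
import numpy as np

def local_update(global_weights, local_data, lr=0.1):
    """One client's training step: its raw data never leaves the device.
    A toy linear model is nudged by one local squared-error gradient step."""
    X, y = local_data
    grad = X.T @ (X @ global_weights - y) / len(y)
    return global_weights - lr * grad

def federated_average(client_weights, client_sizes):
    """Server aggregates only the returned weights, weighted by dataset size."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# Two clients holding disjoint private datasets
rng = np.random.default_rng(0)
w_global = np.zeros(3)
clients = [(rng.normal(size=(20, 3)), rng.normal(size=20)) for _ in range(2)]

for _ in range(5):  # communication rounds
    updates = [local_update(w_global, d) for d in clients]
    w_global = federated_average(updates, [len(d[1]) for d in clients])
```

Note that plain federated averaging alone is not a privacy guarantee; as several of the surveys in this list discuss, it is typically combined with differential privacy or secure aggregation.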

Challenges of Privacy-Preserving Machine Learning in IoT [article]

Mengyao Zheng, Dixing Xu, Linshan Jiang, Chaojie Gu, Rui Tan, Peng Cheng
2019 arXiv   pre-print
This paper provides a taxonomy of the existing privacy-preserving machine learning approaches developed in the context of cloud computing and discusses the challenges of applying them in the context of  ...  Moreover, we present a privacy-preserving inference approach that runs a lightweight neural network at IoT objects to obfuscate the data before transmission and a deep neural network in the cloud to classify  ...  Privacy-Preserving Training: The latest privacy-preserving training approaches that leverage distributed privacy-sensitive data to construct a global ML model or multiple local ML models can be divided  ...
arXiv:1909.09804v1 fatcat:4jek6hmekvbkjp5id2r52m2cpu

Privacy in Open Search: A Review of Challenges and Solutions

Samuel Sousa, Christian Guetl, Roman Kern
2022 Zenodo  
IR models can also be made privacy-preserving through the use of searchable encryption (SE).  ...  Searches on personal data often present privacy risks from both the data-provider and model sides.  ...
doi:10.5281/zenodo.5887680 fatcat:hnwrhs7bbbbgdbtjmwhkoik5ei

Privacy in Open Search: A Review of Challenges and Solutions [article]

Samuel Sousa, Christian Guetl, Roman Kern
2021 arXiv   pre-print
...  privacy hazards; finally, we bring insights on the trade-offs between privacy preservation and utility performance for IR tasks.  ...  Artificial intelligence areas, such as machine learning and natural language processing, have already successfully employed privacy-preserving mechanisms to safeguard data privacy in a vast number  ...  This work therefore aims at pointing out open challenges with regard to privacy in IR tasks, as well as reviewing appropriate privacy-preserving methods to safeguard personal data or models.  ...
arXiv:2110.10720v2 fatcat:lodtwxmakvdrtngy54mn3j4joy

A Novel Privacy-Preserved Recommender System Framework based on Federated Learning [article]

Jiangcheng Qin, Baisong Liu
2020 arXiv   pre-print
This paper proposes a novel privacy-preserved recommender system framework (PPRSF) that applies the federated learning paradigm to enable the recommendation algorithm to be trained and carry  ...  To predict users' next-click behavior, an RS needs to collect users' personal information and behavior to build a comprehensive and profound perception of user preferences.  ...  Privacy-preserved RS: In response to the privacy threats, privacy-preserved RSs use specially designed architectures or algorithms as countermeasures.  ...
arXiv:2011.05614v1 fatcat:rpb227km3bb3dhr3hwjj6rgv5q

Interpretable Machine Learning for Privacy-Preserving Pervasive Systems [article]

Benjamin Baron, Mirco Musolesi
2019 arXiv   pre-print
In this paper, we propose a machine learning interpretability framework that enables users to understand how these generated traces violate their privacy.  ...  Privacy-preserving component.  ...  The privacy budget controls the level of privacy provided by the privacy-preserving component.  ... 
arXiv:1710.08464v6 fatcat:fv66extdtzf65ofz7amyjwhdqq

An Efficient Privacy-preserving Deep Learning Scheme for Medical Image Analysis

J. Andrew Onesimu, J Karthikeyan
2020 Journal of Information Technology Management  
In recent years, privacy has emerged as one of the major concerns of deep learning, since it requires huge amounts of personal data.  ...  Data providers morph the images to remove private information using an image-morphing component.  ...  Differential privacy: Differential privacy is one of the privacy-preserving techniques used in deep learning to prevent training-data leakage from the model.  ...
doi:10.22059/jitm.2020.79191 doaj:9a2565b20f8740b98d10c745c9cae1c9 fatcat:fqbifqdmeze7vgn35wj3clngde
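The differential-privacy technique mentioned in the snippet above is easiest to see on a simple counting query rather than on deep-learning gradients: noise calibrated to the query's sensitivity and the privacy budget epsilon is added before the result is released. A minimal sketch (function and parameter names are illustrative, not from the paper):

```python
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng):
    """Release true_value with Laplace noise of scale sensitivity/epsilon,
    making the output epsilon-differentially private for this query."""
    scale = sensitivity / epsilon
    return true_value + rng.laplace(0.0, scale)

rng = np.random.default_rng(42)
ages = [34, 29, 51, 47, 38]
# A counting query has sensitivity 1: adding or removing one person
# changes the count by at most 1.
noisy_count = laplace_mechanism(len(ages), sensitivity=1.0, epsilon=0.5, rng=rng)
```

Smaller epsilon means larger noise and stronger privacy; DP training of neural networks (e.g. DP-SGD) applies the same calibrated-noise idea to clipped per-example gradients.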

Practical and Secure Federated Recommendation with Personalized Masks [article]

Liu Yang, Ben Tan, Bo Liu, Vincent W. Zheng, Kai Chen, Qiang Yang
2021 arXiv   pre-print
Compared with homomorphic encryption, secret sharing largely speeds up the whole training process.  ...  Besides, we also provide a privacy guarantee and discuss the extension of the personalized mask method to general federated learning tasks.  ...  A personalized mask is a mask added to the original data to preserve privacy. "Personalized" means that the mask varies according to the user's data, which helps improve model accuracy.  ...
arXiv:2109.02464v1 fatcat:pmipkteop5hixldoc2a6dwzzyy
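The masking idea underlying secure aggregation (which secret-sharing-based schemes like the one above accelerate relative to homomorphic encryption) can be illustrated in a few lines: pairs of clients agree on random masks that cancel exactly when the server sums the masked updates, so the server learns the aggregate but no individual update. A toy sketch of pairwise additive masking, not the paper's actual protocol:

```python
import numpy as np

rng = np.random.default_rng(7)
updates = [rng.normal(size=4) for _ in range(3)]  # clients' private model updates
n = len(updates)

# Each unordered pair (i, j), i < j, shares one random mask vector.
masks = {(i, j): rng.normal(size=4) for i in range(n) for j in range(i + 1, n)}

masked = []
for i in range(n):
    u = updates[i].copy()
    for j in range(n):
        if i < j:
            u += masks[(i, j)]   # client i adds the shared mask
        elif j < i:
            u -= masks[(j, i)]   # client j subtracts the same mask
    masked.append(u)

# The server sees only masked vectors, yet every mask appears once with +
# and once with -, so the sum equals the true aggregate.
aggregate = sum(masked)
assert np.allclose(aggregate, sum(updates))
```

Real protocols additionally handle client dropout and derive the pairwise masks from key agreement rather than trusting a shared RNG.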

Security and Privacy Preserving Deep Learning [article]

Saichethan Miriyala Reddy, Saisree Miriyala
2020 arXiv   pre-print
In this chapter, we introduce differential privacy, which ensures that different kinds of statistical analyses don't compromise privacy, and federated learning, training a machine learning model on a data  ...  The more personal the data is, the more restricted it is, which means some of the most important social issues cannot be addressed using machine learning because researchers do not have access to proper training  ...  Local: In local models, the model is trained on-device, which preserves data privacy, but this approach has major drawbacks such as latency, battery usage, and computation wastage, and models will only be sub  ...
arXiv:2006.12698v2 fatcat:qk565j54dzf5fnptrci435dpji

What Does it Mean for a Language Model to Preserve Privacy? [article]

Hannah Brown, Katherine Lee, Fatemehsadat Mireshghallah, Reza Shokri, Florian Tramèr
2022 arXiv   pre-print
Thus there is a growing interest in techniques for training language models that preserve privacy.  ...  We conclude that language models should be trained on text data which was explicitly produced for public use.  ...  What does preserving privacy in language modeling require?  ... 
arXiv:2202.05520v2 fatcat:pxlryd5bgzb4jd5ifvkrklzrty

Adaptive Random Decision Tree: A New Approach for Data Mining with Privacy Preserving

Hemlata B. Deorukhakar, Prof. Pradnya Kasture
2015 International Journal of Innovative Research in Computer and Communication Engineering  
A Random Decision Tree (RDT) with data privacy generates an equivalent and accurate model, but it is also slow in computation time as the distributed data grows.  ...  Privacy-preserving ARDT provides better accuracy in data mining while preserving data privacy, and reduces computation time compared to the privacy-preserving RDT framework.  ...  The dissertation is based on the research work in "A Random Decision Tree Framework for Privacy-Preserving Data Mining" by G.  ...
doi:10.15680/ijircce.2015.0307004 fatcat:bbhvvop36ferzpw3dhrmzv4kca