
Data Retention and Anonymity Services [chapter]

Stefan Berthold, Rainer Böhme, Stefan Köpsell
2009 IFIP Advances in Information and Communication Technology  
We argue that data retention requires a review of existing security evaluations against a new class of realistic adversary models.  ...  Our adversary model reflects an interpretation of the current implementation of the EC Directive on Data Retention in Germany.  ...  Finally, Section 9 concludes the paper and points to further generalisations and research topics of interest against the backdrop of the new class of realistic adversary models.  ... 
doi:10.1007/978-3-642-03315-5_7 fatcat:zusspn6smvbqvojdvuf6k6cpci

Privacy Risks and Countermeasures in Publishing and Mining Social Network Data

Chiemi Watanabe, Toshiyuki Amagasa, Ling Liu
2011 Proceedings of the 7th International Conference on Collaborative Computing: Networking, Applications and Worksharing  
We argue that information exposure levels can serve as a general and usage-neutral metric for the utility of anonymized data, and that query types can serve as the baseline usage-driven  ...  As interest in sharing and mining social network data continues to grow, we see a growing demand for privacy-preserving social network data publishing.  ...
doi:10.4108/icst.collaboratecom.2011.247177 dblp:conf/colcom/WatanabeAL11 fatcat:fo27obltczbu7kgsrkcfk4msuq

Covert Communications Despite Traffic Data Retention [chapter]

George Danezis
2011 Lecture Notes in Computer Science  
We show that Alice and Bob can communicate covertly and anonymously, despite Eve having access to the traffic data of most machines on the Internet.  ...  The feasibility of covert communications despite stringent traffic data retention, has far reaching policy consequences.  ...  Acknowledgments Many thanks to Nick Feamster for suggesting having a look at the IPID mechanisms in IP. Klaus Kursawe suggested using shared state in on-line games for covert communications.  ... 
doi:10.1007/978-3-642-22137-8_27 fatcat:cu74ah4x6nbwphylf2ferdbw44

A comprehensive review on privacy preserving data mining

Yousra Abdul Alsahib S. Aldeen, Mazleena Salleh, Mohammad Abdur Razzaque
2015 SpringerPlus  
This article provides a panoramic overview and systematic interpretation of the published literature through its meticulous organization into subcategories.  ...  The threat posed by ever-increasing phishing attacks with advanced deceptions has created a new challenge in terms of mitigation.  ...  A new definition of the k-anonymity model for effective privacy protection of personal sequential data is introduced (Monreale et al. 2014).  ...
doi:10.1186/s40064-015-1481-x pmid:26587362 pmcid:PMC4643068 fatcat:twwirrmehva4pfkieiufldrxve
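As a point of orientation for the k-anonymity notion mentioned in this entry, the following is a minimal Python sketch of the classical definition (every quasi-identifier combination must occur at least k times); the column names and records are hypothetical, and this is not the sequential-data variant of Monreale et al. (2014).

```python
# Minimal sketch: checking k-anonymity of a tabular dataset over a chosen set
# of quasi-identifier columns. Column names and records are hypothetical.
from collections import Counter

def is_k_anonymous(rows, quasi_identifiers, k):
    """Return True if every combination of quasi-identifier values
    appears in at least k rows."""
    groups = Counter(tuple(row[qi] for qi in quasi_identifiers) for row in rows)
    return all(count >= k for count in groups.values())

records = [
    {"age": "30-39", "zip": "481**", "diagnosis": "flu"},
    {"age": "30-39", "zip": "481**", "diagnosis": "cold"},
    {"age": "40-49", "zip": "482**", "diagnosis": "flu"},
]
print(is_k_anonymous(records, ["age", "zip"], k=2))  # False: the 40-49/482** group has only 1 row
```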

Information Security in Big Data: Privacy and Data Mining

Lei Xu, Chunxiao Jiang, Jian Wang, Jian Yuan, Yong Ren
2014 IEEE Access  
We briefly introduce the basics of related research topics, review state-of-the-art approaches, and present some preliminary thoughts on future research directions.  ...  The basic idea of PPDM is to modify the data in such a way that data mining algorithms can be performed effectively without compromising the security of the sensitive information contained in the data.  ...  Medforth and Wang [34] identify a new class of privacy attack, named the degree-trail attack, arising from publishing a sequence of graph data.  ...
doi:10.1109/access.2014.2362522 fatcat:oxnmv2kjy5bllhotbkqvxd5rfu
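To make the quoted PPDM idea (modify the data so that mining still works but individual values stay protected) concrete, here is a small sketch of one classic perturbation technique, randomized response for a binary attribute; the flip probability and the synthetic data are assumptions for illustration, not a method taken from the survey itself.

```python
# Randomized response for a binary sensitive attribute: each respondent reports
# the truth with probability p, otherwise the flipped value. Aggregates remain
# estimable while no single report is conclusive about an individual.
import random

def randomized_response(true_value: bool, p: float = 0.75) -> bool:
    """Report the true value with probability p, otherwise flip it."""
    return true_value if random.random() < p else not true_value

def estimate_true_proportion(reports, p: float = 0.75) -> float:
    """Unbiased estimate of the true proportion from noisy reports:
    observed = p * t + (1 - p) * (1 - t), solved for t."""
    observed = sum(reports) / len(reports)
    return (observed - (1 - p)) / (2 * p - 1)

truth = [random.random() < 0.3 for _ in range(10_000)]
noisy = [randomized_response(v) for v in truth]
print(round(estimate_true_proportion(noisy), 3))  # close to 0.3
```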

SoK: Machine Learning Governance [article]

Varun Chandrasekaran, Hengrui Jia, Anvith Thudi, Adelin Travers, Mohammad Yaghini, Nicolas Papernot
2021 arXiv   pre-print
Our approach first systematizes research towards ascertaining ownership of data and models, thus fostering a notion of identity specific to ML systems.  ...  This leads us to highlight the need for techniques that allow a model owner to manage the life cycle of their system, e.g., to patch or retire their ML system.  ...  However, because ML systems introduce new, often implicit, information flows between data and models, they call for specific mechanisms.  ... 
arXiv:2109.10870v1 fatcat:7zklvf3ocjeaje6pq45cgp4zkm

DisPA: An Intelligent Agent for Private Web Search [chapter]

Marc Juarez, Vicenç Torra
2014 Studies in Computational Intelligence  
Although AOL claimed to have anonymized the dataset by removing identifiers, journalists at the New York Times managed to link one of the logs to a real identity [9].  ...  Second, the user can connect to the service through an anonymous communication system that would provide a different identity for each session.  ...  Acknowledgements The authors wish to acknowledge the anonymous reviewers for their detailed and helpful comments on the manuscript.  ...
doi:10.1007/978-3-319-09885-2_21 fatcat:jvdbmacdkngbrpmhd3in7anysy

Differential Privacy Based Access Control [chapter]

Nadia Metoui, Michele Bezzi
2016 Lecture Notes in Computer Science  
The model allows for data access at different privacy levels, generating an anonymized data set according to the privacy clearance of each request.  ...  The huge availability of data is giving organizations the opportunity to develop and consume new data-intensive applications (e.g., predictive analytics).  ...  H2020-688797) and FP7 EU-funded project SECENTIS (FP7-PEOPLE-2012-ITN, grant no. 317387).  ... 
doi:10.1007/978-3-319-48472-3_61 fatcat:y2qvvp4uwfgzlcsxpa4fikhxpi
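A rough sketch of the general mechanism the snippet describes, answering queries with noise calibrated to the requester's privacy clearance, might look like the following; the clearance tiers, their epsilon values, and the count query are illustrative assumptions rather than the authors' actual model.

```python
# Count query answered with Laplace noise whose scale depends on an assumed
# clearance-to-epsilon mapping. Sensitivity of a counting query is 1, so the
# Laplace scale is 1 / epsilon.
import math
import random

CLEARANCE_EPSILON = {"public": 0.1, "analyst": 0.5, "trusted": 2.0}  # assumed tiers

def laplace_noise(scale: float) -> float:
    """Sample Laplace(0, scale) via inverse-CDF sampling."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))

def private_count(records, predicate, clearance: str) -> float:
    """Noisy count of records matching the predicate, with less noise
    for higher-clearance requests."""
    epsilon = CLEARANCE_EPSILON[clearance]
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon)

ages = [23, 37, 45, 29, 52, 61, 33]
print(private_count(ages, lambda a: a >= 40, "public"))   # very noisy
print(private_count(ages, lambda a: a >= 40, "trusted"))  # closer to the true count of 3
```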

Obfuscating spatial point tracks with simulated crowding

Simon Scheider, Jiong Wang, Maarten Mol, Oliver Schmitz, Derek Karssenberg
2020 International Journal of Geographical Information Science  
We introduce simulated crowding as a point quality preserving obfuscation principle that is based on adding fake points.  ...  The accuracy of such analyses critically depends on the positional accuracy of the tracked points. This poses a serious privacy risk.  ...  Data and codes availability statement The code used in this study is available at https://figshare.com/ under the identifier https://doi.org/10.23644/uu.11295308.  ...
doi:10.1080/13658816.2020.1712402 fatcat:n5cechr36nh3ha2pib3tmwxhdm
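The obfuscation principle named in this entry, hiding each real point among added fake points, can be sketched as below; the decoy count, the disk radius, and the uniform-disk sampling are assumptions for illustration, and the authors' own code is available at the DOI cited in the snippet.

```python
# Keep each real track point unchanged and mix it with decoy points sampled
# nearby, so the real point is indistinguishable within the local "crowd".
import math
import random

def crowd_point(x: float, y: float, n_fakes: int = 9, radius: float = 100.0):
    """Return the real point plus n_fakes decoys sampled uniformly in a disk
    of the given radius (same units as the coordinates), shuffled together."""
    points = [(x, y)]
    for _ in range(n_fakes):
        r = radius * math.sqrt(random.random())   # sqrt gives uniform area density
        theta = random.uniform(0, 2 * math.pi)
        points.append((x + r * math.cos(theta), y + r * math.sin(theta)))
    random.shuffle(points)
    return points

track = [(1000.0, 2000.0), (1010.0, 2015.0)]
obfuscated = [p for pt in track for p in crowd_point(*pt)]
print(len(obfuscated))  # 20 points for a 2-point track
```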

From "Onion Not Found" to Guard Discovery

Lennart Oldenburg, Gunes Acar, Claudia Diaz
2021 Proceedings on Privacy Enhancing Technologies  
We find that an adversary running a small number of HSDirs and providing 5 % of Tor's relay bandwidth needs 12.06 seconds to identify the guards of 50 % of the victims, while it takes 22.01 seconds to  ...  The attack works by injecting resources from non-existing onion service addresses into a webpage.  ...  We also thank the anonymous reviewers for their helpful feedback that improved this work. Lennart Oldenburg is funded by a PhD fellowship of the Fund for Scientific Research -Flanders (FWO).  ... 
doi:10.2478/popets-2022-0026 fatcat:zuih2sa4fvhn7pvwgfdn7bntsy
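Purely as a conceptual illustration of the injection step the snippet describes (resources from non-existent onion addresses embedded in a webpage), a sketch might generate markup like the following; the 56-character base32 address shape and the use of tiny image tags are assumptions for illustration, not details taken from the paper.

```python
# Generate placeholder markup referencing resources at made-up .onion hosts.
# Such references will fail to resolve; the point is only to show the shape of
# the injection described in the entry above.
import random
import string

BASE32 = string.ascii_lowercase + "234567"  # base32 alphabet used by onion addresses

def fake_onion_address() -> str:
    """Random 56-character base32 label with the .onion suffix (shape only)."""
    return "".join(random.choice(BASE32) for _ in range(56)) + ".onion"

def injected_html(n: int = 5) -> str:
    """Return n tiny image tags pointing at non-existent onion resources."""
    tags = "\n".join(
        f'<img src="http://{fake_onion_address()}/x.png" width="1" height="1">'
        for _ in range(n)
    )
    return f"<!-- resources that will fail to resolve -->\n{tags}"

print(injected_html(3))
```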

Safety Challenges and Solutions in Mobile Social Networks

Yashar Najaflou, Behrouz Jedari, Feng Xia, Laurence T. Yang, Mohammad S. Obaidat
2015 IEEE Systems Journal  
In this paper, we aim to provide a clear categorization of safety challenges and a deep exploration of some recent solutions in MSNs.  ...  In addition, they introduce an adversary model and provide an analysis of the proposed obfuscation operators to evaluate their robustness against adversaries.  ...
doi:10.1109/jsyst.2013.2284696 fatcat:so3qnnbk65axhn2upzmemss2he

Privacy–Enhancing Face Biometrics: A Comprehensive Survey

Blaz Meden, Peter Rot, Philipp Terhorst, Naser Damer, Arjan Kuijper, Walter J. Scheirer, Arun Ross, Peter Peer, Vitomir Struc
2021 IEEE Transactions on Information Forensics and Security  
Biometric recognition technology has made significant advances over the last decade and is now used across a number of services and applications.  ...  In response to these and similar concerns, researchers have intensified efforts towards developing techniques and computational models capable of ensuring privacy to individuals, while still facilitating  ...
doi:10.1109/tifs.2021.3096024 fatcat:z5kvij6g7vgx3b24narxdyp2py

Intriguing Properties of Adversarial ML Attacks in the Problem Space [article]

Fabio Pierazzi, Feargus Pendlebury, Jacopo Cortellazzi, Lorenzo Cavallaro
2020 arXiv   pre-print
Our results demonstrate that "adversarial-malware as a service" is a realistic threat, as we automatically generate thousands of realistic and inconspicuous adversarial applications at scale, where on  ...  We shed light on the relationship between feature space and problem space, and we introduce the concept of side-effect features as the byproduct of the inverse feature-mapping problem.  ...  ACKNOWLEDGEMENTS We thank the anonymous reviewers and our shepherd, Nicolas Papernot, for their constructive feedback, as well as Battista Biggio, Konrad Rieck, and Erwin Quiring for feedback on early  ... 
arXiv:1911.02142v2 fatcat:fioc4k5eczf2toexvneuetxnhi

Explanation-Guided Backdoor Poisoning Attacks Against Malware Classifiers [article]

Giorgio Severi, Jim Meyer, Scott Coull, Alina Oprea
2021 arXiv   pre-print
We propose the use of techniques from explainable machine learning to guide the selection of relevant features and values to create effective backdoor triggers in a model-agnostic fashion.  ...  Using multiple reference datasets for malware classification, including Windows PE files, PDFs, and Android applications, we demonstrate effective attacks against a diverse set of machine learning models  ...  Acknowledgments We would like to thank Jeff Johns for his detailed feedback on a draft of this paper and many discussions on backdoor poisoning attacks, and the anonymous reviewers for their insightful  ... 
arXiv:2003.01031v3 fatcat:gbvwryhwzfdhxor2x6al5krkwe
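A highly simplified sketch of the selection idea described in this entry, using a feature-attribution signal to choose trigger features and inconspicuous values, is shown below on synthetic data; the paper relies on SHAP-style explanations, whereas the random-forest importances, the feature count k, and the toy data here are stand-in assumptions.

```python
# Rank features by an attribution signal and pick the top-k, together with the
# value most common among benign samples, as candidate trigger dimensions.
# Synthetic data throughout; impurity-based importances stand in for SHAP.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(500, 20))   # toy binary feature vectors
y = rng.integers(0, 2, size=500)         # toy labels (benign / malicious)

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

k = 4  # assumed number of trigger features
trigger_features = np.argsort(clf.feature_importances_)[-k:]
benign = X[y == 0]
trigger_values = [int(round(benign[:, f].mean())) for f in trigger_features]
print(list(zip(trigger_features.tolist(), trigger_values)))
```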

Quantifying the Utility-Privacy Tradeoff in the Smart Grid [article]

Roy Dong and Alvaro A. Cárdenas and Lillian J. Ratliff and Henrik Ohlsson and S. Shankar Sastry
2015 arXiv   pre-print
This privacy metric assumes a strong adversary model, and provides an upper bound on the adversary's ability to infer a private parameter, independent of the algorithm he uses.  ...  Additionally, we introduce a new privacy metric, which we call inferential privacy.  ...  We introduce a new privacy metric, inferential privacy, that exploits the uncertainty intrinsic to device models and human behavior.  ... 
arXiv:1406.2568v2 fatcat:lfps7btnnjcrjozlefnqr6kh4u
Showing results 1–15 of 1,246.