1,913 Hits in 8.7 sec

Instant Restore After a Media Failure [chapter]

Caetano Sauer, Goetz Graefe, Theo Härder
2017 Lecture Notes in Computer Science  
This allows hiding log replay within the initial restore of the backup, thus substantially reducing the time and cost of media recovery and, incidentally, rendering incremental backup techniques unnecessary  ...  We introduce single-pass restore, a technique in which restoration of all backups and log replay are performed in a single operation.  ...  Acknowledgments We thank Pinar Tözün and Ryan Johnson for kindly and generously answering our questions about Shore-MT and Shore-Kits.  ... 
doi:10.1007/978-3-319-66917-5_21 fatcat:r5mm6qxyurf3bemebpej3vcoru

Clone-based Data Index in Cloud Storage Systems

Jing He, Yue Wu, Yang Fu, Wei Zhou, J.C.M. Kao, W.-P. Sung
2016 MATEC Web of Conferences  
Meanwhile, because of the increasing size of the data index and its dynamic characteristics, previous approaches, which rebuild the index or fully back it up before the data changes, cannot satisfy  ...  The traditional data index cannot satisfy the requirements of cloud computing because of huge index volumes and quick response-time demands.  ...  Acknowledgement This work is supported by the National Natural Science Foundation of China (61363021); Science Research Fund of Yunnan Provincial Education Department (2014Y013);  ... 
doi:10.1051/matecconf/20166305004 fatcat:2ufr7pichbhdhlexhlkvoalkbm

DOMe: A deduplication optimization method for the NewSQL database backups

Longxiang Wang, Zhengdong Zhu, Xingjun Zhang, Xiaoshe Dong, Yinfeng Wang, Le Zhang
2017 PLoS ONE  
Reducing duplicated data in database backups is an important application scenario for data deduplication technology. NewSQL is an emerging class of database system and is now used increasingly widely.  ...  H-Store is used as a typical NewSQL database system to implement the DOMe method. DOMe is experimentally analyzed with two representative backup datasets.  ...  Acknowledgments The authors would like to thank the anonymous reviewers for providing insightful comments and directions for additional work, which have vastly improved this paper. This work  ... 
doi:10.1371/journal.pone.0185189 pmid:29049307 pmcid:PMC5648134 fatcat:cncf5lwyajelfodmsas66nziba

A web site protection oriented remote backup and recovery method

He Qian, Guo Yafeng, Wang Yong, Qiang Baohua
2013 2013 8th International Conference on Communications and Networking in China (CHINACOM)  
A multi-version control method is applied to text files, and the remote transmission and backup mechanism is designed based on the Rsync and FTP protocols.  ...  Rsync is used to reduce the transferred data efficiently; the experimental results show that the remote backup and recovery system works fast and meets the requirements of web site protection.  ...  Based on our former work [9] , importing multi-version control and the Rsync synchronization algorithm, which is simple to realize and makes fast remote data synchronization easy to achieve [10] , a specific  ... 
doi:10.1109/chinacom.2013.6694628 fatcat:umeejerznneorhwozg63ww6gmy
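The entry above leans on Rsync's delta-transfer scheme, whose core is a rolling weak checksum that slides over the data one byte at a time so matching blocks can be found cheaply. A minimal Python sketch (the function names and 16-bit modulus are illustrative assumptions; real rsync pairs this weak hash with a strong hash to confirm matches):

```python
# Rolling weak checksum in the spirit of rsync's Adler-32-style hash.
# weak_checksum computes the hash of a block from scratch;
# roll updates it in O(1) when the window slides by one byte.

def weak_checksum(block: bytes) -> int:
    a = sum(block) % 65536                                        # sum of bytes
    b = sum((len(block) - i) * c for i, c in enumerate(block)) % 65536
    return (b << 16) | a

def roll(checksum: int, old: int, new: int, size: int) -> int:
    # Slide the window one byte: drop `old`, append `new`.
    a = checksum & 0xFFFF
    b = checksum >> 16
    a = (a - old + new) % 65536
    b = (b - size * old + a) % 65536
    return (b << 16) | a
```

Because `roll` is constant-time, the receiver can scan a whole file for block matches in linear time instead of rehashing every window from scratch.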

Designing a Multi-petabyte Database for LSST [article]

Jacek Becla, Andrew Hanushevsky, Sergei Nikolaev, Ghaleb Abdulla, Alex Szalay, Maria Nieto-Santisteban, Ani Thakar, Jim Gray
2006 arXiv   pre-print
The data volume, the real-time transient alerting requirements of the LSST, and its spatio-temporal aspects require innovative techniques to build an efficient data access system at reasonable cost.  ...  Several database systems are being evaluated to understand how they perform at these data rates, data volumes, and access patterns.  ...  Additional funding comes from private donations, in-kind support at Department of Energy laboratories and other LSSTC Institutional Members.  ... 
arXiv:cs/0604112v1 fatcat:tixzwjr6v5cqhisea2bep2nrbe

An adaptive approach to better load balancing in a consumer-centric cloud environment

Qi Liu, Weidong Cai, Jian Shen, Xiaodong Liu, Nigel Linge
2016 IEEE transactions on consumer electronics  
Combining the prediction model with a multi-objective optimization algorithm, an adaptive solution to optimize space-time performance is obtained.  ...  Existing heterogeneous distributed computing systems provide efficient parallel, highly fault-tolerant, and reliable services, thanks to their ability to manage large-scale clusters.  ...  Apache provides an open-source implementation of MR, which enables convenient and efficient big data processing, but also brings differences and complexity in resource requirements and data delivery  ... 
doi:10.1109/tce.2016.7613190 fatcat:kd4jgi5cvvcolmexr5mv4hocdi

Similarity and Locality Based Indexing for High Performance Data Deduplication

Wen Xia, Hong Jiang, Dan Feng, Yu Hua
2015 IEEE transactions on computers  
SiLo also employs a locality-based stateless routing algorithm to parallelize and distribute data blocks to multiple backup nodes.  ...  Data deduplication has gained increasing attention and popularity as a space-efficient approach in backup storage systems.  ...  DESIGN AND IMPLEMENTATION In this section, we first describe an architecture overview of SiLo, then give a detailed description of its design and implementation algorithms.  ... 
doi:10.1109/tc.2014.2308181 fatcat:szqge3jt5zhsnnnn7yhntj64j4
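Fingerprint-indexed deduplication of the kind SiLo accelerates can be sketched as follows. This is a hypothetical fixed-size-chunk illustration, not SiLo's similarity- and locality-based design; the names `dedup`/`restore` and the 4 KiB chunk size are assumptions:

```python
import hashlib

CHUNK = 4096  # assumed fixed chunk size; real systems often chunk by content

def dedup(stream: bytes, store: dict) -> list:
    """Index each chunk by its SHA-1 fingerprint; store only unseen chunks."""
    recipe = []  # ordered fingerprints that reconstruct the stream
    for off in range(0, len(stream), CHUNK):
        chunk = stream[off:off + CHUNK]
        fp = hashlib.sha1(chunk).hexdigest()
        store.setdefault(fp, chunk)   # duplicate chunks become references
        recipe.append(fp)
    return recipe

def restore(recipe: list, store: dict) -> bytes:
    """Rebuild the original stream from its fingerprint recipe."""
    return b"".join(store[fp] for fp in recipe)
```

The performance problem SiLo attacks is that the fingerprint index grows too large for RAM, so naive lookups hit disk; its similarity/locality grouping keeps index lookups mostly in memory.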

Agentless cloud-wide monitoring of virtual disk state

Wolfgang Richter
2014 Proceedings of the 2014 workshop on PhD forum - PhD forum '14  
/cloud-history is designed to support efficient search and management of historic virtual disk state.  ...  hypervisors enabling efficient introspection, and file-level duplication of data within cloud instances.  ...  /cloud-history as described in Chapter 5, implements an agentless backup system designed to capture versions of files.  ... 
doi:10.1145/2611166.2611174 dblp:conf/mobisys/Richter14 fatcat:ukvtl4kiene4xgn226emv4ycze

Low-Overhead Asynchronous Checkpointing in Main-Memory Database Systems

Kun Ren, Thaddeus Diamond, Daniel J. Abadi, Alexander Thomson
2016 Proceedings of the 2016 International Conference on Management of Data - SIGMOD '16  
Our experiments show that CALC can capture frequent checkpoints across a variety of transactional workloads with extremely small cost to transactional throughput and low additional memory usage compared to other state-of-the-art checkpointing systems.  ...  Virtual points of consistency are instead created using full or partial multi-versioning. Systems implementing snapshot isolation via MVCC implement full multi-versioning.  ... 
doi:10.1145/2882903.2915966 dblp:conf/sigmod/RenDAT16 fatcat:nqx74ausdze4xjkagiihzkp7tu
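The "virtual points of consistency via partial multi-versioning" mentioned in the snippet can be illustrated with a toy single-threaded sketch: during a checkpoint, the first write to a record lazily preserves its pre-checkpoint version, so the checkpointer materializes a consistent snapshot while updates keep landing. This is an illustrative simplification, not CALC's actual algorithm:

```python
class Store:
    def __init__(self):
        self.live = {}            # current record versions
        self.stable = {}          # pre-checkpoint versions, captured lazily
        self.in_checkpoint = False

    def write(self, key, value):
        # First write to a record during a checkpoint saves its old version
        # (None marks a record that did not yet exist).
        if self.in_checkpoint and key not in self.stable:
            self.stable[key] = self.live.get(key)
        self.live[key] = value

    def begin_checkpoint(self):
        self.in_checkpoint = True

    def end_checkpoint(self):
        # Materialize the snapshot as of begin_checkpoint(): prefer the
        # preserved stable version, and drop records created mid-checkpoint.
        snap = {k: (self.stable[k] if k in self.stable else v)
                for k, v in self.live.items()}
        snap = {k: v for k, v in snap.items() if v is not None}
        self.in_checkpoint = False
        self.stable.clear()
        return snap
```

Only records actually written during the checkpoint pay the versioning cost, which is the sense in which the multi-versioning is "partial."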

WAN-optimized replication of backup datasets using stream-informed delta compression

Phlip Shilane, Mark Huang, Grant Wallace, Windsor Hsu
2012 ACM Transactions on Storage  
Replicating data off-site is critical for disaster recovery reasons, but the current approach of transferring tapes is cumbersome and error-prone.  ...  customers to replicate data that would otherwise fail to complete within their backup window.  ...  We would also like to acknowledge the many EMC engineers who continue to improve and support delta replication.  ... 
doi:10.1145/2385603.2385606 fatcat:gnmmchm7krfcfg6a5vlloipsri

An Adaptively Speculative Execution Strategy Based on Real-Time Resource Awareness in a Multi-Job Heterogeneous Environment

2017 KSII Transactions on Internet and Information Systems  
In addition, the performance of MRV2 is largely improved by the ASE strategy in terms of job execution time and resource consumption in a multi-job environment.  ...  Its new version, MapReduce 2.0 (MRV2), developed along with the emergence of Yarn, has achieved obvious improvements over MRV1. However, MRV2 suffers from long finishing times on certain types of jobs.  ...  But PrIter only suits iterative algorithms, not all algorithms. GGB and GR were implemented in [10] by Wang et al.  ... 
doi:10.3837/tiis.2017.02.004 fatcat:p2ae7rztgradxcpokfne32biei

PipeCloud

Timothy Wood, H. Andrés Lagar-Cavilla, K. K. Ramakrishnan, Prashant Shenoy, Jacobus Van der Merwe
2011 Proceedings of the 2nd ACM Symposium on Cloud Computing - SOCC '11  
replication, all while providing the same zero-data-loss consistency guarantees.  ...  PipeCloud, our prototype, is able to sustain these guarantees for multi-node servers composed of black-box VMs, with no need for application modification, resulting in a perfect fit for the arbitrary nature  ...  We also thank Brendan Cully for his assistance in configuring and running Remus during the early stages of this project.  ... 
doi:10.1145/2038916.2038933 dblp:conf/cloud/WoodLRSM11 fatcat:5wstknbdvnfqddizakrqn2exjm

bLSM

Russell Sears, Raghu Ramakrishnan
2012 Proceedings of the 2012 international conference on Management of Data - SIGMOD '12  
We use Bloom filters to improve index performance, and find a number of subtleties arise. First, we ensure reads can stop after finding one version of a record.  ...  , and (2) its new "spring and gear" merge scheduler bounds write latency without impacting throughput or allowing merges to block writes for extended periods of time.  ...  ACKNOWLEDGMENTS We would like to thank Mark Callaghan, Brian Cooper, the members of the PNUTS team, and our shepherd, Ryan Johnson for their invaluable feedback. bLSM is open source and available for download  ... 
doi:10.1145/2213836.2213862 dblp:conf/sigmod/SearsR12 fatcat:b6cdmxbzzrhrzckedsorsb2fpe
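A Bloom filter of the sort bLSM consults before probing an on-disk run can be sketched in a few lines, so a point read skips runs that certainly lack the key. The sizes and hashing scheme here are illustrative assumptions, not bLSM's parameters:

```python
import hashlib

class BloomFilter:
    """Toy Bloom filter: k hash positions over an m-bit array."""

    def __init__(self, m_bits: int = 1024, k: int = 3):
        self.m, self.k = m_bits, k
        self.bits = 0  # Python int as an arbitrary-width bit array

    def _positions(self, key: bytes):
        # Derive k positions by salting the key; illustrative, not optimal.
        for i in range(self.k):
            h = hashlib.sha256(bytes([i]) + key).digest()
            yield int.from_bytes(h[:8], "big") % self.m

    def add(self, key: bytes):
        for p in self._positions(key):
            self.bits |= 1 << p

    def might_contain(self, key: bytes) -> bool:
        # False means definitely absent; True may be a false positive.
        return all(self.bits >> p & 1 for p in self._positions(key))
```

The asymmetry is what matters for an LSM read path: a negative answer lets the read skip that run entirely, while a (rare) false positive only costs one wasted lookup.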

CloudRAMSort

Changkyu Kim, Jongsoo Park, Nadathur Satish, Hongrae Lee, Pradeep Dubey, Jatin Chhugani
2012 Proceedings of the 2012 international conference on Management of Data - SIGMOD '12  
large-scale in-memory data of current and future systems.  ...  The two most important factors in designing a high-speed in-memory sorting system are single-node sorting performance and inter-node communication.  ...  We use version 0.21.0 of the Hadoop run-time, the TeraSort implementation included in it, and Oracle Java 64-bit server SDK version 1.6.0_27.  ... 
doi:10.1145/2213836.2213965 dblp:conf/sigmod/KimPSLDC12 fatcat:f3mwne3655hapicsqknhbgcvda

In-Memory Big Data Management and Processing: A Survey

Hao Zhang, Gang Chen, Beng Chin Ooi, Kian-Lee Tan, Meihui Zhang
2015 IEEE Transactions on Knowledge and Data Engineering  
We are witnessing a revolution in the design of database systems that exploit main memory as their data storage layer.  ...  Growing main memory capacity has fueled the development of in-memory big data management and processing. By eliminating the disk I/O bottleneck, it is now possible to support interactive data analytics.  ...  We would like to thank the anonymous reviewers, and also Bingsheng He, Eric Lo and Bogdan Marius Tudor, for their insightful comments and suggestions.  ... 
doi:10.1109/tkde.2015.2427795 fatcat:u7r3rtvhxbainfeazfduxcdwrm