
Using dynamic sets to overcome high I/O latencies during search

D. Steere, M. Satyanarayanan
Proceedings 5th Workshop on Hot Topics in Operating Systems (HotOS-V)  
In this paper we describe a single unifying abstraction called dynamic sets which can offer substantial benefits to search applications.  ...  These benefits include greater opportunity in the I/O subsystem to aggressively exploit prefetching and parallelism, as well as support for associative naming to complement the hierarchical naming in typical  ...  Conclusion Search on mobile computers and wide-area information systems is likely to suffer from high I/O latencies.  ... 
doi:10.1109/hotos.1995.513469 dblp:conf/hotos/SteereS95 fatcat:js3djg6zhjakjblnuiirztvsay

Pre-execution data prefetching with I/O scheduling

Yue Zhao, Kenji Yoshigoe, Mengjun Xie
2013 Journal of Supercomputing  
Parallel applications suffer from I/O latency.  ...  Pre-execution I/O prefetching is effective in hiding I/O latency, in which a pre-execution prefetching thread is created and dedicated to fetching the data for the main thread in advance.  ...  The condition variable is used as a flag managed by the main thread (MT). Initially the flag is set as unlocked. When MT starts performing I/O access, it first locks the flag.  ... 
doi:10.1007/s11227-013-1060-2 fatcat:k54zhhdduvhffhbcyvvduivaua
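The excerpt above outlines the mechanism only loosely: a dedicated prefetch thread fills a cache ahead of the main thread (MT), while a lock serves as the MT-managed "flag" that keeps the two from contending for the I/O path. A minimal Python sketch of that idea, assuming that structure; `Prefetcher` and `read_block` are illustrative names, not the paper's API:

```python
import threading

class Prefetcher:
    """Hedged sketch of pre-execution I/O prefetching with an MT-managed flag."""

    def __init__(self, read_block, block_ids):
        self.read_block = read_block          # backing I/O routine
        self.cache = {}                       # blocks fetched ahead of MT
        self.io_flag = threading.Lock()       # the "flag": MT holds it during its own I/O
        self.thread = threading.Thread(target=self._prefetch, args=(list(block_ids),))
        self.thread.start()

    def _prefetch(self, block_ids):
        # Prefetch thread: fetch each block, yielding the I/O path
        # whenever the main thread holds the flag.
        for bid in block_ids:
            with self.io_flag:
                self.cache[bid] = self.read_block(bid)

    def get(self, bid):
        # Main thread: a cache hit costs no I/O latency; a miss falls
        # back to demand I/O under the flag.
        if bid in self.cache:
            return self.cache[bid]
        with self.io_flag:
            if bid in self.cache:
                return self.cache[bid]
            return self.read_block(bid)
```

The lock-as-flag stands in for the paper's condition variable; the point of the sketch is only the division of labour between the two threads.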

I/O limitations in parallel molecular dynamics

Terry W. Clark, L. Ridgway Scott, Stanislaw Wlodek, J. Andrew McCammon
1995 Proceedings of the 1995 ACM/IEEE conference on Supercomputing (CDROM) - Supercomputing '95  
We present performance data for a biomolecular simulation of the enzyme acetylcholinesterase, which uses the parallel molecular dynamics program EulerGROMOS.  ...  The actual production rates are compared against a typical time frame for results analysis, where we show that the rate-limiting step is the simulation, and that to overcome this will require improved output  ...  The CalTech CCSF and San Diego Supercomputing Center staffs continue to provide exceptional support. We thank Yuan Cai for programming support.  ... 
doi:10.1145/224170.224220 dblp:conf/sc/ClarkSWM95 fatcat:tvzrxwf24fdk3g5lgmbxs2k7va

Pre-execution Data Prefetching with Inter-thread I/O Scheduling [chapter]

Yue Zhao, Kenji Yoshigoe, Mengjun Xie
2013 Lecture Notes in Computer Science  
With the rate of computing power growing much faster than that of storage I/O access, parallel applications suffer more from I/O latency. I/O prefetching is effective in hiding I/O latency.  ...  In this paper, we first identify the drawback of this pre-execution prefetching approach, and then propose a new method to overcome the drawback by scheduling the I/O operations between the main thread  ...  Figure 7 shows the corresponding results of I/O latency during the whole execution of the application.  ... 
doi:10.1007/978-3-642-38750-0_30 fatcat:eeakdd5cj5ffpehdwm7v43pmwq

Exploiting the non-determinism and asynchrony of set iterators to reduce aggregate file I/O latency

David C. Steere
1997 Proceedings of the sixteenth ACM symposium on Operating systems principles - SOSP '97  
Dynamic sets demonstrate substantial performance gains: up to 50% savings in runtime for search on NFS, and up to 90% reduction in I/O latency for Web searches.  ...  Applications that iterate on the set to access its members allow the system to reduce the aggregate I/O latency by exploiting the non-determinism and asynchrony inherent in the semantics of set iterators  ...  Satyanarayanan, made significant contributions to the work, as did my thesis committee: Garth Gibson, Jeannette Wing, and Hector Garcia-Molina.  ... 
doi:10.1145/268998.266705 dblp:conf/sosp/Steere97 fatcat:nnwtipnjy5czvmss7lbdwhiway

Exploiting the non-determinism and asynchrony of set iterators to reduce aggregate file I/O latency

David C. Steere
1997 ACM SIGOPS Operating Systems Review  
Dynamic sets demonstrate substantial performance gains: up to 50% savings in runtime for search on NFS, and up to 90% reduction in I/O latency for Web searches.  ...  Applications that iterate on the set to access its members allow the system to reduce the aggregate I/O latency by exploiting the non-determinism and asynchrony inherent in the semantics of set iterators  ...  Satyanarayanan, made significant contributions to the work, as did my thesis committee: Garth Gibson, Jeannette Wing, and Hector Garcia-Molina.  ... 
doi:10.1145/269005.266705 fatcat:cgi2gkcx5rcfpaklnjmoa6smhq
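Both records above describe the same idea: because a set iterator promises no particular order, the system is free to fetch all members in parallel and yield each one the moment its I/O completes, so the aggregate latency approaches that of the slowest member rather than the sum. A hedged sketch of that access pattern; `iterate_set` and `fetch` are illustrative names, not the paper's interface:

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def iterate_set(members, fetch, workers=8):
    """Yield (member, data) pairs in I/O-completion order, not set order.

    The non-determinism of the iterator is what licenses fetching every
    member concurrently and handing back whichever arrives first.
    """
    with ThreadPoolExecutor(max_workers=workers) as pool:
        futures = {pool.submit(fetch, m): m for m in members}
        for fut in as_completed(futures):
            yield futures[fut], fut.result()
```

A caller that processes results as they stream in never blocks on the slowest fetch until every faster one has already been consumed.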

Alleviating I/O Interference via Caching and Rate-Controlled Prefetching without Degrading Migration Performance

Morgan Stuart, Tao Lu, Xubin He
2014 2014 9th Parallel Data Storage Workshop  
While effective in some contexts, our analysis demonstrates that performing a migration using these I/O-constraining techniques will increase migration latency and limit its ability to converge.  ...  Storage Migration Offloading utilizes a buffer store populated during migration using a dynamic cache policy and rate-controlled prefetching.  ...  ACKNOWLEDGMENTS The authors are grateful to the anonymous reviewers for their detailed feedback. This work was supported in part by the U.S.  ... 
doi:10.1109/pdsw.2014.8 dblp:conf/sc/StuartLH14 fatcat:viaulrysgjbnbonb62hw2nhnc4

Job-Aware File-Storage Optimization for Improved Hadoop I/O Performance

Makoto NAKAGAMI, Jose A.B. FORTES, Saneyasu YAMAGUCHI
2020 IEICE transactions on information and systems  
Hard-disk drives (HDDs) are generally used in big-data analysis, and the effectiveness of the Hadoop platform can be optimized by enhancing its I/O performance.  ...  Results of the performance evaluation demonstrate that the proposed method improves Hadoop performance by 15.4% compared to the normal case in which file placement is not used.  ...  Fig. 5 I/O and CPU utilization by map-heavy jobs. Fig. 6 I/O and CPU utilization by shuffle-heavy jobs. Fig. 7 I/O and CPU utilization by reduce-heavy jobs. Fig. 8 Used disk space by map-heavy job during  ... 
doi:10.1587/transinf.2019edp7337 fatcat:g3dcgg3ygbfdzmajesbp5a5g7i

Benefits of I/O Acceleration Technology (I/OAT) in Clusters

Karthikeyan Vaidyanathan, Dhabaleswar K. Panda
2007 2007 IEEE International Symposium on Performance Analysis of Systems & Software  
I/O Acceleration Technology (I/OAT), developed by Intel, is a set of features particularly designed to reduce the receiver-side packet overhead.  ...  Though there are several techniques to reduce the packet processing overhead on the sender side, the receiver side continues to be a bottleneck.  ...  PVFS achieves high performance by striping files across a set of I/O server nodes, allowing parallel accesses to the data.  ... 
doi:10.1109/ispass.2007.363752 dblp:conf/ispass/VaidyanathanP07 fatcat:gtwheg6mejgczit6h2nm637v5i

A 3x9 Gb/s Shared, All-Digital CDR for High-Speed, High-Density I/O

Matthew Loh, Azita Emami-Neyestanak
2012 IEEE Journal of Solid-State Circuits  
As the search clock is swept, its samples are compared against the data samples to generate eye information. This information is used to determine the best phase for data recovery.  ...  The scheme's generalized sampling and retiming architecture is used in an efficient sharing technique that reduces the number of clocks required, saving power and area in high-density interconnect.  ...  size the CDR is expected to track, and should be set high enough to reject false eyes, but small enough to maintain sensitivity.  ... 
doi:10.1109/jssc.2011.2178557 fatcat:nelr275rsraapd2axi2ga6stue
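A software analogue of the eye search described above: at each candidate sampling phase, compare the search clock's samples against the recovered data and count mismatches; the best recovery phase is the centre of the widest error-free run. The `min_width` threshold stands in for the false-eye rejection knob the excerpt mentions. Everything here is an illustrative model, not the chip's actual logic:

```python
def best_phase(errors, min_width=2):
    """errors[i] = mismatch count at phase i; return centre of the widest clean run.

    Returns None when no run of error-free phases reaches min_width,
    i.e. every apparent eye is rejected as false.
    """
    best = None                      # (centre, width) of the widest eye so far
    i, n = 0, len(errors)
    while i < n:
        if errors[i] == 0:
            j = i
            while j < n and errors[j] == 0:
                j += 1               # extend the error-free run
            if j - i >= min_width and (best is None or j - i > best[1]):
                best = ((i + j - 1) // 2, j - i)
            i = j
        else:
            i += 1
    return None if best is None else best[0]
```

Setting `min_width` high rejects narrow false eyes at the cost of sensitivity to genuinely small eyes, which is exactly the trade-off the excerpt notes.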

Serverless Network File Systems [chapter]

2009 High Performance Mass Storage and Parallel I/O  
Our approach uses this location independence, in combination with fast local area networks, to provide better performance and scalability than traditional file systems.  ...  Further, because any machine in the system can assume the responsibilities of a failed component, our serverless design also provides high availability via redundant data storage.  ...  Acknowledgments We owe several members of the Berkeley Communications Abstraction Layer group -David Culler, Lok Liu, and Rich Martin -a large debt for helping us to get the 32-node Myrinet network up.  ... 
doi:10.1109/9780470544839.ch24 fatcat:ji4enfuitrdblmyrlkgx3hoixi

Opportunistic Data-driven Execution of Parallel Programs for Efficient I/O Services

Xuechen Zhang, Kei Davis, Song Jiang
2012 2012 IEEE 26th International Parallel and Distributed Processing Symposium  
Our experiments on a 120-node cluster using the PVFS2 file system show that DualPar can increase system I/O throughput by 31% on average, compared to existing MPI-IO with or without using collective I/O  ...  We propose a data-driven program execution mode in which process scheduling and request issuance are coordinated to facilitate effective I/O scheduling for high disk efficiency.  ...  This work was supported by US National Science Foundation under CAREER CCF 0845711 and CNS 1117772.  ... 
doi:10.1109/ipdps.2012.39 dblp:conf/ipps/ZhangDJ12 fatcat:jqgnjx3t4vhgzgemhq2xopd54y

I/O-Aware Deadline Miss Ratio Management in Real-Time Embedded Databases

Woochul Kang, Sang H. Son, John A. Stankovic, Mehdi Amirijoo
2007 28th IEEE International Real-Time Systems Symposium (RTSS 2007)  
Buffer cache can be used to mitigate the problem.  ...  I/O intensive: NUM data follows Normal(150, 30). Each user transaction incurs about 100% more I/O load than CPU load. In this setting, the I/O workload varies from 50% to 190%.  ...  Setting I/O deadlines can serve two purposes: time-cognizant I/O scheduling and I/O workload control. In this paper, we use I/O deadlines only for the I/O workload control purpose.  ... 
doi:10.1109/rtss.2007.19 dblp:conf/rtss/KangSSA07 fatcat:ma5lrb2x4zccjfn7jfr5mavuma

Parity Logging Overcoming the Small Write Problem in Redundant Disk Arrays [chapter]

2009 High Performance Mass Storage and Parallel I/O  
Unfortunately, small writes are a substantial portion of the I/O workload of many important, demanding applications such as on-line transaction processing.  ...  Parity encoded redundant disk arrays provide highly reliable, cost effective secondary storage with high performance for read accesses and large write accesses.  ...  Section 9: Acknowledgments We would like to thank Ed Lee for the original version of Raidsim, and Brian Bershad, Peter Chen, Hugo Patterson, and Jody Prival for early reviews.  ... 
doi:10.1109/9780470544839.ch5 fatcat:m5rupf3runexrojq4bukwmpqkq

ViPIOS - VIenna Parallel Input Output System: Language, Compiler and Advanced Data Structure Support for Parallel I/O Operations [article]

Erich Schikuta, Helmut Wanek, Heinz Stockinger, Kurt Stockinger, Thomas Fürle, Oliver Jorns, Christoph Löffelhardt, Peter Brezany, Minh Dang, Thomas Mück
2018 arXiv   pre-print
We focus on the design of an advanced parallel I/O support, called ViPIOS (VIenna Parallel I/O System), to be targeted by language compilers supporting the same programming model as High Performance  ...  The main focus of this research is the parallel I/O runtime system support provided for software-generated programs produced by parallelizing compilers in the context of High Performance FORTRAN efforts  ...  If no hints are available, ViPIOS uses general heuristics to find an initial distribution and can then dynamically adapt to the application's I/O needs during runtime. 4. Usability.  ... 
arXiv:1808.01166v1 fatcat:gcfewrlk5fcqlbdztveocldqim