Source level transformations to improve I/O data partitioning
2003
Proceedings of the international workshop on Storage network architecture and parallel I/Os - SNAPI '03
We then use these LDADs to guide our I/O data partitioning that utilizes multiple disks to significantly increase I/O throughput. ...
From our previous work on I/O profiling, we found that I/O access patterns of parallel scientific applications are usually very regular and highly predictable. ...
In our future work, we will focus on using a more sophisticated data flow and I/O control flow analysis to guide data partitioning, and we will also study how better to use profiling information to guide ...
doi:10.1145/1162618.1162622
fatcat:qsdmohpq6bhkvggdpqa2whi6u4
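The excerpts above describe using profiled access patterns to drive data partitioning across multiple disks. As a loose, hypothetical illustration of that idea (not the paper's actual source-level transformation), the Python sketch below stripes a file's blocks over several directories standing in for disks, placing the most frequently accessed blocks first so they land on different devices; `stripe_blocks` and its profile format are invented for this sketch.

```python
# Hypothetical sketch: stripe the blocks of one logical file across several
# "disks" (plain directories here), placing the most frequently accessed
# blocks on different disks so concurrent reads can proceed in parallel.
import os

BLOCK_SIZE = 1 << 20  # 1 MiB blocks (assumed)

def stripe_blocks(src_path, disk_dirs, profile):
    """profile: dict mapping block index -> access count from a prior run."""
    # Hot blocks first, round-robin over disks, so no single disk ends up
    # holding all of the frequently accessed data.
    order = sorted(profile, key=profile.get, reverse=True)
    placement = {blk: disk_dirs[i % len(disk_dirs)] for i, blk in enumerate(order)}
    with open(src_path, "rb") as src:
        for blk, disk in placement.items():
            src.seek(blk * BLOCK_SIZE)
            data = src.read(BLOCK_SIZE)
            os.makedirs(disk, exist_ok=True)
            with open(os.path.join(disk, f"blk{blk:06d}"), "wb") as out:
                out.write(data)
    return placement
```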
Techniques for minimizing and balancing I/O during functional partitioning
1999
IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems
A key problem during partitioning is minimizing the input/output (I/O) pins or wires between processors. The traditional structural partitioning approach is strongly restricted by such I/O. ...
The FunctionBus allows choice of any size for internal I/O by trading off I/O size for performance, while port calling allows distribution of external I/O almost arbitrarily among modules. ...
FunctionBus Versus Cut-Edges I/O During Functional Partitioning Functional partitioning using a FunctionBus approach can yield even further I/O improvements. ...
doi:10.1109/43.739060
fatcat:bysbwz6jkbabvdrjiky3lfhumu
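The excerpts contrast the I/O wires implied by cut edges of a structural partition with a single shared FunctionBus of chosen width. A rough, hypothetical illustration of that trade-off, counting the cut-edge wires of a two-way module partition against an assumed 16-bit bus:

```python
# Hypothetical illustration: compare the wire count implied by cut edges of a
# two-way partition against a fixed-width shared bus (trading I/O size for
# performance, in the spirit of the FunctionBus).
def cut_edge_wires(edges, part_a):
    """edges: iterable of (module_u, module_v, bits); part_a: set of modules in partition A."""
    return sum(bits for u, v, bits in edges if (u in part_a) != (v in part_a))

edges = [("ctrl", "dct", 32), ("dct", "quant", 32), ("ctrl", "quant", 16)]
part_a = {"ctrl"}
print("cut-edge wires:", cut_edge_wires(edges, part_a))  # structural I/O cost
print("shared-bus wires:", 16)                           # assumed 16-bit bus
```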
Load Balancing using Grid-based Peer-to-Peer Parallel I/O
2005
Proceedings IEEE International Conference on Cluster Computing
Next, we describe a profile-guided data allocation algorithm that can increase the degree of I/O parallelism present in the system, as well as balance I/O in a heterogeneous system. ...
Our experimental results show that by partitioning data across all available storage devices and carefully tuning I/O workloads in the Grid system, our Peer-to-Peer scheme can deliver scalable high performance ...
We then use the profiles to guide how to partition data sets across disks, both to achieve high I/O throughput and to balance I/O workloads. ...
doi:10.1109/clustr.2005.347040
dblp:conf/cluster/WangK05
fatcat:zretqxaygffubalxp6n6llactq
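The abstract mentions a profile-guided allocation that both raises I/O parallelism and balances load across heterogeneous storage. A minimal sketch of one plausible policy (not necessarily the paper's algorithm): greedily place each profiled partition on the device whose expected service time, weighted by bandwidth, stays lowest.

```python
# Hypothetical greedy allocator: assign each file partition to the device that
# would finish its current load soonest, weighting by measured bandwidth so
# faster devices receive proportionally more data.
def allocate(partitions, devices):
    """partitions: dict name -> expected bytes from a profile;
       devices: dict name -> bandwidth in bytes/s."""
    load = {d: 0.0 for d in devices}  # expected service time per device
    placement = {}
    for part, size in sorted(partitions.items(), key=lambda kv: -kv[1]):
        dev = min(load, key=lambda d: load[d] + size / devices[d])
        load[dev] += size / devices[dev]
        placement[part] = dev
    return placement

print(allocate({"p0": 4e9, "p1": 2e9, "p2": 1e9},
               {"fast_ssd": 500e6, "slow_disk": 100e6}))
```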
Experiences in profile-guided operating system kernel optimization
2014
Proceedings of 5th Asia-Pacific Workshop on Systems - APSys '14
The technique we take advantage of is profile-guided optimization, which is a compiler optimization technique commonly used in user applications. ...
We use tmpfs in our evaluation to avoid the uncertainty of disk I/O performance. ...
On the profile-guided optimized kernel, the throughput of nginx decreases by 0.59% to 96494 requests per second. ...
doi:10.1145/2637166.2637227
dblp:conf/apsys/YuanGC14
fatcat:oayvkbengzadppyk52sh5ir44q
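The profile-guided optimization referred to here is the usual compile, run, and recompile loop. A minimal sketch of that generic GCC workflow for an ordinary user-space program (the paper applies it to the kernel build, which requires their modified toolchain); `app.c` is a placeholder source file.

```python
# Sketch of the generic GCC PGO workflow: 1) build with instrumentation,
# 2) run a representative workload to collect profiles, 3) rebuild using
# the collected profile.
import subprocess

subprocess.run(["gcc", "-O2", "-fprofile-generate", "app.c", "-o", "app_instr"], check=True)
subprocess.run(["./app_instr"], check=True)   # representative run writes *.gcda profile data
subprocess.run(["gcc", "-O2", "-fprofile-use", "app.c", "-o", "app_pgo"], check=True)
```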
Parallel I/O prefetching using MPI file caching and I/O signatures
2008
2008 SC - International Conference for High Performance Computing, Networking, Storage and Analysis
In this study, we propose an I/O signature-based prefetching strategy. The idea is to use a predetermined I/O signature of an application to guide prefetching. ...
Parallel I/O prefetching is considered to be effective in improving I/O performance. ...
As shown in the fixed stride I/O signature above, initial position can be process dependent, where a process i may start accessing data from i*partition, where partition is a function of process rank. ...
doi:10.1109/sc.2008.5213604
dblp:conf/sc/BynaCSTG08
fatcat:wfsyrq32anbe3maube5n4sodxq
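The fixed-stride signature in the excerpt derives future offsets from the process rank, a per-process partition, and a stride. A small hypothetical sketch of expanding such a signature into a prefetch schedule; the parameter values are made up.

```python
# Hypothetical sketch: expand a fixed-stride I/O signature into the next few
# offsets to prefetch for one process.  Per the excerpt, the initial position
# can be rank dependent, e.g. rank * partition.
def prefetch_offsets(rank, partition, stride, count):
    start = rank * partition                   # process-dependent initial position
    return [start + i * stride for i in range(count)]

# e.g. rank 3, 64 MiB partition per process, one request every 4 MiB
print(prefetch_offsets(rank=3, partition=64 << 20, stride=4 << 20, count=5))
```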
RIB: Analysis of I/O Automata
2019
VOLUME-8 ISSUE-10, AUGUST 2019, REGULAR ISSUE
Continuing with this rationale, instead of evaluating reliable modalities [4, 1, 3, 12], we answer this challenge ...
Next, any practical construction of write-back caches will clearly require that the partition table and agents are mostly incompatible; our system is no different. ...
doi:10.35940/ijitee.i3218.0789s319
fatcat:xn2ldqsh3nezllete7ob6wvovu
Building application-specific operating systems: a profile-guided approach
2018
Science China Information Sciences
With profile collected from executing the target application on an instrumented Linux kernel, Tarax recompiles the kernel while applying profile-guided optimizations (PGOs). ...
We modify the Linux kernel and GCC to support kernel instrumentation and profile collection. We also modify GCC to reduce the size of optimized kernel images. ...
We also use tmpfs to avoid the uncertainty of disk I/O performance. ...
doi:10.1007/s11432-017-9418-9
fatcat:h532zrbsu5hhpmd7gures2f2n4
I/O processing in a virtualized platform
2007
Proceedings of the 3rd international conference on Virtual execution environments - VEE '07
We apply this methodology to study the network I/O performance of Xen (as a case study) in a full system simulation environment, using detailed cache and TLB models to profile and characterize software ...
Unfortunately, the architectural reasons behind the I/O performance overheads are not well understood. ...
In this context, I/O architectures can be broadly divided into split I/O and direct I/O. ...
doi:10.1145/1254810.1254827
dblp:conf/vee/ChadhaIIMNF07
fatcat:x375g5ofc5csvafkvxa6lx2l4u
Modular HPC I/O Characterization with Darshan
2016
2016 5th Workshop on Extreme-Scale Programming Tools (ESPT)
I/O profiling tools can help to address this challenge, but most existing tools only instrument specific components within the I/O subsystem that provide a limited perspective on I/O performance. ...
In this work, we consider how the I/O profiling tool Darshan can be improved to allow for more flexible, comprehensive instrumentation of current and future HPC I/O workloads. ...
I/O nodes, and details on the dimensions of the torus network for the job's compute partition. ...
doi:10.1109/espt.2016.006
dblp:conf/sc/SnyderCHRLW16
fatcat:h22nvvifindyxlvcrzmca3hpee
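Tools of this kind record per-file operation counts and byte totals across layers of the I/O stack. Purely as a toy illustration of counter-based instrumentation, and not Darshan's actual interface, the sketch below wraps a file object and tallies read/write statistics.

```python
# Toy illustration of counter-style I/O instrumentation: wrap a file object
# and record the operation counts and byte totals a profiling tool might log.
class CountingFile:
    def __init__(self, f):
        self._f = f
        self.stats = {"reads": 0, "writes": 0, "bytes_read": 0, "bytes_written": 0}

    def read(self, n=-1):
        data = self._f.read(n)
        self.stats["reads"] += 1
        self.stats["bytes_read"] += len(data)
        return data

    def write(self, data):
        n = self._f.write(data)
        self.stats["writes"] += 1
        self.stats["bytes_written"] += n
        return n

with open("log.tmp", "w+") as raw:
    f = CountingFile(raw)
    f.write("hello\n")
    raw.seek(0)
    f.read()
    print(f.stats)
```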
LDPLFS: Improving I/O Performance without Application Modification
2012
2012 IEEE 26th International Parallel and Distributed Processing Symposium Workshops & PhD Forum
Input/Output (I/O) operations can represent a significant proportion of run-time when large scientific applications are run in parallel and at scale. ...
We demonstrate our implementation of this approach, named LDPLFS, on a set of standard UNIX tools, as well as on a set of standard parallel I/O intensive mini-applications. ...
In [13] and [14] , an I/O profiling tool is utilised to guide the transparent partitioning of files written and read by a set of benchmarks. ...
doi:10.1109/ipdpsw.2012.172
dblp:conf/ipps/WrightHPMHJ12
fatcat:j33prdmqhffjtmtix67x5elzfu
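LDPLFS interposes at the dynamic-linker level (via LD_PRELOAD) so unmodified binaries transparently use PLFS-backed files. As a very loose analogue of interception without modifying the application, the toy below redirects Python `open()` calls for one path prefix to another directory; the real tool is a C shared object, and the paths here are invented.

```python
# Loose analogue of interception-without-modification: redirect open() calls
# for one directory prefix to a different backing directory.  LDPLFS does the
# real thing at the dynamic-linker level with LD_PRELOAD; this is only a toy.
import builtins

REDIRECT_FROM = "/mnt/plfs/"        # hypothetical mount point
REDIRECT_TO = "/tmp/plfs_backing/"  # hypothetical backing directory

_real_open = builtins.open

def _intercepting_open(path, *args, **kwargs):
    # Redirect string paths under the chosen prefix; everything else passes through.
    if isinstance(path, str) and path.startswith(REDIRECT_FROM):
        path = REDIRECT_TO + path[len(REDIRECT_FROM):]
    return _real_open(path, *args, **kwargs)

builtins.open = _intercepting_open  # later open() calls are transparently redirected
```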
End-to-end I/O Monitoring on a Leading Supercomputer
2019
Symposium on Networked Systems Design and Implementation
Online tools that can capture/analyze I/O activities and guide optimization are highly needed. ...
JSON objects, on top of which Beacon builds I/O monitoring/profiling services. ...
I/O behavior. ...
dblp:conf/nsdi/YangJMWZZELYZLX19
fatcat:f5eynixhu5helfcuwtwwlkkw5i
Light-Weight Parallel I/O Analysis at Scale
[chapter]
2011
Lecture Notes in Computer Science
Input/output (I/O) operations can represent a significant proportion of the run-time when large scientific applications are run in parallel. ...
In this paper we utilise RIOT, an input/output tracing toolkit being developed at the University of Warwick, to assess the performance of three standard industry I/O benchmarks and mini-applications. ...
In a similar vein to [23] and [24] , in which I/O throughput is vastly improved by transparently partitioning a data file (creating multiple, independent, I/O streams), PLFS uses file partitioning as ...
doi:10.1007/978-3-642-24749-1_18
fatcat:2ahogbvy7jddroo4m7acgg3szq
Competitive prefetching for concurrent sequential I/O
2007
Proceedings of the 2nd ACM SIGOPS/EuroSys European Conference on Computer Systems 2007 - EuroSys '07
During concurrent I/O workloads, sequential access to one I/O stream can be interrupted by accesses to other streams in the system. ...
Frequent switching between multiple sequential I/O streams may severely affect I/O efficiency due to long disk seek and rotational delays of disk-based storage devices. ...
Patterson et al. explored a cost-benefit model to guide prefetching decisions [26] . ...
doi:10.1145/1272996.1273017
dblp:conf/eurosys/LiSP07
fatcat:r5w3wxu24nbazhat4zrvikravy
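One reading of the competitive-prefetching idea in these excerpts: make each prefetch long enough that its transfer time matches the per-switch overhead (seek plus rotational delay), so that switching between streams wastes at most a bounded fraction of the disk's time. A back-of-the-envelope sketch with made-up disk parameters:

```python
# Back-of-the-envelope sketch: choose a prefetch size so that the sequential
# transfer time matches the per-switch overhead (seek + rotational delay),
# keeping a bounded share of the disk's time in useful transfers.
def competitive_prefetch_bytes(seek_s, rotation_s, bandwidth_bytes_per_s):
    overhead = seek_s + rotation_s
    return int(overhead * bandwidth_bytes_per_s)

# e.g. 5 ms seek, 3 ms average rotational delay, 80 MB/s streaming bandwidth
size = competitive_prefetch_bytes(0.005, 0.003, 80e6)
print(f"prefetch ~{size / 1e6:.2f} MB per stream switch")  # ~0.64 MB
```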
Competitive prefetching for concurrent sequential I/O
2007
ACM SIGOPS Operating Systems Review
During concurrent I/O workloads, sequential access to one I/O stream can be interrupted by accesses to other streams in the system. ...
Frequent switching between multiple sequential I/O streams may severely affect I/O efficiency due to long disk seek and rotational delays of disk-based storage devices. ...
Patterson et al. explored a cost-benefit model to guide prefetching decisions [26] . ...
doi:10.1145/1272998.1273017
fatcat:uwqfpv3qhncermhyr7lxmc33wq
Comanche – A Compiler-Driven I/O Management System
2008
Zenodo
In contrast, compiler-driven I/O management will allow a program's data sets to be retrieved in parts, called blocks or tiles. ...
In this way, the I/O profile can help in predicting the amount of I/O and the run time required for a program similar to the one on which the I/O profile is based. ...
An I/O profile describes how much I/O has been performed and where; additionally it will represent information related to I/O wait time. ...
doi:10.5281/zenodo.1331198
fatcat:asdvqa6cmfbebfjvn4tcxm2gyi
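The excerpts describe fetching a program's data in blocks or tiles while an I/O profile records how much I/O was performed, where, and how long it waited. A hypothetical sketch of tiled reads that accumulates such a profile; `process` is a stand-in for the consuming computation.

```python
# Hypothetical sketch: read a large file in fixed-size tiles instead of all at
# once, while accumulating a simple I/O profile (bytes, operations, wait time)
# of the kind the entry describes.
import time

def process(tile):
    pass  # stand-in for the application's per-tile work

def read_in_tiles(path, tile_bytes=1 << 20):
    profile = {"bytes": 0, "ops": 0, "wait_s": 0.0}
    with open(path, "rb") as f:
        while True:
            t0 = time.perf_counter()
            tile = f.read(tile_bytes)
            profile["wait_s"] += time.perf_counter() - t0
            if not tile:
                break
            profile["ops"] += 1
            profile["bytes"] += len(tile)
            process(tile)
    return profile
```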