A Survey On Data-Centric And Data-Aware Techniques For Large Scale Infrastructures
2016
Zenodo
Finally, the authors include a discussion on future research lines and synergies among the former techniques. ...
These systems are designed to support resource-intensive and complex applications, which can be found in many scientific and industrial areas. ...
A similar approach is followed by the open map-reduce implementation, Hadoop [5] , and its partner file system Hadoop Distributed File System (HDFS) [6] . ...
doi:10.5281/zenodo.1112257
fatcat:4l7qjgwdcrffddnwgb4dd3miuu
Distributed compilation system for high-speed software build processes
2014
2014 International Conference on Big Data and Smart Computing (BIGCOMP)
This significantly reduces the compilation time of large source sets by using idle resources. We expect gains of up to 65% compared to non-distributed compilation systems. ...
Clustering research aims at utilizing idle computer resources for processing a variable workload on a large number of computers. ...
DESIGN AND IMPLEMENTATION OF DISTCOM: The Distributed Compilation System (DistCom) is a technique for implementing a distributed compilation platform that speeds up the compilation of large ...
doi:10.1109/bigcomp.2014.6741419
dblp:conf/bigcomp/LimLLE14
fatcat:d3osts47sra5bmrp6pk3243eve
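The core idea of this entry — farming independent compile jobs out to idle workers — can be sketched as follows. This is a minimal illustrative sketch (hypothetical `compile_unit` stand-in and worker pool), not DistCom's actual implementation, which ships sources to remote hosts.

```python
# Hypothetical sketch of distributing independent compile jobs across idle
# workers, in the spirit of DistCom (not the paper's actual implementation).
from concurrent.futures import ThreadPoolExecutor

def compile_unit(source_file: str) -> str:
    # Stand-in for invoking a compiler on one translation unit;
    # a real system would ship the file to a remote idle host.
    return source_file.replace(".c", ".o")

def distributed_build(sources: list[str], idle_workers: int) -> list[str]:
    # Each translation unit compiles independently, so the build
    # parallelizes across however many idle workers are available.
    with ThreadPoolExecutor(max_workers=idle_workers) as pool:
        return list(pool.map(compile_unit, sources))

objects = distributed_build(["a.c", "b.c", "c.c"], idle_workers=2)
print(objects)  # ['a.o', 'b.o', 'c.o']
```

The speedup comes from the independence of translation units: link steps remain serial, which bounds the overall gain.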
TACC: A Full-stack Cloud Computing Infrastructure for Machine Learning Tasks
[article]
2021
arXiv
pre-print
TACC implements a 4-layer application workflow abstraction through which system optimization techniques can be dynamically combined and applied to various types of ML applications. ...
In Machine Learning (ML) system research, efficient resource scheduling and utilization have always been an important topic given the compute-intensive nature of ML applications. ...
Acknowledgements: This paper is supported in part by Hong Kong RGC TRS T41-603/20-R. We thank Cengguang Zhang and Yuxuan Qin for their contribution in the early stage of TACC development. ...
arXiv:2110.01556v1
fatcat:jfrpby2hjffonbfz235xevwzce
Open Internet-based Sharing for Desktop Grids in iShare
2007
2007 IEEE International Parallel and Distributed Processing Symposium
In this paper, we present a brief overview of the iShare system and describe how iShare leverages existing standards to provide novel solutions to the problems of resource dissemination, resource allocation ...
This paper presents iShare, a distributed peer-to-peer Internet-sharing system, that facilitates the sharing of diverse resources located in different administrative domains over the Internet. iShare addresses ...
Related work in this category includes systems to support high throughput computing on large collections of distributed computing resources (such as Condor [28] ), web service techniques using Application ...
doi:10.1109/ipdps.2007.370663
dblp:conf/ipps/RenBPE07
fatcat:rzuxvhjp6jhkplifhsnmffu4dm
CACHE MECHANISM TO AVOID DUPLICATION OF SAME THING IN HADOOP SYSTEM TO SPEED UP THE EXTENSION
2014
International Journal of Research in Engineering and Technology
Hadoop is popular for the analysis, storage, and processing of very large data, but this requires changes to the Hadoop system. ...
The main idea of the MapReduce model is to hide details of parallel execution and allow users to focus only on data processing strategies. Hadoop is an open-source implementation for MapReduce. ...
Hadoop uses the Hadoop Distributed File System (HDFS), a distributed file system used for storing large data files. Each file is divided into a number of blocks and replicated for fault tolerance. ...
doi:10.15623/ijret.2014.0311048
fatcat:vfirjqkjmjg3rhhnu3bcaz5eji
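The HDFS behavior this entry's snippets describe — splitting a file into fixed-size blocks and replicating each block — can be sketched as below. This is an illustrative sketch with assumed parameters (round-robin placement, example node names), not Hadoop's actual code; real HDFS placement also considers rack topology.

```python
# Illustrative sketch of HDFS-style storage: a file is split into
# fixed-size blocks and each block is assigned to several datanodes
# for fault tolerance. Parameters are assumptions, not Hadoop's code.
from itertools import cycle, islice

BLOCK_SIZE = 64 * 1024 * 1024  # HDFS historically defaulted to 64 MB blocks
REPLICATION = 3                # common default replication factor

def split_into_blocks(file_size: int, block_size: int = BLOCK_SIZE) -> int:
    # Number of blocks = ceiling division of file size by block size.
    return -(-file_size // block_size)

def place_replicas(num_blocks: int, datanodes: list[str],
                   replication: int = REPLICATION) -> dict[int, list[str]]:
    # Simple round-robin placement across the available datanodes.
    ring = cycle(datanodes)
    return {b: list(islice(ring, replication)) for b in range(num_blocks)}

n = split_into_blocks(200 * 1024 * 1024)           # 200 MB file -> 4 blocks
placement = place_replicas(n, ["dn1", "dn2", "dn3", "dn4"])
print(n, placement[0])  # 4 ['dn1', 'dn2', 'dn3']
```

Losing any single datanode leaves at least two replicas of every block, which is what makes the block-level replication fault tolerant.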
Spread Spectrum Storage with Mnemosyne
[chapter]
2003
Lecture Notes in Computer Science
A typical file server implements a multi-user file system, explicitly allocating storage blocks to files and metadata. ...
Mnemosyne was originally conceived to be more in line with this latter class of system, although focusing less on the wide-scale publication of data. ...
doi:10.1007/3-540-37795-6_27
fatcat:ybukbfzmnvathbuzwgqvoid2iy
Intelligent Reconfigurable Method of Cloud Computing Resources for Multimedia Data Delivery
2013
Informatica
It is necessary to manage large data in an efficient way, and to consider transmission efficiency for multimedia data of different quality. ...
The method then allocates resources according to this scheme. ...
Acknowledgements This work was supported by the IT R&D program of MSIP/KCA [12-921-06-001, "Development of MTM-based Security Core Technology for Prevention of Information Leakage in Smart Devices"]. ...
doi:10.15388/informatica.2013.401
fatcat:ecb2qs2osrd7jgkeozmsu3uwl4
A security architecture for computational grids
1998
Proceedings of the 5th ACM conference on Computer and communications security - CCS '98
Both resources and data are often distributed in a wide-area network with components administered locally and independently. ...
State-of-the-art and emerging scientific applications require fast access to large quantities of data and commensurately fast computational resources. ...
of the Globus Resource Allocation Manager, and Bill Johnston's comments on a draft of the paper. ...
doi:10.1145/288090.288111
dblp:conf/ccs/FosterKTT98
fatcat:vw5ppsapk5gvbobt7h24bm7xzq
A federated experiment environment for emulab-based testbeds
2009
2009 5th International Conference on Testbeds and Research Infrastructures for the Development of Networks & Communities and Workshops
Testbeds that contribute resources continue to exert their own resource allocation and access policies. The architecture is designed to scale. We describe a prototype implementation. ...
The system uses cooperative resource allocation and multiple-level testbed access to create a cohesive environment for experimentation. ...
This heavy loading led us to implement the on-demand file system mounting in the federation configurations today. ...
doi:10.1109/tridentcom.2009.4976238
dblp:conf/tridentcom/FaberW09
fatcat:nqp4wop7v5hhjjm76mxkqeewwu
Improving Data Availability for Better Access Performance: A Study on Caching Scientific Data on Distributed Desktop Workstations
2009
Journal of Grid Computing
Distributed Storage Scavenging: Designed and implemented an on-line cache management algorithm for FreeLoader, a novel distributed storage scavenging file system. ...
MapReduce in opportunistic environments: Collaboratively researched a hybrid architecture and multiple novel techniques to enable Hadoop to run MapReduce applications on resource-scavenging ...
doi:10.1007/s10723-009-9122-7
fatcat:34dzjzvsb5fs7mi7w3vhokftv4
Case Studies in Designing Elastic Applications
2013
2013 13th IEEE/ACM International Symposium on Cluster, Cloud, and Grid Computing
Clusters, clouds, and grids offer access to large scale computational resources at low cost. ...
This is especially appealing to scientific applications that require a very large scale to compete in the research space. ...
ACKNOWLEDGEMENTS This work was supported in part by NSF grants OCI-1148330, CNS-0643229, and CNS-855047. ...
doi:10.1109/ccgrid.2013.46
dblp:conf/ccgrid/RajanTAIET13
fatcat:aviqmwf4vvecbozewaxbbkzgpy
Operating system design considerations for the packet-switching environment
1975
Proceedings of the May 19-22, 1975, national computer conference and exposition on - AFIPS '75
ACKNOWLEDGMENTS The author wishes to thank R. Brooks, J. Malman, J. Miller, M. Retz, and D. Walden who assisted in the development of this manuscript. ...
Load-sharing techniques are aimed at an allocation of resources which provides distribution of load or assignment of processing tasks to the most appropriate server sites. ...
need for development of automated techniques for managing resources in the distributed environment. ...
doi:10.1145/1499949.1499980
dblp:conf/afips/Retz75
fatcat:mvihhdvvgvepff2dl6subq32aa
Transparent access to Grid resources for user software
2006
Concurrency and Computation
The last component is 'Parrot'; it implements an interpositioning technique based on the debugger trap to provide the application with transparent file I/O over a wide-area network (WAN) without any modification ...
... existing application ought to be restructured to be deployed on a distributed system comes at an enormous cost in development and debugging labor. ...
Furthermore, we acknowledge the valuable help of David Groep and Jeff Templon during the deployment of our application on the NIKHEF EDG testbed. ...
doi:10.1002/cpe.961
fatcat:7gvy6k4rdzczpcmsmw2zzjgvv4
Survey on Task Assignment Techniques in Hadoop
2012
International Journal of Computer Applications
Hadoop is an open-source implementation of the MapReduce framework, which processes vast amounts of data in parallel on large clusters. ...
MapReduce is an implementation for processing large-scale data in parallel. The actual benefits of MapReduce occur when the framework is deployed on a large-scale, shared-nothing cluster. ...
The Hadoop Distributed File System (HDFS) is a block-oriented file system. Individual files are divided into blocks of 64 MB. ...
doi:10.5120/9617-4256
fatcat:tnnd4l7ztfcftp7wnz6z3gcnma
Performance-driven task co-scheduling for MapReduce environments
2010
2010 IEEE Network Operations and Management Symposium - NOMS 2010
Such sharing is in line with recent trends in data center management which aim to consolidate workloads in order to achieve cost and energy savings. ...
The proposed task scheduler dynamically predicts the performance of concurrent MapReduce jobs and adjusts the resource allocation for the jobs. ...
to the distributed file system. ...
doi:10.1109/noms.2010.5488494
dblp:conf/noms/PoloCBSW10
fatcat:2r4ljikgonb23nv4vurgqvll7e