18,009 Hits in 5.4 sec

Programming abstractions, compilation, and execution techniques for massively parallel data analysis [article]

Stephan Ewen, Volker Markl (Technische Universität Berlin)
2015
This observation made way for a new breed of systems with generic abstractions for data parallel programming, among which the arguably most famous one is MapReduce.  ...  Compared to relational databases, MapReduce and the other parallel programming systems sacrifice the declarative query abstraction and require programmers to implement low-level imperative programs and  ...  We revisit the basic concepts behind data-parallelism and review the programming abstraction and execution model of two well known systems for parallel data analysis.  ... 
doi:10.14279/depositonce-4395 fatcat:76w55ni4ofd5jedz724tsl5qka
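
To make the snippet's contrast with declarative queries concrete, here is a minimal single-process sketch of the map/reduce programming abstraction: the programmer writes imperative map and reduce functions, and the framework handles grouping by key. The function names and the in-memory "shuffle" are illustrative, not any particular system's API.

```cpp
#include <iostream>
#include <map>
#include <sstream>
#include <string>
#include <utility>
#include <vector>

// map: one input record -> list of (key, value) pairs
std::vector<std::pair<std::string, int>> map_fn(const std::string& line) {
    std::vector<std::pair<std::string, int>> out;
    std::istringstream iss(line);
    std::string word;
    while (iss >> word) out.emplace_back(word, 1);
    return out;
}

// reduce: all values grouped under one key -> aggregate
int reduce_fn(const std::vector<int>& values) {
    int sum = 0;
    for (int v : values) sum += v;
    return sum;
}

int main() {
    std::vector<std::string> input = {"to be or not to be", "to do is to be"};
    std::map<std::string, std::vector<int>> groups;  // stands in for the shuffle phase
    for (const auto& line : input)
        for (const auto& [k, v] : map_fn(line)) groups[k].push_back(v);
    for (const auto& [k, vs] : groups)
        std::cout << k << ": " << reduce_fn(vs) << "\n";
}
```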

Large-Scale Data Computing Performance Comparisons on SYCL Heterogeneous Parallel Processing Layer Implementations

Woosuk Shin, Kwan-Hee Yoo, Nakhoon Baek
2020 Applied Sciences  
Our analysis is available for fundamental measurements of the abstract-level cost-effective use of massively parallel computations, especially for big-data applications.  ...  There is also a need for high-level abstractions and platform-independence over those massively parallel computing platforms.  ...  information on the abstract-level cost-effective implementations of massively parallel computing, especially for big-data applications.  ... 
doi:10.3390/app10051656 fatcat:6ctkonfinrcphf7qd74btyjtli
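
As a concrete picture of the abstraction layer being benchmarked, a minimal SYCL 2020 vector addition might look like the sketch below. This is a generic illustration of the buffer/accessor model, not code from the paper.

```cpp
#include <sycl/sycl.hpp>
#include <iostream>
#include <vector>

int main() {
    const size_t n = 1024;
    std::vector<float> a(n, 1.0f), b(n, 2.0f), c(n);
    sycl::queue q;  // default device selection (CPU, GPU, ...)
    {
        sycl::buffer<float> ba(a.data(), sycl::range<1>(n));
        sycl::buffer<float> bb(b.data(), sycl::range<1>(n));
        sycl::buffer<float> bc(c.data(), sycl::range<1>(n));
        q.submit([&](sycl::handler& h) {
            sycl::accessor pa(ba, h, sycl::read_only);
            sycl::accessor pb(bb, h, sycl::read_only);
            sycl::accessor pc(bc, h, sycl::write_only, sycl::no_init);
            // one work-item per element; the runtime maps this to the device
            h.parallel_for(sycl::range<1>(n), [=](sycl::id<1> i) {
                pc[i] = pa[i] + pb[i];
            });
        });
    }  // buffer destruction copies results back to the host vectors
    std::cout << c[0] << "\n";  // 3
}
```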

Prototyping Fortran-90 compilers for massively parallel machines

Marina Chen, James Cowie
1992 Proceedings of the ACM SIGPLAN 1992 conference on Programming language design and implementation - PLDI '92  
Massively parallel architectures, and the languages used to program them, are among both the most difficult and the most rapidly-changing subjects for compilation.  ...  Using formal specification techniques, we have produced a data-parallel Fortran-90 subset compiler for Thinking Machines' Connection Machine/2 and Connection Machine/5.  ...  Also, thanks to Woody Liechtenstein, Bob Millstein, and Gary Sabot of Thinking Machines for their help with CM Fortran, CM/RT, and the slicewise programming model.  ... 
doi:10.1145/143095.143122 dblp:conf/pldi/ChenC92 fatcat:l2f3tzvar5gnho6jznczvyrfku

Implementing a non-strict functional programming language on a threaded architecture [chapter]

Shigeru Kusakabe, Kentaro Inenaga, Makoto Amamiya, Xinan Tang, Andres Marquez, Guang R. Gao
1999 Lecture Notes in Computer Science  
The combination of a language with fine-grain implicit parallelism and a dataflow evaluation scheme is suitable for high-level programming on massively parallel architectures.  ...  Our compiler generates code in Threaded-C, which is a lower-level programming language for EARTH. We have developed translation rules and integrated them into the compiler.  ...  The second technique is to transform non-strict access to data structures into scheduled strict access, by data-dependence analysis between producers and consumers at compilation time.  ... 
doi:10.1007/bfb0097894 fatcat:kjwlaxfjd5citccnro6auetqga
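
The "non-strict access" being transformed away can be pictured with a thunk: a consumer may demand a value before its producer has conceptually run, so evaluation is deferred until first use. The paper's technique instead schedules producers before consumers at compile time, so the strict access needs no run-time check. This is a generic C++ illustration, not the authors' Threaded-C output.

```cpp
#include <functional>
#include <iostream>

template <typename T>
class Thunk {
public:
    explicit Thunk(std::function<T()> f) : f_(std::move(f)) {}
    T force() {  // evaluate on first demand, then cache the result
        if (!done_) { value_ = f_(); done_ = true; }
        return value_;
    }
private:
    std::function<T()> f_;
    T value_{};
    bool done_ = false;
};

int main() {
    Thunk<int> head([] { std::cout << "producing\n"; return 42; });
    std::cout << "before demand\n";
    std::cout << head.force() << "\n";  // production happens here, on demand
}
```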

Iterative parallel data processing with stratosphere

Stephan Ewen, Sebastian Schelter, Kostas Tzoumas, Daniel Warneke, Volker Markl
2013 Proceedings of the 2013 international conference on Management of data - SIGMOD '13  
With increasing interest to run those algorithms on very large data sets, we see a need for new techniques to execute iterations in a massively parallel fashion.  ...  For the first step, we show the algorithm's code and a visualization of the produced data flow programs.  ...  Acknowledgments This research is funded by the German Research Foundation under grant "FOR 1036: Stratosphere -Information Management on the Cloud" and the European Union (EU) grant no. 257859 (project  ... 
doi:10.1145/2463676.2463693 dblp:conf/sigmod/EwenSTWM13 fatcat:ldthktqvwnfrbludeek7xak2ia
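
A hedged sketch of the bulk-iteration contract the abstract describes: a step function is applied to the whole working set until a convergence criterion or an iteration cap fires. The step function and convergence test below are placeholders, not Stratosphere's API; a real system would run each step as a parallel data flow.

```cpp
#include <cmath>
#include <iostream>
#include <vector>

// placeholder step function (e.g., one round of a rank update)
std::vector<double> step(const std::vector<double>& ranks) {
    std::vector<double> next(ranks.size());
    for (size_t i = 0; i < ranks.size(); ++i)
        next[i] = 0.5 * ranks[i] + 0.5 / ranks.size();
    return next;
}

int main() {
    std::vector<double> ranks(4, 0.25);
    for (int iter = 0; iter < 100; ++iter) {        // iteration cap
        auto next = step(ranks);
        double delta = 0;
        for (size_t i = 0; i < ranks.size(); ++i)
            delta += std::fabs(next[i] - ranks[i]);
        ranks = std::move(next);
        if (delta < 1e-9) break;                    // convergence criterion
    }
    for (double r : ranks) std::cout << r << " ";
}
```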

Automatic Generation of Massively Parallel Hardware from Control-Intensive Sequential Programs

Michael F. Dossis
2010 2010 IEEE Computer Society Annual Symposium on VLSI  
Using compiler-generators and logic programming techniques, a provably-correct hardware compilation flow is achieved.  ...  This paper describes a unified and integrated HLS framework, to automatically produce custom and massively-parallel hardware, including its memory and system interfaces, from high-level sequential program  ...  CONCLUSIONS This paper discussed a method for the automatic and formal generation of custom, massively-parallel hardware architecture implementations, from whole and complete, abstract, high-level programs  ... 
doi:10.1109/isvlsi.2010.40 dblp:conf/isvlsi/Dossis10 fatcat:iyr7ugsao5axnccjthhf54phta

Compiler Techniques for Massively Scalable Implicit Task Parallelism

Timothy G. Armstrong, Justin M. Wozniak, Michael Wilde, Ian T. Foster
2014 SC14: International Conference for High Performance Computing, Networking, Storage and Analysis  
We present a comprehensive set of compiler techniques for data-driven task parallelism, including novel compiler optimizations and intermediate representations.  ...  Swift/T is a high-level language for writing concise, deterministic scripts that compose serial or parallel codes implemented in lower-level programming models into large-scale parallel applications.  ...  -01, and by NSF award ACI 1238993 and the state of Illinois through the Blue Waters sustained-petascale computing project.  ... 
doi:10.1109/sc.2014.30 dblp:conf/sc/ArmstrongWWF14 fatcat:fn5th2mbljhavelopcxrrhljte
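
The "data-driven task parallelism" named in the snippet can be pictured with plain futures: a task becomes runnable as soon as its input futures are ready. The sketch below uses std::async as a stand-in for the Swift/T runtime, whose actual interface it does not model.

```cpp
#include <future>
#include <iostream>

int f(int x) { return x + 1; }
int g(int x) { return x * 2; }
int h(int a, int b) { return a + b; }

int main() {
    // f and g have no mutual data dependence, so they may run concurrently;
    // h is data-dependent on both results and blocks until they are ready.
    std::future<int> a = std::async(std::launch::async, f, 1);
    std::future<int> b = std::async(std::launch::async, g, 2);
    std::cout << h(a.get(), b.get()) << "\n";  // 6
}
```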

Performance data mining

Alois Ferscha, Allen D Malony
2001 Future Generation Computer Systems  
Their experiments with HPF+ kernels for finite element solvers on a massively parallel execution platform suggest the integration of run-time and post-mortem performance analysis tools with compile-time  ...  If we consider a single parallel program and the set of all possible executions of the program, as determined by the choice of execution environment, program transformation, and input data set, we can regard  ... 
doi:10.1016/s0167-739x(01)00047-4 fatcat:hbdnqkbwt5f67ozkbnwjzmetxq

A Review of Parallelization Tools and Introduction to Easypar

Sudhakar Sah, Vinay G. Vaidya
2012 International Journal of Computer Applications  
However, exploiting parallelism from a program is not easy, as it requires parallel programming expertise. In addition, manual parallelization is a cumbersome, time-consuming, and inefficient process.  ...  The classification is based on different eras of tool development, the role played by these tools in various parallelization stages, and the features provided by parallel program assistance tools.  ...  The dynamic data-dependence analysis technique has limited scalability and does not work for programs with a large memory footprint.  ... 
doi:10.5120/8944-3108 fatcat:mxaohvalvrecrmxlplzyzq7x2i
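
To illustrate the scalability limitation mentioned in the last fragment, here is a toy dynamic data-dependence profiler: it records the last writer of each address and flags loop-carried dependences at run time. The shadow map is exactly what grows with a program's memory footprint. Entirely illustrative, not any tool surveyed in the paper.

```cpp
#include <iostream>
#include <unordered_map>
#include <vector>

int main() {
    std::vector<int> a = {0, 1, 2, 3, 4, 5, 6, 7};
    std::unordered_map<int, int> last_writer;  // address index -> iteration
    for (int i = 1; i < 8; ++i) {
        int read_idx = i - 1;                  // a[i] = a[i-1] + 1 reads a[i-1]
        auto it = last_writer.find(read_idx);
        if (it != last_writer.end())
            std::cout << "iteration " << i << " depends on iteration "
                      << it->second << "\n";   // loop-carried dependence found
        a[i] = a[read_idx] + 1;
        last_writer[i] = i;                    // record this iteration's write
    }
}
```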

A Theoretical Model for Global Optimization of Parallel Algorithms

Julian Miller, Lukas Trümper, Christian Terboven, Matthias S. Müller
2021 Mathematics  
It utilizes a hierarchical decomposition of parallel design patterns as well-established building blocks for algorithmic structures and captures them in an abstract pattern tree (APT).  ...  We present a parallel algorithm model that allows for global optimization of their synchronization and dataflow and optimal mapping to complex and heterogeneous architectures.  ...  Thus, significant efforts target the design of programming abstractions such as parallel programming models and automatic transformation techniques: There are various transformation techniques and capable  ... 
doi:10.3390/math9141685 fatcat:disxmrpqtfa7fbwt6n7cuyqz7q
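
A minimal sketch of what an abstract pattern tree (APT) node could look like, with parallel design patterns as building blocks composed hierarchically. The enum and node layout are assumptions for illustration, not the paper's actual representation.

```cpp
#include <memory>
#include <string>
#include <vector>

enum class Pattern { Sequence, Map, Reduce, Stencil, Pipeline };

struct APTNode {
    Pattern pattern;
    std::string label;                            // e.g., the kernel it wraps
    std::vector<std::unique_ptr<APTNode>> children;
};

int main() {
    // A map-then-reduce algorithm captured as a sequence node with two children.
    auto root = std::make_unique<APTNode>(APTNode{Pattern::Sequence, "main", {}});
    root->children.push_back(
        std::make_unique<APTNode>(APTNode{Pattern::Map, "square", {}}));
    root->children.push_back(
        std::make_unique<APTNode>(APTNode{Pattern::Reduce, "sum", {}}));
}
```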

A Survey on Domain-Specific Languages for Machine Learning in Big Data [article]

Ivens Portugal, Paulo Alencar, Donald Cowan
2016 arXiv   pre-print
New problems and novel approaches of data capture, storage, analysis and visualization are responsible for the emergence of the Big Data research field.  ...  Therefore, this literature survey identifies and describes domain-specific languages and frameworks used for Machine Learning in Big Data.  ...  for Community Mapping (COMAP) for their financial support to this research.  ... 
arXiv:1602.07637v2 fatcat:kn34njlaojdqpn35xz6q6gk5im

Limits of control flow on parallelism

Monica S. Lam, Robert P. Wilson
1992 Proceedings of the 19th annual international symposium on Computer architecture - ISCA '92  
First, local regions of code have limited parallelism, and control dependence analysis is useful in extracting global parallelism from different parts of a program.  ...  This paper discusses three techniques useful in relaxing the constraints imposed by control flow on parallelism: control dependence analysis, executing multiple flows of control simultaneously, and speculative  ...  The parallelism increases for all of the programs, and especially for gcc, irsim, and espresso. However, there is still not a massive amount of parallelism.  ... 
doi:10.1145/139669.139702 dblp:conf/isca/LamW92 fatcat:mcrgkh2kund4djg7kk5ucgdznu (also indexed as a SIGARCH Computer Architecture News record: doi:10.1145/146628.139702 fatcat:rxukdhahjffezc4vq72my5vimi)
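
The paper's three techniques can be seen in a few lines of code: both assignments below are control-dependent on the same branch but data-independent of each other, so control dependence analysis exposes them as parallel work, and speculative execution could start them before the branch resolves. Illustrative only, not the paper's experimental framework.

```cpp
#include <iostream>

int branch_limited(int c, int x, int y) {
    int s1 = 0, s2 = 0;
    if (c > 0) {      // control dependence: this branch guards both statements
        s1 = x * 41;  // independent of s2 -> eligible for parallel issue
        s2 = y + 7;   // independent of s1 -> eligible for parallel issue
    }
    return s1 + s2;   // data-dependent on both; must wait for them
}

int main() { std::cout << branch_limited(1, 2, 3) << "\n"; }  // 92
```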

Memory Utilization and Machine Learning Techniques for Compiler Optimization

A V Shreyas Madhav, Siddarth Singaravel, A Karmel
2021 ITM Web of Conferences  
The realm of compiler suites that possess and apply efficient optimization methods provides a wide array of beneficial attributes that help programs execute efficiently with low execution time and minimal  ...  computing experience for the developer and user.  ...  This can be considered as some intermediate representation (IR) or as a program for an abstract machine. This IR must be easy to generate and convert into target code.  ... 
doi:10.1051/itmconf/20213701021 fatcat:b7lrzsnszrbcdbxqreb2qmqlby
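
The snippet's point about an IR that is "easy to generate and convert into target code" can be made concrete with a toy three-address code; the instruction set here is invented for illustration and is not what the paper uses.

```cpp
#include <iostream>
#include <string>
#include <vector>

struct Instr {
    std::string op;            // "add", "mul", "load", ...
    std::string dst, src1, src2;
};

int main() {
    // IR for the source expression: t2 = (a + b) * c
    std::vector<Instr> ir = {
        {"add", "t1", "a", "b"},
        {"mul", "t2", "t1", "c"},
    };
    // printing stands in for lowering each instruction to target code
    for (const auto& i : ir)
        std::cout << i.dst << " = " << i.op << " " << i.src1 << " " << i.src2 << "\n";
}
```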

Object-oriented design for massively parallel computing [article]

Edward Givelberg
2019 arXiv   pre-print
We implemented a prototype of a compiler and a runtime system for parallel C++ and used them to create complex data-intensive and HPC applications.  ...  We define an abstract framework for object-oriented programming and show that object-oriented languages, such as C++, can be interpreted as parallel programming languages.  ...  abstraction for parallel programming.  ... 
arXiv:1811.09303v2 fatcat:5p3usk44knbhlfoz2njok4dt6q
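
A hedged sketch of the central idea as the snippet states it: a method invocation on an object can be treated as an asynchronous message, so independent objects compute in parallel. std::async stands in for the prototype compiler and runtime system, whose actual interface this does not claim to reproduce.

```cpp
#include <future>
#include <iostream>

class Matrix {
public:
    explicit Matrix(double v) : v_(v) {}
    double norm() const { return v_; }  // placeholder for real work
private:
    double v_;
};

int main() {
    Matrix a(3.0), b(4.0);
    // Both method invocations may execute in parallel, as if each object
    // lived on its own processor and received the call as a message.
    auto fa = std::async(std::launch::async, [&] { return a.norm(); });
    auto fb = std::async(std::launch::async, [&] { return b.norm(); });
    std::cout << fa.get() + fb.get() << "\n";  // 7
}
```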
Showing results 1 — 15 out of 18,009 results