237,600 Hits in 10.1 sec

Thread-Based Competitive Or-Parallelism [chapter]

Paulo Moura, Ricardo Rocha, Sara C. Madeira
2008 Lecture Notes in Computer Science  
This paper presents the logic programming concept of thread-based competitive or-parallelism, which combines the original idea of competitive or-parallelism with committed-choice nondeterminism and speculative  ...  We discuss the implementation of competitive or-parallelism in the context of Logtalk, an object-oriented logic programming language, and present experimental results. This work has been partially supported  ...  Conclusions and Future Work: We have presented the logic programming concept of thread-based competitive or-parallelism supported by an implementation in the object-oriented logic programming language Logtalk  ... 
doi:10.1007/978-3-540-89982-2_63 fatcat:bfbpm3oq3fhnxjmht3r3aazqd4
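The entry above describes racing several alternative ways of answering the same goal in separate threads and committing to whichever finishes first. A minimal Python sketch of that general pattern follows; the solver functions are hypothetical stand-ins for alternative strategies, and this is an illustration of the idea rather than Logtalk's actual multi-threading predicates.

```python
# Competitive or-parallelism, sketched with threads: alternative ways of
# answering the same question race each other; the first completed answer
# is committed to and the remaining attempts are discarded.
from concurrent.futures import ThreadPoolExecutor, FIRST_COMPLETED, wait

def solver_iterative(n):
    return ("iterative", sum(range(n)))          # hypothetical strategy A

def solver_closed_form(n):
    return ("closed-form", n * (n - 1) // 2)     # hypothetical strategy B

def race(arg, solvers):
    """Run all solvers competitively and commit to the first result."""
    with ThreadPoolExecutor(max_workers=len(solvers)) as pool:
        futures = [pool.submit(s, arg) for s in solvers]
        done, pending = wait(futures, return_when=FIRST_COMPLETED)
        for f in pending:
            # cancel() only stops work that has not started yet; a real
            # or-parallel runtime would terminate the losing computations.
            f.cancel()
        return next(iter(done)).result()

if __name__ == "__main__":
    print(race(1_000_000, [solver_iterative, solver_closed_form]))
```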

High Level Thread-Based Competitive Or-Parallelism in Logtalk [chapter]

Paulo Moura, Ricardo Rocha, Sara C. Madeira
2008 Lecture Notes in Computer Science  
We discuss the implementation of thread-based competitive or-parallelism in the context of Logtalk, an object-oriented logic programming language, and present experimental results.  ...  This paper presents the logic programming concept of thread-based competitive or-parallelism, which combines the original idea of competitive or-parallelism with committed-choice nondeterminism and speculative  ...  competitive or-parallelism.  ... 
doi:10.1007/978-3-540-92995-6_8 fatcat:4s33cu4owrgu5kn2x5jnpmruti

Variant-based competitive parallel execution of sequential programs

Oliver Trachsel, Thomas R. Gross
2010 Proceedings of the 7th ACM international conference on Computing frontiers - CF '10  
Competitive parallel execution (CPE) is a simple yet attractive technique to improve the performance of sequential programs on multi-core and multi-processor systems.  ...  A sequential program is transformed into a CPE-enabled program by introducing multiple variants for parts of the program.  ...  The original program is slightly adapted to enable competitive parallel execution of these variants for specific parts of the program.  ... 
doi:10.1145/1787275.1787325 dblp:conf/cf/TrachselG10 fatcat:t5wwybpmljhynpxzdq6u5vesne
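The abstract sketches the CPE transformation: several variants of one program part run in parallel and the first to produce a result wins. Below is a small, hedged illustration of that idea using processes so the losing variant can actually be terminated; the two sort variants are placeholders, not the benchmarks used in the paper.

```python
# Variant-based competitive parallel execution: two variants of the same
# program part run in separate processes; the first result is used and
# the slower variant is terminated.
import multiprocessing as mp

def variant_builtin(data, out):
    out.put(("builtin", sorted(data)))           # fast library sort

def variant_insertion(data, out):
    result = []
    for x in data:                               # deliberately slower variant
        i = 0
        while i < len(result) and result[i] < x:
            i += 1
        result.insert(i, x)
    out.put(("insertion", result))

def run_competitively(data):
    out = mp.Queue()
    procs = [mp.Process(target=v, args=(data, out))
             for v in (variant_builtin, variant_insertion)]
    for p in procs:
        p.start()
    winner, result = out.get()                   # first variant to finish
    for p in procs:                              # stop the losing variant(s)
        p.terminate()
        p.join()
    return winner, result

if __name__ == "__main__":
    winner, result = run_competitively(list(range(3000, 0, -1)))
    print(winner, result[:5])
```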

Ibis: Real-world problem solving using real-world grids

H.E. Bal, N. Drost, R. Kemp, J. Maassen, R.V. van Nieuwpoort, C. van Reeuwijk, F.J. Seinstra
2009 2009 IEEE International Symposium on Parallel & Distributed Processing  
Ibis supports a range of programming models that yield efficient implementations, even on distributed sets of heterogeneous resources.  ...  Ibis is an open source software framework that drastically simplifies the process of programming and deploying large-scale parallel and distributed grid applications.  ...  The data analysis program itself, a sequential program implemented in C called 'SuperFind', was provided by the competition organizers.  ... 
doi:10.1109/ipdps.2009.5160960 dblp:conf/ipps/BalDKMNRS09 fatcat:5lkmaxfra5esnafhnxpuafb53e

Adaptive Scheduling Framework for Multi-Core Systems Based on the Task-Parallel Programming Model

H. M. Lu, Y. J. Cao, J. J. Song, T. Y. Di, H. Y. Sun, X. M. Han (School of Computer Science and Engineering, Changchun University of Technology, Changchun 130012, China; School of Software, Zhengzhou University, Zhengzhou 450000, China; School of Computer Engineering, Nanyang Technological University, Singapore 639798, Singapore)
2016 Journal of Engineering Science and Technology Review  
However, current multi-core parallel programming models have shortcomings such as poor scalability and intense competition for processor core resources.  ...  A co-scheduling system, A-SYS (Adaptive SYStem), based on a fine-grained task programming model was designed and implemented.  ...  In a broad sense, parallel programming models that parallelize application programs can be divided into the data-parallel programming model and the task-parallel programming model [3].  ... 
doi:10.25103/jestr.096.12 fatcat:h5vgipeojfdnnbxgkw6a6fdjba
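As a quick illustration of the distinction the abstract draws, the sketch below decomposes work both ways on one process pool: a data-parallel step applies the same operation to partitions of the data, while a task-parallel step schedules independent, heterogeneous tasks. All names and workloads are invented for illustration; this is not the A-SYS system.

```python
# Data-parallel vs. task-parallel decomposition on the same process pool.
from concurrent.futures import ProcessPoolExecutor

def square_chunk(chunk):          # data-parallel: one operation, many chunks
    return [x * x for x in chunk]

def load_config():                # task-parallel: distinct, independent tasks
    return {"workers": 4}

def checksum(values):
    return sum(values) % 65521

if __name__ == "__main__":
    data = list(range(10_000))
    chunks = [data[i::4] for i in range(4)]
    with ProcessPoolExecutor() as pool:
        squared = [y for part in pool.map(square_chunk, chunks) for y in part]
        cfg = pool.submit(load_config)
        chk = pool.submit(checksum, data)
        print(len(squared), cfg.result(), chk.result())
```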

The last mile

Caitlin Sadowski, Andrew Shewmaker
2010 Proceedings of the FSE/SDP workshop on Future of software engineering research - FoSER '10  
In order to take advantage of increasing numbers of parallel resources, numerous parallel programming systems have been proposed and deployed, usually without a systematic evaluation of their usability  ...  We posit that usability is a key factor in the effectiveness of a parallel programming system, and that theoretical performance gains can only be realized if programmers are able to successfully reason  ...  From the 2009 competition. From the 2006 competition. End users are people who write programs but are not programmers [MK09].  ... 
doi:10.1145/1882362.1882426 dblp:conf/sigsoft/SadowskiS10 fatcat:5etg4b2gavgihgjtwuacsietdy

Computer-Chess Championship Programs: Software Design, Synthesis of Evaluation Functions, Parallel Searches

Jean-Christophe Weill
1995 ICGA Journal  
After surveying all the results hitherto known in the domain of parallelizing minimax, we present the outcomes of our implementations on a distributed engine, the Connection Machine 5.  ...  The comparison ranged over simulated search trees, over a competitive Othello program and over a chess program. We present advantages and drawbacks of each method.  ... 
doi:10.3233/icg-1995-18305 fatcat:rfebz5jzuvdmlexld4umyr5jue
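One of the simplest ways to parallelize minimax, often called root splitting, is to search each root move in its own worker with an ordinary sequential minimax and pick the best score afterwards. The sketch below shows only this general idea on a tiny hand-made game tree; it is not the Connection Machine 5 implementation or any of the chess or Othello engines compared in the paper.

```python
# Root-split parallel minimax: each root move is searched in its own
# process with sequential minimax; leaves are ints, inner nodes are lists.
from concurrent.futures import ProcessPoolExecutor

TREE = [[3, [5, 2]], [[6, 1], 4], [[2, 8], 7]]     # tiny illustrative tree

def minimax(node, maximizing):
    if isinstance(node, int):
        return node
    scores = [minimax(child, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)

def search_subtree(child):
    return minimax(child, maximizing=False)        # opponent moves next

if __name__ == "__main__":
    with ProcessPoolExecutor() as pool:
        scores = list(pool.map(search_subtree, TREE))
    best = max(range(len(scores)), key=scores.__getitem__)
    print("best root move:", best, "with score", scores[best])
```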

Self-paced Learning in HPC Lab Courses

Christian Terboven, Julian Miller, Sandra Wienke, Matthias S. Müller
2020 The Journal of Computational Science Education  
The learning objectives include the foundations of High-Performance Computing (HPC), such as the understanding of modern architectures, the development of parallel programming skills, and course-specific  ...  KEYWORDS: HPC education, software lab, parallel programming, programming effort, training productivity. MOTIVATION: With the intent to make the dedication of our chair, High-Performance Computing (HPC), popular  ...  Each group has to solve three tasks and provide implementations with the parallel programming model OpenMP.  ... 
doi:10.22369/issn.2153-4136/11/1/10 fatcat:rey2rzwhnfayzih4fmn63kffdy

GPU Computation in Bioinspired Algorithms: A Review [chapter]

M. G. Arenas, A. M. Mora, G. Romero, P. A. Castillo
2011 Lecture Notes in Computer Science  
For this reason, parallelization is an interesting alternative in order to decrease the execution time and to provide accurate results.  ...  This paper reviews the use of GPUs to solve scientific problems, giving an overview of current software systems.  ...  Nevertheless, the evolutionary operators implemented on GPU are specific only to the GECCO competition, and the validity of the experiments is limited to a small number of problems. Tsutsui et al.  ... 
doi:10.1007/978-3-642-21501-8_54 fatcat:xhi7taielje3ndese65g336yxi

Pipelining Wavefront Computations: Experiences and Performance [chapter]

E Christopher Lewis, Lawrence Snyder
2000 Lecture Notes in Computer Science  
We address this question through a quantitative and qualitative study of three approaches to expressing pipelining: programmer implemented via message passing, compiler discovered via automatic parallelization  ...  Although it is well understood how wavefronts are pipelined for parallel execution, the question remains: How are they best presented to the compiler for the effective generation of pipelined code?  ...  This research was supported in part by a grant of HPC time from the Arctic Region Supercomputing Center.  ... 
doi:10.1007/3-540-45591-4_35 fatcat:ie3izmvqfrbrfdjwjkliq6ewgy
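The pipelining the abstract refers to comes from the dependence structure of a wavefront: cell (i, j) needs its north and west neighbours, so all cells on one anti-diagonal are independent of each other. A minimal sketch of that structure follows; the per-cell kernel is a placeholder, and Python threads are used only to show the dependence pattern, not to claim a speedup for this pure-Python loop.

```python
# Wavefront sweep: cells on the same anti-diagonal (i + j = d) are
# independent and can run in parallel; diagonals are processed in order.
from concurrent.futures import ThreadPoolExecutor

N = 8
grid = [[0] * N for _ in range(N)]
for k in range(N):                     # boundary values
    grid[0][k] = grid[k][0] = 1

def update(cell):
    i, j = cell
    grid[i][j] = grid[i - 1][j] + grid[i][j - 1]   # wavefront dependence

with ThreadPoolExecutor() as pool:
    for d in range(2, 2 * N - 1):      # sweep the anti-diagonals in order
        cells = [(i, d - i) for i in range(1, N) if 1 <= d - i < N]
        list(pool.map(update, cells))  # one diagonal's cells in parallel

print(grid[N - 1][N - 1])
```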

Prototyping Fortran-90 compilers for massively parallel machines

Marina Chen, James Cowie
1992 Proceedings of the ACM SIGPLAN 1992 conference on Programming language design and implementation - PLDI '92  
Massively parallel architectures, and the languages used to program them, are among both the most difficult and the most rapidly-changing subjects for compilation.  ...  This has created a demand for new compiler prototyping technologies that allow novel styles of compilation and optimization to be tested in a reasonable amount of time.  ...  Also, thanks to Woody Liechtenstein, Bob Millstein, and Gary Sabot of Thinking Machines for their help with CM Fortran, CM/RT, and the slicewise programming model.  ... 
doi:10.1145/143095.143122 dblp:conf/pldi/ChenC92 fatcat:l2f3tzvar5gnho6jznczvyrfku

High-Level Multi-Threading in hProlog [article]

Timon Van Overveldt, Bart Demoen
2011 arXiv   pre-print
Two common types of high-level explicit parallelism are discussed: independent and-parallelism and competitive or-parallelism. A new type of explicit parallelism, pipeline parallelism, is proposed.  ...  This new type can be used in certain cases where independent and-parallelism and competitive or-parallelism cannot be used.  ...  Thanks to Paulo Moura for his interesting conversations, his useful insights and Logtalk's set of multi-threading benchmarks.  ... 
arXiv:1112.3786v2 fatcat:yjlatjvtljh3llowa4bvtaq7vm
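The pipeline parallelism proposed in the abstract lets stage n work on item k while stage n+1 is still processing item k-1. A small thread-and-queue sketch of the pattern follows; the stage bodies are placeholders and this is not hProlog's implementation.

```python
# Pipeline parallelism: stages run concurrently and stream items to each
# other through queues, terminated by an end-of-stream sentinel.
import threading
import queue

DONE = object()                         # end-of-stream sentinel

def stage(work, inbox, outbox):
    while True:
        item = inbox.get()
        if item is DONE:
            outbox.put(DONE)
            return
        outbox.put(work(item))

if __name__ == "__main__":
    q1, q2, q3 = queue.Queue(), queue.Queue(), queue.Queue()
    threading.Thread(target=stage, args=(lambda x: x * x, q1, q2)).start()
    threading.Thread(target=stage, args=(lambda x: x + 1, q2, q3)).start()

    for n in range(5):                  # feed the first stage
        q1.put(n)
    q1.put(DONE)

    item = q3.get()
    while item is not DONE:             # drain the last stage
        print(item)
        item = q3.get()
```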

Oversubscription on multicore processors

Costin Iancu, Steven Hofmeyr, Filip Blagojevic, Yili Zheng
2010 2010 IEEE International Symposium on Parallel & Distributed Processing (IPDPS)  
Hybrid programming models and composability of parallel libraries are very active areas of research within the scientific programming community.  ...  In this paper we evaluate the impact of task oversubscription on the performance of MPI, OpenMP and UPC implementations of the NAS Parallel Benchmarks on UMA and NUMA multisocket architectures.  ...  The reversal of performance trends between UPC and OpenMP in the presence of competition and oversubscription indicates that these factors might be valuable when evaluating parallel programming models  ... 
doi:10.1109/ipdps.2010.5470434 dblp:conf/ipps/IancuHBZ10 fatcat:vomcoi2h7fg43cnu4jxyhqvjbe
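Task oversubscription, as studied in the entry above, simply means running more ready workers than there are cores. A rough way to observe its cost is to split the same fixed amount of CPU-bound work across one worker per core and then across several workers per core and compare wall-clock times; the workload below is an invented spin loop, not the NAS Parallel Benchmarks.

```python
# Comparing one worker per core against an oversubscribed run (4x workers)
# on the same total amount of CPU-bound work.
import os
import time
from concurrent.futures import ProcessPoolExecutor

def spin(n):
    s = 0
    for i in range(n):
        s += i * i
    return s

def timed_run(workers, work_per_task):
    start = time.perf_counter()
    with ProcessPoolExecutor(max_workers=workers) as pool:
        list(pool.map(spin, [work_per_task] * workers))
    return time.perf_counter() - start

if __name__ == "__main__":
    cores = os.cpu_count() or 1
    total_work = 2_000_000 * cores
    for factor in (1, 4):               # 1x: one worker per core; 4x: oversubscribed
        workers = cores * factor
        print(f"{factor}x workers/core: {timed_run(workers, total_work // workers):.2f}s")
```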

An Overview of the Ciao System [chapter]

Manuel V. Hermenegildo, F. Bueno, M. Carro, P. López-García, R. Haemmerlé, E. Mera, J. F. Morales, G. Puebla
2011 Lecture Notes in Computer Science  
The compiler also performs many types of optimizations, including automatic parallelization.  ...  It offers very competitive performance, while retaining the flexibility and interactive development of a dynamic language.  ... 
doi:10.1007/978-3-642-22546-8_2 fatcat:huy2tgj5lbhs5lq5qehkk3dk6i

Scheduler-Activated Dynamic Page Migration for Multiprogrammed DSM Multiprocessors

Dimitrios S. Nikolopoulos, Constantine D. Polychronopoulos, Theodore S. Papatheodorou, Jesús Labarta, Eduard Ayguadé
2002 Journal of Parallel and Distributed Computing  
This paper presents a novel dynamic page migration algorithm that remedies this problem in iterative parallel programs.  ...  The algorithm is implemented at user-level and its functionality is orthogonal to the scheduling policy of the operating system.  ...  Once the number of processors allocated to a parallel program is determined, the program uses the same number of threads to execute parallel constructs.  ... 
doi:10.1006/jpdc.2001.1817 fatcat:mm4g6niwc5e4dn4adqub77grbu
Showing results 1 — 15 out of 237,600 results