A copy of this work was available on the public web and has been preserved in the Wayback Machine. The capture dates from 2014; you can also visit the original URL.
Plum, and Stüben [14] improved the efficiency of a modified cyclic-reduction algorithm on the 16-node iPSC/2-VX. ... We study the multicomputer performance of a three-dimensional Navier-Stokes solver based on alternating-direction line-relaxation methods. ... FIG. 17. Efficiency of pipelining and iteration methods on the Delta and Parsytec multicomputers. ...doi:10.1137/s1064827593253872 fatcat:mif5344sjvgavfmpxq36tbjvpy
Index Terms: Edge elements, finite-element (FE) method, iterative solver, parallel numerical algorithms, time-domain algorithm. ... Some performance metrics are compared and discussed for the message passing interface implementation of the algorithm. ... The presented benchmark calculations show that the parallelized SSOR preconditioner gains the same performance as the most efficient, fully parallelized point Jacobi algorithm. ...doi:10.1109/tmag.2005.846055 fatcat:brt2afl43zbojitsj5iic35pke
algorithms for power system simulation. ... The parallel simulation was written in C language and implemented on a Parsytec PowerXplorer multicomputer. ... The algorithm was programmed in C language by using PARIX, an operating system based on UNIX with proper compilers and available for the multicomputer. ...doi:10.1109/59.932277 fatcat:g3xgdpbzwzfw7katonq3gwy34m
algorithms for power system simulation. ... The parallel simulation was written in C language and implemented on a Parsytec PowerXplorer multicomputer. ... The algorithm was programmed in C language by using PARIX, an operating system based on UNIX with proper compilers and available for the multicomputer. ...doi:10.1109/mper.2001.4311439 fatcat:zn7inhoh7bgtrkgatupn2rvhxm
Although it was less efficient than the ACWN algorithm of Shu and Kale, it performed better than the complicated gradient model in various applications running on iPSC/2. ... When based instead on a model of serialized communications, the diffusion method, which is patterned after the Jacobi fashion of relaxation, becomes less effective. ...doi:10.1057/jors.1994.122 fatcat:j5kpy55zczctldcnq73jlqu67m
The Visual Computer
We propose an efficient data redistribution scheme to achieve almost perfect load balance. We also present several parallel algorithms for form-factor computation. ... Experimental results The algorithms discussed in this work were implemented (in the C language) on a 4D Intel iPSC/2 hypercube multicomputer. ... It has been experimentally observed that the SCG algorithm converges faster than commonly used Gauss-Jacobi (GJ) algorithm, which converges in almost double the number of iterations of the SCG algorithm ...doi:10.1007/s003710050085 fatcat:uuieepajp5fehl4tmsn4awu3je
Sleijpen, Henk A. van der Vorst and Ellen Meijerink, Efficient expansion of subspaces in the Jacobi-Davidson method for standard and generalized eigenproblems (75-89 (electronic)); Karl Meerbergen, ... distributed memory multicomputers (92-103); Andrew J. ...
The Paradigm (Parallelizing Compiler for Distributed-Memory, General-Purpose Multicomputers) project at the University of Illinois addresses this problem by developing automatic methods for efficient parallelization ... A unified approach efficiently supports regular and irregular computations using data and functional parallelism. ... To efficiently run such irregular applications on a massively parallel multicomputer. runtime compilation techniques can be used. ...doi:10.1109/2.467577 fatcat:ghmtervcfzehzlelvf2ealwgyu
We propose a simple algorithm which is based on edge-coloring of system graphs for termination detection of loosely synchronous computations. ... The optimality analysis is based on results from a related problem, periodic gossiping in edge-colored graphs. ... In addition to being fully distributed and time efficient, our algorithm is totally symmetric. ...doi:10.1109/71.503778 fatcat:63i72tu2ofgfhllkhgzkb7rgmq
The approach taken is based on waveform relaxation in which the problem is decomposed into a sequence of subproblems which are then solved independently using VODE on each processor. ... In this paper, it is shown how to adapt an existing package (VODE) for solving systems of ordinary differential equations on serial computers to distributed memory parallel computers. ... The waveform algorithm itself is based on a Block Jacobi multisplitting approach. ...doi:10.1016/0898-1221(94)00194-4 fatcat:7ew3wflpsbbira2q2dlrbv4lja
Experimentally, the explicit programming model proves to be more efficient than the implicit model by 20-70%, depending on the mesh and the machine. ... In this paper we compare different parallel implementations of the same algorithm for solving nonlinear simulation problems on unstructured meshes. ... Preconditioning of the GMRES algorithm can be efficiently achieved by using any of the basic iterative methods such as the standard Jacobi, Gauss-Seidel or block Jacobi, block Gauss-Seidel methods. ...doi:10.1155/2001/681621 fatcat:odxbcq52w5gytkzb2ywewg5szq
A new methodology named CALMANT (CC-cube Algorithms on Meshes and Tori) for mapping a kind of algorithms that we call CC-cube algorithm onto multicomputers with hypercube, mesh, or torus interconnection ... In this work, we propose a methodology named CALMANT (Cc-cube ALgorithms on Meshes ANd Tori) for mapping a kind of algorithms that we call CC-cube algorithms onto multicomputers with hypercube, mesh, or ... CALMANT is based on three different techniques: embedding of hypercubes on meshes, communication pipelining, and efficient message-scheduling algorithms. ...doi:10.1109/tpds.2002.1158263 fatcat:3kgb5ca675hafe4gdsfjjmmksy
Lecture Notes in Computer Science
In particular, the Power method, deflation, Givens algorithm, Davidson methods and Jacobi methods are analyzed using PVM and MPI. ... This is why it is interesting to study algorithms on networks of processors. In this paper we study different Eigenvalue Solvers on networks of processors. ... The efficiencies are clearly better than in the previous algorithms, even with small matrices and execution times. Jacobi method. ...doi:10.1007/10703040_8 fatcat:rwfczle2gjdizo655yz7rlpcne
The parallel efficiency can be very close to one, implying that attained computational speed scales perfectly with the number of processors. ... First we have implemented the Coupled Dipole method on a Massively Parallel Computer. ... Salmon has successfully implemented the Barnes-Hut method on the Caltech hypercubes  . The FMM is implemented on shared memory multicomputers  , and on the Connection Machine CM-2  . ...doi:10.1002/ppsc.19940110304 fatcat:xywrbqww7ndbll5w7fwyacz3ru
Magnus, Alphonse (B-UCL) 86b:65006 Riccati acceleration of Jacobi continued fractions and Laguerre-Hahn orthogonal polynomials. ... Dekker, Design of languages for numerical algorithms (pp. 291-305); S. M. ...
Showing results 1-15 out of 90.