
Implementation of Parallel Processing on Multi-Object Recognition System Software

Midriem Mirdanies
2018 Lontar Komputer  
Based on the experiments, it was found that parallel processing is faster than sequential processing, with the shortest processing time obtained after optimizing the loop syntax  ...  Parallel processing was implemented in the for loop that matches objects captured by the camera against the database, under two conditions, i.e., the original for loop syntax  ...  , and when the multi-object recognition program runs in parallel, the CPU load used is greater than or equal to 66%, which is at least 44% higher than in the sequential run  ...
doi:10.24843/lkjiti.2018.v09.i03.p02 fatcat:pok4wy36wbcfpn2u6m5wt46rwe
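
As a hedged illustration of the technique this abstract describes (parallelizing the database-matching loop of a recognition system), the C++/OpenMP sketch below distributes the iterations of a matching loop across threads. The Descriptor type, the match() score, and the critical-section reduction are illustrative assumptions, not the paper's actual code.

```cpp
// Minimal sketch: parallelizing a match loop over a feature database.
// All types and functions here are hypothetical placeholders.
#include <cstddef>
#include <vector>

struct Descriptor { std::vector<float> values; };

// Hypothetical similarity score: negative squared distance (higher is better).
double match(const Descriptor& a, const Descriptor& b) {
    double d = 0.0;
    for (std::size_t k = 0; k < a.values.size() && k < b.values.size(); ++k) {
        const double diff = a.values[k] - b.values[k];
        d += diff * diff;
    }
    return -d;
}

// Returns the index of the best-matching database entry, or -1 if the database is empty.
int bestMatch(const Descriptor& captured, const std::vector<Descriptor>& db) {
    int bestIdx = -1;
    double bestScore = -1e300;
    #pragma omp parallel
    {
        int localIdx = -1;
        double localScore = -1e300;
        // Each thread scans its share of the database iterations.
        #pragma omp for nowait
        for (int i = 0; i < static_cast<int>(db.size()); ++i) {
            const double s = match(captured, db[i]);
            if (s > localScore) { localScore = s; localIdx = i; }
        }
        // Merge each thread's local best into the global best.
        #pragma omp critical
        if (localScore > bestScore) { bestScore = localScore; bestIdx = localIdx; }
    }
    return bestIdx;
}
```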

Mitigating Amdahl's Law through EPI Throttling

Murali Annavaram, Ed Grochowski, John Shen
2005 SIGARCH Computer Architecture News  
Given this environment, our goal is to minimize the execution times of multi-threaded programs containing nontrivial parallel and sequential phases, while keeping the CMP's total power consumption within  ...  Due to the nature of the algorithms, these multi-threaded programs inherently will have phases of sequential execution; Amdahl's law dictates that the speedup of such parallel programs will be limited  ...  We would like to acknowledge Carole Dulong and her team for providing us guidance on setting up the BLAST and HMMER programs, Natalie Enright for pointing us to FFTW and Hideki Saito for patiently answering  ... 
doi:10.1145/1080695.1069995 fatcat:xq5sy7gfajh3pe3sj7vetlfa3e
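
For reference, Amdahl's law as invoked in this abstract bounds the speedup of a program whose parallelizable fraction is $p$ when run on $n$ processors:

\[
S(n) \;=\; \frac{1}{(1 - p) + \dfrac{p}{n}}, \qquad \lim_{n \to \infty} S(n) \;=\; \frac{1}{1 - p},
\]

so, for example, a program that is 90% parallelizable ($p = 0.9$) cannot be sped up by more than a factor of 10 no matter how many cores the CMP provides, which is why the sequential phases dominate at large core counts.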

Parallel Algorithm for Reduction of Data Processing Time in Big Data

Jesús Silva, Hugo Hernández Palma, William Niebles Núñez, David Ovallos-Gazabon, Noel Varela
2020 Journal of Physics, Conference Series  
Description of the databases used in the experimentation  ...  Table 3: Sequential vs. parallel version run times (execution time: sequential 84 min, parallel  ... ); the article pages carry a "Retracted" watermark.  ...  Thus, parallel programming is an area of computing that takes advantage of hardware resources to improve algorithm execution times.  ...
doi:10.1088/1742-6596/1432/1/012095 fatcat:mbv5xycx6bcllka7ndivervzfa

Accelerating Group Fusion for Ligand-Based Virtual Screening on Multi-core and Many-core Platforms

2016 Journal of Information Processing Systems  
The proposed parallel CUDA performed better than sequential and parallel OpenMP in terms of both execution time and speedup.  ...  The sequential, optimized sequential and parallel OpenMP versions of group fusion were implemented and evaluated.  ...  This type of program needs more temporary memory space at run time; therefore, making full use of the available space is beneficial to the program.  ...
doi:10.3745/jips.01.0012 fatcat:r72c6zmdwfenhjjp2nubjkwm7m

The JStar language philosophy

Mark Utting, Min-Hsien Weng, John G. Cleary
2013 Proceedings of the 2013 International Workshop on Programming Models and Applications for Multicores and Manycores - PMAM '13  
This paper introduces the JStar parallel programming language, which is a Java-based declarative language aimed at discouraging sequential programming, encouraging massively parallel programming, and giving  ...  We describe the execution semantics and runtime support of the language, several optimisations and parallelism strategies, with some benchmark results.  ...  Acknowledgements Thanks to James Bridgwater who developed some of the initial Java Fork/Join support for JStar, and the many other students who have worked on various aspects of JStar and Starlog.  ...
doi:10.1145/2442992.2442996 dblp:conf/ppopp/UttingWC13 fatcat:dpbkjgoj3batvlexadwu7q2eyq

The JStar language philosophy

Mark Utting, Min-Hsien Weng, John G. Cleary
2014 Parallel Computing  
This paper introduces the JStar parallel programming language, which is a Java-based declarative language aimed at discouraging sequential programming, encouraging massively parallel programming, and giving  ...  We describe the execution semantics and runtime support of the language, several optimisations and parallelism strategies, with some benchmark results.  ...  Acknowledgements Thanks to James Bridgwater who developed some of the initial Java Fork/Join support for JStar, and the many other students who have worked on various aspects of JStar and Starlog.  ...
doi:10.1016/j.parco.2013.11.004 fatcat:d2otqbwpdbdavindomruwy5oam

Coupling hundreds of workstations for parallel molecular sequence analysis

Volker Strumpen
1995 Software, Practice & Experience  
Recent developments show that smaller numbers of workstations connected via a local area network can be used efficiently for parallel computing.  ...  We calculated the optimal local alignment scores between a single genetic sequence and all sequences of a genetic sequence database using the ssearch code that is well known among molecular biologists.  ...  The speed-up values are calculated with respect to the elapsed parallel run-times and the sequential run-times given in Table I.  ...
doi:10.1002/spe.4380250305 fatcat:t4g4c55uuraifm3eokjcbazevy
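
The speed-up values referred to in this snippet are conventionally computed from the elapsed times as

\[
S_P \;=\; \frac{T_{\text{seq}}}{T_{\text{par}}(P)}, \qquad E_P \;=\; \frac{S_P}{P},
\]

where $T_{\text{seq}}$ is the sequential run time, $T_{\text{par}}(P)$ is the elapsed parallel run time on $P$ workstations, and $E_P$ is the parallel efficiency; these are the standard definitions, stated here only as context, and the paper's Table I is not reproduced.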

Discovering closed frequent itemsets on multicore: Parallelizing computations and optimizing memory accesses

B Negrevergne, A Termier, J Méhaut, T Uno
2010 2010 International Conference on High Performance Computing & Simulation  
In this paper we present PLCM, a parallel algorithm based on the LCM algorithm, recognized as the most efficient algorithm for sequential discovery of closed frequent itemsets.  ...  We also present a simple yet powerful parallelism interface based on the concept of Tuple Space, which allows an efficient dynamic sharing of the work.  ...  We give below the steps of the test program: 1. load the database 2. make a "warm up" sequential sort on the database 3. perform N_S database reductions, and get an average time 4. spawn N threads, make  ...
doi:10.1109/hpcs.2010.5547082 dblp:conf/ieeehpcs/NegrevergneTMU10 fatcat:yh2lmtcrhzetbhk5p5e6xmh544
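
As a rough sketch of the dynamic work sharing the abstract attributes to its Tuple Space-style interface, the C++ example below lets worker threads draw work items from a shared bag. It illustrates the general concept only; it is neither the paper's actual interface nor the PLCM algorithm.

```cpp
// Minimal sketch of dynamic work sharing through a shared "bag" of work
// items, in the spirit of a tuple space: any thread can put or get items.
#include <condition_variable>
#include <cstdio>
#include <deque>
#include <mutex>
#include <optional>
#include <thread>
#include <vector>

class TupleBag {
public:
    void put(int item) {
        { std::lock_guard<std::mutex> lk(m_); items_.push_back(item); }
        cv_.notify_one();
    }
    // Returns std::nullopt once the bag is closed and drained.
    std::optional<int> get() {
        std::unique_lock<std::mutex> lk(m_);
        cv_.wait(lk, [&] { return closed_ || !items_.empty(); });
        if (items_.empty()) return std::nullopt;
        int item = items_.front();
        items_.pop_front();
        return item;
    }
    void close() {
        { std::lock_guard<std::mutex> lk(m_); closed_ = true; }
        cv_.notify_all();
    }
private:
    std::mutex m_;
    std::condition_variable cv_;
    std::deque<int> items_;
    bool closed_ = false;
};

int main() {
    TupleBag bag;
    for (int i = 0; i < 100; ++i) bag.put(i);  // initial work items
    bag.close();                               // no further producers in this sketch
    std::vector<std::thread> workers;
    for (int t = 0; t < 4; ++t)
        workers.emplace_back([&] {
            while (auto item = bag.get())
                std::printf("processed item %d\n", *item);  // stand-in for real work
        });
    for (auto& w : workers) w.join();
}
```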

Introduction to the Special Issue on the 18th International Symposium on Computer Architecture and High Performance Computing

Alberto F. De Souza, Rajkumar Buyya
2008 International journal of parallel programming  
of parallel programs in this distributed run-time environment.  ...  Their system automatically discovers thread-level parallelism in sequential programs and dynamically exploits multithreaded hardware for improving the performance of these programs.  ... 
doi:10.1007/s10766-007-0065-y fatcat:ta7z4l4yyfcxjm4n5davlbk3oq

Performance Evaluation of Apriori on Dual Core with Multiple Threads

Anuradha T., Satya Prasad R., S. N. Tirumalarao
2012 International Journal of Computer Applications  
Experiments are conducted to test the run-time efficiency of the Apriori algorithm on a dual-core processor by changing the number of threads for different databases at different support counts.  ...  This paper also presents the comparison of real time, user time and system time with multiple threads on dual core compared to the sequential implementation.  ...  This method follows a data-parallel strategy and runs in two modes, sequential and parallel. We have partitioned the database into as many horizontal partitions as there are threads.  ...
doi:10.5120/7853-1088 fatcat:kq7iks2auvhefeo7i3wvah5gd4
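
A minimal sketch of the data-parallel strategy described here (as many horizontal database partitions as threads, each counted independently and then merged) might look as follows in C++. Candidate generation and the rest of the Apriori loop are omitted, and all names are illustrative assumptions.

```cpp
// Data-parallel support counting: the transaction database is split into
// horizontal partitions, one per thread; each thread counts candidate
// itemsets in its partition, and the partial counts are merged afterwards.
#include <algorithm>
#include <cstddef>
#include <set>
#include <thread>
#include <vector>

using Itemset = std::set<int>;
using Transaction = std::set<int>;

bool contains(const Transaction& t, const Itemset& c) {
    return std::includes(t.begin(), t.end(), c.begin(), c.end());
}

std::vector<long> countSupports(const std::vector<Transaction>& db,
                                const std::vector<Itemset>& candidates,
                                unsigned numThreads) {
    std::vector<std::vector<long>> partial(
        numThreads, std::vector<long>(candidates.size(), 0));
    std::vector<std::thread> threads;
    const std::size_t chunk = (db.size() + numThreads - 1) / numThreads;
    for (unsigned t = 0; t < numThreads; ++t) {
        threads.emplace_back([&, t] {
            const std::size_t begin = t * chunk;                 // this thread's partition
            const std::size_t end = std::min(db.size(), begin + chunk);
            for (std::size_t i = begin; i < end; ++i)
                for (std::size_t c = 0; c < candidates.size(); ++c)
                    if (contains(db[i], candidates[c]))
                        ++partial[t][c];
        });
    }
    for (auto& th : threads) th.join();
    std::vector<long> totals(candidates.size(), 0);              // merge partial counts
    for (unsigned t = 0; t < numThreads; ++t)
        for (std::size_t c = 0; c < candidates.size(); ++c)
            totals[c] += partial[t][c];
    return totals;
}
```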

Implementation of Multiprocessing and Multithreading for End Node Middleware Control on Internet of Things Devices

Iwan Kurnianto Wibowo, Adnan Rachmat Anom Besari, Muh. Rifqi Rizqullah
2021 Jurnal INFORM  
The CPU division has been adjusted automatically so that the work is not confined to a single core or block of memory. Several program functions can run in parallel, reducing program execution time efficiently.  ...  The middleware has a role in receiving command data from the real-time database, accessing sensors and actuators, and sending feedback.  ...  The sequential method is carried out in 5 runs of the program because sequential programming can only run one loop at a time.  ...
doi:10.25139/inform.v6i1.3346 fatcat:v3frrtutmzbi5kxyef4xlgbuem
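
As a hedged illustration of the idea that independent middleware functions can run in parallel rather than one loop at a time, the short C++ sketch below overlaps two placeholder tasks; the task names are hypothetical and do not correspond to the paper's middleware.

```cpp
// Minimal sketch: running independent middleware tasks concurrently
// instead of one loop at a time. The task bodies are placeholders.
#include <chrono>
#include <cstdio>
#include <thread>

void pollSensors() {               // hypothetical stand-in for sensor access
    std::this_thread::sleep_for(std::chrono::milliseconds(200));
    std::puts("sensors polled");
}

void sendFeedback() {              // hypothetical stand-in for feedback upload
    std::this_thread::sleep_for(std::chrono::milliseconds(200));
    std::puts("feedback sent");
}

int main() {
    // Sequentially these tasks take ~400 ms; overlapped they take ~200 ms.
    std::thread a(pollSensors);
    std::thread b(sendFeedback);
    a.join();
    b.join();
}
```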

High Fidelity Simulation of Mobile Cellular Systems with Integrated Resource Allocation and Adaptive Antennas

Hyunok Lee, Vahideh Manshadi, Donald C. Cox, Nim K. Cheung
2007 2007 IEEE Wireless Communications and Networking Conference  
The program was run on a modular supercomputing platform with 32 processors interconnected by high-speed InfiniBand links.  ...  Using parallel processing, the execution time of wireless network simulations can be significantly decreased without compromising the fidelity of the simulation.  ...  DB current in the figure represents the original database as in the single-processor sequential program, and DB new a copy of the database for parallel processing.  ...
doi:10.1109/wcnc.2007.592 dblp:conf/wcnc/LeeMCC07 fatcat:gt7ngvfn6nfmncoq2iy4wjh5je
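
The DB current / DB new arrangement mentioned in the snippet appears to follow a replicate-the-shared-state pattern: each worker updates its own copy of the database so the parallel phase needs no locking, with results combined afterwards. The C++ sketch below is a purely illustrative rendering of that general pattern, not the paper's simulator.

```cpp
// Minimal sketch of replicate-then-merge: each worker gets its own copy
// of the shared state, updates it without locks, and the copies are
// merged after the parallel phase. The "database" is just a counter array.
#include <cstddef>
#include <thread>
#include <vector>

int main() {
    const unsigned numWorkers = 4;
    std::vector<double> dbCurrent(1024, 0.0);                       // original database
    std::vector<std::vector<double>> dbNew(numWorkers, dbCurrent);  // one copy per worker

    std::vector<std::thread> workers;
    for (unsigned w = 0; w < numWorkers; ++w)
        workers.emplace_back([&, w] {
            for (std::size_t i = 0; i < dbNew[w].size(); ++i)
                dbNew[w][i] += static_cast<double>(w + 1);   // stand-in for simulation updates
        });
    for (auto& t : workers) t.join();

    for (unsigned w = 0; w < numWorkers; ++w)                // merge the copies back
        for (std::size_t i = 0; i < dbCurrent.size(); ++i)
            dbCurrent[i] += dbNew[w][i];
    return 0;
}
```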

Enhance similarity searching algorithm with optimized fast population count method based on parallel design

SeyedVahid Dianat, Yasaman Eftekharypour, Nurul Hashimah Ahamed Hassain Malim, Nur'Aini Abdul Rashid
2013 IOSR Journal of Computer Engineering  
We achieved significant results in terms of performance and execution time in both the CUDA and OpenMP designs of the fast population count method with data conversion when compared to the sequential code.  ...  This research attempts to provide an optimized design using different parallel hardware such as multicore and GPU processors.  ...  It returns the number of one-bits in each query fingerprint. 7) Then the program runs step 4 to count and store results for all database fingerprints in parallel.  ...
doi:10.9790/0661-1464352 fatcat:swiextx76nhupdkakgvr6moc5u
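
As a hedged sketch of the population-count step referred to here, the C++/OpenMP fragment below counts the set bits of each binary fingerprint with a hardware popcount builtin inside an independent parallel loop. The fingerprint layout is an assumption, and the paper's data-conversion step and CUDA kernel are not reproduced.

```cpp
// Population count over binary fingerprints stored as 64-bit words.
#include <cstddef>
#include <cstdint>
#include <vector>

using Fingerprint = std::vector<std::uint64_t>;   // bit vector packed into 64-bit words

// Number of set bits in one fingerprint, using the GCC/Clang builtin.
int popcount(const Fingerprint& fp) {
    int bits = 0;
    for (std::uint64_t w : fp)
        bits += __builtin_popcountll(w);
    return bits;
}

// Counts the set bits of every fingerprint in the database; the
// fingerprints are independent, so this is a textbook parallel for.
std::vector<int> countAll(const std::vector<Fingerprint>& database) {
    std::vector<int> counts(database.size());
    #pragma omp parallel for
    for (long i = 0; i < static_cast<long>(database.size()); ++i)
        counts[i] = popcount(database[i]);
    return counts;
}
```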

Parallel Implementation of the Wu-Manber Algorithm Using the OpenCL Framework [chapter]

Themistoklis K. Pyrgiotis, Charalampos S. Kouzinopoulos, Konstantinos G. Margaritis
2012 IFIP Advances in Information and Communication Technology  
This paper evaluates the performance of the Wu-Manber algorithm implemented with the OpenCL framework, by presenting the running time of the experiments compared with the corresponding sequential time.  ...  Graphics cards offer highly parallel computational power, improving the performance of applications.  ...  Experimental Methodology In order to evaluate the performance of the parallel WM algorithm, the practical running time has been compared with the corresponding running time of the sequential implementation  ...
doi:10.1007/978-3-642-33412-2_59 fatcat:m6v3us27nbbabotrba6leuf7d4

Enabling HMMER for the Grid with COMP Superscalar

Enric Tejedor, Rosa M. Badia, Romina Royo, Josep L. Gelpí
2010 Procedia Computer Science  
In particular, we present a sequential version of the HMMER hmmpfam tool that, when run with COMP Superscalar, is decomposed into tasks and run on a set of distributed resources, not burdening the programmer  ...  The continuously increasing size of biological sequence databases has motivated the development of analysis suites that, by means of parallelization, are capable of performing faster searches on such databases  ...  Acknowledgment The authors gratefully acknowledge the financial support of the Comisión Interministerial de Ciencia y Tecnología (CICYT, Contract TIN2007-60625), the Generalitat de Catalunya (2009-SGR-  ... 
doi:10.1016/j.procs.2010.04.296 fatcat:2vudddibevfbrkftpg46wuilpq
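
A minimal sketch of the decomposition idea described here (splitting the sequence database into fragments and running the search on each fragment as an independent task whose results are then merged) is shown below in C++. std::async stands in for the framework's task scheduling, and searchFragment is a hypothetical placeholder rather than an actual hmmpfam invocation.

```cpp
// Split a sequence database into fragments, search each fragment as an
// independent asynchronous task, and merge the per-fragment results.
#include <algorithm>
#include <cstddef>
#include <future>
#include <string>
#include <vector>

struct Hit { std::string sequenceId; double score; };

// Placeholder "search" of one database fragment against a query model.
std::vector<Hit> searchFragment(const std::vector<std::string>& fragment) {
    std::vector<Hit> hits;
    for (const auto& seq : fragment)
        hits.push_back({seq, static_cast<double>(seq.size())});  // dummy score
    return hits;
}

std::vector<Hit> searchAll(const std::vector<std::string>& database,
                           std::size_t fragmentSize) {
    std::vector<std::future<std::vector<Hit>>> tasks;
    for (std::size_t begin = 0; begin < database.size(); begin += fragmentSize) {
        const std::size_t end = std::min(database.size(), begin + fragmentSize);
        std::vector<std::string> fragment(database.begin() + begin,
                                          database.begin() + end);
        // One asynchronous task per database fragment.
        tasks.push_back(std::async(std::launch::async, searchFragment,
                                   std::move(fragment)));
    }
    std::vector<Hit> merged;
    for (auto& t : tasks) {
        auto part = t.get();                              // wait and merge
        merged.insert(merged.end(), part.begin(), part.end());
    }
    return merged;
}
```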
Showing results 1 — 15 out of 70,211 results