Editorial introduction: special issue on advances in parallel and distributed computing for neural computing

Jianguo Chen, Ahmad Salah
Neural Computing & Applications, 2020
In recent years, the popularity of neural computing (NC), machine learning (ML), and artificial intelligence (AI) has grown substantially. Extensive research has been carried out in both academia and industry, and these technologies have been applied in many fields. For example, deep learning has achieved superhuman performance in image classification, and NC/ML/AI technologies have been used very successfully to play games such as Chess, Go, Atari, and Jeopardy. In addition, many companies have used AI and ML technology in areas such as health care, natural resource management, and advertisement.

Most NC/ML/AI technologies and applications require heavy use of high-performance computers and accelerators for efficient processing. Consequently, parallel computing, distributed computing, cloud computing, and high-performance computing (HPC) are key components of these systems. In scientific research and practical applications, clusters of computers and accelerators (e.g., GPUs) are routinely used to train and run various neural network models. Moreover, due to time-consuming iterative training processes and massive training datasets, NC/ML/AI technologies have also become a "killer application" for parallel computing, distributed computing, and HPC. These challenges have driven much of the research in distributed and parallel computing. For example, tailored computer architectures have been devised and new parallel programming frameworks have been developed to accelerate NC/ML/AI models.

The objective of this special issue is to bring together the parallel and distributed computing and NC/ML/AI communities, both to present applications and solutions to performance issues and to show how NC/ML/AI can itself be used to solve performance problems. The papers in this special issue represent a broad spectrum of parallel and distributed computing techniques, machine learning models, and neural network models.

Papers by Zheng Xiao, Zhao Tong, and Yikun Hu et al. focused on computing task scheduling in distributed and parallel computing environments. The paper by Yuedan Chen et al. focused on the partitioning and parallelization of general sparse matrix-sparse matrix multiplication (SpGEMM) on HPC systems; SpGEMM serves as a basic kernel in many NC/ML/AI algorithms. Papers by Shuang Yang and Hao Wang focused on the parallelization of ML algorithms in distributed computing environments, including algorithms for mobile social networks (MSNs) and multi-view clustering (MvC).
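To make concrete what the SpGEMM kernel mentioned above computes, the following is a minimal pure-Python sketch of Gustavson's classic row-wise algorithm; the dictionary-per-row matrix representation and the sample values are illustrative assumptions only, not the partitioning or parallelization scheme studied in the paper by Yuedan Chen et al.

```python
# Sketch of row-wise SpGEMM (Gustavson's algorithm) in pure Python.
# Each sparse matrix is stored as a list of rows, where a row is a
# {column_index: value} dict holding only the nonzero entries.

def spgemm(A, B):
    """Compute C = A @ B for sparse matrices in list-of-dicts form."""
    C = []
    for row in A:
        acc = {}  # sparse accumulator for one output row
        for k, a_val in row.items():          # nonzeros of A's row
            for j, b_val in B[k].items():     # matching row of B
                acc[j] = acc.get(j, 0) + a_val * b_val
        C.append(acc)
    return C

# Small 3x3 by 3x2 example with mostly-zero entries.
A = [{0: 1, 2: 2}, {2: 3}, {0: 4}]
B = [{1: 5}, {0: 1}, {0: 2, 1: 6}]
print(spgemm(A, B))  # [{1: 17, 0: 4}, {0: 6, 1: 18}, {1: 20}]
```

The key property, and the reason SpGEMM is hard to parallelize well, is visible even in this sketch: the nonzero structure of each output row depends on the data, so work per row is irregular and load balancing across processors is nontrivial.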
Papers by Xiaofeng Zou, Titinunt Kitrungrotsakul, and Keyang Cheng focused on the parallelization and performance optimization of different neural network models in distributed and parallel computing environments, including cloud computing platforms, GPU-based parallel computing systems, and HPC systems. In addition, papers by Ao Liu, Minrong Lu, Jin Zhang, and Fan Wu focused on the structural optimization of neural network models. Moreover, papers by Xiaofeng Zou, Minrong Lu, Xiaoyong Tang, Titinunt Kitrungrotsakul, and Fan Wu focused on the performance optimization of neural network models and their applications in various fields, such as monetary policy prediction, bioinformatics, image classification, pedestrian re-identification, fingerprint pattern recognition, and human personality classification.

The paper by Zheng Xiao et al. focused on task scheduling and virtual machine (VM) allocation in distributed computing environments and described a workload-driven coordination mechanism between VM allocation and task scheduling. The datasets acquired from machine learning and deep learning applications exhibit the Markov property and are modeled as Markov chains to extract workload characteristic operators.
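The Markov-chain workload modeling described above can be illustrated with a short sketch: estimating the transition probabilities of a first-order Markov chain from a discretized workload trace. The state names and the trace are illustrative assumptions, not data or the exact operators from the paper by Zheng Xiao et al.

```python
# Hedged sketch: fitting a first-order Markov chain to a workload trace
# that has been discretized into named load states.
from collections import Counter, defaultdict

def transition_matrix(trace):
    """Estimate P(next state | current state) from a state sequence."""
    counts = defaultdict(Counter)
    for cur, nxt in zip(trace, trace[1:]):  # consecutive state pairs
        counts[cur][nxt] += 1
    # Normalize each row of counts into probabilities.
    return {s: {t: c / sum(row.values()) for t, c in row.items()}
            for s, row in counts.items()}

trace = ["low", "low", "high", "high"]  # illustrative workload trace
P = transition_matrix(trace)
print(P["low"])  # {'low': 0.5, 'high': 0.5}
```

Under the Markov property assumed by the paper, such a transition matrix summarizes a workload compactly, which is what makes it usable as an input for coordinating VM allocation with task scheduling.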
doi:10.1007/s00521-020-04887-7