R301, Astronomy-Mathematics Building, NTU
(Room 301, Astronomy-Mathematics Building, National Taiwan University)
Introduction to Parallel Programming for Multicore/Manycore Clusters
Takahiro Katagiri (Nagoya University)
Overview
To make full use of modern supercomputer systems with multicore/manycore architectures, hybrid parallel programming that combines message passing and multithreading is essential. MPI for message passing and OpenMP for multithreading are the most popular approaches to parallel programming on multicore/manycore clusters.
This 4-day tutorial provides essential knowledge of and hands-on experience with parallel programming using MPI and OpenMP. Hands-on exercises are run on the Reedbush-U supercomputer at the University of Tokyo, equipped with Intel Xeon E5-2695 v4 (Broadwell-EP, 2.1 GHz) processors (http://www.cc.u-tokyo.ac.jp/system/reedbush/index-e.html).
The first two days focus on fundamental MPI and OpenMP: their basic functions and usage are explained, and several parallelization exercises using sample programs based on fundamental numerical computations, such as matrix-matrix multiplication, are provided.
On the 3rd and 4th days, MPI and hybrid OpenMP/MPI are applied to a 3D Poisson-equation solver based on the finite-volume method (FVM) with a preconditioned conjugate gradient (PCG) iterative method. Detailed lectures on data structures for parallel FVM are also provided.
Prerequisites
- Experience with Unix/Linux
- Experience with emacs or vi
- Experience in programming (Fortran or C/C++)
- Fundamental numerical algorithms (Gaussian elimination, LU factorization, Jacobi/Gauss-Seidel/SOR iterative solvers)
- Experience with the SSH public-key authentication method
Schedule
February 21, 2017 (Tu)
09:10-10:00 Introduction
10:10-11:00 FVM code (1/4)
11:10-12:00 FVM code (2/4)
13:10-14:00 FVM code (3/4)
14:10-15:00 FVM code (4/4) and sparse linear solver
15:10-16:00 Overview of OpenMP
17:10-18:00 Functions of OpenMP
February 22, 2017 (W)
09:10-10:00 Training of OpenMP
10:10-11:00 Overview of MPI
11:10-12:00 How to use the Reedbush-U
13:10-14:00 Trainings of Reedbush-U
Homework 1
14:10-15:00 Functions of MPI: Non-blocking and Persistent Communication
15:10-16:00 Parallelization of dense Matrix-Vector Multiplications (1/2)
16:10-17:00 Parallelization of dense Matrix-Vector Multiplications (2/2)
Homework 2
February 23, 2017 (Th)
09:10-10:00 Parallelization of dense Power Method for eigenvalue problem (1/2)
10:10-11:00 Parallelization of dense Power Method for eigenvalue problem (2/2)
Homework 3
11:10-12:00 Parallelization of Fully Distributed dense Matrix-Matrix Multiplication (1/2)
Homework 4
13:10-14:00 Parallelization of Fully Distributed dense Matrix-Matrix Multiplication (2/2)
Homework 5
14:10-15:00 Parallel Data Structure (1/2)
15:10-16:00 Parallel Data Structure (2/2)
16:10-17:00 Parallel FVM (1/4)
February 24, 2017 (F)
09:10-10:00 Parallel FVM (2/4)
10:10-11:00 Parallel FVM (3/4)
11:10-12:00 Parallel FVM (4/4)
13:10-14:00 OpenMP/MPI Hybrid (1/4)
14:10-15:00 OpenMP/MPI Hybrid (2/4)
15:10-16:00 OpenMP/MPI Hybrid (3/4)
16:10-17:00 OpenMP/MPI Hybrid (4/4)
Report (topics include strong scaling, weak scaling, preconditioning, performance comparison, etc.)
Materials
http://www... /class-matrNTU2017.htm (available soon)
Abstract: events_1_170102145143278.pdf