Cisco Webex, Online seminar
Speaker:
Kengo Nakajima (University of Tokyo)
Organizers:
Pochung Chen (National Tsing Hua University)
Tsung-Ming Huang (National Taiwan Normal University)
Ying-Jer Kao (National Taiwan University)
Weichung Wang (National Taiwan University)
Overview
This 5-day intensive online class provides an introduction to large-scale scientific computing using the most advanced massively parallel supercomputers. Topics cover:

Finite-Element Method (FEM)

Message Passing Interface (MPI)

Parallel FEM using MPI and OpenMP

Parallel Numerical Algorithms for Iterative Linear Solvers
Several sample programs will be provided, and participants can review the contents of the lectures through hands-on exercises using the Oakbridge-CX system at the University of Tokyo (https://www.cc.u-tokyo.ac.jp/en/supercomputer/obcx/service/).
The Finite-Element Method is widely used for solving various types of real-world scientific and engineering problems, such as structural analysis, fluid dynamics, and electromagnetics. This lecture course provides a brief introduction to FEM procedures for 1D/3D steady-state heat conduction problems with iterative linear solvers, and to parallel FEM. The lectures on parallel FEM focus on the design of data structures for distributed local mesh files, which is the key issue for efficient parallel FEM. An introduction to MPI (Message Passing Interface), the de facto standard for parallel programming, is also provided.
Solving large-scale linear equations with sparse coefficient matrices is the most expensive and important part of FEM and of other methods for scientific computing, such as the Finite-Difference Method (FDM) and the Finite-Volume Method (FVM). Families of Krylov subspace iterative solvers are now widely used for this process. In this class, details of the implementation of parallel Krylov iterative methods are provided along with parallel FEM.
Moreover, lectures on programming for multicore architectures will also be given, along with a brief introduction to OpenMP and the OpenMP/MPI hybrid parallel programming model.
Prerequisites

Experience with Unix/Linux (vi or emacs)

List of Unix/Linux Commands (Wikipedia)

Online Manual for Emacs (Screen Editor for Linux/Unix)

Experience in programming in Fortran or C/C++

Undergraduate-level mathematics and physics (e.g. linear algebra, calculus)

Fundamental numerical algorithms (Gaussian Elimination, LU Factorization, Jacobi/Gauss-Seidel/SOR Iterative Solvers, Conjugate Gradient Method (CG))

Experience with SSH public-key authentication (optional)

Participants are encouraged to read the following material and to understand the fundamentals of the Method of Weighted Residuals (MWR) before the course.
Preparation for PC
Schedule
(each slot 50 minutes; "(cont.)" marks a continuing session)

Feb. 10 (Thu)
09:10-10:00  Introduction (1/2)-(2/2)
10:10-11:00  (cont.)
11:10-12:00  FEM (1/6)-(4/6)
13:10-14:00  (cont.)
14:10-15:00  (cont.)
15:10-16:00  (cont.)
16:10-17:00  Exercise (Optional)

Feb. 11 (Fri)
09:10-10:00  FEM (5/6)-(6/6)
10:10-11:00  (cont.)
11:10-12:00  Exercise
13:10-14:00  Parallel FEM
14:10-15:00  Login to OBCX
15:10-16:00  MPI (1/6)
16:10-17:00  Exercise (Optional)

Feb. 12 (Sat)
09:10-10:00  MPI (2/6)-(3/6)
10:10-11:00  (cont.)
11:10-12:00  Exercise
13:10-14:00  MPI Practice (1/3)
14:10-15:00  MPI (4/6)-(5/6)
15:10-16:00  (cont.)
16:10-17:00  Exercise (Optional)

Feb. 19 (Sat)
09:10-10:00  MPI (6/6)
10:10-11:00  Exercise
11:10-12:00  (cont.)
13:10-14:00  MPI Practice (2/3)-(3/3)
14:10-15:00  (cont.)
15:10-16:00  Exercise
16:10-17:00  Parallel FEM (1/4)-(4/4)

Feb. 20 (Sun)
09:10-10:00  Parallel FEM (cont.)
10:10-11:00  (cont.)
11:10-12:00  (cont.)
13:10-14:00  Exercise
14:10-15:00  OpenMP/MPI Hybrid (1/2)-(2/2)
15:10-16:00  (cont.)
16:10-17:00  Exercise (Optional)

Materials
For more information, please refer to: 2022HPCNK (google.com)
Registration: https://forms.gle/4oMZcu22T1CeF2j37