NCTS Winter Course: Parallel Finite Element Method using Supercomputer
 
9:10 - 17:00 on February 10, 11, 12, 19, and 20, 2022
Cisco Webex, Online seminar

Speaker:
Kengo Nakajima (University of Tokyo)


Organizers:
Pochung Chen (National Tsing Hua University)
Tsung-Ming Huang (National Taiwan Normal University)
Ying-Jer Kao (National Taiwan University)
Weichung Wang (National Taiwan University)


Overview

This 5-day intensive online class provides an introduction to large-scale scientific computing using the most advanced massively parallel supercomputers. Topics covered:

  • Finite-Element Method (FEM)
  • Message Passing Interface (MPI)
  • Parallel FEM using MPI and OpenMP
  • Parallel Numerical Algorithms for Iterative Linear Solvers

Several sample programs will be provided, and participants can review the contents of the lectures through hands-on exercises using the Oakbridge-CX system at the University of Tokyo (https://www.cc.u-tokyo.ac.jp/en/supercomputer/obcx/service/).

The Finite-Element Method (FEM) is widely used for solving various types of real-world scientific and engineering problems, such as structural analysis, fluid dynamics, and electromagnetics. This lecture course provides a brief introduction to FEM procedures for 1D/3D steady-state heat conduction problems with iterative linear solvers, and to parallel FEM. The lectures on parallel FEM focus on the design of data structures for distributed local mesh files, which is the key issue for efficient parallel FEM. An introduction to MPI (Message Passing Interface), the de facto standard for parallel programming, is also provided.

Solving large-scale linear equations with sparse coefficient matrices is the most expensive and important part of FEM and of other methods for scientific computing, such as the Finite-Difference Method (FDM) and the Finite-Volume Method (FVM). Krylov-subspace iterative solvers are now widely used for this task. In this class, details of the implementation of parallel Krylov iterative methods are presented along with parallel FEM.

In addition, lectures on programming for multicore architectures will be given, including a brief introduction to OpenMP and the OpenMP/MPI hybrid parallel programming model.

Prerequisites

  • Experience with Unix/Linux (vi or emacs)
  • Online Manual for Emacs (screen editor for Linux/Unix)
  • Experience in programming in Fortran or C/C++
  • Undergraduate-level mathematics and physics (e.g., linear algebra, calculus)
  • Fundamental numerical algorithms (Gaussian elimination, LU factorization, Jacobi/Gauss-Seidel/SOR iterative solvers, Conjugate Gradient (CG) method)
  • Experience with SSH public-key authentication (optional)
  • Participants are encouraged to read the following material and to understand the fundamental ideas of the MWR (Method of Weighted Residuals) before this course.
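For orientation, the MWR idea can be summarized in two lines (standard textbook material, written here for the 1D heat conduction equation used in the course):

```latex
% MWR for -d/dx(lambda du/dx) = f on (0,1): require the residual to
% vanish against every weight function w,
\int_0^1 w \left[ -\frac{d}{dx}\!\left( \lambda \frac{du}{dx} \right) - f \right] dx = 0 .
% Integration by parts (with w = 0 on Dirichlet boundaries) gives the weak form
\int_0^1 \lambda \, \frac{dw}{dx} \, \frac{du}{dx} \, dx = \int_0^1 w \, f \, dx ,
% and the Galerkin choice w \in \mathrm{span}\{N_i\}, u_h = \sum_j u_j N_j
% yields the linear system K u = b with K_{ij} = \int_0^1 \lambda N_i' N_j' \, dx.
```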

Preparation for PC

Schedule

Feb.10 (Thu)
  09:10-10:00  Introduction (1/2)-(2/2)
  10:10-11:00  (cont.)
  11:10-12:00  FEM (1/6)-(4/6)
  13:10-14:00  (cont.)
  14:10-15:00  (cont.)
  15:10-16:00  (cont.)
  16:10-17:00  Exercise (Optional)

Feb.11 (Fri)
  09:10-10:00  FEM (5/6)-(6/6)
  10:10-11:00  (cont.)
  11:10-12:00  Exercise
  13:10-14:00  Parallel FEM
  14:10-15:00  Login to OBCX
  15:10-16:00  MPI (1/6)
  16:10-17:00  Exercise (Optional)

Feb.12 (Sat)
  09:10-10:00  MPI (2/6)-(3/6)
  10:10-11:00  (cont.)
  11:10-12:00  Exercise
  13:10-14:00  MPI Practice (1/3)
  14:10-15:00  MPI (4/6)-(5/6)
  15:10-16:00  (cont.)
  16:10-17:00  Exercise (Optional)

Feb.19 (Sat)
  09:10-10:00  MPI (6/6)
  10:10-11:00  Exercise
  11:10-12:00  (cont.)
  13:10-14:00  MPI Practice (2/3)-(3/3)
  14:10-15:00  (cont.)
  15:10-16:00  Exercise
  16:10-17:00  Parallel FEM (1/4)-(4/4)

Feb.20 (Sun)
  09:10-10:00  Parallel FEM (1/4)-(4/4) (cont.)
  10:10-11:00  (cont.)
  11:10-12:00  (cont.)
  13:10-14:00  Exercise
  14:10-15:00  OpenMP/MPI Hybrid (1/2)-(2/2)
  15:10-16:00  (cont.)
  16:10-17:00  Exercise (Optional)

Materials

For more information, please refer to: 2022-HPC-NK (google.com)

Registration: https://forms.gle/4oMZcu22T1CeF2j37






 (C) 2021 National Center for Theoretical Sciences