Taiwan Mathematics School: HPC for Tomorrow - Scientific Computing and Machine Learning on Multi- and Manycore Architectures
 
9:30-12:40 and 13:40-15:30, January 15-19, 22-26, and 29-30, 2018
R440, Astronomy-Mathematics Building, NTU

Speaker:
Hartwig Anzt (Karlsruher Institut für Technologie)


Organizers:
Weichung Wang (National Taiwan University)


Course Description: 

This course covers the fundamentals of designing and implementing numerical linear algebra operations and algorithms on modern multi- and manycore architectures. New trends in the direction of Machine Learning and Deep Learning will also be covered. The course bridges the mathematical theory of linear solvers, iterative methods, and preconditioning with programming aspects such as MPI, OpenMP, and CUDA. The course is split into six parts:

  • Part I starts with an overview of HPC and current trends in high-end computing systems and environments, and continues with an introduction to common architecture designs and programming methodologies.
  • Part II covers programming techniques for multi- and manycore architectures. In particular, we will learn OpenMP and CUDA programming. Beyond that, we will look at distributed-memory systems and MPI (a small OpenMP sketch follows after this list). A side aspect here is performance modeling and the introduction of tools for assessing the efficiency of parallel implementations.
  • Part III covers central routines and algorithms needed for scientific computing: BLAS, sparse matrix-vector products (SpMV), linear solvers based on factorization and inversion routines, and singular value decompositions. Sparse solvers will also be addressed (an illustrative CSR SpMV sketch follows after this list).
  • In Part IV we focus on batched routines, as they are becoming increasingly popular and relevant. We look into batched BLAS efforts, hardware-specific optimization efforts, and sophisticated algorithms composed of batched routines (see the sketch of the batched idea after this list).
  • Part V of the course is devoted to Machine Learning in the widest sense. In particular, we will look at the concept of Deep Neural Networks (DNN), available ML libraries, and the performance of convolution kernels (a naive convolution sketch follows after this list).
  • Part VI contains the presentations of the student projects. Each student chooses a topic in agreement with the lecturer and prepares a presentation along with a project paper. This project accounts for 60% of the course grade. Although students are encouraged to come up with their own project ideas, a list of possible topics will be distributed at the beginning of the course.
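
To give a flavor of the shared-memory programming covered in Part II, here is a minimal, illustrative C sketch (not taken from the course material) that parallelizes an AXPY update y = alpha*x + y with a single OpenMP pragma; it can be built with a compiler's OpenMP flag, e.g. gcc -fopenmp.

    #include <stdio.h>
    #include <stdlib.h>
    #include <omp.h>

    /* AXPY: y = alpha*x + y, with the loop iterations distributed
     * across the available threads by a single OpenMP pragma.       */
    int main(void)
    {
        const int n = 1000000;
        const double alpha = 2.0;
        double *x = malloc(n * sizeof(double));
        double *y = malloc(n * sizeof(double));

        for (int i = 0; i < n; ++i) { x[i] = 1.0; y[i] = 2.0; }

        #pragma omp parallel for
        for (int i = 0; i < n; ++i)
            y[i] = alpha * x[i] + y[i];

        printf("y[0] = %.1f, max threads = %d\n", y[0], omp_get_max_threads());
        free(x);
        free(y);
        return 0;
    }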
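
In the same illustrative spirit, a small C sketch of the sparse matrix-vector product (SpMV) named in Part III, using the common compressed sparse row (CSR) storage format; the 3x3 matrix is a made-up example, not course data.

    #include <stdio.h>

    /* Sparse matrix-vector product y = A*x with A stored in CSR format:
     * row_ptr marks where each row starts in the col/val arrays.        */
    void spmv_csr(int n, const int *row_ptr, const int *col,
                  const double *val, const double *x, double *y)
    {
        for (int i = 0; i < n; ++i) {
            double sum = 0.0;
            for (int k = row_ptr[i]; k < row_ptr[i + 1]; ++k)
                sum += val[k] * x[col[k]];
            y[i] = sum;
        }
    }

    int main(void)
    {
        /* 3x3 example matrix:
         *   [ 4 0 1 ]
         *   [ 0 3 0 ]
         *   [ 2 0 5 ]                          */
        int row_ptr[] = {0, 2, 3, 5};
        int col[]     = {0, 2, 1, 0, 2};
        double val[]  = {4.0, 1.0, 3.0, 2.0, 5.0};
        double x[]    = {1.0, 1.0, 1.0};
        double y[3];

        spmv_csr(3, row_ptr, col, val, x, y);
        printf("y = [%.1f, %.1f, %.1f]\n", y[0], y[1], y[2]);  /* [5.0, 3.0, 7.0] */
        return 0;
    }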
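
For the batched routines of Part IV, the sketch below shows the underlying idea as a plain C loop over many independent small matrix products; actual batched BLAS libraries replace this loop with a single optimized call that keeps a manycore processor busy. The matrix and batch sizes are arbitrary choices for illustration.

    #include <stdio.h>

    #define N 2            /* size of each small matrix  */
    #define BATCH 1000     /* number of independent GEMMs */

    /* Batched idea: the same small operation (here C_b = A_b * B_b)
     * applied to many independent problems.                          */
    void gemm_batched(int batch, const double A[][N * N],
                      const double B[][N * N], double C[][N * N])
    {
        for (int b = 0; b < batch; ++b)          /* independent problems */
            for (int i = 0; i < N; ++i)
                for (int j = 0; j < N; ++j) {
                    double sum = 0.0;
                    for (int k = 0; k < N; ++k)
                        sum += A[b][i * N + k] * B[b][k * N + j];
                    C[b][i * N + j] = sum;
                }
    }

    int main(void)
    {
        static double A[BATCH][N * N], B[BATCH][N * N], C[BATCH][N * N];
        for (int b = 0; b < BATCH; ++b)
            for (int i = 0; i < N * N; ++i) { A[b][i] = 1.0; B[b][i] = 1.0; }
        gemm_batched(BATCH, A, B, C);
        printf("C[0][0] = %.1f\n", C[0][0]);  /* 2.0 for 2x2 all-ones inputs */
        return 0;
    }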
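
Finally, for the convolution kernels mentioned in Part V, here is a naive C sketch of a "valid" 2D convolution (strictly a cross-correlation, as commonly implemented in DNN frameworks); optimized ML libraries realize the same operation with far more elaborate kernels. The 4x4 input and 3x3 kernel are illustrative only.

    #include <stdio.h>

    /* Naive "valid" 2D convolution:
     * out[i][j] = sum_{p,q} in[i+p][j+q] * kernel[p][q].  */
    void conv2d(int H, int W, int K,
                const double *in, const double *kernel, double *out)
    {
        int OH = H - K + 1, OW = W - K + 1;
        for (int i = 0; i < OH; ++i)
            for (int j = 0; j < OW; ++j) {
                double sum = 0.0;
                for (int p = 0; p < K; ++p)
                    for (int q = 0; q < K; ++q)
                        sum += in[(i + p) * W + (j + q)] * kernel[p * K + q];
                out[i * OW + j] = sum;
            }
    }

    int main(void)
    {
        double in[4 * 4], kernel[3 * 3], out[2 * 2];
        for (int i = 0; i < 16; ++i) in[i] = 1.0;
        for (int i = 0; i < 9;  ++i) kernel[i] = 1.0;
        conv2d(4, 4, 3, in, kernel, out);
        printf("out[0][0] = %.1f\n", out[0]);  /* 9.0 */
        return 0;
    }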

Each day consists of two blocks: a morning block (3 hours) and an afternoon block (2 hours). While the morning block covers the more challenging theoretical background, the afternoon block aims at student involvement through practical examples, small exercises, and discussions. Students will have to complete eight homework assignments covering the individual topics.



Contact: murphyyu@ncts.ntu.edu.tw

Poster: events_3_1121712045218154023.pdf

