Course details
Practical Parallel Programming
PPP, academic year 2019/2020, summer semester, 5 credits
The course covers the architecture and programming of parallel systems with functional and data parallelism. First, parallel system theory and program parallelization are discussed. A detailed description of the most widespread supercomputing systems, interconnection network topologies and routing algorithms is followed by the architecture of parallel and distributed storage systems. The course then moves on to message-passing programming with the standardized MPI interface. Subsequently, techniques for parallel debugging and profiling are discussed. The last part of the course is devoted to parallel programming patterns and case studies from the areas of linear algebra, physical systems described by partial differential equations, N-body systems and Monte Carlo methods.
Guarantor
Course coordinator
Language of instruction
Completion
Time span
- 26 hrs lectures
- 16 hrs PC labs
- 10 hrs projects
Assessment points
- 80 pts final exam (written part)
- 20 pts projects
Department
Lecturer
Instructor
Subject specific learning outcomes and competences
Overview of the principles of current parallel system design and of interconnection networks, communication techniques and algorithms. Survey of parallelization techniques for fundamental scientific problems, knowledge of parallel programming in MPI. Knowledge of basic parallel programming patterns. Practical experience with working on supercomputers, ability to identify performance issues and propose solutions.
Knowledge of capabilities and limitations of parallel processing, ability to estimate performance of parallel applications. Language means for process/thread communication and synchronization. Competence in hardware-software platforms for high-performance computing and simulations.
Learning objectives
To become familiar with the architecture of distributed supercomputing systems, their interconnection networks and storage. To gain an orientation in the parallel systems on the market, be able to assess the communication and computing capabilities of a particular architecture, and to predict the performance of parallel applications. To learn how to write portable programs using standardized interfaces and languages, and how to specify parallelism and process communication. To learn how to use a supercomputer in practice for solving complex engineering problems.
Why is the course taught
This course will take you into the area of high-performance computing, where a single computer is far from powerful enough to satisfy application demands. The only solution in such cases is to distribute the computation across a supercomputing cluster. The course first examines the architecture of the top machines and then focuses on their software equipment. We will learn the MPI library, which is an industry standard in high-performance computing. Finally, we will introduce a few typical applications such as physical simulation of heat distribution, fluid dynamics, N-body gravitational and Coulomb systems of galaxies and molecules, and Monte Carlo methods.
Prerequisite knowledge and skills
Von Neumann computer architecture, computer memory hierarchy, cache memories and their organization, programming in assembly and in C/C++. Knowledge gained in the courses PRL and AVS.
Study literature
- MPI Tutorial: http://mpitutorial.com/
- An alternative course on parallel programming: http://www.cs.kent.edu/~jbaker/ParallelProg-Sp11/
Syllabus of lectures
- Introduction to parallel processing.
- Architectures with distributed memory.
- Interconnection networks: topology and routing algorithms, switching, flow control.
- Technologies of interconnection networks (InfiniBand).
- Distributed file systems (Lustre, HPFS).
- Message passing interface, pair-wise communications, data types (a point-to-point sketch follows this list).
- Collective communications and communicators.
- Hybrid programming OpenMP/MPI and one-sided communications.
- Parallel code debugging, profiling and tracing.
- Programming patterns for parallel programming.
- Case studies: matrix calculations, linear equation systems.
- Case studies: solution of PDE systems, finite difference, spectral methods.
- Case studies: fluid dynamics, N-body systems, Monte Carlo methods.
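To illustrate the pair-wise (point-to-point) message passing covered in the MPI lectures, a minimal sketch in C is given below. It is an illustrative example only, not part of the official course materials; the message value, tag and required process count are arbitrary choices for the example.

/* Minimal MPI point-to-point sketch: rank 0 sends an integer to rank 1.
 * Illustrative example only; typically compiled with mpicc and run with
 * at least two processes, e.g. mpirun -np 2 ./a.out */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (size < 2) {
        if (rank == 0)
            fprintf(stderr, "Run with at least 2 processes.\n");
        MPI_Finalize();
        return 1;
    }

    if (rank == 0) {
        int payload = 42;   /* arbitrary example value */
        MPI_Send(&payload, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        int payload;
        MPI_Recv(&payload, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("Rank 1 received %d from rank 0\n", payload);
    }

    MPI_Finalize();
    return 0;
}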
Syllabus of computer exercises
- MPI: Point-to-point communications.
- MPI: Collective communications (a reduction sketch follows this list).
- MPI: Communicators.
- MPI: Data types, reduction.
- MPI: Parallel input and output.
- Profiling and tracing of parallel applications.
- Matrix calculations.
- Finite difference methods.
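The collective-communication lab topic can likewise be illustrated with a minimal hedged sketch in C: each process contributes one integer and the root receives the sum via MPI_Reduce. The contributed values and the choice of rank 0 as root are arbitrary for the example and are not taken from the lab assignments.

/* Minimal MPI collective sketch: every rank contributes its rank number
 * and rank 0 receives the sum via MPI_Reduce.
 * Illustrative example only; e.g. mpirun -np 4 ./a.out */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int local = rank;   /* each process contributes its own rank */
    int sum = 0;

    /* Sum the contributions of all processes; only the root (rank 0)
     * receives the result. */
    MPI_Reduce(&local, &sum, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("Sum of ranks 0..%d is %d\n", size - 1, sum);

    MPI_Finalize();
    return 0;
}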
Syllabus - others, projects and individual work of students
- A parallel program in MPI on the supercomputer.
Progress assessment
Assessment of a project (10 hours in total) and a midterm examination.
Controlled instruction
- Missed labs can be made up on alternative dates (Monday or Friday).
- A make-up slot for missed labs will also be available in the last week of the semester.
Exam prerequisites
To obtain at least 20 out of the 40 points available for the projects and the midterm examination.
Course inclusion in study plans
- Programme IT-MGR-2, field MBI, MGM, any year of study, Compulsory-Elective
- Programme IT-MGR-2, field MBS, MIN, MIS, MMM, any year of study, Elective
- Programme IT-MGR-2, field MPV, MSK, 1st year of study, Compulsory
- Programme MITAI, field NADE, NCPS, NGRI, NIDE, NISD, NISY, NMAL, NMAT, NNET, NSEC, NSEN, NSPE, NVER, NVIZ, any year of study, Elective
- Programme MITAI, field NBIO, any year of study, Compulsory
- Programme MITAI, field NEMB, 2nd year of study, Compulsory
- Programme MITAI, field NHPC, 1st year of study, Compulsory