This course introduces students to the paradigm of parallel computing. Today almost all computer systems include so-called multi-core chips. To exploit the full performance of such systems, one needs to employ parallel programming.

The course covers shared-memory parallelization as well as parallelization with message passing on distributed-memory architectures. The following parallelization techniques will be treated, depending on the underlying hardware:
- OpenMP for shared-memory multiprocessing.
- MPI for message-passing programming on distributed-memory systems.
- CUDA for GPU computing on Nvidia GPUs.

The course description will be finalized during August and September.