703650 VO Parallel Systems

winter semester 2018/2019 | Last update: 06.09.2018
VO Parallel Systems
VO 2

Students of this course will learn methods for programming the most important classes of parallel computers with shared and distributed memory, and will acquire basic knowledge in the area of parallel computing. The two most important parallel programming models (MPI and OpenMP) will be used. Students will be able to analyse problems and design parallel solutions that are implemented as parallel programs for modern parallel computers.

All major microprocessor vendors (Intel, AMD, Sun, etc.) now develop microprocessors with multiple cores per chip exclusively. Devices ranging from desktop computers, notebooks, PDAs, cellular phones, servers, game consoles, and supercomputers to industrial and embedded systems are based on these new parallel processors. The main problem is that there are very few computer scientists and programmers with expertise in programming multicore processors, which is a major disadvantage for all graduates leaving university for the IT industry or staying in research, where programming modern computers is unavoidable. Parallel processing is thus no longer limited to a few specialists interested in supercomputing; it has become mainstream and is the only way forward for new IT infrastructure and modern computer programming.

Parallel processing has matured to the point where it has a considerable impact on the computer marketplace. The ideal in parallel systems is to achieve a computation speedup factor of p with p processors. Although this ideal can rarely be achieved, some speedup is generally possible by using a multiprocessor architecture; the actual speed gain depends on the system's architecture and the algorithm run on it. This course serves as an introduction to the area of parallel systems, with a special focus on programming for parallel architectures. Basic concepts and important techniques will be presented. The major approaches to parallel programming, including shared-memory multiprocessing and message passing, will be covered in detail. Students will gain programming experience in each of these paradigms through an accompanying laboratory. Architectural considerations, parallelization techniques, program analysis, and measures of performance will be covered. We will not follow any particular text throughout the entire class. Instead, we will use several textbooks as the general guideline of the lecture, covering both basic concepts and programming skills.

As part of this lecture we thus offer an introduction to the most important basic concepts of parallel processing, know-how that is crucial for dealing with virtually any new computer on the market. This course is designed for all graduate students interested in parallel processing and high-performance computing.

slide lecture with many code examples

written exam at the end of the lecture

  • Parallel Programming in C with MPI and OpenMP, by Michael J. Quinn, McGraw Hill
  • Parallel Computer Architecture: A HW/SW Approach, by David E. Culler et al., Morgan Kaufmann
  • Designing and Building Parallel Programs, by Ian Foster, www.mcs.anl.gov/dbpp

programming in C/C++

not applicable
see dates
Group 0
Date            Time           Location
Fri 2018-10-05  10.15 - 12.00  HSB 2 (barrier-free)
Fri 2018-10-12  10.15 - 12.00  HSB 2 (barrier-free)
Fri 2018-11-09  10.15 - 12.00  HSB 2 (barrier-free)
Fri 2018-11-16  10.15 - 12.00  HSB 2 (barrier-free)
Fri 2018-11-23  10.15 - 12.00  HSB 2 (barrier-free)
Fri 2018-11-30  10.15 - 12.00  HSB 2 (barrier-free)
Fri 2018-12-07  10.15 - 12.00  HSB 2 (barrier-free)
Fri 2018-12-14  10.15 - 12.00  HSB 2 (barrier-free)
Fri 2019-01-11  10.15 - 12.00  HSB 2 (barrier-free)
Fri 2019-01-18  10.15 - 12.00  HSB 2 (barrier-free)
Fri 2019-01-25  10.15 - 12.00  HSB 2 (barrier-free)
Fri 2019-02-01  10.15 - 12.00  HSB 2 (barrier-free)