703506 Parallel Systems

winter semester 2011/2012 | Last update: 19.03.2012
703506 | Parallel Systems | VO 2 | 4 | weekly | annually | English
All major microprocessor vendors (Intel, AMD, Sun, etc.) today develop almost exclusively microprocessors with multiple cores per chip. Devices ranging from desktop computers, notebooks, PDAs, cellular phones, servers, game consoles, and supercomputers to industrial computers and embedded systems are based on these new parallel processors. The main problem is that very few computer scientists and programmers have expertise in programming multicore processors, which is a major disadvantage for graduates entering the IT industry or staying in research, where programming modern computers is unavoidable. Parallel processing is thus no longer limited to a few specialists interested in supercomputing; it has become mainstream and is the only way forward for new IT infrastructure and modern computer programming. This lecture therefore offers an introduction to the most important basic concepts of parallel processing, which is crucial know-how for dealing with essentially any new computer on the market.
Parallel processing has matured to the point where it has begun to make a considerable impact on the computer marketplace. The ideal efficiency in a parallel system is a computation speedup factor of p with p processors. Although this ideal can rarely be achieved, some speedup is generally possible with a multiprocessor architecture; the actual gain depends on the system's architecture and on the algorithm run on it. This course serves as an introduction to the area of parallel systems, with a special focus on programming for parallel architectures. Basic concepts and important techniques will be presented. The major approaches to parallel programming, including shared-memory multiprocessing and message passing, will be covered in detail, and students will gain programming experience in each of these paradigms through an accompanying laboratory. Architectural considerations, parallelization techniques, program analysis, and measures of performance will also be covered. We will not follow any particular text throughout the class; instead, several textbooks will serve as the general guideline of the lecture, covering both basic concepts and programming skills. The course is designed for all graduate students interested in parallel processing and high-performance computing, which has recently become mainstream.
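The speedup notion above can be made concrete with Amdahl's law (not named in the course description, but the standard model for it): if a fraction f of a program's sequential run time can be parallelized, then p processors yield a speedup of 1 / ((1 - f) + f/p); the ideal factor of p mentioned above is the special case f = 1. A minimal sketch in Python:

```python
def speedup(p, f):
    """Amdahl's law: speedup with p processors when a fraction f
    (0 <= f <= 1) of the sequential run time is parallelizable."""
    if p < 1 or not 0.0 <= f <= 1.0:
        raise ValueError("need p >= 1 and 0 <= f <= 1")
    return 1.0 / ((1.0 - f) + f / p)

# Fully parallel work reaches the ideal speedup factor of p:
print(speedup(4, 1.0))   # -> 4.0
# With only half the work parallelizable, 4 processors give far less:
print(speedup(4, 0.5))   # -> 1.6
```

As f falls below 1, the speedup saturates at 1 / (1 - f) no matter how many processors are added, which is why the actual gain depends on the algorithm as well as on the architecture, as noted above.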
FWF DK-plus: Programming, Analysis and Optimization for High Performance Computing
Group 0
Date | Time | Location
Fri 2011-10-07 | 10.15 - 12.00 | HSB 1 (Barrier-free)
Fri 2011-10-14 | 10.15 - 12.00 | HSB 1 (Barrier-free)
Fri 2011-10-21 | 10.15 - 12.00 | HSB 1 (Barrier-free)
Fri 2011-10-28 | 10.15 - 12.00 | HSB 1 (Barrier-free)
Fri 2011-11-11 | 10.15 - 12.00 | HSB 1 (Barrier-free)
Fri 2011-11-18 | 10.15 - 12.00 | HSB 1 (Barrier-free)
Fri 2011-11-25 | 10.15 - 12.00 | HSB 1 (Barrier-free)
Fri 2011-12-02 | 10.15 - 12.00 | HSB 1 (Barrier-free)
Fri 2011-12-09 | 10.15 - 12.00 | HSB 1 (Barrier-free)
Fri 2011-12-16 | 10.15 - 12.00 | HSB 1 (Barrier-free)
Fri 2012-01-13 | 10.15 - 12.00 | HSB 1 (Barrier-free)
Fri 2012-01-20 | 10.15 - 12.00 | HSB 1 (Barrier-free)
Fri 2012-01-20 | 10.15 - 12.00 | HSB 3 (Barrier-free) Exam
Fri 2012-01-27 | 10.15 - 12.00 | HSB 1 (Barrier-free)
Fri 2012-02-03 | 10.15 - 12.00 | HSB 1 (Barrier-free)