TDDB78 | Programming of Parallel Computers, Embedded Systems, ECTS points /PROGRAMMERING AV PARALLELLDATORER, INBYGGDA SYSTEM/ Advancement level: D
Aim: To give knowledge of methods and languages for programming parallel computer architectures, together with the skills to program such computers. The course shall also give an overview of how parallel computers can be used in application areas such as embedded systems, image analysis and technical computations.

Prerequisites: A basic course in programming. A course in process programming, such as TDDB12, since an understanding of the process concept is assumed. For the course variant technical computations, knowledge of Fortran and numerical analysis is recommended. For the course variant massive parallelism, programming skills in C/C++ are recommended.

Course organization: The course is given in two variants with most of the material in common. The variant massive parallelism, TDDB78, is described here, while the variant technical computations, TANA77, can be found among the courses given by the mathematics department. Approximately 24 of the 30 lecture hours are common to both variants. The lectures cover theory and principles, while the laboratory assignments are practical exercises in parallel programming and tool usage.

Course content: Parallel computer architecture: memory hierarchies, shared-memory and distributed-memory architectures. Vector operations. Parallel execution models and programming languages. Performance measurement and optimization of parallel programs. Message-based programming and data-parallel programming. Principles of data-parallel languages. Time complexity. Scalability. Scheduling of parallel programs. Vectorizing and parallelizing serial programs. Tools and support for parallel programming. MPI (Message Passing Interface). Basic parallel algorithms and BLAS (Basic Linear Algebra Subprograms). Application areas. Parallel solution of systems of equations. The laboratory course gives practical experience in programming parallel systems (data-parallel, message-based and shared memory); a minimal message-passing sketch is shown below. For data-parallel programming a 16384-processor MasPar computer is used. A 128-processor Parsytec and a 184-processor Cray T3E are used for message-based programming. A 20-processor SPARCcenter 2000 is used for shared-memory programming.

Course literature: Ian T. Foster, "Designing and Building Parallel Programs", Addison-Wesley, 1995. Compendium.
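To illustrate the style of message-based programming practised in the laboratory assignments, the following is a minimal sketch of an MPI program in C, assuming a standard MPI installation (compiled with mpicc and started with mpirun); the message contents and tag are arbitrary choices for illustration only, not part of the course material.

/* Minimal MPI sketch: process 0 sends one integer to every other process. */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank, size, value;
    MPI_Status status;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* id of this process */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of processes */

    if (rank == 0) {
        /* Process 0 sends an (illustrative) integer to each other process. */
        for (int dest = 1; dest < size; dest++) {
            value = dest * dest;
            MPI_Send(&value, 1, MPI_INT, dest, 0, MPI_COMM_WORLD);
        }
    } else {
        /* Every other process receives the integer and prints it. */
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &status);
        printf("Process %d of %d received %d\n", rank, size, value);
    }

    MPI_Finalize();
    return 0;
}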
TEN 1 | Written examination
LAB 1 | Laboratory work