Virtually all semiconductor market domains, including PCs, game consoles, mobile handsets, servers, and supercomputers, are converging to concurrent platforms. There are two important reasons for this trend. First, these concurrent processors can potentially offer more effective use of chip space and power than traditional monolithic microprocessors for many demanding applications. Second, an increasing number of applications that traditionally used Application Specific Integrated Circuits (ASICs) can be implemented using concurrent processors to improve functionality and reduce engineering cost. The real challenge is to develop applications software that effectively uses these concurrent processors to achieve efficiency and performance goals.
The aim of this course is to provide students with knowledge and hands-on experience in developing applications for massively parallel processors, as well as an understanding of current research topics in this area and an opportunity to write and critically review research papers. Many commercial offerings from NVIDIA, AMD, and Intel already offer concurrency at massive scale. Effectively programming these processors requires in-depth knowledge of parallel programming principles, as well as the parallelism models, communication models, and resource limitations of these processors.
The course will cover fundamental issues related to architecting and building applications that harness massively parallel processors, with a focus on graphics processing units (GPUs): programming models for massive parallelism, application optimization techniques, compile-time and runtime support, as well as case studies for specific application domains.
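To give a flavor of the programming models in question, the sketch below shows the canonical first example in CUDA, the dominant GPU programming model: element-wise vector addition, where each of thousands of lightweight threads computes one output element. This example is illustrative only and is not drawn from the course materials; it assumes a CUDA-capable GPU and the CUDA toolkit, and uses unified (managed) memory to keep the host code short.

```
#include <cstdio>
#include <cuda_runtime.h>

// Kernel: each thread computes one element of c = a + b.
// blockIdx/blockDim/threadIdx give the thread its global index.
__global__ void vecAdd(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)              // guard: the grid may be larger than n
        c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);

    // Unified memory keeps this sketch short; a fuller treatment would
    // contrast it with explicit cudaMemcpy host/device transfers.
    float *a, *b, *c;
    cudaMallocManaged(&a, bytes);
    cudaMallocManaged(&b, bytes);
    cudaMallocManaged(&c, bytes);
    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    int threads = 256;                         // threads per block
    int blocks = (n + threads - 1) / threads;  // enough blocks to cover n
    vecAdd<<<blocks, threads>>>(a, b, c, n);   // launch the parallel grid
    cudaDeviceSynchronize();                   // wait for the GPU to finish

    printf("c[0] = %f\n", c[0]);

    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}
```

Even this small example surfaces the course's themes: the mapping of data to threads, the block/grid launch configuration, and the host/device memory model are all resource and performance decisions the programmer must make explicitly.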