An Introduction to Parallel Programming, Second Edition presents a tried-and-true tutorial approach that shows students how to develop effective parallel programs with MPI, Pthreads and OpenMP.
As the first undergraduate text to directly address compiling and running parallel programs on multi-core and cluster architectures, this second edition carries forward its clear explanations for designing, debugging, and evaluating the performance of distributed- and shared-memory programs, while adding coverage of accelerators via new content on GPU programming and heterogeneous programming. New and improved user-friendly exercises teach students how to compile, run, and modify example programs.
Table of Contents
1. Why parallel computing
2. Parallel hardware and parallel software
3. Distributed memory programming with MPI
4. Shared-memory programming with Pthreads
5. Shared-memory programming with OpenMP
6. GPU programming with CUDA
7. Parallel program development
8. Where to go from here
Authors
Peter Pacheco, University of San Francisco, USA. Peter Pacheco received a PhD in mathematics from Florida State University. After completing graduate school, he became one of the first professors in UCLA's "Program in Computing," which teaches basic computer science to students in the College of Letters and Science there. Since leaving UCLA, he has been on the faculty of the University of San Francisco. At USF, Peter has served as chair of the computer science department and is currently chair of the mathematics department.
His research is in parallel scientific computing. He has worked on the development of parallel software for circuit simulation, speech recognition, and the simulation of large networks of biologically accurate neurons. Peter has been teaching parallel computing at both the undergraduate and graduate levels for nearly twenty years. He is the author of Parallel Programming with MPI, published by Morgan Kaufmann Publishers.
Matthew Malensek, Assistant Professor, Department of Computer Science, University of San Francisco, CA, USA.
Matthew Malensek is an Assistant Professor in the Department of Computer Science at the University of San Francisco. His research interests are centered around big data, parallel/distributed systems, and cloud computing. This includes systems approaches for processing and managing data at scale in a variety of domains, including fog computing and Internet of Things (IoT) devices.