# Parallel Computing

## What is Parallel Computing?
### Serial Computing:
- Traditionally, software has been written for serial computation:
  - A problem is broken into a discrete series of instructions
  - Instructions are executed sequentially, one after another
  - Instructions are executed on a single processor
  - Only one instruction may execute at any moment in time (see the sketch after this list)

[Figure: serial computing, with a problem divided into a sequential stream of instructions executed by a single processor]
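To make the single-instruction-stream idea concrete, here is a minimal serial sketch in C; the array contents are arbitrary illustration. Each loop iteration begins only after the previous one has finished.

```c
#include <stdio.h>

#define N 8

int main(void) {
    int data[N] = {3, 1, 4, 1, 5, 9, 2, 6};
    long sum = 0;

    /* One processor, one instruction stream: iteration i runs
       only after iteration i-1 has completed. */
    for (int i = 0; i < N; i++) {
        sum += data[i];
    }

    printf("sum = %ld\n", sum);
    return 0;
}
```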
### Parallel Computing:
- In the simplest sense, parallel computing is the simultaneous use of multiple compute resources to solve a computational problem:
  - A problem is broken into discrete parts that can be solved concurrently
  - Each part is further broken down into a series of instructions
  - Instructions from each part execute simultaneously on different processors
  - An overall control/coordination mechanism is employed (see the sketch after this list)

[Figure: parallel computing, with a problem divided into parts whose instruction streams execute simultaneously on multiple processors]
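As a sketch of the same computation done in parallel, the OpenMP version below breaks the loop into discrete parts that execute simultaneously on different cores; the `reduction` clause plays the role of the control/coordination mechanism, combining the per-thread partial sums. This assumes an OpenMP-capable compiler (e.g. built with `-fopenmp`).

```c
#include <stdio.h>
#include <omp.h>

#define N 8

int main(void) {
    int data[N] = {3, 1, 4, 1, 5, 9, 2, 6};
    long sum = 0;

    /* The loop iterations are divided among threads and execute
       simultaneously; the reduction clause coordinates the
       per-thread partial sums into a single result. */
    #pragma omp parallel for reduction(+:sum)
    for (int i = 0; i < N; i++) {
        sum += data[i];
    }

    printf("sum = %ld (computed by up to %d threads)\n",
           sum, omp_get_max_threads());
    return 0;
}
```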
- The computational problem should be able to:
  - Be broken apart into discrete pieces of work that can be solved simultaneously (illustrated below)
  - Execute multiple program instructions at any moment in time
  - Be solved in less time with multiple compute resources than with a single compute resource
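The first requirement, decomposability into independent pieces, is the one that most often decides whether a problem parallelizes well. A small illustrative contrast (the array contents are arbitrary):

```c
#include <stdio.h>

#define N 8

int main(void) {
    double a[N] = {1, 2, 3, 4, 5, 6, 7, 8};
    double b[N] = {8, 7, 6, 5, 4, 3, 2, 1};
    double c[N], x[N];

    /* Parallelizable: each iteration touches only its own index,
       so all iterations could run simultaneously. */
    for (int i = 0; i < N; i++)
        c[i] = a[i] + b[i];

    /* Not directly parallelizable: a loop-carried dependence.
       Iteration i reads x[i-1], which iteration i-1 must write first. */
    x[0] = a[0];
    for (int i = 1; i < N; i++)
        x[i] = x[i-1] + a[i];

    printf("c[%d] = %g, x[%d] = %g\n", N - 1, c[N - 1], N - 1, x[N - 1]);
    return 0;
}
```

The second loop can still be parallelized with more advanced techniques (e.g. a parallel prefix sum), but not by simply running its iterations simultaneously.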
- The compute resources are typically:
  - A single computer with multiple processors/cores
  - An arbitrary number of such computers connected by a network (see the sketch below)
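For the second kind of resource, computers cooperating over a network, MPI is the common programming model. A minimal sketch follows; note that the number of processes and the list of machines are chosen at launch time, not in the code.

```c
#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[]) {
    int rank, size, len;
    char name[MPI_MAX_PROCESSOR_NAME];

    /* Each process is one "part" of the overall problem; the MPI
       runtime provides the control/coordination mechanism. */
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* which part am I?    */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* how many parts?     */
    MPI_Get_processor_name(name, &len);     /* which node am I on? */

    printf("process %d of %d running on %s\n", rank, size, name);

    MPI_Finalize();
    return 0;
}
```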
### Parallel Computers:
- Virtually all stand-alone computers today are parallel from a hardware perspective:
  - Multiple functional units (L1 cache, L2 cache, branch, prefetch, decode, floating-point, graphics processing (GPU), integer, etc.)
  - Multiple execution units/cores
  - Multiple hardware threads

[Figure: IBM BG/Q compute chip with 18 cores (PU) and 16 L2 cache units (L2)]
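A quick way to see the core-level parallelism of a stand-alone machine is to ask the operating system how many logical processors it exposes. One common POSIX-flavored approach, assumed available on the target system, uses `sysconf`:

```c
#include <stdio.h>
#include <unistd.h>

int main(void) {
    /* Number of processors currently online; _SC_NPROCESSORS_ONLN is a
       widely supported extension, assumed present on this platform. */
    long ncores = sysconf(_SC_NPROCESSORS_ONLN);
    if (ncores < 1) {
        perror("sysconf");
        return 1;
    }
    printf("this node exposes %ld logical processors\n", ncores);
    return 0;
}
```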
- Networks connect multiple stand-alone computers (nodes) to make larger parallel computer clusters.
For example, the schematic below shows a typical LLNL parallel computer cluster:

[Figure: schematic of a typical LLNL parallel computer cluster]

- Each compute node is a multi-processor parallel computer in itself
- Multiple compute nodes are networked together with an InfiniBand network
- Special-purpose nodes, also multi-processor, are used for other purposes
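On a cluster like this, the MPI sketch shown earlier would typically be started by a job launcher rather than run directly, for example `mpirun` (or `srun` on Slurm-managed systems such as LLNL's), which places one process per core across however many compute nodes the job was allocated.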