Question
In the simplest sense, parallel computing is the simultaneous use of multiple compute resources to solve a computational problem. A problem is broken into parts that can be solved concurrently. Each part is broken down into a series of instructions. Instructions from each part execute simultaneously on different processors, and an overall coordination mechanism is used. What is the difference between parallel computing nowadays and a decade ago? How is it implemented now and then? Search online and report back on the improvements to the Windows operating system after adopting parallel computing. In addition, research and report back on the tradeoffs of parallel computing done in hardware vs. software.

Explanation / Answer
PARALLEL COMPUTING
Definition: Parallel computing is the use of two or more processors (cores, computers) in combination to solve a single problem.
Differences between parallel computing nowadays and a decade ago:
Parallel computing is not new, but it is becoming common nowadays. This is essentially driven by the needs of users and also by physical limitations on chip design, e.g. power dissipation limits forcing manufacturers to move to multicore CPUs.
Three decades ago, research into cache-coherent shared memory (cache coherence is the uniformity of shared-resource data that ends up stored in multiple local caches) was just beginning. There were machines where you could send messages between processors.
By 1983 there were a few (room-sized) machines where processors could communicate through a shared memory interface, but they did not have cache coherence.
Most communication between processors in the early to mid 1980s was done using message-passing APIs. Today almost everyone uses the MPI (Message Passing Interface) standard for message passing.
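As a rough illustration of the message-passing style, here is a minimal MPI sketch in C (a sketch only; it assumes an MPI implementation such as MPICH or Open MPI, is typically compiled with mpicc, and needs at least two processes, e.g. mpirun -np 2):

/* Minimal MPI message-passing sketch: rank 1 sends an integer to rank 0. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    int rank, value = 0;
    MPI_Init(&argc, &argv);                /* start the MPI runtime */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* which process am I?   */

    if (rank == 1) {
        value = 42;
        MPI_Send(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);        /* send to rank 0      */
    } else if (rank == 0) {
        MPI_Recv(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);                                /* receive from rank 1 */
        printf("rank 0 received %d\n", value);
    }

    MPI_Finalize();                        /* shut the runtime down */
    return 0;
}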
Cache-coherent shared memory multiprocessors and shared memory programming started becoming somewhat common in the 1990s for systems with tens of processors. The OpenMP standard emerged for specifying parallel computation in Fortran or C.
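By contrast with message passing, OpenMP expresses shared-memory parallelism with compiler directives. A minimal sketch in C (illustrative only; typically built with an OpenMP-capable compiler, e.g. gcc -fopenmp):

/* Minimal OpenMP sketch: split the loop iterations across the available threads. */
#include <omp.h>
#include <stdio.h>

#define N 1000000

int main(void) {
    static double a[N];

    #pragma omp parallel for   /* work-sharing directive: iterations run on multiple threads */
    for (int i = 0; i < N; i++)
        a[i] = 2.0 * i;

    printf("a[N-1] = %f, max threads = %d\n", a[N - 1], omp_get_max_threads());
    return 0;
}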
Today you can buy a shared memory machine with 8 hardware contexts for a few hundred dollars.
Each CPU in the Cray X-MP in 1983 could reach a theoretical peak of 200 MFLOPS with a 105 MHz clock, whereas a single core today might be able to do 12 GFLOPS with a 3 GHz clock.
So many computations that would have needed to be hand-parallelized 30 years ago do not need to be explicitly parallelized today.
Implementing parallel processing:
There is an old networking saying: bandwidth problems can be cured with money; latency problems are harder because the speed of light is fixed.
To attain speedup and scaleup, you must effectively implement parallel processing and parallel database technology. This means designing and building your system for parallel processing from the start.
Successfully Implementing Parallel Processing:
- The four levels of scalability you need
- When parallel processing is advantageous
- When parallel processing is not advantageous
- Guidelines for effective partitioning
- Common misconceptions about parallel processing
Improvements of Windows operating systems:
Bit-level parallelism:
From the advent of very-large-scale integration (VLSI) computer chip fabrication technology in the 1970s until about 1986, speedup in computer architecture was driven by doubling the computer word size, the amount of information the processor can manipulate per cycle.
Instruction-level parallelism:
A computer program is, in essence, a stream of instructions executed by a processor. Without instruction-level parallelism, a processor can issue no more than one instruction per clock cycle (and often less).
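A small C fragment, purely for illustration, of the kind of independence a multiple-issue processor can exploit:

/* Instruction-level parallelism (conceptual): the first two statements have no
   data dependence, so a superscalar core can execute them in the same cycle;
   the third depends on both results and must wait. */
int ilp_example(int a, int b, int c, int d) {
    int x = a + b;   /* independent of y */
    int y = c + d;   /* independent of x */
    return x * y;    /* depends on both x and y */
}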
Multi-core computing:
A multi-core processor is a processor that includes multiple processing units (cores) on the same chip.
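One common way software uses those cores explicitly is with threads. A minimal POSIX threads sketch in C (an illustration only; assumes a POSIX system and is typically linked with -pthread):

/* Minimal POSIX threads sketch: run the same worker function on several cores. */
#include <pthread.h>
#include <stdio.h>

#define NTHREADS 4

static void *worker(void *arg) {
    long id = (long)arg;
    printf("thread %ld running\n", id);
    return NULL;
}

int main(void) {
    pthread_t threads[NTHREADS];

    for (long i = 0; i < NTHREADS; i++)
        pthread_create(&threads[i], NULL, worker, (void *)i);  /* start workers       */

    for (int i = 0; i < NTHREADS; i++)
        pthread_join(threads[i], NULL);                        /* wait for completion */

    return 0;
}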
Symmetric multiprocessing:
A symmetric multiprocessor is a computer system with multiple identical processors that share memory and connect via a bus.
Distributed computing:
A distributed computer is a distributed memory computer system in which the processing elements are connected by a network.
Parallel computers based on interconnection networks need some kind of routing to enable the passing of messages between nodes that are not directly connected. The medium used for communication between the processors is likely to be hierarchical in large multiprocessor machines.
Hardware vs Software:
Hardware parallelism:
- Defined by machine architecture, hardware multiplicity (number of processors available) and connectivity
- Often a function of cost/performance tradeoffs
- Characterized in a single processor by the number of instructions k issued in a single cycle.
- A multiprocessor system with n k-issue processors can handle at most nk parallel instructions or n parallel threads at a time (for example, four 2-issue processors can issue at most 8 instructions per cycle).
Software parallelism:
- Defined by the control and data dependence of programs (see the sketch after this list).
- Revealed in program profiling or in the program dependency (data flow) graph.
- A function of algorithm, parallel task grain size, programming style and compiler optimization.
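To make the dependence point concrete, here is a small C sketch (illustrative only): the first loop has independent iterations and parallelizes cleanly, while the second has a loop-carried dependence and does not.

/* Data dependence and software parallelism (illustrative sketch). */
#define N 1000

void independent(double *a, const double *b) {
    /* No iteration reads a value written by another iteration,
       so all iterations can run in parallel. */
    for (int i = 0; i < N; i++)
        a[i] = 2.0 * b[i];
}

void dependent(double *a) {
    /* Each iteration reads the result of the previous one (a loop-carried
       dependence), so the iterations must run in order. */
    for (int i = 1; i < N; i++)
        a[i] = a[i - 1] + 1.0;
}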