02-10-2010, 11:15 AM
Pipeline.doc (Size: 729.5 KB / Downloads: 50)
Pipelining is a form of parallel processing. Parallel processing denotes a large class of techniques that are used to perform data-processing tasks simultaneously for the purpose of increasing the computational speed of a computer system. For example, while one instruction is being executed in the ALU, the next instruction can be read from memory. The system may have two or more ALUs and be able to execute two or more instructions at the same time, or it may have two or more processors operating concurrently.
Parallel processing can be achieved by having a multiplicity of functional units that perform identical or different operations simultaneously. It is established by distributing the data among the multiple functional units. For example, the arithmetic, logic, and shift operations can be separated into three units and the operands diverted to each unit under the supervision of a control unit. The figure shows a possible way of separating the execution unit into eight functional units operating in parallel.
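The control-unit idea above can be sketched in a few lines. This is a minimal illustration, not hardware: the unit and function names are hypothetical, and the three "functional units" are ordinary functions that, in real hardware, would operate simultaneously.

```python
# Sketch of a control unit routing operands to three separate
# functional units (arithmetic, logic, shift). Names are hypothetical.

def arithmetic_unit(a, b):   # handles addition-style operations
    return a + b

def logic_unit(a, b):        # handles bitwise logic operations
    return a & b

def shift_unit(a, b):        # handles shift operations
    return a << b

# The "control unit": a dispatch table that diverts each operation
# to the matching functional unit.
UNITS = {"arith": arithmetic_unit, "logic": logic_unit, "shift": shift_unit}

def dispatch(op, a, b):
    return UNITS[op](a, b)

if __name__ == "__main__":
    ops = [("arith", 3, 4), ("logic", 0b1100, 0b1010), ("shift", 1, 3)]
    # In hardware the three units could process these concurrently.
    print([dispatch(*t) for t in ops])  # -> [7, 8, 8]
```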
Parallel processing can be classified according to the following:
1. Internal organization of the processors.
2. Interconnection structure between processors.
3. Flow of information through the system.
M. J. Flynn considers the organization of a computer system by the number of instructions and data items that are manipulated simultaneously. The sequence of instructions read from memory constitutes an instruction stream. The operations performed on the data in the processor constitute a data stream.
Flynn's classification divides computers into four major groups as follows:
Single instruction stream, single data stream (SISD):
It represents the organization of a simple computer containing a control unit, a processor, and a memory unit. In this case, parallel processing may be achieved by means of multiple functional units or by pipeline processing.
Single instruction stream, multiple data stream (SIMD):
It represents an organization that includes many processing units under the supervision of a common control unit. All processors receive the same instruction but operate on different items of data.
Multiple instruction stream, single data stream (MISD):
It is of only theoretical interest since no practical system has been constructed using this organization.
Multiple instruction stream, multiple data stream (MIMD):
It refers to a computer system capable of processing several programs at the same time. Most multiprocessor and multicomputer systems can be classified in this category.
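The SIMD case is the easiest of the four to demonstrate in code. The sketch below is only an analogy, with names of my own choosing: one "instruction" (a function) is broadcast to several processing elements, each applying it to its own data item, which is the essence of a single instruction stream driving multiple data streams.

```python
# Sketch of the SIMD idea: one instruction stream, many data streams.
# Each element of local_data stands for the data held by one
# processing element; all elements apply the same instruction in lockstep.

def simd_execute(instruction, data_items):
    """Broadcast one instruction to every processing element's data."""
    return [instruction(x) for x in data_items]

double = lambda x: 2 * x      # the common instruction
local_data = [1, 2, 3, 4]     # one data item per processing element
print(simd_execute(double, local_data))  # -> [2, 4, 6, 8]
```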
Flynn's classification depends on the distinction between the performance of the control unit and the data-processing unit. It emphasizes the behavioral characteristics of the computer system rather than its operational and structural interconnections. Pipelining does not fit neatly into Flynn's classification.
12-10-2012, 11:42 AM
PIPELINING.pptx (Size: 215.9 KB / Downloads: 23)
What is PIPELINING?
A pipeline processor is composed of a sequential, linear list of segments, where each segment performs one computational task or group of tasks.
How Does a Pipeline Work?
The pipeline is divided into segments, and each segment can execute its operation concurrently with the other segments.
The instruction Fetch (IF) stage is responsible for obtaining the requested instruction from memory.
The Instruction Decode (ID) stage is responsible for decoding the instruction and sending out the various control lines to the other parts of the processor.
The Memory and IO (MEM) stage is responsible for storing and loading values to and from memory.
Advantages:
More efficient use of the processor.
Quicker execution of large numbers of instructions.
Disadvantages:
Pipelining involves adding hardware to the chip.
Inability to continuously run the pipeline at full speed because of pipeline hazards, which disrupt the smooth execution of the pipeline.
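The "quicker execution" advantage can be quantified with the standard pipeline speedup formula (not from the slides, but a textbook result): a k-stage pipeline completes n instructions in k + (n - 1) cycles, versus n * k cycles for an equivalent non-pipelined unit, so the speedup approaches k as n grows.

```python
# Back-of-envelope pipeline speedup, assuming an ideal pipeline
# with no hazards and equal stage delays.

def speedup(k, n):
    """Speedup of a k-stage pipeline over a non-pipelined unit for n instructions."""
    unpipelined = n * k          # each instruction takes all k cycles alone
    pipelined = k + (n - 1)      # one result per cycle after the pipe fills
    return unpipelined / pipelined

print(round(speedup(5, 100), 2))     # -> 4.81, already close to k = 5
print(round(speedup(5, 10000), 3))   # approaches 5 as n grows
```

The hazards listed under the disadvantages are exactly what keeps real pipelines below this ideal figure.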
Pipelined processors represent an intelligent approach to speeding up instruction processing once memory access times have improved enough to keep the pipeline supplied with instructions and data.