CPU-Scheduling Criteria
Different CPU-scheduling algorithms have
different properties, and the choice of a particular algorithm may favor one
class of processes over another. In choosing which algorithm to use in a
particular situation, we must consider the properties of the various
algorithms.
Many criteria have been suggested for
comparing CPU-scheduling algorithms. Which characteristics are used for comparison
can make a substantial difference in which algorithm is judged to be best. The
criteria include the following:
· CPU utilization: We want to keep the CPU as busy as possible. CPU
utilization may range from 0 to 100 percent. In a real system, it should range
from 40 percent (for a lightly loaded system) to 90 percent (for a heavily used
system).
· Throughput: If the CPU is busy executing processes,
then work is being done. One measure of work is the number of processes
completed per time unit, called throughput. For long processes, this rate may
be 1 process per hour; for short transactions, throughput might be 10 processes per second.
· Turnaround time: From the point of view of a particular
process, the important criterion is how long it takes to execute that process.
The interval from the time of submission of a process to the time of completion
is the turnaround time. Turnaround time is the sum of the periods spent waiting
to get into memory, waiting in the ready queue, executing on the CPU, and doing
I/O.
· Waiting time: The CPU-scheduling algorithm does not
affect the amount of time during which a process executes or does I/O; it affects only the amount of time that a process
spends waiting in the ready queue. Waiting time is the sum of the periods spent
waiting in the ready queue.
· Response time: In an interactive system, turnaround time
may not be the best criterion. Often, a process can produce some output fairly
early, and can continue computing new results while previous results are being
output to the user. Thus, another measure is the time from the submission of a
request until the first response is produced. This measure, called response
time, is the amount of time it takes to start responding, but not the time that
it takes to output that response. The output time is generally limited by the
speed of the output device. (The sketch following this list shows how turnaround,
waiting, and response times can be computed.)
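To make the relationships among these time measures concrete, here is a minimal sketch in C. The structure, field names, and sample numbers are hypothetical and serve only to show the arithmetic: turnaround time is completion minus submission, waiting time is turnaround time minus the time spent executing or doing I/O, and response time is the time of the first response minus the submission time.

#include <stdio.h>

/* Hypothetical per-process timing record; all values in milliseconds.
   The field names and sample data are illustrative only. */
struct proc_times {
    const char *name;
    int submit;          /* time the process was submitted           */
    int complete;        /* time the process finished                */
    int cpu_time;        /* total time spent executing on the CPU    */
    int io_time;         /* total time spent doing I/O               */
    int first_response;  /* time the first output was produced       */
};

int main(void)
{
    struct proc_times p[] = {
        { "P1", 0, 120, 40, 30, 15 },
        { "P2", 5, 200, 60, 50, 25 },
    };
    int n = sizeof(p) / sizeof(p[0]);

    for (int i = 0; i < n; i++) {
        int turnaround = p[i].complete - p[i].submit;
        /* Time spent neither executing nor doing I/O; assuming the
           process is in memory the whole time, this is the time spent
           waiting in the ready queue. */
        int waiting    = turnaround - p[i].cpu_time - p[i].io_time;
        int response   = p[i].first_response - p[i].submit;

        printf("%s: turnaround=%d waiting=%d response=%d\n",
               p[i].name, turnaround, waiting, response);
    }
    return 0;
}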
We want to maximize CPU utilization and
throughput, and to minimize turnaround time, waiting time, and response time.
In most cases, we optimize the average measure. However, in some circumstances
we want to optimize the minimum or maximum values, rather than the average. For
example, to guarantee that all users get good service, we may want to minimize
the maximum response time.
For interactive systems (such as
time-sharing systems), some analysts suggest that minimizing the variance in
the response time is more important than minimizing the average response time.
A system with reasonable and predictable response time may be considered more
desirable than a system that is faster on average but highly variable.
However, little work has been done on CPU-scheduling algorithms to minimize
variance.
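As a rough illustration of these statistics, the sketch below computes the average, maximum, and variance of a set of response times. The values are made up; the point is only that a single slow response inflates the maximum and the variance far more than it inflates the average, which is why the average alone can hide poor or unpredictable service.

#include <stdio.h>

/* Average, maximum, and variance of a set of response times
   (hypothetical values, in milliseconds). */
int main(void)
{
    double rt[] = { 20, 25, 22, 180, 24, 21 };   /* one slow outlier */
    int n = sizeof(rt) / sizeof(rt[0]);

    double sum = 0, max = rt[0];
    for (int i = 0; i < n; i++) {
        sum += rt[i];
        if (rt[i] > max)
            max = rt[i];
    }
    double avg = sum / n;

    double var = 0;
    for (int i = 0; i < n; i++)
        var += (rt[i] - avg) * (rt[i] - avg);
    var /= n;

    printf("average = %.1f ms, maximum = %.1f ms, variance = %.1f\n",
           avg, max, var);
    return 0;
}

A schedule that avoided the 180-millisecond outlier would look much better on the maximum and the variance even if its average barely changed.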
As we discuss various CPU-scheduling
algorithms, we want to illustrate their operation. An accurate illustration
should involve many processes, each being a sequence of several hundred CPU
bursts and I/O bursts. For simplicity of illustration, we
consider only one CPU burst (in milliseconds) per process in our examples. Our measure of comparison is the average
waiting time.
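As a preview of that kind of calculation, the sketch below computes per-process waiting times and the average waiting time for a set of single CPU bursts served in the order they arrive (first-come, first-served). The burst lengths are made-up values used only to show the arithmetic.

#include <stdio.h>

/* Average waiting time for single CPU bursts served in arrival order
   (first-come, first-served). Burst lengths (ms) are hypothetical. */
int main(void)
{
    int burst[] = { 24, 3, 3 };
    int n = sizeof(burst) / sizeof(burst[0]);

    int elapsed = 0, total_wait = 0;
    for (int i = 0; i < n; i++) {
        /* each process waits for all bursts that ran before it */
        total_wait += elapsed;
        elapsed += burst[i];
    }

    printf("average waiting time = %.2f ms\n", (double)total_wait / n);
    return 0;
}

Running the same bursts in a different order changes the individual waits and therefore the average, which is exactly what the comparisons of scheduling algorithms measure.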