soln9-14.txt
so we can seek over 195 tracks (about 4% of the disk) during
an average rotational latency.
Question 13.13:
Why is it important to balance file system I/O among the disks and
controllers on a system in a multitasking environment?
Answer:
A system can only perform at the speed of its slowest bottleneck. Disks
or disk controllers are frequently the bottleneck in modern systems as
their individual performance cannot keep up with that of the CPU and
system bus. By balancing I/O among disks and controllers, neither an
individual disk nor a controller is overwhelmed, so that bottleneck
is avoided.
Question 13.17:
The term fast wide SCSI-II denotes a SCSI bus that operates at a
data rate of 20 megabytes per second when it moves a packet of bytes
between the host and a device. Suppose that a fast wide SCSI-II disk
drive spins at 7200 RPM, has a sector size of 512 bytes, and holds
160 sectors per track.
a. Estimate the sustained transfer rate of this drive in megabytes
per second.
b. Suppose that the drive has 7000 cylinders, 20 tracks per cylinder,
a head switch time (from one platter to another) of 0.5 millisecond,
and an adjacent cylinder seek time of 2 milliseconds. Use this
additional information to give an accurate estimate of the sustained
transfer rate for a huge transfer.
c. Suppose that the average seek time for the drive is 8
milliseconds. Estimate the I/Os per second and the effective
transfer rate for a random-access workload that reads individual
sectors that are scattered across the disk.
d. Calculate the random-access I/Os per second and transfer rate for
I/O sizes of 4 kilobytes, 8 kilobytes, and 64 kilobytes.
e. If multiple requests are in the queue, a scheduling algorithm such
as SCAN should be able to reduce the average seek distance. Suppose
that a random-access workload is reading 8-kilobyte pages, the
average queue length is 10, and the scheduling algorithm reduces
the average seek time to 3 milliseconds. Now calculate the I/Os
per second and the effective transfer rate of the drive.
Answer:
a. The disk spins 120 times per second, and each spin transfers a track
of 80 KB. Thus, the sustained transfer rate can be approximated as
9600 KB/s.
b. Suppose that 100 cylinders is a huge transfer. The transfer rate
is total bytes divided by total time. Bytes: 100 cyl * 20 trk/cyl
* 80 KB/trk, i.e., 160,000 KB. Time: rotation time + track switch
time + cylinder switch time. Rotation time is 2000 trks / 120 trks
per sec, i.e., 16.667 s. Track switch time is 19 switches per cyl *
100 cyl * 0.5 ms, i.e., 950 ms. Cylinder switch time is 99 * 2 ms,
i.e., 198 ms. Thus, the total time is 16.667 + 0.950 + 0.198, i.e.,
17.815 s. (We are ignoring any initial seek and rotational latency,
which might add about 12 ms to the schedule, i.e., 0.1%.) Thus the
transfer rate is 8981.2 KB/s. The overhead of track and cylinder
switching is about 6.5%.
c. The time per transfer is 8 ms to seek + 4.167 ms average rotational
latency + 0.052 ms (calculated from 1 / (120 trk per second *
160 sector per trk)) to rotate one sector past the disk head during
reading. We calculate the transfers per second as 1/(0.012219), i.e.,
81.8. Since each transfer is 0.5 KB, the transfer rate is 40.9 KB/s.
d. We ignore track and cylinder crossings for simplicity. For reads
of size 4 KB, 8 KB, and 64 KB, the corresponding I/Os per second
are calculated from the seek, rotational latency, and rotational
transfer time as in the previous item, giving (respectively)
1/(0.0126), 1/(0.013), and 1/(0.019). Thus we get 79.4, 76.9,
and 52.6 transfers per second, respectively. Transfer rates are
obtained from 4, 8, and 64 times these I/O rates, giving 318 KB/s,
615 KB/s, and 3366 KB/s, respectively.
e. From 1/(3+4.167+0.83) we obtain 125 I/Os per second. From 8 KB per
I/O we obtain 1000 KB/s.
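The arithmetic in parts (a) through (e) can be checked with a short Python
sketch. This is only an illustration of the estimates above (the random_rate
helper is a name chosen here, not anything from the text); small differences
from the rounded figures, such as 53 rather than 52.6 I/Os per second for
64 KB reads, come only from rounding.

# Drive parameters from the exercise: 7200 RPM, 512-byte sectors,
# 160 sectors/track, 20 tracks/cylinder, 0.5 ms head switch,
# 2 ms adjacent-cylinder seek, 8 ms average seek.
rps = 7200 / 60                       # 120 revolutions per second
track_kb = 160 * 512 / 1024           # 80 KB per track
rot_latency = 0.5 / rps               # 4.167 ms average rotational latency
sector_time = 1 / (rps * 160)         # ~0.052 ms to read one sector

# (a) sustained rate: one full track per revolution
print("(a)", rps * track_kb, "KB/s")                         # 9600 KB/s

# (b) 100-cylinder transfer: rotation time plus head and cylinder switches
cyls = 100
tracks = cyls * 20
total_s = tracks / rps + 19 * cyls * 0.0005 + (cyls - 1) * 0.002
print("(b)", round(tracks * track_kb / total_s, 1), "KB/s")  # ~8981 KB/s

# (c)-(e) random-access reads of a given size
def random_rate(seek_s, io_kb):
    """Return (I/Os per second, KB/s) for random reads of io_kb kilobytes."""
    t = seek_s + rot_latency + (io_kb / 0.5) * sector_time
    return 1 / t, io_kb / t

for size in (0.5, 4, 8, 64):          # (c) single sectors, (d) larger reads
    ios, kbs = random_rate(0.008, size)
    print(f"{size:>4} KB: {ios:5.1f} I/O per s, {kbs:7.1f} KB/s")

print("(e)", random_rate(0.003, 8))   # SCAN cuts the seek to 3 ms: 125 I/Os per s, 1000 KB/s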
Question 13.18:
More than one disk drive can be attached to a SCSI bus. In particular, a
fast wide SCSI-II bus (see Exercise 13.17) can be connected to at most
15 disk drives. Recall that this bus has a bandwidth of 20 megabytes
per second. At any time, only one packet can be transferred on the bus
between some disk's internal cache and the host. However, a disk can
be moving its disk arm while some other disk is transferring a packet
on the bus. Also, a disk can be transferring data between its magnetic
platters and its internal cache while some other disk is transferring a
packet on the bus. Considering the transfer rates that you calculated
for the various workloads in Exercise 13.17, discuss how many disks
can be used effectively by one fast wide SCSI-II bus.
Answer:
For 8 KB random I/Os on a lightly loaded disk, where the random
access time is calculated to be about 13 ms (see Exercise 13.17),
the effective transfer rate is about 615 KB/s. In this case, 15 disks
would have an aggregate transfer rate of less than 10 MB/s, which should
not saturate the bus. For 64 KB random reads to a lightly loaded disk,
the transfer rate is about 3.4 MB/s, so roughly 6 disk drives would
saturate the bus. For 8 KB reads with a large enough queue to reduce
the average seek to 3 ms, the transfer rate is about 1 MB/s, so the
bus bandwidth may be adequate to accommodate 15 disks.
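As a rough cross-check (a sketch only, reusing the per-disk rates estimated
in Exercise 13.17 and treating the 20 MB/s bus as 20 * 1024 KB/s):

# Drives needed to saturate a 20 MB/s fast wide SCSI-II bus, per workload.
bus_kb_s = 20 * 1024
per_disk_kb_s = {
    "8 KB random": 615,                   # from 13.17(d)
    "64 KB random": 3366,                 # from 13.17(d)
    "8 KB with SCAN, queue of 10": 1000,  # from 13.17(e)
}
for workload, rate in per_disk_kb_s.items():
    print(f"{workload}: about {bus_kb_s / rate:.0f} drives saturate the bus")
# 8 KB random: ~33 drives, so 15 drives use well under half the bus.
# 64 KB random: ~6 drives are enough to saturate it.
# 8 KB with SCAN: ~20 drives, so 15 drives still leave some headroom.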
Question 13.25:
You can use simple estimates to compare the cost and performance
of a terabyte storage system made entirely from disks with one that
incorporates tertiary storage. Suppose that magnetic disks each hold
10 gigabytes, cost $1000, transfer 5 megabytes per second, and have
an average access latency of 15 milliseconds. Suppose that a tape
library costs $10 per gigabyte, transfers 10 megabytes per second,
and has an average access latency of 20 seconds. Compute the total
cost, the maximum total data rate, and the average waiting time for
a pure disk system. If you make any assumptions about the workload,
describe and justify them. Now, suppose that 5 percent of the data are
frequently used, so they must reside on disk, but the other 95 percent
are archived in the tape library. Further suppose that 95 percent of
the requests are handled by the disk system and the other 5 percent are
handled by the library. What are the total cost, the maximum total data
rate, and the average waiting time for this hierarchical storage system?
Answer:
First let's consider the pure disk system. One terabyte is 1024 GB. To
be correct, we need 103 disks at 10 GB each. But since this question
is about approximations, we will simplify the arithmetic by rounding
off the numbers. The pure disk system will have 100 drives. The cost
of the disk drives would be $100,000, plus about 20% for cables,
power supplies, and enclosures, i.e., around $120,000. The aggregate
data rate would be 100 * 5 MB/s, or 500 MB/s. The average waiting time
depends on the workload. Suppose that the requests are for transfers
of size 8 KB, and suppose that the requests are randomly distributed
over the disk drives. If the system is lightly loaded, a typical request
will arrive at an idle disk, so the response time will be 15 ms access
time plus about 2 ms transfer time. If the system is heavily loaded,
the delay will increase, roughly in proportion to the queue length.
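A minimal Python sketch of these pure-disk figures, assuming the 8 KB
request size chosen above:

# Pure disk system: 100 drives, 10 GB and $1000 each, 5 MB/s per drive.
drives = 100
cost = drives * 1000 * 1.2            # +20% for cables, power supplies, enclosures
aggregate_mb_s = drives * 5           # 500 MB/s
transfer_ms = 8 / (5 * 1024) * 1000   # 8 KB at 5 MB/s, about 1.6 ms ("about 2 ms")
light_load_ms = 15 + transfer_ms      # access latency plus transfer time
print(cost, aggregate_mb_s, round(light_load_ms, 1))   # 120000.0 500 16.6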
Now let's consider the hierarchical storage system. The total disk
space required is 5% of 1 TB, which is 50 GB. Consequently, we need
5 disks, so the cost of the disk storage is $5,000 (plus 20%, i.e.,
$6,000). The cost of the 950 GB tape library is $9500. Thus the
total storage cost is $15,500. The maximum total data rate depends on
the number of drives in the tape library. We suppose there is only 1
drive. Then the aggregate data rate is 5 * 5 MB/s from the disks plus
1 * 10 MB/s from the tape drive, i.e., 35 MB/s. For
a lightly loaded system, 95% of the requests will be satisfied by
the disks with a delay of about 17 ms. The other 5% of the requests
will be satisfied by the tape library, with a delay of slightly more
than 20 seconds. Thus the average delay will be (95 * 0.017 + 5 * 20) /
100, or about 1 second. Even with an empty request queue at the tape
library, the latency of the tape drive is responsible for almost all
of the system's response latency, because 1/20th of the workload is
sent to a device that has a 20 second latency. If the system is more
heavily loaded, the average delay will increase in proportion to the
length of the queue of requests waiting for service from the tape drive.
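The corresponding sketch for the hierarchical system, with the single tape
drive assumed above:

# Hierarchical system: 5 disks (50 GB) plus a 950 GB tape library.
disk_cost = 5 * 1000 * 1.2            # $6,000 including the 20% overhead
tape_cost = 950 * 10                  # $9,500 at $10 per gigabyte
aggregate_mb_s = 5 * 5 + 1 * 10       # 25 MB/s from disks + 10 MB/s from tape
avg_delay_s = (95 * 0.017 + 5 * 20) / 100
print(disk_cost + tape_cost, aggregate_mb_s, round(avg_delay_s, 2))  # 15500.0 35 1.02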
The hierarchical system is much cheaper. For the 95% of the requests
that are served by the disks, the performance is as good as a pure-disk
system. But the maximum data rate of the hierarchical system is much
worse than for the pure-disk system, as is the average response time.
Question 14.3:
Explain why a doubling of the speed of the systems on an Ethernet
segment may result in decreased network performance. What changes
could be made to ameliorate the problem?
Answer:
Faster systems may be able to send more packets in a shorter amount
of time. The network would then have more packets traveling on it,
resulting in more collisions, and therefore less throughput relative
to the number of packets being sent. More networks can be used,
with fewer systems per network, to reduce the number of collisions.
Question 14.4:
Under what circumstances is a token-ring network more effective than
an Ethernet network?
Answer:
A token ring is very effective under high sustained load, as no
collisions can occur and each slot may be used to carry a message,
providing high throughput. A token ring is less effective when the
load is light or sporadic, since token processing takes longer than
bus access, so any one packet can take longer to reach its destination.
Question 14.8:
The original HTTP protocol used TCP/IP as the underlying network
protocol. For each page, graphic, or applet, a separate TCP session was
constructed, used, and torn down. Because of the overhead of building
and destroying TCP/IP connections, there were performance problems
with this implementation method. Would using UDP rather than TCP have
been a good alternative? What other changes could be made to improve
HTTP performance?
Answer:
No single answer.
Question 14.9:
Of what use is an address resolution protocol? Why is the use of such
a protocol better than making each host read each packet to determine
to whom it is destined? Does a token-ring network need such a protocol?
Answer:
An ARP translates general-purpose addresses into hardware interface
numbers so the interface can know which packets are for it. Software
need not get involved. It is more efficient than passing each packet
to the higher layers. Yes, for the same reason.
Question 14.10:
What are the advantages and disadvantages of making the computer
network transparent to the user?
Answer: No answer.
Question 14.11:
What are two formidable problems that designers must solve to implement
a network-transparent system?
Answer:
No answer.
Question 14.14:
Is it always crucial to know that the message you have sent has arrived
at its destination safely? If your answer is yes, explain why. If
your answer is no, give appropriate examples.
Answer:
No answer.
Question 14.16:
Consider a distributed system with two sites, A and B. Consider whether
site A can distinguish among the following:
a. B goes down.
b. The link between A and B goes down.
c. B is extremely overloaded and its response time is 100 times
longer than normal.
What implications does your answer have for recovery in distributed
systems?
Answer:
No answer.