   EDORAM: Extended Data Out RAM. Like ordinary DRAM, but
   overlaps the output of one read with the start of the next
   access, improving read throughput.

   SRAM: Static RAM. Usually faster than the other types, but
   more power-hungry. Used for caches.

   SDRAM: Synchronous DRAM. Operates in sync with the memory
   bus clock. Increases throughput by using pipelining to hide
   setup times. This is the most common type of RAM in use
   today.

   WRAM: Windowed RAM. A dual-ported memory that allows
   simultaneous reading and writing. Only ever appeared in
   video cards.

   ECCRAM: Error Checking / Correcting RAM. Contains several
   extra check bits per word; the memory controller uses these
   to detect and correct all single-bit errors and to detect
   most multiple-bit errors. Typically only used in servers,
   where the cost of an error is very high.

See also Tom's Hardware Guide.

4.3 Parity Checking

Standard main memory is Dynamic RAM of one sort or another, in
which each bit is represented by the voltage on a capacitor. If
a cosmic ray happens to dump too much charge onto one of those
capacitors, you can end up with a bit error. This happens very
infrequently, so desktop systems don't need to worry too much
about it. To detect bit errors, some memory modules come with a
parity bit on each byte; the parity bit is a simple checksum of
the byte. When the computer reads the byte and finds that the
checksum is wrong, it declares a bus error and halts the
current program.

Not many computers use simple parity checking anymore; it
doesn't detect as many errors as full ECC memory, which has
several check bits per word and can even correct single-bit
errors.

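As a rough illustration of the idea, the sketch below computes
and checks an even parity bit for a byte in C. The function name
and the choice of even parity are illustrative; real memory does
this in dedicated hardware, not in software.

    #include <stdint.h>
    #include <stdio.h>

    /* Even parity: the parity bit is chosen so that the total
       number of 1 bits (data plus parity) is even, i.e. it is
       the XOR of all the data bits. */
    static uint8_t parity_bit(uint8_t byte)
    {
        uint8_t parity = 0;
        while (byte) {
            parity ^= (byte & 1);   /* flip parity for each 1 bit */
            byte >>= 1;
        }
        return parity;
    }

    int main(void)
    {
        uint8_t stored = 0x5A;           /* 0101 1010: four 1 bits */
        uint8_t p = parity_bit(stored);  /* even count, so p == 0  */

        /* Simulate a cosmic ray flipping bit 3 of the stored byte. */
        uint8_t corrupted = stored ^ 0x08;

        if (parity_bit(corrupted) != p)
            printf("parity mismatch: bit error detected\n");
        return 0;
    }

A single flipped bit changes the parity, so the mismatch is
caught; flip two bits and the parity matches again, which is why
plain parity misses many multiple-bit errors.
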
4.4 Leveling the Cache

In computing, the faster you want to access something, the more
expensive it is to manufacture. Have you ever wondered why we
don't see all hard drives replaced with RAM? It's cheaper to
produce the same amount of storage on a disk than it is in
memory. This price saving comes at a penalty: speed. A hard
drive is orders of magnitude slower to access than main memory.

Your CPU screams along at 400 million instructions a second
(obviously this depends on the CPU clock speed). On a good day,
that means it needs to read one instruction every 2.5
nanoseconds (2.5 x 10^-9 s). An average hard drive has an
access time of around 8 milliseconds (8 x 10^-3 s). If the CPU
had to read everything from the hard drive, it would sit idle
for about 3.2 million clock cycles for every instruction it
executes. Pretty obviously, that's a big waste of resources;
the sketch at the end of this section works through these
numbers.

In order to keep the CPU fed with instructions, we need memory
that is going to keep up. The problem now becomes one of how to
get data from the slow devices into the fast ones - that is the
job of the various levels of memory. On a typical computer you
have the following:

 ~ Level 1 - The CPU cache built into the microprocessor.
   Currently that is between 8K and 32K bytes. Quickest, as it
   operates at exactly the same speed as the CPU core.

 ~ Level 2 - External cache. The next level is much bigger, at
   between 256K and 2MB. This normally operates at about half
   the CPU core clock speed.

 ~ Level 3 - Main memory. Your standard RAM. Anywhere between
   16MB and a few gigabytes. Access to this is at around 66 or
   100MHz, probably a quarter of the CPU core clock speed.

 ~ Level 4 - Storage devices. Hard drives, CDs, floppies, etc.
   Access speed is around 10-50MHz depending on the device and
   connection (e.g. IDE is 16MHz, UW SCSI is 80MHz,
   FireWire/FibreChannel is around 300MHz).

There are some real oversimplifications here, but it should
give you a rough idea of how each level gets slower than the
one before it as we move away from the CPU. As another gross
simplification, a unit at each level costs roughly the same,
around US$250, so you can see how cost influences size and
speed.

Now, that's a long introduction to this point. As you can see,
memory sizes get smaller and smaller the closer we get to the
CPU. That means we fit less in, and the chances of having to
fetch something from the next level up increase. Since that
next level is slower, we pay a penalty each time.

The simplistic approach to tuning your memory and cache is to
put as much of the fastest memory that you can buy into the
machine. This is the reason the Xeon, SPARC and Alpha chips can
have as much as 2MB of on-board cache. For server applications
where there are huge amounts of number crunching to do, the
more you can cache, the quicker everything runs.

Buying big caches doesn't always gain you extra. For example,
if you are serving mail or only static web pages, you will gain
almost nothing compared to more reasonable "standard" amounts.
For many operations, information gets dumped straight from the
hard drive or main memory to the output device. For example,
sound clips or image textures may go straight to output devices
for processing. If you are dishing up files, you are much
better off trying to store them in main memory rather than on
the hard drive. Tuning a file server usually involves buying
lots of standard RAM rather than big caches.

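To make the penalty concrete, here is a small C sketch that
works through the arithmetic above. The 400MHz clock, the 8ms
disk access time and the cache and memory speeds are the
illustrative figures from this section, not measurements from
any real machine.

    #include <stdio.h>

    int main(void)
    {
        /* Illustrative figures from the text, not measurements. */
        const double cpu_hz  = 400e6;          /* 400MHz core clock       */
        const double cycle_s = 1.0 / cpu_hz;   /* 2.5ns per cycle         */

        const double l1_s   = cycle_s;         /* L1 runs at core speed   */
        const double l2_s   = 2.0 * cycle_s;   /* L2 at half core speed   */
        const double ram_s  = 1.0 / 100e6;     /* main memory at 100MHz   */
        const double disk_s = 8e-3;            /* 8ms average disk access */

        printf("one CPU cycle      : %.2f ns\n", cycle_s * 1e9);
        printf("L1 cache access    : %.0f cycle(s)\n", l1_s / cycle_s);
        printf("L2 cache access    : %.0f cycle(s)\n", l2_s / cycle_s);
        printf("main memory access : %.0f cycles\n", ram_s / cycle_s);
        printf("hard disk access   : %.0f cycles (about 3.2 million)\n",
               disk_s / cycle_s);
        return 0;
    }

Every time an access misses at one level and falls through to
the next, these are roughly the prices you pay.
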
4.5 Bus Speed

As you saw in the previous section, how quickly you can get at
a byte of information depends on the rate at which the device
holding it can deliver it. However, that device has to be
connected to the other devices, so the information must also
travel over some intermediary connection. Like a dam drained
through a straw, it doesn't matter how much information a
device can deliver, or how quickly, if the bus connecting the
two runs like a glacier.

One immediate and fairly easy way to get information moving
quicker is to start playing with the bus speeds. A normal
computer comes with the PCI, ISA, memory and a few other bus
settings at fairly conservative, safe values. On the other
hand, the devices plugged into these busses usually have some
tolerance for the bus speed moving around a bit. This gives you
room to tinker.

Playing with bus speeds is a bit of a trial and error approach.
Good quality components usually have quite a margin to play
with, but el cheapo components (like your $10 NE2000 clone
network card) won't tolerate it much.

Bus speed settings are usually only available in the BIOS
setup. To tune your bus, up the speed, reboot the machine and
see if things start locking up. Assuming the machine still
boots OK, run benchmarks over it. In some cases, increasing the
bus speed will _slow_down_ your machine because the devices
don't deal well with higher-than-specified settings. Don't just
set it as high as possible and assume everything will be
better. Find benchmarks for the particular subsystem that
you've played with and check!

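If you don't have a proper benchmark handy, even a crude one
beats guessing. The sketch below is a minimal memory-copy
bandwidth test in C; the 32MB buffer size, the repeat count and
the use of clock() are arbitrary, illustrative choices, and a
dedicated benchmark suite will give far more trustworthy
numbers. Run it before and after changing a memory or bus
setting and compare the results.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <time.h>

    /* Crude main-memory bandwidth estimate: repeatedly copy a
       buffer much larger than the CPU caches and time it. */
    int main(void)
    {
        const size_t size = 32 * 1024 * 1024;   /* 32MB buffer */
        const int reps = 10;

        char *src = malloc(size);
        char *dst = malloc(size);
        if (src == NULL || dst == NULL) {
            fprintf(stderr, "out of memory\n");
            return 1;
        }
        memset(src, 1, size);        /* touch the pages up front */

        clock_t start = clock();
        for (int i = 0; i < reps; i++)
            memcpy(dst, src, size);
        clock_t end = clock();

        double seconds = (double)(end - start) / CLOCKS_PER_SEC;
        double mb = (double)size * reps / (1024.0 * 1024.0);
        printf("copied %.0f MB in %.2f s (~%.0f MB/s)\n",
               mb, seconds, mb / seconds);

        free(src);
        free(dst);
        return 0;
    }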