http://www.cs.wisc.edu/~glew/generic-phd-research-interests.html
in most computer systems, primarily software, design: not so much in the sense of hard real time, but in the sense of "compile this program, giving me the best code you can produce in half an hour". <p> Although my chief focus has been in computer design, I have remained somewhat involved in the first two areas of research, by investigating computer enhancements to increase performance in these areas specifically, and also through my involvement in Intel's "Natural Datatypes Technical Committee". <p> In case these interests are insufficiently abstruse, I also admit to a continued interest in: (1) sociology and economics, specifically market imperfections, which seem to me to be closely related to issues such as the cost of computation; and also (2) the theory of imperfect, or incomplete, systems, both logical and algorithmic. <p> However, in order to make progress in life, one must focus; and I propose to focus on the areas described in section (4.2). <H3>(4.2) Specific Research Focus</H3> The area that I propose to do research in, which has been the primary focus of my last ten years, is increasing the performance of computers to facilitate the applications, both "thinking" and "user interface", that I describe above. Specifically, my interest is in increasing the computing power available to the average member of our society. <p> Although I have worked for a supercomputer manufacturer, in my research capacity I am not concerned with techniques that are applicable only to supercomputers. Although many supercomputer techniques are relevant to the mass-market microprocessors that most of us use, many are not. In fact, at the moment, mass-market microprocessor design is investigating many techniques that the old supercomputer manufacturers never considered. <p> More specifically, I propose to do research in uniprocessor, single-CPU performance.
This is not because I think that research in multiprocessors is inappropriate; just that I think that there is a lot of headroom for improvement in single-CPU performance. Furthermore, as may become obvious in the more detailed explanation of my research interests, the style of CPU microarchitecture which I propose to investigate may also serve as a bridge between uniprocessor and multiprocessor CPU design. <p> The basic problem in modern computer design is that the speed of the CPU is increasing faster than the speed of memory. Also, the economics of the memory (DRAM) market tend to prohibit the parallel, interleaved, memory subsystem techniques that have been used on traditional supercomputers to increase performance. Although such techniques will eventually be applied, the gap between CPU and memory performance is growing. <p> My basic approach is to investigate extremely aggressive, advanced, CPU microarchitectures, extending out-of-order, speculative, and dynamic execution far beyond what we are currently implementing. The hope is that, by creating as large a pool as possible of memory references, we can use the existing limited memory bandwidth in as efficient a manner as possible. The techniques that I currently have in mind include: <DL> <DT> Microarchitecture <DD> <p> The following list of techniques starts with the most aggressive, ending with techniques that are only a little bit beyond the present state of the art in industry. </p> <p> <DL> <DT> Skip ahead <DD> <p> Modern CPUs <I>execute</I> instructions in parallel and out-of-order, but they continue to <I>fetch</I> instructions in a largely sequential manner. Rather than "looking ahead" in the sequential instruction stream, I propose to "skip ahead", and fetch discontiguous instruction packets. For example, I propose microarchitectures that can determine when subsequent procedures are independent of each other, and which will fetch and execute those procedures in parallel.
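The skip-ahead idea can be sketched in software. The following toy Python model is only illustrative: the call records with explicit read/write sets are an invented representation (real hardware would have to discover this information dynamically). Each call is assigned to the earliest issue group consistent with its dependences on earlier calls, so calls in the same group could be fetched and executed in parallel.

```python
from collections import defaultdict

def independent(a, b):
    """Two calls are independent if neither writes anything the other touches
    (covers read-after-write, write-after-read, and write-after-write)."""
    return (a["writes"].isdisjoint(b["reads"] | b["writes"]) and
            b["writes"].isdisjoint(a["reads"] | a["writes"]))

def skip_ahead_groups(calls):
    """Assign each call the issue level just past its latest conflicting
    predecessor; calls sharing a level could issue in parallel."""
    level = []
    for i, call in enumerate(calls):
        deps = [level[j] for j in range(i) if not independent(call, calls[j])]
        level.append(max(deps) + 1 if deps else 0)
    groups = defaultdict(list)
    for lvl, call in zip(level, calls):
        groups[lvl].append(call["name"])
    return [groups[k] for k in sorted(groups)]

program = [
    {"name": "f", "reads": {"x"}, "writes": {"y"}},
    {"name": "g", "reads": {"y"}, "writes": {"z"}},   # consumes f's result
    {"name": "h", "reads": {"a"}, "writes": {"b"}},   # independent of both
]
print(skip_ahead_groups(program))  # → [['f', 'h'], ['g']]
```

Here h, though later in program order than g, is skipped ahead into f's issue group, while g must wait for f.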
</p> <p> Related research includes the general topic of "multithreading", as advocated by Tera Computers, and also the "multiscalar" research being carried out by Guri Sohi at Wisconsin. However, while these other researchers posit an explicitly parallel instruction set (which I will also consider, see below), I am optimistic that these techniques can be applied equally well to existing instruction sets, seeking parallelism implicit in an existing "single-threaded" program. </p> <p> I.e., rather than the micro-scale parallelism of present CPU designs, or the macro-scale parallelism of true multiprocessors, I propose to investigate meso-scale parallelism. I have some hope that such designs might allow a more gradual evolution into true multiprocessing, hence overcoming the market barriers that have hindered multiprocessing in real life. </p> <p> I therefore envisage computer systems in which micro-scale parallelism is taken advantage of by "dynamic execution" techniques such as I and my coworkers employed in the P6 (Pentium Pro) processor; meso-scale parallelism is taken advantage of by skip-ahead mechanisms as I describe above; and macro-scale parallelism is taken advantage of by explicit multiprocessors. </p> <DT> Convergent Code <DD> <p> Also known as "Minimal Control Dependencies", from Tjaden and Flynn's seminal paper in the 1970s, this is based on the observation that much of the time, in a modern processor with speculative branch execution, work after the close of the "ENDIF" part of an "IF" is independent of the path through the "IF". The goal is to avoid needlessly throwing useful work away. </p> <DT> Incremental and Selective Speculation Recovery <DD> <p> In fact, the theme of avoiding needlessly throwing correct work away can be extended to forms of speculation other than branch prediction, such as data speculation.
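The selective-recovery idea above can be sketched as follows. This toy Python model (the instruction format is invented for illustration) takes the instructions after the join point of a mispredicted IF, plus the set of registers that the two paths write differently, and marks for re-execution only the instructions that transitively depend on those registers; everything else keeps its result.

```python
def must_squash(post_join, divergent_regs):
    """Return names of post-join instructions that must be re-executed after
    a misprediction; instructions with no (transitive) dependence on the
    divergent registers survive with their results intact."""
    tainted = set(divergent_regs)
    squash = []
    for ins in post_join:  # walk in program order
        if tainted & set(ins["src"]):
            squash.append(ins["name"])   # consumed a wrong-path value
            tainted.add(ins["dst"])      # its own result is now suspect
        elif ins["dst"] in tainted:
            tainted.discard(ins["dst"])  # overwritten with a clean value

    return squash

post_join = [
    {"name": "i1", "src": ["r1"], "dst": "r4"},        # r1 written inside the IF
    {"name": "i2", "src": ["r2"], "dst": "r5"},        # untouched by the IF
    {"name": "i3", "src": ["r4", "r5"], "dst": "r6"},  # depends on i1's result
]
print(must_squash(post_join, {"r1"}))  # → ['i1', 'i3']; i2's work is preserved
```

Only i1 and its dependent i3 are re-executed; i2, though fetched speculatively after the branch, did correct work that a whole-pipeline flush would needlessly discard.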
A joyful synergy is found because the general mechanism for solving this problem also seems to solve the problems of convergent code and skip-ahead processing. </p> <DT> Eager Execution <DD> <p> Finally, I will consider Eager Execution - executing both sides of a branch. However, since this is something that is already on the verge of being implemented in industry, and since research to date shows that its benefits are mixed (introducing extra memory traffic due to executing unneeded paths is undesirable on the memory-starved processor model I have in mind), I consider that Eager Execution is ancillary to the techniques I have already mentioned. However, the same mechanisms that support the previous techniques also support Eager Execution, so it would be foolish not to investigate it. </p> </DL> </p> <DT> Instruction Set Design <DD> <p> Finally, some of my betters at Intel are encouraging me to investigate possible new ISA paradigms better matched to modern microarchitecture. Just as RISC instruction sets were well matched to the simple pipelined processors of the early 1980s, and VLIWs are well matched to the processor designs of the early 1990s, so it is possible that a new style of instruction set may be better matched to the out-of-order, speculative, processor designs of the present, or perhaps to the memory-starved processors of the future. </p> <p>I always keep the possibility of new instruction set principles at the back of my mind, but I also take care to resist succumbing to temptation too easily. I have often found, however, that designing new instruction set <I>features</I> is a useful technique, since once new features have been designed you have only to devise an equivalent way of implicitly predicting or performing the same function in hardware, without the new instruction set features.
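As one concrete instance of this feature-then-implicit-equivalent method (an example of my own choosing, not one from the text): suppose the new ISA feature were a static "branch likely" hint bit set by the compiler. A per-branch two-bit saturating counter learns the same bias dynamically in hardware, performing the equivalent function with no instruction set change at all.

```python
class TwoBitPredictor:
    """Classic two-bit saturating counter: the implicit hardware
    equivalent of a static branch-bias hint in the instruction set."""

    def __init__(self):
        self.state = 1  # states 0,1 predict not-taken; 2,3 predict taken

    def predict(self):
        return self.state >= 2

    def update(self, taken):
        # Saturate at the ends so one anomalous outcome can't flip
        # the prediction for a strongly biased branch.
        self.state = min(3, self.state + 1) if taken else max(0, self.state - 1)

p = TwoBitPredictor()
outcomes = [True, True, True, False, True, True]  # a mostly-taken branch
hits = 0
for taken in outcomes:
    hits += p.predict() == taken
    p.update(taken)
print(hits, "of", len(outcomes), "predicted correctly")  # → 4 of 6
```

After a short warm-up the counter tracks the branch's dominant direction, just as the hypothetical hint bit would have declared it statically, which is exactly the "implicitly predicting the same function in hardware" move described above.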
</p> </DL> </p> <H2>(6) Conclusion</H2> In conclusion, therefore, my interests are broad, but I plan to focus on a single area: applied, experimental research in computer architecture. I believe this area to be both relevant to industry and challenging enough to warrant a Ph.D. <hr> <p> $Header: /u/g/l/glew/public/html/RCS/generic-PhD-research-interests.html,v 1.1 1996/09/12 23:37:44 glew Exp $ </BODY></HTML>