
http://www.cs.wisc.edu/~solomon/projects.html

Security Audit:

A properly managed computer system should be secure from illegal entry. Normal users should not be able to obtain privileges beyond what they are given. Most systems in everyday use have security holes. Normally, it is considered a violation of standards of ethical behavior to take advantage of such holes. However, a "tiger team" is a team specifically authorized to find as many security holes as possible and report them to responsible management.

Select a facility in the Computer Sciences Department or elsewhere and find, demonstrate, and document as many security problems as possible. You may attack the system either from the position of an "ordinary" user, with an account but no special privileges, or from the point of view of an outsider, someone who is not supposed to be able to access the facility at all. You should find as many security problems as possible. These problems include system flaws, improper management, and careless users. The results of this study should be a report of the problems, with suggestions for fixes in the system, in the system design, and in management procedures. You should not explore "denial of service" attacks such as jamming networks or crashing systems. Warning: a project of this kind must be approved in advance by the person responsible for the facility you are proposing to attack!

File Servers for Workstations:

Workstations are available with and without local disks. Bulk storage is provided by a combination of remote file servers, local disk, and local RAM memory. Servers provide remote devices, remote files, or other abstractions. A variety of schemes for providing a "seamless" global file service have been suggested, including remote disk simulation, remote file access (e.g., NFS from Sun Microsystems), whole-file caching on local disk as in the Carnegie Mellon ITC system (the Andrew file system), and use of large local RAM for file caching, as in the Sprite system from Berkeley. The Locus system should also be studied for ideas about transparent global file naming.

Design a scheme for file access for a network of workstations. You should specify the functionality provided by the server and the responsibility of the client workstation. You will want to discuss reliability, fault tolerance, protection, and performance. Compare your design to the designs published in the literature. Evaluate the design by performing a simulation. See the "Spritely NFS" paper by Srinivasan and Mogul and the award-winning paper by Shirriff and Ousterhout from the Winter 1992 USENIX (see me for a copy) for examples of similar studies. See also related papers in SOSP proceedings over the last several years.
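The simulation this project asks for can start very small. The sketch below (Python; the file sizes, cache size, and skewed access pattern are invented for illustration and do not come from the papers cited above) compares the client-cache hit ratio of block-granularity caching against whole-file caching under an LRU policy. A real study would replace the synthetic trace with measured file-reference traces and also model writes, cache consistency, and server load.

    # Minimal sketch of a trace-driven client-cache simulation.  It compares a
    # block-granularity cache (NFS-like) with whole-file caching (Andrew-like)
    # on a synthetic reference stream.  All parameters are assumptions.
    import random
    from collections import OrderedDict

    random.seed(0)

    NUM_FILES    = 200
    FILE_BLOCKS  = {f: random.randint(1, 64) for f in range(NUM_FILES)}
    CACHE_BLOCKS = 512                         # client cache capacity, in blocks

    def make_trace(length=20000):
        """Each reference is (file, block); low-numbered files are referenced more often."""
        trace = []
        for _ in range(length):
            f = min(int(random.expovariate(1 / 30)), NUM_FILES - 1)
            trace.append((f, random.randrange(FILE_BLOCKS[f])))
        return trace

    def simulate(trace, whole_file):
        """LRU cache of blocks; whole_file=True fetches every block of a file on a miss."""
        cache = OrderedDict()                  # key -> None, ordered by recency
        hits = misses = 0
        for f, b in trace:
            key = (f, b)
            if key in cache:
                hits += 1
                cache.move_to_end(key)
                continue
            misses += 1
            wanted = [(f, blk) for blk in range(FILE_BLOCKS[f])] if whole_file else [key]
            for k in wanted:
                cache[k] = None
                cache.move_to_end(k)
                while len(cache) > CACHE_BLOCKS:
                    cache.popitem(last=False)  # evict the least-recently-used block
        return hits / (hits + misses)

    trace = make_trace()
    print("block caching hit ratio     :", round(simulate(trace, whole_file=False), 3))
    print("whole-file caching hit ratio:", round(simulate(trace, whole_file=True), 3))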
Load Balancing:

Many systems, such as LOCUS, Sprite, and Condor, allow you to start processes on any machine, move processes during execution, and access files (transparently) across machine boundaries. Automatic placement of processes and other system resources could substantially improve overall system performance. There are several interesting issues in load balancing, including:

1. Collection of data for load balancing: To make a load-balancing decision, you might need data from each machine in the network. There are many forms that this data can take, and many designs for communicating it among machines. You must decide what data is needed, where the data must come from, and how it must be communicated. This problem becomes interesting in the scope of a very large network of computers (thousands of machines). You do not want to consume huge amounts of system resources making these decisions, and you do not want to make decisions based on extremely old data.

2. Policies for load-balancing decisions: How do you decide when to move a process? On what do you base your decision? How frequently can you move processes (what does thrashing look like in this environment)? What about groups of processes that are cooperating?

3. Metrics for load evaluation: What load metrics do you use in evaluating an individual machine's capacity? Are they related to processing? Storage? Communication? How do we (can we) measure them? Are they accurate reflections of a machine's performance? How can you demonstrate this?

4. File migration: We can move files as well as processes. When do you move files vs. processes? Is only one needed? Which is better? How can you tell?

You are warned that it is quite easy to suggest many plausible schemes for load balancing but not so easy to evaluate them. Therefore, a major component of any project in this area will be evaluation through simulation; a minimal starting point for such a simulation is sketched below.
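The sketch below (Python; the machine count, arrival rate, service times, and staleness interval are all assumptions, not measurements) compares random placement with sending each job to the apparently least-loaded machine when the scheduler's view of the load is refreshed only every few seconds. Even this toy model exposes the stale-information problem raised in point 1, since every job dispatched between refreshes sees the same snapshot.

    # Minimal sketch of a load-balancing simulation; all parameters are made up.
    import random

    random.seed(1)

    NUM_MACHINES = 50
    NUM_JOBS     = 5000
    MEAN_ARRIVAL = 0.03          # mean seconds between job arrivals (assumed)
    MEAN_SERVICE = 1.0           # mean seconds of CPU per job (assumed)
    STALENESS    = 5.0           # how old the load data may be when a decision is made

    def run(policy):
        free_at  = [0.0] * NUM_MACHINES      # when each machine next becomes idle
        snapshot = list(free_at)             # possibly stale copy seen by the scheduler
        last_poll = 0.0
        t = 0.0
        total_response = 0.0
        for _ in range(NUM_JOBS):
            t += random.expovariate(1 / MEAN_ARRIVAL)
            if t - last_poll >= STALENESS:   # refresh the load data only periodically
                snapshot, last_poll = list(free_at), t
            if policy == "random":
                m = random.randrange(NUM_MACHINES)
            else:                            # "least-loaded" according to the snapshot
                m = min(range(NUM_MACHINES), key=lambda i: snapshot[i])
            start  = max(t, free_at[m])      # each machine runs its jobs FIFO
            finish = start + random.expovariate(1 / MEAN_SERVICE)
            free_at[m] = finish
            total_response += finish - t
        return total_response / NUM_JOBS

    for policy in ("random", "least-loaded"):
        print(f"{policy:>12}: mean response time {run(policy):.2f} s")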
Security and Authentication:

The Popek and Kline paper on the reading list discusses the use of encryption for authentication in distributed systems. It considers both conventional and public-key schemes. One popular implementation based on these ideas is the Kerberos system from MIT. Kerberos has been used to provide secure remote login, file transfer, and remote file access.

Use Kerberos or an ad hoc package to enhance the security of some existing system.

Random Software Testing:

This suggestion is from Prof. Bart Miller. This past Fall, in CS736, I had some students work on more of that random software testing. The result is a pretty nice paper (ftp://grilled.cs.wisc.edu/technical_papers/fuzz-revisited.ps.Z) that we just submitted to CACM. One interesting result was that the utilities from GNU and Linux were substantially more crash-resistant than the ones from the seven commercial systems that we tested (SunOS, Solaris, AIX, Ultrix, HP-UX, Irix, NEXTSTEP). There are a bunch more things that can be done in this work:

1. Test more of the new BSD UNIX systems, such as NetBSD, FreeBSD, and BSDi.
2. Test applications on Windows and Macs.
3. Test more of the system library interfaces.

I'd be happy to help supervise any projects in this area.

Navigating the World-Wide Web:

The World-Wide Web is growing at an unbelievable pace. There is a tremendous amount of information available, but finding what you want can be next to impossible. Quite a few on-line search engines have been created to aid in resource location on the web. Check the Directory pull-down menu of Netscape (http://home.netscape.com/home/internet-search.html) for some examples. (Of particular note is WebCrawler, written by Wisconsin alumnus Brian Pinkerton, who recently sold it to America Online, reputedly for over $1 million!)

There are lots of ways of tackling this problem, but none discovered thus far is entirely satisfactory. Among the variables in the design space are:

Server support: Does the provider of information cooperate in advertising it, or is the search entirely client-driven?

Caching: Does each search start from scratch, or is some sort of "database" used to guide the search? In the latter case, where is the database kept (at the client, the server, or somewhere in between)? How is it created? How is stale information detected and updated? How is the cache purged of valid but seldom-referenced information?

Search strategy: How does the search determine which information will be of interest to the user? How does it determine which links to traverse, and in what order? How does it know when it has gone far enough?

Topology of the Web:

A project closely related to the previous suggestion is to collect and analyze information about the current structure of the web. The web can be viewed as a vast directed graph. Gather as much information as you can about this graph and analyze it. What is the average number of links out of a page? What is the average size of a page? What is the average distance between the pages at the two ends of a link (where "distance" is the number of links along a shortest path)? More generally, what are the distributions of these statistics? How do these things vary over time?

Information from this project would be of great interest to people proposing algorithms for traversing the web. This project has two distinct parts, both potentially quite challenging: gathering the data and analyzing it.
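The analysis half of this project can be prototyped before any large crawl exists. Below is a minimal sketch (Python, over an invented five-page link graph) that computes the average out-degree and, for each link, the shortest alternate-path distance between its endpoints. "Distance between the two ends of a link" is read here as the shortest path with that link removed (with the link present the answer is always 1); this is only one of several reasonable interpretations.

    # Minimal sketch of the graph analysis: average out-degree and, for each
    # link u -> v, the shortest path from u to v once that link is removed.
    # The graph below is invented purely for illustration.
    from collections import deque

    links = {                       # page -> pages it links to
        "A": ["B", "C"],
        "B": ["C", "D"],
        "C": ["A", "D"],
        "D": ["E"],
        "E": [],
    }

    def bfs_distances(graph, src):
        """Shortest link-distance from src to every reachable page."""
        dist = {src: 0}
        queue = deque([src])
        while queue:
            u = queue.popleft()
            for v in graph.get(u, []):
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        return dist

    out_degrees = [len(targets) for targets in links.values()]
    print("average out-degree:", sum(out_degrees) / len(out_degrees))

    link_dists = []
    for u, targets in links.items():
        for v in targets:
            # Remove just the edge u -> v, then ask how far apart u and v remain.
            pruned = {k: [w for w in ws if not (k == u and w == v)]
                      for k, ws in links.items()}
            d = bfs_distances(pruned, u).get(v)
            if d is not None:
                link_dists.append(d)
    print("mean alternate-path distance (links with an alternate path only):",
          round(sum(link_dists) / len(link_dists), 2) if link_dists else "n/a")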
Self-Perpetuating Programs:

The "Worm" program propagated itself across many machines, automatically repairing parts that were damaged or destroyed. A worm is extremely difficult to kill. You should design a strategy for building worms on one of our systems. You will also need to determine how you might (constructively) use a worm program, i.e., what applications are there for this type of program?

This project could involve a design, test implementation(s), and a study and evaluation of the implementation. Is there a generic structure such that you can take a large class of algorithms and automatically make them into worm-type programs?

A General-Purpose Transaction Package:

The concept of a transaction (a sequence of actions that are executed atomically and either commit, i.e., are reliably preserved forever, or abort, i.e., are completely undone) was developed in the context of database systems, but transactions are useful in many areas outside of traditional database applications. Design and implement a portable transaction package. Look at Camelot, developed in the context of Mach, and libtp, built by Margo Seltzer and described in a recent USENIX proceedings.

Distributed Shared Memory:

There has been a great deal of interest recently in an architecture called "distributed shared memory". The basic idea is to simulate a shared-memory multiprocessor programming model on top of a distributed system (a local-area network) by altering the page-fault handler of a traditional operating system to fetch pages over the network rather than from the local disk. The 1991 SOSP proceedings contain a paper on an operating system called Munin, which explores some of the tradeoffs in page placement and replacement policies needed to support a variety of applications efficiently. Explore these issues by constructing a simulation. See also the Wisconsin Wind Tunnel (WWT) project for related research.

Performance Study:

Monitor one or more of the Computer Science Department's machines or networks to determine its characteristics. Where are the bottlenecks? What sorts of programs are producing most of the load?
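As a starting point for this kind of monitoring, here is a minimal sketch in Python. It assumes a Linux-style /proc filesystem and an arbitrary five-second sampling interval (both are assumptions of the sketch, not features of the systems this page was written about), and it only samples overall CPU utilization and the 1-minute load average; a real study would also watch disk, network, and memory traffic and attribute the load to individual programs.

    # Minimal sketch of a monitoring loop for the performance-study project.
    import os
    import time

    def cpu_times():
        """Return (busy, total) jiffies from the first line of /proc/stat."""
        with open("/proc/stat") as f:
            fields = [int(x) for x in f.readline().split()[1:]]
        idle = fields[3] + (fields[4] if len(fields) > 4 else 0)   # idle + iowait
        return sum(fields) - idle, sum(fields)

    prev_busy, prev_total = cpu_times()
    for _ in range(10):                      # ten samples; interval is arbitrary
        time.sleep(5)
        busy, total = cpu_times()
        util = (busy - prev_busy) / max(1, total - prev_total)
        load1, _, _ = os.getloadavg()        # 1-minute run-queue length
        print(f"cpu busy {util:6.1%}   1-minute load average {load1:5.2f}")
        prev_busy, prev_total = busy, total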
