Postel & Anderson                                              [Page 16]

RFC 1588                White Pages Report                 February 1994

4.C.  NETFIND

Right now, the white pages service with the most coverage in the Internet is Mike Schwartz's Netfind. Netfind works in two stages: (1) find out where to ask, and (2) start asking. The first stage is based on a database built from netnews articles, UUCP maps, NIC WHOIS databases, and DNS traversals, which maps organizations and localities to domain names. The second stage consists of finger queries, Whois queries, SMTP EXPNs and VRFYs, and DNS lookups.

The key feature of Netfind is that it is proactive. It doesn't require that the system administrator bring up a new server, populate it with all kinds of information, keep the information in sync, worry about updates, etc. It just works.

A suggestion was made that Netfind could be used as a way to populate the X.500 directory. A tool might do a series of Netfind queries, making the corresponding X.500 entries as it progresses. Essentially, X.500 entries would be "discovered" as people look for them using Netfind. Others do not believe this is feasible. Another, perhaps less interesting, merger of Netfind and X.500 is to have Netfind add X.500 as one of the places it looks to find organizations (and people).

A search can lead you to where a person has an account (e.g., law.xxx.edu) only to find a problem with the DNS services for that domain, or that the finger service is unavailable, or that the machines are not running Unix (there are lots of VMS machines and IBM mainframes still out there). In addition, there are security gateways. The trends in computing are towards powerful portables and mobile computing, and hence Netfind's approach may not work.

However, Netfind proves to be an excellent yellow-pages service for domain information in DNS servers: given a set of keywords it lists a set of possible domain names. Suppose we store a pointer in DNS to a white-pages server for a domain.
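Netfind's two-stage operation described at the top of this section can be sketched roughly as follows. This is a toy illustration, not Netfind's actual code: the seed database here is a small in-memory table (the real one is built from netnews articles, UUCP maps, WHOIS data, and DNS traversals), and the probe is a pluggable stand-in for the stage-two finger, Whois, SMTP, and DNS queries.

```python
# Stage 1 data: keywords (organizations, localities) -> candidate domains.
# These entries are invented for illustration.
SEED_DB = {
    "boulder": ["cs.colorado.edu"],
    "colorado": ["cs.colorado.edu", "du.edu"],
    "isi": ["isi.edu"],
}

def stage1_find_domains(keywords):
    """Netfind stage 1: map search keywords to candidate domains."""
    domains = []
    for kw in keywords:
        for d in SEED_DB.get(kw.lower(), []):
            if d not in domains:
                domains.append(d)
    return domains

def stage2_probe(user, domains, probe=None):
    """Netfind stage 2: probe each candidate domain for the user.

    The real probes are finger queries, Whois queries, SMTP EXPN/VRFY,
    and DNS lookups; here 'probe' is a caller-supplied stand-in that
    returns a result string or None.
    """
    probe = probe or (lambda user, domain: None)
    hits = {}
    for d in domains:
        result = probe(user, d)
        if result is not None:
            hits[d] = result
    return hits
```

The point of the split is visible even in the toy: stage 1 narrows the search to a handful of domains before stage 2 spends any network traffic on them.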
We can use Netfind to come up with a list of servers to search, query these servers, then combine the responses. However, we need a formal method of gathering white-pages data; informal methods will not work and may even run into legal problems.

The user search phase of Netfind is a short-term solution to providing an Internet white pages. For the longer term, the applicability of the site discovery part of Netfind is more relevant, and more work has been put into that part of the system over the past 2 years than into the user search phase. Given Netfind's "installed customer base" (25k queries per day, users in 4875 domains in 54 countries), one approach that might make sense is to use Netfind as a migration path to a better directory, and gradually phase Netfind's user search scheme out of existence.

The idea of putting a record in the DNS to point to the directory service to search at a site is a good start. One idea for further development is to have the DNS record point to a "customization" server that a site can install to tailor the way Netfind (or whatever replaces Netfind) searches their site. This would provide sites a choice of degrees of effort and levels of service. The least common denominator is what Netfind presently does: DNS/SMTP/finger. A site could upgrade by installing a customization server that points to the best hosts to finger, or that says "we don't want Netfind to search here" (if people are sufficiently concerned about the legal/privacy issues, the default could be changed so that searches must be explicitly enabled). The next step up is to use the customization server as a gateway to a local Whois, CSO, X.500, or home-grown white pages server. In the long run, if X.500 (or Whois++, etc.) really catches on, it could subsume the site indexing part of Netfind and use the above approach as an evolution path to full X.500 deployment.
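The DNS pointer and customization ideas above can be sketched as follows. The "wp-server="/"wp-search=" record syntax and the handler names are invented for illustration; no such record or registry of schemes was standardized at the time of this report.

```python
# Hypothetical handlers, one per directory access protocol a site's
# pointer record might name.
HANDLERS = {
    "whois": "query_whois",   # plain Whois lookup
    "x500": "query_x500",     # X.500 directory access
    "ph": "query_ph",         # CSO/PH nameserver
}

def parse_wp_record(txt_record):
    """Parse a site's (hypothetical) white-pages pointer record."""
    policy = {"server": None, "search_allowed": True}
    for field in txt_record.split():
        key, _, value = field.partition("=")
        if key == "wp-server":
            policy["server"] = value           # where to direct queries
        elif key == "wp-search" and value == "deny":
            policy["search_allowed"] = False   # site opted out of searches
    return policy

def plan_search(txt_record):
    """Decide how (or whether) to search a site, from its record."""
    policy = parse_wp_record(txt_record)
    if not policy["search_allowed"]:
        return None                            # respect the opt-out
    if policy["server"] is None:
        return "default_dns_smtp_finger"       # least common denominator
    scheme = policy["server"].partition("://")[0]
    return HANDLERS.get(scheme, "default_dns_smtp_finger")
```

The design point is the graded levels of service the text describes: no record at all yields today's DNS/SMTP/finger behavior, a record can redirect to a better server, and an explicit deny removes the site from searches entirely.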
However, other approaches may be more productive. One key to Netfind's success has been not relying on organizations to do anything to support Netfind; the customization server breaks this model.

Netfind is very useful. Users don't have to do anything to wherever they store their people data to have it "included" in Netfind. But, just like archie, it would be more useful if there were a more common structure to the information it gives you, and therefore to the information contained in the databases it accesses. It is this common structure that we should be encouraging people to move toward.

As a result of suggestions made at the November meeting, Netfind has been extended to make use of URL information stored in DNS records. Based on this mechanism, Netfind can now interoperate with X.500, WHOIS, and PH, and can also allow sites to tune which hosts Netfind uses for SMTP or Finger, or restrict Netfind from searching their site entirely.

4.D.  ARCHIE

Archie is a success because it is a directory of files that are accessible over the network. Every FTP site makes a "conscious" decision to make the files available for anonymous FTP over the network. The mechanism that archie uses to gather the data is the same as that used to transfer the files. Thus, the success rate is near 100%. In a similar vein, if Internet sites make a "conscious" decision to make white-pages data available over the network, it is possible to link these servers to create a world-wide directory, such as X.500, or to build an index that helps to isolate the servers to be searched, as in Whois++.

Users don't have to do anything to their FTP archives to have them included in archie. But everybody recognizes that it could be more useful if only there were some more common structure to the information, and to the information contained in the archives. Archie came after the anonymous FTP sites were in widespread use.
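The archie-style indexing idea, where published site data is gathered into an index that isolates which servers a search must contact (the Whois++ approach), can be sketched as follows. The data layout is hypothetical.

```python
# Assumed input: each site publishes a summary of the names its
# white-pages server knows about, just as FTP sites publish listings
# that archie gathers.

def build_index(site_summaries):
    """Map each keyword to the set of sites whose data mention it."""
    index = {}
    for site, keywords in site_summaries.items():
        for kw in keywords:
            index.setdefault(kw.lower(), set()).add(site)
    return index

def servers_for_query(index, keyword):
    """Return only the servers worth searching for this keyword."""
    return sorted(index.get(keyword.lower(), set()))
```

As with archie, the gathering mechanism is the same channel the sites already use to serve the data, so coverage tracks what sites have consciously chosen to publish.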
Unfortunately for white-pages, we are building tools, but there is no data.

4.E.  FINGER

The Finger program allows one to get, from a host running the server, either information about an individual with an account or a list of currently logged-in users. It can be used to check a suggestion that a particular individual has an account on a particular host. This does not provide an efficient method to search for an individual.

4.F.  GOPHER

A "gateway" between Gopher and X.500 has been created so that one can examine X.500 data from a Gopher client. Similar "gateways" are needed for other white pages systems.

4.G.  WWW

One extension to WWW would be an attribute type for the WWW URI/URL, with the possibility for any client to request from the X.500 server (1) the locator (so the client decides whether or not to access the actual data), or (2) for clients not capable of accessing this data, the data itself (packed) in the ASN.1 encoded result. This would give access through X.500 to potentially any piece of information available on the network, and in the white pages case to photos or voice messages for persons.

This solution is preferable to one consisting of storing this multimedia information directly in the directory, because it allows WWW-capable DUIs to access directly any piece of data no matter how large. This work on URIs is not WWW-specific.

5.  ISSUES

5.A.  DATA PROTECTION

Outside of the U.S., nearly all developed countries have rather strict data protection acts (mostly to ensure privacy) that govern any database of personal data. It is mandatory for the people in charge of such white pages databases to have full control over the information that can be stored and retrieved in such a database, and to provide access controls over the information that is made available. If modification is allowed, then authentication is required.
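The access-control rule just stated, that some attributes are withheld from anonymous readers and that any modification demands an authenticated identity, can be sketched as follows. The entry layout and the choice of public attributes are assumptions for illustration, not any particular directory's schema.

```python
PUBLIC_ATTRS = {"name", "email"}   # released to anonymous readers (assumed)

def read_attr(entry, attr, authenticated=False):
    """Return an attribute value, withholding non-public data."""
    if attr not in PUBLIC_ATTRS and not authenticated:
        raise PermissionError("attribute %r is not public" % attr)
    return entry[attr]

def modify_attr(entry, attr, value, authenticated=False):
    """Modify an attribute; per the report, this requires authentication."""
    if not authenticated:
        raise PermissionError("modification requires authentication")
    entry[attr] = value
```

The same checks are what make trawling hard to prevent without authentication: a server cannot distinguish applications or users it has not identified.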
The database manager must be able to prevent users from making disallowed information available. When we are dealing with personal records, the issues are a little more involved than exporting files. We cannot allow trawling of data, and we need access controls so that several applications can use the directory; hence we need authentication. X.500 might have developed faster if security issues were not part of the implementation. There is tension between quick lightweight implementations and the attempt to operate in a larger environment with business issues incorporated.

The initial belief was that data is owned by the people who put the data into the system; however, most data protection laws make the organizations holding the data responsible for the quality of the data of their individuals. Experience also shows that the people most affected by inaccurate data are the people who are trying to access the data. These problems apply to all technologies.

5.B.  STANDARDS

Several types of standards are needed: (1) standards for interoperation between different white pages systems (e.g., X.500 and Whois++), (2) standards for naming conventions, and (3) standards within the structured data of each system (what fields or attributes are required and optional, and what are their data types).

The standards for interoperation may be developed from the work now in progress on URLs, with some additional protocol developed to govern the types of messages and message sequences. Both the naming of the systems and the naming of individuals would benefit from consistent naming conventions. The use of the NADF naming scheme should be considered. When structured data is exchanged, standards are needed for the data types and the structural organization. In X.500, much effort has gone into the definition of various structures or schemas, and yet few standard schemas have emerged.
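Point (3) above, a schema naming required and optional attributes with their data types, can be sketched as a check applied before an entry is exchanged. The attribute names in PERSON_SCHEMA are assumptions for illustration, not a standard schema (as the report notes, few standard schemas have emerged).

```python
# Assumed toy schema: which attributes must appear, which may appear,
# and the data type of each.
PERSON_SCHEMA = {
    "required": {"commonName": str, "mail": str},
    "optional": {"telephoneNumber": str, "photo": bytes},
}

def validate_entry(entry, schema=PERSON_SCHEMA):
    """Return a list of schema violations for a directory entry."""
    errors = []
    for attr, typ in schema["required"].items():
        if attr not in entry:
            errors.append("missing required attribute: " + attr)
        elif not isinstance(entry[attr], typ):
            errors.append("bad type for " + attr)
    known = set(schema["required"]) | set(schema["optional"])
    for attr, value in entry.items():
        if attr not in known:
            errors.append("unknown attribute: " + attr)
        elif attr in schema["optional"] and not isinstance(value, schema["optional"][attr]):
            errors.append("bad type for " + attr)
    return errors
```

Interoperation between two systems amounts to both sides agreeing on such a schema; without it, each side can parse the other's records but not interpret them.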
There is a general consensus that a "cookbook" for administrators would make X.500 implementation easier and more attractive. Such guides are essential for getting X.500 into wider use. It is also essential that other technologies, such as Whois++, Netfind, and archie, have complete user guides available.

5.C.  SEARCHING AND RETRIEVING

The main complaint, especially from those who enjoyed using a centralized database (such as the InterNIC Whois service), is the need to search for all the John Does in the world. Given that the directory needs to be distributed, there is no way of answering this question without incurring additional cost. This is a problem with any distributed directory: you just can't search every leaf in the tree in any reasonable amount of time. You need to provide some mechanism to limit the number of servers that need to be contacted.

The traditional way to handle this is with hierarchy. This requires the searcher to have some idea of the structure of the directory. It also comes up against one of the standard problems with hierarchical databases: if you need to search based on a characteristic that is NOT part of the hierarchy, you are back to searching every node in the tree, or you can search an index (see below). In general:

   -- the larger the directory, the more need for a distributed
      solution (for upkeep and manageability).

   -- once you are distributed, the search space for any given search
      MUST be limited.

   -- this makes it necessary to provide more information as part of
      the query (and thus makes the directory harder to use).

Any directory system can be used in a manner that makes searching less than easy. With a User Friendly Name (UFN) query, a user can