rfc1588.txt

   usually find an entry (presuming it exists) without a lot of trouble.
   Using additional listings (as per NADF SD-5) helps to hide geographic
   or civil naming infrastructure knowledge requirements.

   Search power is a function of DSA design in X.500, not a function of
   Distinguished Naming.  Search can be aided by the addition in X.500
   of non-distinguishing attributes, and by using the NADF Naming Scheme
   it is possible to lodge an entry anywhere in the DIT that you believe
   is where it will be looked for.

   One approach to the distributed search problem is to create another,
   less distributed database to search, such as an index.  This is done
   by doing a (non-interactive) pre-search, and collecting the results
   in an index.  When a user wants to do a real time search, one first
   searches the index to find pointers to the appropriate data records
   in the distributed database.  One example of this is the building of
   centroids that contain index information.  There may be a class of
   servers that hold indices, called "index-servers".

5.D. INDEXING

   The suggestion for how to do fast searching is to do indexing.  That
   is, to pre-compute an index of people from across the distributed
   database and hold that index in an index server.  When a user wants
   to search for someone, he first contacts the index-server.  The
   index-server searches its index data and returns a pointer (or a few
   pointers) to specific databases that hold data on people that match
   the search criteria.  Other systems which do something comparable to
   this are archie (for FTP file archives), WAIS, and Netfind.

5.E. COLLECTION AND MAINTENANCE

   The information must be "live" - that is, it must be used.  Often one
   way to ensure this is to use the data (perhaps locally) for something
   other than white pages.  If it isn't, most people won't bother to
   keep the information up to date.
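The index-server scheme of 5.D can be illustrated with a small sketch.  This is a hypothetical illustration, not any deployed system: the IndexServer class, the database names, and the record format are all invented.  The essential point it shows is that the index holds pointers to databases, never the records themselves.

```python
# Hypothetical sketch of the 5.D indexing idea: a non-interactive
# pre-search builds an index mapping tokens to the databases that
# contain matching entries; a real-time query consults only the index.

class IndexServer:
    def __init__(self):
        # token -> set of database identifiers (pointers, not records)
        self.index = {}

    def pre_search(self, databases):
        """Non-interactive pre-search: crawl each database once and
        record which tokens appear in which database."""
        for db_name, records in databases.items():
            for record in records:
                for token in record.lower().split():
                    self.index.setdefault(token, set()).add(db_name)

    def lookup(self, token):
        """Real-time query: return pointers to the databases that may
        hold matching entries; the client then queries those directly."""
        return self.index.get(token.lower(), set())

# Invented example data:
databases = {
    "isi.edu":  ["Jon Postel Marina del Rey"],
    "ucla.edu": ["John Doe Los Angeles"],
}
server = IndexServer()
server.pre_search(databases)
print(server.lookup("Postel"))   # a pointer to isi.edu, not the record
```

A centroid-based index works on the same principle, except that each server exports a summary of its own tokens rather than being crawled.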
   The white pages in the phone book have the advantage that the local
   phone company is in contact with the listee monthly (through the
   billing system), and if the address is not up to date, bills don't
   get delivered, and there is feedback that the address is wrong.
   There is even better contact for the phone number, since the local
   phone company must know that for their basic service to work
   properly.  It is this aspect of directory functionality that leads
   towards a distributed directory system for the Internet.

Postel & Anderson                                              [Page 22]

RFC 1588                   White Pages Report              February 1994

   One approach is to use existing databases to supply the white pages
   data.  It then would be helpful to define a particular use of SQL
   (Structured Query Language) as a standard interface language between
   the databases and the X.500 DSA or other white pages server.  Then
   one needs either to have the directory service access the existing
   database using an interface language it already knows (e.g., SQL), or
   to have tools that periodically update the directory database from
   the existing database.  Some sort of "standard" query format (and
   protocol) for directory queries, with "standard" field names, will be
   needed to make this work in general.  In a way, both X.500 and
   Whois++ provide this.  This approach implies customization at every
   existing database to interface to the "standard" query format.

   Some strongly believe that the white pages service needs to be
   created from the bottom up, with each organization supplying and
   maintaining its own information, and that such information has to be
   the same -- or a portion of the same -- information the organization
   uses locally.  Otherwise the global information will be stale and
   incomplete.
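The "use existing databases" approach above can be sketched in a few lines using Python's sqlite3 module.  The table, its columns, and the sample row are invented for illustration; the output field names loosely follow X.500 attribute names (cn, telephoneNumber, mail), which is only one possible choice of "standard" field names.

```python
import sqlite3

# Hypothetical sketch: a tool that periodically reshapes an existing
# organizational SQL table into white pages records with "standard"
# field names.  Table name, columns, and data are invented.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE employees (name TEXT, phone TEXT, email TEXT)")
conn.execute("INSERT INTO employees VALUES "
             "('Jon Postel', '310-555-0100', 'postel@isi.edu')")

def export_white_pages(conn):
    """Pull the locally maintained data and map it onto directory
    field names (here loosely modeled on X.500 attribute names)."""
    rows = conn.execute("SELECT name, phone, email FROM employees")
    return [{"cn": name, "telephoneNumber": phone, "mail": email}
            for name, phone, email in rows]

entries = export_white_pages(conn)
print(entries[0]["mail"])   # postel@isi.edu
```

Because the organization already uses the source table for its own purposes, the exported data stays "live" in the sense of 5.E; the customization cost is the per-database mapping in export_white_pages.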
   One way to make this work is to distribute software that:

      - is useful locally,
      - fits into the global scheme,
      - is available free, and
      - works on most Unix systems.

   With respect to privacy, it would be good for the local software to
   have controls that make it possible to put company-sensitive
   information into the locally maintained directory and have only a
   portion of it exported for outsiders.

5.F. NAMING STRUCTURE

   We need a clear naming scheme capable of associating a name with
   attributes, without any possible ambiguities, that is stable over
   time, but also capable of coping with changes.  This scheme should
   have a clear idea of naming authorities and be able to store
   information required by authentication mechanisms (e.g., PEM or X.509
   certificates).

   The NADF is working to establish a National Public Directory Service,
   based on the use of existing Civil Naming Authorities to register
   entry owners' names, and to deal with the shared-entry problem with a
   shared public DIT supported by competing commercial service
   providers.  At this point, we have no sense of how [un]successful the
   NADF may be in accomplishing this.

   The NADF eventually concluded that the directory should be organized
   so entries can be found where people (or other entities) will look
   for them, not where civil naming authorities would place their
   archival name registration records.

   There are some incompatibilities between the use of the NADF Naming
   Scheme, the White Pages Pilot Naming Scheme, and the PARADISE Naming
   Scheme.  These should be resolved.

5.G. CLAYMAN PROPOSAL

   RFC 1107 offered a "strawman" proposal for an Internet Directory
   Service.  The next step after strawman is sometimes called "clayman",
   and here a clayman proposal is presented.
   We assume only white pages service is to be provided, and we let
   sites run whatever access technologies they want to (with whatever
   access controls they feel comfortable with).

   Then the architecture can be that the discovery process leads to a
   set of URLs.  A URL is like an address, but a typed one: it carries
   an identifier and an access method, not a protocol.  The client sorts
   the URLs and may discard some that it cannot deal with.  The client
   talks to "meaningful URLs" (such as Whois, Finger, X.500).

   This approach results in low entry cost for the servers that want to
   make information available, a Darwinian selection of access
   technologies, coalescence in the Internet marketplace, and a white
   pages service that will tend toward homogeneity and ubiquity.

   Some issues for further study are what discovery technology to use
   (Netfind together with Whois++ including centroids?), how to handle
   non-standard URLs (one possible solution is to put a server on top of
   these non-standard URLs that reevaluates the pointer and acts as a
   front-end to a database), which data model to use (Finger or X.500),
   and how to utilize a common discovery technology (e.g., centroids) in
   a multiprotocol communication architecture.

   The rationale for this meta-WPS approach is that it builds on current
   practices, while striving to provide a ubiquitous directory service.
   Since there are various efforts going on to develop WPS based on
   various different protocols, one can envisage a future with a meta-
   WPS that uses a combination of an intelligent user agent and a
   distributed indexing service to access the requested data from any
   available WPS.  The user-perceived functionality of such a meta-WPS
   will necessarily be restricted to the lowest common denominator.
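The client's URL-handling step in the clayman architecture (sort the discovered URLs, discard those it cannot deal with) can be sketched as follows.  The scheme names and the preference ordering are purely illustrative assumptions, not registered URL schemes or any mandated policy.

```python
# Hypothetical sketch of the clayman client step: discovery yields a
# set of typed URLs; the client drops schemes it cannot handle and
# sorts the rest by a local preference.  Schemes and preferences are
# invented for illustration.

PREFERENCE = {"whois": 0, "finger": 1, "x500": 2}   # lower = preferred

def select_urls(discovered):
    """Keep only URLs whose scheme this client supports, most
    preferred access method first."""
    usable = [u for u in discovered if u.split(":", 1)[0] in PREFERENCE]
    return sorted(usable, key=lambda u: PREFERENCE[u.split(":", 1)[0]])

urls = ["x500://dsa.example/cn=Postel",
        "gopher://old.example/7",           # unsupported: discarded
        "whois://wp.example/?name=Postel"]
print(select_urls(urls))
# the whois URL sorts first; the gopher URL is dropped
```

The "Darwinian selection" above happens precisely in tables like PREFERENCE: access methods that most clients support and prefer get used, the rest fall away.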
   One will hope that through "market" forces the number of protocols
   used will decrease (or converge), and that the functionality will
   increase.

   The degree to which proactive data gathering is permitted may be
   limited by national laws.  It may be appropriate to gather data about
   which hosts have databases, but not about the data in those
   databases.

6. CONCLUSIONS

   We now revisit the questions we set out to answer and briefly
   describe the key conclusions.

6.A.  WHAT FUNCTIONS SHOULD A WHITE PAGES DIRECTORY PERFORM?

   After all the discussion, we come to the conclusion that there are
   two functions the white pages service must provide: searching and
   retrieving.

   Searching is the ability to find people given some fuzzy information
   about them, such as "Find the Postel in southern California".
   Searches may often return a list of matches.

   The recognition of the importance of indexing in searching is a major
   conclusion of these discussions.  It is clear that users want fast
   searching across the distributed database on attributes different
   from the database structure.  It is possible that pre-computed
   indices can satisfy this desire.

   Retrieval is obtaining additional information associated with a
   person, such as address, telephone number, email mailbox, and
   security certificate.

   This last, the security certificate, is a type of information
   associated with an individual that is essential for the use of
   end-to-end authentication, integrity, and privacy in Internet
   applications.  The development of secure applications in the Internet
   is dependent on a directory system for retrieving the security
   certificate associated with an individual.  The PEM system has been
   developed and is ready to go into service, but is now held back by
   the lack of an easily used directory of security certificates.

   PEM security certificates are part of the X.509 standard.
   If X.500 is going to be set aside, then other alternatives need to be
   explored.  If X.500 distinguished naming is scrapped, some other
   structure will need to come into existence to replace it.

6.B.  WHAT APPROACHES WILL PROVIDE US WITH A WHITE PAGES DIRECTORY?

   It is clear that there will be several technologies in use.  The
   approach must be to promote the interoperation of the multiple
   technologies.  This is traditionally done by having conventions or
   standards for the interfaces and communication forms between the
   different systems.  The need is for a specification of the simplest
   common communication form that is powerful enough to provide the
   necessary functionality.  This allows a variety of user interfaces on
   any number of client systems communicating with different types of
   servers.  The IETF working group (WG) method of developing standards
   seems well suited to this problem.

   This "common ground" approach aims to provide the ubiquitous WPS with
   high functionality and a low entry cost.  This may be done by
   singling out issues that are common to the various competing WPS and
   coordinating work on these in specific and dedicated IETF WGs (e.g.,
   data model coordination).  The IETF will continue development of
   X.500 and Whois++ as two separate entities.  The work on these two
   protocols will be broken down into various small and focussed WGs
   that address specific technical issues, using ideas from both X.500
   and Whois++.  The goal is to produce common standards for information
   formats, data models, and access protocols.  Where possible, the
   results of such a WG will be used in both Whois++ and X.500, although
   it is envisaged that several WGs may work on issues that remain
   specific to one of the protocols.
   The IDS (Integrated Directory Services) WG continues to work on
   non-protocol-specific issues.  To achieve coordination that leads to
   convergence rather than divergence, the applications area directorate
   will provide guidance to the Application Area Directors as well as to
   the various WGs, and the User Services Area Council (USAC) will
   provide the necessary user perspective.

6.C.  WHAT ARE THE PROBLEMS TO BE OVERCOME?

   There are several problems that can be solved to make progress
   towards a white pages service more rapid.  We need:

   To make it much easier to be part of the Internet white pages than
   bringing up an X.500 DSA, yet making good use of the already deployed
   X.500 DSAs.

   To define new, simpler white pages services (such as Whois++) such
   that numerous people can create implementations.

   To provide some central management of the X.500 system to promote
   good operation.

   To select a naming scheme.

   To develop a set of index-servers, and indexing techniques, to
   provide for fast searching.

   To provide for the storage and retrieval of security certificates.

6.D.  WHAT SHOULD THE DEPLOYMENT STRATEGY BE?

   We should capitalize on the existing infrastructure of already
   deployed X.500 DSAs.  This means that some central management must be
   provided, and easy-to-use user interfaces (such as the Gopher
   "gateway") must be widely deployed.

   -- Document the selection of a naming scheme (e.g., the NADF scheme).

   -- Adopt the "common ground" model.  Encourage the development of
      several different services, with
