rfc1588.txt
   can talk to CSO (a phonebook service developed by the University of
   Illinois), WAIS (Wide Area Information Server), etc.  WWW can talk
   to everything.  Netfind knows about several other protocols.

   Gopher and WAIS "achieved orbit" simply by providing means for
   people to export and to access useful information; neither system
   had to provide ubiquitous service.  For white pages, if the service
   doesn't provide answers to specific user queries some reasonable
   proportion of the time, users view it as a failure.  One way to
   achieve a high hit rate in an exponentially growing Internet is to
   use a proactive data gathering architecture (e.g., as realized by
   archie and Netfind).  Important as they are, replication,
   authentication, etc., are irrelevant if no one uses the service.

   There are pluses and minuses to a proactive data gathering method.
   On the plus side, one can build a large database quickly.  On the
   minus side, one can get garbage in the database.  One possibility
   is to use a proactive approach (a) to acquire data for
   administrative review before it is added to the database, and/or
   (b) to check the data for consistency with the real world.
   Additionally, there is some question about the legality of
   proactive methods in some countries.

   One solution is to combine existing technology and infrastructure
   to provide a good white pages service, based on an X.500 core plus
   a set of additional index/reference servers.  DNS can be used to
   "refer" to the appropriate zone in the X.500 name space, using WAIS
   or Whois++ to build up indexes to the X.500 server that will be
   able to process a given request.  These can be index servers or
   centroids or something new.

   Some X.500 purists might feel this approach muddles the connecting
   fabric among X.500 servers, since the site index, DNS records, and
   customization gateways are all outside of X.500.
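As a rough illustration of the index-server/centroid idea just described, each white pages server can publish a "centroid": the distinct values it holds for a few key attributes.  An index server then consults those sets to refer a query to the servers that might be able to answer it.  All server names, attribute names, and data below are hypothetical; a real Whois++ centroid carries more structure, but the routing decision is essentially this membership test.

```python
# Sketch of centroid-based query routing (hypothetical servers and data).

def build_centroid(entries, attributes):
    """Collect the distinct values a server holds for the indexed attributes."""
    centroid = {attr: set() for attr in attributes}
    for entry in entries:
        for attr in attributes:
            if attr in entry:
                centroid[attr].add(entry[attr].lower())
    return centroid

def route_query(centroids, attr, value):
    """Return the servers whose centroid suggests they can answer the query."""
    value = value.lower()
    return sorted(server for server, c in centroids.items()
                  if value in c.get(attr, set()))

# Two hypothetical organizations' servers publish centroids to the index.
centroids = {
    "dsa.example.edu": build_centroid(
        [{"surname": "Postel", "org": "ISI"},
         {"surname": "Mockapetris", "org": "ISI"}],
        ["surname", "org"]),
    "dsa.example.com": build_centroid(
        [{"surname": "Anderson", "org": "ISI"}],
        ["surname", "org"]),
}

print(route_query(centroids, "surname", "postel"))
# -> ['dsa.example.edu']
print(route_query(centroids, "org", "isi"))
# -> ['dsa.example.com', 'dsa.example.edu']
```

Note that the index only narrows the search space; the referred servers still run the actual query, which is what keeps the index small and the knowledge collectable.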
   On the other hand, making X.500 reachable from a common front end
   would provide added incentive for sites to install X.500 servers.
   Plus, it provides an immediate (if interim) solution to the need
   for a global site index in X.500.  Since the goal is to have a good
   white pages service, X.500 purity is not essential.

Postel & Anderson                                               [Page 6]

RFC 1588                   White Pages Report              February 1994

   It may be that there are parts of the white pages problem that
   cannot be addressed without "complex technology".  A solution that
   allows the user to progress up the ladder of complexity, according
   to taste, perceived need, and available resources, may be a much
   healthier approach.  However, experience to date with simpler
   solutions (Whois++, Netfind, archie) indicates that a good
   percentage of the problem of finding information can be addressed
   with simpler approaches.  Users know this and will resist attempts
   to make them pay the full price for the full solution when it is
   not needed.

   Whereas managers and funders may be concerned with the complexity
   of the technology, users are generally more concerned with the
   quality and ease of use of the service.  A danger in supporting a
   mix of technologies is that the service may become so variable that
   the loose constraints of weak service in some places lead users to
   see the whole system as too loose and weak.

   Some organizations will not operate services that they cannot get
   for free or cannot try cheaply before investing time and money.
   Some people prefer a bare-bones, no-support solution that only
   gives them 85 percent of what they want.  Paying for the service
   would not be a problem for many sites, once the value of the
   service has been proven.  Although there is no requirement to
   provide free software for everybody, we do need viable funding and
   support mechanisms.
   A solution cannot simply be dictated with any expectation that it
   will stick.

   Finally, are there viable alternative technologies to X.500 now, or
   do we need to design something new?  What kind of time frame are we
   talking about for development and deployment?  Will the new
   technology be extensible enough to provide for the as yet
   unimagined uses that will be required of directory services 5 years
   from now?  And will this directory service ultimately provide more
   capabilities than just white pages?

3.C. WHAT ARE THE PROBLEMS TO BE OVERCOME?

   There are two classes of problems to be examined: technology issues
   and infrastructure.

   TECHNOLOGY:

   How do we populate the database and make software easily available?

   Many people suggest that a public domain version of X.500 is
   necessary before a widespread X.500 service is operational.  The
   current public domain version is said to be difficult to install
   and to bring into operation, but many organizations have
   successfully installed it and have had their systems up and running
   for some time.  Note that the current public domain program, quipu,
   is not quite standard X.500, and is more suited to research than
   production service.  Many people who tried earlier versions of
   quipu abandoned X.500 due to its costly start-up time and inherent
   complexity.

   The ISODE (ISO Development Environment) Consortium is currently
   developing newer features and is addressing most of the major
   problems.  However, there is the perception that the companies in
   the consortium have yet to turn these improvements into actual
   products, though the consortium says the companies have commercial
   off-the-shelf (COTS) products available now.
   The improved products are certainly needed now; if they are too
   late in being deployed, other solutions will be implemented in lieu
   of X.500.

   The remaining problem with an X.500 white pages service is having a
   high quality public domain DSA.  The ISODE Consortium will make its
   version available for no charge to universities (or any non-profit
   or government organization whose primary purpose is research), but
   if that leaves a sizeable group using the old quipu implementation,
   then there is a significant problem.  In such a case, an answer may
   be for some funding to upgrade the public version of quipu.

   In addition, the quipu DSA should be simplified so that it is easy
   to use.  Tim Howes' new disk-based quipu DSA solves many of the
   memory problems in DSA resource utilization.  If one fixes the DSA
   resource utilization problem, makes it fairly easy to install,
   makes it freely available, and publishes a popular press book about
   it, X.500 may have a better chance of success.

   The client side of X.500 needs more work.  Many people would rather
   not expend the extra effort to get X.500 up.  X.500 has a steep
   learning curve.  There is a perception that the client side also
   needs a complex Directory User Interface (DUI) built on ISODE.  Yet
   there are alternative DUIs, such as those based on LDAP.  Another
   aspect of the client side is that access to the directory should be
   built into other applications like gopher and email (especially for
   accessing PEM X.509 certificates).

   We also need data conversion tools to make the transition between
   different systems possible.  For example, NASA had more than one
   system to convert.

   Searching abilities for X.500 need to be improved.
   LDAP is a great help, but the following capabilities are still
   needed:

   -- commercial grade, easily maintainable servers with back-end
      database support.

   -- clients that can do exhaustive search and/or cache useful
      information and use heuristics to narrow the search space in
      case of ill-formed queries.

   -- index servers that store index information on a "few" key
      attributes that DUIs can consult in narrowing the search space.
      How about index attributes at various levels in the tree that
      capture the information in the corresponding subtree?

   Work still needs to be done with Whois++ to see if it will scale to
   the level of X.500.

   An extended Netfind is attractive because it would work without any
   additional infrastructure changes (naming, common schema, etc.), or
   even the addition of any new protocols.

   INFRASTRUCTURE:

   The key issues are central management and naming rules.

   X.500 is not run as a service in the U.S., and therefore those
   using X.500 in the U.S. are not assured of the reliability of root
   servers.  X.500 cannot be taken seriously until there is some
   central management and coordinated administration support in place.
   Someone has to be responsible for maintaining the root; this effort
   is comparable to maintaining the root of the DNS.  PSI provided
   this service until the end of the FOX project [9]; should they
   receive funding to continue this?  Should this be a commercial
   enterprise?  Or should this function be added to the duties of the
   InterNIC?

   New sites need assistance in getting their servers up and linked to
   a central server.

   There are two dimensions along which to consider the
   infrastructure: 1) general purpose vs. specific, and 2) tight vs.
   loose information framework.
   General purpose leads to more complex protocols - the generality is
   an overhead, but gives the potential to provide a framework for a
   wide variety of services.  Special purpose protocols are simpler,
   but may lead to duplication or restricted scope.

   A tight information framework costs effort to coerce existing data
   and to build structures.  Once in place, it gives better
   manageability and more uniform access.  The tight information
   framework can be subdivided further into: 1) the naming approach,
   and 2) the object and attribute extensibility.

   Examples of systems placed in this space are: a) X.500 is a general
   purpose and tight information framework, b) DNS is a specific and
   tight information framework, c) there are various research efforts
   in the general purpose and loose information framework, and d)
   Whois++ employs a specific and loose information framework.

   We need to look at which parts of this spectrum we need to provide
   services for.  This may lead to concluding that several services
   are desirable.

3.D. WHAT SHOULD THE DEPLOYMENT STRATEGY BE?

   No solution will arise simply by providing technical
   specifications.  The solution must fit the way the Internet adopts
   information technology.  The information systems that have gained
   real momentum in the Internet (WAIS, Gopher, etc.) followed this
   model:

   -- A small group goes off and builds a piece of software that
      supplies badly needed functionality at feasible effort to
      providers and users.

   -- The community rapidly adopts the system as a de facto standard.

   -- Many people join the developers in improving the system and
      standardizing the protocols.

   What can this report do to help make this happen for Internet white
   pages?

   Deployment Issues:
   -- A strict hierarchical layout is not suitable for all directory
      applications, and hence we should not force-fit it.

   -- A typical organization's hierarchical information itself is
      often proprietary; the organization may not want to divulge it
      to the outside world.  Institutions (not just commercial ones)
      will always have some information that they do not wish to
      display to the public in any directory.  This is especially true
      for institutions that want to protect themselves from
      headhunters and sales personnel.

   -- There is the problem of multiple directory service providers,
      but see the NADF work on "Naming Links" and their "CAN/KAN"
      technology [7].

      A more general approach, such as using a knowledge server (or a
      set of servers), might be better.  The knowledge servers would
      have to know which server to contact for a given query and thus
      may refer either to service provider servers or directly to
      institution-operated servers.  The key problem is how to collect
      the knowledge and keep it up to date.  There are some questions
      about the viability of "naming links" without a protocol
      modification.

   -- Guidelines are needed for methods of searching and using
      directory information.

   -- A registration authority is needed to register names at various
      levels of the hierarchy to ensure uniqueness, or adoption of the
      civil naming structure as delineated by the NADF.

   It is true that deployment of X.500 has not seen exponential growth
   as have other popular services on the Internet.  But rather than
   abandoning X.500 now, these efforts, which are attempting to
   address some of the causes, should continue to move forward.
   Certainly the installation complexity and performance problems with
   the quipu implementation need solutions.  These problems are being
   worked on.
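The registration-authority role raised under deployment issues reduces to a simple invariant: a name registered at any level of the hierarchy must be unique among its siblings, and its parent must already exist.  The following sketch illustrates that invariant only; the class name, the case-folding policy, and the error handling are all illustrative assumptions, not part of any NADF or X.500 specification.

```python
# Hypothetical sketch of a registration authority guaranteeing name
# uniqueness at each level of an X.500-style hierarchy.  Names are
# sequences of RDN strings, most significant first.

class RegistrationAuthority:
    def __init__(self):
        self.registered = set()

    def register(self, *rdns):
        """Register a name, e.g. ('c=US', 'o=ISI', 'cn=Jon Postel').

        Every proper prefix of the name must already be registered (no
        organization under an unregistered country), and the full name
        must not collide with an existing registration.  Comparison is
        case-insensitive here; a real authority would follow the
        matching rules of the attribute syntax.
        """
        name = tuple(r.lower() for r in rdns)
        for i in range(1, len(name)):
            if name[:i] not in self.registered:
                raise LookupError("parent not registered: %s" % (name[:i],))
        if name in self.registered:
            raise ValueError("name already registered: %s" % (name,))
        self.registered.add(name)

ra = RegistrationAuthority()
ra.register("c=US")
ra.register("c=US", "o=ISI")
try:
    ra.register("c=us", "o=isi")      # collides despite different case
except ValueError:
    print("duplicate rejected")
```

Adopting the NADF's civil naming structure instead would move this uniqueness guarantee onto existing civil registries rather than a new Internet authority, which is exactly the trade-off the bullet above leaves open.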
