rfc2196.txt

   There may be regulatory requirements that affect some aspects of your security policy (e.g., line monitoring).  The creators of the security policy should consider seeking legal assistance in the creation of the policy.  At a minimum, the policy should be reviewed by legal counsel.

   Once your security policy has been established, it should be clearly communicated to users, staff, and management.  Having all personnel sign a statement indicating that they have read, understood, and agreed to abide by the policy is an important part of the process.

   Finally, your policy should be reviewed on a regular basis to see if it is successfully supporting your security needs.

2.3  Keeping the Policy Flexible

   In order for a security policy to be viable for the long term, it requires a great deal of flexibility based upon an architectural security concept.  A security policy should be (largely) independent of specific hardware and software situations (as specific systems tend to be replaced or moved overnight).  The mechanisms for updating the policy should be clearly spelled out.  This includes the process, the people involved, and the people who must sign off on the changes.

   It is also important to recognize that there are exceptions to every rule.  Whenever possible, the policy should spell out what exceptions to the general policy exist.  For example, under what conditions is a system administrator allowed to go through a user's files?  Also, there may be some cases when multiple users will have access to the same userid.  For example, on systems with a "root" user, multiple system administrators may know the password and use the root account.

   Another consideration is called the "Garbage Truck Syndrome."  This refers to what would happen to a site if a key person was suddenly unavailable for his/her job function (e.g., was suddenly ill or left the company unexpectedly).  While the greatest security resides in the minimum dissemination of information, the risk of losing critical information increases when that information is not shared.  It is important to determine what the proper balance is for your site.

3.  Architecture

3.1  Objectives

3.1.1  Completely Defined Security Plans

   All sites should define a comprehensive security plan.  This plan should be at a higher level than the specific policies discussed in Chapter 2, and it should be crafted as a framework of broad guidelines into which specific policies will fit.

   It is important to have this framework in place so that individual policies can be consistent with the overall site security architecture.  For example, having a strong policy with regard to Internet access while placing weak restrictions on modem usage is inconsistent with an overall philosophy of strong security restrictions on external access.

   A security plan should define: the list of network services that will be provided; which areas of the organization will provide the services; who will have access to those services; how access will be provided; who will administer those services; etc.
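   As an illustration of what such a plan might record, the following is a minimal sketch (in Python) of a service inventory of the kind described above.  The service names, organizational units, and access methods are hypothetical examples chosen for illustration; they are not taken from this handbook.

      # Sketch of a security plan's service inventory.  All names below
      # (services, departments, groups) are hypothetical examples.
      from dataclasses import dataclass
      from typing import List

      @dataclass
      class PlannedService:
          name: str                  # network service to be provided
          provided_by: str           # area of the organization providing it
          allowed_users: List[str]   # who will have access
          access_method: str         # how access will be provided
          administrators: List[str]  # who will administer the service

      security_plan = [
          PlannedService("email hub", "IT operations", ["all staff"],
                         "internal LAN only", ["postmaster group"]),
          PlannedService("public WWW server", "communications dept.",
                         ["anyone"], "Internet, HTTP only",
                         ["web admin group"]),
      ]

      for svc in security_plan:
          print(f"{svc.name}: provided by {svc.provided_by}; "
                f"administered by {', '.join(svc.administrators)}")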
   The plan should also address how incidents will be handled.  Chapter 5 provides an in-depth discussion of this topic, but it is important for each site to define classes of incidents and corresponding responses.  For example, sites with firewalls should set a threshold on the number of attempts made to foil the firewall before a response is triggered.  Escalation levels should be defined for both attacks and responses.  Sites without firewalls will have to determine whether a single attempt to connect to a host constitutes an incident, and how to treat a systematic scan of systems.
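   As a sketch of the kind of threshold and escalation scheme described above, the following Python fragment maps a count of blocked connection attempts at a firewall to a response level.  The specific thresholds and level names are assumptions chosen for illustration, not values recommended by this handbook.

      # Threshold-based incident classification for blocked firewall
      # attempts.  The thresholds and response levels are hypothetical.
      ESCALATION_LEVELS = [
          (1,   "log only"),
          (10,  "notify on-call staff"),
          (100, "page security officer and start incident response"),
      ]

      def classify_attempts(blocked_attempts: int) -> str:
          """Return the response for a count of blocked attempts seen
          within some monitoring window."""
          response = "no action"
          for threshold, action in ESCALATION_LEVELS:
              if blocked_attempts >= threshold:
                  response = action
          return response

      for count in (1, 15, 250):
          print(count, "->", classify_attempts(count))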
   For sites connected to the Internet, the rampant media magnification of Internet-related security incidents can overshadow a (potentially) more serious internal security problem.  Likewise, companies that have never been connected to the Internet may have strong, well-defined internal policies but fail to adequately address an external connection policy.

3.1.2  Separation of Services

   There are many services which a site may wish to provide for its users, some of which may be external.  There are a variety of security reasons to attempt to isolate services onto dedicated host computers.  There are also performance reasons in most cases, but a detailed discussion is beyond the scope of this document.

   The services which a site may provide will, in most cases, have different levels of access needs and models of trust.  Services which are essential to the security or smooth operation of a site would be better off being placed on a dedicated machine with very limited access (see the "deny all" model in Section 3.1.3), rather than on a machine that provides a service (or services) which has traditionally been less secure, or requires greater accessibility by users who may accidentally compromise security.

   It is also important to distinguish between hosts which operate within different models of trust (e.g., all the hosts inside of a firewall versus any host on an exposed network).

   Some of the services which should be examined for potential separation are outlined in Section 3.2.3.  It is important to remember that security is only as strong as the weakest link in the chain.  Several of the most publicized penetrations in recent years have been through the exploitation of vulnerabilities in electronic mail systems.  The intruders were not trying to steal electronic mail, but they used the vulnerability in that service to gain access to other systems.

   If possible, each service should be running on a different machine whose only duty is to provide a specific service.  This helps to isolate intruders and limit potential harm.

3.1.3  Deny All / Allow All

   There are two diametrically opposed underlying philosophies which can be adopted when defining a security plan.  Both alternatives are legitimate models to adopt, and the choice between them will depend on the site and its needs for security.

   The first option is to turn off all services and then selectively enable services on a case-by-case basis as they are needed.  This can be done at the host or network level as appropriate.  This model, which will hereafter be referred to as the "deny all" model, is generally more secure than the other model described in the next paragraph.  More work is required to successfully implement a "deny all" configuration, as well as a better understanding of services.  Allowing only known services provides for a better analysis of a particular service/protocol and the design of a security mechanism suited to the security level of the site.

   The other model, which will hereafter be referred to as the "allow all" model, is much easier to implement, but is generally less secure than the "deny all" model.  Simply turn on all services, usually the default at the host level, and allow all protocols to travel across network boundaries, usually the default at the router level.  As security holes become apparent, they are restricted or patched at either the host or network level.

   Each of these models can be applied to different portions of the site, depending on functionality requirements, administrative control, site policy, etc.  For example, the policy may be to use the "allow all" model when setting up workstations for general use, but adopt a "deny all" model when setting up information servers, like an email hub.  Likewise, an "allow all" policy may be adopted for traffic between LANs internal to the site, while a "deny all" policy is adopted between the site and the Internet.

   Be careful when mixing philosophies as in the examples above.  Many sites adopt the theory of a hard "crunchy" shell and a soft "squishy" middle.  They are willing to pay the cost of security for their external traffic and require strong security measures, but are unwilling or unable to provide similar protections internally.  This works fine as long as the outer defenses are never breached and the internal users can be trusted.  Once the outer shell (firewall) is breached, subverting the internal network is trivial.
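   To make the contrast between the two models concrete, the following Python sketch expresses each one as a simple service-filtering decision.  The service names and the way the rule sets are written down here are illustrative assumptions, not configuration taken from this handbook.

      # Contrast between the "deny all" and "allow all" models for a
      # simple service filter.  The service lists are hypothetical.
      ENABLED_SERVICES = {"smtp", "dns", "http"}   # examined and enabled
      KNOWN_BAD_SERVICES = {"tftp", "rexec"}       # blocked after problems

      def deny_all_model(service: str) -> bool:
          # Everything is off by default; only services that have been
          # examined and deliberately enabled are permitted.
          return service in ENABLED_SERVICES

      def allow_all_model(service: str) -> bool:
          # Everything is on by default; a service is blocked only after
          # a security hole becomes apparent.
          return service not in KNOWN_BAD_SERVICES

      for svc in ("http", "tftp", "gopher"):
          print(svc, "deny-all:", deny_all_model(svc),
                "allow-all:", allow_all_model(svc))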
3.1.4  Identify Real Needs for Services

   There is a large variety of services which may be provided, both internally and on the Internet at large.  Managing security is, in many ways, managing access to services internal to the site and managing how internal users access information at remote sites.

   Services tend to rush like waves over the Internet.  Over the years many sites have established anonymous FTP servers, gopher servers, WAIS servers, WWW servers, etc. as they became popular, even though they were not particularly needed at all sites.  Evaluate all new services that are established with a skeptical attitude to determine if they are actually needed or just the current fad sweeping the Internet.

   Bear in mind that security complexity can grow exponentially with the number of services provided.  Filtering routers need to be modified to support the new protocols.  Some protocols are inherently difficult to filter safely (e.g., RPC and UDP services), thus providing more openings to the internal network.  Services provided on the same machine can interact in catastrophic ways.  For example, allowing anonymous FTP on the same machine as the WWW server may allow an intruder to place a file in the anonymous FTP area and cause the HTTP server to execute it.

3.2  Network and Service Configuration

3.2.1  Protecting the Infrastructure

   Many network administrators go to great lengths to protect the hosts on their networks.  Few administrators make any effort to protect the networks themselves.  There is some rationale to this.  For example, it is far easier to protect a host than a network.  Also, intruders are likely to be after data on the hosts; damaging the network would not serve their purposes.  That said, there are still reasons to protect the networks.  For example, an intruder might divert network traffic through an outside host in order to examine the data (i.e., to search for passwords).  Also, infrastructure includes more than the networks and the routers which interconnect them.  Infrastructure also includes network management (e.g., SNMP), services (e.g., DNS, NFS, NTP, WWW), and security (i.e., user authentication and access restrictions).

   The infrastructure also needs protection against human error.  When an administrator misconfigures a host, that host may offer degraded service.  This only affects users who require that host and, unless that host is a primary server, the number of affected users will therefore be limited.  However, if a router is misconfigured, all users who require the network will be affected.  Obviously, this is a far larger number of users than those depending on any one host.

3.2.2  Protecting the Network

   There are several problems to which networks are vulnerable.  The classic problem is a "denial of service" attack.  In this case, the network is brought to a state in which it can no longer carry legitimate users' data.  There are two common ways this can be done: by attacking the routers and by flooding the network with extraneous traffic.  Please note that the term "router" in this section is used as an example of a larger class of active network interconnection components that also includes components like firewalls, proxy servers, etc.

   An attack on the router is designed to cause it to stop forwarding packets, or to forward them improperly.  The former case may be due to a misconfiguration, the injection of a spurious routing update, or a "flood attack" (i.e., the router is bombarded with unroutable packets, causing its performance to degrade).  A flood attack on a network is similar to a flood attack on a router, except that the flood packets are usually broadcast.  An ideal flood attack would be the injection of a single packet which exploits some known flaw in the network nodes and causes them to retransmit the packet, or
