Google releases a free suite of Web security assessment tools
===========================================================
ratproxy - passive web application security assessment tool
===========================================================

  http://code.google.com/p/ratproxy

  * Written and maintained by Michal Zalewski <lcamtuf@google.com>.
  * Copyright 2007, 2008 Google Inc, rights reserved.
  * Released under terms and conditions of the Apache License, version 2.0.

-----------------
What is ratproxy?
-----------------

Ratproxy is a semi-automated, largely passive web application security audit
tool. It is meant to complement active crawlers and manual proxies more
commonly used for this task, and is optimized specifically for an accurate
and sensitive detection, and automatic annotation, of potential problems and
security-relevant design patterns based on the observation of existing,
user-initiated traffic in complex web 2.0 environments.

The approach taken with ratproxy offers several important advantages over
more traditional methods:

  * No risk of disruptions. In the default operating mode, the tool does not
    generate a high volume of attack-simulating traffic, and as such may be
    safely employed against production systems at will, for all types of ad
    hoc, post-release audits. Active scanners may trigger DoS conditions or
    persistent XSSes, and hence are poorly suited for live platforms.

  * Low effort, high yield. Compared to active scanners or fully manual
    proxy-based testing, ratproxy assessments take very little time or
    bandwidth to run, and proceed in an intuitive, distraction-free manner -
    yet provide good insight into the inner workings of a product, and the
    potential security vulnerabilities therein. They also afford consistent
    and predictable coverage of user-accessible features.

  * Preserved control flow of human interaction.
By silently following the
    browser, coverage of locations protected by nonces, of operations valid
    only under certain circumstances, and of dynamic events such as
    cross-domain Referer data disclosure, is greatly enhanced. Brute-force
    crawlers and fuzzers usually have no way to explore these areas in a
    reliable manner.

  * WYSIWYG data on script behavior. Javascript interfaces and event
    handlers are explored precisely to the degree they are used in the
    browser, with no need for complex guesswork or simulations. Active
    scanners often have significant difficulty exploring JSON responses,
    XMLHttpRequest() behavior, UI-triggered event data flow, and the like.

  * Easy process integration. The proxy can be transparently integrated into
    existing manual security testing or interface QA processes without
    introducing significant setup or operator training overhead.

-----------------------
Is it worth trying out?
-----------------------

There are numerous alternative proxy tools meant to aid security auditors -
most notably WebScarab, Paros, Burp, and ProxMon. Stick with whatever suits
your needs, as long as you get the data you need in the format you like.

That said, ratproxy is there for a reason. It is designed specifically to
deliver concise reports that focus on prioritized issues of clear relevance
to contemporary web 2.0 applications, and to do so in a hands-off,
repeatable manner. It should not overwhelm you with raw HTTP traffic dumps,
and it goes far beyond simply providing a framework to tamper with the
application by hand.

Ratproxy implements a number of fairly advanced and unique checks based on
our experience with these applications, as well as all the related browser
quirks and content handling oddities.
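As a rough illustration of the simplest kind of browser-quirk-driven check
described here - a simplified sketch only, not ratproxy's actual detection
logic, with all names and heuristics chosen for this example - consider
flagging a response whose declared Content-Type disagrees with what the body
appears to be:

```python
# Simplified sketch (NOT ratproxy's implementation): flag responses whose
# declared Content-Type disagrees with what the payload looks like.
# "Renderable" markup served under a non-HTML type is the classic setup
# for content-sniffing XSS in browsers that second-guess the server.

def looks_like_html(body: bytes) -> bool:
    """Crude detector for markup a browser might render inline."""
    head = body[:256].lstrip().lower()
    return head.startswith((b"<!doctype html", b"<html", b"<script", b"<body"))

def mime_mismatch(declared_type: str, body: bytes) -> bool:
    """True when a response declared as plain text, an image, or an opaque
    binary actually carries HTML-like markup."""
    declared = declared_type.split(";")[0].strip().lower()
    non_renderable = {"text/plain", "image/gif", "image/jpeg", "image/png",
                      "application/octet-stream"}
    return declared in non_renderable and looks_like_html(body)
```

A real sniffer must cover far more formats and edge cases; the point is only
that the declared-versus-detected comparison is a purely passive signal that
can be computed from observed traffic alone.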
It features sophisticated content-sniffing functionality capable of
distinguishing between stylesheets and Javascript code snippets, supports
SSL man-in-the-middle and on-the-fly Flash ActionScript decompilation, and
even offers an option to confirm high-likelihood flaw candidates with a very
lightweight, built-in active testing module.

Last but not least, if you are undecided, the proxy may be easily chained
with third-party security testing proxies of your choice.

----------------------------------
How does it avoid false positives?
----------------------------------

Operating in a non-disruptive mode makes the process of discovering security
flaws particularly challenging, as the presence of some vulnerabilities must
be deduced based on very subtle, not always reliable cues - and even in
active testing modes, ratproxy strives to minimize the amount of rogue
traffic generated, and side effects caused.

The set of checks implemented by ratproxy is outlined later on - but just as
importantly, underneath all the individual check logic, the proxy uses a
number of passively or semi-passively gathered signals to more accurately
prioritize reported problems and reduce the number of false alarms as much
as possible. The five core properties examined for a large number of checks
are:

  * What the declared and actually detected MIME type for the document is.
    This is a fairly important signal, as many problems manifest themselves
    only in the presence of subtle mismatches between these two - whereas
    other issues need to be treated as higher or lower priority based on
    this data. More fundamentally, the distinction between certain classes
    of content - such as "renderables" that may be displayed inline by the
    browser - is very important to many checks.

  * How pages respond to having cookie-based authentication removed. This
    provides useful information on whether the resource is likely to contain
    user-specific data, amongst other things.
Carefully preselected
    requests that fail some security checks are replayed as-is, but with
    authentication data removed; responses are then compared, with virtually
    no risk of undesirable side effects in common applications.

  * Whether requests seem to contain non-trivial, sufficiently complex
    security tokens, or other mechanisms that may make the URL difficult to
    predict. This provides information needed to determine the presence of
    XSRF defenses, to detect cross-domain token leakage, and more. (In
    active testing mode, the function of such tokens is further validated by
    replaying the request with modified values.)

  * Whether any non-trivial parts of the query are echoed back in the
    response, and in what context. This is used to pick particularly
    interesting candidates for XSS testing - or, in active mode, to schedule
    low-overhead, lightweight probes.

  * Whether the interaction occurs on the boundary between a set of domains
    defined by runtime settings as the trusted environment subjected to the
    audit, and the rest of the world. Many boundary behaviors have a special
    significance, as they outline cross-domain trust patterns and
    information disclosure routes.

In addition to this, several places employ check-specific logic to further
fine-tune the results.

------------------------------------
What specific tests are implemented?
------------------------------------

Key low-level check groups implemented by ratproxy are:

  * Potentially unsafe JSON-like responses that may be vulnerable to
    cross-domain script inclusion. JSON responses may be included across
    domains by default, unless safe serialization schemes, security tokens,
    or parser-breaking syntax is used. Ratproxy will check for these
    properties, and highlight any patterns of concern.

  * Bad caching headers on sensitive content.
Ratproxy is able to
    accurately detect the presence of several types of sensitive documents,
    such as locations that return user-specific data, or resources that set
    new, distinctive cookies. If the associated requests have predictable
    URLs, and lack HTTP caching directives that would prevent proxy-level
    caching, there is a risk of data leakage.

    In pedantic mode, ratproxy will also spot differences in HTTP/1.1 and
    HTTP/1.0 caching intents - as these may pose problems for a fraction of
    users behind legacy cache engines (such as several commercial systems
    used to date by some corporations).

  * Suspicious cross-domain trust relationships. Based on the observation of
    dynamic control flow, and a flexible definition of the trusted
    perimeter, ratproxy is capable of accurately detecting dangerous
    interactions between domains, including but not limited to:

      * Security token leakage via Referer headers,
      * Untrusted script or stylesheet inclusion,
      * General references to third-party domains,
      * Mixed content issues in HTTPS-only applications,
      * Tricky cross-domain POST requests in single sign-on systems.

  * Numerous classes of content serving issues - a broad class of problems
    that lead to subtle XSSes, and includes MIME type mismatches, charset
    problems, Flash issues, and more. Research indicates that a vast number
    of seemingly minor irregularities in content type specifications may
    trigger cross-site scripting in unusual places; for example, subtle
    mistakes such as serving GIF files as image/jpeg, typing utf8 instead
    of utf-8 in Content-Type headers, or confusing the HTTP charset with
    XML declaration charset values are all enough to cause trouble. Even
    seemingly harmless actions such as serving valid, attacker-controlled
    PNG images inline were known to cause problems due to browser design
    flaws.
    Likewise, certain syntax patterns are dangerous to return to a browser
    regardless of MIME types, as there are known methods to have MIME types
    overridden or ignored altogether. Ratproxy uses a set of fairly advanced
    checks that spot these problems with considerable accuracy and
    relatively few false positives in contemporary scenarios, accounting
    for the various classes of content served.

  * Queries with insufficient XSRF defenses (POSTs, plus any requests that
    set cookies by default; and other suspicious-looking GET requests as an
    option). In active testing mode, the proxy will also actually try to
    validate XSRF protections by replaying requests with modified token
    values, and comparing responses.

  * Suspected or confirmed XSS / data injection vectors, including attacks
    through included JSON-based script injection, or response header
    splitting. In the default, passive mode, ratproxy does not attempt to
    confirm the quality of XSS filtering in tested applications, but it
    will automatically enumerate and annotate the best subjects for manual
    inspection - and will offer the user the ability to feed this data to
    external programs, or modify and replay interesting requests on the
    fly. The proxy will also take note of any seemingly successful manual
    XSS attempts made by the user.

    In active testing mode, the proxy will go one step further and attempt
    a single-shot verification of XSS filtering mechanisms, carefully
    tweaking only those request parameters that truly need to be tested at
    the time (and carefully preserving XSRF tokens, and more).

  * HTTP and META redirectors. Redirectors, unless properly locked down,
    may be used without the owner's consent, which in some contexts may be
    seen as undesirable. Furthermore, in extreme cases, poorly implemented
    redirectors may open up cross-site scripting vectors in less common
    browsers.
    Ratproxy will take note of any redirectors observed for further testing.

  * A broad set of other security problems, such as alarming Javascript,
    OGNL, Java, SQL, file inclusion patterns, directory indexes, server
    errors, and so forth. Ratproxy will preselect particularly interesting
    candidates for further testing.

    Although in the initial beta, not all web technologies may necessarily
    be analyzed to the greatest extent possible, we intend to actively
    improve the tool based on your feedback.

  * Several additional, customizable classes of requests and responses
    useful in understanding the general security model of the application
    (file upload forms, POST requests, cookie setters, etc).

For a full list of individual issues reported, please see messages.list in
the source tarball.

------------------------------------------
What is the accuracy of reported findings?
------------------------------------------

Ratproxy usually fares very well with typical, rich, modern web
applications - that said, by virtue of operating in passive mode most of
the time, all the findings reported merely highlight areas of concern, and
are not necessarily indicative of actual security flaws. The information
gathered during a testing session should then be interpreted by a security
professional with a good understanding of the common problems and security
models employed in web applications.

Please keep in mind that the tool is still in beta, and you may run into
problems with technologies we had no chance to examine, or that were not a
priority at this time. Please contact the author to report any issues
encountered.

---------------------
How to run the proxy?
---------------------

  NOTE: Please do not be evil. Use ratproxy only against services you own,
  or have permission to test. Keep in mind that although the proxy is
  mostly passive and unlikely to cause disruptions, it is not stealth.
  Furthermore, the proxy is not designed for dealing with rogue and
  misbehaving HTTP servers and clients - and offers no guarantees of safe
  (or sane) behavior there.

Initiating ratproxy sessions is fairly straightforward, once an appropriate
set of runtime options is decided upon. Please familiarize yourself with
these settings, as they have a very significant impact on the quality of
produced reports.

The main binary, ./ratproxy, takes the following arguments:
