<dl> <dt>Description:</dt> <dd> <p>A typical FAQ about URL rewriting is how to redirect failing requests on webserver A to webserver B. Usually this is done via <code class="directive"><a href="../mod/core.html#errordocument">ErrorDocument</a></code> CGI scripts in Perl, but there is also a <code class="module"><a href="../mod/mod_rewrite.html">mod_rewrite</a></code> solution. But note that this performs more poorly than using an <code class="directive"><a href="../mod/core.html#errordocument">ErrorDocument</a></code> CGI script!</p> </dd> <dt>Solution:</dt> <dd> <p>The first solution has the best performance but less flexibility, and is less safe:</p><div class="example"><pre>RewriteEngine on
RewriteCond   /your/docroot/%{REQUEST_FILENAME} <strong>!-f</strong>
RewriteRule   ^(.+)                             http://<strong>webserverB</strong>.dom/$1</pre></div> <p>The problem here is that this will only work for pages inside the <code class="directive"><a href="../mod/core.html#documentroot">DocumentRoot</a></code>. While you can add more Conditions (for instance to also handle homedirs, etc.) there is a better variant:</p><div class="example"><pre>RewriteEngine on
RewriteCond   %{REQUEST_URI} <strong>!-U</strong>
RewriteRule   ^(.+)          http://<strong>webserverB</strong>.dom/$1</pre></div> <p>This uses the URL look-ahead feature of <code class="module"><a href="../mod/mod_rewrite.html">mod_rewrite</a></code>. The result is that this will work for all types of URLs and is safe. But it does have a performance impact on the web server, because for every request there is one more internal subrequest. So, if your web server runs on a powerful CPU, use this one. 
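</p> <p>Outside Apache, the decision the first ruleset makes can be sketched as a small Python function. This is only an illustration of the <code>!-f</code> file test, not Apache code; <code>webserverB.dom</code> is the placeholder hostname from the rules above:</p><div class="example"><pre>import os.path

def fallback_target(docroot, path):
    """Simulate: RewriteCond /your/docroot/%{REQUEST_FILENAME} !-f
    Return None if the file exists locally (serve it ourselves),
    otherwise the redirect URL on webserver B."""
    local = os.path.join(docroot, path.lstrip("/"))
    if os.path.isfile(local):
        return None
    return "http://webserverB.dom/" + path.lstrip("/")</pre></div> <p>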
If it is a slow machine, use the first approach or better an <code class="directive"><a href="../mod/core.html#errordocument">ErrorDocument</a></code> CGI script.</p> </dd> </dl> </div><div class="top"><a href="#page-header"><img alt="top" src="../images/up.gif" /></a></div><div class="section"><h2><a name="archive-access-multiplexer" id="archive-access-multiplexer">Archive Access Multiplexer</a></h2> <dl> <dt>Description:</dt> <dd> <p>Do you know the great CPAN (Comprehensive Perl Archive Network) under <a href="http://www.perl.com/CPAN">http://www.perl.com/CPAN</a>? CPAN automatically redirects browsers to one of many FTP servers around the world (generally one near the requesting client); each server carries a full CPAN mirror. This is effectively an FTP access multiplexing service. CPAN runs via CGI scripts, but how could a similar approach be implemented via <code class="module"><a href="../mod/mod_rewrite.html">mod_rewrite</a></code>?</p> </dd> <dt>Solution:</dt> <dd> <p>First we notice that as of version 3.0.0, <code class="module"><a href="../mod/mod_rewrite.html">mod_rewrite</a></code> can also use the "<code>ftp:</code>" scheme on redirects. And second, the location approximation can be done by a <code class="directive"><a href="../mod/mod_rewrite.html#rewritemap">RewriteMap</a></code> over the top-level domain of the client. 
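</p> <p>At its core the multiplexing map is just a keyed lookup with a fallback default. A minimal Python simulation (illustration only, not Apache configuration; the hostnames are the sample entries from this section's map, and <code>ftp.default.dom</code> is the fallback named in the ruleset):</p><div class="example"><pre># Simulate ${multiplex:tld|ftp.default.dom}: look up the client's
# top-level domain, falling back to the default when it is unknown.
CXAN_MAP = {
    "de":  "ftp://ftp.cxan.de/CxAN/",
    "uk":  "ftp://ftp.cxan.uk/CxAN/",
    "com": "ftp://ftp.cxan.com/CxAN/",
}

def multiplex(tld, mapping=CXAN_MAP, default="ftp.default.dom"):
    return mapping.get(tld, default)</pre></div> <p>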
With a tricky chained ruleset we can use this top-level domain as a key to our multiplexing map.</p><div class="example"><pre>RewriteEngine on
RewriteMap    multiplex                txt:/path/to/map.cxan
RewriteRule   ^/CxAN/(.*)              %{REMOTE_HOST}::$1                 [C]
RewriteRule   ^.+\.<strong>([a-zA-Z]+)</strong>::(.*)$  ${multiplex:<strong>$1</strong>|ftp.default.dom}$2  [R,L]</pre></div><div class="example"><pre>##
##  map.cxan -- Multiplexing Map for CxAN
##

de        ftp://ftp.cxan.de/CxAN/
uk        ftp://ftp.cxan.uk/CxAN/
com       ftp://ftp.cxan.com/CxAN/
 :
##EOF##</pre></div> </dd> </dl> </div><div class="top"><a href="#page-header"><img alt="top" src="../images/up.gif" /></a></div><div class="section"><h2><a name="browser-dependent-content" id="browser-dependent-content">Browser Dependent Content</a></h2> <dl> <dt>Description:</dt> <dd> <p>At least for important top-level pages it is sometimes necessary to provide optimal browser-dependent content, i.e., one version for current browsers, a different version for Lynx and other text-mode browsers, and another for all remaining browsers.</p> </dd> <dt>Solution:</dt> <dd> <p>We cannot use content negotiation because the browsers do not provide their type in that form. Instead we have to act on the HTTP header "User-Agent". The following config does the following: If the HTTP header "User-Agent" begins with "Mozilla/3", the page <code>foo.html</code> is rewritten to <code>foo.NS.html</code> and the rewriting stops. If the browser is "Lynx" or "Mozilla" of version 1 or 2, the URL becomes <code>foo.20.html</code>. All other browsers receive page <code>foo.32.html</code>. 
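</p> <p>This first-match-wins dispatch can be sketched as a plain Python function (a simulation of the selection logic only, not Apache configuration; the regular expressions mirror the conditions of the ruleset):</p><div class="example"><pre>import re

def select_page(user_agent):
    """Pick a page variant by User-Agent; the first matching rule wins."""
    if re.match(r"Mozilla/3", user_agent):
        return "foo.NS.html"
    if re.match(r"Lynx/", user_agent) or re.match(r"Mozilla/[12]", user_agent):
        return "foo.20.html"
    return "foo.32.html"</pre></div> <p>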
This is done with the following ruleset:</p><div class="example"><pre>RewriteCond %{HTTP_USER_AGENT}  ^<strong>Mozilla/3</strong>.*
RewriteRule ^foo\.html$         foo.<strong>NS</strong>.html          [<strong>L</strong>]
RewriteCond %{HTTP_USER_AGENT}  ^<strong>Lynx/</strong>.*    [OR]
RewriteCond %{HTTP_USER_AGENT}  ^<strong>Mozilla/[12]</strong>.*
RewriteRule ^foo\.html$         foo.<strong>20</strong>.html          [<strong>L</strong>]
RewriteRule ^foo\.html$         foo.<strong>32</strong>.html          [<strong>L</strong>]</pre></div> </dd> </dl> </div><div class="top"><a href="#page-header"><img alt="top" src="../images/up.gif" /></a></div><div class="section"><h2><a name="dynamic-mirror" id="dynamic-mirror">Dynamic Mirror</a></h2> <dl> <dt>Description:</dt> <dd> <p>Assume there are nice web pages on remote hosts we want to bring into our namespace. For FTP servers we would use the <code>mirror</code> program which actually maintains an explicit up-to-date copy of the remote data on the local machine. For a web server we could use the program <code>webcopy</code> which runs via HTTP. But both techniques have a major drawback: The local copy is always only as up-to-date as the last time we ran the program. It would be much better if the mirror was not a static one we have to establish explicitly. 
Instead we want a dynamic mirror with data which gets updated automatically as needed on the remote host(s).</p> </dd> <dt>Solution:</dt> <dd> <p>To provide this feature we map the remote web page or even the complete remote web area to our namespace by the use of the <dfn>Proxy Throughput</dfn> feature (flag <code>[P]</code>):</p><div class="example"><pre>RewriteEngine on
RewriteBase   /~quux/
RewriteRule   ^<strong>hotsheet/</strong>(.*)$  <strong>http://www.tstimpreso.com/hotsheet/</strong>$1  [<strong>P</strong>]</pre></div><div class="example"><pre>RewriteEngine on
RewriteBase   /~quux/
RewriteRule   ^<strong>usa-news\.html</strong>$  <strong>http://www.quux-corp.com/news/index.html</strong>  [<strong>P</strong>]</pre></div> </dd> </dl> </div><div class="top"><a href="#page-header"><img alt="top" src="../images/up.gif" /></a></div><div class="section"><h2><a name="reverse-dynamic-mirror" id="reverse-dynamic-mirror">Reverse Dynamic Mirror</a></h2> <dl> <dt>Description:</dt> <dd>...</dd> <dt>Solution:</dt> <dd><div class="example"><pre>RewriteEngine on
RewriteCond   /mirror/of/remotesite/$1 -U
RewriteRule   ^http://www\.remotesite\.com/(.*)$ /mirror/of/remotesite/$1</pre></div> </dd> </dl> </div><div class="top"><a href="#page-header"><img alt="top" src="../images/up.gif" /></a></div><div class="section"><h2><a name="retrieve-missing-data" id="retrieve-missing-data">Retrieve Missing Data from Intranet</a></h2> <dl> <dt>Description:</dt> <dd> <p>This is a tricky way of virtually running a corporate (external) Internet web server (<code>www.quux-corp.dom</code>), while actually keeping and maintaining its data on an (internal) Intranet web server (<code>www2.quux-corp.dom</code>) which is protected by a firewall. 
The trick is that the external web server retrieves the requested data on-the-fly from the internal one.</p> </dd> <dt>Solution:</dt> <dd> <p>First, we must make sure that our firewall still protects the internal web server and only the external web server is allowed to retrieve data from it. On a packet-filtering firewall, for instance, we could configure a firewall ruleset like the following:</p><div class="example"><pre><strong>ALLOW</strong> Host www.quux-corp.dom Port >1024 --> Host www2.quux-corp.dom Port <strong>80</strong>
<strong>DENY</strong>  Host *                 Port *     --> Host www2.quux-corp.dom Port <strong>80</strong></pre></div> <p>Just adjust it to your actual configuration syntax. Now we can establish the <code class="module"><a href="../mod/mod_rewrite.html">mod_rewrite</a></code> rules which request the missing data in the background through the proxy throughput feature:</p><div class="example"><pre>RewriteRule ^/~([^/]+)/?(.*)          /home/$1/.www/$2
RewriteCond %{REQUEST_FILENAME}       <strong>!-f</strong>
RewriteCond %{REQUEST_FILENAME}       <strong>!-d</strong>
RewriteRule ^/home/([^/]+)/.www/?(.*) http://<strong>www2</strong>.quux-corp.dom/~$1/pub/$2 [<strong>P</strong>]</pre></div> </dd> </dl> </div><div class="top"><a href="#page-header"><img alt="top" src="../images/up.gif" /></a></div><div class="section"><h2><a name="load-balancing" id="load-balancing">Load Balancing</a></h2> <dl> <dt>Description:</dt> <dd> <p>Suppose we want to load balance the traffic to <code>www.example.com</code> over <code>www[0-5].example.com</code> (a total of 6 servers). How can this be done?</p> </dd> <dt>Solution:</dt> <dd> <p>There are many possible solutions for this problem. We will first discuss a common DNS-based method, and then one based on <code class="module"><a href="../mod/mod_rewrite.html">mod_rewrite</a></code>:</p> <ol> <li> <strong>DNS Round-Robin</strong> <p>The simplest method for load-balancing is to use DNS round-robin. 
Here you just configure <code>www[0-5].example.com</code> as usual in your DNS with A (address) records, e.g.,</p><div class="example"><pre>www0   IN  A  1.2.3.1
www1   IN  A  1.2.3.2
www2   IN  A  1.2.3.3
www3   IN  A  1.2.3.4
www4   IN  A  1.2.3.5
www5   IN  A  1.2.3.6</pre></div> <p>Then you additionally add the following entries:</p><div class="example"><pre>www    IN  A  1.2.3.1
www    IN  A  1.2.3.2
www    IN  A  1.2.3.3
www    IN  A  1.2.3.4
www    IN  A  1.2.3.5
www    IN  A  1.2.3.6</pre></div> <p>Now when <code>www.example.com</code> gets resolved, <code>BIND</code> gives out <code>www0-www5</code> - but in a permuted (rotated) order every time. This way the clients are spread over the various servers. But notice that this is not a perfect load balancing scheme, because DNS resolutions are cached by clients and other nameservers, so once a client has resolved <code>www.example.com</code> to a particular <code>wwwN.example.com</code>, all its subsequent requests will continue to go to the same IP (and thus a single server), rather than being distributed across the other available servers.</p>
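<p>The rotation <code>BIND</code> applies can be simulated in Python to see why fresh resolutions hit different servers while a cached answer keeps pinning one. This is a sketch of the observable behaviour only, not of BIND itself; the addresses are the sample A records above:</p><div class="example"><pre>from collections import deque

# Sample A records for www.example.com, as configured above.
ADDRESSES = deque(["1.2.3.1", "1.2.3.2", "1.2.3.3",
                   "1.2.3.4", "1.2.3.5", "1.2.3.6"])

def resolve(records):
    """Return the current record ordering, then rotate it by one so the
    next uncached query is answered with a different first address."""
    answer = list(records)
    records.rotate(-1)
    return answer</pre></div> <p>A client that caches its first answer keeps using that answer's first address, which is exactly the pinning effect described above.</p>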