BENCHMARKS

- See also SPEED

Update 2nd Nov 2001

ftp.redhat.com ran vsftpd for the RedHat 7.2 release. vsftpd achieved 4,000
concurrent users on a single machine with 1Gb RAM. Even with this insane user
count, bandwidth remained totally saturated. The user count could have been
higher, but the machine ran out of processes.

--

Below are some quick benchmark figures vs. wu-ftpd. This is an untuned BETA
version of vsftpd (0.0.10).

The executive summary is that wu-ftpd got a thorough thrashing. The most
telling statistic is wu-ftpd typically failing to sustain 400 users, whereas
vsftpd copes with 1000 with room to spare.

A 2.2.x kernel was used. A 2.4.x kernel should make vsftpd look even better
relative to wu-ftpd, thanks to the sendfile() boosts in 2.4.x. A 2.4.x kernel
with zerocopy should be amazing.

Many thanks to Andrew Anderson <andrew@redhat.com>

--

Here are some benchmarks that I did on vsftpd vs. wu-ftpd. The tests were
run with "dkftpbench -hftpserver -n500 -t600 -f/pub/dkftp/<file>". The
attached file is the summary output with the time to reach the steady-state
condition.

The interesting things I noticed are:

- In the raw test results, vsftpd had a much higher peak on the x10k.dat
  transfer run than wu-ftpd did. wu-ftpd peaked at ~150 connections and
  bled down to ~130 connections, while vsftpd peaked at ~400 connections
  and bled down to ~160 connections. I tend to believe the peaks more than
  the final steady-state that dkftpbench reports, though.

- For the other tests, our wu-ftpd setup was limited to 400 connections,
  but in about half of the x100k/x1000k runs it could not even sustain 400
  connections, while vsftpd handled 500 easily on those runs.

- During the peak runs at x10k, the machine load with vsftpd looked like
  this (I no longer have this data for the wu-ftpd runs); the columns are
  %user, %nice, %system and %idle:

  01:01:00 AM       all      4.92      0.00     21.23     73.85
  03:31:00 AM       all      4.89      0.00     19.53     75.58
  05:11:00 AM       all      4.19      0.00     16.89     78.92
  07:01:00 AM       all      5.61      0.00     22.47     71.92

  The steady-state loads were more in the 3-5% user, 10-15% system range.
  For the x100k/x1000k loads with vsftpd, the system load looked like this:

  x100k.dat:
  09:01:00 AM       all      2.27      0.00      9.79     87.94

  x1000k.dat:
  11:01:00 AM       all      0.42      0.00      5.75     93.83

  Not bad -- 500 concurrent users for ~7% system load.

- Just for kicks I ran the x1000k test with 1000 users. At peak load:

  x1000k.dat with 1000 users:
  04:41:00 PM       all      1.23      0.00     46.59     52.18

Based on what I'm seeing, it looks like if a server had enough bandwidth,
it could indeed sustain ~2000 users with the current 2-process model
that's implemented in vsftpd. I did notice that dkftpbench slowed down
the connection rate after 800 connections. I'm not sure if that was a
dkftpbench issue, or if I ran into some other limit.
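The ~2000-user estimate above follows from vsftpd's two-processes-per-session
layout: every connection costs two processes, so the kernel's per-user process
limit is one of the ceilings you hit first (the RedHat 7.2 box above literally
"ran out of processes"). The C fragment below is a hypothetical helper, not
part of vsftpd; it simply reads RLIMIT_NPROC and divides by two to estimate
that ceiling on a given machine.

/* Hypothetical helper (not vsftpd code): estimate how many concurrent
 * sessions a two-processes-per-session server can create before hitting
 * the per-user process limit. */
#include <stdio.h>
#include <sys/resource.h>

int main(void)
{
    struct rlimit rl;

    if (getrlimit(RLIMIT_NPROC, &rl) != 0) {
        perror("getrlimit");
        return 1;
    }
    if (rl.rlim_cur == RLIM_INFINITY) {
        printf("RLIMIT_NPROC is unlimited\n");
    } else {
        printf("soft process limit %llu -> roughly %llu sessions\n",
               (unsigned long long) rl.rlim_cur,
               (unsigned long long) (rl.rlim_cur / 2));
    }
    return 0;
}

Other limits (file descriptors, memory, the benchmark client itself) can of
course bite earlier, as the dkftpbench slowdown past 800 connections suggests.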
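On the sendfile() point made earlier: with sendfile(2) on Linux 2.4.x the
server hands the kernel a file descriptor and the data socket, and the kernel
moves the pages itself instead of bouncing them through a userspace buffer
with read()/write(). The sketch below shows that pattern for an
already-connected data socket; it is illustrative only and is not vsftpd's
actual transfer code.

/* Minimal sketch (not vsftpd's transfer path): push a whole file down a
 * connected data socket using sendfile(2). */
#include <sys/types.h>
#include <sys/sendfile.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <unistd.h>

static int send_whole_file(int data_sock, const char *path)
{
    int fd = open(path, O_RDONLY);
    if (fd < 0)
        return -1;

    struct stat st;
    if (fstat(fd, &st) < 0) {
        close(fd);
        return -1;
    }

    off_t offset = 0;
    while (offset < st.st_size) {
        /* The kernel copies directly from the page cache to the socket. */
        ssize_t sent = sendfile(data_sock, fd, &offset, st.st_size - offset);
        if (sent <= 0) {
            close(fd);
            return -1;  /* real code would distinguish EAGAIN etc. */
        }
    }
    close(fd);
    return 0;
}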
