more_tweaking.txt
- Ken Peterson: >>>Speed Of A WWW Server Tue, 09 Jan 1996 16:41
- RE: Performance bottlenecks for WWW servers. Please consider the following:
-
- [This is a message I send to slow Web sites where I see evidence of
- *redundant data packets*. I have every reason to believe this is a big
- and widespread problem. Some sites, such as c|net and Point
- Communications, have changed the packet-retransmission interval to 1.0 or
- more seconds, with a huge improvement.]
- ***************
- Your <NAME> Web site appears to suffer from an important problem that
- slows it down and makes it demand much more bandwidth than the content
- requires: duplicate data packets, probably generated by your Web-server
- software or TCP implementation.
-
- BACKGROUND: Simultaneously with my browser (NetScape/Macintosh), I run a
- graphic indicator of TCP throughput that color-codes the data passing
- through: received = green, sent = yellow, errors = red.
- (Mac TCP Monitor 1.0b34.) On many sites, yours included, many red "error"
- bars are generated, especially when it begins to serve a page to me; the
- more complex the page (meaning more channels open, and more opening and
- closing of channels), the more persistent the errors. It's all visible at
- a glance, and, indeed, many sites generate more "red" data than valid
- "green" data!
-
- I use a separate program (Mac TCP Watcher) to determine the nature of
- these errors. (I was initially concerned that it had something to do with
- my equipment or my ISP.) The errors are reported as "duplicate packets,"
- not retransmissions of dropped packets. Sometimes the error rate exceeds
- 50%, meaning that for every packet of information sent, there are one or
- two superfluous *duplicates* generated. In other words, the data pipeline
- to me is full, but only 30-50% of the data can be used!
-
- Some sites generate no duplicates at all, irrespective of the Web-page
- complexity and the site's distance from me. Ftp sites seem to generate no
- duplicates. It seems entirely site-specific, independent of time-of-day,
- which ISP I use at my end, etc.
-
- POSSIBLE CAUSES: The webmaster at c|net.com <jr@cnet.com (Jonathan
- Rosenberg)> has suggested that the tcp-packet-retransmission interval (an
- adjustable parameter in the server O/S) is set too low: not enough time is
- being allowed for confirmation before rushing off another copy of the same
- packet. In real-time tests I conducted with Jonathan on zuni.cnet.com
- (they run Sun Solaris-UNIX), I received over 60% redundant packets when
- the interval was set to 0.2 sec (their original setting, and the Solaris
- default), but almost *none* when it was increased to 1.5 sec -- a huge
- improvement! It may be that the 200ms default is fine for local networks
- but totally inappropriate for distant PPP and SLIP asynchronous TCP
- transmissions.
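- 
- For instance, on a Solaris 2.x server you can check both the current
- minimum retransmission interval and the running retransmit counters from
- a shell. (This is a sketch of my own, assuming the stock ndd and netstat
- tools are present; these commands only read values, they change nothing.)
- 
-     # show the minimum TCP retransmission interval, in milliseconds
-     ndd /dev/tcp tcp_rexmit_interval_min
-     # show how many TCP segments and bytes have been retransmitted so far
-     netstat -s | grep -i retrans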
-
- Note: Even if the retransmission interval self-adjusts to a larger value,
- as it does in some versions of Solaris, it may take tens of seconds to do
- this; meanwhile you're putting out 40-60% duplicate packets. Each new
- access starts the adjustment over! So there is still an argument for
- setting the minimum value higher than a mere 200ms.
-
- (If you are not using Solaris, but your O/S has a similar
- packet-retransmission timing parameter, you might examine how it's set.
- Maybe this problem isn't peculiar to Solaris.)
-
- A less-likely cause: A savvy guy at an ISP said that he was aware of older
- Web-server software that was quite guilty of generating redundant data
- flow by not "letting go of routing (?) forks when they aren't needed" --
- or something similar. Some of the newer software apparently does not have
- this problem, or there may be an upgrade available for the software you're
- using now.
-
- A PLEA: Please, please, look into this! It makes your Web site behave
- poorly and uses inflated Net bandwidth. As use of the Internet increases
- without bounds, bandwidth conservation has become a front-burner issue:
- the use of intelligently-compressed graphics in Web pages, organizing
- sites so information is pulled by the user rather than shoveled wholesale,
- and so forth. This redundant-packet phenomenon is as significant a
- bandwidth waste as any other. Unless you look for it with a proper
- analyzer, it's invisible -- but still takes a big toll, adding up to a LOT
- of useless Net traffic. It's certainly in your immediate interest to
- attend to this.
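- 
- (If you want to see this for yourself on the server side, here is a rough
- sketch, assuming a Solaris machine with the bundled snoop tool; the client
- address below is a placeholder. Capture one page fetch and look for the
- same outgoing TCP sequence number appearing more than once -- each repeat
- is a duplicate packet.)
- 
-     # capture the HTTP traffic exchanged with one client into a file
-     snoop -o /tmp/www.cap host client.example.com and port 80
-     # dump the capture and list the sequence numbers; repeated values
-     # on outgoing segments mean duplicate packets were sent
-     snoop -i /tmp/www.cap -v | grep 'Sequence number'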
-
- REFERENCE: Document 1202-02 concerning slow response with TCP, published
- by Sun under their Solaris FAQs. It can be found at:
-
- http://access1.sun.com/cgi-bin/info2html?faqs/120202.faq
-
- A quote from this document:
-
- "The "ndd" function allows the tcp_rexmit_interval_min to be set. The
- /etc/rc2.d/S69inet script file is a good place to set a value. The
- default value is 200 and may be increased to 1000."
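- 
- In practice that is one line, run once by hand as root or placed in the
- /etc/rc2.d/S69inet script the document mentions. A minimal sketch, using
- the 1000 (ms) value the document suggests; the best value for your own
- site is something to measure rather than take from me:
- 
-     # raise the minimum TCP retransmission interval from 200ms to 1000ms
-     ndd -set /dev/tcp tcp_rexmit_interval_min 1000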
-
- PLEASE send me feedback on this matter. I hope I can have an effect by
- notifying you and others of this problem where it exists.
-
- Ken Peterson / Peterson TechSystems
- Portland, OR
- <kmp@xplor.com>
- voice: +1 503.452.8639, 9am-8pm PST (1700-0400 UT)
-
- "Any nitwit can understand computers. Many do"