
Re: Setting TCP_NODELAY on TCP sockets

Gaël Roualland wrote:
> Hello,
> I am setting up a FreeBSD LDAP server and I was experiencing very poor
> performance (5 ldap_search/sec using only one connection and a small
> database) when querying the LDAP server (both remotely and locally). I
> noticed, however, that I could get better performance (around 30 req/s)
> running my test client on a Linux system.
> After some experimentation (tcpdumping and so on) it turned out that the
> slow performance was due to the IP packets between the server and the
> client being smaller than the MSS, so each packet was not sent until
> the previous one was ACKed. (Thus the difference between FreeBSD and
> Linux came from the difference in ACK delay in the TCP stacks of the
> two systems.) Setting TCP_NODELAY on the socket in both the server and
> the client solves the problem by disabling the Nagle algorithm, and
> improves performance a lot (up to 300 req/s now).
> I enclose a little patch I wrote to set TCP_NODELAY (for 1.2.4). Do
> you think this could be included in the main distribution?
> Gaël.

Systems where TCP_NODELAY defaults to FALSE (i.e. Nagle on) are
broken.  The vast majority of TCP sockets will always be for
bulk/batch connections.  The burden should fall on protocols
that *don't* transfer large blocks, but rather can benefit from
Nagle, to enable it explicitly (e.g. X and telnet/rlogin).