
I think there's a bug with p->sasl_maxbuf in cyrus.c (ITS#2770)



Full_Name: David Abouav
Version: 2.1.22
OS: FreeBSD
URL: ftp://ftp.openldap.org/incoming/
Submission from: (NULL) (66.121.131.203)


The best description can be found in the emails I posted to the newsgroup.
I've reproduced them here:

--- first email, sent on 10/15/03

Hi All,

I think there's a bug with p->sasl_maxbuf in cyrus.c.
I'll explain what I think the bug is first, and then
explain how I came across it afterward.

In sb_sasl_setup of cyrus.c, the variable
p->sasl_maxbuf is set like this:

  sasl_getprop( p->sasl_context, SASL_MAXOUTBUF,
               (SASL_CONST void **) &p->sasl_maxbuf );

And it's then later used in sb_sasl_read like this:

  /* The new packet always starts at
     p->sec_buf_in.buf_base
   */
  ret = sb_sasl_pkt_length( p->sec_buf_in.buf_base,
                            *p->sasl_maxbuf,
                            sbiod->sbiod_sb->sb_debug );

  /* Grow the packet buffer if necessary */
  if ( ( p->sec_buf_in.buf_size < (ber_len_t) ret ) &&
       ber_pvt_sb_grow_buffer( &p->sec_buf_in, ret ) < 0 )
    {
      errno = ENOMEM;
      return -1;
    }

In other words, it's used to limit the length of the
packets a client can read from a server. I think that
using this buffer size value for that purpose is
incorrect.

The sasl_getprop call returns the value of the SASL
variable p->sasl_context->oparams.maxoutbuf.
oparams.maxoutbuf is the maximum data that can be fed
into the SASL layer for encoding. In other words, the
maximum amount of data that can be *sent to the
server* by a client. However, here in sb_sasl_read
it's instead being used to set a limit on the maximum
packet size we can *read from the server*. 
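
To illustrate, the read path only needs to grow its own
receive buffer to fit whatever arrives; the negotiated
maxoutbuf plays no role. Here is a minimal sketch of that
logic, using hypothetical stand-ins (recv_buf, grow_buffer,
accept_packet_len) for OpenLDAP's Sockbuf_Buf and
ber_pvt_sb_grow_buffer(); this is not the actual cyrus.c
code:

```c
#include <assert.h>
#include <errno.h>
#include <stdlib.h>

/* Hypothetical stand-in for OpenLDAP's Sockbuf_Buf. */
struct recv_buf {
    char   *base;
    size_t  size;
};

/* Hypothetical stand-in for ber_pvt_sb_grow_buffer(). */
static int grow_buffer( struct recv_buf *b, size_t need )
{
    char *p = realloc( b->base, need );
    if ( p == NULL )
        return -1;
    b->base = p;
    b->size = need;
    return 0;
}

/* Accept any well-formed packet length by growing the local
 * buffer, rather than rejecting packets larger than the
 * negotiated maxoutbuf, which only bounds what we may *send*
 * to the peer. */
static int accept_packet_len( struct recv_buf *b, size_t pkt_len )
{
    if ( b->size < pkt_len && grow_buffer( b, pkt_len ) < 0 ) {
        errno = ENOMEM;
        return -1;
    }
    return 0;
}
```

The point is that the only size the reader has to respect is
its own allocation, which in OpenLDAP is malloced and
growable anyway.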

I base all of this on the following code in Cyrus Sasl
2.1.15:

function gssapi_client_mech_step in plugins/gssapi.c:

  oparams->maxoutbuf =
   (((unsigned char *) output_token->value)[1] << 16) |
   (((unsigned char *) output_token->value)[2] << 8)  |
   (((unsigned char *) output_token->value)[3] << 0);

where output_token has been unwrapped from a server
response. To further back up my claim, the only place
in the entire SASL code where oparams->maxoutbuf is
referenced is in the function sasl_encodev, which is a
function used for encoding data to be sent (in this
case, sent to the server). In that function,
oparams->maxoutbuf is checked to make sure that not
too much data is being encoded (i.e. more than the
server advertised it could handle).
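
For contrast, here is the role maxoutbuf legitimately plays
on the write path, sketched as a simple chunking loop. This
is only a simplified illustration of what sasl_encodev's
check amounts to, not the actual Cyrus code, and
send_chunked and encode_fn are names of my own invention:

```c
#include <assert.h>
#include <stddef.h>

typedef void (*encode_fn)( const char *data, size_t len );

/* Split an outgoing buffer into pieces no larger than the
 * peer's advertised maxoutbuf, calling encode() once per
 * piece.  This mirrors the limit enforced on the *write*
 * path; nothing on the read path needs it.  Returns the
 * number of chunks encoded. */
static size_t send_chunked( const char *data, size_t len,
                            size_t maxoutbuf, encode_fn encode )
{
    size_t nchunks = 0;
    while ( len > 0 ) {
        size_t piece = len < maxoutbuf ? len : maxoutbuf;
        encode( data, piece );
        data += piece;
        len  -= piece;
        nchunks++;
    }
    return nchunks;
}
```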

Incidentally, why is there a need for incoming data to
be limited to any negotiated size, since the code just
below the call to sb_sasl_pkt_length grows the local
buffer anyhow?

Am I way off base here, or am I correct that the code
in cyrus.c is using the wrong buffer size to
unnecessarily limit the packets read from the server?

Here's how I discovered this:

I have an LDAP client that talks to Windows domain
controllers using GSSAPI authentication and
signing/sealing. It worked fine against Windows 2000
domain controllers, but then broke against Windows
2003 DCs (in fact, it caused LSASS to die on the DC
and the DC to then reboot). So I tried upgrading to
use Cyrus 2.1.15. This fixed all problems with 2003,
but now Windows 2000 no longer worked (though it
didn't crash).

I found the problem to be in plugins/gssapi.c. The
code which reads the oparams->maxoutbuf used to look
like this in Cyrus 2.1.4:
  oparams->maxoutbuf =
    (*(unsigned long *)output_token->value & 0xFFFFFF);

but in Cyrus 2.1.15, looks like this (same as above):

  oparams->maxoutbuf =
   (((unsigned char *) output_token->value)[1] << 16)|
   (((unsigned char *) output_token->value)[2] << 8) |
   (((unsigned char *) output_token->value)[3] << 0);

Also, I found out that the output_token->value sent
from the server is different for Windows 2000 versus
2003:
  Windows 2000: 00 00 a0 07
  Windows 2003: 00 40 00 07

Note that byte 0 (0x07) is not part of the maxoutbuf.
So it seems that at some point the SASL folks realized
that they were incorrectly parsing this field, and in
Windows 2003 the Microsoft folks realized that they
were incorrectly sending this field.
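
The two parses can be compared directly. In this sketch the
bytes shown above are taken as value[0]..value[3], and the
old cast is written out as explicit little-endian byte
arithmetic, since *(unsigned long *) depended on host byte
order (little-endian x86 here). This only shows what each
expression computes, and the helper names are mine:

```c
#include <assert.h>
#include <stdint.h>

/* Old Cyrus 2.1.4 parse: a (little-endian) 32-bit load of
 * the first four bytes, masked down to 24 bits. */
static uint32_t old_maxbuf_parse( const unsigned char *v )
{
    uint32_t word = (uint32_t)v[0]         | ((uint32_t)v[1] << 8) |
                    ((uint32_t)v[2] << 16) | ((uint32_t)v[3] << 24);
    return word & 0xFFFFFF;
}

/* New Cyrus 2.1.15 parse: bytes 1..3 as a big-endian 24-bit
 * value, skipping byte 0. */
static uint32_t new_maxbuf_parse( const unsigned char *v )
{
    return ((uint32_t)v[1] << 16) | ((uint32_t)v[2] << 8) |
           ((uint32_t)v[3] << 0);
}
```

Applied to the two tokens above, the expressions select
different 24-bit fields from the same four bytes, which is
why the same server can yield wildly different negotiated
buffer sizes under the two Cyrus versions.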

In either case, the incorrect parsing of the field
resulted in a server buffer size which was lower than
the packet sizes that I was actually receiving from
the server. I didn't think that this should really
matter, since I send very small packets (requests) to
the server anyway. Researching this problem led me to
this finding in the OpenLDAP code.

------ end of first email

------ first response, sent by Howard Chu

Perhaps there's a bug here. The Cyrus code has changed enough times that we
may have missed something. But the SASL RFCs (see RFC 2222 and 2831) specify
that both the client and the server send each other a maxbuf value, and I
presume that we have to honor it. A comment was made at one time that the
SASL library ensures this itself, so perhaps we can remove those checks. I've
found that relying on the Cyrus SASL library to Do The Right Thing has often
led to frustration... Maybe you should submit a new ITS for this if you
actually want someone to investigate it.

  -- Howard Chu

------ next response, sent by myself

What you said about the max buffer sizes is true, but
it should only ever apply to writes, and not reads.

The server advertises the most data it can receive,
which means that SASL and/or OpenLDAP needs to ensure
that it doesn't write more than this when it's used as
a client (i.e. ldapsearch performing a query).

The client also advertises how much data it can
receive, which means that SASL and/or OpenLDAP needs
to ensure that it doesn't write more than this when
it's used as a server (i.e. the slapd LDAP server
writing a response).

The code in sb_sasl_read should not care about any
buffer sizes except those needed to store the data
read. In OpenLDAP's case, this is malloced anyway.

What is an ITS and how do I submit one?

Dave

----- next response, sent by Quanah Gibson-Mount

http://www.openldap.org/its/

It is the Issue Tracking System.

--Quanah 

----- next response, sent by myself

Thanks.

Incidentally, the correct way to check for SASL's read
buffer size is with a call to:

  ldap_int_sasl_get_option( ld, LDAP_OPT_X_SASL_MAXBUFSIZE, &value );

Unfortunately, doing this the proper way will break
Windows 2000 compatibility if you're using the latest
Cyrus-SASL (2.1.15). Why? I just noticed that my
response packets from a Windows 2000 DC were sometimes
quite large (~272KB). This is despite the fact that
the SASL client maxbufsize (advertised to the server)
is set by the OpenLDAP code by default to 65KB.
Windows 2003 is better behaved, and never returns
responses bigger than this.

By a miracle though, everything still worked with
Windows 2000 and SASL 2.1.4 up until now because the
OpenLDAP code is using the wrong buffer size to check
the size of the server response with. It's using the
buffer size of how much we can write to the server,
which is generally very large (> 4MB). But using the
newer SASL against a 2000 DC results in this server
buffer size being very small (~16KB) due to the flag
misinterpretation I mentioned before. In other words,
SNAFU.