
Re: slapd <defunct> frequently for a moment (ITS#522)



At 08:21 AM 5/3/00 GMT, Kurt@OpenLDAP.org wrote:
>That's LinuxThreads in action....
>	Kurt

That is, defunct processes (zombies) are just processes waiting
to be reaped by their parent.  The fact that you are seeing many
of these only implies that the parent(s) are busy.

Most threads in slapd are created by the "listener" thread.
It is quite normal for the listener thread to be busy.  In
particular, given the time of the apparent hang, I suspect
your DNS system is not well configured to support DNS
reverse lookups.  See the FAQ for how to disable DNS
reverse lookups.

Kurt

>At 12:25 PM 5/2/00 GMT, saurabhkothari@diskonnet.com wrote:
>>Full_Name: Saurabh Kothari
>>Version: 1.2.7
>>OS: RedHat Linux 6.1
>>URL: ftp://ftp.openldap.org/incoming/
>>Submission from: (NULL) (202.142.88.6)
>>
>>
>>We are using OpenLDAP with Livingstone RADIUS 2.2 for authentication of dial-up
>>users in an ISP setup. Other Netscape APIs also poll the LDAP server for user
>>information, such as online user creation, users changing their own passwords,
>>and online user account information.
>>
>>Previous bug reports about slapd <defunct> attributed the problem either to a
>>lack of space on /var or to a problem with Java cookie synchronization. Neither
>>is in doubt in our case: /var has 3.4 GB and we make no use of Java cookies.
>>
>>The problem we face is that slapd shows <defunct> for a moment and then goes
>>away after a random period of 15 to 60 seconds. After another random period of
>>1 to 5 minutes it shows the same thing again. A user trying to log in at that
>>moment is refused, and the RADIUS log says it cannot communicate with the LDAP
>>server. At the next attempt the user is allowed in, as the <defunct> process has
>>gone. This is happening so frequently that users sometimes get a "user already
>>logged in" error.
>>
>>The system condition is:
>>
>>There are already three slapd processes and four slurpd processes running on
>>the system
>>-----------------------------------------------------------------------------------
>>root      5305  0.0  0.2  4536 2732 ?        S    11:30   0:00 slapd
>>root      5306  0.0  0.2  4536 2732 ?        S    11:30   0:00 slapd
>>root      5307  0.0  0.2  4536 2732 ?        S    11:30   0:04 slapd
>>root      5319  0.0  0.0  1656  808 ?        S    11:30   0:00 slurpd
>>root      5320  0.0  0.0  1656  808 ?        S    11:30   0:00 slurpd
>>root      5321  0.0  0.0  1656  808 ?        S    11:30   0:00 slurpd
>>root      5322  0.0  0.0  1656  808 ?        S    11:30   0:00 slurpd  
>>----------------------------------------------------------------------------------
>>and then at the end it shows a slapd <defunct>
>>==================================================================================
>>root     18803  0.0  0.0     0    0 ?        Z    17:15   0:00 [slapd <defunct>
>>==================================================================================
>>
>>Sometimes the fourth slapd process comes up properly with R status, but at that
>>time the second slapd process also has an R status instead of S.
>>===================================================================================
>>root      5305  0.0  0.2  4536 2732 ?        S    11:30   0:00 slapd
>>root      5306  0.0  0.2  4536 2732 ?        R    11:30   0:00 slapd
>>root      5307  0.0  0.2  4536 2732 ?        S    11:30   0:04 slapd
>>root      5319  0.0  0.0  1656  808 ?        S    11:30   0:00 slurpd
>>root      5320  0.0  0.0  1656  808 ?        S    11:30   0:00 slurpd
>>root      5321  0.0  0.0  1656  808 ?        S    11:30   0:00 slurpd
>>root      5322  0.0  0.0  1656  808 ?        S    11:30   0:00 slurpd          
>>-----------------------------------------------------------------------------------
>>root     18961  0.0  0.2  4536 2732 ?        R    17:19   0:00 slapd   
>>===================================================================================
>>
>>We have set loglevel 256 in slapd.conf to trace the problem;
>>some of the output is as follows:
>>
>>===================================================================================
>>May  2 17:39:04 radblr slapd[19878]: conn=1861 op=0 BIND dn="UID=HELPDEL,OU=PEOPLE,DC=ZEEACCESS,DC=COM" method=128
>>May  2 17:39:04 radblr slapd[19878]: conn=1861 op=0 RESULT err=49 tag=97 nentries=0
>>May  2 17:39:05 radblr slapd[19879]: conn=1861 op=1 BIND dn="UID=HELPDEL,OU=PEOPLE,DC=ZEEACCESS,DC=COM" method=128
>>May  2 17:39:05 radblr slapd[19879]: conn=1861 op=1 RESULT err=49 tag=97 nentries=0
>>May  2 17:39:05 radblr slapd[19880]: conn=1861 op=2 UNBIND
>>May  2 17:39:05 radblr slapd[19880]: conn=1861 op=2 fd=28 closed errno=0
>>-----------------------------------------------------------------------------------
>>May  2 17:49:31 radblr slapd[20247]: conn=1907 op=0 BIND dn="UID=IMPULSE,OU=PEOPLE,DC=ZEEACCESS,DC=COM" method=128
>>May  2 17:49:31 radblr slapd[20247]: conn=1907 op=0 RESULT err=48 tag=97 nentries=0
>>May  2 17:49:32 radblr slapd[20248]: conn=1907 op=1 BIND dn="UID=IMPULSE,OU=PEOPLE,DC=ZEEACCESS,DC=COM" method=128
>>May  2 17:49:32 radblr slapd[20248]: conn=1907 op=1 RESULT err=48 tag=97 nentries=0
>>May  2 17:49:33 radblr slapd[20249]: conn=1907 op=2 UNBIND
>>May  2 17:49:33 radblr slapd[20249]: conn=1907 op=2 fd=23 closed errno=0
>>May  2 17:49:59 radblr slapd[20253]: conn=5 op=3641 SRCH base="DC=ZEEACCESS,DC=COM" scope=2 filter="(uid=TELERATE)"
>>May  2 17:49:59 radblr slapd[20253]: conn=5 op=3641 RESULT err=0 tag=101 nentries=1
>>===================================================================================
>>
>>One observation from my end: whenever slapd tries to spawn the fourth process,
>>it goes <defunct>, so the process ID of the <defunct> process keeps
>>continuously changing (incrementing).
>>
>>If more information or the full log is required, kindly let me know.
>>
>>Saurabh Kothari
>>
>>
>>
>
>