
(ITS#8226) long searches are bad for back-mdb



Full_Name: Howard Chu
Version: 2.4
OS: 
URL: ftp://ftp.openldap.org/incoming/
Submission from: (NULL) (70.87.222.79)
Submitted by: hyc


Long-lived read transactions prevent old pages from being reused in LMDB. We've
seen a situation triggered by the syncprov overlay that aggravates this problem:
a consumer connects and performs a syncrepl refresh. Its sync cookie is too
old for the syncprov sessionlog to be used, so syncprov must generate a
presentlist of all of the entryUUIDs in the database. This is a very long search
(in this case, around 1.5 million objects), and writes continue while it runs.
Moreover, multiple consumers repeat this at staggered times (and due to the
situation described in ITS#8225 they all tend to reconnect at closely spaced
times).

Because of these long read transactions the incoming writes are forced to use
new database pages, and so the DB file size grows drastically during these
refreshes.
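The growth mechanism can be illustrated with a toy model of copy-on-write page
reuse with reader pinning (the class and method names here are illustrative,
not LMDB internals): a page freed by a write can only be recycled once no open
read snapshot predates the transaction that freed it, so a long-lived reader
forces every subsequent write to allocate fresh pages.

```python
class ToyMDB:
    """Simplified model of copy-on-write page reuse with reader pinning."""

    def __init__(self):
        self.txn_id = 0        # last committed write transaction
        self.file_pages = 0    # pages ever allocated, i.e. the file size
        self.free_list = []    # (freed_in_txn, page_no) pairs
        self.readers = []      # snapshot txn ids of open read txns

    def oldest_reader(self):
        # With no readers open, nothing pins old pages.
        return min(self.readers, default=self.txn_id)

    def begin_read(self):
        self.readers.append(self.txn_id)
        return self.txn_id

    def end_read(self, snap):
        self.readers.remove(snap)

    def write_one_page(self):
        """Commit a write that rewrites one page copy-on-write."""
        self.txn_id += 1
        # Recycle a page only if it was freed no later than the oldest
        # snapshot still open; otherwise the reader may still need it.
        for i, (freed_in, page) in enumerate(self.free_list):
            if freed_in <= self.oldest_reader():
                new_page = page
                self.free_list.pop(i)
                break
        else:
            self.file_pages += 1          # nothing reusable: grow the file
            new_page = self.file_pages
        # The old copy of the rewritten page becomes free as of this txn.
        self.free_list.append((self.txn_id, new_page))
```

In this model, 100 writes with no reader open leave the file at a single page
(each write recycles the page freed by the previous one), while the same 100
writes with one read snapshot held open grow the file by 100 pages, which is
the behavior described above.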

This specific case for syncprov can be mitigated by using a much larger syncprov
sessionlog (and in the future, by enhancing syncprov to use accesslog), but
there are many other instances when clients may legitimately issue searches that
span the entire database. The fix from ITS#7904 should be extended to release
and reacquire the read transaction every N entries to make large searches
friendlier to write traffic.

The default value of N probably should not be zero, but a value such as 1000
would work: with the default sizelimit of 500, that won't affect the majority
of normal search requests.
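The effect of the proposed fix can be sketched with a toy interleaving (the
one-write-per-entry pacing and the function name are assumptions for
illustration, not measurements): releasing and reacquiring the read
transaction every N entries, as mdb_txn_reset()/mdb_txn_renew() permit,
bounds the number of write transactions whose old pages the search pins.

```python
def max_pinned_txns(total_entries, reset_every):
    """Worst-case number of write txns pinned by one long search.

    Assumes one concurrent write commits per entry returned, and that
    resetting the read txn (cf. mdb_txn_reset()/mdb_txn_renew()) moves
    the search onto a fresh snapshot.
    """
    writer_txn = 0       # id of the latest committed write
    snapshot = 0         # write txn the search's read txn pins
    worst = 0
    for entry in range(1, total_entries + 1):
        writer_txn += 1  # a write commits while the search runs
        worst = max(worst, writer_txn - snapshot)
        if reset_every and entry % reset_every == 0:
            snapshot = writer_txn   # release + reacquire the read txn
    return worst
```

With the 1.5 million entry refresh from the report, a never-reset read txn
pins all 1,500,000 intervening writes, while N = 1000 caps the pinned span at
1000, and searches under the 500-entry default sizelimit never reset at all.
In real code the search cursor would also have to be renewed and repositioned
(e.g. via mdb_cursor_renew() and a range lookup) after each reacquire.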