Re: [Fwd: Re: Paged Results and limits]
At 07:43 AM 4/9/2004, Pierangelo Masarati wrote:
>Kurt D. Zeilenga wrote:
>>I do not think the paged results mechanism should be
>>viewed as a mechanism for going around server-side limits.
>>It should viewed as a flow-control mechanism.
>OK, so you suggest that the total entry count of pagedresults be
>equal to hard limits.
Well, I think that limits which apply to the non-paged result
set should also apply to the paged result set, not to individual
pages of that set. By default, the same limit values which apply
to non-paged result sets should apply to paged result sets.
Paged-specific limit values can override these (but follow the
same semantics).
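As a sketch of the defaulting just described, the effective limit for a paged search can be resolved like this (the helper name and signature are hypothetical, for illustration only; this is not actual slapd code):

```python
def effective_limit(hard_limit, paged_limit=None):
    """Return the size limit to enforce on a paged search.

    By default, the limit is the same hard limit that applies to
    non-paged searches; an explicitly configured paged-results
    limit overrides it, following the same semantics.
    ("Unlimited" sentinel handling is omitted for brevity.)
    """
    return paged_limit if paged_limit is not None else hard_limit
```

For example, with a hard limit of 500 and no paged-specific limit configured, `effective_limit(500)` yields 500; configuring a paged limit of 100 yields 100.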
>>If the server limits a client search operation to N entries
>>and M candidates, these limits apply whether the results
>>are paged or not.
>>Likewise, when a client provided size limit, I think this
>>should apply to the total result set as well. This makes
>>sense because the client is required to provide the size
>>limit up-front (on the first operation) and is not allowed
>>to change it on page requests.
>If the requested entry number is lower than the page size,
>this already happens, but it's a trivial case. I can make it work
>also the other way, e.g. do not return more than the requested
>entries as a total count.
It makes little sense to request a size limit which is
smaller than the page size; if a client does so, then
no paging occurs.
But if the client asks for a size limit of 15 with pages of
10, the client should get no more than 15 entries (10 on the
first page, 5 on the second).
The client is allowed to vary the page size per page request,
but the size limit is constant and applies to the whole result
set.
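A minimal sketch of these semantics (a constant size limit enforced across pages by a running count, as in the 15-entries-on-pages-of-10 example above); the generator is a hypothetical illustration, not the actual slapd implementation:

```python
def paged_search(entries, size_limit, page_size):
    """Yield pages of results, enforcing size_limit across the
    whole result set by keeping a running count of entries
    returned over all page requests."""
    returned = 0  # running count across page requests
    pos = 0
    while pos < len(entries) and returned < size_limit:
        # Never exceed the page size, the remaining size-limit
        # budget, or the remaining entries.
        take = min(page_size, size_limit - returned, len(entries) - pos)
        yield entries[pos:pos + take]
        pos += take
        returned += take
```

With 30 candidate entries, a size limit of 15, and a page size of 10, this yields a full page of 10 followed by a short page of 5, then stops.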
>>This means that we need to keep a running count of total
>>entries returned across page requests...
>This is done in my last fix; I also added a special limit for
>paged results which defaults equal to "any"; all I need to do
>is make it default equal to the hard limit (I interpret pagedResults
>as a request for a specific number of entries, so the hard limit should
>apply). This allows fine-grained crafting of limits for this type
>of request, and by default will implement what you described above.
>Watch for my next commit.