Subject: Re: [PATCH] ppp_generic: fix multilink fragment sizes
Paoloni, Gabriele wrote:
> The proposed patch looks wrong to me.
>
> nbigger is already doing the job; I didn't use DIV_ROUND_UP because in general we don't always have to round up, otherwise we would exceed the total bandwidth.

I was basing this on the original code prior to your patch, which used
DIV_ROUND_UP to get the fragment size. Looking more closely, I see your
point: the original code started with the larger fragment size and
decremented, rather than starting with the smaller size and incrementing
as your code does, so that makes sense.


>
>         flen = len;
>         if (nfree > 0) {
>                 if (pch->speed == 0) {
> -                       flen = totlen/nfree;
> +                       if (nfree > 1)
> +                               flen = DIV_ROUND_UP(len, nfree);
>                         if (nbigger > 0) {
>                                 flen++;
>                                 nbigger--;

The important change here is the use of 'len' instead of 'totlen'.
'nfree' and 'len' should decrease roughly proportionally with each
iteration of the loop, whereas 'totlen' remains unchanged. Thus
(totlen/nfree) gets bigger on each iteration, whereas len/nfree should
give roughly the same value each time. However, without rounding up here
I'm not sure the logic is right either, since the side effect of nbigger
is to make len decrease faster, so it is not quite proportional to the
decrease in nfree. Is there a risk of ending up on the nfree == 1
iteration with flen == len - 1 and thus generating a superfluous extra
1-byte fragment? This would be a far worse situation than a slight
imbalance in the size of the fragments.
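
To make the totlen vs. len point concrete, here is a rough userspace
sketch of the pch->speed == 0 arithmetic only. It is a simplification,
not the real ppp_mp_explode() loop: the MTU cap, unusable channels and
the weighted speed != 0 path are ignored, and the packet length, channel
count and nbigger seeding are made-up values.

#include <stdio.h>

#define DIV_ROUND_UP(n, d)      (((n) + (d) - 1) / (d))

static void split(int totlen, int totfree, int use_len, int round_up)
{
        int len = totlen, nfree = totfree;
        int nbigger = totlen % totfree; /* assumed seeding of the "one byte bigger" count */

        while (len > 0 && nfree > 0) {
                int flen = len;

                if (nfree > 1) {
                        int base = use_len ? len : totlen;

                        flen = round_up ? DIV_ROUND_UP(base, nfree)
                                        : base / nfree;
                }
                if (nbigger > 0) {
                        flen++;
                        nbigger--;
                }
                if (flen > len)         /* keep the sketch from overrunning */
                        flen = len;
                printf(" %d", flen);
                len -= flen;
                nfree--;
        }
        printf("\n");
}

int main(void)
{
        printf("old totlen/nfree:      ");
        split(1500, 3, 0, 0);           /* prints  500 750 250 */
        printf("patched len, round up: ");
        split(1500, 3, 1, 1);           /* prints  500 500 500 */
        return 0;
}

Plugging other lengths and channel counts into split() makes it easy to
experiment with the truncating variants and see how the remainder ends
up distributed across the channels.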

Perhaps the solution is to go back to a precalculated fragment size for
the pch->speed == 0 case, as per the original code?
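
For reference, one way such a precalculation could look is sketched
below. This is illustration only, not necessarily the exact old code,
with the same simplifications and made-up example values as above.

#include <stdio.h>

int main(void)
{
        int len = 1499, nfree = 3;      /* made-up example values */
        int fragsize = len / nfree;     /* fragment size, computed once up front */
        int nbigger = len % nfree;      /* how many fragments get one extra byte */

        while (len > 0 && nfree > 0) {
                int flen = fragsize;

                if (nbigger > 0) {
                        flen++;
                        nbigger--;
                }
                printf("fragment of %d bytes\n", flen);
                len -= flen;
                nfree--;
        }
        return 0;
}

Because fragsize and nbigger are fixed before the loop, the pieces sum
to exactly the original len (here 500 + 500 + 499), so the superfluous
trailing fragment cannot occur.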

Regards,
Ben.

