Subject: Re: [PATCH 1/2] xen/netback: don't do grant copy across page boundary
On 27.03.23 17:38, Jan Beulich wrote:
> On 27.03.2023 12:07, Juergen Gross wrote:
>> On 27.03.23 11:49, Jan Beulich wrote:
>>> On 27.03.2023 10:36, Juergen Gross wrote:
>>>> @@ -413,6 +418,13 @@ static void xenvif_get_requests(struct xenvif_queue *queue,
>>>> cop->dest.u.gmfn = virt_to_gfn(skb->data + skb_headlen(skb)
>>>> - data_len);
>>>>
>>>> + /* Don't cross local page boundary! */
>>>> + if (cop->dest.offset + amount > XEN_PAGE_SIZE) {
>>>> + amount = XEN_PAGE_SIZE - cop->dest.offset;
>>>> + XENVIF_TX_CB(skb)->split_mask |= 1U << copy_count(skb);
>>>
>>> Maybe worthwhile to add a BUILD_BUG_ON() somewhere to make sure this
>>> shift won't grow too large a shift count. The number of slots accepted
>>> could conceivably be grown past XEN_NETBK_LEGACY_SLOTS_MAX (i.e.
>>> XEN_NETIF_NR_SLOTS_MIN) at some point.
>>
>> This is basically impossible due to the size restriction of struct
>> xenvif_tx_cb.
>
> If its size became a problem, it might simply take a level of indirection
> to overcome the limitation.

Maybe.

OTOH this would require some rework anyway, and that rework should take such
problems into consideration.

In the end I'd be fine with adding such a BUILD_BUG_ON(), as the code is
complicated enough already.
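
Something along these lines should do (only a sketch, assuming split_mask
stays the plain integer field in struct xenvif_tx_cb that the patch
introduces):

	/* Make sure "1U << copy_count(skb)" can never exceed the width of
	 * split_mask, even if the accepted slot count is grown some day.
	 */
	BUILD_BUG_ON(sizeof_field(struct xenvif_tx_cb, split_mask) * BITS_PER_BYTE
		     < XEN_NETBK_LEGACY_SLOTS_MAX + 1);

Whether the exact bound wants the "+ 1" would need double checking, but the
general shape would be like that.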

>
>>>> @@ -420,7 +432,8 @@ static void xenvif_get_requests(struct xenvif_queue *queue,
>>>> pending_idx = queue->pending_ring[index];
>>>> callback_param(queue, pending_idx).ctx = NULL;
>>>> copy_pending_idx(skb, copy_count(skb)) = pending_idx;
>>>> - copy_count(skb)++;
>>>> + if (!split)
>>>> + copy_count(skb)++;
>>>>
>>>> cop++;
>>>> data_len -= amount;
>>>> @@ -441,7 +454,8 @@ static void xenvif_get_requests(struct xenvif_queue *queue,
>>>> nr_slots--;
>>>> } else {
>>>> /* The copy op partially covered the tx_request.
>>>> - * The remainder will be mapped.
>>>> + * The remainder will be mapped or copied in the next
>>>> + * iteration.
>>>> */
>>>> txp->offset += amount;
>>>> txp->size -= amount;
>>>> @@ -539,6 +553,13 @@ static int xenvif_tx_check_gop(struct xenvif_queue *queue,
>>>> pending_idx = copy_pending_idx(skb, i);
>>>>
>>>> newerr = (*gopp_copy)->status;
>>>> +
>>>> + /* Split copies need to be handled together. */
>>>> + if (XENVIF_TX_CB(skb)->split_mask & (1U << i)) {
>>>> + (*gopp_copy)++;
>>>> + if (!newerr)
>>>> + newerr = (*gopp_copy)->status;
>>>> + }
>>>
>>> It isn't guaranteed that a slot may be split only once, is it? Assuming a
>>
>> I think it is guaranteed.
>>
>> No slot can cover more than XEN_PAGE_SIZE bytes, due to the grants being
>> restricted to that size. There is no way such a data packet could cross
>> 2 page boundaries.
>>
>> In the end the relevant question isn't whether the copies for the linear
>> area cross multiple page boundaries, but whether the copies for a single
>> request slot do so. And that can't happen IMO.
>
> You're thinking of only well-formed requests. What about said request
> providing a large size with only tiny fragments? xenvif_get_requests()
> will happily process such, creating bogus grant-copy ops. But those failing
> once submitted to Xen will only be noticed after damage may already have occurred
> (from bogus updates of internal state; the logic altogether is too
> involved for me to be convinced that nothing bad can happen).

There are sanity checks after each relevant RING_COPY_REQUEST() call, which
will bail out if "(txp->offset + txp->size) > XEN_PAGE_SIZE". The first check
sits after the call of xenvif_count_requests(), as that call will decrease the
size of the request; the other check is inside xenvif_count_requests() itself.
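
I.e. both spots have roughly this shape (paraphrased from memory, not a
verbatim quote of netback.c):

	if (unlikely(txp->offset + txp->size > XEN_PAGE_SIZE)) {
		netdev_err(queue->vif->dev,
			   "Cross page boundary, txp->offset: %u, size: %u\n",
			   txp->offset, txp->size);
		xenvif_fatal_tx_err(queue->vif);
		return -EINVAL;	/* or break out of the build loop, depending on the spot */
	}

So a malformed slot kills the vif before any grant-copy op is built from it.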

> Interestingly (as I realize now) the shifts you add are not at risk of
> invoking UB in this case, as the shift count won't go beyond 16.
>
>>> near-64k packet with all tiny non-primary slots, that'll cause those tiny
>>> slots to all be mapped, but due to
>>>
>>> if (ret >= XEN_NETBK_LEGACY_SLOTS_MAX - 1 && data_len < txreq.size)
>>> data_len = txreq.size;
>>>
>>> will, afaict, cause a lot of copying for the primary slot. Therefore I
>>> think you need a loop here, not just an if(). Plus tx_copy_ops[]'es
>>> dimension also looks to need further growing to accommodate this. Or
>>> maybe not - at least the extreme example given would still be fine; more
>>> generally packets being limited to below 64k means 2*16 slots would
>>> suffice at one end of the scale, while 2*MAX_PENDING_REQS would at the
>>> other end (all tiny, including the primary slot). What I haven't fully
>>> convinced myself of is whether there might be cases in the middle which
>>> are yet worse.
>>
>> See above reasoning. I think it is okay, but maybe I'm missing something.
>
> Well, the main thing I'm missing is a "primary request fits in a page"
> check, even more so with the new copying logic that the commit referenced
> by Fixes: introduced into xenvif_get_requests().

When xenvif_get_requests() gets called, all requests are sanity checked
already (note that xenvif_get_requests() is working on the local copies of
the requests).
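
To spell out why one split per copy op is always enough (illustration only,
not actual netback code): the sanity checks guarantee txp->size <=
XEN_PAGE_SIZE for every slot, and the destination offset within a page is by
definition below XEN_PAGE_SIZE, so a copy can span at most one destination
page boundary:

	/* Illustration: length of the second piece of a split copy, given
	 * off < XEN_PAGE_SIZE and len <= XEN_PAGE_SIZE as guaranteed by the
	 * sanity checks above.
	 */
	static unsigned int second_piece_len(unsigned int off, unsigned int len)
	{
		if (off + len <= XEN_PAGE_SIZE)
			return 0;		/* fits, no split needed */
		/* off + len < 2 * XEN_PAGE_SIZE, so the remainder is below
		 * XEN_PAGE_SIZE and starts at offset 0 of the next page:
		 * it cannot cross another boundary.
		 */
		return off + len - XEN_PAGE_SIZE;
	}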


Juergen