Subject: Re: [PATCH V3,net-next, 1/2] hv_netvsc: Add XDP support
On Wed, 22 Jan 2020 09:23:33 -0800
Haiyang Zhang <haiyangz@microsoft.com> wrote:

> +u32 netvsc_run_xdp(struct net_device *ndev, struct netvsc_channel *nvchan,
> +                   struct xdp_buff *xdp)
> +{
> +        void *data = nvchan->rsc.data[0];
> +        u32 len = nvchan->rsc.len[0];
> +        struct page *page = NULL;
> +        struct bpf_prog *prog;
> +        u32 act = XDP_PASS;
> +
> +        xdp->data_hard_start = NULL;
> +
> +        rcu_read_lock();
> +        prog = rcu_dereference(nvchan->bpf_prog);
> +
> +        if (!prog)
> +                goto out;
> +
> +        /* allocate page buffer for data */
> +        page = alloc_page(GFP_ATOMIC);

The alloc_page() + __free_page() alone[1] costs 231 cycles (tsc), i.e.
64.395 ns. Thus, the XDP_DROP case will already be limited to roughly
10Gbit/s line rate, 14.88 Mpps (67.2 ns per packet).

XDP is supposed to be used for performance reasons. This looks like a
slowdown.

Measurement tool:
[1] https://github.com/netoptimizer/prototype-kernel/blob/master/kernel/mm/bench/page_bench01.c
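
For reference, the 67.2 ns figure is just the 10GbE line-rate arithmetic for
minimum-size frames. A quick userspace sketch of that calculation (mine, not
part of the patch):

/* 10GbE budget for minimum-size frames: 64 byte frame + 8 byte
 * preamble/SFD + 12 byte inter-frame gap = 84 bytes on the wire.
 */
#include <stdio.h>

int main(void)
{
        double wire_bits  = 84 * 8;               /* 672 bits per packet */
        double link_bps   = 10e9;                 /* 10 Gbit/s */
        double pps        = link_bps / wire_bits; /* ~14.88 Mpps */
        double ns_per_pkt = 1e9 / pps;            /* ~67.2 ns per packet */

        printf("%.2f Mpps, %.1f ns per packet\n", pps / 1e6, ns_per_pkt);
        printf("alloc_page + __free_page (64.4 ns) is %.0f%% of that budget\n",
               64.4 / ns_per_pkt * 100);
        return 0;
}

So the alloc+free pair alone eats roughly 96% of the per-packet budget at
line rate.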

> +        if (!page) {
> +                act = XDP_DROP;
> +                goto out;
> +        }
> +
> +        xdp->data_hard_start = page_address(page);
> +        xdp->data = xdp->data_hard_start + NETVSC_XDP_HDRM;
> +        xdp_set_data_meta_invalid(xdp);
> +        xdp->data_end = xdp->data + len;
> +        xdp->rxq = &nvchan->xdp_rxq;
> +        xdp->handle = 0;
> +
> +        memcpy(xdp->data, data, len);

And a memcpy.

> +
> +        act = bpf_prog_run_xdp(prog, xdp);
> +
> +        switch (act) {
> +        case XDP_PASS:
> +        case XDP_TX:
> +        case XDP_DROP:
> +                break;
> +
> +        case XDP_ABORTED:
> +                trace_xdp_exception(ndev, prog, act);
> +                break;
> +
> +        default:
> +                bpf_warn_invalid_xdp_action(act);
> +        }
> +
> +out:
> +        rcu_read_unlock();
> +
> +        if (page && act != XDP_PASS && act != XDP_TX) {
> +                __free_page(page);

Given this runs under NAPI, you could easily optimize this for XDP_DROP
(and XDP_ABORTED) by recycling the page in a driver-local cache. (The
page_pool also has a driver-local cache built in, but it might be
overkill to use page_pool in this simple case.)

You could do this in a followup patch.
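
Something like the below (completely untested sketch; the recycle_pages[] /
recycle_cnt members and the helper names are hypothetical here, not part of
this patch):

/* Untested sketch of a driver-local page recycle cache. The
 * recycle_pages[] / recycle_cnt members are hypothetical additions to
 * struct netvsc_channel. No locking needed, as both helpers only run
 * in NAPI context on the same channel.
 */
#define NETVSC_XDP_PAGE_CACHE 64

static struct page *netvsc_xdp_get_page(struct netvsc_channel *nvchan)
{
        if (nvchan->recycle_cnt)
                return nvchan->recycle_pages[--nvchan->recycle_cnt];

        return alloc_page(GFP_ATOMIC);
}

static void netvsc_xdp_put_page(struct netvsc_channel *nvchan,
                                struct page *page)
{
        if (nvchan->recycle_cnt < NETVSC_XDP_PAGE_CACHE) {
                nvchan->recycle_pages[nvchan->recycle_cnt++] = page;
                return;
        }

        __free_page(page);
}

netvsc_run_xdp() would then call netvsc_xdp_get_page() instead of
alloc_page(), and netvsc_xdp_put_page() instead of __free_page() on the
XDP_DROP/XDP_ABORTED path.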

> +                xdp->data_hard_start = NULL;
> +        }
> +
> +        return act;
> +}



--
Best regards,
Jesper Dangaard Brouer
MSc.CS, Principal Kernel Engineer at Red Hat
LinkedIn: http://www.linkedin.com/in/brouer
