Date: Thu, 4 Jul 2019 12:43:30 +0300
From: Ivan Khoronzhuk <>
Subject: Re: [PATCH v6 net-next 5/5] net: ethernet: ti: cpsw: add XDP support
On Thu, Jul 04, 2019 at 12:39:02PM +0300, Ilias Apalodimas wrote:
>On Thu, Jul 04, 2019 at 11:19:39AM +0200, Jesper Dangaard Brouer wrote:
>> On Wed, 3 Jul 2019 13:19:03 +0300
>> Ivan Khoronzhuk <ivan.khoronzhuk@linaro.org> wrote:
>>
>> > Add XDP support based on the rx page_pool allocator, one frame per page.
>> > The page pool allocator is used with the assumption that only one rx_handler
>> > is running at a time. DMA map/unmap is reused from the page pool
>> > even though there is no need to map the whole page.
>> >
>> > Due to the specifics of cpsw, the same TX/RX handler can be used by 2
>> > network devices, so special fields are added to the buffer to identify
>> > the interface a frame is destined to. Thus XDP works for both
>> > interfaces, which makes it easy to test xdp redirect between the two
>> > interfaces. Also, each rx queue has its own page pool, shared by both
>> > netdevs.
>> >
>> > The XDP prog is common for all channels until the appropriate changes are
>> > added to the XDP infrastructure. Also, once page_pool recycling becomes
>> > part of the skb netstack, some simplifications can be made, like removing
>> > page_pool_release_page() before skb receive.
>> >
>> > In order to keep rx_dev during redirect, which may be of use in the
>> > future, do the flush in the rx_handler; that keeps the rx dev the same
>> > while redirecting. It also conforms with the rx_dev tracing concern
>> > pointed out by Jesper.
>>
>> So, you simply call xdp_do_flush_map() after each xdp_do_redirect().
>> It will kill RX-bulking and performance, but I guess it will work.
>>
>> I guess we can optimize it later, e.g. by having a variable in the
>> function calling cpsw_run_xdp() that detects whether the net_device
>> changed (priv->ndev) and then calling xdp_do_flush_map() only when needed.
>I tried something similar on the netsec driver during my initial development.
>On the 1gbit speed NICs I saw no difference between flushing per packet and
>flushing at the end of the NAPI handler.
>The latter is obviously better, but since the performance impact is negligible
>on this particular NIC, I don't think this should be a blocker.
>Please add a clear comment on this and why you do it in this driver,
>so people won't go ahead and copy/paste this approach.

Sorry, but I did this already; is it not enough?
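For reference, a minimal sketch of the deferred-flush idea Jesper describes
above; the helper name cpsw_xdp_flush_if_needed() and the last_ndev
bookkeeping are hypothetical and not part of the posted patch, which flushes
after each xdp_do_redirect() instead:

	/* Sketch only: flush the bulked redirect maps when the ingress
	 * device changes between frames, rather than after every packet.
	 */
	#include <linux/netdevice.h>
	#include <linux/filter.h>

	static void cpsw_xdp_flush_if_needed(struct net_device *ndev,
					     struct net_device **last_ndev)
	{
		/* xdp_do_flush_map() pushes out any bulked redirects; doing
		 * it only on a device change keeps batching within the poll.
		 */
		if (*last_ndev && *last_ndev != ndev)
			xdp_do_flush_map();
		*last_ndev = ndev;
	}

The rx handler would call this after each successful xdp_do_redirect(), and
then do one final xdp_do_flush_map() at the end of the NAPI poll so the last
batch is not left pending.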
--
Regards,
Ivan Khoronzhuk