From: Shenwei Wang <shenwei.wang@nxp.com>
Subject: RE: [EXT] Re: [PATCH 1/1] net: fec: using page pool to manage RX buffers
Date: Fri, 30 Sep 2022


> -----Original Message-----
> From: Andrew Lunn <andrew@lunn.ch>
> Sent: Friday, September 30, 2022 2:52 PM
> To: Shenwei Wang <shenwei.wang@nxp.com>
> Cc: David S . Miller <davem@davemloft.net>; Eric Dumazet
> <edumazet@google.com>; Jakub Kicinski <kuba@kernel.org>; Paolo Abeni
> <pabeni@redhat.com>; Alexei Starovoitov <ast@kernel.org>; Daniel Borkmann
> <daniel@iogearbox.net>; Jesper Dangaard Brouer <hawk@kernel.org>; John
> Fastabend <john.fastabend@gmail.com>; Wei Fang <wei.fang@nxp.com>;
> netdev@vger.kernel.org; linux-kernel@vger.kernel.org; imx@lists.linux.dev
> Subject: [EXT] Re: [PATCH 1/1] net: fec: using page pool to manage RX buffers
>
> On Fri, Sep 30, 2022 at 02:37:51PM -0500, Shenwei Wang wrote:
> > This patch optimizes RX buffer management by using the page pool.
> > The purpose of this change is to prepare for the upcoming XDP
> > support. The current driver uses one frame per page for easy
> > management.
> >
> > The following are the comparison results between the page pool
> > implementation and the original implementation (non page pool).
> >
> > --- Page Pool implementation ----
> >
> > shenwei@5810:~$ iperf -c 10.81.16.245 -w 2m -i 1
> > ------------------------------------------------------------
> > Client connecting to 10.81.16.245, TCP port 5001
> > TCP window size: 416 KByte (WARNING: requested 1.91 MByte)
> > ------------------------------------------------------------
> > [ 1] local 10.81.17.20 port 43204 connected with 10.81.16.245 port 5001
> > [ ID] Interval Transfer Bandwidth
> > [ 1] 0.0000-1.0000 sec 111 MBytes 933 Mbits/sec
> > [ 1] 1.0000-2.0000 sec 111 MBytes 934 Mbits/sec
> > [ 1] 2.0000-3.0000 sec 112 MBytes 935 Mbits/sec
> > [ 1] 3.0000-4.0000 sec 111 MBytes 933 Mbits/sec
> > [ 1] 4.0000-5.0000 sec 111 MBytes 934 Mbits/sec
> > [ 1] 5.0000-6.0000 sec 111 MBytes 933 Mbits/sec
> > [ 1] 6.0000-7.0000 sec 111 MBytes 931 Mbits/sec
> > [ 1] 7.0000-8.0000 sec 112 MBytes 935 Mbits/sec
> > [ 1] 8.0000-9.0000 sec 111 MBytes 933 Mbits/sec
> > [ 1] 9.0000-10.0000 sec 112 MBytes 935 Mbits/sec
> > [ 1] 0.0000-10.0077 sec 1.09 GBytes 933 Mbits/sec
> >
> > --- Non Page Pool implementation ----
> >
> > shenwei@5810:~$ iperf -c 10.81.16.245 -w 2m -i 1
> > ------------------------------------------------------------
> > Client connecting to 10.81.16.245, TCP port 5001
> > TCP window size: 416 KByte (WARNING: requested 1.91 MByte)
> > ------------------------------------------------------------
> > [ 1] local 10.81.17.20 port 49154 connected with 10.81.16.245 port 5001
> > [ ID] Interval Transfer Bandwidth
> > [ 1] 0.0000-1.0000 sec 104 MBytes 868 Mbits/sec
> > [ 1] 1.0000-2.0000 sec 105 MBytes 878 Mbits/sec
> > [ 1] 2.0000-3.0000 sec 105 MBytes 881 Mbits/sec
> > [ 1] 3.0000-4.0000 sec 105 MBytes 879 Mbits/sec
> > [ 1] 4.0000-5.0000 sec 105 MBytes 878 Mbits/sec
> > [ 1] 5.0000-6.0000 sec 105 MBytes 878 Mbits/sec
> > [ 1] 6.0000-7.0000 sec 104 MBytes 875 Mbits/sec
> > [ 1] 7.0000-8.0000 sec 104 MBytes 875 Mbits/sec
> > [ 1] 8.0000-9.0000 sec 104 MBytes 873 Mbits/sec
> > [ 1] 9.0000-10.0000 sec 104 MBytes 875 Mbits/sec
> > [ 1] 0.0000-10.0073 sec 1.02 GBytes 875 Mbits/sec
>
> What SoC? As I keep saying, the FEC is used in a lot of different SoCs, and you
> need to show this does not cause any regressions on the older SoCs. There are
> probably a lot more imx5 and imx6 devices out in the wild than imx8, which is
> what I guess you are testing on. Mainline needs to work well on them all, even if
> NXP no longer cares about the older SoCs.
>

The testing above was on the imx8 platform. The following are the test results
on the imx6sx board:

###### Original implementation ######

shenwei@5810:~/pktgen$ iperf -c 10.81.16.245 -w 2m -i 1
------------------------------------------------------------
Client connecting to 10.81.16.245, TCP port 5001
TCP window size: 416 KByte (WARNING: requested 1.91 MByte)
------------------------------------------------------------
[ 1] local 10.81.17.20 port 36486 connected with 10.81.16.245 port 5001
[ ID] Interval Transfer Bandwidth
[ 1] 0.0000-1.0000 sec 70.5 MBytes 591 Mbits/sec
[ 1] 1.0000-2.0000 sec 64.5 MBytes 541 Mbits/sec
[ 1] 2.0000-3.0000 sec 73.6 MBytes 618 Mbits/sec
[ 1] 3.0000-4.0000 sec 73.6 MBytes 618 Mbits/sec
[ 1] 4.0000-5.0000 sec 72.9 MBytes 611 Mbits/sec
[ 1] 5.0000-6.0000 sec 73.4 MBytes 616 Mbits/sec
[ 1] 6.0000-7.0000 sec 73.5 MBytes 617 Mbits/sec
[ 1] 7.0000-8.0000 sec 73.4 MBytes 616 Mbits/sec
[ 1] 8.0000-9.0000 sec 73.4 MBytes 616 Mbits/sec
[ 1] 9.0000-10.0000 sec 73.9 MBytes 620 Mbits/sec
[ 1] 0.0000-10.0174 sec 723 MBytes 605 Mbits/sec


###### Page Pool implementation ######

shenwei@5810:~/pktgen$ iperf -c 10.81.16.245 -w 2m -i 1
------------------------------------------------------------
Client connecting to 10.81.16.245, TCP port 5001
TCP window size: 416 KByte (WARNING: requested 1.91 MByte)
------------------------------------------------------------
[ 1] local 10.81.17.20 port 57288 connected with 10.81.16.245 port 5001
[ ID] Interval Transfer Bandwidth
[ 1] 0.0000-1.0000 sec 78.8 MBytes 661 Mbits/sec
[ 1] 1.0000-2.0000 sec 82.5 MBytes 692 Mbits/sec
[ 1] 2.0000-3.0000 sec 82.4 MBytes 691 Mbits/sec
[ 1] 3.0000-4.0000 sec 82.4 MBytes 691 Mbits/sec
[ 1] 4.0000-5.0000 sec 82.5 MBytes 692 Mbits/sec
[ 1] 5.0000-6.0000 sec 82.4 MBytes 691 Mbits/sec
[ 1] 6.0000-7.0000 sec 82.5 MBytes 692 Mbits/sec
[ 1] 7.0000-8.0000 sec 82.4 MBytes 691 Mbits/sec
[ 1] 8.0000-9.0000 sec 82.4 MBytes 691 Mbits/sec
^C[ 1] 9.0000-9.5506 sec 45.0 MBytes 686 Mbits/sec
[ 1] 0.0000-9.5506 sec 783 MBytes 688 Mbits/sec
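
For reference, the core of the RX path change is sketched below. This is
a minimal sketch of the idea, not the exact patch code: one page_pool per
RX queue, with the pool handling DMA mapping and page recycling. The
helper name fec_rx_pool_create() and the parameter values are
illustrative only.

/*
 * Sketch: create a page pool sized to the RX ring, one frame per page,
 * with headroom reserved for the planned XDP support.
 */
#include <net/page_pool.h>

static struct page_pool *fec_rx_pool_create(struct device *dev,
                                            unsigned int ring_size)
{
        struct page_pool_params pp_params = {
                .order     = 0,                    /* one frame per page */
                .flags     = PP_FLAG_DMA_MAP |     /* pool maps the pages */
                             PP_FLAG_DMA_SYNC_DEV, /* and syncs for device */
                .pool_size = ring_size,            /* match the RX ring */
                .nid       = NUMA_NO_NODE,
                .dev       = dev,
                .dma_dir   = DMA_FROM_DEVICE,      /* RX only */
                .offset    = XDP_PACKET_HEADROOM,  /* headroom for XDP */
                .max_len   = PAGE_SIZE - XDP_PACKET_HEADROOM,
        };

        return page_pool_create(&pp_params);       /* ERR_PTR() on failure */
}

RX refill then becomes page_pool_dev_alloc_pages() plus
page_pool_get_dma_addr() for the descriptor address, and the skb built
from the page is marked with skb_mark_for_recycle() so the stack hands
the page back to the pool instead of freeing it.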


Thanks,
Shenwei

> Andrew
