 
From: "Chen, Mike Ximing" <mike.ximing.chen@intel.com>
Subject: RE: [RFC PATCH v12 01/17] dlb: add skeleton for DLB driver
Date: Tue, 21 Dec 2021


> -----Original Message-----
> From: Andrew Lunn <andrew@lunn.ch>
> Sent: Tuesday, December 21, 2021 4:39 PM
> To: Chen, Mike Ximing <mike.ximing.chen@intel.com>
> Cc: linux-kernel@vger.kernel.org; arnd@arndb.de; gregkh@linuxfoundation.org; Williams, Dan J
> <dan.j.williams@intel.com>; pierre-louis.bossart@linux.intel.com; netdev@vger.kernel.org;
> davem@davemloft.net; kuba@kernel.org
> Subject: Re: [RFC PATCH v12 01/17] dlb: add skeleton for DLB driver
>
> On Tue, Dec 21, 2021 at 08:56:42PM +0000, Chen, Mike Ximing wrote:
> >
> >
> > > -----Original Message-----
> > > From: Andrew Lunn <andrew@lunn.ch>
> > > Sent: Tuesday, December 21, 2021 4:54 AM
> > > To: Chen, Mike Ximing <mike.ximing.chen@intel.com>
> > > Cc: linux-kernel@vger.kernel.org; arnd@arndb.de;
> > > gregkh@linuxfoundation.org; Williams, Dan J
> > > <dan.j.williams@intel.com>; pierre-louis.bossart@linux.intel.com;
> > > netdev@vger.kernel.org; davem@davemloft.net; kuba@kernel.org
> > > Subject: Re: [RFC PATCH v12 01/17] dlb: add skeleton for DLB driver
> > >
> > > > +The following diagram shows a typical packet processing pipeline with the Intel DLB.
> > > > +
> > > > +                              WC1              WC4
> > > > +  +-----+   +----+   +---+   /    \   +---+   /    \   +---+   +----+   +-----+
> > > > +  |NIC  |   |Rx  |   |DLB|  /      \  |DLB|  /      \  |DLB|   |Tx  |   |NIC  |
> > > > +  |Ports|---|Core|---|   |-----WC2----|   |-----WC5----|   |---|Core|---|Ports|
> > > > +  +-----+   +----+   +---+  \      /  +---+  \      /  +---+   +----+   +-----+
> > > > +                             \    /           \    /
> > > > +                              WC3              WC6
> > >
> > > This is the only mention of NIC here. Does the application interface
> > > to the network stack in the usual way to receive packets from the
> > > TCP/IP stack up into user space and then copy them back down into the
> > > MMIO block for them to enter the DLB for the first time? And at the end of the path, does the application
> > > copy them from the MMIO into a standard socket for TCP/IP processing to be sent out the NIC?
> > >
> > For load balancing and distribution purposes, we do not handle packets
> > directly in DLB. Instead, we only send QEs (queue events) to MMIO for
> > DLB to process. In a network application, QEs (64 bytes each) can
> > contain pointers to the actual packets. The worker cores can use these pointers to process packets and
> > forward them to the next stage. At the end of the path, the last worker core can send the packets out to the NIC.
>
> Sorry for asking so many questions, but I'm trying to understand the architecture. As a network maintainer,
> and somebody who reviews network drivers, I was trying to be sure there is not an actual network MAC
> and PHY driver hiding in this code.
>
> So you talk about packets. Do you actually mean frames? As in Ethernet frames? TCP/IP processing has not
> occurred? Or does this plug into the network stack at some level? After TCP reassembly has occurred? Are
> these pointers to skbufs?
>
There is no network MAC or PHY driver in the code. In fact, neither the DLB nor the driver has any
direct access to the network ports/sockets. In the diagram above, the Rx/Tx CPU cores receive/transmit
packets (or frames) from/to the NIC. These can be either L2 or L3 packets/frames. The Rx CPU core sends
corresponding QEs with the proper metadata (such as pointers to the packets/frames) to the DLB, which
distributes the QEs to a set of worker cores. The worker cores receive the QEs, process the corresponding
packets/frames, and send QEs back to the DLB for the next stage of processing. After several stages of
processing, the worker cores in the last stage send the QEs to the Tx core, which then transmits the
packets/frames to the NIC ports. So everything between the Rx core and the Tx core is where the DLB and
the driver operate. The DLB operation itself does not involve any network access.
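
To make that flow concrete, here is a rough sketch in C of how a user-space Rx core might build and
submit a QE. This is illustrative only: the QE field layout and the nic_rx_burst()/classify()/
dlb_enqueue() names are made up for this example, not the actual DLB or NIC interfaces.

#include <stdint.h>

/* Hypothetical 64-byte queue event (QE): one cache line carrying a
 * pointer to the frame plus scheduling metadata. The real QE format
 * differs; this only shows the "pointer + metadata, not data" idea. */
struct qe {
        uint64_t data;          /* pointer to the packet/frame buffer  */
        uint16_t flow_id;       /* flow id the DLB load-balances on    */
        uint8_t  sched_type;    /* e.g. ordered vs. atomic scheduling  */
        uint8_t  priority;
        uint8_t  pad[52];       /* pad out to a 64-byte cache line     */
};

void *nic_rx_burst(void);                         /* hypothetical NIC recv  */
uint16_t classify(const void *frame);             /* hypothetical flow hash */
void dlb_enqueue(int port, const struct qe *qe);  /* stands in for the 64B  */
                                                  /* MMIO write to the DLB  */

/* Rx core: wrap each received frame in a QE and hand it to the DLB.
 * The frame itself stays in memory; only the pointer travels. */
void rx_core(int rx_port)
{
        for (;;) {
                void *frame = nic_rx_burst();
                struct qe qe = {
                        .data    = (uint64_t)(uintptr_t)frame,
                        .flow_id = classify(frame),
                };
                dlb_enqueue(rx_port, &qe);
        }
}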

I am not very familiar with skbufs, but they sound like queue buffers in the kernel. Most DLB applications
run in user space, so these pointers can refer to any buffers that an application uses. The DLB does not
process any packets/frames itself; it distributes QEs to the worker cores, which process the corresponding
packets/frames.

> > > Do you even need NICs here? Could the data be coming of a video
> > > camera and you are distributing image processing over a number of cores?
> > No, the diagram is just an example for packet processing applications.
> > The data can come from other sources, such as video cameras. The DLB can
> > schedule up to 100 million packets/events per second. The frame rate from a single camera is normally
> > much, much lower than that.
>
> So I'm trying to understand the scope of this accelerator. Is it just a network accelerator? If so, are you
> pointing to skbufs? How are the lifetimes of skbufs managed? How do you get skbufs out of the NIC? Are
> you using XDP?

This is not a network accelerator in the sense that it does not have direct access to the network
sockets/ports. We do not use XDP. What it does is distribute workloads (such as packet processing)
efficiently among CPU cores, thereby increasing the total packet/frame processing throughput of the
processor (such as Intel's Xeon processors). Imagine, for example, that the Rx core receives 1000
packets/frames in a burst with random payloads; deciding how to distribute the processing of those
packets across (say) 16 CPU cores is the job of the DLB hardware. The driver is responsible for
resource management, system configuration and reset, multiple user/application support, and
virtualization enablement.
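
As a sketch of what that distribution looks like from a worker core's point of view (again with made-up
names: dlb_dequeue() and process_frame() are hypothetical, and struct qe/dlb_enqueue() are from the
sketch above):

/* Hypothetical worker-core loop: each of the (say) 16 workers simply
 * pulls whatever event the DLB scheduled to it, so the distribution
 * policy lives entirely in hardware, not in application code. */
int dlb_dequeue(int port, struct qe *qe);   /* returns 0 if no event  */
void process_frame(void *frame);            /* app-specific work      */

void worker_core(int in_port, int out_port)
{
        struct qe qe;

        for (;;) {
                if (!dlb_dequeue(in_port, &qe))
                        continue;                  /* nothing for us yet    */
                process_frame((void *)(uintptr_t)qe.data);
                dlb_enqueue(out_port, &qe);        /* next stage or Tx core */
        }
}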

Thanks
Mike
