 
Subject: RE: [RFC PATCH v3 0/5] scsi: ufs: Add Host Performance Booster Support


>
> Hi Avri
>
> On Mon, 2020-06-29 at 05:24 +0000, Avri Altman wrote:
> > Hi Bean,
> > >
> > > Hi Daejun
> > >
> > > It seems you intentionally declined to comment on my suggestion.
> > > Let me provide the reason.
> > >
> > > Before submitting your next version of the patch, please check the
> > > logic of your L2P mapping HPB request submission. I have done a
> > > performance comparison test with 4KB I/O, and there is about a 13%
> > > performance drop. The hit count is also lower. I don't know if this
> > > is related to your current work queue scheduling, since you didn't
> > > add a timer for each HPB request.
> >
> > In device control mode, the various decisions,
> > and specifically those that are causing repetitive evictions,
> > are made by the device.
> > Is this the issue that you are referring to?
> >
>
> For this device control mode, if the HPB mapping table of an active
> region becomes dirty on the UFS device side, there are repetitive
> inactivation RSPs, but that is not the reason for the condition I
> mentioned here.
>
> > As for the driver, do you see any issue that is causing unnecessary
> > latency?
> >
>
> Daejun's patch now uses a work queue: whenever there is a new RSP for a
> subregion to be activated, the driver queues "work" to this work queue.
> This is deferred work, so we don't know when it will be scheduled or
> finished. We need to optimize it.
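To make the deferral concrete, here is a minimal sketch of the work-queue flow being described; the hpb_* names and the list handling are assumptions for illustration, not the code in the posted patch:

/*
 * Sketch only: hpb_lu, hpb_map_work_fn() and the to_activate list are
 * hypothetical names, not taken from the posted patch.
 */
#include <linux/kernel.h>
#include <linux/workqueue.h>
#include <linux/list.h>

struct hpb_lu {
        struct work_struct map_work;
        struct list_head to_activate;   /* subregions named in RSP UPIUs */
};

static void hpb_map_work_fn(struct work_struct *work)
{
        struct hpb_lu *hpb = container_of(work, struct hpb_lu, map_work);

        /* walk hpb->to_activate and issue the L2P map read for each entry */
}

/* called from the RSP UPIU parser when a subregion should be activated */
static void hpb_queue_activation(struct hpb_lu *hpb)
{
        /* add the subregion to hpb->to_activate, then defer the real work */
        queue_work(system_wq, &hpb->map_work);
}

queue_work() only schedules the handler; the queued subregions are serviced whenever the work item next runs.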
But those "to-do" lists are checked on every completion interrupt and on every resume.
Do you see any scenario in which the "to-be-activated" or "to-be-inactivated" work is getting starved?
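For reference, Bean's earlier point about a per-request timer could look something like the sketch below; hpb_map_req, hpb_map_req_timeout() and the 500 ms value are assumptions for illustration, not part of the posted patch. A timer like this would bound how long an outstanding map request can sit before it is dropped or re-queued.

#include <linux/timer.h>
#include <linux/jiffies.h>

#define HPB_MAP_REQ_TIMEOUT_MS  500     /* assumed value, not from the patch */

struct hpb_map_req {
        struct timer_list timer;
        /* region/subregion identifiers, data buffer, etc. */
};

static void hpb_map_req_timeout(struct timer_list *t)
{
        struct hpb_map_req *req = from_timer(req, t, timer);

        /* the map request has been pending too long: drop or re-queue it */
}

static void hpb_map_req_arm_timer(struct hpb_map_req *req)
{
        timer_setup(&req->timer, hpb_map_req_timeout, 0);
        mod_timer(&req->timer,
                  jiffies + msecs_to_jiffies(HPB_MAP_REQ_TIMEOUT_MS));
}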
