From: Josh Triplett
Date: Wed, 1 Jul 2015
Subject: Re: [PATCH RFC tip/core/rcu 0/5] Expedited grace periods encouraging normal ones

On Wed, Jul 01, 2015 at 01:09:36PM -0700, Paul E. McKenney wrote:
> On Wed, Jul 01, 2015 at 07:02:42PM +0200, Peter Zijlstra wrote:
> > USB sure, but a backing dev is involved in nfs clients, loopback and all
> > sorts of block/filesystem like setups.
> >
> > unmount an NFS mount and voila expedited rcu, unmount a loopback, tada.
> >
> > All you need is a regular server workload triggering any of that on a
> > semi regular basis and even !rt people might start to notice something
> > is up.
>
> I don't believe that latency-sensitive systems are going to be messing
> with remapping their storage at runtime, let alone on a regular basis.
> If they are not latency sensitive, and assuming that the rate of
> storage remapping is at all sane, I bet that they won't notice the
> synchronize_rcu_expedited() overhead. The overhead of the actual
> remapping will very likely leave the synchronize_rcu_expedited() overhead
> way down in the noise.
>
> And if they are doing completely insane rates of storage remapping,
> I suspect that the batching in the synchronize_rcu_expedited()
> implementation will reduce the expedited-grace-period overhead still
> further as a fraction of the total.

Consider the case of container-based systems, which call mount as part of
container setup and umount as part of container teardown.

And those workloads are often sensitive to latency, not throughput.

- Josh Triplett
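
For context, the expedited grace period Peter refers to comes from the
backing_dev_info teardown that runs when such a device goes away on unmount.
A minimal sketch of that fragment, assuming the mm/backing-dev.c layout of
that era and simplified from memory rather than copied from the tree:

/*
 * Sketch of the backing-dev teardown path (simplified, from memory of
 * mm/backing-dev.c around this time; not the exact upstream code).
 * Every unmount-like operation that tears down a bdi pays for one
 * expedited grace period here.
 */
static void bdi_remove_from_list(struct backing_dev_info *bdi)
{
	spin_lock_bh(&bdi_lock);
	list_del_rcu(&bdi->bdi_list);	/* unlink under RCU */
	spin_unlock_bh(&bdi_lock);

	/*
	 * Wait for readers still walking bdi_list.  The expedited
	 * variant finishes quickly by prodding the other CPUs, which is
	 * the disturbance being discussed in this thread.
	 */
	synchronize_rcu_expedited();
}

Under that assumption, the frequency of mount/umount operations, not their
individual cost, is what matters for the latency argument above.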

