Subject: Re: [PATCH 2/3] locking/qspinlock: Introduce CNA into the slow path of qspinlock
On 01/30/2019 10:01 PM, Alex Kogan wrote:
> In CNA, spinning threads are organized in two queues, a main queue for
> threads running on the same socket as the current lock holder, and a
> secondary queue for threads running on other sockets. For details,
> see https://arxiv.org/abs/1810.05600.
>
> Note that this variant of CNA may introduce starvation by continuously
> passing the lock to threads running on the same socket. This issue
> will be addressed later in the series.
>
> Signed-off-by: Alex Kogan <alex.kogan@oracle.com>
> Reviewed-by: Steve Sistare <steven.sistare@oracle.com>

Just wondering if you have tried enabling the PARAVIRT_SPINLOCKS option
to see whether this patch screws up the PV qspinlock code.

Anyway, I do believe your claim that a NUMA-aware qspinlock is good for
large systems with many nodes. However, all this extra code is pure
overhead for small systems, such as those with a single node/socket.

I would support doing something similar to what was done to support PV
qspinlock. IOW, a separate slowpath function that can be patched in to
become the default, depending on the system being run on or a kernel
boot option setting.
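
Something along these lines, say (just a rough sketch in the same
spirit as the PV switch-over, not the actual mechanism -- the static
key, the "numa_spinlock=" boot parameter and the cna_* function names
below are all made up for illustration):

	/*
	 * Illustrative only: pick the slowpath once at boot instead of
	 * burdening single-node systems with CNA overhead.
	 */
	#include <linux/init.h>
	#include <linux/jump_label.h>
	#include <linux/nodemask.h>
	#include <linux/string.h>
	#include <linux/types.h>

	struct qspinlock;

	/* Hypothetical CNA slowpath, kept in its own file. */
	extern void cna_queued_spin_lock_slowpath(struct qspinlock *lock,
						  u32 val);
	extern void native_queued_spin_lock_slowpath(struct qspinlock *lock,
						     u32 val);

	static DEFINE_STATIC_KEY_FALSE(use_cna_spinlock);
	static bool cna_requested __initdata = true;

	/* Hypothetical "numa_spinlock=off" opt-out on the command line. */
	static int __init numa_spinlock_setup(char *str)
	{
		if (!strcmp(str, "off"))
			cna_requested = false;
		return 1;
	}
	__setup("numa_spinlock=", numa_spinlock_setup);

	void __init cna_spinlock_init(void)
	{
		/* Only take the CNA path on machines with >1 NUMA node. */
		if (cna_requested && num_possible_nodes() > 1)
			static_branch_enable(&use_cna_spinlock);
	}

	void queued_spin_lock_slowpath(struct qspinlock *lock, u32 val)
	{
		if (static_branch_unlikely(&use_cna_spinlock))
			cna_queued_spin_lock_slowpath(lock, val);
		else
			native_queued_spin_lock_slowpath(lock, val);
	}

That way a single-node box never pays for the CNA bookkeeping beyond
one patched-out branch.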

I would like to keep the core slowpath function simple and easy to
understand, so most of the CNA code should be encapsulated in some
helper functions and put into a separate file, as sketched below.
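
For instance (the file name and hook names below are again only
illustrative), the core slowpath would call one or two small CNA hooks
and everything node-related would live behind them:

	/*
	 * kernel/locking/qspinlock_cna.h -- illustrative file name.
	 * The core slowpath only sees these hooks; the two-queue
	 * manipulation stays hidden here. Assumes the per-CPU queue
	 * nodes are sized/allocated as struct cna_node.
	 */
	#include "mcs_spinlock.h"	/* struct mcs_spinlock */
	#include <linux/topology.h>

	struct cna_node {
		struct mcs_spinlock	mcs;		/* must stay first */
		int			numa_node;	/* node of the waiter */
	};

	/* Hook: record our NUMA node when we enqueue in the slowpath. */
	static inline void cna_init_node(struct mcs_spinlock *node)
	{
		((struct cna_node *)node)->numa_node = numa_node_id();
	}

	/*
	 * Hook: on release, pick a successor from the lock holder's
	 * node, moving remote waiters to the secondary queue (body
	 * omitted; this is where the bulk of the CNA logic would go).
	 */
	struct mcs_spinlock *cna_pass_lock(struct mcs_spinlock *node);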

Thanks,
Longman
