From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Subject: [PATCH v3 0/7] Introduce local_lock()
Date: 2020-05-27
This is v3 of the local_lock() series. The v2 can be found at 

https://lore.kernel.org/lkml/20200524215739.551568-1-bigeasy@linutronix.de/

v2…v3:
- Use `local_lock_t' instead of `struct local_lock' because it is a
tiny data structure in general (similar to spinlock_t). Also use the
consistent file name `local_lock.h'.

- Export the data structure in radix-tree so that the `lock' member
can be accessed externally. The common case of `local_unlock()' (no
lockdep, no preemption) can then be optimized away; otherwise
`idr_preload_end()' would be a function containing only a return
opcode. A sketch follows this list.

- Reorganize the struct member names in mm/swap and connector/cn_proc.

- Make the `lock' member come before the member that it aims to
protect.

- Two hunks from patch #6 appeared under mysterious circumstances in
patch #7. They have been moved back to patch #6.
Also applied comments to patch #7 as suggested by Ingo.
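
As a sketch of the radix-tree and member-ordering points above
(identifiers as in the series, slightly simplified): exposing the
preload structure in the header lets the common case of
`idr_preload_end()' collapse into an inlinable `local_unlock()', and
the lock member sits in front of the data it protects:

	/* include/linux/radix-tree.h (sketch) */
	struct radix_tree_preload {
		local_lock_t lock;	/* protects the members below */
		unsigned nr;
		/* nodes->parent points to next preallocated node */
		struct radix_tree_node *nodes;
	};
	DECLARE_PER_CPU(struct radix_tree_preload, radix_tree_preloads);

	/* include/linux/idr.h (sketch) */
	static inline void idr_preload_end(void)
	{
		local_unlock(&radix_tree_preloads.lock);
	}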

v1…v2:
- Remove the static initializer so a local_lock is not used as a
single per-CPU variable but as a member of an existing structure
that is used per CPU.

- Use LD_WAIT_CONFIG as wait-type in the dep_map.

- Expect a pointer-like value as argument (same as this_cpu_ptr()).

- Drop the SRCU patch. A different solution is being worked on.

- Drop the zswap patch. That code part will be reworked.


preempt_disable() and local_irq_disable/save() are in principle per-CPU
big kernel locks. This has several downsides:

- The protection scope is unknown

- Violation of protection rules is hard to detect by instrumentation

- For PREEMPT_RT such sections, unless in low-level critical code, can
violate the preemptability constraints.
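
To illustrate the first point with a hypothetical example (the data
structure and function below are made up for illustration): nothing
ties the preempt_disable() section to the data it guards, neither for
the reader nor for instrumentation:

	static DEFINE_PER_CPU(struct llist_head, lazy_list);

	static void defer_work(struct llist_node *node)
	{
		preempt_disable();
		/* Implicitly protects lazy_list; the scope is
		 * invisible to readers and to lockdep. */
		llist_add(node, this_cpu_ptr(&lazy_list));
		preempt_enable();
	}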

To address this, PREEMPT_RT introduced the concept of local_locks,
which are strictly per CPU.

The lock operations map to preempt_disable(), local_irq_disable/save()
and the enabling counterparts on non-RT enabled kernels.
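
As a usage sketch (struct and function names are made up; the pattern
follows the conversions in this series): the local_lock_t is embedded
in the per-CPU structure it protects and acquired with local_lock(),
which on a non-RT kernel boils down to preempt_disable():

	#include <linux/local_lock.h>

	struct foo_pcpu {
		local_lock_t lock;	/* protects 'count' below */
		unsigned int count;
	};

	static DEFINE_PER_CPU(struct foo_pcpu, foo_pcpu) = {
		.lock = INIT_LOCAL_LOCK(lock),
	};

	static void foo_count_inc(void)
	{
		local_lock(&foo_pcpu.lock);	/* !RT: preempt_disable() */
		this_cpu_inc(foo_pcpu.count);
		local_unlock(&foo_pcpu.lock);	/* !RT: preempt_enable() */
	}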

If lockdep is enabled, local locks gain a lock map which tracks the
usage context. This will catch cases where an area is protected by
preempt_disable() but the access also happens from interrupt context.
Local locks have identified quite a few such issues over the years; the
most recent example is:

b7d5dc21072cd ("random: add a spinlock_t to struct batched_entropy")

Aside from the lockdep coverage, this also improves code readability as
it precisely annotates the protection scope.
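
A hedged sketch of the kind of mismatch lockdep can now report
(hypothetical code, reusing the foo_pcpu example above, in the spirit
of the batched_entropy fix): the task context takes the lock with
preemption disabled only, while an interrupt handler touches the same
data. Lockdep records both usage contexts via the lock's dep_map and
flags the plain local_lock() as unsafe; the fix is to use the
_irqsave() variant in the task path as well:

	static void foo_count_inc_task(void)
	{
		local_lock(&foo_pcpu.lock);	/* preemption off only */
		this_cpu_inc(foo_pcpu.count);
		local_unlock(&foo_pcpu.lock);
	}

	static irqreturn_t foo_irq(int irq, void *dev_id)
	{
		unsigned long flags;

		/* Same data, hardirq context: lockdep will complain
		 * about the task-side local_lock() above. */
		local_lock_irqsave(&foo_pcpu.lock, flags);
		this_cpu_inc(foo_pcpu.count);
		local_unlock_irqrestore(&foo_pcpu.lock, flags);
		return IRQ_HANDLED;
	}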

PREEMPT_RT substitutes these local locks with 'sleeping' spinlocks to
protect such sections while maintaining preemptability and CPU locality.

The following series introduces the infrastructure, including
documentation, and provides a couple of examples of how local locks are
used to adjust code to be RT ready.

Sebastian
