Subject: Re: [RFC PATCH 4/4 v0.3] sched/umcg: RFC: implement UMCG syscalls
From: Thierry Delisle <>
Date: Wed, 21 Jul 2021 15:55:59 -0400
> Yes, this is naturally supported in the current patchset on the kernel
> side, and is supported in libumcg (to be posted, later when the kernel
> side is settled); internally at Google, some applications use
> different "groups" of workers/servers per NUMA node.
Good to know. Cforall has the same feature, where we refer to these groups as "clusters". https://doi.org/10.1002/spe.2925 (Section 7)
> Please see the attached atomic_stack.h file - I use it in my tests,
> things seem to be working. Specifically, atomic_stack_gc does the
> cleanup. For the kernel side of things, see the third patch in this
> patchset.
I don't believe the atomic_stack_gc function is robust enough to offer any guarantee. I believe that once a node is unlinked, its next pointer should be reset immediately, e.g., by writing 0xDEADDEADDEADDEAD. Do your tests still work if the next pointer is reset immediately on reclaimed nodes?
As far as I can tell, the reclaimed nodes in atomic_stack_gc still contain valid next fields. I believe there is a race which can lead to the kernel reading reclaimed nodes. If atomic_stack_gc does not reset the fields, this bug could be hidden in the testing.
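To make the suggestion concrete, here is a minimal sketch of what I mean by resetting the pointer; the node layout and names are only placeholders, not the actual atomic_stack.h:

#include <stdatomic.h>
#include <stdint.h>

#define NODE_POISON ((uintptr_t)0xDEADDEADDEADDEADull)

/* Placeholder node; the real layout lives in atomic_stack.h. */
struct node {
	_Atomic(uintptr_t) next;
};

/* Called as soon as a node is unlinked, before it is reused. */
static void node_poison(struct node *n)
{
	/* Any late reader now sees an obviously invalid pointer
	 * instead of a stale but plausible one. */
	atomic_store_explicit(&n->next, NODE_POISON, memory_order_release);
}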
A more aggressive test is to put each node in a different page and remove read permissions when the node is reclaimed. I'm not sure this applies when the kernel is the one reading.
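Roughly, that page-per-node test could look like the following (all names are illustrative, and as noted it only catches stray reads from userspace, not from the kernel):

#include <sys/mman.h>
#include <unistd.h>

/* Allocate each node in its own anonymous page. */
static void *node_alloc_page(void)
{
	void *p = mmap(NULL, sysconf(_SC_PAGESIZE), PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
	return p == MAP_FAILED ? NULL : p;
}

/* On reclaim, drop read permission so any late access faults. */
static void node_reclaim_page(void *node)
{
	mprotect(node, sysconf(_SC_PAGESIZE), PROT_NONE);
}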
> To keep the kernel side light and simple. To also protect the kernel
> from spinning if userspace misbehaves. Basically, the overall approach
> is to delegate most of the work to the userspace, and keep the bare
> minimum in the kernel.
I'll try to keep this in mind then.
After some thought, I'll suggest a scheme to significantly reduce complexity. As I understand it, the idle_workers_ptr are linked to form one or more Multi-Producer Single-Consumer queues. If each head is augmented with a single volatile tid-sized word, servers that want to go idle can simply write their id into the word. When the kernel adds something to the idle_workers_ptr list, it simply does an XCHG with 0 or any INVALID_TID. This scheme only supports one server blocking per idle_workers_ptr list. To keep the "kernel side light and simple", you can simply ask that any extra servers synchronize among themselves to pick which server is responsible for waiting on behalf of everyone.
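Here is a rough sketch of the scheme, written from the userspace point of view; the head layout, the INVALID_TID value and the wake/wait steps are my assumptions, not the patchset's actual API:

#include <stdatomic.h>
#include <stdint.h>

#define INVALID_TID 0u

struct idle_workers_head {
	_Atomic(uint64_t) first;           /* MPSC list of idle workers */
	_Atomic(uint32_t) waiting_server;  /* tid of the one blocked server */
};

/* Server side: advertise ourselves in the word, then block. */
static void server_block_on(struct idle_workers_head *h, uint32_t self_tid)
{
	atomic_store_explicit(&h->waiting_server, self_tid,
			      memory_order_release);
	/* re-check the list to avoid a lost wakeup, then sleep
	 * (e.g. via the umcg wait path) */
}

/* Kernel side, conceptually: after pushing a worker onto the list,
 * XCHG the word with INVALID_TID and wake whoever was stored there. */
static void kernel_notify(struct idle_workers_head *h)
{
	uint32_t tid = atomic_exchange_explicit(&h->waiting_server,
						INVALID_TID,
						memory_order_acq_rel);
	if (tid != INVALID_TID) {
		/* wake the server with that tid */
	}
}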