Subject: Re: [PATCH 5/6] powerpc/mmu: drop mmap_sem now that locked_vm is atomic


On 02/04/2019 at 22:41, Daniel Jordan wrote:
> With locked_vm now an atomic, there is no need to take mmap_sem as
> writer. Delete and refactor accordingly.

Could you please detail the change? It looks like this is not the only
change, and I'm wondering what the consequences are.

Before we did:
- lock
- calculate future value
- check the future value is acceptable
- update value if future value acceptable
- return an error if the future value is not acceptable
- unlock

Now we do:
- atomic update with the future (possibly too high) value
- check the new value is acceptable
- atomic update back to the older value, and return an error, if the
new value is not acceptable

So if a concurrent caller wants to increase locked_vm by an acceptable
amount while another one has temporarily set it too high, it will now
fail where it previously would have succeeded.
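For example, with hypothetical numbers: say the limit is 100 pages,
locked_vm is currently 90, CPU0 wants to add 50 and CPU1 wants to add 5:

	CPU0 (+50)                       CPU1 (+5)
	atomic64_add_return(50) => 140
	                                 atomic64_add_return(5) => 145
	                                 145 > 100 => -ENOMEM (spurious)
	140 > 100 => -ENOMEM
	atomic64_sub(50)

CPU1's request would have succeeded before the patch (90 + 5 <= 100),
but now fails.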

I think we should keep the previous approach and do a cmpxchg after
validating the new value.
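Something like the below (completely untested, increment path only, and
assuming atomic64_try_cmpxchg() is usable here) would keep the
check-before-update semantics without taking mmap_sem:

	if (incr) {
		s64 old, new, limit;

		limit = rlimit(RLIMIT_MEMLOCK) >> PAGE_SHIFT;
		old = atomic64_read(&mm->locked_vm);
		do {
			new = old + npages;
			/* Validate before the new value becomes visible. */
			if (new > limit && !capable(CAP_IPC_LOCK))
				return -ENOMEM;
			/* On failure, 'old' is refreshed with the current value. */
		} while (!atomic64_try_cmpxchg(&mm->locked_vm, &old, new));
	}

That way a too-high intermediate value is never published, so a
concurrent caller with an acceptable request cannot fail spuriously.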

Christophe

>
> Signed-off-by: Daniel Jordan <daniel.m.jordan@oracle.com>
> Cc: Alexey Kardashevskiy <aik@ozlabs.ru>
> Cc: Andrew Morton <akpm@linux-foundation.org>
> Cc: Benjamin Herrenschmidt <benh@kernel.crashing.org>
> Cc: Christoph Lameter <cl@linux.com>
> Cc: Davidlohr Bueso <dave@stgolabs.net>
> Cc: Michael Ellerman <mpe@ellerman.id.au>
> Cc: Paul Mackerras <paulus@samba.org>
> Cc: <linux-mm@kvack.org>
> Cc: <linuxppc-dev@lists.ozlabs.org>
> Cc: <linux-kernel@vger.kernel.org>
> ---
> arch/powerpc/mm/mmu_context_iommu.c | 27 +++++++++++----------------
> 1 file changed, 11 insertions(+), 16 deletions(-)
>
> diff --git a/arch/powerpc/mm/mmu_context_iommu.c b/arch/powerpc/mm/mmu_context_iommu.c
> index 8038ac24a312..a4ef22b67c07 100644
> --- a/arch/powerpc/mm/mmu_context_iommu.c
> +++ b/arch/powerpc/mm/mmu_context_iommu.c
> @@ -54,34 +54,29 @@ struct mm_iommu_table_group_mem_t {
> static long mm_iommu_adjust_locked_vm(struct mm_struct *mm,
> unsigned long npages, bool incr)
> {
> - long ret = 0, locked, lock_limit;
> + long ret = 0;
> + unsigned long lock_limit;
> s64 locked_vm;
>
> if (!npages)
> return 0;
>
> - down_write(&mm->mmap_sem);
> - locked_vm = atomic64_read(&mm->locked_vm);
> if (incr) {
> - locked = locked_vm + npages;
> lock_limit = rlimit(RLIMIT_MEMLOCK) >> PAGE_SHIFT;
> - if (locked > lock_limit && !capable(CAP_IPC_LOCK))
> + locked_vm = atomic64_add_return(npages, &mm->locked_vm);
> + if (locked_vm > lock_limit && !capable(CAP_IPC_LOCK)) {
> ret = -ENOMEM;
> - else
> - atomic64_add(npages, &mm->locked_vm);
> + atomic64_sub(npages, &mm->locked_vm);
> + }
> } else {
> - if (WARN_ON_ONCE(npages > locked_vm))
> - npages = locked_vm;
> - atomic64_sub(npages, &mm->locked_vm);
> + locked_vm = atomic64_sub_return(npages, &mm->locked_vm);
> + WARN_ON_ONCE(locked_vm < 0);
> }
>
> - pr_debug("[%d] RLIMIT_MEMLOCK HASH64 %c%ld %ld/%ld\n",
> - current ? current->pid : 0,
> - incr ? '+' : '-',
> - npages << PAGE_SHIFT,
> - atomic64_read(&mm->locked_vm) << PAGE_SHIFT,
> + pr_debug("[%d] RLIMIT_MEMLOCK HASH64 %c%lu %lld/%lu\n",
> + current ? current->pid : 0, incr ? '+' : '-',
> + npages << PAGE_SHIFT, locked_vm << PAGE_SHIFT,
> rlimit(RLIMIT_MEMLOCK));
> - up_write(&mm->mmap_sem);
>
> return ret;
> }
>
