From: Mike Kravetz <mike.kravetz@oracle.com>
Subject: [PATCH v3 0/8] make hugetlb put_page safe for all calling contexts
This effort is the result of a recent bug report [1].  Syzbot found a
potential deadlock in the hugetlb put_page/free_huge_page path:

  WARNING: SOFTIRQ-safe -> SOFTIRQ-unsafe lock order detected

Since the free_huge_page path already has code to 'hand off' page
free requests to a workqueue, it was suggested to make the in_irq()
detection accurate by always enabling PREEMPT_COUNT [2].  The outcome
of that discussion was that the hugetlb put_page path (free_huge_page)
should be properly fixed and made safe for all calling contexts.
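
To illustrate the class of problem lockdep is warning about, here is a
minimal sketch (illustrative only, not the actual hugetlb code;
'example_lock' stands in for hugetlb_lock):

	static DEFINE_SPINLOCK(example_lock);

	/* SOFTIRQ-unsafe: lock taken without disabling interrupts */
	static void example_free_page(struct page *page)
	{
		spin_lock(&example_lock);
		/* ... return page to the free lists ... */
		spin_unlock(&example_lock);
	}

If this path is reachable from put_page() in softirq context, a softirq
that fires while the lock is held in task context can attempt to take
the lock again and deadlock.  The IRQ safe variant avoids this:

	static void example_free_page_safe(struct page *page)
	{
		unsigned long flags;

		spin_lock_irqsave(&example_lock, flags);
		/* ... return page to the free lists ... */
		spin_unlock_irqrestore(&example_lock, flags);
	}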

This series is based on v5.12-rc3-mmotm-2021-03-17-22-24. At a high
level, the series provides:
- Patches 1 & 2 change CMA bitmap mutex to an irq safe spinlock
- Patch 3 adds a mutex for proc/sysfs interfaces changing hugetlb counts
- Patches 4, 5 & 6 are aimed at reducing lock hold times. To be clear,
the goal is to eliminate single lock hold times of long duration.
Overall lock hold time is not addressed.
- Patch 7 makes hugetlb_lock and subpool lock IRQ safe (a sketch
follows this list). It also reverts the code which defers calls to a
workqueue if !in_task().
- Patch 8 adds some lockdep_assert_held() calls
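
As a reference for reviewers, the conversion in patch 7 follows the
standard IRQ safe locking patterns, and patch 8 adds the usual lockdep
annotations.  A hypothetical sketch (names are illustrative, not the
exact patch contents):

	/* task context with interrupts known to be enabled (per v3) */
	spin_lock_irq(&hugetlb_lock);
	/* ... adjust hstate free/surplus counts ... */
	spin_unlock_irq(&hugetlb_lock);

	/* contexts which may be entered with interrupts disabled */
	unsigned long flags;

	spin_lock_irqsave(&hugetlb_lock, flags);
	/* ... */
	spin_unlock_irqrestore(&hugetlb_lock, flags);

	/* patch 8 style: assert the lock in helpers that require it */
	static void __example_remove_hugetlb_page(struct hstate *h)
	{
		lockdep_assert_held(&hugetlb_lock);
		/* ... */
	}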

[1] https://lore.kernel.org/linux-mm/000000000000f1c03b05bc43aadc@google.com/
[2] http://lkml.kernel.org/r/20210311021321.127500-1-mike.kravetz@oracle.com

v2 -> v3
- Update commit message in patch 1 as suggested by Michal
- Do not use spin_lock_irqsave/spin_unlock_irqrestore when we know we
are in task context as suggested by Michal
- Remove unnecessary INIT_LIST_HEAD() as suggested by Muchun

v1 -> v2
- Drop Roman's cma_release_nowait() patches and just change CMA mutex
to an IRQ safe spinlock.
- Cleanups to variable names, comments and commit messages as suggested
by Michal, Oscar, Miaohe and Muchun.
- Dropped unnecessary INIT_LIST_HEAD as suggested by Michal and list_del
as suggested by Muchun.
- Created update_and_free_pages_bulk helper as suggested by Michal.
- Rebased on v5.12-rc4-mmotm-2021-03-28-16-37
- Added Acked-by: and Reviewed-by: from v1

RFC -> v1
- Add Roman's cma_release_nowait() patches. This eliminated the need
to do a workqueue handoff in hugetlb code.
- Use Michal's suggestion to batch pages for freeing. This eliminated
the need to recalculate loop control variables when dropping the lock.
- Added lockdep_assert_held() calls
- Rebased to v5.12-rc3-mmotm-2021-03-17-22-24

Mike Kravetz (8):
mm/cma: change cma mutex to irq safe spinlock
hugetlb: no need to drop hugetlb_lock to call cma_release
hugetlb: add per-hstate mutex to synchronize user adjustments
hugetlb: create remove_hugetlb_page() to separate functionality
hugetlb: call update_and_free_page without hugetlb_lock
hugetlb: change free_pool_huge_page to remove_pool_huge_page
hugetlb: make free_huge_page irq safe
hugetlb: add lockdep_assert_held() calls for hugetlb_lock

 include/linux/hugetlb.h |   1 +
 mm/cma.c                |  18 +--
 mm/cma.h                |   2 +-
 mm/cma_debug.c          |   8 +-
 mm/hugetlb.c            | 337 +++++++++++++++++++++-------------------
 mm/hugetlb_cgroup.c     |   8 +-
 6 files changed, 195 insertions(+), 179 deletions(-)

--
2.30.2
