Subject: Re: [PATCH] mm: avoid slub allocation while holding list_lock
From: Tetsuo Handa <>
Date: Tue, 10 Sep 2019 05:57:22 +0900
On 2019/09/10 1:00, Kirill A. Shutemov wrote:
> On Mon, Sep 09, 2019 at 12:10:16AM -0600, Yu Zhao wrote:
>> If we are already under list_lock, don't call kmalloc(). Otherwise we
>> will run into deadlock because kmalloc() also tries to grab the same
>> lock.
>>
>> Instead, allocate pages directly. Given currently page->objects has
>> 15 bits, we only need 1 page. We may waste some memory but we only do
>> so when slub debug is on.
>>
>> WARNING: possible recursive locking detected
>> --------------------------------------------
>> mount-encrypted/4921 is trying to acquire lock:
>> (&(&n->list_lock)->rlock){-.-.}, at: ___slab_alloc+0x104/0x437
>>
>> but task is already holding lock:
>> (&(&n->list_lock)->rlock){-.-.}, at: __kmem_cache_shutdown+0x81/0x3cb
>>
>> other info that might help us debug this:
>>  Possible unsafe locking scenario:
>>
>>        CPU0
>>        ----
>>   lock(&(&n->list_lock)->rlock);
>>   lock(&(&n->list_lock)->rlock);
>>
>>  *** DEADLOCK ***
>>
>> Signed-off-by: Yu Zhao <yuzhao@google.com>
>
> Looks sane to me:
>
> Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
>
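For reference, a minimal sketch of the approach the patch description above outlines; the helper name is illustrative only and this is not the actual diff:

    #include <linux/gfp.h>

    /*
     * Illustrative helper (not the posted patch): assumed to be
     * called with n->list_lock already held. kmalloc() could recurse
     * into ___slab_alloc() and try to take the same list_lock, so the
     * object bitmap is taken straight from the page allocator
     * instead. page->objects is at most 15 bits, so the bitmap for
     * one slab always fits in a single page.
     */
    static unsigned long *alloc_object_map_locked(void)
    {
        return (unsigned long *)__get_free_page(GFP_ATOMIC);
    }

    /* The caller releases it with free_page((unsigned long)map); */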
Really?
Since page->objects is handled as a bitmap, the alignment should be BITS_PER_LONG rather than BITS_PER_BYTE (though in this particular case, get_order() would implicitly align to BITS_PER_BYTE * PAGE_SIZE bits). But get_order(0) is undefined behavior.
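In code form, the two concerns look roughly like this (a sketch; object_map_order() is a hypothetical helper and "objects" stands in for page->objects):

    #include <linux/bitops.h>   /* BITS_PER_LONG, BITS_TO_LONGS */
    #include <asm/page.h>       /* get_order() */

    static int object_map_order(unsigned int objects)
    {
        /*
         * The map is walked as an array of longs, so size it in
         * whole longs (BITS_PER_LONG granularity), not merely
         * whole bytes:
         */
        size_t bytes = BITS_TO_LONGS(objects) * sizeof(unsigned long);

        /*
         * get_order(0) is undefined behavior, so guard the
         * zero-object case explicitly before computing an
         * allocation order:
         */
        return bytes ? get_order(bytes) : 0;
    }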