Subject: Re: [PATCH v4] kmemleak: survive in a low-memory situation
On 3/27/19 7:44 AM, Michal Hocko wrote:
> What? Normal spin lock implementation doesn't disable interrupts. So
> either I misunderstand what you are saying or you seem to be confused.
> the thing is that in_atomic relies on preempt_count to work properly and
> if you have CONFIG_PREEMPT_COUNT=n then you simply never know whether
> preemption is disabled so you do not know that a spin_lock is held.
> irqs_disabled on the other hand checks whether arch specific flag for
> IRQs handling is set (or cleared). So you would only catch irq safe spin
> locks with the above check.

Exactly. kmemleak_alloc() is only called from a few call sites, namely slab
allocation, neigh_hash_alloc(), alloc_page_ext(), sg_kmalloc(),
early_amd_iommu_init() and blk_mq_alloc_rqs(), and my review did not find any
of them holding an irq-unsafe spinlock.
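
Just to make sure we are talking about the same limitation, here is a rough
sketch (not code from the patch, and the helper name is made up) of the kind
of check being discussed and where it falls short:

#include <linux/preempt.h>
#include <linux/irqflags.h>

/*
 * Sketch only: a "can we sleep here?" test and why it misses
 * irq-unsafe spinlocks.
 */
static bool alloc_must_not_sleep(void)
{
	/*
	 * Catches hardirq context and irq-safe locks, since those
	 * really do clear the arch interrupt flag.
	 */
	if (irqs_disabled())
		return true;

	/*
	 * Only meaningful with CONFIG_PREEMPT_COUNT=y; with it off,
	 * spin_lock() does not touch preempt_count, so a plain
	 * spinlock held by the caller is invisible here.
	 */
	if (in_atomic())
		return true;

	return false;
}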

Could future code changes suddenly call kmemleak_alloc() with an irq-unsafe
spinlock held? Always possible, but unlikely. I could add a comment to
kmemleak_alloc() about this, though.
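
Something along these lines, for example (the wording is only a sketch):

/*
 * Note for callers of kmemleak_alloc(): with CONFIG_PREEMPT_COUNT=n,
 * kmemleak has no way to tell that an irq-unsafe spinlock is held
 * (in_atomic() does not see it and irqs_disabled() only covers the
 * irq-safe case), so do not call this while holding such a lock.
 */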
