Subject: Re: [RFC] Is the usage of spin_{lock|unlock}_irq in clear_page_dirty_for_io correct?


On 4/4/2018 7:12 AM, Greg Thelen wrote:
> On Tue, Apr 3, 2018 at 5:03 AM Michal Hocko <mhocko@kernel.org> wrote:
>
>> On Mon 02-04-18 19:50:50, Wang Long wrote:
>>> Hi, Johannes Weiner and Tejun Heo
>>>
>>> I use linux-4.4.y to test the new cgroup controller io and the current
>>> stable kernel linux-4.4.y has the follow logic
>>>
>>>
>>> int clear_page_dirty_for_io(struct page *page){
>>> ...
>>> ...
>>>         memcg = mem_cgroup_begin_page_stat(page);         ----------(a)
>>>         wb = unlocked_inode_to_wb_begin(inode, &locked);  ----------(b)
>>>         if (TestClearPageDirty(page)) {
>>>                 mem_cgroup_dec_page_stat(memcg, MEM_CGROUP_STAT_DIRTY);
>>>                 dec_zone_page_state(page, NR_FILE_DIRTY);
>>>                 dec_wb_stat(wb, WB_RECLAIMABLE);
>>>                 ret = 1;
>>>         }
>>>         unlocked_inode_to_wb_end(inode, locked);          ----------(c)
>>>         mem_cgroup_end_page_stat(memcg);                  ----------(d)
>>>         return ret;
>>> ...
>>> ...
>>> }
>>>
>>>
>>> When the memcg is moving and the I_WB_SWITCH flag is set on the inode, the
>>> locking sequence is the following:
>>>
>>>
>>> spin_lock_irqsave(&memcg->move_lock, flags); -------------(a)
>>> spin_lock_irq(&inode->i_mapping->tree_lock); ------------(b)
>>> spin_unlock_irq(&inode->i_mapping->tree_lock); -----------(c)
>>> spin_unlock_irqrestore(&memcg->move_lock, flags); -----------(d)
>>>
>>>
>>> After (c), local interrupts are enabled. I think this is not correct.
>>>
>>> We got a deadlock backtrace after (c): the CPU took a softirq which also
>>> called mem_cgroup_begin_page_stat and tried to take the same
>>> memcg->move_lock.
>>>
>>> Since the conditions are so restrictive, this scenario is difficult to
>>> reproduce, but it really exists.
>>>
>>> So how about changing (b) and (c) to spin_lock_irqsave/spin_unlock_irqrestore?
>> Yes, it seems we really need this even for the current tree. Please note
>> that at least clear_page_dirty_for_io doesn't lock the memcg anymore.
>> __cancel_dirty_page still uses lock_page_memcg though (former
>> mem_cgroup_begin_page_stat).
>> --
>> Michal Hocko
>> SUSE Labs
> I agree the issue looks real in 4.4 stable and upstream. It seems like
> unlocked_inode_to_wb_begin/_end should use spin_lock_irqsave/restore.
>
> I'm testing a little patch now.
Thanks.

When it is fixed upstream, the longterm kernels 4.9 and 4.14 will also need the fix.
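
For reference, below is a minimal sketch of the direction Greg describes: make
unlocked_inode_to_wb_begin/_end save and restore the caller's interrupt state
instead of unconditionally re-enabling interrupts with spin_unlock_irq, so the
nesting inside memcg->move_lock stays irq-safe. The wb_lock_cookie structure
and its field names are illustrative assumptions here, not necessarily what the
final patch uses.

struct wb_lock_cookie {
        bool locked;
        unsigned long flags;
};

static inline struct bdi_writeback *
unlocked_inode_to_wb_begin(struct inode *inode, struct wb_lock_cookie *cookie)
{
        rcu_read_lock();

        /*
         * If the inode is switching writeback structures (I_WB_SWITCH),
         * its wb can only be read reliably under the mapping's tree_lock.
         */
        cookie->locked = smp_load_acquire(&inode->i_state) & I_WB_SWITCH;

        if (unlikely(cookie->locked))
                /*
                 * Save the caller's irq state rather than assuming irqs are
                 * enabled, so a caller already holding memcg->move_lock with
                 * irqs disabled does not get them re-enabled in _end below.
                 */
                spin_lock_irqsave(&inode->i_mapping->tree_lock, cookie->flags);

        return inode_to_wb(inode);
}

static inline void
unlocked_inode_to_wb_end(struct inode *inode, struct wb_lock_cookie *cookie)
{
        if (unlikely(cookie->locked))
                spin_unlock_irqrestore(&inode->i_mapping->tree_lock,
                                       cookie->flags);

        rcu_read_unlock();
}

Callers such as clear_page_dirty_for_io would then pass a struct wb_lock_cookie
instead of a bool, keeping interrupts disabled across the whole (a)-(d)
sequence above.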
