Subject: Re: [PATCH] mm/mempolicy: fix potential mpol_new leak in shared_policy_replace
On 2022/3/17 17:34, Miaohe Lin wrote:
> On 2022/3/17 17:03, Michal Hocko wrote:
>> On Thu 17-03-22 10:05:08, Miaohe Lin wrote:
>>> On 2022/3/16 17:56, Michal Hocko wrote:
>>>> On Wed 16-03-22 14:39:37, Miaohe Lin wrote:
>>>>> On 2022/3/15 23:27, Michal Hocko wrote:
>>>>>> On Tue 15-03-22 21:42:29, Miaohe Lin wrote:
>>>>>>> On 2022/3/15 0:44, Michal Hocko wrote:
>>>>>>>> On Fri 11-03-22 17:36:24, Miaohe Lin wrote:
>>>>>>>>> If mpol_new is allocated but not used in the restart loop, mpol_new will
>>>>>>>>> be freed via mpol_put before returning to the caller. But refcnt is not
>>>>>>>>> initialized yet, so mpol_put cannot do the right thing and might leak
>>>>>>>>> the unused mpol_new.
>>>>>>>>
>>>>>>>> The code is really hideous, but is there really any bug there? AFAICS the
>>>>>>>> new policy is only allocated in the if (n->end > end) branch, and that one
>>>>>>>> will set the reference count on the retry. Or am I missing something?
>>>>>>>>
>>>>>>>
>>>>>>> Many thanks for your comment.
>>>>>>> IIUC, the new policy is allocated via the code below:
>>>>>>>
>>>>>>> shared_policy_replace:
>>>>>>> alloc_new:
>>>>>>> 	write_unlock(&sp->lock);
>>>>>>> 	ret = -ENOMEM;
>>>>>>> 	n_new = kmem_cache_alloc(sn_cache, GFP_KERNEL);
>>>>>>> 	if (!n_new)
>>>>>>> 		goto err_out;
>>>>>>> 	mpol_new = kmem_cache_alloc(policy_cache, GFP_KERNEL);
>>>>>>> 	if (!mpol_new)
>>>>>>> 		goto err_out;
>>>>>>> 	goto restart;
>>>>>>>
>>>>>>> And mpol_new's reference count will be set before use in the n->end > end
>>>>>>> case. But if that is *not* the case, i.e. mpol_new is not inserted into the
>>>>>>> rb_tree, mpol_new will be freed via mpol_put before returning:
>>>>>>
>>>>>> One thing I missed previously is that the lock is dropped during
>>>>>> the allocation, so I guess the memory policy could have been changed
>>>>>> during that time. Is this possible? Have you explored this possibility?
>>>>>> Is this a theoretical problem, or can it be triggered intentionally?
>>>>>>
>>>>>
>>>>> This was found via code investigation. I think this could be triggered if
>>>>> there are many concurrent mpol_set_shared_policy calls in flight. But the
>>>>> user-visible effect might be obscure, as possibly only sizeof(struct
>>>>> mempolicy) bytes leak each time.
>>>>>
>>>>>> These details would be really interesting for the changelog so that we
>>>>>> can judge how important this would be.
>>>>>
>>>>> This might not be that important, as this issue has been well concealed for
>>>>> almost ten years (since commit 42288fe366c4 ("mm: mempolicy: Convert shared_policy mutex to spinlock")).
>>>>
>>>> I think it is really worth drilling down to the bottom of the issue.
>>>> While "theoretically possible" can be good enough to justify the change,
>>>> it is usually preferred to describe the underlying problem for future
>>>> maintainability.
>>>
>>> This issue mainly causes mpol_new memory leaks, and this is pointed out in
>>> the commit log. Am I supposed to do something more to move this patch
>>> forward? Could you point that out for me?
>>
>> Sorry if I was not really clear. My main request is to have a clear
>> insight into whether this is a theoretical issue or the leak can really be
>> triggered. If the latter, we need to mark it properly and backport it to
>> older kernels, because memory leaks can lead to DoS when they are
>> reasonably easy to trigger.
>>
>> Is this more clear now?
>
> I see. Many thanks. I will have a try at triggering this. :)
>

This can be triggered easily with the below code snippet in my virtual machine:

	shmid = shmget((key_t)5566, 1024 * PAGE_SIZE, 0666 | IPC_CREAT);
	shm = shmat(shmid, 0, 0);
	for (;;) {
		mbind(shm, 1024 * PAGE_SIZE, MPOL_LOCAL, mask, maxnode, 0);
		mbind(shm + 128 * PAGE_SIZE, 128 * PAGE_SIZE, MPOL_DEFAULT,
		      mask, maxnode, 0);
	}
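
Expanded into a self-contained reproducer (a sketch only; the zeroed
nodemask, the 4K PAGE_SIZE, and building against libnuma are assumptions
not spelled out above):

/* build: gcc repro.c -lnuma */
#include <numaif.h>
#include <sys/ipc.h>
#include <sys/shm.h>

#define PAGE_SIZE 4096UL	/* assuming 4K pages */

int main(void)
{
	/* MPOL_LOCAL and MPOL_DEFAULT take an empty node set. */
	unsigned long mask = 0;
	unsigned long maxnode = sizeof(mask) * 8;
	int shmid = shmget((key_t)5566, 1024 * PAGE_SIZE, 0666 | IPC_CREAT);
	char *shm = shmat(shmid, 0, 0);

	for (;;) {
		/* Install one policy over the whole segment... */
		mbind(shm, 1024 * PAGE_SIZE, MPOL_LOCAL, &mask, maxnode, 0);
		/* ...then replace a sub-range, so shared_policy_replace()
		 * has to split the old policy node. */
		mbind(shm + 128 * PAGE_SIZE, 128 * PAGE_SIZE, MPOL_DEFAULT,
		      &mask, maxnode, 0);
	}
}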

If there are many processes doing the above work, mpol_new will be leaked easily.
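
For reference, the leak sits on the err_out path of shared_policy_replace():
even the success path falls through to err_out, and an unused mpol_new is
put there with its refcount still uninitialized (abridged from
mm/mempolicy.c; comments are mine):

	write_unlock(&sp->lock);
	ret = 0;

err_out:
	if (mpol_new)
		/* refcnt was never set after kmem_cache_alloc(), so the
		 * atomic_dec_and_test() inside mpol_put() operates on
		 * uninitialized data and in general never reaches zero,
		 * leaving mpol_new allocated in policy_cache forever. */
		mpol_put(mpol_new);
	if (n_new)
		kmem_cache_free(sn_cache, n_new);

	return ret;
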
So should I resend this patch with a Cc to stable? But it seems I'm not
supposed to make this decision, and the maintainer will take care of this?
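
For completeness, the direction of the patch is to set the refcount right
after the allocation quoted earlier, so that an unused mpol_new can always
be dropped safely (a sketch; the exact hunk is in the original patch mail):

	mpol_new = kmem_cache_alloc(policy_cache, GFP_KERNEL);
	if (!mpol_new)
		goto err_out;
	/* Start with a reference of 1 so mpol_put() on the err_out
	 * path reliably frees an unused mpol_new. */
	atomic_set(&mpol_new->refcnt, 1);
	goto restart;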

Many thanks. :)

