Subject: Re: hugepage related lockdep trace.
On Fri, Jul 19, 2013 at 1:42 AM, Aneesh Kumar K.V
<aneesh.kumar@linux.vnet.ibm.com> wrote:
> Minchan Kim <minchan@kernel.org> writes:
>> IMHO, it's a false positive because the i_mmap_mutex in the report was
>> held by kswapd, while the one taken in the middle of the fault path can
>> never be in kswapd context.
>>
>> It seems lockdep's reclaim-over-fs checking isn't smart enough to
>> distinguish between background and direct reclaim.
>>
>> Let's wait for others' opinions.
>
> Is that reasoning correct? We may not deadlock because hugetlb pages
> cannot be reclaimed, so the fault path in hugetlb won't end up
> reclaiming pages from the same inode. But the report is correct, right?
>
>
> Looking at the hugetlb code, in huge_pmd_share we have:
>
> out:
> 	pte = (pte_t *)pmd_alloc(mm, pud, addr);
> 	mutex_unlock(&mapping->i_mmap_mutex);
> 	return pte;
>
> I guess we should move that pmd_alloc outside i_mmap_mutex. Otherwise
> that pmd_alloc can result in a reclaim, which can call shrink_page_list?
>
Hm, can huge pages be reclaimed, say by kswapd currently?

> Something like this?
>
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index 83aff0a..2cb1be3 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -3266,8 +3266,8 @@ pte_t *huge_pmd_share(struct mm_struct *mm, unsigned long addr, pud_t *pud)
>  		put_page(virt_to_page(spte));
>  	spin_unlock(&mm->page_table_lock);
>  out:
> -	pte = (pte_t *)pmd_alloc(mm, pud, addr);
>  	mutex_unlock(&mapping->i_mmap_mutex);
> +	pte = (pte_t *)pmd_alloc(mm, pud, addr);
>  	return pte;
>  }
>
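For illustration only, here is a minimal userspace sketch (not kernel code) of the dependency chain lockdep records and of why the reordering in the patch above removes it. It assumes a plain pthread mutex stands in for i_mmap_mutex, and the helpers allocate_pagetable() and reclaim() are hypothetical stand-ins for pmd_alloc() and for the rmap walk done during page reclaim; none of these names or signatures exist in the kernel.

/*
 * Hypothetical userspace model -- NOT kernel code.
 * i_mmap_mutex is modelled as a pthread mutex; allocate_pagetable()
 * stands in for pmd_alloc(), and reclaim() stands in for the rmap walk
 * during page reclaim, which takes i_mmap_mutex.
 */
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t i_mmap_mutex = PTHREAD_MUTEX_INITIALIZER;

/* Reclaim (kswapd or direct reclaim) needs i_mmap_mutex for the rmap walk. */
static void reclaim(void)
{
	pthread_mutex_lock(&i_mmap_mutex);
	/* ... unmap and free pages of this mapping ... */
	pthread_mutex_unlock(&i_mmap_mutex);
}

/* An allocation may fall into direct reclaim under memory pressure. */
static void allocate_pagetable(int memory_pressure)
{
	if (memory_pressure)
		reclaim();
}

int main(void)
{
	/*
	 * Current ordering in huge_pmd_share(): the allocation happens with
	 * i_mmap_mutex held.  If the allocation ever entered reclaim
	 * (memory_pressure = 1), reclaim() would try to take the mutex we
	 * already hold -- the i_mmap_mutex -> reclaim -> i_mmap_mutex chain
	 * lockdep complains about, even if hugetlb never actually hits it.
	 */
	pthread_mutex_lock(&i_mmap_mutex);
	allocate_pagetable(0);	/* passing 1 here would deadlock this model */
	pthread_mutex_unlock(&i_mmap_mutex);

	/*
	 * Ordering after the proposed patch: unlock first, then allocate, so
	 * any reclaim triggered by the allocation no longer nests inside
	 * i_mmap_mutex.
	 */
	pthread_mutex_lock(&i_mmap_mutex);
	/* ... find and share the pmd ... */
	pthread_mutex_unlock(&i_mmap_mutex);
	allocate_pagetable(1);	/* safe: reclaim takes a mutex we do not hold */

	printf("no lock nesting between i_mmap_mutex and reclaim\n");
	return 0;
}

In this model the fix is purely an ordering change, which mirrors what the diff above does: the allocation that can recurse into reclaim is moved out from under the mutex that reclaim itself needs.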

