Subject: Re: [PATCH 4/6] mm: hugetlb_vmemmap: add missing smp_wmb() before set_pte_at()
From: Miaohe Lin <linmiaohe@huawei.com>
On 2022/8/22 18:23, Muchun Song wrote:
>
>
>> On Aug 22, 2022, at 16:45, Miaohe Lin <linmiaohe@huawei.com> wrote:
>>
>> On 2022/8/20 16:12, Muchun Song wrote:
>>>
>>>
>>>> On Aug 16, 2022, at 21:05, Miaohe Lin <linmiaohe@huawei.com> wrote:
>>>>
>>>> The memory barrier smp_wmb() is needed to make sure that preceding stores
>>>> to the page contents become visible before the below set_pte_at() write.
>>>
>>> I found another place where is a similar case. See kasan_populate_vmalloc_pte() in
>>> mm/kasan/shadow.c.
>>
>> Thanks for your report.
>>
>>>
>>> Should we fix it as well?
>>
>> I'm not familiar with kasan yet, but I think a memory barrier is needed here, or memory
>> corruption can't be detected until the contents become visible. Would smp_mb__after_atomic()
>> before set_pte_at() be enough? What's your opinion?
>
> I didn't see any atomic operation between set_pte_at() and memset(), so I don't think
> smp_mb__after_atomic() is applicable if we really need to insert a barrier. I suggest

Oh, it should be smp_mb__after_spinlock(), i.e., something like the diff below:

diff --git a/mm/kasan/shadow.c b/mm/kasan/shadow.c
index 0e3648b603a6..38e503c89740 100644
--- a/mm/kasan/shadow.c
+++ b/mm/kasan/shadow.c
@@ -277,6 +277,7 @@ static int kasan_populate_vmalloc_pte(pte_t *ptep, unsigned long addr,
 
 	spin_lock(&init_mm.page_table_lock);
 	if (likely(pte_none(*ptep))) {
+		smp_mb__after_spinlock();
 		set_pte_at(&init_mm, addr, ptep, pte);
 		page = 0;
 	}
Does this make sense to you?
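
For reference, a minimal sketch of the ordering requirement that both call sites share.
The reader side and the names (page_addr, val) are illustrative assumptions, not actual
kernel code:

	/* Writer: initialize the page contents, then publish the page via the PTE. */
	memset(page_addr, 0, PAGE_SIZE);		/* A: init contents */
	smp_wmb();					/* order A before B */
	set_pte_at(&init_mm, addr, ptep, pte);		/* B: publish mapping */

	/* Hypothetical reader: another CPU walking the page table. */
	pte = READ_ONCE(*ptep);				/* observes B */
	if (!pte_none(pte))
		val = *(char *)page_address(pte_page(pte));	/* must see A */

In kasan_populate_vmalloc_pte() the memset() happens before spin_lock() is taken, so
smp_mb__after_spinlock() should give the same "A before B" ordering by upgrading the
lock acquire to a full barrier, without needing a separate smp_wmb().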

> you send an RFC patch to the KASAN maintainers; they are more familiar with this than
> us.

Sounds like a good idea. Will do it.

Thanks,
Miaohe Lin
