Subject: Re: [PATCH] iommu/vt-d: debugfs: Increment the reference count of page table page
Hi Kevin,

On 7/3/2023 10:48 PM, Liu, Jingqi wrote:
>
> On 7/3/2023 3:00 PM, Tian, Kevin wrote:
>>> From: Liu, Jingqi <jingqi.liu@intel.com>
>>> Sent: Sunday, June 25, 2023 10:28 AM
>>>
>>> There may be a race with the iommu_unmap() interface when traversing a page
>>> table.
>>>
>>> When debugfs traverses an IOMMU page table, iommu_unmap() may clear
>>> entries and free the page table pages pointed to by the entries.
>>> So debugfs may read invalid or freed pages.
>>>
>>> To avoid this, increment the refcount of a page table page before
>>> traversing the page, and decrement its refcount after traversing it.
>> I'm not sure how this race can be fully avoided w/o cooperation in the
>> unmap path...
> Thanks.
> Indeed, in order to fully avoid this, we need cooperation in the unmap
> path. :)
>>> Signed-off-by: Jingqi Liu <Jingqi.liu@intel.com>
>>> ---
>>>   drivers/iommu/intel/debugfs.c | 36 ++++++++++++++++++++++++++++++++++--
>>>   1 file changed, 34 insertions(+), 2 deletions(-)
>>>
>>> diff --git a/drivers/iommu/intel/debugfs.c b/drivers/iommu/intel/debugfs.c
>>> index 1f925285104e..d228e1580aec 100644
>>> --- a/drivers/iommu/intel/debugfs.c
>>> +++ b/drivers/iommu/intel/debugfs.c
>>> @@ -333,9 +333,41 @@ static void pgtable_walk_level(struct seq_file *m, struct dma_pte *pde,
>>>           path[level] = pde->val;
>>>           if (dma_pte_superpage(pde) || level == 1)
>>>               dump_page_info(m, start, path);
>>> -        else
>>> -            pgtable_walk_level(m, phys_to_virt(dma_pte_addr(pde)),
>>> +        else {
>>> +            struct page *pg;
>>> +            u64 pte_addr;
>>> +
>>> +            /*
>>> +             * The entry references a Page-Directory Table
>>> +             * or a Page Table.
>>> +             */
>>> +retry:
>>> +            pte_addr = dma_pte_addr(pde);
>>> +            pg = pfn_to_page(pte_addr >> PAGE_SHIFT);
>>> +            if (!get_page_unless_zero(pg))
>>> +                /*
>>> +                 * If this page has a refcount of zero,
>>> +                 * it has been freed, or will be freed.
>>> +                 */
>>> +                continue;
>>> +
>>> +            /* Check if the value of the entry is changed. */
>>> +            if (pde->val != path[level]) {
>>> +                put_page(pg);
>>> +
>>> +                if (!dma_pte_present(pde))
>>> +                    /* The entry is invalid. Skip it. */
>>> +                    continue;
>>> +
>>> +                /* The entry has been updated. */
>>> +                path[level] = pde->val;
>>> +                goto retry;
>>> +            }
>>> +
>>> +            pgtable_walk_level(m, phys_to_virt(pte_addr),
>>>                          level - 1, start, path);
>> What about pde->val getting cleared after phys_to_virt(pte_addr), leading
>> to all the levels below 'pg' being freed? In that case this code still
>> walks the stale 'pg' content, whose entries then all point to invalid
>> pages.
> There are 2 cases for the page pointed to by the PTE below 'pg'.
> 1) The page has been freed.
>      It will be skipped after the following check:
>                         if (!get_page_unless_zero(pg))
>                                 /*
>                                  * If this page has a refcount of zero,
>                                  * it has been freed, or will be freed.
>                                  */
>                                 continue;
>      Debugfs won't walk further.
>
> 2) The page has not been freed.
>      The content of this page is stale.
>      Dumping this stale content seems to be acceptable for debugfs.
>
>      If all the PTEs below 'pg' can be cleared before being freed in the
>      unmap path, the following check can avoid walking these stale pages:
>                 if (!dma_pte_present(pde))
>                         continue;
>> It's probably simpler to just check the format of each PTE (e.g. whether
>> any reserved bit is set) and, if it is abnormal, break out of the current
>> level of the walk.
Thanks for your suggestion.
If the PTE references a page directory/table, bit 3 is ignored by
hardware according to the spec.
In the IOMMU driver, bit 3 is set to 0 by default.

How about setting bit 3 of the corresponding PTE to 1 in the unmap path
to indicate that the page pointed to by the PTE is stale?

The code modified in the unmap path is as follows:

static void dma_pte_list_pagetables(struct dmar_domain *domain,
                                    int level, struct dma_pte *pte,
                                    struct list_head *freelist)
{
        struct page *pg;

        pg = pfn_to_page(dma_pte_addr(pte) >> PAGE_SHIFT);
        list_add_tail(&pg->lru, freelist);
+
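+       /* Mark the entry as stale for the debugfs page-table walk. */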
+       pte->val |= BIT_ULL(3);
......

Then during debugfs traversal, check bit 3 of the PTE before calling
pgtable_walk_level().
If this bit is 1, debugfs stops traversing that entry.
The related code in debugfs is as follows:

+retry:
+                       pte_addr = dma_pte_addr(pde);
+                       pg = pfn_to_page(pte_addr >> PAGE_SHIFT);
+                       if (!get_page_unless_zero(pg))
+                               /*
+                                * If this page has a refcount of zero,
+                                * it has been freed, or will be freed.
+                                */
+                               continue;
+
+                       /*
+                        * Check if the page pointed to by
+                        * the PTE is stale.
+                        */
+                       if (pde->val & BIT_ULL(3)) {
+                               put_page(pg);
+                               continue;
+                       }
+
+                       /* Check if the value of this entry is changed. */
+                       if (pde->val != path[level]) {
+                               put_page(pg);
+
+                               if (!dma_pte_present(pde))
+                                       /* This entry is invalid. Skip it. */
+                                       continue;
+
+                               /* The entry has been updated. */
+                               path[level] = pde->val;
+                               goto retry;
+                       }
+
+                       pgtable_walk_level(m, phys_to_virt(pte_addr),
                                           level - 1, start, path);
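
As a side note, if this direction is acceptable, the bit 3 marker could be
wrapped in a pair of small helpers so that its meaning is defined in one
place. Below is only a rough sketch; the helper names are illustrative and
don't exist in the driver today (BIT_ULL() comes from <linux/bits.h>):

/* Software-only "stale" marker on a non-leaf PTE; ignored by hardware. */
#define DMA_PTE_SW_STALE        BIT_ULL(3)

/* Mark an entry whose page table page has been queued for freeing. */
static inline void dma_pte_mark_stale(struct dma_pte *pte)
{
        pte->val |= DMA_PTE_SW_STALE;
}

/* True if the page table page referenced by this entry is stale. */
static inline bool dma_pte_is_stale(struct dma_pte *pte)
{
        return pte->val & DMA_PTE_SW_STALE;
}

The unmap path would then call dma_pte_mark_stale(pte) instead of
open-coding BIT_ULL(3), and the debugfs check above would simply read
dma_pte_is_stale(pde).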

Thanks,
Jingqi

