Subject: Re: [PATCH v3 01/12] iommu: Introduce dirty log tracking framework
On 4/15/21 2:18 PM, Keqian Zhu wrote:
> Hi Baolu,
>
> Thanks for the review!
>
> On 2021/4/14 15:00, Lu Baolu wrote:
>> Hi Keqian,
>>
>> On 4/13/21 4:54 PM, Keqian Zhu wrote:
>>> Some types of IOMMU are capable of tracking the DMA dirty log, such as
>>> ARM SMMU with HTTU or Intel IOMMU with SLADE. This patch introduces a
>>> dirty log tracking framework in the IOMMU base layer.
>>>
>>> Three essential new interfaces are added, and we maintain the dirty log
>>> tracking status in iommu_domain.
>>> 1. iommu_switch_dirty_log: Perform actions to start|stop dirty log tracking
>>> 2. iommu_sync_dirty_log: Sync dirty log from the IOMMU into a dirty bitmap
>>> 3. iommu_clear_dirty_log: Clear the IOMMU dirty log according to a mask bitmap
>>>
>>> A new dev feature is added to indicate whether a specific type of
>>> IOMMU hardware supports dirty log tracking and whether its driver
>>> implements these interfaces.
>>>
>>> Signed-off-by: Keqian Zhu <zhukeqian1@huawei.com>
>>> Signed-off-by: Kunkun Jiang <jiangkunkun@huawei.com>
>>> ---
>>> drivers/iommu/iommu.c | 150 ++++++++++++++++++++++++++++++++++++++++++
>>> include/linux/iommu.h | 53 +++++++++++++++
>>> 2 files changed, 203 insertions(+)
>>>
>>> diff --git a/drivers/iommu/iommu.c b/drivers/iommu/iommu.c
>>> index d0b0a15dba84..667b2d6d2fc0 100644
>>> --- a/drivers/iommu/iommu.c
>>> +++ b/drivers/iommu/iommu.c
>>> @@ -1922,6 +1922,7 @@ static struct iommu_domain *__iommu_domain_alloc(struct bus_type *bus,
>>>  	domain->type = type;
>>>  	/* Assume all sizes by default; the driver may override this later */
>>>  	domain->pgsize_bitmap = bus->iommu_ops->pgsize_bitmap;
>>> +	mutex_init(&domain->switch_log_lock);
>>>  	return domain;
>>>  }
>>> @@ -2720,6 +2721,155 @@ int iommu_domain_set_attr(struct iommu_domain *domain,
>>>  }
>>>  EXPORT_SYMBOL_GPL(iommu_domain_set_attr);
>>> +int iommu_switch_dirty_log(struct iommu_domain *domain, bool enable,
>>> +			   unsigned long iova, size_t size, int prot)
>>> +{
>>> +	const struct iommu_ops *ops = domain->ops;
>>> +	int ret;
>>> +
>>> +	if (unlikely(!ops || !ops->switch_dirty_log))
>>> +		return -ENODEV;
>>> +
>>> +	mutex_lock(&domain->switch_log_lock);
>>> +	if (enable && domain->dirty_log_tracking) {
>>> +		ret = -EBUSY;
>>> +		goto out;
>>> +	} else if (!enable && !domain->dirty_log_tracking) {
>>> +		ret = -EINVAL;
>>> +		goto out;
>>> +	}
>>> +
>>> +	ret = ops->switch_dirty_log(domain, enable, iova, size, prot);
>>> +	if (ret)
>>> +		goto out;
>>> +
>>> +	domain->dirty_log_tracking = enable;
>>> +out:
>>> +	mutex_unlock(&domain->switch_log_lock);
>>> +	return ret;
>>> +}
>>> +EXPORT_SYMBOL_GPL(iommu_switch_dirty_log);
>>
>> Since you also added IOMMU_DEV_FEAT_HWDBM, I am wondering what's the
>> difference between
>>
>> iommu_switch_dirty_log(on) vs. iommu_dev_enable_feature(IOMMU_DEV_FEAT_HWDBM)
>>
>> iommu_switch_dirty_log(off) vs. iommu_dev_disable_feature(IOMMU_DEV_FEAT_HWDBM)
> Indeed. As far as I can see, IOMMU_DEV_FEAT_AUX is not switchable, so enable/disable
> do not apply to it. IOMMU_DEV_FEAT_SVA is switchable, so we can use these
> interfaces for it.
>
> IOMMU_DEV_FEAT_HWDBM is used to indicate whether hardware supports HWDBM, so we should

Previously we had iommu_dev_has_feature(), but it was cleaned up due to
the lack of real users. If you have a real use case for it, why not bring
it back?
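
(For reference, a minimal sketch of roughly what the removed helper looked
like; the exact shape and the dev_has_feat hook name are recalled from
older trees and may differ:)

	bool iommu_dev_has_feature(struct device *dev, enum iommu_dev_features feat)
	{
		const struct iommu_ops *ops = dev->bus->iommu_ops;

		/* Ask the vendor driver whether it implements the feature. */
		if (ops && ops->dev_has_feat)
			return ops->dev_has_feat(dev, feat);

		return false;
	}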

> design it as not switchable. I will modify the commit message of patch#12, thanks!

I am not sure that I fully get your point, but I can't see any gap in
using iommu_dev_enable/disable_feature() to switch dirty log on and off.
Perhaps I missed something.
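
(To make the contrast concrete, here is a minimal sketch of how a consumer
such as a VFIO live-migration path might combine the two: enable the
capability once per device, then drive tracking with the new domain
interfaces. The helper names and ranges are illustrative, and
iommu_clear_dirty_log() is assumed to take the same bitmap-addressing
arguments as iommu_sync_dirty_log():)

	/* Illustrative only: one-time capability switch, then range tracking. */
	static int track_dma_dirty(struct device *dev, struct iommu_domain *domain,
				   unsigned long iova, size_t size)
	{
		int ret;

		ret = iommu_dev_enable_feature(dev, IOMMU_DEV_FEAT_HWDBM);
		if (ret)
			return ret;

		/* Start dirty log tracking on the mapped range. */
		return iommu_switch_dirty_log(domain, true, iova, size,
					      IOMMU_READ | IOMMU_WRITE);
	}

	/* Periodically harvest dirty bits during migration, then reset them
	 * so the next sync reports only newly dirtied pages. */
	static int harvest_dma_dirty(struct iommu_domain *domain, unsigned long iova,
				     size_t size, unsigned long *bitmap)
	{
		int ret;

		ret = iommu_sync_dirty_log(domain, iova, size, bitmap,
					   iova, PAGE_SHIFT);
		if (ret)
			return ret;

		return iommu_clear_dirty_log(domain, iova, size, bitmap,
					     iova, PAGE_SHIFT);
	}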

>
>>
>>> +
>>> +int iommu_sync_dirty_log(struct iommu_domain *domain, unsigned long iova,
>>> +			 size_t size, unsigned long *bitmap,
>>> +			 unsigned long base_iova, unsigned long bitmap_pgshift)
>>> +{
>>> +	const struct iommu_ops *ops = domain->ops;
>>> +	unsigned int min_pagesz;
>>> +	size_t pgsize;
>>> +	int ret = 0;
>>> +
>>> +	if (unlikely(!ops || !ops->sync_dirty_log))
>>> +		return -ENODEV;
>>> +
>>> +	min_pagesz = 1 << __ffs(domain->pgsize_bitmap);
>>> +	if (!IS_ALIGNED(iova | size, min_pagesz)) {
>>> +		pr_err("unaligned: iova 0x%lx size 0x%zx min_pagesz 0x%x\n",
>>> +		       iova, size, min_pagesz);
>>> +		return -EINVAL;
>>> +	}
>>> +
>>> +	mutex_lock(&domain->switch_log_lock);
>>> +	if (!domain->dirty_log_tracking) {
>>> +		ret = -EINVAL;
>>> +		goto out;
>>> +	}
>>> +
>>> +	while (size) {
>>> +		pgsize = iommu_pgsize(domain, iova, size);
>>> +
>>> +		ret = ops->sync_dirty_log(domain, iova, pgsize,
>>> +					  bitmap, base_iova, bitmap_pgshift);
>>
>> Any reason why you want to do this in a per-4K-page manner? This can
>> lead to a lot of indirect calls and bad performance.
>>
>> How about a sync_dirty_pages()?
> The function name of iommu_pgsize() is a bit puzzling. It actually
> computes the max page size that fits into the remaining size, so pgsize
> can be a large page size even if the underlying mappings are 4K.
> __iommu_unmap() has similar logic.

This series includes some improvements to the iommu_pgsize() helper:

https://lore.kernel.org/linux-iommu/20210405191112.28192-1-isaacm@codeaurora.org/
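
(For reference, the helper roughly as it stood before that series: it
picks the largest hardware-supported page size that fits both the
alignment of the address and the remaining length, which is why a single
indirect call can cover far more than 4K:)

	static size_t iommu_pgsize(struct iommu_domain *domain,
				   unsigned long addr_merge, size_t size)
	{
		unsigned int pgsize_idx;
		size_t pgsize;

		/* Max page size that still fits into 'size' */
		pgsize_idx = __fls(size);

		/* Max page size allowed by the address alignment */
		if (likely(addr_merge)) {
			unsigned int align_pgsize_idx = __ffs(addr_merge);

			pgsize_idx = min(pgsize_idx, align_pgsize_idx);
		}

		/* Build a mask of acceptable page sizes... */
		pgsize = (1UL << (pgsize_idx + 1)) - 1;

		/* ...and throw away those the hardware doesn't support */
		pgsize &= domain->pgsize_bitmap;

		/* There must be at least one supported size left */
		BUG_ON(!pgsize);

		/* Pick the biggest remaining page size */
		pgsize_idx = __fls(pgsize);

		return 1UL << pgsize_idx;
	}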

Best regards,
baolu
