Subject: Re: [PATCH v2 08/12] iommu: Prepare for separating SVA and IOPF
From: Baolu Lu <baolu.lu@linux.intel.com>

On 2023/8/3 16:16, Tian, Kevin wrote:
>> From: Lu Baolu <baolu.lu@linux.intel.com>
>> Sent: Thursday, July 27, 2023 1:49 PM
>>
>> @@ -82,7 +82,7 @@ static void iopf_handler(struct work_struct *work)
>>  	if (!domain || !domain->iopf_handler)
>>  		status = IOMMU_PAGE_RESP_INVALID;
>>
>> -	list_for_each_entry_safe(iopf, next, &group->faults, list) {
>> +	list_for_each_entry(iopf, &group->faults, list) {
>>  		/*
>>  		 * For the moment, errors are sticky: don't handle subsequent
>>  		 * faults in the group if there is an error.
>> @@ -90,14 +90,20 @@ static void iopf_handler(struct work_struct *work)
>>  		if (status == IOMMU_PAGE_RESP_SUCCESS)
>>  			status = domain->iopf_handler(&iopf->fault,
>>  						      domain->fault_data);
>> -
>> -		if (!(iopf->fault.prm.flags &
>> -		      IOMMU_FAULT_PAGE_REQUEST_LAST_PAGE))
>> -			kfree(iopf);
>>  	}
>>
>>  	iopf_complete_group(group->dev, &group->last_fault, status);
>> -	kfree(group);
>> +	iopf_free_group(group);
>> +}
>
> this is a perf-critical path. It's not good to traverse the list twice.

Freeing the fault group is no longer on the perf-critical path, right?
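
For context, a minimal sketch of what iopf_free_group() plausibly looks like
here, reconstructed from the kfree() logic removed above (not copied from the
series): it walks the list once more, outside the fault-handling fast path,
freeing the partial faults and then the group itself.

static void iopf_free_group(struct iopf_group *group)
{
	struct iopf_fault *iopf, *next;

	list_for_each_entry_safe(iopf, next, &group->faults, list) {
		/* The last fault is embedded in the group and freed with it. */
		if (!(iopf->fault.prm.flags & IOMMU_FAULT_PAGE_REQUEST_LAST_PAGE))
			kfree(iopf);
	}

	kfree(group);
}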

>
>> +
>> +static int iopf_queue_work(struct iopf_group *group, work_func_t func)
>> +{
>> +	struct iopf_device_param *iopf_param = group->dev->iommu->iopf_param;
>> +
>> +	INIT_WORK(&group->work, func);
>> +	queue_work(iopf_param->queue->wq, &group->work);
>> +
>> +	return 0;
>>  }
>
> Is there a plan to introduce further errors in the future? Otherwise this
> should be void.

queue_work() returns true or false. I should check it and return the result
accordingly.
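
Something like the following would do it (a sketch of the fix, not the final
patch; the -EBUSY error code is my assumption for illustration):

static int iopf_queue_work(struct iopf_group *group, work_func_t func)
{
	struct iopf_device_param *iopf_param = group->dev->iommu->iopf_param;

	INIT_WORK(&group->work, func);
	/* queue_work() returns false if the work was already queued. */
	if (!queue_work(iopf_param->queue->wq, &group->work))
		return -EBUSY;

	return 0;
}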

>
> btw the work queue is only for sva. If there is no other caller, this can
> just be kept in iommu-sva.c. No need to create a helper.

The definition of struct iopf_device_param is in this file, so I added a
helper to avoid making it visible globally.

>
>> @@ -199,8 +204,11 @@ int iommu_queue_iopf(struct iommu_fault *fault, struct device *dev)
>>  		list_move(&iopf->list, &group->faults);
>>  	}
>>
>> -	queue_work(iopf_param->queue->wq, &group->work);
>> -	return 0;
>> +	ret = iopf_queue_work(group, iopf_handler);
>> +	if (ret)
>> +		iopf_free_group(group);
>> +
>> +	return ret;
>>
>
> Here we can document that the iopf handler (in patch 10) should free the
> group, allowing the optimization inside the handler.

Yeah!
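
Perhaps something along these lines at the call site (the comment wording is
illustrative, not from the series):

	/*
	 * On success, ownership of the group moves to the iopf handler,
	 * which must free it with iopf_free_group(). Free it here only
	 * when queuing the work failed.
	 */
	ret = iopf_queue_work(group, iopf_handler);
	if (ret)
		iopf_free_group(group);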

Best regards,
baolu
