Subject: Re: [PATCH 06/10] iommu/ioasid: Convert to set aware allocations
On Wed, Mar 25, 2020 at 10:55:27AM -0700, Jacob Pan wrote:
> The current ioasid_alloc function takes a token/ioasid_set and records it
> on the IOASID being allocated. There is no alloc/free of the ioasid_set.
>
> With the IOASID set APIs, callers must allocate an ioasid_set before
> allocating IOASIDs within the set. Quota and other ioasid_set-level
> policies can then be enforced.
>
> This patch converts the existing API to the new ioasid_set model.
>
> Signed-off-by: Liu Yi L <yi.l.liu@intel.com>
> Signed-off-by: Jacob Pan <jacob.jun.pan@linux.intel.com>
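
To check that I'm following the intended flow, is it something like the
sketch below? The set alloc/free helper names and signatures here are my
guesses from the description above, not taken from the patches:

	struct ioasid_set *sset;
	ioasid_t id;

	/* Guessed helper: allocate a set (with its quota) first */
	sset = ioasid_alloc_set(token, quota);
	if (IS_ERR(sset))
		return PTR_ERR(sset);

	/* ioasid_alloc() then allocates within that set, as in the hunk below */
	id = ioasid_alloc(sset, min, max, private);
	if (id == INVALID_IOASID) {
		/* Guessed helper: release the set on failure */
		ioasid_free_set(sset);
		return -ENOSPC;
	}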

[...]

> @@ -379,6 +391,9 @@ ioasid_t ioasid_alloc(struct ioasid_set *set, ioasid_t min, ioasid_t max,
> }
> data->id = id;
>
> + /* Store IOASID in the per set data */
> + xa_store(&sdata->xa, id, data, GFP_KERNEL);

I couldn't figure out why you're maintaining an additional xarray for each
set. We're already storing that data in active_allocator->xa, so why the
duplication? If it's for the gPASID -> hPASID translation mentioned in
duplication? If it's for the gPASID -> hPASID translation mentioned by
the cover letter, maybe you could add this xa when introducing that
change?
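
Concretely, the duplication I mean is that the same ioasid_data ends up
reachable through two xarrays, roughly (the first store is a paraphrase of
the existing allocator path, not an exact quote of it):

	/* Existing: the global allocator already indexes the IOASID's data */
	xa_store(&active_allocator->xa, id, data, GFP_ATOMIC);

	/* New in this patch: the same data pointer stored again, per set */
	xa_store(&sdata->xa, id, data, GFP_KERNEL);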

Thanks,
Jean
