From: Tomasz Figa <tfiga@chromium.org>
Date: Wed, 14 Feb 2018
Subject: Re: [PATCH v7 6/6] drm/msm: iommu: Replace runtime calls with runtime suppliers

On Wed, Feb 14, 2018 at 1:17 PM, Vivek Gautam
<vivek.gautam@codeaurora.org> wrote:
> Hi Tomasz,
>
> On Wed, Feb 14, 2018 at 8:31 AM, Tomasz Figa <tfiga@chromium.org> wrote:
>> On Wed, Feb 14, 2018 at 11:13 AM, Rob Clark <robdclark@gmail.com> wrote:
>>> On Tue, Feb 13, 2018 at 8:59 PM, Tomasz Figa <tfiga@chromium.org> wrote:
>>>> On Wed, Feb 14, 2018 at 3:03 AM, Rob Clark <robdclark@gmail.com> wrote:
>>>>> On Tue, Feb 13, 2018 at 4:10 AM, Tomasz Figa <tfiga@chromium.org> wrote:
>>>>>> Hi Vivek,
>>>>>>
>>>>>> Thanks for the patch. Please see my comments inline.
>>>>>>
>>>>>> On Wed, Feb 7, 2018 at 7:31 PM, Vivek Gautam
>>>>>> <vivek.gautam@codeaurora.org> wrote:
>>>>>>> While handling the concerned iommu, there should not be a
>>>>>>> need to power control the drm devices from iommu interface.
>>>>>>> If these drm devices need to be powered around this time,
>>>>>>> the respective drivers should take care of this.
>>>>>>>
>>>>>>> Replace the pm_runtime_get/put_sync(<drm_device>) with
>>>>>>> pm_runtime_get/put_suppliers(<drm_device>) calls, to power-up
>>>>>>> the connected iommu through the device link interface.
>>>>>>> In case the device link is not setup these get/put_suppliers()
>>>>>>> calls will be a no-op, and the iommu driver should take care of
>>>>>>> powering on its devices accordingly.
>>>>>>>
>>>>>>> Signed-off-by: Vivek Gautam <vivek.gautam@codeaurora.org>
>>>>>>> ---
>>>>>>> drivers/gpu/drm/msm/msm_iommu.c | 16 ++++++++--------
>>>>>>> 1 file changed, 8 insertions(+), 8 deletions(-)
>>>>>>>
>>>>>>> diff --git a/drivers/gpu/drm/msm/msm_iommu.c b/drivers/gpu/drm/msm/msm_iommu.c
>>>>>>> index b23d33622f37..1ab629bbee69 100644
>>>>>>> --- a/drivers/gpu/drm/msm/msm_iommu.c
>>>>>>> +++ b/drivers/gpu/drm/msm/msm_iommu.c
>>>>>>> @@ -40,9 +40,9 @@ static int msm_iommu_attach(struct msm_mmu *mmu, const char * const *names,
>>>>>>> struct msm_iommu *iommu = to_msm_iommu(mmu);
>>>>>>> int ret;
>>>>>>>
>>>>>>> - pm_runtime_get_sync(mmu->dev);
>>>>>>> + pm_runtime_get_suppliers(mmu->dev);
>>>>>>> ret = iommu_attach_device(iommu->domain, mmu->dev);
>>>>>>> - pm_runtime_put_sync(mmu->dev);
>>>>>>> + pm_runtime_put_suppliers(mmu->dev);
>>>>>>
>>>>>> For me, it looks like a wrong place to handle runtime PM of IOMMU
>>>>>> here. iommu_attach_device() calls into IOMMU driver's attach_device()
>>>>>> callback and that's where necessary runtime PM gets should happen, if
>>>>>> any. In other words, driver A (MSM DRM driver) shouldn't be dealing
>>>>>> with power state of device controlled by driver B (ARM SMMU).
>>>>>
>>>>> Note that we end up having to do the same, because of iommu_unmap()
>>>>> while DRM driver is powered off.. it might be cleaner if it was all
>>>>> self contained in the iommu driver, but that would make it so other
>>>>> drivers couldn't call iommu_unmap() from an irq handler, which is
>>>>> apparently something that some of them want to do..
>>>>
>>>> I'd assume that runtime PM status is already guaranteed to be active
>>>> when the IRQ handler is running, by some other means (e.g.
>>>> pm_runtime_get_sync() called earlier, when queuing some work to the
>>>> hardware). Otherwise, I'm not sure how a powered down device could
>>>> trigger an IRQ.
>>>>
>>>> So, if the master device power is already on, suppliers should be
>>>> powered on as well, thanks to device links.
>>>>
>>>
>>> umm, that is kindof the inverse of the problem.. the problem is
>>> things like gpu driver (and v4l2 drivers that import dma-buf's,
>>> afaict).. they will potentially call iommu->unmap() when device is not
>>> active (due to userspace or things beyond the control of the driver)..
>>> so *they* would want iommu to do pm get/put calls.
>>
>> Which is fine and which is actually already done by one of the patches
>> in this series, not for map/unmap, but probe, add_device,
>> remove_device. Having parts of the API doing it inside the callback
>> and other parts outside sounds at least inconsistent.
>>
>>> But other drivers
>>> trying to unmap from irq ctx would not. Which is the contradictory
>>> requirement that lead to the idea of iommu user powering up iommu for
>>> unmap.
>>
>> Sorry, maybe I wasn't clear. My last message was supposed to show that
>> it's not contradictory at all, because "other drivers trying to unmap
>> from irq ctx" would already have called pm_runtime_get_*() earlier
>> from a non-irq ctx, which would have also done the same on all the
>> linked suppliers, including the IOMMU. The ultimate result would be
>> that the map/unmap() of the IOMMU driver calling pm_runtime_get_sync()
>> would do nothing besides incrementing the reference count.
>
> The entire point was to avoid the slow path that pm_runtime_get/put_sync()
> would add in map/unmap. It would not be correct to add a slow path in IRQ
> context just to take care of the non-IRQ context, or of the situations
> where the master is already powered off.

Correct me if I'm wrong, but I believe that with what I'm proposing
there wouldn't be any slow path.

a) In IRQ context, the master is already powered on, and so is the SMMU,
through the respective device link, so pm_runtime_get_sync() would
ultimately just increment the runtime PM usage count.

b) In the case where the master is already powered off (which cannot be
IRQ context, for the reason stated in a)), powering on the SMMU is
unavoidable if the SMMU hardware really needs to be accessed (i.e. some
TLBs need to be invalidated, assuming their state is preserved despite
the master being powered down).

>
>>
>>>
>>> There has already been some discussion about this on various earlier
>>> permutations of this patchset. I think we have exhausted all other
>>> options.
>>
>> I guess I should have read those. Let me do that now.
> Yeah, I pointed to the thread in the cover letter and in [PATCH 1/6].
> Thanks.

I read through all the links in the cover letter, and I can see that the
other attempts indeed did not work out, but they were different from what
I'm proposing.

There was also a point raised that __pm_runtime_resume(), called from
pm_runtime_get_sync(), would grab the dev->power.lock spinlock, which is
true, except that if the device is already active it only holds the lock
for the time it takes to check the device state, so I doubt it would
really be a significant point of contention.
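
For reference, when the device is already active, the path taken by
pm_runtime_get_sync() boils down to roughly the following (a loose
paraphrase of __pm_runtime_resume()/rpm_resume() from
drivers/base/power/runtime.c, not the literal code):

	/* Loose paraphrase of the RPM_GET_PUT fast path; error handling omitted. */
	atomic_inc(&dev->power.usage_count);

	spin_lock_irqsave(&dev->power.lock, flags);
	if (dev->power.runtime_status == RPM_ACTIVE) {
		/* Already powered on: nothing to resume. */
		retval = 1;
		goto out;
	}

	/* ... otherwise the actual resume work happens here ... */
out:
	spin_unlock_irqrestore(&dev->power.lock, flags);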

Best regards,
Tomasz
