Subject: Re: [PATCH 2/3] soc: qcom: rpmh: Update rpm_msgs offset address and add list_del
From: Maulik Shah <mkshah@codeaurora.org>

On 2/5/2020 6:01 AM, Evan Green wrote:
> On Mon, Feb 3, 2020 at 10:14 PM Maulik Shah <mkshah@codeaurora.org> wrote:
>> rpm_msgs are copied in continuously allocated memory during write_batch.
>> Update request pointer to correctly point to designated area for rpm_msgs.
>>
>> While at this also add missing list_del before freeing rpm_msgs.
>>
>> Signed-off-by: Maulik Shah <mkshah@codeaurora.org>
>> ---
>> drivers/soc/qcom/rpmh.c | 9 ++++++---
>> 1 file changed, 6 insertions(+), 3 deletions(-)
>>
>> diff --git a/drivers/soc/qcom/rpmh.c b/drivers/soc/qcom/rpmh.c
>> index c3d6f00..04c7805 100644
>> --- a/drivers/soc/qcom/rpmh.c
>> +++ b/drivers/soc/qcom/rpmh.c
>> @@ -65,7 +65,7 @@ struct cache_req {
>> struct batch_cache_req {
>> struct list_head list;
>> int count;
>> - struct rpmh_request rpm_msgs[];
>> + struct rpmh_request *rpm_msgs;
>> };
>>
>> static struct rpmh_ctrlr *get_rpmh_ctrlr(const struct device *dev)
>> @@ -327,8 +327,10 @@ static void invalidate_batch(struct rpmh_ctrlr *ctrlr)
>> unsigned long flags;
>>
>> spin_lock_irqsave(&ctrlr->cache_lock, flags);
>> - list_for_each_entry_safe(req, tmp, &ctrlr->batch_cache, list)
>> + list_for_each_entry_safe(req, tmp, &ctrlr->batch_cache, list) {
>> + list_del(&req->list);
>> kfree(req);
>> + }
>> INIT_LIST_HEAD(&ctrlr->batch_cache);
> Hm, I don't get it. list_for_each_entry_safe ensures you can traverse
> the list while freeing it behind you. ctrlr->batch_cache is now a
> bogus list, but is re-inited with the lock held. From my reading,
> there doesn't seem to be anything wrong with the current code. Can you
> elaborate on the bug you found?

Hi Evan,

If we don't do list_del, there can be access to already freed memory: even
after the current item is freed via kfree(req), without list_del the next and
prev pointers of its neighbours still point to this freed region. It seems
best to call list_del to ensure that, before this area is freed, no other
item in the list refers to it.
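
For reference, list_del() does roughly the following (simplified from
include/linux/list.h; the real helpers also carry debug checks and
WRITE_ONCE): the neighbours are re-linked around the entry, so nothing left
in the list points at the node we are about to kfree, and the entry's own
pointers are poisoned.

/* Simplified from include/linux/list.h, for illustration only. */
static inline void __list_del(struct list_head *prev, struct list_head *next)
{
        next->prev = prev;
        prev->next = next;
}

static inline void list_del(struct list_head *entry)
{
        /* Re-link the neighbours around the entry being removed... */
        __list_del(entry->prev, entry->next);
        /* ...and poison the entry so a stale user of it faults early. */
        entry->next = LIST_POISON1;
        entry->prev = LIST_POISON2;
}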

>
>> spin_unlock_irqrestore(&ctrlr->cache_lock, flags);
>> }
>> @@ -377,10 +379,11 @@ int rpmh_write_batch(const struct device *dev, enum rpmh_state state,
>> return -ENOMEM;
>>
>> req = ptr;
>> + rpm_msgs = ptr + sizeof(*req);
>> compls = ptr + sizeof(*req) + count * sizeof(*rpm_msgs);
>>
>> req->count = count;
>> - rpm_msgs = req->rpm_msgs;
>> + req->rpm_msgs = rpm_msgs;
> I don't really understand what this is fixing either, can you explain?
The contiguous memory allocated via the call below is for three items:

ptr = kzalloc(sizeof(*req) + count * (sizeof(req->rpm_msgs[0]) +
              sizeof(*compls)), GFP_ATOMIC);

1. one batch_cache_req, followed by
2. count rpmh_request entries, followed by
3. count compls entries

The current code already starts (3), the compls area, at the proper offset in memory:
        compls = ptr + sizeof(*req) + count * sizeof(*rpm_msgs);

However, for (2), the rpmh_request area, it does:

        rpm_msgs = req->rpm_msgs;

Because of this it starts 8 bytes before its designated area and overlaps
with the last member of (1), the batch_cache_req struct. This patch corrects
it as below, to ensure the rpmh_request area uses the correct start address
in memory:

        rpm_msgs = ptr + sizeof(*req);
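
As a sanity check, here is a minimal userspace sketch of the intended layout
(stand-in types and calloc instead of the driver's structs and kzalloc, so
none of these names are the real driver code); it only shows where each of
the three areas is meant to start:

#include <stdio.h>
#include <stdlib.h>

/* Stand-in types, just so offsets can be computed; not the driver's structs. */
struct rpmh_request { int payload; };
struct completion { int done; };

struct batch_cache_req {
        int count;
        struct rpmh_request *rpm_msgs;  /* pointer member, as after this patch */
};

int main(void)
{
        int count = 4;
        struct batch_cache_req *req;
        struct rpmh_request *rpm_msgs;
        struct completion *compls;
        char *ptr;

        /* One allocation holding (1) req, (2) count rpm_msgs, (3) count compls. */
        ptr = calloc(1, sizeof(*req) +
                        count * (sizeof(*rpm_msgs) + sizeof(*compls)));
        if (!ptr)
                return 1;

        req = (struct batch_cache_req *)ptr;
        rpm_msgs = (struct rpmh_request *)(ptr + sizeof(*req));        /* area (2) */
        compls = (struct completion *)(ptr + sizeof(*req) +
                                       count * sizeof(*rpm_msgs));     /* area (3) */

        req->count = count;
        req->rpm_msgs = rpm_msgs;

        printf("req:      %p\n", (void *)req);
        printf("rpm_msgs: %p (req + %zu bytes)\n", (void *)rpm_msgs, sizeof(*req));
        printf("compls:   %p\n", (void *)compls);

        free(ptr);
        return 0;
}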

Hope this explains.

Thanks,

Maulik

--
QUALCOMM INDIA, on behalf of Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum, hosted by The Linux Foundation
