Subject: Re: [PATCH 2/3] soc: qcom: rpmh: Update rpm_msgs offset address and add list_del
From: Maulik Shah <mkshah@codeaurora.org>
Date: 2020-02-12

On 2/5/2020 11:51 PM, Evan Green wrote:
> On Tue, Feb 4, 2020 at 9:12 PM Maulik Shah <mkshah@codeaurora.org> wrote:
>>
>> On 2/5/2020 6:01 AM, Evan Green wrote:
>>> On Mon, Feb 3, 2020 at 10:14 PM Maulik Shah <mkshah@codeaurora.org> wrote:
>>>> rpm_msgs are copied into contiguously allocated memory during write_batch.
>>>> Update the request pointer to correctly point to the designated area for
>>>> rpm_msgs.
>>>>
>>>> While at it, also add the missing list_del before freeing rpm_msgs.
>>>>
>>>> Signed-off-by: Maulik Shah <mkshah@codeaurora.org>
>>>> ---
>>>> drivers/soc/qcom/rpmh.c | 9 ++++++---
>>>> 1 file changed, 6 insertions(+), 3 deletions(-)
>>>>
>>>> diff --git a/drivers/soc/qcom/rpmh.c b/drivers/soc/qcom/rpmh.c
>>>> index c3d6f00..04c7805 100644
>>>> --- a/drivers/soc/qcom/rpmh.c
>>>> +++ b/drivers/soc/qcom/rpmh.c
>>>> @@ -65,7 +65,7 @@ struct cache_req {
>>>>  struct batch_cache_req {
>>>>  	struct list_head list;
>>>>  	int count;
>>>> -	struct rpmh_request rpm_msgs[];
>>>> +	struct rpmh_request *rpm_msgs;
>>>>  };
>>>>
>>>>  static struct rpmh_ctrlr *get_rpmh_ctrlr(const struct device *dev)
>>>> @@ -327,8 +327,10 @@ static void invalidate_batch(struct rpmh_ctrlr *ctrlr)
>>>>  	unsigned long flags;
>>>>
>>>>  	spin_lock_irqsave(&ctrlr->cache_lock, flags);
>>>> -	list_for_each_entry_safe(req, tmp, &ctrlr->batch_cache, list)
>>>> +	list_for_each_entry_safe(req, tmp, &ctrlr->batch_cache, list) {
>>>> +		list_del(&req->list);
>>>>  		kfree(req);
>>>> +	}
>>>>  	INIT_LIST_HEAD(&ctrlr->batch_cache);
>>> Hm, I don't get it. list_for_each_entry_safe ensures you can traverse
>>> the list while freeing it behind you. ctrlr->batch_cache is now a
>>> bogus list, but is re-inited with the lock held. From my reading,
>>> there doesn't seem to be anything wrong with the current code. Can you
>>> elaborate on the bug you found?
>> Hi Evan,
>>
>> When we don't do list_del, there might be access to already freed memory.
>> Even after the current item is freed via kfree(req), without list_del the
>> next and prev items' pointers still point to this freed region.
>> It seems best to call list_del to ensure that, before this area is freed,
>> no other item in the list refers to it.
> I don't think that's true. The "_safe" part of
> list_for_each_entry_safe ensures that we don't touch the ->next member
> of any node after freeing it. So I don't think there's any case where
> we could touch freed memory. The list_del still seems like needless
> code to me.
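
For reference, a minimal userspace sketch (not the kernel's list.h; the list type and for_each_safe macro here are simplified stand-ins) of the point Evan is making: the _safe iterator caches the next node before the loop body runs, so freeing nodes without unlinking them is safe during a full teardown, provided the head is re-initialized before reuse:

#include <stdio.h>
#include <stdlib.h>

struct node {
	struct node *next, *prev;
	int data;
};

/* Simplified analogue of list_for_each_entry_safe(): 'n' caches
 * pos->next before the body can free 'pos'. */
#define for_each_safe(pos, n, head) \
	for (pos = (head)->next, n = pos->next; \
	     pos != (head); \
	     pos = n, n = pos->next)

int main(void)
{
	struct node head = { &head, &head, 0 };	/* empty circular list */
	struct node *pos, *n;

	for (int i = 0; i < 3; i++) {
		struct node *item = malloc(sizeof(*item));
		item->data = i;
		item->next = head.next;	/* insert at front */
		item->prev = &head;
		head.next->prev = item;
		head.next = item;
	}

	/* Teardown: free without unlinking; the freed node's next/prev
	 * are never read again because 'n' was saved first. */
	for_each_safe(pos, n, &head) {
		printf("freeing node %d\n", pos->data);
		free(pos);
	}

	/* head still points at freed memory, so re-init before reuse,
	 * which is what INIT_LIST_HEAD() does in invalidate_batch(). */
	head.next = head.prev = &head;
	return 0;
}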

Hmm, ok. I can drop list_del.

That said, see the reason below for why I included list_del.

>>>>  	spin_unlock_irqrestore(&ctrlr->cache_lock, flags);
>>>>  }
>>>> @@ -377,10 +379,11 @@ int rpmh_write_batch(const struct device *dev, enum rpmh_state state,
>>>>  		return -ENOMEM;
>>>>
>>>>  	req = ptr;
>>>> +	rpm_msgs = ptr + sizeof(*req);
>>>>  	compls = ptr + sizeof(*req) + count * sizeof(*rpm_msgs);
>>>>
>>>>  	req->count = count;
>>>> -	rpm_msgs = req->rpm_msgs;
>>>> +	req->rpm_msgs = rpm_msgs;
>>> I don't really understand what this is fixing either, can you explain?
>> The contiguous memory allocated by the call below holds 3 items:
>>
>> ptr = kzalloc(sizeof(*req) + count * (sizeof(req->rpm_msgs[0]) +
>>               sizeof(*compls)), GFP_ATOMIC);
>>
>> 1. the batch_cache_req struct, followed by
>> 2. 'count' rpmh_request entries, followed by
>> 3. 'count' compls entries
>>
>> The current code starts using (3), compls, from the proper offset in memory:
>>
>> compls = ptr + sizeof(*req) + count * sizeof(*rpm_msgs);
>>
>> However, for (2), rpmh_request, it does:
>>
>> rpm_msgs = req->rpm_msgs;
>>
>> Because of this it starts 8 bytes before its designated area and overlaps
>> with the last entry of (1), the batch_cache_req struct.
>> This patch corrects it via the line below, ensuring rpmh_request uses the
>> correct start address in memory:
>>
>> rpm_msgs = ptr + sizeof(*req);
> I don't follow that either. The empty array declaration (or its
> GCC-specific equivalent, "struct rpmh_request rpm_msgs[0];") is a
> flexible array member, meaning the member itself doesn't take up any
> space in the struct. So, for instance, it holds true that
> &(req->rpm_msgs[0]) == (req + 1). By my reading the existing code is
> correct, and your patch just adds a needless pointer indirection.
> Check out this Wikipedia entry:
>
> https://en.wikipedia.org/wiki/Flexible_array_member
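
For reference, a small userspace sketch (the struct names are simplified stand-ins, not the real rpmh.c definitions) of the layout Evan describes: a flexible array member occupies no space in the struct, so the array begins immediately after the header, whereas the proposed pointer member grows the header instead:

#include <assert.h>
#include <stdio.h>
#include <stdlib.h>

struct rpmh_request { int payload[4]; };	/* stand-in only */

/* Current code: flexible array member, zero bytes in the header. */
struct fam_req {
	int count;
	struct rpmh_request rpm_msgs[];
};

/* Proposed change: explicit pointer, adds sizeof(void *) to the header. */
struct ptr_req {
	int count;
	struct rpmh_request *rpm_msgs;
};

int main(void)
{
	int count = 3;
	struct fam_req *req = malloc(sizeof(*req) +
				     count * sizeof(req->rpm_msgs[0]));

	/* In this layout the array starts right after the struct, i.e.
	 * &req->rpm_msgs[0] == (req + 1), which is Evan's point. */
	assert((void *)&req->rpm_msgs[0] == (void *)(req + 1));

	printf("FAM header: %zu bytes, pointer header: %zu bytes\n",
	       sizeof(struct fam_req), sizeof(struct ptr_req));

	free(req);
	return 0;
}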
Thanks Evan,

Agreed that the code works even without this.

However, from the same wiki:

>> It is common to allocate sizeof(struct) + array_len*sizeof(array
>> element) bytes.

>> This is not wrong, however it may allocate a few more bytes than
>> necessary:

This is what I wanted to convey above: currently it allocates 8 more
bytes than necessary.
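
The over-allocation the wiki mentions comes from tail padding: sizeof(struct) can be larger than the offset at which the flexible array actually begins, so the tight allocation uses offsetof() instead. A short sketch with a hypothetical struct (not the rpmh.c one; whether any padding actually exists for batch_cache_req depends on its members' alignment):

#include <stddef.h>
#include <stdio.h>

/* Hypothetical layout with tail padding, assuming a typical LP64 ABI. */
struct padded {
	void *p;	/* offset 0, 8 bytes */
	int n;		/* offset 8, 4 bytes, then 4 bytes tail padding */
	char data[];	/* begins at offset 12, inside the padding */
};

int main(void)
{
	/* malloc(sizeof(struct padded) + len) reserves the 4 padding
	 * bytes too; malloc(offsetof(struct padded, data) + len) does not. */
	printf("sizeof = %zu, offsetof(data) = %zu\n",
	       sizeof(struct padded), offsetof(struct padded, data));
	return 0;
}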

The reason for the change was a use-after-free reported in the rpmh driver.

After including this change, we have not seen it reported again.

I can drop this change in the new revision if we don't want it.

Thanks,

Maulik

--
QUALCOMM INDIA, on behalf of Qualcomm Innovation Center, Inc. is a member of Code Aurora Forum, hosted by The Linux Foundation
