Subject: Re: [PATCH net-next 3/3] net: stmmac: Convert to phylink and remove phylib logic

On 18/06/2019 20:44, Jon Hunter wrote:
>
> On 18/06/2019 16:20, Jon Hunter wrote:
>>
>> On 18/06/2019 11:18, Jon Hunter wrote:
>>>
>>> On 18/06/2019 10:46, Jose Abreu wrote:
>>>> From: Jon Hunter <jonathanh@nvidia.com>
>>>>
>>>>> I am not certain but I don't believe so. We are using a static IP address
>>>>> and mounting the root file-system via NFS when we see this ...
>>>>
>>>> Can you please add a call to napi_synchronize() before every
>>>> napi_disable() call, like this:
>>>>
>>>> if (queue < rx_queues_cnt) {
>>>> 	napi_synchronize(&ch->rx_napi);
>>>> 	napi_disable(&ch->rx_napi);
>>>> }
>>>>
>>>> if (queue < tx_queues_cnt) {
>>>> 	napi_synchronize(&ch->tx_napi);
>>>> 	napi_disable(&ch->tx_napi);
>>>> }
>>>>
>>>> [ I can send you a patch if you prefer ]
>>>
>>> Yes, I can try this, and for completeness you mean ...
>>>
>>> diff --git a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
>>> index 4ca46289a742..d4a12cb64d8e 100644
>>> --- a/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
>>> +++ b/drivers/net/ethernet/stmicro/stmmac/stmmac_main.c
>>> @@ -146,10 +146,15 @@ static void stmmac_disable_all_queues(struct stmmac_priv *priv)
>>>  	for (queue = 0; queue < maxq; queue++) {
>>>  		struct stmmac_channel *ch = &priv->channel[queue];
>>>  
>>> -		if (queue < rx_queues_cnt)
>>> +		if (queue < rx_queues_cnt) {
>>> +			napi_synchronize(&ch->rx_napi);
>>>  			napi_disable(&ch->rx_napi);
>>> -		if (queue < tx_queues_cnt)
>>> +		}
>>> +
>>> +		if (queue < tx_queues_cnt) {
>>> +			napi_synchronize(&ch->tx_napi);
>>>  			napi_disable(&ch->tx_napi);
>>> +		}
>>>  	}
>>>  }
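
(For reference, folding the above into -next gives roughly the sketch below. This is just my reading of it, not a tested patch: the napi_synchronize() call waits for any poll that is already scheduled or running on that channel to finish before napi_disable() is called. The local variable setup outside the quoted hunk is reproduced from the driver as I understand it, so treat those exact field names as assumptions.)

static void stmmac_disable_all_queues(struct stmmac_priv *priv)
{
	u32 rx_queues_cnt = priv->plat->rx_queues_to_use;
	u32 tx_queues_cnt = priv->plat->tx_queues_to_use;
	u32 maxq = max(rx_queues_cnt, tx_queues_cnt);
	u32 queue;

	for (queue = 0; queue < maxq; queue++) {
		struct stmmac_channel *ch = &priv->channel[queue];

		if (queue < rx_queues_cnt) {
			/* Wait for any scheduled or in-progress RX poll
			 * to complete before disabling this NAPI instance.
			 */
			napi_synchronize(&ch->rx_napi);
			napi_disable(&ch->rx_napi);
		}

		if (queue < tx_queues_cnt) {
			/* Likewise for the TX NAPI instance. */
			napi_synchronize(&ch->tx_napi);
			napi_disable(&ch->tx_napi);
		}
	}
}
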
>>
>> So good news and bad news ...
>>
>> The good news is that the above change does fix the initial crash
>> I am seeing. However, even with this change applied on top of
>> -next, it is still dying somewhere else, so there appears to
>> be a second issue.
>
> Further testing has shown that actually this does NOT resolve the issue
> and I am still seeing the crash. Sorry for the false positive.

Any further feedback? I am still seeing this issue on today's -next.

Thanks
Jon

--
nvpublic
