Subject: Re: [PATCH 7/7] memory: renesas-rpc-if: Reinitialize registers during system resume
On Mon, Jun 27, 2022 at 5:31 PM Geert Uytterhoeven
<geert+renesas@glider.be> wrote:
> During PSCI system suspend, R-Car Gen3 SoCs may be powered down, and
> thus the RPC-IF register state may be lost. Consequently, when using
> the RPC-IF after system resume, data corruption may happen.
>
> Fix this by reinitializing the hardware state during system resume.
> As this requires resuming the RPC-IF core device, this can only be done
> when the device is under active control of the HyperBus or SPI child
> driver.
>
> Signed-off-by: Geert Uytterhoeven <geert+renesas@glider.be>

For v2, I dropped this patch from the series.

Apparently this patch is not needed, nor does it have any impact on
HyperFLASH read operations on Salvator-XS with R-Car M3-N ES1.0 and
Ebisu-4D with R-Car E3 ES1.0.

On Salvator-X with R-Car M3-W ES1.0, there is a different issue causing
random bit flips (which is not solved by the Strobe Timing Adjustment
bit (STRTIM) fix for R-Car M3-W ES1.x in the BSP).

On Salvator-XS with R-Car H3 ES2.0, corruption is seen after s2ram.
TL;DR: while this patch does have an impact on that, RPC operation after
s2ram is still not guaranteed, and the core issue is still not
understood.

---
For testing, I use the following command to read /dev/mtdblock1 (which
contains the BL2 bootloader) and check its checksum 100 times:

time sha256sum -c <(yes $(cat mtdblock1.sha256sum) | head -100)

After boot and s2idle, the success rate is 100%.
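
For completeness, the full cycle looks roughly like the sketch below;
the reference-file generation and the sysfs suspend commands are
illustrative assumptions about the setup, not taken verbatim from my
logs:

# Generate the reference checksum once, from a known-good boot:
sha256sum /dev/mtdblock1 > mtdblock1.sha256sum

# s2idle cycle (wake-up source is board-specific):
echo s2idle > /sys/power/mem_sleep
echo mem > /sys/power/state

# s2ram cycle:
echo deep > /sys/power/mem_sleep
echo mem > /sys/power/state

# 100 checksum runs against the reference:
time sha256sum -c <(yes $(cat mtdblock1.sha256sum) | head -100)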

1. Without this patch, the failure rate after s2ram is ca. 65%.

When splitting and comparing the data read back, some 64 KiB blocks
(though not always the same ones on different runs) turn out to have
been replaced by bad data, containing either data from elsewhere, or
all ones. The latter is probably the same symptom as the former, as the
HyperFLASH does contain blocks of all-ones data.
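
The splitting/comparison itself can be done with standard tools; a
rough sketch, assuming a known-good dump good.bin taken before suspend
(file names are illustrative):

# Read back the partition after resume and list which 64 KiB chunks differ:
dd if=/dev/mtdblock1 of=bad.bin bs=64K
cmp -l good.bin bad.bin | awk '{ print int(($1 - 1) / 65536) }' | uniq -c
# first column: number of differing bytes, second column: 64 KiB chunk index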

The data from elsewhere looks like e.g.:

00046f10 75 75 69 64 5f 64 69 73 75 75 69 64 5f 64 69 73  |uuid_disuuid_dis|
00046f20 64 72 27 0a 20 20 20 20 64 72 27 0a 20 20 20 20  |dr'.    dr'.    |
00046f30 6c 69 67 6e 65 64 20 74 6c 69 67 6e 65 64 20 74  |ligned tligned t|
00046f40 61 64 64 72 32 20 63 6f 61 64 64 72 32 20 63 6f  |addr2 coaddr2 co|
00046f50 0a 6d 6d 63 20 72 70 6d 0a 6d 6d 63 20 72 70 6d  |.mmc rpm.mmc rpm|
00046f60 20 6c 61 72 67 65 72 0a 20 6c 61 72 67 65 72 0a  | larger. larger.|
00046f70 6d 32 00 63 6d 6d 31 00 6d 32 00 63 6d 6d 31 00  |m2.cmm1.m2.cmm1.|
00046f80 32 5f 64 61 74 61 34 00 32 5f 64 61 74 61 34 00  |2_data4.2_data4.|
00046f90 31 00 47 50 5f 35 5f 31 31 00 47 50 5f 35 5f 31  |1.GP_5_11.GP_5_1|
00046fa0 65 78 63 65 65 64 65 64 65 78 63 65 65 64 65 64  |exceededexceeded|
00046fb0 30 34 2d 72 63 34 2d 30 30 34 2d 72 63 34 2d 30  |04-rc4-004-rc4-0|

which seems to originate from two copies of the first 8 bytes of each of
the following lines, found elsewhere in the HyperFLASH:

006f1000 75 75 69 64 5f 64 69 73 6b 3d 00 6e 61 6d 65 3d  |uuid_disk=.name=|
...
006f2000 64 72 27 0a 20 20 20 20 20 20 70 61 73 73 69 6e  |dr'.      passin|
...
006f3000 6c 69 67 6e 65 64 20 74 6f 0a 20 20 20 20 20 20  |ligned to.      |
...
006f4000 61 64 64 72 32 20 63 6f 75 6e 74 00 6d 65 6d 6f  |addr2 count.memo|
...
006f5000 0a 6d 6d 63 20 72 70 6d 62 20 6b 65 79 20 3c 61  |.mmc rpmb key <a|
...
006f6000 20 6c 61 72 67 65 72 0a 00 75 6e 7a 69 70 00 75  | larger..unzip.u|
...
006f7000 6d 32 00 63 6d 6d 31 00 63 6d 6d 30 00 63 73 69  |m2.cmm1.cmm0.csi|
...
006f8000 32 5f 64 61 74 61 34 00 73 64 68 69 32 5f 64 61  |2_data4.sdhi2_da|
...
006f9000 31 00 47 50 5f 35 5f 31 32 00 47 50 5f 35 5f 31  |1.GP_5_12.GP_5_1|
...
006fa000 65 78 63 65 65 64 65 64 00 0a 25 73 3b 20 73 74  |exceeded..%s; st|
...
006fb000 30 34 2d 72 63 34 2d 30 30 30 38 32 2d 67 35 34  |04-rc4-00082-g54|

For both hexdumps above, the offsets are absolute FLASH offsets, not
offsets relative to the partition. Still, there is some similarity
between the two sets of offsets (e.g. 0x46f10 vs. 0x6f1000).

2. With this patch, the failure rate of 100 reads after s2ram is either
0%, or 100%. The same is true after subsequent s2ram operations.

Unlike before this patch, in case of corruption the data read back is
always the same: either the good data is read back, or the exact same
bad data. In case of bad data, some 64 KiB blocks have been replaced by
blocks containing either data from elsewhere, or all ones again...

Gr{oetje,eeting}s,

Geert

--
Geert Uytterhoeven -- There's lots of Linux beyond ia32 -- geert@linux-m68k.org

In personal conversations with technical people, I call myself a hacker. But
when I'm talking to journalists I just say "programmer" or something like that.
-- Linus Torvalds
