Date: Thu, 6 Apr 2023 10:42:50 +0800
From: Chen Yu <yu.c.chen@intel.com>
Subject: Re: [PATCH] PM: hibernate: Do not get block device exclusively in test_resume mode
Hi Pavan,

On 2023-04-05 at 12:30:00 +0530, Pavan Kondeti wrote:
> On Sun, Apr 02, 2023 at 12:55:40AM +0800, Chen Yu wrote:
> > The system refused to do a test_resume because it found that the
> > swap device had already been taken by someone else. Specifically,
> > the swsusp_check()->blkdev_get_by_dev(FMODE_EXCL) call is supposed
> > to do this check.
> >
> > Steps to reproduce:
> > dd if=/dev/zero of=/swapfile bs=$(cat /proc/meminfo |
> >    awk '/MemTotal/ {print $2}') count=1024 conv=notrunc
> > mkswap /swapfile
> > swapon /swapfile
> > swap-offset /swapfile
> > echo 34816 > /sys/power/resume_offset
> > echo test_resume > /sys/power/disk
> > echo disk > /sys/power/state
> >
> > PM: Using 3 thread(s) for compression
> > PM: Compressing and saving image data (293150 pages)...
> > PM: Image saving progress: 0%
> > PM: Image saving progress: 10%
> > ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
> > ata1.00: configured for UDMA/100
> > ata2: SATA link down (SStatus 0 SControl 300)
> > ata5: SATA link down (SStatus 0 SControl 300)
> > ata6: SATA link down (SStatus 0 SControl 300)
> > ata3: SATA link down (SStatus 0 SControl 300)
> > ata4: SATA link down (SStatus 0 SControl 300)
> > PM: Image saving progress: 20%
> > PM: Image saving progress: 30%
> > PM: Image saving progress: 40%
> > PM: Image saving progress: 50%
> > pcieport 0000:00:02.5: pciehp: Slot(0-5): No device found
> > PM: Image saving progress: 60%
> > PM: Image saving progress: 70%
> > PM: Image saving progress: 80%
> > PM: Image saving progress: 90%
> > PM: Image saving done
> > PM: hibernation: Wrote 1172600 kbytes in 2.70 seconds (434.29 MB/s)
> > PM: S|
> > PM: hibernation: Basic memory bitmaps freed
> > PM: Image not found (code -16)
> >
> > This is because when the swapfile is used as the hibernation storage,
> > the block device on which the swapfile resides has already been
> > mounted by the OS distribution (usually as the rootfs). This is not
> > an issue for normal hibernation, because software_resume()->
> > swsusp_check() runs before the block device (rootfs) is mounted.
> > But it is a problem for the test_resume mode, because by the time
> > test_resume happens, the block device has already been mounted.
> >
> > Thus remove FMODE_EXCL for the test_resume mode. This is not a
> > problem because at the test_resume stage the processes have already
> > been frozen, and the race condition described in
> > commit 39fbef4b0f77 ("PM: hibernate: Get block device exclusively
> > in swsusp_check()") is unlikely to happen.
> >
> > Fixes: 39fbef4b0f77 ("PM: hibernate: Get block device exclusively in swsusp_check()")
> > Reported-by: Yifan Li <yifan2.li@intel.com>
> > Signed-off-by: Chen Yu <yu.c.chen@intel.com>
> >
> > +int swsusp_check(bool safe)
> >  {
> > +	fmode_t mode = FMODE_READ;
> >  	int error;
> >  	void *holder;
> >
> > +	if (!safe)
> > +		mode |= FMODE_EXCL;
> > +
> >  	hib_resume_bdev = blkdev_get_by_dev(swsusp_resume_device,
> > -					    FMODE_READ | FMODE_EXCL, &holder);
> > +					    mode, &holder);
> >  	if (!IS_ERR(hib_resume_bdev)) {
> >  		set_blocksize(hib_resume_bdev, PAGE_SIZE);
> >  		clear_page(swsusp_header);
> > @@ -1547,7 +1551,7 @@ int swsusp_check(void)
> >
> > put:
> >  	if (error)
> > -		blkdev_put(hib_resume_bdev, FMODE_READ | FMODE_EXCL);
> > +		blkdev_put(hib_resume_bdev, mode);
> >  	else
> >  		pr_debug("Image signature found, resuming\n");
> >  	} else {
>
> The patch looks good to me and it works. I have just one
> question/comment.
>
> What is "safe" here? Because I worked on this problem [1], so I
> understood it,
> but it is not very clear / explicit.
>
I see.

> One approach I thought would be to make the codepaths aware of
> "test_resume" via a global variable called "snapshot_testing",
> similar to freezer_test_done. If snapshot_testing is true, don't
> use the exclusive flags.
>
This looks reasonable. With this change, we don't have to add the "safe"
parameter to swsusp_check() and load_image_and_restore().
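For illustration, a rough sketch of that alternative (a hypothetical
sketch only: the variable name snapshot_testing is taken from your
suggestion, its placement is assumed, and check_image_header() is a
stand-in for the real signature checks in swsusp_check()):

/* kernel/power/hibernate.c -- hypothetical sketch, not a posted patch */
bool snapshot_testing;	/* set when "test_resume" is written to /sys/power/disk */

/* kernel/power/swap.c */
int swsusp_check(void)
{
	fmode_t mode = FMODE_READ;
	int error;
	void *holder;

	/*
	 * In test_resume mode the resume device is typically still
	 * mounted (e.g. as the rootfs) and all tasks are frozen, so
	 * the race fixed by commit 39fbef4b0f77 cannot occur; skip
	 * the exclusive open in that case.
	 */
	if (!snapshot_testing)
		mode |= FMODE_EXCL;

	hib_resume_bdev = blkdev_get_by_dev(swsusp_resume_device,
					    mode, &holder);
	if (IS_ERR(hib_resume_bdev))
		return PTR_ERR(hib_resume_bdev);

	error = check_image_header();	/* stand-in for the real checks */
	if (error)
		blkdev_put(hib_resume_bdev, mode);
	return error;
}

With a global flag like this, load_image_and_restore() can test the
same variable instead of having the parameter threaded through both
call chains.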
thanks,
Chenyu

> Thanks,
> Pavan