Subject: [PATCH] mm/filemap: do not allocate cache pages beyond end of file at read
From: Konstantin Khlebnikov <khlebnikov@yandex-team.ru>
Date: 2019-10-28
The page cache may contain pages beyond the end of file: during a write,
or when a read races with a truncate. But generic_file_buffered_read()
always allocates unneeded pages beyond EOF if somebody reads there, and
one extra page at the end if the file size is page-aligned.
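
To illustrate, the wasted allocation is easy to trigger from userspace:
any read at or past EOF walks the slow path. A minimal sketch (the test
path is hypothetical; assumes 4 KiB pages):

/* eof-read.c: pread() starting at i_size returns 0 (EOF), but before
 * this patch the slow path in generic_file_buffered_read() still
 * allocated and inserted a page cache page for that out-of-range index.
 */
#include <fcntl.h>
#include <stdio.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
	char buf[4096];
	struct stat st;
	int fd = open("/tmp/testfile", O_RDONLY);	/* hypothetical path */

	if (fd < 0 || fstat(fd, &st) < 0)
		return 1;

	/* Read starting exactly at i_size: no data is copied back. */
	ssize_t ret = pread(fd, buf, sizeof(buf), st.st_size);
	printf("pread at EOF returned %zd\n", ret);

	close(fd);
	return 0;
}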

Function generic_file_buffered_read() calls page_cache_sync_readahead()
if the page is not found in the cache, and then does another lookup.
Readahead checks the file size in __do_page_cache_readahead() before
allocating pages, so it never allocates beyond EOF. After that,
generic_file_buffered_read() falls back to the slow path and allocates
a page for ->readpage() without checking the file size.
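
For reference, a simplified sketch of the bound that readahead enforces;
the helper name readahead_past_eof() and the 4 KiB page size are mine,
not the kernel's literal code:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define PAGE_SHIFT 12			/* assuming 4 KiB pages */

typedef uint64_t pgoff_t;
typedef int64_t loff_t;

/* Simplified version of the EOF bound in __do_page_cache_readahead():
 * the last page backed by the file is (isize - 1) >> PAGE_SHIFT, so
 * readahead refuses any index past it, and everything for an empty file.
 */
static bool readahead_past_eof(pgoff_t index, loff_t isize)
{
	if (isize == 0)
		return true;	/* empty file: nothing to read ahead */

	return index > (pgoff_t)((isize - 1) >> PAGE_SHIFT);
}

int main(void)
{
	/* Page-aligned 4096-byte file: only index 0 is backed by data. */
	printf("index 0 past EOF: %d\n", readahead_past_eof(0, 4096));
	printf("index 1 past EOF: %d\n", readahead_past_eof(1, 4096));
	return 0;
}

With a page-aligned 4096-byte file, index 0 is allowed and index 1 is
refused, which is exactly the extra page the slow path still allocated.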

This patch checks the file size before allocating a page for
->readpage(). For example, with 4 KiB pages and a file of exactly 4096
bytes, a read at offset 4096 has index 1; (loff_t)1 << PAGE_SHIFT ==
4096 >= i_size, so the extra page at the page-aligned EOF is no longer
allocated.

Signed-off-by: Konstantin Khlebnikov <khlebnikov@yandex-team.ru>
---
 mm/filemap.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/mm/filemap.c b/mm/filemap.c
index 85b7d087eb45..92abf5f348a9 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -2225,6 +2225,10 @@ static ssize_t generic_file_buffered_read(struct kiocb *iocb,
 		goto out;
 
 no_cached_page:
+		/* Do not allocate cache pages beyond end of file. */
+		if (((loff_t)index << PAGE_SHIFT) >= i_size_read(inode))
+			goto out;
+
 		/*
 		 * Ok, it wasn't cached, so we need to create a new
 		 * page..
 		 */