From: "Kirill A. Shutemov" <>
Subject: Re: [PATCHv4 37/39] thp: handle write-protect exception to file-backed huge pages
Date: Thu, 23 May 2013 15:33:18 +0300 (EEST)
Hillf Danton wrote:
> On Thu, May 23, 2013 at 8:08 PM, Kirill A. Shutemov
> <kirill.shutemov@linux.intel.com> wrote:
> > Hillf Danton wrote:
> >> On Sun, May 12, 2013 at 9:23 AM, Kirill A. Shutemov
> >> <kirill.shutemov@linux.intel.com> wrote:
> >> > @@ -1120,7 +1119,7 @@ int do_huge_pmd_wp_page(struct mm_struct *mm, struct vm_area_struct *vma,
> >> >
> >> >         page = pmd_page(orig_pmd);
> >> >         VM_BUG_ON(!PageCompound(page) || !PageHead(page));
> >> > -       if (page_mapcount(page) == 1) {
> >> > +       if (PageAnon(page) && page_mapcount(page) == 1) {
> >>
> >> Could we avoid copying huge page if
> >> no-one else is using it, no matter anon?
> >
> > No. The page is still in page cache and can be accessed later.
> > We could isolate the page from page cache, but I'm not sure whether it's
> > a good idea.
> >
> Hugetlb tries to avoid copying the page.
>
> 	/* If no-one else is actually using this page, avoid the copy
> 	 * and just make the page writable */
> 	avoidcopy = (page_mapcount(old_page) == 1);
It makes sense for hugetlb, since it is RAM-backed only.
Currently, the project supports only ramfs, but I hope we will bring storage-backed filesystems later. For them it would be much cheaper to copy the page than to bring it back later from storage.
And one more point: we must never reuse dirty pages, since that would lead to data loss. And ramfs pages are always dirty.
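To spell out the resulting policy, here is a minimal sketch of the reuse check as it looks after this patch. Only the condition itself comes from the hunk quoted above; the helper name and the comments are mine, just for illustration:

	#include <linux/mm.h>	/* PageAnon(), page_mapcount() */

	/*
	 * Sketch only: on a write-protect fault, reuse the huge page in
	 * place only if it is anonymous and mapped exactly once.  A
	 * file-backed (ramfs) page stays in the page cache, can be reached
	 * again through the mapping, and is always dirty there, so it must
	 * take the copy path instead of being reused.
	 */
	static bool huge_page_reusable(struct page *page)
	{
		return PageAnon(page) && page_mapcount(page) == 1;
	}

For ramfs pages PageAnon() is false, so they always fall through to the copy path.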
-- 
 Kirill A. Shutemov