Date: Wed, 16 Sep 2020
From: Mikulas Patocka <mpatocka@redhat.com>
Subject: [PATCH] pmem: fix __copy_user_flushcache


On Wed, 16 Sep 2020, Dan Williams wrote:

> On Wed, Sep 16, 2020 at 10:24 AM Mikulas Patocka <mpatocka@redhat.com> wrote:
> >
> >
> >
> > On Wed, 16 Sep 2020, Dan Williams wrote:
> >
> > > On Wed, Sep 16, 2020 at 3:57 AM Mikulas Patocka <mpatocka@redhat.com> wrote:
> > > >
> > > >
> > > >
> > > > I'm submitting this patch that adds the required exports (so that we can
> > > > use __copy_from_user_flushcache on x86, arm64 and powerpc). Please queue
> > > > it for the next merge window.
> > >
> > > Why? This should go with the first user, and it's not clear that it
> > > needs to be relative to the current dax_operations export scheme.
> >
> > Until nvfs is included in the kernel, I need to distribute it as a
> > module, so exporting these symbols would make my maintenance easier. But
> > if you don't want to export them now, no problem, I can just copy
> > __copy_user_flushcache from the kernel into the module.
>
> That sounds like a better plan than exporting symbols with no in-kernel consumer.

BTW, this function is buggy; I'm submitting a patch for it below.



From: Mikulas Patocka <mpatocka@redhat.com>

If we copy less than 8 bytes and the destination crosses a cache line
boundary, __copy_user_flushcache would clean (write back) only the first
cache line. This patch makes it clean the second cache line as well.

Signed-off-by: Mikulas Patocka <mpatocka@redhat.com>
Cc: stable@vger.kernel.org

---
arch/x86/lib/usercopy_64.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)

Index: linux-2.6/arch/x86/lib/usercopy_64.c
===================================================================
--- linux-2.6.orig/arch/x86/lib/usercopy_64.c 2020-09-05 10:01:27.000000000 +0200
+++ linux-2.6/arch/x86/lib/usercopy_64.c 2020-09-16 20:48:31.000000000 +0200
@@ -120,7 +120,7 @@ long __copy_user_flushcache(void *dst, c
 	 */
 	if (size < 8) {
 		if (!IS_ALIGNED(dest, 4) || size != 4)
-			clean_cache_range(dst, 1);
+			clean_cache_range(dst, size);
 	} else {
 		if (!IS_ALIGNED(dest, 8)) {
 			dest = ALIGN(dest, boot_cpu_data.x86_clflush_size);
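For context, the sketch below shows how clean_cache_range() in the same
file works; it is paraphrased from memory rather than quoted verbatim. It
rounds the start address down to a cache-line boundary and issues clwb for
every line until it passes addr + size.

static void clean_cache_range(void *addr, size_t size)
{
	u16 clflush_size = boot_cpu_data.x86_clflush_size;
	unsigned long mask = clflush_size - 1;
	void *vend = addr + size;
	void *p;

	/* Write back every cache line that [addr, addr + size) touches. */
	for (p = (void *)((unsigned long)addr & ~mask); p < vend;
	     p += clflush_size)
		clwb(p);
}

With size == 1, vend is addr + 1, so the loop executes exactly once and
only the line containing addr is written back; passing the real copy size
extends vend past the line boundary, so the second line gets cleaned too.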