Subject: Re: [PATCH V8 38/44] memremap_pages: Define pgmap_mk_{readwrite|noaccess}() calls
On Fri, Feb 04, 2022 at 10:35:59AM -0800, Dan Williams wrote:
> On Thu, Jan 27, 2022 at 9:55 AM <ira.weiny@intel.com> wrote:
> >

[snip]

I'll respond to the other comments later but wanted to address the idea below.

> > diff --git a/include/linux/sched.h b/include/linux/sched.h
> > index f5b2be39a78c..5020ed7e67b7 100644
> > --- a/include/linux/sched.h
> > +++ b/include/linux/sched.h
> > @@ -1492,6 +1492,13 @@ struct task_struct {
> >  	struct callback_head l1d_flush_kill;
> >  #endif
> >
> > +#ifdef CONFIG_DEVMAP_ACCESS_PROTECTION
> > +	/*
> > +	 * NOTE: pgmap_prot_count is modified within a single thread of
> > +	 * execution. So it does not need to be atomic_t.
> > +	 */
> > +	u32 pgmap_prot_count;
> > +#endif
>
> It's not at all clear why the task struct needs to be burdened with
> this accounting. Given that a devmap instance is needed to manage page
> protections, why not move the nested protection tracking to a percpu
> variable relative to an @pgmap arg? Something like:
>
> void __pgmap_mk_readwrite(struct dev_pagemap *pgmap)
> {
> 	migrate_disable();
> 	preempt_disable();

Why burden threads like this? kmap_local_page() is perfectly able to migrate
or be preempted.

I think this is way too restrictive.
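
To illustrate (a hypothetical caller, not something in this series): with the
per-task counter, nested calls on the same thread only touch PKS at the
outermost level, and nothing in between requires the task to stay on one CPU:

	/* Hypothetical caller, for illustration only */
	__pgmap_mk_readwrite(pgmap);	/* count 0 -> 1: pks_mk_readwrite() */
	__pgmap_mk_readwrite(pgmap);	/* count 1 -> 2: nested, no PKS update */
	/* ... access pgmap pages; the thread is free to be preempted or migrate ... */
	__pgmap_mk_noaccess(pgmap);	/* count 2 -> 1: no PKS update */
	__pgmap_mk_noaccess(pgmap);	/* count 1 -> 0: pks_mk_noaccess() */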

> 	if (this_cpu_add_return(pgmap->pgmap_prot_count, 1) == 1)
> 		pks_mk_readwrite(PKS_KEY_PGMAP_PROTECTION);
> }
> EXPORT_SYMBOL_GPL(__pgmap_mk_readwrite);
>
> void __pgmap_mk_noaccess(struct dev_pagemap *pgmap)
> {
> 	if (!this_cpu_sub_return(pgmap->pgmap_prot_count, 1))
> 		pks_mk_noaccess(PKS_KEY_PGMAP_PROTECTION);
> 	preempt_enable();
> 	migrate_enable();
> }
> EXPORT_SYMBOL_GPL(__pgmap_mk_noaccess);
>
> The naming, which I had a hand in, is not aging well. When I see "mk"
> I expect it to be building some value like a page table entry that
> will be installed later. These helpers are directly enabling and
> disabling access and are meant to be called symmetrically. So I would
> expect symmetric names like:
>
> pgmap_enable_access()
> pgmap_disable_access()

Names are easily changed. I'll look at changing the names.

Ira

>
>
> >  	/*
> >  	 * New fields for task_struct should be added above here, so that
> >  	 * they are included in the randomized portion of task_struct.
> > diff --git a/init/init_task.c b/init/init_task.c
> > index 73cc8f03511a..948b32cf8139 100644
> > --- a/init/init_task.c
> > +++ b/init/init_task.c
> > @@ -209,6 +209,9 @@ struct task_struct init_task
> >  #ifdef CONFIG_SECCOMP_FILTER
> >  	.seccomp = { .filter_count = ATOMIC_INIT(0) },
> >  #endif
> > +#ifdef CONFIG_DEVMAP_ACCESS_PROTECTION
> > +	.pgmap_prot_count = 0,
> > +#endif
> >  };
> > EXPORT_SYMBOL(init_task);
> >
> > diff --git a/mm/memremap.c b/mm/memremap.c
> > index d3e6f328a711..b75c4f778c59 100644
> > --- a/mm/memremap.c
> > +++ b/mm/memremap.c
> > @@ -96,6 +96,20 @@ static void devmap_protection_disable(void)
> >  	static_branch_dec(&dev_pgmap_protection_static_key);
> >  }
> >
> > +void __pgmap_mk_readwrite(struct dev_pagemap *pgmap)
> > +{
> > +	if (!current->pgmap_prot_count++)
> > +		pks_mk_readwrite(PKS_KEY_PGMAP_PROTECTION);
> > +}
> > +EXPORT_SYMBOL_GPL(__pgmap_mk_readwrite);
> > +
> > +void __pgmap_mk_noaccess(struct dev_pagemap *pgmap)
> > +{
> > +	if (!--current->pgmap_prot_count)
> > +		pks_mk_noaccess(PKS_KEY_PGMAP_PROTECTION);
> > +}
> > +EXPORT_SYMBOL_GPL(__pgmap_mk_noaccess);
> > +
> >  bool pgmap_protection_available(void)
> >  {
> >  	return pks_available();
> > --
> > 2.31.1
> >
