Subject: Re: [EXT] Re: [PATCH v5 9/9] task_isolation: kick_all_cpus_sync: don't kick isolated cpus

On Mon, Nov 23, 2020 at 10:39:34PM +0000, Alex Belits wrote:
>
> On Mon, 2020-11-23 at 23:29 +0100, Frederic Weisbecker wrote:
> >
> > On Mon, Nov 23, 2020 at 05:58:42PM +0000, Alex Belits wrote:
> > > From: Yuri Norov <ynorov@marvell.com>
> > >
> > > Make sure that kick_all_cpus_sync() does not call CPUs that are
> > > running isolated tasks.
> > >
> > > Signed-off-by: Yuri Norov <ynorov@marvell.com>
> > > [abelits@marvell.com: use safe task_isolation_cpumask() implementation]
> > > Signed-off-by: Alex Belits <abelits@marvell.com>
> > > ---
> > >  kernel/smp.c | 14 +++++++++++++-
> > >  1 file changed, 13 insertions(+), 1 deletion(-)
> > >
> > > diff --git a/kernel/smp.c b/kernel/smp.c
> > > index 4d17501433be..b2faecf58ed0 100644
> > > --- a/kernel/smp.c
> > > +++ b/kernel/smp.c
> > > @@ -932,9 +932,21 @@ static void do_nothing(void *unused)
> > >   */
> > >  void kick_all_cpus_sync(void)
> > >  {
> > > +	struct cpumask mask;
> > > +
> > >  	/* Make sure the change is visible before we kick the cpus */
> > >  	smp_mb();
> > > -	smp_call_function(do_nothing, NULL, 1);
> > > +
> > > +	preempt_disable();
> > > +#ifdef CONFIG_TASK_ISOLATION
> > > +	cpumask_clear(&mask);
> > > +	task_isolation_cpumask(&mask);
> > > +	cpumask_complement(&mask, &mask);
> > > +#else
> > > +	cpumask_setall(&mask);
> > > +#endif
> > > +	smp_call_function_many(&mask, do_nothing, NULL, 1);
> > > +	preempt_enable();
> >
> > Same comment about IPIs here.
>
> This is different from timers. The original design was based on the
> idea that every CPU should be able to enter the kernel at any time and
> run kernel code with no additional preparation. Then the only solution
> is to always do a full broadcast and require all CPUs to process it.
>
> What I am trying to introduce is the idea of a CPU that is not likely
> to run kernel code anytime soon, and can afford to go through an
> additional synchronization procedure on the next entry into the
> kernel. The synchronization is not skipped; it simply happens later,
> early in the kernel entry code.

Ah I see, this is ordered that way:

ll_isol_flags = ISOLATED

         CPU 0                                 CPU 1
    ------------------                    -----------------
                                          // kernel entry
    data_to_sync = 1                      ll_isol_flags = ISOLATED_BROKEN
    smp_mb()                              smp_mb()
    if ll_isol_flags(CPU 1) == ISOLATED   READ data_to_sync
        smp_call(CPU 1)
You should document that, i.e. explain why what you're doing is safe.

Beware, though: this only works if the data to sync in question doesn't
need to be visible in the entry code before task_isolation_kernel_enter().
You need to audit all the callers of kick_all_cpus_sync().
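
Concretely, a caller is only safe if it publishes its data before the kick,
and nothing on the isolated CPU's entry path consumes that data before
task_isolation_kernel_enter() runs. A sketch of the pattern, with a
hypothetical example of the data being synchronized:

	/* Hypothetical caller of kick_all_cpus_sync(), for illustration only. */
	static int some_shared_state;

	static void update_and_sync(void)
	{
		/* Publish the new state before kicking the other CPUs... */
		WRITE_ONCE(some_shared_state, 1);

		/*
		 * ...so the smp_mb() in kick_all_cpus_sync() orders the store
		 * against the isolation-flag checks. Isolated CPUs are not
		 * IPI'd; they observe the new state only after the smp_mb()
		 * in task_isolation_kernel_enter(), so no code earlier on
		 * their entry path may depend on it.
		 */
		kick_all_cpus_sync();
	}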
