Subject: Re: [RFC PATCH 03/11] Drivers: hv: vmbus: Replace the per-CPU channel lists with a global array of channels
On Thu, Mar 26, 2020 at 03:31:20PM +0100, Vitaly Kuznetsov wrote:
> "Andrea Parri (Microsoft)" <parri.andrea@gmail.com> writes:
>
> > When Hyper-V sends an interrupt to the guest, the guest has to figure
> > out which channel the interrupt is associated with. Hyper-V sets a bit
> > in a memory page that is shared with the guest, indicating a particular
> > "relid" that the interrupt is associated with. The current Linux code
> > then uses a set of per-CPU linked lists to map a given "relid" to a
> > pointer to a channel structure.
> >
> > This design introduces a synchronization problem if the CPU that Hyper-V
> > will interrupt for a certain channel is changed. If the interrupt comes
> > on the "old CPU" and the channel was already moved to the per-CPU list
> > of the "new CPU", then the relid -> channel mapping will fail and the
> > interrupt is dropped. Similarly, if the interrupt comes on the new CPU
> > but the channel was not moved to the per-CPU list of the new CPU, then
> > the mapping will fail and the interrupt is dropped.
> >
> > Relids are integers ranging from 0 to 2047. The mapping from relids to
> > channel structures can be done by setting up an array with 2048 entries,
> > each entry being a pointer to a channel structure (hence total size ~16K
> > bytes, which is not a problem). The array is global, so there are no
> > per-CPU linked lists to update. The array can be searched and updated
> > by simply loading and storing the array at the specified index. With no
> > per-CPU data structures, the above mentioned synchronization problem is
> > avoided and the relid2channel() function gets simpler.
> >
> > Suggested-by: Michael Kelley <mikelley@microsoft.com>
> > Signed-off-by: Andrea Parri (Microsoft) <parri.andrea@gmail.com>
> > ---
> > drivers/hv/channel_mgmt.c | 158 ++++++++++++++++++++++----------------
> > drivers/hv/connection.c | 38 +++------
> > drivers/hv/hv.c | 2 -
> > drivers/hv/hyperv_vmbus.h | 14 ++--
> > drivers/hv/vmbus_drv.c | 48 +++++++-----
> > include/linux/hyperv.h | 5 --
> > 6 files changed, 139 insertions(+), 126 deletions(-)
> >
> > diff --git a/drivers/hv/channel_mgmt.c b/drivers/hv/channel_mgmt.c
> > index 1191f3d76d111..9b1449c839575 100644
> > --- a/drivers/hv/channel_mgmt.c
> > +++ b/drivers/hv/channel_mgmt.c
> > @@ -319,7 +319,6 @@ static struct vmbus_channel *alloc_channel(void)
> > init_completion(&channel->rescind_event);
> >
> > INIT_LIST_HEAD(&channel->sc_list);
> > - INIT_LIST_HEAD(&channel->percpu_list);
> >
> > tasklet_init(&channel->callback_event,
> > vmbus_on_event, (unsigned long)channel);
> > @@ -340,23 +339,28 @@ static void free_channel(struct vmbus_channel *channel)
> > kobject_put(&channel->kobj);
> > }
> >
> > -static void percpu_channel_enq(void *arg)
> > +void vmbus_channel_map_relid(struct vmbus_channel *channel)
> > {
> > - struct vmbus_channel *channel = arg;
> > - struct hv_per_cpu_context *hv_cpu
> > - = this_cpu_ptr(hv_context.cpu_context);
> > -
> > - list_add_tail_rcu(&channel->percpu_list, &hv_cpu->chan_list);
> > + if (WARN_ON(channel->offermsg.child_relid >= MAX_CHANNEL_RELIDS))
> > + return;
> > + /*
> > + * Pairs with the READ_ONCE() in vmbus_chan_sched(). Guarantees
> > + * that vmbus_chan_sched() will find up-to-date data.
> > + */
> > + smp_store_release(
> > + &vmbus_connection.channels[channel->offermsg.child_relid],
> > + channel);
> > }
> >
> > -static void percpu_channel_deq(void *arg)
> > +void vmbus_channel_unmap_relid(struct vmbus_channel *channel)
> > {
> > - struct vmbus_channel *channel = arg;
> > -
> > - list_del_rcu(&channel->percpu_list);
> > + if (WARN_ON(channel->offermsg.child_relid >= MAX_CHANNEL_RELIDS))
> > + return;
> > + WRITE_ONCE(
> > + vmbus_connection.channels[channel->offermsg.child_relid],
> > + NULL);
>
> I don't think this smp_store_release()/WRITE_ONCE() fanciness gives you
> anything. Basically, without proper synchronization with a lock there is
> no such construction which will give you any additional guarantee on
> top of just doing X=1. E.g. smp_store_release() is just
> barrier();
> *p = v;
> if I'm not mistaken. Nobody tells you when *some other CPU* will see the
> update - 'eventually' is your best guess. Here, you're only setting one
> pointer.
>
> Percpu structures have an advantage: we (almost) never access them from
> different CPUs, so just doing updates atomically (and writing a 64-bit
> pointer on x86_64 is atomic) is OK.
>
> I haven't looked at all possible scenarios but I'd suggest protecting
> this array with a spinlock (in case we can have simultaneous accesses
> from different CPUs and care about the result, of course).

The smp_store_release()+READ_ONCE() pair should guarantee that any stores
to the channel fields performed before (in program order) the "mapping"
of the channel are visible to the CPU which observes that mapping; this
guarantee is expected to hold on all architectures.
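
To make the intended guarantee concrete, here is a minimal userspace
sketch of the pattern, with C11 atomics standing in for the kernel
primitives; the names (fake_channel, chan_map, map_relid, on_interrupt)
are made up for the example and are not the driver's code.  The release
store publishes the pointer only after the plain stores that initialize
the structure, so a reader which observes the pointer also observes
those stores (the acquire load over-approximates READ_ONCE() plus the
address dependency on the subsequent dereference):

#include <stdatomic.h>
#include <stdio.h>

#define MAX_RELIDS	2048

struct fake_channel {
	int relid;
	int fields;	/* stands in for the channel fields */
};

static _Atomic(struct fake_channel *) chan_map[MAX_RELIDS];

/* Writer, in the role of vmbus_channel_map_relid(): init, then publish. */
static void map_relid(struct fake_channel *ch)
{
	ch->fields = 42;	/* plain stores to the channel first ... */
	atomic_store_explicit(&chan_map[ch->relid], ch,
			      memory_order_release);	/* ... then publish */
}

/* Reader, in the role of vmbus_chan_sched(): look up and dereference. */
static void on_interrupt(int relid)
{
	struct fake_channel *ch =
		atomic_load_explicit(&chan_map[relid], memory_order_acquire);

	if (ch)	/* if ch is observed, ch->fields == 42 is observed too */
		printf("relid %d -> fields %d\n", relid, ch->fields);
}

int main(void)
{
	static struct fake_channel ch = { .relid = 7 };

	map_relid(&ch);
	on_interrupt(7);
	return 0;
}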

Notice that this approach follows the current/upstream code, cf. the
rcu_assign_pointer() in list_add_tail_rcu(), and notice that (both before
and after this series) vmbus_chan_sched() performs the relid-to-channel
lookup without holding any mutex/lock.
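
For comparison, the same userspace model applied to the list-based
scheme illustrates that point: the release store below plays the role
of the rcu_assign_pointer() inside list_add_tail_rcu(), and the
lockless walk plays the role of the list traversal in
vmbus_chan_sched().  Again, the names (enq_channel, find_channel,
chan_list) are made up for the example:

#include <stdatomic.h>
#include <stdio.h>

struct fake_channel {
	int relid;
	_Atomic(struct fake_channel *) next;
};

static _Atomic(struct fake_channel *) chan_list;	/* list head */

/* Writer, in the role of percpu_channel_enq(). */
static void enq_channel(struct fake_channel *ch)
{
	atomic_store_explicit(&ch->next,
		atomic_load_explicit(&chan_list, memory_order_relaxed),
		memory_order_relaxed);
	/* Publish the node: models rcu_assign_pointer(). */
	atomic_store_explicit(&chan_list, ch, memory_order_release);
}

/* Reader, in the role of the lockless list walk in vmbus_chan_sched(). */
static struct fake_channel *find_channel(int relid)
{
	struct fake_channel *ch =
		atomic_load_explicit(&chan_list, memory_order_acquire);

	while (ch && ch->relid != relid)
		ch = atomic_load_explicit(&ch->next, memory_order_acquire);
	return ch;
}

int main(void)
{
	static struct fake_channel ch = { .relid = 7 };

	enq_channel(&ch);
	printf("relid 7 %s\n", find_channel(7) ? "found" : "not found");
	return 0;
}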

I'd be inclined to stick to the current code (unless more turns out to
be required). Thoughts?

Thanks,
Andrea
