Subject: Re: PCI: MSI interrupts masked using prohibited method
On Fri, 25 Jul 2008 10:56:55 -0600
Matthew Wilcox <matthew@wil.cx> wrote:

> On Fri, Jul 25, 2008 at 05:37:49PM +0100, David Vrabel wrote:
> > The spec says that system software should enable MSI before setting
> > message address and data (PCI 3.0 section 6.8.3.1 MSI
> > configuration). The kernel doesn't do this.
>
> I think you meant "disable"? I can't find anything in 6.8.3.1 of 3.0
> that refers to this.
>
> > I really don't think we should be enabling/disabling MSI while
> > interrupts might be being generated. There are cases where
> > interrupts will be lost. Consider PCIe where we might end up with
> > a situation where MSI is disabled and then enabled sufficiently
> > quickly that no periodic line interrupt message is sent by the
> > device.
>
> I don't think there's a difference here between PCIe and conventional
> PCI. A device raising a line based interrupt is perfectly equivalent
> to a device sending an INTx message.
>
> > The message address and data should only be modified while the
> > vector is masked (to avoid the aforementioned 'tearing'). This
> > means that setting IRQ affinity cannot be done on devices without
> > per-vector mask bits. I don't think this is a problem.
>
> I agree. I think it's fine to have this limitation.

I can imagine this being a problem e.g. for people wanting to isolate
selected CPUs from interrupts for realtime tasks.

> > In vague pseudo-code, set_affinity() should be something like this:
> >
> > int did_mask = msi_mask_vector();
> > if (!did_mask) {
> >         return -ENOTSUPP;
> > }
> > /* fiddle with address and data now */
> > msi_unmask_vector();
>
> Yes, something like that.
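
To make that concrete, here is a rough sketch of how such a set_affinity()
path could look, assuming a 64-bit-capable MSI function that implements
per-vector masking.  The function name, parameters and the hard-coded mask
register offset are illustrative only; this is not the kernel's actual
implementation.

#include <linux/pci.h>
#include <linux/errno.h>

/* Per the MSI capability layout of a 64-bit-capable function, the
 * per-vector mask register (when implemented) sits at offset 16. */
#define MSI_MASK_64_OFF	16

static int msi_set_affinity_sketch(struct pci_dev *dev, u32 addr_lo,
				   u32 addr_hi, u16 data)
{
	int pos = pci_find_capability(dev, PCI_CAP_ID_MSI);
	u16 control;
	u32 mask;

	if (!pos)
		return -EINVAL;

	pci_read_config_word(dev, pos + PCI_MSI_FLAGS, &control);

	/* Without per-vector mask bits the message cannot be updated
	 * atomically, so refuse to retarget the interrupt. */
	if (!(control & PCI_MSI_FLAGS_MASKBIT))
		return -ENOTSUPP;

	/* Mask vector 0 before touching address/data. */
	pci_read_config_dword(dev, pos + MSI_MASK_64_OFF, &mask);
	pci_write_config_dword(dev, pos + MSI_MASK_64_OFF, mask | 1);

	/* Safe to rewrite the message now; an in-flight interrupt cannot
	 * latch a torn address/data pair. */
	pci_write_config_dword(dev, pos + PCI_MSI_ADDRESS_LO, addr_lo);
	pci_write_config_dword(dev, pos + PCI_MSI_ADDRESS_HI, addr_hi);
	pci_write_config_word(dev, pos + PCI_MSI_DATA_64, data);

	/* Unmask; a pending interrupt is delivered to the new target. */
	pci_write_config_dword(dev, pos + MSI_MASK_64_OFF, mask & ~1u);

	return 0;
}

A device that is not 64-bit capable keeps its data register at offset 8 and
the mask register at offset 12, so a real implementation would check
PCI_MSI_FLAGS_64BIT before picking offsets.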

