Subject: Re: [RFC PATCH v2 3/4] acpi: apei: Do not panic() when correctable errors are marked as fatal.

On 04/19/2018 10:40 AM, Borislav Petkov wrote:
> On Thu, Apr 19, 2018 at 09:57:07AM -0500, Alex G. wrote:
>> ghes_severity() is a one-to-one mapping from a set of unsorted
>> severities to monotonically increasing numbers. The "one-to-one" mapping
>> part of the sentence is obvious from the function name. To change it to
>> parse the entire GHES would completely destroy this, and I think it
>> would apply policy in the wrong place.
>
> So do a wrapper or whatever. Do a ghes_compute_severity() or however you
> would wanna call it and do the iteration there.

That doesn't sound right. There isn't a formula to compute. What we're
doing is looking at individual error sources and deciding which errors
we can handle, based both on the error itself and on our ability to
handle it.
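
For reference, the one-to-one mapping I'm referring to looks roughly like
this (paraphrasing ghes.c from memory, so treat it as a sketch rather than
a verbatim quote):

static int ghes_severity(int severity)
{
	switch (severity) {
	case CPER_SEV_INFORMATIONAL:
		return GHES_SEV_NO;
	case CPER_SEV_CORRECTED:
		return GHES_SEV_CORRECTED;
	case CPER_SEV_RECOVERABLE:
		return GHES_SEV_RECOVERABLE;
	case CPER_SEV_FATAL:
		return GHES_SEV_PANIC;
	default:
		/* Unknown severities are treated as worst case. */
		return GHES_SEV_PANIC;
	}
}

It translates exactly one CPER severity into one GHES severity and nothing
else; walking the entire GHES structure in there would turn that
translation into policy.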

>> Should I do that, I might have to call it something like
>> ghes_parse_and_apply_policy_to_severity(). But that misses the whole
>> point of these changes.
>
> What policy? You simply compute the severity like we do in the mce code.

As explained above, our ability to resolve an error depends on the
interaction between the error and the error handler. This is very closely
tied to the capabilities of each individual handler. I'll do it your
way, but I don't think ignoring this tight coupling is the right thing
to do.
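
To make sure we are picturing the same thing, here is roughly the shape I
would expect such a wrapper to take. This is a sketch only;
ghes_can_handle_section() is a made-up helper for the example, not an
existing function:

static int ghes_compute_severity(struct ghes *ghes)
{
	struct acpi_hest_generic_status *estatus = ghes->estatus;
	struct acpi_hest_generic_data *gdata;
	int worst = GHES_SEV_NO;

	apei_estatus_for_each_section(estatus, gdata) {
		int sev = ghes_severity(gdata->error_severity);

		/*
		 * If this section type has a handler that can recover
		 * (e.g. PCIe/AER), do not let it escalate to a panic
		 * on its own.
		 */
		if (ghes_can_handle_section(gdata) &&
		    sev > GHES_SEV_RECOVERABLE)
			sev = GHES_SEV_RECOVERABLE;

		if (sev > worst)
			worst = sev;
	}

	return worst;
}

My concern is that the "can we handle it" test ends up duplicating
knowledge that really belongs to each individual handler.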

>
>> I would like to get to the handlers first, and then decide if things are
>> okay or not,
>
> Why? Give me an example why you'd handle an error first and then decide
> whether we're ok or not?
>
> Usually, the error handler decides that in one place. So what exactly
> are you trying to do differently that doesn't fit that flow?

In the NMI case you don't make it to the error handler. James and I beat
this subject to the afterlife in v1.

>> I don't want to leave people scratching their heads, but I also don't
>> want to make AER a special case without having a generic way to handle
>> these cases. People are just as likely to scratch their heads
>> wondering why AER is a special case and everything else crashes.
>
> Not if it is properly done *and* documented why we're applying the
> respective policy for the error type.
>
>> Maybe it's better to move the AER handling to NMI/IRQ context, since
>> ghes_handle_aer() is only scheduling the real AER handler, and is irq
>> safe. I'm scratching my head about why we're messing with IRQ work from
>> NMI context, instead of just scheduling a regular handler to take care
>> of things.
>
> No, first pls explain what exactly you're trying to do

I realize v1 was quite a while back, so I'll take this opportunity to
restate:

At a very high level, I'm working with Dell on improving server
reliability, with a focus on NVME hotplug and surprise removal. One of
the features we don't support is surprise removal of NVME drives;
hotplug is supported with 'prepare to remove'. This is one of the
reasons NVME is not at feature parity with SAS and SATA.

My role is to solve this issue on Linux, and not to worry about other
OSes. This puts me in a position to have a Linux-centric view of the
problem, as opposed to the more common firmware-centric view.

Part of solving the surprise removal issue involves improving FFS
(firmware-first) error handling. This is required because the servers
that are shipping use FFS instead of native error notifications. In
extensive testing, I have found the NMI handler to be the most common
cause of crashes, hence this series.
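
Coming back to my question above about IRQ work from NMI context: the
layering I have in mind is the usual "queue from NMI, do the real work
later" pattern, roughly like this (the names are made up for the example;
only irq_work_queue() and schedule_work() are the real kernel APIs):

#include <linux/irq_work.h>
#include <linux/workqueue.h>

static void aer_recover_fn(struct work_struct *work)
{
	/* Process context: the real AER recovery can sleep and take
	 * locks here. */
}
static DECLARE_WORK(aer_recover_work, aer_recover_fn);

static void ghes_irq_work_fn(struct irq_work *iwork)
{
	/* IRQ context: all we do is kick a regular workqueue. */
	schedule_work(&aer_recover_work);
}
static struct irq_work ghes_irq_work = { .func = ghes_irq_work_fn };

/* Called from the NMI handler: queue and get out; never touch the
 * error handler directly. */
static void ghes_nmi_notify(void)
{
	irq_work_queue(&ghes_irq_work);
}

The part I'm questioning is whether both hops are really needed from the
NMI path, or whether queuing the regular handler more directly would do.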

> and then we can talk about how to do it.

Your move.

> Btw, a real-life example to accompany that intention goes a long way.

I'm not sure if this is the example you're looking for, but take an
r740xd server and slowly unplug an Intel NVME drive at an angle. You're
likely to crash the machine.

Alex
