Subject: RE: [RFC PATCH 0/7] RAS/CEC: Extend CEC for errors count check on short time period

Hi Boris, Hi James,

>-----Original Message-----
>From: Borislav Petkov [mailto:bp@alien8.de]
>Sent: 02 October 2020 13:44
>To: Shiju Jose <shiju.jose@huawei.com>
>Cc: linux-edac@vger.kernel.org; linux-acpi@vger.kernel.org; linux-
>kernel@vger.kernel.org; tony.luck@intel.com; rjw@rjwysocki.net;
>james.morse@arm.com; lenb@kernel.org; Linuxarm
><linuxarm@huawei.com>
>Subject: Re: [RFC PATCH 0/7] RAS/CEC: Extend CEC for errors count check on
>short time period
>
>On Fri, Oct 02, 2020 at 01:22:28PM +0100, Shiju Jose wrote:
>> Open questions based on the feedback from Boris:
>> 1. ARM processor error types are cache/TLB/bus errors.
>> [Reference N2.4.4.1 ARM Processor Error Information UEFI Spec v2.8]
>> Should any of the above error types be excluded from the error
>> collection and CPU core isolation?
>>
>> 2. If disabling an entire CPU core is not acceptable, can you suggest
>> a method to disable the L1 and L2 caches on an ARM64 core?
>
>More open questions:
>
>> This requirement is part of early fault prediction: taking action
>> when a large number of corrected errors is reported on a CPU core,
>> before they cause serious faults.
>
>And do you know of actual real-life examples where this is really the
>case? Do you have any users who report a large error count on ARM CPUs,
>originating from the caches, where something like this would really help?
>
>Because from my limited experience with x86 CPUs, the cache arrays are
>mostly fine, and errors reported there are not something that happens
>very frequently, so we don't even need to collect and count those.
>
>So is this something which you need to have in order to check a box
>somewhere that there is some functionality, or is there an actual
>real-life use case behind it which a customer has requested?
We do not have a real-life example for this case. However, errors like
this, while rare, can sometimes occur frequently at scale, which would
cause more serious issues if not handled.
>
>Open question from James with my reply to it:
>
>On Thu, Oct 01, 2020 at 06:16:03PM +0100, James Morse wrote:
>> If the corrected-count is available somewhere, can't this policy be
>> made in user-space?
The error count is present in struct cper_arm_err_info, but the fields
of this structure are not reported to user-space through trace events.
Presently only the fields of struct cper_sec_proc_arm are reported to
user-space, through the ARM trace event. Also, there can be multiple
cper_arm_err_info entries per cper_sec_proc_arm. Thus I think the error
count needs to be reported through a new trace event.
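For illustration, a minimal sketch of what such a new trace event might
look like (the event name and the set of reported fields here are my
assumptions, not a proposed ABI):

TRACE_EVENT(arm_err_info_event,

	TP_PROTO(const struct cper_arm_err_info *err, u32 cpu),

	TP_ARGS(err, cpu),

	TP_STRUCT__entry(
		__field(u32, cpu)		/* logical CPU index */
		__field(u8,  type)		/* cache/TLB/bus/micro-arch */
		__field(u16, multiple_error)	/* error count, when valid */
		__field(u64, error_info)
	),

	TP_fast_assign(
		__entry->cpu		= cpu;
		__entry->type		= err->type;
		__entry->multiple_error	= err->multiple_error;
		__entry->error_info	= err->error_info;
	),

	TP_printk("cpu: %u; type: %u; count: %u; error_info: %016llx",
		  __entry->cpu, __entry->type, __entry->multiple_error,
		  (unsigned long long)__entry->error_info)
);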

Also, the logical index of the CPU has to be extracted from the 'mpidr'
field of struct cper_sec_proc_arm, using the platform-dependent kernel
function get_logical_index(). Thus the CPU index also needs to be
reported to user-space.
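Roughly like this, I think (a sketch; the wrapper name is made up and
error handling is kept minimal):

#include <asm/cputype.h>	/* MPIDR_HWID_BITMASK  */
#include <asm/smp_plat.h>	/* get_logical_index() */

/*
 * Map the MPIDR from a CPER ARM processor section to the kernel's
 * logical CPU number. Returns -EINVAL if no CPU matches.
 */
static int cper_arm_to_logical_cpu(const struct cper_sec_proc_arm *proc)
{
	return get_logical_index(proc->mpidr & MPIDR_HWID_BITMASK);
}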
>
>You mean rasdaemon goes and offlines CPUs when certain thresholds are
>reached? Sure. It would be much more flexible too.
I think adding the CPU error collection to the kernel has the following
advantages:
1. The CPU error collection and isolation would not depend on rasdaemon
running (or continuing to run) on a machine.
2. Receiving the errors and isolating a CPU core from user-space would
probably take longer, especially when a large number of errors is
reported.
3. The interface for the configuration parameters, error statistics,
etc. is probably easier to implement in the kernel.
4. The interface provided for disabling a CPU is easy to use from the
kernel level.
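
For comparison, offlining a core from rasdaemon would essentially come
down to something like this (a sketch using the standard CPU hotplug
sysfs interface; the helper name is made up):

#include <stdio.h>

/* Take a CPU core offline via the sysfs CPU hotplug interface. */
static int offline_cpu(unsigned int cpu)
{
	char path[64];
	FILE *f;
	int ret = 0;

	snprintf(path, sizeof(path),
		 "/sys/devices/system/cpu/cpu%u/online", cpu);

	f = fopen(path, "w");
	if (!f)
		return -1;

	if (fputs("0", f) == EOF)
		ret = -1;

	if (fclose(f))
		ret = -1;

	return ret;
}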

>
>First we answer questions and discuss, then we code.
>
>--
>Regards/Gruss,
> Boris.
>
>https://people.kernel.org/tglx/notes-about-netiquette

Thanks,
Shiju