Subject: Re: [PATCH] x86,mm: print likely CPU at segfault time
On 8/2/22 13:09, Rik van Riel wrote:
> Add a printk to show_signal_msg() to print the CPU, core, and socket

Nit: ^ printk(), please

> --- a/arch/x86/mm/fault.c
> +++ b/arch/x86/mm/fault.c
> @@ -782,6 +782,12 @@ show_signal_msg(struct pt_regs *regs, unsigned long error_code,
>
> print_vma_addr(KERN_CONT " in ", regs->ip);
>
> +	printk(KERN_CONT " on CPU %d (core %d, socket %d)",
> +	       raw_smp_processor_id(),
> +	       topology_core_id(raw_smp_processor_id()),
> +	       topology_physical_package_id(raw_smp_processor_id()));

This seems totally sane to me. I have found myself looking through
customer-provided *oopses* more than once trying to figure out if the
same CPU cores were at fault. This extends that to userspace crashes
too. I've also found myself trying to map back from logical CPU numbers
to core and package.
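
(Tangent, not needed for this patch: the same mapping is exposed to
userspace under /sys/devices/system/cpu/cpuN/topology/, so a trivial
program can do the logical-CPU-to-core/socket translation when staring
at one of these reports. Rough sketch, the helper name is made up:)

#include <stdio.h>
#include <stdlib.h>

/* Read one integer topology attribute for a logical CPU from sysfs. */
static int read_topology(int cpu, const char *name)
{
	char path[128];
	int val = -1;
	FILE *f;

	snprintf(path, sizeof(path),
		 "/sys/devices/system/cpu/cpu%d/topology/%s", cpu, name);
	f = fopen(path, "r");
	if (!f)
		return -1;
	if (fscanf(f, "%d", &val) != 1)
		val = -1;
	fclose(f);
	return val;
}

int main(int argc, char **argv)
{
	int cpu = argc > 1 ? atoi(argv[1]) : 0;

	printf("CPU %d: core %d, socket %d\n", cpu,
	       read_topology(cpu, "core_id"),
	       read_topology(cpu, "physical_package_id"));
	return 0;
}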

One nit: Preempt is enabled here, right? I understand that this thing
is fundamentally racy, but if we did:

int cpu = raw_smp_processor_id();

it would make it internally *consistent*. Without that, we could
theoretically get three different raw_smp_processor_id()'s. It might
even make the code look a wee bit nicer.

The changelog here is great, but a couple of comments would also be nice:

/* This is a racy snapshot, but it is better than nothing: */
int cpu = raw_smp_processor_id();
...
/*
 * Dump the likely CPU where the fatal segfault happened. This
 * can help identify buggy pieces of hardware.
 */
printk(KERN_CONT " on CPU %d (core %d, socket %d)", cpu,
       topology_core_id(cpu),
       topology_physical_package_id(cpu));

If you want to wait a bit and see if you get any other comments, this
seems like something we can suck in after the merge window.
