Date:	Tue, 28 Nov 2023 12:33:31 -0600
Subject:	Re: [bug report] x86/split_lock: Make life miserable for split lockers
From:	Tom Lendacky <>
+Joerg
On 11/28/23 11:40, Tony Luck wrote:
> On Tue, Nov 28, 2023 at 04:12:24PM +0300, Dan Carpenter wrote:
>> Hello Tony Luck,
>>
>> The patch b041b525dab9: "x86/split_lock: Make life miserable for
>> split lockers" from Mar 10, 2022 (linux-next), leads to the following
>> Smatch static checker warning:
>>
>> 	arch/x86/kernel/cpu/intel.c:1179 split_lock_warn()
>> 	warn: sleeping in atomic context
>>
>> arch/x86/kernel/cpu/intel.c
>>     1158 static void split_lock_warn(unsigned long ip)
>>     1159 {
>>     1160         struct delayed_work *work;
>>     1161         int cpu;
>>     1162 
>>     1163         if (!current->reported_split_lock)
>>     1164                 pr_warn_ratelimited("#AC: %s/%d took a split_lock trap at address: 0x%lx\n",
>>     1165                                     current->comm, current->pid, ip);
>>     1166         current->reported_split_lock = 1;
>>     1167 
>>     1168         if (sysctl_sld_mitigate) {
>>     1169                 /*
>>     1170                  * misery factor #1:
>>     1171                  * sleep 10ms before trying to execute split lock.
>>     1172                  */
>>     1173                 if (msleep_interruptible(10) > 0)
>>     1174                         return;
>>     1175                 /*
>>     1176                  * Misery factor #2:
>>     1177                  * only allow one buslocked disabled core at a time.
>>     1178                  */
>> --> 1179                 if (down_interruptible(&buslock_sem) == -EINTR)
>>     1180                         return;
>>     1181                 work = &sl_reenable_unlock;
>>     1182         } else {
>>     1183                 work = &sl_reenable;
>>     1184         }
>>     1185 
>>     1186         cpu = get_cpu();
>>     1187         schedule_delayed_work_on(cpu, work, 2);
>>     1188 
>>     1189         /* Disable split lock detection on this CPU to make progress */
>>     1190         sld_update_msr(false);
>>     1191         put_cpu();
>>     1192 }
>>
>> The call tree is:
>>
>> kernel_exc_vmm_communication() <- disables preempt
>> -> vc_raw_handle_exception()
>>    -> vc_forward_exception()
>>       -> exc_alignment_check()
>>          -> __exc_alignment_check()
>>             -> handle_user_split_lock()
>>                -> split_lock_warn()
>>
>> I think maybe the mismatch is that kernel_exc_vmm_communication() calls
>> irqentry_nmi_enter(regs), which disables preemption, but
>> exc_alignment_check() does local_irq_enable(), which doesn't re-enable it.
>
> I think we need some arch/x86/kernel/sev.c expertise to explain the
> preemption requirements in that stack trace. Adding Tom Lendacky.
Adding Joerg as the original developer of this code.
I believe irqentry_nmi_enter() is used to ensure that the kernel can't be interrupted while the per-CPU GHCB is in use when a #VC is taken in kernel mode, in order to avoid nested #VCs (except from an NMI). Joerg might have further insights, since there was a lot of discussion around these changes.
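For context, a simplified sketch of that kernel-mode #VC entry path in arch/x86/kernel/sev.c, reconstructed from the call tree in the report above (error handling and most comments elided, so not verbatim kernel code). Everything reached through vc_raw_handle_exception(), including the forwarded exc_alignment_check(), runs inside the NMI-like section:

DEFINE_IDTENTRY_VC_KERNEL(exc_vmm_communication)
{
	irqentry_state_t irq_state;

	/*
	 * NMI-like entry: preemption stays disabled and nested #VCs are
	 * avoided (NMIs excepted) while the per-CPU GHCB may be in use.
	 */
	irq_state = irqentry_nmi_enter(regs);

	instrumentation_begin();

	/* May reach vc_forward_exception() -> exc_alignment_check() */
	if (!vc_raw_handle_exception(regs, error_code)) {
		/* unrecoverable - terminate/panic path elided */
	}

	instrumentation_end();

	/* The NMI-like section only ends here */
	irqentry_nmi_exit(regs, irq_state);
}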
I'm not sure if it is possible, but I wonder whether irqentry_nmi_exit() can be issued before forwarding the exception - or whether forwarding the exception can even be delayed until after irqentry_nmi_exit().
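Roughly, the second option might look like the sketch below. This is a hypothetical restructuring, not a tested patch: it assumes vc_raw_handle_exception() is changed to record the exception to forward in a caller-provided struct es_em_ctxt and return whether forwarding is needed, instead of calling vc_forward_exception() itself:

DEFINE_IDTENTRY_VC_KERNEL(exc_vmm_communication)
{
	irqentry_state_t irq_state;
	struct es_em_ctxt ctxt;
	bool forward;

	irq_state = irqentry_nmi_enter(regs);
	instrumentation_begin();

	/*
	 * Hypothetical changed signature: fills in ctxt with the fault
	 * to forward instead of forwarding it directly (unrecoverable
	 * error handling elided).
	 */
	forward = vc_raw_handle_exception(regs, error_code, &ctxt);

	instrumentation_end();
	irqentry_nmi_exit(regs, irq_state);

	/* Per-CPU GHCB is no longer in use; sleeping is legal again */
	if (forward)
		vc_forward_exception(&ctxt);
}

That would at least let split_lock_warn() run with preemption enabled, though whether the GHCB usage rules allow leaving the NMI-like section that early is exactly the question for Joerg.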
Thanks,
Tom
>
>> Also why does arch/x86 not have a dedicated mailing list?
>
> Good question. X86 was once the default architecture. So everything went to
> linux-kernel@vger.kernel.org. I'll add that to Cc: for this. But maybe
> it's time for an x86 specific list?
>
>> regards,
>> dan carpenter
>
> -Tony