Date: 2021-11-17
From: kernel test robot <lkp@intel.com>
Subject: [luto:sched/lazymm 11/16] kernel/sched/idle.c:288: undefined reference to `unlazy_mm_irqs_off'
tree:   https://git.kernel.org/pub/scm/linux/kernel/git/luto/linux.git sched/lazymm
head: c0d03d4f2778fd0a7c16e69cdfb3f111296129b5
commit: 23ef8314367945e2dd63230fb1c2ebb7e148fe48 [11/16] Rework "sched/core: Fix illegal RCU from offline CPUs"
config: i386-randconfig-c021-20211117 (attached as .config)
compiler: gcc-9 (Debian 9.3.0-22) 9.3.0
reproduce (this is a W=1 build):
        # https://git.kernel.org/pub/scm/linux/kernel/git/luto/linux.git/commit/?id=23ef8314367945e2dd63230fb1c2ebb7e148fe48
        git remote add luto https://git.kernel.org/pub/scm/linux/kernel/git/luto/linux.git
        git fetch --no-tags luto sched/lazymm
        git checkout 23ef8314367945e2dd63230fb1c2ebb7e148fe48
        # save the attached .config to linux build tree
        mkdir build_dir
        make W=1 O=build_dir ARCH=i386 SHELL=/bin/bash

If you fix the issue, kindly add the following tag as appropriate:
Reported-by: kernel test robot <lkp@intel.com>

All errors (new ones prefixed by >>):

ld: kernel/sched/idle.o: in function `do_idle':
>> kernel/sched/idle.c:288: undefined reference to `unlazy_mm_irqs_off'


vim +288 kernel/sched/idle.c

  255
  256  /*
  257   * Generic idle loop implementation
  258   *
  259   * Called with polling cleared.
  260   */
  261  static void do_idle(void)
  262  {
  263          int cpu = smp_processor_id();
  264
  265          /*
  266           * Check if we need to update blocked load
  267           */
  268          nohz_run_idle_balance(cpu);
  269
  270          /*
  271           * If the arch has a polling bit, we maintain an invariant:
  272           *
  273           * Our polling bit is clear if we're not scheduled (i.e. if rq->curr !=
  274           * rq->idle). This means that, if rq->idle has the polling bit set,
  275           * then setting need_resched is guaranteed to cause the CPU to
  276           * reschedule.
  277           */
  278
  279          __current_set_polling();
  280          tick_nohz_idle_enter();
  281
  282          while (!need_resched()) {
  283                  rmb();
  284
  285                  local_irq_disable();
  286
  287                  if (cpu_is_offline(cpu)) {
> 288                          unlazy_mm_irqs_off();
  289                          tick_nohz_idle_stop_tick();
  290                          cpuhp_report_idle_dead();
  291                          arch_cpu_idle_dead();
  292                  }
  293
  294                  arch_cpu_idle_enter();
  295                  rcu_nocb_flush_deferred_wakeup();
  296
  297                  /*
  298                   * In poll mode we reenable interrupts and spin. Also if we
  299                   * detected in the wakeup from idle path that the tick
  300                   * broadcast device expired for us, we don't want to go deep
  301                   * idle as we know that the IPI is going to arrive right away.
  302                   */
  303                  if (cpu_idle_force_poll || tick_check_broadcast_expired()) {
  304                          tick_nohz_idle_restart_tick();
  305                          cpu_idle_poll();
  306                  } else {
  307                          cpuidle_idle_call();
  308                  }
  309                  arch_cpu_idle_exit();
  310          }
  311
  312          /*
  313           * Since we fell out of the loop above, we know TIF_NEED_RESCHED must
  314           * be set, propagate it into PREEMPT_NEED_RESCHED.
  315           *
  316           * This is required because for polling idle loops we will not have had
  317           * an IPI to fold the state for us.
  318           */
  319          preempt_set_need_resched();
  320          tick_nohz_idle_exit();
  321          __current_clr_polling();
  322
  323          /*
  324           * We promise to call sched_ttwu_pending() and reschedule if
  325           * need_resched() is set while polling is set. That means that clearing
  326           * polling needs to be visible before doing these things.
  327           */
  328          smp_mb__after_atomic();
  329
  330          /*
  331           * RCU relies on this call to be done outside of an RCU read-side
  332           * critical section.
  333           */
  334          flush_smp_call_function_from_idle();
  335          schedule_idle();
  336
  337          if (unlikely(klp_patch_pending(current)))
  338                  klp_update_patch_state(current);
  339  }
  340
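
The "undefined reference" above is a link-time failure: do_idle() calls unlazy_mm_irqs_off() unconditionally, but this i386 randconfig apparently does not build any object file that defines the symbol. As a hypothetical illustration only (the CONFIG_MMU_LAZY_TLB symbol and header placement below are placeholders, not the actual layout of the sched/lazymm branch), the usual way to keep such a call site linkable on every configuration is to pair the out-of-line declaration with a static inline stub:

        /* Illustrative sketch -- CONFIG_MMU_LAZY_TLB and the header location
         * are assumptions, not taken from the series itself. */
        #ifdef CONFIG_MMU_LAZY_TLB
        /* Out-of-line definition is only built when the option is enabled. */
        void unlazy_mm_irqs_off(void);
        #else
        /* No-op stub so callers such as do_idle() still link on configs
         * that leave the lazy-mm code out of the build. */
        static inline void unlazy_mm_irqs_off(void) { }
        #endif

With a stub like this in the shared header, the call at kernel/sched/idle.c:288 compiles away on configurations that do not provide the real definition and the linker error disappears; the alternative is to guard the call site itself with the same #ifdef.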

---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org