After the change below was merged into the kernel, spin_lock() turns off preemption by default, but this behavior is not appropriate for all scenarios. Most places in the kernel that use spin_lock() extensively only need short critical sections and never trigger scheduling, so the extra scheduler locking on every acquisition leads to serious performance degradation of NuttX in AMP mode.
In this PR I try to expose similar problems, and I hope that each subsystem will carefully check the affected code:
https://github.com/apache/nuttx/pull/14578
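
To make the cost concrete before quoting the commit, here is a minimal C sketch of the acquisition path before and after the change. The helper names old_/new_spin_lock_irqsave and their simplified bodies are mine for illustration only; the real implementation in commit b69111d16a differs in detail.

#include <sched.h>
#include <nuttx/irq.h>
#include <nuttx/spinlock.h>

/* Before the change: only local interrupts are masked around the spin. */

static inline irqstate_t old_spin_lock_irqsave(FAR spinlock_t *lock)
{
  irqstate_t flags = up_irq_save();
  spin_lock(lock);
  return flags;
}

/* After the change: preemption is disabled as well, Linux-style.  For a
 * short critical section that never schedules, this sched_lock() and the
 * matching sched_unlock() in the release path are pure overhead paid on
 * every acquisition.
 */

static inline irqstate_t new_spin_lock_irqsave(FAR spinlock_t *lock)
{
  irqstate_t flags = up_irq_save();
  sched_lock();   /* the addition discussed here */
  spin_lock(lock);
  return flags;
}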
|commit b69111d16a
|Author: hujun5 <hujun5@xiaomi.com>
|Date: Thu Jan 23 16:14:18 2025 +0800
|
| spinlock: add sched_lock to spin_lock_irqsave
|
| reason:
| We aim to replace big locks with smaller ones, so we will use spin_lock_irqsave extensively to
| replace enter_critical_section in subsequent work. We imitate the implementation of Linux by
| adding sched_lock to spin_lock_irqsave in order to address scenarios where sem_post occurs
| within spin_lock_irqsave, which can lead to spinlock failures and deadlocks.
|
| Signed-off-by: hujun5 <hujun5@xiaomi.com>
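
For completeness, here is a hypothetical driver fragment (the names g_dev_lock, g_dev_sem, and device_complete are illustrative, not from the PR) sketching the deadlock the commit guards against:

#include <semaphore.h>
#include <nuttx/irq.h>
#include <nuttx/spinlock.h>

static spinlock_t g_dev_lock = SP_UNLOCKED;
static sem_t g_dev_sem;

void device_complete(void)
{
  irqstate_t flags = spin_lock_irqsave(&g_dev_lock);

  /* sem_post() may make a higher-priority task runnable.  Without the
   * sched_lock() taken inside spin_lock_irqsave(), the scheduler could
   * switch to that task right here, while g_dev_lock is still held.  If
   * the woken task (or another CPU) then tries to acquire g_dev_lock, it
   * spins on a lock whose holder is no longer running.
   */

  sem_post(&g_dev_sem);

  spin_unlock_irqrestore(&g_dev_lock, flags);
}

With sched_lock() taken inside spin_lock_irqsave(), that context switch is deferred until the lock is released, which is safe; the trade-off is the per-acquisition overhead described above for the common short-critical-section case.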
Signed-off-by: chao an <anchao.archer@bytedance.com>