Change every occurrence of up_switch_context to use this_task() as the first parameter.
"nxsched_add_readytorun" returns "true" if a context switch is required. "nxsched_add_readytorun"
typically switches the assigned/running task to the one which is passed in as a parameter.
But this is not strictly guaranteed in SMP; if other CPUs tweak affinities or priorities,
the running task after the call may be some other task from the readytorun list (and it
should be, if a higher-priority one is available or the affinity of the added task prevents
it from being scheduled in, in which case the previous head of the readytorun list should
run).
this_task() is always the correct one to switch to, since it always points to the TCB which
was just switched in by nxsched_add_readytorun.
This is also a precursor to rewriting the SMP queue logic to remove the pending lists for SMP.
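A minimal sketch of the call-site pattern this change applies (btcb and rtcb are illustrative
names for the added and currently-running TCBs at a typical call site):

  /* Before: switch to the tcb passed to nxsched_add_readytorun(),
   * which may no longer be the head of the readytorun list.
   */

  if (nxsched_add_readytorun(btcb))
    {
      up_switch_context(btcb, rtcb);
    }

  /* After: always switch to this_task(), i.e. the tcb that
   * nxsched_add_readytorun() actually switched in.
   */

  if (nxsched_add_readytorun(btcb))
    {
      up_switch_context(this_task(), rtcb);
    }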
Signed-off-by: Jukka Laitinen <jukka.laitinen@tii.ae>
There is no need to check the holder structure "counts". There are cases
where the counts may be greater than 1 when several tasks block
on the mutex, but there is always just one holder, which must be freed.
Signed-off-by: Jukka Laitinen <jukka.laitinen@tii.ae>
This is not a bug, but unnecessary code. If the mutex is no longer blocking,
the released thread will set the holder and clear the blocking bit at the end
of nxsem_wait.
Signed-off-by: Jukka Laitinen <jukka.laitinen@tii.ae>
Checking only adds a race condition. The check for whether the wdog is active
or not must be done inside wd_cancel, where the proper spinlock is held.
Signed-off-by: Jukka Laitinen <jukka.laitinen@tii.ae>
This commit adds a macro function, clock_delay2abstick, to calculate the
absolute tick after a delay.
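In spirit, the macro just adds the relative delay to the current tick counter. A minimal
sketch of the likely shape (the exact definition lives in the clock headers):

  /* Convert a relative delay in ticks to an absolute tick value */

  #define clock_delay2abstick(delay) (clock_systime_ticks() + (delay))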
Signed-off-by: ouyangxiangzhen <ouyangxiangzhen@xiaomi.com>
This puts the mutex support fully inside nxsem, allowing
locking the mutex and setting the holder with a single atomic
operation.
This enables fast mutex locking from userspace, avoiding
critical sections, which may be heavy in SMP, and it enables cleanup
of the nxmutex library in the future.
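A minimal sketch of the idea, assuming the holder TID lives in an atomic field of the
semaphore (the field name mholder and the fallback here are illustrative, not the exact
NuttX layout):

  #include <stdatomic.h>

  uint32_t nobody = NXMUTEX_NO_HOLDER;

  /* Lock the mutex and record the holder in one compare-exchange */

  if (atomic_compare_exchange_strong(&sem->mholder, &nobody,
                                     (uint32_t)nxsched_gettid()))
    {
      return OK;          /* Fast path: locked and holder set atomically */
    }

  return nxsem_wait(sem); /* Contended: fall back to the blocking path */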
Signed-off-by: Jukka Laitinen <jukka.laitinen@tii.ae>
The task which is deleted should be removed from the semaphore's wait list,
if the task happens to be blocked on one.
Signed-off-by: Jukka Laitinen <jukka.laitinen@tii.ae>
This avoids unnecessary syscalls in memory-protected builds, when mutex
lock/unlock can be done with only atomic counter access.
Signed-off-by: Jukka Laitinen <jukka.laitinen@tii.ae>
When the semaphore priority flag is set to NONE and the semaphore
is a mutex, the fast locking path can be used, even when
priority inheritance or priority protection is enabled globally.
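For illustration, the opt-out could look like this at setup time (a sketch; check the
current nxsem interface for the exact calls):

  sem_t mutex_sem;

  nxsem_init(&mutex_sem, 0, 1);                   /* binary, mutex-like */
  nxsem_set_protocol(&mutex_sem, SEM_PRIO_NONE);  /* enables the fast path */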
Signed-off-by: Jukka Laitinen <jukka.laitinen@tii.ae>
The kernel mapping should be performed in sem_wait (thread level), as
virtual memory mappings cannot be added from interrupt level, at least for now.
The reason: kmm_map() depends on mm_map_add(), which in turn uses a mutex for mutual
exclusion, and using mutexes from interrupt level is not permitted.
Mapping tcb->waitobj into kernel virtual memory directly in sem_wait()
makes sense, since accessing tcb->waitobj via a user virtual address can
lead to unexpected results (the wrong mappings can be in place).
1. Remove the up_interrupt_context() check; the call should be safe in interrupt context.
2. Remove the sem instance check; it will be handled in nxsem_trywait().
Signed-off-by: chao an <anchao@lixiang.com>
Otherwise the free holder list will leak, causing either a crash due to
holder->htcb being NULL, or the free holder list (erroneously) becoming empty
even though most of the holder entries are free.
The holder list can be modified from interrupt context, so using addrenv_select is
not safe. Access the semaphore by mapping it into kernel virtual memory
instead.
The temporary mappings via addrenv_select() and addrenv_restore() simply
do not work from interrupt context, so remove their usage and replace it with kmap,
which is safe.
Add sem_wait fast operations: use atomics to ensure
the atomicity of semcount operations, so that they do not depend
on the critical section.
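A minimal sketch of such a fast path using C11 atomics (illustrative only; the semcount
layout and the blocking fallback nxsem_wait_slow are assumptions, not the NuttX code):

  #include <stdatomic.h>

  extern int nxsem_wait_slow(atomic_int *semcount); /* hypothetical */

  int fast_sem_wait(atomic_int *semcount)
  {
    int count = atomic_load(semcount);

    while (count > 0)
      {
        /* Try to take one count without a critical section; on failure
         * the compare-exchange reloads count with the current value.
         */

        if (atomic_compare_exchange_weak(semcount, &count, count - 1))
          {
            return 0;
          }
      }

    return nxsem_wait_slow(semcount); /* Contended: block */
  }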
Test with robot:
before modify:
nxmutex_lock cost: 78 ns
nxmutex_unlock cost: 82 ns
after modify:
nxmutex_lock cost: 28 ns
nxmutex_unlock cost: 14 ns
Signed-off-by: zhangyuan29 <zhangyuan29@xiaomi.com>
This reverts commit befe29801f.
Because a few regressions have been reported and it will likely
take some time to fix them:
* for some configurations, the semaphore can be placed in a special
memory region where atomic access is not available.
cf. https://github.com/apache/nuttx/pull/14625
* include/nuttx/lib/stdatomic.h is not compatible with
the C11 semantics on which the change in question relies.
cf. https://github.com/apache/nuttx/pull/14755
Move CONFIG_SEM_PREALLOCHOLDERS to include/semaphore.h to avoid the macro being undefined in other places as well.
Signed-off-by: cuiziwei <cuiziwei@xiaomi.com>
Add sem_wait fast operations: use atomics to ensure
the atomicity of semcount operations, so that they do not depend
on the critical section.
Test with robot:
before modify:
nxmutex_lock cost: 78 ns
nxmutex_unlock cost: 82 ns
after modify:
nxmutex_lock cost: 28 ns
nxmutex_unlock cost: 14 ns
Signed-off-by: zhangyuan29 <zhangyuan29@xiaomi.com>
set CONFIG_PRIORITY_INHERITANCE=y
set CONFIG_SEM_PREALLOCHOLDERS=0 or CONFIG_SEM_PREALLOCHOLDERS=8
#24 0x4dcab71 in __assert assert/lib_assert.c:37
#25 0x4d6b0e9 in nxsem_destroyholder semaphore/sem_holder.c:602
#26 0x4d80cf7 in nxsem_destroy semaphore/sem_destroy.c:80
#27 0x4d80db9 in sem_destroy semaphore/sem_destroy.c:120
#28 0x4dcb077 in nxmutex_destroy misc/lib_mutex.c:122
#29 0x4dc6611 in pipecommon_freedev pipes/pipe_common.c:117
#30 0x4dc7fdc in pipecommon_close pipes/pipe_common.c:397
#31 0x4ed4f6d in file_close vfs/fs_close.c:78
#32 0x6a91133 in local_free local/local_conn.c:184
#33 0x6a92a9c in local_release local/local_release.c:129
#34 0x6a91d1a in local_subref local/local_conn.c:271
#35 0x6a75767 in local_close local/local_sockif.c:797
#36 0x4e978f6 in psock_close socket/net_close.c:102
#37 0x4eed1b9 in sock_file_close socket/socket.c:115
#38 0x4ed4f6d in file_close vfs/fs_close.c:78
#39 0x4ed1459 in nx_close_from_tcb inode/fs_files.c:754
#40 0x4ed1501 in nx_close inode/fs_files.c:781
#41 0x4ed154a in close inode/fs_files.c:819
#42 0x6bcb9ce in property_get kvdb/client.c:307
#43 0x6bcd465 in property_get_int32 kvdb/common.c:270
#44 0x5106c9a in tz_offset_restore app/miwear_bluetooth.c:745
#45 0x510893f in miwear_bluetooth_main app/miwear_bluetooth.c:1033
#46 0x4dcf5c8 in nxtask_startup sched/task_startup.c:70
#47 0x4d70873 in nxtask_start task/task_start.c:134
#48 0x4e04a07 in pre_start sim/sim_initialstate.c:52
Signed-off-by: ligd <liguiding1@xiaomi.com>
set CONFIG_PRIORITY_INHERITANCE=y
set CONFIG_SEM_PREALLOCHOLDERS=0
semaphore/sem_holder.c:320:34: runtime error: member access within null pointer of type 'struct tcb_s'
#0 0xd8b540 in nxsem_boostholderprio semaphore/sem_holder.c:320
#1 0xd8c1cf in nxsem_boost_priority semaphore/sem_holder.c:703
#2 0xda5dfa in nxsem_wait semaphore/sem_wait.c:145
#3 0xda61d9 in nxsem_wait_uninterruptible semaphore/sem_wait.c:248
#4 0x12f2477 in media_service_thread0 /home/ligd/platform/dev/apps/examples/hello/hello_main.c:44
#5 0x1204154 in pthread_startup pthread/pthread_create.c:59
#6 0x1cd906f in pthread_start pthread/pthread_create.c:139
#7 0xe72fcb in pre_start sim/sim_initialstate.c:52
Signed-off-by: ligd <liguiding1@xiaomi.com>
If the write lock is already held by the caller, and since the write
lock can be held recursively, this operation can be converted to a write
lock to avoid deadlock.
Signed-off-by: dongjiuzhu1 <dongjiuzhu1@xiaomi.com>
nxsem_tickwait correctly sleeps more than 1 tick. But nxsem_tickwait_uninterruptible
may wake up due to a signal (with -EINTR), in which case the tick + 1 must also
be taken into account. Otherwise nxsem_tickwait_uninterruptible may
wake up 1 tick too early.
Also fix nxsem_tickwait to return -ETIMEDOUT if called with delay 0.
This is similar to e.g. POSIX sem_timedwait.
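A sketch of the intended retry behavior (illustrative shape, not the exact NuttX code):
compute the absolute deadline once, including the +1 tick guard, and recompute the
remaining delay after every -EINTR wakeup so the guard is not lost across retries.

  #include <errno.h>
  #include <nuttx/clock.h>
  #include <nuttx/semaphore.h>

  int nxsem_tickwait_uninterruptible(FAR sem_t *sem, uint32_t delay)
  {
    clock_t end = clock_systime_ticks() + delay + 1;
    clock_t now;
    int ret;

    for (; ; )
      {
        ret = nxsem_tickwait(sem, delay);
        if (ret != -EINTR)
          {
            return ret; /* OK, -ETIMEDOUT, or another error */
          }

        /* Woken by a signal: recompute the remaining ticks so that
         * the deadline (with its +1 guard) stays intact.
         */

        now = clock_systime_ticks();
        delay = (end > now) ? (uint32_t)(end - now) : 0;
      }
  }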
Signed-off-by: Jukka Laitinen <jukkax@ssrc.tii.ae>
reason:
1. In the scenario of active waiting, context switching is inevitable, and we can eliminate redundant judgments.
code size
before
hujun5@hujun5-OptiPlex-7070:~/downloads1/vela_sim/nuttx$ size nuttx
text data bss dec hex filename
262848 49985 63893 376726 5bf96 nuttx
after
hujun5@hujun5-OptiPlex-7070:~/downloads1/vela_sim/nuttx$ size nuttx
text data bss dec hex filename
263324 49985 63893 377202 5c172 nuttx
code size increases by 476 bytes.
Configuring NuttX and compile:
$ ./tools/configure.sh -l qemu-armv8a:nsh_smp
$ make
Running with qemu
$ qemu-system-aarch64 -cpu cortex-a53 -smp 4 -nographic \
-machine virt,virtualization=on,gic-version=3 \
-net none -chardev stdio,id=con,mux=on -serial chardev:con \
-mon chardev=con,mode=readline -kernel ./nuttx
Signed-off-by: hujun5 <hujun5@xiaomi.com>
Most tools used for compliance and SBOM generation use SPDX identifiers.
This change brings us a step closer to easy SBOM generation.
Signed-off-by: Alin Jerpelea <alin.jerpelea@sony.com>
We can use g_cpu_lockset to determine whether we are currently holding the scheduling lock.
All accesses to and modifications of g_cpu_lockset, g_cpu_irqlock, and g_cpu_irqset
take place within the critical section, so we can operate on them directly.
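For illustration, the test could look like this inside the critical section (a sketch;
the exact bitmask type and helpers follow the NuttX SMP internals):

  /* Within the critical section: is this CPU holding the sched lock? */

  if ((g_cpu_lockset & (1 << this_cpu())) != 0)
    {
      /* Scheduling is locked on this CPU */
    }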
test:
We can use qemu for testing.
compiling
make distclean -j20; ./tools/configure.sh -l qemu-armv8a:nsh_smp ;make -j20
running
qemu-system-aarch64 -cpu cortex-a53 -smp 4 -nographic \
  -machine virt,virtualization=on,gic-version=3 \
  -net none -chardev stdio,id=con,mux=on -serial chardev:con \
  -mon chardev=con,mode=readline -kernel ./nuttx
Signed-off-by: hujun5 <hujun5@xiaomi.com>
The overhead of a spinlock is less than that of a mutex (a mutex needs to call
enter_critical_section()).
After this patch, `down_write_trylock` and `down_read_trylock` can be
used in interrupt context.
The region protected by the mutex is only a single instruction, so using a
spinlock is better.
Signed-off-by: TaiJu Wu <tjwu1217@gmail.com>
A new locking mechanism: read/write locks.
When there is a writer, it is not possible to take a read lock or a write lock; when there is a reader, the read lock can be re-entered, but the write lock cannot.
Writers are exclusive locks; readers are shared locks.
The waiter count is used to determine whether there is currently a blocked task; if so, all waiters are woken at unlock time, and the blocked lock acquisition completes through priority competition.
For example:
When a reader blocks two waiting writers, unlocking the reader wakes up both writers. The higher-priority writer wakes up, finds its condition satisfied, and takes the lock; the second writer wakes up, fails its condition check, and continues to block.
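A short usage sketch of the API (names follow the NuttX rwsem interface; treat the header
path and exact signatures as assumptions):

  #include <nuttx/rwsem.h>

  static rw_semaphore_t g_rwsem; /* set up with init_rw_semaphore() */

  void reader_task(void)
  {
    down_read(&g_rwsem);   /* Shared: re-entrant for readers */
    /* ... read shared state ... */
    up_read(&g_rwsem);
  }

  void writer_task(void)
  {
    down_write(&g_rwsem);  /* Exclusive: waits for readers and writers */
    /* ... modify shared state ... */
    up_write(&g_rwsem);
  }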
Signed-off-by: chenrun1 <chenrun1@xiaomi.com>
This moves all the public POSIX semaphore functions into libc, and with
this most of the user-space logic is also moved; namely, cancellation point and
errno handling.
This also removes the need for the _SEM_XX macros used to differentiate
which API is used in user/kernel mode. Such macros are henceforth
unnecessary.
If the semaphore is shared, the holder has put its own mmapped address
into pholder->sem. This means we must switch to the holder's address
environment when going through the held semaphores list.
A better option would be to get the kernel mapped address for the
semaphore's physical page, but that mechanism is not functional yet.
This fixes a full system crash when CONFIG_PRIORITY_INHERITANCE=y and
CONFIG_BUILD_KERNEL=y and the user creates a shared semaphore via:
int semfd = shm_open("sem", O_CREAT | O_RDWR, 0666);
ftruncate(semfd, sizeof(sem_t)); /* size the object before mapping it */
sem_t *sem = mmap(0, sizeof(sem_t), PROT_READ | PROT_WRITE, MAP_SHARED, semfd, 0);
This patch is just to fix the Mac sim-02 issue.
The compiler requires that the first parameter of atomic_compare_exchange_strong
be an atomic type and the second parameter be an int type.
Signed-off-by: TaiJu Wu <tjwu1217@gmail.com>
spinlock.c:
Implement a read/write spinlock.
Readers can take the lock simultaneously, but only one writer can take the lock.
irq_spinlock.c:
Align with g_irq_spin_count.
If the lock is NULL, the caller will get the global lock (e.g. g_irq_spin), and spin_lock_irqsave() supports nesting on the same CPU.
If a CPU holds the write lock, it can call write_lock_irqsave() again (i.e. nesting is supported).
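A compact sketch of the read/write spinlock technique (an illustration of the approach,
not the NuttX source): positive values count the readers, and a sentinel marks an
exclusive writer.

  #include <stdatomic.h>

  #define RW_SP_UNLOCKED     0
  #define RW_SP_WRITE_LOCKED (-1)

  typedef atomic_int rwlock_t;

  static inline void read_lock(rwlock_t *lock)
  {
    for (; ; )
      {
        int old = atomic_load(lock);
        if (old >= RW_SP_UNLOCKED &&
            atomic_compare_exchange_weak(lock, &old, old + 1))
          {
            return;  /* Joined the reader count */
          }
      }
  }

  static inline void read_unlock(rwlock_t *lock)
  {
    atomic_fetch_sub(lock, 1);
  }

  static inline void write_lock(rwlock_t *lock)
  {
    int expected = RW_SP_UNLOCKED;
    while (!atomic_compare_exchange_weak(lock, &expected,
                                         RW_SP_WRITE_LOCKED))
      {
        expected = RW_SP_UNLOCKED; /* Spin until readers and writer drain */
      }
  }

  static inline void write_unlock(rwlock_t *lock)
  {
    atomic_store(lock, RW_SP_UNLOCKED);
  }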
Signed-off-by: TaiJu Wu <tjwu1217@gmail.com>
Co-authored-by: David Sidrane <David.Sidrane@Nscdg.com>
test config: ./tools/configure.sh -l qemu-armv8a:nsh_smp
Pass ostest
No matter big-endian or little-endian, the ticket spinlock only checks whether
the next and owner fields are equal.
If they are equal, the lock is free; if they are not, a task holds the lock.
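A minimal sketch of the technique (illustrative, not the NuttX source): the free/held
test compares two like-typed fields, so it is independent of byte order.

  #include <stdatomic.h>
  #include <stdbool.h>
  #include <stdint.h>

  typedef struct
  {
    atomic_ushort next;   /* Next ticket to hand out */
    atomic_ushort owner;  /* Ticket currently being served */
  } ticket_lock_t;

  static inline void ticket_lock(ticket_lock_t *lock)
  {
    uint16_t ticket = atomic_fetch_add(&lock->next, 1);
    while (atomic_load(&lock->owner) != ticket)
      {
        /* Spin until our ticket is served */
      }
  }

  static inline bool ticket_trylock(ticket_lock_t *lock)
  {
    uint16_t owner    = atomic_load(&lock->owner);
    uint16_t expected = owner;

    /* Free iff next == owner: take the next ticket only in that case */

    return atomic_compare_exchange_strong(&lock->next, &expected,
                                          (uint16_t)(owner + 1));
  }

  static inline void ticket_unlock(ticket_lock_t *lock)
  {
    atomic_fetch_add(&lock->owner, 1);
  }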
Signed-off-by: TaiJu Wu <tjwu1217@gmail.com>
Co-authored-by: Xiang Xiao <xiaoxiang781216@gmail.com>