nxsig_timeout calls nxsched_add_readytorun and up_switch_context, so it
must run inside a critical section.
nxsig_timeout is used as the wdentry in nxsig_clockwait.
See wd_expiration: CALL_FUNC is not executed within a critical section.
Signed-off-by: Serg Podtynnyi <serg@podtynnyi.com>
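A minimal sketch of the shape of the fix, assuming the standard enter_critical_section()/leave_critical_section() APIs; the callback body is elided and only illustrative:

#include <nuttx/irq.h>
#include <nuttx/wdog.h>

static void nxsig_timeout(wdparm_t arg)
{
  irqstate_t flags;

  /* wd_expiration() runs wdentry callbacks outside of any critical
   * section, so establish one here before touching the ready-to-run
   * list or switching contexts.
   */

  flags = enter_critical_section();

  /* ... wake the waiting task; this path may call
   * nxsched_add_readytorun() and up_switch_context() ...
   */

  leave_critical_section(flags);
}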
If the wdog has already completed when wd_cancel is called, the cancel is
supposed to just return OK.
Signed-off-by: Jukka Laitinen <jukka.laitinen@tii.ae>
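An illustrative sketch of the intended behavior, not the actual wd_cancel() body; WDOG_ISACTIVE() is assumed here to test whether the timer is still pending:

#include <errno.h>
#include <nuttx/wdog.h>

int wd_cancel(FAR struct wdog_s *wdog)
{
  if (wdog == NULL)
    {
      return -EINVAL;
    }

  if (!WDOG_ISACTIVE(wdog))
    {
      /* The wdog has already expired (or was never started), so there
       * is nothing to cancel: report success.
       */

      return OK;
    }

  /* ... otherwise remove the wdog from the active timer list ... */

  return OK;
}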
This commit added a macro function clock_delay2abstick to calculate the
absolute tick after the delay.
Signed-off-by: ouyangxiangzhen <ouyangxiangzhen@xiaomi.com>
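A plausible sketch of the helper, assuming it builds on the existing clock_systime_ticks() API; the actual definition may differ:

#include <nuttx/clock.h>

/* Absolute tick at which a relative delay (in ticks) expires */

#define clock_delay2abstick(delay) (clock_systime_ticks() + (clock_t)(delay))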
This commit resolves a timing error caused by the round-up behavior in clock_time2ticks. In rare cases, this could lead to a two-tick increment within a single tick interval. To fix this, we introduced clock_time2ticks_floor, which guarantees the correct semantics for obtaining current system ticks.
Signed-off-by: ouyangxiangzhen <ouyangxiangzhen@xiaomi.com>
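A hedged sketch of the floor conversion, assuming the usual TICK_PER_SEC/NSEC_PER_TICK constants; the real clock_time2ticks_floor() may be structured differently:

#include <time.h>
#include <nuttx/clock.h>

static inline clock_t clock_time2ticks_floor(FAR const struct timespec *ts)
{
  /* Truncate instead of rounding up, so converting "now" to ticks can
   * never land one tick in the future.
   */

  return (clock_t)ts->tv_sec * TICK_PER_SEC +
         (clock_t)(ts->tv_nsec / NSEC_PER_TICK);
}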
This commit added CONFIG_TIMER_ADJUST_USEC to support time compensation for the wdog timer. Normally, a timer event cannot be triggered at the exact time due to interrupt latency. Assuming that the interrupt latency is distributed within [Best-Case Execution Time, Worst-Case Execution Time], we can set the timer adjustment value to the BCET to reduce the latency.
Signed-off-by: ouyangxiangzhen <ouyangxiangzhen@xiaomi.com>
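A conceptual sketch only: the adjustment is subtracted from the programmed delay so the callback fires closer to the requested time. Where exactly the compensation is applied is an assumption, not the real code:

#ifdef CONFIG_TIMER_ADJUST_USEC
  sclock_t adjust = USEC2TICK(CONFIG_TIMER_ADJUST_USEC);

  if (delay > adjust)
    {
      delay -= adjust;  /* Compensate for latency down to the BCET */
    }
#endif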
To improve absolute timer accuracy, this commit moves the tick++ to the relative wdog timer.
Signed-off-by: ouyangxiangzhen <ouyangxiangzhen@xiaomi.com>
Otherwise these will be caught by codespell when running checkpatch.sh.
Also fix the related comments in the changed files.
include/nuttx/scsi.h
drivers/syslog/ramlog.c
These are excluded, as we would have to modify field names in structs.
Signed-off-by: buxiasen <buxiasen@xiaomi.com>
This commit improves the performance of work_queue by reducing
unnecessary wdog timer setting.
Signed-off-by: ouyangxiangzhen <ouyangxiangzhen@xiaomi.com>
This commit fixes work_cancel_sync in a very rare boundary case. When a worker thread re-enqueues the work data structure during execution of the work, the user thread cannot directly dequeue the work in work_cancel_sync. Instead, it must wait until all workers' references to the work data structure have been released after dequeuing.
Signed-off-by: ouyangxiangzhen <ouyangxiangzhen@xiaomi.com>
When using pthread_kill, the signal should be delivered to the
specified thread. The current implementation, however, may add the
signal to the group's pending list if the signal is masked at the
time of dispatch. From the group's pending list it can be delivered
to any thread of the group, which is wrong.
Fix this by adding a new field "FAR struct tcb_s *tcb" to
"struct sigpendq", which marks whether the signal must be delivered
to a specific thread. The value is NULL if delivery to any
thread in the group is ok.
Signed-off-by: Jukka Laitinen <jukka.laitinen@tii.ae>
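A rough sketch of the extended structure; the existing members are recalled from memory and may not match the tree exactly, but the added tcb field is the one named above:

struct sigpendq
{
  FAR struct sigpendq *flink;  /* Forward link */
  siginfo_t info;              /* Signal information */
  uint8_t type;                /* Allocation type */
  FAR struct tcb_s *tcb;       /* Deliver only to this thread;
                                * NULL: any thread in the group */
};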
kmm_alloc can break the critical section if it sleeps on the
heap mutex. If we have run out of pending signal structures, allocate more
right after entering the critical section, but before checking whether the
signal needs to be added to the pending queue.
Signed-off-by: Jukka Laitinen <jukka.laitinen@tii.ae>
The TCB used in find_action is protected by a spinlock, so find_action does
not need to be inside the critical section. Just find the possible action
at the beginning of nxsig_tcbdispatch.
Signed-off-by: Jukka Laitinen <jukka.laitinen@tii.ae>
- Remove the redundant holder, as nxsem now manages the holder TID
- Remove the debug assertions that are now handled in nxsem
- Remove the "reset" handling logic, as it is now managed in nxsem
- Inline the simplest functions
Signed-off-by: Jukka Laitinen <jukka.laitinen@tii.ae>
This puts the mutex support fully inside nxsem, allowing
locking the mutex and setting the holder with a single atomic
operation.
This enables fast mutex locking from userspace without taking
critical sections, which may be heavy in SMP, and allows cleaning up
the nxmutex library in the future.
Signed-off-by: Jukka Laitinen <jukka.laitinen@tii.ae>
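A purely illustrative, self-contained sketch of the idea in plain C11 atomics (not NuttX's actual nxsem internals): the holder TID doubles as the lock word, so acquiring the mutex and recording the holder is one compare-and-swap:

#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>

#define MHOLDER_FREE 0u  /* No holder: the mutex is unlocked */

static bool mutex_trylock_fast(_Atomic uint32_t *holder, uint32_t my_tid)
{
  uint32_t expected = MHOLDER_FREE;

  /* One atomic operation: succeeds only if unlocked, and on success
   * the holder TID is already recorded, with no critical section.
   */

  return atomic_compare_exchange_strong(holder, &expected, my_tid);
}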
A task that is deleted should be removed from the semaphore's wait list
if it happens to be blocked on a semaphore.
Signed-off-by: Jukka Laitinen <jukka.laitinen@tii.ae>
For the watchdog list and the workqueue list, we need to know whether the list head has changed after an insertion. In the original implementation, we had to access the list->next field and compare the result to the node just inserted. In this commit, we mark the list head before the insertion and compare the currently traversed node with the marked head, which avoids accessing list->next and is more cache-friendly.
Signed-off-by: ouyangxiangzhen <ouyangxiangzhen@xiaomi.com>
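A conceptual, self-contained illustration of the technique (not the real wdog/wqueue code): the head is recorded before the sorted insert, so deciding whether the deadline changed never touches list->next afterwards:

#include <stdbool.h>
#include <stddef.h>

struct node_s
{
  struct node_s *next;
  unsigned int expiry;              /* Sort key: expiration tick */
};

/* Returns true if the inserted node became the new head, i.e. the
 * underlying timer deadline must be reprogrammed.
 */

static bool sorted_insert(struct node_s **head, struct node_s *node)
{
  struct node_s *old_head = *head;  /* Mark the head before insertion */
  struct node_s **curr = head;

  while (*curr != NULL && (*curr)->expiry <= node->expiry)
    {
      curr = &(*curr)->next;
    }

  node->next = *curr;
  *curr = node;

  /* Compare against the marked head instead of re-reading (*head)->next
   * after the insertion.
   */

  return old_head == NULL || node->expiry < old_head->expiry;
}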
This commit refactors the logic by which the workqueue processes delayed and periodic work, and changes the timer to be set in `work_thread`. The improvements of this change are as follows:
- Fixed the memory reuse problem of the original periodic workqueue implementation.
- By removing the `wdog_s` structure from the `work_s` structure, the memory overhead of each `work_s` structure is reduced by about 30 bytes.
- Set the timer per workqueue instead of per work item, which improves system performance.
- Simplified the workqueue cancel logic.
Signed-off-by: ouyangxiangzhen <ouyangxiangzhen@xiaomi.com>
In NuttX, dq and list are two different implementations of a doubly-linked list. Compared to dq, the list implementation has fewer branch conditions, such as checking whether the head or tail is NULL. In both theory and practice, list is more friendly to the CPU pipeline. This commit changes dq to list in the wqueue implementation.
Signed-off-by: ouyangxiangzhen <ouyangxiangzhen@xiaomi.com>
For example, a race in managing tcb->sigprocmask may cause a signal not to be delivered at all.
The mechanism is as follows:
1. cpu 1: nxsig_deliver adds the signal to the sigprocmask for the duration of signal delivery
2. cpu 2: a new signal (same signo) is dispatched in nxsig_tcbdispatch
3. cpu 2: nxsig_tcbdispatch detects that the signal is masked
4. cpu 2: nxsig_tcbdispatch leaves the critical section before calling nxsig_add_pendingsignal
5. cpu 1: nxsig_deliver finishes. "nxsig_unmask_pendingsignal" is called at the end but does nothing
6. cpu 2: nxsig_tcbdispatch continues and adds the pending signal
As a result, the logic at the end of nxsig_deliver, which tries to handle signals added
to the pending queue during signal action delivery (step 5), fails, and the pending signal
is not delivered.
Fix this by keeping the critical section for the whole duration of nxsig_tcbdispatch,
and moving the things that cannot be executed from within a critical section outside of it.
Signed-off-by: Jukka Laitinen <jukka.laitinen@tii.ae>
For some SMP calls it is necessary to lock the current CPU for the process
receiving the SMP call. This is done by setting the CPU affinity to the
current CPU and preventing the CPU selection algorithm from switching
CPUs.
dtcb->flags |= TCB_FLAG_CPU_LOCKED;
CPU_SET(dtcb->cpu, &dtcb->affinity);
However, this logic is currently broken, as CPU_SET is defined as:
#define CPU_SET(c,s) do { *(s) |= (1u << (c)); } while (0)
In order to assign tcb->cpu (the current CPU) to the affinity mask, the
mask must be cleared first by calling CPU_ZERO.
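A fragment sketch of the corrected sequence described above (surrounding code elided):

dtcb->flags |= TCB_FLAG_CPU_LOCKED;

CPU_ZERO(&dtcb->affinity);            /* Clear any previously set CPUs */
CPU_SET(dtcb->cpu, &dtcb->affinity);  /* Allow only the current CPU    */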
Renaming "modlib" to "libelf" is more in line with the implementation content,
which makes it easier for individual developers to understand the capabilities of this module.
CONFIG_LIBC_MODLIB -> CONFIG_LIBC_ELF
Signed-off-by: chao an <anchao.archer@bytedance.com>
Test:
1. Use mps3-an547 to build helloxx as a module and run it.
2. Use qemu-armv7a:knsh to test the kernel build of helloxx and run it.
Signed-off-by: anjiahao <anjiahao@xiaomi.com>
There is no need for a gettid() syscall, as the thread ID is stable through
the life of the process. It is safe to put a copy of the TID in TLS. This way
a user process can access the TID quickly via its own stack, instead of
having to use an expensive syscall.
Signed-off-by: Ville Juven <ville.juven@unikie.com>
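A hedged sketch of the idea; tls_get_info() exists in NuttX, but the tl_tid field and this accessor are hypothetical stand-ins for whatever the commit actually adds:

#include <nuttx/tls.h>

static inline pid_t my_gettid(void)
{
  /* The TID copy cached in the thread's TLS block lives on the
   * thread's own stack, so no syscall is needed to read it.
   */

  return tls_get_info()->tl_tid;  /* tl_tid: hypothetical field name */
}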
This avoids unnecessary syscalls in memory-protected builds, where mutex
lock/unlock can be done with only atomic counter access.
Signed-off-by: Jukka Laitinen <jukka.laitinen@tii.ae>
When the semaphore priority flag is set to NONE and the semaphore
is a mutex, the fast locking path can be used, even when
priority inheritance or priority protection is enabled globally.
Signed-off-by: Jukka Laitinen <jukka.laitinen@tii.ae>
After pthread mutexes changed to nxmutex, priority inheritance
was enabled by default, even if one tried to control it with
CONFIG_PTHREAD_MUTEX_DEFAULT_PRIO_INHERIT.
Also, CONFIG_PTHREAD_MUTEX_DEFAULT_PRIO_PROTECT was not effective.
Fix this by setting the default mutex priority adjustment flags according
to CONFIG_PTHREAD_MUTEX_DEFAULT_PRIO_INHERIT and CONFIG_PTHREAD_MUTEX_DEFAULT_PRIO_PROTECT.
Signed-off-by: Jukka Laitinen <jukka.laitinen@tii.ae>
Replace the way the current TCB is obtained, from nxsched_get_tcb(nxsched_gettid()) to this_task(),
turning two function calls into one inline function to improve performance:
FAR struct tcb_s *tcb = nxsched_get_tcb(nxsched_gettid());
FAR struct tcb_s *tcb = this_task();
Signed-off-by: chao an <anchao.archer@bytedance.com>
The kernel mapping should be performed in sem_wait (thread level), as
virtual memory mappings cannot be added from interrupt context, at least for now.
The reason: kmm_map() depends on mm_map_add(), which in turn uses a mutex for mutual
exclusion, and using mutexes from interrupt level is not permitted.
Mapping tcb->waitobj into kernel virtual memory directly in sem_wait()
makes sense, since accessing tcb->waitobj via a user virtual address can
lead to unexpected results (the wrong mappings can be in place).
This commit addresses an issue where calling `wd_cancel_period` within the periodic watchdog callback would fail to cancel the watchdog timer.
Signed-off-by: ouyangxiangzhen <ouyangxiangzhen@xiaomi.com>
Fix a build error with the following configuration:
- CONFIG_BOARD_CRASHDUMP_CUSTOM=y
- CONFIG_COREDUMP=n
The condition for calling `coredump_initialize()` is incorrect.
CONFIG_BOARD_CRASHDUMP_CUSTOM is not dependent on CONFIG_COREDUMP.
Signed-off-by: SPRESENSE <41312067+SPRESENSE@users.noreply.github.com>
The missing <nuttx/spinlock.h> header caused the following compile errors:
CC:  clock/clock_adjtime.c
clock/clock_adjtime.c: In function 'adjtime_wdog_callback':
clock/clock_adjtime.c:67:11: error: implicit declaration of function 'spin_lock_irqsave' [-Wimplicit-function-declaration]
67 | flags = spin_lock_irqsave(&g_adjtime_lock);
| ^~~~~~~~~~~~~~~~~
clock/clock_adjtime.c:78:3: error: implicit declaration of function 'spin_unlock_irqrestore' [-Wimplicit-function-declaration]
78 | spin_unlock_irqrestore(&g_adjtime_lock, flags);
Signed-off-by: Michal Lenc <michallenc@seznam.cz>
Coredump doesn't need CONFIG_ELF to be enabled, but it needs elf.h to include the correct elf32.h or elf64.h.
Select LIBC_ARCH_ELF in COREDUMP to allow LIBC_ARCH_ELF_64BIT to be
defined correctly.
Signed-off-by: Neo Xu <neo.xu1990@gmail.com>
First, arm_coredump_add_region is called in up_initialize to add the NVIC region; then
coredump_set_memory_region is called to add the board memory region, at which point an overlap occurs.
Signed-off-by: wanggang26 <wanggang26@xiaomi.com>
Add a custom note with NuttX information, including a magic number and the coredump
file data size, so that when the system reboots we can check whether the block device
contains a valid coredump file and save it to the file system.
Signed-off-by: xuxingliang <xuxingliang@xiaomi.com>
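A hypothetical layout sketch only; the real note added by the commit may use different names and fields:

#include <stdint.h>

#define NUTTX_COREDUMP_MAGIC 0x434f5245  /* Illustrative magic value ("CORE") */

struct coredump_info_s
{
  uint32_t magic;     /* Identifies a valid NuttX coredump on the block device */
  uint32_t datasize;  /* Size of the coredump file data in bytes */
};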