nxsig_dispatch should just deliver the signal either to a
thread by pid (tid) or to the process (group) by pid.
Simplify the code so that the intent is more obvious.
Signed-off-by: Jukka Laitinen <jukka.laitinen@tii.ae>
The signal dispatch is called from interrupt handlers as well, so
this_task() is wrong. The thread to which the signal is supposed to
be delivered is known (stcb); use that.
Signed-off-by: Jukka Laitinen <jukka.laitinen@tii.ae>
In a flat build, a separate init thread should not be mandatory;
users can create a user task in the board early-initialization or late-initialization hook.
For example, if sigaction was set after creating a signalfd,
sa_sigaction was restored but sa_user was not,
causing signalfd_action() to get wild private data.
Signed-off-by: wangjianyu3 <wangjianyu3@xiaomi.com>
This commit adds a simple implementation of guardsize for pthreads.
At the moment, this option simply increases the size of the allocated pthread stack.
By default, the pthread guard size is set to 0.
Signed-off-by: p-szafonimateusz <p-szafonimateusz@xiaomi.com>
When the task has TCB_FLAG_CPU_LOCKED it is locked to the CPU regardless
of the affinity. There is no need to switch the affinity back and forth.
Signed-off-by: Jukka Laitinen <jukka.laitinen@tii.ae>
Since g_assignedtasks only holds the running task for each CPU, it can
be just a vector. Idle tasks are already preserved in the statically
allocated "g_idletcb" structures and can be used from there.
Signed-off-by: Jukka Laitinen <jukka.laitinen@tii.ae>
This fixes several places where the scheduler erroneously checks whether
scheduling is locked on the current cpu/task, when it should check whether
it is locked on the target cpu/task.
The original code sporadically caused a task to be added to the pending list
and never taken out, leading to a system halt.
For SMP, there is no need for the pending list. Each CPU has its own
running list (the assigned-tasks list), and pending tasks can just be kept
in the unassigned (readytorun) list.
In addition, SMP scheduling is changed so that every CPU picks up tasks
from the ready-to-run list itself; no CPU tries to dictate which task
another CPU should run.
This also allows using up_send_smp_sched to asynchronously
- re-prioritize a running task
- trigger a round-robin scheduling switch
In other words, no separate SMP call mechanism is needed for those, and the code can be simplified.
Signed-off-by: Jukka Laitinen <jukka.laitinen@tii.ae>
Change every occurrence of up_switch_context to use this_task() as the first parameter.
"nxsched_add_readytorun" returns "true" if a context switch is required. It
typically can only switch the assigned/running task to the one passed in as
a parameter. But this is not strictly guaranteed in SMP: if other CPUs tweak
affinities or priorities, the running task after the call may have changed
to some other task from the readytorun list (and rightly so, if a
higher-priority task is available, or if the affinity of the added task
prevents it from being scheduled in and the previous head of the readytorun
list should keep running).
this_task() is always the correct task to switch to, since it always points
to the TCB that was just switched in by nxsched_add_readytorun.
This is also a precursor to re-writing the SMP queue logic to remove pending lists for SMP.
Signed-off-by: Jukka Laitinen <jukka.laitinen@tii.ae>
When exiting schedlock, the task should first take the critical section and
only after that decrease the lockcount to 0. Otherwise an interrupt might
cause a re-schedule before the task enters the critical section, which makes
the following code meaningless.
Signed-off-by: Jukka Laitinen <jukka.laitinen@tii.ae>
The setlogmask call used in coredump_dump_syslog specifies a raw log level
instead of a bitmask, which causes wrong evaluations later on when that value
is checked against a mask. Therefore the LOG_UPTO macro is added for conversion.
Signed-off-by: Niccolò Maggioni <nicco.maggioni+nuttx@gmail.com>
After this commit:
commit e7fa4cae6cbf567266985c8072db1f51ad480943
Author: Yanfeng Liu <yfliu2008@qq.com>
Date: Fri May 17 06:11:52 2024 +0800
sched/tcb: use shared group for kthreads
all kernel threads share the idle group
and should not dup the filelist to this group.
Signed-off-by: guohao15 <guohao15@xiaomi.com>
Signed-off-by: dongjiuzhu1 <dongjiuzhu1@xiaomi.com>
When the stack pointer value is not within the stack, the default method
ignores it, discarding information that is useful for debugging.
Signed-off-by: Lingao Meng <menglingao@xiaomi.com>
This commit fixed Windows compilation errors: `struct
lp_wqueue_s/hp_wqueue_s has an illegal zero-sized array`.
Signed-off-by: ouyangxiangzhen <ouyangxiangzhen@xiaomi.com>
This patch is a rework of the NuttX file descriptor implementation. The
goal is two-fold:
1. Improve POSIX compliance. The old implementation tied file description
to inode only, not the file struct. POSIX however dictates otherwise.
2. Fix a bug with descriptor duplication (dup2() and dup3()). There is
an existing race condition with this POSIX API that currently results
in a kernel side crash.
The crash occurs when a partially open / closed file descriptor is
duplicated. The reason for the crash is that even if the descriptor is
closed, the file might still be in use by the kernel (due to e.g. ongoing
write to file). The open file data is changed by file_dup3() and this
causes a crash in the device / drivers themselves as they lose access to
the inode and private data.
The fix is done by separating struct file into file and file descriptor
structs. The file struct can live on even if the descriptor is closed,
fixing the crash. This also fixes the POSIX issue, as two descriptors
can now point to the same file.
Signed-off-by: Ville Juven <ville.juven@unikie.com>
Signed-off-by: dongjiuzhu1 <dongjiuzhu1@xiaomi.com>
For simplicity, better performance and lower memory overhead, this commit replaced the periodical workqueue APIs with the more expressive work_queue_next, which restarts the work based on the last expiration time.
Signed-off-by: ouyangxiangzhen <ouyangxiangzhen@xiaomi.com>
This commit replaced the periodical timer with wd_start_next to improve timing accuracy.
Signed-off-by: ouyangxiangzhen <ouyangxiangzhen@xiaomi.com>
For simplicity, this commit replaced the periodical wdog APIs with the more expressive wd_start_next, which restarts the watchdog timer based on the last expiration time.
Signed-off-by: ouyangxiangzhen <ouyangxiangzhen@xiaomi.com>
This commit changed the type of the delay ticks to unsigned, which removes useless branch conditions.
Besides, this commit added a max delay tick limitation to fix incorrect timing behavior when delaying by SCLOCK_MAX in the SMP build.
Signed-off-by: ouyangxiangzhen <ouyangxiangzhen@xiaomi.com>
There is no need to check the holder structure "counts". There are cases
where the counts may be greater than 1 when several tasks block
on the mutex, but there is always just one holder, which must be freed.
Signed-off-by: Jukka Laitinen <jukka.laitinen@tii.ae>
This is not a bug, but unnecessary code. If the mutex is no longer blocking,
the released thread will set the holder and clear the blocking bit at the end
of nxsem_wait.
Signed-off-by: Jukka Laitinen <jukka.laitinen@tii.ae>
Checking only adds a race condition. Whether the wdog is active or not
must be checked inside wd_cancel, where the proper spinlock is held.
Signed-off-by: Jukka Laitinen <jukka.laitinen@tii.ae>
nxsig_timeout calls nxsched_add_readytorun and up_switch_context, so it
must run in a critical section.
nxsig_timeout is used as the wdentry in nxsig_clockwait; see wd_expiration,
where CALL_FUNC is not called in a critical section.
Signed-off-by: Serg Podtynnyi <serg@podtynnyi.com>
In case the wdog has already completed when calling cancel, the cancel is
supposed to just return OK.
Signed-off-by: Jukka Laitinen <jukka.laitinen@tii.ae>
This commit added a macro function, clock_delay2abstick, to calculate the
absolute tick after the delay.
Signed-off-by: ouyangxiangzhen <ouyangxiangzhen@xiaomi.com>
This commit resolves a timing error caused by the round-up behavior in clock_time2ticks. In rare cases, this could lead to a two-tick increment within a single tick interval. To fix this, we introduced clock_time2ticks_floor, which guarantees the correct semantics for obtaining current system ticks.
Signed-off-by: ouyangxiangzhen <ouyangxiangzhen@xiaomi.com>
This commit added CONFIG_TIMER_ADJUST_USEC to support time compensation for the wdog timer. Normally, a timer event cannot be triggered at the exact time due to interrupt latency. Assuming that the interrupt latency is distributed within [Best-Case Execution Time, Worst-Case Execution Time], we can set the timer adjustment value to the BCET to reduce the latency.
Signed-off-by: ouyangxiangzhen <ouyangxiangzhen@xiaomi.com>
To improve absolute timer accuracy, this commit moved the tick++ to the relative wdog timer.
Signed-off-by: ouyangxiangzhen <ouyangxiangzhen@xiaomi.com>
Otherwise these will be caught by codespell when running checkpatch.sh.
Also fix the related comments in the changed files.
include/nuttx/scsi.h and drivers/syslog/ramlog.c are
excluded, as fixing them would require modifying a struct field name.
Signed-off-by: buxiasen <buxiasen@xiaomi.com>
This commit improved the performance of the work_queue by reducing
unnecessary wdog timer setting.
Signed-off-by: ouyangxiangzhen <ouyangxiangzhen@xiaomi.com>
This commit fixed work_cancel_sync in a very rare boundary case. When a worker thread re-enqueues the work data structure during execution of the work, the user thread cannot directly dequeue the work in work_cancel_sync. Instead, it should wait until all workers' references to the work data structure have been eliminated after dequeuing.
Signed-off-by: ouyangxiangzhen <ouyangxiangzhen@xiaomi.com>
When using pthread_kill, the signal should be delivered to the
specified thread. The current implementation, however, may add the
signal to the group's pending list if the signal is masked at the
time of dispatch. From the group's pending list it can be delivered
to any thread of the group, which is wrong.
Fix this by adding a new field "FAR struct tcb_s *tcb" to
"struct sigpendq", marking if the signal needs to be delivered
to a specific thread. Use NULL for the value if delivery to any
thread in the group is ok.
Signed-off-by: Jukka Laitinen <jukka.laitinen@tii.ae>
The kmm_alloc can break the critical section, if it sleeps on the
heap mutex. If we run out of pending signal structures, allocate more
right after entering the critical section but before checking if the signal
needs to be added to the pending queue.
Signed-off-by: Jukka Laitinen <jukka.laitinen@tii.ae>
The TCB used in find_action is locked by a spinlock, so the lookup doesn't
belong inside the critical section. Just find the possible action at the
beginning of nxsig_tcbdispatch.
Signed-off-by: Jukka Laitinen <jukka.laitinen@tii.ae>
- Remove the redundant holder, as nxsem now manages the holder TID
- Remove DEBUGASSERTIONS which are managed in nxsem
- Remove the "reset" handling logic, as it is now managed in nxsem
- Inline the simplest functions
Signed-off-by: Jukka Laitinen <jukka.laitinen@tii.ae>
This puts the mutex support fully inside nxsem, allowing
locking the mutex and setting the holder with a single atomic
operation.
This enables fast mutex locking from userspace, avoiding critical
sections (which may be heavy in SMP), and enables cleanup of the
nxmutex library in the future.
Signed-off-by: Jukka Laitinen <jukka.laitinen@tii.ae>