The virtqueue should be protected by a spinlock, because the virtqueues
are used by both the worker thread and the interrupt handler.
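A minimal sketch of what "protected by the spinlock" means in practice;
the structure and function names (my_virtio_dev_s, my_virtio_send) are
illustrative, not the actual driver code:

struct my_virtio_dev_s
{
  spinlock_t lock;                 /* Protects the virtqueue below */
  FAR struct virtqueue *vq;
};

static void my_virtio_send(FAR struct my_virtio_dev_s *dev,
                           FAR struct virtqueue_buf *vb)
{
  irqstate_t flags;

  /* Both the worker thread and the interrupt handler touch dev->vq, so
   * every virtqueue operation must run with the lock held.
   */

  flags = spin_lock_irqsave(&dev->lock);
  virtqueue_add_buffer(dev->vq, vb, 1, 0, dev);  /* Queue one buffer */
  virtqueue_kick(dev->vq);                       /* Notify the device */
  spin_unlock_irqrestore(&dev->lock, flags);
}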
Signed-off-by: wangyongrong <wangyongrong@xiaomi.com>
For the remoteproc transport layer, the virtio drivers cannot use memory
malloced from the default system heap to communicate with the virtio
devices; the buffers placed in the virtqueue must come from the shared
memory region (allocated with virtio_alloc_buf()).
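A minimal usage sketch, assuming virtio_alloc_buf() takes the device
pointer, size and alignment; the helper name, size and alignment values
below are only illustrative:

static int my_prepare_buffer(FAR struct virtio_device *vdev,
                             FAR void **out)
{
  FAR void *buf;

  /* Wrong for the remoteproc transport: heap memory from kmm_malloc()
   * is not visible to the remote side.
   *
   * Correct: allocate the buffer from the shared memory region.
   */

  buf = virtio_alloc_buf(vdev, 512, 16);
  if (buf == NULL)
    {
      return -ENOMEM;
    }

  *out = buf;
  return OK;
}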
Signed-off-by: Bowen Wang <wangbowen6@xiaomi.com>
Set the config CONFIG_DRIVERS_VIRTIO_SERIAL_NAME to "ttyXX0;ttyXX1;..."
to customize the virtio serial device names.
For example:
If CONFIG_DRIVERS_VIRTIO_SERIAL_NAME="ttyBT;ttyTest0;ttyTest1",
virtio-serial will register three uart devices named
"/dev/ttyBT", "/dev/ttyTest0" and "/dev/ttyTest1" in the VFS:
nsh> ls dev
/dev:
console
null
telnet
ttyBT
ttyS0
ttyTest0
ttyTest1
zero
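For reference, a minimal sketch (hypothetical helper, not the actual
driver code) of how the semicolon-separated string maps to the
registered names:

#include <stdio.h>
#include <string.h>

static void show_serial_names(FAR const char *names)
{
  char buf[64];
  FAR char *saveptr;
  FAR char *name;

  strlcpy(buf, names, sizeof(buf));
  for (name = strtok_r(buf, ";", &saveptr);
       name != NULL;
       name = strtok_r(NULL, ";", &saveptr))
    {
      printf("/dev/%s\n", name);   /* e.g. /dev/ttyBT */
    }
}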
Signed-off-by: Bowen Wang <wangbowen6@xiaomi.com>
In the secure state, call virtio_register_mmio_device_secure() to
register the secure virtio mmio device;
in the non-secure state, call virtio_register_mmio_device() to
register the non-secure virtio mmio device.
Boards must ensure that the secure and non-secure mmio devices are not
mixed.
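A board bring-up sketch, assuming both functions take the mmio register
base and the IRQ number (the base address and IRQ below are
placeholders, not real board values):

  /* Non-secure state */

  ret = virtio_register_mmio_device((FAR void *)0x0a000000, 48);

  /* Secure state; never mix the two variants for the same device */

  ret = virtio_register_mmio_device_secure((FAR void *)0x0a000000, 48);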
Signed-off-by: Bowen Wang <wangbowen6@xiaomi.com>
Summary:
Fixed the problem of releasing the bucket prematurely in multi-threaded flock scenarios:
Thread A calls setlk.
Thread B calls setlk_wait.
Thread A releases the lock but fails to check nwaiter, so the bucket is released prematurely.
The subsequent post to thread B then crashes with a heap use-after-free.
https://github.com/apache/nuttx/issues/13821
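A minimal sketch of the intended release path; the structure and field
names mirror the description above but are simplified:

static void release_bucket(FAR struct file_lock_bucket_s *bucket)
{
  /* The bucket may only be freed once no thread is still blocked in
   * setlk_wait on it; otherwise the later post touches freed memory.
   */

  if (bucket->nwaiter > 0)
    {
      nxsem_post(&bucket->wait);   /* Wake thread B, keep the bucket */
      return;
    }

  kmm_free(bucket);
}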
Signed-off-by: chenrun1 <chenrun1@xiaomi.com>
The upper layer expects -ENOTTY to be returned if the ioctl command is not
handled by the file-system-specific implementation.
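A sketch of the expected pattern in a file-system ioctl handler
(myfs_ioctl is a hypothetical name):

static int myfs_ioctl(FAR struct file *filep, int cmd, unsigned long arg)
{
  switch (cmd)
    {
      /* ... commands handled by this file system ... */

      default:

        /* Unknown command: report -ENOTTY (not -EINVAL) so the upper
         * layer can fall back to its generic handling.
         */

        return -ENOTTY;
    }
}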
Signed-off-by: Michal Lenc <michallenc@seznam.cz>
1. Use '__attribute__((constructor))' to mark the initialize function.
2. Use '__attribute__((destructor))' to mark the uninitialize function.
3. Compile the module with -fvisibility=hidden and mark the symbols that
   need to be exported with `__attribute__((visibility("default")))`, so
   module_initialize is no longer needed to register the exported symbols.
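A minimal sketch of the attribute usage described above (the function
names are illustrative):

__attribute__((visibility("default")))
int module_hello(void)              /* Exported despite -fvisibility=hidden */
{
  return 0;
}

__attribute__((constructor))
static void module_ctor(void)       /* Runs automatically when loaded */
{
}

__attribute__((destructor))
static void module_dtor(void)       /* Runs automatically when unloaded */
{
}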
Signed-off-by: anjiahao <anjiahao@xiaomi.com>
reason:
In the kernel, we are planning to remove all occurrences of up_cpu_pause as one of the steps to
simplify the implementation of critical sections. The goal is to enable spin_lock_irqsave to encapsulate
critical sections, thereby facilitating the replacement of critical sections (big lock) with smaller
spin_lock_irqsave (small lock).
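An illustrative sketch of the intended big-lock to small-lock
replacement; g_dev_lock and dev_update are hypothetical names:

static spinlock_t g_dev_lock = SP_UNLOCKED;

void dev_update(void)
{
  irqstate_t flags;

  /* Before: flags = enter_critical_section();   (big lock, all CPUs) */

  flags = spin_lock_irqsave(&g_dev_lock);        /* Small, per-object lock */

  /* ... update the state protected by g_dev_lock ... */

  spin_unlock_irqrestore(&g_dev_lock, flags);

  /* Before: leave_critical_section(flags); */
}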
Configuring and compiling NuttX:
$ ./tools/configure.sh -l qemu-armv8a:nsh_smp
$ make
Running with qemu:
$ qemu-system-aarch64 -cpu cortex-a53 -smp 4 -nographic \
-machine virt,virtualization=on,gic-version=3 \
-net none -chardev stdio,id=con,mux=on -serial chardev:con \
-mon chardev=con,mode=readline -kernel ./nuttx
Signed-off-by: hujun5 <hujun5@xiaomi.com>
reason:
In the kernel, we are planning to remove all occurrences of up_cpu_pause as one of the steps to
simplify the implementation of critical sections. The goal is to enable spin_lock_irqsave to encapsulate
critical sections, thereby facilitating the replacement of critical sections (big lock) with smaller
spin_lock_irqsave (small lock).
Configuring and compiling NuttX:
$ ./tools/configure.sh -l qemu-armv8a:nsh_smp
$ make
Running with qemu:
$ qemu-system-aarch64 -cpu cortex-a53 -smp 4 -nographic \
-machine virt,virtualization=on,gic-version=3 \
-net none -chardev stdio,id=con,mux=on -serial chardev:con \
-mon chardev=con,mode=readline -kernel ./nuttx
Signed-off-by: hujun5 <hujun5@xiaomi.com>
reason:
In the kernel, we are planning to remove all occurrences of up_cpu_pause as one of the steps to
simplify the implementation of critical sections. The goal is to enable spin_lock_irqsave to encapsulate
critical sections, thereby facilitating the replacement of critical sections (big lock) with smaller
spin_lock_irqsave (small lock).
Configuring and compiling NuttX:
$ ./tools/configure.sh -l qemu-armv8a:nsh_smp
$ make
Running with qemu:
$ qemu-system-aarch64 -cpu cortex-a53 -smp 4 -nographic \
-machine virt,virtualization=on,gic-version=3 \
-net none -chardev stdio,id=con,mux=on -serial chardev:con \
-mon chardev=con,mode=readline -kernel ./nuttx
Signed-off-by: hujun5 <hujun5@xiaomi.com>
reason:
1 To improve efficiency, we mimic Linux's behavior where disabling preemption only applies to the current CPU and does not affect other CPUs (a simplified sketch follows this list).
2 In the future, we will implement "spinlock+sched_lock" and use it extensively. Under such circumstances, if preemption is still globally disabled, it will seriously impact the scheduling efficiency.
3 We have removed g_cpu_lockset and used irqcount in order to eliminate the dependency of sched_lock on critical sections in the future, simplify the logic, and further enhance the performance of sched_lock.
4 We set lockcount to 1 in order to lock scheduling on all CPUs during startup, without the need to provide additional functions to disable scheduling on other CPUs.
5 CPUs 1~n must wait for CPU 0 to enter the idle state before enabling scheduling; this prevents CPUs 1~n from competing with CPU 0 for the memory manager mutex, which could cause the CPU 0 idle task to enter a wait state and trigger an assertion.
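A simplified sketch of the per-CPU behavior described in points 1~3; the
body is reduced for illustration, the real implementation tracks more
state:

void sched_lock(void)
{
  FAR struct tcb_s *tcb = this_task();

  /* Only the calling CPU's preemption is affected: nested calls just
   * bump the current task's lockcount, no global lock is taken.
   */

  if (tcb != NULL)
    {
      tcb->lockcount++;
    }
}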
size nuttx
before:
text data bss dec hex filename
265396 51057 63646 380099 5ccc3 nuttx
after:
text data bss dec hex filename
265184 51057 63642 379883 5cbeb nuttx
size -216
Configuring and compiling NuttX:
$ ./tools/configure.sh -l qemu-armv8a:nsh_smp
$ make
Running with qemu:
$ qemu-system-aarch64 -cpu cortex-a53 -smp 4 -nographic \
-machine virt,virtualization=on,gic-version=3 \
-net none -chardev stdio,id=con,mux=on -serial chardev:con \
-mon chardev=con,mode=readline -kernel ./nuttx
Signed-off-by: hujun5 <hujun5@xiaomi.com>
Unwinding the kernel stack did not work previously due to the way the task
startup logic works via nxtask_start and the up_task_start() system call.
After modifying the logic behind those, the kernel stack is in fact fully
unwound when return_from_exception is executed, so the original
"riscv_current_ksp" hack is no longer needed.
This removes two reserved system calls and replaces them with an ASM snippet.
The result removes an unnecessary ecall from the process startup logic and
ensures the stacks are FULLY unwound when the user process starts.
The logic is ported from ARM64.
Port the simplification from ARM64: this removes the ugly inline assembly
trampoline "do_syscall" and replaces it with a simple table lookup and a
call via function pointer.
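A sketch of the table-lookup dispatch this describes; dispatch_syscall
here is illustrative, while the table name and reserved-call offset
follow the existing NuttX syscall tables:

typedef uintptr_t (*syscall_stub_t)(int nbr, uintptr_t parm1,
                                    uintptr_t parm2, uintptr_t parm3,
                                    uintptr_t parm4, uintptr_t parm5,
                                    uintptr_t parm6);

static uintptr_t dispatch_syscall(unsigned int nbr, FAR uintptr_t *args)
{
  syscall_stub_t stub;

  /* Look the handler up in the stub table and call it through a plain
   * function pointer, no assembly trampoline needed.
   */

  stub = (syscall_stub_t)g_stublookup[nbr - CONFIG_SYS_RESERVED];
  return stub(nbr, args[0], args[1], args[2], args[3], args[4], args[5]);
}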
nuttx\libs\libc\misc\lib_bitmap.c(1,1): warning C4819: The file contains a character that cannot
be represented in the current code page (936).
Signed-off-by: chenxiaoyi <chenxiaoyi@xiaomi.com>
Most tools used for compliance and SBOM generation use SPDX identifiers.
This change brings us a step closer to easy SBOM generation.
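For reference, the kind of header line an SPDX identifier adds
(Apache-2.0 is only an example; the tag must match each file's actual
license):

/* SPDX-License-Identifier: Apache-2.0 */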
Signed-off-by: Alin Jerpelea <alin.jerpelea@sony.com>
The signal handler stack must be properly aligned, otherwise vector
instructions do not work in the signal handler.
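An illustrative sketch of the alignment requirement; the 16-byte
boundary and the helper name are assumptions for this example:

#include <stddef.h>
#include <stdint.h>

#define SIG_STACK_ALIGN  16

static uintptr_t sig_align_sp(uintptr_t sp, size_t framesize)
{
  /* Reserve the signal frame, then round the stack pointer down to the
   * alignment the vector instructions require.
   */

  return (sp - framesize) & ~((uintptr_t)(SIG_STACK_ALIGN - 1));
}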
Signed-off-by: p-szafonimateusz <p-szafonimateusz@xiaomi.com>