| Commit message | Author | Age | Files | Lines |
Update #2674.
Close #2810.
Use _Thread_Do_dispatch() instead of _Thread_Dispatch(). Restore the
PSR[EF] state of the interrupted context via the new system call
syscall_irqdis_fp in case floating-point support is enabled.
Initialize the thread queue context with invalid data in debug
configurations to catch missing set up steps.
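The idea can be sketched as follows; the structure layout, the poison byte, and the helper name are illustrative stand-ins, not the actual RTEMS definitions:

```c
#include <stdint.h>
#include <string.h>

/* Assume a debug configuration for this sketch. */
#define RTEMS_DEBUG

/* Hypothetical stand-in for the real thread queue context; the field
 * names are illustrative only. */
typedef struct {
  void     *enqueue_callout;
  uint32_t  timeout;
} Thread_queue_Context;

/* Poison the context in debug configurations so that a forgotten
 * set-up step trips over obviously invalid data instead of silently
 * using stale values. */
static void thread_queue_context_initialize( Thread_queue_Context *ctx )
{
#if defined(RTEMS_DEBUG)
  memset( ctx, 0x7f, sizeof( *ctx ) );
#else
  (void) ctx; /* no initialization cost in non-debug configurations */
#endif
}
```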
Previously, if the cache range operations were called with a range
larger than the cache size, they iterated over the cache multiple
times, which is unnecessary.
Limit the range so that, if it is larger than the cache size, the
operations iterate over the whole cache only once.
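The clamping can be sketched as below; the cache size constant and the function name are placeholders, not the actual RTEMS cache manager interface:

```c
#include <stddef.h>

/* Assumed cache size for illustration (32 KiB). */
#define CACHE_SIZE 0x8000u

/* Clamp the number of bytes to operate on: once one full cache sweep
 * has been done, every line has already been visited, so iterating
 * further is redundant. */
static size_t clamp_cache_op_size( size_t n_bytes )
{
  return n_bytes > CACHE_SIZE ? (size_t) CACHE_SIZE : n_bytes;
}
```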
Move the code of the _CPU_OR1K_Cache_{enable,disable}_* functions into the
equivalent exported _CPU_cache_{enable,disable}_* functions instead, and
then delete them, in order to reduce the code indirection and aid
readability.
This does not touch the currently unused prefetch, writeback, and lock
functions.
Previously _ISR_Local_{disable,enable}() was executed twice for each
cache line operation, and since operations over the entire cache were
implemented by calling the single-line operations in a loop, this made
those operations rather costly.
Fix the double toggle by calling _OR1K_mtspr() directly and removing
the now-unused corresponding _CPU_OR1K_Cache_* functions.
Fix the entire-cache operations by moving the ISR toggle outside of
the loop and by calling _OR1K_mtspr() directly instead of the
single-line operations.
Also implement the range functions, since otherwise the cache manager
falls back on looping over the single-line operations.
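The structural change can be sketched like this; _OR1K_mtspr() and the ISR primitives are replaced by counting stubs, and the sizes are assumptions, so only the single-toggle shape is the point:

```c
#include <stdint.h>

static unsigned isr_disables;   /* how often interrupts were disabled */
static unsigned lines_touched;  /* how many cache lines were written  */

/* Counting stubs standing in for _ISR_Local_disable/enable() and
 * _OR1K_mtspr(). */
static void isr_local_disable( void ) { ++isr_disables; }
static void isr_local_enable( void )  { }
static void mtspr_stub( uintptr_t line ) { (void) line; ++lines_touched; }

#define CACHE_SIZE      0x1000u /* assumed 4 KiB cache        */
#define CACHE_LINE_SIZE 32u     /* assumed 32-byte cache line */

/* Entire-cache operation with a single ISR toggle around the loop,
 * writing the invalidate register directly per line instead of calling
 * the single-line operation (which toggled interrupts itself). */
static void cache_invalidate_entire( void )
{
  uintptr_t line;

  isr_local_disable();
  for ( line = 0; line < CACHE_SIZE; line += CACHE_LINE_SIZE ) {
    mtspr_stub( line );
  }
  isr_local_enable();
}
```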
* Fix indentation of variable declarations.
* Change the commented-out asm to __asm__ so that it meets the C99
  standard if uncommented.
Add functions for flushing and invalidating the whole cache.
Since we don't have system calls that can operate on more than a
single cache line, these simply retrieve the cache size and iterate
over the full size, operating on each line.
The current implementation assumes that there is only one level of
cache.
These changes were contributed by Antmicro under contract by ÅAC
Microtec AB.
Close #2602
Update #2811.
Move thread state for _Thread_queue_Enqueue() to the thread queue
context. This reduces the parameter count of _Thread_queue_Enqueue()
from five to four (ARM for example has only four function parameter
registers). Since the thread state is used after several function
calls inside _Thread_queue_Enqueue(), this parameter was previously
saved on the stack.
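The shape of the change can be sketched as follows; the structure and the function signature are illustrative stand-ins for the real _Thread_queue_Enqueue() interface, not its actual definition:

```c
#include <stdint.h>

/* Illustrative stand-in: the thread state now travels inside the
 * context instead of being a fifth function parameter. */
typedef struct {
  uint32_t thread_state;
} Enqueue_Context_sketch;

/* Four parameters fit the four ARM argument registers (r0-r3); a
 * fifth parameter that is still needed after several nested calls
 * would have been spilled to the stack. */
static uint32_t thread_queue_enqueue_sketch(
  void                         *queue,
  const void                   *operations,
  void                         *the_thread,
  const Enqueue_Context_sketch *context
)
{
  (void) queue; (void) operations; (void) the_thread;
  return context->thread_state; /* the state travels with the context */
}
```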
Callers of _Thread_Do_dispatch() must have a valid
Per_CPU_Control::Stats::thread_dispatch_disabled_instant.
Call _Profiling_Outer_most_interrupt_entry_and_exit() with the interrupt
stack to not exceed Per_CPU_Control::Interrupt_frame.
Update #2751.
Update #2674.
Update #2825.
Avoid use of internal _Thread_Dispatch_disable() function.
Update #2825.
Update #2825.
Turn pthread_spinlock_t into a self-contained object. On uni-processor
configurations, interrupts are disabled in the lock/trylock operations
and the previous interrupt status is restored in the corresponding
unlock operations. On SMP configurations, a ticket lock is acquired
and released in addition.
The self-contained pthread_spinlock_t object is defined by Newlib in
<sys/_pthreadtypes.h>.
typedef struct {
  struct _Ticket_lock_Control _lock;
  __uint32_t _interrupt_state;
} pthread_spinlock_t;
This implementation is simple and efficient. However, the following
test case of the Linux Test Project would fail due to the calls to
printf() and sleep() during spin lock ownership:
https://github.com/linux-test-project/ltp/blob/master/testcases/open_posix_testsuite/conformance/interfaces/pthread_spin_lock/1-2.c
There is only limited support for profiling on SMP configurations.
Delete CORE spinlock implementation.
Update #2674.
Delete unused _Thread_queue_Enqueue() and rename
_Thread_queue_Enqueue_critical() to _Thread_queue_Enqueue().
Replace the expected thread dispatch disable level with a thread queue
enqueue callout. This enables the use of _Thread_Dispatch_direct() in
the thread queue enqueue procedure. This avoids impossible execution
paths, e.g. Per_CPU_Control::dispatch_necessary is always true.
On SMP configurations, it is a fatal error to call a blocking
operating system service with interrupts disabled, since this prevents
the delivery of inter-processor interrupts. This could lead to
executing threads which are not allowed to execute, resulting in
undefined behaviour.
The ARM Cortex-M port has a similar problem, since the interrupt state
is not a part of the thread context.
Update #2811.
Cache align locks in the context.
We may own the allocator mutex during context switches.
Set scheduler before the task start.
Use the right register to determine if a thread dispatch is allowed and
necessary.
Update #2751.
Avoid rtems_semaphore_flush() to reduce the maximum thread dispatch
disabled time of this test. Remove superfluous yield and malloc().
Ensure that no resource leak occurs.
This fixes the CPU ports with relaxed alignment restrictions, e.g. type
alignment is less than the type size.
Close #2822.
Close #2823.
Close #2824.
We cannot use the MRS or MSR instructions in Thumb-1 mode. Stay in ARM
mode for the Thumb-1 targets during interrupt low-level processing.
Update #2751.
The MPC5XX support uses a legacy interrupt/exception infrastructure.
Close #2819.
Close #2820.
Update #2820.
Close #2818.
Close #2816.
Close #2817.
Avoid use of the stack for the hot paths.
In contrast to _ISR_Get_level(), the _ISR_Is_enabled() function
evaluates a level parameter and returns a boolean value.
Update #2811.
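The distinction can be sketched with a hypothetical port-style definition; the mask value is an assumption, since the real test is CPU-port specific:

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical interrupt-disable bit in a saved level value. */
#define ISR_DISABLE_MASK 0x80u

/* Unlike a "get current level" query, this evaluates a previously
 * saved level parameter and yields a boolean. */
static bool isr_is_enabled( uint32_t level )
{
  return ( level & ISR_DISABLE_MASK ) == 0;
}
```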
Update #2811.
Update #2751.
Update #2751.
Move profiling code closer to bsp_interrupt_disable() to allow re-use of
r9 later.