Commit message | Author | Age | Files | Lines
- This has been tested on SPARC, i386, PowerPC and ARM. Closes #2767.
- Update #2825.
- Update #2825.
- Update #2825.
- Update #2825.
- Update #2825.
- Update #2825.
- Update #2825.
- The fatal 'is internal' indicator is redundant, since the fatal source and error code uniquely identify a fatal error. Keep the 'is internal' parameter of the fatal user extension for backward compatibility and always set it to false. Update #2825.
- Update #2830.
- Data obtained on a QorIQ T4240 running at 1500 MHz.
- Change the testsuite configuration files to hold state information about a test. The states are:
    exclude - Do not build the test
    expected-fail - The test is expected to fail
    indeterminate - The test may pass or may fail
  A message is printed just after the test's BEGIN message to indicate there is a special state for the test. No state message means the test is expected to pass. This support requires that tests are correctly written to use the standard support to begin and end a test.
- Resurrect RTEMS_LINKER_SET_BEGIN() and RTEMS_LINKER_SET_END(). Add the new macros RTEMS_LINKER_SET_ITEM_COUNT(), RTEMS_LINKER_SET_IS_EMPTY(), and RTEMS_LINKER_SET_FOREACH(). Remove the confusing RTEMS_LINKER_SET_ASSIGN_BEGIN() and RTEMS_LINKER_SET_ASSIGN_END(). Fix RTEMS_LINKER_SET_SIZE() to return the size in characters, as specified by the documentation. Update #2408. Update #2790.
- Update #2811.
- Update #2751.
- Update #2797.
- Fix thread dispatch profiling of rtems_scheduler_add_processor(). Update #2797.
- Initialize the thread queue context early, preferably outside the critical section. Remove the implicit _Thread_queue_Context_initialize() from _Thread_Wait_acquire().
- On ARM Thumb we may have function addresses ending with 0x7f, if we are lucky.
- Update #2674.
- Closes #2810.
- Use _Thread_Do_dispatch() instead of _Thread_Dispatch(). Restore the PSR[EF] state of the interrupted context via the new system call syscall_irqdis_fp in case floating-point support is enabled.
- Initialize the thread queue context with invalid data in debug configurations to catch missing set-up steps.
- Previously, if the cache range operations were called with a range larger than the cache size, this led to multiple iterations over the entire cache, which is unnecessary. Limit this so that if the range is larger than the cache size, the operations iterate over the whole cache only once.
- Move the code of the _CPU_OR1K_Cache_{enable,disable}_* functions into the equivalent exported _CPU_cache_{enable,disable}_* functions and delete them, in order to reduce code indirection and aid readability. This does not touch the currently unused prefetch, writeback, and lock functions.
- Previously _ISR_Local_{disable,enable}() was executed twice for each cache line operation, and since operations over the entire cache were implemented by calling the single-line operations in a loop, those operations were rather costly. Fix the double toggle by calling _OR1K_mtspr() directly and removing the now-unused _CPU_OR1K_Cache_* functions. Fix the entire-cache operations by moving the ISR toggle outside the loop and by calling _OR1K_mtspr() directly instead of the single-line operations. Also implement range functions, since otherwise the cache manager falls back on looping over the single-line operations.
- Fix indentation of variable declarations. Change commented-out asm to __asm__ so that the code meets the C99 standard if uncommented.
- Add functions for flushing and invalidating the whole cache. Since we don't have system calls that can operate on anything more than a single cache line, these simply retrieve the cache size and iterate over the full size, operating on each line. The current implementation assumes that there is only one level of cache. These changes were contributed by Antmicro under contract by ÅAC Microtec AB. Close #2602.
- Update #2811.
- Move the thread state for _Thread_queue_Enqueue() to the thread queue context. This reduces the parameter count of _Thread_queue_Enqueue() from five to four (ARM, for example, has only four function parameter registers). Since the thread state is used after several function calls inside _Thread_queue_Enqueue(), this parameter was previously saved on the stack.
- Callers of _Thread_Do_dispatch() must have a valid Per_CPU_Control::Stats::thread_dispatch_disabled_instant. Call _Profiling_Outer_most_interrupt_entry_and_exit() with the interrupt stack to not exceed Per_CPU_Control::Interrupt_frame. Update #2751.
- Update #2674.
- Update #2825.