Commit message log

Store the floating-point unit property in the thread control block
regardless of the CPU_HARDWARE_FP and CPU_SOFTWARE_FP settings. Make
sure the floating-point unit is only enabled for the corresponding
multilibs. This helps targets, such as SPARC, that have a volatile-only
floating-point context.
Define CONTEXT_FP_SIZE to zero in case both hardware and software
floating-point support are disabled. The problem is that empty
structures have different sizes in C and C++. In C++ they have a
non-zero size, leading to an overestimate of the workspace size.
With the introduction of fine-grained locking there is no longer a
one-to-one connection between the Giant lock nest level and the thread
dispatch disable level.
Close #2358.
Update #2270.
The FP context save/restore only makes sense for FP threads.
Update #2270.
Update #2270.
Write the pattern only once to the entry register window and the
floating-point registers.
Update #2270.
_TOD_Get_zero_based_uptime_as_timespec() already returns the right
value.
Replace _API_Mutex_Is_locked() with _API_Mutex_Is_owner().
Reserve a full minimum block to account for the heap protection enabled
via RTEMS_DEBUG.
_Configure_From_workspace() already ensures that zero-size allocations
contribute nothing to the workspace size estimate.
If zlib compression was used, librtemscpu.a depended on libz.a. To
avoid a GCC patch or complicated link flags, move the JFFS2 support
into a separate library so that a simple "-ljffs2 -lz" links the
executable.
Implement _CPU_Context_validate() and _CPU_Context_volatile_clobber().
Update #2270.
Replace timestamp implementation with FreeBSD bintime and timecounters.
New test sptests/sptimecounter02.
Update #2271.
New test sptests/timecounter01.
Update #2271.
Update #2271.
FreeBSD assumes that u_int is a 32-bit integer type. This is wrong for
some 16-bit targets supported by RTEMS.
Update #2271.
Update #2271.
Thread dispatching is disabled while interrupts are disabled. To get
an accurate thread dispatch disabled time, it is important to use the
interrupt-disabled instant when a transition from an interrupt-disabled
section to a thread dispatch disabled section happens.
Use _Thread_Timeout() instead. Use pseudo thread queue for nanosleep()
to deal with signals.
Close #2130.
This reduces the code size drastically.
This function is superfluous due to the introduction of fine-grained
locking.
Update #2273.
Update #2307.
Move the writes to Thread_Control::current_priority and
Thread_Control::real_priority into _Thread_Change_priority() under the
protection of the thread lock. Add a filter function to
_Thread_Change_priority() to enable specialized variants.
Avoid race conditions during a thread priority restore with the new
Thread_Control::priority_restore_hint, an important average-case
optimization used by priority inheritance mutexes.
Update #2273.
Update #2273.
Rename _CORE_semaphore_Seize_isr_disable() to _CORE_semaphore_Seize().
Update #2273.
Aggregate several critical sections into a larger one. Sending and
receiving messages is now protected by an ISR lock. Thread dispatching
is only disabled if a blocking operation is necessary. The message
copy procedure is done inside the critical section (interrupts
disabled), so this change may have a negative impact on the interrupt
latency if very large messages are transferred.
Update #2273.
Check the number of pending messages in _CORE_message_queue_Flush() to
avoid race conditions.
Use a parameter for _Thread_queue_Enqueue() instead to reduce memory
usage.
Use thread wait flags for synchronization. The enqueue operation is
now part of the initial critical section. This is the key change and
enables fine-grained locking on SMP for objects using a thread queue,
such as semaphores and message queues.
Update #2273.
Move thread queue discipline specific operations into
Thread_queue_Operations. Use a separate node in the thread control
block for the thread queue to make it independent of the scheduler data
structures.
Update #2273.
Replace the Thread_Priority_control with more general
Thread_queue_Operations which will be used for generic priority change,
timeout, signal and wait queue operations in the future.
Update #2273.
Move the complete thread queue enqueue procedure into
_Thread_queue_Enqueue_critical(). It is possible to use the thread
queue lock to protect state of the object embedding the thread queue.
This enables per object fine grained locking in the future.
Delete _Thread_queue_Enter_critical_section().
Update #2273.