path: root/cpukit
* score: Add Thread_Control::is_fp  (Sebastian Huber, 2015-06-09, 5 files, -12/+16)
    Store the floating-point unit property in the thread control block regardless of the CPU_HARDWARE_FP and CPU_SOFTWARE_FP settings. Make sure the floating-point unit is only enabled for the corresponding multilibs. This helps targets with a volatile-only floating-point context, such as SPARC.
* rtems: Change CONTEXT_FP_SIZE define  (Sebastian Huber, 2015-06-03, 1 file, -1/+5)
    Define CONTEXT_FP_SIZE to zero in case both hardware and software floating-point support are disabled. The problem is that empty structures have a different size in C and C++. In C++ they have a non-zero size, leading to an overestimate of the workspace size.
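    A minimal standalone illustration of the C/C++ size mismatch described above (not RTEMS code; struct Empty merely stands in for a floating-point context type that ends up empty when neither hardware nor software FP support is configured):

        #include <stdio.h>

        /* Stand-in for the illustration: an empty struct is a GNU C extension
         * with size 0, but in C++ the same definition must have a non-zero
         * size.  Sizing the workspace with sizeof() therefore overestimates
         * when the header is included from C++, which is why CONTEXT_FP_SIZE
         * is defined to 0 directly. */
        struct Empty { };

        int main( void )
        {
          printf( "sizeof(struct Empty) = %zu\n", sizeof( struct Empty ) );
          /* gcc (C):   0
           * g++ (C++): 1 (or more) */
          return 0;
        }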
* score: Remove assert  (Sebastian Huber, 2015-06-03, 1 file, -6/+0)
    With the introduction of fine-grained locking there is no longer a one-to-one connection between the Giant lock nest level and the thread dispatch disable level.
* posix: Fix _POSIX_Timer_Insert_helper() locking  (Sebastian Huber, 2015-06-03, 1 file, -5/+9)
    Close #2358.
* sparc: Disable FPU in interrupt context  (Alexander Krutwig, 2015-05-30, 2 files, -1/+32)
    Update #2270.
* sparc: Remove superfluous FP enable  (Sebastian Huber, 2015-05-30, 2 files, -22/+7)
    The FP context save/restore only makes sense in the context of FP threads. Update #2270.
* sparc: Avoid new window for FP save/restore  (Sebastian Huber, 2015-05-30, 1 file, -54/+48)
    Update #2270.
* sparc: Improve _CPU_Context_validate()  (Alexander Krutwig, 2015-05-29, 1 file, -46/+49)
    Write the pattern only once to the entry register window and the floating-point registers. Update #2270.
* posix: Fix clock_gettime()  (Sebastian Huber, 2015-05-29, 1 file, -2/+0)
    The _TOD_Get_zero_based_uptime_as_timespec() function already returns the right value.
* dosfs: Avoid buffer over-read  (Gedare Bloom, 2015-05-27, 1 file, -2/+2)
    Closes #2292.
* score: Replace _API_Mutex_Is_locked()  (Sebastian Huber, 2015-05-27, 3 files, -9/+12)
    Replace _API_Mutex_Is_locked() with _API_Mutex_Is_owner().
* sapi: Fix workspace size estimate  (Sebastian Huber, 2015-05-27, 1 file, -2/+1)
    Reserve a full minimum block to account for the heap protection enabled via RTEMS_DEBUG.
* sapi: Fix workspace size estimate  (Sebastian Huber, 2015-05-27, 1 file, -4/+15)
* sapi: Simplify confdefs.h  (Sebastian Huber, 2015-05-27, 1 file, -1/+0)
    The _Configure_From_workspace() macro already ensures that zero-size allocations contribute nothing to the workspace size estimate.
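    As an illustration of the property relied on here, a workspace-estimate helper that ignores zero-byte requests could look roughly like the following sketch; MY_CONFIGURE_FROM_WORKSPACE and MY_HEAP_OVERHEAD are made-up names, and the real macro in confdefs.h is more involved:

        #include <stdio.h>

        /* hypothetical per-allocation overhead added by the heap */
        #define MY_HEAP_OVERHEAD 16u

        /* A zero-byte request contributes nothing, so callers do not need a
         * separate "is this feature configured at all?" check around it. */
        #define MY_CONFIGURE_FROM_WORKSPACE( _size ) \
          ( ( _size ) != 0u ? ( _size ) + MY_HEAP_OVERHEAD : 0u )

        int main( void )
        {
          printf( "%u\n", MY_CONFIGURE_FROM_WORKSPACE( 0u ) );    /* 0   */
          printf( "%u\n", MY_CONFIGURE_FROM_WORKSPACE( 100u ) );  /* 116 */
          return 0;
        }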
* jffs2: Move into separate library  (Sebastian Huber, 2015-05-27, 3 files, -2/+18)
    If zlib compression was used, then librtemscpu.a depended on libz.a. To avoid a GCC patch or complicated link flags, move the JFFS2 support into a separate library so that a simple "-ljffs2 -lz" is enough to link the executable.
* sparc: Add static assertion  (Sebastian Huber, 2015-05-26, 1 file, -0/+5)
* sparc: Delete unused CONTEXT_CONTROL_SIZE  (Sebastian Huber, 2015-05-26, 2 files, -5/+0)
* sparc: Delete unused ISF_STACK_FRAME_OFFSET  (Sebastian Huber, 2015-05-26, 2 files, -3/+0)
* sparc: Add static offset assertions  (Sebastian Huber, 2015-05-26, 1 file, -0/+32)
* rtems/endian.h: Reduce header dependencies  (Sebastian Huber, 2015-05-22, 1 file, -13/+13)
* cpukit: Add Epiphany architecture port v4  (Hesham ALMatary, 2015-05-21, 16 files, -0/+2492)
* region*.c: Ensure return_status is set when RTEMS_MULTIPROCESSING is enabled  (Joel Sherrill, 2015-05-21, 8 files, -8/+0)
* kill_noposix.c: Remove obsolete __kill()  (Joel Sherrill, 2015-05-21, 1 file, -6/+0)
* sparc: Add support for sptests/spcontext01  (Alexander Krutwig, 2015-05-21, 4 files, -10/+528)
    Implement _CPU_Context_validate() and _CPU_Context_volatile_clobber(). Update #2270.
* timecounter: Use in RTEMS  (Alexander Krutwig, 2015-05-20, 45 files, -1266/+304)
    Replace the timestamp implementation with FreeBSD bintime and timecounters. New test sptests/sptimecounter02. Update #2271.
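    For readers unfamiliar with the FreeBSD representation: a bintime is a whole-seconds value plus a 64-bit binary fraction of a second, which keeps additions cheap and conversions simple. The sketch below mirrors that layout with hypothetical my_ names; it is not the imported code:

        #include <stdint.h>
        #include <stdio.h>
        #include <time.h>

        /* hypothetical stand-in modeled on the FreeBSD bintime layout */
        struct my_bintime {
          time_t   sec;    /* whole seconds */
          uint64_t frac;   /* fraction of a second in units of 1/2^64 s */
        };

        /* Convert to a timespec by scaling the top half of the fraction
         * to nanoseconds. */
        static void my_bintime2timespec( const struct my_bintime *bt,
                                         struct timespec *ts )
        {
          ts->tv_sec  = bt->sec;
          ts->tv_nsec = (long)
            ( ( (uint64_t) 1000000000 * (uint32_t) ( bt->frac >> 32 ) ) >> 32 );
        }

        int main( void )
        {
          struct my_bintime bt = { 1, UINT64_C( 0x8000000000000000 ) }; /* 1.5 s */
          struct timespec   ts;

          my_bintime2timespec( &bt, &ts );
          printf( "%lld.%09ld\n", (long long) ts.tv_sec, ts.tv_nsec );
          return 0;
        }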
* timecounter: Port to RTEMS  (Alexander Krutwig, 2015-05-20, 12 files, -0/+976)
    New test sptests/timecounter01. Update #2271.
* timecounter: Honor FFCLOCK define  (Alexander Krutwig, 2015-05-19, 1 file, -0/+4)
    Update #2271.
* timecounter: Use uint32_t instead of u_int  (Alexander Krutwig, 2015-05-19, 2 files, -19/+19)
    FreeBSD assumes that u_int is a 32-bit integer type. This is wrong for some 16-bit targets supported by RTEMS. Update #2271.
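    A sketch of why the width matters: on a target where unsigned int is only 16 bits, counter masks and wrap-around arithmetic written for a 32-bit u_int silently truncate. The names below are invented for the example:

        #include <stdint.h>
        #include <limits.h>

        /* hypothetical alias: the timecounter delta is computed modulo a
         * counter mask, and a 16-bit unsigned int would truncate a mask
         * such as 0xffffffff, so the width is made explicit. */
        typedef uint32_t my_tc_counter;

        _Static_assert( sizeof( my_tc_counter ) * CHAR_BIT == 32,
                        "timecounter arithmetic assumes a 32-bit counter" );

        static my_tc_counter my_tc_delta( my_tc_counter now, my_tc_counter then,
                                          my_tc_counter mask )
        {
          return ( now - then ) & mask;   /* well defined for uint32_t wrap */
        }

        int main( void )
        {
          return (int) my_tc_delta( 5, 0xfffffffbu, 0xffffffffu );   /* 10 */
        }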
* timecounter: Import from FreeBSD  (Alexander Krutwig, 2015-05-19, 9 files, -0/+2979)
    Update #2271.
* rtems: Avoid Giant lock for events  (Sebastian Huber, 2015-05-19, 2 files, -4/+0)
* score: _Thread_Dispatch_disable_critical()  (Sebastian Huber, 2015-05-19, 8 files, -19/+67)
    Thread dispatching is disabled whenever interrupts are disabled. To get an accurate thread dispatch disabled time, it is important to use the instant at which interrupts were disabled when a transition from an interrupt-disabled section to a thread dispatch level section happens.
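    A rough sketch of the bookkeeping idea with hypothetical names (this is not the RTEMS profiling API): the lock context records when interrupts were disabled, and the dispatch-disable path starts its measurement from that earlier instant:

        #include <stdint.h>

        /* hypothetical stand-in for a lock/ISR context */
        typedef struct {
          uint64_t isr_disable_instant;   /* captured when interrupts went off */
        } my_lock_context;

        static uint64_t my_dispatch_disabled_since;

        /* Escalate from "interrupts disabled" to "thread dispatch disabled"
         * and attribute the whole window, not just the part after this call. */
        static void my_thread_dispatch_disable_critical(
          const my_lock_context *lock_context
        )
        {
          my_dispatch_disabled_since = lock_context->isr_disable_instant;
        }

        int main( void )
        {
          my_lock_context ctx = { 42 };

          my_thread_dispatch_disable_critical( &ctx );
          return (int) my_dispatch_disabled_since;   /* 42 */
        }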
* score: Replace _Thread_Delay_ended()  (Sebastian Huber, 2015-05-19, 6 files, -74/+30)
    Use _Thread_Timeout() instead. Use a pseudo thread queue for nanosleep() to deal with signals. Close #2130.
* score: Add static initializers for thread queues  (Sebastian Huber, 2015-05-19, 1 file, -0/+34)
* score: Do not inline SMP lock if profiling enabled  (Sebastian Huber, 2015-05-19, 3 files, -1/+125)
    This reduces the code size drastically.
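    A minimal sketch of the trade-off, assuming a hypothetical lock type rather than the real SMP lock API: with profiling enabled the acquire path gains bookkeeping, so keeping it out of line avoids repeating that code at every call site:

        #include <stdatomic.h>

        /* hypothetical lock type for the illustration */
        typedef struct {
          atomic_flag busy;
        } my_smp_lock;

        #if defined( MY_PROFILING )
          /* out of line: one copy of the (now larger) instrumented body */
          void my_smp_lock_acquire( my_smp_lock *lock );
        #else
          /* without profiling the body is tiny, so inlining it is cheap */
          static inline void my_smp_lock_acquire( my_smp_lock *lock )
          {
            while ( atomic_flag_test_and_set( &lock->busy ) ) {
              /* spin */
            }
          }
        #endif

        int main( void )
        {
          static my_smp_lock lock = { ATOMIC_FLAG_INIT };

          my_smp_lock_acquire( &lock );
          return 0;
        }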
* score: Delete _Objects_Put_for_get_isr_disable()  (Sebastian Huber, 2015-05-19, 3 files, -12/+0)
    This function is superfluous due to the introduction of fine-grained locking.
* score: Fine grained locking for MrsP  (Sebastian Huber, 2015-05-19, 5 files, -61/+124)
    Update #2273.
* score: Remove Giant lock in rtems_clock_tick()  (Sebastian Huber, 2015-05-19, 1 file, -10/+1)
    Update #2307.
* score: Rework _Thread_Change_priority()  (Sebastian Huber, 2015-05-19, 19 files, -208/+409)
    Move the writes to Thread_Control::current_priority and Thread_Control::real_priority into _Thread_Change_priority() under the protection of the thread lock. Add a filter function to _Thread_Change_priority() to enable specialized variants. Avoid race conditions during a thread priority restore with the new Thread_Control::priority_restore_hint, an important average-case optimization used by priority inheritance mutexes. Update #2273.
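    A hedged sketch of the filter idea with invented names (the real signature may differ): the caller passes a predicate that is evaluated under the thread lock and decides whether the priority write actually happens, which lets priority inheritance express a raise-only update without a second critical section:

        #include <stdbool.h>
        #include <stdint.h>

        /* made-up types for the illustration; lower value = more important */
        typedef uint32_t my_priority;

        typedef struct {
          my_priority current_priority;
          my_priority real_priority;
        } my_thread;

        typedef bool ( *my_priority_filter )(
          my_thread   *the_thread,
          my_priority *new_priority,   /* the filter may also adjust the value */
          void        *arg
        );

        /* Example filter: only raise the priority, never lower it. */
        static bool my_raise_only( my_thread *t, my_priority *new_priority,
                                   void *arg )
        {
          (void) arg;
          return *new_priority < t->current_priority;
        }

        /* Simplified change routine: the filter runs where the thread lock
         * would be held, and a false result leaves the priority untouched. */
        static void my_change_priority( my_thread *t, my_priority new_priority,
                                        my_priority_filter filter, void *arg )
        {
          if ( filter( t, &new_priority, arg ) ) {
            t->current_priority = new_priority;
          }
        }

        int main( void )
        {
          my_thread t = { 10, 10 };

          my_change_priority( &t, 5, my_raise_only, 0 );   /* raises to 5 */
          my_change_priority( &t, 8, my_raise_only, 0 );   /* ignored */
          return (int) t.current_priority;                 /* 5 */
        }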
* score: Fine grained locking for mutexes  (Sebastian Huber, 2015-05-19, 13 files, -117/+143)
    Update #2273.
* score: Inline _CORE_semaphore_Surrender()  (Sebastian Huber, 2015-05-19, 3 files, -70/+41)
* score: Inline _CORE_semaphore_Flush()  (Sebastian Huber, 2015-05-19, 3 files, -42/+10)
* score: Delete _CORE_semaphore_Seize()  (Sebastian Huber, 2015-05-19, 6 files, -109/+15)
    Rename _CORE_semaphore_Seize_isr_disable() to _CORE_semaphore_Seize().
* score: Fine grained locking for semaphores  (Sebastian Huber, 2015-05-19, 7 files, -34/+75)
    Update #2273.
* score: Fine grained locking for message queues  (Sebastian Huber, 2015-05-19, 16 files, -135/+299)
    Aggregate several critical sections into a bigger one. Sending and receiving messages is now protected by an ISR lock. Thread dispatching is only disabled in case a blocking operation is necessary. The message copy procedure is done inside the critical section (interrupts disabled), so this change may have a negative impact on interrupt latency when very large messages are transferred. Update #2273.
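    A rough sketch of the shape of the new send path, using stand-in types and lock stubs rather than the RTEMS implementation; a single critical section covers both the queue update and the message copy, which is why the interrupt-disabled window now scales with the message size:

        #include <stddef.h>
        #include <string.h>

        typedef struct {
          volatile int locked;          /* stand-in for the ISR lock */
          char         storage[ 128 ];
          size_t       pending_size;
        } my_message_queue;

        /* In the real code these would disable interrupts / acquire an ISR lock. */
        static void my_isr_lock_acquire( my_message_queue *mq ) { mq->locked = 1; }
        static void my_isr_lock_release( my_message_queue *mq ) { mq->locked = 0; }

        static void my_message_queue_send( my_message_queue *mq,
                                           const void *payload, size_t size )
        {
          my_isr_lock_acquire( mq );               /* interrupts disabled */
          memcpy( mq->storage, payload, size );    /* copy inside the section */
          mq->pending_size = size;
          my_isr_lock_release( mq );
        }

        int main( void )
        {
          static my_message_queue mq;

          my_message_queue_send( &mq, "ping", 5 );
          return (int) mq.pending_size;            /* 5 */
        }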
* score: Delete _CORE_message_queue_Flush_support()  (Sebastian Huber, 2015-05-19, 5 files, -116/+62)
    Check the number of pending messages in _CORE_message_queue_Flush() to avoid race conditions.
* score: Delete Thread_queue_Control::timeout_status  (Sebastian Huber, 2015-05-19, 34 files, -57/+60)
    Use a parameter for _Thread_queue_Enqueue() instead to reduce memory usage.
* score: New thread queue implementation  (Sebastian Huber, 2015-05-19, 13 files, -432/+266)
    Use thread wait flags for synchronization. The enqueue operation is now part of the initial critical section. This is the key change and enables fine-grained locking on SMP for objects that use a thread queue, such as semaphores and message queues. Update #2273.
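    A hedged sketch of the wait-flag handshake with invented names: the enqueuing thread publishes an intend-to-block state inside the critical section, and whoever satisfies the wait tries to flip it atomically; if the flip succeeds, the block is cancelled before it happens:

        #include <stdatomic.h>
        #include <stdbool.h>

        /* invented states for the illustration */
        enum {
          MY_INTEND_TO_BLOCK = 1,
          MY_BLOCKED         = 2,
          MY_READY_AGAIN     = 3
        };

        /* Called by the thread that satisfies the wait (e.g. a release on
         * another processor or from an interrupt).  Returns true if it caught
         * the waiter before the block completed. */
        static bool my_try_cancel_block( _Atomic unsigned int *wait_flags )
        {
          unsigned int expected = MY_INTEND_TO_BLOCK;

          return atomic_compare_exchange_strong( wait_flags, &expected,
                                                 MY_READY_AGAIN );
        }

        int main( void )
        {
          _Atomic unsigned int wait_flags = MY_INTEND_TO_BLOCK;

          return my_try_cancel_block( &wait_flags ) ? 0 : 1;   /* 0 */
        }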
* score: More thread queue operations  (Sebastian Huber, 2015-05-19, 10 files, -92/+282)
    Move thread queue discipline specific operations into Thread_queue_Operations. Use a separate node in the thread control block for the thread queue to make it independent of the scheduler data structures. Update #2273.
* score: Add Thread_queue_Operations  (Sebastian Huber, 2015-05-19, 10 files, -136/+205)
    Replace the Thread_Priority_control with more general Thread_queue_Operations, which will be used for generic priority change, timeout, signal and wait queue operations in the future. Update #2273.
* score: Add Thread_queue_Control::Lock  (Sebastian Huber, 2015-05-19, 35 files, -169/+261)
    Move the complete thread queue enqueue procedure into _Thread_queue_Enqueue_critical(). It is possible to use the thread queue lock to protect state of the object embedding the thread queue. This enables per-object fine-grained locking in the future. Delete _Thread_queue_Enter_critical_section(). Update #2273.