path: root/cpukit/score/include
* rtems: Change CONTEXT_FP_SIZE define — Sebastian Huber, 2015-06-03 (1 file, -1/+5)

  Define CONTEXT_FP_SIZE to zero in case hardware and software floating
  point support is disabled. The problem is that empty structures have
  different sizes in C and C++: in C++ they have a non-zero size, leading
  to an overestimate of the workspace size.
* score: Replace _API_Mutex_Is_locked() — Sebastian Huber, 2015-05-27 (1 file, -3/+8)

  Replace _API_Mutex_Is_locked() with _API_Mutex_Is_owner().

* timecounter: Use in RTEMS — Alexander Krutwig, 2015-05-20 (6 files, -580/+209)

  Replace timestamp implementation with FreeBSD bintime and timecounters.
  New test sptests/sptimecounter02.

  Update #2271.
* timecounter: Port to RTEMS — Alexander Krutwig, 2015-05-20 (4 files, -0/+252)

  New test sptests/timecounter01.

  Update #2271.

* timecounter: Use uint32_t instead of u_int — Alexander Krutwig, 2015-05-19 (1 file, -2/+2)

  FreeBSD assumes that u_int is a 32-bit integer type. This is wrong for
  some 16-bit targets supported by RTEMS.

  Update #2271.

* timecounter: Import from FreeBSD — Alexander Krutwig, 2015-05-19 (5 files, -0/+940)

  Update #2271.
* score: _Thread_Dispatch_disable_critical() — Sebastian Huber, 2015-05-19 (5 files, -15/+63)

  Thread dispatching is disabled in case interrupts are disabled. To get
  an accurate thread dispatch disabled time it is important to use the
  interrupt disabled instant in case a transition from an interrupt
  disabled section to a thread dispatch level section happens.

* score: Add static initializers for thread queues — Sebastian Huber, 2015-05-19 (1 file, -0/+34)

* score: Do not inline SMP lock if profiling enabled — Sebastian Huber, 2015-05-19 (1 file, -1/+56)

  This reduces the code size drastically.
* score: Delete _Objects_Put_for_get_isr_disable() — Sebastian Huber, 2015-05-19 (1 file, -10/+0)

  This function is superfluous due to the introduction of fine grained
  locking.

* score: Fine grained locking for MrsP — Sebastian Huber, 2015-05-19 (2 files, -46/+117)

  Update #2273.

* score: Rework _Thread_Change_priority() — Sebastian Huber, 2015-05-19 (5 files, -87/+171)

  Move the writes to Thread_Control::current_priority and
  Thread_Control::real_priority into _Thread_Change_priority() under the
  protection of the thread lock. Add a filter function to
  _Thread_Change_priority() to enable specialized variants. Avoid race
  conditions during a thread priority restore with the new
  Thread_Control::priority_restore_hint, an important average-case
  optimization used by priority inheritance mutexes.

  Update #2273.
* score: Fine grained locking for mutexes — Sebastian Huber, 2015-05-19 (1 file, -21/+19)

  Update #2273.

* score: Inline _CORE_semaphore_Surrender() — Sebastian Huber, 2015-05-19 (1 file, -2/+40)

* score: Inline _CORE_semaphore_Flush() — Sebastian Huber, 2015-05-19 (1 file, -2/+9)

* score: Delete _CORE_semaphore_Seize() — Sebastian Huber, 2015-05-19 (1 file, -29/+1)

  Rename _CORE_semaphore_Seize_isr_disable() to _CORE_semaphore_Seize().
* score: Fine grained locking for semaphores — Sebastian Huber, 2015-05-19 (1 file, -6/+7)

  Update #2273.

* score: Fine grained locking for message queues — Sebastian Huber, 2015-05-19 (1 file, -12/+109)

  Aggregate several critical sections into a bigger one. Sending and
  receiving messages is now protected by an ISR lock. Thread dispatching
  is only disabled in case a blocking operation is necessary. The message
  copy procedure is done inside the critical section (interrupts
  disabled). Thus this change may have a negative impact on the interrupt
  latency in case very large messages are transferred.

  Update #2273.
* score: Delete _CORE_message_queue_Flush_support() — Sebastian Huber, 2015-05-19 (1 file, -17/+0)

  Check the number of pending messages in _CORE_message_queue_Flush() to
  avoid race conditions.

* score: Delete Thread_queue_Control::timeout_status — Sebastian Huber, 2015-05-19 (4 files, -11/+11)

  Use a parameter for _Thread_queue_Enqueue() instead to reduce memory
  usage.

* score: New thread queue implementation — Sebastian Huber, 2015-05-19 (3 files, -139/+135)

  Use thread wait flags for synchronization. The enqueue operation is now
  part of the initial critical section. This is the key change and
  enables fine grained locking on SMP for objects using a thread queue
  like semaphores and message queues.

  Update #2273.
* score: More thread queue operations — Sebastian Huber, 2015-05-19 (3 files, -9/+121)

  Move thread queue discipline specific operations into
  Thread_queue_Operations. Use a separate node in the thread control
  block for the thread queue to make it independent of the scheduler data
  structures.

  Update #2273.

* score: Add Thread_queue_Operations — Sebastian Huber, 2015-05-19 (4 files, -95/+134)

  Replace the Thread_Priority_control with more general
  Thread_queue_Operations which will be used for generic priority change,
  timeout, signal and wait queue operations in the future.

  Update #2273.

* score: Add Thread_queue_Control::Lock — Sebastian Huber, 2015-05-19 (6 files, -35/+132)

  Move the complete thread queue enqueue procedure into
  _Thread_queue_Enqueue_critical(). It is possible to use the thread
  queue lock to protect state of the object embedding the thread queue.
  This enables per object fine grained locking in the future. Delete
  _Thread_queue_Enter_critical_section().

  Update #2273.
* score: Generalize _Event_Timeout() — Sebastian Huber, 2015-05-19 (2 files, -0/+27)

  Add a thread wait timeout code. Replace _Event_Timeout() with a general
  purpose _Thread_Timeout() watchdog handler.

  Update #2273.

* score: Reduce thread wait states — Sebastian Huber, 2015-05-19 (2 files, -26/+6)

  Merge THREAD_WAIT_STATE_SATISFIED, THREAD_WAIT_STATE_TIMEOUT,
  THREAD_WAIT_STATE_INTERRUPT_SATISFIED, and
  THREAD_WAIT_STATE_INTERRUPT_TIMEOUT into one state
  THREAD_WAIT_STATE_READY_AGAIN. This helps to write generic routines to
  block a thread.

  Update #2273.

* score: New timer server implementation — Sebastian Huber, 2015-05-19 (1 file, -13/+46)

  Use mostly the standard watchdog operations. Use a system event for
  synchronization. This implementation is simpler and offers better SMP
  performance.

  Close #2131.
* score: Move _Watchdog_Tickle() — Sebastian Huber, 2015-05-19 (1 file, -10/+0)

  Make internal function _Watchdog_Remove_it() static to avoid accidental
  usage.

  Update #2307.

* score: Add Watchdog_Iterator — Sebastian Huber, 2015-05-19 (1 file, -16/+42)

  Rewrite the _Watchdog_Insert(), _Watchdog_Remove() and
  _Watchdog_Tickle() functions to use iterator items to synchronize
  concurrent operations. This makes it possible to get rid of the global
  variables _Watchdog_Sync_level and _Watchdog_Sync_count which are a
  blocking point for scalable SMP solutions.

  Update #2307.

* score: Add _Watchdog_Acquire|Release|Flash() — Sebastian Huber, 2015-05-19 (1 file, -0/+30)

  Update #2307.

* score: Add header to _Watchdog_Remove() — Sebastian Huber, 2015-05-19 (2 files, -2/+18)

  Add a watchdog header parameter to _Watchdog_Remove() to be in line
  with the other operations. Add _Watchdog_Remove_ticks() and
  _Watchdog_Remove_seconds() for convenience.

  Update #2307.
* score: Delete STATES_WAITING_ON_THREAD_QUEUE — Sebastian Huber, 2015-05-19 (1 file, -24/+2)

  Avoid the usage of the current thread state in
  _Thread_queue_Extract_with_return_code() since thread queues should not
  know anything about thread states.

* score: _Thread_queue_Extract() — Sebastian Huber, 2015-05-19 (1 file, -11/+5)

  Remove the thread queue parameter from _Thread_queue_Extract() since
  the current thread queue is stored in the thread control block.

* score: Add _SMP_Assert() — Sebastian Huber, 2015-05-19 (1 file, -0/+9)
* score: Fix scheduler helping protocol — Sebastian Huber, 2015-05-11 (3 files, -104/+202)

  Account for priority changes of threads executing in a foreign
  partition. Exchange idle threads in case a victim node uses an idle
  thread and the new scheduled node needs an idle thread.

* score: Fix Thread_Control and Thread_Proxy_control — Sebastian Huber, 2015-05-06 (1 file, -59/+65)

  Fix the layout of the common block of Thread_Control and
  Thread_Proxy_control. Ensure that the offsets match.

* score: Delete unused Thread_queue_Timeout_callout — Sebastian Huber, 2015-04-30 (1 file, -9/+0)

* score: Fix POSIX thread join — Sebastian Huber, 2015-04-23 (1 file, -1/+3)

  A thread join is twofold. There is one thread that exits and an
  arbitrary number of threads that wait for the thread exit (a
  one-to-many relation). The exiting thread may want to wait for a thread
  that wants to join its exit (STATES_WAITING_FOR_JOIN_AT_EXIT in
  _POSIX_Thread_Exit()). On the other side we need a thread queue for all
  the threads that wait for the exit of one particular thread
  (STATES_WAITING_FOR_JOIN in pthread_join()).

  Update #2035.
* score: Delete _Thread_queue_Dequeue_priority() — Sebastian Huber, 2015-04-23 (1 file, -19/+0)

* score: _CORE_mutex_Seize_interrupt_blocking() — Sebastian Huber, 2015-04-23 (1 file, -6/+9)

  Move some code into _CORE_mutex_Seize_interrupt_blocking() so that the
  thread queue handling is in one place.

* score: Delete Thread_queue_Control::state — Sebastian Huber, 2015-04-23 (3 files, -7/+9)

  Use a parameter for _Thread_queue_Enqueue() instead to reduce memory
  usage.

* score: Fix priority message queue insert — Sebastian Huber, 2015-04-23 (1 file, -31/+1)

  Move the linear search into a critical section to avoid corruption due
  to higher priority interrupts. The interrupt disable time now depends
  on the count of pending messages.

  Close #2328.
* score: Delete _CORE_RWLock_Timeout() — Sebastian Huber, 2015-04-22 (2 files, -33/+4)

  This function was identical to _Thread_queue_Timeout(). This makes
  _Thread_queue_Enqueue_with_handler() obsolete.

* score: Delete bogus THREAD_QUEUE_WAIT_FOREVER — Sebastian Huber, 2015-04-22 (1 file, -5/+0)

  It makes no sense to use this indirection since the type for timeout
  values is Watchdog_Interval.

* score: Delete object control block ISR lock — Sebastian Huber, 2015-04-21 (3 files, -98/+1)

  The Objects_Control::Lock was a software layer violation. It worked
  only for the threads since they are somewhat special.

  Update #2273.

* score: Add _Thread_Get_interrupt_disable() — Sebastian Huber, 2015-04-21 (1 file, -10/+78)

  Remove _Thread_Acquire() and _Thread_Acquire_for_executing(). Add
  utility functions for the default thread lock. Use the default thread
  lock for the RTEMS events. There is no need to disable thread
  dispatching and acquire the Giant lock in _Event_Timeout() since this
  was already done by the caller.

  Update #2273.
* score: Modify _Thread_Dispatch_disable_critical() — Sebastian Huber, 2015-04-21 (2 files, -9/+11)

  Return the current processor to be in line with
  _Thread_Disable_dispatch().

* score: _Objects_Get_isr_disable() — Sebastian Huber, 2015-04-21 (1 file, -3/+0)

  Do not disable thread dispatching and do not acquire the Giant lock.
  This makes it possible to use this object get variant for fine grained
  locking.

  Update #2273.

* score: _Objects_Get_isr_disable() — Sebastian Huber, 2015-04-21 (3 files, -30/+30)

  Use ISR_lock_Context instead of ISR_Level to allow use of ISR locks for
  low-level locking.

  Update #2273.

* score: Add _ISR_lock_ISR_disable/enable() — Sebastian Huber, 2015-04-20 (2 files, -2/+38)