path: root/cpukit/rtems/src

Commit message (Author, Date, Files changed, Lines -deleted/+added)
* score: Fix simple timecounter support (Sebastian Huber, 2016-01-27, 1 file, -1/+5)
  Close #2502.
* Fix _Assert() statement (Sebastian Huber, 2015-11-25, 1 file, -1/+1)
* score: Fix race condition on SMP (Sebastian Huber, 2015-11-17, 2 files, -22/+39)
  We must ensure that the Thread_Control::Wait information update is visible to the target thread before we update its wait flags, otherwise we may return out of date events or a wrong status.
  Close #2471.
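  The ordering constraint described above is the standard publish-then-flag pattern. A minimal sketch using plain C11 atomics (not the RTEMS-internal atomic wrappers; the structure and flag names are hypothetical):

      #include <stdatomic.h>
      #include <stdint.h>

      /* Hypothetical stand-ins for Thread_Control::Wait and the wait flags. */
      struct wait_info {
        uint32_t status;
        uint32_t event_out;
      };

      struct thread {
        struct wait_info Wait;        /* plain data, written first       */
        _Atomic uint32_t wait_flags;  /* publication point, written last */
      };

      #define WAIT_READY_AGAIN 0x4U   /* hypothetical flag value */

      /* Producer: publish the wait information, then the flag. */
      static void satisfy_wait(struct thread *t, uint32_t events)
      {
        t->Wait.event_out = events;
        t->Wait.status = 0;
        /* Release store: the writes above become visible before the flag changes. */
        atomic_store_explicit(&t->wait_flags, WAIT_READY_AGAIN,
                              memory_order_release);
      }

      /* Consumer (the waiting thread): read the flag first, then the data. */
      static uint32_t check_wait(struct thread *t)
      {
        if (atomic_load_explicit(&t->wait_flags, memory_order_acquire)
            == WAIT_READY_AGAIN) {
          return t->Wait.event_out;  /* guaranteed to see the producer's writes */
        }
        return 0;
      }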
* rtems: Add rtems_interrupt_local_disable|enable() (Sebastian Huber, 2015-06-22, 1 file, -2/+7)
  Add rtems_interrupt_local_disable|enable() as suggested by Pavel Pisa to emphasize that interrupts are only disabled on the current processor. Do not define the rtems_interrupt_disable|enable|flash() macros and functions on SMP configurations since they don't ensure system wide mutual exclusion.
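  A minimal usage sketch, assuming the macro form that takes an rtems_interrupt_level cookie; on SMP this only protects data that is private to the current processor:

      #include <rtems.h>
      #include <stdint.h>

      static volatile uint32_t per_cpu_counter;  /* assumed to be per-CPU data */

      void bump_per_cpu_counter(void)
      {
        rtems_interrupt_level level;

        /* Masks interrupts on the current processor only. */
        rtems_interrupt_local_disable(level);

        /* Safe against ISRs on this CPU, not against other processors. */
        ++per_cpu_counter;

        rtems_interrupt_local_enable(level);
      }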
* Remove use ticks for statistics configure option. (Joel Sherrill, 2015-06-15, 4 files, -137/+54)
  This was obsolete and broken based upon recent timekeeping changes. The build option was previously enabled by adding USE_TICKS_FOR_STATISTICS=1 to the configure command line. This propagated into the code as preprocessor conditionals on __RTEMS_USE_TICKS_FOR_STATISTICS__.
* score: Add _Watchdog_Preinitialize() (Sebastian Huber, 2015-06-13, 3 files, -2/+3)
  Add an assert to ensure that the watchdog is in the proper state for a _Watchdog_Initialize(). This helps to detect invalid initializations which may lead to a corrupt watchdog chain.
* region*.c: Ensure return_status is set when RTEMS_MULTIPROCESSING is enabled (Joel Sherrill, 2015-05-21, 8 files, -8/+0)
* timecounter: Use in RTEMS (Alexander Krutwig, 2015-05-20, 6 files, -98/+9)
  Replace timestamp implementation with FreeBSD bintime and timecounters. New test sptests/sptimecounter02.
  Update #2271.
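  For context, a clock driver feeds the timecounter framework roughly as sketched below. The hardware counter, its frequency, and the names prefixed my_ are hypothetical; the struct fields follow the FreeBSD timecounter layout that RTEMS adopted:

      #include <rtems.h>
      #include <rtems/timecounter.h>

      /* Hypothetical free-running 32-bit hardware counter. */
      extern uint32_t my_hw_counter_read(void);
      #define MY_HW_COUNTER_FREQUENCY 1000000UL  /* 1 MHz, assumed */

      static struct timecounter my_tc;

      static uint32_t my_tc_get_timecount(struct timecounter *tc)
      {
        (void) tc;
        return my_hw_counter_read();
      }

      void my_clock_driver_install_timecounter(void)
      {
        my_tc.tc_get_timecount = my_tc_get_timecount;
        my_tc.tc_counter_mask = 0xffffffff;  /* full 32-bit counter */
        my_tc.tc_frequency = MY_HW_COUNTER_FREQUENCY;
        my_tc.tc_quality = RTEMS_TIMECOUNTER_QUALITY_CLOCK_DRIVER;
        rtems_timecounter_install(&my_tc);
      }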
* rtems: Avoid Giant lock for events (Sebastian Huber, 2015-05-19, 2 files, -4/+0)
* score: _Thread_Dispatch_disable_critical() (Sebastian Huber, 2015-05-19, 2 files, -2/+2)
  Thread dispatching is disabled while interrupts are disabled. To get an accurate thread dispatch disabled time, it is important to use the instant at which interrupts were disabled when a transition from an interrupt-disabled section to a thread dispatch level section happens.
* score: Replace _Thread_Delay_ended() (Sebastian Huber, 2015-05-19, 2 files, -7/+11)
  Use _Thread_Timeout() instead. Use a pseudo thread queue for nanosleep() to deal with signals.
  Close #2130.
* score: Delete _Objects_Put_for_get_isr_disable() (Sebastian Huber, 2015-05-19, 2 files, -2/+0)
  This function is superfluous due to the introduction of fine grained locking.
* score: Fine grained locking for MrsP (Sebastian Huber, 2015-05-19, 2 files, -10/+7)
  Update #2273.
* score: Remove Giant lock in rtems_clock_tick() (Sebastian Huber, 2015-05-19, 1 file, -10/+1)
  Update #2307.
* score: Rework _Thread_Change_priority() (Sebastian Huber, 2015-05-19, 1 file, -9/+11)
  Move the writes to Thread_Control::current_priority and Thread_Control::real_priority into _Thread_Change_priority() under the protection of the thread lock. Add a filter function to _Thread_Change_priority() to enable specialized variants. Avoid race conditions during a thread priority restore with the new Thread_Control::priority_restore_hint, an important average-case optimization used by priority inheritance mutexes.
  Update #2273.
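  The filter function mentioned above is a change-under-lock callback pattern. A generic sketch with hypothetical names (this is not the actual _Thread_Change_priority() signature):

      #include <stdbool.h>
      #include <stddef.h>
      #include <stdint.h>

      typedef uint32_t priority_t;

      struct thread_ctl {
        priority_t current_priority;
        priority_t real_priority;
        /* thread lock omitted for brevity */
      };

      /* Hypothetical filter: runs with the thread lock held and may adjust or
       * veto the new priority, so specialized variants (for example "only
       * raise") can reuse one generic change-priority routine. */
      typedef bool (*priority_filter)(struct thread_ctl *t,
                                      priority_t *new_priority, void *arg);

      static void thread_lock(struct thread_ctl *t) { (void) t; /* acquire */ }
      static void thread_unlock(struct thread_ctl *t) { (void) t; /* release */ }

      void change_priority(struct thread_ctl *t, priority_t new_priority,
                           void *arg, priority_filter filter)
      {
        thread_lock(t);
        if (filter == NULL || filter(t, &new_priority, arg)) {
          t->current_priority = new_priority;
          /* the scheduler update would happen here */
        }
        thread_unlock(t);
      }

      /* Example filter: only raise the priority (numerically lower is higher). */
      bool raise_only(struct thread_ctl *t, priority_t *new_priority, void *arg)
      {
        (void) arg;
        return *new_priority < t->current_priority;
      }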
* score: Fine grained locking for mutexes (Sebastian Huber, 2015-05-19, 2 files, -11/+2)
  Update #2273.
* score: Delete _CORE_semaphore_Seize() (Sebastian Huber, 2015-05-19, 1 file, -1/+1)
  Rename _CORE_semaphore_Seize_isr_disable() to _CORE_semaphore_Seize().
* score: Fine grained locking for semaphores (Sebastian Huber, 2015-05-19, 2 files, -10/+18)
  Update #2273.
* score: Fine grained locking for message queues (Sebastian Huber, 2015-05-19, 5 files, -17/+42)
  Aggregate several critical sections into a bigger one. Sending and receiving messages are now protected by an ISR lock. Thread dispatching is only disabled in case a blocking operation is necessary. The message copy procedure is done inside the critical section (interrupts disabled), so this change may have a negative impact on interrupt latency when very large messages are transferred.
  Update #2273.
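  A rough sketch of the locking shape described in the message-queue entry, with hypothetical queue and lock helpers (not the actual CORE message queue code): the copy runs with interrupts disabled, and thread dispatching only needs to be disabled on the blocking path.

      #include <stdbool.h>
      #include <stddef.h>
      #include <string.h>

      struct msg_queue;                       /* hypothetical queue object */
      struct lock_ctx { int level; };         /* hypothetical lock context */

      extern void mq_isr_lock_acquire(struct msg_queue *mq, struct lock_ctx *ctx);
      extern void mq_isr_lock_release(struct msg_queue *mq, struct lock_ctx *ctx);
      extern void *mq_allocate_buffer(struct msg_queue *mq, size_t size);
      /* Assumed to disable thread dispatching, release the ISR lock and block. */
      extern void mq_block_sender(struct msg_queue *mq, struct lock_ctx *ctx);

      bool mq_send(struct msg_queue *mq, const void *payload, size_t size)
      {
        struct lock_ctx ctx;

        mq_isr_lock_acquire(mq, &ctx);        /* interrupts disabled from here */
        void *buffer = mq_allocate_buffer(mq, size);
        if (buffer != NULL) {
          /* The copy is inside the critical section, so a very large message
           * lengthens the interrupt-disabled window. */
          memcpy(buffer, payload, size);
          mq_isr_lock_release(mq, &ctx);
          return true;
        }
        mq_block_sender(mq, &ctx);  /* only the blocking path disables dispatch */
        return false;
      }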
* score: Delete Thread_queue_Control::timeout_status (Sebastian Huber, 2015-05-19, 9 files, -11/+19)
  Use a parameter for _Thread_queue_Enqueue() instead to reduce memory usage.
* score: Add Thread_queue_Control::Lock (Sebastian Huber, 2015-05-19, 3 files, -14/+9)
  Move the complete thread queue enqueue procedure into _Thread_queue_Enqueue_critical(). It is possible to use the thread queue lock to protect the state of the object embedding the thread queue. This enables per-object fine-grained locking in the future. Delete _Thread_queue_Enter_critical_section().
  Update #2273.
* score: Generalize _Event_Timeout() (Sebastian Huber, 2015-05-19, 2 files, -71/+7)
  Add a thread wait timeout code. Replace _Event_Timeout() with a general purpose _Thread_Timeout() watchdog handler.
  Update #2273.
* score: Reduce thread wait states (Sebastian Huber, 2015-05-19, 2 files, -4/+4)
  Merge THREAD_WAIT_STATE_SATISFIED, THREAD_WAIT_STATE_TIMEOUT, THREAD_WAIT_STATE_INTERRUPT_SATISFIED, and THREAD_WAIT_STATE_INTERRUPT_TIMEOUT into one state THREAD_WAIT_STATE_READY_AGAIN. This helps to write generic routines to block a thread.
  Update #2273.
* rtems: Use once mutex for timer server init (Sebastian Huber, 2015-05-19, 1 file, -2/+3)
* score: New timer server implementation (Sebastian Huber, 2015-05-19, 1 file, -373/+197)
  Use mostly the standard watchdog operations. Use a system event for synchronization. This implementation is simpler and offers better SMP performance.
  Close #2131.
* score: Add Watchdog_Iterator (Sebastian Huber, 2015-05-19, 1 file, -0/+6)
  Rewrite the _Watchdog_Insert(), _Watchdog_Remove() and _Watchdog_Tickle() functions to use iterator items to synchronize concurrent operations. This makes it possible to get rid of the global variables _Watchdog_Sync_level and _Watchdog_Sync_count which are a blocking point for scalable SMP solutions.
  Update #2307.
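  The iterator idea is the usual trick for walking a list whose lock may be dropped mid-iteration: each active traversal registers an iterator record, and removal fixes up any iterator that points at the removed node. A generic sketch with hypothetical types (not the actual Watchdog_Iterator code); all functions are assumed to be called with the appropriate lock held:

      #include <stddef.h>

      struct node { struct node *next; struct node *prev; };

      struct iterator {
        struct node *current;        /* node the traversal is positioned at */
        struct iterator *next_iter;  /* next live iterator on this list     */
      };

      struct list {
        struct node head;            /* circular list sentinel                 */
        struct iterator *iterators;  /* live iterators registered on this list */
      };

      /* Removal fixes up every live iterator that points at the victim, so a
       * traversal that temporarily dropped the lock never resumes on a node
       * that is no longer on the list. */
      void list_remove(struct list *l, struct node *victim)
      {
        for (struct iterator *it = l->iterators; it != NULL; it = it->next_iter) {
          if (it->current == victim) {
            it->current = victim->prev;  /* step back; traversal continues from here */
          }
        }
        victim->prev->next = victim->next;
        victim->next->prev = victim->prev;
      }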
* score: Add header to _Watchdog_Remove() (Sebastian Huber, 2015-05-19, 13 files, -15/+50)
  Add a watchdog header parameter to _Watchdog_Remove() to be in line with the other operations. Add _Watchdog_Remove_ticks() and _Watchdog_Remove_seconds() for convenience.
  Update #2307.
* score: _Thread_queue_Extract() (Sebastian Huber, 2015-05-19, 5 files, -5/+5)
  Remove the thread queue parameter from _Thread_queue_Extract() since the current thread queue is stored in the thread control block.
* score: Delete Thread_queue_Control::state (Sebastian Huber, 2015-04-23, 2 files, -2/+2)
  Use a parameter for _Thread_queue_Enqueue() instead to reduce memory usage.
* score: Add _Thread_Get_interrupt_disable() (Sebastian Huber, 2015-04-21, 7 files, -74/+57)
  Remove _Thread_Acquire() and _Thread_Acquire_for_executing(). Add utility functions for the default thread lock. Use the default thread lock for the RTEMS events. There is no need to disable thread dispatching and acquire the Giant lock in _Event_Timeout() since this was already done by the caller.
  Update #2273.
* score: _Objects_Get_isr_disable() (Sebastian Huber, 2015-04-21, 1 file, -0/+8)
  Do not disable thread dispatching and do not acquire the Giant lock. This makes it possible to use this object get variant for fine grained locking.
  Update #2273.
* score: _Objects_Get_isr_disable() (Sebastian Huber, 2015-04-21, 1 file, -5/+9)
  Use ISR_lock_Context instead of ISR_Level to allow the use of ISR locks for low-level locking.
  Update #2273.
* score: Rename _Watchdog_Reset() (Sebastian Huber, 2015-04-14, 1 file, -2/+1)
  Update #2307.
* score: Add Watchdog_Header (Sebastian Huber, 2015-04-13, 2 files, -20/+19)
  This type is intended to encapsulate all state needed to manage a watchdog chain.
  Update #2307.
* score: Split _Watchdog_Adjust() (Sebastian Huber, 2015-04-13, 1 file, -1/+1)
  Split _Watchdog_Adjust() into _Watchdog_Adjust_backward() and _Watchdog_Adjust_forward(). Remove Watchdog_Adjust_directions, _Watchdog_Adjust_seconds() and _Watchdog_Adjust_ticks(). This avoids checking the same condition again.
  Update #2307.
* rtems: Atomically suspend/resume tasks (Sebastian Huber, 2015-04-08, 2 files, -12/+10)
* score: Add scheduler acquire/release (Sebastian Huber, 2015-03-24, 1 file, -3/+3)
  This is currently a global lock for all scheduler instances. It should be replaced with one lock per scheduler instance in the future.
  Update #2273.
* Disable deprecated warning on implementation of deprecated methods (Joel Sherrill, 2015-03-17, 8 files, -4/+46)
* cpukit: Remove old DESCRIPTION: in comments (Joel Sherrill, 2015-03-11, 2 files, -34/+10)
  These were remnants of the pre-Doxygen comment style.
* score: Implement fine-grained locking for events (Sebastian Huber, 2015-03-05, 10 files, -185/+191)
  Use the ISR lock of the thread object to protect the event state and use the Giant lock only for the blocking operations.
  Update #2273.
* score: Simplify and fix signal delivery (Sebastian Huber, 2015-03-05, 1 file, -1/+0)
  Deliver the POSIX signals after the thread state has been updated to avoid race conditions on SMP configurations.
  Update #2273.
* score: Update _Thread_Heir only if necessary (Sebastian Huber, 2015-03-05, 1 file, -35/+15)
  Previously, the _Thread_Heir was updated unconditionally in case a new heir was determined. The _Thread_Dispatch_necessary was only updated in case the executing thread was preemptible or an internal thread was unblocked. Change this to update the _Thread_Heir and _Thread_Dispatch_necessary only in case the currently selected heir thread is preemptible or a dispatch is forced.
  Move the schedule decision into the change priority operation and use the schedule operation only in rtems_task_mode() in case preemption is enabled or an ASR dispatch is necessary.
  This is a behaviour change. Previously, RTEMS_NO_PREEMPT also prevented signal delivery in certain cases (not always). Now, signal delivery is no longer influenced by RTEMS_NO_PREEMPT. Since the currently selected heir thread is used to determine if a new heir is chosen, non-preemptible heir threads that are currently not executing now prevent a new heir. This may have an application impact; see the change to test tm04. Document this change in sp04.
  Update #2273.
* score: Add and use PRIORITY_PSEUDO_ISR (Sebastian Huber, 2015-03-05, 1 file, -1/+1)
* score: Rework global construction (Sebastian Huber, 2014-10-13, 1 file, -1/+12)
  Ensure that the global construction is performed in the context of the first initialization thread. On SMP this was not guaranteed in the previous implementation.
* rtems: SMP fix for timer server (Sebastian Huber, 2014-08-27, 1 file, -1/+3)
* rtems: Inline rtems_clock_get_ticks_since_boot() (Sebastian Huber, 2014-08-25, 1 file, -31/+0)
  Update documentation.
* semdelete.c: Correct spacing (Joel Sherrill, 2014-07-14, 1 file, -1/+1)
* score: Remove scheduler parameter from most ops (Sebastian Huber, 2014-06-23, 3 files, -20/+4)
  Remove the scheduler parameter from most high level scheduler operations like
    - _Scheduler_Block(),
    - _Scheduler_Unblock(),
    - _Scheduler_Change_priority(),
    - _Scheduler_Update_priority(),
    - _Scheduler_Release_job(), and
    - _Scheduler_Yield().
  This simplifies the scheduler operations usage.
* score: Fix _Thread_Delay_ended() on SMP (Sebastian Huber, 2014-06-20, 3 files, -11/+13)
  Suppose we have two tasks A and B and two processors. Task A is about to delete task B. Now task B calls rtems_task_wake_after(1) on the other processor. Task B will block on the Giant lock. Task A progresses with the task B deletion until it has to wait for termination. Now task B obtains the Giant lock, sets its state to STATES_DELAYING, initializes its watchdog timer and waits. Eventually _Thread_Delay_ended() is called, but now _Thread_Get() returns NULL since the thread is already marked as deleted. Thus task B remains forever in the STATES_DELAYING state.
  Instead of passing the thread identifier, use the thread control block directly via the watchdog user argument. This also makes _Thread_Delay_ended() a bit more efficient.
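  The shape of the fix, as a generic sketch with hypothetical timer and thread types (not the actual RTEMS watchdog API): the timeout callback receives the thread control block directly instead of looking it up by identifier, so a concurrent deletion cannot make the lookup fail.

      #include <stddef.h>
      #include <stdint.h>

      struct thread;                            /* opaque thread control block */

      /* Hypothetical timer whose callback gets an opaque user argument. */
      struct timer {
        void (*routine)(void *user_arg);
        void *user_arg;
      };

      extern struct thread *thread_lookup(uint32_t id);  /* may return NULL */
      extern void thread_unblock(struct thread *t);

      /* Old approach: look the thread up by identifier in the callback.  If the
       * thread is already marked as deleted, the lookup fails and the thread
       * stays in its delaying state forever. */
      static void delay_ended_by_id(void *user_arg)
      {
        struct thread *t = thread_lookup((uint32_t) (uintptr_t) user_arg);
        if (t != NULL) {
          thread_unblock(t);
        }
      }

      /* Fixed approach: store the thread control block itself as the user
       * argument, so no lookup is needed and none can fail. */
      static void delay_ended_by_tcb(void *user_arg)
      {
        thread_unblock(user_arg);
      }

      void arm_delay_timer(struct timer *tmr, struct thread *t)
      {
        (void) delay_ended_by_id;               /* kept only for comparison */
        tmr->routine = delay_ended_by_tcb;
        tmr->user_arg = t;
        /* ... insert the timer into the timeout chain ... */
      }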
* score: PR2181: Add _Thread_Yield() (Sebastian Huber, 2014-06-12, 1 file, -2/+1)
  The _Scheduler_Yield() was called by the executing thread with thread dispatching disabled and interrupts enabled. The rtems_task_suspend() directive is explicitly allowed in ISRs: http://rtems.org/onlinedocs/doc-current/share/rtems/html/c_user/Interrupt-Manager-Directives-Allowed-from-an-ISR.html#Interrupt-Manager-Directives-Allowed-from-an-ISR
  Unlike the other scheduler operations, the locking was performed inside the operation. This led to the following race condition. Suppose an ISR suspends the executing thread right before the yield scheduler operation. Now the executing thread is no longer in the set of ready threads. The typical scheduler operations do not check the thread state and would now extract the thread again and enqueue it. This corrupted data structures.
  Add _Thread_Yield() and do the scheduler yield operation with interrupts disabled. This has a negligible effect on the interrupt latency.