path: root/cpukit/score/src/threaddispatch.c
Commit message | Author | Age | Files | Lines
* score: Add _Thread_Dispatch_direct_no_return() | Sebastian Huber | 2021-05-02 | 1 | -0/+3

The __builtin_unreachable() cannot be used with current GCC versions to tell the compiler that a function does not return to the caller, see: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=99151

Add a no-return variant of _Thread_Dispatch_direct() to avoid generation of dead code.
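As an illustration of the pattern, here is a minimal sketch using the C11 noreturn specifier; do_dispatch() and do_dispatch_no_return() are invented stand-ins, not the actual RTEMS declarations:

    #include <stdnoreturn.h>

    void do_dispatch( void ); /* hypothetical stand-in for _Thread_Dispatch_direct() */

    /*
     * A dedicated no-return variant lets callers omit dead code without
     * placing __builtin_unreachable() after the call.
     */
    noreturn void do_dispatch_no_return( void )
    {
      while ( 1 ) {
        do_dispatch();
      }
    }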
* Remove superfluous <rtems/score/wkspace.h> includes | Sebastian Huber | 2021-04-20 | 1 | -1/+0
* score: Change thread action locking | Sebastian Huber | 2021-02-20 | 1 | -3/+0

Require that the corresponding lock is acquired before the action handler returns. This helps to avoid recursion in the signal processing.

Update #4244.
* score: Canonicalize Doxygen @file comments | Sebastian Huber | 2020-12-02 | 1 | -2/+5

Use common phrases for the file brief descriptions.

Update #3706.
* doxygen: Switch @brief and @ingroup | Sebastian Huber | 2020-04-28 | 1 | -1/+2

This order change fixes the LaTeX documentation build via Doxygen.
* Canonicalize config.h include | Sebastian Huber | 2020-04-16 | 1 | -1/+1

Use the following variant which was already used by most source files:

    #ifdef HAVE_CONFIG_H
    #include "config.h"
    #endif
* score: Fix context switch extensions (SMP) | Sebastian Huber | 2020-02-28 | 1 | -0/+5

In uniprocessor and SMP configurations, the context switch extensions were called during _Thread_Do_dispatch():

    void _Thread_Do_dispatch( Per_CPU_Control *cpu_self, ISR_Level level )
    {
      Thread_Control *executing;

      executing = cpu_self->executing;
      ...
      do {
        Thread_Control *heir;

        heir = _Thread_Get_heir_and_make_it_executing( cpu_self );
        ...
        _User_extensions_Thread_switch( executing, heir );
        ...
        _Context_Switch( &executing->Registers, &heir->Registers );
        ...
      } while ( cpu_self->dispatch_necessary );
      ...
    }

In uniprocessor configurations, this is fine and the context switch extensions are called for all thread switches except the very first thread switch to the initialization thread. However, in SMP configurations, the context switch may be invalidated and updated in the low-level _Context_Switch() routine. See:

https://docs.rtems.org/branches/master/c-user/symmetric_multiprocessing_services.html#thread-dispatch-details

In case such an update happens, a thread will execute on a processor which was not seen in the previous call of the context switch extensions. This can confuse for example event record consumers which use events generated by a context switch extension.

Fixing this is not straightforward. The context switch extensions call must move after the low-level context switch. The problem here is that we may end up in _Thread_Handler(). Adding the context switch extensions call to _Thread_Handler() now also covers the thread switch to the initialization thread.

We also have to save the last executing thread (ancestor) of the processor. Registers or the stack cannot be used for this purpose; we have to add it to the per-processor information. Existing extensions may be affected, since the context switch extensions now use the stack of the heir thread. The stack checker is affected by this.

Calling the thread switch extensions in the low-level context switch is difficult, since at this point an intermediate stack is used which is only large enough to enable servicing of interrupts.

Update #3885.
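A simplified model of the resulting bookkeeping; per_cpu, ancestor and run_switch_extensions() are invented names and this is not the RTEMS code, it only illustrates that the extensions now run after the low-level switch using a per-processor record of the last executing thread:

    struct thread;

    struct per_cpu {
      struct thread *executing;
      struct thread *ancestor;  /* last thread that executed on this processor */
    };

    void run_switch_extensions( struct thread *last, struct thread *next );

    /* Runs on the heir's stack, after the low-level context switch. */
    void after_context_switch( struct per_cpu *cpu, struct thread *heir )
    {
      struct thread *last = cpu->ancestor;

      cpu->ancestor = heir;
      run_switch_extensions( last, heir );
    }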
* score: Add _SMP_Need_inter_processor_interrupts() | Sebastian Huber | 2020-02-25 | 1 | -1/+1

Test for the proper system condition instead of using the rtems_configuration_is_smp_enabled() workaround.

Update #3876.
* score: Use an ISR lock for Per_CPU_Control::Lock | Sebastian Huber | 2019-04-12 | 1 | -7/+10

The use of a hand-crafted lock for Per_CPU_Control::Lock was necessary at some point in the SMP support development, but it is no longer justified.
* doxygen: Rename Score* groups in RTEMSScore* | Sebastian Huber | 2019-04-04 | 1 | -1/+1

Update #3706.
* Adjust interrupt mode tests for some CPU ports | Sebastian Huber | 2019-01-09 | 1 | -1/+1

In case the robust thread dispatch is enabled by the CPU port, the interrupt level must not be changed through the task mode.

Update #3000.
* score: Add thread pin/unpin support | Sebastian Huber | 2018-09-10 | 1 | -3/+85

Add support to temporarily pin a thread to its current processor. This may be used to access per-processor data structures in critical sections with enabled thread dispatching, e.g. a pinned thread is allowed to block.

Update #3508.
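A rough usage model with invented helper names; the point is that a pinned thread keeps its current processor while thread dispatching stays enabled, so the pinned section may even block:

    void thread_pin( void );              /* stand-in for the new pin operation   */
    void thread_unpin( void );            /* stand-in for the new unpin operation */
    void update_per_cpu_statistics( void );

    void touch_per_cpu_data( void )
    {
      thread_pin();                       /* stay on the current processor        */
      update_per_cpu_statistics();        /* may acquire a mutex and block        */
      thread_unpin();                     /* migration is allowed again           */
    }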
* score: Do not inline _Thread_Dispatch_enable() | Sebastian Huber | 2018-08-23 | 1 | -0/+27

This function is slightly too complex for inlining with two if statements. The caller already needs a stack frame due to the potential call to _Thread_Do_dispatch(). In _Thread_Dispatch_enable() the call to _Thread_Do_dispatch() can be optimized to a tail call.

A relative text size comparison, (text size after patch - text size before patch) / text size before patch, on sparc/erc32 with SMP enabled showed these results:

    Minimum -0.000697892 (fsdosfsname01.exe)
    Median  -0.00745021  (psxtimes01.exe)
    Maximum -0.0233032   (spscheduler01.exe)

An absolute text size comparison, text size after patch - text size before patch, on sparc/erc32 with SMP enabled showed these results:

    Minimum -3312 (ada_sp09.exe)
    Median  -1024 (tm15.exe)
    Maximum -592  (spglobalcon01.exe)
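A condensed model of the out-of-line enable path with an invented per-CPU structure; the real function additionally handles interrupt disabling and profiling, but the shape shows why the final _Thread_Do_dispatch() call can be emitted as a tail call:

    struct per_cpu {
      unsigned disable_level;
      int      dispatch_necessary;
    };

    void do_dispatch( struct per_cpu *cpu );  /* stand-in for _Thread_Do_dispatch() */

    void dispatch_enable( struct per_cpu *cpu )
    {
      if ( cpu->disable_level == 1 && cpu->dispatch_necessary ) {
        /* Last statement: eligible for tail-call optimization.  In this
         * model, do_dispatch() is assumed to reset the disable level. */
        do_dispatch( cpu );
      } else {
        --cpu->disable_level;  /* drops to zero when no dispatch is necessary */
      }
    }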
* score: Add assert to _Thread_Dispatch() | Sebastian Huber | 2017-07-04 | 1 | -0/+1

Update #3060.
* score: Remove rtems_ada_self | Sebastian Huber | 2017-06-14 | 1 | -8/+0

This task variable is superfluous since we use thread-local storage now.

Update #2289.
* score: Move _Thread_Scheduler_ask_for_help() | Sebastian Huber | 2017-02-03 | 1 | -1/+34

Move _Thread_Scheduler_ask_for_help(), rename it to _Thread_Ask_for_help() and make it static.
* score: Introduce _Internal_error() | Sebastian Huber | 2016-12-12 | 1 | -8/+2
* score: Remove fatal is internal indicator | Sebastian Huber | 2016-12-09 | 1 | -2/+0

The fatal "is internal" indicator is redundant since the fatal source and error code uniquely identify a fatal error. Keep the "is internal" parameter of the fatal user extension for backward compatibility and always set it to false.

Update #2825.
* score: Robust thread dispatch | Sebastian Huber | 2016-11-23 | 1 | -0/+15

On SMP configurations, it is a fatal error to call blocking operating system services with interrupts disabled, since this prevents delivery of inter-processor interrupts. This could lead to executing threads which are not allowed to execute, resulting in undefined behaviour.

The ARM Cortex-M port has a similar problem, since the interrupt state is not a part of the thread context.

Update #2811.
* score: Allow interrupts during thread dispatch | Sebastian Huber | 2016-11-18 | 1 | -16/+1

Use a processor-specific interrupt frame during context switches in case the executing thread no longer executes on the processor and the heir thread is about to start execution. During this period we must not use a thread stack for interrupt processing.

Update #2809.
* score: Add and use _Thread_Dispatch_direct() | Sebastian Huber | 2016-11-18 | 1 | -0/+16

This function is useful for operations which synchronously block, e.g. self restart, self deletion, yield, sleep. It helps to detect if these operations are called in the wrong context. Since the thread dispatch necessary indicator is not used, this is more robust in some SMP situations.

Update #2751.
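A schematic of the intended call pattern, using invented stand-in names; an operation that blocks synchronously disables dispatching, makes itself not ready, and then forces the dispatch directly instead of relying on the dispatch necessary indicator:

    struct per_cpu;

    struct per_cpu *dispatch_disable( void );    /* stand-ins for the score operations */
    void block_self( void );
    void dispatch_direct( struct per_cpu *cpu );

    void sleep_self( void )
    {
      struct per_cpu *cpu_self;

      cpu_self = dispatch_disable();
      block_self();                  /* make the executing thread no longer ready */
      dispatch_direct( cpu_self );   /* switch away unconditionally               */
    }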
* score: Add new SMP scheduler helping protocol | Sebastian Huber | 2016-11-02 | 1 | -1/+77

Update #2556.
* score: Remove superfluous SMP debug support | Sebastian Huber | 2016-09-07 | 1 | -2/+0

This information turned out to be useless in the last couple of months.
* score: Rename _ISR_Disable() and _ISR_Enable() | Sebastian Huber | 2016-05-20 | 1 | -2/+2

Rename _ISR_Disable() into _ISR_Local_disable(). Rename _ISR_Enable() into _ISR_Local_enable(). Remove _Debug_Is_owner_of_giant(). This is a preparation to remove the Giant lock.

Update #2555.
* score: Rename _ISR_Disable_without_giant() | Sebastian Huber | 2016-05-20 | 1 | -3/+3

Rename _ISR_Disable_without_giant() into _ISR_Local_disable(). Rename _ISR_Enable_without_giant() into _ISR_Local_enable(). This is a preparation to remove the Giant lock.

Update #2555.
* score: Introduce thread state lock | Sebastian Huber | 2016-05-12 | 1 | -8/+7

Update #2556.
* score: Fix CPU time used by executing threads | Sebastian Huber | 2016-03-17 | 1 | -5/+0

The CPU time used of a thread was previously maintained per-processor, mostly during _Thread_Dispatch(). However, on SMP configurations the actual processor of a thread is difficult to figure out since thread dispatching is a highly asynchronous process (e.g. via inter-processor interrupts). Only the intended processor of a thread is known to the scheduler easily.

Do the CPU usage accounting during thread heir updates in the context of the scheduler operations. Provide the function _Thread_Get_CPU_time_used() to get the CPU usage of a thread using proper locks to get a consistent value.

Close #2627.
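A schematic of charging CPU time at heir update time, with invented names and a generic monotonic clock; it is not the RTEMS implementation, only the accounting idea described above:

    #include <stdint.h>

    struct thread {
      uint64_t cpu_time_used;
    };

    struct per_cpu {
      struct thread *heir;
      uint64_t       last_switch_time;
    };

    uint64_t monotonic_now( void );  /* hypothetical clock source */

    void update_heir( struct per_cpu *cpu, struct thread *new_heir )
    {
      uint64_t now = monotonic_now();

      /* Charge the time since the last heir update to the previous heir. */
      cpu->heir->cpu_time_used += now - cpu->last_switch_time;
      cpu->last_switch_time = now;
      cpu->heir = new_heir;
    }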
* score: Avoid SCORE_EXTERN | Sebastian Huber | 2016-02-17 | 1 | -0/+8

Delete SCORE_INIT. This finally removes the

    some.h:

    #ifndef SOME_XYZ_EXTERN
    #define SOME_XYZ_EXTERN extern
    #endif
    SOME_XYZ_EXTERN type xyz;

    some_xyz.c:

    #define SOME_XYZ_EXTERN
    #include <some.h>

pattern in favour of

    some.h:

    extern type xyz;

    some_xyz.c:

    #include <some.h>
    type xyz;

Update #2559.
* Delete unused API extensions | Sebastian Huber | 2016-02-03 | 1 | -1/+0
* Optional Initial Extensions initialization | Sebastian Huber | 2016-02-03 | 1 | -0/+2

Update #2408.
* Require __getreent() | Sebastian Huber | 2015-11-25 | 1 | -10/+0

This function is used by Newlib since 2013-07-09 (Git commit 9b51cd8c6b9cdd067d9648a7ab952884019c56a5).
* Remove use ticks for statistics configure option. | Joel Sherrill | 2015-06-15 | 1 | -11/+4

This was obsolete and broken based upon recent timekeeping changes.

The build option was previously enabled by adding USE_TICKS_FOR_STATISTICS=1 to the configure command line. This propagated into the code as preprocessor conditionals using the __RTEMS_USE_TICKS_FOR_STATISTICS__ conditional.
* score: Add and use _Thread_Do_dispatch() | Sebastian Huber | 2015-03-05 | 1 | -30/+34

The _Thread_Dispatch() function is quite complex and the time to set up and tear down the stack frame is significant. Split this function into two parts. The complex part is now in _Thread_Do_dispatch(). Call _Thread_Do_dispatch() in _Thread_Enable_dispatch() only if necessary. This increases the average case performance. Simplify _Thread_Handler() for SMP configurations.

Update #2273.
* score: Fix FP context restore via _Thread_Handler | Sebastian Huber | 2015-02-17 | 1 | -36/+2

After a context switch we end up in the second part of _Thread_Dispatch() or in _Thread_Handler() in case of new threads. Use the same function _Thread_Restore_fp() to restore the floating-point context. It makes no sense to do this in _Thread_Start_multitasking(). This also fixes a race condition in SMP configurations.

Update #2268.
* score: Implement forced thread migration | Sebastian Huber | 2014-05-07 | 1 | -32/+14

The current implementation of task migration in RTEMS has some implications with respect to the interrupt latency. It is crucial to preserve the system invariant that a task can execute on at most one processor in the system at a time. This is accomplished with a boolean indicator in the task context. The processor architecture specific low-level task context switch code will mark that a task context is no longer executing and wait until the heir context stopped execution before it restores the heir context and resumes execution of the heir task. So there is one point in time in which a processor is without a task. This is essential to avoid cyclic dependencies in case multiple tasks migrate at once. Otherwise some supervising entity is necessary to prevent livelocks. Such a global supervisor would lead to scalability problems, so this approach is not used.

Currently the thread dispatch is performed with interrupts disabled. So in case the heir task is currently executing on another processor, this prolongs the time of disabled interrupts, since one processor has to wait for another processor to make progress.

It is difficult to avoid this issue with the interrupt latency, since interrupts normally store the context of the interrupted task on its stack. In case a task is marked as not executing, we must not use its task stack to store such an interrupt context. We cannot use the heir stack before it stopped execution on another processor. So if we enable interrupts during this transition, we have to provide an alternative task-independent stack for this time frame. This issue needs further investigation.
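A minimal model of the described invariant, using C11 atomics and invented names; the low-level switch code marks the old context as not executing and spins until the heir context stopped executing elsewhere:

    #include <stdatomic.h>
    #include <stdbool.h>

    struct context {
      atomic_bool is_executing;
    };

    void restore_registers( struct context *ctx );  /* hypothetical low-level restore */

    void low_level_switch( struct context *executing, struct context *heir )
    {
      /* The old context is no longer executing on this processor ...        */
      atomic_store( &executing->is_executing, false );

      /* ... and the heir context may still run elsewhere, so wait for it.   */
      while ( atomic_load( &heir->is_executing ) ) {
        /* Busy wait: in this window the processor runs without a task.      */
      }

      atomic_store( &heir->is_executing, true );
      restore_registers( heir );
    }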
* score: Use common names for per-CPU variables | Sebastian Huber | 2014-04-22 | 1 | -24/+24

Use "cpu" for an arbitrary Per_CPU_Control variable.
Use "cpu_self" for the Per_CPU_Control of the current processor.
Use "cpu_index" for an arbitrary processor index.
Use "cpu_index_self" for the processor index of the current processor.
Use "cpu_count" for the processor count obtained via _SMP_Get_processor_count().
Use "cpu_max" for the processor maximum obtained by rtems_configuration_get_maximum_processors().
* score: Critical fix for SMP | Sebastian Huber | 2014-04-16 | 1 | -1/+12

The _Scheduler_SMP_Allocate_processor() and _Thread_Dispatch() exchange information without locks. Make sure we use the right load/store ordering.
* score: Delete _Thread_Ticks_per_timeslice | Sebastian Huber | 2014-04-07 | 1 | -1/+2

Use the Configuration instead.
* score: Delete post-switch API extensions | Sebastian Huber | 2014-03-31 | 1 | -1/+0

Use thread post-switch actions instead.
* score: Add thread actions | Sebastian Huber | 2014-03-31 | 1 | -0/+33

Thread actions are the building block for efficient implementation of

- Classic signals delivery,
- POSIX signals delivery,
- thread restart notification,
- thread delete notification,
- forced thread migration on SMP configurations, and
- the Multiprocessor Resource Sharing Protocol (MrsP).
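A simplified model of the mechanism, with invented names; the real score implementation uses its own chain and locking, but the idea is a per-thread list of actions that is drained during thread dispatch:

    #include <stddef.h>

    struct thread;

    typedef void ( *action_handler )( struct thread *thread );

    struct thread_action {
      struct thread_action *next;
      action_handler        handler;
    };

    struct thread {
      struct thread_action *actions;  /* pending actions, run on the next dispatch */
    };

    void add_action(
      struct thread        *thread,
      struct thread_action *action,
      action_handler        handler
    )
    {
      action->handler = handler;
      action->next = thread->actions;
      thread->actions = action;
    }

    void run_actions( struct thread *thread )
    {
      while ( thread->actions != NULL ) {
        struct thread_action *action = thread->actions;

        thread->actions = action->next;
        ( *action->handler )( thread );
      }
    }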
* Change all references of rtems.com to rtems.org. | Chris Johns | 2014-03-21 | 1 | -1/+1
* score: Add per-CPU profiling | Sebastian Huber | 2014-03-14 | 1 | -0/+2

Add per-CPU profiling stats API. Implement the thread dispatch disable level profiling. The interrupt profiling must be implemented in CPU port specific parts (mostly assembler code). Add a support function _Profiling_Outer_most_interrupt_entry_and_exit() for this purpose.
* score: Minor _Thread_Dispatch() optimization | Sebastian Huber | 2013-12-02 | 1 | -2/+1

It is not necessary to load the executing thread control again after the context switch since it is an invariant of the executing thread.
* smp: Add and use _Assert_Owner_of_giant() | Sebastian Huber | 2013-08-30 | 1 | -2/+2

Add and use _ISR_Disable_without_giant() and _ISR_Enable_without_giant() if RTEMS_SMP is defined.

On single processor systems the ISR disable/enable was the big hammer which ensured system-wide mutual exclusion. On SMP configurations this no longer works since other processors do not care about disabled interrupts on this processor and continue to execute freely.

On SMP, in addition to ISR disable/enable an SMP lock must be used. Currently we have only the Giant lock, so we can check easily that ISR disable/enable is used only in the right context.
* score: Per-CPU thread dispatch disable level | Sebastian Huber | 2013-08-09 | 1 | -57/+50

Use a per-CPU thread dispatch disable level. So instead of one global thread dispatch disable level we now have one instance per processor. This is a major performance improvement for SMP. On non-SMP configurations this may simplify the interrupt entry/exit code.

The giant lock is still present, but it is now decoupled from the thread dispatching in _Thread_Dispatch(), _Thread_Handler(), _Thread_Restart_self() and the interrupt entry/exit. Access to the giant lock is now available via _Giant_Acquire() and _Giant_Release(). The giant lock is still implicitly acquired via _Thread_Dispatch_decrement_disable_level(). The giant lock is only acquired for high-level operations in interrupt handlers (e.g. release of a semaphore, sending of an event).

As a side-effect this change fixes the lost thread dispatch necessary indication bug in _Thread_Dispatch().

A per-CPU thread dispatch disable level greatly simplifies the SMP support for the interrupt entry/exit code since no spin locks have to be acquired in this area. It is only necessary to get the current processor index and use this to calculate the address of the own per-CPU control. This reduces the interrupt latency considerably.

All elements for the interrupt entry/exit code are now part of the Per_CPU_Control structure: thread dispatch disable level, ISR nest level and thread dispatch necessary. Nothing else is required (except CPU port specific stuff like on SPARC).
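A sketch of the simplified interrupt exit that a per-CPU level makes possible; the names are invented and the real code lives in CPU port specific interrupt epilogues:

    struct per_cpu {
      unsigned disable_level;       /* per-CPU thread dispatch disable level */
      unsigned isr_nest_level;
      int      dispatch_necessary;
    };

    struct per_cpu *own_per_cpu( void );  /* derived from the own processor index */
    void thread_dispatch( void );

    void interrupt_exit( void )
    {
      struct per_cpu *cpu = own_per_cpu();

      --cpu->isr_nest_level;

      /* Only the own per-CPU control is inspected: no spin lock is required. */
      if (
        cpu->isr_nest_level == 0
          && cpu->disable_level == 0
          && cpu->dispatch_necessary
      ) {
        thread_dispatch();
      }
    }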
* score: Rename tod.h to todimpl.h | Sebastian Huber | 2013-08-01 | 1 | -1/+1
* score: Add and use _Thread_Update_cpu_time_used() | Sebastian Huber | 2013-08-01 | 1 | -11/+4

Fix _times().
* smp: Delete _SMP_Request_other_cores_to_dispatch() | Sebastian Huber | 2013-07-30 | 1 | -5/+0

Use an event-triggered unicast to inform remote processors about a necessary thread dispatch instead.
* score: Critical section change in _Thread_Dispatch | Sebastian Huber | 2013-07-30 | 1 | -2/+2

If we enter _Thread_Dispatch() then _Thread_Dispatch_disable_level must be zero. Single processor RTEMS assumes that stores of non-zero values to _Thread_Dispatch_disable_level are observed by interrupts as non-zero values.

Move the _Thread_Dispatch_set_disable_level( 1 ) out of the first ISR disabled critical section. In case interrupts happen between the _Thread_Dispatch_set_disable_level( 1 ) and _ISR_Disable( level ), then the interrupt will observe a non-zero _Thread_Dispatch_disable_level and will not issue a _Thread_Dispatch(), and we can enter the ISR disabled section directly after interrupt processing.

This change leads to symmetry between the single processor and SMP configuration.
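A schematic of the changed ordering with invented names; the essential point is that the disable level becomes non-zero before interrupts are disabled, so an interrupt in between does not issue a nested dispatch:

    volatile unsigned disable_level;  /* models _Thread_Dispatch_disable_level */

    void isr_local_disable( void );   /* hypothetical interrupt disable/enable */
    void isr_local_enable( void );
    void dispatch_body( void );

    void thread_dispatch( void )
    {
      disable_level = 1;    /* an interrupt arriving here already sees a non-zero level */
      isr_local_disable();  /* ... and therefore does not recurse into thread_dispatch() */

      dispatch_body();

      disable_level = 0;
      isr_local_enable();
    }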
* score: Create thread implementation header | Sebastian Huber | 2013-07-26 | 1 | -17/+3

Move implementation specific parts of thread.h and thread.inl into the new header file threadimpl.h. The thread.h now contains only the application visible API.

Remove superfluous header file includes from various files.