path: root/cpukit/score/include/rtems/score/threadqimpl.h

* Remove make preinstall  (Chris Johns, 2018-01-25, 1 file, -1265/+0)

  A speciality of the RTEMS build system was the make preinstall step. It
  copied header files from arbitrary locations into the build tree. The
  header files were included via the -Bsome/build/tree/path GCC command
  line option.

  This has at least seven problems:

  * The make preinstall step itself needs time and disk space.
  * Errors in header files show up in the build tree copy. This makes it
    hard for editors to open the right file to fix the error.
  * There is no clear relationship between source and build tree header
    files. This makes an audit of the build process difficult.
  * The visibility of all header files in the build tree makes it
    difficult to enforce API barriers. For example, using BSP-specifics
    in the cpukit is discouraged.
  * An introduction of a new build system is difficult.
  * Include paths specified by the -B option are system headers. This may
    suppress warnings.
  * The parallel build had sporadic failures on some hosts.

  This patch removes the make preinstall step. All installed header files
  are moved to dedicated include directories in the source tree. Let
  @RTEMS_CPU@ be the target architecture, e.g. arm, powerpc, sparc, etc.
  Let @RTEMS_BSP_FAMILY@ be a BSP family base directory, e.g. erc32, imx,
  qoriq, etc.

  The new cpukit include directories are:

  * cpukit/include
  * cpukit/score/cpu/@RTEMS_CPU@/include
  * cpukit/libnetworking

  The new BSP include directories are:

  * bsps/include
  * bsps/@RTEMS_CPU@/include
  * bsps/@RTEMS_CPU@/@RTEMS_BSP_FAMILY@/include

  There are build tree include directories for generated files.

  The include directory order favours the most general header file, e.g.
  it is not possible to override general header files via the include
  path order.

  The "bootstrap -p" option was removed. The new "bootstrap -H" option
  should be used to regenerate the "headers.am" files.

  Update #3254.

* score: Move thread queue timeout handling  (Sebastian Huber, 2017-10-24, 1 file, -25/+85)

  Update #3117. Update #3182.

* score: Rename threadq support function  (Sebastian Huber, 2017-10-24, 1 file, -5/+5)

  Rename _Thread_queue_Context_set_do_nothing_enqueue_callout() into
  _Thread_queue_Context_set_enqueue_do_nothing_extra(). More
  _Thread_queue_Context_set_enqueue_*() functions will follow.

  Update #3117. Update #3182.

* score: Add _Thread_queue_Dispatch_disable()  (Sebastian Huber, 2017-10-10, 1 file, -0/+10)

* score: Add RTEMS_HAVE_MEMBER_SAME_TYPE()  (Sebastian Huber, 2017-07-31, 1 file, -2/+6)

  This fixes some "variably modified" warnings and a clang compile error.

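  A check of this kind is typically built from the GCC/clang builtins
  __typeof__() and __builtin_types_compatible_p(). The sketch below only
  illustrates the general technique; the actual macro definition in RTEMS
  may differ.

      /* Illustration only: a compile-time "members have the same type"
         check in the spirit of RTEMS_HAVE_MEMBER_SAME_TYPE(). */
      #define HAVE_MEMBER_SAME_TYPE( t_lhs, m_lhs, t_rhs, m_rhs ) \
        __builtin_types_compatible_p( \
          __typeof__( ( (t_lhs *) 0 )->m_lhs ), \
          __typeof__( ( (t_rhs *) 0 )->m_rhs ) \
        )

      struct a { unsigned long ticks; };
      struct b { unsigned long ticks; };

      _Static_assert(
        HAVE_MEMBER_SAME_TYPE( struct a, ticks, struct b, ticks ),
        "member types differ"
      );
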
* score: Add _Thread_queue_Object_name  (Sebastian Huber, 2017-01-31, 1 file, -2/+46)

  Add the special thread queue name _Thread_queue_Object_name to mark
  thread queues embedded in an object with identifier. Using the special
  thread state STATES_THREAD_QUEUE_WITH_IDENTIFIER is not reliable for
  this purpose since the thread wait information and thread state are
  protected by different SMP locks in separate critical sections. Remove
  STATES_THREAD_QUEUE_WITH_IDENTIFIER. Add and use
  _Thread_queue_Object_initialize().

  Update #2858.

* score: Add Thread_queue_Queue::name  (Sebastian Huber, 2017-01-13, 1 file, -12/+26)

  Update #2858.

* score: Improve SMP lock debug support  (Sebastian Huber, 2017-01-11, 1 file, -1/+1)

  The CPU index starts with zero. Increment it by one, to allow global
  SMP locks to reside in the BSS section.

* score: Fix debug thread queue context init  (Sebastian Huber, 2016-12-02, 1 file, -0/+2)

  On ARM Thumb we may have function addresses ending with 0x7f, if we are
  lucky.

* score: Fix thread queue context initialization  (Sebastian Huber, 2016-11-28, 1 file, -3/+4)

  Initialize the thread queue context with invalid data in debug
  configurations to catch missing setup steps.

* score: Optimize _Thread_queue_Enqueue()  (Sebastian Huber, 2016-11-24, 1 file, -8/+44)

  Move the thread state for _Thread_queue_Enqueue() to the thread queue
  context. This reduces the parameter count of _Thread_queue_Enqueue()
  from five to four (ARM, for example, has only four function parameter
  registers). Since the thread state is used after several function calls
  inside _Thread_queue_Enqueue(), this parameter was previously saved on
  the stack.

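  A rough sketch of the idea with purely hypothetical names (not the
  actual RTEMS declarations): the blocking state is stored in the enqueue
  context up front, so the enqueue function needs one argument less.

      #include <stdint.h>

      /* Hypothetical context type for illustration only. */
      typedef struct {
        uint32_t thread_state;   /* blocking state now travels in the context */
        /* ... lock context, timeout, callouts ... */
      } Queue_context;

      /* Before: five parameters, the state often ended up on the stack. */
      void enqueue_old( void *queue, const void *ops, void *executing,
                        Queue_context *ctx, uint32_t thread_state );

      /* After: four parameters, the state is read from the context later. */
      void enqueue_new( void *queue, const void *ops, void *executing,
                        Queue_context *ctx );

      void caller( void *queue, const void *ops, void *executing,
                   Queue_context *ctx )
      {
        ctx->thread_state = 0x20;   /* e.g. a "waiting for mutex" state bit */
        enqueue_new( queue, ops, executing, ctx );
      }
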
* score: Rename _Thread_queue_Enqueue_critical()  (Sebastian Huber, 2016-11-23, 1 file, -48/+10)

  Delete unused _Thread_queue_Enqueue() and rename
  _Thread_queue_Enqueue_critical() to _Thread_queue_Enqueue().

* score: Add thread queue enqueue callout  (Sebastian Huber, 2016-11-23, 1 file, -10/+35)

  Replace the expected thread dispatch disable level with a thread queue
  enqueue callout. This enables the use of _Thread_Dispatch_direct() in
  the thread queue enqueue procedure. This avoids impossible execution
  paths, e.g. Per_CPU_Control::dispatch_necessary is always true.

* score: Optimize self-contained objects  (Sebastian Huber, 2016-11-18, 1 file, -0/+19)

  Avoid use of the stack for the hot paths.

* score: Use non-inline thread queue lock ops  (Sebastian Huber, 2016-11-04, 1 file, -19/+35)

  This reduces the code size and helps to reduce the amount of testing.
  Hot paths can use the _Thread_queue_Queue_acquire_critical() and
  _Thread_queue_Queue_release_critical() functions which are still
  inline.

* score: First part of new MrsP implementation  (Sebastian Huber, 2016-11-02, 1 file, -0/+62)

  Update #2556.

* score: Scheduler node awareness for thread queues  (Sebastian Huber, 2016-09-21, 1 file, -3/+3)

  Maintain the priority of a thread for each scheduler instance via the
  thread queue enqueue, extract, priority actions and surrender
  operations. This replaces the primitive priority boosting.

  Update #2556.

* score: Rework thread priority management  (Sebastian Huber, 2016-09-21, 1 file, -80/+65)

  Add priority nodes which contribute to the overall thread priority. The
  actual priority of a thread is now an aggregation of priority nodes.
  The thread priority aggregation for the home scheduler instance of a
  thread consists of at least one priority node, which is normally the
  real priority of the thread. The locking protocols (e.g. priority
  ceiling and priority inheritance), rate-monotonic period objects and
  the POSIX sporadic server add, change and remove priority nodes.

  A thread now changes its priority immediately, e.g. priority changes
  are not deferred until the thread releases its last resource.

  Replace the _Thread_Change_priority() function with

  * _Thread_Priority_perform_actions(),
  * _Thread_Priority_add(),
  * _Thread_Priority_remove(),
  * _Thread_Priority_change(), and
  * _Thread_Priority_update().

  Update #2412. Update #2556.

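  As an illustration of the priority node idea (hypothetical types, not
  the RTEMS scheduler data structures): every contributor owns a node,
  and the effective priority is simply the most important contribution,
  so adding or removing a node takes effect immediately.

      #include <stddef.h>
      #include <stdint.h>

      /* Hypothetical sketch of a per-scheduler priority aggregation. */
      typedef struct Priority_node {
        uint64_t priority;             /* lower value = more important */
        struct Priority_node *next;
      } Priority_node;

      typedef struct {
        Priority_node real_priority;   /* always present for the home scheduler */
        Priority_node *contributors;   /* ceilings, inheritance, sporadic server, ... */
      } Priority_aggregation;

      static uint64_t effective_priority( const Priority_aggregation *agg )
      {
        uint64_t best = agg->real_priority.priority;
        const Priority_node *node;

        for ( node = agg->contributors; node != NULL; node = node->next ) {
          if ( node->priority < best ) {
            best = node->priority;
          }
        }

        return best;
      }
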
* score: Introduce Thread_queue_Lock_context  (Sebastian Huber, 2016-09-08, 1 file, -6/+6)

  Introduce Thread_queue_Lock_context to contain the context necessary
  for thread queue lock and thread wait lock acquire/release operations
  to reduce the Thread_Control size.

* score: Simplify thread queue acquire/release  (Sebastian Huber, 2016-09-08, 1 file, -10/+32)

* score: Introduce thread queue surrender operation  (Sebastian Huber, 2016-08-11, 1 file, -25/+0)

  This is an optimization for _Thread_queue_Surrender(). It helps to
  encapsulate the priority boosting in the priority inheritance thread
  queue operations.

* score: Add _Thread_queue_Surrender()  (Sebastian Huber, 2016-08-11, 1 file, -0/+27)

  Add _Thread_queue_Surrender() to unify the mutex surrender procedures
  which involve a thread queue operation.

* score: Fix and simplify thread wait locks  (Sebastian Huber, 2016-08-03, 1 file, -106/+6)

  There was a subtle race condition in _Thread_queue_Do_extract_locked().
  It must first update the thread wait flags and then restore the default
  thread wait state. In the previous implementation this could, under
  rare timing conditions, lead to an ineffective _Thread_Wait_tranquilize(),
  resulting in a corrupt system state.

  Update #2556.

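  A heavily simplified sketch of the ordering requirement (hypothetical
  names and flag values, not the RTEMS wait flag protocol):

      #include <stdatomic.h>

      #define WAIT_READY_AGAIN   0x4u   /* assumed flag value, illustration only */
      #define WAIT_STATE_DEFAULT 0x0u

      typedef struct {
        atomic_uint wait_flags;   /* intend-to-block / blocked / ready-again */
        atomic_uint wait_state;   /* default vs. queued on a thread queue */
      } Thread_wait;

      static void extract_locked( Thread_wait *wait )
      {
        /* 1) Update the thread wait flags first, so that a concurrent
              tranquilize-style check can observe the transition ... */
        atomic_store_explicit(
          &wait->wait_flags, WAIT_READY_AGAIN, memory_order_release
        );

        /* 2) ... and only then restore the default thread wait state.
              The opposite order opens the race window described above. */
        atomic_store_explicit(
          &wait->wait_state, WAIT_STATE_DEFAULT, memory_order_release
        );
      }
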
* score: Add deadlock detection  (Sebastian Huber, 2016-07-27, 1 file, -0/+38)

  The mutex objects use the owner field of the thread queues for the
  mutex owner. Use this and add deadlock detection to
  _Thread_queue_Enqueue_critical() for thread queues with an owner.

  Update #2412. Update #2556. Close #2765.

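  The essence of an owner-based deadlock check is to follow the chain of
  thread queue owners from the queue being blocked on; if it leads back
  to the enqueueing thread, the enqueue would deadlock. A minimal sketch
  with hypothetical types (the real code runs under the appropriate locks
  and handles SMP subtleties):

      #include <stdbool.h>
      #include <stddef.h>

      typedef struct Thread Thread;

      typedef struct {
        Thread *owner;              /* e.g. the mutex owner, NULL if free */
      } Thread_queue;

      struct Thread {
        Thread_queue *wait_queue;   /* queue the thread blocks on, or NULL */
      };

      static bool enqueue_would_deadlock(
        const Thread_queue *queue,
        const Thread       *executing
      )
      {
        const Thread *owner = queue->owner;

        while ( owner != NULL ) {
          if ( owner == executing ) {
            return true;            /* ownership cycle back to the caller */
          }

          if ( owner->wait_queue == NULL ) {
            return false;           /* the owner is not blocked, no cycle */
          }

          owner = owner->wait_queue->owner;
        }

        return false;
      }
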
* score: Turn thread lock into thread wait lock  (Sebastian Huber, 2016-07-27, 1 file, -3/+145)

  The _Thread_Lock_acquire() function had a potentially infinite run-time
  due to the lack of fairness at the atomic operations level.

  Update #2412. Update #2556. Update #2765.

* score: Priority inherit thread queue operations  (Sebastian Huber, 2016-07-27, 1 file, -0/+23)

  Move the priority change due to priority inheritance to the thread
  queue enqueue operation to simplify the locking on SMP configurations.

  Update #2412. Update #2556. Update #2765.

* cpukit: refactor nanosleep and use 64-bit timeout for threadq  (Gedare Bloom, 2016-07-26, 1 file, -1/+1)

  * Fixes a bug with elapsed time calculations misusing absolute time
    arguments in nanosleep_helper by passing the requested relative
    interval.
  * Fixes a bug with truncation of absolute timeouts by passing the full
    64-bit value to Thread_queue_Enqueue.
  * Share yield logic between nanosleep and clock_nanosleep.

  updates #2732

* cpukit: Add and use Watchdog_Discipline.  (Gedare Bloom, 2016-07-25, 1 file, -7/+60)

  Clock disciplines may be WATCHDOG_RELATIVE, WATCHDOG_ABSOLUTE, or
  WATCHDOG_NO_TIMEOUT. A discipline of WATCHDOG_RELATIVE with a timeout
  of WATCHDOG_NO_TIMEOUT is equivalent to a discipline of
  WATCHDOG_NO_TIMEOUT.

  updates #2732

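  A small sketch of how the combinations described above collapse. The
  enumerator names match the commit message; the helper function and the
  assumption that WATCHDOG_NO_TIMEOUT doubles as the zero timeout value
  are for illustration only.

      #include <stdbool.h>
      #include <stdint.h>

      typedef enum {
        WATCHDOG_NO_TIMEOUT = 0,   /* assumed to double as the "no timeout" value */
        WATCHDOG_RELATIVE,
        WATCHDOG_ABSOLUTE
      } Watchdog_Discipline;

      static bool blocks_without_timeout(
        Watchdog_Discipline discipline,
        uint64_t            timeout
      )
      {
        if ( discipline == WATCHDOG_NO_TIMEOUT ) {
          return true;
        }

        /* WATCHDOG_RELATIVE with a WATCHDOG_NO_TIMEOUT timeout behaves
           like a WATCHDOG_NO_TIMEOUT discipline. */
        return discipline == WATCHDOG_RELATIVE
          && timeout == (uint64_t) WATCHDOG_NO_TIMEOUT;
      }
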
* score: Add debug support to chains  (Sebastian Huber, 2016-07-22, 1 file, -0/+2)

  This helps to detect

  * double insert, append, prepend errors, and
  * get from empty chain errors.

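  One common way to implement such checks is to poison the node pointers
  whenever a node is off chain and assert on them before insertion; a
  minimal sketch with hypothetical names:

      #include <assert.h>
      #include <stddef.h>

      typedef struct Chain_Node {
        struct Chain_Node *next;
        struct Chain_Node *previous;
      } Chain_Node;

      /* Mark a node as off chain after extraction or a get operation. */
      static void node_set_off_chain( Chain_Node *node )
      {
        node->next = NULL;
        node->previous = NULL;
      }

      /* Append before the tail sentinel of a circular chain. */
      static void chain_append_checked( Chain_Node *tail, Chain_Node *node )
      {
        /* Catches double insert/append/prepend: the node must be off chain. */
        assert( node->next == NULL && node->previous == NULL );

        node->previous = tail->previous;
        node->next = tail;
        tail->previous->next = node;
        tail->previous = node;
      }
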
* score: _CORE_mutex_Check_dispatch_for_seize()  (Sebastian Huber, 2016-05-30, 1 file, -12/+38)

  Move the safety check performed by
  _CORE_mutex_Check_dispatch_for_seize() out of the performance critical
  path and generalize it. Blocking on a thread queue with an unexpected
  thread dispatch disabled level is illegal in all system states. Add the
  expected thread dispatch disable level (which may be 1 or 2 depending
  on the operation) to Thread_queue_Context and use it in
  _Thread_queue_Enqueue_critical().

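  Sketched in isolation (hypothetical names, simplified): the caller
  records the disable level it expects to hold at the blocking point, and
  the enqueue path asserts it unconditionally.

      #include <assert.h>
      #include <stdint.h>

      typedef struct {
        uint32_t expected_thread_dispatch_disable_level;   /* 1 or 2 */
      } Queue_context;

      static void enqueue_critical(
        const Queue_context *ctx,
        uint32_t             current_disable_level
      )
      {
        /* Blocking with an unexpected thread dispatch disable level is
           illegal in all system states. */
        assert(
          current_disable_level == ctx->expected_thread_dispatch_disable_level
        );

        /* ... enqueue and block the executing thread ... */
      }
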
* score: Add _Thread_queue_Context_set_MP_callout()  (Sebastian Huber, 2016-05-30, 1 file, -25/+22)

  Add _Thread_queue_Context_set_MP_callout() to simplify
  _Thread_queue_Context_initialize(). This makes it possible to more
  easily add additional fields to Thread_queue_Context.

* score: Adjust thread queue layout  (Sebastian Huber, 2016-05-30, 1 file, -3/+4)

  Adjust thread queue layout according to Newlib. This makes it possible
  to use the same implementation for <sys/lock.h> and CORE mutexes in the
  future.

* score: Add Status_Control for all APIs  (Sebastian Huber, 2016-05-26, 1 file, -5/+35)

  Unify the status codes of the Classic and POSIX API to use the new enum
  Status_Control. This eliminates the Thread_Control::Wait::timeout_code
  field and the timeout parameter of _Thread_queue_Enqueue_critical() and
  _MPCI_Send_request_packet(). It gets rid of the status code translation
  tables and instead uses simple bit operations to get the status for a
  particular API. This enables translation of status code constants at
  compile time. Add _Thread_Wait_get_status() to avoid direct access of
  thread internal data structures.

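  The bit-operation idea can be sketched as packing both API codes into
  one constant and recovering the wanted one with a shift and mask. The
  macros and numeric values below are illustrative only, not the actual
  Status_Control encoding.

      #include <stdint.h>

      #define STATUS_BUILD( classic, posix ) \
        ( ( (uint32_t) (classic) << 8 ) | (uint32_t) (posix) )

      #define STATUS_GET_CLASSIC( status ) ( ( (uint32_t) (status) >> 8 ) & 0xffU )
      #define STATUS_GET_POSIX( status )   ( (uint32_t) (status) & 0xffU )

      typedef enum {
        EXAMPLE_STATUS_SUCCESSFUL = STATUS_BUILD( 0, 0 ),
        EXAMPLE_STATUS_TIMEOUT    = STATUS_BUILD( 6, 116 )  /* e.g. RTEMS_TIMEOUT / ETIMEDOUT */
      } Example_status;
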
* score: Move thread queue MP callout to context  (Sebastian Huber, 2016-05-25, 1 file, -109/+82)

  Drop the multiprocessing (MP) dependent callout parameter from the
  thread queue extract, dequeue, flush and unblock methods. Merge this
  parameter with the lock context into the new structure
  Thread_queue_Context. This helps to get rid of the conditionally
  compiled method call helpers.

* score: Get rid of mp_id parameter  (Sebastian Huber, 2016-05-25, 1 file, -30/+8)

  Get rid of the mp_id parameter used for some thread queue methods. Use
  THREAD_QUEUE_QUEUE_TO_OBJECT() instead.

* score: Move thread queue object support  (Sebastian Huber, 2016-05-25, 1 file, -0/+27)

* mpci: Fix thread queue flush method  (Sebastian Huber, 2016-05-25, 1 file, -49/+11)

  We must call the MP callout for proxies if we unblock them after a
  thread queue extraction. This was missing in
  _Thread_queue_Flush_critical(). Move the timer removal and unblock code
  to the new function _Thread_Remove_timer_and_unblock().

* score: Use _RBTree_Insert_inline()  (Sebastian Huber, 2016-05-20, 1 file, -15/+0)

  Use _RBTree_Insert_inline() for priority thread queues.

  Update #2556.

* score: Avoid Giant lock for scheduler set/get  (Sebastian Huber, 2016-05-12, 1 file, -3/+23)

  Update #2555.

* score: Add _Thread_queue_Is_lock_owner()  (Sebastian Huber, 2016-05-12, 1 file, -3/+45)

  Add _Thread_queue_Is_lock_owner() in case RTEMS_DEBUG is defined.

* score: Add _Thread_queue_Flush_default_filter()  (Sebastian Huber, 2016-04-22, 1 file, -0/+15)

* score: Add _Thread_queue_Is_empty()  (Sebastian Huber, 2016-04-22, 1 file, -0/+7)

* score: Introduce _Thread_queue_Flush_critical()  (Sebastian Huber, 2016-04-21, 1 file, -24/+63)

  Replace _Thread_queue_Flush() with _Thread_queue_Flush_critical() and
  add a filter function for customization of the thread queue flush
  operation.

  Update #2555.

* score: Rework MP thread queue callout support  (Sebastian Huber, 2016-04-06, 1 file, -31/+182)

  The thread queue implementation was heavily reworked to support SMP.
  This broke the multiprocessing support of the thread queues. This is
  fixed by this patch.

  A thread proxy is unblocked due to three reasons: 1) timeout, 2)
  request satisfaction, and 3) extraction. In case 1) no MPCI message
  must be sent. This is ensured via the
  _Thread_queue_MP_callout_do_nothing() callout set during
  _Thread_MP_Allocate_proxy(). In cases 2) and 3) an MPCI message must be
  sent. If the blocking operation is interrupted during
  _Thread_queue_Enqueue_critical(), then this message must be sent by the
  blocking thread. For this the new fields
  Thread_Proxy_control::thread_queue_callout and
  Thread_Proxy_control::thread_queue_id are used.

  Delete the individual API MP callout types and use
  Thread_queue_MP_callout throughout. This type is only defined in
  multiprocessing configurations.

  Prefix the multiprocessing parameters with mp_ to ease code review.
  Multiprocessing specific parameters are optional due to use of a
  similar macro pattern. There is no overhead for non-multiprocessing
  configurations.

* score: _Thread_queue_Flush() parameter changes  (Sebastian Huber, 2016-04-06, 1 file, -20/+61)

  Change _Thread_queue_Flush() into a macro that invokes
  _Thread_queue_Do_flush() with the parameter set defined by
  RTEMS_MULTIPROCESSING. For multiprocessing configurations add the
  object identifier to avoid direct use of the thread wait information.
  Use mp_ prefix for multiprocessing related parameters. Rename
  Thread_queue_Flush_callout to Thread_queue_MP_callout since this type
  will be re-used later for other operations as well.

* score: Remove Thread_queue_Queue::operations field  (Sebastian Huber, 2016-03-29, 1 file, -60/+32)

  Remove the Thread_queue_Queue::operations field to reduce the size of
  this structure. Add a thread queue operations parameter to the
  _Thread_queue_First(), _Thread_queue_First_locked(),
  _Thread_queue_Enqueue(), _Thread_queue_Dequeue() and
  _Thread_queue_Flush() functions. This is a preparation patch to reduce
  the size of several synchronization objects.

* score: Implement priority boosting  (Sebastian Huber, 2015-09-04, 1 file, -0/+25)

* score: Implement SMP-specific priority queue  (Sebastian Huber, 2015-09-04, 1 file, -0/+16)

* score: Add thread queue for self-contained objects  (Sebastian Huber, 2015-07-30, 1 file, -0/+18)

* score: Use a plain ticket lock for thread locks  (Sebastian Huber, 2015-07-30, 1 file, -9/+64)

  This enables external libraries to use thread locks since they are
  independent of the actual RTEMS build configuration, e.g. profiling
  enabled or disabled.

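  For reference, the core of a ticket lock is small and needs nothing
  beyond C11 atomics, which is what makes it usable from external
  libraries; a minimal sketch (not the RTEMS SMP lock implementation):

      #include <stdatomic.h>

      typedef struct {
        atomic_uint next_ticket;   /* next ticket to hand out */
        atomic_uint now_serving;   /* ticket currently allowed to proceed */
      } Ticket_lock;

      static void ticket_lock_acquire( Ticket_lock *lock )
      {
        /* Take the next ticket; fetch-add makes the order FIFO-fair. */
        unsigned int my_ticket =
          atomic_fetch_add_explicit( &lock->next_ticket, 1u, memory_order_relaxed );

        /* Spin until it is this ticket's turn. */
        while ( atomic_load_explicit( &lock->now_serving, memory_order_acquire )
                != my_ticket ) {
          /* busy wait */
        }
      }

      static void ticket_lock_release( Ticket_lock *lock )
      {
        unsigned int next =
          atomic_load_explicit( &lock->now_serving, memory_order_relaxed ) + 1u;

        atomic_store_explicit( &lock->now_serving, next, memory_order_release );
      }
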