path: root/cpukit/score/src

Commit log (each entry: subject, author, date; files changed, lines -removed/+added):
* score: _CPU_Context_switch_to_first_task_smp() (Sebastian Huber, 2014-02-05; 1 file, -1/+1)
  Delete _CPU_Context_switch_to_first_task_smp() and use _CPU_Context_restore() instead.

* Add thread-local storage (TLS) support (Sebastian Huber, 2014-02-04; 4 files, -1/+52)
  Tested and implemented on ARM, m68k, PowerPC and SPARC. Other architectures need more work.
* score: Add _Thread_Get_maximum_internal_threads() (Sebastian Huber, 2014-02-04; 1 file, -11/+1)

* score: Add _Workspace_Allocate_aligned() (Sebastian Huber, 2014-02-04; 1 file, -0/+5)

* cpukit/rtems: Add rtems_clock_get_uptime_nanoseconds to the RTEMS API (Chris Johns, 2013-12-24; 1 file, -0/+29)
  Add Timestamp support in the score to return a timestamp in nanoseconds. Add a test. Update the RTEMS API documentation.
* score: Minor _Thread_Dispatch() optimization (Sebastian Huber, 2013-12-02; 1 file, -2/+1)
  It is not necessary to load the executing thread control again after the context switch, since it is an invariant of the executing thread.

* score: Format changes in _Thread_Set_state() (Sebastian Huber, 2013-11-26; 1 file, -8/+8)

* score: Simplify _Thread_queue_Dequeue_priority() (Sebastian Huber, 2013-11-26; 1 file, -2/+2)

* score/rbtree: Remove "unprotected" from API (Sebastian Huber, 2013-11-21; 9 files, -20/+20)

* score/rbtree: Delete protected operations (Sebastian Huber, 2013-11-21; 6 files, -163/+0)
  The user of the red-black tree container must now ensure that at most one thread at once can access an instance.

* scheduler/EDF: Use unprotected insert and extract (Sebastian Huber, 2013-11-21; 3 files, -4/+4)
  Interrupts are disabled by the callers _Thread_Change_priority() and _Thread_Set_transient(), or directly in the scheduler operation. Thus there is no need to use the protected variants.

* heapgetinfo: Free all delayed blocks (Sebastian Huber, 2013-11-18; 1 file, -0/+1)

* smp: Add and use _Assert_Owner_of_giant() (Sebastian Huber, 2013-08-30; 3 files, -13/+27)
  Add and use _ISR_Disable_without_giant() and _ISR_Enable_without_giant() if RTEMS_SMP is defined. On single processor systems the ISR disable/enable was the big hammer which ensured system-wide mutual exclusion. On SMP configurations this no longer works, since other processors do not care about disabled interrupts on this processor and continue to execute freely. On SMP, in addition to ISR disable/enable, an SMP lock must be used. Currently we have only the Giant lock, so we can easily check that ISR disable/enable is used only in the right context.
* score: Add SMP support to _Watchdog_Report_chain() (Sebastian Huber, 2013-08-27; 1 file, -1/+3)

* score: Delete unused function parameter (Sebastian Huber, 2013-08-26; 4 files, -5/+3)

* score: PR2140: Fix _Thread_queue_Process_timeout() (Sebastian Huber, 2013-08-26; 1 file, -8/+37)
  The _Thread_queue_Process_timeout() operation had several race conditions in the event of nested interrupts. Protect the critical sections via disabled interrupts.

* score: PR2140: _Thread_queue_Extract() (Sebastian Huber, 2013-08-26; 3 files, -8/+11)
  Return whether the executing context performed the extract operation, since interrupts may interfere.

* smp: Fix warnings (Sebastian Huber, 2013-08-23; 2 files, -2/+0)

* score: _Thread_queue_Enqueue_with_handler() (Sebastian Huber, 2013-08-23; 9 files, -16/+26)
  Add a thread parameter to _Thread_queue_Enqueue_with_handler() to avoid access to the global _Thread_Executing.

* smp: Delete RTEMS_BSP_SMP_SIGNAL_TO_SELF (Sebastian Huber, 2013-08-21; 1 file, -7/+0)

* smp: Disable restart of threads other than self (Sebastian Huber, 2013-08-20; 1 file, -0/+10)

* smp: Add Deterministic Priority SMP Scheduler (Sebastian Huber, 2013-08-20; 2 files, -9/+213)

* smp: Generalize Simple SMP scheduler (Sebastian Huber, 2013-08-20; 1 file, -85/+75)

* smp: Optimize Simple SMP scheduler (Sebastian Huber, 2013-08-20; 2 files, -33/+105)
  Add a Thread_Control::is_in_the_air field if configured for SMP. This helps to simplify the extract operation and avoids superfluous inter-processor interrupts. Move the processor allocation step into the enqueue operation. Add and use _Scheduler_simple_smp_Get_highest_ready(). Add and use _Scheduler_SMP_Get_lowest_scheduled().
* smp: _Scheduler_simple_smp_Allocate_processor() (Sebastian Huber, 2013-08-20; 1 file, -44/+2)
  Rename _Scheduler_simple_smp_Allocate_processor() to _Scheduler_SMP_Allocate_processor().

* smp: Rename _Scheduler_simple_smp_Start_idle() (Sebastian Huber, 2013-08-20; 2 files, -12/+40)
  Rename _Scheduler_simple_smp_Start_idle() to _Scheduler_SMP_Start_idle().

* smp: Replace Scheduler_simple_smp_Control (Sebastian Huber, 2013-08-20; 1 file, -13/+9)
  Replace Scheduler_simple_smp_Control with Scheduler_SMP_Control. Rename _Scheduler_simple_smp_Instance() to _Scheduler_SMP_Instance().

* score: _Priority_bit_map_Handler_initialization() (Sebastian Huber, 2013-08-20; 2 files, -6/+48)
  Delete _Priority_bit_map_Handler_initialization() and rely on BSS initialization. Move the definitions of _Priority_Major_bit_map and _Priority_Bit_map to a separate file. Move the definition of __log2table to this file as well.

* score: _Scheduler_priority_Ready_queue_initialize() (Sebastian Huber, 2013-08-20; 2 files, -1/+10)
  Move the workspace allocation to _Scheduler_priority_Initialize().

* score: Add _Scheduler_priority_Get_ready_queues() (Sebastian Huber, 2013-08-20; 1 file, -4/+3)
  Add and use _Scheduler_priority_Get_ready_queues().

* score: Add _Scheduler_priority_Get_scheduler_info (Sebastian Huber, 2013-08-20; 3 files, -22/+17)
  Add and use _Scheduler_priority_Get_scheduler_info().

* score: PR2136: Fix _Thread_Change_priority() (Sebastian Huber, 2013-08-20; 10 files, -65/+40)
  Add a call to _Scheduler_Schedule() in a missing path after _Thread_Set_transient() in _Thread_Change_priority(). See also sptests/spintrcritical19. Add a thread parameter to _Scheduler_Schedule(); this parameter is currently unused but may be used in future SMP schedulers. Do the heir selection in _Scheduler_Schedule() and use _Scheduler_Update_heir() for this in the particular scheduler implementation. Add and use _Scheduler_Generic_block().

* score: Per-CPU thread dispatch disable level (Sebastian Huber, 2013-08-09; 10 files, -201/+244)
  Use a per-CPU thread dispatch disable level: instead of one global thread dispatch disable level there is now one instance per processor. This is a major performance improvement for SMP. On non-SMP configurations this may simplify the interrupt entry/exit code.

  The giant lock is still present, but it is now decoupled from the thread dispatching in _Thread_Dispatch(), _Thread_Handler(), _Thread_Restart_self() and the interrupt entry/exit. Access to the giant lock is now available via _Giant_Acquire() and _Giant_Release(). The giant lock is still implicitly acquired via _Thread_Dispatch_decrement_disable_level(), and it is only acquired for high-level operations in interrupt handlers (e.g. release of a semaphore, sending of an event). As a side-effect this change fixes the lost "thread dispatch necessary" indication bug in _Thread_Dispatch().

  A per-CPU thread dispatch disable level greatly simplifies the SMP support for the interrupt entry/exit code, since no spin locks have to be acquired in this area. It is only necessary to get the current processor index and use it to calculate the address of the processor's own per-CPU control. This reduces the interrupt latency considerably.

  All elements for the interrupt entry/exit code are now part of the Per_CPU_Control structure: thread dispatch disable level, ISR nest level and thread dispatch necessary. Nothing else is required (except CPU port specific stuff like on SPARC).
* score: Add and use _Per_CPU_Acquire_all() (Sebastian Huber, 2013-08-09; 2 files, -3/+15)
  Add and use _Per_CPU_Release_all(). The context switch user extensions are invoked in _Thread_Dispatch(). This change is necessary to avoid the giant lock in _Thread_Dispatch().

* smp: Use ISR lock in per-CPU control (Sebastian Huber, 2013-08-09; 1 file, -6/+6)
  Rename _Per_CPU_Lock_acquire() to _Per_CPU_ISR_disable_and_acquire(). Rename _Per_CPU_Lock_release() to _Per_CPU_Release_and_ISR_enable(). Add _Per_CPU_Acquire() and _Per_CPU_Release().

* score/cpu: Add CPU_Per_CPU_control (Sebastian Huber, 2013-08-09; 1 file, -0/+5)
  Add a CPU port specific per-CPU control.

* score: Rename _Scheduler_simple_Update() (Sebastian Huber, 2013-08-08; 2 files, -6/+29)
  Rename _Scheduler_simple_Update() to _Scheduler_default_Update().

* score: Rename _Scheduler_simple_Allocate(), etc. (Sebastian Huber, 2013-08-08; 2 files, -13/+38)
  Rename _Scheduler_simple_Allocate() to _Scheduler_default_Allocate(). Rename _Scheduler_simple_Free() to _Scheduler_default_Free().

* score: Rename _Scheduler_priority_Release_job() (Sebastian Huber, 2013-08-08; 1 file, -10/+9)
  Rename _Scheduler_priority_Release_job() to _Scheduler_default_Release_job().

* smp: Generalize _Thread_Start_multitasking() (Sebastian Huber, 2013-08-05; 2 files, -44/+32)
  Add a context parameter to _Thread_Start_multitasking() and use this function in rtems_smp_secondary_cpu_initialize(). This avoids duplication of code. Fix the missing floating point context initialization in rtems_smp_secondary_cpu_initialize(); it is now performed via _Thread_Start_multitasking().

* score: Use an ISR lock for TOD (Sebastian Huber, 2013-08-01; 5 files, -13/+57)
  Two issues are addressed. 1. On single processor configurations the set/get of the now/uptime timestamps is now consistently protected by ISR disable/enable sequences. Previously nested interrupts could observe partially written values, since 64-bit writes are not atomic on 32-bit architectures in general. This could lead to non-monotonic uptime timestamps. 2. The TOD now/uptime maintenance is now independent of the giant lock. This is the first step to remove the giant lock in _Thread_Dispatch().
* score: Move nanoseconds since last tick support (Sebastian Huber, 2013-08-01; 3 files, -43/+15)
  Move the nanoseconds since last tick support from the Watchdog to the TOD handler. Now the TOD management is encapsulated in the TOD_Control structure.

* score: Delete _TOD_Activate and _TOD_Deactivate (Sebastian Huber, 2013-08-01; 2 files, -3/+0)

* score: Rename tod.h to todimpl.h (Sebastian Huber, 2013-08-01; 14 files, -14/+14)

* score: Add and use _Thread_Update_cpu_time_used() (Sebastian Huber, 2013-08-01; 1 file, -11/+4)
  Fix _times().

* smp: Provide cache optimized Per_CPU_Control (Sebastian Huber, 2013-07-31; 2 files, -5/+9)
  Delete _Per_CPU_Information_p.

* smp: Delete _SMP_Request_other_cores_to_dispatch() (Sebastian Huber, 2013-07-30; 5 files, -30/+9)
  Use an event triggered unicast to inform remote processors about a necessary thread dispatch instead.

* smp: Delete _ISR_Disable_on_this_core(), etc. (Sebastian Huber, 2013-07-30; 3 files, -29/+9)
  Delete _ISR_Enable_on_this_core(), _ISR_Flash_on_this_core(), _ISR_SMP_Disable(), _ISR_SMP_Enable() and _ISR_SMP_Flash(). The ISR disable/enable interface has no parameter to pass a specific object, so only a single global lock object can be implemented with this interface. Using ISR disable/enable as the giant lock on SMP configurations is not feasible. Potentially blocking resource obtain sequences protected by the thread dispatch disable level are subdivided into smaller ISR disabled critical sections. This works on single processor configurations, since there is only one thread of execution that can block. On SMP this is different (imagine a mutex obtained concurrently by different threads on different processors). The thread dispatch disable level is currently used as the giant lock. There is no need to complicate things with this unused interface.

* smp: Delete _ISR_SMP_Initialize() (Sebastian Huber, 2013-07-30; 2 files, -8/+0)

* score: Critical section change in _Thread_Dispatch (Sebastian Huber, 2013-07-30; 1 file, -2/+2)
  If we enter _Thread_Dispatch(), then _Thread_Dispatch_disable_level must be zero. Single processor RTEMS assumes that stores of non-zero values to _Thread_Dispatch_disable_level are observed by interrupts as non-zero values. Move the _Thread_Dispatch_set_disable_level( 1 ) out of the first ISR disabled critical section. If an interrupt happens between _Thread_Dispatch_set_disable_level( 1 ) and _ISR_Disable( level ), the interrupt will observe a non-zero _Thread_Dispatch_disable_level and will not issue a _Thread_Dispatch(), so we can enter the ISR disabled section directly after interrupt processing. This change leads to symmetry between the single processor and SMP configurations.