path: root/cpukit/score/cpu

Commit log (subject, author, date, files changed, lines removed/added):
* score: Use atomic API for SMP lock (Sebastian Huber, 2014-02-17, 15 files, -544/+0)
  Use a ticket lock implementation based on atomic operations. Delete the CPU port specific SMP lock implementations.
* sparc: Add atomic support for SPARC V8 (Sebastian Huber, 2014-02-17, 2 files, -0/+203)
  Use the SWAP instruction with one lock for the system in the SMP case.
* sparc: Add LEON3_ASR17_PROCESSOR_INDEX_SHIFT (Sebastian Huber, 2014-02-14, 1 file, -0/+14)
  Add _LEON3_Get_current_processor().
* score: Remove volatile from asm statements (Sebastian Huber, 2014-02-14, 2 files, -2/+2)
  The instructions to get the current processor index have no side-effects.
* score: Add CPU counter support (Sebastian Huber, 2014-02-14, 34 files, -0/+288)
  Add a CPU counter interface to allow access to a free-running counter, which is useful to measure short time intervals. This can be used, for example, to enable profiling of critical low-level functions. Add two busy wait functions, rtems_counter_delay_ticks() and rtems_counter_delay_nanoseconds(), implemented via the CPU counter.
* sparc: Increase CPU_STRUCTURE_ALIGNMENT to 32 (Sebastian Huber, 2014-02-13, 1 file, -1/+1)
  Recent LEON4 systems use a cache line size of 32 bytes.
* sparc: Save/restore only non-volatile context (Sebastian Huber, 2014-02-12, 2 files, -64/+74)
  _CPU_Context_switch() is a normal function call. According to the "SYSTEM V APPLICATION BINARY INTERFACE - SPARC Processor Supplement", Third Edition, the following registers are volatile (the caller must assume that the register contents are destroyed by the callee): g1, o0, o1, o2, o3, o4, o5. Drop these registers from the context. Ensure that the offset defines match the structure offsets.
* score: _CPU_Context_switch_to_first_task_smp() (Sebastian Huber, 2014-02-05, 4 files, -23/+0)
  Delete _CPU_Context_switch_to_first_task_smp() and use _CPU_Context_restore() instead.
* Add thread-local storage (TLS) support (Sebastian Huber, 2014-02-04, 37 files, -52/+270)
  Tested and implemented on ARM, m68k, PowerPC and SPARC. Other architectures need more work.
* arm: Add ARMv7-M SHCSR register bits (Sebastian Huber, 2014-01-10, 1 file, -0/+6)
* arm: Fix set but not used warning (Sebastian Huber, 2013-12-16, 1 file, -1/+2)
* no_cpu/cpusmplock.h: Clean up to be compilable (Joel Sherrill, 2013-12-14, 1 file, -0/+4)
* arm: Clear reservations (Sebastian Huber, 2013-12-03, 2 files, -1/+2)
  Recent GCC versions use atomic operations based on load/store exclusive in the C++ library.
* nios2: Typos (Sebastian Huber, 2013-11-26, 1 file, -2/+2)
* powerpc: Add r2 to CPU context (Sebastian Huber, 2013-11-18, 3 files, -11/+13)
  The r2 register may be used for thread-local storage.
* powerpc: Do not validate reserved XER bits (Sebastian Huber, 2013-11-18, 1 file, -2/+2)
* no_cpu/.../cpu.h: Comment improvement (Joel Sherrill, 2013-11-14, 1 file, -0/+5)
* mips/.../cpu.h: Comment improvement (Joel Sherrill, 2013-11-14, 1 file, -1/+3)
* arm: Fix inconsistent define usage (Sebastian Huber, 2013-09-06, 1 file, -2/+3)
* nios2: Include proper header file (Sebastian Huber, 2013-09-03, 1 file, -1/+1)
* score: Simplify <rtems/score/cpuatomic.h> (WeiY, 2013-08-28, 17 files, -510/+102)
  Add proper license and copyright.
* arm: Make barrier operations more visible (Sebastian Huber, 2013-08-22, 1 file, -10/+15)
* powerpc: Fix _CPU_Context_validate() (Sebastian Huber, 2013-08-13, 1 file, -1/+1)
* sparc: Make _CPU_ISR_Dispatch_disable per-CPU (Sebastian Huber, 2013-08-09, 2 files, -17/+21)
  This variable must be available for each processor in the system.
* sparc: Move _CPU_Context_switch(), etc. (Sebastian Huber, 2013-08-09, 1 file, -224/+0)
  Move the _CPU_Context_switch(), _CPU_Context_restore() and _CPU_Context_switch_to_first_task_smp() code since the method to obtain the processor index is BSP specific.
* arm: Per-CPU thread dispatch disable (Sebastian Huber, 2013-08-09, 1 file, -51/+25)
  Interrupt support for the per-CPU thread dispatch disable level.
* score: Per-CPU thread dispatch disable level (Sebastian Huber, 2013-08-09, 4 files, -17/+14)
  Use a per-CPU thread dispatch disable level: instead of one global thread dispatch disable level there is now one instance per processor. This is a major performance improvement for SMP. On non-SMP configurations this may simplify the interrupt entry/exit code.

  The giant lock is still present, but it is now decoupled from thread dispatching in _Thread_Dispatch(), _Thread_Handler(), _Thread_Restart_self() and the interrupt entry/exit. Access to the giant lock is now available via _Giant_Acquire() and _Giant_Release(). The giant lock is still implicitly acquired via _Thread_Dispatch_decrement_disable_level(). The giant lock is only acquired for high-level operations in interrupt handlers (e.g. release of a semaphore, sending of an event). As a side-effect this change fixes the lost "thread dispatch necessary" indication bug in _Thread_Dispatch().

  A per-CPU thread dispatch disable level greatly simplifies the SMP support for the interrupt entry/exit code since no spin locks have to be acquired in this area. It is only necessary to get the current processor index and use it to calculate the address of that processor's own per-CPU control. This reduces the interrupt latency considerably.

  All elements for the interrupt entry/exit code are now part of the Per_CPU_Control structure: thread dispatch disable level, ISR nest level and thread dispatch necessary. Nothing else is required (except CPU port specific state, e.g. on SPARC).
* score/cpu: Add CPU_Per_CPU_control (Sebastian Huber, 2013-08-09, 18 files, -2/+123)
  Add CPU port specific per-CPU control.
* arm: Fix ISR level context initialization (Sebastian Huber, 2013-08-05, 1 file, -1/+2)
* arm: Fix CPU_MODES_INTERRUPT_MASK (Sebastian Huber, 2013-08-05, 2 files, -4/+6)
  The set of interrupt levels must be a contiguous range of non-negative integers starting at zero.
* bfin/cpu.h: Remove duplicate definition of CPU_SIMPLE_VECTORED_INTERRUPTS (Joel Sherrill, 2013-08-01, 1 file, -13/+0)
* score/cpu: Fix _CPU_SMP_lock_Acquire() (Sebastian Huber, 2013-07-30, 2 files, -2/+2)
  Avoid infinite loops due to compiler optimization.
* score/i386: Fix _CPU_Fatal_halt() (Sebastian Huber, 2013-07-30, 1 file, -1/+2)
* Include missing <rtems/score/threaddispatch.h> (Sebastian Huber, 2013-07-26, 2 files, -4/+3)
* score: PR1782: CPU_USE_DEFERRED_FP_SWITCH (Sebastian Huber, 2013-07-23, 2 files, -2/+10)
  Do not redefine CPU_USE_DEFERRED_FP_SWITCH.
* smp: Rename _CPU_Processor_event_receive() (Sebastian Huber, 2013-07-17, 5 files, -6/+6)
  Rename to _CPU_SMP_Processor_event_receive().
* smp: Rename _CPU_Processor_event_broadcast() (Sebastian Huber, 2013-07-17, 5 files, -6/+6)
  Rename to _CPU_SMP_Processor_event_broadcast().
* smp: Add and use _CPU_SMP_Send_interrupt() (Sebastian Huber, 2013-07-17, 5 files, -0/+17)
  Delete bsp_smp_interrupt_cpu().
* smp: Add and use _CPU_SMP_Get_current_processor() (Sebastian Huber, 2013-07-17, 5 files, -0/+46)
  Add and use _SMP_Get_current_processor() and rtems_smp_get_current_processor(). Delete bsp_smp_interrupt_cpu(). Change the type of the current processor index from int to uint32_t to match the _SMP_Processor_count type.
* Update all architectures to new atomic implementation (WeiY, 2013-07-17, 17 files, -911/+79)
* arm: Fix exception frame information (Sebastian Huber, 2013-07-16, 1 file, -1/+1)
  Use the right stack pointer value for the exception frame. Assume that we do not have a double abort exception.
* bsps/arm: Fix printk args to match format (Ric Claus, 2013-07-15, 1 file, -7/+7)
* powerpc: Fix Altivec support (Sebastian Huber, 2013-06-26, 1 file, -4/+4)
  Use the right context.
* arm: Fix default exception prologues (Chris Johns, 2013-06-21, 1 file, -0/+6)
* documentation: Fix Doxygen comments (Sebastian Huber, 2013-06-14, 1 file, -4/+4)
* score: Add and use _Thread_Dispatch_is_enabled() (Sebastian Huber, 2013-06-14, 2 files, -2/+2)
  Delete _Thread_Dispatch_in_critical_section() and _Thread_Is_dispatching_enabled().
* smp: Add ARM support (Sebastian Huber, 2013-05-31, 6 files, -2/+180)
* smp: Add PowerPC support (Sebastian Huber, 2013-05-31, 4 files, -1/+116)
* smp: New SMP lock API (Sebastian Huber, 2013-05-31, 11 files, -27/+334)
  Move the SMP lock implementation to the CPU port. An optimal SMP lock implementation is highly architecture dependent; for example, the memory models may be fundamentally different.

  The new SMP lock API has a flaw: it does not provide the ability to use a local context for acquire and release pairs. Such a context is necessary to implement, for example, Mellor-Crummey and Scott (MCS) locks. The SMP lock is currently used in _Thread_Disable_dispatch() and _Thread_Enable_dispatch() and turns them into a giant lock acquire and release. Since these functions do not pass state information via a local context, there is currently no use case for such a feature.
* smp: Simplify SMP initialization sequence (Sebastian Huber, 2013-05-29, 3 files, -0/+51)
  Delete bsp_smp_wait_for(). Other parts of the system work without timeout, e.g. the spinlocks. Using a timeout here does not make the system more robust.

  Delete bsp_smp_cpu_state and replace it with Per_CPU_State, which follows the Score naming conventions. Add _Per_CPU_Change_state() and _Per_CPU_Wait_for_state() functions to change and observe states. Use Per_CPU_State in Per_CPU_Control instead of the anonymous integer.

  Add _CPU_Processor_event_broadcast() and _CPU_Processor_event_receive() functions provided by the CPU port. Use these functions in _Per_CPU_Change_state() and _Per_CPU_Wait_for_state(). Add a prototype for _SMP_Send_message().

  Delete the RTEMS_BSP_SMP_FIRST_TASK message. The first context switch is now performed in rtems_smp_secondary_cpu_initialize(). Issuing the first context switch in the context of the inter-processor interrupt is not possible on systems with a modern interrupt controller. Such an interrupt controller usually requires a handshake protocol with interrupt acknowledge and end of interrupt signals. A direct context switch in an interrupt handler circumvents the interrupt processing epilogue and may leave the system in an inconsistent state.

  Release the lock in rtems_smp_process_interrupt() even if no message was delivered. This prevents deadlock of the system.

  Simplify and format _SMP_Send_message(), _SMP_Request_other_cores_to_perform_first_context_switch(), _SMP_Request_other_cores_to_dispatch() and _SMP_Request_other_cores_to_shutdown().