Age | Commit message | Author |
|
Check that data cache snooping exists and is enabled on all cores.
|
|
|
|
This patch removes migration of a running thread. This may result
in less than optimal run sets.
|
|
Similar to the task priority option, the new CPU affinity
option is first controlled by the rpciod-specific rpciodCpuset
option, then by the global network task configuration, and
finally falls back to not setting the affinity (all CPUs).
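A minimal sketch of the fallback chain described above, using stand-in types (the real rpciodCpuset option and the global network task configuration live in the RTEMS networking code; the names below are illustrative):

```c
#include <assert.h>
#include <stddef.h>

/* Stand-in for a CPU set; NULL means "no affinity set" (all CPUs). */
typedef struct { unsigned mask; } cpu_set_stub;

/* Hypothetical lookup order: the rpciod-specific option wins, then the
 * global network task option, then no affinity at all. */
static const cpu_set_stub *select_affinity(const cpu_set_stub *rpciod_cpuset,
                                           const cpu_set_stub *global_cpuset)
{
  if (rpciod_cpuset != NULL)
    return rpciod_cpuset;  /* rpciodCpuset option */
  if (global_cpuset != NULL)
    return global_cpuset;  /* global network task configuration */
  return NULL;             /* fall back: do not set affinity (all CPUs) */
}
```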
|
|
|
|
|
|
On SMP, rtems_interrupt_lock_context must be used. Most tests fail with a
NULL pointer exception when exiting, except on NGMP, where main memory is
at 0x00000000.
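On SMP, an ISR-safe critical section pairs an rtems_interrupt_lock with a per-acquisition rtems_interrupt_lock_context. The sketch below models only that pairing with stub types so it is self-contained; the real types and acquire/release functions come from the RTEMS headers:

```c
#include <assert.h>
#include <stdbool.h>

/* Stubs modelling the shape of the RTEMS SMP interrupt lock API. */
typedef struct { bool held; } interrupt_lock_stub;
typedef struct { bool irqs_disabled; } interrupt_lock_context_stub;

static void lock_acquire(interrupt_lock_stub *lock,
                         interrupt_lock_context_stub *ctx)
{
  ctx->irqs_disabled = true; /* disable local interrupts, save state in ctx */
  lock->held = true;         /* take the SMP spin lock */
}

static void lock_release(interrupt_lock_stub *lock,
                         interrupt_lock_context_stub *ctx)
{
  lock->held = false;         /* release the spin lock */
  ctx->irqs_disabled = false; /* restore interrupt state from ctx */
}
```

The per-use context is what single-core code often omitted, which is why such tests break on SMP.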
|
|
Manipulating the interrupt control registers directly instead
of going through the interrupt layer can be deceiving.
|
|
With this patch it is possible to compile a kernel that works on
both FPU and non-FPU SPARC systems. The SPARC PSR register contains
a bit indicating whether an FPU is present.
The increase in footprint is minimal.
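A sketch of the check, assuming the SPARC V8 PSR layout in which EF (enable floating-point) is bit 12. Reading the PSR itself requires the privileged rd %psr instruction, so here the register value is passed in:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define SPARC_PSR_EF (UINT32_C(1) << 12) /* V8 PSR enable-floating-point bit */

/* Decide at run time whether the FPU context handling may be used. */
static bool psr_fpu_enabled(uint32_t psr)
{
  return (psr & SPARC_PSR_EF) != 0;
}
```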
|
|
|
|
The LEON2 and ERC32 map the new macros to CPU0 since they do not
support SMP. On the LEON3, a specific CPU's interrupt controller
registers can be modified using the macros.
|
|
The LEON3 BSP has support for up to 8 termios consoles; the
LEON3-FT GR712RC uses 6 UARTs.
This does not take the BSP maximum number of devices into account;
instead it is hardcoded to 6. This patch increases the maximum
number of devices of DEVFS04 from 6 to 10.
|
|
The printf() causing the problem is removed temporarily. Why
this problem occurs must be analysed in more depth. The stack
overflows when the signal handler is called. The trigger might
be related to a slow UART, but I believe the actual problem is
something else.
The following steps can be seen:
1. The thread is switched in and a call frame is added for the
signal handler.
2. The signal handler calls printf().
3. A semaphore is taken, which blocks the signal handler.
4. The task is switched in again, but now it starts executing
the signal handler *again*. Jump to 1. After a couple of
loops the stack overflows.
It might be that systems with a larger UART FIFO or a faster
UART are not affected.
|
|
.. according to the maximum number of termios ports, which is
8. Since LEON3 uses Plug & Play to find how many UARTs are
present, we must make sure the worst case works.
The current maximum of 4 free nodes caused, for example, the
GR712RC with its 6 UARTs to fail during the devfs02 test.
|
|
This patch adds a default network task CPU affinity configuration
option. The network drivers have the option to create their own
daemon tasks with a custom CPU affinity set, or to rely on the
default set.
|
|
This code is confusing since LEON3 SMP requires
-mcpu=leon3 and _CPU_SMP_Get_current_processor()
is declared inline in cpu.h.
|
|
These single-core tests failed to build on SMP for the LEON.
|
|
|
|
In order to support older toolchains and LEON3 v7 systems, the
-mcpu=leon3 flag cannot be used. The SMP kernel, however, requires
-mcpu=leon3 for the CAS support, which is only present in GCC 4.8.3
and GCC 4.9.
|
|
By removing the bsp_reset() mechanism and instead relying on the
CPU_Fatal_halt() routine, both SMP and single-core systems halt by
updating the _Internal_errors_What_happened structure and setting
the state to SYSTEM_STATE_TERMINATED (the generic way). This is
better for test scripts and debuggers, which can generically look
into why the OS stopped.
For SMP systems, only the fatal-reporting CPU waits until all other
CPUs are powered down (with a time-out of one clock tick). The
reason for a fatal stop may be that CPU0 was soft-locked up, so we
can never trust CPU0 to do the halt for us.
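The generic termination path above can be sketched with stub types; the real symbols are _Internal_errors_What_happened and SYSTEM_STATE_TERMINATED, and the multi-CPU wait is only noted in a comment here:

```c
#include <assert.h>
#include <stdint.h>

/* Stubs modelling the generic "record why we stopped" termination path. */
typedef enum { STATE_UP, STATE_TERMINATED } system_state_stub;

typedef struct {
  int      source; /* which subsystem raised the fatal error */
  uint32_t code;   /* the fatal error code */
} internal_errors_stub;

static internal_errors_stub what_happened;
static system_state_stub    system_state = STATE_UP;

static void fatal_halt(int source, uint32_t code)
{
  what_happened.source = source;   /* record the source ...            */
  what_happened.code   = code;     /* ... and code for test scripts    */
  system_state = STATE_TERMINATED; /* the generic terminated state     */
  /* The reporting CPU would now wait (up to one clock tick) for the
   * other CPUs to power down; omitted in this single-threaded sketch. */
}
```

A debugger or test script can then read the recorded source and code instead of parsing BSP-specific reset behaviour.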
|
|
The Fatal_halt handler now has two options: either halt as
before, or enter the system error state to return to a
debugger or simulator. The exit code is now also
propagated to the debugger, which is very useful for
testing.
The CPU_Fatal_halt handler was split in two, since only the
LEON3 supports CPU power-down.
The LEON3 halt now uses the power-down instruction to save
CPU power. This does not stop a potential watchdog timer
from expiring.
|
|
|
|
Instead of calling the system call TA instruction directly, it
is better practice to isolate the trap implementation in the
system call functions.
BSP_fatal_return() should always exist, regardless of the SPARC
CPU.
|
|
Without the source, the error code does not say that much.
Let it be up to the CPU/BSP to determine the error code
reported on fatal shutdown.
This patch does not change the current behaviour; it just
adds the option to handle the source of the fatal halt.
|
|
|
|
The _CPU_Context_Restart_self() implementations usually assume that self
context is executing.
FIXME: We have a race condition in _Thread_Start_multitasking() in case
another thread already performed scheduler operations and moved the heir
thread to another processor. The time frame for this is likely too
small to be practically relevant.
|
|
Close the thread object in _Thread_Make_zombie() so that all blocking
operations that use _Thread_Get() in the corresponding release directive
can find a terminating thread and can complete the operation.
|
|
Add a chain node to the scheduler node to decouple the thread and
scheduler nodes. It is now possible to enqueue a thread in a thread
wait queue and use its scheduler node at the same time for other
threads, e.g. a resource owner.
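The decoupling can be sketched with stub structures (the real chain node lives in the RTEMS chain implementation; field names here are illustrative): the scheduler node carries its own chain node, distinct from the node the thread uses in a wait queue.

```c
#include <assert.h>

/* Stub doubly-linked chain node, as in the RTEMS chain implementation. */
typedef struct chain_node_stub {
  struct chain_node_stub *next;
  struct chain_node_stub *previous;
} chain_node_stub;

/* The scheduler node has its own chain node, so it can sit on a ready
 * or helping chain independently of the thread's wait-queue node. */
typedef struct {
  chain_node_stub node;  /* links the scheduler node itself */
  int             owner; /* stand-in for the owning thread reference */
} scheduler_node_stub;

typedef struct {
  chain_node_stub     wait_node; /* thread enqueued in a wait queue */
  scheduler_node_stub scheduler; /* usable for another thread meanwhile */
} thread_stub;
```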
|
|
This reduces the API to the minimum data structures to maximize the
re-usability.
|
|
Add Thread_Scheduler_control to collect scheduler related fields of the
TCB.
|
|
Remove the scheduler parameter from most high level scheduler operations
like
- _Scheduler_Block(),
- _Scheduler_Unblock(),
- _Scheduler_Change_priority(),
- _Scheduler_Update_priority(),
- _Scheduler_Release_job(), and
- _Scheduler_Yield().
This simplifies the usage of the scheduler operations.
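The shape of the change, sketched with stub types: instead of the caller passing the scheduler, the operation derives it from the thread. The names below are illustrative, not the real kernel signatures:

```c
#include <assert.h>

typedef struct scheduler_stub {
  int blocked_count; /* counts block operations, for the test below */
} scheduler_stub;

typedef struct {
  scheduler_stub *scheduler; /* the scheduler owning this thread */
} thread_stub;

/* Before: void scheduler_block(scheduler_stub *s, thread_stub *t);
 * After: the scheduler is looked up from the thread itself. */
static void scheduler_block(thread_stub *t)
{
  scheduler_stub *s = t->scheduler; /* was a separate parameter */
  s->blocked_count++;               /* stand-in for the real block op */
}
```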
|
|
Add and use SCHEDULER_OPERATION_DEFAULT_GET_SET_AFFINITY.
|
|
|
|
Suppose we have two tasks A and B and two processors. Task A is about
to delete task B. Now task B calls rtems_task_wake_after(1) on the
other processor. Task B blocks on the Giant lock. Task A progresses
with the task B deletion until it has to wait for termination. Now
task B obtains the Giant lock, sets its state to STATES_DELAYING,
initializes its watchdog timer and waits. Eventually
_Thread_Delay_ended() is called, but now _Thread_Get() returns NULL
since the thread is already marked as deleted. Thus task B remains
forever in the STATES_DELAYING state.
Instead of passing the thread identifier, use the thread control block
directly via the watchdog user argument. This also makes
_Thread_Delay_ended() a bit more efficient.
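The fix can be sketched with stubs: the watchdog stores the TCB pointer as its user argument, so the timeout routine never performs an id lookup that can fail for a deleted thread. Types and names below are illustrative:

```c
#include <assert.h>

/* Stub watchdog: stores an opaque user argument for the timeout routine. */
typedef struct {
  void (*routine)(void *user);
  void *user;
} watchdog_stub;

typedef struct { int state; } thread_stub; /* 1 = delaying, 0 = ready */

/* The timeout routine receives the TCB pointer directly; there is no
 * _Thread_Get() lookup by identifier, so a thread already marked as
 * deleted is still found and unblocked. */
static void delay_ended(void *user)
{
  thread_stub *thread = user; /* the TCB itself, not an id */
  thread->state = 0;          /* leave the delaying state */
}
```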
|
|
|
|
|
|
It is used in combination with the inode number to uniquely identify a
file system node in the system.
|
|
|
|
Delete _Scheduler_priority_Get_scheduler_info().
|
|
The _Scheduler_Yield() was called by the executing thread with thread
dispatching disabled and interrupts enabled. The rtems_task_suspend()
is explicitly allowed in ISRs:
http://rtems.org/onlinedocs/doc-current/share/rtems/html/c_user/Interrupt-Manager-Directives-Allowed-from-an-ISR.html#Interrupt-Manager-Directives-Allowed-from-an-ISR
Unlike the other scheduler operations, the locking was performed inside
the operation. This led to the following race condition. Suppose an
ISR suspends the executing thread right before the yield scheduler
operation. Now the executing thread is no longer in the set of ready
threads. The typical scheduler operations did not check the thread
state and would now extract the thread again and enqueue it,
corrupting the data structures.
Add _Thread_Yield() and do the scheduler yield operation with interrupts
disabled. This has a negligible effect on the interrupt latency.
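A sketch of the fix with stubs: the whole yield runs with interrupts disabled, and the thread is only re-enqueued if it is still ready, so an ISR suspending it just beforehand is observed. The names and the boolean interrupt flag are illustrative:

```c
#include <assert.h>
#include <stdbool.h>

static bool irqs_enabled = true; /* stand-in for the CPU interrupt flag */

typedef struct {
  bool ready;  /* still in the set of ready threads? */
  int  yields; /* how often the scheduler yield operation ran */
} thread_stub;

static void thread_yield(thread_stub *executing)
{
  bool restore = irqs_enabled;
  irqs_enabled = false;   /* disable interrupts around the whole yield */
  if (executing->ready) { /* an ISR may have suspended us just before */
    executing->yields++;  /* safe to extract and re-enqueue */
  }
  irqs_enabled = restore; /* restore the previous interrupt state */
}
```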
|
|
|
|
|
|
These functions are used only via the function pointers in the generic
SMP scheduler implementation. Provide them as static inline so that the
compiler can optimize more easily.
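The pattern looks roughly like this: a static inline helper in a header inlines at direct call sites, while the generic SMP scheduler can still take its address for the operation table. The names are illustrative:

```c
#include <assert.h>

/* Static inline helper: inlined at direct call sites, but its address
 * can still be taken for the generic scheduler's operation table. */
static inline int move_from_ready_to_scheduled(int x)
{
  return x + 1; /* stand-in for the real chain manipulation */
}

typedef struct {
  int (*move)(int); /* generic SMP scheduler operation slot */
} scheduler_ops_stub;

static int run_generic(const scheduler_ops_stub *ops, int x)
{
  return ops->move(x); /* indirect call through the operation table */
}
```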
|
|
This helps to avoid untestable code for the normal SMP schedulers.
|
|
|
|
|
|
|
|
|
|
This scheduler attempts to account for needed thread migrations caused
as a side-effect of a thread state, affinity, or priority change operation.
This scheduler has its own allocate_processor handler named
_Scheduler_SMP_Allocate_processor_exact() because
_Scheduler_SMP_Allocate_processor() attempts to prevent an executing
thread from moving off its current CPU without considering affinity.
Without this, the scheduler makes all the right decisions and then
they are discarded at the end.
==Side Effects of Adding This Scheduler==
Added a Thread_Control * parameter to the Scheduler_SMP_Get_highest_ready
type so methods looking for the highest ready thread can filter by the
processor on which the blocking thread resides. This allows affinity to
be considered. Simple Priority SMP and Priority SMP ignore this parameter.
+ Added a get_lowest_scheduled argument to _Scheduler_SMP_Enqueue_ordered().
+ Added an allocate_processor argument to the following methods:
- _Scheduler_SMP_Block()
- _Scheduler_SMP_Enqueue_scheduled_ordered()
+ schedulerprioritysmpimpl.h is a new file with prototypes for methods
which were formerly static in schedulerprioritysmp.c but now need to
be public to be shared with this scheduler.
NOTE:
_Scheduler_SMP_Get_lowest_ready() appears to have a path which would
allow it to return NULL. Previously, _Scheduler_SMP_Enqueue_ordered()
would have asserted on it. If it cannot return NULL,
_Scheduler_SMP_Get_lowest_ready() should have an assertion.
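The core idea of an affinity-aware victim search can be sketched as follows: scan only the CPUs in the thread's affinity mask and pick the one running the least important thread, with no shortcut that pins the executing thread to its current CPU. Everything here (names, priority encoding) is illustrative, not the kernel's implementation:

```c
#include <assert.h>
#include <stdint.h>

#define CPU_COUNT 4

/* prio_on_cpu[i] is the priority of the thread scheduled on CPU i
 * (higher number = less important, so a better victim). Returns the
 * chosen CPU, or -1 if the affinity mask excludes every CPU. */
static int find_lowest_scheduled(const int prio_on_cpu[CPU_COUNT],
                                 uint32_t affinity_mask)
{
  int victim = -1;
  for (int cpu = 0; cpu < CPU_COUNT; ++cpu) {
    if ((affinity_mask & (UINT32_C(1) << cpu)) == 0)
      continue; /* thread may not run here: filter by affinity */
    if (victim < 0 || prio_on_cpu[cpu] > prio_on_cpu[victim])
      victim = cpu; /* least important scheduled thread so far */
  }
  return victim; /* exact allocation: no "keep executing thread" shortcut */
}
```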
|
|
|