Rewrite the _Watchdog_Insert(), _Watchdog_Remove() and
_Watchdog_Tickle() functions to use iterator items to synchronize
concurrent operations. This makes it possible to get rid of the global
variables _Watchdog_Sync_level and _Watchdog_Sync_count which are a
blocking point for scalable SMP solutions.
Update #2307.

Update #2307.

Add a watchdog header parameter to _Watchdog_Remove() to be in line
with the other operations. Add _Watchdog_Remove_ticks() and
_Watchdog_Remove_seconds() for convenience.
Update #2307.

Avoid the usage of the current thread state in
_Thread_queue_Extract_with_return_code() since thread queues should not
know anything about thread states.

Remove the thread queue parameter from _Thread_queue_Extract() since
the current thread queue is stored in the thread control block.

Restructure to avoid large maximum thread dispatch disabled times.

These global variables are obsolete since
65f71f8472fa904ca48b816301ed0810def47001.

This prevents a deadlock situation in the capture engine.

Account for priority changes of threads executing in a foreign
partition. Exchange idle threads in case a victim node uses an idle
thread and the new scheduled node needs an idle thread.

Fix the layout of the common block of Thread_Control and
Thread_Proxy_control. Ensure that the offsets match.

The cpuuse top command now supports the current load: the list of tasks
is ordered by current load rather than by total CPU usage. This lets
you see what is using the processor at any given instant. The ability
to sort on a range of thread values is now supported.
Added memory usage statistics for unified and separate workspace and C
heaps, as well as display of the allocated stack space.
Added a few more command keys to refresh the display, show all tasks in
the system, control the number of lines displayed, and enable a
scrolling mode that does not clear the display on each refresh.
Removed support for tick kernel builds; the tick support in the kernel
is to be removed.

Avoid a collision with the PAGE_SIZE defined by <sys/param.h>.

Get rid of the _CPU_cache_invalidate_instruction_range() declaration,
as it doesn't make sense here.

Closes #2329

A thread join is twofold. There is one thread that exits and an
arbitrary number of threads that wait for the thread exit (a
one-to-many relation). The exiting thread may want to wait for a thread
that wants to join its exit (STATES_WAITING_FOR_JOIN_AT_EXIT in
_POSIX_Thread_Exit()). On the other side, we need a thread queue for
all the threads that wait for the exit of one particular thread
(STATES_WAITING_FOR_JOIN in pthread_join()).
Update #2035.

There may be a way to reduce the memory requirements, but it will
require time to ensure the math is right and that it passes on all
targets. At present, it fails on 22 BSPs which run on simulators.

Move some code into _CORE_mutex_Seize_interrupt_blocking() so that the
thread queue handling is in one place.

Use a parameter for _Thread_queue_Enqueue() instead to reduce memory
usage.

Move the linear search into a critical section to avoid corruption due
to higher priority interrupts. The interrupt disable time now depends
on the count of pending messages.
Close #2328.

This function was identical to _Thread_queue_Timeout(). This makes
_Thread_queue_Enqueue_with_handler() obsolete.

It makes no sense to use this indirection since the type for timeout
values is Watchdog_Interval.

Otherwise there is a risk that a CPU misses a cache manager message
from another CPU and the test hangs.

The Objects_Control::Lock was a software layer violation. It worked
only for the threads since they are somewhat special.
Update #2273.

Remove _Thread_Acquire() and _Thread_Acquire_for_executing(). Add
utility functions for the default thread lock. Use the default thread
lock for the RTEMS events. There is no need to disable thread
dispatching and acquire the Giant lock in _Event_Timeout(), since this
was already done by the caller.
Update #2273.

Return the current processor to be in line with
_Thread_Disable_dispatch().

Do not disable thread dispatching and do not acquire the Giant lock.
This makes it possible to use this object get variant for fine grained
locking.
Update #2273.

Use ISR_lock_Context instead of ISR_Level to allow use of ISR locks for
low-level locking.
Update #2273.

The or1ksim BSP was initially named after the or1ksim simulator and was
intended to run only there. But now it can also run on QEMU, jor1k and
real FPGA boards without modification, so it makes more sense to give
it a generic name like generic_or1k.