author     Till Straumann <strauman@slac.stanford.edu>  2008-03-13 19:28:33 +0000
committer  Till Straumann <strauman@slac.stanford.edu>  2008-03-13 19:28:33 +0000
commit     35506215c9a6a7bb3007e70dd491de114aea73cc (patch)
tree       b64c4ec49e237c84a20c8507e288666bc3ae99db /c/src/lib/libcpu/powerpc/new-exceptions/bspsupport/README
parent     2008-03-12 Joel Sherrill <joel.sherrill@oarcorp.com> (diff)
2008-03-13 Till Straumann <strauman@slac.stanford.edu>
* new-exceptions/bspsupport/ppc_exc_asm_macros.h, new-exceptions/bspsupport/ppc_exc.S, new-exceptions/bspsupport/README, new-exceptions/bspsupport/ppc_exc_hdl.c: Thomas Doerfler clarified (thanks!) that raising an exception and executing the 1st instruction is not an atomic operation. I added a fix to the code that checks if a lower-priority interrupt is under way: we now not only test if the 'lock' variable was set but also check if the interrupted PC points to the 'write lock' instruction. Added more comments and updated README.
Diffstat (limited to 'c/src/lib/libcpu/powerpc/new-exceptions/bspsupport/README')
-rw-r--r--  c/src/lib/libcpu/powerpc/new-exceptions/bspsupport/README | 59
1 file changed, 53 insertions(+), 6 deletions(-)
diff --git a/c/src/lib/libcpu/powerpc/new-exceptions/bspsupport/README b/c/src/lib/libcpu/powerpc/new-exceptions/bspsupport/README
index 7fd830bd1e..fc58482382 100644
--- a/c/src/lib/libcpu/powerpc/new-exceptions/bspsupport/README
+++ b/c/src/lib/libcpu/powerpc/new-exceptions/bspsupport/README
@@ -93,7 +93,7 @@ that they could is beyond doubt...):
- some PPCs don't fit into the classic scheme where
the exception vector addresses all were multiples of
- 0x100 (some are spaced as closely as 0x10).
+ 0x100 (some vectors are spaced as closely as 0x10).
The API should not expose vector offsets but only
vector numbers which can be considered an abstract
entity. The mapping from vector numbers to actual
@@ -323,10 +323,6 @@ RACE CONDITION WHEN DEALING WITH CRITICAL INTERRUPTS
.. increase thread-dispatch-disable-level
.. clear 'ee_lock' variable
- The earliest a critical exception could interrupt
- the 'external' exception handler is after the
- 'stw r1, ee_lock@sdarel(r13)' instruction.
-
After the HPI decrements the dispatch-disable level
it checks 'ee_lock' and refrains from performing
a context switch if 'ee_lock' is nonzero. Since
@@ -340,4 +336,55 @@ RACE CONDITION WHEN DEALING WITH CRITICAL INTERRUPTS
b) use an addressing mode that doesn't require
loading any registers. The short-data area
pointer R13 is appropriate.
-
+
+ CAVEAT: unfortunately, this method by itself
+ is *NOT* enough because raising a low-priority
+ exception and executing the first instruction
+ of the handler is *NOT* atomic. Hence, the following
+ could occur:
+
+ 1) LPI is taken
+ 2) PC is saved in SRR0, PC is loaded with
+ address of 'locking instruction'
+ stw r1, ee_lock@sdarel(r13)
+ 3) ==> critical interrupt happens
+ 4) PC (containing address of locking instruction)
+ is saved in CSRR0
+ 5) HPI is dispatched
+
+ For the HPI to correctly handle this situation
+ it does the following:
+
+ a) increase thread-dispatch disable level
+ b) do interrupt work
+ c) decrease thread-dispatch disable level
+ d) if ( dispatch-disable level == 0 )
+ d1) check ee_lock
+ d2) check instruction at *CSRR0
+ d3) do a context switch if necessary ONLY IF
+ ee_lock is NOT set AND *CSRR0 is NOT the
+ 'locking instruction'
+
+ This works because the address of 'ee_lock'
+ is embedded in the locking instruction
+ 'stw r1, ee_lock@sdarel(r13)' and because the
+ registers r1/r13 have a special purpose
+ (stack-pointer, SDA-pointer). Hence it is safe
+ to assume that the particular instruction
+ 'stw r1, ee_lock@sdarel(r13)' never occurs
+ anywhere else.
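+
+ As an illustration only, the check in step d)
+ could look roughly like the following C sketch
+ (the names 'hpi_epilogue', 'lock_insn' and
+ 'thread_dispatch' are made up for this example
+ and are NOT the identifiers used by the actual
+ handler code):
+
+   #include <stdint.h>
+
+   /* word written by the LPI prologue with
+    * 'stw r1, ee_lock@sdarel(r13)'              */
+   extern volatile uint32_t ee_lock;
+   /* opcode of that unique locking instruction,
+    * e.g. read from the handler code itself
+    * during initialization                      */
+   extern const uint32_t    lock_insn;
+   /* stands in for the real thread dispatcher   */
+   extern void thread_dispatch(void);
+
+   void hpi_epilogue(uint32_t dispatch_disable_level,
+                     const uint32_t *csrr0)
+   {
+     if (dispatch_disable_level == 0) {
+       /* d1/d2: dispatch only if the LPI prologue has
+        * neither executed the lock write (ee_lock is
+        * still zero) nor been interrupted right before
+        * executing it (*CSRR0 is not the locking
+        * instruction)                                  */
+       if (ee_lock == 0 && *csrr0 != lock_insn) {
+         /* d3: safe to perform the context switch */
+         thread_dispatch();
+       }
+     }
+   }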
+
+ Another note: this algorithm also makes sure
+ that ONLY nested ASYNCHRONOUS interrupts which
+ enable/disable thread-dispatching and check
+ whether thread-dispatching is required before
+ returning control engage in this locking
+ protocol. It is important that when a critical,
+ asynchronous interrupt interrupts a
+ 'synchronous' exception (which does not disable
+ thread-dispatching), the thread-dispatching
+ operation upon return from the HPI is NOT
+ deferred (because the synchronous handler would
+ never check for a dispatch requirement).
+