
SeqCst around critical sections. #37

Open
perlindgren opened this issue Jun 13, 2020 · 0 comments
The low-level lock operation resides in src/export.rs and implements mutual exclusion by manipulating either the global interrupt enable (interrupt::free) or, on armv7m, by masking interrupts through the BASEPRI register.

In order to manipulate the underlying HW, we need to perform a few ARM assembly instructions, and here is the crux: inline asm is currently not stabilized (see rust-lang/rfcs#2873 for ongoing work).

In the meantime, to get this working with the current stable toolchain, we use a "hack": the actual assembly routines are assembled separately and linked with the application.

Although not stated explicitly (at least not in https://doc.rust-lang.org/nomicon/ffi.html), it makes sense to assume that SeqCst ordering is ensured for FFI calls, since they can (as in our case) have side effects.

As a proof of concept, the ordering branch implements compiler_fence(Ordering::SeqCst) around critical sections. This places a "compiler fence" on each side of the closure executed as the critical section for the locked resource.

```rust
#[cfg(armv7m)]
#[inline(always)]
pub unsafe fn lock<T, R>(
    ptr: *mut T,
    priority: &Priority,
    ceiling: u8,
    nvic_prio_bits: u8,
    f: impl FnOnce(&mut T) -> R,
) -> R {
    let current = priority.get();

    if current < ceiling {
        if ceiling == (1 << nvic_prio_bits) {
            priority.set(u8::max_value());
            let r = interrupt::free(|_| {
                compiler_fence(Ordering::SeqCst);
                let r = f(&mut *ptr);
                compiler_fence(Ordering::SeqCst);
                r
            });
            priority.set(current);
            r
        } else {
            priority.set(ceiling);
            basepri::write(logical2hw(ceiling, nvic_prio_bits));
            compiler_fence(Ordering::SeqCst);
            let r = f(&mut *ptr);
            compiler_fence(Ordering::SeqCst);
            basepri::write(logical2hw(current, nvic_prio_bits));
            priority.set(current);
            r
        }
    } else {
        f(&mut *ptr)
    }
}
```

and

```rust
#[cfg(not(armv7m))]
#[inline(always)]
pub unsafe fn lock<T, R>(
    ptr: *mut T,
    priority: &Priority,
    ceiling: u8,
    _nvic_prio_bits: u8,
    f: impl FnOnce(&mut T) -> R,
) -> R {
    let current = priority.get();

    if current < ceiling {
        priority.set(u8::max_value());
        let r = interrupt::free(|_| {
            compiler_fence(Ordering::SeqCst);
            let r = f(&mut *ptr);
            compiler_fence(Ordering::SeqCst);
            r
        });
        priority.set(current);
        r
    } else {
        f(&mut *ptr)
    }
}
```
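For reference, here is a host-runnable sketch of how such a lock is exercised. `Priority` is modeled as a `Cell<u8>` and `interrupt::free` is replaced by a plain closure call (`interrupt_free`), so everything except the fences and the ceiling logic is a stand-in for the real target code:

```rust
use core::cell::Cell;
use core::sync::atomic::{compiler_fence, Ordering};

// Host stand-in for the RTIC Priority wrapper.
struct Priority(Cell<u8>);
impl Priority {
    fn get(&self) -> u8 { self.0.get() }
    fn set(&self, p: u8) { self.0.set(p) }
}

// Host stand-in for cortex_m::interrupt::free (no interrupts to mask here).
fn interrupt_free<R>(f: impl FnOnce() -> R) -> R { f() }

// Mirrors the non-armv7m lock above, minus the hardware access.
unsafe fn lock<T, R>(
    ptr: *mut T,
    priority: &Priority,
    ceiling: u8,
    f: impl FnOnce(&mut T) -> R,
) -> R {
    let current = priority.get();
    if current < ceiling {
        priority.set(u8::max_value());
        let r = interrupt_free(|| {
            compiler_fence(Ordering::SeqCst);
            // SAFETY: the ceiling protocol guarantees exclusive access here.
            let r = f(unsafe { &mut *ptr });
            compiler_fence(Ordering::SeqCst);
            r
        });
        priority.set(current);
        r
    } else {
        f(unsafe { &mut *ptr })
    }
}

fn main() {
    let mut shared: u32 = 0;
    let prio = Priority(Cell::new(1));
    let r = unsafe { lock(&mut shared as *mut u32, &prio, 3, |s| { *s += 1; *s }) };
    assert_eq!(r, 1);
    assert_eq!(prio.get(), 1); // priority restored after the critical section
}
```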

Initial measurements verify no added overhead due to the explicit fences.

Is it needed? Likely not.
Does it hurt? Likely not.

What about aggressive LTO optimization? As long as the semantic information is preserved throughout compilation and link-time optimization, nothing bad should happen. What this experiment demonstrates is that it is trivial to put a safeguard around our critical sections. Under the assumed FFI semantics (which already enforce SeqCst), the explicit fences should not have any adverse effect on possible optimizations, as the SeqCst ordering is effectively already there.

Can we do better in the future?

Well, maybe. What we need to enforce is that the data structure passed to the closure is accessed only from within the critical section (which is enforced by means of the HW manipulation). So a more fine-grained consistency model might be possible to adopt. The C++/LLVM memory model focuses on the ordering of atomics; in our case a resource can be a non-atomic data structure, so I'm not sure the atomic model is directly useful here.
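As one sketch of what "more fine-grained" could mean (purely hypothetical, not part of the ordering branch): the entry fence arguably only needs to keep the protected accesses from floating up out of the section, and the exit fence from floating down, which is the classic Acquire-on-entry / Release-on-exit pairing for lock-style code. Whether this is actually sufficient for interrupt-masked sections would need careful analysis:

```rust
use core::sync::atomic::{compiler_fence, Ordering};

// Hypothetical weaker fencing of a critical section: Acquire on entry,
// Release on exit, instead of SeqCst on both sides.
fn with_fenced_section<T, R>(data: &mut T, f: impl FnOnce(&mut T) -> R) -> R {
    // ... interrupts would be masked here on the target ...
    compiler_fence(Ordering::Acquire); // accesses in f may not be hoisted above
    let r = f(data);
    compiler_fence(Ordering::Release); // accesses in f may not sink below
    // ... and unmasked here ...
    r
}

fn main() {
    let mut x = 0u32;
    let r = with_fenced_section(&mut x, |v| { *v = 7; *v });
    assert_eq!(r, 7);
}
```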

Discussion: What exactly is a critical section?

In the original Real-Time For the Masses Model of Computation (RTFM-MoC), a resource is merely a named critical section, not tied to any specific data structure but rather a way to control concurrency. In this context, locking a resource just implies that you can perform a sequence of operations guaranteed not to be preempted by any equally named critical section. In effect, the SeqCst fencing gives us exactly that (even if we currently use it mostly for avoiding race conditions).

What are the potential uses of named critical sections, then? One example is ensuring that a specific sequence of side effects occurs, e.g., first enabling/powering some HW, then writing some registers in a predefined order.

How is that accomplished in RTIC? Using the PAC abstraction, register blocks are (typically) treated as RTIC resources. So in order to access the HW (through the register block) we either have exclusive access or gain access by locking the wrapping resource; the actual access is then done through an &mut to the register block. It is an abstraction that works, but it can in some cases feel too restrictive or too permissive.

We might want to take a step back to the idea of named critical sections and see if that would allow for better abstractions (take a shared bus as an example). Is exclusive access to atomic registers really the right abstraction, and what kind of guarantees does it actually give? See #29 for some further thoughts.
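To make the idea concrete, here is a hypothetical sketch of what a "named" critical section could look like as an API. The marker type `BusCs`, the token `CsToken`, and `with_named_cs` are all invented for illustration; the point is that the section is identified by a type, not by the data it happens to guard:

```rust
use core::marker::PhantomData;
use core::sync::atomic::{compiler_fence, Ordering};

// Hypothetical marker type naming an independent critical section.
struct BusCs;

// A token proving we are inside the critical section named `N`.
struct CsToken<N> { _name: PhantomData<N> }

// Enter the critical section named `N` (on the target, this would mask
// interrupts up to the ceiling associated with `N`).
fn with_named_cs<N, R>(f: impl FnOnce(&CsToken<N>) -> R) -> R {
    compiler_fence(Ordering::SeqCst);
    let r = f(&CsToken { _name: PhantomData });
    compiler_fence(Ordering::SeqCst);
    r
}

// A sequence of side effects that must not be preempted by any other
// `BusCs` section: holding the token is the capability to perform it.
fn write_bus_sequence(_cs: &CsToken<BusCs>, log: &mut Vec<&'static str>) {
    log.push("enable");
    log.push("configure");
    log.push("start");
}

fn main() {
    let mut log = Vec::new();
    with_named_cs(|cs| write_bus_sequence(cs, &mut log));
    assert_eq!(log, ["enable", "configure", "start"]);
}
```

Note that no resource data is locked here at all: the token only names the section, which is exactly the RTFM-MoC notion described above.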
