[Suggestion] Implement Delay without starting / stopping SysTick timer? #382

Open
idubrov opened this issue Jan 1, 2022 · 0 comments

idubrov commented Jan 1, 2022

I am proposing to change the Delay implementation so that it does not modify SYST settings (enable / disable / reload value) and instead waits for a specific counter value.
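
For context, the kind of SysTick reconfiguration being avoided looks roughly like this (a rough sketch against the cortex-m SYST API, not the crate's exact code; the function name is made up):

use cortex_m::peripheral::SYST;

// Reconfiguring delay: every call touches the reload value and the enable bit,
// so SysTick cannot be left running for other purposes. `ticks` must fit in
// the 24-bit counter (1..=0x0100_0000).
fn delay_by_reconfiguring(syst: &mut SYST, ticks: u32) {
  syst.set_reload(ticks - 1); // wraps after roughly `ticks` clock cycles
  syst.clear_current();
  syst.enable_counter();
  while !syst.has_wrapped() {}
  syst.disable_counter();
}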

How?

This is the code I wrote that I believe achieves that (the code assumes a 9 MHz SysTick timer): https://github.com/idubrov/x2-feed/blob/e6320743b95a6d4e87678451470365d45f523b7d/src/hal/delay.rs#L6-L14
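
For reference, the idea is roughly as follows (a hypothetical sketch rather than the exact linked code; the wait_ticks name is made up, and it assumes a free-running, down-counting SysTick with the reload value set to 0x00FF_FFFF):

use cortex_m::peripheral::SYST;

// Compute a target counter value and use the sign bit of the difference to
// detect when it has been passed. SysTick counts down.
fn wait_ticks(ticks: u32) {
  // Down-counter: the target sits `ticks` below the starting value (mod 2^24).
  let target = SYST::get_current().wrapping_sub(ticks) & 0x00FF_FFFF;
  loop {
    // Shift the 24-bit difference into the top of an i32 so the sign bit tells
    // whether we are still "undershooting" (positive) or have reached/passed
    // the target (zero or negative).
    let diff = (SYST::get_current().wrapping_sub(target) << 8) as i32;
    if diff <= 0 {
      break;
    }
  }
}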

Why?

  1. It would leave the SysTick timer running, which could be useful for other purposes. For example, I measure delays by reading the SysTick counter value; the delays as implemented by this crate would interfere with that.
  2. It would not require &mut access to Delay itself (as it would have no side effects), which might be a bit more convenient in some cases.

What are the downsides of the proposed implementation?

  1. I think (though I haven't verified) that the current implementation could rely on wfi to wait for the counter to wrap. It does not do that today, but it could, and that would be more energy efficient. In my implementation this is not possible, since it relies on constantly polling the counter. However, given how these delays are commonly used, that might not be a big issue (I would assume that for the more precision- or efficiency-sensitive cases one would use other timers).
  2. The implementation I propose is only reliable as long as the code is not "interrupted" for longer than half of the longest period. See the explanation below.

Explanation

My code uses the sign bit of a difference to see whether we are "undershooting" or "overshooting". If the counter is off from the "target" value by more than half of the period, the code gets confused (it thinks we are still undershooting). With a 72 MHz system clock and SysTick using the core clock as its input, half of the longest period is ~0xFFFFFF / 72_000_000 / 2 ≈ 117 ms. So, if the code is interrupted for more than 117 ms, it will fail to measure the delay properly. I believe the current implementation has a similar issue, but it needs to be interrupted for more than the whole period (~233 ms in the example case): it will then count two overflows as one.

Actually, after giving it some thought, my implementation could just as well take the difference from the previous iteration and subtract it from the total number of "ticks" left to wait. Something like:

use cortex_m::peripheral::SYST;

// Illustrative shape (the function name is just for the example); assumes a
// free-running, down-counting SysTick with reload = 0x00FF_FFFF.
fn delay_ticks(mut total: u32) {
  let mut last = SYST::get_current();
  loop {
    let current = SYST::get_current();
    // SysTick counts down, so the ticks elapsed since the previous poll are
    // `last - current`, modulo the 24-bit counter range.
    let elapsed = last.wrapping_sub(current) & 0x00FF_FFFF;
    if total <= elapsed {
      break;
    }
    total -= elapsed;
    last = current;
  }
}

That would make it a bit more robust (it will only miscalculate if it is interrupted for more than the whole period).
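
Called, for instance, like this (delay_us is just an illustrative wrapper around the loop above, assuming the 9 MHz SysTick input mentioned below):

fn delay_us(us: u32) {
  delay_ticks(us * 9); // 9 ticks per microsecond at 9 MHz (AHB / 8 of 72 MHz)
}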

P.S. Also, while experimenting, I realized that I use a different input for SysTick (AHB / 8, which gives me 9 MHz). I guess with so many tiny details about what exactly I want from SysTick, I could just continue using my own implementation 😆 (72 MHz would be too fast for me -- less convenient for measuring delays).
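
For completeness, selecting that input via the cortex-m API looks something like this (syst here is a SYST instance obtained elsewhere; on these parts the external reference clock is AHB / 8):

use cortex_m::peripheral::syst::SystClkSource;

// Use the external reference clock (AHB / 8), i.e. 72 MHz / 8 = 9 MHz here.
syst.set_clock_source(SystClkSource::External);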
