I am proposing to change the `Delay` implementation to avoid changing SYST settings (enable / disable / reload value) and instead wait for a specific counter value.

How?

This is the code I wrote that I believe achieves that (the code assumes a 9 MHz SysTick timer): https://github.com/idubrov/x2-feed/blob/e6320743b95a6d4e87678451470365d45f523b7d/src/hal/delay.rs#L6-L14

Why?
It would allow the SysTick timer to keep running, which could be useful for other purposes. For example, I measure delays by reading the SysTick counter value; the delays as implemented by this crate would interfere with that.

It would also not require `&mut` access to `Delay` itself (as it would not have any side effects), which might be a bit more convenient in some cases.
What are the downsides of the proposed implementation?
I think (I haven't verified, though) that the current implementation could rely on `wfi` to wait for the counter to wrap. It does not do that currently, but it could, which would be more energy efficient. In my implementation this is not possible, as it relies on constantly comparing counter values. However, given how these delays are commonly used, this might not be a big issue (I would assume that for more precise / efficient cases one would use other timers).

The implementation I propose is only reliable as long as the code is not "interrupted" for longer than half of the longest period. See the explanation below.
Explanation
My code uses the sign bit of a difference to see whether we are "undershooting" or "overshooting". If the counter is off from the "target" value by more than half of the period, the code will get confused (thinking we are undershooting). With a 72 MHz system clock and SysTick using the core clock as its input, half of the longest period would be ~ `0xffffff / 72_000_000 / 2` ≈ 117 ms. So if the code is interrupted for more than 117 ms, it will fail to measure the delay properly. I believe the current implementation has a similar issue, but it needs to be interrupted for more than the whole period (~233 ms in the example case): it would count two overflows as one.
Actually, after giving it some thought, my implementation could just as well grab the difference from the previous invocation and subtract it from the total amount of "ticks" to wait. Something like:
```rust
let mut last = SYST::get_current();
let mut total = ...;
loop {
    let current = SYST::get_current();
    let diff = current.wrapping_sub(last);
    if total <= diff {
        break;
    }
    total -= diff;
    last = current;
}
```
Which would make it a bit more robust (it will only miscalculate if interrupted for more than the whole period).
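The loop above can be exercised on the host with a simulated counter. This is a sketch under two assumptions of mine: elapsed ticks on a down-counter are `last - current`, and the subtraction is masked to SysTick's 24 bits (the snippet above elides both details):

```rust
// Host-side simulation of the "subtract elapsed ticks from a running
// total" loop, using sampled values instead of the real SYST register.
const PERIOD: u32 = 0x0100_0000; // 2^24

/// Elapsed ticks between two samples of a 24-bit down-counter,
/// correct as long as less than one full period passed between them.
fn elapsed(last: u32, current: u32) -> u32 {
    last.wrapping_sub(current) & (PERIOD - 1)
}

fn main() {
    // Wait for 100 ticks, sampling at irregular intervals; the fourth
    // sample crosses the 0 -> 0xFFFFFF wrap of the down-counter.
    let samples = [90u32, 60, 20, 0xFF_FFF0, 0xFF_FFC0];
    let mut last = 100u32;
    let mut total = 100u32; // ticks still to wait
    let mut done = false;
    for current in samples {
        let diff = elapsed(last, current);
        if total <= diff {
            done = true;
            break;
        }
        total -= diff;
        last = current;
    }
    // 10 + 30 + 40 ticks consumed, then the wrap-crossing sample
    // contributes 36 more, exceeding the remaining 20.
    assert!(done);
    println!("ok");
}
```

Because each iteration only measures the delta since the previous sample, a wrap between samples is absorbed correctly, which is exactly the extra robustness claimed above.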
P.S. Also, while experimenting, I realized that I use a different input for SysTick (I use `AHB / 8`, which gives me 9 MHz). I guess with so many tiny details of what exactly I want from SysTick, I could just continue using my own implementation 😆 (72 MHz would be too fast for me -- less convenient for measuring delays).
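For reference, the wrap periods behind the numbers in this thread can be checked with a quick host-side calculation (clock values taken from the discussion above; integer milliseconds are an approximation):

```rust
fn main() {
    // Full 24-bit SysTick period in ticks (reload of 0xFFFFFF + 1).
    let period_ticks = 0x0100_0000u64;
    // Full wrap period at 72 MHz (core clock) vs 9 MHz (AHB / 8).
    let at_72mhz_ms = period_ticks * 1_000 / 72_000_000;
    let at_9mhz_ms = period_ticks * 1_000 / 9_000_000;
    assert_eq!(at_72mhz_ms, 233); // ~233 ms full period, ~117 ms half
    assert_eq!(at_9mhz_ms, 1864); // ~1.86 s: much roomier for measurements
    println!("{} ms / {} ms", at_72mhz_ms, at_9mhz_ms);
}
```

The eight-fold longer wrap at 9 MHz is what makes the slower input more convenient for measuring delays by hand.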