4 Tips for using floating point computations in firmware
Digital Health: Floating Away
Until very recently I would never have dreamed of using floating point computations in firmware: the penalty in code space and computation time was simply too heavy. Suddenly the game has changed. There are now a number of relatively inexpensive microcontrollers that implement single-precision floating point computations in hardware.
Here are the things I keep in mind while coding and debugging:
Check code execution time regularly
It’s pretty easy to accidentally prevent the compiler from even using the floating-point unit (FPU). Plus there are still plenty of computations that can only be implemented in software. Excessive CPU time can crop up for many reasons, not just teething issues with a new tool chain or getting used to a new compiler. For instance, it’s really easy to forget the “f” at the end of a constant, and suddenly a whole line of math gets promoted to double.
Some functions just take a bunch of time, and not always predictably. The floating-point modulo function (fmodf in C) is a good example: if the dividend is considerably bigger than the divisor, it often blows up into way more instruction cycles than you’d ever expect. I surround chunks of code with port twiddles and scope them. If the measured time is way off from what I estimated, or from what I require, I dig down to see what’s going on and what can be done.
Regularly check the assembly (or disassembly as it’s often called in the development environment)
This makes it quite plain if the compiler has decided to use soft math. On an ARM processor, I look for lots of instructions with F32 in them; if I don’t see them then something’s wrong. It’s also instructive to see some of the decisions the compiler makes. Sometimes compilers can be incredibly smart, other times not.
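As a concrete illustration, here is what I look for in a listing. The compiler flags shown are for GNU tools targeting a Cortex-M4F, and the instruction names are what I would expect to see, not a guarantee for every compiler:

```c
/* Build with something like:
 *   arm-none-eabi-gcc -mcpu=cortex-m4 -mfloat-abi=hard \
 *       -mfpu=fpv4-sp-d16 -O2 -S gain.c
 * and inspect the listing. Hardware float looks like:
 *   vmul.f32  s0, s0, s1      <- FPU doing the work
 * while a soft-float fallback shows up as a library call:
 *   bl        __aeabi_fmul    <- something's wrong
 */
float gain(float x, float k)
{
    return x * k;   /* one multiply: a single VMUL.F32 when all is well */
}
```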
Remember that floating point is still limited
Single precision floating point only has a 24-bit mantissa. It may seem stupidly obvious, but this is actually 8 bits fewer than 32-bit fixed point! When I add two numbers that are significantly different in magnitude, the smaller one is going to lose a bunch of precision, and the loss can often be quite severe. I still go through the same sort of analysis I used with fixed point: I make sure I understand the worst-case precision loss as my calculations propagate. This is a bit of a chore, but it’s better than discovering after some in-the-field troubleshooting that my signal is noticeably stair-stepping or that my integral control term is looking suspiciously binary.
Watch out for special values
Many floating point implementations – including IEEE 754 – define special values that can cause a whole bunch of trouble. We’re all familiar with divide-by-zero exceptions when we’re programming PCs. In firmware the FPU still flags these events, and goes one step further. If you divide a nonzero number by zero, or add, multiply or divide numbers until the result overflows, you get plus or minus infinity (+INF, -INF). If you divide zero by zero, or try to square root a negative number, you get not-a-number (NaN). These three values can cause a ton of trouble for any math that has state. For instance, they will instantly knock an IIR filter dead. Dead for good. These values don’t go away in the math chain; every subsequent calculation gets contaminated.
The only way I’ve come up with to deal with NaNs and INFs is to plan for them. Testing for them is easy: either test the results directly or query the FPU’s status flags. The tricky part is deciding what to do when I find one. I might think that getting a NaN is ridiculously unlikely, but just imagine the result: my math flat-lines forever.
So I ask myself a number of questions. Does it make sense to zero the value? What would that do to the output of my control loop? Is it even possible to sensibly cope with the situation, or would it be best to stop everything and wait for operator input? Sometimes the strategy to fix everything can be quite complicated if it’s important that the system does not go offline. Whatever I decide, I have to be careful that all state variables – anything that retains a value for use in future computations – are repaired.
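Here is a minimal sketch of that repair step, using a one-pole IIR low-pass as the stateful math. The filter coefficient and the zero-reset policy are illustrative, not a recommendation; the right repair depends on the system:

```c
#include <math.h>

typedef struct {
    float y;   /* filter state: the value that must be repaired */
} iir_lp_t;

/* One-pole low-pass update with defensive state repair. */
float iir_lp_update(iir_lp_t *f, float x)
{
    f->y += 0.125f * (x - f->y);   /* y += k * (x - y) */

    /* isfinite() is false for NaN, +INF and -INF alike, so one test
       covers all three poison values. */
    if (!isfinite(f->y)) {
        f->y = 0.0f;               /* repair policy: re-seed the state */
    }
    return f->y;
}
```

On parts where it matters, querying the FPU’s cumulative exception flags once per pass can be cheaper than testing every result; which approach wins depends on how much state there is to guard.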
Having an FPU does not necessarily make life easier: much of the same good practice from fixed point math still applies. Nevertheless, it’s pretty amazing what modern microcontrollers can do. I’d be interested to hear any strategies readers use when working with floating point math.
Kenneth MacCallum, PEng, is a former Principal Engineering Physicist at StarFish Medical. He works on Medical Device Development and uses floating point computations in Digital Health and ultrasound applications.