A technique for simulating floating point operations on processors where floating point arithmetic is expensive.
We used this on an 8080-based autopilot in the late-1980s/early-1990s. It was impossible to run the control algorithms in floating point and still hit the necessary update rate. So, we did things like:
defining scale factors such as BITS_PER_MILE = 50 and BITS_PER_DEGREE = 72, storing every quantity as a scaled integer, and multiplying/dividing by conversion constants as appropriate. A sketch of the idea follows.
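For anyone who hasn't worked this way, here is a minimal sketch of the scaled-integer style in C (the real code was 8080 assembler). Only the two scale factors come from the text above; the function, the integer widths, and the ten-degrees-per-mile gain are illustrative assumptions, not the original code.

 #include <stdint.h>
 #include <stdio.h>

 #define BITS_PER_MILE   50   /* distances stored as miles * 50 */
 #define BITS_PER_DEGREE 72   /* angles stored as degrees * 72  */

 /* Turn a cross-track distance (in mile counts) into a heading
  * correction (in degree counts).  Multiply before dividing to keep
  * precision, and widen to 32 bits so the intermediate can't overflow. */
 static int16_t miles_to_degree_counts(int16_t dist_counts, int16_t deg_per_mile)
 {
     int32_t tmp = (int32_t)dist_counts * deg_per_mile;   /* degrees, still scaled by BITS_PER_MILE */
     return (int16_t)(tmp * BITS_PER_DEGREE / BITS_PER_MILE);
 }

 int main(void)
 {
     int16_t crosstrack = 3 * BITS_PER_MILE / 2;              /* 1.5 miles, held as 75 counts   */
     int16_t corr = miles_to_degree_counts(crosstrack, 10);   /* made-up gain: 10 deg per mile  */
     printf("%d counts = %d.%02d degrees\n", corr,
            corr / BITS_PER_DEGREE,
            (corr % BITS_PER_DEGREE) * 100 / BITS_PER_DEGREE);
     return 0;
 }

This prints "1080 counts = 15.00 degrees": the answer stays exact as long as you multiply before you divide and give the intermediate enough bits.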
One of my early assignments was to port a supplier's turbine control algorithms to our autopilot. The reference implementation was poorly written PC Basic with double precision floating point, lots of lead/lag filters, and MagicNumbers everywhere. Rewriting it in assembler with fixed-point integer arithmetic was not a pleasant task!
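The supplier's filters aren't reproduced here, but a generic first-order lag done in scaled integers gives a feel for what each of those double-precision Basic one-liners turned into. The Q8 coefficient format and all the names below are assumptions for illustration only.

 #include <stdint.h>

 #define Q8_ONE 256                      /* 1.0 as a Q8 fixed-point fraction  */

 typedef struct {
     int32_t state;                      /* previous output, in sensor counts */
     int16_t alpha_q8;                   /* smoothing factor, 0..Q8_ONE       */
 } LagFilter;

 /* y += alpha * (x - y), entirely in integer math -- cheap enough to
  * run every control cycle on a processor with no floating point unit. */
 static int16_t lag_update(LagFilter *f, int16_t input)
 {
     int32_t err = (int32_t)input - f->state;
     f->state += (err * f->alpha_q8) / Q8_ONE;
     return (int16_t)f->state;
 }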
The payoff was worth it, though. When we fired the engine up for the first time, the manufacturer's rep was amazed at how much better our controller was than theirs. We're not sure why, but it was probably because our control laws ran at 100 Hz while theirs ran at 30 Hz...