As far as I know, there are two ways to define how a numeric value is formatted into a string using the floating-point representation (%f):
- Setting the number of significant digits (%_2f)
- Setting the number of digits after the decimal point, i.e. the precision (%.2f)
However, I often need a way to limit the overall width of the string. For example, say I always want 3 digits at most. Significant digits won't work for a number like 0.0012, since it only has 2 significant digits: the string comes out as 0.0012 even though I would like to see 0.00 (or just 0 if trailing zeros are hidden).
On the other hand, digits of precision won't work for a number like 123.45, since it has 2 digits after the decimal point: the string comes out as 123.45 even though I would like to see 123.
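To make the two failure modes concrete, here is a small C sketch (I'm using C's %g as the significant-digits analogue of %_2f; if your format strings come from another environment, the semantics should be the same):

```c
#include <stdio.h>

int main(void)
{
    /* %g takes its precision as significant digits,
     * %f as digits after the decimal point. */
    printf("%.2g\n", 0.0012);  /* prints 0.0012, not 0.00 */
    printf("%.2f\n", 123.45);  /* prints 123.45, not 123  */
    return 0;
}
```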
Now the obvious brute-force way would be to take a string subset and keep only the first 3 digits (I guarantee that my numbers are never bigger than 999), roughly as sketched below. But I really hope there is a more elegant way; I can't imagine that nobody has run into this requirement before. How did you tackle it?
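For reference, this is roughly what that brute-force version would look like as a C sketch (format_3_digits is just an illustrative name I made up; it formats with generous precision, then keeps at most 3 digit characters, not counting the decimal point):

```c
#include <stdio.h>
#include <ctype.h>

/* Sketch of the "string subset" workaround: format with plenty of
 * precision, then copy characters, stopping once 3 digits have been
 * kept (the decimal point does not count toward the limit). */
void format_3_digits(double x, char *out, size_t outsz)
{
    char buf[32];
    snprintf(buf, sizeof buf, "%.6f", x);  /* assumes 0 <= x <= 999 */

    size_t o = 0, digits = 0;
    for (const char *p = buf; *p && o + 1 < outsz; ++p) {
        if (isdigit((unsigned char)*p)) {
            if (digits == 3)
                break;
            ++digits;
        }
        out[o++] = *p;
    }
    /* drop a trailing decimal point, e.g. "123." -> "123" */
    if (o > 0 && out[o - 1] == '.')
        --o;
    out[o] = '\0';
}

int main(void)
{
    char s[16];
    format_3_digits(0.0012, s, sizeof s);  printf("%s\n", s);  /* 0.00 */
    format_3_digits(123.45, s, sizeof s);  printf("%s\n", s);  /* 123  */
    return 0;
}
```

Stripping the trailing zeros to turn 0.00 into 0 would take yet another pass, which makes this feel even more like a workaround than a solution.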