What do you do when the system you’re working on doesn’t support printing a floating point variable? Below is a short introduction to floating point numbers and a couple of approaches to implementing float-to-string conversion.

A common task when dealing with sensors is to display the read value as a real number with an integer and a fraction part. Most systems and languages provide the user with library functions to perform this. For example, in C on a PC one can simply write:

`printf("Value: %f", val);`

This will print the value in the format AAAA.BBBBB. printf even allows the user to control the precision (the number of digits after the decimal point) and the field width.

When working with very small or very large numbers, scientific notation (e.g. 6.022141e23) is often preferred. In printf notation:

`printf("Value: %e", val);`

The thing is, printf is a very ‘heavy’ function, and many embedded system implementations choose to supply a limited version without floating point support. So how can we implement these facilities ourselves?

One common solution is to dump the number in its hexadecimal format. For example:

```c
float val = 3.141592f;
printf("Value: %08x\n", *((unsigned int *)&val));
```

The result would be: `40490fd8`

The user can take the value and convert it back to float. The method’s main advantage is that automated systems can always read the value without complicated parsing. The obvious drawback is that it’s really not human-friendly.

The direct approach to display the number in real format would be to write code like:

```c
#include <math.h>
#include <string.h>

#define ABS(x) ((x) < 0 ? -(x) : (x))

static void strreverse(char *begin, char *end)
{
    char tmp;
    while (end > begin)
        tmp = *end, *end-- = *begin, *begin++ = tmp;
}

/*
 * Based on one of the versions at:
 * http://www.jb.man.ac.uk/~slowe/cpp/itoa.html
 * Look there for multiple-base conversion.
 */
char *itoa(long value, char *str)
{
    char *p = str;
    static const char digit[] = "0123456789";

    // Add sign if needed
    if (value < 0)
        *(p++) = '-';

    // Work on the absolute value
    value = ABS(value);

    // Conversion. Digits come out reversed.
    do {
        const long tmp = value / 10;
        *(p++) = digit[value - (tmp * 10)];  // like modulo 10, but fast
        value = tmp;
    } while (value);
    *p = '\0';

    strreverse(str, p - 1);  // Reverse the digits back
    return p;
}

/*
 * ftoa - Convert float to ASCII.
 * Parameters:
 *   f      - input floating point number
 *   buf    - output string buffer, pre-allocated to sufficient size
 *   places - digits after the decimal point
 * Returns a pointer past the last character written.
 */
char *ftoa(float f, char *buf, int places)
{
    if (signbit(f))
        *(buf++) = '-';
    if (isnan(f)) {
        memcpy(buf, "nan", 4);
        return buf;
    }
    if (isinf(f)) {
        memcpy(buf, "inf", 4);
        return buf;
    }

    long int_part = (long)f;
    long prec = 1;
    for (int i = 0; i < places; i++)  // prec = 10^places
        prec *= 10;
    long frac_part = lround((f - int_part) * prec);

    // Handle the fraction rounding up to 1.0
    if (ABS(frac_part) == prec) {
        signbit(f) ? int_part-- : int_part++;
        frac_part = 0;
    }

    buf = itoa(ABS(int_part), buf);
    *(buf++) = '.';

    // Fraction leading zeroes
    if (frac_part) {
        long tmp = ABS(frac_part) * 10;
        while (tmp < prec) {
            *(buf++) = '0';
            tmp *= 10;
        }
    }
    buf = itoa(ABS(frac_part), buf);
    return buf;
}
```

This implementation is cross-platform and doesn’t rely on the internal format of the floating point number. It’s also limited and not very efficient: the function requires floating point multiplications and comparisons. Why isn’t there a straightforward way to print a floating point number?

In almost all modern computer systems, a single precision floating point number (a.k.a. float) is implemented according to the IEEE 754 standard. The number’s internal layout is as follows:

| sign (1 bit) | exponent (8 bits) | significand / mantissa (23 bits) |
|:---:|:---:|:---:|
| bit 31 | bits 30–23 | bits 22–0 |

The format represents a normalized real number (similar to the scientific notation above). The format also encodes special values like infinity and not-a-number (NaN).

Still, the format seems very similar to the one we want to print, so why is it so complicated? The answer is its base: the number is encoded in *binary* form. This means the represented number looks like 1.0010101e101. So, just as with integers, we need to base-convert it to decimal. The manual algorithm can be found here and here. A code example implementing this can be found here, or a fast low-precision form here.

The base-conversion method is faster but also more bug-prone and machine-dependent.

Perhaps unsurprisingly, the IEEE 754 standard also defines decimal floating point storage, but I must admit that I haven’t seen a modern system with such an implementation, since binary floating point arithmetic is much simpler for hardware.

In the end I used the simple direct method, which doesn’t require in-depth knowledge of the float format. Still, there are some uses for the internal structure of floats, such as approximating the inverse square root or exponents.

**I’m still missing a fast implementation of function to print a floating point number in scientific notation. Can you recommend one?**

For an in-depth look into the floating point implementation and common pitfalls I really recommend reading *What Every Computer Scientist Should Know About Floating-Point Arithmetic*. The paper covers rounding errors and best practices, and removes some of the black magic around the topic.

Fast floating point number formatting? I think I’ve seen it at http://code.google.com/p/stringencoders/ , not sure if it supports scientific notation though.