I've been working some more on my LabVIEW object serialization library. I wanted to fix the problem of serializing double-precision numbers. The standard LabVIEW function for converting a double to a string does not produce strings that are bitwise recoverable unless you specify a precision of 17 digits for every value, which means a lot of junk in the file. LabVIEW uses the standard C printf function under the hood -- the problems LV has are the same problems most other languages have.
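To make the tradeoff concrete, here is a minimal sketch in plain C (not the LabVIEW primitive, and not part of my library) showing both halves of the problem: the default precision is not bitwise recoverable, while 17 significant digits always are but carry the junk digits:

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    double x = 1.0 / 3.0;
    char buf[64];

    /* Default precision (6 significant digits): short, but the parsed
       value is no longer bit-identical to the original. */
    snprintf(buf, sizeof buf, "%g", x);
    double y = strtod(buf, NULL);
    printf("%%g    -> \"%s\"  bitwise equal: %s\n",
           buf, memcmp(&x, &y, sizeof x) == 0 ? "yes" : "no");

    /* 17 significant digits: always recoverable, but full of junk
       (e.g. 0.1 comes out as 0.10000000000000001). */
    snprintf(buf, sizeof buf, "%.17g", x);
    y = strtod(buf, NULL);
    printf("%%.17g -> \"%s\"  bitwise equal: %s\n",
           buf, memcmp(&x, &y, sizeof x) == 0 ? "yes" : "no");

    return 0;
}
```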
I figured someone must have found a way to fix this. Oh, boy, did I stumble into gold. In 2010, a grad student, one Florian Loitsch, not only published an algorithm (Grisu) that does this, but his algorithm is also 5x faster than the standard one for 99.5% of double values. It was the first improvement in the speed of that conversion in over 20 years. And just last month, he published the code for a further speed improvement.
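As a rough illustration of what the algorithm guarantees -- this is just a brute-force sketch in plain C, not Grisu itself -- the goal is the fewest digits that still parse back to the identical bits, instead of always paying for 17:

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    double x = 0.1;
    char buf[64];

    /* Try increasing precision until the parsed value matches exactly.
       Grisu finds this shortest form directly, and far faster, which is
       why it matters for a serialization library. */
    for (int prec = 1; prec <= 17; prec++) {
        snprintf(buf, sizeof buf, "%.*g", prec, x);
        double y = strtod(buf, NULL);
        if (memcmp(&x, &y, sizeof x) == 0) {
            printf("shortest round-trip form of %g is \"%s\" (%d significant digits)\n",
                   x, buf, prec);
            break;
        }
    }
    return 0;
}
```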
Here's an article about the improvement. This author found a 10x speed improvement in his JSON library. That's exactly what I want to hear.
It's hard to find, so here's the link to the actual C source code for this algorithm.
I am going to investigate dropping this code into LabVIEW for 2015. I spent a long while looking at implementing the C code in G. It's doable, but it isn't small at all, so I'd rather just upgrade LV using his C code than try to recode it. That's less error prone, and it floats all boats, not just my library.