Aristos Queue Posted February 6, 2014

I've been working some more on my LabVIEW object serialization library. I wanted to fix the problem of double-precision numbers: the standard LabVIEW function for translating a double to a string does not produce numbers that are bitwise recoverable unless you specify a precision of 17 for all values, which means a lot of junk in the file. LabVIEW uses the standard C printf function under the hood, so the problems LV has are the same problems most other languages have. I figured someone must have found a way to fix this.

Oh, boy, did I stumble into gold. In 2010, a grad student, one Florian Loitsch, published an algorithm that not only does this but is also 5x faster than the standard algorithm for 99.5% of double values. It was the first speed improvement to this algorithm in over 20 years. And just last month, he published the code for a further speed improvement. Here's an article about the improvement; its author found a 10x speed improvement in his JSON library. That's exactly what I want to hear. It's hard to find, so here's the link to the actual C source code for this algorithm.

I am going to investigate dropping this code into LabVIEW for 2015. I looked a long while at implementing the C code in G. It's doable, but it isn't small at all, so I'd rather just upgrade LV using his C code than try to recode it. Less error prone, and it floats all boats, not just my library.
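To make the problem concrete, here is a tiny C++ illustration using nothing but printf (not Loitsch's code): getting a guaranteed bitwise round trip out of printf means asking for 17 significant digits, which bloats values that have a much shorter exact representation. The point of Loitsch's algorithm is to produce the shortest string that still parses back to exactly the same double, and to do it quickly.

#include <cstdio>

int main() {
    double x = 0.1;
    std::printf("%.17g\n", x);  // "0.10000000000000001" -- round-trips, but 19 characters
    std::printf("%g\n", x);     // "0.1" -- short, but the default 6 significant digits
                                // is not round-trip safe for arbitrary doubles
    return 0;
}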
DukeGriffin Posted February 6, 2014

I can see this solving a lot of annoyances we were having with the AQ Character Lineator, specifically the large amount of space constantly taken up by doubles when just a few digits would suffice. I look forward to hearing more about this integration, and I have a feeling it can apply to a lot more situations than just ours.
Darin Posted February 7, 2014

Even in a CLFN (Call Library Function Node) the code is slower than the built-in primitive, so I would not even bother with a G implementation unless you are curious about the integer math. In C++ code I use it all of the time; it rocks. I do not care about the speed as much as having a nice way to autoformat floating-point numbers (it will do SGL as well).
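For anyone who wants to try the same thing from C++, here is a minimal sketch. It assumes the Google double-conversion library, the packaged form of Loitsch's code; the header path and the choice of converter are assumptions on my part, not something stated in the posts above.

#include <cstdio>
#include <double-conversion/double-conversion.h>   // assumed packaging of Loitsch's code

int main() {
    using namespace double_conversion;

    char buf[64];

    // Shortest decimal string that parses back to exactly the same double.
    const DoubleToStringConverter& conv = DoubleToStringConverter::EcmaScriptConverter();
    StringBuilder builder(buf, sizeof(buf));
    conv.ToShortest(0.1, &builder);
    std::printf("%s\n", builder.Finalize());   // "0.1"

    // The single-precision (SGL) variant mentioned above.
    StringBuilder builder2(buf, sizeof(buf));
    conv.ToShortestSingle(0.1f, &builder2);
    std::printf("%s\n", builder2.Finalize());  // "0.1"
    return 0;
}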
ShaunR Posted February 7, 2014 (edited)

It's not a complete replacement, though, is it? I think this algorithm can fail with certain numbers, so you'd have to fall back to printf and incur the overhead of running the two in series. Maybe the numbers it fails on are statistically insignificant, though.
Darin Posted February 7, 2014

There is a reason that the return value from the double-to-ASCII function is a boolean. I dropped it into a while loop and got a success rate of 99.6-99.7% for random samples over different ranges. Running something roughly twice as fast 99.6% of the time and something 1.5 times slower (both in series) 0.4% of the time is still a net win. That is of course in C++; in LV, the CLFN implementation of the fast method is roughly 1.5x slower than the primitive implementation of the slow method. I have not used this for speed; I use it to get the smallest representation. I have written custom graph and table views in C++ and needed to do the float-to-string conversion.
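Here is a sketch of the try-fast-then-fall-back pattern being discussed, again assuming the double-conversion packaging of Loitsch's code. The low-level FastDtoa routine reports failure through a boolean return, matching the behavior described above; the digit string and decimal-point position it produces are emitted in scientific notation here only to keep the sketch short.

#include <cstdio>
#include <double-conversion/fast-dtoa.h>   // assumed location of the fast routine
#include <double-conversion/utils.h>       // Vector<char>

void double_to_string(double v, char* out, int out_size) {
    using namespace double_conversion;

    if (v == 0.0) { std::snprintf(out, out_size, "0"); return; }
    const char* sign = "";
    if (v < 0) { sign = "-"; v = -v; }     // FastDtoa expects a strictly positive value

    char digits[32];                       // shortest form needs at most 17 digits
    int length = 0, decimal_point = 0;
    if (FastDtoa(v, FAST_DTOA_SHORTEST, 0,
                 Vector<char>(digits, sizeof(digits)),
                 &length, &decimal_point)) {
        // Fast path (~99.6% of values): digits plus a decimal-point position,
        // formatted here as 0.ddd...eN where value == 0.digits * 10^N.
        std::snprintf(out, out_size, "%s0.%.*se%d", sign, length, digits, decimal_point);
    } else {
        // Rare failure case: fall back to the slow but always round-trip-safe
        // 17-significant-digit printf.
        std::snprintf(out, out_size, "%s%.17g", sign, v);
    }
}

int main() {
    char out[32];
    double_to_string(0.1, out, sizeof(out));
    std::printf("%s\n", out);   // "0.1e0"
    return 0;
}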