mje Posted November 14, 2017 (edited) I've identified calls to Get Text Rect.vi as a bottleneck in some of my more text-heavy, picture-based user interfaces. It's adding on the order of 100 - 1000 ms of rendering time to an interface that otherwise does its work in 10 - 100 ms depending on data density. To make matters worse, it gets slower by a factor of 5x or so if a user-defined font is used (anything other than application/system/dialog). User-defined fonts are required to alter text size, so...yeah. S. L. O. W. I'm platform locked, so the GetCharABCWidths method stands out as an option, but I'd have to dig a bit to figure out the data structures used in that call as I haven't really dealt with it before. Has anyone tackled this before and can perhaps suggest alternatives?
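For reference, a minimal Win32 sketch of what a GetCharABCWidths query looks like from C++; the font name and size are placeholders (not anything from the thread), and in LabVIEW these calls would have to go through a Call Library Function Node in the UI thread. The ABC structure is the data structure the post refers to.

```cpp
// Sketch: per-character ABC widths for the printable ASCII range.
// The font parameters here are placeholders, not anything from the thread.
#include <windows.h>
#include <cstdio>

int main() {
    HDC hdc = CreateCompatibleDC(nullptr);  // hidden, screen-compatible memory DC

    // Hypothetical user-defined font: 16 px Arial.
    HFONT font = CreateFontA(-16, 0, 0, 0, FW_NORMAL, FALSE, FALSE, FALSE,
                             DEFAULT_CHARSET, OUT_DEFAULT_PRECIS,
                             CLIP_DEFAULT_PRECIS, DEFAULT_QUALITY,
                             DEFAULT_PITCH, "Arial");
    HGDIOBJ old = SelectObject(hdc, font);

    // ABC widths for character codes 32..126 (TrueType fonts only):
    // abcA/abcC are the leading/trailing bearings, abcB is the glyph width.
    ABC abc[95];
    if (GetCharABCWidthsA(hdc, 32, 126, abc)) {
        const ABC& a = abc['A' - 32];
        printf("advance width of 'A': %d px\n", a.abcA + (int)a.abcB + a.abcC);
    }

    SelectObject(hdc, old);
    DeleteObject(font);
    DeleteDC(hdc);
    return 0;
}
```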
mje Posted November 14, 2017 Come to think of it, this is LabVIEW: Unicode/multi-byte isn't exactly a thing, so there aren't a lot of printable characters. The interface uses the same font and size for the whole display, so when it changes, run Get Text Rect.vi on all possible characters and cache the results, incurring a relatively small one-time cost; then the bounds of any string can be calculated without making calls to the underlying GDI layer. Should be fast. The cache may need to be keyed by style (none, bold, italic, or both).
hooovahh Posted November 14, 2017 Oh, I like that. I too have done some image processing work and found that function to be quite slow. One simple technique I wanted to try (but that probably wouldn't help) would be to have a hidden string control and get the size of the text in it after setting the text and font appropriately. I'm not sure whether several thread swaps using property nodes would be faster than the internal call to a LabVIEW function or not, but caching the characters would definitely be faster. I see a place where variant attributes could be used for sure.
mje Posted November 14, 2017 Given we're talking 7- or 8-bit characters, depending on whether you care about extended codes, and the first 32 codes aren't printable, I'd go with an array for direct indexing.
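A rough sketch of that directly indexed cache, written here in C++ since LabVIEW diagrams don't paste into text; measureCharWidth is a hypothetical stand-in for whatever actually does the slow measurement (Get Text Rect.vi, or a GDI call as above).

```cpp
// Sketch: eagerly built width cache, indexed directly by 8-bit character code.
// measureCharWidth is a placeholder for the real (slow) measurement call.
#include <array>
#include <functional>
#include <string>

struct CharWidthCache {
    std::array<int, 256> width{};   // one slot per character code

    // Rebuild whenever font, size or style changes: one slow pass over the
    // printable codes, then every later string measurement is pure arithmetic.
    void rebuild(const std::function<int(unsigned char)>& measureCharWidth) {
        for (int c = 32; c < 256; ++c)   // codes 0..31 aren't printable
            width[static_cast<size_t>(c)] = measureCharWidth(static_cast<unsigned char>(c));
    }

    // Note: this ignores kerning, which comes up later in the thread.
    int textWidth(const std::string& s) const {
        int total = 0;
        for (unsigned char c : s)
            total += width[c];
        return total;
    }
};
```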
hooovahh Posted November 14, 2017 So attached is my first attempt. I did things a little differently than you described: rather than generating the size of every printable character for a given set of font settings up front, I only get the size of each character in the string being measured and cache the result. If the size for that character and that set of settings has already been generated, it reuses it. At the moment it only supports left-to-right text, without offsets (unlike the normal method), but that could be added. I also included a test VI which, after you run it a few times, clearly shows the cached version always beating the built-in method for a set of random strings. If you widen the character range, it is obviously less likely to find a cached value for your characters and will take longer. My test also runs the cached code twice, in the hope of seeing the difference when some text hasn't been cached yet, but when you run it with 1000 random strings of random sizes that difference quickly becomes noise. Get Text Rect Cached.zip
Thoric Posted November 15, 2017 Just be careful with kerning. A "w" followed by an "a", for example, might be narrower than the sum of the "w" and the "a" measured separately. I'm not sure how advanced the text printing functions are in LabVIEW, but TrueType fonts are often kerned.
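GDI can report whether a font defines kerning pairs at all; here is a small sketch, assuming an HDC that already has the font of interest selected into it (as in the earlier sketch).

```cpp
// Sketch: list a font's kerning pairs. A non-empty result means summing
// per-character widths can disagree with a whole-string measurement.
#include <windows.h>
#include <vector>

std::vector<KERNINGPAIR> queryKerningPairs(HDC hdc) {
    // First call with a null buffer just returns the number of pairs.
    DWORD count = GetKerningPairsA(hdc, 0, nullptr);
    std::vector<KERNINGPAIR> pairs(count);
    if (count)
        GetKerningPairsA(hdc, count, pairs.data());
    // Each KERNINGPAIR holds wFirst, wSecond and iKernAmount, the (often
    // negative) width adjustment applied between those two characters.
    return pairs;
}
```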
hooovahh Posted November 15, 2017 Thanks for the advice. On my system, every character set I tried gave the same result from summing the parts as NI's method did.
mje Posted November 15, 2017 8 hours ago, Thoric said: Just be careful with kerning. A "w" followed by an "a", for example, might be narrower than the sum of the "w" and the "a" measured separately. Yep, I've confirmed that mucks things up good. Looks like I'll have to go hooovahh's route and cache the results of entire strings at a given font setting. Ugly, since the initial render will still be slow, but follow-up renders will be quick.
JasonD Posted July 12, 2021 Hello from 2021 - I'm just encountering this issue myself. "Get Text Rect" was super speedy in LV2014, but now that I've (finally) migrated to 2020, I'm seeing the same sluggishness you describe. Actually, I recall seeing this in LV2014 on Windows 10 as well, so it may be a Windows 10 thing(?) I'm just wondering if you found a fix or an easier method than pre-caching all possible characters.
hooovahh Posted July 12, 2021 Well, maps could make this caching stuff a little easier. Using a variant attribute years ago meant having to use a string as the unique identifier, whereas today a map key can be a cluster of arbitrary data. But beyond that I can't think of a way to make this whole process any easier. If the native function is slow, you can try this caching method, and possibly save and load its results to disk too, in a temp file or something. I still don't know the cause of this function's slowdown, and can only suggest ways to work around it.
JasonD Posted July 12, 2021 After a bit of a deep dive trying to isolate the bottleneck, the cause is still pretty unclear. I was able to replicate the OP's claim of 5x slower for user-specified fonts, even though the font I used as "user specified" was just Arial, but there was inconsistency across test machines. A simple test calling Get Text Rect in a For Loop 1000x showed user-specified fonts as much slower than the Application or System fonts. What's odd is that if you Continuous Run that test VI and watch the milliseconds per 1000 loops, you can observe it being slow, then open the Performance and Memory Profiler and it instantly gets faster, on par with the Application font, without even starting the profiler (opening the window was enough to kick it faster). It then stays fast until you quit LabVIEW. The same effect occurs if a "Save As..." dialog pops up: it instantly speeds up and stays fast. So there must be some call under the hood that gets the system to tell the DLL inside Get Text Rect to cache the user font information. There's probably a lot more digging that could be done, but for now I'll see if I can use the Get Text Rect VI more judiciously in my application. I like the caching idea as well, but with kerning in quasi-random text strings, that could be a real headache. Thanks for the rapid response!
MikaelH Posted July 14, 2021 You can fix it in 2 ways. 1. Force open the VI and change the DLL call (into LabVIEW.exe) so it does not run in the user interface thread. I've tested this and it speeds things up 1000 times or so, but it could crash LabVIEW if the code is running in any of LabVIEW's special application instances, like a Project Provider app instance (and that is exactly where I need the performance improvement, in the OpenGDS UML Modeller). 2. Yes, use a Map or variant lookup. See the example VI GetTextRect_AnyThread.vi.
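As a text-based illustration of that second option, here is a sketch of a whole-string lookup keyed by font settings plus text, roughly what a LabVIEW Map keyed by a cluster would do; measureText is a hypothetical stand-in for the real, slow Get Text Rect call.

```cpp
// Sketch: cache whole-string sizes, keyed by (font settings, text).
// Caching entire strings sidesteps the kerning problem discussed above.
#include <functional>
#include <map>
#include <string>
#include <tuple>
#include <utility>

struct FontKey {
    std::string face;
    int         size;
    bool        bold;
    bool        italic;
    bool operator<(const FontKey& o) const {
        return std::tie(face, size, bold, italic) <
               std::tie(o.face, o.size, o.bold, o.italic);
    }
};

struct Size2D { int width; int height; };

class TextRectCache {
public:
    using MeasureFn = std::function<Size2D(const FontKey&, const std::string&)>;

    Size2D get(const FontKey& font, const std::string& text, const MeasureFn& measureText) {
        auto key = std::make_pair(font, text);
        auto it = cache_.find(key);
        if (it != cache_.end())
            return it->second;               // hit: no slow call at all
        Size2D sz = measureText(font, text); // miss: pay the slow call once
        cache_.emplace(std::move(key), sz);
        return sz;
    }

private:
    std::map<std::pair<FontKey, std::string>, Size2D> cache_;
};
```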
ShaunR Posted July 14, 2021 3 hours ago, MikaelH said: Force open the VI and change the DLL call (into LabVIEW.exe) so it does not run in the user interface thread. The call is not thread safe. Don't do this.
JasonD Posted July 14, 2021 Thanks guys. I had forced my way into the VI a few days ago to see what's in there. I usually stop dead when I see a DLL call, in case there might be issues (as Shaun mentions). The slowness issue appeared for me when I jumped to Windows 10, so perhaps the code in that DLL makes a few deprecated Win 7 calls that just need to be modernized by NI. Thankfully I was able to code around the slowness in my case, down to a single call to Get Text Rect at about 30 ms run time, from around 120 calls at 30 ms each, which was causing noticeable lagginess in my UI. Thanks!
Rolf Kalbermatter Posted July 15, 2021 (edited) The code underneath is definitely NOT thread-safe. It concerns the Text Manager, another subsystem of the LabVIEW GUI system, and the entire GUI API is marked UI_THREAD, since the Windows GDI interface that these functions all ultimately call wasn't thread-safe back then and in various ways may still not be. Windows carries some very old legacy burdens that Microsoft tried to work around with the Windows NT GDI system, but there are a few areas where you simply can't do certain things or all kinds of hell break loose. Now, I happen to know pretty much how this function is implemented (it simply calls a few lower-level undocumented LabVIEW Text Manager functions), and incidentally they are all still exported from the LabVIEW kernel too. When you use a user font it calls TNewFont() to create a font description, then it basically calls TTextSize() to calculate the Point describing the extent of the surrounding box, and afterwards it calls TDisposeFont() to dispose of the font again if it created it in the first place. For the predefined fonts it skips the font creation and disposal and uses preallocated fonts stored in an app-internal global. So there would be a possibility to cut down on the repeated execution time of the GetTextRect() call for user-defined fonts by creating the font only once and storing it in a variable until you no longer need it. No joy, however, on reducing the execution time of TTextSize() itself. That function is pretty hairy and complex and makes quite a few GDI calls, drawing the text into hidden display contexts, to determine its extent.
ShaunR Posted July 15, 2021 2 hours ago, Rolf Kalbermatter said: the Windows GDI interface that these functions all ultimately call wasn't thread-safe back then and in various ways may still not be. Yup. All Windows GDI functions have thread affinity (i.e. they must be called in the main UI thread), so it wouldn't matter if you called GetTextExtentPoint32 directly; you would still have to run it in the LabVIEW root loop.
Rolf Kalbermatter Posted July 15, 2021 10 minutes ago, ShaunR said: Yup. All Windows GDI functions have thread affinity (i.e. they must be called in the main UI thread), so it wouldn't matter if you called GetTextExtentPoint32 directly; you would still have to run it in the LabVIEW root loop. And while GetTextExtentPoint32() itself may be fairly fast, you have to somehow, from somewhere, get an HDC to use it on. That HDC has to have the correct font selected into it, which is part of the time-consuming work the LabVIEW TTextSize() function does. And HDCs are quite precious resources, so creating one up front for every font you may ever use is not a good idea either. The HDC also has to be compatible with the target device (but can't be the screen device itself, as otherwise you clobber the LabVIEW GUIs with artefacts).
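Following that reasoning, here is a sketch of what caching the hidden DC and the selected user font across measurements could look like from C++; the font parameters are placeholders, and in LabVIEW all of these GDI calls would still have to stay in the UI thread.

```cpp
// Sketch: keep one hidden memory DC with the user font selected, instead of
// creating and destroying them on every measurement.
#include <windows.h>
#include <string>

class CachedMeasurer {
public:
    CachedMeasurer(const char* face, int heightPx) {
        hdc_  = CreateCompatibleDC(nullptr);          // hidden, screen-compatible DC
        font_ = CreateFontA(-heightPx, 0, 0, 0, FW_NORMAL, FALSE, FALSE, FALSE,
                            DEFAULT_CHARSET, OUT_DEFAULT_PRECIS, CLIP_DEFAULT_PRECIS,
                            DEFAULT_QUALITY, DEFAULT_PITCH, face);
        old_  = SelectObject(hdc_, font_);            // pay the selection cost once
    }
    ~CachedMeasurer() {
        SelectObject(hdc_, old_);
        DeleteObject(font_);
        DeleteDC(hdc_);
    }
    SIZE measure(const std::string& text) const {
        SIZE sz{};
        GetTextExtentPoint32A(hdc_, text.c_str(), (int)text.size(), &sz);
        return sz;
    }
private:
    HDC     hdc_;
    HFONT   font_;
    HGDIOBJ old_;
};
```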
Sam_Sharp Posted August 24, 2021 I came across this topic whilst trying to figure out the text width for UTF-16 strings. The solution we used was to write the text/font to the caption of a string control with 'Size to Text' enabled and then read the Caption.Area Width property. I think the only issue is that LabVIEW adds a few pixels of spacing around the caption, but that should be a constant that can be subtracted.
JasonD Posted August 24, 2021 Nice approach! Any idea how the performance is if you were to call it a zillion times in a loop? How many milliseconds per call (averaged)?
Sam_Sharp Posted August 24, 2021 Excuse the benchmarking spaghetti, but it does seem to be slower: 0.26 s for 1000 updates using Get Text Rect, versus 0.53 s using the caption method with a UTF-16 string. Using the caption method without UTF-16 support (i.e. just applying the font and text to the caption) it's still slower, at 0.44 s for 1000 updates. For UTF-16 strings we don't have a lot of choice, though. A previous workaround was to convert the UTF-16 to ASCII and then replace the resulting ?'s with one or more sufficiently wide characters.
MikaelH Posted August 25, 2021 If you Defer Panel Updates, will that save time?