
2D Picture Control Performance Tips



I have never gotten the performance that I want out of the 2D picture control. I always think it should be cheaper than using standard controls, since it doesn't have to handle user input, click events, etc. But it always seems to be slower.

I was wondering if any of the wizards out there had any 2D picture control performance tips that could help me out?

Some questions that come to mind:

Is the conversion to/from a pixmap costly?

Why does the picture control behave poorly in a shift register?

What do the Erase First settings cost, performance-wise?

Anything you can think of that is generally a bad idea with picture controls?

Anything you can think of that is generally a good idea with picture controls?


I love the Picture Control; it's very fast if you use it correctly. I developed the whole GDS UML modeller (http://opengds.github.io/) based on it.
One performance issue is that if you draw a lot of text with a non-default font size, it becomes very slow.

Make sure you use Smooth Updates, and I always use Erase First.
What is it about the shift register that makes it behave poorly?
Do you have an example where it's slow that we can look at?


 

13 hours ago, Taylorh140 said:

Why does the picture control behave poorly in a shift register?

This one I can answer. The picture data type is really a string: a string of instructions describing how the image should be rendered. So imagine the first instruction is something like "Draw a rectangle that is 50 by 50 starting at 0,0 and is solid red", and the next instruction is "Draw a rectangle that is 50 by 50 starting at 0,0 and is solid blue". Both instructions are embedded in the string; one gets drawn, then the other on top. Obviously in this example the red rectangle can't be seen, since it's under the blue one, but the control will still execute all the drawing operations, even the ones that can't be seen. Now if this is in a shift register, the string of instructions keeps getting longer as we concatenate more instructions onto the end of the string over and over again. Here is a post over on the dark side talking about it a bit. And here is an awesome post by Norm talking about how the image instructions are stored, and how they can be manipulated (as strings) to perform image translations (repositioning in X and Y) by changing these string values.
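To make the growth concrete, here's a rough sketch in Python pseudocode (the text opcode format is made up for illustration; the real picture string is a binary instruction format):

```python
# Rough sketch (Python, not LabVIEW) of why a picture wire in a shift
# register gets slower every iteration.

def draw_rect(x, y, w, h, color):
    """Return a fake 'draw rectangle' instruction."""
    return f"RECT {x} {y} {w} {h} {color};"

# Anti-pattern: the picture is fed back through a shift register, so
# each iteration appends instructions to everything drawn so far.
picture = ""
for frame in range(100):
    picture += draw_rect(0, 0, 50, 50, "red")
    picture += draw_rect(0, 0, 50, 50, "blue")
# 200 instructions now; every render executes all of them, even though
# only the last blue rectangle is visible.

# Fix: rebuild the instruction string from an empty picture each frame.
for frame in range(100):
    picture = draw_rect(0, 0, 50, 50, "blue")  # constant size, constant cost
```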

As for suggestions: in the past, what I've often needed is a picture control built up from several other images. These can be combined with Concatenate Strings in the order you specify. So oftentimes I keep the pieces of the overall image in memory, so that I can quickly recreate the final image by swapping one piece out. For instance, let's say I have a button and an overlay for when the mouse is over it. I draw the button, draw the overlay, and keep them both in some private data. Then I concatenate the two when the mouse is over the picture, or just show the (already drawn) button when it isn't. This is what I did in my Toolbar class: each button is its own image that I keep track of, and I combine them all to draw the whole result instead of re-rendering the whole toolbar.
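A rough sketch of that caching pattern in Python pseudocode (the class and method names here are illustrative, not taken from the actual Toolbar class):

```python
class Button:
    """Holds pre-rendered picture instruction strings for one button."""
    def __init__(self, base, hover_overlay):
        self.base = base                    # drawn once, kept in private data
        self.hover_overlay = hover_overlay  # drawn once, kept in private data

    def render(self, mouse_over):
        # Concatenating the strings layers the overlay on top of the
        # base image; nothing is actually re-drawn here.
        return self.base + (self.hover_overlay if mouse_over else "")

class Toolbar:
    """Composes the full picture from per-button images on demand."""
    def __init__(self, buttons):
        self.buttons = buttons

    def render(self, hovered_index=None):
        # Swap one piece out and re-concatenate; the individual button
        # images never need to be re-rendered.
        return "".join(b.render(i == hovered_index)
                       for i, b in enumerate(self.buttons))
```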


Actually, the fact that the data is a string of commands makes a lot of sense. When I didn't use smooth display, I saw the individual elements being drawn even when I had front panel updates deferred.

Since I used bunches of static text, I decided to draw it only the first time, by setting Erase First to 2 (erase every time) for the initial draw and then to 0 (never erase). I tried the 1 setting (erase first time), but it always erased the content I wanted to draw the first time.
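Here's a rough sketch of that sequence in Python pseudocode (a stand-in object replaces the real property node and Value updates; the 0/1/2 enum values match the ones above, everything else is hypothetical):

```python
ERASE_NEVER = 0       # never erase
ERASE_FIRST_TIME = 1  # erase first time (didn't work for me, see above)
ERASE_EVERY_TIME = 2  # erase every time

class PictureControl:
    """Hypothetical stand-in for the picture control's properties."""
    erase_first = ERASE_EVERY_TIME
    value = ""  # the picture instruction string

def update(ctl, static_layer, dynamic_layer, first_frame):
    if first_frame:
        # Erase once, then render the expensive static text.
        ctl.erase_first = ERASE_EVERY_TIME
        ctl.value = static_layer
        # From here on, never erase, so the static layer stays put.
        ctl.erase_first = ERASE_NEVER
    # Later frames draw only the cheap dynamic instructions on top.
    ctl.value = dynamic_layer
```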

I noticed that if I used a sub-VI to do operations like setting Erase First, the settings didn't propagate up a level, and the same operations needed to be done on the top-level picture control.

After those changes I went from 30 fps to 120 fps, which I find much more acceptable.

The control only used the drawing functions from the Picture Functions palette, and it looked somewhat like the attached image, except with real information:

2018-03-02_16-17-52.png


