Everything posted by Mads

  1. Personally I would say it's not established enough to be intuitive to the user. But hey, if the users of this particular app expect it to work that way, then that's great. The general rule, though, should be that if it is possible to implement it in a more established way, then do that. You can always add a non-standard way as a second option for people to use if they prefer it, but not as the only option. A lot of people have the habit of trying to change GUI behaviour to something "better" - ignoring the fact that the cost of forcing people to change their behaviour from what they know and expect is huge, and seldom worth paying. That's why I asked what the goal of the right-click menu is in this case. If the goal turns out to be achievable in a more standard way, then that should be prioritized over the perhaps "cooler" alternative (not knowing the use case here, the standard way might even turn out to be the coolest).
  2. If the right-click is only supposed to work if it is on the selected page, do you really need to detect which *tab* was clicked? You can just assume that if the tab control as a whole was right-clicked then that is a right-click on the current page. No need to fiddle with finding out if the user for some reason hit another tab just to not react to it. Any right-click will lead to an action related to the current page. That is probably easier for the user to comprehend than if the right-click in some cases just does not give any response. Again - this would also be consistent with other GUIs - which do not react to right-clicks on tabs, but might have a menu associated with the tab page or control as a whole.
  3. What functionality are you trying to achieve? Are you dynamically building shortcut menus for the tab (not the page) that is right-clicked? The value of the control will tell you what page is currently selected, but the selection has to be done with a left-click first... If you right-click on the tab itself and expect a tab-specific reaction, then tab controls in general (non-LabVIEW apps included) will normally not respond to it. So perhaps the best thing is to not build a GUI where the user is expected to do such a thing. You could alternatively place transparent controls above each tab to replace and expand the built-in behaviour, or hide the tabs and just use buttons shaped to look like tabs. Not as elegant as getting the info from the tab control itself, but off the top of my head those are the options I can think of.
  4. That's quite common. In 2012 you can now define an executable to run when the user uninstalls your application; that idea was first posted by yours truly, then by Jim Kring. (Mine was marked as a duplicate because the latter one got more popular (pictures, pictures, pictures!)). Neither of them is credited in the 2012 release, nor marked as completed (though it is perhaps a bit early after the release). Inlining is another example, dating back to LV 2010. It has been marked as completed now (although the implementation is not exactly as simple to use as I envisioned it) - but was not credited because it was conceived independently of the Idea Exchange. It is only natural that we share a lot of the same ideas/needs.
  5. Are there any new features we can utilize for the array toolkit in 2012? The conditional tunnel VIs, for example, will still be nice to have in 2012. As the upgrade notice says: "Note The Conditional tunnel option performs memory allocations as often as the Build Array implementation. Therefore, just like with the Build Array function, National Instruments recommends you consider alternatives to the conditional tunnel in portions of your application where performance is critical."
  6. In an ideal world we would always jump straight to the optimal solution, but if that's not an option it is still better to get something rather than nothing - unless that something is just a fraction better, yet good enough for NI not to bother coming up with anything better after that...
  7. Any type of solution would be good. The "Polymorphism Wizard" idea is possible to implement without any changes to the rest of LabVIEW / G though, so there is less reason for it not to happen, perhaps.
  8. I just posted an idea on the idea exchange related to the polymorphic VI creation.
  9. Hah, I do not have any problem with further improvements (although this one is relatively minor I would say) - as long as I'm not the one to update all those polymorph instances again ;-)
  10. So here it is: a new and complete OpenG array library with 8 of its functions optimized for speed and memory usage. All the polymorph instances have been revised. I have also backsaved it from 2011 to 2009 (which is as far back as the Remove Duplicates function will go now that it uses the In Place Element structure...). I guess this is as far as I can contribute. I really hope that we can see this (or even further improved versions) in an official release not too far into the future(?). Mads OpenG Array Revised R4 LabVIEW 2009.zip OpenG Array Revised R4 LV2011.zip
  11. Crossrulz, the use of a for loop was initially coincidental (the difference is less than it used to be now that the for loop can be aborted), but I ran a comparison of it with a while loop and did not see a change in performance, so I kept the for loop for its simplicity. Perhaps there are test scenarios where the while loop has an edge though. You have tidied up the code very nicely. I guess we could avoid an unnecessary last search by checking whether the found index is at the end of the array too - but doing that check repeatedly might cost more than we gain. I've started applying the new algorithms to all of the polymorph instances... (and I have to have a look at some 2D versions of those 1D ones), but if there are bugs to be found or further improvements to be done then it would be nice to hold that off until everything is checked out for the DBL versions. Or perhaps someone has already written scripts that change the data types automatically and generate the polymorph sets? That would definitely cut a lot of work :-)
  12. I've now optimized the last of the array functions that can be noticeably improved: the Search 1D Array function. The optimization I've done is to replace the repeated building of the result array with a dynamic array (preallocated and then resized in growing chunks only when needed), as sketched below. This gains more speed the bigger the result array is, and should normally save some memory as well. Test results: from no speedup (when the sought value covers 0% of the input array) up to a 6x increase in speed (100% sought value). OpenG Array functions improved Rev3.zip
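To illustrate the growth strategy, here is a minimal sketch in Python (the actual OpenG VI is of course LabVIEW G, and the initial chunk size here is an arbitrary assumption): the result array is preallocated and only resized in doubling chunks, instead of being rebuilt on every match.

```python
def search_1d_array(data, sought):
    """Return the indices of every occurrence of `sought` in `data`."""
    capacity = 64                      # initial preallocation (assumed size)
    results = [0] * capacity
    count = 0
    for i, value in enumerate(data):
        if value == sought:
            if count == capacity:      # grow in chunks, not per element
                capacity *= 2
                results.extend([0] * (capacity - len(results)))
            results[count] = i
            count += 1
    return results[:count]             # trim to the actual number of hits
```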
  13. Indeed. I've attached a new set here with the correct Remove Duplicates. OpenG Array functions improved Rev2.zip
  14. I've spent some additional time on the array function optimization. Attached is an archive of the revised VIs. It's just the 1D DBL versions so far, but the logic itself will of course apply to all the data types. The filter array logic is the one developed by Wouter, but the inputs and outputs (of all the VIs) are identical to those of the originals. The VI with the largest improvement so far is, not surprisingly, the Delete Elements function. The larger the array, the more significant the change is, but to give an idea: on a 250k-element array the speed was 400 to 900x that of the original on my machine (depending on how big a fraction of the array (random elements) it should remove). I've run tests in the IDE with and without debugging enabled, and on built applications. Sometimes the difference between variations of the algorithms would be very small, and then I've chosen the one that uses the least memory. I'm sure all of them can be improved even further, but it's a good step in the right direction I think. OpenG Array functions improved Rev1.zip
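The post does not spell out the Delete Elements algorithm, but a single-pass compaction is one likely shape of such an optimization: instead of deleting elements one at a time (a resize and copy per deletion), the kept elements are written into a preallocated output in one pass. A minimal sketch in Python for illustration (function and variable names are mine):

```python
def delete_elements(data, indices_to_delete):
    """Return `data` with the elements at `indices_to_delete` removed."""
    doomed = set(indices_to_delete)              # O(1) membership tests
    output = [None] * (len(data) - len(doomed))  # preallocate the final size
    j = 0
    for i, value in enumerate(data):
        if i not in doomed:                      # keep everything not doomed
            output[j] = value
            j += 1
    return output
```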
  15. You are right Mellroth - you save quite a bit of memory that way. The difference in speed is not noticeable at first (in fact it was negative on my machine), but it surfaces when debugging is disabled. Nice.
  16. If the starting point is a function that repeatedly resizes the array unnecessarily, then I would expect there to be plenty of room for improvement before the variations due to data types etc. become significant (or?). If we could make sure that that part of the optimization is applied to all the OpenG functions then we would surely have achieved a lot - and maybe as much as is possible without having different code for different data types, operating systems etc. (The former might not be too bad though...)
  17. The OpenG array functions are some of the most useful VIs there are, I think. I'm very grateful for them. I've done quick jobs where those functions have been used extensively. In the cases where those applications have ended up being used on very large chunks of data, I've had to revisit the design though, to remove or redesign the OpenG functions to get acceptable performance. So, when I saw this thread I just picked the next array function I could find on the palette and threw together something I would expect to run faster (i.e. it is not necessarily the optimal solution, but a better one). Attached is the OpenG Remove Duplicates function revised (as mentioned, it was thrown together so I've just used the auto-cleanup on it - not very nice, but it does outperform the original quite nicely). The improvement varies a lot with the size of the array and the number of duplicates; I got anything from 2 to 5 times the speed of the original in my quick tests. I've kept the original's inputs and outputs, but the performance would have been better if the indexes output was dropped. For optimal performance it would be better to sort the input array and do a binary search for the duplicates. That would add another (optional?) step to recreate the original order of the input array, but still, on large arrays it would have a huge impact on the performance. PS. To actually publish an improved version there is of course still a lot of work needed. If deemed of interest I could always redo this one properly and apply it to the full polymorph VI set. MTO Remove Duplicates from 1D Array (DBL)__improved.vi
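To make the sort-based approach concrete, here is a minimal sketch in Python for illustration (the VI itself is LabVIEW G). After sorting, every duplicate sits next to its twin, so a neighbour comparison finds them - which achieves the same end as the binary search mentioned above - and the remembered indices let the original order be restored afterwards:

```python
def remove_duplicates(data):
    """Return the unique elements of `data` in their original order,
    together with the index of each element's first occurrence."""
    # sort indices by value; Python's stable sort keeps equal values
    # ordered by ascending index, so the first of each run of equal
    # values is also the earliest occurrence in the input
    order = sorted(range(len(data)), key=lambda i: data[i])
    keep = [i for pos, i in enumerate(order)
            if pos == 0 or data[i] != data[order[pos - 1]]]
    keep.sort()                        # restore the original input order
    return [data[i] for i in keep], keep
```

On large arrays this is O(n log n) instead of the O(n²) of a search-as-you-go approach, which is where the huge impact mentioned above comes from.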
  18. "What if NI were to take the top 10 or so ideas from the Idea Exchange..." But that's part of the problem with the Idea Exchange kudos system; the top 10 ideas *are* low hanging fruits with minor impact already. The good ideas (as in; will significantly improve the power of LabVIEW and the products we can make with it) hide in the middle and even lower ranks. The current top ten ideas yet to be implemented or declined are: 1. Wait (ms) with error pass-throu​gh 2. Show hidden controls as "ghosts" in edit mode 3. A faster & neater way to show Cluster Element Labels 4. Probes for Loop Iteration 5. Selection of Items on BD or FP needs to be Easier! 6. Some indication that a string control isn't showing the entire string. 7. Align objects should not align increment/​decrement buttons 8. Same Height of Unbudle by Name / Terminal / Local Variable 9. Include LabVIEW Version Number in Applicatio​n Icon 10. Smaller Event Ref Constants Of these ten only number 6 will be of any direct use to the end-user (unless the developer *is* the end user and/or the development itself constitutes the majority of time spent on the application). None of them are "killer apps".
  19. Based on the scan rate target you mention, it seems like you have more than thirty devices on the same port - is that the case? We normally only use 8 devices on each port, as we typically get into electrical and timing issues at higher numbers. Port servers are a great solution for getting multiple ports, but the virtual port drivers that come with them are *always* crappy (Moxa, Advantech, Westermo, etc.), so get one that you can set in TCP/UDP server / raw TCP/IP mode (we mostly use 16-port devices from Moxa). I wrote a test that simulates both a master and a slave (running either as two separate producer/consumer loops, or as a single sequence), and used a set of virtual ports (Eltima) to check how they performed in the scenario you describe. I got down to 25 ms at 38400 baud. With the virtual ports I can turn off strict baud rate simulation though; that got the cycle time down to 0.6 ms(!). That seems to indicate that the situation is a bit complex. It would be nice to know what happens under the hood that explains the larger-than-expected difference (larger, that is, than the actual transfer time accounts for).
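For reference, a round-trip test of this kind is easy to sketch outside LabVIEW as well. Below is a minimal Python/pyserial version under assumed conditions (the port name, the 8-byte frames, and an echoing slave running on the paired port are all illustrative; the original test was written in LabVIEW):

```python
import time
import serial  # pyserial

# master side; a separate process must echo frames back on the paired port
master = serial.Serial("COM10", baudrate=38400, timeout=1.0)

REQUEST = bytes(8)              # dummy 8-byte query frame
CYCLES = 1000

start = time.perf_counter()
for _ in range(CYCLES):
    master.write(REQUEST)
    reply = master.read(8)      # block until the slave's response arrives
elapsed = time.perf_counter() - start

print(f"average cycle time: {elapsed / CYCLES * 1000:.2f} ms")
```

At 38400 baud the 16 bytes on the wire alone take roughly 4 ms (16 bytes x 10 bits / 38400 bps), so anything much above that is driver and scheduling overhead.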
  20. On a side note: if you are going to populate it with a lot of items (hundreds), it might be an idea not to fill it all in one go. Add child items on demand instead, i.e. when a node is expanded (use the "Item Open" filter event to intercept), as sketched below. Otherwise population of the native tree control can be extremely slow and make the GUI abnormally unresponsive.
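The populate-on-demand pattern looks roughly like this; the sketch uses Python/Tkinter purely for illustration (in LabVIEW the hook is the tree control's "Item Open" filter event, and the data source here is made up). Each unexpanded node gets a dummy child so the expand arrow still shows; the real children are inserted only when the node is first opened:

```python
import tkinter as tk
from tkinter import ttk

def fetch_children(node_id):
    # placeholder data source; a real app would query its model here
    return [f"{node_id}/child{i}" for i in range(3)]

root = tk.Tk()
tree = ttk.Treeview(root)
tree.pack(fill="both", expand=True)

def add_lazy_node(parent, node_id, text):
    tree.insert(parent, "end", node_id, text=text)
    tree.insert(node_id, "end", node_id + "/dummy")   # makes it expandable

def on_open(event):
    node_id = tree.focus()
    children = tree.get_children(node_id)
    if children and children[0].endswith("/dummy"):
        tree.delete(children[0])                  # drop the placeholder
        for child in fetch_children(node_id):     # populate on demand
            add_lazy_node(node_id, child, child.rsplit("/", 1)[-1])

tree.bind("<<TreeviewOpen>>", on_open)
add_lazy_node("", "node0", "Root")
root.mainloop()
```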
  21. Correct asbo - I meant images of the Platform DVDs - or more to the point, the packaged installers. We use Volume License Manager, so having to make a volume license installer for each add-on etc., instead of being able to create a single one, is a lot of work.
  22. It would be nice if they published platform DVDs instead of 20 individual installers. We get our developer suite DVDs after a while, but why should we have to wait?
  23. Sorry Jason, I never seem to get notifications from LAVA when there is a reply... But now you have the solution anyway; it's the same as Jonathon has posted. As I mentioned on OpenG there might be better ways of doing it, but it does the job.
  24. Yes, during development you need to manually deploy the VIs that you are going to launch dynamically before running the calling VIs. You only have to redo that if you change them; as long as they do not change, the previous deployment is valid for the next run... It would be nice if we could mark the dynamic VIs to always be deployed upon run if they are missing or have changed. (Perhaps you could suggest that on the RT Idea Exchange :-))
  25. I see swenp just beat me to it - but let me post my comment anyway: I do this in a lot of different applications.
      For built applications deployed on RT, the main thing to remember is that you have to ensure that the VIs are actually put where the code expects them to be. It is not enough to just add them to the Always Included box - you also have to define a destination (on the Destinations tab in the build script) and set the destination of the VIs to that target (on the Source File Settings tab). Do that, build and deploy - then FTP onto the target and check that the destination folders got created and the VIs were put there. The base directory of the application is normally ni-rt\startup\, so if, for example, I want to put the subVIs in a folder named Objects that I can refer to as "Current VIs Parent Directory\Objects" (see the explanation below), I create a destination (I would call it Objects) on the Destinations tab with the path c:\ni-rt\startup\Objects, and then set the destination of the VIs to "Objects" on the Source File Settings tab. (I would normally have folders of such VIs, so I would just select the folder, check the "Set destination of all contained items" box and set the destination.)
      To actually find the directory of the executable I normally use the OpenG VI called "Current VIs Parent Directory__ogtk.vi". I use this instead of e.g. the App.Dir property, because then I always know where it points: the directory of the calling VI (unlike App.Dir, which might be the directory of LabVIEW during development...).
      One bothersome thing when you do dynamic calls like this is that during debugging you have to make sure you have deployed a copy of the VIs to the target prior to running your app... otherwise the callers will be downloaded to memory on the target and never find the VIs they want to dynamically launch, because they will look for the directory on the RT target. So do a deploy, then run to debug - but the problem then is that you cannot debug the instances that get launched on the RT target. Unfortunately that's just part of the game as LV RT works now.
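The reason for preferring the current VI's own directory over App.Dir translates directly to other languages. A minimal sketch in Python for illustration (the folder name mirrors the Objects layout above; the helper names are mine): resolve the plugin directory relative to the calling code's own location, so the result is the same in the IDE and in the built, deployed application.

```python
import os

def plugins_dir():
    """Directory holding the dynamically called items, e.g. .../Objects."""
    # anchored to this file's own location, not the interpreter's
    # directory or the current working directory (the App.Dir pitfall)
    here = os.path.dirname(os.path.abspath(__file__))
    return os.path.join(here, "Objects")

def resolve_plugin(name):
    path = os.path.join(plugins_dir(), name)
    if not os.path.exists(path):
        # on LabVIEW RT this is the symptom of a missing deployment:
        # the caller runs on the target but the items were never copied
        raise FileNotFoundError(f"plugin not deployed: {path}")
    return path
```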