Everything posted by Mads

  1. Any type of solution would be good. The "Polymorphism Wizard" idea could be implemented without any changes to the rest of LabVIEW / G though, so perhaps there is less reason for it not to happen.
  2. I just posted an idea on the idea exchange related to the polymorphic VI creation.
  3. Hah, I do not have any problem with further improvements (although this one is relatively minor I would say) - as long as I'm not the one to update all those polymorph instances again ;-)
  4. So here it is: a new and complete OpenG array library with 8 of its functions optimized for speed and memory usage. All the polymorph instances have been revised. I have also backsaved it from 2011 to 2009 (which is as far back as the Remove Duplicates function will go now that it uses the In-Place structure...). I guess this is as far as I can contribute. I really hope that we can see this (or even further improved versions) in an official release not too far into the future(?). Mads OpenG Array Revised R4 LabVIEW 2009.zip OpenG Array Revised R4 LV2011.zip
  5. Crossrulz, The use of a for-loop was initially coincidental (the difference is less than it used to be now that the for loop can be aborted...), but I ran a comparison of it with a while loop and did not see a change in performance, so I kept the for-loop for its simplicity. Perhaps there are test scenarios where the while loop has an edge though. You have tidied up the code very nicely. I guess we could avoid an unnecessary last search by checking whether the found index is at the end of the array too - but doing that check repeatedly might cost more than we gain. I've started applying the new algorithms to all of the polymorph instances... (and I have to have a look at some 2D versions of those 1D ones), but if there are bugs to be found or further improvements to be made, it would be nice to hold that off until everything is checked out for the DBL versions. Or perhaps someone has already written scripts that change the data types automatically and generate the polymorph sets? That would definitely cut a lot of work :-)
  6. I've now optimized the last of the array functions that can be noticeably improved: the Search 1D Array function. The optimization I've done is to replace the repeated result-array building with a dynamic array (preallocated and then resized in growing chunks only when needed); a rough sketch of this chunked-growth idea is included after this list. The bigger the result array, the bigger the speed gain, and it should normally save some memory as well. Test results: 0 (sought value covers 0% of the input array) to 6x (sought value covers 100%) increase in speed. OpenG Array functions improved Rev3.zip
  7. Indeed. I've attached a new set here with the correct Remove Duplicates. OpenG Array functions improved Rev2.zip
  8. I've spent some additional time on the array function optimization. Attached is an archive of the revised VIs. It's just the 1D DBL versions so far, but the logic itself will of course apply to all the data types. The filter array logic is the one developed by Wouter, but the inputs and outputs (of all the VIs) are identical to those of the originals. The VI with the largest improvement so far is, not surprisingly, the Delete Elements function (a sketch of one way to avoid the repeated resizes is included after this list). The larger the array the more significant the change is, but to get an idea: on a 250k-element array the speed was 400 to 900x that of the original on my machine (depending on how big a fraction of the array (random elements) it should remove). I've run tests in the IDE with and without debugging enabled, and on built applications. Sometimes the difference between variations of the algorithms would be very small, and then I've chosen the one that uses the least memory. I'm sure all of them can be improved even further, but it's a good step in the right direction I think. OpenG Array functions improved Rev1.zip
  9. You are right Mellroth - you save quite a bit of memory that way. The difference in speed is not noticeable at first (in fact it was negative on my machine), but it surfaces when debugging is disabled. Nice.
  10. If the starting point is a function that repeatedly resizes the array unnecessarily, then I would expect there to be plenty of room for improvement before the variations due to data types etc. become significant (or?). If we could make sure that that part of the optimization is applied to all the OpenG functions, we would surely have achieved a lot - and maybe as much as is possible without having different code for different data types, operating systems etc. (The former might not be too bad though...)
  11. The OpenG array functions are some of the most useful VIs there are, I think. I'm very grateful for them. I've done quick jobs where those functions have been used extensively. In the cases where those applications have ended up being used on very large chunks of data, I've had to revisit the design though, to remove or redesign the OpenG functions to get acceptable performance. So, when I saw this thread now I just picked the next array function I could find on the palette and threw together something I would expect to run faster (i.e. it is not necessarily the optimal solution, but a better one). Attached is the OpenG Remove Duplicates function revised (as mentioned, it was thrown together so I've just used the auto-cleanup on it, not very nice, but it does outperform the original quite nicely). The improvement varies a lot with the size of the array and the number of duplicates. I got anything from 2 to 5 times the speed of the original in my quick tests. I've kept the original's inputs and outputs, but the performance would have been better if the indexes output was dropped. For optimal performance it would be better to sort the input array and do a binary search for the duplicates (a rough sketch of that approach is included after this list). That would add another (optional?) step to recreate the original order of the input array, but still, on large arrays it would have a huge impact on the performance. PS. To actually publish an improved version there is of course still a lot of work needed. If deemed of interest I could always redo this one properly and apply it to the full polymorph VI set. MTO Remove Duplicates from 1D Array (DBL)__improved.vi
  12. "What if NI were to take the top 10 or so ideas from the Idea Exchange..." But that's part of the problem with the Idea Exchange kudos system; the top 10 ideas *are* low hanging fruits with minor impact already. The good ideas (as in; will significantly improve the power of LabVIEW and the products we can make with it) hide in the middle and even lower ranks. The current top ten ideas yet to be implemented or declined are: 1. Wait (ms) with error pass-throu​gh 2. Show hidden controls as "ghosts" in edit mode 3. A faster & neater way to show Cluster Element Labels 4. Probes for Loop Iteration 5. Selection of Items on BD or FP needs to be Easier! 6. Some indication that a string control isn't showing the entire string. 7. Align objects should not align increment/​decrement buttons 8. Same Height of Unbudle by Name / Terminal / Local Variable 9. Include LabVIEW Version Number in Applicatio​n Icon 10. Smaller Event Ref Constants Of these ten only number 6 will be of any direct use to the end-user (unless the developer *is* the end user and/or the development itself constitutes the majority of time spent on the application). None of them are "killer apps".
  13. Based on the scan rate target you mention it seems like you have more than thirty devices on the same port, is that the case? We normally only use 8 devices on each port, as we typically get into electrical and timing issues at higher numbers. Port servers are a great solution to get multiple ports, but the virtual port drivers that come with them are *always* crappy (Moxa, Advantech, Westermo, etc.), so get one that you can set in TCP/UDP server / raw TCP/IP mode (we mostly use 16-port devices from Moxa). I wrote a test that simulates both a master and a slave (running either as two separate producer/consumer loops, or as a single sequence), and used a set of virtual ports (Eltima) to check how they performed in the scenario you describe. I got down to 25 ms at 38400 baud. With the virtual ports I can turn off strict baud rate simulation though; that got the cycle time down to 0.6 ms(!). That seems to indicate that the situation is a bit complex. It would be nice to know what happens under the hood to explain the difference, which is larger than the actual transfer time alone would suggest.
  14. On a side note: if you are going to populate it with a lot of items (hundreds) it might be an idea to not fill it all in one go. Add child items on demand instead, i.e. when a node is expanded (use the "Item Open" filter event to intercept). Otherwise population of the native tree control can be extremely slow and make the GUI abnormally unresponsive.
  15. Correct asbo - I meant images of the Platform DVDs - or more to the point: the packaged installers. We use the Volume License Manager, so having to make an installer for each add-on etc., instead of creating a single volume license installer, is a lot of work.
  16. It would be nice if they published platform DVDs instead of 20 individual installers. We get our developer suite DVDs after a while, but why should we have to wait?
  17. Sorry Jason, I never seem to get notifications from LAVA when there is a reply... But now you have the solution anyway; it's the same as Jonathon has posted. As I mentioned on OpenG there might be better ways of doing it, but it does the job.
  18. Yes, during development you need to deploy the VIs that you are going to launch dynamically manually, prior to running the calling VIs. You only have to redo that if you change them; as long as they do not change, the previous deployment is valid for the next run... It would be nice if we could mark the dynamic VIs to always be deployed upon run if they are missing or have changed. (Perhaps you could suggest that on the RT Idea Exchange :-))
  19. I see swenp just beat me to it - but let me post my comment anyway: I do this in a lot of different applications. For built applications deployed on RT the main thing to remember is that you have to ensure that the VIs are actually put where the code expects them to be. It is not enough to just add them to the Always Included box - you also have to define a destination (on the Destinations tab in the build script) and set the destination of the VIs to that target (on the Source File Settings tab). Do that, build and deploy - then FTP onto the target and check that the destination folders got created and the VIs were put there.
     The base directory of the application is normally ni-rt\startup\, so if e.g. I want to put the subVIs in a folder named Objects that I can refer to as "Current VIs Parent Directory\Objects" (see later explanation), then I create a destination (I would call it Objects) on the Destinations tab that has the path c:\ni-rt\startup\Objects, and then set the destination of the VIs to "Objects" on the Source File Settings tab (I would normally have folders of such VIs, so I would just select the folder, check the "Set destination of all contained items" box and set the destination).
     To actually find the directory of the executable I normally use the OpenG VI called "Current VIs Parent Directory__ogtk.vi". I use this instead of e.g. the App.Dir property because then I always know where it points to: the directory of the calling VI (unlike App.Dir, which might be the directory of LabVIEW during development). A small sketch of this run-time path resolution idea is included after this list.
     One bothersome thing when you do dynamic calls like this is that during debugging you have to make sure you have deployed a copy of the VIs to the target prior to running your app... otherwise the callers will be downloaded to memory on the target and never find the VIs they want to dynamically launch, because they will look for the directory on the RT target... So do a deploy, then run to debug; the problem then is that you cannot debug the instances that get launched on the RT target. Unfortunately that's just part of the game as LV RT works now.
  20. TDMS is extremely flexible. You can dump anything you want into the files, at any time - without worrying about how to find it again. The downside in my case is the speed you get when you need to read the data. Even when the TDMS file is defragmented (a must if you need to write in small and varying segments, otherwise the performance gets really crappy), a custom binary format will be much, much faster to read.
  21. I see your point and agree that it would be better to have the file somewhere you can write to it. On the other hand; how often do you need to change LabVIEW-specific settings? The user can still do it manually if required, and/or you could edit the access rights so that the program has access to write to it. You can also store many of these in another configuration file and override the settings of the .ini file in your own code as soon as it executes. That's the case for the VI server port that you mention e.g. (It can be set programmatically using the VI Server Port property).
  22. Do you really need to both specify a custom LabVIEW-type .ini file for each run *and* pass custom parameters to the application via the command line feature? Let me assume that the reason for this request is just that you have not explored all the options yet, and give you some pointers (if you know all this already and it's not applicable, just ignore the "instructional tone" of the following text... ;-) )
     The built application will always read the INI file, and the LabVIEW Run-Time Engine will automatically use the LabVIEW keys it recognises in the section that has the same name as the executable. If you want to use the same file for additional parameters you can, but then you'll have to write code that reads the file and handles your custom keys yourself (a small sketch of this split is included after this list). Such use of the ini file is a nice solution for parameters that should be somewhat configurable, but which do not need to be changed frequently. For parameters you would want to change often, you should make a user interface in the application that reads and writes to separate configuration files instead. Those files should be stored in the directories dedicated by the OS to application data (on Windows this would be the AppData or ProgramData folders).
     The command line argument feature is a different way "in", with its own use case (I guess NI did not expect you to combine LabVIEW-specific arguments AND custom arguments there, so they turn the former off if you activate the interface for the latter). There you can specify parameters more dynamically at the call of the application instead, something which is more practical for a limited set of parameters that you may (or may not) want to define at startup. Most people do not want to have to specify parameters in this fashion, but it can be a good way to allow other applications or an administrator to individualize each session.
  23. There is another interesting option here. The end goal of the user is often not to save the graph as an image, but to get the image into a document. Instead of forcing the user to save the image as a file first, you can add a "Copy As Image" option. The key to getting this functionality is to generate the image and load it into a picture control in the background - and then use the picture control's Export Image method with the target set to Clipboard. The attached VI does the job for you once you have generated the picture. Image to Clipboard.vi
  24. Regarding ID: 3004519 - Use custom decimal sign for floats: I hope the automatic solution is chosen. Otherwise you will have to know up front which decimal sign has been used, and that is not the case when configuration files are shared between computers with different decimal signs (useful in client/server applications e.g.). I can write code that will open the file and check prior to calling the OpenG function of course, but that's not very smooth. Doing it automatically might not work perfectly for every case (let's say that someone has specified the decimal sign to be something other than period or comma; after all you are free to do so, under Windows at least), but then those users can analyze and specify the decimal sign themselves. The automatic approach will still be able to offer a better solution for 95% of the cases... (a small sketch of the automatic handling is included after this list).
  25. Writing an empty array to the Item Names property will clear it; are you sure you have indeed done so when you tested it (perhaps you had other code that refilled it with old data from a shift register just afterwards, e.g.)? In general this should be an easier task in LabVIEW. I've suggested it on the Idea Exchange here. I've also made an RCF plug-in that will do it for you.
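
The chunked-growth idea described in post 6 is, roughly, the following. The actual OpenG VI is LabVIEW G code, so the sketch below uses Python purely to describe the algorithm; the function name and the chunk size are illustrative, not taken from the library.

    def search_1d_array(array, sought):
        """Return the indices of all elements equal to `sought`.

        Instead of appending one hit at a time (which forces repeated
        reallocations), the result buffer is preallocated and grown in
        chunks, then trimmed to the number of hits found.
        """
        chunk = 1024                      # growth increment (tuning parameter)
        result = [0] * chunk              # preallocated result buffer
        hits = 0
        for i, value in enumerate(array):
            if value == sought:
                if hits == len(result):   # buffer full: grow by one more chunk
                    result.extend([0] * chunk)
                result[hits] = i
                hits += 1
        return result[:hits]              # trim the unused tail before returning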
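
For the Delete Elements speed-up mentioned in post 8 I don't know the exact block diagram, but one common way to avoid the repeated array resizes is a single-pass compaction over the input. A minimal Python sketch of that idea (the names, and the assumption of valid, in-range, unique-enough indices, are mine):

    def delete_elements(array, indices_to_delete):
        """Return a copy of `array` with the given indices removed.

        Assumes every index in `indices_to_delete` is valid for `array`.
        The output is written in a single pass, so the array is never
        resized element by element.
        """
        delete_set = set(indices_to_delete)            # O(1) membership tests
        out = [None] * (len(array) - len(delete_set))  # preallocate the result
        pos = 0
        for i, value in enumerate(array):
            if i not in delete_set:
                out[pos] = value                       # keep element, no resize
                pos += 1
        return out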
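
The sort-based Remove Duplicates approach suggested in post 11 could look roughly like this. Again this is only a Python sketch of the algorithm, not the shipped G code; it keeps the first occurrence of each value and restores the original order at the end.

    def remove_duplicates(array):
        """Return (unique_values, original_indices), keeping first occurrences."""
        # Sort the indices by (value, index) so equal values become neighbours
        # and the first occurrence of each value comes first.
        order = sorted(range(len(array)), key=lambda i: (array[i], i))
        keep = []
        for i in order:
            if not keep or array[i] != array[keep[-1]]:
                keep.append(i)            # new value: remember its first index
        keep.sort()                       # restore the original ordering
        return [array[i] for i in keep], keep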
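
The path strategy in post 19 - resolve the folder of dynamically loaded components relative to the calling code's own location rather than a global application directory - translates to something like the sketch below. This is a Python analogy only (the real mechanism is the OpenG "Current VIs Parent Directory__ogtk.vi"); the folder name "Objects" mirrors the example in the post, and the frozen/executable check stands in for the built-application case.

    import os
    import sys

    def objects_directory():
        """Directory holding the dynamically loaded components."""
        if getattr(sys, "frozen", False):
            # Built application: next to the executable
            # (the RT analogue would be c:\ni-rt\startup\).
            base = os.path.dirname(sys.executable)
        else:
            # Development: next to this source file, so the relative
            # layout is the same in both situations.
            base = os.path.dirname(os.path.abspath(__file__))
        return os.path.join(base, "Objects")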
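
The ini-file split described in post 22 - the Run-Time Engine consumes the keys it recognises in the section named after the executable, while custom keys must be read by your own code - is sketched below using Python's configparser as a stand-in for that "read your own keys yourself" part. "MyApp" and "LogLevel" are made-up names for illustration.

    import configparser

    config = configparser.ConfigParser()
    config.read("MyApp.ini")          # the same ini file the executable reads

    # LabVIEW-recognised keys in the [MyApp] section (e.g. server.tcp.enabled)
    # are handled by the Run-Time Engine; custom keys you fetch yourself,
    # with a sensible default if the file or key is missing.
    if config.has_section("MyApp"):
        log_level = config.get("MyApp", "LogLevel", fallback="info")
    else:
        log_level = "info"
    print("Custom key LogLevel =", log_level)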
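
The automatic decimal-sign handling argued for in post 24 could, in principle, be as simple as the sketch below: inspect the stored string and normalise the separator before parsing. This is an illustrative Python sketch; it only handles period and comma, as discussed in the post, and ignores thousands separators and other locale settings.

    def parse_float_auto(text):
        """Parse a float whether it was written with '.' or ',' as decimal sign."""
        text = text.strip()
        if "," in text and "." not in text:
            text = text.replace(",", ".")    # comma was used as the decimal sign
        return float(text)

    print(parse_float_auto("3,14"))          # -> 3.14
    print(parse_float_auto("3.14"))          # -> 3.14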