Everything posted by Mads

  1. I just did a quick test for you. Updating the background color of all 200x50 cells (with panel updates deferred) still takes 1.1 seconds in LabVIEW 2013 (on my relatively new PC)... But do you really need to update them all, and in one go - or could you update them individually, and only when the cell content changes? Checking whether a change is actually required first takes practically no time, so (unless most of the cells are likely to need a change) you could reduce the number of actual updates enough to make it seem "instant".
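     As a minimal sketch of that diff-before-update idea (Python standing in for G, with hypothetical compute_color/update_cell_color helpers standing in for the property-node call):

        # Sketch: only push a colour update to cells whose value changed.
        # `table`, `prev_colors`, `compute_color` and `update_cell_color`
        # are placeholders for your own data and property-node wrapper.
        def refresh_colors(table, prev_colors, compute_color, update_cell_color):
            updates = 0
            for (row, col), value in table.items():
                new_color = compute_color(value)
                if prev_colors.get((row, col)) != new_color:   # skip unchanged cells
                    update_cell_color(row, col, new_color)     # the only expensive call
                    prev_colors[(row, col)] = new_color
                    updates += 1
            return updates  # typically far fewer than rows * cols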
  2. The *asynchronous* call pool is only required for synchronous calls? Was it not introduced with the async call by reference node in the first place?
  3. That is an interesting tidbit. I've so far thought that the only way to avoid getting blocked by the root loop was to populate the asynchronous call pool (and hope that the pool is large enough to meet the demand during a blocking event...). So populating the call pool is only needed for a speed gain, then?
  4. Microsoft has been messing a lot with the GUI lately. I think this article sums it up quite nicely: http://www.nngroup.com/articles/windows-8-disappointing-usability/ Most programs I make have multiple windows, but they also have a main window, which is the one that is shown after startup (once the splash screen has disappeared). The main window then has File>>Exit (Ctrl+Q). If I had an application with multiple windows that could all be perceived as "main" (meaning for example that it makes sense to have the app running with just one of them open), I would add the exit option to each window. It should be possible to shut down the whole app without having to close each window individually, and the File>>Exit option is a well-established and hence intuitive way to do that...
  5. It's not really about long term data logging. If you have a huge data set you will have to write it to disk and reload data from the source (DB or other alternative) dynamically anyway. In such cases the user expects, and will therefore accept, that he might need to provide input and perhaps also wait a noticeable time for the new data. However, if you have e.g. 50 MB of time-stamped doubles you can dump it all into a .Net graph without any worries. The GUI will run smoothly, and you do not need to bother handling events from the user's interactions with the data. The user can zoom and scroll with instant access to the underlying data. The graph will handle that amount of data fine on its own. That's not the case with the native LabVIEW XY graph. On a standard PC of today LabVIEW can easily hold much more data than that in memory (and in other types of controls/indicators on the front panel), just not in a graph. It is obviously much heavier to draw a graph than an array indicator, but if done right the native graph should at least be able to match the alternatives.
  6. A *really* old thread, but not much has happened since 2006 as far as I can see (LV2013), and perhaps the reason is that the problem is (as Jason wrote in the last entry here) still not acknowledged(?). The graphs decimate the displayed data, as Jason describes, but you still get a serious performance hit above a certain number of points (the GUI slows down to a halt...). For XY-graphs that number is very easy to hit. We have to use non-G alternatives to get proper performance with bigger data sets. So there seems to be something that slows things down even though, if the decimation worked, the number of points actually drawn should not grow. I could perhaps understand it if the software had a problem holding the full data set in memory (in the background), or if the slowness was only noticeable when the user made a change to the GUI that actually required the graph to recalculate which points to draw, but that does not seem to be the case. And obviously, code written in other languages *is* able to cope just fine, so there really is no excuse.
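     For what it's worth, the decimation itself is cheap; a rough min/max decimation sketch (Python, the point budget is arbitrary) that keeps the drawn point count bounded regardless of the data set size:

        # Rough min/max decimation: keep only the extremes of each bucket so
        # the number of points actually drawn stays bounded no matter how
        # large the underlying data set is.
        def decimate_minmax(y, max_points=2000):
            n = len(y)
            if n <= max_points:
                return list(y)
            bucket = n // (max_points // 2)
            out = []
            for start in range(0, n, bucket):
                chunk = y[start:start + bucket]
                out.append(min(chunk))
                out.append(max(chunk))
            return out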
  7. See the reply from NI on my idea to implement native support for this...it turns out it is supported, just not very obviously: http://forums.ni.com/t5/LabVIEW-Real-Time-Idea-Exchange/Support-industry-standard-time-server-NTP-e-g/idi-p/2340392 Here's an implementation using the System Config API: https://decibel.ni.com/content/docs/DOC-26987
  8. Good idea. It is a bit sad that we have to resort to such tricks though. In 2004 a typical installer from us would be about 4 MB. The same builds now take about 140 MB. I still use LabVIEW 7.1 every now and then if I need the small file size, and/or do not want an installer, but just have the RTE-files in the same folder as the executable. Same source code mostly, just older and more compact wrapping.
  9. Not a direct answer to your problem, but in similar situations I've solved the issue either by making the file producers use temporary names when they write the file, renaming them afterwards to something the copier is looking for, or by making the copier read the modification time/file size and only copy files that have not changed for a certain time. This only works if the file producers write and close just once, or at a slow pace, and/or if the copiers can extract valid content and copy that while the file is being updated.
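     A rough sketch of both approaches (Python; the paths and the ".part" suffix are just placeholders): the producer writes under a temporary name and renames when done, so the copier never sees half-written files; the second function is the modification-time-stability variant:

        import os, time

        def write_then_publish(path, data, tmp_suffix=".part"):
            # Producer: write under a temporary name, then rename so the
            # copier never picks up a half-written file.
            tmp = path + tmp_suffix
            with open(tmp, "wb") as f:
                f.write(data)
            os.replace(tmp, path)   # atomic on the same volume

        def stable_files(folder, quiet_seconds=10):
            # Copier variant: only yield files whose modification time has
            # been quiet for `quiet_seconds`.
            now = time.time()
            for name in os.listdir(folder):
                full = os.path.join(folder, name)
                if os.path.isfile(full) and now - os.path.getmtime(full) > quiet_seconds:
                    yield full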
  10. It should work as you describe it, yes, but even when it is done like this, LV often fails to re-link everything. I've ended up with corrupt class errors (it might ask where the class file is, but then fail to recognise it even though I know it is the same file, just in a folder that has been renamed). That's why I posted this on the idea exchange.
  11. The recommendations (from myself and others) were all covered in this thread: http://lavag.org/topic/15980-openg-filter-array-revised/ I summarized the recommended changes in this file. Not all the VIs in the library were changed, but most of the ones that repeatedly resize the arrays were modified to reduce the memory footprint and increase the speed.
  12. Any chance the updated array functions could be evaluated/included as well? The new Delete Array Elements functions for example can deliver a 100X speed increase compared to the existing ones.
  13. The array functions were subject to a lot of discussion and rework here on LAVA in this thread back in 2012: http://lavag.org/topic/15980-openg-filter-array-revised/ At the time I was quite eager to get the improvements included in the official OpenG release. I made a complete replacement of the existing array library, back-saved it to 2011 and posted it there...but I could not see a way forward from there, so I asked a question similar to yours on that thread. Ton then sent me a PM, asking me to recompile everything back to LV 2009, sign up on SourceForge, and send my SF ID to Jim or Jonathon. I got around to doing the sign-up, sent the ID, did not hear anything at first, and then things got hectic at work so I dropped the ball. I've never picked it up since. Ideally it would be simpler to contribute, and the path to do so could be clearer. In the mentioned case I wished there was a way to just hand off the 2011 code to someone who would then do the boring tasks involved in getting an update released...But that's just not the case (at least that is my impression), and I'm not blaming anyone for it.
  14. I have some architectures where different resources (COM-ports for example) are shared by handlers that are created dynamically to manage that single resource. Functions that need access to a given resource do it by contacting the relevant handler through its (single) input queue, and the message protocol requires them to provide a return queue for the reply. For historical reasons the return queue reference is not an actual reference, just a name generated by a given set of rules. So the handlers need to acquire and close return queues for every transaction. Typically each handler will do this 10-20 times per second, and there are typically 10-100 handlers running in the same application. The continuous acquisition of references has never caused any problems at these rates, and they are used in applications that (have to) run 24/7 on both PCs and PACs.
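     A rough non-G equivalent of that message pattern (Python; the names and the dict-based queue registry are mine, standing in for LabVIEW's named queues): each request carries the *name* of a reply queue that the handler obtains per transaction and answers on:

        import queue, threading

        reply_queues = {}               # stands in for LabVIEW's named-queue lookup
        registry_lock = threading.Lock()

        def obtain_queue(name):
            # "Obtain by name": create the queue on first use, reuse it afterwards.
            with registry_lock:
                return reply_queues.setdefault(name, queue.Queue())

        def handler(in_q):
            # One handler owns one resource (e.g. a COM port) and serves
            # requests arriving on its single input queue.
            while True:
                msg = in_q.get()
                if msg is None:
                    break
                reply_name, request = msg            # reply queue passed as a *name*
                reply_q = obtain_queue(reply_name)   # acquired per transaction...
                reply_q.put(("done", request))       # ...and used for the answer

        def client(handler_q, client_id, request):
            reply_name = f"reply_{client_id}"        # name generated by a fixed rule
            my_q = obtain_queue(reply_name)
            handler_q.put((reply_name, request))
            return my_q.get(timeout=5)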
  15. I know you are asking about DVRs and LVOOP, but my first reactions were:
      - why the nested while loops?
      - why a functional global (and, if one is used, why not put the analysis and report functionality inside it - at least that would eliminate some data copies)?
      - why build the measurement array one element at a time (and if each measurement can be hundreds of MB, perhaps it would be more efficient to put the temporary data on disk instead of in memory...)?
      - and why not use for-loops with auto-indexing instead?
      If you really can wait for all the measurements to be done before doing the analysis, you can skip the functional global and instead pass an array by wire. Use for-loops with auto-indexing to make the indexing and memory allocation automatic. Building an array one element at a time in a loop is costly, both in memory and speed (every execution of the build function triggers a data copy that grows as the array grows), and in your case it is *very* costly due to the sizes involved. The most memory-efficient way to build an array is to pre-initialize it to its final size and then fill it with data (using the replace element function, not insert). If the data type has a fixed footprint (i.e. not an array of variable length), LabVIEW can do this for you, with the best performance, if you use a for-loop with auto-indexing on. If the footprint is unknown but you at least have an idea of its upper bound, initialize the array to the maximum size and scale it down after filling in the measurements (or up again by a factor if it turns out to be too small somewhere in the process). Alternatively you can write the measurements to disk (unless that's too slow), then pass a list of paths or file references to the analysing function. If the analysis can and *has* to run in parallel with the measurements (to avoid halting them), use the same model as has been used here to pass report data: a queue (producer-consumer model). Perhaps the report loop can even be merged with the analysis loop - or is there a need to analyse any quicker than you can report? All of this applies even if you make a measurements class with different subclasses for each type of measurement (or type of output from a measurement...).
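     A minimal non-G sketch of the allocation point (Python/NumPy, with made-up sizes), just to show why pre-allocating and replacing beats growing the array inside the loop:

        import numpy as np

        N_MEAS, MEAS_LEN = 200, 50_000     # made-up sizes

        def build_by_append():
            # Anti-pattern: grow the result one measurement at a time,
            # forcing a bigger reallocation/copy on every iteration
            # (the equivalent of Build Array in a loop).
            result = np.empty((0, MEAS_LEN))
            for _ in range(N_MEAS):
                m = np.random.rand(MEAS_LEN)
                result = np.vstack([result, m])
            return result

        def build_preallocated():
            # Preferred: allocate the final size once and replace rows in
            # place (the equivalent of Initialize Array + Replace Array
            # Subset, or an auto-indexed for loop).
            result = np.empty((N_MEAS, MEAS_LEN))
            for i in range(N_MEAS):
                result[i, :] = np.random.rand(MEAS_LEN)
            return result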
  16. The first use case I thought about for the new functionality was that I could use it for things like cursor moves and window resizing. Those are events that typically fire in rapid succession, require fast reactions, but only to the last occurrence. Prior to 2013 I've done this by reducing the handling of each event into setting a flag, which would then trigger a flag-resetting process elsewhere. This does not produce the same responsiveness though, and adds (what should be) unnecessary code.
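     In a text language the pre-2013 workaround amounts to coalescing; a small sketch (Python, the queue and handler names are placeholders) that drains all pending resize/cursor events and only acts on the most recent one:

        import queue

        def handle_latest_only(event_q, handle):
            # Coalesce a burst of events: drain everything that is pending
            # and act on the last occurrence only.
            try:
                event = event_q.get(timeout=0.1)   # wait for at least one event
            except queue.Empty:
                return
            while True:
                try:
                    event = event_q.get_nowait()   # a newer event supersedes the old one
                except queue.Empty:
                    break
            handle(event)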
  17. Gave up on the downloaders and downloaded the giant file with FileZilla instead. Probably not - I've never seen any platform ISOs on the NI site... It would be greatly appreciated if they did start to publish platform DVDs; having to wait for the physical ones is boring, and should be unnecessary these days.
  18. We've never experienced any VISA problems from shut-downs...but if the software requires actions to be taken before quitting, I do not think the application instance close event will do the trick - it does not seem to fire when the service is stopped from the services panel. Does anyone have a way to filter/intercept such an event and run shutdown code? PS. You should also have RunAsService=True in the executable's ini-file so that the service ignores user logoffs (see http://www.ni.com/white-paper/3185/en).
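     For reference, the key goes in the executable's own ini-file, under its main section (assuming the executable is called MyService.exe):

        [MyService]
        RunAsService=True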
  19. And remember to vote for: http://forums.ni.com/t5/LabVIEW-Idea-Exchange/New-build-option-Service/idi-p/943246
  20. Platform ISOs? Have they ever done that? I've always missed having those available. Normally they just post individual components, and we have to wait for the platform DVDs to build a proper installer for our VLM... The driver DVD is out though. There are downloaders available too, but they always seem to fail right at the end, at least from any of my computers.
  21. We've converted most of our applications into services on regular Windows (95/Vista/7/8) targets using srvany, and they work without any modifications. Are you sure the app works properly in the first place, before being converted into a service? Does it rely on access to configuration files, for example, that it might not have access to when it starts up as a service? (If a configuration file is expected to be in a system folder, the service might not refer to the same folder...) The link you posted talks about the VISA server, not the VISA Run-Time...and it's the Run-Time that's needed for the service to access a local serial port.
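     For completeness, a sketch of the srvany registration we use (service name and paths are placeholders, and sc.exe is just one way to do it; instsrv.exe from the resource kit works too):

        rem Register srvany.exe as the service (note the space after binPath=)
        sc create MyLVService binPath= "C:\Tools\srvany.exe" start= auto

        rem Tell srvany which executable to launch
        reg add "HKLM\SYSTEM\CurrentControlSet\Services\MyLVService\Parameters" ^
            /v Application /t REG_SZ /d "C:\MyApp\MyApp.exe"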
  22. Every now and then I try using an XControl... and get annoyed by its limitations. Graph annotations are another example... I just used the IP-address XControl here from the LAVA repository - it is a great use case, but the fact that you cannot tab out of the XControl as you can with other controls breaks the consistency of the GUI. I have not tried to get around it by modifying the code, but I found a page on ni.com that says it's not doable. Is that really the case? Silly...
  23. We use terminal servers for this all the time - Moxa, Advantech, Digi and Westermo. The virtual serial port drivers that come with these devices never handle network connection issues gracefully (the ones from Digi might be the exception). If the network goes down, for example, the drivers will freeze - and this can cause VISA to freeze or crash. However, if you use raw TCP/IP connections and write your own client (VISA has built-in functionality for this as well...I'm not sure how robust that is though), you can make all of them work nicely.
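     For the raw-socket route, VISA can address such a port directly with a resource string of the form TCPIP0::<ip address>::<port>::SOCKET, or you can write a minimal client yourself; a small Python sketch (IP, port and command are placeholders) where the timeout guarantees the call returns even if the network dies:

        import socket

        def query_terminal_server(host, port, command, timeout=2.0):
            # Raw TCP client for one terminal server port; the timeout means
            # the call returns even if the network or the device goes away.
            with socket.create_connection((host, port), timeout=timeout) as s:
                s.sendall(command)
                return s.recv(4096)

        # e.g. reply = query_terminal_server("192.168.1.50", 4001, b"*IDN?\n")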
  24. I've tracked down the crash that came after upgrading to 2012 SP1 (f1); it is caused by an add-on installed with VIPM. Clearing user.lib on its own did not fix it, and neither did moving things out of the project folder, but when I removed the add-ons from vi.lib the crash stopped. The funny thing is that when I then put the user.lib content back, the crash returned. So maybe there are multiple items crashing LabVIEW (both in user.lib and the other places)...I'll do some more digging.
  25. After upgrading first to SP1, then immediately applying fix 1 - LabVIEW is no longer able to run without crashing with an access violation: "Access violation (0xC0000005) at EIP=0x00EFD480" A common/known issue, or was I just "lucky"?