Everything posted by Mads
-
The regular global will definitely get into trouble with writes, yes. It depends a bit on the write frequency, but with one write per read the functional global is still fast enough. With writes on the regular global too, the speed ratio increases to about 17500x on my machine :-)
-
If global access is a requirement you might want to use a functional global or DVR instead. Here is a crude example that is about 9000 times faster in LV2013 on my machine, and 4500 times faster in LV2014.
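Since the attached example is a LabVIEW VI, here is a rough text-language sketch of the functional global pattern (Python, purely illustrative - the speed argument itself is LabVIEW-specific: a regular global copies the full array on every read, while a functional global can operate on the data in place):
```python
import threading

def make_functional_global():
    # State held in a closure plays the role of the uninitialized shift register;
    # the lock mimics the serialized access of a non-reentrant VI.
    state = {"data": []}
    lock = threading.Lock()

    def action(cmd, value=None):
        with lock:
            if cmd == "write":
                state["data"] = value
            elif cmd == "read element":
                # Operate on the data in place: hand out one element,
                # not a copy of the whole array.
                return state["data"][value]
            elif cmd == "read all":
                return list(state["data"])

    return action

fgv = make_functional_global()
fgv("write", list(range(1_000_000)))
print(fgv("read element", 42))   # -> 42
```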
-
Cannot view any content (Lavag.org Driver Error)
Mads replied to Bob Schor's topic in Site Feedback & Support
I got around it (seemingly at least) by clearing the browsing history. -
I had forgotten that I also need to do deflate/inflate in memory on strings. When do you think a new release might come about, Rolf? Perhaps I can use some of these tips to do it until then though...
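For reference, here is a minimal sketch of what the in-memory deflate/inflate amounts to, using Python's zlib (just to illustrate the raw deflate stream - this is not the OpenG ZIP API):
```python
import zlib

def deflate(data: bytes, level: int = 9) -> bytes:
    # wbits = -15 produces a raw deflate stream (no zlib header/trailer)
    c = zlib.compressobj(level, zlib.DEFLATED, -15)
    return c.compress(data) + c.flush()

def inflate(data: bytes) -> bytes:
    d = zlib.decompressobj(-15)
    return d.decompress(data) + d.flush()

payload = b"some string to compress entirely in memory"
assert inflate(deflate(payload)) == payload
```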
-
Thanks Rolf. Until then, I've begun playing with alternative solutions and have posted a description of one on the Linux RT forum.
-
Having been using the Open G Zip Tools on both Windows and VxWorks targets for a long time, I just ran into the issue of compatibility with Linux RT... I'm sure I can find an alternative on Linux RT, but the optimal solution would of course be to have the existing toolkit support Linux RT as well. Has anyone already compiled and modified the kit for Linux (or set up a nice replacement)? Are there any plans to add such support in the official version? MTO
-
My company seems "stuck" on 2011... benefits to upgrading?
Mads replied to Autodefenestrator's topic in LabVIEW General
I'm lucky to be the one that decides our upgrade strategy, and the philosophy is basically that we "Evolve or Die". This way we learn and adapt continuously, making each step small and manageable (if we need a feature from 2015, the transition is likely to be simple if we are already familiar with 2014). And perhaps most importantly, it keeps the developers happy (who likes to be "stuck" in the old days?). That is a major contributor when it comes to productivity, creativity and the quality of the work people do. Sure, I would like to see more new features between versions than we have gotten lately - yearly upgrades are a bit too frequent - but the frustrations we get from that have been outweighed by the positives. -
The first report of it was actually a year earlier - back in 2009... I do not see why we should need to set a Focus property to get this right... So instead of fighting to get that property made public (not that that would not be nice as well), I would prefer it if no such property setting were needed (if it is made available it could instead be used to override what would then be the default behaviour - in the rarer event that you would not want to focus on the current value).
-
How would you introduce LabVIEW to a group of Text-Based Programmers
Mads replied to cvanvlack's topic in LAVA Lounge
I would show them (optionally just parts of) ...and then link that to graphical programming and LabVIEW. Showing them how a number of organizations (like CERN, SpaceX... or smaller companies, like the one I'm from - ClampOn!) utilize LabVIEW could also bring in some motivation. There are a lot of text-coders out there; LabVIEW experience is less of a commodity. -
I just did a quick test for you. Updating the background color of all 200x50 cells (with panel updates deferred) still takes 1.1 seconds in LabVIEW 2013 (on my relatively new PC)... But do you really need to update them all, and in one go - or could you update cells individually, and *only* when the cell content changes? Checking whether a change is actually required first would take no time, so (unless most of them are likely to need a change) you could reduce the number of actual updates enough to make it seem "instant".
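To illustrate the "only touch cells that actually changed" idea, a short sketch (the set_cell_color and color_of helpers are hypothetical stand-ins for the LabVIEW Active Cell / Background Color property writes):
```python
def update_cell_colors(new_values, cache, color_of, set_cell_color):
    # new_values: {(row, col): value}; cache holds the last value written per cell.
    writes = 0
    for (row, col), value in new_values.items():
        if cache.get((row, col)) == value:
            continue                      # unchanged cell: skip the costly property write
        cache[(row, col)] = value
        set_cell_color(row, col, color_of(value))
        writes += 1
    return writes                         # typically far fewer than rows x cols
```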
-
LV Dialog box holds Open VI Reference
Mads replied to eberaud's topic in Application Design & Architecture
The *asynchronous* call pool is only required for synchronous calls? Was it not introduced with the async call by reference node in the first place? -
LV Dialog box holds Open VI Reference
Mads replied to eberaud's topic in Application Design & Architecture
That is an interesting tidbit. I've so far thought that the only way to avoid getting blocked by the root loop was to populate the asynchronous call pool (and hope that the pool is large enough to meet the demand during a blocking event...). Populating the call pool is only required for a speed gain, then? -
Microsoft has been messing a lot with the GUI lately. I think this article sums it up quite nicely: http://www.nngroup.com/articles/windows-8-disappointing-usability/ Most programs I make have multiple windows, but they also have a main window, which is the one that is shown after startup (once the splash screen has disappeared). The main window then has File>>Exit (Ctrl+Q). If I had an application with multiple windows that could all be perceived as "main" (meaning, for example, that it makes sense to have the app running with just one of them open), I would add the exit option to each window. It should be possible to shut down the whole app without having to close each window individually, and the File>>Exit option is a well established and hence intuitive way to do that...
-
It's not really about long term data logging. If you have a huge data set you will have to write it to disk and reload data from the source (DB or other alternative) dynamically anyway. In such cases the user expects, and will therefore accept, that he might need to provide input and perhaps also wait a noticeable time for the new data. However, if you have e.g. 50 MB of time stamped doubles you can dump it all into a .Net graph without any worries. The GUI will run smoothly, and you do not need to bother handling events from the user's interactions with the data. The user can zoom and scroll with instant access to the underlying data; the graph will handle that amount of data fine on its own. That's not the case with the native LabVIEW XY graph. On a standard PC of today LabVIEW can easily hold much more data than that in memory (and in other types of controls/indicators on the front panel), just not in a graph. It is obviously much heavier to draw a graph than an array indicator, but if done right the native graph should at least be able to match the alternatives.
-
A *really* old thread, but not much has happened since 2006 as far as I can see (LV2013), and perhaps the reason is that the problem is (as Jason wrote in the last entry here) still not acknowledged(?). The graphs decimate the displayed data, as Jason describes, but you still get a serious performance hit above a certain number of points (the GUI slows down to a halt...). For XY graphs that number is very easy to hit. We have to use non-G alternatives to get proper performance with bigger data sets. So there seems to be something that slows things down even though, if the decimation worked, the number of points actually drawn should not grow. I could perhaps understand it if the software had a problem holding the full data set in memory (in the background), or if the slowness were only noticeable when the user made a change to the GUI that actually required the graph to recalculate which points to draw, but that does not seem to be the case. And obviously, code written in other languages *is* able to cope just fine, so there really is no excuse.
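For reference, here is roughly what a decimation step amounts to (a min/max bucketing sketch in Python/numpy; the native graph's actual algorithm is not documented here, so treat this as an assumption about the general technique):
```python
import numpy as np

def minmax_decimate(x, y, max_points=2000):
    # Keep the min and max sample of each bucket; anything finer than the
    # screen resolution is invisible anyway, so the drawn point count stays bounded.
    n = len(x)
    if n <= max_points:
        return x, y
    keep = []
    for bucket in np.array_split(np.arange(n), max_points // 2):
        ys = y[bucket]
        keep.append(bucket[np.argmin(ys)])
        keep.append(bucket[np.argmax(ys)])
    keep = np.sort(np.array(keep))
    return x[keep], y[keep]

x = np.arange(5_000_000, dtype=np.float64)
y = np.sin(x / 1000.0)
xd, yd = minmax_decimate(x, y)   # ~2000 points to draw, peaks preserved
```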
-
See the reply from NI on my idea to implement native support for this...it turns out it is supported, just not very obviously: http://forums.ni.com/t5/LabVIEW-Real-Time-Idea-Exchange/Support-industry-standard-time-server-NTP-e-g/idi-p/2340392 Here's an implementation using the System Config API: https://decibel.ni.com/content/docs/DOC-26987
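For anyone who ends up rolling their own instead, the NTP query itself is tiny. A minimal SNTP sketch in Python (the server name and timeout are arbitrary assumptions, and this ignores round-trip compensation):
```python
import socket, struct, time

NTP_DELTA = 2208988800  # seconds between the NTP epoch (1900) and the Unix epoch (1970)

def sntp_time(server="pool.ntp.org", port=123, timeout=2.0):
    packet = b"\x1b" + 47 * b"\x00"   # LI=0, VN=3, Mode=3 (client request)
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.settimeout(timeout)
        s.sendto(packet, (server, port))
        data, _ = s.recvfrom(512)
    secs = struct.unpack("!I", data[40:44])[0]   # transmit timestamp, seconds field
    return secs - NTP_DELTA

print(time.ctime(sntp_time()))
```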
-
Run EXE Based on Installed Run-Time
Mads replied to hooovahh's topic in Application Builder, Installers and code distribution
Good idea. It is a bit sad that we have to resort to such tricks though. In 2004 a typical installer from us would be about 4 MB; the same builds now take about 140 MB. I still use LabVIEW 7.1 every now and then if I need the small file size, and/or do not want an installer but just the RTE files in the same folder as the executable. Same source code mostly, just older and more compact wrapping. -
File sharing between to VIs, exe or apps
Mads replied to Bjarne Joergensen's topic in Application Design & Architecture
Not a direct answer to your problem, but in similar situations I've solved the issue either by making the file producers use temporary names when they write the file and then rename them afterwards to something the copier is looking for, or by making the copier read the modification time/file size and only copy files that have not changed for a certain time. This only works if the file producers write and close just once, or at a slow pace, and/or if the copiers can extract valid content and copy it while the file is being updated. -
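A rough sketch of both workarounds described above (Python standing in for the LabVIEW file VIs; paths and timings are made up for illustration):
```python
import os, shutil, time

def write_then_rename(final_path, data: bytes):
    # Producer side: write under a temporary name, then rename so the copier
    # never sees a half-written file under the name it is looking for.
    tmp_path = final_path + ".tmp"
    with open(tmp_path, "wb") as f:
        f.write(data)
    os.replace(tmp_path, final_path)

def copy_if_settled(src, dst, settle_seconds=10):
    # Copier side: only copy files whose size/modification time has not changed
    # for a while, i.e. the producer is (probably) done with them.
    stat1 = os.stat(src)
    time.sleep(settle_seconds)
    stat2 = os.stat(src)
    if (stat1.st_size, stat1.st_mtime) == (stat2.st_size, stat2.st_mtime):
        shutil.copy2(src, dst)
        return True
    return False
```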
The nightmare that is renaming a class and its folder
Mads replied to GregFreeman's topic in Object-Oriented Programming
It should work as you describe it, yes, but even when it is done like this, LV often fails to re-link everything. I've ended up with corrupt class errors (it might ask where the class file is, but then fails to recognise it even though I know it is the same file, just in a folder that has been renamed). That's why I posted this on the idea exchange. -
The recommendations (from myself and others) were all covered in this thread: http://lavag.org/topic/15980-openg-filter-array-revised/ I summarized the recommended changes in this file. Not all the VIs in the library were changed, but most of the ones that repeatedly resize the arrays were modified to reduce the memory footprint and increase the speed.
-
The array functions were subject to a lot of discussion and rework here on LAVA in this thread back in 2012: http://lavag.org/topic/15980-openg-filter-array-revised/ At the time I was quite eager to get the improvements included in the official OpenG release. I made a complete replacement of the existing array library, back-saved it to 2011 and posted it there... but I could not see a way forward from there, so I asked a question similar to yours on that thread. Ton then sent me a PM asking me to recompile everything back to LV 2009, sign up on SourceForge, and send my SF ID to Jim or Jonathon. I got around to doing the sign-up and sent the ID, did not hear anything at first, and then things got hectic at work so I dropped the ball. I've never picked it up since. Ideally it would be simpler to contribute, and the path to do so could be clearer. In the mentioned case I wished there was a way to just hand off the 2011 code to someone who would then do the boring tasks involved in getting an update released... But that's just not the case (at least that is my impression), and I'm not blaming anyone for it.
-
I have some architectures where different resources (COM ports for example) are shared by handlers that are created dynamically to manage that single resource. Functions that need access to a given resource do it by contacting the relevant handler through its (single) input queue, and the message protocol requires them to provide a return queue for the reply. For historical reasons the return queue reference is not an actual reference, just a name generated by a given set of rules, so the handlers need to acquire and close return queues for every transaction. Typically each handler will do this 10-20 times per second, and there are typically 10-100 handlers running in one and the same application. The continuous acquisition of references has never caused any problems at these rates, and they are used in applications that (have to) run 24/7 on both PCs and PACs.
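A rough Python analogue of the pattern above (a dict of named queues stands in for LabVIEW's obtain/release-by-name; the handler, command and queue names are made up for illustration):
```python
import queue, threading

_registry, _registry_lock = {}, threading.Lock()

def obtain_queue(name):                       # ~ obtain a queue by name
    with _registry_lock:
        return _registry.setdefault(name, queue.Queue())

def release_queue(name):                      # ~ release the named queue
    with _registry_lock:
        _registry.pop(name, None)

def handler(inbox):
    # One handler owns one resource (e.g. a COM port); every message names a return queue.
    while True:
        cmd, return_name = inbox.get()
        if cmd == "quit":
            break
        obtain_queue(return_name).put(f"reply to {cmd}")   # talk to the resource here

inbox = obtain_queue("COM1 handler")
threading.Thread(target=handler, args=(inbox,), daemon=True).start()

return_name = "COM1 reply / client 7"         # name generated by a fixed rule
inbox.put(("read status", return_name))
print(obtain_queue(return_name).get())
release_queue(return_name)                    # acquired and released per transaction
inbox.put(("quit", ""))
```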
-
LVOOP with DVRs to reduce memory copies - sanity check
Mads replied to Troy K's topic in Object-Oriented Programming
I know you are asking about DVRs and LVOOP, but my first reaction was:
- why the nested while loops?
- why a functional global (and, if one is used, why not put the analysis and report functionality within it? At least that would eliminate some data copies)
- why build the measurement array one element at a time (and if each measurement can be hundreds of MB, perhaps it would be more efficient to put the temporary data on disk instead of in memory)
- and why not use for loops instead, with auto-indexing?
If you really can wait for all the measurements to be done before doing the analysis, you can skip the functional global and instead pass an array by wire. Use for loops with auto-indexing to make the indexing and memory allocation automatic. Building arrays one element at a time in a loop is costly, both in memory and speed (every execution of the build function triggers a data copy operation that just gets larger the bigger the array grows). In your case it is *very* costly due to the sizes involved. The most memory efficient way to build an array is to pre-initialize it to its final size prior to filling it with data (using the replace element function then, not insert) - see the sketch below for the general idea. If the data type has a fixed footprint (i.e. not an array of variable length, for example), LabVIEW can do it for you (and with the best performance) if you use a for loop with auto-indexing on. If the footprint is unknown, but you at least have an idea of its upper bound, initialize an array to the maximum size, then scale it down after filling in the measurements (or up again by a factor if it is found to be too small somewhere in the process). Alternatively you can write the measurements to disk (unless that's too slow), then pass a list of paths or file references to the analysing function. If the analysis can and *has* to run in parallel with the measurements (to avoid halting the measurements), use the same model as has been done here to pass report data: use a queue (producer-consumer model). Perhaps the report loop can be merged with the analysis loop, or is there a need to analyse any quicker than you can report? All of this applies even if you make a measurements class and have different subclasses for each type of measurement (or type of output from a measurement...). -
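The pre-allocation point from the post above, as a text-language sketch (numpy arrays standing in for the LabVIEW arrays; sizes are made up):
```python
import numpy as np

N_MEAS, SAMPLES = 100, 100_000   # made-up sizes

# Costly: growing the array one measurement at a time; every append
# reallocates and copies the ever-larger block.
grown = np.empty((0, SAMPLES))
for _ in range(N_MEAS):
    measurement = np.random.rand(SAMPLES)          # acquisition stand-in
    grown = np.vstack([grown, measurement])

# Cheaper: pre-initialize to the final size, then replace rows in place -
# the equivalent of an auto-indexing for loop / replacing array elements.
results = np.empty((N_MEAS, SAMPLES))
for i in range(N_MEAS):
    results[i, :] = np.random.rand(SAMPLES)
```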
The first use case I thought about for the new functionality was that I could use it for things like cursor moves and window resizing. Those are events that typically fire in rapid succession, require fast reactions, but only to the last occurrence. Prior to 2013 I've done this by reducing the handling of each event into setting a flag, which would then trigger a flag-resetting process elsewhere. This does not produce the same responsiveness though, and adds (what should be) unnecessary code.
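A generic sketch of the "only the last occurrence matters" idea, i.e. collapsing a burst of events to the newest one (a Python queue stands in for the UI event queue; this is the general pattern, not the LabVIEW 2013 feature itself):
```python
import queue

def latest_event(ui_events: queue.Queue):
    # Block for the first event, then drain whatever piled up behind it and
    # keep only the newest occurrence - older cursor moves/resizes are stale.
    latest = ui_events.get()
    while True:
        try:
            latest = ui_events.get_nowait()
        except queue.Empty:
            return latest
```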