Steen Schmidt

Members
  • Posts

    156
  • Joined

  • Last visited

  • Days Won

    8

Posts posted by Steen Schmidt

  1. Name: GPower toolsets package

    Submitter: Steen Schmidt

    Submitted: 15 Mar 2012

    Category: *Uncertified*

    LabVIEW Version: 2009

    License Type: BSD (Most common)

    This is a submission for the GPower toolsets palette, which will also shortly be available on the NI LabVIEW Tools Network. I submit it to Lava to (of course) share it with everyone, but also to establish an open forum to discuss improvements and to provide support. Much better than hiding all this in private email conversations with each user.

    The toolsets are compiled for LV2009 SP1, are provided as VIPs for VIPM, and will mass compile to at least LV2010 and LV2011 without problems (this happens automagically during the VIP-install process, for those of you who don't know VI Package Manager :cool:). The 'gpower_lib_toolsets-1.0.0.6.vip' is an umbrella for the four other VIPs; it doesn't add anything extra by itself.

    Currently this consists of 4 toolsets:

    Dynamic Dispatch

    Open, pass data to (without using the front panel), run, and close dynamic VIs. A cumbersome process using VI Server made easy with this toolset.

    Error

    Streamline common error wire tasks such as setting, filtering and clearing errors and warnings, and unbundling the error cluster. Adds advanced features such as dynamic setting of custom global errors that you can invoke from anywhere in your application, and bundling of multiple errors into a single error wire. No more need for arrays of error clusters!

    Timing

    Calculate elapsed and remaining time, and abort a running Wait function for instance.

    VIRegister

    High-performance "global variables" that let you pass data between any corners of your application. They're based on queues, so these "globals" don't involve front panel controls, files, or the LV project - it all happens on the block diagram. In most cases you don't even have to pass a refnum around; just drop a VIRegister somewhere and you're done.
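
    A rough idea of the concept, sketched in Python rather than LabVIEW (the function names here are illustrative, not the toolset's actual API): a process-wide table of named values that any corner of the application can read or write without passing a refnum around.

        import threading

        _registers = {}                      # register name -> latest value
        _lock = threading.Lock()             # guards the table against parallel access

        def viregister_write(name, value):
            with _lock:
                _registers[name] = value

        def viregister_read(name, default=None):
            with _lock:
                return _registers.get(name, default)

        viregister_write("LoopRate", 250.0)  # written in one corner of the application
        print(viregister_read("LoopRate"))   # read in another -> 250.0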

    If this thread generates too much noise across the different toolsets, we could split this into 4 submissions. But let's see how it goes. More toolsets are awaiting release, but I'll start with these four to get a feel for the process.

    Cheers,

    Steen


  2. It's not so much early adoption as it is hours spent up front instead of 10-fold on fixes and more fixes. A well-designed application more or less hits the ground running, and you can focus on adding functionality or on the next application instead of perpetually fixing the old ones.

    So the pain you feel is just because you do all the work up front, and the sum of it is much less than if you look back at the work put into a poorly designed application :).

    Cheers,

    Steen

    PS: I hope the "holiday dinner" feeling has faded - for the record, I believe the debate about using Shared Variables or not is very much in sync with the topic (if that was the topic you felt sidetracked your thread a bit here). My experience with SVs, which I described here, was a direct response to your "I now see no reason to avoid shared variables for sharing data between vis when only one vi is ever going to write to them" statement. And that experience didn't come cheap. Didn't want to swamp your voice :P.

  3. While I certainly agree there are some areas where SVs can improve (and a few years ago I would have agreed with your assessment when they had much more serious problems) I think SVs now deserve more credit. (I will state upfront that we still see occasional issues with the SVE--mostly having to do with logging, but we have workarounds for these, and I think if there are more users NI will fix these issues more quickly.)

    We started using SVs really heavily 3-4 years ago for streaming. We ran into all sorts of trouble with them, and were thrown around by NI a fair bit while they tried to find the spot where we had used those SVs wrong. Finally, last year we got the verdict: SVs were not designed for streaming and can break down when heavily loaded - please use Network Streams for this instead. That NI didn't let this info surface before Network Streams were ready as a replacement is quite distasteful; it has cost us and our customers millions chasing SVs with the wrong end of the stick. Had NI told us with a straight face that our implementation wasn't going to work, but that they were working on a replacement, we'd have been in a much better position. As it is now, I never want to touch anything that has to do with SVs. I was burned too badly, and now we have our own much better solution.

    1) Weight. We've never had an issue with an SVE deployed on Windows. We never deploy the SVE on RT. (We don't have any reason to do so.) Our RT applications (on cRIOs) read and write to SVs hosted on Windows without any problem.

    We need to deploy the SVE on RT since the RT system is usually the always-on part in our applications - our Windows hosts are usually optional connect/disconnect type controllers, while the RTs run continuously (or for a long while): simulation, DAQ, control/regulation etc. It would seem SVs fare better on Windows, both due to the (often) higher availability of system resources there, and due to the more relaxed approach to determinism (obviously).

    2) Black box: It is true that we don't see the internal code of the SVE, but we don't have to maintain it either. At some level everything is a black box (a truism, I know)--including TCP/IP, obviously--it just depends on how much you trust the technology. I understand your concerns. By the way, in my tests I haven't noted a departure from the 10 ms or buffer full rule (plus variations since we are deploying on a nonRT system), but there may be issues I haven't seen. The performance has met our requirements.

    That's a good thing, and I sense that SVs work for most people. It may just be because SVs were never designed for the load we put on them. It's only when we have problems that the black-box nature becomes a liability - otherwise it's just good encapsulation :). But with TCPIP-Link we can just flip open the bonnet if anything's acting up.

    3) Buffer overflow: I'm not sure I understand this one. SV reads and writes do have standard error in functionality, so they won't execute if there is an error on the error input. We don't wire the error input terminal for this reason but merge the errors after the node. (I don't think this is what you are saying, though. I think you are saying you get the same warning out of multiple parallel nodes? We haven't encountered this ourselves so I can't help here.)

    If you have a network-enabled, buffered SV, all subscribers (readers) will have their own copy of a receive buffer. This means a fast reader A (a fast loop executing the read SV node) may keep up with the written data and never experience a buffer overrun. A slow reader B of the same SV may experience backlogging, or buffer overwrite, and start issuing the buffer overflow error code (−1950678981, "The shared variable client-side read buffer overflowed"). A bug in the SVE backend means that read node A will start issuing this error code as well. In fact all read nodes of this SV will output this error code, even though only one of the read buffers overflowed. That is really bad, since it 1) makes it really hard to pinpoint where the failure occurred in the code, and 2) makes it next to impossible to implement proper error handling, since some read nodes may be fine with filtering out this error code while other read nodes may need to cause an application shutdown if their buffer runs full. If the neighbor's buffer runs full but you get the blame... how will you handle this? NI just says "That's acknowledged, but it's too hard to fix".
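
    To make the per-subscriber buffering concrete, here's a small sketch in Python (not the SVE's actual implementation): each reader owns its own bounded receive buffer, so only the reader that falls behind should ever report the overflow.

        from collections import deque

        class Reader:
            def __init__(self, name, depth):
                self.name = name
                self.buffer = deque(maxlen=depth)   # this reader's own receive buffer
                self.overflowed = False

            def receive(self, value):
                if len(self.buffer) == self.buffer.maxlen:
                    self.overflowed = True          # oldest element gets overwritten
                self.buffer.append(value)

        fast = Reader("A", depth=1000)              # keeps up with the writer
        slow = Reader("B", depth=10)                # falls behind

        for i in range(100):                        # the writer publishes 100 updates
            fast.receive(i)
            slow.receive(i)
            fast.buffer.clear()                     # fast reader drains its buffer

        print(fast.overflowed, slow.overflowed)     # False True - only B should error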

    4) While we don't programmatically deploy SVs on RT, as I mentioned, I agree it would be good (and most appropriate!) if LabVIEW RT supported this functionality. For the record, we do programmatically connect to SVs hosted on Windows in our RT applications, and that works fine.

    You connect programmatically through DataSocket, right? With a PSP URL? Unfortunately DataSocket needs root loop access, so it blocks all other processes while reading and writing. I don't know if SVs do the same, but I know raw TCP/IP (and thus TCPIP-Link) doesn't. Anyways, for many applications this might not be an issue. For us it often is. But the main reason I'd like to be able to programmatically deploy SVs on RT would be for when variables undeploy themselves for some reason. If we could deploy them again programmatically we could implement an automatic recovery system for this failure mode. As it is now we sometimes need to get the RT system connected to a Windows host with LabVIEW dev on it to deploy the variables again manually. I know this process can be automated some more, but I'd really like the Windows host out of this equation - the RT systems should be able to recover all by themselves.

    6) Queues, events, and local variables are suitable for many purposes but not, as you note, for networked applications. (We do, of course, use events with SVs.) TCP/IP is a very good approach for networking, but in the absence of a wrapper does not provide a publish-subscribe system. Networked shared variables are one approach (and the only serious contender packaged by NI) to wrapping TCP/IP with a publish-subscribe framework. If someone wants to write such a wrapper, that is admirable (and that may be necessary and a good idea!), but I think for most users it is a much shorter path to use the SV than to work the kinks out of their own implementation. I haven't tried making my own, though, and the basics of the Observer Pattern do seem straightforward enough that it could be worth the attempt--it's just not for everybody. (I'd also prefer that we as a community converge on a single robust implementation of a publish-subscribe system, whether this be an NI option or the best offered by and supported by the community. At the present time I think SVs are the best option readily available and supported.)

    It was very hard to write TCPIP-Link. But for us NI simply doesn't deliver an alternative. I cannot sell an application that uses Shared Variables - no matter the level of mission criticality. I can't have my name associated with potential bombs like that.
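
    For anyone wondering what such a publish-subscribe wrapper boils down to, here is the bare Observer pattern sketched in Python (purely illustrative); TCPIP-Link layers this idea on top of TCP/IP connections, buffering and reconnection handling, none of which the sketch attempts.

        class Topic:
            def __init__(self):
                self._subscribers = []            # callbacks registered for this topic

            def subscribe(self, callback):
                self._subscribers.append(callback)

            def publish(self, value):
                for notify in self._subscribers:  # every subscriber sees every update
                    notify(value)

        pressure = Topic()
        pressure.subscribe(lambda v: print("logger got", v))
        pressure.subscribe(lambda v: print("display got", v))
        pressure.publish(1.013)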

    Single-process (intra-process) SVs seem to work fine, but I haven't used them that much. For these purposes I usually use something along the lines of VIRegisters (queues basically), but the possibility to enable RT FIFOs on the single-process SV could be handy. But again, I'm loath to put too many hours into experimenting with them, knowing how much time we've wasted on their network-enabled evil cousins :lol:.

    Cheers,

    Steen

  4. For the record, we use shared variables (a button event writes to a shared variable; a shared variable value change event triggers an update to one or more display elements; shared variable binding for the simplest cases). This works quite well. The event handling deals with initial reads cleanly. We do use the DSC module for the shared variable events.

    ...

    On communication:

    As I said, we use shared variables (networked shared variables). We use this approach because we communicate over Ethernet (between various computers and locations and between controllers--often on cRIOs--and views) and because we want to use a publish-subscribe paradigm--that is, multiple subscribers will get the same information (or a demand may come from a view or a higher level component), and shared variables handle both of these needs well. (We also don't need to create or maintain a messaging system.)

    Well, I really hate Shared Variables due to their weight (the SVE among other things), their black-box nature (when do updates happen? Not at 10 ms/8 kB as the white paper says, that's for sure) combined with known bugs (for instance, a buffer overflow warning in one node will flow out of the error out terminal of all the other SV instances as well, even though their buffers didn't overflow), and due to their inflexibility (you can't programmatically deploy SVs on LV Real-Time for instance, and you need to be able to, since we sometimes experience SVs undeploying themselves spontaneously when the SVE is heavily taxed).

    There are many better alternatives for intra-process communication (queues, events, and even locals). For network communication we have developed our own sticky client/multi-server TCPIP-Link solution, which is much better (faster, more stable, more features, slimmer) than SVs and Network Streams. Granted, TCPIP-Link wasn't trivial to make; it has currently clocked up in excess of 1000 man-hours. The only killer feature of SVs is how simple they are to bind to controls. But one or two wrapper VIs around TCPIP-Link will get you almost the same...

    And it's a pity you need the DSC module for enabling events on SVs.

    Using a waveform for individual reading does NOT seem like a good idea. Why not just store two channels: “Measurement” and “Measurement_Time”? That’s what I do.

    We transmit waveforms on the network without problems for data rates up to maybe 10 kS/s - a cRIO can generate data at that rate as waveforms for 40 channels without any backlogging due to either network or other system resources. For higher data rates we usually select a slimmer approach, but that also means more work to make data stay together. On PXI we can easily output data as clusters, each with a timestamp, some parameters like data origin, data flow settings (usually a couple of Booleans), and the data itself as an array of DBL for instance. This works fine for several MB/s of payload. For Gb/s we must pack data in an efficient stream, usually peppered with codes to pick out control data (time/sync/channel info) that flows in a slimmer connection next to it. The latter approach can be rather difficult to keep contiguous if network dropouts can occur (which they will). Every application demands its own pros/cons decision making.
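
    As a hedged illustration of the "cluster per block" framing (Python's struct standing in for LabVIEW's flatten-to-string; the field layout is made up for the example): each block carries a timestamp, an origin ID, a couple of flags, then the samples as DBLs, with a length prefix so the receiver can cut the stream back into blocks.

        import struct, time

        def pack_block(samples, origin_id, paused=False, last_block=False):
            header = struct.pack(">dHBB", time.time(), origin_id, paused, last_block)
            payload = struct.pack(">%dd" % len(samples), *samples)
            return struct.pack(">I", len(header) + len(payload)) + header + payload

        block = pack_block([1.0, 2.0, 3.0], origin_id=7)
        print(len(block))   # 4-byte length prefix + 12-byte header + 24-byte payload = 40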

    Another technique is to realize that most simple indicators are really just text, or can easily be represented by text, and thus one can do a lot with a multicolumn listbox or tree control to display the state of your instruments.

    That's quite a heavy approach, encoding numeric data into strings and stuffing it into an MC listbox or a Tree. But it'll work for really slow data rates of course (a few Hz).

    Cheers,

    Steen

  5. Hi.

    Posting updates from many channels into a GUI is not a problem if you are careful about the design.

    One recent large application of mine (3-4,000 VIs) runs on RT but facilitates a UI on a Host PC that can be connected to and disconnected from the running RT application. The UI is used for channel value and status display, as well as giving the operator an opportunity to override each channel with a user-specified value or, in some cases, a programmed waveform for instance. The RT system generates something like 100,000 events/s (equivalent to channel updates) and the UI just plugs in and out of this event stream as a module by dynamically registering and unregistering the wanted events. There is a module on RT that receives each event that it knows the GUI wants, then the data is transmitted over TCP/IP to the Host PC, where new events are generated to be received by the GUI module. All data is timestamped, so propagation delay is not a problem. Similarly for override data the other way.

    It is no problem receiving 100,000 updates/s in the UI module without any backlogging, but I do filter the events such that each event gets its own local register in the UI module, which it may update at any rate, and then 10 times/s all changed registers are written to the GUI front panel. At almost 1,000 channels (and thus 1,000 controls on the GUI in tabs) this still works really smoothly. So 8 channels of mechanical relays is no problem.

    The trick is to filter the updates so you don't update the GUI thousands of times per second, but remember to always maintain the most current value for when you actually do update. And if you run the UI on the same machine as the actual business logic, then you need to be careful not to block that business logic - updating the UI will most often switch to the user interface thread or may even demand root loop access. Heed that.
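
    The scheme, sketched in Python purely to illustrate the idea (not actual LabVIEW code): every incoming event just overwrites a per-channel "latest value" register, and a slow loop pushes only the channels that changed to the front panel roughly 10 times per second.

        latest = {}        # channel -> most recent value
        changed = set()    # channels updated since the last GUI refresh

        def on_event(channel, value):        # called at the full event rate
            latest[channel] = value
            changed.add(channel)

        def refresh_gui():                   # called ~10 times per second
            for channel in sorted(changed):
                print("redraw", channel, "->", latest[channel])
            changed.clear()

        for i in range(100_000):             # a burst of fast updates on 8 channels
            on_event("relay%d" % (i % 8), i % 2)
        refresh_gui()                        # only 8 controls actually get redrawn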

    Cheers,

    Steen

  6. In your example I see it get close to 1.5 million elements.

    How do you see it get close to 1.5 million elements? On my example machine it allocated ~150 MB of memory before the mem full dialog (~150,000 elements). I've just tried running it again, and now I got to 1,658,935 elements (I think) before the mem full dialog, so I probably had much more fragmented memory earlier.

    The out of memory dialog is displayed by LabVIEW, not by the OS. The problem is this dialog is triggered at a low level inside our memory manager and at that point we don't know if the caller is going to correctly report the error or not. So we favor giving redundant notifications over possibly giving no notification. This does get in the way of programmatically handling out of memory errors, but this is often quite difficult because anything you do in code might cause further allocation and we already know memory is limited.

    Thanks for this info Greg, it makes perfect sense. A failed mem alloc is most probably a fish (or an app) dead in the water.

    Cheers,

    Steen

  7. The last concept I'll mention is fragmentation. This relates to the issue of contiguous memory. You may have a lot of free address space but if it is in a bunch of small pieces, then you are not going to be able to make any large allocations. The sample you showed is pretty much a worst case for fragmentation. As the queue gets more and more elements, we keep allocating larger and larger buffers. But between each of these allocations you are allocating a bunch of small arrays. This means that the address space used for the smaller queue buffers is mixed with the array allocations and there aren't contiguous regions large enough to allocate the larger buffers. Also keep in mind that each time this happens we have to allocate the larger buffer while still holding the last buffer so the data can be copied to the new allocation. This means that we run out of gaps in the address space large enough to hold the queue buffer well before we have actually allocated all the address space for LabVIEW.

    Thanks, Greg, for your thorough explanation. I suspected contiguous memory was the issue here, but while I know that LV arrays and clusters need contiguous memory, I didn't believe a queue needed contiguous memory? That's a serious drawback I think, putting an even bigger hit on the dynamic allocation nature of queue buffers. As touched on earlier in this thread, I thought a queue was basically an array of pointers to the elements, with only this (maybe 1-10 MB) array having to fit in contiguous memory, not the entire, possibly gigabyte-sized, buffer of elements. At least for complex data types the linked-list approach would lessen the demands for contiguous memory. If the queue data type is simpler, that overhead would obviously be a silly penalty to pay, in which case a contiguous element buffer would be smarter. But that's not how it is, I gather. A queue is always a contiguous chunk of memory.

    It would be nice if a queue prim existed that could be used for resizing a finite queue buffer then...
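
    A rough Python illustration of the two layouts discussed above (CPython's list is one contiguous pointer array that gets reallocated as it grows, while collections.deque is a chain of fixed-size blocks - the layout I had assumed queues used):

        from collections import deque

        contiguous = []      # grows by reallocating one ever larger backing array
        chunked = deque()    # grows by linking in another small block - no huge alloc

        for i in range(1_000_000):
            contiguous.append(i)   # occasionally copies the whole buffer to a new spot
            chunked.append(i)      # never needs a single large contiguous region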

    For your application what this really means is that if you really expect to be able to let the queue get this big and recover, you need to change something.
    • If you think you should be able to have a 200 million element backlog and still recover, then you could allocate the queue to 200 million elements from the start. This avoids the dynamically growing allocations, greatly reducing fragmentation, and will almost certainly mean you can handle a bigger backlog. The downside is this sets a hard limit on your backlog and could have adverse effects on the amount of address space available to other parts of your program.
    • You could switch to 64-bit LabVIEW on 64-bit Windows. This will pretty much eliminate the address space limits. However, this means that when you get really backed up you may start hitting virtual memory slowdowns so it is even harder to catch up.
    • You can focus on reducing the reasons that cause you to create these large backlogs in the first place. Is it being caused by some synchronous operation that could be made asynchronous?

    Thanks. I know of these workarounds, and use them when necessary. For instance we usually prime queues on LV Real-Time to avoid the dynamic mem alloc at runtime. The case from my original post is solved, so no problem there - I was just surprised that I saw the queue cause a mem full so soon, but the need for contiguous memory for the entire queue is a surprising albeit fitting explanation. Do you have any idea how I could catch the mem full dialog from the OS? The app should probably end when such a dialog is presented, as a failed mem alloc could have caused all sorts of problems. I'd rather end it gracefully if possible though, instead of a "foreign" dialog popping up.

    Cheers,

    Steen

  8. Arrays need contiguous memory even when they don't have to...

    Anyways, this snippet of code runs out of memory before RAM is full:

    post-15239-0-16808000-1321781127.png

    My first encounter was on a WinXP 32-bit machine with ~1 GB of RAM free; LabVIEW reported memory full after allocating ~150 MB for the queue.

    Then I ran the above snippet on a Win7 64-bit machine running LV 2010 SP1 32-bit with about 13 GB of RAM free (I know LV 32-bit should be able to use up to about 4 GB of RAM). LV now reported mem full after LV had allocated about 3.8 GB of RAM (I don't know how much went to the queue). This memory allocation remained at that level, so following runs of the same code snippet reported memory full much sooner, almost immediately in fact. LV only deallocated the 3.8 GB of RAM when it was closed and opened again, in which case I was able to fill the queue for a long time again before mem full. Adding a Request Deallocation node in the VI didn't help me get the allocated memory back from when the queue ran full.

    - So at least on the 64-bit machine LV could use just about all the memory, as expected, but it didn't deallocate that memory again when the VI stopped.

    - Any idea how to avoid the pop-up dialog stating that memory is full? I'd really like graceful handling of this failure scenario.

    Cheers,

    Steen

  9. Hi.

    In an application of mine I ran into an issue where a queue ran full, so I want to start a little discussion about which data structures scale best when the goal is to use RAM to store temporary data (a "buffer"). This is for Desktop and Real-Time (PXI, cRIO etc.).

    My particular application is a producer-consumer type design, where the consumer can be delayed for a long time but will always catch up given enough time - the resource to support this is of course enough RAM, which in this case is a valid requirement for the system. The issue will present itself in many forms of course, so this might be of interest to a lot of you guys here.

    My application unexpectedly gave me a memory allocation error (code 2 on error out from the Enqueue prim, followed by a pop-up dialog stating the same) after filling about 150 MB of data into the queue (that took a couple of seconds). This system was a 32-bit Windows XP running LabVIEW 2010 SP1. Other systems could be 64-bit, LV 2009-2011, or Real-Time, both VxWorks and ETS-Pharlap.

    Is 150 MB really the limit of a single queue, or is it due to some other system limit - free contiguous memory for instance? The same applies to arrays, where contiguous memory plays an important role in running out of memory before your 2 or 3 GB is used up.

    And, at least as important: how do I know beforehand when my queue is about to throw this error? An error from error out on the Enqueue prim I can handle gracefully, but the OS pop-up throws a wrench in the gears - it's modal, it can stop most other threads, and it has to be acknowledged by the user. On Real-Time it's even worse.
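
    The kind of handling I'm after, sketched in Python (MemoryError standing in for the code 2 the Enqueue prim returns; purely illustrative): trap the failure at the point where the buffer grows and let the producer shut down cleanly, instead of a modal dialog taking over.

        backlog = []

        def enqueue(sample):
            try:
                backlog.append(sample)   # may need to grow (and copy) the backing buffer
                return True
            except MemoryError:          # the allocation failed - don't let it escalate
                return False

        for sample in range(1000):       # producer loop
            if not enqueue(sample):
                print("buffer full - shutting the producer down gracefully")
                break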

    Cheers,

    Steen

  10. Hi.

    Version 1.3 of VIRegister is now ready for download:

    Version changes from v1.2:

    - Warning propagation through 'VIRegister - Read.vi' improved (in response to Götz Becker's post).

    Cheers,

    Steen

  11. Yes, that's on purpose. I'm usually quite anal about wiring the error terminals, so it's quite rare I choose not to.

    In this case the error in will always be "no error" due to the error-case before, and no nodes in between to generate an error. I thought only wiring from error out on the lossy enqueue underlined the fact that any error out of this case was from that node.

    And as I write this I realize that any warnings coming in through the error wire will be dropped. I don't know if any warnings can be generated by the nodes before, as there is no way to tell which errors and warnings each LabVIEW function can generate. This means I also don't know if I really want to propagate any warnings or not... Oh well...

    And I like nit-picking, it's a great route to improvement :yes:.

    Cheers,

    Steen

  12. Not quite fair to call it a bug in LV2009. We added new features to the compiler to do deeper optimizations in LV2010, which gave us better performance than we've had before. A bug would be something that got slower in a later version. This is just new research giving us new powers.

    Not to split hairs over words, but several orders of magnitude slower (~4000 times), for a specific data type only (variant), in a specific configuration (initialized feedback node) equals highly unexpected behaviour. For sure the code runs without error, so the computation itself is fine. It's a question of how we define a bug. When I see performance vary so greatly between neighboring configurations I can't help but wonder if no memory got hurt in the longer computation? :P

    In LabVIEW 8.6.1 the same code runs in about 50 ms, so very comparable to LV 2010 SP1. But in LV 2009 SP1 that figure is 3 minutes.

    I'm not sniping at you in R&D, who make huge improvements to the LabVIEW we love without ever getting due credit for what you're doing, but something did happen here. In this case it almost made me abandon variant attributes as a means of key cache, since VAs have varied greatly in performance over the last several versions of LabVIEW. Had I (falsely) concluded that VAs were unreliable performance-wise, that would've had widespread consequences for many people, including myself, since my recommendations in regard to LabVIEW and NI SW/HW in general are taken very seriously by a lot of people in Denmark.

    The only thing that made me stick to digging into this until I found out what was wrong was the fact that I started out by making two small code snippets that clearly demonstrated that the binary and linear algorithms were in fact still used as expected, and thus set up my expectations toward performance. My VIRegister implementation using VAs showed a completely different picture though, and it took me a long time to give up finding where I'd made a mistake in my code. Then I was forced to decimate my code until I was down to the snippet I posted earlier, which hasn't got more than a handful of nodes in it. Big was my surprise when performance went up by a factor of 4000 just by replacing the feedback node with shift registers.

    But never mind, I couldn't find anywhere to report this "bug" online (I usually go through NI Denmark, but it's the weekend), so it hasn't been reported to anyone. If it isn't a bug I won't bother pursuing it. Now I at least know of another caveat in LV 2009 SP1 :D.

    Cheers,

    Steen

  13. Ok, I quickly ported a VIRegister (read/write Boolean) to use variant attributes (let's call that v1.2)

    First benchmarks indicate a performance improvement, especially when using variable register names (500-1000 times better performance), but also when using constant register names (15-20%). When using variable register names you now only lose about half the performance compared to a constant name. I wouldn't have believed that possible a few days ago. With v1.1 the two use cases are a world apart, but that's the difference between n and log(n) search performance right there.

    Performance on LV 2010 SP1 seems slightly better even than 2009 SP1 (5-10%).
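
    For the curious, the n versus log(n) gap is easy to reproduce outside LabVIEW too - a quick Python sketch (a list standing in for the v1.1-style linear search, a dict for a keyed, variant-attribute-style lookup; timings will vary by machine):

        import timeit

        names = ["reg%05d" % i for i in range(10_000)]
        as_list = list(names)                                 # linear search by name
        as_dict = {name: i for i, name in enumerate(names)}   # keyed lookup

        print(timeit.timeit(lambda: as_list.index("reg09999"), number=1000))
        print(timeit.timeit(lambda: as_dict["reg09999"], number=1000))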

    Thanks Mads, for pushing me to investigate this in depth :thumbup1:.

    /Steen

  14. Yes, with the way polymorphic VIs work in LabVIEW I can definitely understand why you do not want to update the VIRegister library again. In most cases the current implementation will be just fine. Thanks again for the work.

    I did run a quick test to see what kind of performance I could get if needed though. To simplify the change I skipped the scope part and just used the register name as the reference.

    Updating 10,000 Booleans with the same node used to take 1.5 seconds; now it runs in 39 ms.

    Ok, I quickly ported a VIRegister (read/write Boolean) to use variant attributes (let's call that v1.2), but was sorely disappointed when I benchmarked it against v1.1. v1.2 came out 1000 times slower than v1.1 on 10000 variable registers. 6 hours later I've isolated a bug in LabVIEW 2009 SP1 regarding initialized feedback nodes of variant data :wacko:. The code snippet below runs in almost 3 minutes on LabVIEW 2009 SP1, but in just 40 milliseconds on LabVIEW 2010 SP1:

    post-15239-0-24095600-1310799926_thumb.p

    Just changing the feedback node into a shift register equalizes the performance on LV 2009 SP1 and 2010 SP1:

    post-15239-0-11443300-1310800220_thumb.p

    But it has something to do with initialization or not, and how the wire of the feedback/shift register branches, so it can also fail with a shift register under some circumstances. I think I can get around that bug, but it'll cause a performance hit since none of the optimum code solutions work on 2009 SP1 (I need this code to work on 2009 SP1 also). This means I'll probably release v1.2 implementing VAs soon.

    /Steen

  15. Allow me to introduce you to LabVIEW 2011. It... huh? Oh... I can't talk about that yet? Seventeen more days? Well, ok... but you're not going to keep a secret like the Asynch Call By Reference node quiet for ever. ... anyway, there's help coming.

    Heh :rolleyes:

    Now you mention it... In the context of Real-Time: do these nodes use the FP for data transfer, or is the con pane on the node just an interface directly to the terminals on the BD of the receiver? I'm basically asking if input data winds up in the user interface thread and so must cross in the transfer buffer, or if it is injected directly into the execution thread without forcing a thread switch? The pre-11 synchronous node forces an initial switch to the user interface thread on the receiver end to deliver input data, as I understand it...

    /Steen

  16. Named queues can be used if you set a front panel control, pass using a call by reference, populate an FGV, populate a global, etc...

    Tim

    You must mean unnamed queues... But using FGVs and globals means using references, so those are apparently deemed "unworthy" as well. Call by reference and using the FP specifically aren't good options on Real-Time.

    I'm wondering about dynamic code myself in this context, hence my earlier question regarding that.

    /Steen

  17. As before in 1.1 when the linear search was used to find the register refnum.

    The linear search is just a bit slower in absolute terms, but with 10000 registers this slowness has extra impact due to the combination of having a large list to search, and having to do it so many times...

    In the test I changed the register name on every call so it is a worst-case scenario. The very first run takes a bit longer than the quoted times because of the array build though (that part could perhaps be sped up as well by expanding the list in chunks instead of on every call, but that's a minor issue).

    Ok, so you have 1.5 seconds in v1.1 and 39 ms for the same operation in your v1.1 modified for VA? Looks promising, but the hit on constant-name lookup better be small! ;).

    How do you solve the lookup problem of not being able to look up the VA by Register name alone, but needing to use the Register ID (currently a cluster of both Register name and Scope VI refnum)? I have a use case where only the Scope VI refnum differs, while the Register name is constant.
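
    What I mean, sketched in Python with made-up names: key the table by register name and scope together, so two registers that share a name but differ in scope stay distinct.

        registers = {}

        def write(name, scope, value):
            registers[(name, scope)] = value      # composite key: (Register name, Scope)

        def read(name, scope):
            return registers.get((name, scope))

        write("Setpoint", "MainController.vi", 3.5)
        write("Setpoint", "SubController.vi", 7.0)     # same name, different scope
        print(read("Setpoint", "MainController.vi"))   # -> 3.5
        print(read("Setpoint", "SubController.vi"))    # -> 7.0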

    /Steen

  18. Perhaps... but it doesn't mean we shouldn't use wires when we can. Faced with the choice of implementing a solution with wires versus a solution without wires, I'll choose the solution with wires unless there are extenuating circumstances that make the wire-free option the better technical solution.

    I'm starting to get the gist of your message. It's something I need to give some more thought, but I can easily see the safety of using wires versus references. I've always been aware of the pitfalls of references of course, but I may not have assigned sufficient weight to the dangers of such architectures. It must be a natural step in my development as a programmer, which incidentally fulfills my wish from my initial post that I'm hoping to learn something new here :rolleyes:. In that case I'll probably start asking questions about the alternatives to references, springing from concrete use cases.

    But this isn't a point of view widely adopted I'd say? Not even inside NI it seems. That probably ought to change.

    What other people can or can't comprehend and what they might make a mess of, isn't my concern.

    I confess, your apparent lack of concern for code maintainability confuses me. Am I misunderstanding what you're trying to say? Do you work in an environment where you and only you ever look at your code?

    No, you're right - it was a very egotistical statement from me. I'm just so used to customers making a mess of even the simplest of things sometimes that I've grown to expect people to have difficulty understanding the solutions I (and my colleagues) provide. Not because the solutions aren't maintainable or because they lack a consistent coding style, but simply because I use more advanced functionality than basically the case structure. But it was a rude generalization from bad habit - I apologize. I do indeed go to great lengths to achieve the most solid and maintainable architecture, and when I demo my code at workshops, the audience usually has no difficulty understanding what goes on.

    Therefore, the difference in my mind between a named and an unnamed queue is convenience.

    In my mind, the main difference between them is readability and confidence. When I'm looking through new code there's far more work involved in understanding and verifying the behavior of a named queue versus an unnamed queue.

    I agree completely. This is also the case with events and FGs and the like. I just need to get my head around the fact that so many use these methods without raising the same questions (JKI for instance).

    For me to gain confidence the components are going to work together correctly, I have to trace through the code and make sure unexpected things can't happen due to all these asynchronous loops obtaining and releasing the queue on their own schedule. I have to figure out all the possible timing variations that the parallel loops can execute with. I've seen code by very good developers where a parallel loop may obtain a named queue, send a message, and release the queue before the message receiving loop has had a chance to "permanently" obtain the named queue. They built race conditions into their code without realizing it. These aren't LV newbies; they're people who have made a living with Labview for a decade or more. They just wanted to take advantage of the "convenience" of named queues to reduce the number of wires on the screen and have a cleaner block diagram. (FYI, I succumbed to the "convenient and clean" siren song for a while too.)

    I did just that in VIRegister v1.0. It could release a named queue reference before any reader got hold of it, due to an unforeseen use case :oops:. Case proven.

    When I keep that name secret, no one else is going to change my data...

    Ahh.... security through obscurity. We all know how well that works. ;) (Joking. I realize the risk of somebody trying to hack into your application via a named queue is probably very, very, low.)

    Sure, for secure applications that path won't work. But there are so many ways to jam a wedge into a (standard run-of-the-mill) LabVIEW application that it won't matter much from a security standpoint. I'm addressing the mistakes that could happen from naming a "globally referenced variable" something common. But my focus is misplaced there - I should concentrate at least as much on the inherent negative exploitability of referenced data. I've just seen it as pretty easy, which is kind of dangerous (and I have noticed the first sentence in your signature ;)).

    There isn't any replacement for sharing data (by reference) when you want to share data (a common signal). I can't use by-value for this.

    Maybe this is a stupid question, but can you explain exactly what you mean by sharing data? Is it data multiple components both read from and write to? Do you consider data created by one component and consumed by another "shared?" I ask only because I'm not sure I agree with your assertion references are required to share data, but my disagreement hinges on what you consider shared data.

    Sure. I mean shared for instance when multiple readers need to react to the same "signal" (a Boolean stop-signal is probably the simplest example). If you wire that data by-value (forking the wire) from the (usually single) source to multiple readers, they'll read a constant and stale value. The readers obviously can't detect if data in another wire changes its value. I may be ignorant here, but I'd solve that with a queue for instance (or a notifier or event), but in all these cases, even though we are for sure wiring to all readers, what we're wiring is still a reference to the same copy of data. Then the question of how we obtain that reference remains; is that by wire (e.g. unnamed queue) or referenced in itself (e.g. named queue). It's probably this I can't see how to solve without using referenced data at all?
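
    To put the stop-signal example in code (Python's threading.Event standing in for a notifier or single-element queue - purely illustrative): every reader watches the same shared object, so even though the reference is handed out "by wire", the data itself is shared by reference.

        import threading, time

        stop = threading.Event()              # the one shared copy of the signal

        def reader(name):
            while not stop.is_set():          # every reader sees the current state
                time.sleep(0.01)
            print(name, "saw the stop signal")

        readers = [threading.Thread(target=reader, args=("loop %d" % i,)) for i in range(3)]
        for r in readers:
            r.start()
        time.sleep(0.05)
        stop.set()                            # one writer stops all readers
        for r in readers:
            r.join()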

    A similar use case I have a hard time finding a "wired" solution for is sending data to a dynamically dispatched VI without using the FP terminals (Set Control Value). The reasons for not using the FP terminals are that it's very fragile (a constant string must match the terminal label), that it won't work too well in built applications (you need to include the FP here), and that the FP is a bad communication interface on LV Real-Time (except for FPGA). Even if we say unnamed queues are OK, you still need to deliver at least that reference to the dispatched VI somehow.

    I've mentioned it before on the forums, but it's never my intent to tell someone their implementation is wrong. (Unless, of course, it doesn't meet the stated objectives.) All design decisions carry a set of consequences with them. Experienced developers understand the consequences of the decisions and make choices where the consequences are most closely aligned with the project goals. Your goals are different than my goals, so your "right" decision is going to be different than my "right" decision.

    I share my implementations as a way to explore the consequences of the different design decisions we are faced with so we all can be better informed. It's not my intent to preach the 'one true path.'

    Don't worry about hurting my feelings, you have a very long way to go yet before that happens - not that I find you offensive at all :yes:. When I look at my past as a programmer (in this context) I can clearly see the learning curve I've climbed. And I didn't expect to be done with that just yet. I'm reading intently because I know there's something new here for me to learn. And it's a privilege to be able to participate in forums like this, even though it can be hard to find the time.
