Posts posted by smithd

  1. 9 hours ago, drjdpowell said:

    JKI State Machine “states” are actions that are executed internally by the loop.  Unless one deliberately calls the “idle” state, “macros” of states execute in an atomic and isolated manner.  That is not the same as sending a set of messages to another process, where you have no control over what other messages are executed intermixed with yours.  

    My problem isn't with the enqueue semantics but with the idea that an external caller should ever have to queue up multiple actions. I'd prefer separating the external interface (which in this case, yes, would be the macros) from the internal one (which might take multiple actions based on the request/event which occurred). It's not a big deal I suppose, but it gets very confusing very fast when you mix actions to be taken with messages that are received. That's my main complaint. My other question is: if you want these N actions to be taken synchronously without interruption, why not subVIs?

    5 hours ago, ShaunR said:

    Basically. If you try to use events in "event driven" languages the same way you use them in LabVIEW for 1:1, everything falls over, because the assumptions about the underlying mechanisms are incorrect. I think architectures should be language agnostic.

     

    5 hours ago, drjdpowell said:

    How do they manage not to (implicitly, at least) queue up "events" that affect a serial-access resource?  Real-world events do have time-ordering (pick up the ball, throw the ball, catch the ball) so I'd be surprised if it would be useful to be agnostic about this.  Other LabVIEW features, like parallel calls on non-reentrant subVIs or DVRs, are all implicitly queued.

    I still don't totally understand, so let me come at it from the other direction, which may answer drjd's question. My understanding of .NET events, for example, is that they are a list of function references. When you register for an event, you add your function to the list. When the owner generates the event, it's really synchronously calling the registered function of every subscriber. However, this implementation difference doesn't seem like it should matter all that much, unless you rely on the synchronous nature of events.
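
    To make the mechanics concrete, here's a minimal sketch of that idea in Go rather than .NET (all names invented): the "event" is literally a slice of registered callbacks, and firing it is a synchronous loop over them, so the generator's thread does all the subscriber work before the fire call returns.

    ```go
    package main

    import "fmt"

    // Event is a list of registered handler functions ("delegates").
    type Event struct {
        handlers []func(msg string)
    }

    // Register adds a subscriber's function to the list.
    func (e *Event) Register(h func(msg string)) {
        e.handlers = append(e.handlers, h)
    }

    // Fire synchronously calls every registered handler, in order.
    // Nothing is queued; the caller's thread does all the work.
    func (e *Event) Fire(msg string) {
        for _, h := range e.handlers {
            h(msg)
        }
    }

    func main() {
        var tempChanged Event
        tempChanged.Register(func(msg string) { fmt.Println("logger saw:", msg) })
        tempChanged.Register(func(msg string) { fmt.Println("UI saw:", msg) })
        tempChanged.Fire("temp = 42.0") // both handlers run before Fire returns
    }
    ```

    So the difference from LabVIEW is exactly the one being debated: there's no per-subscriber queue unless the subscriber builds one itself.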

  2. On 8/6/2016 at 0:16 AM, ShaunR said:

    It's not portable. It is using tribal knowledge of a specific implementation of events. It is more of an abuse of events, knowing they have a queue in the LabVIEW implementation, so the architecture will not work in the other, true event-driven languages that I also program in (Free Pascal, for example).

    I don't fully understand this one. Could you clarify?

     

    On 8/13/2016 at 3:55 PM, drjdpowell said:

    Just an aside, but I almost never use User Events as 1:N.   I use both Events and Queues as N:1 message carriers.  Each receiver creates the communication method it prefers, preloads any initial messages, and then passes it to the sender(s).  For 1:N I use arrays of these Events/Queues.

    Do you have any additional thoughts on the pros and cons of each? It seems like if you always stick to the N:1 case, queues are almost always better (except for UIs, of course).

  3. Never do #1. Not because in this specific case it's not fine (although it's not) but simply because it's a bad habit.

    It's an incorrect form because if an error occurs on the AVI open (or comes in on the error in terminal, or the AVI is empty) then N becomes 0. If N is 0, the for loop doesn't run. If the for loop doesn't run, the output AVI refnum and output image refnum are both invalid, and thus you have a memory leak (unless this behavior has changed in the last few years).

    To summarize:
    Error in -> N=0 but no refnums are created -> OK
    Error on open -> same -> OK
    Empty AVI file -> N=0 but both refnums created -> memory leak

    ...which is why I say it's awfully hard to think about this every time, so I just don't ever follow that pattern.
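
    LabVIEW tunnels don't translate directly to text, but here's a rough Go analog of the same trap (a sketch with made-up names, not anyone's real code): the handle only reaches the cleanup call through a variable assigned inside the loop, so a zero-iteration loop hands cleanup a nil while the real handle leaks.

    ```go
    package main

    import "os"

    // process mimics pattern #1: open a resource, pass its handle
    // "through" a loop, and clean up whatever comes out the far side.
    func process(path string, n int) {
        f, err := os.Open(path) // analog of the AVI open
        if err != nil {
            return
        }
        var out *os.File
        for i := 0; i < n; i++ {
            out = f // analog of the loop's output tunnel carrying the refnum
        }
        // If n == 0 the loop never ran, so out is still nil. Close on a
        // nil *os.File just returns ErrInvalid, and the real handle f is
        // never closed -- the same leak as closing the invalid refnum
        // that comes out of the for loop's tunnel.
        out.Close()
    }

    func main() {
        process("movie.avi", 0) // leaks f (if the open succeeded)
    }
    ```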

     

    I personally usually go for 3 unless I'm doing something dynamic (a VI server call, dynamic dispatch) where it isn't clear what could happen inside the loop. In that case the safer bet is a shift register. Always using a shift register also eliminates the risk of the for loop not running, but introduces the risk that you could accidentally invalidate the refnum yourself at some point (a case structure with "use default if unwired", for example).

     

    The error on the shift register depends specifically on what I'm doing. Like one commenter in the other thread mentioned I like to put a case structure in and never run the for loop if an error is coming in. I then like to either (a) immediately stop on any error, as you might do if you are processing frames in a video or (b) create an output array of all the errors which occurred, which you might do if you were batching some set of actions. I don't believe there is ever a good 'pattern' to follow for error handling because it always requires some level of thought.

     

    Fun fact: did you know that imaq dispose image doesn't run if an error occurs upstream (at least according to the help)? And when I say fun, I mean I hate imaq so so much :( .

  4. 7 hours ago, ShaunR said:

    Most experienced engineers are using messaging systems today. CVT doesn't really fit with those architectures. Can you elaborate on which use cases the CVT has been whittled down to, and what it has been found not appropriate for?

    Well, if you buy into the trio of tags/streams/messages that comes up in these discussions, nothing stops you from using tags alongside a messaging architecture.

    Anyway, the way it was always designed to be used (as far as I know) is as an abstraction layer for control systems. You create generic processes which all write to or read from different segments of the CVT, and then one or more control loops which operate on that data. The CVT in this situation could also hold configuration data. But the fundamental concept was to make programming LabVIEW more like a PLC, where system tasks fill in an I/O table, handle networking, and handle logging, and all you do is scan through your logic which operates on that data. This can be done with messages too, sure; that works nicely as well.
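
    If it helps anyone picture it, the tag-table idea is really just a shared, concurrency-safe map: I/O tasks write the current values in, logic loops scan them out, and nobody keeps history. A toy Go version (all names invented):

    ```go
    package main

    import (
        "fmt"
        "sync"
    )

    // TagTable is a minimal current-value table: last-written value wins,
    // readers always see the latest value, and no history is kept.
    type TagTable struct {
        mu   sync.RWMutex
        tags map[string]float64
    }

    func NewTagTable() *TagTable {
        return &TagTable{tags: make(map[string]float64)}
    }

    func (t *TagTable) Write(name string, v float64) {
        t.mu.Lock()
        t.tags[name] = v
        t.mu.Unlock()
    }

    func (t *TagTable) Read(name string) float64 {
        t.mu.RLock()
        defer t.mu.RUnlock()
        return t.tags[name]
    }

    func main() {
        cvt := NewTagTable()
        // An "I/O scan" task fills in the table...
        cvt.Write("AI0.Temperature", 42.0)
        // ...and a control loop just reads whatever is current.
        if cvt.Read("AI0.Temperature") > 40.0 {
            fmt.Println("over temp, open the vent")
        }
    }
    ```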

    Since it was made, cRIOs have gotten a lot faster. More people have produced dictionaries, especially DVR- or session-based ones. The DCAF video in the NI Week recordings is a project essentially intended as a replacement for this design, with a similar end goal but better protection against races and configuration screwups... It's not that the CVT is decaying; there are just newer options which seem simpler and easier to use. And with fewer FGVs.

    One area where I think it is still a good choice is something like this (https://decibel.ni.com/content/docs/DOC-41894), although again, because cRIOs have gotten so ridiculously fast, it's questionable how much value the CVT provides over just taking the hit of always using random access in a dictionary library, or spamming your web service with messages for every update that comes in.

  5. 2 hours ago, Mads said:

    I also did a quick test where I stored a DBL directly in the attribute, which turned out to be even faster (1.4x) for writes, and equal for reads. That's probably not the case for more complex data types, but the gap will definitely be smaller than before. The CVT, for example, would then in most cases be better off using attributes to store the actual values instead of keeping separate arrays for them. That would also allow it to be made more flexible when it comes to adding or removing tags.

    This breaks down if you want to access N items, which is the more common use. Accessing N variant attributes costs what, N*µs every time? Accessing N array elements costs N*µs once (to resolve the names to indices) and N*ns ever after.
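
    The exact numbers above are from memory, but the shape of the argument is easy to demonstrate: a named lookup pays the search cost on every access, while the resolve-once-then-index approach only pays it up front. A rough Go sketch of the comparison (a map standing in for variant attributes, a slice for the CVT's arrays):

    ```go
    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        const nTags = 1000
        const reps = 1000

        byName := make(map[string]float64, nTags) // variant-attribute analog
        byIndex := make([]float64, nTags)         // CVT data-array analog
        names := make([]string, nTags)
        for i := range names {
            names[i] = fmt.Sprintf("tag%04d", i)
            byName[names[i]] = float64(i)
            byIndex[i] = float64(i)
        }

        // Style 1: pay the name lookup on every single access.
        start := time.Now()
        sum := 0.0
        for r := 0; r < reps; r++ {
            for _, n := range names {
                sum += byName[n]
            }
        }
        fmt.Println("lookup every time:", time.Since(start), sum)

        // Style 2: resolve names to indices once, then index ever after.
        idx := make([]int, nTags)
        for i := range idx {
            idx[i] = i // pretend this came from a one-time name lookup
        }
        start = time.Now()
        sum = 0.0
        for r := 0; r < reps; r++ {
            for _, j := range idx {
                sum += byIndex[j]
            }
        }
        fmt.Println("index ever after: ", time.Since(start), sum)
    }
    ```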

    We already went through this on the other side so just to summarize for anyone who might care: CVT used to be broadly applied to a lot of dictionary-type needs. Since it was created like a decade ago, there are a lot more cool libraries out there which are better (frequently much better) for the dictionary use case. People are quickly whittling down the use cases for the CVT, which is great. The fewer FGVs in the world the better :)

    On 8/5/2016 at 1:02 AM, hooovahh said:

    The CVT is extremely slow (is that an overstatement?) and the variant attribute look-up table will probably be on the order of 100X faster (no test at the moment).

    The CVT uses variant attributes. The old version (from before about 4-5 years ago) stored named lookups in arrays, if that's what you're thinking of.

  6. 7 hours ago, ShaunR said:

    1- Yes. If you have two queues, the state and order of operation is external to the subsystems. This means you can place something on one queue, wait for a confirmation (if it is important to do so), and then place the copy on the other. You cannot do this with events.

    2- A key point here is message filtering and, consequently, local state and system state.

    Ah, yes, waiting for a response in between would do that.

    I believe a key part of our disagreement is that you seem to care that events can be 1:N. I use them quite regularly as 1:1. The important thing for me is that I can take N different message sources into 1 event structure and still have it be type safe. If that ends up with a bunch of processes all "broadcasting" to one specific listener, that's absolutely fine by me. It's also nice to know that if I want to, I can always reroute data to another source easily enough (although you can say the same for queues).

    7 hours ago, ShaunR said:

    3- (b) Agreed. But tell the JKI state machine that :lol: 

    Seeing the command Macro::DoSomething makes me :( on the inside

    7 hours ago, ShaunR said:

    4-I'm not sure I get what you are saying here and I think you might have me confused with someone else :P 

    No, I read "Queus you can poke the instructions on the queue and be guaranteed they will be acted upon in that order" and interpreted as "peek and poke" and jumped over to the "peek" part -> "you want to be able to see whats on the queue". It totally makes sense, I promise.

  7. Oh come on, how can you not like Quick Drop? I don't use it if I'm making something from scratch, but it has way too many features to just dismiss like that. Right-click in 2015 replaces some of the features, but Quick Drop is still useful.

    Quote

    That sounds like some very valuable feedback that I'm sure NI would have appreciated in February when the beta opened...

    Yes I was just thinking that -- how could the beta testers let this through :)

     

    The TCP thing is the only real killer. I wonder if it's hardcoded into LabVIEW in some way, or if they just updated the sockets to a newer implementation, so that if you change the OS-level settings it will work.

    (http://serverfault.com/questions/48717/practical-maximum-open-file-descriptors-ulimit-n-for-a-high-volume-system)

  8. 4 hours ago, ShaunR said:

    (1) If you send an event to two listeners, you cannot guarantee that one will execute before the other... 

    (2) For control, queues tend to be more precise and efficient, since you usually want a single device or subsystem to do something that only it can do, and usually in a specific order. (3) As a trivial example, think about reading a file. You need to open it, read it, and close it. If each operation was event driven, you could not necessarily guarantee the order without some very specific knowledge about the underlying mechanisms. (4) With queues you can poke the instructions onto the queue and be guaranteed they will be acted upon in that order. This feature is why we have Queued Message Handlers.

    1-Can you guarantee this with two queues? In any message-based environment, sync between two processes is difficult. On the receiving side, events are still received in the order they are generated...

    2-This part is very true. To avoid making a billion events I'll have some events for critical messages that I consider important to the behavior of the system (shutdown, configure alarms) and a generic string message for minor things.

    3-Again, events get there in the expected order (if you send open first, it opens first, unless you mark "read" as higher priority than "open"). However, I don't think it's appropriate to send three commands to any subsystem like that. There should be a single atomic message ("hey, read me my data and send it back") and the receiver should figure out what to do with it.

    4-It would be nice if there were better tools for events. It would also be good if there were better tools for queues, like a true priority system, the ability to listen on multiple queues at the same time, and the ability to selectively flush certain elements from the queue (in a safe way). Events should have better preview mechanisms (well actually I don't care about this but it seems like you did), the ability to limit the size of the queue, and better type propagation. I prefer the tradeoffs of events to those of queues, but both have issues. It would be great if we could bundle an event registration with a queue and treat N queues as an event registration. That would totally fix the issues, wouldn't it?
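
    For what it's worth, "treat N queues as an event registration" is exactly what a select statement gives you in channel-based languages. A Go sketch of listening on multiple queues at once (channel names invented); note there's still no real priority, since select picks pseudo-randomly among ready cases:

    ```go
    package main

    import "fmt"

    func main() {
        cmds := make(chan string, 8)  // "critical" queue
        minor := make(chan string, 8) // generic string-message queue
        stop := make(chan struct{})

        cmds <- "configure alarms"
        minor <- "log this"
        close(stop)

        for {
            // select waits on all three "queues" at once, like one
            // event structure registered for N sources. There is no
            // priority: among ready cases, one is chosen at random.
            select {
            case c := <-cmds:
                fmt.Println("command:", c)
            case m := <-minor:
                fmt.Println("minor:", m)
            case <-stop:
                fmt.Println("stopping")
                return
            }
        }
    }
    ```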

  9. 3 hours ago, dmurray said:

    An issue I've run into: I've been messing around with the events in this project, and at edit-time I've lost some information about the events. An example is shown below, where the event is <New Temperature Hysteresis>, but in the event I get the data for 'New Temperature Limit'. What have I screwed up, and how do I fix it?

    This is the downside of the event system. I still like it, you just have to learn the fiddly bits to make it display nicely. I may be slightly off with these but:
    -User event registrations, in the event structure, show the label of the event refnum as passed into the register for events node. Therefore, your cluster label is "New Temp..." and your event case is the same.
    -Control registrations use the same principle, but with arrays of control refnums it goes totally bonkers. I've used a for loop and To More Generic Class with a VI server constant (this is automatically performed for you when you use a Build Array with different refnums, so throwing this in there is basically a no-op). In this case, the event case shows the label of the VI server constant used with the To More Generic Class function.
    -The inner data node where it says "New Temperature Limit" is based on the label of the element inside the user event refnum as passed into the register for events. That is, your wire "New Temperature Hysteresis" contains a numeric called "New Temperature Limit", presumably because you copy-pasted. You can fix this in one of three ways:

    1. Right click on the user event refnum in your cluster and select show control. Then you can relabel the existing data.
    2. Open your cluster type def (because you absolutely typedef'd it) and drop down a new control of the type you want, in this case a numeric. Relabel this new control to "Blah". Drag and drop "blah" into your user event refnum and it will replace the prior contents, meaning you've just relabeled "New..." to "Blah".
    3. Go back to where you created the event, right click on the constant you used to create the event, change the label to "Blah". Then create a new constant/control off the create user event node. This new control should have the correct label.

    I haven't personally used this, but the discussion is pertinent: 

     

    Since I'm already here, it's worth mentioning that if you rearrange the order of event registration or change the label of the user event (not the contained data), you **must** verify the order of your event cases. The same is true if you bundle up multiple registrations. Changing the order or name of the event registration requires a quick code review. This is the one killer negative feature of the user event system. I still like user events despite these flaws, and I think as long as you know about them you're fine, but...

     

    By the way, one neat thing I don't know a good use for is that you can unregister for selected events on a temporary basis:

    http://zone.ni.com/reference/en-XX/help/371361K-01/lvhowto/dynamic_modifying_reg/

    One thing I wanted to use it for was a state machine: you can literally stop listening for some events when you're in a state where they don't matter. But I decided it was easier and more readable to just put a case structure in the right places.

     

    7 hours ago, dmurray said:

    Your second point, I can't quite visualize. Can you post a simple screen shot? 

    I don't have LabVIEW here, so I'll try again. It's worth mentioning, to be clear, that this is just how I personally organize the code. I've never really swapped notes with anyone on this.

    Imagine you have two loops: one that reads the data from the sensor (I'm assuming this is what the loop you showed above will become -- let's call it "HW") and one that interacts with the user (let's call it "UI").

    "HW" generates a temperature reading event and an alarm event. "UI" generates a new limit and stop event.

    "HW" consumes the new limit and stop events. "UI" consumes the temp reading and alarm events.

    So my init sequence would be:

    1. "HW" creates the temp reading event and alarm event and stores this in a cluster called "HW.Pub"
    2. At the same time, "UI" creates the new limit and stop event, storing these in a cluster called "UI.Pub"
    3. On the higher level diagram, you feed "HW.Pub" to a function called "UI.Subscribe". This function pulls the events it cares about out of "HW.Pub" and bundles everything up into a cluster called "UI.Sub"
    4. At the same time, on the higher level diagram, you feed "UI.Pub" to a function called "HW.Subscribe". This function pulls the events it cares about out of "UI.Pub" and bundles everything up into a cluster called "HW.Sub".
    5. You pass "HW.Pub" and "HW.Sub" to a function ("HW.Main"), and you pass "UI.Pub" and "UI.Sub" to a function ("UI.Main").

    I like this because it makes it clear just by looking at the code exactly what each loop is subscribing to and publishing. This doesn't scale for more dynamic situations.
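
    Since LabVIEW diagrams don't paste into a post, here's a rough text analog of steps 1 through 5, using Go channels in place of user events (a hedged sketch built only from the description above, not real code):

    ```go
    package main

    import "fmt"

    // Steps 1-2: each loop creates the events it publishes.
    type HWPub struct {
        TempReading chan float64
        Alarm       chan string
    }
    type UIPub struct {
        NewLimit chan float64
        Stop     chan struct{}
    }

    // Steps 3-4: each loop's Subscribe pulls out only what it consumes.
    type HWSub struct { // HW consumes new limit + stop
        NewLimit chan float64
        Stop     chan struct{}
    }
    type UISub struct { // UI consumes temp reading + alarm
        TempReading chan float64
        Alarm       chan string
    }

    func main() {
        hwPub := HWPub{make(chan float64, 8), make(chan string, 8)}
        uiPub := UIPub{make(chan float64, 8), make(chan struct{})}
        hwSub := HWSub{uiPub.NewLimit, uiPub.Stop}     // HW.Subscribe(UI.Pub)
        uiSub := UISub{hwPub.TempReading, hwPub.Alarm} // UI.Subscribe(HW.Pub)

        // Step 5: pass pub+sub into each loop ("HW.Main", "UI.Main").
        go hwMain(hwPub, hwSub)
        uiMain(uiPub, uiSub)
    }

    func hwMain(pub HWPub, sub HWSub) {
        pub.TempReading <- 42.0 // publish a reading
        <-sub.Stop              // consume the stop event
    }

    func uiMain(pub UIPub, sub UISub) {
        fmt.Println("temp:", <-sub.TempReading)
        close(pub.Stop) // tell HW to stop
    }
    ```

    The point survives the translation: you can see exactly what each loop publishes and consumes just by reading the two Sub types and the wiring in main.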

     

  10. I enjoy the 'dancing' so far, although I haven't used it for anything real. The icons are annoying though, and kind of ugly. I also really wish the various toolbar buttons could go back to being proper buttons with outlines you can see. LabVIEW looks old, I think we've all accepted that fact. Trying to pretty it up by changing some icons is just odd.

  11. You only want to pass around the events themselves, not the registration refnums. You should register the very last thing before you get to the event structure and unregister the very first thing after the event structure. Otherwise you have this queue floating in the ether that isn't being read from.

     

    I personally make a "create" function for any process which generates a cluster of input events and a "subscribe" function which takes any clusters from any other processes and copies the events into a local 'state' cluster. That is, I move the events around only during initialization. This probably doesn't work well for something more dynamic, but...

  12. OK, that makes sense, but then you said functional programming has the same issues... As far as I understood it, not worrying about specific types (being able to pattern match, for example, on any input data type which has a numeric x and a numeric y, regardless of the other contents of the data type) and being very explicit about behavior (i.e. f(x0) ALWAYS = y0) was the point.

  13.  

    11 hours ago, Manudelavega said:

    I might be lacking imagination, but I achieved results in my LV code that I feel I could never have done without OOP. I have a plugin architecture where all the plugins share a lot of functionalities, which I have coded inside the parent class. Then a "plugin manager" contains an array of all the plugins, only knowing them as "parent class" and never having to worry about which type of child class they are...

    I think the post would better be described as "there are better ways to do this than this crap we invented 30 years ago". While OOP is 100x better than plain VI server calls with megaclusters or variants as the parameters, the poster would, I think, argue that OOP is 100x worse than something like the shiny interfaces of golang (I can't find a good simple description of it, but basically you say "type T has methods called A, B, and C. Anything else in the whole world that has methods A, B, and C is therefore a T, which doesn't stop it from also being a U, V, etc.").
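
    Since I couldn't find a good simple description either, here's the two-minute version in actual Go, with toy types I made up. Note that Thermometer never declares that it implements anything:

    ```go
    package main

    import "fmt"

    // An interface just names a method set; nothing "implements" it
    // explicitly.
    type Measurer interface {
        Measure() float64
    }
    type Closer interface {
        Close() error
    }

    // Thermometer never mentions Measurer or Closer...
    type Thermometer struct{ last float64 }

    func (t *Thermometer) Measure() float64 { return t.last }
    func (t *Thermometer) Close() error     { return nil }

    func main() {
        t := &Thermometer{last: 42.0}
        // ...but because it has the right methods, it IS a Measurer,
        // and that doesn't stop it from also being a Closer.
        var m Measurer = t
        var c Closer = t
        fmt.Println(m.Measure())
        _ = c.Close()
    }
    ```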

    18 hours ago, ShaunR said:

    Functional programming still has fat wires -- the greatest barrier to reuse.

    Fat wires? 

  14. Yep, all the cool kids are doing functional programming now. Over in functional programming land the cake is definitely not a lie.

     

    But really, while I get the complaints (and I complained all the time about LVOOP stability), LVOOP is basically a fancy cluster. It's not that big of a deal; it's just a handy tool sometimes.

  15. 5 hours ago, dmurray said:

    Okay, had a re-watch of the Jack Dunaway presentation from NI Week a few years back, and realized I was splitting the event reg refnum wire incorrectly i.e. I was just branching it and wiring it to the two event handlers. Having two Reg Events functions solved the problem. The fine details in LabVIEW are always my downfall.... 

    Yep, Register For Events is basically "create receive queue" while Create User Event is "create send queue".

    To clarify my point, it was more to say that while LabVIEW has a bunch of different communication mechanisms, I'd just pick one. User events are nice; you can find a decent number of threads after the improvements in 2013 where people say they've totally switched over from queues, and I've done that on a few programs so far as well. So in general my suggestion would be to either change the user events out entirely for queues, or change the queues out entirely for events.

    While there is absolutely nothing wrong with also using a stop message for the polling loop, for something simple like that I have zero objection to globals or locals. That loop is obviously not something you ever plan to reuse, so worrying that it's tightly coupled to a global variable is... well, not worth worrying about. That having been said, if you have a global stop event it does make things cleaner to use it.

  16. 1 hour ago, dmurray said:

    A question on using a global for stopping loops: in this case I assume that because the global is only written in one location (i.e. the event loop when I'm shutting down), and read in several loops, that this is a safe use case for a global? I think my other options are using a LV2-style global (not much point, as it's effectively the same thing?), or using a CVT value, seeing as I already have that functionality in the code anyway.

    I would in general prefer to use the event for every loop. For example, you have a QMH and an event structure -- I would pick one. The issue with the global is that now every function is coupled to that one instance of the stop global. Using an event (or a queue message, or a notifier) means you can pass in any queue or event. Is it a big deal? No, I'm just saying what I'd prefer.

    The other issue with the global, rather than a queue, notifier, or event, is that the global is sort of implied to be an abort rather than a request to stop. The natural implementation is what you have in your polling loop -- wire the global directly to the stop terminal. For the polling loop that's not a big deal, but in a more complex situation (talking to an FPGA, for example) maybe you want to step through a few states before you actually exit, which means you need to either (a) put all that logic outside of the loop you just aborted with the global or (b) check the state of the global, then trigger some internal state change which eventually stops the loop.
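
    Here's a hedged sketch of option (b) in Go terms, with invented states: the stop signal is treated as a request, and the loop walks through its shutdown states before actually exiting, instead of being aborted in place.

    ```go
    package main

    import "fmt"

    type state int

    const (
        running state = iota
        stopFPGA
        flushLogs
        done
    )

    func main() {
        stop := make(chan struct{})
        close(stop) // pretend someone just requested a stop

        s := running
        for s != done {
            switch s {
            case running:
                select {
                case <-stop:
                    // A request, not an abort: start the shutdown
                    // sequence rather than exiting right here.
                    s = stopFPGA
                default:
                    // ... normal work ...
                }
            case stopFPGA:
                fmt.Println("stopping FPGA cleanly")
                s = flushLogs
            case flushLogs:
                fmt.Println("flushing logs")
                s = done
            }
        }
    }
    ```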

  17. 15 hours ago, ShaunR said:

    DLLs are not the same as .NET. The only time you will have problems is when you run in the UI thread (because you only have 1 in the root loop). They are also non-blocking (when "Run in Any Thread") so we're a couple of rings further out in hell when compared to .NET but arguably you are in hell somewhere if not using native LabVIEW :D. If you are using orange nodes you need to throw that code out and think of something new.

    I looked through the help and couldn't find a solid statement on this, but my understanding was that the CLFN consumes whatever thread runs it, just like .NET, so if you have CPU cores*4 (or whatever the multiplier is for the default execution system) long-running DLL calls, the entire execution system will hang, just like .NET. I definitely recall someone running into this with a lot of parallel DAQ tasks, and if that isn't the case I have no clue what was happening with webdav, but again I have no documented proof.

    Does anyone know if this behavior is documented (or if I'm just totally wrong)?

    8 hours ago, ShaunR said:

    For databases, I have my own cross-platform SQLite one based on the [non-.NET] SQLite binaries. MySQL-type databases are just strings over TCP/IP, so that's no hardship, but there is an excellent alternative to the NI toolkit which is open source and was free when you had to pay for NI's offering (which my google-fu is failing miserably at finding again). Where .NET usually comes in with databases is with the ODBC abstraction, which is nice but unnecessary.

    As a sort of addendum: for OS-shipped features, there is very little in .NET that you cannot do with calls to the OS Win32 binaries, and you are not saddled with the huge overheads and caveats.

    While there are problems with .NET (both the LabVIEW interface and the fact that .NET can be really, truly bizarre sometimes), I can't say I enjoy the CLFN interface any more. Rummaging through 10 header files just to find out that a myBlahType is actually just an I32, hoping you didn't configure anything wrong enough to crash LabVIEW for the Nth time, etc... (Not really a complaint against LabVIEW; I'm just saying your argument of "I hate .NET and look how easy it is to just call the DLLs instead" is a bit crazy, since calling DLLs is itself quite challenging.)

  18. On 7/21/2016 at 1:19 PM, John Lokanis said:

    When calling .NET code, the execution thread is handed off to .NET, preventing LabVIEW from using it to execute any other parallel code. So, if you make enough simultaneous .NET calls (say around 50+) and they take a long time to return, you can starve LabVIEW of all threads. Not a single while loop will even be able to execute one iteration. This is crazy. The only solution is to customize your ini file to manually allocate threads to a different execution system and then set all your VIs that make .NET calls to run in that execution system. I hope in the future LabVIEW will automatically run all external calls in a separate execution system by default, with its own pool of threads. This one has bit me more than once.

    Ugh, yes, same with DLLs. In one particular application with a looooot of parallel DLL calls I had to wrap the functions in a different execution system (i.e. Other 1). I had a bunch of parallel webdav reads, each of which takes a while since the files are big, and everything stopped responding. I had no idea what was happening until I remembered I had defaulted to webdav instead of FTP. Sadly, I just ended up going with the older FTP stuff because it (a) is just as fast if not faster, (b) has a timeout where webdav doesn't, and (c) doesn't block the entire execution system.
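
    The "Other 1" trick doesn't have a direct text equivalent, but the idea -- confine the blocking calls to their own small, bounded pool so they can't eat every worker thread -- sketches out like this in Go (blockingDownload is a stand-in I made up):

    ```go
    package main

    import (
        "fmt"
        "time"
    )

    // blockingDownload stands in for a long-running DLL/.NET call.
    func blockingDownload(id int) {
        time.Sleep(100 * time.Millisecond)
    }

    func main() {
        // A counting semaphore: at most 4 blocking calls in flight,
        // the analog of giving them their own small execution system.
        pool := make(chan struct{}, 4)
        doneCh := make(chan int)

        for i := 0; i < 20; i++ {
            go func(id int) {
                pool <- struct{}{} // take a slot
                blockingDownload(id)
                <-pool // give the slot back
                doneCh <- id
            }(i)
        }

        // The rest of the program keeps running; only the pool is
        // saturated, not every available worker.
        for i := 0; i < 20; i++ {
            fmt.Println("finished", <-doneCh)
        }
    }
    ```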

    On 7/21/2016 at 1:25 AM, shoneill said:
    • When deploying code to an RT target, we're never quite sure which version of the code gets deployed. Inlined VIs, when saved with changes, do not seem to propagate the "changed" flags to their owning VIs, which then leads us to do a bottom-to-top forced recompile in order to properly re-sync the compiled cache with the code. Even then we sometimes get VIs marked as running on the RT which shouldn't be (they have just been removed), or vice versa, where a VI which definitely IS running on the RT is not marked as running in the IDE. Frustrating as hell and a real time waster.
    • I also love that the project does not allow two RT targets to share the same IP address. We have one host software with several versions of our RT system, only one of which is ever connected. If I give two targets IP addresses of 10.0.0.15, LV cries. If I give one 10.000.000.15 and the other 10.0.0.15, LabVIEW is quite happy (and, apparently, stupid).

    -Inlining -- I filed a CAR on this at one point; I don't know what happened to it.
    -The IP address thing is awesome (and yes, quite stupid).

    On 7/21/2016 at 5:47 AM, hooovahh said:

    Icon editor glyphs. Oh, it is a shame for sure that this shipped feature of LabVIEW doesn't work and hasn't for so long.

    The icon editor is just all around awesome. Have you guys seen the ones where:
    -Using the fill tool covers up everything in the icon unless you switch tabs before using it
    -Dropping a glyph into a specific layer shoves every other glyph in that layer off screen
    -Changing the icon overlay for a lvlib/lvclass sometimes becomes opaque and covers up all the other layers in the icon
    -Changing the icon template changes the printable area and squishes all of your text into the banner or evenly spreads text out into the banner area
     

    To add one more of my own:
    User event registration behavior. "Oh, you reordered the registration? You must have also wanted to reorder what code goes with what event!" Thanks, LabVIEW :(
