Posts posted by smithd

  1. Just needs someone who understands the LabVIEW script node interface, C bindings to 0MQ, and how to do the Jupyter protocol - which rules me out on erm, 3 counts :-( !

    Well, ZeroMQ already has pretty solid LabVIEW bindings: http://labview-zmq.sourceforge.net/

    I've used those a little bit on Windows and also tried them out on the 9068 back when it was released, and the guy who wrote them uses them on Linux too. I'm not familiar with either of the others, though.

  2. I don't know if there is a good way to have a zero-allocation structure, because it would have to be something that would be used universally (or else you'll have some code using a standard error and some using an RT error) but whose RT-ness could be turned off in favor of a more verbose output.

     

    One option to aid in this could be to make wrapper functions with conditional code, so that when the code runs on RT and the flag "RTVerbose" is not set to true, all of the dynamic allocations are removed. Except really you don't want this at all. What you really want is the less verbose version when you're on RT, RTVerbose != true, and the code is inside a timed loop or an above-normal-priority VI, and currently I don't think there is a language construct to do this (although I've asked for it :( )

     

    Back to the general point about how it would look, I personally tend to think it should be an int + a dictionary (i.e. a variant lookup in our case). I suppose a class could do it too (the base class is int-only, then a verbose class has source, call chain, etc., then user classes have custom data), but then there is a ton of code needed to generically access that information. All of that has basically already been written for variants with the various probes, XML flatteners, etc.

     

     

    Some of the other threads mention multiple errors. I think the simple dictionary would also do a better job of handling multiple errors than something like an error stack. Just thinking out loud here, but it seems to me that for any given chunk of code there should only be one error--every other problem can be traced back to that. I think multiple errors come in handy in two situations:

    1-Combining the error from parallel chunks

    2-Init, where you want to know everything that went wrong so you can fix it.

    (2) would really make more sense as a custom field, which is what I do now -- Tag not found, Append: Tag1, Tag2, Tag3 would be converted into MissingTags=["Tag1", "Tag2",...]

    (1) would make more sense as a named error field -- ie rather than Error[0], Error[1], Error[2] you want to see FileLoggerError=7, FTPError="Thank you for using NI FTP", etc.
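
    A rough sketch of what I mean by int + dictionary, written as plain Python standing in for an int plus a variant-attribute lookup (all the names here are made up for illustration, not a real API):

      # Hypothetical sketch: an "error" is a code plus a dictionary of named fields.
      # In LabVIEW terms the dictionary would be a variant with named attributes.
      def make_error(code, **fields):
          return {"code": code, "fields": dict(fields)}

      # (2) Init-style error: one code, with the specifics in a custom field.
      init_err = make_error(7, MissingTags=["Tag1", "Tag2", "Tag3"])

      # (1) Parallel chunks: merge by naming each source instead of Error[0], Error[1], ...
      def merge_parallel(**named_errors):
          merged = make_error(0)
          for name, err in named_errors.items():
              if err["code"] != 0:
                  if merged["code"] == 0:
                      merged["code"] = err["code"]   # keep one top-level code
                  merged["fields"][name] = err       # stash the rest under their names
          return merged

      combined = merge_parallel(FileLogger=make_error(7),
                                FTP=make_error(-1, message="Thank you for using NI FTP"))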

     

     

    The only thing I'm really certain of is that I wish the boolean were gone forever :)

  3. Are you using timed loops with the time source set to absolute time? Since the loop is tied to absolute time I've seen this cause issues when the system time is modified.

    Do regular while loops freeze too?

    How are you determining that things have frozen? Are you running in interactive mode? If so, does it disconnect when you trigger this change?

    Are there any other sections of your code where you are changing behavior based on a timestamp?

  4.  

    Back on-topic: (No worries, hooovahh!)

     

    I used and studied RyanK's CEH back when he published it. I found two major (IMO) problems with the approach:

    1. CEH handles errors asynchronously

      Based on my experience as an app developer, I believe error handling should be done synchronously. ("Handling" involves clearing an error and responding to it gracefully, even if that response is to ignore the error or translate it into other information like a "timed out" Boolean parameter.) If the error can't be handled sync, then it can be reported to a higher-level entity async for logging, display to an operator, or handling at a broader scope than the "thread" (LV loop) that threw the error and failed to handle it locally. Broad-scope handling actions in my RT apps often look like resetting the application's operating mode or forcing output signals to fail over to a safe state until the operator reviews and clears the critical error.

      I think ShaunR's suggestion of an "Error Event" has the same flaw: events are asynchronous in LV. What I think we need is an "Error Callback" so it's handled synchronously with context-sensitive dispatch of which handler to use. On that note...

       

    2. CEH is insensitive to context

      An error code has no context whatsoever. Error 7 ("File not found") might be safely ignored in some processes, but it might represent a critical failure in others. It gets thrown by native APIs that have nothing to do with file access, too, so you might not think to define a general-purpose handler for code 7 that addresses an unexpected API or app process. Errors should be handled as close to where they're thrown as possible, so the handling code can be written in the same design context as the code that generated the error.

     

    I don't disagree in general, but the Express VI settings help provide for some of this. For example, a given instance of the Express VI might be configured to clear error 7 while another instance is not. It still doesn't provide source, though, which I agree is unfortunate. The DevZone paper also describes using it super-locally to handle things like retries. And of course in some cases there is nothing to do locally. It's the middle range of issues, where the handling is more complicated than just a retry but not bad enough to just shut down, where there are challenges using the SEH.

     

    We ended up doing something I think is similar to what you described, but of limited usefulness since it's within our framework. Code modules synchronously return error codes to the caller and provide a method for categorizing them (no error, trivial, critical, unknown), and then the calling code has a set of actions it can take; the mapping from (module, classification) -> (action) is all configuration based. The actions are things like shut down, go to safe state, log, or reinitialize the module. The caller is also responsible for distributing error codes to any other module that cares, so for example control code can be informed if Scan Engine had an error.
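
    To make that mapping concrete, here's a stripped-down sketch of the idea in Python pseudo-code (not our actual framework; the module names, classifications, and actions are invented for illustration):

      # Config-driven error handling: each module classifies its own codes,
      # and the caller maps (module, classification) -> action.
      CLASSIFY = {
          "FileLogger": lambda code: "trivial" if code == 7 else "critical",
          "ScanEngine": lambda code: "critical",
      }

      # In a real system this table would come from a configuration file.
      ACTION_MAP = {
          ("FileLogger", "trivial"):  "log",
          ("FileLogger", "critical"): "reinitialize",
          ("ScanEngine", "critical"): "safe_state",
      }

      def handle(module, code):
          if code == 0:
              return "none"
          classification = CLASSIFY.get(module, lambda c: "unknown")(code)
          return ACTION_MAP.get((module, classification), "shutdown")

      print(handle("FileLogger", 7))      # -> log
      print(handle("ScanEngine", -2132))  # -> safe_state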

  5. Oh well :(

     

    Edit:

    I was just looking back through it and I remembered all the issues I had getting it to work at all. Things like GUIDs and paths just didn't get set as I would expect. You can always dig through some of the other code in the same directory, but it's pretty hard to understand what the different functions are doing.

  6. I haven't done much with VI scripting, so I might have thrown myself into the deep end here, but this is it: is it possible to create a new web service in my existing project and then do all the configuration via scripting too? Or can I create a web service from a template so that I don't have to do all the configuration in scripting?

     

    I tried googling but all results were for scripting *in* web services.

    I put together something which imports a web service into your project from a template. It's really hacky and may not work for all web services (in fact it no longer works for the web service I originally wrote it for, although I plan to fix that at some point).

    The code is on this download page:

    https://decibel.ni.com/content/docs/DOC-38927

    Specifically ni_cvt_web_addon-1.6...

    I'd just unzip it and open the VI ni_cvt_web_addon-1.6.0.1\File Group 0\project\AddCVTWebServiceToProject.vi

     

    Also, be sure not to judge me based on that code. :)

  7. (A) Don't know why it's an unusual size.

    (B) If you delete the entire genealogy section in the xml the size goes back to normal.

    (C) I've never had this cause an issue, but I would imagine it's not officially supported. The genealogy is probably unrelated to those issues you posted, but it will cause issues if you try to unflatten old data. I don't have a need for this feature, so I delete the data pretty regularly. It makes RT deploys a bit less painful too, or at least it does in my imagination.

  8. I'm a witch doctor and my magic is strong. Seriously though - I have read some things from functional programming decrying the existence of "state" in programs. But my view of the matter is that the state of the physical system you are controlling needs to be reflected in the computer to be able to do anything useful. Almost always the state of the system is dependent on its prior history, and what you can do next is dependent on that.

     

    I don't see what they are talking about, when applied to programs that control a machine in the real world. But I am starting to read about Functional Programming; maybe it'll reveal the answer as to how to avoid the problem!

     

    The way I interpret some of that stuff is that state isn't evil; it's the combination of state and action that is. For example, consider the difference in LabVIEW between a subVI which includes an uninitialized shift register vs. one which simply takes data in (from wherever), processes it, and returns a result (to wherever). The second is a whole lot easier to understand, to prove correct, and to test. There is still state, it's just been moved up a level or two or.... Also, I personally found this paper interesting and very helpful in understanding those functional programming crazies.
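
    A text-language analogue of what I mean, in Python just for illustration (the first function is the equivalent of a subVI with an uninitialized shift register; the second simply transforms inputs to outputs):

      # Stateful: hides its history inside, like an uninitialized shift register.
      _history = []
      def running_average_stateful(x):
          _history.append(x)
          return sum(_history) / len(_history)

      # Stateless: the caller owns the state, so it's easy to test and reason about.
      def running_average_pure(x, history):
          new_history = history + [x]
          return sum(new_history) / len(new_history), new_history

      # The state still exists, it's just been moved up a level to the caller:
      hist = []
      avg, hist = running_average_pure(3.0, hist)
      avg, hist = running_average_pure(5.0, hist)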

    • Like 2
  9. Do you know how to achieve this in a built executable? Unfortunately threadconfig.vi is a GUI-type VI, so it's not really able to be embedded into another application. I was able to get the contents of this VI, remove the GUI aspect, and then insert it into my code. However, I am not really sure this will do what I want, as I do not know if it is possible to modify the thread configuration from a running application.

    I remember someone having issues with DAQ threads at some point and fixing it in an exe. I believe all you have to do is copy the appropriate INI keys from labview.ini into your exe's ini file and the runtime will allocate the right number of threads on launch.

    • Like 1
  10. Amen. But I would argue they are not exceptions. The maxim is "never use re-entrant memory storage", and all those you describe are not re-entrant (well, FPGA might be - I don't really know what you are referring to there). I wouldn't go as far as to say "never store state in a VI", otherwise you are back to global and local variables, and those you list were invented to address the issues with those memory storage methods.

    For FPGA most everything is reentrant, but really I'm referring to things like the delay, rising edge, etc. functions which store previous state in a feedback node. This would cause a problem if you made any such function non-reentrant, but that isn't the default.

    But...I actually do mean going to the extreme of never storing state in a VI except in these special cases -- with DVRs I don't think an uninitialized shift register is really appropriate anymore. For the analysis functions I don't think there was ever a need to store state inside--they're math functions!

     

    Sure. Obtain returns the actual queue reference rather than a copy.

    Or they could go even further and make any branch perform a ref count, then deallocate the refs as soon as the last calling VI with that branch of the wire goes idle...you know like when it frees all that memory in your 5M element array ;)

  11. LV does have its rules for when it will release the memory, so I wouldn't consider it a real memory leak (it's not like calling malloc in C and then ignoring the pointer). Whether LV actually releases the memory or hangs onto it like someone with abandonment issues is another story.

    It could totally free that memory if it really wanted to. It doesn't have a problem. :(

     

     

     

    The thread safety issue was introduced with pooling of re-entrant VIs.

    Only if you store state inside of your VIs, which I personally think is a bad idea with a few exceptions (FPGA, action engine operating on a singleton resource, etc). Pure math functions are certainly not on the list.

     

     

    Also, staab posted up this suggestion a while ago: http://forums.ni.com/t5/ideas/v2/ideapage/blog-id/labviewideas/article-id/19226

    Didn't seem to be popular for whatever reason; I think he probably just didn't explain how big the issue is. Either way, my hope is that enough people know this limitation by now that they'll stop making functions with internal state, but we'll see.

  12. Hello Experts,

    I would like to operate a LabVIEW Real-Time system (PXI) as an EtherCAT slave. The master system would be a Beckhoff controller. Is it possible to operate the LabVIEW Real-Time system as an EtherCAT slave?

    Thanks in advance

     

    No, the only slave is the 9144; PXI is usually used as a master. Out of curiosity, why would you want to use a PXI chassis as a slave? I normally see PXI as a high-speed/high-throughput acquisition system unless VeriStand is involved. Is there a PXI-only feature you need which isn't supported by the 9144? If you don't need deterministic single-point data communication, does the Beckhoff controller support other protocols?

  13. What would be nice would be if LabVIEW supported a multi-part "Call by Reference", where one could fill the inputs of a function, pass it to an async process for execution, then read the outputs when it comes back. That would be type-safe and very simple. For async HTTP Get, you'd just need a reference to HTTP Get. Might also simplify command messages as in the AF.

    So I mean they were trying to get there with call and collect, I just don't think it's very user-friendly. Your concept makes a lot of sense, and it would be handy for all the different types of call-by-ref. Instead of a single node, we have "latch values", "run", and "get values", and I suppose we'd have to have a function to "get instance from clone pool" (there's a rough sketch of the general shape after the list below). The other usability items I can think of:

    • Timeout on the wait/collect node
    • Easy way to abort reentrant clone pools if we need to shut down (probably solvable if we had a "get instance from clone pool" function).
    • Improved type propagation so if you update your connpane it doesn't break everything in your code (this seems to happen more with objects...to fix this I've resorted to just feeding a variant through everywhere :/)
    • Decorate VI server references with different settings, so you don't have to remember the correct call-by-ref setting (and so the compiler could check the type for you -- if you say "open this for reentrant run" and the VI isn't reentrant, it should break).
    • Some of the functional programming/lambda discussions from one of the other forums would be handy (I was thinking earlier "well hey, most of this I could probably do with an XNode" and then I realized that I'd need to make a second VI, and this would be solved if I could script the VI I needed inside of the node...)
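
    Here's roughly the shape I'm picturing for that multi-part call-by-reference, in Python terms: a future where you fill the inputs, hand it off, and collect the outputs later (the names are made up, and in LabVIEW the type safety would come from the connector pane):

      from concurrent.futures import ThreadPoolExecutor
      import urllib.request

      pool = ThreadPoolExecutor(max_workers=10)   # stand-in for the clone pool

      def http_get(url):
          with urllib.request.urlopen(url, timeout=10) as resp:
              return resp.read()

      # "fill the inputs, pass it to an async process, read the outputs when it comes back"
      future = pool.submit(http_get, "http://www.example.com/")   # latch values + run
      # ... the caller keeps doing other work here ...
      body = future.result(timeout=30)   # collect, with the timeout from the list above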

    How do those sound to you?

  14. Looks pretty straightforward. On the one hand I like the type safety the object gives you, but on the other hand, objects are a pain to use in large quantities. I hate the whole documentation, inheritance, etc. process. I know there are some fixes out there, but really I'm OK with giving up type safety in exchange for just passing in a VI server reference... of course nothing about your version prevents someone from doing that. May just have to use yours in the future :)

  15. Well, clone pools are a "high water mark" type thing; you will never have more than the maximum number that you had running at any one time. I've done testing with my Async Actions and Actors, and running a few thousand is no issue. Note that any shared-reentrant subVIs called by your top-level "function" won't be cleaned up when you release the pool, so I'm not sure if you can reliably recover from a "do a million things at once" bug. Whatever you do, you need to have a simple high-level API for the new user, with a minimum of different new wire types and "Create so-and-so" calls, even if you have a more complex low-level API semi-hidden in an "Advanced" palette.

    Meh, you're right. I hadn't thought about that issue...the DD calls will eventually all add up, and they'll be shared across all the call pools probably. Oh well :(

     

    On the advanced vs. easy API topic, what I was considering was creating an FGV which has the same behavior as what you and the AF have, so it would initialize a default call pool and provide a simple 'run task' function you can just grab and use. But having a backing API makes me happy :)

     

     

     

    BTW, I didn't comment on the Cancelation-token functionality. I don't have that, because once I have to send something a message, even if that message is just "cancel", I consider making the something an actor, which can be extended to new messages as needed. My "actions" are to be as simple as possible. They always poll their equivalent of the "results reporter", and abort if that object dies, since they then have no reason to continue (their job being to send results). This feature means you don't have to "cancel" them in order to shut down the application.

     

    So I suggest you think about either building the cancelation token up into something more message-like, or eliminating it for simplicity.

     

    Added Later>> Check out the "CancelableObserver" class in Messenger Library. This allows one to make a cancelable "forwarding address" out of any other standard address. You could do this for your "Results Reporter". Then your Actions can just poll the validity of their Results Reporter instead of a Cancelation token. Note that it is guaranteed that no message can be sent after you call "cancel" on the communication method, in contrast to calling cancel on a token, where the running Action may have just checked the token and be about to send the results. The latter behavior can lead to race conditions.

     

    I've been kind of on the fence about the cancellation thing since I made my UI example, as it's kind of hard to keep track of. It felt like it would be easier to ignore a result than to cancel *and* ignore the partial result. That having been said, my goal was not really to make shutdown faster, just to let the task know we don't care if it finishes... but then we get back to whether there is really a benefit. I tend to think that for my purposes I'd choose to avoid modifying the state of the system, so cancelling really just saves CPU time, which isn't an issue. Since any really long-running tasks (like wait on TCP or whatever) can't be effectively cancelled, it makes me think your suggestion of eliminating it for simplicity is the right one. Will think about it more.
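
    To spell out the race you mentioned with tokens, here's a tiny Python sketch (hypothetical names; the point is just the gap between checking the token and sending the result):

      import threading, queue

      cancel = threading.Event()      # the cancellation token
      results = queue.Queue()         # the results reporter

      def do_work():
          return 42                   # placeholder for the actual task

      def action():
          partial = do_work()
          if not cancel.is_set():     # check the token...
              # ...the caller may call cancel.set() right here...
              results.put(partial)    # ...and the result still gets sent.

      # Cancelling the communication method itself avoids this: once the
      # queue/observer is invalidated, a send after that point fails instead of racing.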

  16. Ok I made the changes you suggested and I think I like it better this way. :thumbup1:

     

    Also I realized I forgot to address one point on your #1

     

     

    Is there value in the Execution System stuff? I've never discovered much reason to mess with Execution Systems. It would be simpler (and possibly less overall overhead) to have a single clone pool for all async stuff, created on first use as in the Actor Framework or "Messenger Library".

     

    Even if I did change it to just support the standard execution system, I still prefer having a specific 'context' or whatever that everything runs on, rather than using what is basically an inaccessible FGV. If you want a semi-real reason, it's this: the async call pool can only grow, which makes it kind of scary to me to use in a long-running application unless you have the ability to shut down the entire clone pool (which I don't think you can do in the AF or yours). Having a separate reference to a specific clone pool means that as you launch or shut down parts of your application you could launch and shut down the paired clone pool. Not a huge deal, but it just makes me feel more comfortable using it.

  17. Some comments after having a look at the code:

    1) Is there value in the Execution System stuff? I've never discovered much reason to mess with Execution Systems. It would be simpler (and possibly less overall overhead) to have a single clone pool for all async stuff, created on first use as in the Actor Framework or "Messenger Library".

     

    2) Consider combining the "Action" and "Function" classes (and thus making Actions children of "Task"). This reduces the number of wire types and would make Tasks fully recursive (so a Batch can contain other Batches).

     

    3) Do you really need the Variant Parameter stuff in "Action", given that children of Action can just add whatever parameters they want in private data?

     

    — James

    1- ^^What he said; it's mostly just there if you know you're going to block an entire thread doing something. For example, with HTTP Get I believe it's calling a DLL, so you're blocking a thread during that process. Same thing with some of the other I/O types. It's not clearly documented, but those inputs can be completely ignored and it will automatically create a pool of size 10 on the standard exec system and always run it there.
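
    To illustrate why you might want a separate pool for blocking calls, here's a Python analogy (the executors stand in for execution systems; the size-10 number just mirrors the default mentioned above):

      from concurrent.futures import ThreadPoolExecutor
      import time

      default_pool = ThreadPoolExecutor(max_workers=10)   # like the default pool of 10
      io_pool = ThreadPoolExecutor(max_workers=4)          # separate pool for blocking I/O

      def blocking_call():
          time.sleep(5)   # stands in for an HTTP Get / DLL call that parks a thread
          return "done"

      # Long blocking work goes to its own pool so it can't starve the pool
      # that's servicing the short, frequent tasks.
      f = io_pool.submit(blocking_call)
      default_pool.submit(lambda: "quick task")
      print(f.result())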

     

    2-I thought about that one a lot and went back and forth. On the one hand I liked the idea of batches of batches, and of course you can still do that with your own tasks. But I figured that it could be handy to focus things down somewhat, which is why I added the actions. That way people who are afraid of objects can use call-by-ref actions, people who are OK with objects can make new actions, and people who want to use all the features can make tasks.

    --> At the same time, given that the inputs are basically the same, I kind of see what you mean. It probably makes sense to merge them.

     

    3-I also went back and forth on this. First, you're absolutely right. But... it kind of simplifies things to always have a 'parameter' input that you can call on any type of action. I think the best solution would probably be to remove it from the parent action (which, combined with #2 above, would basically mean I delete Action entirely) but leave it on the call-by-ref class, since that's supposed to be the easiest to use and there has to be a generic parameter input on that one anyway.

  18. Have a look at the "Async Action.lvlib:Action.lvclass" in my "Messenger Library" on the Tools Network, as well as child classes such as "Time Delayed Message", "Async File Dialog", or "Address Watchdog". This is my OO implementation of what you're describing, where child classes override "Execute.vi" and optionally "Setup.vi" (run before the Async Call).

     

    However, though I have 10 asynchronous actions built into "Messenger Library", and can create new ones easily, I don't generally make application-specific ones. Instead, I use either an on-diagram "helper" loop or an independent subactor. The pattern could be described as a "Manager" (an event- and message-handling loop with state data) and a number of "Workers" (specialized "helper" loops or subactors).

     

    Asynchronous actions are still very worthwhile pursuing, though, as things like a delayed enqueue or an asynchronous dialog box are very valuable.

    I thought about yours and the AF before moving forward on this and decided it still made sense as more of a loop co-processor than as a dedicated logical actor. That is, it's more of an off-diagram "helper" loop, in your terminology.

     

    I may need to go back and look at the code, though, as I was under the impression that everything in there was an actor and I wanted to avoid that because you still have the problem of clogging the QMH. If every instance is its own async call then I think I must have just missed the right spot in the code or misunderstood.

     

    Looking at it again, now that I'm looking in the right place, it looks like yours does most of the same stuff mine does. I had been under the impression that your library was more focused on communicating between actors, but now I see it's way more general-purpose.

     

     

     

    BTW> The "Producer-Consumer" QMH templates produced by NI are to be avoided, for all sorts of reasons. I just did a talk on this for the CSLUG user group on Monday; there is a recording of it in their Google+ account (I mentioned the issue of state-sharing between the loops, but my primary criticism was the horrifying potential for race conditions).

    I can't access the Google+ or YouTube page. Could you upload the slides here or just describe the race condition problem? I think we're talking about similar sets of issues, but I don't see them as being all that horrible, so I'm curious why you're so against the idea. I tend to think it just ends up being a lot more work than it needs to be to make good code.

  19. Hey all,

     

    I've spent a little time here and there working on this and I figured now was the right time to ask for feedback.

     

    Typically when making a new UI I'll use something like AMC and have a producer (the event structure) and a consumer (a QMH). This is the standard template in AMC (image here) and it's also used in, for example, the sample projects. This is OK and has done well for a long time, but there are weak points. (a) The QMH can get clogged. After all, you're sending all of your work down there, and if something is slow, the consumer will run slow. (b) This pattern seems to always end up with a weird subset of state and functionality shared between the two loops. For example, maybe your UI is set up to disable some inputs in state X, except that it's your QMH, not your UI loop, which determines that you're in state X. So maybe you send a message to the QMH, it takes some time, and so your user is able to press buttons they shouldn't be able to. You fix this by putting the disable code in your UI loop, but then you need both loops to know that you're in state X. Another example: if you're using some features like right-click menus, you need to share state between the UI and the QMH so you can generate the appropriate right-click menu. There are many examples like this. None of them is particularly heartbreaking, but my hope is that this is a better way.

     

    At one point a few months ago I was in a conversation with R&D about events and we got onto some of these issues. Aristos and some others pointed out this was basically making two UI threads and suggested pulling everything back into a single loop (just the event handler) but then using async call by ref to take care of all the work that takes more than 200 ms (or whatever you personally consider the cutoff to be). This solves both problems because (a) async call by ref has a pool of VI instances it can use, so the code never blocks and (b) you only have one loop for the UI and associated state information, so there are fewer chances for weird situations.
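
    In text-language terms the suggestion looks something like this (a Python sketch with made-up names; the executor plays the role of the async call-by-ref clone pool, and the queue stands in for the event structure):

      from concurrent.futures import ThreadPoolExecutor
      import queue, time

      workers = ThreadPoolExecutor(max_workers=8)   # the "clone pool"
      ui_events = queue.Queue()                     # stand-in for the event structure

      def slow_job(payload):
          time.sleep(1.0)                           # anything over your ~200 ms cutoff
          return payload.upper()

      pending = []
      ui_events.put(("button_pressed", "fetch websites"))
      ui_events.put(("quit", None))

      while True:                                   # the single UI/event loop
          event, data = ui_events.get()
          if event == "quit":
              break
          # The event handler never blocks: long work is farmed out immediately,
          # and all UI state (disabled controls, menus, mode) lives in this one loop.
          pending.append(workers.submit(slow_job, data))

      for f in pending:
          print(f.result())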

     

    Since the code for doing all that manually is kind of tedious, I put together this prototype library to hopefully make the above design really really easy.

     

    Feedback I am looking for:

    -is this a worthwhile pursuit at all? (i.e. do you agree with the first couple of paragraphs above?)

    -has this been done before? (I searched and searched but I may have missed something)

    -any thoughts on this first draft at implementation?

     


    The code is here and examples are in the project or here. The main example is "example UI get websites" but this example also requires the lovely variant repository. Not for any particular reason, I just like it. There are more details about the code in the readme.

     


     

     

    • Like 1
  20. I'm not sure if it fits your needs but have you looked at this?

    http://www.ptpartners.co.uk/ptp-sequencer/

    It looks pretty cool (I haven't used it, but there's a video series on that page). It seems to cover all the basics you're developing here, and it's got a 'run next step' function you could call from anywhere, including an actor. Seems like it might fit your needs.

    Also on the tools network: http://sine.ni.com/nips/cds/view/p/lang/en/nid/212277
