Posts posted by Jon Kokott

  1. I've done it, and it worked. I was using identical binaries on different machines, all of which ran LabVIEW 2010 SP1. In the future I would probably load the class from a single disk location for both instances, using either the load-class-from-path primitive or copying the actual file (probably in a packed library or .llb) and running it that way.

  2. My experience with network-published shared variables is that they are ridiculously slow. We've always used a TCP connection to share data instead, and found performance several orders of magnitude better.
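    For the flavor of the TCP approach, here is a minimal Python sketch (standing in for LabVIEW's TCP Open/Read/Write primitives, which can't be shown in text) of point-to-point data sharing with simple length-prefixed framing. The message format and names are invented for illustration, not taken from any real system.

```python
import socket
import struct
import threading

def send_msg(sock, payload: bytes):
    # Length-prefixed framing: 4-byte big-endian size, then the payload.
    sock.sendall(struct.pack(">I", len(payload)) + payload)

def _recv_exact(sock, n: int) -> bytes:
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("peer closed")
        buf += chunk
    return buf

def recv_msg(sock) -> bytes:
    (size,) = struct.unpack(">I", _recv_exact(sock, 4))
    return _recv_exact(sock, size)

# Demo: one producer, one consumer over localhost.
server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]

def producer():
    conn, _ = server.accept()
    send_msg(conn, b"reading:42.0")   # a made-up measurement string
    conn.close()

t = threading.Thread(target=producer)
t.start()
client = socket.create_connection(("127.0.0.1", port))
print(recv_msg(client).decode())  # prints "reading:42.0"
client.close()
t.join()
server.close()
```

    A LabVIEW version would do the same thing with a 4-byte size header ahead of each flattened payload; the framing convention, not the language, is what makes the raw TCP approach reliable.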

  3. Take a look at the Actor Framework. It is very well suited to what you want to do, and is probably an ideal solution.

    ~Jon

    One more thing: moving/customizing subpanels is very difficult. You CAN do what you want to do, but it'll be a pretty massive effort. The best example of how to do this was done by Jack Dunaway during a coding challenge from a few years back. It was called the "Cogniscent UI" or something similar. It allows the run-time user to resize and reposition the subpanels at runtime in a very elegant way. At any rate, you should check it out, because it is just awesome LabVIEW.

  4. I don't separate compiled code. Additionally, I've forced a recompile many, many times.

    From what I understand, when you say "inline subVI," LabVIEW will add the inlined code to the top-level code where it is called, at compile time. Now, if you have changed the contents of the class later, and have kept the option of keeping compiled code separate from VIs, it shouldn't automatically add the inlined code to the top level of your VI.

    Did you have the option of separating the compiled code from the main VI enabled when you encountered this problem?

    -FraggerFox

    That doesn't sound right...

  5. I've given up on trying to do this. It is an incredibly difficult task to manage the mutation history alone and have things work. I've resolved to never support old objects (even though it sometimes works) and to always store as binary. If someone needs to edit the file, I create an editor program that is released alongside the actual test software, and use Windows file-type associations to dispatch it on a special file extension.

    Probably not what you want to hear, but I've had way too many headaches from people manipulating .ini, XML, or any other type of human-readable file.

    ~Jon

  6. Because it's terrible. Check to make sure you have the "unreleased" fix for Endpoint; they may have already pushed out an update, since the version from a few weeks ago bled memory badly (it was literally 10 MB/min for me). I think it straight up crashes at around 250 MB of memory.

    Anyway, you'll have to talk to your IT department about your individual settings, but my advice is to disable the firewall on all the ports you are using on the test setups. It is TERRIBLE with UDP (as in it will kill your connections, make you time out, and restart). We couldn't do TFTP updates on any machine running Endpoint because it would close our connections after about a minute of transferring.

    I would just start fighting the war that is computer security with your IT department, and hopefully get it taken off the machine entirely. It causes nothing but problems for any kind of network activity other than web surfing.

    ~Jon

  7. Need type definitions for DWORD, LPTSTR, LPCTSTR.

    Your return type is definitely not a double.

    Convention would say that your function prototype should look like:

    uint32_t GetLongPathNameA(const char* lpszShortPath, char* lpszLongPath, uint32_t cchBuffer)

    (Note the second parameter is the output buffer, so it must not be const.)

    Check this out:

    http://www.codeproject.com/Tips/76252/What-are-TCHAR-WCHAR-LPSTR-LPWSTR-LPCTSTR-etc

    I think lpszShortPath may be expecting a Unicode string (i.e., the W variant of the function), so you may have to convert the string you pass in from ANSI to Unicode, or call GetLongPathNameA explicitly to get the ANSI version.

    ~Jon

  8. Why would I want multiple instances of a process doing exactly the same thing to the same set of data? That's just wasting CPU cycles.

    The idea is that you don't do the same processing in each slave. In a navigation system you may have several unrelated processes that act on the same command, "get directions home." One loop might search for the most direct route; another loop might try to find a route that results in the highest average speed. The results are then collected, and based on the desired parameters (most direct route, least time in the car, avoiding highways, etc.) the correct route is used.

    Would a real implementation use this kind of transport layer? That I'm not so sure of. The real implementation is more complex than that, so who knows whether what you'd end up with would look anything like the original description of how the system is "supposed" to work.

    ~Jon
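    To make the idea concrete, here is a small Python sketch of several "slaves" receiving the same command but doing unrelated processing, with the results collected and scored afterward. The planner functions, their names, and their numbers are all invented for illustration.

```python
import queue
import threading

# Hypothetical route planners: each does different work on the same command.
def most_direct(command):
    return {"route": "A", "distance_km": 10, "minutes": 30}

def fastest(command):
    return {"route": "B", "distance_km": 14, "minutes": 22}

results = queue.Queue()
command = "get directions home"

def slave(planner):
    # Every slave gets the same command; only the processing differs.
    results.put(planner(command))

workers = [threading.Thread(target=slave, args=(p,))
           for p in (most_direct, fastest)]
for w in workers:
    w.start()
for w in workers:
    w.join()

# Collect and pick according to the desired parameter (least time here).
candidates = [results.get() for _ in range(len(workers))]
best_by_time = min(candidates, key=lambda r: r["minutes"])
print(best_by_time["route"])  # prints "B"
```

    The selection criterion at the end is the only thing that knows the slaves apart; the broadcast side treats them identically.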

  9. The sample code I posted doesn't preview the queue; each listener has a dedicated queue, and when they receive a message they dequeue and process it like every other message. The two implementations I posted are effectively the same, aren't they?

    You can replicate notifier behavior using a dequeue/SEQ if you only need one "slave" loop. The difference is that that pattern doesn't scale to multiple "slave" loops: you scale it by creating multiple queue references, whereas with a notifier you only need one reference. There is one other subtle difference between notifiers and an SEQ implementation: with a notifier you can always retrieve the last notification using "Get Notifier Status," which immediately returns the last notification value. If you were using an SEQ, you would have to store the last notification somewhere else, since once the dequeue is performed it's pretty much gone.

    The idea with notifier "slave" loops is that they all respond to the same command, but the execution time of each loop is independent, and only the latest command is important.

    ~Jon
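    The consuming-read versus non-consuming-read distinction is easy to show in a few lines. Below is a hedged Python sketch: `queue.Queue(maxsize=1)` stands in for a single-element queue (SEQ), and a tiny latest-value class stands in for a notifier's "Get Notifier Status" behavior. The class is an illustration of the semantics, not LabVIEW's implementation.

```python
import queue
import threading

class Notifier:
    """Latest-value store: reading does NOT consume the notification,
    mimicking LabVIEW's Get Notifier Status."""
    def __init__(self):
        self._lock = threading.Lock()
        self._last = None

    def send(self, value):
        with self._lock:
            self._last = value   # overwrite: only the latest value is kept

    def status(self):
        with self._lock:
            return self._last

seq = queue.Queue(maxsize=1)   # single-element queue (SEQ) analogue
seq.put("run")
n = Notifier()
n.send("run")

assert seq.get() == "run"      # dequeue consumes: the value is now gone
assert seq.empty()             # a second reader would block forever
assert n.status() == "run"     # still readable...
assert n.status() == "run"     # ...as many times as you like
```

    This is why an SEQ forces you to cache the last command somewhere else if a late-arriving reader needs it, while a notifier carries that cache for free.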

  10. Another reason is that one just might want to trap "error" -- even if it's because of releasing the queue -- and not exit all of the parallel loops, but perform some other kind of "recovery" operation.

    Trap the error on a released notifier? I don't follow. What other error conditions would be presented by a notifier? Unless you are shifting the link to the outside world (i.e., the notifier/queue) and somehow using another reference to get a new "com link," I'm not sure that would be practical. Since all of your objects are by-value, you couldn't even do this.

    In any event, you have the history reversed -- and it IS about history.

    I'm not following you on this.

    ~Jon

  11. I have used that exact template to terminate parallel loops. In the past -- before Event Structures (and some other tricks...) that was the only way to guarantee shut down of parallel loops that received inputs from queues as well. It's a solution that goes WAY BACK to LV5 or so...

    Not really the modern NI template for master/slave, but completely reasonable. I don't go back to LV5, so I'll presume that in those days releasing the queue did not interrupt the dequeue/preview with an error.

    I've done this, and I dislike it. It effectively takes an event-based processing node and turns it into a polling mechanism. Why have two asynchronous operators in one loop? One of them is necessarily polled to keep the loop going (either by the master notifying "run" or by a timeout indicating "don't stop"). This is debatable, but I've taken to releasing the queue to destroy loops. I feel it offers a more efficient operation. I usually toss a comment in the loop noting that the error is the exit condition.
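    The release-as-exit idea translates outside LabVIEW too. Here is a speculative Python sketch of a queue whose "release" wakes any blocked consumer with an error; the consumer blocks with no timeout polling, drains remaining work, and treats the error as its exit condition. The class and exception names are made up for this illustration.

```python
import threading

class QueueReleased(Exception):
    """Raised in a waiting consumer when the queue reference is destroyed."""

class ReleasableQueue:
    # Toy analogue of a LabVIEW queue whose Release Queue aborts waiters.
    def __init__(self):
        self._items = []
        self._cond = threading.Condition()
        self._released = False

    def enqueue(self, item):
        with self._cond:
            if self._released:
                raise QueueReleased
            self._items.append(item)
            self._cond.notify()

    def dequeue(self):
        with self._cond:
            # Drain pending items first; only error out when empty + released.
            while not self._items:
                if self._released:
                    raise QueueReleased
                self._cond.wait()     # blocks; no timeout, no polling
            return self._items.pop(0)

    def release(self):
        with self._cond:
            self._released = True
            self._cond.notify_all()   # wake every blocked consumer

q = ReleasableQueue()
log = []

def consumer():
    # The "error" IS the exit condition -- no stop flag, no timeout case.
    try:
        while True:
            log.append(q.dequeue())
    except QueueReleased:
        log.append("shutdown")

t = threading.Thread(target=consumer)
t.start()
q.enqueue("job1")
q.enqueue("job2")
q.release()
t.join()
```

    Compared with a timeout-based loop, the consumer here wakes exactly when there is work or when it must die, which is the efficiency argument for destroying the queue.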

  12. I took a quick look, but to be honest I know very little about creating or modifying xnodes. I'd love to explore using them for collections... so many things to do... *sigh*

    I'm not sure an XNode will be suitable for a collection. You have to know everything about the type in the wire, and you'll be stuck with that type after you create the collection. I think you can do what would be useful just using standard class programming; you'll just get a ton of coercion dots.

  13. Hmm... I'm not quite sure how to interpret this statement. Are you saying NI's template defines what the master/slave pattern is, or are you saying your comments have been directed at that particular implementation of the pattern? The context of your posts suggests the former, so I'll assume that is what you mean, but to be honest it's not clear to me. Let me know if I've misunderstood you.

    I'm just trying to make a distinction between the NI template (which, as I've said before, is terrible) and the code you are actually writing (which is actually a good example for people).

    I don't wanna get hung up on the vocab; it is just irritating to me that there is this template out there in the wild called "master/slave" which no one uses and which is continually talked about.

    At any rate, here is some XNode magic I've been working on. I have no idea when I'll finish it, but it is in working order as of today. It fits fairly well into this discussion anyway.

    It needs to be optimized for recompiles and more thoroughly tested.

    Disclaimer:

    Don't use this for anything; it's terrible.

    It will crash LabVIEW (not really, but maybe; it's not been tested enough to know for sure).

    If you still want to look at this, look at the .lvproj in the example folder.

    It will get you started. The "catch.xnode" is worth your time, I think.

    Observer Pattern.zip

  14. Definitions:

    Master/slave - NI's template implementation.

    Actually you can. Set the queue size to one and use the lossy enqueue function.

    Incorrect. This only works if you have ONE dequeue point. If you were to try this with multiple "slave" loops, it would immediately break down. A queue (even of size one) will not perform identically to a notifier.

    So I'll ask my original question again:

    Does anyone use the master/slave template pattern? You MUST use a notifier as the transport. You MUST have at least two slave loops. You MUST rely on lossy transmission (i.e., it's a better solution than a lossless transport). If that's not the case, I won't be satisfied, as it is just a different way to solve a problem that could be done using a different, better pattern.

    Basically I think the template is terrible, and its actual use cases are limited. Using a notifier for commanding one "slave" is a drop-in replacement for an SEQ solution (what Daklu was talking about). For whatever reason NI has labeled it a "fundamental" concept, but it is really just academic.

    ~Jon

  15. I don't think lossy vs. lossless is one of the defining characteristics of slaves or consumers. I suspect the template uses notifiers simply because they allow easy 1-to-many communication. I use queues instead because most of the time I want lossless master/slave communication.

    We are in disagreement on this. The implications of a message never being received are far-reaching. It would greatly affect the construction of the underlying code.

    Notifiers are one-to-many; queues are not. Notifier slaves will scale; QSMs (or any dequeue operator) will not scale to other threads. The notifier will "wake up" all waiters exactly once, but only with the latest data. You cannot use a queue to do this, and you cannot do it with an event structure (which is, again, a lossless transmission of messages). The template is for the special cases where only the most recent data is important and the older data can be thrown away.

    That's because we generally use the event structure to achieve this.

    Not the same as the template; event structures are lossless transmission types.

    Honestly, I'm starting to think that the NI "master/slave" template isn't even a master/slave pattern. Somebody at NI read a CS article and created this template. Subsequently it's been taught in countless LabVIEW seminars, then seldom used, or worse, used with the assumption that the message transmission is lossless, which it isn't (probably only "guaranteed" by fictional Windows timing).
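    The "wake all waiters once, with only the latest data" behavior can be sketched in a few lines of Python. This is an illustration of the semantics being described, not LabVIEW's actual notifier implementation: the `Notifier` class, its method names, and the "cmd1"/"cmd2" commands are all invented. Note how the first command is simply overwritten before any slave sees it.

```python
import threading

class Notifier:
    """One-to-many and lossy: every waiter wakes with only the latest value."""
    def __init__(self):
        self._cond = threading.Condition()
        self._value = None
        self._seq = 0

    def send(self, value):
        with self._cond:
            self._value = value       # overwrite: older data is thrown away
            self._seq += 1
            self._cond.notify_all()   # wake ALL waiters through one reference

    def wait_new(self, last_seen=0):
        # Block until a notification newer than last_seen exists,
        # then return the latest one; intermediate sends are lost.
        with self._cond:
            while self._seq <= last_seen:
                self._cond.wait()
            return self._seq, self._value

n = Notifier()
received = {}

def slave(name):
    _, value = n.wait_new(0)
    received[name] = value

# Both commands are sent before the slaves start waiting, so "cmd1"
# is overwritten and lost: lossy, latest-only transmission.
n.send("cmd1")
n.send("cmd2")

threads = [threading.Thread(target=slave, args=(s,))
           for s in ("slaveA", "slaveB")]
for t in threads:
    t.start()
for t in threads:
    t.join()

assert received == {"slaveA": "cmd2", "slaveB": "cmd2"}
```

    Both slaves share a single notifier reference and both receive the same, newest command; reproducing that with queues would require one reference per slave and would deliver "cmd1" as well, which is exactly the lossless behavior the template does not give you.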

