
Gary Rubin

Members
  • Posts

    633
  • Joined

  • Last visited

  • Days Won

    4

Posts posted by Gary Rubin

  1. I think the Top Reputation list on the front page is not really necessary. What would be better is to be able to search through Reputable posts... This way, it could serve as an annotation for memorable posts or extremely useful content, just the way tags worked on LAVA1.0. (I kind of miss the tagging features too, not so much the tag cluster on the front page as the tagging itself).

    I like this idea.

  2. Have a read of this....

    http://books.google....itching&f=false

    In particular, section 9.2.3 (replace the word "Process" with "Execution System") and ask me again.

    Thanks,

    Looks like I ought to get a copy of that book.

    I still don't think that addresses the topic of reentrant VIs, at least not that I saw. I think the crux of my question was whether making a VI reentrant somehow overrides the execution system setting.

    We're already told to use reentrant VIs when reusing the same VI in two parallel processes. If the answer to the previous question is "no", then do we need to avoid calling reentrant VIs from different threads? Or am I overthinking?

  3. It's not so much waiting for it to become available; it's more to do with the CPU having to save state information when switching between the global in one context or another. Have a Google for "context switch"; it's a big subject. But suffice to say, the less, the better.

    So, that leads me to a question: Do reentrant VIs that are not explicitly set to use the caller's thread involve context switches? Put another way, if a reentrant VI is set to use the caller's thread, does it avoid a context switch?

    Gary

  4. What execution system is the LV2 global assigned to (same as caller???). There's probably a lot of context switching going on, since you cannot encapsulate the global in a single execution system. You basically have a one-to-many architecture, and I would partition it slightly differently to take advantage of the execution systems.

    • VI A dynamically launches VIs B and C.
    • VI B runs in a continuous loop and manages a DMA from a 3rd-party DSP board. It puts ALL of the data into Named Queue 1 and Named Queue 2.
    • VI C runs in a continuous loop, flushes Named Queue 1, extracts the bits it needs, then transmits the contents via TCP/IP on physical port 2.
    • VI A reads the data from Named Queue 2, extracts the bits it needs, processes it, then flushes Named Queue 2 and transmits the contents via TCP/IP on physical port 1.

    VI B would run in (say) "Data Acquisition" at High Priority.

    VI C would run in (say) "Other 1" at High Priority.

    VI A would run in (say) "Other 2" at Normal Priority.

    This way you can give your VIs hierarchical priorities to determine their responsiveness under loading. You could also have VI B extract the bits and only put what is required for A and C on the queues if it has a light loading (if you want). The way described just keeps VI B simple and very fast.
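    (Aside: the one-acquirer, two-queue partition described above can be sketched in ordinary threaded code. This is a rough Python stand-in, not LabVIEW; the names, sample counts, and timeouts are invented for illustration.)

```python
import queue
import threading

q1, q2 = queue.Queue(), queue.Queue()
done = threading.Event()

def vi_b():
    """Acquisition loop stand-in: put ALL data on both queues, stay simple and fast."""
    for sample in range(5):      # stand-in for the DMA reads
        q1.put(sample)
        q2.put(sample)
    done.set()

def drain(q, out):
    """Consumer loop stand-in: flush its queue until the producer is finished."""
    while not (done.is_set() and q.empty()):
        try:
            out.append(q.get(timeout=0.1))
        except queue.Empty:
            pass

c_data, a_data = [], []          # what VI C and VI A each receive
threads = [threading.Thread(target=vi_b),
           threading.Thread(target=drain, args=(q1, c_data)),
           threading.Thread(target=drain, args=(q2, a_data))]
for t in threads:
    t.start()
for t in threads:
    t.join()

# Both consumers see the full data set, independently of each other.
assert sorted(c_data) == sorted(a_data) == [0, 1, 2, 3, 4]
```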

    Thanks Shaun,

    The LV2 is set at subroutine priority and therefore runs in the same execution system as the caller. I see how that would lead to context switching, but what does that actually mean? More overhead associated with calls to the LV2, so each caller spends a little more time waiting for it to become available?

    I can certainly try replacing the LV2 with a queue - that's a pretty quick edit. I feel like we tried that a while back, but that was before VI C existed.

    The LV2 is set up as a lossy buffer, with an indicator telling us when it starts to drop data. I'd have to use a lossy queue, with another single-element queue to pass the overflow status.
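    To make the idea concrete outside LabVIEW, the lossy-buffer-plus-overflow-indicator behaviour amounts to something like this Python sketch (the class, capacity, and names are invented for illustration):

```python
from collections import deque
from threading import Lock

class LossyBuffer:
    """Fixed-size buffer that drops the oldest element when full and
    remembers whether it has ever dropped data (a sketch of the
    lossy-LV2-global behaviour described above)."""

    def __init__(self, capacity):
        self._buf = deque(maxlen=capacity)
        self._lock = Lock()
        self.overflowed = False   # the "started dropping data" indicator

    def put(self, item):
        with self._lock:
            if len(self._buf) == self._buf.maxlen:
                self.overflowed = True   # the oldest element is about to be lost
            self._buf.append(item)       # deque with maxlen discards from the left

    def get(self):
        with self._lock:
            return self._buf.popleft() if self._buf else None

buf = LossyBuffer(capacity=3)
for sample in range(5):   # 5 puts into a 3-element buffer: samples 0 and 1 are dropped
    buf.put(sample)
assert buf.overflowed is True
assert buf.get() == 2     # oldest surviving sample
```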

  5. Quick check to ensure the LV2 is not a bottleneck: get rid of the call to the LV2 and replace it with data of the same size and type. If the jitter goes down, it may be that the LV2 is the bottleneck. NOTE: If you set your LV2 to subroutine priority, you will pick up a new call option, "Skip if Busy", that is very helpful in preventing one thread from hanging while waiting on a LV2.

    Ben

    Thanks Ben, I was not aware of the "skip if busy". That's pretty cool. I don't think it would do much in this case though.

    The producer calls an LV2 in a loop, using a "put" state. The consumer calls the LV2 in a loop, using a "get" state. Normally, the consumer waits on the put state to finish running, and the producer waits on the get state to finish. If I were to use the Skip If Busy in the consumer, it wouldn't wait, but it also wouldn't get any data. Because it hasn't gotten any data, there's nothing for the consumer to do, so the loop iterates again and again, until the LV2 is no longer busy, right?

    I guess I could see how this could have benefit when using the LV2 for asynchronously passing data into a loop that's busy doing something else that doesn't necessarily require that data every time, but for my case, I don't see it helping.
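    In ordinary-code terms, "Skip if Busy" on the consumer side amounts to a non-blocking get: try once, and if nothing is there, return immediately and let the loop spin. A minimal Python sketch of that behaviour (function and variable names are invented; this is not LabVIEW semantics verbatim):

```python
import queue

q = queue.Queue(maxsize=1)

def consumer_iteration(q):
    """One pass of a 'skip if busy' consumer: try to take data without
    blocking; if none is available, skip this pass entirely."""
    try:
        return q.get_nowait()   # non-blocking, analogous to Skip if Busy
    except queue.Empty:
        return None             # skipped: no data, so nothing to do this iteration

assert consumer_iteration(q) is None   # empty queue: the call skips, as described
q.put(42)
assert consumer_iteration(q) == 42     # data available: the call succeeds
```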

    I will try what you suggested, but I'm pretty sure that the input data passing isn't the problem; I still get good behavior if I pass data in but don't process it.

    Gary

  6. Gurus,

    I'm hoping someone here has some sage advice. Here's the situation:

    Running on a Core2 Duo (i.e., a dual-core processor)

    • VI A dynamically launches VIs B and C.
    • VI B runs in a continuous loop and manages a DMA from a 3rd-party DSP board. It puts a subset of the data into Named Queue 1 and all of the data into a LV2 global.
    • VI A reads the data from the LV2 global, processes it, and puts the results in Named Queue 2 in Loop1. Loop2 flushes Named Queue 2 and transmits the contents via TCP/IP on physical port 1.
    • VI C runs in a continuous loop, flushes Named Queue 1 and transmits the contents via TCP/IP on physical port 2.

    I am monitoring the intervals between VI C's outputs. Ideally, VI C should be putting out data every 15ms, regardless of what VI A is doing. In order to try to ensure this, I have done the following:
    • All shared VIs are reentrant
    • VIs A, B, and C are all assigned to different Execution Systems
    • VIs B and C are run without opening their front panels.

    I've observed that when I run things with a typical processing load, the time delta between VI C's outputs gets very noisy, with spikes up in the 100s of ms. I can confirm that the issue is not related to the input traffic by disabling the processing stage of VI A. When I do this, I do see VI C's output every 15ms +/- a couple ms. I see the same thing with the processing enabled if I give it a very small processing load (leading to a very low output load).

    To me, that points to two possible causes: 1) VI A's processing is monopolizing the system, preventing VI C's loop from running as often as I would like it to. 2) The fact that VI C and VI A are both using TCP/IP writes, although to different ports, is causing some sort of blocking.

    Slowing down Loop1 in VI A considerably is not an option.

    Any thoughts? Theoretically, what's going on in VI A should not affect the timing of dataflow between VIs B and C, but that's what I'm seeing. Does anyone have any tricks they care to share?

    Thanks,

    Gary

  7. Bingo.

    Think about it --- if you do "preview", you have to have a copy on your local wire and a copy left behind in the queue. Why? Because as soon as your VI continues past the Preview node, some other thread somewhere could Dequeue the data in the queue. If you haven't made your own copy, you're now sharing data between VIs... and much badness ensues (one stomps on the other, one disposes the data while the other is still using it... ug).

    For the record, Notifiers always do the equivalent of a preview because there may be any number of Wait nodes listening for the same notification.

    I was surprised at how different the runtimes were, given that the queue contained a cluster of 4 scalars. I didn't think that copy would be a very big deal. I guess all things are relative when you're talking about 1e-4 vs. 1e-5 ms.
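    The copy-on-preview argument above can be demonstrated in any language with shared mutable data. A small Python sketch (the `preview` helper is invented for illustration): the preview must hand back a copy, so that mutating the previewed value, or a later dequeue in another thread, cannot corrupt the element still sitting in the queue.

```python
import copy
from collections import deque

q = deque()
q.append({"a": 1, "b": 2})   # stand-in for the cluster of scalars in the queue

def preview(q):
    """Return a copy of the head element without dequeuing it.
    Without the copy, the caller and the queue would share one mutable
    value, and a concurrent dequeue/modify elsewhere could race with it."""
    return copy.deepcopy(q[0])

head = preview(q)
head["a"] = 99               # mutate our local copy...
assert q[0]["a"] == 1        # ...the element still in the queue is untouched
```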

  8. And as long as the corresponding control is set to not operate synchronously, updating a local or terminal will NOT wait until the new value is updated on screen. It will simply drop the new value into a buffer of that control and go on. The UI thread will periodically check for such controls that need an update and redraw them.

    Controls (vs. indicators) also have a synchronous/asynchronous setting, but they have to be read with every loop iteration, don't they? Or do they work the same way as asynchronous indicators, where the code will "catch" a change in asynchronous control status whenever it gets around to it?

    I'm trying to figure out if my processing loop that contains a bunch of controls would run more efficiently if I moved those controls out to a slower loop and passed the results into the processing loop using queues.
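    The slow-loop/fast-loop idea can be sketched as a single-element "latest value" queue: the slow loop publishes the newest control value (replacing any stale one), and the processing loop takes a new value only if one has arrived, otherwise reusing its cached copy. A hedged Python stand-in (names are invented; this is not the LabVIEW mechanism itself):

```python
import queue

latest = queue.Queue(maxsize=1)   # single-element "latest value" channel
cached = 0                        # fast loop's last known control value

def slow_loop_update(value):
    """Slow UI loop: publish the newest value, discarding any stale one."""
    try:
        latest.get_nowait()       # throw away the previous value if unread
    except queue.Empty:
        pass
    latest.put_nowait(value)

def fast_loop_read():
    """Fast processing loop: take a new value if present, else keep the cache."""
    global cached
    try:
        cached = latest.get_nowait()
    except queue.Empty:
        pass                      # nothing new: reuse the cached value
    return cached

slow_loop_update(10)
assert fast_loop_read() == 10
assert fast_loop_read() == 10     # no new value arrived: cached one is reused
slow_loop_update(20)
assert fast_loop_read() == 20
```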

  9. but in general if you start to look into this, you should first have a look at the implementation of your LabVIEW code, as there is a good chance that you simply implemented the easiest algorithm in LabVIEW instead of the most optimal one.

    The code in question performs a running median. Because the length of my window is 4, I've removed the Sort by calculating the median as (sum(Array)-max(Array)-min(Array))/2. It already runs very fast (~3.8us), but considering how often it runs, that adds up to a majority of my processing time.

    I could maybe add some bookkeeping to avoid finding Max and Min each time, but I'm not sure how much that would help, given my short window size.

    I guess the first thing to do is code up the slowest part in C and see how it compares to LV, then worry about transitioning the whole thing, if necessary.
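    For reference, the sort-free median described above falls out of a simple fact: for exactly four samples, the median is the mean of the two middle values, which is what remains after removing the max and the min. A minimal Python version of the trick (checked against the library median):

```python
import statistics

def median_of_4(window):
    """Median of exactly four samples without sorting:
    drop the max and the min, then average the two middle values."""
    assert len(window) == 4
    return (sum(window) - max(window) - min(window)) / 2.0

w = [7.0, 1.0, 4.0, 3.0]
# sum=15, max=7, min=1 -> (15-7-1)/2 = 3.5, the median of {1,3,4,7}
assert median_of_4(w) == statistics.median(w) == 3.5
```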
