Posts posted by Aristos Queue

  1. Danger, Will Robinson! Deep inheritance hierarchies aren't very malleable and you may find yourself unable to accommodate future changes. Or worse, you could end up hacking in modifications and ending up with very brittle code. Can you post a diagram of your object model?

    Really? My opinion has generally been that deep hierarchies mean that you've adequately decomposed objects and you can inherit in the future off of any of the intermediate levels instead of being stuck with one concrete class that has everything. I've always thought that the deep hierarchies are highly extensible.
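    As an illustration of that extensibility argument, a new class can inherit from whichever intermediate level reuses the right amount of behavior. This is only a sketch; all class names are invented for illustration:

```python
# A sketch of the extensibility argument: with intermediate levels in
# the hierarchy, a later class can slot in at whichever level reuses
# the right amount of behavior. All class names here are invented.
class Instrument:
    def self_test(self):
        return "ok"

class Scope(Instrument):
    def acquire(self):
        return [0.0, 1.0, 0.5]    # pretend waveform

class DigitalScope(Scope):
    def decode_bus(self):
        return "SPI"

# Later: a new scope type inherits at the Scope level, reusing Scope
# behavior without dragging in DigitalScope's specifics.
class MixedSignalScope(Scope):
    pass

print(issubclass(MixedSignalScope, Instrument))   # → True
print(issubclass(MixedSignalScope, DigitalScope)) # → False
```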
  2. I converted a VI from LV8.6 to LV2009. There is a ClearError VI inserted in the error wire (see attached). Why did it do that?

    [Attached image: post-2786-125192197963_thumb.png]

    No, my code isn't this ugly. I used the new Create Snippet feature to create the PNG file, and it uglied it up a bit.

    George

    1. Our code conventions say that when we insert something like that on your diagram, you should see a load warning when you load the VI explaining the change. Did you get such a warning? If not, please file a CAR.
    2. Generally this would be done for any node that used to silently fail some case and in the new version returns an error code for that case. Existing code may have been written to assume the silent failure. To preserve previous-version functionality, we insert a function to conditionally clear the new error code.
    3. Inserting a full "Clear Errors.vi" that wipes out all error information would be strange... The only case I can imagine that applying to is if in the previous version the node utterly failed to propagate error in to error out and we have fixed that, but, again, to preserve previous functionality, we clear all errors downstream. But that's just a guess. [LATER] Looking closer at your posted picture, that is just a conditional clearing of the error code, so I'll bet #2 applies.
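    For readers who want the logic of #2 in text form, the conditional clearing amounts to something like the following Python sketch. The error is represented as a status/code/source cluster; the function name and error codes are invented for illustration, not NI's actual implementation:

```python
def clear_specific_error(error, codes_to_clear):
    """Mimic the conditional error clearing described above: if the
    error's code is one newly introduced by the upgrade, suppress it to
    preserve the old silent-failure behavior; otherwise pass the error
    through unchanged. `error` is a (status, code, source) tuple,
    analogous to a LabVIEW error cluster. Illustrative sketch only."""
    status, code, source = error
    if status and code in codes_to_clear:
        return (False, 0, "")   # clear just this specific error code
    return error                # any other error still propagates

# Error code 42 (invented) is newly introduced, so it gets cleared...
print(clear_specific_error((True, 42, "Some Node"), {42}))
# ...but an unrelated error still propagates downstream.
print(clear_specific_error((True, 7, "Open File"), {42}))
```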

  3. Just out of interest, do you program for NI on a Mac?
    My work machine is a PC. My home machines & laptop are all Mac. My C++ development is always done under Windows where I have access to that wonderful tool MS Visual Studio. I haven't ever found anything for text programming that comes close to the usability of MSVS. On the other hand, most of my G development these days is done on my laptop where I can use my touchpad, which I find much nicer than a mouse for LV programming (easier to move back and forth from keyboard to mouse as needed).
  4. It used to be a great boon that most of LabVIEW was written in LabVIEW. There were lots of subVIs installed that found uses outside the original premise. Pretty soon we'll have one VI (Call Library Function) and everything will be a DLL call.

    I seriously doubt it. The percent of LV in LV has gone up substantially with every release since 8.0. The problem is not how much of LV is written in LV, nor even how many VIs can you look at to see how something is done. The question is how many of those VIs can you use in your own code AND expect support for those VIs in the next version of LV. If every VI that ships with LV is one that we have to maintain indefinitely, we'll rapidly stagnate.

    We're not talking about password protecting the diagrams or anything. We're talking about making it so that you have to create your own copy of the VIs in order to use them.

  5. I don't believe they intrinsically suspend execution until an update is received. They also have huge caveats (e.g., they cannot be dynamically created at run time). It's a bit of a sledgehammer to crack a nut, IMHO.

    To the best of my knowledge, ShaunR is correct. The shared variables do not provide synchronization behaviors. On a local machine, they are equivalent to global VIs. Over the network, they are a broadcast mechanism which, by polling, you can use as a trigger, but I don't think you have any way to sleep until a message is received.
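    The polling-as-trigger pattern described here can be sketched in Python. The shared-variable stand-in and the function name are invented for illustration; the point is that the reader must poll rather than block:

```python
import time

# A network shared variable behaves like a broadcast global: readers
# poll its latest value; there is no built-in "sleep until a new value
# arrives". The dict below is a stand-in for a shared variable.
shared_value = {"counter": 0}

def wait_for_update(last_seen, timeout_s=1.0, poll_interval_s=0.01):
    """Poll until the value differs from last_seen, or time out.
    Returns the new value, or None if no update was observed."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        current = shared_value["counter"]
        if current != last_seen:
            return current           # "triggered" by a new value
        time.sleep(poll_interval_s)  # polling burns a little CPU
    return None                      # timed out; no update arrived
```

A blocking queue or notifier, by contrast, would put the reader to sleep until a message actually arrives, with no polling loop.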
  6. Yes, you're right, "memory leak" isn't the right term for it. If I had had problems, I wouldn't have known it. But sometimes it has to acquire a lot of data, sometimes less. So the amount of memory that's been allocated is governed by the largest data set acquired. Not really a big deal; it just doesn't sit well with me.
    It's a question of reallocation on every call vs. leaving the allocation in place to maximize performance. If the data can occasionally ramp up to "large number X", and you have enough RAM, then it's a good idea to just leave a large block always allocated for that large data event, no matter how rare it is. The larger the desired block, the longer it takes to allocate -- memory may have to be compacted to get a block big enough. Also, keep in mind that we're talking about the space for the top-level data, not the data itself. So if you enqueue three 1-megabyte arrays and then you flush the queue, the queue is just keeping allocated the 12 bytes needed to store the array handles, not the arrays themselves.
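    The handles-vs-data distinction has an analogue in any language where containers hold references. Here is a Python sketch (the container is a stand-in, not a LabVIEW queue; the sizes in the comments are approximate):

```python
import sys

# Analogy for the point above: the queue stores small handles
# (references) to the data, not the data itself. Enqueuing three
# 1 MB arrays makes the container's own storage only three
# references bigger; flushing releases the arrays while the
# per-slot storage stays tiny.
big_arrays = [bytearray(1_000_000) for _ in range(3)]
msg_queue = list(big_arrays)             # stand-in for a queue

print(sys.getsizeof(big_arrays[0]))      # roughly 1 MB: the data itself
print(sys.getsizeof(msg_queue))          # tens of bytes: just the handles
```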
  7. Option 1: Create a queue, a notifier and a rendezvous. Sender Loop enqueues into the queue. All the receiver loops wait at the rendezvous. Receiver Loop Alpha is special. It dequeues from the queue and sends to the notifier. All the rest of the Receiver Loops wait on the notifier. Every receiver loop does its thing and then goes back around to waiting on the rendezvous.

    Option 2: Create N + 1 queues, where N is the number of receivers you want. Sender enqueues into Queue Alpha. Receiver Loop Alpha dequeues from Queue Alpha and then enqueues into ALL of the other queues. The other receiver loops dequeue from their respective queues.

    Option 1 gives you synchronous processing of the messages (all receivers finish with the first message before any receiver starts on the second message). Option 2 gives you asynchronous processing (every loop gets through its messages as fast as it can without regard to how far the other loops have gotten in their list of messages).

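    Option 2 can be sketched with ordinary threads and thread-safe queues. This is a Python illustration of the topology described above, not LabVIEW code; the message strings and variable names are invented:

```python
import queue
import threading

# Option 2: N + 1 queues. The sender feeds Queue Alpha; Receiver Loop
# Alpha fans each message out into every other receiver's private
# queue, so each receiver processes at its own pace (asynchronous).
N = 3
alpha_q = queue.Queue()                      # Queue Alpha
receiver_qs = [queue.Queue() for _ in range(N)]
STOP = "shutdown"                            # sentinel to end the loops

def receiver_alpha():
    while True:
        msg = alpha_q.get()                  # dequeue from Queue Alpha
        for q in receiver_qs:                # enqueue into ALL other queues
            q.put(msg)
        if msg == STOP:
            return

def receiver(i, results):
    while True:
        msg = receiver_qs[i].get()           # each loop runs at its own pace
        if msg == STOP:
            return
        results.append((i, msg))             # "does its thing"

results = []
threads = [threading.Thread(target=receiver_alpha)]
threads += [threading.Thread(target=receiver, args=(i, results))
            for i in range(N)]
for t in threads:
    t.start()
for msg in ("configure", "acquire", STOP):   # the Sender loop
    alpha_q.put(msg)
for t in threads:
    t.join()
# Every receiver saw every message, in order, independently.
```

Option 1 would replace the private queues with a rendezvous plus a notifier, forcing all receivers to finish each message before the next one is dispatched.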
  8. I was using 'JIT' to refer to the background compiling that happens in the LabVIEW dev environment. What do you call that process? Precompiling? A level 1 compiler? On large projects I've seen a several-second delay between hitting the run button and having the application actually start, so I don't think it compiles directly to machine code.
    That's just plain old compiling. And that's what we call it. :-) The fact that you as a user don't have to explicitly invoke it is a service on our part, but the compilation happens before you run, which is exactly when it happens in every other compiled programming language.
  9. Hey Jim

    That's a good point that I haven't explored deeply. My first thought is that class Bike has its own DVR to its data members and the child Racer has its own, so they are separate and shouldn't interfere with each other. I'll do some tests and see what is involved in breaking it or causing it to lock.

    Talk to Mikael... putting the DVR inside the class sounds an awful lot like the GOOP Toolkit implementation where they wrap a raw reference to data in a class to give it dispatching behaviors. Mikael may have some info about the deadlock potential of this situation.
  10. > Dropping a Variant Collection object on a VI creates a Variant Collection with a CollectionImp object instead of one of the child class objects.

    Go to Variant Collection.lvclass:Variant Collection.ctl. Change the default value (not the type, just the value) of the CollectionImp control to be an instance of one of the concrete types -- HashTable, for example. Now whenever you drop Variant Collection.lvclass, you get one that has a hashtable unless/until you change it to something else.

  11. One issue to resolve with this method is dynamic dispatching. It seems that you can't do dynamic dispatching using Data Value reference terminals. Hopefully this will be added in a future LabVIEW release. In the meantime you will probably have to do dynamic dispatching inside the In Place Element Structure.
    There are no plans to add this in a future version. It was explicitly rejected as a possible feature during brainstorming... I posted about this in the beta forums, and I *think* I updated the Decisions Behind The Design to talk about this. (If I didn't, remind me to update it soon.)
  12. The following was written on my whiteboard at my desk for most of the time we were developing LVOOP:

    El Voop == Spanish for strength!

    Le Voop == French for style!

    Al Voop == Arabic for quality!

    LVOOP == LabVIEW for class!

    And for the record, we didn't have any way of pronouncing "LVOOP-DVR" during development because DVRs are not specific to objects.

  13. Here's another suggestion:

    Create a new state called "Update Indicators". Put all the indicator FPTerms in that state. Now instead of displaying the values in the state that generates the value, you wait to get to the Update Indicators state to update the indicators. Of course, this would require one of two things. Either:

    A) a queued state machine, so that you could queue up the Update Indicators state followed by the next state that you really wanted to go to, or

    B) you store the next state to transition to as another value in a shift register that the Update Indicators state can use.
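    Option (A) can be sketched as a queued state machine in Python. The state names and the indicator value are invented for illustration; in G, the queue would typically be a state array in a shift register or a LabVIEW queue:

```python
from collections import deque

# Sketch of suggestion (A): a state that produces a value queues
# "update_indicators" followed by the state it really wants next,
# so all indicator updates happen in one place.
def run(state_queue):
    indicators = {}   # stand-in for the front-panel indicator terminals
    pending = {}      # values produced by states, displayed later
    while state_queue:
        state = state_queue.popleft()
        if state == "acquire":
            pending["reading"] = 42          # pretend we acquired data
            # queue the display update, then the state we really want
            state_queue.appendleft("analyze")
            state_queue.appendleft("update_indicators")
        elif state == "update_indicators":
            indicators.update(pending)       # the ONE state that updates
            pending.clear()                  # the indicators
        elif state == "analyze":
            pass                             # next real step of the machine
    return indicators

print(run(deque(["acquire"])))               # → {'reading': 42}
```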
