Posts posted by eaolson

  1. For the first time, I need to create a DLL from a LabVIEW application that can be used by external code, probably C++. There is a great deal of information on calling a DLL inside LabVIEW, but not very much that I can find about calling a LabVIEW-created DLL from an external application. Can anyone point me to any reference? NI used to have a document Using External Code in LabVIEW, but I can't find a "Using LabVIEW in External Code" sort of document.

    Some things are minor. For example, the error cluster is defined in the generated .h file as:

    typedef struct {
        LVBoolean  status;
        long       code;
        LStrHandle source;
    } TD1;

    That's reasonably straightforward, but the TD1 name isn't very descriptive (I'm guessing that stands for Type Definition 1). Other clusters get TD2, etc. Is there any way to change these names to something more helpful? I worry that, if my DLL changes, these will all be renumbered, and that will make maintainability a nightmare.

    Other things are considerably more confusing. I have a function that returns a timestamp in LabVIEW. In the .h file, this function gets a prototype that looks like:

    void __stdcall GetTime(TD1 *errorInNoError, HWAVEFORM targetSTime, LVBoolean *timedOut, TD1 *errorOut);

    The only definition I can find for HWAVEFORM is in extcode.h that LabVIEW drops in the DLL folder:

    typedef IWaveform* HWAVEFORM;

    And the only reference to IWaveform I can find is also in extcode.h, where it is defined as:

    typedef struct IWaveform IWaveform;

    I can't find any other definition or even reference to IWaveform.
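
    In case it helps frame the question, this is roughly the kind of caller I have in mind on the C++ side. The DLL name, the exported function, and its signature are placeholders I made up for illustration; only the TD1 layout comes from the generated header, and the LVBoolean/LStrHandle typedefs are simplified stand-ins for the real ones in extcode.h.

    // Minimal sketch of a C++ caller for a LabVIEW-built DLL.
    // "MyLabVIEWBuilt.dll" and the exported "GetValue" function are hypothetical.
    #include <windows.h>
    #include <cstdio>

    typedef unsigned char LVBoolean;   // stand-in; the real definition is in extcode.h
    typedef void*         LStrHandle;  // stand-in; really a handle to an LStr

    typedef struct {
        LVBoolean  status;
        long       code;
        LStrHandle source;
    } TD1;

    // Matches the calling convention of the generated prototypes (name is made up)
    typedef void (__stdcall *GetValue_t)(TD1 *errorIn, double *value, TD1 *errorOut);

    int main()
    {
        HMODULE dll = LoadLibraryA("MyLabVIEWBuilt.dll");
        if (!dll) { std::printf("could not load DLL\n"); return 1; }

        GetValue_t GetValue = (GetValue_t)GetProcAddress(dll, "GetValue");
        if (!GetValue) { std::printf("export not found\n"); FreeLibrary(dll); return 1; }

        TD1 errIn  = {0, 0, 0};   // status = 0 means "no error" going in
        TD1 errOut = {0, 0, 0};
        double value = 0.0;
        GetValue(&errIn, &value, &errOut);

        std::printf("value = %g, error code = %ld\n", value, errOut.code);
        FreeLibrary(dll);
        return 0;
    }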

  2. QUOTE(Tomi Maila @ Apr 9 2007, 11:39 AM)

    Those are normalized results describing the proportion of searches relative to the number of people.

    Oh, I realize that. And I'm sure it's no coincidence that NI is headquartered in Austin. I just thought it was interesting that no other US cities were listed. So I'm wondering, is Austin in particular a hotbed of LabVIEW development, or are the NI folks themselves doing a whole lot of Googlin'?

  3. I'm trying to be good and arrange all my VIs appropriately in project libraries*, marking each VI as public or private as I go. But I have found that, when a polymorphic VI is called from outside a project library, LabVIEW uses the access scope (public/private) from the instance VI, not the polymorphic VI. It seems that the scope for the polymorphic VI is simply ignored; you can even call a private polymorphic VI, as long as the instance VI is public. I assume this has something to do with how the actual instances are dropped on the diagram via the polymorphic VI. Is this the intended behavior? It just doesn't seem like the way things should be, at least not in my head.

    The attached example contains a project library (Case Lib.lvlib) with three VIs:

    • Poly (private)
    • Case DBL (public)
    • Case I32 (private)

    There is also a Use Case DBL VI that calls Case DBL via Poly and a Use Case I32 VI that calls Case I32 via Poly. Use Case DBL runs fine. Use Case I32 has a broken arrow.

    Originally, I had made a library with a public polymorphic VI and private instances, because that's what makes intuitive sense to me. Calling the polymorphic VI from another VI didn't work because the instances were private.

    (Does this nomenclature bug anyone else? A library has always seemed to me to be an actual collection of things, not a separate file containing metadata about a group of things.)

  4. QUOTE(Tomi Maila @ Mar 29 2007, 01:41 PM)

    First Call? is not needed, as the feedback node can be initialized at the edge of the loop.

    So can a shift register. The initialization happens on each call, which makes an initialized feedback node not so useful for an LV2 global. I was just trying to point out that comparing a feedback-method global without First Call? and a while-method global with it is like comparing apples to oranges.

    Execution times, 10^8 iterations: (YMMV)

    feedback-method global with initialization only: 1272 ms

    feedback-method global with initialization and First Call?: 4693 ms

    for-method global: 4600 ms

    while-method global: 4200 ms

  5. QUOTE(Tomi Maila @ Mar 29 2007, 12:42 PM)

    EDIT: The updated feedback node test is attached. It's still fastest, but not by such a clear margin.

    There seems to be a constant folding bug with your updated VI. The output is always 0. I think this is a known issue. I don't know if this will affect the speed of operation or not.

    You also don't have the same First Call? or case structure as in the other examples. When I add them in, it seems to be quite a bit slower than the other two methods. (The difference between the While and the For methods is only about 1% for me.)

  6. QUOTE(Tomi Maila @ Mar 29 2007, 10:38 AM)

    I didn't find the old tests but I quickly wrote new ones. They are attached. Feedback node was fastest, while loop second and for loop last.

    The feedback method has four buffer allocations inside the outermost for loop. The for method has five and the while method has six. Could that have something to do with it? I'm not sure how constant folding will affect those or exactly what happens when a loop element (i or N) is left unwired.

  7. I hesitate to jump into this thread, but I will anyway.

    QUOTE(Aristos Queue @ Mar 26 2007, 05:21 PM)

    Why wouldn't the whole Error class tree start at General Error? It seems that that would have some sort of basic error functionality that was extended in the subclasses. What's the advantage to making them siblings? (In Java, for example, everything inherits from a general Error class.)

    I can sort of see the need for multiple classes of errors as prototyped in this example. But how would a VI that was expecting, say, a Network Error behave if it was wired a Device Error instead? Those are sibling classes. Would there be an implicit typecast up to Abstract Error and then a typecast down to Network Error? (My experience with LVOOP is a bit sketchy at this point; apologies if that's a blindingly obvious question.)

    The whole error IO paradigm as it stands in LabVIEW only makes sense if a VI can accept an error and pass it down the line as far as it needs to go, possibly into and out of VIs that are not aware of that particular error customization. Is it envisioned that this will change in the future, or will error clusters be replaced with Abstract Errors?

    I kind of see Tomi's point. The only reason to have a specified User Defined Error is if the other error types are not user-definable. Does this mean I won't be able to sub-class off of DAQ Error?
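
    For what it's worth, this is roughly how I'm picturing the sibling-class question in C++ terms. The class names just mirror the example; this obviously isn't LVOOP, only an analogy for the up/downcast behavior I'm asking about.

    // Siblings share nothing but the base class, so a "Network Error" handler
    // handed a "Device Error" can only ever treat it as the base class.
    #include <cstdio>

    struct AbstractError { virtual ~AbstractError() {} };
    struct NetworkError : AbstractError {};
    struct DeviceError  : AbstractError {};

    void HandleNetworkError(AbstractError *e)
    {
        // The upcast to AbstractError is implicit; the downcast to NetworkError
        // fails at run time when the object is actually a DeviceError.
        if (dynamic_cast<NetworkError *>(e))
            std::printf("got a network error\n");
        else
            std::printf("not a network error; falling back to generic handling\n");
    }

    int main()
    {
        DeviceError d;
        HandleNetworkError(&d);   // takes the generic branch
        return 0;
    }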

  8. QUOTE(fungiscience @ Mar 20 2007, 05:08 PM)

    I'm trying to write one sample to one virtual channel programmatically with LabVIEW 8.2 in a task configured in MAX (DAQmx simulated device PXI-6225). This task contains 30 different outputs. I don't want necessarily to write all 30 outputs all the time. Is there a way to do that?

    I'm working from memory here, but I don't think so. If your task is configured for 30 channels, you need to update all 30 channels at once. What you can do is keep track of the current output values in an array, update only the array element you want to change, and use DAQmx Write 1D DBL NChan 1Samp to write the entire array to the hardware. You could also set up each channel in a different task, but that becomes complicated if you want to update them simultaneously.
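
    If it helps, the same keep-an-array-and-rewrite-everything pattern looks roughly like this in the DAQmx C API. This is an untested sketch; the task name and the 30-channel count are assumptions taken from the original question, and error checking is omitted.

    // Read-modify-write pattern for a multi-channel AO task via the DAQmx C API.
    #include <NIDAQmx.h>

    int main()
    {
        TaskHandle task = 0;
        float64 current[30] = {0};            // last value written to each channel

        DAQmxLoadTask("MyAOTask", &task);     // hypothetical task configured in MAX
        DAQmxStartTask(task);

        current[7] = 2.5;                     // change only the channel of interest...

        int32 written = 0;                    // ...then rewrite the whole array
        DAQmxWriteAnalogF64(task, 1, 1, 10.0, DAQmx_Val_GroupByChannel,
                            current, &written, 0);

        DAQmxStopTask(task);
        DAQmxClearTask(task);
        return 0;
    }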

  9. Just to make sure this horse really is dead ...

    I was thinking the error being thrown was incorrect, and so brought this up over at the NI forums. I was wrong; the regexp is just malformed. I think the "G[b|i]" element should really be "G(b|i)" and the outermost [ and ] should be ( and ). You can't nest square brackets or use parentheses for grouping inside square brackets.

  10. QUOTE(LV Punk @ Mar 12 2007, 01:55 PM)

    BTW, Thanks :thumbup:

    You're welcome. Actually, I just saw it as an opportunity to sharpen my regexp skills, which aren't as polished as I'd like them to be.

    This error seems to have something to do with nesting an [] inside an () inside an []. When I take away any one of those elements, I stop getting the error. Now, is this a malformed regexp, or is it a problem with LabVIEW? (Told you my regexp skills weren't as polished as I'd like them to be.)

  11. QUOTE(crelf @ Mar 9 2007, 08:17 AM)

    If you have only one input and their datatypes are the same class, you can, however, create an icon that looks like a wire :shifty:

    I just noticed that, if you use the block diagram grid, LV places a one pixel wide gap around wires, but not around VIs or controls/indicators. So, for a wire, the grid stops one pixel away, but not for a sub VI.

    I can't believe I just saw this. Is there such a thing as being too detail-oriented?

  12. QUOTE(PaulG. @ Feb 7 2007, 07:33 PM)

    This got freaky right after " ... prostitutes ... " :question: I've always wanted to read the legendary "5th dimension" thread from start to finish ... but I know how it ends. :wacko:

    Hey, at least it's a happy ending.

  13. QUOTE(JFM @ Feb 23 2007, 02:07 PM)

    I don't know if this classifies as a bug or not, but I find it very annoying that simple arithmetic (like add, subtract, etc.) does not work with the Timestamp datatype. At least it does not return a Timestamp datatype in the result.

    Timestamps are absolute times, not relative ones. So (23-Feb-2007) - (22-Feb-2007) = 24 hours makes sense, but (23-Feb-2007) + (22-Feb-2007) doesn't. After all, what should the result of "Wednesday plus Thursday" be, and would it be different from "Monday plus Tuesday"?

  14. QUOTE(Michael_Aivaliotis @ Feb 22 2007, 11:32 AM)

    Typically, when doing streaming to disk, you want to append the data to a file as you collect inside the loop, not after the collection. This means you don't have to build up a large array. That's my opinion. Write everything to disk all the time and after the test is over extract the last 5 mins you want. Also, why are you assuming 16 nodes of data collection? Normally you can sweep all 16 channels with one daq node. Have you considered TDMS files? They are optimized for high speed disk streaming (NI claims). Anyway, just my 2 cents worth. In the end you're the one who has to get it working...

    He's talking about collecting 16 channels of data at 1kHz for 200 hrs. By my quick calculation, that would be an 87 GB data file. I'm sure that's not impossible, but it probably opens up entirely new avenues of challenges. Does TDMS perform any compression on the data it streams to disk?
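
    For what it's worth, the back-of-the-envelope math, assuming 8-byte doubles: 16 channels × 1000 samples/s × 8 bytes × 200 h × 3600 s/h ≈ 9.2 × 10^10 bytes, which lands in the 86 to 92 GB range depending on whether you count in GiB or GB.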

  15. QUOTE(EJW @ Feb 19 2007, 12:49 PM)

    My original idea was to use shift registers, initialize each one with 300K elements, and then use the Rotate 1D Array and Replace Array Subset functions. Maybe this is the best way, maybe not. I could sure use some input!

    I initially thought you could use a queue, but I forgot that fixed-size queues aren't lossy. You could use a queue, preallocate it, flush it, then start putting elements into it. You'd have to check if the queue was full, and if so dequeue an element before putting the next one in.

    I wouldn't use the Rotate 1D Array function since it reallocates the array. You could use an array in a shift register and an index to the next data point. Basically, create an actual double-buffered data structure.
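
    Something like this is what I mean by the array-plus-index approach: a rough C++ sketch of a fixed-size circular buffer, just to show the idea. The 300K size is from the original post; the names and everything else are made up.

    // Fixed-size circular buffer: overwrite the oldest element, never reallocate or rotate.
    #include <vector>
    #include <cstddef>

    class CircularBuffer {
    public:
        explicit CircularBuffer(std::size_t size) : data_(size, 0.0), next_(0), count_(0) {}

        void push(double x)
        {
            data_[next_] = x;                      // overwrite in place
            next_ = (next_ + 1) % data_.size();    // advance the write index
            if (count_ < data_.size()) ++count_;
        }

        // Copy the contents out oldest-first; only needed when you read the buffer back.
        std::vector<double> snapshot() const
        {
            std::vector<double> out;
            out.reserve(count_);
            std::size_t start = (count_ < data_.size()) ? 0 : next_;
            for (std::size_t i = 0; i < count_; ++i)
                out.push_back(data_[(start + i) % data_.size()]);
            return out;
        }

    private:
        std::vector<double> data_;
        std::size_t next_;    // index of the next slot to overwrite
        std::size_t count_;   // number of valid elements stored so far
    };

    int main()
    {
        CircularBuffer buf(300000);     // 300K elements, as in the original post
        for (int i = 0; i < 1000000; ++i)
            buf.push(i * 0.1);          // only the most recent 300K values survive
        return 0;
    }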

  16. QUOTE(JStoddard @ Feb 16 2007, 01:46 PM)

    I'm currently doing a VI for a Leak Test machine that may run up to 24 hours. In this time period I'm graphing the collected pressure traces, and that's it.

    Is there a way to dump the history of the graph to a CSV file? Rather than collecting the data into another array, and then either trashing the array once they start a new test, or saving it to a file if it's invoked. It just seems like better memory management to use the history feature... I guess.

    For a chart, you can use a property node and use the History Data property to get the contents of the chart as an array. It will discard everything not contained in the chart history, though. For a graph, you can just use a local variable of the graph itself.

    I would suggest logging the data to disk as you collect it, though. Unless you're collecting at really high rates, it shouldn't slow anything down. If a power glitch can lose 24 hours of data, it's worth the small amount of extra work.
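
    The append-as-you-go idea is nothing fancy; in C++ terms it amounts to something like this. The file name and the fake pressure reading are placeholders, not anything from the actual test setup.

    // Append each sample to a CSV file as it is collected, so a crash loses at most
    // the last few rows instead of the whole run.
    #include <cstdio>
    #include <ctime>

    int main()
    {
        std::FILE *f = std::fopen("pressure_log.csv", "a");   // open for append
        if (!f) return 1;

        for (int i = 0; i < 10; ++i) {
            double pressure = 101.3;                 // placeholder for the real reading
            std::fprintf(f, "%ld,%.4f\n", (long)std::time(0), pressure);
            std::fflush(f);                          // push each row to disk right away
        }

        std::fclose(f);
        return 0;
    }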
