Kevin P

About Kevin P

  • Rank
    Very Active

LabVIEW Information

  • Version
    LabVIEW 2016
  • Since
  1. Yeah it's an old thread, but having just stumbled across it, it's new to me. And the vi access arbitration issue is one I ran into but found a workaround for. I solved a similar latency timing issue related to FGV access arbitration on an RT system by making the FGV reentrant. Well, not *only* by making it reentrant; then it wouldn't be an FGV any more. Internally, I changed the data storage mechanism to a single-element queue with a hardcoded name, so every reentrant instance would get its own reference to the single shared queue. The queue refnum was stored directly in
  2. Thanks for the replies guys. It sounds like I may in fact need to distribute both the .lvlib (or .lvclass) file which I understand to be just an XML description and also distribute all the source files for the code that is part of the library (or class). To hide implementation details would then require me to password-protect the diagrams, right? I had been resisting this approach because of config management and version control considerations. It's much easier to verify the version of a single monolithic executable than to verify a whole folder hierarchy of code. But it sounds like it
  3. I'm coming late to the party on the lvlib project library and am struggling a bit to figure out whether they can be the magic bullet to solve a particular problem of mine. What I've done so far: I've got a plugin-like architecture which implements a small homebrew scripting language. Each implemented instruction is placed on disk according to a convention of "<base instruction path>\Instruction\Instruction.vi" and all supporting subvi's for "Instruction.vi" are found in or under its folder. At run time, these vi's are found and enumerated as the set of available scripting instruc
  4. One tiny suggestion: when I've had a need for this kind of thing, there have been 2 or 3 ways to consider handling it. 1a. (Applies to an unbuffered freq measurement task.) Use a short timeout value. On a timeout error, clear the error and report 0 freq for that software query interval. Attempts to report the instantaneous freq *right now*. In absence of pulses, 0 Hz is a reasonable estimate. 1b. (A variation of 1a for certain kinds of apps.) On a timeout error, clear the error and report the most recent successfully-measured freq, which you could store in a shift register. Reports th
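Strategies 1a and 1b can be sketched in Python; the function names and the timeout convention (a `TimeoutError` raised by the underlying read) are illustrative stand-ins for the DAQmx read call, not real API.

```python
def read_freq_1a(read_fn, timeout_s=0.1):
    # 1a: on timeout, report 0 Hz; no pulses arrived this interval.
    try:
        return read_fn(timeout_s)
    except TimeoutError:
        return 0.0

def make_read_freq_1b(read_fn, timeout_s=0.1):
    # 1b: on timeout, repeat the most recent good reading; the
    # closed-over 'last' plays the role of the shift register.
    last = [0.0]
    def read():
        try:
            last[0] = read_fn(timeout_s)
        except TimeoutError:
            pass
        return last[0]
    return read
```

The only difference between the two is what the timeout branch reports: an instantaneous estimate (0 Hz) versus the last known value.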
  5. I've never used cDAQ devices, but based on knowledge of sync'ing tasks across PCI boards and such it looks very reasonable. Really only 2 minor comments: 1. I don't "get" the use of an internal 20 MHz clock as a trigger. Not clear to me what it buys you. But maybe that's a cDAQ thing... 2. If you ever request an AI sample rate that can't be generated exactly by your cDAQ device, I'm not sure if your property node query will work correctly when you query before reserving / committing / starting the task. I had a past experience where a similar query returned the exact same value I request
  6. For the controller I had, it would have cost $1000 for 1GB of memory from NI. I carefully researched the memory's specs (available with some digging on NI's site) and bought from Crucial. Worked out just fine. -Kevin P.
  7. QUOTE (mross @ Apr 8 2008, 02:04 PM) Mike, I've put together code before with at least 3 independent While loops that each reacted to a single "Stop" button event. One mouse click always stopped all the loops. Every event structure that is "registered" to react to a particular event will have the event delivered to its queue. I generally only use this technique for quick prototype stuff since it seems to be a frowned-upon style, but it has always worked out fine in my experience. One little tidbit for those inclined to try it out: do *not* use the value from the button's terminal.
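The delivery model described above can be sketched as a broadcaster in Python: each registered listener owns its own queue, so one fired event reaches every loop independently. This is an analogue of the behavior, not LabVIEW's actual event implementation; the class and names are illustrative.

```python
from queue import Queue

class EventBroadcaster:
    # One queue per registered listener: firing an event delivers an
    # independent copy to every loop, like multiple event structures
    # all registered for the same "Stop" event.
    def __init__(self):
        self._listeners = []

    def register(self):
        q = Queue()
        self._listeners.append(q)
        return q

    def fire(self, event):
        for q in self._listeners:
            q.put(event)
```

One "mouse click" (one `fire` call) stops all loops because each loop drains its own copy of the event, which is also why reading the button's terminal instead of the event data misbehaves.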
  8. If I understand you right, there *is* a more elegant method that's pretty simple. Simply use computed indices to determine which array element to increment. Supposing you wanted to locate within a 5mm x 5mm x 5mm cube, you'd simply divide the actual x,y,z location by 5 mm to produce integer (i,j,k) indices that range from 0 to 19. Then extract the (i,j,k) element, increment the value, and use "Replace Array Subset" to put it back in the (i,j,k) location. Sorry, don't have LV near my network PC or I'd post a screenshot to illustrate. -Kevin P.
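In place of the screenshot, here is a minimal Python sketch of the computed-index approach, assuming a 100 mm span divided into 5 mm cells so indices run 0 to 19 (the span and array dimensions are illustrative):

```python
def bin_index(x, y, z, cell_mm=5.0):
    # Computed (i, j, k) cell indices: divide each coordinate by the
    # cell size and truncate to an integer.
    return int(x // cell_mm), int(y // cell_mm), int(z // cell_mm)

def increment_cell(counts, point, cell_mm=5.0):
    # Extract the (i, j, k) element, increment it, and put it back:
    # the text equivalent of Index Array + Replace Array Subset.
    i, j, k = bin_index(*point, cell_mm)
    counts[i][j][k] += 1
    return counts
```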
  9. I used to use the "Run VI" method as my primary way to launch background code, and my older efforts would occasionally rely on the "Abort VI" method to kill one that was found to be still running at a time it shouldn't be. As I've re-used and refactored some of that stuff, I've started using other architectures for my background code. There is still usually a "Run VI" up at the top that launches it, but the code itself is more of a queued state machine. Internally, it has its own rules for state transitions, but it can additionally be sent action messages asynchronously from the foreground
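The queued-state-machine shape described above can be sketched in Python: the machine owns its internal transition rules but also accepts asynchronous action messages from the foreground. The states and message names are illustrative, not from the original code.

```python
from queue import Queue, Empty

def run_state_machine(inbox, log):
    # Minimal queued state machine. It decides its own transitions,
    # but async messages from the foreground can steer or stop it,
    # so no "Abort VI"-style kill is needed.
    state = "idle"
    while state != "done":
        try:
            msg = inbox.get(timeout=0.01)  # async foreground message
        except Empty:
            msg = None
        if msg == "stop":
            state = "done"       # external stop is always honored
        elif state == "idle" and msg == "start":
            state = "running"    # internal transition rule
        log.append(state)
```

A cooperative "stop" message like this lets the background code clean up after itself, which the brute-force "Abort VI" method never allowed.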
  10. Looking for reasonably elegant solution. Will be acquiring multiple channels from data acq boards at same rate (shared clock). Would like to perform software lowpass filtering on each channel, but do not know # of channels at design time. Will be processing data continuously and cannot accept the transient effect of using the 'init = True' input on every call to the filter function. Will need to use the reentrancy of the filter vi's so that each channel uses its own instance and maintains state between calls. So, the problem is that there's a simple and elegant approach (of auto-indexing
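A rough Python analogue of the per-channel-state requirement: one filter object per channel (the equivalent of one reentrant clone each), created once the channel count is known at run time. The one-pole lowpass and its alpha value are illustrative stand-ins, not the actual LabVIEW filter VIs.

```python
class OnePoleLowpass:
    # One instance per channel, like one reentrant clone of the
    # filter VI, each keeping its own state between calls.
    def __init__(self, alpha=0.1):
        self.alpha = alpha
        self.state = None  # persists across calls to process()

    def process(self, samples):
        out = []
        for s in samples:
            if self.state is None:
                self.state = s  # seed from first sample: no init transient
            else:
                self.state += self.alpha * (s - self.state)
            out.append(self.state)
        return out

def make_filters(n_channels, alpha=0.1):
    # Channel count is only known at run time, as in the post.
    return [OnePoleLowpass(alpha) for _ in range(n_channels)]
```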
  11. Aristos: Simple question but I'm offsite and will likely forget to check this out for myself later. If my memory serves me, it appears that the 1D subarray taken from the 2D array is a ROW, whose elements would be stored in consecutive memory locations. Would there be a buffer allocation and data copy when extracting a 1D COLUMN from the 2D array, since those elements would NOT be in consecutive memory locations? In AI data acq, I'm much more likely to extract column-wise than row-wise since each column represents multiple samples from an individual channel. bbean: I'm interested in memo
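The memory-layout point can be illustrated with NumPy, which (unlike the LabVIEW case being asked about) returns strided views for both slices; the sketch only demonstrates that row elements are consecutive in memory while column elements are not.

```python
import numpy as np

# Row-major 2D array: a row's elements sit in consecutive memory,
# a column's elements do not.
a = np.arange(12).reshape(3, 4)

row = a[0, :]   # contiguous slice
col = a[:, 0]   # strided slice: elements 4 positions apart in memory

print(row.flags['C_CONTIGUOUS'])  # True
print(col.flags['C_CONTIGUOUS'])  # False
```

Whether LabVIEW must allocate and copy for the non-contiguous column case is exactly the open question in the post.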
  12. jlokanis, I would go with Option 2 as well. I must add my curiosity to that of LV_Punk -- 100's of spawnings per minute? Sure seems some other approach might be called for... Meanwhile, for those discussing Option 1 based on the Set Control Value / Get Control Value paradigm: there are some indications that this approach isn't fully robust. There was a recent thread on the ni forums (over here) and in that thread I linked to an older thread where I encountered similar quirks a couple years ago. I wrestled it a while, but eventually hacked up an inelegant workaround and moved on. So, ei
  13. What part of the code actually opens the file ref? I've been bitten before when I had a dynamically called process launcher vi that opened a bunch of refs to files and queues, etc. I then passed the refnums along to other dynamic code. Trouble was, the vi that originally opened the refnums ran to completion and went into idle state. Shortly after that, LabVIEW closed all the refnums it had opened as part of its automatic garbage collection, and all my dynamic vi's started throwing errors. Since it isn't clear to me where your file refnum gets opened, I wonder if you may have a situatio
  14. Max Issues: The menu you highlighted looks like the place I had in mind. I don't know why it's deactivated, but one guess is that maybe Traditional NI-DAQ didn't install correctly? I'd probably try re-installing, mostly because I don't have any better ideas. Good luck! -Kevin P.