Everything posted by Kevin P

  1. Thank you very much, one and all! I'm especially looking forward to poking around the XControl version from LVPunk. I haven't used XControls yet. The NI pdf didn't inspire me to toy around until / unless I had an app with a good use case. -Kevin P.
  2. Ok, I've hunted through the numeric palettes, the NI forums, and the web at large. I tried looking into the picture controls a little, and they don't hold any obvious promise either. I can't seem to find what I'm after. :headbang: Here's what I want: something like a round numeric gauge with a needle that can display continuous rotation. Continuous is a word which here means, "without a sudden jump across the gap from max back to min." Preferably a control that allows me to plot 2 (or more) needles simultaneously. Application need: an end-of-line test fixture to be loaned to a supplier. I'll be measuring two encoder axes that rotate at different speeds. I'd simply like a nice rotating needle to indicate the current rotational position. An operator can very easily see whether or not it's spinning. Bonus points if it can internally handle the modulo-360 degree stuff while maintaining a total rev count. I can handle that myself easily enough, but if someone's already built up an XControl for this sort of thing, well, I wouldn't refuse it. I haven't considered making my own XControl yet because I don't have an appropriate graphic indicator to start from. I can't help but think this kind of thing has been wanted and implemented before. I'll be embarrassed but nevertheless grateful :worship: if it turns out to be right under my nose... -Kevin P.
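For what it's worth, the modulo-360 / rev-count bookkeeping mentioned above is small enough to sketch in text. Here's a minimal illustration in Python (LabVIEW diagrams can't be pasted inline); the function name and the assumption that the needle moves less than half a turn between reads are illustrative, not from any existing XControl:

```python
def update_rev_count(prev_angle_deg, revs, new_angle_deg):
    """Track total revolutions from modulo-360 readings.

    Assumes the needle moves less than 180 degrees between successive
    reads, so any apparent jump bigger than that means we crossed the
    0/360 seam rather than really sweeping the long way around.
    """
    delta = new_angle_deg - prev_angle_deg
    if delta > 180.0:       # e.g. 10 -> 350 while moving backwards: wrapped below 0
        revs -= 1
    elif delta < -180.0:    # e.g. 350 -> 10 while moving forwards: wrapped past 360
        revs += 1
    total_deg = revs * 360.0 + new_angle_deg
    return new_angle_deg, revs, total_deg
```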
  3. Sorry, don't have LV here at my network PC so haven't been able to look at the attached vi. I'd probably approach this as retriggerable single-pulse generation. The Z-index signal would be wired / configured as the start trigger and the A channel would be configured as the timebase used to count #low ticks and #high ticks. You might also need to configure # ticks initial delay = #low ticks for proper behavior. I'd set #high ticks = 2 (the minimum allowed) to be ready for the next trigger as quickly as possible. The upside of this method is that you can change pulse specs on the fly, without stopping the task. The downside is that you can't use the extreme values of phase delay, only [2, 2046]. -Kevin P.
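For reference, here's a rough sketch of that retriggerable-pulse setup using the present-day nidaqmx Python API (which didn't exist at the time of this post, so treat it as an approximation of the equivalent DAQmx property settings); the device, counter, and PFI terminal names are placeholders:

```python
import nidaqmx
from nidaqmx.constants import Edge

phase_delay_ticks = 1000    # placeholder: desired delay, in encoder-A ticks

task = nidaqmx.Task()
task.co_channels.add_co_pulse_chan_ticks(
    "Dev1/ctr0",
    source_terminal="/Dev1/PFI8",     # encoder A channel as the tick timebase
    initial_delay=phase_delay_ticks,  # "# ticks initial delay = #low ticks"
    low_ticks=phase_delay_ticks,
    high_ticks=2)                     # minimum allowed, ready again ASAP
# Z-index pulse as the (re)start trigger
task.triggers.start_trigger.cfg_dig_edge_start_trig("/Dev1/PFI9",
                                                    trigger_edge=Edge.RISING)
task.triggers.start_trigger.retriggerable = True
task.start()
```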
  4. I'd start (and quite likely end) by looking at NI's M-series multifunction boards. You get several analog inputs for force sensors and microphones, counters to generate a pulsetrain, and static DIO to specify direction or control a power relay. You can even generate a pre-defined trapezoidal motion profile using the M-series' hw-timed DO capabilities. (Counters alone can ramp, but depend on software-timing for the effective accel/decel rate. If you also want to generate a precise total # step pulses, you'd really have your work cut out for you...) -Kevin P.
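As a rough illustration of the trapezoidal-profile idea above, here's a small Python sketch that turns accel / cruise / decel into a per-step list of pulse periods while guaranteeing an exact total step count; the function and parameter names are made up for the example, and turning the periods into a hw-timed DO waveform is left out:

```python
def trapezoid_periods(total_steps, start_hz, max_hz, ramp_steps):
    """Per-step pulse periods (seconds) for a trapezoidal velocity profile.

    Ramps linearly from start_hz (> 0) up to max_hz over ramp_steps pulses,
    cruises, then ramps back down, while emitting exactly total_steps pulses.
    """
    ramp_steps = min(ramp_steps, total_steps // 2)
    periods = []
    for i in range(total_steps):
        if i < ramp_steps:                       # accelerate
            f = start_hz + (max_hz - start_hz) * (i + 1) / ramp_steps
        elif i >= total_steps - ramp_steps:      # decelerate
            f = start_hz + (max_hz - start_hz) * (total_steps - i) / ramp_steps
        else:                                    # cruise
            f = max_hz
        periods.append(1.0 / f)
    return periods
```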
  5. Hmmm, curiouser and curiouser... I've only toyed around briefly so I don't have systematic charts for all the variations. But I kept seeing smaller (faster) times for Frame 1, the implicit coercion. For example, by simply taking the code as posted, enabling auto-indexing on the array at the For Loop boundaries, and making the input array large enough to matter, I got the screenshot below. I ran this on both LV 7.1 and 8.2 with similar behavior. I wonder if it's CPU-dependent somehow -- mine's an AMD Athlon XP... Maybe EVERYBODY gets to be right! :thumbup: -Kevin P.
  6. Hmmm. The few times I was really trying to shave down the microseconds, I measured better speed with the coercion dot. As I recall, I was testing a histogram-like binning algorithm. The coercion in question was for an array index that was calculated in floating-point. For some reason, an explicit conversion to i32 ran very slightly, but quite consistently, slower than leaving the coercion dot. Still, I almost always do my coercions explicitly anyway. -Kevin P.
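A loose Python/NumPy analogue of that binning pattern, just to show where the float-to-integer conversion sits in the algorithm (this is not the original benchmark code, and the names are illustrative):

```python
import numpy as np

def bin_counts(samples, lo, hi, num_bins):
    """Histogram-style binning where the bin index is computed in floating
    point and explicitly converted to an integer before indexing."""
    samples = np.asarray(samples, dtype=np.float64)
    bin_width = (hi - lo) / num_bins
    idx = ((samples - lo) / bin_width).astype(np.intp)   # explicit float -> int
    idx = np.clip(idx, 0, num_bins - 1)                  # in-range and coerce
    hist = np.zeros(num_bins, dtype=np.int64)
    np.add.at(hist, idx, 1)                              # accumulate counts
    return hist
```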
  7. Yeah, the M-series are pretty nice. I can confirm that they support simultaneous hw-timed DI and DO, where each uses a subset of the bits from the hw-timed port. I've got a mid-range board with 32 hw-timed bits, and have an app that generates 8 outputs on the leading edge of a clock while capturing 24 inputs on the trailing edge of the same clock. -Kevin P.
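A sketch of that shared-clock DI/DO arrangement using the current nidaqmx Python API (not what the original app used; the device, port split, and clock terminal names are placeholders, and the actual write/read calls are omitted):

```python
import nidaqmx
from nidaqmx.constants import AcquisitionType, Edge, LineGrouping

SAMPLE_CLK = "/Dev1/PFI0"   # placeholder: terminal carrying the shared clock

# 8 outputs updated on the leading (rising) edge of the clock
do_task = nidaqmx.Task()
do_task.do_channels.add_do_chan("Dev1/port0/line0:7",
                                line_grouping=LineGrouping.CHAN_FOR_ALL_LINES)
do_task.timing.cfg_samp_clk_timing(1000.0, source=SAMPLE_CLK,
                                   active_edge=Edge.RISING,
                                   sample_mode=AcquisitionType.CONTINUOUS)

# 24 inputs latched on the trailing (falling) edge of the same clock
di_task = nidaqmx.Task()
di_task.di_channels.add_di_chan("Dev1/port0/line8:31",
                                line_grouping=LineGrouping.CHAN_FOR_ALL_LINES)
di_task.timing.cfg_samp_clk_timing(1000.0, source=SAMPLE_CLK,
                                   active_edge=Edge.FALLING,
                                   sample_mode=AcquisitionType.CONTINUOUS)
```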
  8. Ya know, I never sat & thought hard about it before -- I just observed that I32<-->U32 "conversions" would leave all the bits alone and simply reinterpret them, typecast-style. It's probably more proper to use the typecast function, but it's more of a pain to navigate to, requires more steps to implement, and uses more space. So I think I agree with you philosophically, but I happen to rely on that typecast-like behavior very extensively in a lot of my counter data acq apps. For example, I normally take encoder data as big arrays of U32 counts which get slung around my apps through queues or whatever. I very often want a signed position count value though, so I just throw in a "to I32" conversion function knowing that no new array is allocated and the array elements don't even need to be individually manipulated. It's a CPU freebie that's become S.O.P. for me. It'd be nice if there were a more prominent native way to get "in range and coerce" behavior out of nodes as simple and small as the convert functions, provided that the existing ones maintain their current behavior... -Kevin P.
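A NumPy analogue of the distinction being described, purely for illustration: a bit-reinterpreting view versus a value conversion that allocates a new array and touches every element:

```python
import numpy as np

counts_u32 = np.array([0xFFFFFFFE, 1, 0x80000000], dtype=np.uint32)

# Reinterpret the same bits as signed -- no copy, no per-element work
# (analogous to the observed "leave the bits alone" behavior described above).
signed_view = counts_u32.view(np.int32)

# A value conversion, by contrast, allocates a new array and converts each element.
signed_copy = counts_u32.astype(np.int64)

print(signed_view)   # [-2  1  -2147483648]
print(signed_copy)   # [4294967294  1  2147483648]
```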
  9. Adding a couple more tidbits: I just remembered the RTFIFO that was (is?) downloadable from ni.com back around LV RT v6. It IS lossy. As I recall though, you would have to roll your own loop to retrieve the entire FIFO buffer -- I don't *think* it had functions similar to the Queue's "flush" or "status" which can return all the elements at once. I'm not sure if that version would still be compatible with LV RT 7+, but it would probably still work on the Windows side. I personally haven't started using LV 8.x, but was at the NI Tech Symposium yesterday and noticed that Shared Variables have an option to allow buffering with overwriting. The guy doing the presentation hinted that under the hood, a Shared Variable configured that way would essentially implement an RTFIFO. I haven't played with Shared Variables yet, so don't know if there might be a way to retrieve the entire buffer at once. I also recall an NI forum post about using the circular overwrite built into a chart control. The suggestion was to hide the chart on the front panel to try to prevent any expensive screen redraws. Then the "History" property can give you the entire buffer all at once. I haven't tested this out for speed yet, but I kinda suspect there's still gonna be a lot of overhead in a chart control even when its panel is hidden. -Kevin P.
  10. My most common use case for queues of arrays results from DAQmx data acq. I set up a hardware monitor thread that pushes data straight from a DAQmx Read into an Enqueue call (no data forking). Then another thread Dequeues so I can write to file or do some processing. Any benchmarking I've done makes it seem pretty efficient, but I'd like to confirm: is this how you'd recommend using queues to separate data acq from processing? The other thing I typically do explains why I'd really like native queue-like support for lossy circular buffers. I generally have some type of live display going on that gives the operator a reasonable clue about what's happening. It isn't the "real" analysis, just a brief flickering view into how the test is going. What I wind up doing is this: when Dequeueing the DAQ data for file writes, I also decimate it and write it to a homemade circular buffer. Inside the buffer function, I have to copy data values from the input to my internal shift register array. Question: what's the most efficient way to structure the output indicator for such a homemade circular buffer? How does LV decide whether to hang onto and reuse memory space or allocate a new output array on every call? Are there ways to force its hand? I remember some old Traditional NI-DAQ calls under RT where you could wire in the right size input array whose actual memory space was used for filling in the output array values. Would this still be the best way to handle my homemade circular buffer? My RT experience tends to make me look for ways to minimize unnecessary memory allocations... -Kevin P.
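A bare-bones Python sketch of that acquisition/processing split, with a lossless queue for the real data path and a lossy fixed-size buffer for the decimated display copy; read_chunk and write_to_file are stand-ins for the DAQmx read and the file write, not real API calls:

```python
import queue
from collections import deque
from threading import Event, Thread

data_q = queue.Queue()              # lossless hand-off: acquisition -> processing
display_buf = deque(maxlen=16384)   # lossy circular buffer feeding the live display

def acquisition_loop(read_chunk, stop):
    # read_chunk() stands in for a DAQmx Read returning a fresh array each call
    while not stop.is_set():
        data_q.put(read_chunk())    # enqueue straight from the read, no forking

def processing_loop(write_to_file, decimation, stop):
    while not stop.is_set():
        try:
            chunk = data_q.get(timeout=0.1)
        except queue.Empty:
            continue
        write_to_file(chunk)                      # the "real" data path
        display_buf.extend(chunk[::decimation])   # decimated copy for the display

# Usage sketch:
# stop = Event()
# Thread(target=acquisition_loop, args=(my_read, stop), daemon=True).start()
# Thread(target=processing_loop, args=(my_write, 10, stop), daemon=True).start()
```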
  11. One other little note: when you have a queue of 1D arrays of <whatever>, each enqueue operation can pass in a different size array, i.e., an array of 49 elements, followed by an array of 3117 elements, an array of 3 elements, etc. Of course, this may not necessarily be the friendliest thing to do to your downstream data consumer... -Kevin P.
  12. I was recently musing about something similar on the NI forums. Specifically, I've been finding fairly frequent need for a behavior more like lossy circular buffering. I'd like to fix the size of the circular buffer, and then the freshest data keeps circularly overwriting the oldest data. The UI thread could then asynchronously perform analysis on the most recent N samples, acting like a sliding window. The other behavior I'd like in a circular buffer would be the ability to query data in a manner like the DAQ circular buffers, i.e., specify Read Marks, Offsets, # Samples to Read, etc. The trouble with writing little wrappers that accomplish something similar using queues is the need to keep re-writing them for different datatypes as the need arises. Besides, the code to retrieve the most recent 1024 samples in a size 16384 buffer seems pretty clunky using queues. -Kevin P.
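For illustration, a minimal Python/NumPy sketch of such a lossy circular buffer with a "most recent N samples" query; the class and method names are invented for the example and it makes no claim about how a native LabVIEW implementation would manage memory:

```python
import numpy as np

class CircularBuffer:
    """Fixed-size lossy buffer: the freshest data overwrites the oldest."""

    def __init__(self, size, dtype=np.float64):
        self.buf = np.zeros(size, dtype=dtype)
        self.size = size
        self.count = 0          # total samples ever written (acts like a write mark)

    def write(self, data):
        data = np.asarray(data, dtype=self.buf.dtype)[-self.size:]
        start = self.count % self.size
        n = len(data)
        end = start + n
        if end <= self.size:
            self.buf[start:end] = data
        else:                   # wrap around the end of the storage array
            split = self.size - start
            self.buf[start:] = data[:split]
            self.buf[:end - self.size] = data[split:]
        self.count += n

    def read_latest(self, n):
        """Return the most recent n samples, oldest first (sliding window)."""
        n = min(n, self.count, self.size)
        end = self.count % self.size
        idx = np.arange(end - n, end) % self.size
        return self.buf[idx]
```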
  13. A1: Already handled well by others -- would agree that Notifiers may be a better option. A2: I've got some old library code based on occurrences that I haven't felt like changing and have had to deal with similar persistence issues with occurrences. Here's what I've done: Call "Wait on Occurrence" once with "ignore prev" set to F and with a minimal timeout like 1 msec. (I've actually tested it some with 0 msec without seeing problems, but have felt safer using 1 msec). Any old stale occurrence firings will result in a timeout=F output while clearing the occurrence. If timeout=T then all you've done is wasted 1 msec. You can then loop over your normal "Wait..." with ignore=F and a 50 msec timeout. -Kevin P.
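A very loose Python analogue of that "swallow the stale firing first" pattern, using threading.Event in place of an occurrence (the mapping is approximate and the names are illustrative):

```python
import threading

def wait_fresh(evt: threading.Event, stop: threading.Event, timeout=0.05):
    """Wait for a *new* signal, discarding any stale one first.

    Mirrors calling Wait on Occurrence once with a ~1 ms timeout to swallow a
    leftover firing, then looping on the normal 50 ms wait.
    """
    if evt.wait(timeout=0.001):   # stale firing present: consume it...
        evt.clear()               # ...so only signals from here on count
    while not stop.is_set():      # normal wait loop
        if evt.wait(timeout=timeout):
            evt.clear()
            return True
    return False
```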