pschmal

Members
  • Posts

    8
  • Joined

  • Last visited

pschmal's Achievements

Newbie

Newbie (1/14)

0

Reputation

  1. I'm extracting the DFT from a signal I'm acquiring from one of the HS Digitizers (USB). The niScope auto-setup does OK, but I could really do with more resolution on my DFT. I've tried increasing the minimum number of samples, but that seems to just increase the sampling rate, so the actual time sampled stays the same. I can't find a way to set the maximum sampling rate, only the minimum. The current flow is Auto Setup -> Change Property (Min Samples) -> Measure Waveform. What magic order do I need to set this?
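For what it's worth, the symptom above follows directly from the DFT bin spacing: frequency resolution depends only on the total time acquired, not on how many samples are crammed into that time. A minimal sketch with hypothetical numbers (not the actual digitizer settings):

```python
# DFT bin spacing df = fs / N = 1 / T: only the total time sampled matters.
fs = 1.0e6            # sampling rate in Hz (hypothetical)
n = 10_000            # number of samples
t = n / fs            # total acquisition time: 10 ms
df = fs / n           # bin spacing: 100 Hz

# If raising "min samples" makes the driver raise fs proportionally,
# the bin spacing is unchanged -- exactly the symptom described above.
fs2, n2 = 2 * fs, 2 * n
df2 = fs2 / n2        # still 100 Hz: no extra resolution
```

So the knob that actually helps is whichever one lengthens the record (more samples at the same rate, or a lower rate), which is why pinning the sampling rate after Auto Setup matters.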
  2. I have found a faster way than pixel-by-pixel polar-to-rect conversion. That approach would make the for loop run about 200,000 times -- obviously way too much. I experimented with filled arcs, but the for loop would still have to run around 20,000 times -- more if a finer-resolution signal were desired.

     So now I have tried replacePixelLine.vi. Each line is drawn with one end at the center of the picture, while the end-of-line coordinate sweeps around in a circle. Each channel becomes the pixel color; as I sweep past x degrees, I switch to a different channel and start coloring with that data. The for loop now runs 1,000 times per time step, which is not a noticeable slowdown on my computer (I think my wait command is still the main contributor!).

     However, replacePixelLine has some semi-strange behavior. When drawing a diagonal line (most lines in a polar plot!), the stair-stepping is not what I expected. Instead of (x,y) -> (x+1, y+0) -> (x+0, y+1), it does (x,y) -> (x+1, y+1). This means that at a 45-degree angle, the number of pixels colored is only ~71% of the line's length. To further distort the plot, instead of scaling the pixel values to the line length, replacePixelLine.vi just drops off the unused pixels. So instead of a polar intensity plot, I get a bizarre cross between Cartesian and polar, where my angular bins do get smaller as they regress toward the origin, but the data makes a square shape! Any advice, or will I have to create my own replacePixelLine.vi (which will be slow)?

     PS: The edit post function is really buggy.
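The ~71% figure matches what pure diagonal stepping predicts: a line stepped one pixel diagonally per iteration sets max(|dx|, |dy|) + 1 pixels, while its on-screen length is the Euclidean distance, and at 45 degrees the ratio is 1/sqrt(2). A quick check (arbitrary endpoint, not tied to any particular VI):

```python
import math

x1, y1 = 70, 70                       # 45-degree line from the origin
pixels = max(abs(x1), abs(y1)) + 1    # diagonal stepping: 71 pixels set
length = math.hypot(x1, y1)           # on-screen length: ~99 pixels
ratio = pixels / length               # ~0.72, i.e. the ~71% observed above
```

A rasterizer that also takes the intermediate (x+1, y) / (x, y+1) steps would set roughly one pixel per unit of length instead, which is the behavior the post expected.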
  3. It turns out I was generating each target 16 times, when they only have to be generated once. Any hints or directions on how I can do a polar intensity plot? A good analogue is a weather radar display.
  4. Hello all, right now I am working on a system simulation in LabVIEW. The process flow is Generate input signals -> Process time-domain signals -> Display on FP. Once the hardware prototypes come in and work, I can replace "Generate input signals" with a real signal acquired from a digitizer.

     I am trying to run the simulation in 'real time', where my simulation step and elapsed time coincide. My goal is 50 ms per step; it may be unrealistic. Basically, I have 16 channels in this incoming signal. I generate a Gaussian noise signal 32 times per step (1024 points) to add to the generated signal. I then FFT all 16 channels using the N-channel FFT that outputs magnitude and phase. Right now it seems to be running at 250-300 ms per step, but it's hard to measure since 'elapsed time' seems to only measure whole-second increments.

     Is there a better way to generate noise? I generate twice, for both thermal noise and 1/f noise. Is there a VI that generates noise in the frequency domain, so I could just add it after the FFT? If it's the FFT that is taking the most time, is this the correct block to use? I like that it outputs the frequency domain in a way that I can push directly to the display without having to mess with it too much. But it's weird that each block handles only a maximum of 9 channels (such a weird number...).

     An unrelated question: is it possible to do an intensity graph on a polar plot? I could implement it myself, but it would surely be computationally slow.
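On generating 1/f noise directly in the frequency domain: one common trick, sketched here with NumPy as a stand-in for the LabVIEW blocks (the unit scaling is a placeholder), is to draw a complex Gaussian value per bin and shape the magnitudes by 1/sqrt(f), which gives a power spectrum that falls off as 1/f and can be added straight onto the signal's FFT:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1024                                  # points per step, as above

freqs = np.fft.rfftfreq(n, d=1.0)         # bin frequencies (unit sample rate)
spec = rng.normal(size=freqs.size) + 1j * rng.normal(size=freqs.size)
spec[1:] *= 1.0 / np.sqrt(freqs[1:])      # power falls off as 1/f
spec[0] = 0.0                             # no DC component

# Add `spec` to the signal's spectrum directly, or inverse-transform it
# if time-domain 1/f noise is needed:
pink = np.fft.irfft(spec, n=n)
```

Generating in the frequency domain this way skips one forward FFT per noise source per step, which may help if the FFTs dominate the loop time.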
  5. Hi ESST, thank you for your reply and for pointing out the LabVIEW driver. I am also looking at some multi-input analog-to-USB converters. As long as the manufacturer provides DirectShow filters, I should be OK to use NI-IMAQ for USB, right?
  6. I would like to connect an analog camera to LabVIEW. The analog frame grabbers NI offers are somewhat expensive, and I'm running out of PCI slots. Would getting an analog-to-USB converter (such as http://www.i-cubeinc.com/pdf/frame%20grabbers/TIS-DFGUSB2.pdf) and then using NI-IMAQ for USB Cameras make sense? I have tried to get to the NI-IMAQ for USB site, but it seems to be down at the moment. Thank you!
  7. QUOTE (Neville D @ Jun 9 2009, 03:30 PM) Hi all, it is a Camera Link camera. I just took the raw intensity of the pixels and wrote my own linear-interpolation piece for each RGB channel. I think this is OK for our use, but there are better interpolation schemes that I wish LabVIEW could support natively. With linear interpolation you get some color bleed around sudden edges for the blues and reds; it's especially noticeable when going from dark gray to white. Thanks.
  8. Hi all, I have a camera from JAI that does not have LabVIEW drivers. I created one using the generator. Unfortunately, the 'Bayer' pattern of this camera does not match any offered in the LabVIEW Vision modules: it has more green than the Bayer ratio of 1:2:1. The Bayer patterns in LabVIEW are picked from an enumeration in a DLL, so it's pretty hard-coded on that front.

     My strategy is to divide the un-Bayered image into pixels that should be red, blue, and green. Then I will turn these pixel groups into red, green, and blue planes, reassemble them, and do interpolation. Is this the best way? More and more sensor manufacturers are using ratios other than the old-style 1:2:1; it is sort of surprising there is no support for this in LabVIEW currently.
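The split-into-planes strategy can be sketched like this (NumPy stand-in for the LabVIEW Vision calls; the 2x4 tile below is a made-up green-heavy layout, not JAI's actual pattern, and the fill step is a crude placeholder for real neighbor interpolation):

```python
import numpy as np

def demosaic_planes(raw, tile):
    """Split a raw mosaic into one plane per color, then fill the
    missing pixels. `tile` holds color codes tiled over the image."""
    h, w = raw.shape
    th, tw = tile.shape
    cfa = np.tile(tile, (h // th + 1, w // tw + 1))[:h, :w]
    planes = {}
    for color in ("R", "G", "B"):
        mask = cfa == color
        plane = raw.astype(float).copy()
        # Crude fill: missing pixels get the mean of the known ones.
        # A real version would interpolate from nearby same-color pixels.
        plane[~mask] = raw[mask].mean()
        planes[color] = plane
    return planes

# Hypothetical green-heavy 2x4 tile: 6 green, 1 red, 1 blue per 8 pixels
tile = np.array([["G", "R", "G", "G"],
                 ["B", "G", "G", "G"]])
raw = np.arange(16.0).reshape(4, 4)
planes = demosaic_planes(raw, tile)
```

The per-color mask generalizes to any tile ratio, which is the point: the pattern lives in data rather than in a DLL enumeration.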