
PaulG.

Members
  • Posts

    822
  • Joined

  • Last visited

  • Days Won

    22

Posts posted by PaulG.

  1. This is terribly unscientific but I have found that a VI that doesn't draw right, doesn't run right ...

    I tend to agree. I've spent literal weeks trying to debug a poorly drawn (not drawn by me, of course :rolleyes: ) LV block diagram that I would have been better off just deleting entirely and starting from scratch. :throwpc:

  2. This is, indeed, an excellent place to look if you're not UML initiated. Maybe we should shift this part of the conversation to a new thread about the pros and cons of UML, as opposed to its tools, so we don't confuse the two? What do you think, Mike?

    :oops: Didn't mean to redirect the conversation. UML is fairly new to me. I might be dangerous with any UML tools. :)

    :book: PaulG. looking up UML and UML tools. "I'll be back".

  3. First place I looked: Wikipedia, Unified Modeling Language

    Not really a "tool" but a quick summary of UML, pros and cons. A real eye-opener. I have a feeling you and I are going to have some discussions regarding this. I'm all for GOOP. It makes sense and my attitude and style have naturally been leading me in that direction. However, UML, at first glance, seems to be a little more complicated than it needs to be. I'll withhold judgment until I've worked with it for a while. Until then, expect me to ask a lot of questions and have a few opinions. :)

  4. Reminds me of an old bumper sticker that was near and dear to my heart:

    "Support your local musicians, blow up a disco."

    B.

    Another funny one ... the scene from the movie Airplane!, where the plane is cruising towards the radio tower, the DJ says "This is Wsomethingsomething, where disco lives forever!" Then CRASH! The plane knocks over the tower ...

    :laugh:

  5. Disco? ...sorry before my time (I'm young I know)

    Disco? Did someone say "disco"? For me disco is more fun now than it was when it was supposedly "cool". Just a few weeks ago I discovered the Windows "dancers". They're cute little guys/gals/couples who dance on your desktop to your music. Kenny is the 'disco' dancer. I'm more of a jazz guy, but in my collection of music tracks I have a bit of disco. Watching Kenny dance to one of my all-time favorites, the O'Jays' "Love Train" (club epic version), is a hoot! :D

  6. In case this has not been mentioned: I like the new file saving features. I can rename a vi without having to remember to delete the old one. Sometimes I want to rename a vi into something a little more descriptive. Sure reduces the number of 'old' vi's in my directory.

  7. Well, it sounds like you might have half the challenge behind you, and half the challenge ahead of you. Without knowing your particular display needs, it's hard to give specific advice, but I'll shoot off some "random" suggestions nonetheless.

    20 MBytes/sec of information is obviously a lot for the "average human being" to digest, visually. :wacko::blink: Consider your analysis carefully, and then decide what really needs to be displayed to the user "real time". Keep in mind that a user is unlikely to keep their attention on a screen for more than maybe a minute, at best, on a good day, with just enough but not too much coffee, etc. :rolleyes: You may be able to use these facts to your advantage. I'll guess that you have waveforms of 20,000 points each. Your monitor is on the order of about 1000 or so pixels wide, so you could bin your waveform into one of fewer than 1000 points, and then keep track of a max, mean, and min of each bin. It might be possible to do that fast enough, but it means being very careful in how you do your operations. Additions are quick, but multiplications are slow, at least at these timescales. You should only update the screen once a second or so at the fastest - anything faster may look sexy, ;) but it's not going to impart enough of a lasting impression.
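    The min/mean/max binning idea above can be sketched outside LabVIEW. This is a hypothetical Python/NumPy helper (the name `bin_waveform` and the bin count are my assumptions, not anything from the thread) showing how a 20,000-point record collapses to ~1000 display points using only cheap reductions:

```python
import numpy as np

def bin_waveform(wave, n_bins=1000):
    """Reduce a long waveform to per-bin min/mean/max for display.

    Hypothetical sketch: bins a 20,000-point record down to n_bins
    points so it fits a ~1000-pixel-wide graph.
    """
    # Trim so the length divides evenly into bins
    usable = (len(wave) // n_bins) * n_bins
    bins = wave[:usable].reshape(n_bins, -1)
    # One envelope triple per bin: what the eye would see anyway
    return bins.min(axis=1), bins.mean(axis=1), bins.max(axis=1)
```

    Plotting the min and max envelopes preserves peaks that plain decimation would drop, which matters for pulse data like this.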

    I'd strongly suggest going with the two parallel loop shared queue solution, with your analysis in one loop and your daq in the other. It might be possible that you would then have a third parallel loop, which would be your display loop, that would share a display queue with the analysis loop that would be fed "well distilled" information once a second or so as an indicator of what is going on "under the hood".

    Do NOT try to display OR do data analysis in the data acquisition loop, please. I'm taking a wild guess that this is what kicks you into the 2300-2400 msec range. If you need to display, feed a queue and let a parallel loop eat from the queue, and let that loop do analysis, update the display, take out the garbage... :) Displaying and data analysis in the data acquisition loop is fine in slower, more pedestrian applications, but it will simply not do for high speed applications such as the one you are working on.
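    The two-loop, shared-queue layout described above is the classic producer/consumer pattern. As a minimal sketch (Python threads standing in for LabVIEW parallel loops; the record contents are fake placeholders), the acquisition side only reads and enqueues, and the analysis side drains as fast as it can:

```python
import queue
import threading

data_q = queue.Queue(maxsize=100)   # DAQ loop -> analysis loop
done = threading.Event()

def daq_loop():
    """Producer: acquire only, push raw records into the queue."""
    for i in range(10):              # stand-in for hardware reads
        record = [i] * 8             # fake 8-sample record
        data_q.put(record)           # blocks if analysis falls behind
    done.set()

def analysis_loop(results):
    """Consumer: drain the queue; display would hang off a third loop."""
    while not (done.is_set() and data_q.empty()):
        try:
            record = data_q.get(timeout=0.1)
        except queue.Empty:
            continue
        results.append(sum(record))  # placeholder "analysis"

results = []
t1 = threading.Thread(target=daq_loop)
t2 = threading.Thread(target=analysis_loop, args=(results,))
t1.start(); t2.start(); t1.join(); t2.join()
```

    The bounded queue is the key design choice: it decouples the loops' timing while still back-pressuring the producer instead of growing memory without limit.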

    I hope this helps!

    -Pete Liiva

    p.s. Is this a LIDAR application? It sounds not entirely unlike some things we do around where I work.

    Our application is an ultrasonic NDT application. I think I have a lot to work with this morning. I have to have some visual display during acquisition. However, the visual representation only needs to be just that. I can decimate the array and display only the decimated portion. I can cut it down by a factor of at least 20, refresh the display only once per second, interpolate the graph and it looks fine. Also, the data analysis happens after each series of scan pulses so I don't have to worry about that ... for now. Thanks for your help.

    PaulG.

  8. Hmm... I don't think LabVIEW RT is going to do a bit of good; I suspect that there would not be any drivers for this board. Perhaps the "state machine" is not doing what you need it to do. For DAQ systems I prefer a two parallel loop process, with one loop doing nothing but acquiring the data and stuffing it in a queue, and the other loop feeding off of the same queue as fast as it can.

    Here is a quick and dirty idea to see if you even have a chance. Try a simple loop where you acquire the data at the rate you intend to in real life, and do nothing with it. Put in the loop the bare modicum of timing checks to see if things loop fast enough to imply that the data is being acquired that fast, and maybe set a display outside the loop to look at the last data set acquired AFTER the loop is terminated to verify that you were getting real data. Forget about the state machine, data processing, etc. Just initialize an array fed through a shift register to the 20Ksample size and replace that over and over. If you keep it simple enough, you ought to see if it is even possible to do what you require with your setup.
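    The "quick and dirty" throughput check above can be illustrated like so. This is a minimal sketch, not the poster's actual benchmark: a Python stand-in where the buffer is preallocated once (the analogue of initializing the array through a shift register) and each "record" only replaces data in place, with a single timing check around the whole loop:

```python
import time
import numpy as np

N_RECORDS = 2000        # e.g. 2 s of 1 kHz triggers (assumed figure)
REC_LEN = 20_000        # 20 usec at 1 GS/sec

buf = np.zeros(REC_LEN)           # preallocate once, never reallocate
t0 = time.perf_counter()
for i in range(N_RECORDS):
    buf[:] = i                    # stand-in for the driver read
elapsed = time.perf_counter() - t0
# Inspect the last record only AFTER the loop, as suggested above
print(f"{N_RECORDS} records in {elapsed * 1000:.0f} ms, last value {buf[0]}")
```

    If even this bare loop cannot keep up with the trigger rate, no amount of state-machine restructuring will help; the bottleneck is below the application layer.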

    Would you be willing to show your code for people to look at to see if there are any "gotchas" in the diagram? You might want to do screen captures, since the code for your code interface calls will typically be a pain to properly transfer with the actual vi if you posted that.

    -Pete Liiva

    I managed to get my 20 usec of data at 1 GS/sec at a 1 kHz pulse rate. I made some modifications: put a "0" wait state in the iteration counter loop, put some millisecond timers from where the DAQ starts to where it stops, and with 2 seconds (2000 triggers) of 1 kHz pulses I'm down to 2003 milliseconds, and it's very consistent. That's as close to real time as I need. And that's acquiring the data and replacing the data subsets in an array. But once I try to display it, even only every 500 milliseconds, it goes up to 2300-2400 msec. Displaying the data is a major roadblock. I suppose I could try displaying a decimated array, but I don't know how much that will buy me. Would a more powerful video card help with this by taking some of the display work away from the CPU? Video cards are cheap, and this would be a lot cheaper than another day or two (or three or four) of development time.

    PaulG.

  9. PaulG,

    Would it be possible to tell us the hardware you are using? 20MBytes/sec of sustained/undisturbed transfer for an undefined amount of time is likely to be an issue for a Windows based system, UNLESS there is a "generous" application of buffering involved.

    -Pete Liiva

    It's a Gage 82G. The board has 2 MBytes of FIFO-style memory and is capable of acquiring multiple records. Most likely it will only run for about 2-3 seconds at this rate at most, then process the data, then run again for 2-3 seconds. I need to do this for up to 30-60 minutes without crashing or running out of memory. I don't save all the data, just process certain portions of it using array offsets later in the code.

    PaulG.

  10. We are going though a Test Standardization effort at my workplace. LV architecture and Test Data Management are our major concerns. I wanted to know if you guys had some examples illustrating different LV architectures in action and if there were more presentations like NI's Design pattern doc i.e. on software engineering with LV.

    If could you point me to the right resources, I would appreciate it.

    Speaking from experience: I don't think you need to spend a lot of time and money coming up with documentation methods for your LV programs. I would highly recommend adopting a "state machine" as a standard LV program design for all top-level vi's and 2nd-level sub-vi's. 99% of anything you want to do in LV can be accomplished in a state machine, and they are inherently self-documenting. Simply mandate as "policy" that all programmers utilize state machines and that all sub-vi labels are "visible". Also, mandate as policy that all enum state machine controls will be strict type def's. When you "print entire documentation" of the "master" vi and all subsequent sub-vi's and controls, you have a sufficient and basic documentation package.

    I have an application that contains over 600 vi's. It was written utilizing a state machine in the primary and high-level sub-vi's and was documented in this way. It's very easy to pick up where someone else left off when making changes. I inherited the first revision (1.0a), and the only trouble I had making revisions was when a previous programmer didn't utilize these basic guidelines and I had no idea what he was doing. I've had to rewrite just about everything he did into this format. But now I'm at "1.9". 2.0 will be perfect and easily sold to ISO or anyone else who requires documentation. And if I get hit by a truck tomorrow :o the next guy could step right in and cover for me.
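    The recommended pattern (one while loop, a case structure, and a strict-typedef enum carrying the next state through a shift register) translates roughly to this hedged Python sketch. The state names and record counting are illustrative assumptions, not anything from the post:

```python
from enum import Enum

class State(Enum):
    """Analogue of the strict-typedef enum control."""
    INIT = 0
    ACQUIRE = 1
    ANALYZE = 2
    DONE = 3

def run(max_records=3):
    """One while-loop + case-structure state machine, in text form.

    Each pass through the loop executes one case and decides the
    next state, just like wiring the enum through a shift register.
    """
    state, records, log = State.INIT, 0, []
    while state is not State.DONE:
        log.append(state.name)            # self-documenting trace
        if state is State.INIT:
            state = State.ACQUIRE
        elif state is State.ACQUIRE:
            records += 1                  # stand-in for a DAQ read
            state = State.ANALYZE
        elif state is State.ANALYZE:
            state = State.ACQUIRE if records < max_records else State.DONE
    return log
```

    Because every transition is an explicit branch on a named enum value, the diagram (or here, the code) reads as its own flow chart, which is what makes the pattern "inherently self-documenting".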

  11. What would be the best way to acquire 20 usec of data at 1 GS/sec (20K points) at a 1 kHz pulse repetition rate? The board I am using seems to be capable, but the bottleneck appears to be in LV/Windows. Do I need RT for this? I have been at this for quite some time. I'm using a state machine; the vi "free runs" from one state to the next utilizing the external trigger (1 kHz) signal. There is no display, and all data points are placed in an array that I dimension before the DAQ sequence starts, using "replace array subset". My board is capable of "multiple records", but that seems to have little effect on performance. Most of the commands to the digitizer utilize code interface nodes. Have I hit the proverbial wall and need to brush up on my C?

    Thanks.

    PaulG.

    :throwpc:
