Posts posted by Cat

  1. I haven't done any exhaustive testing (like filling up the disk), but we've gotten good write speeds using a 2-disk SATA RAID. Is something like that an option?

    My platform is a laptop with one eSATA port -- so only 1 drive. For rev2 I might be able to look at getting an ExpressCard SATA card with a couple ports and trying a RAID setup. Or even just 2 disks -- 1 to archive the raw data, and the other for the partially processed data. For the moment, I'm probably going to save the raw data to the eSATA drive, off-load it during down time to a USB drive, and do all of the post-processing there.

    Thanks for your input.

  2. If I understood correctly, you want to connect an RS-232 device to a USB device, without a computer in between?

    I don't understand how this is supposed to work on a technical level. USB and serial have (aside from the "S" in the name) nothing in common.

    I believe Bjarne wants to connect a USB device to a computer that is so old it only has a serial port and no USB ports. I'm not sure there is such a converter available (there are multiple converters for going the other way).

    This reminds me of those posts we used to get on LAVA regarding doing DAQ with the parallel port.

    BTDT. Long before LAVA was born. :)

  3. I am looking for an RS232-to-USB converter and can't find one.

    Hmm... lots of USB-port-to-DB9-device converters out there, not so many DB9-port-to-USB-device converters.

    Do you have room in your PC for a USB card (PCI)? You can get one of those for under $20.

  4. [...]

    Second, wouldn't your recommendation to Cat defeat the purpose of the flush not automatically deallocating?

    That's what I'm hoping. :shifty:

    I've coded up a VI that:

    1) reads the number of elements remaining in the queue

    2) flushes the queue

    3) creates an empty array, size of remaining elements

    4) enqueues array

    5) flushes the queue

    I added step 5 in order to get those empty elements out of the queue, since I'm going to need it again on the next data run. I'll only call this at the end of a data run, since I don't want empty data getting into the recorded data stream (yes, I could check for it, but I'm attempting to disturb the "real" code as little as possible). One of the issues with this design is that it's very possible that the number of elements in the queue when I read it is not the max number that were ever enqueued. I would need to carry along a holder for the max value to implement this right. (A rough sketch of the sequence is below.)
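    Since LabVIEW diagrams don't paste into a post, here's the same sequence as a rough Python sketch. The deque is just a stand-in for the LabVIEW queue (steps 1, 2, and 5 map to Get Queue Status and Flush Queue); Python won't reproduce LabVIEW's buffer reuse, this only mirrors the control flow:

        from collections import deque

        def end_of_run_flush(q, element_size):
            """Rough analogue of the five steps above."""
            remaining = len(q)                  # 1) elements left in the queue
            q.clear()                           # 2) flush the queue
            filler = [bytearray(element_size)   # 3) empty elements, same count
                      for _ in range(remaining)]
            q.extend(filler)                    # 4) enqueue them to keep the
                                                #    allocation "warm"
            q.clear()                           # 5) flush again so no empty
                                                #    data reaches the next run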

    BUT, my real problem is being created by my disk not being able to keep up with my data rate. I am saving data in both its raw form (data and lots of status info about the data packet) and in partially processed form, in order to decrease the post-processing time. This amounts to streaming to disk at somewhere in the neighborhood of 56MB/s. My disk can only keep up with that until it's about half full. My fallback position is to just save the raw data and spend more time post-processing after the fact. That cuts my write rate almost in half.
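    To put rough numbers on why that backlog eats memory so fast: only the 56 MB/s figure comes from the post; the sustained disk rate and memory budget below are assumptions for illustration.

        # Back-of-envelope: how fast the write queue backs up once the
        # disk can no longer keep pace with the incoming data.
        data_rate = 56.0      # MB/s entering the write queue (from the post)
        disk_rate = 40.0      # MB/s the half-full disk sustains (assumed)
        ram_budget = 2048.0   # MB available to absorb the backlog (assumed)

        growth = data_rate - disk_rate          # queue grows at 16 MB/s
        print(f"~{ram_budget / growth:.0f} s until memory is exhausted")  # ~128 s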

    I wrote a little VI to verify for myself that LV does indeed reuse the memory allocated for the queue. As I said in my original post, this doesn't seem to be happening in my project code. The queue backs up, lots of memory is allocated, the queue empties, no memory is deallocated, the queue starts to back up again and more memory is immediately allocated. Since this whole process also involves writing to disk, I'm wondering at this point if it doesn't have something to do with buffering before the write to disk. But that question would probably be for another post...

    Thanks for all the input, everyone, and AQ, I'm looking forward to whatever summary info you can give us on queues and notifiers.

    Cat

  5. I need some help with queue memory management. In my current project, it seems as though if a queue gets backed up, it grabs memory (obviously) but then after the dequeue process catches up, the memory is not released.

    I have an application that uses several queues to pass data around. One of these queues receives processed data, and in another loop the data is dequeued and saved to disk. I am using a 1TB disk that is almost full and quite fragmented (purposefully). The closer I get to the disk being full, the slower the writes to disk happen. My write data queue gets backed up and starts gobbling up memory. Each element of the queue is over 1.5MB, so it doesn't take much of a backup to use up a lot of memory.

    All of this is understandable. What I don't get is that if the disk manages to catch up and the queue empties, all that memory that was grabbed to store the queue (I assume) is still taken. This becomes a real problem if more than a few of these slow disk cycles occur and it eats up all the memory in my machine.
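    For anyone who wants to poke at the pattern outside LabVIEW, here's a minimal producer/consumer sketch in Python (the element size comes from the post; the count and delay are made up). Interestingly, CPython shows a similar symptom: after the queue drains, the process's resident memory usually stays high, because the allocator keeps the freed pages around for reuse rather than returning them to the OS.

        import queue, threading, time

        ELEMENT_BYTES = 1_500_000      # ~1.5 MB per element, as in the post
        N = 200                        # made-up element count (~300 MB total)

        def producer(q):
            # Acquisition side: enqueues one ~1.5 MB element per iteration.
            for _ in range(N):
                q.put(bytes(ELEMENT_BYTES))

        def consumer(q):
            # Disk-writer side: deliberately slower, so the queue backs up.
            for _ in range(N):
                q.get()                # real code would write this to disk
                time.sleep(0.01)

        q = queue.Queue()              # unbounded, like an unsized LabVIEW queue
        threads = [threading.Thread(target=producer, args=(q,)),
                   threading.Thread(target=consumer, args=(q,))]
        for t in threads: t.start()
        for t in threads: t.join()
        # The queue is now empty, but resident memory typically remains high.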

    I spent a couple hours on the web researching, and while this issue has been raised before, I couldn't find an answer. I tried searching here on queue implementation, and got over 3000 hits. Many of those turned out to be unrelated posts from Aristos Queue...

    Any thoughts on this, or pointers to where to look for pertinent info?

    Cat

  6. QUOTE (PaulG. @ May 28 2009, 01:02 PM)

    Over the years I've spent a lot of quality time in SF for work and a couple of vacations. You could do a lot worse. Have fun. :)

    I'm sure it will be a great time. And, honestly, my mom is a wonderful person and one of the few people in the world who could put up with *me* for a week straight. :P

  7. I'm flying all the way across the USofA tomorrow to San Francisco (and boy, will my arms be tired.) It's for vacation (yay! finally!) and I'll be sightseeing with my mom for a week. And drinking lots. Because I'll be with Mom. For a week...

    I'm letting you all know for two reasons:

    1) I don't want anything interesting, illuminating, or even just entertaining posted all week. I don't want to miss it.

    2) I'm getting tired of Alfa posts being the only ones in the Lounge.

    See ya!

    Cat

  8. QUOTE (crelf @ May 28 2009, 10:52 AM)

    I agree. I read your comment: "I'm okay with scrolling, as long as there's a reason for it, and it's in only one direction. " and was thinking out loud that if we were stuck with one direction, it might be more intuitive to do it horizontally.

    I've read your tech articles on UI design and they've been very helpful. But they are for the FP, the actual user interface, yes? Tho, one might argue that while we're coding, the block diagrams *are* the user interface. :)

  9. I'm definitely in the "keep it all to one screen" camp, but sometimes it just can't be done without breaking up the code artificially. In that case, I much prefer to grow horizontally. Most of us do read from left to right, after all. And LabVIEW pushes us in this direction with its horizontal sequences. Maybe those of you who like to grow vertically should put in a suggestion to NI for a vertical sequence.

    However, I regularly deal with data files with large headers. My 70+ node clusters require a lot of vertical room (on my little laptop screen, anyway). Maybe I should put in a suggestion to NI for horizontal cluster bundling... :)

  10. QUOTE (neBulus @ May 19 2009, 08:42 AM)

    ... and the nodules they kept finding were eggs of the "M..." creature. When they hatched they started doing the work of the miners. Can't recall the name though.

    Both you and crossrulz get lots of partial credit. :)

    The episode was "The Devil in the Dark." The critter was a 'Horta'.

    QUOTE

    I'm an old-school Trekker that never accepted any of the new versions. Netflix offers a download service that we have set up to be able to watch any of the episodes on demand. As soon as it was set up we just had to watch the episode where Kirk makes gunpowder to do battle with the Gorn. Boy are those costumes cheesy in high def!

    They were pretty cheesy in the original low def, too!

    QUOTE

    There are few episodes where I remember the names, but my favorite was (I believe) "City on the Edge of Forever," where the phrase "Edith Keeler must die" was used more than once and Spock used that wonderful line about using stones and bearskins to enhance his tricorder.

    My fellow trekkie cow-orkers and I often quote the stone knives and bearskins line to describe working here...

  11. QUOTE (Black Pearl @ May 21 2009, 07:58 AM)

    Why not place it on the palette yourself if you really like it?

    I should look into that. The problem is (I assume) I'd have to redo it every time I move or upgrade LV. But, since I already have to mess around with the menu configuration every time I upgrade or install on a new computer (no 3-D controls for me! They were cute looking for about a week) I could deal with it all at the same time.

  12. QUOTE (crelf @ May 20 2009, 06:10 PM)

    You can rest assured that if a few more constants were added to the palette then they wouldn't be the ones you want anyway, and you'd still need to select the representation.

    I have to agree. I use SGLs a lot more than DBLs (SGLs save space with the large data sets I always seem to be working with -- I don't bother to use DBL unless there's a precision issue) and I32s a LOT more than either. So I'd be much happier if the FP numeric was I32.
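    For a sense of scale, here's a quick NumPy illustration of the space difference, with float32/float64/int32 standing in for SGL/DBL/I32 (the sample count is made up):

        import numpy as np

        n = 10_000_000                                # made-up sample count
        for name in ("float32", "float64", "int32"):  # SGL, DBL, I32 equivalents
            print(f"{name}: {np.dtype(name).itemsize * n / 2**20:.0f} MB")
        # float32: 38 MB, float64: 76 MB, int32: 38 MB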

    You can't please all the people all the time...

  13. QUOTE (neBulus @ May 19 2009, 10:20 AM)

    Thanks for the link, Ben.

    I've been realizing from bits and pieces I've been picking up on LAVA that it matters where controls/indicators live. Intuitively, it's always seemed to me that I should put data (controls) only in the case where it's going to be read. Greg's comment that "the subVI can truly be inplace only if its terminal is owned by the top diagram and not placed into a loop, sequence, or case diagram" changes that paradigm. I'd never considered there might be different "layers" to a single block diagram. I'll need to look over some code (especially with large data sets) and see if I need to move some controls.

    Cat

  14. QUOTE (PaulG. @ May 18 2009, 02:43 PM)

    I guess I've had it up to here with time travel. It's too lazy. And Abrams took way too many liberties with the ST mystique and legend. It felt like sacrilege. :thumbdown:

    That's true; within about 5 minutes anyone who knows ST canon is going to be confused...

    QUOTE

    The only thing I liked about the movie were the inside jokes and references for us "old timers". "I'm not a physicist, damnit! I'm a doctor!" :laugh:

    My favorite episode is the first time (I believe) "I'm a doctor, not a fill-in-the-blank!" was used. It was, in that case, "I'm a doctor, not a bricklayer!" I guess the "damnit" part would have never made it past the censors. Extra points to anyone who can identify the episode without looking it up. :)

  15. QUOTE (PaulG. @ May 18 2009, 09:16 AM)

    Funny you should say that. I thought Star Trek SUCKED. It was so dreadful I wanted to take a phaser to JJ Abrams. :angry:

    Them's fightin' words, mister. :)

  16. QUOTE (ShaunR @ Apr 17 2009, 04:38 PM)

    We actually rate our IT department as a project risk. The less involvement...the better.

    What a great idea!

    I feel your pain about the whole IT thing. That's why my development computer is off the net, too. And why, if I have to reactivate LabVIEW (or read or contribute code to LAVA), there's lots of SneakerNet involved going back and forth between the Official Computer and my real computer.

  17. QUOTE (Justin Goeres @ May 6 2009, 10:58 AM)

    If I take the logical AND of an empty array of booleans, the result is TRUE. I expected it to be FALSE.

    By the same token, the logical OR of an empty array of booleans is FALSE, as I'd expect.

    I remember having this conversation on info-labview 10 or 15 years ago...

    It may be logical, but it's annoying. I generally work around it by doing a length test on the boolean array before sending it off to do whatever I want it to do.
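    Python's reductions behave the same way, for the same identity-element reason (the identity for AND is TRUE, for OR is FALSE), and the length-test workaround is a one-liner; the function name below is mine:

        def all_true_nonempty(flags):
            # Guard the vacuous-truth case: an empty array
            # shouldn't count as "all TRUE".
            return bool(flags) and all(flags)

        print(all([]))                 # True  -- AND over nothing
        print(any([]))                 # False -- OR over nothing
        print(all_true_nonempty([]))   # False -- length test applied first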
