Everything posted by JPrevost

  1. So my program is displaying a lot of data to a GUI. Attached is a screenshot. My laptop is a Compaq Presario 3000, model 3045US; specs are as follows: P4 2.4 GHz, 512 MB RAM, 60 GB HDD, 16" LCD at 1280x1024, SiS 650M video with shared RAM. My program reads in serial data at approximately 37.6 kbps, where the data is sent blind (async). Each packet is 277 bytes: 256 bytes dumped from the car computer's RAM, followed by 8 10-bit A/D channels, followed by a 16-bit checksum of all of the data, followed by the start/stop bytes "55AA", repeated continuously (duh, async). Anyway, my program has several loops: one for events, one for watching the serial port or a playback file, and one to display the GUI. Once the serial loop has a full packet, it runs a checksum on the packet and compares it to the checksum bytes in the packet; if good, it sends it to the queue. The GUI loop reads the FIFO queue, and the byte array "packet" that comes out is indexed and sent to the gauges and various displays (see the packet-framing sketch after this list). Here is the problem: it takes up about 35-40% of my CPU power ONLY when the data is being sent to the gauges. Mind you, there are a couple of tabs in my program, and a conditional case structure updates only what is visible on the tab in focus. With that said, it's obvious that the indexing and calculations run on the byte array aren't the slowdown, because when I display only the "flag" (boolean) indicators the CPU usage is super low, like 2-6%; change over to the gauges and one line chart and it jumps up. Here might be my problem: I'm using classical gauges instead of the default 7.0 gauges because I thought they would use less CPU. Is this wrong? Are the newer gauges, with the alpha-blended edges and smooth appearance, better code-wise? I don't have the time to convert all of my gauges, so maybe somebody knows. Other than that, I'm at a complete loss. The gauges are only refreshed at 17 Hz! Slower laptops have had similar issues with my program, and honestly, I'm fed up. My last step is to update the GUI less frequently, every other packet, and from there I might try indexing the byte array for only the bytes I need instead of sending the whole packet into the queue... although I need to display a third of the bytes in each packet :headbang:
  2. Which one is faster with raw data, and is there a difference, or are they the same thing? I notice the feedback node doesn't release a value at the end of a while loop. Couldn't you just wire a terminal to the loop, basically creating a shift register? (See the state-carrying sketch after this list.) Maybe there is a book that goes into the details; if there is, what is its title and where can I get it?
  3. I've been reading about OpenG this and OpenG that, but I don't know where to start. There is very limited documentation explaining why anybody would want whatever it is OpenG offers. Maybe I suck at researching, but I'm pretty sure I don't. Briefly, what is the advantage of OpenG other than it being "free"? Do I need to install Commander before I can use/install the OpenG Toolkit/Builder? Is there a chance that installing this software will break my current LabVIEW install, forcing a reinstall? Will I need to package a new run-time library for beta testers who have only installed the main run-time? Currently I enjoy sending out updates as just a compiled exe that replaces the old one... I'm sorry for all of the questions, but the OpenG websites I've visited haven't answered them. It would be nice to see some screenshots. Last question, if you don't mind: is the OpenG Builder a replacement for the LabVIEW application builder, or is it just a builder for wrapping VIs to send out as toolkits? The more I know, the more help I can be in the future. I plan to stick with LabVIEW for life... eventually helping out with this OpenG stuff.
  4. So here I am working with a large program that has user-configurable settings, and I want an easy way to save and recall them. I have the read-from-cfg done before any of the loops start to run, and I use locals to write the values to the controls (is there a better way?). Then, on my program's "stop" button value-change event, I have a sequence that writes the new values from the locals back to the cfg file (see the config-file sketch after this list). All is fine and dandy, except that it takes up a lot of diagram real estate. This program is rather large (an all-in-one tuner/datalogger for automotive EFI), and I don't have that nifty navigator feature from 7.1. I just started my own business with a friend, so we can't afford the upgrade just yet. If there is an easier way to save and recall cfg data, I'm all ears.
  5. I'm using LabVIEW to design a full-fledged automotive EFI datalogger/autotune program. When I say full-fledged, I'm talking real-time emulation hardware support, graphing, and extra A/D inputs for datalogging accelerometers and special sensors like thermocouples for EGTs and a wideband O2 sensor for accurate AFR measurements, all synced to the GM computer's RAM dump at 17 Hz. It's freakin' awesome. I'll be posting pictures in another thread somewhere on this forum! I tried Visual Basic but gave up when its serial communication wasn't giving me good control. Dealing with async streamed data isn't easy.
  6. Passing an empty array causes the GUI (which is in its own loop) to display default values. This is the data I was calling "crap", because the user gets confused and my eyes hurt when the gauges and charts go bonkers. Not to mention that the loop sending the data is "looking" for the next start/stop bytes, which means that on a bad packet it can run the inner loop 270+ times before finding the next start/stop bytes. That means the GUI gets 270+ empty arrays, screwing up the line charts (not XY graphs). With queues, I'm able to collect only the valid data, and the GUI loop waits for the queue to have an element in it, meaning low CPU usage and clean data being sent to the GUI (see the consumer-loop sketch after this list). I was going to use globals and occurrences, but the queues are working too well to change anything. Thanks for the bit about the memory; I'll keep an eye on it. I might switch to notifiers, although I don't want any dropped packets. Thanks.
  7. Man, nobody has any input. FYI, I taught myself how to use queues and it's working great. Thank you Jon, you're a life saver.
  8. I saw it in another thread that was commenting on how NOT to program in LabVIEW (lots of locals/globals). Is this feature in 7.0 Express Pro? I can't find it if it is, and I'm betting I missed this feature by 0.1 versions.
  9. My program takes in a lot of async serial data, runs a checksum, and compares it with the checksum in the datastream. If it checks out, I would like to send the data array (2D) to the front panel for the UI. If the data is bad, I want it to NOT update the UI, basically exiting the while loop. What's the best way to do this? As of right now, when I get a bad checksum my program reads in the next byte in the serial buffer and checks whether it's the start byte 55h; if it's not, it exits the case structure, and after exiting the while loop it has to send out data to the UI, so I get a lot of "crap" (see the resync sketch after this list). My temporary solution was to use a shift register on the array, so that if the checksum is bad it sends the last "good" packet of data to the front panel. The problem with this is that while the loop is scanning for the start/stop bytes "55AA" it might have to read in a LOT of bytes before finding the start of the next packet, so while it's looking I get a lot of repeated data that shouldn't be displayed at all. So the question is this: do I use notifiers, a local variable, a global, an LV2-style global, or an occurrence with a local variable? I believe I'm going to have to go from a nested while loop to a parallel one. I'm familiar with this, as it's already how the VI handles events. Any help would be appreciated. One last tidbit of info: the data packets are 277 bytes, start and stop bytes, followed by 256 RAM bytes from the hardware, followed by some 10-bit AI, then the checksum. The serial rate is ~33.6 kbps, which works out to around 17 packets a second.
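
Packet-framing sketch (re: post 1). A minimal Python sketch of the validate-then-enqueue stage described there; the original is LabVIEW, so this is only an analogy. The field layout and the checksum algorithm (a plain 16-bit sum here) are assumptions, not the poster's actual spec.

```python
import queue

PACKET_LEN = 277           # full packet: RAM dump + A/D channels + checksum + sync
SYNC = b"\x55\xAA"         # the "55AA" start/stop bytes

good_packets: "queue.Queue[bytes]" = queue.Queue()  # FIFO handed to the GUI loop

def checksum_ok(packet: bytes) -> bool:
    """Assumed scheme: 16-bit sum of the payload, stored just before the sync pair."""
    payload, stored = packet[:-4], packet[-4:-2]
    return sum(payload) & 0xFFFF == int.from_bytes(stored, "big")

def on_packet(packet: bytes) -> None:
    """Serial-loop side: validate, then enqueue; bad packets never reach the GUI."""
    if len(packet) == PACKET_LEN and packet.endswith(SYNC) and checksum_ok(packet):
        good_packets.put(packet)
```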
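
State-carrying sketch (re: post 2). Functionally, a shift register and a feedback node both carry a value from one loop iteration into the next; in a text language that is just a variable updated inside the loop. A hedged analogy, with a running sum as the classic example:

```python
def running_sum(samples):
    acc = 0            # initialize: the shift register's left terminal
    for x in samples:  # each iteration sees the previous iteration's acc
        acc = acc + x  # update: the value wired into the right terminal
    return acc         # available once the loop finishes, like reading the terminal

print(running_sum([1, 2, 3]))  # prints 6
```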
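
Config-file sketch (re: post 4). A sketch of the load-on-start / save-on-exit pattern using Python's configparser as a stand-in for LabVIEW's Config File VIs; the file name, section, and keys are made up for illustration.

```python
from configparser import ConfigParser

CFG_PATH = "tuner.cfg"  # hypothetical file name

def load_settings() -> dict:
    """Run once before the main loops start, then push the values into the controls."""
    cfg = ConfigParser()
    cfg.read(CFG_PATH)  # a missing file simply leaves the defaults below in place
    ui = cfg["ui"] if cfg.has_section("ui") else {}
    return {"baud": int(ui.get("baud", 38400)),
            "gauge_hz": int(ui.get("gauge_hz", 17))}

def save_settings(settings: dict) -> None:
    """Run once in the stop-button event, mirroring the write-back sequence."""
    cfg = ConfigParser()
    cfg["ui"] = {k: str(v) for k, v in settings.items()}
    with open(CFG_PATH, "w") as f:
        cfg.write(f)
```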
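
Consumer-loop sketch (re: post 6). The display side of the producer/consumer pattern: the GUI loop blocks on the queue, so it burns almost no CPU while idle and never paints an empty array. The names here are illustrative, not the poster's VIs.

```python
import queue
import threading

packets: "queue.Queue[bytes]" = queue.Queue()

def gui_loop(stop: threading.Event) -> None:
    while not stop.is_set():
        try:
            pkt = packets.get(timeout=0.5)  # sleeps here: near-0% CPU with no data
        except queue.Empty:
            continue                        # nothing arrived: leave the display alone
        update_gauges(pkt)                  # only ever called with a validated packet

def update_gauges(pkt: bytes) -> None:
    """Index the byte array and write only the indicators on the visible tab."""
```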
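
Resync sketch (re: post 9). One way to express the "hunt for the next 55AA" logic: slide a two-byte window along the stream until it matches the sync pair, read a fixed-length frame, and verify the checksum; bad frames are dropped without ever touching the UI. The read_byte source, frame layout, and checksum scheme are all assumptions.

```python
SYNC = b"\x55\xAA"
FRAME_LEN = 277

def checksum_ok(frame: bytes) -> bool:
    # assumed: 16-bit sum of everything before the trailing checksum bytes
    return sum(frame[:-2]) & 0xFFFF == int.from_bytes(frame[-2:], "big")

def next_good_frame(read_byte) -> bytes:
    """Blocks until a checksum-valid frame arrives; the caller enqueues it."""
    window = bytearray(2)
    while True:
        window[0], window[1] = window[1], read_byte()  # slide one byte at a time
        if bytes(window) != SYNC:
            continue                                   # keep hunting for 55AA
        body = bytes(read_byte() for _ in range(FRAME_LEN - 2))
        frame = bytes(SYNC) + body
        if checksum_ok(frame):
            return frame
        # bad checksum: fall through and scan for the next sync pair
```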