
lecroy

Members
  • Posts: 34
  • Joined
  • Last visited
  • Days Won: 1

lecroy last won the day on August 11, 2009

lecroy had the most liked content!

lecroy's Achievements: Newbie (1/14)

Reputation: 0

  1. I ended up with something similar to my previous post, but when the graph is zoomed out I now slice the data into subsets and run a min/max on each one. I then stitch this min/max data back together and pass it down, similar to what LabVIEW does internally, or peak detect on a scope (a C sketch of this decimation appears after this list). Once the user has zoomed in far enough, I switch over to sending a subset of the data. Where the graphing was in the 250-500 ms range, using this method it's now in the 50 ms range; I was trying to get about 10 Hz screen updates. I wrote these functions in LabVIEW, so I would assume they could be improved further if written in C.
  2. I had made sure there were no non-present mapped drives (none found), then tried it with the network cable pulled. That had no effect on the save times. As slow as it is, I half wondered whether it was sending the whole project to NI on every save. Whatever the problem is, I saw a pretty good hit going to the various versions of 8, and 9 has made things much worse. There is lots of free disk space on the PC. Save time doesn't seem to track program size, either: I have one small program with no graphics and all the arrays cleared that still takes tens of seconds to save, while the program I am working on now, a huge cluster with several sub panels, graphs, etc., saves in 5 seconds or so. Not fast, but at least not long enough to go get a cup of coffee on every save. It's strange.
  3. I use the old-style graphs with antialiasing turned off to get LabVIEW to run at a fair rate. I scale the X axis, and it may not be a linear function. I would like something even faster than this and was trying different benchmarks to see if there was any way to improve on it. What I am finding is that the graphing itself is not the worst offender; it's getting the data into the right format to send to the graph that takes most of the time. One way around this is to look at how the graph is scaled and then only work on that subset of the data, so at least we get some speed out of it when zoomed in. What I wonder is whether there is a better graphing method altogether that is smart about how the data is processed internally, so the graph would take the entire data set and compress it down depending on screen resolution, amount of data being displayed, etc. (the sketch after this list shows the kind of compression I mean).
  4. I have noticed a major increase in the time it takes to save a VI with the new versions of LabVIEW. If I make no changes to the panel and just select Save, times can be on the order of 20 seconds. I am using 2009 with the latest patches. Load times are also very long now. Is there some new whiz-bang feature that could be turned on that is making it run this slow? Newer PC, 64-bit OS, 8 GB of RAM.
  5. It's been a few days, so I figured I would see if anyone knew of a trick for this. It looks like bit packing is not something people use. Using MSVS 2005, optimized for speed in a release build, the time to run this function is now in the 30 ms range. Much better than the two and a half seconds I saw using the array decimate function, and good enough for what I need. The updated benchmarks: packer_new.zip
  6. I wasn't really looking for a how-to so much as whether they had a function like this already built in. An interesting side note: I plan to just code this function in C and be done with it. Before I did that, I wrote the function in LabVIEW the same way I plan to code it in C. For fun, I benchmarked this, and it cut the time in half. A couple of quick tweaks got it into the 200 ms range. Not bad considering where I started, but not good enough to be useful. C still has its place.
  7. I am using the latest version of LabVIEW 2009 with all of its whiz-bang super functions. What I am attempting to do is take an array of 32-bit data and convert it to a different size. The data is packed. So, for example, say I have an array of UINT32 numbers and I want to convert them to 18 bits: the first number sits in the first UINT32, and the remaining bits are the start of the next number. I tried to flatten the 32-bit data to a 1-bit array, decimate the 1-bit array to form multiple bit arrays of the target size, and then convert those back to single numbers. It works but is very slow. The quickest conversion we have found uses Replace Array Subset. I have attached the example. On my PC this takes about half a second to run, not including the time to create the dataset. I need to get it down into the 10-50 ms range. Is there a slick way to do fast bit packing and unpacking in LabVIEW? (A C sketch of the unpacking appears after this list.) DMAunpack4c.vi
  8. There's a typedef in LabVIEW? It's a new feature for 2009 and was not present in 6.1, 8, 8.2, 8.5, or 8.6. To be honest, my LabVIEW code is sloppy at best (by my standards). I use the tool for rapid prototype work where I need to get things working fast. Coming from the days of using BASIC to write test scripts, this tool has saved me countless hours over the years (even when LabWindows was available, BASIC was still my choice). That said, one of the benefits of LabVIEW is that it allows me to be sloppy, and the save-program-defaults feature is one of the features that allows this. Those of you who initialize everything, more power to you; surely that is the right way to program, and save defaults should be removed. As for the tab menu bug, this is a workaround for 2009. Consider that I am not a software programmer by profession; LabVIEW is just another tool in my belt. Anything NI does to the tool that saves me time is a plus, no matter how sloppy it allows me to program.
  9. Agree! It's interesting (as mentioned earlier) that when I save the defaults and just load the VI, they stick. So in my case the problem appears to be related to building. Strange.
  10. You're entitled to your opinion, as am I. What's unfair is paying for all the licenses and then getting this sort of quality. The interface is going downhill fast, and for what? Is it making me more productive? No. I call it like I see it.
  11. I do not run the LabVIEW programs on the XP-64 machine; they are only developed on it. I have no scripting programs running. As a matter of fact, I shut everything down when using LabVIEW. Undo works fine in previous versions of LabVIEW on this same PC; this behavior is new for 2009, something they decided they had a better way to implement and did not do a good job of testing.
  12. Nope, no MathScript, so it must be something else causing it. My LabVIEW designs are normally very basic. If you know of a way to determine which function is the offender, let me know. When it ran the build the first time, it did not say which function caused the problem.
  13. Because there are so many versions of each OS, like Vista and XP ... I am running XP Pro 64-bit, Version 5.2 SP2. Erase two wires, and I can only get one back. Erase two locals, and I can only get one back. Move two items, and I can return both to their original positions. Erase one wire and move one item, and I can return the item to its original position but cannot get the wire back. I did not do a lot of testing on this, but it appears related to the erase. Remember when there was no undo at all? At least this new version doesn't take me back that far.
  14. That was already covered above. I renamed the INI and restarted LV; it created a new INI with defaults, and Undo is still only one level deep. My guess is it's inside the LabVIEW code. 2009.1 will be out soon.
  15. Another person here is going to install it on another PC. We will see if they have the same problem. Could be NI has something against our company and not just me.
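
For reference, here is a minimal C sketch of the min/max decimation described in posts 1 and 3: slice the data into buckets, keep each bucket's min and max, and stitch the pairs back together, much like a scope's peak-detect mode. The function name and signature are illustrative only; the originals were written in LabVIEW, and the bucket count would come from the plot's pixel width and axis scaling.

    #include <stddef.h>

    /* Reduce n samples to at most 2*buckets points by keeping the min
       and max of each bucket, so peaks survive the reduction the way
       they do in a scope's peak-detect mode. Returns points written. */
    size_t decimate_minmax(const double *in, size_t n,
                           size_t buckets, double *out)
    {
        size_t w = 0;
        for (size_t b = 0; b < buckets; b++) {
            size_t lo = b * n / buckets;        /* bucket boundaries */
            size_t hi = (b + 1) * n / buckets;
            if (lo >= hi)
                continue;                       /* more buckets than samples */
            double mn = in[lo], mx = in[lo];
            for (size_t i = lo + 1; i < hi; i++) {
                if (in[i] < mn) mn = in[i];
                if (in[i] > mx) mx = in[i];
            }
            out[w++] = mn;                      /* stitch the min/max pairs */
            out[w++] = mx;                      /* back together in order   */
        }
        return w;
    }

With buckets set to roughly the plot width in pixels, a million-sample trace collapses to a few thousand points before it ever reaches the graph, which is consistent with the 250-500 ms draw dropping to around 50 ms as reported in post 1.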
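
Similarly, here is a minimal C sketch of the bit unpacking from posts 5 and 7: pull fixed-width values (18 bits in the example) out of a stream of packed 32-bit words using a 64-bit accumulator. It assumes the values are packed MSB-first across consecutive words; the actual bit order in DMAunpack4c.vi may differ, and the function name is illustrative.

    #include <stdint.h>
    #include <stddef.h>

    /* Unpack n_out values of 'width' bits each (width <= 32) from a
       contiguous bit stream stored MSB-first in 32-bit words. */
    void unpack_bits(const uint32_t *in, size_t n_out,
                     unsigned width, uint32_t *out)
    {
        uint64_t acc = 0;      /* holds up to 63 pending bits */
        unsigned have = 0;     /* count of valid low bits in acc */
        size_t r = 0;          /* next input word to consume */

        for (size_t i = 0; i < n_out; i++) {
            while (have < width) {             /* refill the accumulator */
                acc = (acc << 32) | in[r++];
                have += 32;
            }
            out[i] = (uint32_t)(acc >> (have - width))
                     & (uint32_t)((1ull << width) - 1);
            have -= width;                     /* consume one value */
        }
    }

Built with optimizations on (e.g. a release build in MSVS 2005), a tight loop like this is in line with the roughly 30 ms figure reported in post 5, versus the half second measured for the Replace Array Subset approach in LabVIEW.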