lecroy Posted January 26, 2010

I use the old-style graphs with antialiasing turned off to get LabVIEW to run at a fair rate. I scale the X axis myself, and the scaling may not be a linear function. I would like something even faster, so I ran some benchmarks to see whether there was any way to improve on it. What I am finding is that the graphing itself is not the worst offender: it's getting the data into the right format to send to the graph that takes most of the time. One way around this is to look at how the graph is scaled and then work only on that subset of the data, so at least when I am zoomed in we get some speed out of it. What I wonder is whether there is a better graphing method altogether, one that is smart about how the data is processed internally, so the graph would take the entire data set and compress it down depending on screen resolution, the amount of data being displayed, and so on.
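For what it's worth, the "work only on the visible subset" step can stay cheap even with a nonlinear X scale, as long as the x values are stored sorted. A minimal C sketch of that idea follows; the function and parameter names are illustrative, not anything from LabVIEW, and it assumes a monotonically increasing x array so a binary search can find the window:

#include <stddef.h>

/* Return the index of the first element >= key (lower bound). */
static size_t lower_bound(const double *x, size_t n, double key)
{
    size_t lo = 0, hi = n;
    while (lo < hi) {
        size_t mid = lo + (hi - lo) / 2;
        if (x[mid] < key)
            lo = mid + 1;
        else
            hi = mid;
    }
    return lo;
}

/* Find the half-open index range [*first, *last) covering the data
 * that falls inside the graph's current x-axis limits. */
void visible_range(const double *x, size_t n,
                   double x_min, double x_max,
                   size_t *first, size_t *last)
{
    *first = lower_bound(x, n, x_min);
    *last  = lower_bound(x, n, x_max);
    if (*last < n)
        (*last)++;   /* include one point past the right edge so the
                        trace still reaches the border of the plot */
}

With the two indices in hand, only that slice of the data needs formatting and passing to the graph, which is what makes the zoomed-in redraws cheap.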
lecroy Posted February 1, 2010

I ended up with something similar to my previous post, but when the graph is zoomed out I now slice the data into subsets and run a min/max on each one. I then stitch this min/max data back together and pass it down, similar to how LabVIEW does it, or to peak detect on a scope. Once the user has zoomed in far enough, I switch over to sending a subset of the raw data. Where the graphing was in the 250-500 ms range, with this method it is now in the 50 ms range; I was trying to get about 10 Hz screen updates. I wrote these functions in LabVIEW, so I would assume that if they were written in C they could be improved on further.
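For readers wanting to try the C route, here is a minimal sketch of the min/max ("peak detect") decimation described above: split the data into one bucket per output slot, record each bucket's min and max, and emit them as an interleaved pair so narrow vertical excursions survive the downsampling. Names are illustrative, and it assumes n_in >= n_buckets:

#include <stddef.h>

/* Decimate n_in samples into n_buckets min/max pairs. Writes
 * 2 * n_buckets values into out (min then max per bucket) and
 * returns the number of values written. */
size_t minmax_decimate(const double *in, size_t n_in,
                       double *out, size_t n_buckets)
{
    size_t written = 0;
    for (size_t b = 0; b < n_buckets; b++) {
        /* Bucket boundaries chosen so every input sample is covered. */
        size_t start = b * n_in / n_buckets;
        size_t end   = (b + 1) * n_in / n_buckets;
        double lo = in[start], hi = in[start];
        for (size_t i = start + 1; i < end; i++) {
            if (in[i] < lo) lo = in[i];
            if (in[i] > hi) hi = in[i];
        }
        out[written++] = lo;   /* stitch the pairs back together in */
        out[written++] = hi;   /* order, one min/max pair per bucket */
    }
    return written;
}

Keeping the min/max pair per bucket, rather than averaging, is the same trade-off a scope's peak-detect mode makes: single-sample glitches stay visible at any zoom level.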