
3 Big issues for Power Users


DredPirate


1. Incorporate the decimation feature into the graph and make it user settable.

"Fast Data Display with Decimation

In many interactive applications, the only thing you want to do with your data is show it to the user. There may be a real reason to display 5 million data points, but this amount of data is far beyond the capabilities of most displays. The average LabVIEW graph is on the order of 300 to 1000 pixels wide. Five million points is three orders of magnitude more than you can actually see on a waveform graph. Data decimation is the answer to this problem.

.........."

See: http://zone.ni.com/devzone/conceptd.nsf/we...6256E58005D9712 for more information.

There is a lot of useful info there.

2. Create a

Link to comment

Well, first of all, I didn't even realize it was there. So I wrote a quick program to see if it was useful. It works well, but there are some issues with it.

1) Because I allow the user to zoom in on a graph, it requires a large buffer allocation before I do the decimation, slowing the program by about 40%.

2) The decimation takes every Nth element of an array, which is not a good representation of real data. It can also average N elements, but again this is not necessarily the best representation of real data. To more accurately represent real data it would need Min, Max, and AVG points placed at the proper X locations.

To speed up my program I take smaller buffer allocations and use the Min and Max of N elements (AVG takes too much time). In my opinion, any update over 100 ms during normal operation is too long, because the user will notice a hesitation in the response of the front panel. Also, I do not like to eliminate features the user is already familiar with unless absolutely necessary, so eliminating zoom is out of the question as long as it is possible to maintain that feature.
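(For illustration, here is a minimal sketch of that min/max-per-bucket decimation in Python/NumPy rather than G; the function name and bucket size are made up and are not taken from the attached VI.)

# A rough sketch of min/max decimation per bucket of N samples, in Python/NumPy
# rather than G, just to illustrate the idea described above.
import numpy as np

def minmax_decimate(y, n):
    """Reduce y to two points (min, max) per bucket of n samples."""
    usable = (len(y) // n) * n            # drop the ragged tail for simplicity
    buckets = y[:usable].reshape(-1, n)   # one row per bucket of n samples
    lo = buckets.min(axis=1)
    hi = buckets.max(axis=1)
    # Interleave min and max so spikes in either direction survive decimation.
    out = np.empty(2 * len(buckets))
    out[0::2] = lo
    out[1::2] = hi
    return out

# Example: 5 million samples reduced to ~2,000 points for a ~1,000-pixel graph.
signal = np.random.randn(5_000_000)
display = minmax_decimate(signal, n=5_000)
print(len(signal), "->", len(display))

A fancier version would also record the X position of each min and max inside its bucket, as suggested above, instead of simply interleaving them.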

Anyway, if you want to check it out, I'll attach the experimental VI here. I think I need to begin a new thread on this topic, though.

Download File:post-3219-1129603650.vi

Link to comment
  • 1 month later...
3. NI needs to place more focus on the power user. ... My code is already pushing LabVIEW to its limits. If they continue to concentrate on nothing but ease of use, my hand may be forced to begin development in another language (though I will probably still utilize LabVIEW where I can).

Hmm, I'm always one to appreciate more power. Could you give a few more details on specific items/capabilities you feel LabVIEW does not yet support that you see in other languages?

One that I have always wanted was to be able to hook a VI to run on a specific hardware interrupt, i.e. to use the VI as an interrupt handler. Yes, there are workarounds here, and I have used them, such as writing a VxD and having it set a LabVIEW occurrence, but I am talking about real, preemptive interrupt handling.

There was a LabVIEW system on Concurrent/Harris NightHawk and PowerHawk computers years ago. NI worked with Concurrent to make a modified LabVIEW realtime version that did true preemption, but it was pricey and only ran on the Harris CPUs.

For companies that do a lot of hardware work this would be great.

Link to comment

Hmm, with the new CompactRIO you can basically get even better reaction to external events than you ever could with any preemptive multitasking or whatever OS solution. And at its price it really beats any potential specialized hardware/OS platform that might be able to support such a feature. Preemptive real interrupt handling at the user-application level is something any non-specialized OS (i.e. anything short of a very expensive RT OS) can't handle without throwing the whole system into a myriad of race conditions and other nasties.

Rolf Kalbermatter

Link to comment

I've always respected your writings Rolf, but here you are missing the point. I agree with what you say about CompactRIO: fine product, works great, etc. I don't (didn't) have an external event for which I wanted a generic NI solution. I have had multiple instances where companies I worked for (and two clients since becoming an independent integrator) had a need for a new, custom I/O card that no one made, anywhere. Specialized needs/applications, etc. CompactRIO did not exist then, and would not have even remotely worked for any of these applications even if it had. What was needed, and built, was a custom card: initially ISA on the first app, then VME on the next, then PCI on the last two. Each one needed real-time response. I, or coworkers, ended up writing Ring 0 VxDs, then DLLs, then LabVIEW code, etc., to work everything out. I would have liked to have done everything in LabVIEW. I agree that trying to do this in the application layer is a rat's nest nightmare. That's not what I want(ed). I wanted to be able to write the proper device driver layers all in LabVIEW, including having the kernel-level (Ring 0 in old terms) ISR in LabVIEW. This is not about race conditions or other nasties; it has been done in C and other languages for decades. It WAS done in LabVIEW with the Harris NightHawk, but was not available on the PC or Mac architectures, i.e. something affordable, and it could have been. This still could be done on QNX or Linux or OS X or even Windoze with one of the RT extensions, but that doesn't sell NI hardware, does it. I first asked for this over 10 years ago, and I know several other people who have also asked for it.

Link to comment

I think this is not really an option. You have worked with the DDK, as you describe the situation, or at least someone has done this for you, so you should know how it is structured. Support for the DDK interface in LabVIEW is simply not a viable solution. All DDK binaries, import libraries, header files and other tools assume that the corresponding software part is written in C and, in certain situations, even assembler. Creating bindings to access all this functionality from within the LabVIEW environment would be a project probably just as big as the whole NI-DAQ software, or even bigger. Not to mention that LabVIEW is an application, and Windows does not have any support for allowing applications to access Ring 0 functionality directly.

I have no idea how this problem was solved with the Harris system, but I suspect that it was far from a generic device driver interface directly available in LabVIEW. Instead it was probably some (Harris-developed) helper device driver which could be accessed from LabVIEW and which translated a few things between the kernel level and the LabVIEW application level, such as memory copies, and probably translated interrupts into LabVIEW occurrences and such. As the Harris hardware was a closely controlled hardware and software platform, with only a few parties (if more than one at all) involved in hardware, OS and device driver design, it was probably a manageable task to develop such a translation device driver. For a system such as a PC, with its thousands and thousands of hardware and software manufacturers, such a driver would always fail at least 50% of the possible target applications, and with that it is doomed to be a project that would cost way too much, not be applicable in a lot of potential cases, and never gain enough momentum to ever pay back even a small amount of its development cost.

Writing device drivers is a pain in the ###### to do, but it is quite likely a lot cheaper than whatever such a generic device driver interface in LabVIEW would need to cost to pay for its development.

One possible option, though, which is not doing everything in LabVIEW but comes as close to it as it can, would be to use the VISA interface to access PCI hardware directly. It will still require you to know everything about your PCI interface that you would need to know to develop a kernel device driver, and of course also quite some study of the direct hardware access interface in VISA, but you could basically stay completely in LabVIEW to do this. It won't be able to reach the same performance as a kernel device driver optimized for your hardware, but it is probably the easiest and best solution if you don't want to deal with the Windows DDK at all. This option, by the way, has been in VISA since at least VISA 3.0.
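(As a loose illustration of this register-level VISA idea outside G: PyVISA exposes similar register-access calls for register-based resources. The resource string, address space, and register offsets below are made-up placeholders, and whether this works at all depends on your hardware's VISA support; it is a sketch, not Rolf's actual recipe.)

# A loose sketch of register-level access through VISA, shown with PyVISA
# instead of the LabVIEW VISA nodes referred to above. The resource string,
# address space, and register offsets are hypothetical placeholders.
import pyvisa
from pyvisa import constants

rm = pyvisa.ResourceManager()
# Hypothetical register-based (VXI-style) device; substitute your own resource.
dev = rm.open_resource("VXI0::5::INSTR")

# Read a 16-bit status register at a made-up offset in A24 space.
status = dev.read_memory(constants.AddressSpace.a24, 0x20, 16)

# Write a 16-bit control word to another made-up offset.
dev.write_memory(constants.AddressSpace.a24, 0x22, 0x0001, 16)

print(f"status register: 0x{status:04X}")
dev.close()
rm.close()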

Rolf Kalbermatter

Link to comment
I think this is not really an option. You have worked with the DDK, as you describe the situation, or at least someone has done this for you, so you should know how it is structured.
Both. I did two and delegated two; the driver portion, I mean.
Support for the DDK interface in LabVIEW is simply not a viable solution. ...snip...
Hmmm, that's what I was told about interfacing LabVIEW directly to the VME bus from a Xycom PC for my first LabVIEW project in the early '90s (LV 2.5.2). I didn't believe the NI rep, so I just went ahead and did it, rather quickly too. It involved writing about 80 CINs, but it turned out that that was a very useful skill to have. A short while later I wanted to add Windows help files to the same project and call them from LabVIEW, and was told the same thing: impossible. So I did that too, at which point the NI rep stopped telling me what couldn't be done and started asking me to lecture at the local users group meetings.
Not to mention that LabVIEW is an application, and Windows does not have any support for allowing applications to access Ring 0 functionality directly. I have no idea how this problem was solved with the Harris system, but I suspect that it was far from a generic device driver interface directly available in LabVIEW. Instead it was probably some (Harris-developed) helper device driver which could be accessed from LabVIEW and which translated a few things between the kernel level and the LabVIEW application level, such as memory copies, and probably translated interrupts into LabVIEW occurrences and such.
Good guess, but wrong. I suspected the same thing and told the Harris rep. I had quite a long talk with the project manager. No occurrences, etc. They added a menu item to the VI execution system options to run in the POSIX equivalent of Ring 0, and when you selected that option you activated a menu to pick which interrupt (if any) to hook to. NI supplied LV source under NDA (the NDA covered the code, not the fact that they supplied it, so it's okay to talk about) and a little bit of programmer help to tweak the LV execution stack. When interrupt X fired, LV stopped whatever else it was doing (unless it was servicing a higher-priority interrupt), pushed the current setup on the stack, then placed the ISR VI on the top of the stack and executed it immediately. VIs designated as ISRs had some limitations, but nothing more than what you would limit a C ISR from doing. They also had some extra hardware memory privileges in order to function as an ISR. What it did, it did quickly and got out, at which point LV returned to application level. Now, he did not explain whether LV shifted itself from Ring 3 to Ring 0 and back, or if there were in fact two execution engines, one running entirely in Ring 0, etc. Forgive me, I'm mixing system metaphors; this was a POSIX system.

The point is that the architecture was simple, robust, worked well, and was relied upon to do mission-critical testing. I was told NI mixed and matched different layers of the LV system, taking the compiler layer from the Mac and the upper hardware abstraction layers from the PC and producing, with some tweaks, something that worked pretty quickly on the Harris PowerPC-based hardware. I was told that it wasn't that big of a deal to do.

As the Harris hardware was a closely controlled hardware and software platform, with only a few parties (if more than one at all) involved in hardware, OS and device driver design, it was probably a manageable task to develop such a translation device driver. For a system such as a PC, with its thousands and thousands of hardware and software manufacturers, such a driver would always fail at least 50% of the possible target applications, and with that it is doomed to be a project that would cost way too much, not be applicable in a lot of potential cases, and never gain enough momentum to ever pay back even a small amount of its development cost.
To quote Yoda, "Always with you it cannot be done..." Actually, the Harris solution was simple, done the right way, and therefore would port over pretty well. I'd bet a beer that somewhere, somewhen, someone at NI did this already. LabVIEW for Linux started out as someone's hobby horse; a lot of nifty items in LabVIEW do. Linux and undo were both available years before release, but you know this.
Writing device drivers is a pain in the ###### to do, but it is quite likely a lot cheaper than whatever such a generic device driver interface in LabVIEW would need to cost to pay for its development.
Not if it has already been developed and just needs to be ported, with some tweaks.
One possible option, though, which is not doing everything in LabVIEW but comes as close to it as it can, would be to use the VISA interface to access PCI hardware directly. It will still require you to know everything about your PCI interface that you would need to know to develop a kernel device driver, and of course also quite some study of the direct hardware access interface in VISA, but you could basically stay completely in LabVIEW to do this. It won't be able to reach the same performance as a kernel device driver optimized for your hardware, but it is probably the easiest and best solution if you don't want to deal with the Windows DDK at all. This option, by the way, has been in VISA since at least VISA 3.0.

Rolf Kalbermatter

Agreed, it's not a bad option if the other is still not available. It is always nice to have a range of techniques/levels of effort. But given the hardware knowledge needed for both, it is probably easier to just write the kernel driver, DLL, and LV wrapper VIs to call the DLL; at least it was the last time I did this. I haven't kept up with the latest driver models, like for XP.
Link to comment
  • 1 month later...

I know this thread is long-past dead, but I had to chime in.

1. Incorporate the decimation feature into the graph and make it user settable.

"Fast Data Display with Decimation

In many interactive applications, the only thing you want to do with your data is show it to the user. There may be a real reason to display 5 million data points, but this amount of data is far beyond the capabilities of most displays. The average LabVIEW graph is on the order of 300 to 1000 pixels wide. Five million points is three orders of magnitude more than you can actually see on a waveform graph. Data decimation is the answer to this problem.

I don't understand the problem here. The LabVIEW graphs DO decimate the datasets before (or while) drawing. Drawing used to be the most expensive part of a graph/chart update, rather than the computation to change coordinate systems from the diagram data to pixels. There is code that specifically decimates co-linear and co-incident data points before drawing. That's pretty much the best we can do, as that's the soonest we can know if any data points are superfluous.
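(As an illustration of what removing co-incident and co-linear points means, here is a small Python sketch; it is not LabVIEW's internal drawing code.)

# An illustrative sketch of dropping co-incident and co-linear points once the
# data has been mapped to pixel coordinates (assumed to be integers here).
def drop_redundant_pixels(pts):
    """Remove consecutive duplicate pixels and interior points of straight runs."""
    out = []
    for p in pts:
        if out and p == out[-1]:
            continue                      # co-incident: maps to the same pixel
        if len(out) >= 2:
            (x0, y0), (x1, y1) = out[-2], out[-1]
            x2, y2 = p
            # co-linear: the middle point adds nothing to the drawn polyline,
            # so extend the run by replacing its endpoint
            if (x1 - x0) * (y2 - y1) == (y1 - y0) * (x2 - x1):
                out[-1] = p
                continue
        out.append(p)
    return out

print(drop_redundant_pixels([(0, 0), (1, 1), (2, 2), (2, 2), (3, 1)]))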

LabVIEW must keep all data points around because at any time the user can zoom the graph. If we didn't allow this functionality, sure, we could throw out data points that we knew wouldn't affect the drawing of the graph and we wouldn't have to hang on to them. Unfortunately, we must balance the needs of two groups of users: those who want speed and those who want precision.

So... the only option I see is to give the user an option to tell LabVIEW that all they care about is an approximation of the actual data, and we can use a threshold to determine whether to hold on to each data point or not. However, this will mean that the graph may be misleading when zoomed in, and some time will be spent in computations to figure out whether each data point handed to the graph is "necessary" or not.
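(Purely for illustration, and not LabVIEW's actual graph code, a threshold test of the kind described might look like this in Python; the tolerance value is arbitrary.)

# Keep an incoming point only if it differs "enough" from the last kept point.
def keep_if_significant(points, tolerance=0.01):
    """Yield (x, y) points, dropping those within `tolerance` of the last kept y."""
    last_y = None
    for x, y in points:
        if last_y is None or abs(y - last_y) > tolerance:
            last_y = y
            yield (x, y)
        # else: drop the point; the displayed curve is only an approximation,
        # which is exactly why a zoomed-in view could become misleading.

data = [(i, 0.001 * (i % 5)) for i in range(20)]
print(list(keep_if_significant(data, tolerance=0.002)))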

All in all, the graph and chart mapping/drawing code hasn't changed much in the six years I've been at NI. Some macros were converted to templates for readability/debuggability, which had a slight negative effect on performance, but that was offset by a refactoring of some mapping code that gave a positive gain. As your computer has been getting faster over the years, the graph/chart code has largely stayed the same so it should be speeding up right alongside your CPU.

J

Link to comment
  • 8 years later...

A *really* old thread, but not much has happened since 2006 as far as I can see (LV2013), and perhaps the reason is that the problem is (as Jason wrote in the last entry here) still not acknowledged(?).

 

The graphs decimate the displayed data, as Jason describes, but you still get a serious performance hit above a certain number of points (the GUI slows to a halt...). For XY graphs that number is very easy to hit. We have to use non-G alternatives to get proper performance with bigger data sets.

 

So there seems to be something that slows things down even though, if the decimation worked, the number of points actually drawn should not grow. I could perhaps understand it if the software had a problem holding the full data set in memory (in the background), or if the slowness was only noticeable when the user made a change to the GUI that actually required the graph to recalculate which points to draw, but that does not seem to be the case.

 

And obviously, code written in other languages *is* able to cope just fine, so there really is no excuse.

 

 

 

Link to comment

Perhaps Jason can confirm the algorithm, but I don't think the graph controls just decimate in the strictest sense. I think they do something like bilinear filtering, since small artifacts are not lost, as they would be with pure decimation.

 

Long-term data logging is better served by a DB, IMHO. Then it's just a one-line query to decimate, and you can have a history as big as your disk allows with no memory impact.
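(For illustration only, here is a sketch of decimating at query time with SQLite from Python's standard library; the table name, columns, and every-Nth-row decimation are assumptions, not a recipe from this thread.)

# A small sketch of decimating in the query, using SQLite. The table name,
# columns, and the simple "every Nth row" decimation are illustrative only.
import sqlite3

conn = sqlite3.connect("history.db")   # hypothetical logging database on disk
conn.execute("CREATE TABLE IF NOT EXISTS log (id INTEGER PRIMARY KEY, t REAL, value REAL)")
conn.executemany("INSERT INTO log (t, value) VALUES (?, ?)",
                 [(i * 0.1, i % 50) for i in range(100_000)])
conn.commit()

# Essentially a one-line decimating query: fetch every 100th logged sample.
n = 100
rows = conn.execute("SELECT t, value FROM log WHERE id % ? = 0", (n,)).fetchall()
print(f"{len(rows)} points fetched for display")
conn.close()

A GROUP BY id / n with MIN(value) and MAX(value) would preserve spikes better than the plain every-Nth-row query, along the lines of the min/max decimation discussed earlier in the thread.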

Link to comment

It's not really about long-term data logging. If you have a huge data set you will have to write it to disk and reload data from the source (DB or other alternative) dynamically anyway. In such cases the user expects, and will therefore accept, that he might need to provide input and perhaps also wait a noticeable time for the new data.

 

However, if you have e.g. 50 MB of time-stamped doubles, you can dump it all into a .NET graph without any worries. The GUI will run smoothly, and you do not need to bother handling events from the user's interactions with the data. The user can zoom and scroll with instant access to the underlying data. The graph will handle that amount of data fine on its own. That's not the case with the native LabVIEW XY graph. On a standard PC of today, LabVIEW can easily hold much more data than that in memory (and in other types of controls/indicators on the front panel), just not in a graph.

 

It is obviously much heavier to draw a graph than an array indicator, but if done right the native graph should at least be able to match the alternatives.

Link to comment
