smithd

alternative image displays?


I've had various issues with the IMAQ image display in LabVIEW. A key one is that it seems to (and, according to applications engineers, does) run completely in the UI thread with CPU rendering... so if I want to, say, display a whole bunch of camera images on one computer, I rail a CPU core and the entire UI slows down. I've worked around this in a few ways, but...

I'm curious whether anyone has ever found a third-party (and presumably non-LabVIEW) image display that is as nice as the LabVIEW one (or nicer!). From some searching, it looks like everyone can display bitmaps and the like, of course, and there are examples out there of how to implement (for example) zoom in a .NET image display, but what I'm specifically looking for is something that includes the nice zoom, the pixel indicator, the drawn ROI selection types, etc. Anyway, I've been unsuccessful, so I'm wondering if anyone else has seen anything like this, or at the very least if anyone else has tried and also come up empty.



I gave some serious thought to the picture control, which is still LabVIEW and probably UI-thread-bound and CPU-demanding. I didn't really go that far, but I at least sort of implemented zoom and a pixel value indicator, and while taxing the CPU I get into the 30 fps range. Not nearly as nice as the IMAQ display, but at least an alternative. It's in my https://lavag.org/files/file/232-lvvideo4linux/



As a first stop, have you tried using the "External Window" tools? I'm not sure of the details, but I seem to remember some people switching to them for similar reasons.

Outside of LabVIEW, I have used Raphael (JavaScript), but I had to write my own ROI and annulus, which isn't that hard really. Another that springs to mind is ImageJ. This is Java (....shiver....) but has excellent manipulation tools and certainly should be on your list of "things to look at".


9 hours ago, ensegre said:

 at least an alternative. It's in my https://lavag.org/files/file/232-lvvideo4linux/

Thanks, I'll take a look. I'm actually already shipping the image around as a 2D array over the network, so it should be easy to try.

6 hours ago, ShaunR said:

As a first stop. Have you tried using the "External Window" tools.

Nope, but I'll take a look. Now that you mention them, I recall reading somewhere that they were sort of legacy, so I never really learned more about them, but maybe they have some advantages.

6 hours ago, ShaunR said:

Outside of LabVIEW, I have used Raphael (JavaScript), but I had to write my own ROI and annulus, which isn't that hard really. Another that springs to mind is ImageJ. This is Java (....shiver....) but has excellent manipulation tools and certainly should be on your list of "things to look at".

Raphael looks like a possibility, and we actually use ImageJ, so that would be ideal, but I got the impression it was more like a standalone editing program (when I say "we use" I mean "they use, and I tried it once and gave up"). I just looked at the documentation, though, and they seem to allow for some scripting in Python or JavaScript, so I may need to reevaluate.

 

Thanks guys



For the curious:

Looks like the external window tools are limited to 16 windows. I will probably throw anyone down the stairs who suggests we do more than 16 on a given machine, but currently we are displaying way more than that on one machine.

Raphael, unless I missed something major, only seems to allow drawing individual points or pulling an image from a URL, rather than writing a full bitmap to the screen.

ImageJ I haven't yet gotten to.

I found that this C# wrapper for OpenCV has a nice image display, http://www.emgu.com/wiki/index.php/ImageBox, and since it's bundled with OpenCV it might be the best fit of all. I've been playing with it and it seems promising.
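For illustration, here is a minimal Python/NumPy sketch of the pixel-readout logic that an ImageBox-style display needs: mapping a screen coordinate back through zoom and pan to an image pixel. The coordinate convention and function names here are assumptions for illustration, not Emgu's actual implementation; in a real viewer this would hang off a mouse-move callback (e.g. OpenCV's `cv2.setMouseCallback`) and draw into a status bar.

```python
import numpy as np

# Hypothetical sketch of an ImageBox-style pixel indicator: map the
# on-screen cursor position back to an image pixel, accounting for zoom
# and pan, and produce a status-bar string.

def pixel_readout(image, x, y, zoom=1.0, pan=(0, 0)):
    """Return a '(px, py) = value' string for screen coords (x, y),
    or None if the cursor is outside the image."""
    px = int(x / zoom) + pan[0]
    py = int(y / zoom) + pan[1]
    h, w = image.shape[:2]
    if not (0 <= px < w and 0 <= py < h):
        return None
    return "(%d, %d) = %s" % (px, py, image[py, px])

img = np.arange(16, dtype=np.uint8).reshape(4, 4)
print(pixel_readout(img, 2, 1))           # 1:1 zoom
print(pixel_readout(img, 4, 2, zoom=2))   # 2x zoom maps screen (4, 2) to pixel (2, 1)
```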

2 hours ago, smithd said:

I found that this C# wrapper for OpenCV has a nice image display, http://www.emgu.com/wiki/index.php/ImageBox, and since it's bundled with OpenCV it might be the best fit of all.

I don't have direct experience with it, but I guess that if all you need is to pass image data in and display it, that would be practicable; if your aim is to interface even only a subset of OpenCV directly with LV, though, that would be quite a different story. The difficulty of that task has been mentioned in the past, e.g.

 


5 hours ago, smithd said:

Raphael, unless I missed something major, only seems to allow drawing individual points or pulling an image from a URL, rather than writing a full bitmap to the screen.

I used an HTML5 canvas for just displaying the image, but used Raphael to overlay ROI, annulus, annotations and cursors. I could have exported an image to SVG (which Raphael supports), but LabVIEW can't do that. The hard work was done by the back end, so this was purely for display purposes to the user. If you are planning on post-processing outside of LabVIEW, then JavaScript is definitely not the way to go.


35 minutes ago, ensegre said:

isn't exporting from LV to a file, to be read within HTML, enough of a performance hit?

You don't need a file to display an inline image in HTML, and there is the LV Image to PNG Data VI. The limitation is how fast JavaScript can render images.
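The file-less approach works by embedding the PNG bytes as a base64 data URI directly in the `<img>` tag's `src`. A minimal Python sketch, assuming you already have the PNG byte stream (e.g. from the LV Image to PNG Data VI, shipped over a WebSocket); the payload below is a hypothetical stand-in, not a real image:

```python
import base64

def to_data_uri(png_bytes):
    # Browsers accept a base64-encoded image payload inline in the src
    # attribute, so no file on disk is needed.
    return "data:image/png;base64," + base64.b64encode(png_bytes).decode("ascii")

# Stand-in payload: just the 8-byte PNG file signature, for illustration only.
fake_png = b"\x89PNG\r\n\x1a\n"
print(to_data_uri(fake_png))
```

On the browser side the same string can be assigned to `img.src` or drawn onto a canvas with `drawImage`.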


23 hours ago, ensegre said:

I don't have direct experience with it, but I guess that if all you need is to pass image data in and display it, that would be practicable; if your aim is to interface even only a subset of OpenCV directly with LV, though, that would be quite a different story. The difficulty of that task has been mentioned in the past, e.g.

Even with this? http://www.ni.com/white-paper/53072/en/

The help seems to indicate that they provide a simple function for converting from an IMAQdx image to an OpenCV Mat. That having been said, I mention the OpenCV bit more as "and hey, if I need complex processing, at least it's handy".

20 hours ago, ShaunR said:

I used HTML5 canvas for just displaying an image. But used Raphael to overlay ROI, annulus, annotations and cursors. I could have exported an image to SVG (which Raphael supports) but LabVIEW can't do that. The hard work was by the back-end so this was purely for display purposes to the user. If you are planning on post-processing outside of LabVIEW then JavasScript is definitely not the way to go.

Ah, I see, that makes sense. As an aside, I actually tried implementing Motion JPEG (https://en.wikipedia.org/wiki/Motion_JPEG#M-JPEG_over_HTTP), which worked... OK (and in fact you can do PNG as well), but it took me a while to figure out how to do it with the LabVIEW web server, and (on my machine) it ended up being slower than the regular IMAQ stuff, probably because of all the compression and decompression.
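For readers unfamiliar with M-JPEG over HTTP: the server holds one response open and pushes each frame as a part of a `multipart/x-mixed-replace` body, which browsers render as a live image. A minimal Python sketch of that framing (the JPEG payload is a placeholder; a real server would push compressed camera frames):

```python
# Sketch of M-JPEG-over-HTTP framing: one long-lived HTTP response whose
# body is a multipart/x-mixed-replace stream, one part per frame.

BOUNDARY = b"frame"

def mjpeg_response_header():
    # Sent once, when the client connects.
    return (b"HTTP/1.1 200 OK\r\n"
            b"Content-Type: multipart/x-mixed-replace; boundary=" + BOUNDARY +
            b"\r\n\r\n")

def mjpeg_part(jpeg_bytes):
    # Sent once per frame, for as long as the connection stays open.
    return (b"--" + BOUNDARY + b"\r\n"
            b"Content-Type: image/jpeg\r\n"
            b"Content-Length: " + str(len(jpeg_bytes)).encode("ascii") +
            b"\r\n\r\n" + jpeg_bytes + b"\r\n")

fake_jpeg = b"\xff\xd8...\xff\xd9"  # placeholder bytes, not a real JPEG
stream_start = mjpeg_response_header() + mjpeg_part(fake_jpeg)
print(len(stream_start))
```

The browser replaces the displayed image each time a new part arrives, which is why this works in a plain `<img>` tag with no JavaScript at all.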

 

More fundamentally, what I have is sort of a combo of basic machine vision which runs constantly on the acquiring device. What I want to provide on the client side is (a) the original image, (b) the result of the basic processing (i.e. putting a crosshair or a box around a pre-calculated feature), and (c) a way for the user to add their own ROIs to the image to help them perform some simpler analysis (e.g. use the 'line' ROI tool to convert the 2D image into a 1D chart of pixel values, or histogram the region inside a drawn box). But as you can imagine, (a) and (b) are a totally different mode of operation than (c) -- you can't look at the histogram of each of 30 images at once, but you can stand a few feet back from the screen and scan through the images themselves pretty quickly. As I write this down, it makes me wonder if I shouldn't just split those up entirely. Thus far I've been trying to make one exe to rule them all... but maybe it would be better to keep the (c) use case in LabVIEW and move (a) and (b) over to something faster, like the C# EmguCV or the HTML canvas, since they are read-only and would thus take less effort.
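The two user-drawn ROI analyses in (c) are straightforward once the image arrives as a 2D array, as described earlier in the thread. A Python/NumPy sketch (function names and the nearest-neighbour line sampling are illustrative choices, not what IMAQ does internally):

```python
import numpy as np

def line_profile(image, p0, p1, n=None):
    """Sample pixel values along the segment p0 -> p1 (nearest neighbour),
    reducing a line ROI on a 2D image to a 1D array for charting."""
    (x0, y0), (x1, y1) = p0, p1
    if n is None:
        n = int(max(abs(x1 - x0), abs(y1 - y0))) + 1
    xs = np.linspace(x0, x1, n).round().astype(int)
    ys = np.linspace(y0, y1, n).round().astype(int)
    return image[ys, xs]

def box_histogram(image, x, y, w, h, bins=256):
    """Histogram the pixel values inside a drawn box ROI."""
    roi = image[y:y + h, x:x + w]
    counts, _ = np.histogram(roi, bins=bins, range=(0, 256))
    return counts

# Toy 8x8 image whose columns hold the values 0..7
img = np.tile(np.arange(8, dtype=np.uint8), (8, 1))
profile = line_profile(img, (0, 0), (7, 0))   # horizontal line across row 0
counts = box_histogram(img, 0, 0, 2, 8, bins=8)  # 2-column box: values 0 and 1
print(profile, counts[0])
```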

3 hours ago, smithd said:

As an aside, I actually tried implementing Motion JPEG (https://en.wikipedia.org/wiki/Motion_JPEG#M-JPEG_over_HTTP), which worked... OK (and in fact you can do PNG as well), but it took me a while to figure out how to do it with the LabVIEW web server, and (on my machine) it ended up being slower than the regular IMAQ stuff, probably because of all the compression and decompression.

Interesting. Why didn't you use WebSockets, RTSP or WebRTC?

 

3 hours ago, smithd said:

More fundamentally, what I have is sort of a combo of basic machine vision which runs constantly on the acquiring device. What I want to provide on the client side is (a) the original image, (b) the result of the basic processing (i.e. putting a crosshair or a box around a pre-calculated feature), and (c) a way for the user to add their own ROIs to the image to help them perform some simpler analysis (e.g. use the 'line' ROI tool to convert the 2D image into a 1D chart of pixel values, or histogram the region inside a drawn box). But as you can imagine, (a) and (b) are a totally different mode of operation than (c)

Well, B & C are essentially the same thing from a display point of view. I have achieved similar things to A in the past by saving to memory-mapped files at high data rates, which can then be exploited by other VIs or even other programs. But your problem seems to be rendering, not acquisition or exploitation. What I'm not understanding at present is: if an image needs operator intervention, then presumably the operator can only work on one image at a time, and 30 line profiles or histograms aren't that intensive (why did NI drop the array of charts?).
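The memory-mapped-file scheme can be sketched in a few lines of Python: the acquiring process writes each frame into a shared file, and any number of display processes map the same file read-only. The layout here (an 8-byte frame counter followed by raw pixels) is an assumption for illustration, not a description of any particular implementation:

```python
import mmap
import os
import struct
import tempfile

W, H = 4, 4
path = os.path.join(tempfile.gettempdir(), "frame_share.bin")
size = 8 + W * H  # 8-byte frame counter + raw 8-bit pixels

# Writer side: create the file, then publish one frame.
with open(path, "wb") as f:
    f.write(b"\x00" * size)
with open(path, "r+b") as f:
    m = mmap.mmap(f.fileno(), size)
    m[8:] = bytes(range(W * H))       # pixel data first...
    m[:8] = struct.pack("<Q", 1)      # ...then bump the counter, so a
    m.close()                         # reader never sees a torn frame

# Reader side (could be another VI or program): map read-only.
with open(path, "rb") as f:
    m = mmap.mmap(f.fileno(), size, access=mmap.ACCESS_READ)
    counter = struct.unpack("<Q", m[:8])[0]
    first_row = list(m[8:8 + W])
    m.close()
print(counter, first_row)
```

A real high-rate version would need an agreed update protocol between writer and readers (e.g. polling the counter), but the mapping itself is this simple on both Windows and Linux.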

[attached image: MDI.png]

(each VI is updating at 125 ms and displaying 20,000 points - CPU utilisation ~6%).

So how big are these image files?

 



Well, this is odd... I don't think I saw this as unread, but I just looked at notifications and here it was. Hrm...

Long story short, it sounds like splitting the program in two is a good fit. To quickly respond, though...

On 3/9/2017 at 3:00 AM, ShaunR said:

Interesting. Why didn't you use WebSockets, RTSP or WebRTC?

The advantage of M-JPEG is that it can (in theory) be displayed directly in any browser, since it's just using HTTP features. I do have a lot of WebSocket code built out as well, so in the existing application that's how I'm transferring images.

On 3/9/2017 at 3:00 AM, ShaunR said:

Well, B & C are essentially the same thing from a display point of view. I have achieved similar things to A in the past by saving to memory-mapped files at high data rates, which can then be exploited by other VIs or even other programs. But your problem seems to be rendering, not acquisition or exploitation. What I'm not understanding at present is: if an image needs operator intervention, then presumably the operator can only work on one image at a time, and 30 line profiles or histograms aren't that intensive (why did NI drop the array of charts?).

So how big are these image files?

For display, yes, it's the same (I'm just using the ROI and overlay features of IMAQ). What's more difficult is all of the low-level UI work related to, for example, drawing a box around a feature and having it show up. It's not hard so much as bug-prone and time-consuming to develop from scratch.

What I was trying to say is that the way those features are used is totally different. The histogram, drawings, etc. are all part of the offline mode of operation, so I can easily use something (like LabVIEW) with slow rendering, so long as I can make a faster-rendering application that handles the simple case (A+B, with no user interaction except resizing windows). The images themselves are 3 MB raw or smaller; there's just a ton of them. This is another example of how the use cases differ. Without going into detail, the best way to convey the difference is to imagine a large piece of machinery: when the system is being tweaked, people only want to see their small part, but when it is operational the entire system must be monitored simultaneously, at a lower level of detail. That's why I think the split-program approach will work -- something with GPU rendering for the high-throughput mode, and LabVIEW as a development shortcut for the low-throughput mode.

