
Using the DLL files of an application compiled with C# with LabVIEW



On 6/12/2022 at 2:34 PM, alvise said:

It still seems to have something to do with "CoUninitialize", because if I remove Win32 Decode Img Stream.vi this problem doesn't occur.

Remove this and try again.

[attached screenshot]

I think if your memory stays at nearly the same level (say, 180 MB) for a few hours and does not jump up drastically, then there is no need to worry.

On 6/12/2022 at 6:38 AM, Rolf Kalbermatter said:

Well, how many times per second do you call that VI?

It was a project with a MOXA video server that digitized an analog video signal and output it at 25 to 30 frames per second. That SDK provided only a JPEG stream, hence I had to use a helper decoder of mine. Later we switched to another video server that could output many other formats besides, including RTSP.

On 6/12/2022 at 6:38 AM, Rolf Kalbermatter said:

And make that VI subroutine.

There is one more way to make CLFNs run in a single thread while leaving them configured as 'any thread'. I recall having a thread-unsafe DLL that I wanted to use in my project, but I absolutely didn't want it to run in the UI thread. I used a Timed Loop for that and it worked like a charm.

Quote

Each timed structure on the block diagram creates and runs in its own execution system that contains a single thread, so no parallel tasks can occur. A Timed Loop executes in the data flow of a block diagram ahead of any VI not configured to run at a time-critical priority.

But I have never dug into those built-in XNodes deeply enough to see how they work internally.

Edited by dadreamer
Posted (edited)
3 hours ago, dadreamer said:

Remove this and try again.

I did what you said above, but I couldn't get any results, so I went back to the previous configuration.
Finally, the sample project I created is attached below.

Memory usage increases over time, but not that fast and not by that much (about 0.1 MB). I tested this in the sample application shipped with the SDK, and memory usage was increasing in that example as well. The sudden jumps in memory usage don't occur much with the method I use below; it does increase by 3-5 MB from time to time, but then returns to normal.
This way, if I read video at 1280x720 resolution, it uses 13% CPU.

 

Hikvision-labviewSDK-Test-v1.1.4.rar

Edited by alvise
On 6/11/2022 at 7:36 PM, alvise said:

- How do we eliminate all the CoInitialize work by returning a BMP? Why does returning a BMP have such an advantage?

- Does taking it as BMP also cause a drop in camera FPS? As far as I know, BMP is bigger in size.

There is another function in PlayCtrl.dll called PlayM4_GetBMP(). There is a VI in my last archive that should already get the BMP data using this function. It is supposed to return a Windows bitmap, and yes, that one is fully decoded. But!

You currently retrieve a JPEG from the stream, which most likely isn't exactly the same format as what the camera delivers, so this function already does some camera stream decoding and then JPEG encoding, only to then have the COM JPEG decoder pull the decoded data back out of the JPEG image anyhow!

Besides the fact that you do not know whether the BMP decoder in PlayCtrl.dll is written at least as performantly as the COM JPEG decoder, it is actually likely that the detour through the JPEG format costs more performance than going directly to BMP. And the BMP format only adds a small header of maybe 50 bytes or so prepended in front of the bitmap data, so not really more than what you get after you have decoded your JPEG image.

Posted (edited)
13 hours ago, Rolf Kalbermatter said:

There is another function in PlayCtrl.dll called PlayM4_GetBMP(). There is a VI in my last archive that should already get the BMP data using this function. It is supposed to return a Windows bitmap, and yes, that one is fully decoded. But!

You currently retrieve a JPEG from the stream, which most likely isn't exactly the same format as what the camera delivers, so this function already does some camera stream decoding and then JPEG encoding, only to then have the COM JPEG decoder pull the decoded data back out of the JPEG image anyhow!

Besides the fact that you do not know whether the BMP decoder in PlayCtrl.dll is written at least as performantly as the COM JPEG decoder, it is actually likely that the detour through the JPEG format costs more performance than going directly to BMP. And the BMP format only adds a small header of maybe 50 bytes or so prepended in front of the bitmap data, so not really more than what you get after you have decoded your JPEG image.

Thanks for your answer.
Actually, I was able to get the stream using the GetBMP function and even view it with "Win32 Decoder Img Stream.vi" and NI IMAQ, but I don't think that is the right way.

[attached screenshot]
- What should be done to decode the video stream directly with the ''GetBmp'' function instead of ''GetJpeg''?

- In the example you shared, I couldn't get a video stream from the example that uses the "GetPlayedFrames" function. I tried to fix the problem but couldn't find its source.

Edited by alvise
8 hours ago, alvise said:

Thanks for your answer.
Actually, I was able to get the stream using the GetBMP function and even view it with "Win32 Decoder Img Stream.vi" and NI IMAQ, but I don't think that is the right way.

[attached screenshot]
- What should be done to decode the video stream directly with the ''GetBmp'' function instead of ''GetJpeg''?

- In the example you shared, I couldn't get a video stream from the example that uses the "GetPlayedFrames" function. I tried to fix the problem but couldn't find its source.

Well, a Windows bitmap file starts with a BITMAPFILEHEADER.

typedef struct tagBITMAPFILEHEADER {
  WORD  bfType;
  DWORD bfSize;
  WORD  bfReserved1;
  WORD  bfReserved2;
  DWORD bfOffBits;
} BITMAPFILEHEADER, *LPBITMAPFILEHEADER, *PBITMAPFILEHEADER;

This is a 14-byte structure whose first two bytes contain the characters "BM", which corresponds nicely with our 66 = 'B' and 77 = 'M'. The next 4 bytes are a Little Endian 32-bit unsigned integer indicating the actual number of bytes in the file, so here we have 56 * 65536 + 64 * 256 + 54 bytes. Then there are two reserved 16-bit integers, and then another 32-bit unsigned integer indicating the offset of the actual bitmap bits from the start of the byte stream, which, not surprisingly, is 54: the 14 bytes of this structure plus the 40 bytes of the following BITMAPINFO structure. If you were sure what format of bitmap is in the stream, you could just jump right there, but that is usually not a good idea. You do want to interpret the bitmap header to find out what format is really in there and only try to "decode" the data if you understand the format.
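For illustration, the 14-byte header can be picked apart from a raw byte stream like this (a minimal sketch in C; the helper names `read_u16_le`, `read_u32_le`, and `parse_bmp_file_header` are mine, not part of any SDK):

```c
#include <stddef.h>
#include <stdint.h>

/* Read Little Endian 16/32-bit unsigned integers from a byte stream. */
static uint16_t read_u16_le(const uint8_t *p)
{
    return (uint16_t)(p[0] | (p[1] << 8));
}

static uint32_t read_u32_le(const uint8_t *p)
{
    return (uint32_t)p[0] | ((uint32_t)p[1] << 8) |
           ((uint32_t)p[2] << 16) | ((uint32_t)p[3] << 24);
}

typedef struct {
    uint32_t bfSize;    /* total number of bytes in the stream */
    uint32_t bfOffBits; /* offset of the pixel data            */
} BmpFileHeader;

/* Parse BITMAPFILEHEADER from the first 14 bytes; returns 0 on success. */
static int parse_bmp_file_header(const uint8_t *buf, size_t len, BmpFileHeader *out)
{
    if (len < 14 || read_u16_le(buf) != 0x4D42) /* "BM": 66 = 'B', 77 = 'M' */
        return -1;
    out->bfSize    = read_u32_le(buf + 2);
    /* bytes 6..9 are bfReserved1/bfReserved2, ignored */
    out->bfOffBits = read_u32_le(buf + 10);
    return 0;
}
```

In LabVIEW the same thing would be a Type Cast or an Unflatten From String with Little Endian byte order on the first 14 bytes of the array.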

After this there is a BITMAPINFO structure (or, in some obscure cases, a BITMAPCOREINFO structure; this was the format used by OS/2 bitmaps in the distant past. Windows doesn't create such files, but most bitmap functions in Windows are capable of reading them).

Which of the two it is can be found by interpreting the next 4 bytes as a 32-bit unsigned integer and looking at its value. A BITMAPCOREINFO would have a value of 12 in here, the size of the BITMAPCOREHEADER structure. A BITMAPINFO structure has a value of 40 in here, the size of the BITMAPINFOHEADER inside BITMAPINFO.
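That discrimination is just one Little Endian read at offset 14, sketched here in C (the enum and function names are my own, for illustration):

```c
#include <stdint.h>

enum BmpInfoKind { BMP_COREHEADER, BMP_INFOHEADER, BMP_UNKNOWN };

static uint32_t read_u32_le(const uint8_t *p)
{
    return (uint32_t)p[0] | ((uint32_t)p[1] << 8) |
           ((uint32_t)p[2] << 16) | ((uint32_t)p[3] << 24);
}

/* The size field directly after the 14-byte file header tells the flavor. */
static enum BmpInfoKind classify_info_header(const uint8_t *stream)
{
    uint32_t biSize = read_u32_le(stream + 14);
    if (biSize == 12) return BMP_COREHEADER; /* sizeof(BITMAPCOREHEADER), old OS/2 format */
    if (biSize == 40) return BMP_INFOHEADER; /* sizeof(BITMAPINFOHEADER)                  */
    return BMP_UNKNOWN;                      /* e.g. V4/V5 headers (108/124): bail out    */
}
```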

Since you have 40 in there, it must be a BITMAPINFO structure. Surprise!

typedef struct tagBITMAPINFOHEADER {
  DWORD biSize;
  LONG  biWidth;
  LONG  biHeight;
  WORD  biPlanes;
  WORD  biBitCount;
  DWORD biCompression;
  DWORD biSizeImage;
  LONG  biXPelsPerMeter;
  LONG  biYPelsPerMeter;
  DWORD biClrUsed;
  DWORD biClrImportant;
} BITMAPINFOHEADER, *PBITMAPINFOHEADER;

biWidth and biHeight are clear, and biPlanes can be confusing but should usually be 1. biBitCount is the most interesting right now, as it indicates how many bits a pixel has. If this is less than or equal to 8, a pixel is only an index into the color table that follows directly after the BITMAPINFOHEADER. If it is bigger than 8 there is usually NO color table at all, but you need to check that biClrUsed is 0; if it is not 0, there are biClrUsed color elements in an RGBQUAD array that can be used to optimize the color handling. If the bitCount is 8 or less, biClrUsed only indicates which of the color palette elements are important; the table always contains 2^bitCount elements. With bitCount > 8 the pixel values directly encode the color.
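Those rules boil down to a one-liner for the number of RGBQUAD entries sitting between the header and the pixel data (a sketch; `palette_entries` is my own name):

```c
#include <stdint.h>

/* Number of RGBQUAD entries following the BITMAPINFOHEADER, per the rules
   above: <= 8 bits per pixel means a full 2^bitCount table (biClrUsed only
   marks how many matter); > 8 bits means biClrUsed entries, usually 0.   */
static uint32_t palette_entries(uint16_t biBitCount, uint32_t biClrUsed)
{
    if (biBitCount <= 8)
        return 1u << biBitCount;
    return biClrUsed;
}

/* So when bfOffBits is not trusted, the pixel data starts at
   14 + biSize + 4 * palette_entries(biBitCount, biClrUsed). */
```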

You probably have either 24 or 32 in here. 24 means that each pixel consists of 3 bytes and each row of pixels is padded to a 4-byte boundary. 32 means that each pixel is 32 bits and directly encodes a LabVIEW RGB value, but you should make sure to mask out the uppermost byte by ANDing the pixels with 0xFFFFFF.
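The row padding and the masking can be written down compactly (a sketch; function names are mine):

```c
#include <stdint.h>

/* Each pixel row is padded to a 4-byte boundary:
   round the row's bit count up to the next multiple of 32, then in bytes. */
static uint32_t row_stride(uint32_t biWidth, uint16_t biBitCount)
{
    return ((biWidth * biBitCount + 31u) / 32u) * 4u;
}

/* For 32-bit pixels, mask out the uppermost byte to get a plain RGB value. */
static uint32_t to_rgb(uint32_t pixel32)
{
    return pixel32 & 0xFFFFFFu;
}
```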

biCompression is also important. If this is not BI_RGB (0), you will likely want to abort here, as otherwise you have to start bit shuffling. RLE decoding is fairly doable in LabVIEW, but if the compression indicates a JPEG or PNG payload, we are back at square one.

Now the nice thing about all this is that there are actually already VIs in LabVIEW that can deal with BMP files (vi.lib\picture\bmp.llb). The less nice thing is that they are written to work directly on file refnums. Turning them into something that interprets a byte stream array will be some work; a nice exercise in byte shuffling. It's not really complicated, but if you haven't done it before, it is a bit of work the first time around. Still a lot easier than trying to get a callback DLL working.

Edited by Rolf Kalbermatter
Posted (edited)
12 hours ago, Rolf Kalbermatter said:

Well, a Windows bitmap file starts with a BITMAPFILEHEADER.

 

Thanks for your answer.
- I think you created something similar in the example you posted before, but there it was just necessary to additionally decode the compressed images, right?
- Currently, there are two kinds of image output formats in the camera's own settings, JPEG and BMP; the desired output format can be selected.
As an extra, H.264 or MPEG-4 video output can be selected. I tested both separately and the video can be read in both cases. Of course, I don't know what happens in the background. Does it really decode the H.264 data, or does it still receive data in MPEG-4 format even though I choose the H.264 output format? I don't know. I tested it with a different camera; it only works with a camera that has an H.264 output format.
- To be honest, I don't know if I can put much more effort into this right now. I guess it will still require a lot of work. Maybe there will be people who can add to it in the future :)

I don't want to bother you any more. You and dadreamer have been very helpful so far; thanks for your help. This example we created will probably help other people as well. It was nice to know that precious people like you still exist.

- The last thing I want to do (using my last energy for this job) is to create an example without buttons, where the video is displayed directly when the VI is run and the application can be closed with the window's close button.

Edited by alvise
13 hours ago, Rolf Kalbermatter said:

Turning them into something that interprets a byte stream array will be some work.

Windows Imaging Component already has a native bitmap decoder for the BMP format that works out of the box. So why reinvent all of that from scratch? Of course, there would be a reason if we were on Linux or macOS, but as this is about Windows only, the WinAPI decoder should already be optimized to process the format fast enough. I doubt it would perform worse than LabVIEW's built-in instruments.

Posted (edited)

Without buttons, I want the video to be displayed directly when the VI is run. I tried several methods to use the event structure without the button, but in a way that I can't explain, the video capture speed drops. And in general, with the method I currently use, the frame capture rate decreases over time.

[attached screenshot]

I have assigned a numeric indicator to each event frame, and I write values to the indicators through the Value (Signaling) property; this way I trigger the frames one after the other. It's not much of a problem when it first runs, but after a while I notice that the video stream starts to seem slow.

Does this occur due to the accumulation of unwanted events?

The example I'm using right now is the one from here; only the flat sequence shown in the picture was added.

Does anyone have an idea what is wrong with this code?

Edited by alvise
