
Rolf Kalbermatter


Posts posted by Rolf Kalbermatter

  1. 100% CPU load on the server would indicate some form of "greedy" loop. If you create a loop in LabVIEW that has no means of throttling its speed, it will consume 100% of the CPU core it is assigned to, even if there is nothing in the loop and it effectively does nothing, just very fast.

    More precisely, that loop will consume whatever is left over of that core after other VI clumps have had their chance to snoop some time off that core.
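
    For illustration, a minimal C sketch of the same effect (the function names are made up for this example): a loop with no wait spins flat out and pins one core, while even a tiny wait per iteration hands the core back to the scheduler.

    #include <unistd.h>     /* usleep() */

    volatile int keep_running = 1;

    void greedy_loop(void)      /* consumes ~100% of one core */
    {
        while (keep_running)
            ;                   /* "does nothing" very fast */
    }

    void throttled_loop(void)   /* consumes close to 0% CPU */
    {
        while (keep_running)
            usleep(1000);       /* ~1 ms wait per iteration */
    }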

  2. 13 hours ago, infinitenothing said:

    Has anyone done a bandwidth test to see how much data they can push through a 10GbE connection? I'm currently seeing ~2Gbps. One logical processor is at 100%. I could try to push more but I'm wondering what other people have seen out there. I'm using a packet structure similar to STM. I bet jumbo frames would help.

    Processor on PC that transmits the data: Intel(R) Xeon(R) CPU E3-1515M v5 @ 2.80GHz, 2808 Mhz, 4 Core(s), 8 Logical Processor(s)   

    Definitely echo Hooovahh's remark. The LabVIEW TCP nodes may limit the effectively reachable throughput since they do their own intermediate buffering, which adds some delay to the read and write operations, but they use select() calls to asynchronously control the socket, which should yield the CPU very efficiently when there is nothing to do yet for a socket. And the buffer copies themselves should not be able to max out your CPU: 2 Gbps comes down to 250 MB/s, which, even if you account for double buffering, once in LabVIEW and once in the socket, should not be causing a 100% CPU load. Or did you somehow force your TCP server and client VIs into the UI thread? That could have pretty adverse effects, but it would also be noticeable in that your LabVIEW GUI starts to get very sluggish.
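
    For reference, at the socket level such an efficient wait looks roughly like this (a POSIX-style sketch, not LabVIEW's actual implementation; the Winsock select() on Windows behaves the same way):

    #include <sys/time.h>
    #include <sys/select.h>

    /* Block until the socket is readable or the timeout expires; the thread
       sleeps in the kernel instead of burning CPU while nothing arrives.
       Returns >0 if readable, 0 on timeout, -1 on error. */
    int wait_readable(int sock, int timeout_ms)
    {
        fd_set readfds;
        struct timeval tv;

        FD_ZERO(&readfds);
        FD_SET(sock, &readfds);
        tv.tv_sec = timeout_ms / 1000;
        tv.tv_usec = (timeout_ms % 1000) * 1000;

        return select(sock + 1, &readfds, NULL, NULL, &tv);
    }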

  3. I haven't tried it, but in your minimal C wrapper you should be able to install a SIGTERM handler in this way, and in the handler you could call a second export of the shared library to inform your LabVIEW program that it needs to shut down now! (A rough sketch of what such an export could look like follows the wrapper code.)

    #include <signal.h>
    #include <stdio.h>      /* perror() */
    #include <stdlib.h>     /* exit(), EXIT_FAILURE */
    #include <string.h>     /* memset() */
    #include "SharedLib.h"
    
    void handler(int signum)
    {
    	SharedLibSignal(signum == SIGTERM);
    }
    
    int main()
    {
    	struct sigaction action;
    	memset(&action, 0, sizeof(action));
    
    	action.sa_handler = handler;
    	if (sigaction(SIGTERM, &action, NULL) == -1)
    	{
    		perror("sigaction");
    		exit(EXIT_FAILURE);
    	}
    	return SharedLibEntryPoint();
    }
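
    A rough sketch of what that second export could look like inside the shared library, assuming it forwards the signal to a user event that the LabVIEW diagram registered beforehand. SharedLibRegisterEvent and the event handling are assumptions, not part of the original code; LVBoolean, LVUserEventRef and PostLVUserEvent come from LabVIEW's extcode.h.

    #include "extcode.h"

    static LVUserEventRef shutdownEvent = NULL;

    /* Called once from the LabVIEW diagram to hand over the user event refnum. */
    void SharedLibRegisterEvent(LVUserEventRef ref)
    {
        shutdownEvent = ref;
    }

    /* Called from the signal handler in the wrapper above. */
    void SharedLibSignal(LVBoolean terminate)
    {
        if (shutdownEvent)
            PostLVUserEvent(shutdownEvent, &terminate);
    }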

     

    • Thanks 1
  4. What code do you use? What device? Your address seems to indicate the Modbus decimal addressing scheme. Most LabVIEW Modbus libraries I know of, however, use the more computer-savvy hexadecimal naming scheme with explicit register mode selection.

    This means you need to remove the first digit from your address (the number 4) and decrement the remaining address by one to get a zero based address.

    However, Modbus function code 4 is a Read Input Registers operation, and there is no Write Input Register operation, as it would not make any sense to write to an input.

    Read Holding Register would be an address starting with 3 and Write Holding Register would start with 6.

    So when using the NI Modbus library for instance in order to read your Modbus address 40001 you would need to use the Read Modbus function, selecting the Input Register group and passing an address of 0. There is no possibility to write to the input registers.

    For Holding Registers the Modbus address would be 30001 for reading and 60001 for writing. And when using the LabVIEW Modbus library you would select the Read and Write function respectively, selecting the Holding register and passing an address of 0.
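
    For what it's worth, the address conversion described above boils down to this little sketch (assuming the traditional 5-digit decimal addresses):

    /* Split a decimal Modbus address such as 40001 into its register group
       digit and the zero-based register address that the LabVIEW libraries
       expect: 40001 -> group 4, address 0. */
    void split_modbus_address(int decimalAddr, int *group, int *zeroBased)
    {
        *group = decimalAddr / 10000;            /* leading digit, e.g. 4 */
        *zeroBased = (decimalAddr % 10000) - 1;  /* strip it and subtract 1 */
    }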

  5. 9 minutes ago, ShaunR said:

    You are also required to supply the source and compile environments of the vanilla LGPL code even when just dynamic linking and that can be quite big in size. While one may argue it's merely a maintenance overhead, never-the-less, it is an overhead keeping track of (and version control of) other peoples source AND the specific compilation environment one used.

    I would think a link to the original project's website that has the downloads available could also suffice. Of course, that leaves you in a bit of a bind if the original developer's site goes down or is otherwise made unavailable.

  6. 28 minutes ago, JKSH said:

    LGPL does not require you to open-source your code at all.

    Only if you make absolutely no changes to the library and use some form of dynamic linking.

    If you make any change to the LGPL portion, you are obligated to distribute that change to any user of your software who asks for it.

    And if you don't use dynamic linking, your entire project becomes part of the "work" that the LGPLed library represents. There exists no broadly accepted technology that lets you replace statically linked libraries in an end product with some other library.

    LabWindows/CVI has/had a technique that lets you actually load lib files as if they were shared libraries, but that was a highly CVI-specific feature that no other compiler I'm aware of really supports.

  7. Personally I think the differences between MIT, BSD, Apache and Commons-like licenses are fairly small. And unless your project ends up being a huge success that storms the world (a fairly small chance in the LabVIEW world 😁) you won't notice a real difference between them.

    The ones that clearly stand apart from these are the GPL and LGPL licenses which, while open source too, try to force any user of them (to a somewhat smaller degree with the LGPL) to open source their entire code too.

  8. 1 hour ago, Sascha said:

    Yep, NI won't do this work. The majority of the developers are on Windows/Intel and will be for the foreseeable future.

    Actually they might, but the Mac does form an extra obstacle. For Linux they were pretty much on track to finally provide a real DAQmx driver, since they had to develop it for their cRIO platform anyhow. The only problem is the packaging, as there are not only at least three different package formats out there (rpm, deb and opkg, with the last one used for the NI embedded platforms) but also many other egregious differences between distributions that make installing a hardware support package a complete pain in the ass. And that is not even mentioning the kernel folks' war against allowing non-open-source kernel drivers to run in their kernel.

    Quote

    macOS LabVIEW is quite nice, actually. I use it for some basic development on component level. But I'll always need to do some block diagram beautifying and front panel rework if I want to use it in Windows. 

    That is in the nature of the beast. These platforms have fairly differing ideas about layout composition in the underlying graphics subsystem, and to make matters not too easy, there always remains the issue of fonts and their licensing, which makes transferring a layout pixel-accurately across systems pretty much impossible. Unfortunately the LabVIEW folks chose to implement a pixel-based UI system rather than an abstract graphics coordinate system, but that is understandable. The only platform back in those days that had some form of more abstract coordinate system was QuickDraw on the Mac (and X Window also has more abstract coordinates, as it was from the beginning designed to be a remote API where the client did not know, nor care, about the actual graphics hardware used on the server; sometimes the server has no graphics hardware of its own). Windows GDI was purely pixel oriented, and that cost Microsoft a lot of hacks later on to support high-resolution displays in Windows. GDI is in essence still pixel based to the current day, and that is the API LabVIEW uses to this day to draw to the screen.

  9. 5 hours ago, Sascha said:

    Hi, I think all the formerly happy Mac-Parallels-Windows-LabVIEW developers, me included, are trapped on the INTEL platform.
    I don't see NI going anywhere with ARM. Get prepared to buy a Windows-INTEL-laptop in the future. 🤢

    Actually, supporting the M1/M2 chips would not be such a big deal for NI as far as LabVIEW goes. The LLVM compiler backend they use has already supported those chips for quite some time. And the MacOSX version of LabVIEW itself shouldn't really pose too many problems to compile with Xcode targeting the M1 hardware, since they have already been doing that for quite a few versions, and the 64-bit version of LabVIEW for Mac did away with quite a few of the older Carbon compatibility interfaces.

    What will be trickier is support for DAQmx, NI-488.2 and other hardware interface drivers. Not impossible, but quite a bit of work to get right, and the most intense part is probably all the testing needed.

    • Like 1
  10. On 1/26/2022 at 5:56 AM, Jacob7 said:

     

    Did you figure out how to get it installed? I'm currently experiencing the same problems. I was able to install base LV with the math toolbox but nothing else. Bricks my machine every other time. I've gone through ~25 Windows 11 snapshots trying to get at least some basic add-ons installed like DAQ, but haven't had any luck :( 

    Any hardware driver is almost certain to not work. Those hardware drivers depend on kernel drivers that must run in the Windows kernel. And it is almost certainly not possible to fully emulate the x86 hardware in ring 0, which is the CPU mode in which the kernel executes. Emulating that part, with all the ring context switches that must occur whenever code execution transitions between kernel and user space, is something that no CPU emulator gets fully right to this day.

    The same issue exists when you try to run x64 LabVIEW on an M1 Mac. LabVIEW itself works with minor tweaks to some MacOSX configuration settings for the LabVIEW application, but don't try to get any hardware driver installed unless you want to brick your MacOSX installation. Rosetta 2, which is the Apple equivalent for emulating an x64 CPU on the M1, does a remarkable job for user space code, but Apple explicitly states that it can NOT emulate an x64 CPU in kernel space and actively tries to prevent the system from installing such a driver anyhow.

    I suppose Apple might have been able to create a Rosetta version that even works for kernel mode code, but I have a strong suspicion that they wanted this to work before the end of this decade, so they purposefully limited the scope to only emulate user space code. 😁

  11. On 6/21/2022 at 6:49 AM, Antoine Chalons said:

    DSC module is Windows only.

    edit : a runtime is needed for deployed EXEs, I'm pretty sure the runtime is free, at least it was in the past.

    It definitely wasn't free last time I checked. This page would agree with that.

    The product page agrees as well. You need the Deployment license for every computer on which you want to run an executable that uses the DSC module.

    There are a few functions of the DSC module that do not necessarily require a license. Maybe the user manager component is part of that.

    Yes, it is Windows only, 32-bit only and pretty much deprecated.

     

  12. 22 hours ago, ASTDan said:

    I don't know what NI's policy is about watching a recorded user group meeting.  Personally I don't see the difference.  I don't make the call however...

    There might be no difference technically. But there is one in terms of acknowledgment that you attended. Typically there is some verification with the user group organizer when someone claims to have attended one. If you watch a recording, it would be hard for NI to verify that you actually did so and didn't just claim to have done so.

  13. 1 hour ago, Bobillier said:

    Hi

    I have created a program that calls a .NET assembly component (.NET 4, System.dll) and everything is OK in my LV2011 development IDE.

    But when I create an EXE with it, each time I run the program it asks me for the .NET DLL path. I do that on the same PC.

    If I indicate the path, the program loads it and runs correctly. It's a bit painful and not very professional to do that each time.

    To solve this, I have tried lots of workarounds but without success.

    1) Add the DLL caller VI directly in the project folder (initially in my user folder).

    2) Add a specific .ini file where I have added a key for the search path (viSearchPath= "C:\Windows\Microsoft.NET\assembly\GAC_MSIL\System\v4.0_4.0.0.0__b77a5c561934e089") to help LV find the assembly.

    3) Add the DLL to my exe build in the data folder.

    4) etc. etc...

    I have read and tested lots of things on the NI website but it's not always clear to me.

    If someone can give me a good procedure it will be Christmas for me.

    Eric

    What System DLL? If it is a strongly named assembly and resides in the Global Assembly Cache, something is wrong. If it doesn't reside there it is NOT a system assembly for sure!

    If adding it to the directory where the exe file itself resides does not help, it depends on other DLLs/assemblies, and you need to find out which and add them to your exe directory too.

  14. 1 hour ago, ShaunR said:

    A bugger to debug though ;)

    And Mac with the Intel compiler, I believe.

    Very possible, since the Mac is technically Unix too, BSD Unix at that, but still Unix. Intel tries to make their compiler behave as the platform expects. Microsoft tends to make it behave as they feel is right. Although I would expect their Visual Studio Code platform to at least have a configurable switch somewhere in one of its many configuration dialogs to determine whether it should behave like GCC on non-Windows platforms in this respect. It's not like there would be much of a problem to add "yet another configuration switch" to the zillion already existing ones.

  15. The interesting thing is that they document the header values to be 64-bit long numbers. Traditionally, long in C has always been a 32-bit integer and the same as an int (except on 16-bit platforms, including Windows 3.1, where an int was a 16-bit integer).

    Only in GCC (or on Unix) is a long defined to be a 64-bit integer. Windows (and the Microsoft compilers) continues to treat a long as a 32-bit integer even in 64-bit mode. If you want to be sure to get 64 bits, you should use long long or a compiler/SDK-defined private type such as __int64 for MSC and QUAD for Windows APIs. Newer C compilers also provide the standard fixed-width types such as int64_t that you can use when you want a specific integer size.
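
    A small C sketch of the point; the sizes for int and long are what you would typically see on the respective platforms, only the fixed-width types are guaranteed:

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        /* 64-bit Windows (MSVC): long is 4 bytes.
           64-bit Linux/macOS (GCC/Clang): long is 8 bytes.
           int64_t is 8 bytes wherever it exists. */
        printf("sizeof(long)      = %zu\n", sizeof(long));
        printf("sizeof(long long) = %zu\n", sizeof(long long));
        printf("sizeof(int64_t)   = %zu\n", sizeof(int64_t));
        return 0;
    }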

  16. 56 minutes ago, Bruniii said:

    The array of float is indeed an array of singles; after the "End-Of-Line" suggestion by dadreamer, my implementation is working fine. But yours is cleaner and very much appreciated. Thank you!

    What system created these data? It is extremely rare nowadays that data is stored in big-endian format, which is incidentally what LabVIEW prefers. You need to get rid of the Typecast in there, however. Currently the read of the header uses the default byte ordering, which tells the Binary File Read to use big endian, so LabVIEW will swap the bytes on read to make them little endian, since you work on x86 hardware. So far so good.

    You can directly replace the u32 constant in there with a single-precision float constant and forget about the Typecast altogether.
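
    For reference, this is essentially the byte swap LabVIEW performs for you when reading a big-endian single on little-endian hardware (a C sketch, not LabVIEW's actual code):

    #include <stdint.h>
    #include <string.h>

    /* Interpret 4 bytes stored big endian as a host-order single precision float. */
    float be32_to_float(const uint8_t b[4])
    {
        uint32_t u = ((uint32_t)b[0] << 24) | ((uint32_t)b[1] << 16) |
                     ((uint32_t)b[2] << 8)  |  (uint32_t)b[3];
        float f;
        memcpy(&f, &u, sizeof f);   /* reinterpret the bits, no numeric conversion */
        return f;
    }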

  17. Shaun is right. Some exports from kernel32 seem not to be supported, but we can't see the list of bad functions in your screenshot.

    Also, the msvcr90.dll sounds weird if you run the DLL checker on the same machine on which you verified the DLL to work in LabVIEW.

    MSVC9.0 is the version of the MS C Compiler used in Visual Studio 2008. You need the MS Visual Studio 2008 32-bit C Runtime installed on every system that you want to execute this DLL on.

    But the standard MS C runtime can't be installed on the Pharlap ETS system, since it depends on newer Windows APIs that Pharlap ETS doesn't support. NI has created special versions of the MS C runtime that can be installed on Pharlap ETS. You need to install the corresponding support on each real-time target in NI MAX. There are installable packages for the following MS C runtimes:

    MSC 7.1 (Visual Studio 2003)

    MSC 9.0 (Visual Studio 2008)

    MSC 10.0 (Visual Studio 2010)

    If you managed to compile your project in MSVC 6.0 then it will also work since that only uses the standard MSVCRT.DLL that is provided with every Windows installation since about Windows 95.

    Only DLLs compiled with Visual Studio 6, 2003, 2008 or 2010 can possibly be made to work on NI Pharlap ETS systems. Any newer Visual Studio version or alternative C compilers are not really an option unless you are prepared to dig into linker script customization, but that is something I have never really bothered with, so it is unlikely to be worth even trying.

    • Like 2
  18. 8 hours ago, alvise said:

    Thanks for your answer.
    Actually I was able to get the stream using GetBMP function and even view it with "Win32 Decoder Img Stream.vi" with NI IMAQ but I don't think that is the right way.

    image.png.412f53ded9af4aae53e5db4d6812f8d0.png
    -What should be done to decode video stream information with ''GetBmp'' instead of ''GetJpeg'' function directly?

    - In the example you shared, I couldn't get a video stream from the example that uses the "GetPlayedFrames" function. I tried to fix the problem but couldn't find the source.

    Well a Windows bitmap file starts with a BITMAPFILEHEADER.

    typedef struct tagBITMAPFILEHEADER {
      WORD  bfType;
      DWORD bfSize;
      WORD  bfReserved1;
      WORD  bfReserved2;
      DWORD bfOffBits;
    } BITMAPFILEHEADER, *LPBITMAPFILEHEADER, *PBITMAPFILEHEADER;

    This is a 14-byte structure with the first two bytes containing the characters "BM", which corresponds nicely with our 66 = 'B' and 77 = 'M'. The next 4 bytes are a little-endian 32-bit unsigned integer indicating the actual number of bytes in the file. So here we have 56 * 65536 + 64 * 256 + 54 bytes. Then there are two 16-bit integers whose meaning is reserved, and then another 32-bit unsigned integer indicating the offset of the actual bitmap bits from the start of the byte stream, which, not surprisingly, is 54: the 14 bytes of this structure plus the 40 bytes of the following BITMAPINFO structure. If you were sure what format of bitmap is in the stream you could just jump right there, but that is usually not a good idea. You do want to interpret the bitmap header to find out what format is really in there, and only try to "decode" the data if you understand the format.

    After this there is a BITMAPINFO (or in some obscure cases a BITMAPCOREINFO structure; this was the format used by OS/2 bitmaps in a long-gone past. Windows doesn't create such files, but most bitmap functions in Windows are capable of reading it).

    Which of the two it is can be determined by interpreting the next 4 bytes as a 32-bit unsigned integer and looking at its value. A BITMAPCOREINFO would have a value of 12 in here, the size of the BITMAPCOREHEADER structure. A BITMAPINFO structure has a value of 40 in here, the size of the BITMAPINFOHEADER inside BITMAPINFO.

    Since you have 40 in there it must be a BITMAPINFO structure, surprise! 

    typedef struct tagBITMAPINFOHEADER {
      DWORD biSize;
      LONG  biWidth;
      LONG  biHeight;
      WORD  biPlanes;
      WORD  biBitCount;
      DWORD biCompression;
      DWORD biSizeImage;
      LONG  biXPelsPerMeter;
      LONG  biYPelsPerMeter;
      DWORD biClrUsed;
      DWORD biClrImportant;
    } BITMAPINFOHEADER, *PBITMAPINFOHEADER;

    biWidth and biHeight are clear; biPlanes can be confusing but should usually be one. biBitCount is the most interesting right now, as it indicates how many bits a pixel has. If this is less than or equal to 8, a pixel is only an index into the color table that follows directly after the BITMAPINFOHEADER. If it is bigger than 8 there is usually NO color table at all, but you need to check biClrUsed to be 0; if it is not 0, there are biClrUsed color elements in the RGBQUAD array that can be used to optimize the color handling. If the bitCount is 8 or less, biClrUsed only indicates which of the color palette elements are important; the table always contains 2^bitCount elements. With bitCount > 8 the pixel values encode the color directly.

    You have probably either a 24 or 32 in here. 24 means that each pixel consists of 3 bytes and each row of pixels is padded to a 4 byte boundary. 32 means that each pixel is 32-bits and directly encodes a LabVIEW RGB value but you should make sure to mask out the uppermost byte by ANDing the pixels with 0xFFFFFF.

    biCompression is also important. If this is not BI_RGB (0) you will likely want to abort here as you have to start bit shuffling. RLE encoding is fairly doable in LabVIEW but if the compression indicates a JPEG or PNG format we are back at square one.

    Now the nice thing about all this is that there are actually already VIs in LabVIEW that can deal with BMP files (vi.lib\picture\bmp.llb). The less nice thing is that they are written to work directly on file refnums. Turning them into something that interprets a byte stream array will be some work. A nice exercise in byte shuffling. It's not really complicated, but if you haven't done it before it is a bit of work the first time around. Still a lot easier than trying to get a callback DLL working.
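
    To give an idea of the byte shuffling involved, here is a minimal C sketch that walks the two headers in such a byte stream and only accepts the plain uncompressed case discussed above (a sketch, not the vi.lib code):

    #include <stdint.h>

    static uint32_t rd_u32(const uint8_t *p)
    {
        return (uint32_t)p[0] | ((uint32_t)p[1] << 8) |
               ((uint32_t)p[2] << 16) | ((uint32_t)p[3] << 24);
    }

    /* Returns a pointer to the pixel rows and fills in width, height and
       bitCount, or NULL if the buffer is not a simple BI_RGB bitmap. */
    const uint8_t *parse_bmp(const uint8_t *buf, uint32_t len,
                             int32_t *width, int32_t *height, uint16_t *bitCount)
    {
        if (len < 54 || buf[0] != 'B' || buf[1] != 'M')
            return NULL;                        /* no BITMAPFILEHEADER */

        uint32_t offBits = rd_u32(buf + 10);    /* offset to the pixel data */
        if (rd_u32(buf + 14) != 40)             /* biSize: expect BITMAPINFOHEADER */
            return NULL;

        *width    = (int32_t)rd_u32(buf + 18);
        *height   = (int32_t)rd_u32(buf + 22);
        *bitCount = (uint16_t)(buf[28] | (buf[29] << 8));

        if (rd_u32(buf + 30) != 0 /* BI_RGB */ || offBits >= len)
            return NULL;                        /* compressed or truncated */

        return buf + offBits;                   /* rows are padded to 4-byte boundaries */
    }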

  19. On 6/11/2022 at 7:36 PM, alvise said:

    - How do we eliminate all CoInitialize work by returning BMP? Why does returning BMP have such an advantage?

    - Does  taking it as BMP also cause a drop in camera FPS? As far as I know, BMP is bigger in size.

    There is another function in PlayCtrl.dll called PlayM4_GetBMP(). There is a VI in my last archive that should already get the BMP data using this function. It is supposed to return a Windows bitmap, and yes, that one is fully decoded. But!

    You currently retrieve a JPEG from the stream, which most likely isn't exactly the same format as what the camera delivers, so this function already does some camera stream decoding and then JPEG encoding, only to then have the COM JPEG decoder pull the decoded data back out of the JPEG image anyhow!

    Even though you do not know whether the BMP decoder in PlayCtrl.dll is written to be at least as performant as the COM JPEG decoder, it is actually likely that the detour through the JPEG format costs more performance than going directly to BMP. And the BMP format only contains a small header of maybe 50 bytes or so that is prepended in front of the bitmap data. So not really more than what you get after you have decoded your JPEG image.

  20. 6 hours ago, dadreamer said:

    Win32 Decode Img Stream.vi does not produce memory leaks of any sort. I've been running it on a production for months. Never ever received errors from that VI. As to CoInitializeEx, it was implemented this way in the original thread on SO, I just borrowed the solution. But I checked now, CoInitializeEx always returns TRUE, no matter what. Extra resources are not allocated. I assume it's safe enough to call it multiple times from the same thread. But you may easily add CoUninitialize to there, if you're afraid it works improperly. I'm just thinking this might be not a good idea, given that description of the function:

    A lot of work would be done on each call. Better to do this once on the app exit. Or leave it to the OS, when LabVIEW quits.

    Well, how many times per second do you call that VI? In this camera application it was called ONLY about 20 to 50 times per second. And MSDN definitely and clearly states that you should balance every call to CoInitialize(Ex) with exactly one call to CoUninitialize. And that of course has to happen in the same thread!

    If it would not allocate something, somehow, somewhere, that would not be necessary, as CoInitialize(Ex) returning TRUE (actually S_FALSE) simply indicates that it did not initialize the COM system for the current thread; but MSDN still says you need to call CoUninitialize even when CoInitialize(Ex) returns S_FALSE. That is definitely not for nothing! And if you do it in the LabVIEW VI you actually have a problem, as you cannot guarantee that CoUninitialize is called in the same thread as CoInitialize was, unless you make the entire VI subroutine priority. That guarantees that LabVIEW will NOT ever switch threads for the duration of the entire VI call.

    If it always returns TRUE (S_FALSE) even the first time you call it, it simply means that LabVIEW apparently already initialized the COM system for that thread.

    CoUninitialize should not do much, except maybe deallocate the thread local storage that it somehow creates to maintain some management information, unless it is the matching call to the first CoInitialize(Ex). In that case it gets indeed rather expensive, as it would deinitialize the COM system for the current thread.

    So to be fully proper, without risking allocating new resources with every call, I think the safest would be:

    Add a CoUninitialize call at the end of the VI if CoInitializeEx returned 1 (S_FALSE), to make sure nothing is accumulating, but don't call it if CoInitializeEx returned 0 (S_OK), as that was the first initialization of COM for the current thread. And make that VI subroutine priority. It's a pain for debugging, but once it works you simply guarantee that the COM functions execute in the same thread in which the object was created. Most COM class implementations are not guaranteed to work reliably in full multithreading operation.
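
    In C terms, the scheme described above looks roughly like this (a sketch; the apartment-threaded flag is an assumption, and the S_OK case is deliberately left unbalanced so the thread's first COM initialization stays alive):

    #include <objbase.h>

    HRESULT decode_one_frame(void)
    {
        HRESULT hr = CoInitializeEx(NULL, COINIT_APARTMENTTHREADED);
        if (FAILED(hr))
            return hr;                /* e.g. RPC_E_CHANGED_MODE: don't uninitialize */

        /* ... create the COM decoder object and do the work here ... */

        if (hr == S_FALSE)
            CoUninitialize();         /* balance only the "already initialized" case */
        return S_OK;
    }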

    And please do NOT try to pass the pointer that the callback function receives directly through the LabVIEW event. That pointer ceases to be valid the moment the callback function returns control to the caller, but the event goes through an event queue and then the event structure, and by the time your event structure sees the event, the buffer behind the pointer from the callback function has either been reused by the SDK for something else or been completely deallocated. You MUST copy the data that pointer refers to if you want to read it outside of the callback function.

    I did check the callback function from earlier again, and there is of course a memory leak in there!

    NumericArrayResize() is called twice. This in itself would be just useless but not fatal if there was not also a new eventData structure declared inside the function. Without that, the second call would simply be useless; it would not really do much, as the size is both times the same, and the DSSetHandleSize() used internally in that function is basically a no-op if the new size is the same as what the handle already has.

    extern "C" __declspec(dllexport) void __stdcall DataCallBack(LONG lRealHandle, DWORD dwDataType, BYTE * pBuffer, DWORD dwBufSize, DWORD dwUser)
    {
        if (cbState == LVBooleanTrue)
        {
            LVEventData eventData = { 0 };
            MgErr err = NumericArrayResize(uB, 1, (UHandle*)&(eventData.handle), dwBufSize);
            if (!err) // send callback data if there is no error and the cbstatus is true.
            {
                // LVEventData eventData = { 0 };
                // MgErr err = NumericArrayResize(uB, 1, (UHandle*)&(eventData.handle), dwBufSize); // Not useful
                LVUserEventRef userEvent = (LVUserEventRef)dwUser;
                MoveBlock(pBuffer, (*(eventData.handle))->elm, dwBufSize);
                (*(eventData.handle))->size = (int32_t)dwBufSize;
                eventData.realHandle = lRealHandle;
                eventData.dataType = dwDataType;
                PostLVUserEvent(userEvent, &eventData);
                DSDisposeHandle(eventData.handle);
            }
        }
    }

    But with the eventData structure declared again, it in fact creates two handles on each call but only deallocates one of them. Get rid of the first two lines inside the if (!err) block!

  21. 1 hour ago, alvise said:

    Yes just the callback does not cause a memory leak per se it is not a memory leak that is happening right now.
    So how can we solve this problem, is there a solution to it?

    There of course always is. But I have no idea where the memory leak would be. One thing that looks not only suspicious but is in fact possibly unnecessary is the call to CoInitializeEx(). LabVIEW has to do that early during startup already in order to ever be able to access ActiveX functionality. Of course, I'm not sure if LabVIEW does this on every possible thread that it initializes; most likely not. So you run into a potential problem here. The one thread it will for sure do CoInitialize(Ex) on is the UI thread. So if you execute all your COM functions in that decoder VI in the UI thread you can forget about calling this function. However, that of course has implications for your performance, since the entire decoding is then done in the UI thread. If you want to do it in an arbitrary thread for performance reasons you may need to call CoInitializeEx anyway, just to be sure, BUT!!!!!! Go read the documentation for that function!

    Quote

    To close the COM library gracefully on a thread, each successful call to CoInitialize or CoInitializeEx, including any call that returns S_FALSE, must be balanced by a corresponding call to CoUninitialize.

    Your function calls CoInitializeEx() on every invocation but never CoUninitialize(). That certainly has the potential of allocating new resources on every single invocation that are never ever released again. You will need to add a CoUninitialize() at the end of that function, and not just in the SUCCESS (return value 0) case but also in the "did nothing" (return value 1) case of CoInitializeEx().

    Of course, returning a BMP instead likely has the advantage of not only doing away with that entire CoInitialize() and CoUninitialize() business but also avoiding the potential need for extra resources to decode the MPEG (or with another camera maybe H.264) frame, encode it into a JPEG image, and then decode it back into a bitmap. Instead you immediately get a decoded flat bitmap that you only have to index at the right location, by interpreting some of the values in the BITMAPINFOHEADER in front of the pixel data.

  22. What I meant to say is that the callback alone should not leak memory. But the frequent allocation and deallocation certainly will fragment memory over time. That is not the same, although it can look similar. Due to fragmentation, more and more blocks of memory get allocated, and while LabVIEW (or whatever manages the memory for the LabVIEW memory manager functions) does know about these allocations and hasn't forgotten about them (which would be the meaning of a memory leak), it can't easily reuse those blocks for new memory requests, leaving that memory reserved but unused. That way the memory footprint of an application can slowly increase. It's not a memory leak, since that memory is still accounted for internally, but it causes more and more memory to be allocated.

    If you, however, allocate a memory pointer (or handle) and then subsequently forget about it, you have created a true memory leak. In the case of a handle there is the possibility in some instances to hand it to LabVIEW for further use, which makes LabVIEW responsible for releasing it, but those cases are fairly limited, usually only to parameters that are passed in from the diagram through Call Library Node parameters and then returned back.
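
    As a minimal illustration of that distinction, using the same LabVIEW memory manager calls as in the callback code above (a sketch; extcode.h is LabVIEW's external code header):

    #include "extcode.h"

    /* A true leak: the handle is allocated and then forgotten, so nothing
       can ever release it again. */
    void leaky(void)
    {
        UHandle h = DSNewHandle(1024);
        (void)h;                       /* never disposed, never handed to LabVIEW */
    }

    /* Not a leak: the allocation is released again in the same function. */
    void balanced(void)
    {
        UHandle h = DSNewHandle(1024);
        /* ... use the 1024-byte buffer ... */
        DSDisposeHandle(h);
    }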
