Rolf Kalbermatter

Everything posted by Rolf Kalbermatter

  1. And here you walk into the mist! LabVIEW is written in C(++) for most of it, but it doesn't and has never created C code for the normal targets you and I are using it with. Before LabVIEW 2010 or so, it translated the code first into a directed graph and then from there directly into machine code. Since then it has a two-layer approach. First it translates the diagram into DFIR (Dataflow Intermediate Representation), which is a more formalized version of a directed graph representation with additional features. Most of the algorithmic optimization, including things like dead code elimination, constant folding and many more, is done at this level. From there the data is passed to the open-source LLVM compiler engine, which then generates the actual machine code. At no point is any intermediate C code involved, as C code is notoriously inadequate for representing complex relationships in an easy-to-handle way. There is a C generator in LabVIEW that can translate a LabVIEW VI into some sort of C++ code. It was used for some of the early embedded toolkits such as the AD Blackfin Toolkit, Windows Mobile Toolkit, and ARM Toolkit. But the generated code is pretty unreadable, and the solution proved very hard to support in the long run. You can still buy the C Generator Add-on from NI, which gives you a license to use that generator, but its price is pretty exorbitant and active support from NI is minimal. Except under the hood in the Touch Panel Module, in combination with an embedded Visual C Express installation, it is not used in any currently available NI product AFAIK.
  2. Unfortunately I haven't found much time to work on this in the meantime. However, while the ping functionality of this library was a geeky idea that I pursued for the sake of seeing how hard it would be (and it turned out to be pretty easy given the universal low-level API of this library), I don't think it has much merit in the context of this library. In order to implement ping directly on the socket library interface one is required to use raw sockets, which are a privileged resource that only processes with elevated rights can create. I'm not seeing how it would be useful to start a process as admin just to be able to ping a device. And someone will probably argue that the ping utility in Windows or Linux doesn't need admin privileges. That is right, because under Linux that is solved by giving the ping utility special rights for accessing raw sockets during installation, and under Windows through a special ping DLL that interfaces to a privileged kernel driver which implements the ping functionality. At least under Windows this library could theoretically interface to that same DLL, but its API doesn't really fit easily into this library and I didn't feel like creating a special-purpose LabVIEW API that breaks the rest of the library concept and is only possible under Windows anyhow.
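For reference, a minimal sketch of what calling that Windows ping interface (the ICMP helper API in iphlpapi.dll/icmpapi.h) looks like from C; error handling is reduced to the bare minimum and the hard-coded loopback target is purely illustrative:

    #include <winsock2.h>
    #include <iphlpapi.h>
    #include <icmpapi.h>
    #include <stdio.h>
    #include <stdlib.h>
    #pragma comment(lib, "iphlpapi.lib")
    #pragma comment(lib, "ws2_32.lib")

    int main(void)
    {
        HANDLE icmp = IcmpCreateFile();
        if (icmp == INVALID_HANDLE_VALUE)
            return 1;

        char sendData[] = "ping";
        /* reply buffer: one ICMP_ECHO_REPLY plus the echoed data plus slack for errors */
        DWORD replySize = sizeof(ICMP_ECHO_REPLY) + sizeof(sendData) + 8;
        void *replyBuf = malloc(replySize);

        /* 0x0100007F is 127.0.0.1 in network byte order; replace with a real target */
        DWORD count = IcmpSendEcho(icmp, 0x0100007F, sendData, (WORD)sizeof(sendData),
                                   NULL, replyBuf, replySize, 1000);
        if (count > 0) {
            ICMP_ECHO_REPLY *reply = (ICMP_ECHO_REPLY *)replyBuf;
            printf("reply status %lu, round trip %lu ms\n",
                   reply->Status, reply->RoundTripTime);
        }
        free(replyBuf);
        IcmpCloseHandle(icmp);
        return 0;
    }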
  3. Why do graphs have autoscaling on their axes? The IMAQ Vision control has the same for the range. Certainly not a bug, but sometimes not what you expect. If you show the Image Information you also see the range that the image currently uses, and you can change that directly in there or through properties.
  4. I would guess this is only true if you use compiled code separated from the VI. Otherwise the corresponding compiled code resource in the VI will be very much different and definitely will have some indication of bitness. Still, for the rest of the VI it most likely indeed doesn't matter at all, especially for an empty VI. There might be certain things on the diagram that change, but I would guess almost all of it is code-generation related, so it would actually only affect the VI itself if you don't use separate compiled code.
  5. I'm not really understanding what you are saying. For one, the case that UI controls configured to coerce their values would do the coercion also when the VI is used as a subVI: that was true in LabVIEW 3.0 and maybe 4.0 but was removed after that because it was indeed considered a bad idea. I should say that quite a few people got upset about this back then, but you seem to agree that it is not desirable. As to using the fact that the Value Change event does not get triggered to allow you to alert a user that there was a coercion??? Sorry, but I'm not following you here at all. How can you use an event that does NOT occur to trigger any action? That sounds so involved and outside of our space-time continuum that my limited brain capacity can't comprehend it. But you can't use the fact that the value is the same to mean that a limit has been reached. The user is free to go to the control, type in exactly the same value as is already shown, hit enter or click somewhere else in the panel, and the Value Change event will be triggered. Value Change simply doesn't mean that there is a different value, only that the user did something with the control to set it to the same or a different value. Sounds involved, I know, but there have been many discussions both in the LabVIEW development team as well as in many other companies who design UI widget libraries, and they generally all agree that you want to trigger on user interaction with the control for maximum functionality and leave the details of whether an equal value should mean something or not to the actual implementor. The name of the event may indeed be somewhat misleading here. In LabWindows/CVI, NI used the VALUE_COMMIT term for the same event. However, I suppose the word "commit" was considered too technical for use in LabVIEW.
  6. I'm not sure I would fully agree here. Yes, security is a problem, as you can not get at the underlying socket in a way that would allow injecting OpenSSL or similar into the socket, for instance. So TCP/IP using LabVIEW primitives is limited to unencrypted communication. Performance-wise they aren't that bad. There is some overhead in the built-in data buffering that consumes some performance, but it isn't that bad. The only real limit is the synchronous character towards the application, which makes some high-throughput applications more or less impossible. But those are typically protocols that are rather complicated (video streaming, VoIP, etc.) and you do not want to reimplement them on top of the LabVIEW primitives but rather import an existing external library for that anyway. Having a more asynchronous API would also be pretty hard to use for most users. Together with the fact that it is mostly only really necessary for rather complex protocols, I wouldn't see any compelling reason to spend too much time on that. I worked through all this pretty extensively when trying to work on this library. Unfortunately the effort to invest into such a project is huge and the immediate needs for it were somewhat limited. Shaun seems to be working on something similar at the moment, but possibly making its scope even bigger. I know that he prefers to solve as much as possible in LabVIEW itself rather than creating an intermediate wrapper shared library. One thing that would concern me here is the implementation of the intermediate buffering in LabVIEW itself. I'm not sure that you can get similar performance there as doing the same in C, even when making heavy use of the In-Place structure in LabVIEW.
  7. Hooovahh mentioned it on the side after ranting a bit about how bad the NI INI VIs were, but the Variant Config VIs have a "floating point format" input! Use that if you don't want the library to write floating point values in the default %.6f format. You could use, for instance, %.7e for scientific format with 7 digits of precision, or %.7g to automatically switch between fixed and scientific notation depending on the magnitude.
  8. I wonder if this is very useful. The Berkeley TCP/IP socket library, which is used on almost all Unix systems including Linux, and on which the Winsock implementation is based too, has various configurable tuning parameters. Among them are things like the number of outstanding acknowledge packets as well as the maximum buffer size per socket that can be used before the socket library simply blocks any more data from coming in. The cRIO socket library (well, at least for the newer NI Linux systems; the VxWorks and Pharlap libraries may be privately baked libraries that could behave less robustly), being in fact just another Linux variant, certainly uses them too. Your mega-jumbo data packet will simply block on the sender side (and fill your send buffer) and more likely cause a DOS attack on your own system than on the receiving side. Theoretically you can set your send buffer for the socket to 2^32 - 1 bytes of course, but that will impact your own system performance very badly. So is it useful to add yet another "buffer limit" on the higher level protocol layers? Aren't you badly muddying the waters about proper protocol layer responsibilities with such band-aid fixes? Only the final high level protocol can really make any educated guesses about such limits, and even there it is often hard to do if you want to allow variable-sized message structures. Limiting the message to some 64KB, for instance, wouldn't even necessarily help if you get a client that maliciously attempts to throw thousands of such packets at your application. Only the final upper layer can really take useful action to prepare for such attacks. Anything in between will always be possible to circumvent by better architected attack attempts. In addition you can't set a socket buffer above 2^16 - 1 bytes after the connection has been established, as the corresponding TCP window needs to be negotiated during connection establishment. Since you don't get at the refnum in LabVIEW before the socket has been connected, this is therefore not possible. You would have to create your DOS code in C or similar to be able to configure a send buffer above 2^16 - 1 bytes on the unconnected socket before calling the connect() function.
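In plain Winsock terms, a minimal sketch of what that looks like; this is illustrative only, and WSAStartup() is assumed to have been called already:

    #include <winsock2.h>
    #include <ws2tcpip.h>
    #pragma comment(lib, "ws2_32.lib")

    /* Returns a connected socket with an enlarged send buffer, or INVALID_SOCKET. */
    SOCKET ConnectWithLargeSendBuffer(const struct sockaddr_in *addr, int sndbuf)
    {
        SOCKET s = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
        if (s == INVALID_SOCKET)
            return INVALID_SOCKET;

        /* SO_SNDBUF has to be set before connect(); window scaling beyond 64 KB
           is negotiated during connection establishment. */
        setsockopt(s, SOL_SOCKET, SO_SNDBUF, (const char *)&sndbuf, sizeof(sndbuf));

        if (connect(s, (const struct sockaddr *)addr, sizeof(*addr)) != 0) {
            closesocket(s);
            return INVALID_SOCKET;
        }
        return s;
    }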
  9. Not really like this! My code generally uses a header of a fixed size with more than just a size value. So there is some context that can be verified before interpreting the size value. The header usually includes some protocol identifier, version number and a message identifier before specifying the size of the actual message. If the header doesn't evaluate to a valid message, the connection is closed and, in the client case, restarted. For the server it simply waits for a reconnection from the client. Of course if you maliciously create a valid header specifying your ridiculous length value it may still go wrong, but if you execute your client code on your own machine you will probably run into trouble before it hits the TCP Send node. I usually don't go through the trouble of trying to guess whether a length value might be useful after the header has been determined to be valid. I might as well consider that in the future, based on the message identifier, but if you have figured out my protocol you may as well find a way to cause a DOS attack anyway. Not all message types can be made fixed size, and imposing an arbitrary limit on such messages may look good today but bite you in the ass tomorrow. And yes, I have used whitelisting on an SMS server implementation in the past. Not really funny if anyone in the world could send SMS messages through your server where you have to pay for each message.
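A minimal sketch of such a fixed-size header; the field names, sizes and magic values are illustrative, not taken from any specific protocol of mine:

    #include <stdint.h>

    #pragma pack(push, 1)
    typedef struct {
        uint32_t protocolId;   /* constant magic value identifying the protocol */
        uint16_t version;      /* protocol version */
        uint16_t messageId;    /* message type identifier */
        uint32_t payloadSize;  /* size of the message body that follows */
    } MsgHeader;
    #pragma pack(pop)

    /* Returns nonzero if the header looks like one of ours; only then is
       payloadSize trusted enough to allocate a receive buffer. */
    static int HeaderIsValid(const MsgHeader *hdr)
    {
        return hdr->protocolId == 0x4C414221u   /* hypothetical magic number */
            && hdr->version == 1
            && hdr->messageId < 32;
    }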
  10. They may not be meant to leave your C function, but your question was about whether you should catch them or any others or not. As to Structured Exception Handling, that's indeed the official term for the Windows way, although I have seen it used for other forms of exception handling too. The Windows structured exception handling is part of the Windows kernel and can be used from both C and C++, but Microsoft nowadays recommends using the ISO C++ exception handling for C++ code for portability reasons. C++ exception handling, on the other hand, is done in the C++ compiler itself for the most part; it may or may not be based on the Windows SEH. If it is not, you can't really mix and match them easily. Specifically, this page shows that /EHsc will actually cause problems for the destruction of local objects when an SEH exception is triggered, and that you should probably use /EHa instead to guarantee that local C++ objects are properly deallocated during stack unwinding of the SEH exception. This page shows that you may have to do actual SEH translation in order to get more context from an SEH exception in a C++ exception handler. In general it seems that while C++ can catch SEH exceptions (and SEH translation can be used to bring more detailed information about specific SEH exceptions into your C++ exception), the opposite is not true. So if LabVIEW uses SEH around the Call Library Node, which I believe it does, it will not really see (and catch) any C++ exceptions your code throws. It also can't rely on the external code using a specific C++ exception model, since that code may have been compiled with different compilers, including standard C compilers which don't support C++ exception handling at all. It may even be useful to add __declspec(nothrow) to the declaration of your exported DLL functions to indicate that they do not throw exceptions to the calling code. But I'm not really sure if that makes a difference for the code generation of the function itself. It seems to be mostly for the code generation of callers, who can then optimize the calling code to account for the fact that this function will never throw any exceptions. But maybe it will even cause the compiler to generate warnings if it can determine that your code could indeed cause termination by uncaught exceptions in this function. If your code is a CPP module (or C, but set in the compiler options to compile as C++) the /EH options will emit code that enables unwinding the stack, and when you use the /EHa option, also causes object destruction for SEH exceptions. However, if your code isn't really C++ this indeed probably won't make any difference. C structures and variables don't have destructors, so unwinding the stack won't destruct anything on the way except try to adjust the stack properly as it walks through the various stack frames. As to what to catch, I would tend to only really catch what my code can generate, including possible library functions used, and leave SEH alone unless I know exactly what I'm doing in a specific place. Generally, catching the low-level SEH exceptions is anyhow a rather complicated issue. Catching things like illegal address access, division by zero, and similar for more than a "Sorry, you are hosed" dialog is a rather complicated endeavour. Continuing from there as if nothing has happened is generally not a good idea, and trying to fix after the fact whatever has caused it is most of the time not really possible.
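As a minimal sketch of the "only catch what your own code can generate" approach, an exported DLL function could look something like this; ComputeSomething() and the error codes are purely hypothetical:

    #include <exception>
    #include <cstdint>

    // hypothetical internal routine that may throw C++ exceptions
    static double ComputeSomething() { return 42.0; }

    extern "C" __declspec(dllexport) __declspec(nothrow)
    int32_t MyDllFunction(double *result)
    {
        try {
            *result = ComputeSomething();
            return 0;             // success
        }
        catch (const std::exception &) {
            return -1;            // known C++ exception from our own code or libraries
        }
        catch (...) {
            return -2;            // anything else; with /EHa this also covers SEH exceptions
        }
    }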
  11. It's not undefined. These exceptions are really caused by hardware interrupts and translated by Windows into its own exception handling. An application can hook into that exception handling by calling Windows API functions. If it doesn't, you get the well known "Your application has caused a General Protection Fault" error or a similar dialog with the option to abort your application or kill it (but not to continue). Whether your C++ exceptions are caught by such an application hook depends entirely on whether your C runtime library actually goes to the extra effort of making its exceptions play nice with the OS exception mechanism. And no, I wouldn't know if they do, nor which ones might or might not do that.
  12. Well, this is mostly guessing, but almost all functions documented in the extcode.h file existed long before LabVIEW was compiled as C++ code. Even then much of it remained C code that just got compiled with the C++ compiler. So it is highly unlikely that any of these low-level managers throw any explicit exceptions of their own. That still leaves of course the OS exceptions that are mostly generated directly from the CPU and VMM hardware as interrupts. Windows has its own exception mechanism that predates C++ exceptions by many years. Its implementation requires assembly, and the Windows API for it is not used explicitly by many applications because it is somewhat difficult to handle right. Supposedly your C runtime library with exception handling would intercept those exceptions and integrate them into its own exception handling. How well that really works I wouldn't know. Now, the exception handling in the C runtime is compiler specific (and patent encumbered), so each C runtime implements its own exception handling architecture that is anything but binary compatible. Therefore, if you mix and match different binary object files together, you are pretty lucky if your exceptions don't just crash when crossing those boundaries. I'm not sure what exactly LabVIEW does around the Call Library Node. Because of the binary incompatibilities between exception handling implementations and the fact that a C-only interface can't even properly use C++ exceptions in a meaningful way, I'm pretty sure LabVIEW doesn't just add a standard try/catch around the Call Library Node call. That would go completely havoc in most cases. What LabVIEW can do, however, is hook into the Windows exception mechanism. This interface is standardized and therefore doesn't suffer from these compiler difficulties. How much of your C++ exceptions can get caught like this depends entirely on how your C++ runtime library is able to interact with the Windows exception interface. If it can translate its exceptions from and to this interface whenever it traverses from the Windows API to the C++ runtime, and back when leaving the code module (your DLL), then it will work. Otherwise you get all kinds of messed up behaviour. Of course a C++ exception library that couldn't translate those low-level OS exceptions into its own exceptions would be pretty useless, so that is likely covered. Where it will get shaky is with explicit C++ exceptions that are thrown in your code. How they translate back into the standard Windows exception mechanism I have no idea. If they do, it's a marvelous piece of code for sure, that I would not want to touch for any money in the world. If they don't, well....!!! C++ exceptions are great to use but become a complete fiasco if you need to write code that spans object modules created with different C compilers or even just different versions. C++ code in general suffers greatly from this, as ABI specifications including class memory layouts are also compiler specific.
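To illustrate what that Windows-level (SEH) side looks like, a minimal MSVC-specific sketch; the function names are purely illustrative, and this is not claiming to be how LabVIEW implements its own guard:

    #include <windows.h>
    #include <stdio.h>

    static void Crash(void)
    {
        volatile int *p = NULL;
        *p = 1;                       /* triggers EXCEPTION_ACCESS_VIOLATION */
    }

    /* Calls fn and converts any SEH exception into a return code. */
    static DWORD GuardedCall(void (*fn)(void))
    {
        __try {
            fn();
            return 0;
        }
        __except (EXCEPTION_EXECUTE_HANDLER) {
            return GetExceptionCode();
        }
    }

    int main(void)
    {
        printf("exception code: 0x%08lX\n", GuardedCall(Crash));
        return 0;
    }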
  13. Who would throw them then? When LabVIEW calls your function, the actual thread is really blocked for your function and there will be nothing else executing in that thread until you return from your function. So I'm not sure what you mean by this. Exception handling is AFAIK thread specific, so other threads in LabVIEW throwing exceptions should not affect your code. Otherwise exception handling would be pretty meaningless in a multithreaded application.
  14. Several remarks first: 1) You should put lv_prolog.h and lv_epilog.h includes around the error structure definition to make sure the element alignment is correct. 2) You don't show the definition of the WrpExcUser exception class, but if it derives from some other exception class it will catch those too. 3) Your attempt to generalize the code for catching the exception through a function pointer, so you can reuse it in multiple functions, is in principle not bad, but you lose the ability to call functions which take parameters. Not really very interesting for a bigger library. I suppose that is why you made it a template, so you can replace the function pointer with specific definitions for each function, but that tends to get heavy and pretty hard to maintain too (see the sketch below). I'm not sure what your question about default error checking is supposed to mean. As far as external code goes, you as the implementor define what the error checking is and how it should be performed. It's a pipe dream to have template error checking work the same way in all places; reality simply doesn't work that way. Sometimes an error is fatal, sometimes it is temporary and sometimes it is even expected. Your code has to account for this on a case by case basis. As far as calling code from LabVIEW goes, unless you disable the error handling level in the Call Library Node configuration, LabVIEW will wrap the call into an exception handler of its own and return a corresponding error in the error cluster of the Call Library Node. The reported error is not very detailed, as LabVIEW has to use the most generic exception class there is in order to catch all possible exceptions, but it is at least something. So generally, if you don't want to do custom error handling in the external code, you could leave it all to LabVIEW.
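A minimal sketch of one way to keep such a template wrapper generic without losing the parameters; the names are mine, and the mapping of exceptions to LabVIEW error codes is purely illustrative:

    #include <exception>
    #include <utility>
    #include "extcode.h"          // MgErr, mgNoErr, mgArgErr from the cintools headers

    // Calls any function/lambda with its arguments and maps exceptions to a MgErr.
    template <typename Fn, typename... Args>
    MgErr CallAndCatch(Fn &&fn, Args&&... args)
    {
        try {
            std::forward<Fn>(fn)(std::forward<Args>(args)...);
            return mgNoErr;
        }
        catch (const std::exception &) {
            return mgArgErr;      // illustrative; a real wrapper would map to custom codes
        }
        catch (...) {
            return mgArgErr;
        }
    }

    // Usage inside an exported function (InternalDoWork is hypothetical):
    // extern "C" __declspec(dllexport) MgErr DoWork(int32 x)
    // {
    //     return CallAndCatch(InternalDoWork, x);
    // }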
  15. Well, generally if your DLL uses global variables, one of the easier ways to guarantee that it is safe to call this DLL from LabVIEW more than once is to set all the Call Library Nodes calling any function that reads or writes this global to run in the UI thread. However, in this case, since the callback is also called from the internal thread, that is not enough to make it strictly safe. The callback, after all, only makes sense when it is called from another context than your LabVIEW diagram. Even in this trivial example, which isn't really meant to show a real use case but just how to use the PostLVUserEvent() function, the callback is called from a new thread inside the DLL, and can therefore access the global variable at the same time as your LabVIEW diagram. Now, these are all general rules and the reality is a bit more complicated. In this case, without some alignment pragmas that put the global variables on an unaligned address, each read of the two global variables inside the callback is really atomic on any modern system. Even if your LabVIEW code calls the initialize function at exactly the same time, the read in the callback will either read the old value or the new one, but never a mix of them. So with careful safeguarding of the order of execution, and copying the global into a local variable inside the callback first before checking it to be valid (non-null) and using it, it is maybe not truly thread safe but safe enough in real world use (see the sketch below). The same goes for the b_ThreadState variable, which is actually used here as protection and, being a single byte, is even fully thread safe for a single read. Still, calling ResetLabVIEWInterrupt and SetLabVIEWInterrupt in a non-sequential way (no strict data dependency) without setting the Call Library Nodes to the UI thread could cause nasty race conditions. So you could either document that these functions can't ever be called in parallel, to avoid undefined behaviour, or simply protect them by setting them to run in the UI thread. The second is definitely safer, as some potential LabVIEW users may not even understand what parallel execution means. The original 8051 was special in that it had only 128 bytes of internal RAM and the lowest bank of it was reserved for the stack. The stack there also grows upwards, while most CPU architectures have a stack that grows downwards. Modern 8051 designs allow for 64 KB of RAM or more, and the stack simply is in the lowest area of that RAM but not really in a different sort of memory than the rest of the heap. As to PUSH and POP, those are still the low-level assembly instructions used on most CPUs nowadays. Compiled C code still contains them to push the parameters onto the stack and pull (pop) them from it inside the function.
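A minimal sketch of making that protection explicit in the DLL itself, so it no longer depends on the UI thread setting; the function names mirror the example being discussed, everything else is illustrative:

    #include <windows.h>
    #include "extcode.h"

    static CRITICAL_SECTION gLock;           // initialize once, e.g. in DllMain
    static LVUserEventRef   gEventRef = 0;   // the global shared with the callback

    extern "C" __declspec(dllexport) void SetLabVIEWInterrupt(LVUserEventRef ref)
    {
        EnterCriticalSection(&gLock);
        gEventRef = ref;
        LeaveCriticalSection(&gLock);
    }

    extern "C" __declspec(dllexport) void ResetLabVIEWInterrupt(void)
    {
        EnterCriticalSection(&gLock);
        gEventRef = 0;
        LeaveCriticalSection(&gLock);
    }

    /* Called from the DLL's internal thread. */
    static void InternalCallback(int32 value)
    {
        EnterCriticalSection(&gLock);
        LVUserEventRef ref = gEventRef;      // copy under the lock, use the copy afterwards
        LeaveCriticalSection(&gLock);
        if (ref)
            PostLVUserEvent(ref, &value);
    }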
  16. Yes, local variables are generally placed on the stack by the C compiler. I say generally, since there exist CPU architectures which handle that differently, but they do not really have any significance outside of very specialized embedded architectures. However, they are not "posted" on the stack (and in my opinion even "allocated" feels wrong, as I associate that with an explicit malloc or similar call), though in a broader sense I suppose allocated is a sensible term here. The PostLVUserEvent() function then "posts" the data to the LabVIEW event queue associated with the event structure that registered for the user event. And yes, the stack is typically not explicitly put in the cache, although it certainly could and probably does end up there, but that is not of concern to you; it is very much of concern to the CPU designer, who has to devise all sorts of tricks and protections to make sure everything stays coherent anyway. The stack is usually a reserved area of memory that for most processor architectures starts at a high address and grows downwards until it meets a stack limit or the normally managed heap memory, which is when you get the stack overflow error.
  17. Well, this error is usually because of setting the compiler default alignment to 1! The 64-bit platforms, and non-x86 in general, don't use a special compiler alignment at all. So you have to be careful about adding this compiler option to your project. Basically, never use unconditional pragma packs in your code if you ever intend to port the code to any other LabVIEW platform, and yes, x64 is technically an entirely different platform from x86, even though NI doesn't treat it differently from a licensing point of view. The proper way to do this is actually to replace the two pragmas with the includes for lv_prolog.h and lv_epilog.h. They contain the proper magic to make the compiler behave. Now of course you should apply those includes ONLY around structures that will somehow be interfacing to the LabVIEW datatype system (see the sketch after this post). For other structures and elements you'll have to decide for yourself what alignment is the right one, and whether you want to put that into the source code or rather use a globally set alignment through a compiler option. Personally I think that if you need a specific alignment for some code it belongs into that code and only there, and anything else should remain with default alignment. Changing the default alignment only makes sense if you control more or less the entire platform and the target hardware has a significant performance penalty with the default alignment. But most often you have to deal with constraints put on you by the software components you interface with. There the alignment should be locally changed where necessary and otherwise left alone at the default. Why does LabVIEW use 1-byte alignment on x86? Well, when LabVIEW was ported from the Mac to Windows 3.1, computers with 4MB of physical memory were considered state-of-the-art machines. 8MB was seen as high end. 16MB wasn't even possible on most because of BIOS, chipset and board design limitations. There, a default alignment of 8 bytes could waste a lot of memory on a platform that used predominantly 32-bit entities, with the exception of double precision floating point, which wasn't that special in LabVIEW as it was an engineering tool and often used for floating point calculations. Yes Jack, it is all soooooooo 1900, but that is the century when LabVIEW was developed. And something like byte alignment can't be changed later on a whim without rendering almost every DLL interface that has been developed until then incompatible. The problem will however soon solve itself with the obsolescence of the x86 platform in general, and in LabVIEW especially. Your other remarks do sound more angry than jaded, Jack! Yes, I also feel the pain from extcode.h, which is in some ways a bit dated and hasn't really seen much development in the last 20 years. PostLVUserEvent() was one of the few additions in that timeframe and it wasn't the greatest design for sure. Incidentally, NI doesn't really use it themselves; they rather use the undocumented OM (Object Manager) API, which also supports event posting (and custom refnums like DAQmx, IMAQdx, etc.) but uses an API that is basically impossible to use without further detailed insight into the LabVIEW source code, despite a documentation leak in the 8.0 cintools headers for some of it. And the fact that you can't tell PostLVUserEvent() to take ownership of the data is certainly a flaw; however, if you use it for posting large amounts of data to the user event loop, you certainly have a much bigger problem in your software architecture.
It's easy to do that, I know, but it is absolutely not a clean design to send lots of data through such channels. The event handling should be able to proceed quickly and cleanly and should not do large data handling at all. It's much better to limit the event to enough data to allow the receiver to identify the data in some way and retrieve it from your DLL directly when the final data handler sees fit, rather than forcing the whole data handling into the event itself. That is not only because of the limits of PostLVUserEvent() but generally a better design than coupling event handling and data processing tightly together, even if PostLVUserEvent() had an explicitly synchronous sibling (which could only work with callback VIs and a new Filter User Event). Personally I think the fact that user events work with LabVIEW callback VIs is not so much an intended design feature as a somewhat unintentional side effect of adding ActiveX event support to the user event infrastructure. Or was that even before the User Event structure? Also, your observation that you can't throttle the sender through the receiver is valid, but that should again be solved outside of the event handling. Once you let the event delegate the data handling to some specific actor or action engine, or whatever, the retrieval of the data through this entity gives you the possibility to implement whatever data throttling you would want on the sender side. Yes I know, queues in LabVIEW are a lot easier to use than in C code, but I have to admit that it would be a pretty involved exercise to come up with a LabVIEW C API that allows addressing all the caveats you mentioned about PostLVUserEvent() and would still be usable without a major in computer science. And with such a degree, doing that yourself in your DLL is not that hard an exercise anymore, and it allows the generic LabVIEW interface to stay simple. That would be a really bad PRNG. Relying on random data in memory is anything but truly random.
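For the alignment point above, a minimal sketch of the lv_prolog.h/lv_epilog.h pattern from the cintools headers; the struct shown simply mirrors a LabVIEW error cluster for illustration:

    #include "extcode.h"

    /* lv_prolog.h sets the packing LabVIEW expects on the current platform,
       lv_epilog.h restores the compiler default afterwards. */
    #include "lv_prolog.h"

    typedef struct {
        LVBoolean  status;   /* error cluster: status  */
        int32      code;     /* error cluster: code    */
        LStrHandle source;   /* error cluster: source  */
    } ErrorCluster;

    #include "lv_epilog.h"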
  18. Am I missing something here? In your example you allocate a handle each time you send an event but never deallocate it. And right below that you say that it's not necessary to copy the original data since PostLVUserEvent() will create its own copy of the data!
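Under that assumption (PostLVUserEvent() making its own copy), a minimal sketch of posting a string user event without leaking the handle; the function name is illustrative:

    #include <string.h>
    #include "extcode.h"

    /* Posts 'text' to a user event of type string; the caller owns 'ref'. */
    extern "C" __declspec(dllexport) MgErr SendTextEvent(LVUserEventRef ref, const char *text)
    {
        int32 len = (int32)strlen(text);
        LStrHandle h = (LStrHandle)DSNewHandle(sizeof(int32) + len);
        if (!h)
            return mFullErr;
        memcpy(LStrBuf(*h), text, len);
        LStrLen(*h) = len;
        MgErr err = PostLVUserEvent(ref, &h);   /* LabVIEW copies the data it needs */
        DSDisposeHandle((UHandle)h);            /* so our own handle must be released here */
        return err;
    }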
  19. There isn't a really compelling reason not to use static import libraries, other than that you won't be able to load the wrapper DLL without the secondary DLL also being available in a Windows-searchable location. With LoadLibrary() you can implement your own runtime failure handling when the DLL can't be found (or add custom directories to attempt to load your secondary DLL from), while with the static import library LabVIEW will simply bark at you that it could not load the wrapper DLL, even though you can clearly see it on disk at the expected location.
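A minimal sketch of that LoadLibrary()/GetProcAddress() approach inside the wrapper DLL; the path handling, function name and signature are of course illustrative:

    #include <windows.h>

    typedef int (WINAPI *SecondaryFunc)(int);

    static HMODULE       gSecondary = NULL;
    static SecondaryFunc gFunc      = NULL;

    /* Returns 0 on success, a Windows error code otherwise, so the wrapper DLL
       itself always loads and the failure can be reported through the error cluster. */
    extern "C" __declspec(dllexport) int InitSecondary(const char *path)
    {
        gSecondary = LoadLibraryA(path);
        if (!gSecondary)
            return (int)GetLastError();
        gFunc = (SecondaryFunc)GetProcAddress(gSecondary, "SomeFunction");
        if (!gFunc) {
            int err = (int)GetLastError();
            FreeLibrary(gSecondary);
            gSecondary = NULL;
            return err;
        }
        return 0;
    }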
  20. I've used VPN routers in other setups, though not with NI Linux RT based cRIOs. However, unless it is for your own hobby use, I would not recommend relying on the sometimes built-in VPN capabilities of standard home ADSL or similar routers. They are pretty weak, often have various vulnerabilities, and there is seldom even the possibility to upgrade the firmware in a reasonable way. A Cisco or similar router may be more expensive, but if you are talking about business data then this is the wrong place to save a few bucks. Also, setting up VPN tunnels between routers from different manufacturers is generally not a trivial exercise. There are different standards for key exchange and various protocols that not all manufacturers support in all ways, which can make it a daunting exercise to find a method that works with both sides and can still be considered secure.
  21. Even on an NI Linux RT cRIO device I would probably offload the VPN handling to an embedded router that sits between the outside network and the local cRIO network. And of course use a VPN server on the remote side of the customer's network and not a cloud-based one. With your example of an oil rig, I would suppose they already use VPN-protected links between offshore and land side, if they have any network connection at all.
  22. Doo, if the function prototype really looks like: int Send(tyLinkID iLinkID, tyMessage message); then the message struct is indeed passed by value and the function prototype would effectively be mostly equivalent to int Send(tyLinkID iLinkID, tyMessageType nMsgType, DWORD nAddress, BYTE *cType, WORD nTypeLength, BYTE *cData, WORD nDataLength); For this particular type it should be OK to configure the CLN like this, but there is undefined behaviour for other possible data structs where the element alignment would cause the elements to be aligned on other than their natural address boundary. For instance, with a struct like this: typedef struct { char byte1; char byte2; char byte3; char *string; }; it's not certain whether the C compiler would pack the first 3 bytes into a single function parameter or not, making the binary function interface potentially different between different C compilers! As to UHandle reference counting: as far as I'm aware, LabVIEW does not do reference counting on handles. The reason being that simple reference counting isn't really enough for LabVIEW optimizations; there also needs to be some way of marking whether a handle is stomped on (modified). Doing this kind of optimization during compilation (LabVIEW diagram to DFIR and DFIR optimization) delivers much better results than trying to optimize handle reuse at runtime. Generally, LabVIEW C functions do not take ownership of a handle passed into them unless specifically documented to do so. From anything I can see, PostLVUserEvent() is no exception to this. LabVIEW simply creates a deep copy of the entire data structure.
  23. Don't! Add a wrapper to your DLL that accepts the parameters as LabVIEW strings/byte buffers and build the struct in that wrapper. cType and cData are pointers. If you try to call that directly from LabVIEW you end up with two different clusters for 32-bit and 64-bit LabVIEW if you ever want to go that route, have to bother about struct element alignment yourself, and need to do C pointer voodoo on your LabVIEW diagram. In C this is a lot more straightforward and you can let the compiler worry about most of those issues (see the sketch below). Jack certainly raised a few interesting points, most of which I simply assumed as given already, considering the question. The one nasty point could be if the caller of the callback function expects a meaningful return value from the callback, which Jack described as requiring a synchronous callback. PostLVUserEvent() will post the message into the user event queue and then return without waiting for the event to be handled. If your DLL expects the callback to perform some specific action, such as clearing an error condition or whatever, you either have to build that into your callback before you return, making it really asynchronous to the actual handling of the event, or you need to add extra synchronization between your callback and the actual handling of your user event in a LabVIEW event structure.
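A minimal sketch of such a wrapper, assuming the tyLinkID/tyMessage/tyMessageType definitions come from the library's own header; the field names are the ones listed above, everything else (including the header name) is illustrative:

    #include <windows.h>
    #include "extcode.h"
    #include "TheLibrary.h"       /* hypothetical header defining tyLinkID, tyMessage, Send() */

    /* LabVIEW passes the type and data buffers as string handles; the wrapper
       builds the struct so the diagram never has to deal with C pointers. */
    extern "C" __declspec(dllexport) int SendWrapper(tyLinkID iLinkID, tyMessageType nMsgType,
        DWORD nAddress, LStrHandle type, LStrHandle data)
    {
        tyMessage msg;
        msg.nMsgType    = nMsgType;
        msg.nAddress    = nAddress;
        msg.cType       = (BYTE *)LStrBuf(*type);
        msg.nTypeLength = (WORD)LStrLen(*type);
        msg.cData       = (BYTE *)LStrBuf(*data);
        msg.nDataLength = (WORD)LStrLen(*data);
        return Send(iLinkID, msg);
    }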