Everything posted by Rolf Kalbermatter
-
Not really like this! My code generally uses a header of a fixed size with more than just a size value, so there is some context that can be verified before interpreting the size value. The header usually includes a protocol identifier, version number and a message identifier before specifying the size of the actual message. If the header doesn't evaluate to a valid message, the connection is closed and, in the client case, restarted; the server simply waits for a reconnection from the client. Of course if you maliciously craft a valid header specifying your ridiculous length value it may still go wrong, but if you execute your client code on your own machine you will probably run into trouble before it hits the TCP Send node. I usually don't go through the trouble of trying to guess whether a length value is plausible after the header has been determined to be valid. I might consider that in the future, based on the message identifier, but if you have figured out my protocol you may as well find a way to cause a DOS attack anyway. Not all message types can be made fixed size, and imposing an arbitrary limit on such messages may look good today but bite you in the ass tomorrow. And yes, I have used whitelisting on an SMS server implementation in the past. Not really funny if anyone in the world could send SMS messages through your server when you have to pay for each message.
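Just to illustrate the kind of fixed-size header I mean, here is a minimal sketch in C; the field names, sizes and magic value are purely illustrative and not my actual protocol:

#include <stdint.h>

typedef struct {            /* 12 bytes, no padding with these field sizes */
    uint32_t protocolId;    /* fixed magic value identifying the protocol */
    uint16_t version;
    uint16_t messageId;
    uint32_t payloadSize;   /* only interpreted after the fields above check out */
} MessageHeader;

/* returns non-zero when the header looks like one of ours */
int ValidateHeader(const MessageHeader *hdr)
{
    return hdr->protocolId == 0x4C564D53UL  /* example magic only */
        && hdr->version <= 2
        && hdr->messageId < 100;
}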
-
They may not be meant to leave your C function, but your question was about whether you should catch them or any others or not. As to Structured Exception Handling, that is indeed the official term for the Windows way, although I have seen it used for other forms of exception handling. The Windows structured exception handling is part of the Windows kernel and can be used from both C and C++, but Microsoft nowadays recommends using ISO C++ exception handling for C++ code for portability reasons. C++ exceptions, on the other hand, are implemented for the most part in the C++ compiler itself; they may or may not be based on the Windows SEH, and if they are not, you can't really mix and match the two easily. Specifically, this page shows that /EHsc will actually cause problems for the destruction of local objects when an SEH exception is triggered, and that you should probably use /EHa instead to guarantee that local C++ objects are properly deallocated during stack unwinding of the SEH exception. This page shows that you may have to do actual SEH translation in order to get more context from an SEH exception in a C++ exception handler. In general it seems that while C++ can catch SEH exceptions (and SEH translation can be used to carry more detailed information about specific SEH exceptions into your C++ exception), the opposite is not true. So if LabVIEW uses SEH around the Call Library Node, which I believe it does, it will not really see (and catch) any C++ exceptions your code throws. It also can't rely on the external code using a specific C++ exception model, since that code may have been compiled with different compilers, including plain C compilers which don't support C++ exception handling at all. It may even be useful to add __declspec(nothrow) to the declaration of your exported DLL functions to indicate that they do not throw exceptions to the calling code. But I'm not really sure if that makes a difference for the code generation of the function itself. It seems to be mostly for the code generation of callers, which can then optimize the calling code to account for the fact that this function will never throw any exceptions. But maybe it will even cause the compiler to generate warnings if it can determine that your code could indeed cause termination by uncaught exceptions in this function. If your code is a C++ module (or C but set in the compiler options to compile as C++), the EH options will emit code that enables unwinding the stack, and with the /EHa option also cause object destruction for SEH exceptions. However if your code isn't really C++ this indeed probably won't make any difference. C structures and variables don't have destructors, so unwinding the stack won't destruct anything on the way except trying to adjust the stack properly as it walks through the various stack frames. As to what to catch, I would tend to only catch what my code can generate, including possible library functions used, and leave SEH alone unless I know exactly what I'm doing in a specific place. Generally, catching the low level SEH exceptions is a rather complicated issue anyhow. Catching things like illegal address access, division by zero, and similar for more than a "Sorry, you are hosed" dialog is a rather complicated endeavour. Continuing from there as if nothing has happened is generally not a good idea, and trying to fix after the fact whatever has caused this is most of the time not really possible.
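A minimal sketch of the /EHsc versus /EHa point (assuming MSVC; the null write is deliberately invalid and only there to provoke an SEH access violation):

#include <cstdio>

struct Guard {
    ~Guard() { std::printf("Guard destroyed\n"); }  /* only guaranteed to run during SEH unwinding with /EHa */
};

int main()
{
    try {
        Guard g;
        int *p = nullptr;
        *p = 42;            /* hardware access violation, surfaces as an SEH exception */
    }
    catch (...) {           /* with /EHa this also catches the SEH exception; with /EHsc it typically does not */
        std::printf("caught the structured exception in a C++ handler\n");
    }
    return 0;
}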
-
It's not undefined. These exceptions are really caused by hardware interrupts and translated by Windows into its own exception handling. An application can hook into that exception handling by calling Windows API functions. If it doesn't, you get the well known "Your application has caused a General Protection Fault" error or similar dialog, with the option to abort or kill your application (but not to continue). Whether your C++ exceptions are caught by such an application hook depends entirely on whether your C runtime library actually goes to the extra effort of making its exceptions play nice with the OS exception mechanism. And no, I wouldn't know if they do, or which ones might or might not do that.
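For reference, this is roughly what such an application hook looks like with the documented Windows API (a sketch only; a real handler would do little more than log and bail out):

#include <windows.h>
#include <stdio.h>

static LONG WINAPI MyExceptionFilter(EXCEPTION_POINTERS *info)
{
    printf("Unhandled exception 0x%08lX at address %p\n",
           (unsigned long)info->ExceptionRecord->ExceptionCode,
           info->ExceptionRecord->ExceptionAddress);
    return EXCEPTION_EXECUTE_HANDLER;   /* terminate the process after reporting */
}

int main(void)
{
    SetUnhandledExceptionFilter(MyExceptionFilter);
    /* any later access violation, division by zero, etc. that nobody else
       handles now ends up in MyExceptionFilter instead of the system dialog */
    return 0;
}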
-
Well, this is mostly guessing, but almost all functions documented in the extcode.h file existed long before LabVIEW was compiled as C++ code. Even then much of it remained C code that just got compiled with the C++ compiler. So it is highly unlikely that any of these low level managers throw any explicit exceptions of their own. That still leaves of course the OS exceptions that are mostly generated directly from the CPU and VMM hardware as interrupts. Windows has its own exception mechanism that predates the C++ exceptions by many years. The implementation of it requires assembly, and the Windows API for it is not used by many applications explicitly because it is somewhat difficult to handle right. Supposedly your C runtime library with exception handling intercepts those exceptions and integrates them into its own exception handling; how well that really works I wouldn't know. Now, the exception handling in the C runtime is compiler specific (and patent encumbered), so each C runtime implements its own exception handling architecture that is anything but binary compatible. Therefore, if you mix and match different binary object files together, you are pretty lucky if your exceptions don't just crash when crossing those boundaries. I'm not sure what LabVIEW does around the Call Library Node. Because of the binary incompatibilities between exception handling implementations, and the fact that a C-only interface can't even properly use C++ exceptions in a meaningful way, I'm pretty sure LabVIEW doesn't just add a standard try/catch around the Call Library Node call. That would go completely haywire in most cases. What LabVIEW can do, however, is hook into the Windows exception mechanism. This interface is standardized and therefore doesn't suffer from these compiler difficulties. How much of your C++ exceptions can get caught like this depends entirely on how your C++ runtime library interacts with the Windows exception interface. If it can translate its exceptions from and to this interface whenever it traverses from the Windows API to the C++ runtime, and back from that when leaving the code module (your DLL), then it will work. Otherwise you get all kinds of messed up behaviour. Of course a C++ exception library that couldn't translate those low level OS exceptions into its own exceptions would be pretty useless, so that is likely covered. Where it gets shaky is with explicit C++ exceptions that are thrown in your code. How they translate back into the standard Windows exception mechanism I have no idea. If they do, it's a marvelous piece of code for sure, one that I would not want to touch for any money in the world. If they don't, well....!!! C++ exceptions are great to use but become a complete fiasco if you need to write code that spans object modules created with different C compilers or even just different versions. C++ code in general suffers greatly from this, as ABI specifications including class memory layouts are also compiler specific.
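The practical consequence for DLL authors is the boundary pattern below: never let an exception of any kind cross the exported function, but translate it into a plain return code right there (a sketch; DoRealWork() and the error codes are placeholders):

#include <stdexcept>

static double DoRealWork(double input)
{
    if (input < 0.0)
        throw std::invalid_argument("negative input");  /* stands in for whatever the real code may throw */
    return input * 2.0;
}

extern "C" __declspec(dllexport) int MyExportedFunction(double input, double *output)
{
    try {
        *output = DoRealWork(input);
        return 0;                   /* success */
    }
    catch (const std::exception &) {
        return -1;                  /* known C++ exception mapped to an error code */
    }
    catch (...) {
        return -2;                  /* anything else: still never let it escape the DLL */
    }
}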
-
Who would throw them then? When LabVIEW calls your function, the actual thread is really blocked for your function, and nothing else will execute in that thread until you return from your function. So I'm not sure what you mean by this. Exception handling is AFAIK thread specific, so other threads in LabVIEW throwing exceptions should not affect your code. Otherwise exception handling would be pretty meaningless in a multithreaded application.
-
Several remarks first:
1) You should put the lv_prolog.h and lv_epilog.h includes around the error structure definition to make sure the element alignment is correct.
2) You don't show the definition of the WrpExcUser exception class, but if it derives from some other exception class it will catch those too.
3) Your attempt to generalize the exception-catching code through a function pointer, so you can reuse it in multiple functions, is in principle not bad, but you lose the ability to call functions that take parameters. That's not very interesting for a bigger library. I suppose that is why you made it a template, so you can replace the function pointer with specific definitions for each function, but that tends to get heavy and pretty hard to maintain too.
I'm not sure what your question about default error checking is supposed to mean. As far as external code goes, you as the implementer define what the error checking is and how it should be performed. It's a pipe dream to have template error checking work the same way in all places; reality simply doesn't work that way. Sometimes an error is fatal, sometimes it is temporary, and sometimes it is even expected. Your code has to account for this on a case by case basis. As far as calling code from LabVIEW goes, unless you disable the error handling level in the Call Library Node configuration, LabVIEW will wrap the call into an exception handler of its own and return an according error in the error cluster of the Call Library Node. The reported error is not very detailed, as LabVIEW has to use the most generic exception class there is in order to catch all possible exceptions, but it is at least something. So generally, if you don't want to do custom error handling in the external code, you could leave it all to LabVIEW.
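A minimal sketch of doing that translation yourself at the DLL boundary instead; the error record layout is an assumption about matching a LabVIEW error cluster (hence the lv_prolog.h/lv_epilog.h includes), and MyOperation() is just an example export:

#include <stdexcept>
#include "extcode.h"

#include "lv_prolog.h"
typedef struct {
    LVBoolean status;       /* TRUE on error */
    int32 code;
    LStrHandle source;      /* left empty in this sketch */
} LVErrorCluster;
#include "lv_epilog.h"

static void FillError(LVErrorCluster *err, int32 code)
{
    if (err) { err->status = LVBooleanTrue; err->code = code; }
}

extern "C" __declspec(dllexport) void MyOperation(double input, double *output, LVErrorCluster *err)
{
    try {
        if (input < 0.0)
            throw std::invalid_argument("negative input");
        *output = input * input;
    }
    catch (const std::exception &) { FillError(err, 5000); }   /* user defined error code range */
    catch (...)                    { FillError(err, 5001); }
}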
-
Well, generally, if your DLL uses global variables, one of the easier ways to guarantee that it is safe to call this DLL from LabVIEW more than once is to set all Call Library Nodes calling any function that reads or writes this global to run in the UI thread. However, in this case, since the callback is also called from the internal thread, that is not enough to make it strictly safe. The callback after all only makes sense when it is called from another context than your LabVIEW diagram. Even in this trivial example, which isn't really meant to show a real use case but just how to use the PostLVUserEvent() function, the callback is called from a new thread inside the DLL and can therefore access the global variable at the same time as your LabVIEW diagram. Now these are all general rules and the reality is a bit more complicated. In this case, without some alignment pragmas that put the global variables on an unaligned address, each read of the two global variables inside the callback is really atomic on any modern system. Even if your LabVIEW code calls the initialize function at exactly the same time, the read in the callback will either see the old value or the new one, but never a mix of them. So with careful safeguarding of the order of execution, and copying the global into a local variable inside the callback first before checking it to be valid (non-null) and using it, it is maybe not truly thread safe but safe enough in real world use. The same goes for the b_ThreadState variable, which is actually used here as protection and, being a single byte, even fully thread safe for a single read. Still, calling ResetLabVIEWInterrupt and SetLabVIEWInterrupt in a non-sequential way (no strict data dependency) without setting the Call Library Nodes to the UI thread could cause nasty race conditions. So you could either document that these functions can never be called in parallel, to avoid undefined behaviour, or simply protect them by setting them to run in the UI thread. The second is definitely safer, as some potential LabVIEW users may not even understand what parallel execution means. The original 8051 was special in that it had only 128 bytes of internal RAM and the lowest bank of it was reserved for the stack. The stack there also grows upwards, while most CPU architectures have a stack that grows downwards. Modern 8051 designs allow 64 kB of RAM or more, and the stack simply sits in the lowest area of that RAM, not really in a different sort of memory than the rest of the heap. As to PUSH and POP, those are still the low level assembly instructions used on most CPUs nowadays. Compiled C code still contains them to push the parameters onto the stack and pull (pop) them from it inside the function.
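A minimal sketch of that copy-into-a-local pattern inside the callback (g_eventRef and InternalCallback are placeholder names, and I assume the user event was registered for a uInt32):

#include "extcode.h"

static volatile LVUserEventRef g_eventRef = 0;  /* set by SetLabVIEWInterrupt, cleared by ResetLabVIEWInterrupt */

static void InternalCallback(uInt32 interruptData)
{
    LVUserEventRef ref = g_eventRef;    /* one read, effectively atomic on an aligned address */
    if (ref)
        PostLVUserEvent(ref, &interruptData);
    /* if the refnum was cleared in the meantime the event is simply dropped */
}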
-
Yes, local variables are generally placed on the stack by the C compiler. I say generally, since there exist CPU architectures which handle that differently, but they have no real significance outside of very specialized embedded architectures. However, they are not "posted" on the stack (and in my opinion even "allocated" feels wrong, as I associate that with an explicit malloc or similar call), although in a broader sense I suppose allocated is a sensible term here. The PostLVUserEvent() function then "posts" the data to the LabVIEW event queue associated with the event structure that registered for the user event. And yes, the stack is typically not explicitly put in the cache, although it certainly could and probably does end up there, but that is not of concern to you; it very much is to the CPU designer, who has to devise all sorts of tricks and protections to make sure everything stays coherent anyway. The stack is usually a reserved area of memory that on most processor architectures starts at a high address and grows downwards until it meets a stack limit or the normally managed heap memory, which is when you get the stack overflow error.
-
Well, this error is usually because of setting the compiler default alignment to 1! The 64-bit platforms, and non-x86 platforms in general, don't use a special compiler alignment at all. So you have to be careful about adding this compiler option to your project. Basically, never use unconditional pragma packs in your code if you ever intend to port the code to any other LabVIEW platform, and yes, x64 is technically an entirely different platform from x86, even though NI doesn't treat it differently from a licensing point of view. The proper way to do this is actually to replace the two pragmas with the includes for lv_prolog.h and lv_epilog.h. They contain the proper magic to make the compiler behave. Now of course you should apply those includes ONLY around structures that will somehow interface to the LabVIEW type system. For other structures and elements you'll have to decide for yourself what alignment is the right one, and whether you want to put that into the source code or rather use a globally set alignment through a compiler option. Personally I think that if you need a specific alignment for some code it belongs in that code and only there, and anything else should remain at default alignment. Changing the default alignment only makes sense if you control more or less the entire platform and the target hardware has a significant performance penalty with the default alignment. But most often you have to deal with constraints put on you by the software components you interface with. There the alignment should be changed locally where necessary and otherwise left alone at the default. Why does LabVIEW use 1 byte alignment on x86? Well, when LabVIEW was ported from the Mac to Windows 3.1, computers with 4 MB of physical memory were considered state of the art machines. 8 MB was seen as high end. 16 MB wasn't even possible on most machines because of BIOS, chipset and board design limitations. There, a default alignment of 8 bytes could waste a lot of memory on a platform that used predominantly 32-bit entities, with the exception of double precision floating point, which wasn't that special in LabVIEW as it was an engineering tool often used for floating point calculations. Yes Jack, it is all soooooooo 1900s, but that is the century when LabVIEW was developed. And something like byte alignment can't be changed later on a whim without rendering almost every DLL interface developed until then incompatible. The problem will however soon solve itself with the obsolescence of the x86 platform in general, and in LabVIEW especially. Your other remarks sound more angry than jaded, Jack! Yes, I also feel the pain from extcode.h, which is in some ways a bit dated and hasn't really seen much development in the last 20 years. PostLVUserEvent() was one of the few additions in that timeframe, and it wasn't the greatest design for sure. Incidentally, NI doesn't really use it themselves; they rather use the undocumented OM (Object Manager) API, which also supports event posting (and custom refnums like DAQmx, IMAQdx, etc.) but uses an API that is basically impossible to use without further detailed insight into the LabVIEW source code, despite a documentation leak in the 8.0 cintools headers for some of them. And the fact that you can't tell PostLVUserEvent() to take ownership of the data is certainly a flaw; however, if you use it for posting large amounts of data to the user event loop, you certainly have a much bigger problem in your software architecture.
It's easy to do that, I know, but it is absolutely not a clean design to send lots of data through such channels. The event handling should be able to proceed quickly and cleanly and should not do large data handling at all. It's much better to limit the event to enough data to allow the receiver to identify the data in some way and retrieve it from your DLL directly when your final data handler sees fit, rather than forcing the whole data handling into the event itself. That is not only because of the limits of PostLVUserEvent() but generally a better design than coupling event handling and data processing tightly together, even if PostLVUserEvent() had an explicitly synchronously working sibling (which could only work with callback VIs and a new Filter User Event). Personally I think the fact that user events work with LabVIEW callback VIs is not so much an intended design feature as a somewhat unintentional side effect of adding ActiveX event support to the user event infrastructure. Or was that even before the User Event structure? Also, your observation that you can't throttle the sender through the receiver is valid, but should again be solved outside of the event handling. Once you let the event delegate the data handling to some specific actor or action engine, or whatever, the retrieval of the data through this entity gives you the possibility to implement whatever data throttling you want on the sender side. Yes I know, queues in LabVIEW are a lot easier to use than in C code, but I have to admit that it would be a pretty involved exercise to come up with a LabVIEW C API that addresses all the caveats you mentioned about PostLVUserEvent() and would still be usable without a major in computer science. And with such a degree, doing that yourself in your DLL is not that hard an exercise anymore, and it allows the generic LabVIEW interface to stay simple. That would be a really bad PRNG. Relying on random data in memory is anything but truly random.
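As a sketch of that decoupled design: the DLL only posts a small identifier, and the actual data is fetched later from the diagram through a separate (here hypothetical) GetBufferData() export, which is also where any throttling on the sender side can happen:

#include "extcode.h"

static LVUserEventRef g_ref;            /* user event registered for a uInt32 */

void NotifyNewBuffer(uInt32 bufferId)
{
    if (g_ref)
        PostLVUserEvent(g_ref, &bufferId);  /* copies the 4 byte id and returns immediately */
}

/* called later from the LabVIEW diagram, when the data handler is ready for the data */
__declspec(dllexport) MgErr GetBufferData(uInt32 bufferId, LStrHandle *data)
{
    /* the real implementation would look up the buffer by id and resize/fill *data;
       that part is omitted because it is not the point of this sketch */
    return mgNoErr;
}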
-
Am I missing something here? In your example you allocate a handle each time you send an event but never deallocate it. And right below that you say that it's not necessary to copy the original data since PostLVUserEvent() will create its own copy of the data!
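In other words, since PostLVUserEvent() copies the event data, a handle allocated just for the call has to be disposed again by the caller or it leaks on every event. A sketch of what I mean (SendStringEvent is just an example name):

#include "extcode.h"

MgErr SendStringEvent(LVUserEventRef ref, const char *text, int32 len)
{
    MgErr err;
    LStrHandle h = (LStrHandle)DSNewHandle(sizeof(int32) + len);
    if (!h)
        return mFullErr;
    MoveBlock((UPtr)text, (UPtr)LStrBuf(*h), len);
    LStrLen(*h) = len;
    err = PostLVUserEvent(ref, &h);     /* LabVIEW copies the string data for the event */
    DSDisposeHandle((UHandle)h);        /* so the temporary handle must be released here */
    return err;
}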
-
There isn't really a compelling reason not to use static import libraries, other than that you won't be able to load the wrapper DLL without the secondary DLL also being available in a Windows-searchable location. With LoadLibrary() you can implement your own runtime failure handling when the DLL can't be found (or add custom directories from which to attempt loading your secondary DLL), while with the static import library LabVIEW will simply bark at you that it could not load the wrapper DLL, despite the fact that you can clearly see it on disk at the expected location.
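A rough sketch of what that looks like in the wrapper DLL (the DLL and function names are placeholders):

#include <windows.h>

typedef int (__cdecl *SecondaryInitFunc)(void);

static HMODULE g_hSecondary = NULL;
static SecondaryInitFunc g_pSecondaryInit = NULL;

int EnsureSecondaryLoaded(void)
{
    if (!g_hSecondary)
    {
        g_hSecondary = LoadLibraryA("secondary.dll");   /* could also try custom directories here */
        if (!g_hSecondary)
            return -1;                                  /* report a clean "DLL not found" to the caller */
        g_pSecondaryInit = (SecondaryInitFunc)GetProcAddress(g_hSecondary, "SecondaryInit");
        if (!g_pSecondaryInit)
        {
            FreeLibrary(g_hSecondary);
            g_hSecondary = NULL;
            return -2;                                  /* report "missing export" */
        }
    }
    return g_pSecondaryInit();
}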
-
I've used VPN routers in other setups, though not with NI Linux RT based cRIOs. However, unless it is for your own hobby use, I would not recommend relying on the built-in VPN capabilities of standard home ADSL or similar routers. They are pretty weak, often have various vulnerabilities, and there is seldom even a possibility to upgrade the firmware in a reasonable way. A Cisco or similar router may be more expensive, but if you are talking about business data then this is the wrong place to save a few bucks. Also, setting up VPN tunnels between routers from different manufacturers is generally not a trivial exercise. There are different standards for key exchange and various protocols that not all manufacturers support in all ways, which can make it a daunting exercise to find a method that works with both sides and can still be considered secure.
-
Even on an NI Linux RT cRIO device I would probably offload the VPN handling to an embedded router that sits between the outside network and the local cRIO network. And of course use a VPN server on the remote side of the customer's network and not a cloud based one. With your example of an oil rig, I would suppose they already use VPN protected links between offshore and land side, if they have any network connection at all.
-
Doo, if the function prototype really looks like this:

int Send(tyLinkID iLinkID, tyMessage message);

then the message struct is indeed passed by value, and the function prototype would effectively be mostly equivalent to:

int Send(tyLinkID iLinkID, tyMessageType nMsgType, DWORD nAddress, BYTE *cType, WORD nTypeLength, BYTE *cData, WORD nDataLength);

For this particular type it should be ok to configure the CLN like this, but the behaviour is undefined for other possible data structs where the element alignment would cause the elements to be aligned on other than their natural address boundary. For instance, with a struct like this:

typedef struct {
    char byte1;
    char byte2;
    char byte3;
    char *string;
} tyExample;

it's not certain whether the C compiler would pack the first 3 bytes into a single function parameter or not, making the binary function interface potentially different between different C compilers! As to UHandle reference counting: as far as I'm aware, LabVIEW does not do reference counting on handles. The reason is that simple reference counting isn't really enough for LabVIEW's optimizations; there also needs to be some way of marking whether a handle is stomped on (modified). Doing this kind of optimization during compilation (LabVIEW diagram to DFIR and DFIR optimization) delivers much better results than trying to optimize handle reuse at runtime. Generally, LabVIEW C functions do not take ownership of a handle passed into them, unless specifically documented to do so. From anything I can see, PostLVUserEvent() is no exception to this. LabVIEW simply creates a deep copy of the entire data structure.
-
Don't! Add a wrapper to your DLL that accepts the parameters as LabVIEW strings/byte buffers and builds the struct in that wrapper. cType and cData are pointers; if you try to call that directly from LabVIEW you end up with two different clusters for 32-bit and 64-bit LabVIEW (if you ever want to go that route), have to bother about struct element alignment yourself, and need to do C pointer voodoo on your LabVIEW diagram. In C this is a lot more straightforward and you can let the compiler worry about most of those issues. Jack certainly raised a few interesting points, most of which I simply assumed as given, considering the question. The one nasty point could be if the caller of the callback function expects a meaningful return value from the callback, which Jack described as requiring a synchronous callback. PostLVUserEvent() will post the message into the user event queue and then return without waiting for the event to be handled. If your DLL expects the callback to perform some specific action, such as clearing an error condition or whatever, you either have to build that into your callback before you return, making it really asynchronous to the actual handling of the event, or you need to add extra synchronization between your callback and the actual handling of your user event in a LabVIEW user event structure.
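Such a wrapper could look roughly like this; tyLinkID, tyMessageType, tyMessage and Send() are assumed to come from the vendor header (called vendor.h here), and LabVIEW just passes the two buffers as byte arrays with explicit lengths:

#include <windows.h>    /* for DWORD, BYTE, WORD */
#include "vendor.h"     /* assumed to declare tyLinkID, tyMessageType, tyMessage and Send() */

__declspec(dllexport) int SendWrapper(tyLinkID iLinkID, tyMessageType nMsgType,
    DWORD nAddress, BYTE *cType, WORD nTypeLength, BYTE *cData, WORD nDataLength)
{
    tyMessage msg;
    msg.nMsgType    = nMsgType;
    msg.nAddress    = nAddress;
    msg.cType       = cType;        /* LabVIEW array data pointers, valid for the duration of the call */
    msg.nTypeLength = nTypeLength;
    msg.cData       = cData;
    msg.nDataLength = nDataLength;
    return Send(iLinkID, msg);      /* struct passed by value, exactly as in the original prototype */
}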
-
Actually, as far as the functions that do not use any callback pointers are concerned, you can also call the original DLL directly. A DLL, unless it does some nasty SxS (side-by-side) loading voodoo, only gets mapped once into every process space, and any module (DLL) inside that process referencing a DLL with the same name will have the function pointers relinked to that single memory space. While Shaun's method of only implementing the callback function in your external DLL will work too, it requires you to do a LoadLibrary() and GetProcAddress() call for that callback function. Personally I would tend to wrap the entire function needing a callback parameter into a second DLL.
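In that second DLL the idea is that the wrapper installs its own C callback and only forwards a notification to a LabVIEW user event, so the diagram never has to provide a function pointer. RegisterDataCallback() and the callback signature are placeholders for whatever the original DLL defines:

#include "extcode.h"
#include "vendor.h"             /* assumed to declare RegisterDataCallback() */

static LVUserEventRef g_dataEvent = 0;

static void __cdecl InternalDataCallback(uInt32 value)
{
    if (g_dataEvent)
        PostLVUserEvent(g_dataEvent, &value);   /* asynchronous notification to the diagram */
}

__declspec(dllexport) int InstallDataCallback(LVUserEventRef *refnum)
{
    g_dataEvent = *refnum;                      /* user event created and registered on the diagram */
    return RegisterDataCallback(InternalDataCallback);  /* original DLL, resolved through the import library */
}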
-
OpenG imaging and machine vision tools?
Rolf Kalbermatter replied to caleyjag's topic in OpenG General Discussions
It's not meant as discouragement, and I would definitely try to help with advice for the more low level difficulties, but I have no time and resources to substantially contribute to and drive such a project myself. If I were to start this seriously, it would have to come out of a project for which we need it, and that would almost automatically mean that it could not be provided as open source without a lot of hassle. -
OpenG imaging and machine vision tools?
Rolf Kalbermatter replied to caleyjag's topic in OpenG General Discussions
It's an interesting idea, but also one which requires a lot of work to get to a point that is even remotely useful. IMAQ Vision isn't just a small collection of C functions gathered together, but a pretty involved function library together with lots of supporting glue to put those functions into LabVIEW in an easy to use way without sacrificing performance in a big way. OpenCV would probably be the library of choice for such a project. I've been looking into it to make a DLL interface that integrates OpenCV into LabVIEW in an easy to use way. But OpenCV also has a legacy, with an old style C interface and a more modern C++ interface, and I have found that not every functionality is equally available in both. That makes a generic interface to LabVIEW more complicated, as you need to provide different datatypes for the different functions. Creating OpenG interfaces to Halcon, Sherlock, Matrox, Keyence, etc. is probably not very interesting for a larger audience. These packages are expensive, with strict license control, and totally out of reach of most non-commercial developers and even many commercial projects. Whatever you decide, it's a serious undertaking to get something like this started and even more work to get something usable up and running. -
Speech To text Conversion(Speech Recognition) Using LabVIEW
Rolf Kalbermatter replied to Roopa's topic in LabVIEW General
What doesn't work for you in the library that is provided in the LVSpeak link mentioned earlier in this thread? -
Directing TCP Open
Rolf Kalbermatter replied to Cat's topic in Remote Control, Monitoring and the Internet
The reason there isn't one is that it is usually not necessary. Binding a client socket to a specific network card address is basically a fix for a problem that should better be resolved in the routing configuration. For generic clients like a web browser you usually can't specify binding to a particular network address either. It requires knowledge from the end user about the network configuration on his computer that you want to avoid whenever possible. If the routing tables are correct and the subnet ranges for each network adapter are configured properly, the packets should be routed to the right network adapter based on the destination address. If you have overlapping subnet ranges and those network cards are not logically connected to the same network, you will of course run into problems where the routing may choose the wrong adapter. The same goes for a too large subnet range set up for a network adapter; it may capture all the packets that should really go to the default adapter. Depending on the network card driver, Windows may have trouble determining that a specific adapter is not connected to any network at all and may still use its configuration when routing packets, making them land in the adapter queue, but because of the disconnected state they will never leave the computer.
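For what it's worth, this is what binding a client socket to a specific adapter means at the Winsock level; LabVIEW simply doesn't expose the bind() step (addresses are placeholders, WSAStartup() is assumed to have been called already, and error handling is reduced to the bare minimum):

#include <winsock2.h>
#include <ws2tcpip.h>

SOCKET ConnectViaAdapter(const char *localIp, const char *remoteIp, unsigned short remotePort)
{
    SOCKET s = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
    struct sockaddr_in local = {0}, remote = {0};

    local.sin_family = AF_INET;
    inet_pton(AF_INET, localIp, &local.sin_addr);       /* address of the chosen adapter, port 0 lets the stack pick */
    bind(s, (struct sockaddr *)&local, sizeof(local));

    remote.sin_family = AF_INET;
    remote.sin_port = htons(remotePort);
    inet_pton(AF_INET, remoteIp, &remote.sin_addr);
    if (connect(s, (struct sockaddr *)&remote, sizeof(remote)) != 0)
    {
        closesocket(s);
        return INVALID_SOCKET;
    }
    return s;
}
-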
I would guess a configuration issue in the network stack. The LabVIEW function calls into WinSock, which then queries the DNS resolver to return the TCP/IP network name. The WinSock provider works with internal timeouts when querying other services such as the DNS resolver, and there is no easy way to influence those timeouts from a WinSock client. The .NET function most likely calls GetComputerNameEx(), which queries information in the registry that is usually updated in the background at some largish interval.
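A quick sketch of that fast path, assuming the .NET call indeed ends up in something like GetComputerNameEx(), which reads locally cached system information instead of waiting on the resolver:

#include <windows.h>
#include <stdio.h>

int main(void)
{
    char  name[256];
    DWORD size = sizeof(name);
    if (GetComputerNameExA(ComputerNameDnsHostname, name, &size))
        printf("host name: %s\n", name);    /* returns nearly instantly, no DNS round trip involved */
    return 0;
}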
-
LabVIEW can in principle create ARM code for cross-compilation to the ARM based NI RT RIO devices. But that doesn't easily work on other ARM targets. For one, it must be an ARM Cortex A7 compatible device, and you need the LabVIEW runtime library for NI Linux RT, which is technically not trivial to get running on a different target, and legally you need to buy a runtime license from NI to be allowed to do that. Also, it doesn't use Windows at all but the NI Linux RT OS, which you would have to port to that board too. Supposedly the guys from TSExperts are working on a version of their cross-compilation toolchain that is supposed to work for the Raspberry Pi, which is also an ARM based embedded board. I have no idea how they get LabVIEW to create code for those targets, but I would assume they make use of the LabVIEW C Code Generator module, which has a hefty price tag. What their license deal with NI might be I also have no idea, but I don't expect this to be standard procedure. So in conclusion, it is not a clear no, as tst put it, but for most applications still not a feasible thing to attempt. To the OP: the Windows 10 version running on the DragonBoard is not a normal Windows version as used on your desktop computer but the Windows RT kernel, which is also used for the Windows Mobile platform. This is a Windows version built around .NET technology and does not provide any Win32 API, only the .NET API. Also it is typically not compiled for the x86 CPU but for a RISC based architecture like ARM. LabVIEW for Windows definitely can't run on this and never will, since it interfaces to the Win32 API and is compiled for the x86 CPU.
-
Keeping track of licenses of OpenG components
Rolf Kalbermatter replied to Mellroth's topic in OpenG General Discussions
I'm not sure how that code being LGPL would change much in that respect. The required knowledge remains the same, and the LGPL doesn't prevent anybody from modifying code and relicensing it under LGPL (or even GPL, but nothing else really), or from simply requesting commit access to the repository and adding the change to the code, with a nice copyright line added to the head of the file. I might or might not agree with the quality or method used in that change, and being still active I might ask the person to reconsider certain aspects, but other than that, there isn't really anything wrong with it. Even better would of course be to first consult me with a patch that can be cleanly applied. And while we are at it, if somebody is just now digging out his patches, which he has worked hard for, he or she could also mention what license they would prefer to be applied to that contribution. In all those years I haven't really received, for any of those libraries, a single request that contained even one line of patched code that could be applied directly. The only things I received occasionally were bug reports, often with very little technical detail, and I don't think this would change in any way with another license. Most likely I could put it in the Public Domain and, except that someone somewhere might use it to create his own product and try to sell it as his own, nothing would change. It might actually already have happened, but then I hope they feel at least some form of guilt. -
Keeping track of licenses of OpenG components
Rolf Kalbermatter replied to Mellroth's topic in OpenG General Discussions
I did a rundown of the projects which I have provided to OpenG and which contain shared libraries that are still under LGPL:
lvzip: Contains ZLIB and Minizip sources and sources developed by myself. ZLIB uses the ZLIB license, which is actually more lenient than BSD as it does NOT REQUIRE attribution but only inclusion of the ZLIB license text (which may not be removed from a source distribution). Minizip is somewhat unclear; there is a readme stating it uses the ZLIB license, but there is also a license text in the unzip.c file referring to the Info-ZIP license, which is in fact more or less the same as the 3-clause BSD license. This seems to apply to the crypt code in there, which was taken (albeit in reduced form) from the MiniZip software. So compiling with the NOUNCRYPT define might technically remove the code this license applies to, but I find it a pretty shaky assumption that you can get away with just the ZLIB license if you include unzip.c in any way in your code (which NI apparently did in LabVIEW too). All the rest is copyrighted by me. So yes, it seems I could change the license to BSD here, since the only C code and project files not originally copyrighted under BSD or ZLIB are from me alone.
LabPython: As far as the C code goes, everything is copyright by me.
OpenG Pipe library: As far as the C code goes, everything is copyright by me. This has never been released as an official OpenG package, so not likely a problem for most people so far.
OpenG PortIO: As far as the C code goes, everything is copyright by me. However, this is obsoleted by the fact that the used technique doesn't work on modern Windows systems anymore, and there is really no practical way to make it work differently.
Remains the question whether I should change it. As it is, without some effort to also make a uniform single license file that could be added to every installation of any OpenG Toolkit and that users could then include in an application build, I don't see much merit in going to change it. As to using another license for future work: it won't help much as long as there is one single VI with the old license in the Toolkit. And more importantly, active development on OpenG libraries has almost stopped, with the exception of the libraries I'm involved in. So unless someone new steps up and does new development, there really won't be any future work to apply such a license to. Also, unless I totally misunderstand the Apache license text, section 4d would pretty much mean a similar attribution requirement in any built application too, if there is any mention of your own copyright in that application in a license file, about dialog or similar. Basically, a built LabVIEW application would not be allowed to display any copyright information of any form anywhere, or this section applies. And MIT also requires including the MIT license text with the original copyright in any copy or substantial portion of the software, and although it doesn't explicitly say that this applies to object form too, it also doesn't say anything about source only, so a substantial portion can and probably should just as well be understood to include object form too. Therefore I wouldn't see MIT or Apache as a solution to the problem at hand. I think this is slightly misleading. NI stopped requesting royalties for the LabVIEW runtime license and many other licenses too, such as NI-VISA, many years ago.
For driver software, excluding NI-VISA and NI-IMAQdx under certain circumstances, the royalties are covered by the purchase of the corresponding NI hardware, which quite often is purchased directly from NI without involvement of the software developer who distributes a compiled app. The exception to this are certain toolkits and drivers such as IMAQ Vision, the LabVIEW Datalogging and Supervisory Control module, some of the special analysis libraries, and also for instance TestStand. If you use them, you need to get a runtime license from NI. You usually will notice quickly, at least under Windows, as those components require license activation to run properly. -
Keeping track of licenses of OpenG components
Rolf Kalbermatter replied to Mellroth's topic in OpenG General Discussions
Probably, but who is going to take the time to do all the administrative work? I'm not opposed to changing that license, but feel very little for doing all the other work that would be involved. Coincidentally, I was trying to find any mention of the ZLIB and minizip copyright in there and failed. That is almost 100% certain the basis for the ZIP functionality in LabVIEW, since even the exported function calls are the same as defined in minizip. It could be my search fu that failed me here, but I'm not sure.