Everything posted by Rolf Kalbermatter
-
Yes, local variables are generally placed on the stack by the C compiler. I say generally, since there exist CPU architectures which handle that differently, but they have no real significance outside of very specialized embedded architectures. However, they are not "posted" on the stack (and in my opinion even "allocated" feels wrong, as I associate that with an explicit malloc or similar call), though in a broader sense I suppose allocated is a sensible term here. The PostLVUserEvent() function then "posts" the data to the LabVIEW event queue associated with the event structure that registered for the user event.

And yes, the stack is typically not explicitly put in the cache, although it certainly could and probably does end up there. But that is not of concern to you; it is very much the concern of the CPU designer, who has to devise all sorts of tricks and protections to make sure everything stays coherent anyway. The stack usually lives in a reserved area of the process address space that for most processor architectures starts at a high address and grows downwards until it meets a stack limit or the normally managed heap memory, which is when you get the stack overflow error.
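To make the point about locals concrete, here is a minimal C sketch (assuming a typical compiler, nothing LabVIEW-specific): each call gets its own stack frame, so two simultaneously live locals from different invocations necessarily sit at different addresses.

```c
#include <stddef.h>

/* Recurses once so that two copies of `local` -- one per stack frame --
   are alive at the same time, then compares their addresses.
   Returns 1 when each invocation got its own distinct stack slot. */
static int stack_slots_are_distinct_helper(int depth, int *outer)
{
    int local = depth;                 /* fresh stack-allocated variable per call */
    if (depth > 0)
        return stack_slots_are_distinct_helper(depth - 1, &local);
    /* both `local` (this frame) and `*outer` (caller's frame) are alive here,
       so comparing their addresses is well defined */
    return &local != outer;
}

int stack_slots_are_distinct(void)
{
    return stack_slots_are_distinct_helper(1, NULL);
}
```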
-
Well, this error is usually caused by setting the compiler's default alignment to 1! The 64-bit platforms, and non-x86 platforms in general, don't use a special compiler alignment at all. So you have to be careful about adding this compiler option to your project. Basically, never use unconditional pragma packs in your code if you ever intend to port the code to any other LabVIEW platform; and yes, x64 is technically an entirely different platform from x86, even though NI doesn't treat it differently from a licensing point of view.

The proper way to do this is actually to replace the two pragmas with includes for lv_prolog.h and lv_epilog.h. They contain the proper magic to make the compiler behave. Now of course you should apply those includes ONLY around structures that will somehow interface to the LabVIEW datatype system. For other structures and elements you'll have to decide for yourself what alignment is the right one, and whether you want to put that into the source code or rather use a globally set alignment through a compiler option. Personally I think that if some code needs a specific alignment, it belongs in that code and only there, and everything else should remain at the default alignment. Changing the default alignment only makes sense if you control more or less the entire platform and the target hardware has a significant performance penalty with the default alignment. But most often you have to deal with constraints imposed by the software components you interface with. There the alignment should be changed locally where necessary and otherwise left at the default.

Why does LabVIEW use 1-byte alignment on x86? Well, when LabVIEW was ported from the Mac to Windows 3.1, computers with 4MB of physical memory were considered state-of-the-art machines. 8MB was seen as high end. 16MB wasn't even possible on most because of BIOS, chipset and board design limitations.
There, a default alignment of 8 bytes could waste a lot of memory on a platform that used predominantly 32-bit entities, with the exception of the double precision floating point, which wasn't that special in LabVIEW as it was an engineering tool often used for floating point calculations. Yes Jack, it is all soooooooo 1900, but that is the century when LabVIEW was developed. And something like byte alignment can't be changed later on a whim without rendering almost every DLL interface developed until then incompatible. The problem will however soon solve itself with the obsolescence of the x86 platform in general and in LabVIEW especially.

Your other remarks sound more angry than jaded, Jack! Yes, I also feel the pain from extcode.h, which is in some ways a bit dated and hasn't really seen much development in the last 20 years. PostLVUserEvent() was one of the few additions in that timeframe, and it wasn't the greatest design for sure. Incidentally, NI doesn't really use it themselves; they rather use the undocumented OM (Object Manager) API, which also supports event posting (and custom refnums like DAQmx, IMAQdx, etc.) but uses an API that is basically impossible to use without further detailed insight into the LabVIEW source code, despite a documentation leak for some of the functions in the 8.0 cintools headers. And the fact that you can't tell PostLVUserEvent() to take ownership of the data is certainly a flaw. However, if you use it for posting large amounts of data to the user event loop, you certainly have a much bigger problem in your software architecture. It's easy to do, I know, but it is absolutely not a clean design to send lots of data through such channels. The event handling should be able to proceed quickly and cleanly and should not do large data handling at all.
It's much better to limit the event to enough data to allow the receiver to identify the data in some way and retrieve it from your DLL directly when the final data handler sees fit, rather than forcing the whole data handling into the event itself. That is not only because of the limits of PostLVUserEvent(), but generally a better design than coupling event handling and data processing tightly together, even if PostLVUserEvent() had an explicitly synchronous sibling (which could only work with callback VIs and a new Filter User Event). Personally I think the fact that user events work with LabVIEW callback VIs is not so much an intended design feature but more of a somewhat unintentional side effect of adding ActiveX event support to the user event infrastructure. Or was that even before the User Event structure??

Also, your observation that you can't throttle the sender through the receiver is valid, but should again be solved outside of the event handling. Once you let the event delegate the data handling to some specific actor or action engine, or whatever, the retrieval of the data through this entity gives you the possibility to implement whatever data throttling you want on the sender side. Yes I know, queues in LabVIEW are a lot easier to use than in C code, but I have to admit that it would be a pretty involved exercise to come up with a LabVIEW C API that addresses all the caveats you mentioned about PostLVUserEvent() and would still be usable without a major in computer science. And with such a degree, doing that yourself in your DLL is not that hard an exercise anymore and allows the generic LabVIEW interface to stay simple. That would be a really bad PRNG. Relying on random data in memory is anything but truly random.
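To illustrate the alignment discussion above, this small C snippet shows what a pack(1) pragma, as LabVIEW historically used on 32-bit x86, does to a struct layout. The lv_prolog.h/lv_epilog.h headers from cintools encapsulate this kind of pragma in a platform-aware way; the raw pragma here is for demonstration only.

```c
#include <stddef.h>

/* Default (natural) alignment: the compiler pads `d` out to its natural
   boundary on typical desktop platforms, wasting space after `c`. */
typedef struct {
    char   c;
    double d;
} NaturalAligned;

/* Byte packing, as LabVIEW required on 32-bit x86. In real LabVIEW code
   this belongs between #include "lv_prolog.h" and #include "lv_epilog.h"
   rather than as a raw, unconditional pragma. */
#pragma pack(push, 1)
typedef struct {
    char   c;
    double d;
} BytePacked;
#pragma pack(pop)
```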
-
Am I missing something here? In your example you allocate a handle each time you send an event but never deallocate it. And right below that you say that it's not necessary to copy the original data since PostLVUserEvent() will create its own copy of the data!
-
There isn't a really compelling reason not to use static import libraries, other than that you won't be able to load the wrapper DLL without the secondary DLL also being available in a Windows-searchable location. With LoadLibrary() you can implement your own runtime failure handling when the DLL can't be found (or add custom directories to attempt to load your secondary DLL from), while with a static import library LabVIEW will simply bark at you that it could not load the wrapper DLL, even though you can clearly see it on disk at the expected location.
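For what it's worth, here is a sketch of that graceful-failure pattern. The post is about Windows LoadLibrary()/GetProcAddress(); this example uses the POSIX equivalents dlopen()/dlsym() so it can run anywhere, but the idea is identical: the wrapper itself always loads, and you report a runtime error yourself instead of the OS loader refusing the whole module.

```c
#include <stdio.h>
#include <dlfcn.h>   /* POSIX; on Windows use LoadLibrary/GetProcAddress/FreeLibrary */

/* Try to load a library and resolve one symbol from it at runtime.
   libname == NULL means "the running program and its loaded libraries".
   Returns 1 on success, 0 on failure -- the caller decides how to report
   the error, instead of the process refusing to start as it would with a
   static import library. */
int try_load_symbol(const char *libname, const char *symbol)
{
    void *lib = dlopen(libname, RTLD_LAZY);
    if (!lib) {
        fprintf(stderr, "could not load %s: %s\n",
                libname ? libname : "(self)", dlerror());
        return 0;               /* graceful runtime failure */
    }
    void *fn = dlsym(lib, symbol);
    dlclose(lib);
    return fn != NULL;
}
```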
-
I've used VPN routers in other setups, though not with NI Linux RT based cRIOs. However, unless it is for your own hobby use, I would not recommend relying on the built-in VPN capabilities that standard home ADSL or similar routers sometimes have. They are pretty weak, often have various vulnerabilities, and there is seldom even a reasonable way to upgrade the firmware. A Cisco or similar router may be more expensive, but if you are talking about business data, then this is the wrong place to save a few bucks. Also, setting up VPN tunnels between routers from different manufacturers is generally not a trivial exercise. There are different standards for key exchange and various protocols that not all manufacturers support in all ways, which can make it a daunting exercise to find a method that works with both sides and can still be considered secure.
-
Even on an NI Linux RT cRIO device I would probably offload the VPN handling to an embedded router that sits between the outside network and the local cRIO network. And of course use a VPN server on the remote side of the customer's network and not a cloud based one. With your example of an oil rig, I would suppose they already use VPN protected links between offshore and land side, if they have any network connection at all.
-
Doo, if the function prototype really looks like: int Send(tyLinkID iLinkID, tyMessage message); then the message struct is indeed passed by value, and the function prototype would effectively be mostly equivalent to int Send(tyLinkID iLinkID, tyMessageType nMsgType, DWORD nAddress, BYTE *cType, WORD nTypeLength, BYTE *cData, WORD nDataLength); For this particular type it should be OK to configure the CLN like this, but the behavior is undefined for other possible data structs where the element alignment would cause the elements to be aligned on other than the natural address boundary. For instance, with a struct like this: typedef struct { char byte1; char byte2; char byte3; char *string; }; it is not certain whether the C compiler would pack the first 3 bytes into a single function parameter or not, making the binary function interface potentially different between different C compilers!

As to UHandle reference counting: as far as I'm aware, LabVIEW does not do reference counting on handles. The reason is that simple reference counting isn't really enough for LabVIEW's optimizations; there also needs to be some way of marking whether a handle is stomped on (modified). Doing this kind of optimization during compilation (LabVIEW diagram to DFIR and DFIR optimization) delivers much better results than trying to optimize handle reuse at runtime. Generally, LabVIEW C functions do not take ownership of a handle passed into them unless specifically documented to do so. From anything I can see, PostLVUserEvent() is no exception to this. LabVIEW simply creates a deep copy of the entire data structure.
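The padding issue can be made visible with offsetof: for the quoted struct, the compiler pads the three chars up to the pointer's natural boundary, and how those bytes plus padding get split into registers or stack slots when the struct is passed by value is entirely up to the platform ABI.

```c
#include <stddef.h>

/* The struct from the post: three chars followed by a pointer. The
   compiler inserts padding after byte3 so that `string` lands on a
   natural pointer boundary. Whether the first 3 bytes plus padding
   travel as one flattened parameter when passed by value is an ABI
   decision, which is why such by-value structs are fragile across a
   DLL boundary. */
typedef struct {
    char  byte1;
    char  byte2;
    char  byte3;
    char *string;
} ThreeBytesAndPtr;
```

On common 32-bit and 64-bit ABIs the pointer member ends up aligned to its own size, so the struct is exactly two pointer-widths large despite containing only three meaningful bytes before the pointer.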
-
Don't! Add a wrapper to your DLL that accepts the parameters as LabVIEW strings/byte buffers and build the struct in that wrapper. cType and cData are pointers. If you try to call the function directly from LabVIEW, you end up with two different clusters for 32-bit and 64-bit LabVIEW if you ever want to go that route, have to bother about struct element alignment yourself, and need to do C pointer voodoo on your LabVIEW diagram. In C this is a lot more straightforward and you can let the compiler worry about most of those issues. Jack certainly raised a few interesting points, most of which I simply assumed as given, considering the question. The one nasty point could be if the caller of the callback function expects a meaningful return value from the callback, which Jack described as requiring a synchronous callback. PostLVUserEvent() will post the message into the user event queue and then return without waiting for the event to be handled. If your DLL expects the callback to do some specific action such as clearing an error condition or whatever, you either have to build that into your callback before you return, making it really asynchronous to the actual handling of the event, or you need to add extra synchronization between your callback and the actual handling of your user event in a LabVIEW user event structure.
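A rough sketch of what such a wrapper could look like. Note that tyMessage, Send() and the field names are assumptions modeled on the prototypes quoted earlier in the thread, with a dummy stand-in for the real library function:

```c
#include <string.h>

/* Hypothetical struct mirroring the prototype quoted in the thread. */
typedef struct {
    int            nMsgType;
    unsigned int   nAddress;
    unsigned char *cType;
    unsigned short nTypeLength;
    unsigned char *cData;
    unsigned short nDataLength;
} tyMessage;

/* Stand-in for the real library function, which would normally live in
   the wrapped DLL; it just returns a dummy value for this sketch. */
static int Send(int linkID, tyMessage msg)
{
    (void)linkID;
    return msg.nTypeLength + msg.nDataLength;
}

/* The exported wrapper: a flat signature of integers and byte buffers
   that LabVIEW's Call Library Node can be configured for directly.
   The struct, its alignment, and the by-value passing all stay in C. */
int SendWrapper(int linkID, int msgType, unsigned int address,
                unsigned char *type, unsigned short typeLen,
                unsigned char *data, unsigned short dataLen)
{
    tyMessage msg;
    memset(&msg, 0, sizeof msg);
    msg.nMsgType    = msgType;
    msg.nAddress    = address;
    msg.cType       = type;
    msg.nTypeLength = typeLen;
    msg.cData       = data;
    msg.nDataLength = dataLen;
    return Send(linkID, msg);   /* struct assembled and passed by value in C */
}
```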
-
Actually, as far as functions that do not use any callback pointers are concerned, you can also call the original DLL directly. A DLL, unless it does some nasty SxS (side-by-side) loading voodoo, only gets mapped once into a process's address space, and any module (DLL) inside that process referencing a DLL with the same name will have its function pointers relinked to that single memory image. While Shaun's method of only implementing the callback function in your external DLL will work too, it requires you to do a LoadLibrary() and GetProcAddress() call for that callback function. Personally I would tend to wrap the entire function needing a callback parameter into a second DLL.
-
OpenG imaging and machine vision tools?
Rolf Kalbermatter replied to caleyjag's topic in OpenG General Discussions
It's not meant as a discouragement, and I would definitely try to help with advice on the more low-level difficulties, but I have no time and resources to substantially contribute to and drive such a project myself. If I were to start this seriously, it would have to come out of a project for which we need it, and that would almost automatically mean that it could not be provided as open source without a lot of hassles. -
OpenG imaging and machine vision tools?
Rolf Kalbermatter replied to caleyjag's topic in OpenG General Discussions
It's an interesting idea, but also one which requires a lot of work to get to a point that is even remotely useful. IMAQ Vision isn't just a small collection of C functions gathered together, but a pretty involved function library with lots of supporting glue to put those functions into LabVIEW in an easy to use way without sacrificing performance in a big way. OpenCV would probably be the library of choice for such a project. I've been looking into it to make a DLL interface that integrates OpenCV into LabVIEW in an easy to use way. But OpenCV also has a legacy, with an old-style C interface and a more modern C++ interface, and I have found that not every functionality is equally available in both. That makes a generic interface to LabVIEW more complicated, as you need to provide different datatypes for the different functions. Creating OpenG interfaces to Halcon, Sherlock, Matrox, Keyence, etc. is probably not very interesting for a larger audience. These packages are expensive, strictly license-controlled, and totally out of reach of most non-commercial developers and even many commercial projects. Whatever you decide, it's a serious undertaking to get something like this started and even more work to get something usable up and running. -
Speech To text Conversion(Speech Recognition) Using LabVIEW
Rolf Kalbermatter replied to Roopa's topic in LabVIEW General
What doesn't work for you in the library that is provided in the LVSpeak link mentioned earlier in this thread? -
Directing TCP Open
Rolf Kalbermatter replied to Cat's topic in Remote Control, Monitoring and the Internet
The reason there isn't is that it is usually not necessary. Binding a client socket to a specific network card address is basically a fix for a problem that is better resolved in the routing configuration. For generic clients like a web browser you usually can't specify a particular network address to bind to either. It requires knowledge from the end user about the network configuration on their computer that you want to avoid whenever possible. If the routing tables are correct and the subnet ranges for each network adapter are configured properly, packets are routed to the right network adapter based on the destination address. If you have overlapping subnet ranges and those network cards are not logically connected to the same network, the routing may of course choose the wrong adapter. The same goes for a too-large subnet range configured on a network adapter: it may capture packets that should really go to the default adapter. Depending on the network card driver, Windows may have trouble determining that a specific adapter is not connected to any network at all and may still use its configuration when routing packets, making them land in that adapter's queue; because of the disconnected state they will never leave the computer. -
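As an aside, for anyone curious what "directing" a client connection actually looks like at the C socket level (plain POSIX sockets, nothing LabVIEW exposes): the client calls bind() with a chosen local address before connect(). This self-contained sketch keeps both ends on the loopback interface so it can run anywhere.

```c
#include <arpa/inet.h>
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Bind a client socket to a specific local address (127.0.0.1 here)
   BEFORE connecting, instead of letting the routing table decide.
   Returns 1 when the connected socket's local address is indeed the
   bound one, 0 on any failure. */
int connect_from_local_addr(void)
{
    int srv = socket(AF_INET, SOCK_STREAM, 0);
    int cli = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr, local;
    socklen_t len = sizeof addr;
    int ok = 0;

    memset(&addr, 0, sizeof addr);
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
    addr.sin_port = 0;                               /* ephemeral server port */

    if (srv < 0 || cli < 0)
        goto done;
    if (bind(srv, (struct sockaddr *)&addr, sizeof addr) < 0 ||
        listen(srv, 1) < 0 ||
        getsockname(srv, (struct sockaddr *)&addr, &len) < 0)
        goto done;                                   /* addr now holds the port */

    /* The essential step: pin the client to a chosen local interface. */
    memset(&local, 0, sizeof local);
    local.sin_family = AF_INET;
    local.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
    local.sin_port = 0;
    if (bind(cli, (struct sockaddr *)&local, sizeof local) < 0)
        goto done;

    if (connect(cli, (struct sockaddr *)&addr, sizeof addr) < 0)
        goto done;

    len = sizeof local;
    if (getsockname(cli, (struct sockaddr *)&local, &len) == 0)
        ok = (local.sin_addr.s_addr == htonl(INADDR_LOOPBACK));

done:
    if (srv >= 0) close(srv);
    if (cli >= 0) close(cli);
    return ok;
}
```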
I would guess a configuration issue in the network stack. The LabVIEW function calls into WinSock, which will then query the DNS resolver to return the TCP/IP network name. The WinSock provider works with internal timeouts when querying other services such as the DNS resolver, and there is no easy way to influence those timeouts from a WinSock client. The .Net function most likely calls GetComputerNameEx(), which queries information in the registry that is usually updated in the background at some largish interval.
-
LabVIEW can in principle create ARM code for cross-compilation to the ARM based NI RT RIO devices. But that doesn't easily carry over to other ARM targets. For one, it must be an ARM Cortex-A7 compatible device. And you need the LabVIEW runtime library for NI Linux RT, which is technically not trivial to get running on a different target, and legally you need to buy a runtime license from NI to be allowed to do that. Also, it doesn't use Windows at all but the NI Linux RT OS, which you would have to port to that board too. Supposedly the guys from TSXperts are working on a version of their cross-compilation toolchain that is supposed to work for the Raspberry Pi, which is also an ARM based embedded board. I have no idea how they create code from LabVIEW for those targets, but I would assume they make use of the LabVIEW C Code Generator module, which has a hefty price tag. What their license deal with NI might be I also have no idea, but I don't expect this to be standard procedure. So in conclusion, it is not a clear no, as tst put it, but for most applications still not a feasible thing to attempt.

To the OP: the Windows 10 version running on the DragonBoard is not a normal Windows version as used on your desktop computer, but the Windows RT kernel, which is also used for the Windows Mobile platform. This is a Windows version built around .Net technology that does not provide any Win32 API, only the .Net API. Also, it is typically not compiled for the x86 CPU but for some RISC based architecture like ARM. LabVIEW for Windows definitely can't run on this and never will, since it interfaces to the Win32 API and is compiled for the x86 CPU.
-
Keeping track of licenses of OpenG components
Rolf Kalbermatter replied to Mellroth's topic in OpenG General Discussions
I'm not sure how that code being LGPL would change much in that respect. The required knowledge remains the same and LGPL doesn't prevent anybody from modifying code and relicensing under LGPL (or even GPL but nothing else really) or simply request commit access to the repository and add the change to the code with a nice copyright line added in the head of the file. I might agree or not with the quality, or method used in that change and being still active might ask the person to reconsider certain aspects but other than that, there isn't really anything wrong about that. Even better would be of course to first consult me with a patch that can be cleanly applied. And if we are at it and somebody is just now digging out his patches, which he has worked hard for, he or she could also mention what license they would prefer to be applied to that contribution. In all those years I haven't really received any request for any of those libraries that contained even one single line of patched code that could directly be applied. The only thing that I received occasionally were bug reports often with very little technical details, and I don't think this would change in any way with another license. Most likely I could put it in the Public Domain and except that someone somewhere might use it to create his own product and try to sell it as his own, nothing would change. It might actually already have happened but then I hope they feel at least some form of guilt. -
Keeping track of licenses of OpenG components
Rolf Kalbermatter replied to Mellroth's topic in OpenG General Discussions
I did a rundown of the projects which I have provided to OpenG and which contain shared libraries that are still under LGPL:

lvzip: Contains ZLIB and Minizip sources plus sources developed by myself. ZLIB uses the ZLIB license, which is actually more lenient than BSD as it does NOT REQUIRE attribution but only inclusion of the ZLIB license text (which may not be removed from a source distribution). Minizip is somewhat unclear: there is a readme stating it uses the ZLIB license, but there is also a license text in the unzip.c file referring to the Info-ZIP license, which is in fact more or less the same as the 3-clause BSD license. This seems to apply to the crypt code in there, which was taken (albeit in reduced form) from the Info-ZIP software. So compiling with the NOUNCRYPT define might technically remove the code this license applies to, but I find it a pretty shaky assumption that you can get away with just the ZLIB license if you include unzip.c in any way in your code (which NI apparently did in LabVIEW too). All the rest is copyrighted by me. So yes, it seems I could change the license to BSD here, since the only C code and project files not originally copyrighted under BSD or ZLIB are from me alone.

LabPython: As far as the C code goes, everything is copyright by me.

OpenG Pipe library: As far as the C code goes, everything is copyright by me. This has never been released as an official OpenG package, so it is not likely a problem for most people so far.

OpenG PortIO: As far as the C code goes, everything is copyright by me. However, this is obsoleted by the fact that the used technique doesn't work on modern Windows systems anymore and there is really no practical way to make it work differently. The question remains whether I should change it.
As it is, without some effort to also produce a uniform single license file that could be added to every installation of any OpenG Toolkit, and that users could then include in an application build, I do not see much merit in changing it. As to using another license for future work: it won't help much as long as there is one single VI with the old license in the Toolkit. And more importantly, active development on the OpenG libraries has almost stopped, with the exception of the libraries I'm involved in. So unless someone new steps up and does new development, there really won't be any future work to apply such a license to. Also, unless I totally misunderstand the Apache license text, section 4d would pretty much mean a similar attribution requirement in any built application too, if that application mentions your own copyright anywhere in a license file, about dialog or similar. Basically, a built LabVIEW application would not be allowed to display any copyright information of any form anywhere, or this section applies. And MIT also requires including the MIT license text with the original copyright in any copy or substantial portion of the software, and although it doesn't explicitly say that this applies to object form too, it also doesn't say anything about source only, so a substantial portion can and probably should just as well be understood to include object form. Therefore I wouldn't see MIT or Apache as a solution to the problem at hand.

I think this is slightly misleading. NI stopped requesting royalties for the LabVIEW runtime license, and many other licenses too such as NI-VISA, many years ago. For driver software, excluding NI-VISA and NI-IMAQdx under certain circumstances, the royalties are covered by the purchase of the according NI hardware, which quite often is purchased directly from NI without involvement of the software developer who distributes a compiled app.
The exception to this are certain toolkits and drivers such as IMAQ Vision, the LabVIEW Datalogging and Supervisory Control module, some of the special analysis libraries, etc., and also for instance TestStand. If you use them, you need to get a runtime license from NI. You usually will notice quickly, at least under Windows, as those components require license activation to run properly. -
Keeping track of licenses of OpenG components
Rolf Kalbermatter replied to Mellroth's topic in OpenG General Discussions
Probably, but who is going to take the time to do all the administrative work? I'm not opposed to changing that license but feel very little urge to do all the other work that would be involved. Coincidentally, I was trying to find any mention of the ZLIB and minizip copyrights in there and failed. That is almost certainly the basis for the ZIP functionality in LabVIEW, since even the exported function calls are the same as defined in minizip. It could be my search-fu that failed me here, but I'm not sure. -
Keeping track of licenses of OpenG components
Rolf Kalbermatter replied to Mellroth's topic in OpenG General Discussions
I guess it could. But then, have you ever looked at a LabVIEW distribution? And noticed all the open source licenses in there? I guess if you want to split hairs like this and build a LabVIEW application, you need to add all those licenses too nowadays, or risk violating one of them if you add an innocent VI. HTTP VIs used anywhere, SMTP? Sorry, they all use open source software underneath. Just take a look at C:\Program Files (x86)\National Instruments\_Legal Information for a moment and feel overwhelmed. The OpenG software problem looks trivial in comparison. -
Keeping track of licenses of OpenG components
Rolf Kalbermatter replied to Mellroth's topic in OpenG General Discussions
Only the DLLs, which are dynamic by nature without any special tricks necessary. And the entire source code is on SourceForge. So what remains is a copyright notice, like for the rest. -
Keeping track of licenses of OpenG components
Rolf Kalbermatter replied to Mellroth's topic in OpenG General Discussions
You are right on all accounts. LGPL requires you to separate the LGPL code into external libraries that can be dynamically called and theoretically replaced with the source code version. That made building apps pretty complicated, albeit not impossible. BSD is IMHO one of the most practical open source licenses for allowing commercial use. It's unfortunate that even that seems too troublesome for some, but I wouldn't know of a better solution. -
The Transpose can be a free operation, but it doesn't have to stay free throughout the diagram. LabVIEW maintains flags for arrays that indicate for instance the order (forward or backward) as well as whether it is transposed or not. The Transpose function then sets the according flag (as Reverse 1D Array sets its according flag). Any function consuming the array either has to support that flag and process the array accordingly, or first call a function that normalizes the array. So while Transpose may be free in itself, it doesn't mean that processing a transposed array will never incur the additional processing that goes along with physically transposing the array. I believe it is safe to assume that all native LabVIEW nodes know how to handle such "subarrays", as will probably autoindexing and similar. However, when such an array is passed to a Call Library Node, LabVIEW will ALWAYS normalize the array prior to calling the external code function. Similar things apply to other array operations: Array Subset, for instance, doesn't always physically create a new array and copy data into it, but can also create a subarray that only maintains things like the offset and length into the original array. Of course, many of these optimizations will be voided as soon as your diagram starts to have wire branches that often require separate copies of the array data in order to stay consistent.
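A toy C sketch of the subarray idea, purely illustrative and not LabVIEW's actual internal layout: a view records a transposed flag, the transpose itself is O(1), and only the index computation changes until some consumer needs the data in canonical order.

```c
/* A view over row-major data with a transposed flag. "Transposing" the
   view just flips the flag; no element is moved until a consumer that
   cannot handle the flag normalizes the data. */
typedef struct {
    const double *data;   /* row-major backing storage, rows x cols */
    int rows, cols;       /* dimensions of the ORIGINAL array */
    int transposed;       /* view flag set by a "free" transpose */
} MatrixView;

/* Element (r, c) of the view: when the flag is set, the indices are
   swapped in the address computation instead of moving any data. */
double view_get(const MatrixView *v, int r, int c)
{
    return v->transposed ? v->data[c * v->cols + r]
                         : v->data[r * v->cols + c];
}

/* A "free" transpose: O(1), only flips the flag. */
MatrixView view_transpose(MatrixView v)
{
    v.transposed = !v.transposed;
    return v;
}
```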
-
The download ZIP button is not there to download an installer, but rather an image of the source files as present in the GitHub project repository. I'm not sure I would consider this unfortunate, as one might expect users going to GitHub to know a bit about software development and the difference between a source code tree and a built package. SourceForge is in that respect a bit more clearly structured: you have the code section, where you can browse the source code and download an image of it, and the files section, where the project maintainer usually puts installers or packages built from the source tree for end users to download and use. If only it hadn't been acquired by Slashdot and turned into a cash machine, with advertisement and download wrappers for popular projects, the wrappers trying to force all kinds of adware onto a user's computer.
-
I would question someone's engineering abilities if they find 150 Euros too expensive for something which can make the difference between a properly working system and one which regularly loses communication and/or trips the computer into a blue screen of death. If an engineer has to spend two hours debugging such an error caused by a noname serial port adapter, then the more expensive device has already paid for itself more than once. And two engineering hours are just the tip of the iceberg, with lost productivity, a bad image towards the customer and whatnot not even counted in.
-
Keeping track of licenses of OpenG components
Rolf Kalbermatter replied to Mellroth's topic in OpenG General Discussions
Yes, as mentioned in post #12 by James David Powell, VIPM attributes the individual names. The reason is that OpenG started like this more than 15 years ago, and it would be pretty impractical to get agreement from all authors to change that now, since some might not even be involved in LabVIEW work anymore and impossible to contact. There definitely is nobody who has seriously considered doing that so far, and I'm not volunteering. I would guess that VIPM uses most of the OpenG libraries in one way or another and its license attribution is pretty complete, but I cannot speak for the VIPM developers nor for JKI, and they would really be the more appropriate people to contact about this.

One other thing to consider here: if you only use OpenG inside projects that are used inside your company, your company is its own customer, and you maintaining the source code of the applications on a company provided source code control system (you do that, right???) takes care of all the license requirements of even more stringent open source licenses like GPL. Of course, you have to document such use, as otherwise an unsuspecting colleague may turn over a build of your application to a contractor or other third party and create a license violation that way. Only when you start to develop applications that your company intends to sell, lend, or otherwise make available to third parties without source code will you have to seriously consider the various implications of most open source licenses out there, with the BSD license being definitely one of the most lenient licenses (with the exception of maybe the WTFPL (Do What the Fuck You Want to Public License), which some lawyers feel is so offensive that they dispute its validity). And of course there is Public Domain code, but again, some lawyers feel that it is impossible to abandon copyright and that putting code into the Public Domain is an impossibility.
Isn't law great, and wouldn't life without lawyers be so easy?