Everything posted by Rolf Kalbermatter

  1. In my LabVIEW 2016, in which I'm still mostly working, Clear Error.vi is set to reentrant (preallocated clone) and to be inlined!
  2. I prefer to only wire one of them and NOT enable automatic error handling in the VI. My default for automatic error handling is "disabled". I find it a terrible idea to have random error popups show up when you want to ignore some errors and don't want to place thousands of Clear Error.vi instances all around the place.
  3. Should I contact the Software Freedom Conservancy? Considering the recent discussions about GPL licensing in the kernel, and the Linux maintainers preferring to help people comply rather than taking them to court, I would say JKI is fairly safe. I totally agree with their sentiment: the (L)GPL is not a means to sue people but a means to ensure software remains open source rather than being inappropriately claimed by big companies for their own evil plans.
  4. I see your point about the lack of proper documentation of the LGPL used for the shared library part. It is contained in the source, but as I have never been involved with the actual packaging itself so far, it somehow got swept under the rug. I will make sure to remedy that in the next release by adding a LICENSE file to the package that states the actual situation. I will also add a link to the OpenG SourceForge project site to that file, so anyone adding that file to their application will then be in full compliance. Including the entire build environment for all possible targets (11, last time I counted) really is not an option. That would simply explode the package. I don't even have them all on one machine. Some are in virtual machines that I only connect specifically for compiling and testing these libraries.
  5. No. All my submissions to OpenG were originally licensed as LGPL, just like everything else in OpenG. Jim Kring wanted at some point to relicense all OpenG libraries to BSD, because of community feedback and maybe also because he already envisioned something like VIPM being created. He contacted every submitter and we discussed it, and I recognized the difficulties of LabVIEW VIs having to be built into an application in such a way that they could be dynamically linked to it, in order to comply with the LGPL if you did not want to release the application itself as (L)GPL. As a result I decided to relicense the VI part of all libraries under the BSD license, just as all the other submitters agreed to do.

     Since the shared libraries in those projects (there are other libraries such as Port IO, LabPython, and Pipe IO) are all naturally dynamically linked to any application using them, it did not seem necessary to change that license too, so I left it as LGPL. And as long as you do not make any modifications to the shared library, the fact that the source code is freely downloadable from the SourceForge OpenG project site really should satisfy any obligation a user of that library might have to make the source code available to any other potential user. I consider that a safer long-term option than what many Alliance Members could guarantee if they hosted the source code themselves somewhere, since Alliance Members have been known to go out of business in the past, and then nobody will honor source code requests from potential users of such an application.

     And yes, I consider that part of the LGPL to be the real benefit: any user of an application with that library in it has the right to see the source code for that library, and if someone modifies the shared library and publishes an application using it, they are obligated to make the modifications available to the community. If someone does not like that, the solution is simple: don't modify the library! If someone wants to use the LGPL as a reason not to use a software library, because of perceived problems complying with it, they are free to do so. I don't see the problem and don't intend to change my opinion about that. In my opinion it is more often than not just a legalese excuse for the "not invented here" syndrome or some variant of it.

     My understanding is that if you add a statement to the effect that the application includes OpenG libraries, with a link to the SourceForge project site, either in a readme file in the installation directory, in the application documentation, or in the About dialog of your application, you really have done everything necessary to comply with the LGPL license for the shared library part. And that statement needs to be there somewhere even for the BSD licensed code! Even if I wanted to, which I don't, I cannot remove the code from the SourceForge site anymore. I could delete it from the repository, yes, but it would still be there in the form of the entire Subversion history, for anyone to grab and publish somewhere else if he or she decided to. If there are specific situations where someone wants to discuss a different license for any of these libraries with me, for whatever reason, I'm willing to discuss that. But as a community submission it will remain as it is for the foreseeable future.
  6. The main reason that I want to support symlinks comes from the fact that under Linux, when installing shared libraries, you normally create various symlinks all pointing to the actual shared library in order to have a versioning scheme (see the sketch below). Without support for symlinks in the package itself you have to do some involved Münchhausen trick, using post-install hooks to create them through the command line, which is also OS and sometimes even OS version specific. Also, a shared library on OS X is in fact a whole directory structure with various symlinks pointing to each other for the resource and version information and the actual binary object file. Without support for this you have to zip the shared library package up, add the zip file to the OpenG package, and on install unzip it again to the right location. Support for links under Windows was mostly just a fun-to-have addition to make the symlink functionality work on all LabVIEW platforms; the practical benefit under Windows is fairly limited in terms of support for install packages. And in hindsight the effort to implement it under Windows was pretty heavy. But it does allow testing the library under Windows too, without special exceptions.
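
     As a rough illustration of that Linux versioning scheme, this is what an installer effectively does, here written in C (the library names are made up for the example):

        #include <unistd.h>
        #include <stdio.h>

        /* Recreate the usual Linux shared library versioning chain,
           as a package install step would. Names are illustrative. */
        int main(void)
        {
            /* libfoo.so.1 -> libfoo.so.1.2.3  (runtime soname link) */
            if (symlink("libfoo.so.1.2.3", "libfoo.so.1") != 0)
                perror("symlink soname");
            /* libfoo.so -> libfoo.so.1        (development link) */
            if (symlink("libfoo.so.1", "libfoo.so") != 0)
                perror("symlink devlink");
            return 0;
        }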
  7. Actually only if you delete the last hardlink pointing to that file. The file object itself maintains information about how many hardlinks exist for it. The built-in LabVIEW file APIs also simply use the Win32 APIs, so saying that Windows APIs are link aware and LabVIEW APIs are not is a bit misleading. However, LabVIEW path handling has some particularities: it always resolves the path to the actual file, even if the original path pointed to any kind of link, including the good old Windows 95 shortcut, and even over multiple redirections. The problem is that LabVIEW functions always operate on the actual file or directory entry rather than on the link or shortcut. So I had to also implement directory listing and other file functions in order to be able to operate on the actual link file rather than its target.
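
     A minimal C sketch of reading that hardlink count through the Win32 API (the path is just an example):

        #include <windows.h>
        #include <stdio.h>

        /* Print how many hardlinks (directory entries) point at a file's
           data. The content is only deleted once this count drops to zero. */
        int main(void)
        {
            HANDLE h = CreateFileW(L"C:\\temp\\example.txt", FILE_READ_ATTRIBUTES,
                                   FILE_SHARE_READ | FILE_SHARE_WRITE | FILE_SHARE_DELETE,
                                   NULL, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
            if (h == INVALID_HANDLE_VALUE)
                return 1;
            BY_HANDLE_FILE_INFORMATION info;
            if (GetFileInformationByHandle(h, &info))
                printf("hardlink count: %lu\n", (unsigned long)info.nNumberOfLinks);
            CloseHandle(h);
            return 0;
        }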
  8. Actually, creating symlinks is, as of the latest Windows 10 builds, no longer a privileged operation. Then you have junction points, hardlinks, and symlinks (softlinks). In true Microsoft tradition they made it unnecessarily complicated, in that one only works for directories while another only works for files. There are some implications. Symlinks, like under Unix, are not verified to point to a valid location, so they can be created for non-existing files and directories, and can end up pointing into nirvana when the actual file or directory is deleted. Junction points only work for directories, must be absolute, and must point to a local volume; they specifically cannot point to remote locations. Hardlinks only work for files and are simply additional directory entries pointing to the actual file content in the filesystem. In fact, every directory entry on an NTFS volume is a hardlink, so each file has at least one hardlink entry on an NTFS volume. Once the last hardlink for a file is deleted, the file itself is deleted from the filesystem. Personally, I have for now decided to support reading either of them (except hardlinks, since every file entry that is not one of the above reparse point types is in fact a hardlink, so there is no way to decide whether a file that has multiple hardlinks pointing to it, which would be possible to detect, should be treated as a link or as an actual file), storing them as a link entry in the ZIP file, and on restore, at least under Windows, always creating symlinks.
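
     For reference, the different link flavors can be created from C roughly like this. Paths are illustrative, and the unprivileged symlink flag is my assumption of the relevant detail: it only works on Windows 10 builds with Developer Mode enabled.

        #include <windows.h>

        #ifndef SYMBOLIC_LINK_FLAG_ALLOW_UNPRIVILEGED_CREATE
        #define SYMBOLIC_LINK_FLAG_ALLOW_UNPRIVILEGED_CREATE 0x2
        #endif

        int main(void)
        {
            /* File symlink, without requiring SeCreateSymbolicLinkPrivilege
               (needs a Windows 10 build with Developer Mode enabled) */
            CreateSymbolicLinkW(L"C:\\temp\\link.txt", L"C:\\temp\\target.txt",
                                SYMBOLIC_LINK_FLAG_ALLOW_UNPRIVILEGED_CREATE);
            /* Directory symlink needs an extra flag */
            CreateSymbolicLinkW(L"C:\\temp\\dirlink", L"C:\\temp\\dir",
                                SYMBOLIC_LINK_FLAG_DIRECTORY |
                                SYMBOLIC_LINK_FLAG_ALLOW_UNPRIVILEGED_CREATE);
            /* Hardlink: just another directory entry for the same file content */
            CreateHardLinkW(L"C:\\temp\\hardlink.txt", L"C:\\temp\\target.txt", NULL);
            return 0;
        }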
  9. It's simply an update to the OpenG ZIP library. As such, the LabVIEW VIs are BSD licensed and the shared library code is LGPL. That shouldn't matter for anyone unless you want to modify the shared library yourself somehow. If someone has a problem with that, please elaborate what the problem is, but don't tell me "I don't like anything that sounds like GPL". The shared library really is not meant to be modified by others, and in the 20 years that the library has existed I have not received one single patch, fix, or other suggestion for that part. If your lawyer claims that an LGPL licensed shared library component is not possible in a commercial app or whatever, let them think again. They are talking through their hat.
  10. I think that sums it up, more or less. I would NOT bother with license installation at this time. The NI 3rd party license manager app is pretty outdated and Windows only, so not a good solution anyway. And anyone interested in having that work can always add some batch file script or custom post-install hook to do it if needed. There definitely should be a way to support custom repositories in some form, preferably through some kind of plugin interface, to make it flexible for different services. FYI: I'm making some progress on the new ZIP library with transparent support for (sym)links. I still need to figure out some issues, but it is starting to work. Soon some testing will need to be done to make sure it works on more than just my own systems :-).
  11. I almost always have to do that. Except this one, of course. But indeed, if it takes longer to edit and/or you go to other tabs in Chrome, it will not post.
  12. It does not crash if you typecast the integer to the right refnum (be it a DVR or any other LabVIEW refnum). But it can crash badly if you ever convert DVR_A into an int and then try to convert it back into DVR_B or any other LabVIEW refnum. The int typecast removes any type safety, and LabVIEW has no foolproof way to detect in the typecast-to-refnum conversion that everything is alright. LabVIEW does have some safety built in there, in that part of the uint32 value of any refnum is reserved as a random identifier for the refnum type pool it resides in (so a file refnum has a different bit pattern for those bits than a network refnum), but it cannot safely verify that the int coming from a DVR_X is fully compatible with the DVR_Y you want to convert it to. And as soon as it attempts to access the internals of DVR_Y according to its datatype, it reaches badly into nirvana. So if you do all the typecasting safely behind the private interface of a class, so that nobody else can ever meddle with it, AND you are very, very careful when developing your class to never ever typecast the wrong integer into a specific DVR (or Queue, Notifier, Semaphore, etc.), you are fine, but there is no LabVIEW holding your hand here to prevent you from shooting yourself in the foot! (See the analogy sketched below.)

      The original idea comes from before there were DVRs and LabVIEW GOOP. In LabVIEW 6 or 7, someone developed a first version of LVOOP, which used a private "refnum" repository implemented in the LabVIEW kernel and accessed through private VIs using a Call Library Node. It was basically like a DVR that was also the class wire. Obviously dynamic dispatch was not something that would work seamlessly, but had to be programmed explicitly by the class developer, and hence was seldom used. The pointer (which was back then an int32, since all LabVIEW versions were 32 bit only) was then typecast into a datalog file refnum with a specific enum type for type safety, so that you had a typesafe refnum wire to carry around as the class wire. Some of the DVR based class design (reference based classes) in the Endevo/NI GOOP Framework is still reminiscent of this original idea, but now using proper LabVIEW classes as the class wire. An additional bonus is that this buys them automatic dynamic dispatch capability.
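
      To make the hazard concrete, here is a loose C analogy, not LabVIEW's actual implementation, of a handle lookup that can validate the type-pool bits but not the payload layout behind the handle (all names and bit values are made up):

        #include <stdint.h>
        #include <stddef.h>

        #define TYPE_MASK  0xFF000000u   /* hypothetical type-pool bits */
        #define TYPE_DVR_X 0x5A000000u
        #define TYPE_DVR_Y 0x3C000000u

        typedef struct { void *payload; } RefEntry;

        /* Returns NULL when the type bits mismatch; a stale or recycled
           index with matching bits still slips through, and dereferencing
           its payload with the wrong datatype "reaches into nirvana". */
        void *LookupRef(RefEntry *pool, size_t poolSize,
                        uint32_t ref, uint32_t expectedType)
        {
            if ((ref & TYPE_MASK) != expectedType)
                return NULL;                 /* wrong pool: caught   */
            uint32_t index = ref & ~TYPE_MASK;
            if (index >= poolSize)
                return NULL;                 /* out of range: caught */
            return pool[index].payload;      /* layout: NOT checked  */
        }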
  13. My idea was a project provider for the specific file type, such that once you right-click on the package file you can "install" it either globally for the LabVIEW system in question or locally in a project specific location. The idea would be that the files are stored in a hidden location for the current LabVIEW version and bitness, and symlinks are then added in the necessary locations to make the package directories available to LabVIEW or the project in the correct places.
  14. I definitely like the idea of using symlinks to map specific libraries from a fixed location into LabVIEW installations and/or projects. The only problem I see with that is when you work with multiple LabVIEW versions: they will keep wanting to recompile the libraries, and if you don't write-protect them, you will sooner or later end up with "upgraded" libraries that then fail to load in another project that uses an older LabVIEW version. Other than that, it's probably one of the best solutions for managing libraries for different projects on demand. If you want to get really fancy, you could even create a project provider that lets you set up project specific library reference specifications, which can then be applied with a single right-click in the LabVIEW project.
  15. While I have seen some difficulties getting a shared library recognized on Linux (or NI Linux RT) in a way that it can be loaded in a LabVIEW application through the shared library node, I can't really imagine how the symlink on Linux could fail in itself if it points to a valid file, even if it is through intermediate symlinks. The Linux libc library handles symlink resolution transparently through the kernel, unless an application specifically uses the link aware variants of the API functions. E.g. calling stat() on a link will return the file information of the final target file, even across multiple symlinks, and if you want to know the details about the symlink itself you have to call lstat() instead, with lstat() returning the same as stat() for non-symlink file paths. I don't think LabVIEW does anything special when it is not specifically trying to find out whether a path really is a link, so it should not matter if a symlink is redirected over multiple symlinks. What could matter are the access rights of intermediate symlinks, which might break the ability to see the final file.
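
      A minimal C sketch of that stat()/lstat() difference (the path is just an example):

        #include <sys/stat.h>
        #include <stdio.h>

        /* stat() follows the link chain to the final target;
           lstat() reports on the link entry itself. */
        int main(void)
        {
            const char *path = "/usr/local/lib/liblvzip.so"; /* hypothetical */
            struct stat st, lst;
            if (stat(path, &st) == 0)
                printf("target size: %lld bytes\n", (long long)st.st_size);
            if (lstat(path, &lst) == 0)
                printf("is a symlink entry: %s\n",
                       S_ISLNK(lst.st_mode) ? "yes" : "no");
            return 0;
        }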
  16. This might be the incentive to finally finish the LVZIP library update. I have file helper code in there that supports symlinks for Linux and Windows (and likely Mac OS X, though I haven't really looked at that yet). The main reason was that I want to support the possibility of adding symlinks as real symlinks to the ZIP archive, rather than pulling in the original file as the LabVIEW file functions would do. There are some semantic challenges, such as what to do with symlinks that do not point to other files inside the archive but to something entirely different, and how to reliably detect this without a time consuming prescan of the entire hierarchy. But the symlink functionality to detect whether a path is a symlink, to read the target it points to, and to create a symlink is basically present in the shared library component. Under Windows there is the additional difficulty of how to handle shortcuts. Currently I have some code in there that simply tries to treat them like symlinks, but that is not really correct, as a shortcut is a file in its own right, with additional resources such as icons that get lost this way; on the other hand, it is the only way to keep it platform independent, short of dropping shortcut support entirely.
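
      The core of such a "read the link target" helper might look roughly like this in C, a sketch under POSIX assumptions rather than the actual LVZIP code:

        #include <unistd.h>
        #include <sys/stat.h>
        #include <stdio.h>

        /* Read a symlink's target so it can be stored as a link entry in
           an archive instead of pulling in the file it points to. */
        int read_link_target(const char *path, char *buf, size_t bufLen)
        {
            struct stat st;
            if (lstat(path, &st) != 0 || !S_ISLNK(st.st_mode))
                return -1;                /* not a symlink */
            ssize_t len = readlink(path, buf, bufLen - 1);
            if (len < 0)
                return -1;
            buf[len] = '\0';              /* readlink does not terminate */
            return 0;
        }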
  17. Well, what I have posted is really mostly just for installing libraries, and geared towards application specific installs rather than project specific ones. So it is not quite the tool you guys are looking for, for sure. It was developed at a time when LabVIEW would not have projects for around 10 more years, so that is at least an excuse. :-) I was toying at some point with the idea of simply calling the DEAB tool after massaging the various paths and settings into the necessary format to perform the necessary steps, but the fact that DEAB really predates many of the newer features after LabVIEW 8.0 made me hesitate, and eventually other things got in the way. As to the VIPM format, it's pretty much the same as OGPM, with a few additions for things like the license manager and a few other niceties. At least a few of them I even added to the OGPM version I posted. As it is an INI file, it is really trivial to add more things; only the installer component needs to support them too, and I really did not feel like building that part as well. I only briefly looked at NIPM. I was expecting to see something that supports the current NI installers for Windows on all other platforms, having been led to believe that it should eventually be used for that, as well as a full featured VIPM replacement on steroids. I found it to be a half baked replacement for the installers, and far too lacking to support even basic VIPM features. Of course, being fully aware that creating a good package manager and installer is a project that many have attempted and only few have really gotten to a decently working state, and I'm not just talking LabVIEW here, I wasn't really too surprised. I would be interested in having something better, but I have to admit that my motivation to create such a thing is not high enough to do the work for it.
  18. Ohh, I'm not saying that the pre/post build steps should be removed. Just that it shouldn't be necessary to use them to install 32 or 64 bit shared libraries depending on which LabVIEW version you install into.
  19. Well, no menu editor in what I have. I find menus quite important to use, but way too infrequent to create, to have felt I would want to spend lots of time on this. I did add a feature to allow for 32 bit and 64 bit shared library support, but as I actually rely on VIPM to be able to install the created package, I couldn't quite go all the way as I had wanted. The VIPM format is missing a specific flag that could mark files to be used in a specific bitness mode, and the platform string as used in OpenG, and hence VIPM, is too much massaged to allow that distinction. My suggestion to add such a feature to VIPM was not really considered, as it was deemed too obscure. Instead I had to rely on a post-install VI that renames the according shared library to the required name and deletes the unneeded one.

      Basically, the OpenG platform consisted of several tools. First there was the Package Builder, which as already mentioned is a glorified ZIP file builder. OpenG packages, and VIPM files too, are in fact simply ZIP files with one or more specific files in the root: one of them is usually an INI type spec file specifying the contents of the package and the rules to use when installing it, another is the package bitmap, plus an optional license text. The rest are folders corresponding to file groups that need to be installed to the target system according to the rules in the spec file. The VIPM spec file contains a few additional optional items to support some of the later VIPM features.

      Then there was an OpenG Application Builder project that eventually morphed into the DEAB tool, which was the OpenG variant of an application builder. It also allows renaming of file hierarchies, to create source distributions with file prefix or postfix additions, before NI really supported such a feature. This was the tool used by the OpenG libraries to namespace the library files by adding the __ogtk postfix to each filename. Creating an OpenG package simply meant first running the DEAB tool to rename/relink the source VIs into a new location that was meant to mirror the final VI hierarchy on the target system, then running the package builder task to turn this into an OpenG package. It seems the OpenG SVN repository contains a new sub project, Builder, which is another wrapper that tries to automate the execution of the DEAB task and the package builder in a very simple UI.

      My attempt was basically to extend the old Package Builder UI with some additional configuration options that could be used to create the necessary data structures to run the DEAB tool programmatically before invoking the package step. But in comparison to VIPM this UI is more involved and for many users probably less intuitive. The main GUI is found in "source/GUI/OpenG_Package_Builder.vi". OpenG Package Builder V2.0.zip
  20. It's nothing too fancy. I added a few things to the UI to support more features, and in preparation for adding the VI renaming/relinking step that was previously done separately in the OpenG DEAB tool before calling the OpenG package builder. But I never got around to really adding the DEAB part into the package builder. It's extra difficult because the DEAB component doesn't currently support newer features like lvclass and lvlib at all, and of course no malleable VIs etc. I can post what I have somewhere, but don't get too excited.
  21. Realistically, that might mean no new LabVIEW features for many years! So I doubt that such a rigorous specification will ever be seriously considered.
  22. I don't think it can be LAVA's responsibility to verify license infringement, copyright, and whatever other issues of such code. Obviously the code review process should catch glaring problems, but it cannot be the idea that LAVA could or should be responsible for guaranteeing that no such issues exist. If we required that, we could stop right now, because we can't do it, even if we hired several full time lawyers. By submitting code for the CR, each submitter basically needs to state that they are providing the code according to the license requirements they selected, and that to the best of their knowledge there are no copyright or license issues. Anything more than that is simply not workable.

      As to providing a LAVA repository, I don't think we would need a special OS/hardware system, other than some internet accessible FTP/HTTP storage, which I'm sure the existing LAVA server could provide, except perhaps for the additional server space that might require. I'm not sure about the exact internal workings of VIPM, and it seems that even packages released on the OpenG SourceForge site aren't automatically picked up, though it shouldn't be very difficult to do that. (Yes, I did a package update in the last year and posted it to the OpenG repository, but it still doesn't show up in VIPM.) It seems that even for that repository there is some manual step involved, by JKI, to add a package to some list that VIPM can then see; or maybe I did something wrong when posting the package on SourceForge. If JKI would then add this LAVA repository as another default repository, besides the Tools Network and OpenG SourceForge repositories, we would be all set, even without owning a Pro license of VIPM. And I might even be persuaded to get our company to commit to a few VIPM Pro licenses then.
  23. Yes, and therefore it is courtesy to mention that you posted elsewhere as well. Not everyone frequents both sites, so it is helpful if they can look at the other post and see whether the answer they have may already have been given there. Also, in the future, when someone has the same problem, they may come across one post and only see what was answered here, while there might have been other helpful answers in the other thread. You may feel that it is redundant to mention this, and maybe counterproductive, as people may be less inclined to answer your post if they know you posted elsewhere too. But believe me, the people who are likely to answer you, either here or on the NI forum, are not newbies who dismiss a post because it has been posted elsewhere; they will want to look at the other thread, and if the information they have is not mentioned there, they will gladly share it with you. Some of the really valuable posters, however, might get annoyed at seeing the same question on different boards without any mention of crossposting, and might decide not to bother spending time on your problem even if they have helpful information. After all, we all do this on a volunteer basis, with no obligation other than the goodwill to share the information we know with anyone on these fora. If you don't get the answer you are looking for, it won't be because you mentioned that you had crossposted, but because nobody else who frequents any of these boards has the answer, or is not allowed to share it because of some NDA.

      And yes, there are manufacturers who believe that their product is so super special that nobody should be allowed to know how to communicate with it, and if they ever share any information with some specially valued customer, they need to hold them at gunpoint with a lengthy legal document about sharing that super confidential information. Generally I think it hurts them more than anyone else, but it is their choice. They seem to be based in India and don't mention any international sales or technical support offices, so it may indeed be hard to get any information from them if you are not located in India yourself. The site also looks kind of last century in some ways, and they seem to be secretive even about pricing information, so I'm not sure it is the best supplier for such hardware. We have generally had quite good experience with Watlow and Eurotherm devices, and they are not difficult about sharing technical information about their products.
  24. It may be valid. The math function may behave badly on wrong or empty data sets. Sure, you could check the actual input arrays for valid sizes etc., and a solid algorithm definitely should do that anyway, but the error in is an immediate stopgap against trying to process data that may not even have been generated, because of an error upstream. I understand that not every function needs an error in/error out, but I prefer a superfluous error cluster on a VI, even one that may never really be used for anything but dataflow dependency, over none at all; otherwise, when revising the function later and having to add one anyhow, I have to go everywhere and rewire the function.
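
      A rough C analogy of that error-in convention: bail out immediately if an upstream error is set, so the function never touches data that may never have been produced (the error code used here is hypothetical):

        #include <stddef.h>

        typedef struct { int status; int code; } ErrorCluster;

        double mean(const double *data, size_t n, ErrorCluster *err)
        {
            if (err && err->status)       /* upstream error: do nothing */
                return 0.0;
            if (!data || n == 0) {        /* still validate inputs */
                if (err) { err->status = 1; err->code = -1; /* hypothetical */ }
                return 0.0;
            }
            double sum = 0.0;
            for (size_t i = 0; i < n; i++)
                sum += data[i];
            return sum / (double)n;
        }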
  25. If you happen to use these VIs in LabVIEW 64 bit, you will certainly have to review them and make sure that all Windows handle datatypes, such as the HWND used in those functions, are configured as pointer-sized integers rather than 32 bit integers (and change the corresponding LabVIEW controls to be of type 64 bit integer).
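
      In C terms, the exported function should take a pointer-sized integer, so the same prototype works in 32 and 64 bit LabVIEW when the Call Library Node parameter is configured as a pointer-sized integer. A small sketch; the function name is made up:

        #include <windows.h>
        #include <stdint.h>

        /* HWND is a pointer; taking it as intptr_t avoids truncating
           the handle in 64 bit LabVIEW. */
        int32_t BringWindowToFront(intptr_t hwnd)
        {
            return SetForegroundWindow((HWND)hwnd) ? 1 : 0;
        }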