Everything posted by Rolf Kalbermatter

  1. From my limited experience with MIPI this is very critical. The differential serial lanes of a MIPI connection all need to be within a few mm of the same length to guarantee proper signal transmission. That means that if you design a PCB you generally have to use meander microstrips to make sure that every connection has exactly the same length. In addition, the MIPI standard is designed as a chip-to-chip interface and is not meant to be routed through long cables. The D-PHY specification defines the maximum lane flight time as 2 ns, so on an FR-4 PCB using matched microstrip lines you get at most 25 to 30 cm of trace length (a rough calculation follows below). The typical FPC flex cable used to connect a camera module to a board has similar electrical characteristics and is therefore not that much different. That budget includes the traces from the chip to the FPC cable connector, the FPC cable itself and the traces from the FPC connector to the framegrabber chip.
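
     To put a rough number on that 2 ns budget, here is a back-of-envelope sketch in C. The effective dielectric constant is an assumption (around 3 for a typical FR-4 microstrip); the real value depends on the stackup and trace geometry.

          #include <math.h>
          #include <stdio.h>

          int main(void)
          {
              /* Assumed effective dielectric constant of an FR-4 microstrip;
                 the real value depends on stackup and trace geometry. */
              double eps_eff  = 3.0;
              double c        = 3.0e8;              /* speed of light in vacuum, m/s */
              double v        = c / sqrt(eps_eff);  /* propagation speed on the trace, ~1.7e8 m/s */
              double t_flight = 2.0e-9;             /* D-PHY maximum lane flight time, s */

              /* ~35 cm of electrical length; after connectors, package routing and
                 skew margin that leaves roughly the 25 to 30 cm of trace quoted above. */
              printf("max electrical length ~ %.0f cm\n", v * t_flight * 100.0);
              return 0;
          }
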
  2. Well, you can offer them to use a hammer 😀. It's about as sensible as that rule 😉. But there might be rules about bringing in a hammer too. But honestly, Linux has everything on board and even a slimmed down NI Linux RT should still know the dd command line tool. There you can do:

     Filling the disk with all zeros (this may take a while, as it is setting every bit of data to 0):

          dd if=/dev/zero of=/dev/sdX bs=1M

     (replace X with the target drive letter)

     If you are wiping your hard drive for security, you should fill it with random data rather than zeros (this is going to take even longer than the first example):

          dd if=/dev/urandom of=/dev/sdX bs=1M

     (replace X with the target drive letter)

     The reason one should fill with urandom when security is required is explained here: http://www.marksanborn.net/howto/wiping-a-hard-drive-with-dd/ If all else fails you could create a mini USB stick with a minimal Unix install and the dd tool, get it through the security check at said place (hey, it does not contain any Windows 😀), boot from it and wipe the drive from there. Needless to say, such a controller won't boot up after this anymore and you will need a bootable USB drive to bring it back to life again, but that seems to be the price as they want to be sure no secret can sneak out the door with any device. Edit: Oh, and you will of course need to take a keyboard with you. Somehow you will have to enter those commands on the shell. I haven't tried it, but I would expect that if you plug a USB keyboard into the controller, you will actually be able to use it. The next challenge will be to have some form of display. A VT100 terminal may come in handy, as you can connect it to the RS-232 port on the controller if it has one. Otherwise you have to log in blind (good luck) and enter those commands blind too (even more good luck with that). All in all I think the hammer method is still the best one: simply buy a new controller 😀. A headless controller really is headless, and to control it you need at least some kind of terminal with a display and a keyboard. That could be a very simple Linux device, but I fail to see how a Linux computer would be more secure to take in than a Windows one. With these rules, just tell them to get the controller out there themselves!
  3. You can do the same with any of the other OpenG libraries. Yes, they are in a Subversion repository, so not the exact same experience as with Git repos, but similar enough.
  4. I could see how this might have been confusing. Enterprise has for a long time seemed to be even more than Professional, especially if you look at Microsoft. So it is kind of weird that Enterprise would add the feature to subscribe to custom repositories that you need Professional to create. Unless Enterprise was meant as a kind of site license that everyone in a company could install, but was it that?
  5. I think that's a short-term fix but not really something that should be part of every new OpenG library release cycle, nor of a possible future LavaG library release.
  6. I've made a few small changes in the past to the string package in the repository based on bug reports I saw:

     Revision: 1494
     Author: labviewer
     Date: Friday, 23 May 2014 23:07:42
     Message: Attempt to fix the String to 1D Array.vi function when ignore duplicate delimiters is true.
     ----
     Modified: /trunk/string/source/library/String to 1D Array.vi
     Modified: /trunk/string/tests/TEST - String to 1D Array.vi

     But I'm not familiar with the exact release procedure. When I did not see any more action, I eventually built a new VIP file oglib_string-4.2.0.13.vip and uploaded it to the SourceForge file store in 2017, but that apparently is not enough to make it appear in VIPM, unfortunately.
  7. I think it would be possible for JKI to add, besides the JKI package repository and the NI Tools Network repository, one more well-known repository such as a LAVA URL to the free edition. That would not be a big deal to add, I'm sure. Another option I would definitely consider is a paid version of VIPM that costs a small fee but then allows adding custom repositories to that client. You would still need a full Pro license to create new repositories, but not to access them. The current license model is impossible to defend in a company such as ours, where you not only need a Pro license to create your own company-wide repository, but anybody wanting to access that repository needs that license too in order to add it to the locations VIPM will check for packages. So something like the free community edition as now, with an additional LAVA repository added. Then an intermediate Developer Client edition that allows subscribing to custom repositories, which everybody in a company could use. And last but not least the full-featured VIPM Pro as it is now. The old OpenG package manager could more or less do the first two editions already; it is just outdated and doesn't properly handle the new LabVIEW file types that came out with LabVIEW 8 and later. EDIT: About 2 years ago we were evaluating in our company how to manage internal libraries and their distribution. VIPM was considered too, but the fact that every developer would need a Pro license killed that idea very quickly. If there had been a Developer Client edition that normal LabVIEW developers could use, and that could have been purchased as a site license for a feasible fee, I'm pretty sure we would have been able to get that solution approved together with a few Pro licenses.
  8. In my LabVIEW 2016, in which I'm still mostly working, Clear Error.vi is set as a clone and to be inlined!
  9. I prefer to only wire one of them and NOT enable automatic error handling in the VI. My default for automatic error handling is "disable". I find it a terrible idea to have random error popups show up when you want to ignore some errors and don't want to place thousands of Clear Error.vi instances all over the place.
  10. Should I contact the Software Freedom Conservancy? Considering the recent discussions about GNU licensing in the kernel and the Linux maintainers preferring to help people comply rather than dragging them to court, I would say JKI is fairly safe. I totally agree with their sentiment. (L)GPL is not a means to sue people but a means to ensure software remains open source rather than being inappropriately claimed by big companies for their own evil plans.
  11. I see your point about the lack of proper documentation of the LGPL used for the shared library part. It is contained in the source, but as I have never been involved with the actual packaging itself so far, it somehow got swept under the rug. I will make sure to remedy that in the next release by adding a LICENSE file to the package that states the actual situation. And I will also add a link to the OpenG SourceForge project site to that file, so anyone adding that file to their application will then be in full compliance. Including the entire build environment for all possible targets (11 last time I counted) really is not an option. That simply would totally explode the package. I don't even have them all on one machine. Some are in virtual machines that I only start up specifically for compiling and testing these libraries.
  12. No. All my submissions to OpenG were originally licensed as LGPL, just as everything else in OpenG. Jim Kring wanted at some point to relicense all OpenG libraries to BSD, because of community feedback and maybe also because he already envisioned something like VIPM to be created. He contacted every submitter and we discussed it, and I recognized the difficulties of LabVIEW VIs having to be built into an application in such a way that they could be dynamically linked to the application in order to comply with the LGPL, if you did not want to release the application itself as (L)GPL. As a result I decided to relicense the VI part of all libraries under the BSD license, just as all the other submitters agreed to. Since the shared libraries in those projects (there are other libraries such as the Port IO, LabPython and Pipe IO) are all naturally dynamically linked to any application using them, it did not seem necessary to change that license too, so I left that as LGPL. And as long as you do not make any modifications to the shared library, the fact that the source code is freely downloadable from the SourceForge OpenG project site really should satisfy any obligation any user of that library might have to make the source code available to any other potential user. I consider that a safer long-term option than what many Alliance Members could guarantee if they hosted the source code themselves somewhere, since Alliance Members have been known to go out of business in the past, and then nobody will honor source code requests from potential users of such an application. And yes, I consider that part of the LGPL in fact the real benefit: the fact that any user of an application with that library in there has the right to see the source code for that library, together with the fact that if someone modifies the shared library and publishes an application with it, he is obligated to make the modifications available to the community. If someone does not like that, the solution is simple: don't modify the library! If someone wants to use the LGPL as a reason not to use a software library because of perceived problems complying with it, he is free to do so. I don't see the problem and don't intend to change my opinion about that. In my opinion it is more often than not just a legalese excuse for the "not invented here" syndrome or some variant of that. My understanding is that if you add a statement to the effect that the application includes OpenG libraries, along with a link to the SourceForge project site, either in a readme file in the installation directory, the application documentation or the About dialog of your application, you really have done everything necessary to comply with the LGPL license for the shared library part. And that statement needs to be there somewhere even for the BSD-licensed code! Even if I wanted to, which I don't, I cannot remove the code from the SourceForge site anymore. I could delete it from the repository, yes, but it is still there in the form of the entire Subversion history for anyone to grab and publish somewhere else if he or she decided to. If there are specific situations where someone wants to discuss a different license for any of these libraries with me, for whatever reason, I'm willing to discuss that. But as a community submission it will remain as it is for the foreseeable future.
  13. The main reason that I want to support symlinks comes from the fact that under Linux, when installing shared libraries, you normally create various symlinks all pointing to the actual shared library in order to implement a versioning scheme (see the sketch below). Without support for symlinks in the package itself you have to do some involved Münchhausen trick, using post-install hooks to create them through the command line, which is also OS and sometimes even OS version specific. Also, a shared library on OS X is in fact a whole directory structure with various symlinks pointing to each other for the resource and version information and the actual binary object file. Without support for this you have to zip the shared library package up, add the zip file to the OpenG package and on install unzip it again to the right location. Support for links under Windows was mostly just a fun-to-have addition to make the symlink functionality work on all LabVIEW platforms; the practical benefit under Windows is fairly limited in terms of support for install packages. And in hindsight the effort to implement it under Windows was pretty heavy. But it does allow testing the library under Windows as well without special exceptions.
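
     As a minimal sketch of what that versioning scheme looks like: the library name below is a placeholder, but the pattern (one real file plus two symlinks) is the usual Linux convention a package install has to reproduce.

          #include <unistd.h>   /* symlink() */

          /* Typical layout after installing a versioned shared library:
           *   libfoo.so.1.2.3                    the actual binary
           *   libfoo.so.1   -> libfoo.so.1.2.3   runtime name (SONAME)
           *   libfoo.so     -> libfoo.so.1       link-time name
           * "libfoo" is a hypothetical name used only for illustration. */
          int create_version_links(void)
          {
              if (symlink("libfoo.so.1.2.3", "libfoo.so.1") != 0)
                  return -1;
              if (symlink("libfoo.so.1", "libfoo.so") != 0)
                  return -1;
              return 0;
          }
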
  14. Actually only if you delete the last occurrence of a hardlink pointing to that file. The file object itself maintains information about how many hardlinks exist for it. The built-in LabVIEW file APIs also simply use the Win32 APIs, so saying that Windows APIs are link-aware and LabVIEW APIs are not is a bit misleading. However, LabVIEW path handling has some particularities in that it always resolves the path to the actual file, even if the original path was pointing to any kind of link, including the good old Windows 95 shortcut, and even over multiple redirections. The problem lies in the fact that LabVIEW functions always operate on the actual file or directory entry rather than the link or shortcut. So I had to also implement file directory listing and other file functions in order to be able to operate on the actual link file rather than its target (a minimal Win32 sketch of those two points follows below).
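
     For illustration, a small Win32 sketch of the two points above: asking for the link entry itself instead of its target, and reading the hardlink count that the file object maintains. The path is hypothetical and error handling is mostly omitted; this is not the actual library code.

          #include <windows.h>
          #include <stdio.h>

          int main(void)
          {
              /* FILE_FLAG_OPEN_REPARSE_POINT opens the link entry itself instead of
                 following it; FILE_FLAG_BACKUP_SEMANTICS is needed to open directories. */
              HANDLE h = CreateFileW(L"C:\\some\\path", 0,
                                     FILE_SHARE_READ | FILE_SHARE_WRITE | FILE_SHARE_DELETE,
                                     NULL, OPEN_EXISTING,
                                     FILE_FLAG_BACKUP_SEMANTICS | FILE_FLAG_OPEN_REPARSE_POINT,
                                     NULL);
              if (h == INVALID_HANDLE_VALUE)
                  return 1;

              BY_HANDLE_FILE_INFORMATION info;
              if (GetFileInformationByHandle(h, &info))
              {
                  /* nNumberOfLinks is the hardlink count the file object keeps track of. */
                  printf("hardlinks: %lu, is reparse point: %d\n",
                         info.nNumberOfLinks,
                         (info.dwFileAttributes & FILE_ATTRIBUTE_REPARSE_POINT) != 0);
              }
              CloseHandle(h);
              return 0;
          }
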
  15. Actually, creating symlinks is, as of recent Windows 10 builds, no longer a privileged operation. Then you have Junction Points, Hardlinks and Softlinks (symlinks). In typical Microsoft tradition they made it unnecessarily complicated in that one only works for directories while the other only works for files. There are some implications. Symlinks, like under Unix, are not verified to point to a valid location, so they can be created for non-existing files and directories, and they end up pointing into nirvana when the actual file or directory is deleted. Junction Points only work for directories, must be absolute and must point to a local volume; they can specifically not point to remote locations. Hardlinks only work for files and are simply additional directory entries pointing to the actual file content in the filesystem. In fact every directory entry on an NTFS volume is a hardlink, so each file has at least one hardlink entry on an NTFS volume. Once the last hardlink for a file is deleted, the file itself gets deleted from the filesystem. Personally I have for now decided to support reading junction points and symlinks, storing them as a link entry in the ZIP file, and on restore always creating symlinks, at least under Windows (see the sketch below). Hardlinks are excluded because every file entry that is not one of the above reparse point types is in fact a hardlink, so there is no way to decide whether a file that has multiple hardlinks pointing to it (which would be possible to detect) should be treated as a link or as the actual file.
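
     A small sketch of the creation APIs involved, just to make the file/directory split concrete. The paths are made up; note that the SYMBOLIC_LINK_FLAG_ALLOW_UNPRIVILEGED_CREATE flag only works on Windows 10 builds with Developer Mode enabled, otherwise symlink creation still requires elevation.

          #include <windows.h>

          int main(void)
          {
              /* Symlinks work for both files and directories, but directories need an
                 extra flag. The unprivileged-create flag needs Developer Mode enabled. */
              CreateSymbolicLinkW(L"C:\\links\\data.txt", L"C:\\data\\data.txt",
                                  SYMBOLIC_LINK_FLAG_ALLOW_UNPRIVILEGED_CREATE);
              CreateSymbolicLinkW(L"C:\\links\\data", L"C:\\data",
                                  SYMBOLIC_LINK_FLAG_DIRECTORY |
                                  SYMBOLIC_LINK_FLAG_ALLOW_UNPRIVILEGED_CREATE);

              /* Junction points only work for directories and are created through the
                 reparse point ioctls (there is no single convenience API), so they are
                 omitted here. */

              /* Hardlinks only work for files; both names must be on the same NTFS volume. */
              CreateHardLinkW(L"C:\\data\\second-name.txt", L"C:\\data\\data.txt", NULL);
              return 0;
          }
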
  16. It's simply an update to the OpenG ZIP library. As such, the LabVIEW VIs are BSD licensed and the shared library code is LGPL. That shouldn't matter for anyone unless you want to modify the shared library yourself. If someone has a problem with that, please elaborate what the problem is, but don't tell me "I don't like anything that sounds like GPL". The shared library really is not meant to be modified by others, and in the 20 years that the library has existed I have not received one single patch, fix or other suggestion for that part. If your lawyer claims that an LGPL-licensed shared library component is not possible in a commercial app or whatever, let them think again. They are talking nonsense.
  17. I think that sums it up more or less. I would NOT bother about license installation at this time. The NI third-party license manager app is pretty outdated and Windows only, so not a good solution anyway. And anyone interested in having that working can always add some batch file script or custom post-install hook to do it if needed. There definitely should be a way to support custom repositories in some form, preferably through some kind of plugin interface to make it flexible for different services. FYI: I'm making some progress on the new ZIP library with transparent support for (sym)links. I still need to figure out some issues, but it is starting to work. Soon there will need to be some testing done to make sure it works on more than just my own systems :-).
  18. I almost always have to do that. Except for this one, of course. But indeed, if it takes longer to edit and/or you go to other tabs in Chrome, it will not post.
  19. It does not crash if you typecast the integer to the right refnum (be it a DVR or any other LabVIEW refnum). But it can crash badly if you ever convert DVR_A into an int and then try to convert it back into DVR_B or any other LabVIEW refnum. The int typecast removes any type safety and LabVIEW has no foolproof way to detect in the typecast-to-refnum conversion that everything is alright. LabVIEW does have some safety built in there, in that part of the uint32 value of any refnum is reserved as a random identifier for the refnum type pool it resides in (so a file refnum has another bit pattern for those bits than a network refnum), but it cannot safely verify that the int coming from a DVR_X is fully compatible with the DVR_Y you want to convert it to. And then, as soon as it attempts to access the internals of DVR_Y according to its datatype, it badly reaches into nirvana. So if you do the typecasting all safely behind the private interface of a class so that nobody else can ever meddle with it, AND you are very, very careful when developing your class to never ever typecast the wrong integer into a specific DVR (or Queue, Notifier, Semaphore, etc.), you are fine, but there is no LabVIEW holding your hand here to prevent you from shooting yourself in the foot! The original idea comes from before there were DVRs and LabVIEW GOOP. In LabVIEW 6 or 7 someone developed a first version of LVOOP, which used a private "refnum" repository implemented in the LabVIEW kernel and accessed through private VIs using a Call Library Node. It was basically like a DVR that was also the class wire. Obviously dynamic dispatch was not something that would work seamlessly but had to be programmed explicitly by the class developer, and hence it was seldom used. The pointer (which was back then an int32, since all LabVIEW versions were 32-bit only) was then typecast into a datalog file refnum with a specific enum type for type safety, so that you had a typesafe refnum wire to carry around as the class wire. Some of the DVR-based class design (reference-based classes) in the Endevo/NI GOOP Framework is still reminiscent of this original idea, but now using proper LabVIEW classes as the class wire. An additional bonus is that this buys them automatic dynamic dispatch capability.
  20. My idea was a project provider for the specific file type, such that once you right-click on the package file you can "install" it either globally for the LabVIEW system in question or locally in a project-specific location. The idea would be that the files are stored in a hidden location for the current LabVIEW version and bitness, and then symlinks are added in the necessary locations to make the package directories available to the LabVIEW installation or project in the correct places.
  21. I definitely like the idea of using symlinks to map specific libraries from a fixed location into LabVIEW installations and/or projects. The only problem I see with that is when you work with multiple LabVIEW versions: they will keep wanting to recompile the libraries, and if you don't write-protect them, you will sooner or later end up with "upgraded" libraries that then fail to load in another project that uses an older LabVIEW version. Other than that it's probably one of the best solutions for managing libraries for different projects on demand. If you want to get really fancy you could even create a project provider that allows setting up project-specific library reference specifications that can then be applied with a single right-click in the LabVIEW project.
  22. While I have seen some difficulties getting a shared library recognized on Linux (or NI Linux RT) in a way that it could be loaded in a LabVIEW application through the shared library node, I can't really imagine how the symlink on Linux could fail in itself if it points to a valid file, even if it does so through intermediate symlinks. The Linux libc library handles the symlink resolution transparently through the kernel unless an application specifically uses the link-aware variants of the API functions. E.g. calling stat() on a link will return the file information of the final target file even across multiple symlinks, and if you want to know the details about the symlink itself you have to call lstat() instead, with lstat() returning the same as stat() for non-symlink file paths (a small sketch below illustrates the difference). I don't think LabVIEW does anything special when it is not specifically trying to find out whether a path is really a link, so it should not matter if a symlink is redirected over multiple symlinks. What could matter are the access rights on intermediate symlinks, which might break the ability to see the final file.
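
     To make the stat()/lstat() distinction concrete, a small sketch (the path name is just an example, assumed to be a symlink to a regular file):

          #include <sys/stat.h>
          #include <stdio.h>

          /* For a symlink, stat() describes the final target while lstat() describes
             the link entry itself; for a normal file both return the same information. */
          int main(void)
          {
              struct stat st, lst;
              const char *path = "/tmp/mylink";   /* example path, assumed to be a symlink */

              if (stat(path, &st) == 0)
                  printf("stat : link=%d size=%lld\n",
                         S_ISLNK(st.st_mode), (long long)st.st_size);
              if (lstat(path, &lst) == 0)
                  printf("lstat: link=%d size=%lld\n",   /* size = length of the target path */
                         S_ISLNK(lst.st_mode), (long long)lst.st_size);
              return 0;
          }
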
  23. Might be the incentive to finally finish the LVZIP library update. I have file helper code in there that supports symlinks for Linux and Windows (and likely Mac OS X, though I haven't really looked at that yet). The main reason was that I want to support the possibility of adding symlinks as real symlinks into the ZIP archive, rather than pulling in the original file as the LabVIEW file functions will do. There are some semantic challenges, such as what to do with symlinks that are not pointing to other files inside the archive but to something entirely different, and how to reliably detect this without a time-consuming prescan of the entire hierarchy. But the symlink functionality to detect whether a path is a symlink, to read the target it points to and to create a symlink is basically present in the shared library component (something along the lines of the sketch below). Under Windows there is the additional difficulty of how to handle shortcuts. Currently I have some code in there that simply tries to treat them like symlinks, but that is not really correct, as a shortcut is a file in its own right with additional resources such as icons etc. that get lost this way. On the other hand, it is the only way to keep it platform independent, short of dropping shortcut support entirely.
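
     Something along these lines is what the detect-and-read half of that functionality boils down to on the POSIX side; the function name and buffer handling here are illustrative only, not the actual LVZIP code.

          #include <sys/stat.h>
          #include <unistd.h>
          #include <stddef.h>

          /* Returns 1 and fills 'target' with the link destination if 'path' is a
             symlink, 0 otherwise. Illustrative only. */
          int read_link_target(const char *path, char *target, size_t len)
          {
              struct stat st;
              ssize_t n;

              if (len == 0 || lstat(path, &st) != 0 || !S_ISLNK(st.st_mode))
                  return 0;
              n = readlink(path, target, len - 1);   /* readlink() does not NUL-terminate */
              if (n < 0)
                  return 0;
              target[n] = '\0';
              return 1;
          }
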
  24. Well, what I have posted is really mostly just for installing libraries and is geared towards application-specific installs rather than project-specific ones. So it is certainly not quite the tool you guys are looking for. It was developed at a time when LabVIEW would not get projects for around 10 more years, so that is at least an excuse. :-) I was toying at some point with the idea of simply calling the DEAB tool after massaging the various paths and settings into the necessary format to perform the necessary steps, but the fact that DEAB really predates many of the newer features that came after LabVIEW 8.0 made me hesitate, and eventually other things got in the way. As to the VIPM format, it's pretty much the same as the OGPM format with a few additions for things like the license manager and a few other niceties. At least a few of them I even added to the OGPM version I posted. As it is an INI file it is really trivial to add more things; only the installer component needs to support them too, and I really did not feel like building that part as well. I only briefly looked at NIPM. I was expecting to see something that would do what the current NI installers do for Windows but for all other platforms as well, having been led to believe that it should eventually be used for that, plus a full-featured VIPM replacement on steroids. I found it to be a half-baked replacement for the installers and way too lacking to even support basic VIPM features. Of course, being fully aware that creating a good package manager and installer is a project that many have attempted and only few have really gotten to a decently working result, and I'm not just talking LabVIEW here, I wasn't really too surprised. I would be interested in having something better, but I have to admit that my motivation to create such a thing is not high enough to do the work for it.