
Rolf Kalbermatter


Everything posted by Rolf Kalbermatter

  1. I almost always have to do that. Except this one, of course. But indeed, if it takes longer to edit and/or you go to other tabs in Chrome, it will not post.
  2. It does not crash if you typecast the integer to the right refnum (be it DVR or any other LabVIEW refnum). But it can crash badly if you ever convert DVR_A into an int and then try to convert it back into DVR_B or any other LabVIEW refnum. The int typecast removes any type safety, and LabVIEW has no foolproof way to detect in the typecast-to-refnum conversion that everything is alright. LabVIEW does have some safety built in there, such that part of the uint32 value of any refnum is reserved as a random identifier for the refnum type pool it resides in (so a file refnum has a different bit pattern for those bits than a network refnum), but it cannot safely verify that the int coming from a DVR_X is fully compatible with the DVR_Y you want to convert it to. And as soon as it attempts to access the internals of DVR_Y according to its datatype, it badly reaches into nirvana. So if you do the typecasting all safely behind the private interface of a class, so that nobody else can ever meddle with it, AND you are very, very careful when developing your class to never ever typecast the wrong integer into a specific DVR (or Queue, Notifier, Semaphore, etc.), you are fine, but there is no LabVIEW holding your hand here to prevent you from shooting yourself in the foot!
The original idea comes from before there were DVRs and LabVIEW GOOP. In LabVIEW 6 or 7 someone developed a first version of LVOOP, which used a private "refnum" repository implemented in the LabVIEW kernel and accessed through private VIs using a Call Library Node. It was basically like a DVR that was also the class wire. Obviously dynamic dispatch was not something that would work seamlessly but had to be programmed explicitly by the class developer, and hence it was seldom used. The pointer (which was back then an int32, since all LabVIEW versions were 32-bit only) was then typecast into a datalog file refnum with a specific enum type for type safety, so that you had a typesafe refnum wire to carry around as the class wire.
Some of the DVR-based class design (reference-based classes) in the Endevo/NI GOOP Framework is still reminiscent of this original idea, but now using proper LabVIEW classes as the class wire. An additional bonus is that this buys them automatic dynamic dispatch capability.
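The pool-identifier check described above can be sketched conceptually. This is an illustration only: the bit layout, tag values, and function names below are invented for the sketch and are not LabVIEW's actual internals.

```python
# Conceptual sketch: a refnum packs a pool tag into some of its bits, so
# casting an int to the wrong *kind* of refnum can be caught, but two DVRs
# from the same pool cannot be told apart by the cast alone.
FILE_POOL, DVR_POOL = 0x5A, 0xC3   # hypothetical random pool identifiers

def make_refnum(pool_tag: int, index: int) -> int:
    """Pack a pool tag and a table index into one uint32-like handle."""
    return (pool_tag << 24) | (index & 0x00FFFFFF)

def cast_to_pool(value: int, expected_tag: int) -> int:
    """Typecast an int back to a refnum; only the pool tag is checkable."""
    if (value >> 24) != expected_tag:
        raise TypeError("refnum does not belong to this pool")
    return value

file_ref = make_refnum(FILE_POOL, 1)
dvr_a = make_refnum(DVR_POOL, 7)    # DVR of type A

# Cross-pool misuse is detectable...
try:
    cast_to_pool(file_ref, DVR_POOL)
    detected = False
except TypeError:
    detected = True

# ...but casting dvr_a where a type-B DVR is expected passes the check:
undetected = cast_to_pool(dvr_a, DVR_POOL)
```

This is why the typecast must stay hidden behind the private class interface: the pool tag catches a file refnum masquerading as a DVR, but not one DVR type masquerading as another.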
  3. My idea was a project provider for the specific file type, such that once you right-click the package file you can "install" it either globally for the LabVIEW system in question or locally in a project-specific location. The idea would be that the files are stored in a hidden location for the current LabVIEW version and bitness, and then symlinks are added in the necessary locations to make the package directories available to the LabVIEW installation or project in the correct locations.
  4. I definitely like the idea of using symlinks to map specific libraries from a fixed location into LabVIEW installations and/or projects. The only problem I see with that is working with multiple LabVIEW versions: they will keep wanting to recompile the libraries, and if you don't write-protect them you will sooner or later end up with "upgraded" libraries that then fail to load into another project that uses an older LabVIEW version. Other than that, it's probably one of the best solutions for managing libraries for different projects on demand. If you want to get really fancy you could even create a project provider that allows you to set up project-specific library reference specifications that can then be applied with a single right-click in the LabVIEW project.
  5. While I have seen some difficulties getting a shared library recognized on Linux (or NI Linux RT) in a way that it could be loaded into a LabVIEW application through the shared library node, I can't really imagine how a symlink on Linux could fail in itself if it points to a valid file, even if it goes through intermediate symlinks. The Linux libc library handles symlink resolution transparently through the kernel unless an application specifically uses the link-aware variants of the API functions. For example, calling stat() on a link will return the file information of the final target file even across multiple symlinks; if you want to know the details about the symlink itself you have to call lstat() instead, with lstat() returning the same as stat() for non-symlink file paths. I don't think LabVIEW does anything special when it is not specifically trying to find out if a path is really a link, so it should not matter if a symlink is redirected over multiple symlinks. What could matter are the access rights of intermediate symlinks, which might break the ability to see the final file.
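Python's os module wraps the same libc calls, so the stat()/lstat() distinction is easy to demonstrate (a minimal sketch, assuming a POSIX system; the file names are made up):

```python
import os
import stat
import tempfile

# os.stat() follows symlinks like C stat(); os.lstat() inspects the link
# entry itself like C lstat(), mirroring the libc behaviour described above.
with tempfile.TemporaryDirectory() as d:
    target = os.path.join(d, "target.txt")
    link = os.path.join(d, "link")
    with open(target, "w") as f:
        f.write("hello")
    os.symlink(target, link)

    followed = os.stat(link)    # resolves the link: info about target.txt
    itself = os.lstat(link)     # info about the symlink entry itself

    same_file = followed.st_ino == os.stat(target).st_ino
    link_is_link = stat.S_ISLNK(itself.st_mode)
    followed_is_link = stat.S_ISLNK(followed.st_mode)
```

Here `same_file` is true and `followed_is_link` is false: the kernel resolved the link transparently, exactly as described for LabVIEW's normal file access.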
  6. Might be the incentive to finally finish the LVZIP library update. I have file helper code in there that supports symlinks on Linux and Windows (and likely Mac OS X, though I haven't really looked at that yet). The main reason was that I want to support the possibility of adding symlinks as real symlinks to the ZIP archive, rather than pulling in the original file as the LabVIEW file functions will do. There are some semantic challenges, such as what to do with symlinks that do not point to other files inside the archive but to something entirely different, and how to reliably detect this without a time-consuming pre-scan of the entire hierarchy. But the symlink functionality to detect if a path is a symlink, to read the target it points to, and to create a symlink is basically present in the shared library component. Under Windows there is the additional difficulty of how to handle shortcuts. Currently I have some code in there that simply tries to treat them like symlinks, but that is not really correct, as a shortcut is a file in its own right with additional resources such as icons etc. that get lost in this way; on the other hand, it is the only way to keep it platform independent, short of dropping shortcut support entirely.
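As a sketch of what storing a symlink as a symlink in a ZIP archive amounts to, here is the common Info-ZIP convention in Python (the entry name and target are illustrative; LVZIP's actual implementation may differ):

```python
import io
import stat
import zipfile

# Store a symlink *as a symlink*: the link target string becomes the entry
# data, and the Unix mode bits (S_IFLNK) go into the high word of
# external_attr, as Info-ZIP style tools do.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    info = zipfile.ZipInfo("mylink")
    info.create_system = 3                         # 3 = Unix host system
    info.external_attr = (stat.S_IFLNK | 0o777) << 16
    zf.writestr(info, "target/file.txt")           # link target as entry data

# Reading it back: detect the symlink from the stored mode bits.
with zipfile.ZipFile(buf) as zf:
    entry = zf.getinfo("mylink")
    is_symlink = stat.S_ISLNK(entry.external_attr >> 16)
    link_target = zf.read("mylink").decode()
```

Note that `zipfile.extractall()` will write the target string out as a plain file rather than recreating the link, which illustrates exactly the extraction-side decisions LVZIP has to make.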
  7. Well, what I have posted is really mostly just to install libraries, and it is geared towards application-specific installs rather than project-specific ones. So it is not quite the tool you guys are looking for, for sure. It was developed at a time when LabVIEW would not have projects for around 10 more years, so that is at least an excuse. :-) I was toying at some point with the idea of simply calling the DEAB tool after massaging the various paths and settings into the necessary format to perform the necessary steps, but the fact that DEAB really predates many of the newer features after LabVIEW 8.0 made me hesitate, and eventually other things got in the way. As to the VIPM format, it's pretty much the same as OGPM with a few additions for things like the license manager and a few other niceties. At least a few of them I even added to the OGPM version I posted. As it is an INI file, it is really trivial to add more things; only the installer component needs to support them too, and I really did not feel like building that part as well. I only briefly looked at NIPM. I was expecting to see something which supports the current NI installers for Windows for all other platforms, having been led to believe that it should eventually be used for that, as well as a full-featured VIPM replacement on steroids. I found it to be a half-baked replacement for the installers and way too lacking to even support basic VIPM features. Of course, being fully aware that creating a good package manager and installer is a project that many have attempted and only a few have really gotten to a decently working state, and I'm not just talking LabVIEW here, I wasn't really too surprised. I would be interested to have something better, but I have to admit that my motivation to create such a thing is not high enough to do the work for it.
  8. Ohh, I'm not saying that the pre/post build steps should be removed, just that it shouldn't be necessary to use them to install 32-bit or 64-bit shared libraries depending on which LabVIEW version you install into.
  9. Well, no menu editor in what I have. I find menus quite important to use, but way too infrequent to create to have felt like spending lots of time on this. I did add a feature to allow for 32-bit and 64-bit shared library support, but as I actually rely on VIPM to be able to install the created package, I couldn't quite go all the way as I had wanted. The VIPM format is missing a specific flag that could mark files to be used in a specific bitness mode, and the platform string as used in OpenG and hence VIPM is too heavily massaged to allow that distinction. My suggestion to add such a feature to VIPM was not really considered, as it was deemed too obscure. Instead I had to rely on a Post Install VI that renames the appropriate shared library to the required name and deletes the unneeded one. Basically the OpenG platform consisted of several tools. First there was the Package Builder, which as already mentioned is a glorified ZIP file builder. OpenG packages, and VIPM files too, are in fact simply ZIP files with one or more specific files in the root, one of them usually being an INI-type spec file specifying the contents of the package and the rules to use when installing it, the others being the package bitmap and an optional license text. The rest are folders corresponding to file groups that need to be installed to the target system according to the rules in the spec file. The VIPM spec file contains a few additional optional items to support some of the later VIPM features. Then there was first an OpenG Application Builder project that eventually morphed into the DEAB tool, which was the OpenG variant of an application builder. It also allows renaming of file hierarchies to create source distributions with filename prefix or postfix additions, before NI really supported such a feature. This was the tool used by the OpenG libraries to namespace the library files by adding the __ogtk postfix to each filename.
Creating an OpenG package simply meant first running the DEAB tool to rename/relink the source VIs into a new location that was meant to mirror the final VI hierarchy on the target system, then running the package builder task to turn this into an OpenG package. It seems the OpenG SVN repository contains a newer sub-project, Builder, which is another wrapper that tries to automate the execution of the DEAB task and the package builder in a very simple UI. My attempt was basically to extend the old Package Builder UI with some additional configuration options that could be used to create the necessary data structures to run the DEAB tool programmatically before invoking the package step. But in comparison to VIPM this UI is more involved and for many users probably less intuitive. The main GUI is found in "source/GUI/OpenG_Package_Builder.vi". OpenG Package Builder V2.0.zip
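The package layout described above (a ZIP with a spec file in the root plus one folder per file group) can be mimicked in a few lines of Python. The file and section names here are illustrative only, not the exact OpenG/VIPM spec:

```python
import io
import zipfile

# Minimal sketch of an OpenG/VIPM-style package: an INI spec file in the
# root of a ZIP, plus folders holding the files to be installed.
spec = (
    "[Package]\n"
    "Name=example\n"
    "Version=1.0.0\n"
    "\n"
    "[Files]\n"
    "Group0=File Group 0\n"
)

buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("spec.ini", spec)                   # hypothetical spec name
    zf.writestr("File Group 0/example.vi", b"...")  # files to install

with zipfile.ZipFile(buf) as zf:
    names = zf.namelist()
```

An installer would read the spec, map each file group to a target location (user.lib, vi.lib, etc.), and extract accordingly, which is essentially all the "glorified ZIP file builder" produces.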
  10. It's nothing too fancy. I added a few things to the UI to support more features and in preparation for adding the VI renaming/relinking step that was done separately in the OpenG DEAB tool before calling the OpenG package builder. But I never got around to really adding the DEAB part into the package builder. It's extra difficult because the DEAB component doesn't currently support newer features like lvclass and lvlib at all, and of course no malleable VIs etc. I can post what I have somewhere, but don't get too excited.
  11. Realistically that might mean that there would be no new LabVIEW feature for many years! So I doubt that such a rigorous specification will ever be seriously considered.
  12. I don't think it can be LAVA's responsibility to verify license infringement, copyright, and similar issues of such code. Obviously the code review process should catch glaring problems, but it cannot be the idea that LAVA could and should be responsible for guaranteeing that no such issues exist. If we require that, we can stop right now, because we can't do that even if we hire several full-time lawyers. By submitting code for the CR, each submitter basically needs to state that he is providing the code according to the license requirements that he selected and that, to the best of his knowledge, there are no copyright or license issues. Anything more than that is simply not workable. As to providing a LAVA repository, I don't think we would need a special OS/hardware system other than some internet-accessible FTP/HTTP storage, which I'm sure the existing LAVA server could provide, except for the additional server space that might require. I'm not sure about the exact internal workings of VIPM, and it seems that even packages released on the OpenG SourceForge site aren't automatically picked up, though it shouldn't be very difficult to do that. (Yes, I did some package update in the last year and posted it to the OpenG repository, but it still doesn't show up in VIPM.) It seems that even for that repository there is some manual step involved from JKI to add a package to some list that VIPM will then be able to see, or maybe I did something wrong when posting the package on SourceForge. If JKI would then add this LAVA repository as another default repository besides the Tools Network and OpenG SourceForge repositories, we would be all set even without owning a Pro license of VIPM. And I might even be persuaded to get our company to commit to a few VIPM Pro licenses then.
  13. Yes, and therefore it is courtesy to mention that you posted elsewhere as well. Not everyone frequents both sites, so it is helpful if they can look at the other post and see if the answer they have may already have been given there. Also, in the future when someone has the same problem, he may come across one post and only see what was answered here, while there might have been other helpful answers in the other thread. You may feel that it is redundant to mention this, and maybe counterproductive, as people may be less inclined to answer your post if they know you posted elsewhere too. But believe me, the people who are likely to answer you either here or on the NI forum are not newbies who dismiss a post because it has been posted elsewhere; they will want to look at the other thread, and if the information they have is not mentioned there, they will gladly share it with you. But some of the really valuable posters might get annoyed at seeing the same question on different boards without any mention of crossposting, and might decide not to bother spending time on answering your problem even if they have helpful information. After all, we all do this on a volunteer basis, with no obligation other than goodwill to share the information we know with anyone on these fora. If you don't get the answer you are looking for, it won't be because you mentioned that you had crossposted, but because nobody else who frequents any of these boards has the answer, or is not allowed to share it because of some NDA. And yes, there are manufacturers who believe that their product is so super special that nobody should be allowed to know how to communicate with it, and if they ever share any information with some specially valued customer, they need to hold them at gunpoint with a lengthy legal document about sharing that super confidential information. Generally I think it hurts them more than anyone else, but it is their choice.
They seem to be based in India and don't mention any international sales or technical support offices, so it may indeed be hard to get any information from them if you are not located in India yourself. The site also looks kind of last-century in some ways, and they seem to be secretive even about pricing information, so I'm not sure it is the best supplier for such hardware. We have generally had quite good experience with Watlow and Eurotherm devices, and they are not difficult about sharing technical information about their products.
  14. It may be valid. The math function may behave badly on wrong or empty data sets. Sure, you could check the actual input arrays for valid sizes etc., and a solid algorithm definitely should do that anyway, but the error in is an immediate stopgap against trying to process data that may not even have been generated, because of an error upstream. I understand that not every function needs an error in/error out, but I prefer having a superfluous error cluster on a VI that may never really be used for anything but dataflow dependency over having none, and later, when revising the function and having to add one anyhow, having to go everywhere and rewire the function.
  15. If you happen to use these VIs in LabVIEW 64-bit, you will certainly have to review them and make sure that all Windows handle datatypes, such as the HWND used in those functions, are configured to be pointer-sized integers rather than 32-bit integers (and change the corresponding LabVIEW controls to be of type 64-bit integer).
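The size mismatch is easy to illustrate with ctypes (a sketch only; in LabVIEW the actual fix is the "pointer-sized integer" setting in the Call Library Node configuration, and the handle value below is made up):

```python
import ctypes

# Why a 32-bit integer cannot safely carry a 64-bit HWND: handle values
# above 32 bits get truncated when squeezed into an int32.
handle = 0x1_0000_0001                    # hypothetical 64-bit handle value
as_int32 = ctypes.c_int32(handle & 0xFFFFFFFF).value  # what a 32-bit config keeps
truncated = as_int32 != handle            # the high bits are lost

int32_size = ctypes.sizeof(ctypes.c_int32)    # always 4 bytes
pointer_size = ctypes.sizeof(ctypes.c_void_p) # 8 bytes on a 64-bit platform
```

A pointer-sized parameter adapts automatically (4 bytes in 32-bit LabVIEW, 8 bytes in 64-bit), which is why it is the right configuration for any Windows handle.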
  16. No no. GNU C on Windows has no significant use. You might need it if you use one of the open source C compilers such as Dev-C++, MinGW, or Cygwin, which are based on gcc, but Visual Studio is based on Microsoft Visual C and will never use the GNU C library! It might work with the Visual Studio 2003 runtime installer, but there is no 64-bit version of that. Visual Studio 2005 was the first version to properly support creating 64-bit applications. Whether it works depends on which Visual Studio version was used to create the installer executable. You shouldn't need to install the entire Visual Studio software, just the redistributable Visual C runtime library for the correct Visual Studio version.
  17. 8.5 came out in 2007 and 8.2 in 2006, so they and their installers may or may not have been built with Visual Studio 2005. There is a good chance that they were in fact created using Visual Studio .NET 2003 or even Visual Studio .NET 2002, though that last one doesn't make too much sense as it was pretty flaky. So you may need to specifically add support for the Visual Studio 2003 C runtime library to your system. Starting with Visual Studio 2002, Microsoft deviated from the previous principle that all Microsoft C compiled programs referenced the standard MSVCRT.dll that was also part of the Windows installed components. Instead, each created application used its own version of the C runtime library, depending on the Visual C version used. Those C runtime versions are often installed in a standard Windows installation, as many components that are distributed with Windows were created with various versions of Microsoft C (some pretty old). But it is quite plausible that Microsoft managed to extinguish from Windows 10 any component that still requires a C runtime library version prior to 2008. It may still be present on a fresh install of Windows 10 sometimes, depending on additional installs like video or network card drivers with their own specific utilities, but it is very likely that it does not come with any Visual C 2005 support or earlier. Unfortunately, it is pretty much impossible to convince Visual C not to use the version-specific C runtime library. Writing a new C compiler is almost easier. The installer will of course install the C runtime library support that LabVIEW 8.2 or 8.5 needs, but in order to install the C runtime library support for the installer itself, you would need an installer for the installer. A sure catch-22 situation. Only applications created in Visual Studio 6 can run on any Windows version without needing specific C runtime versions to be installed.
But I'm sure Microsoft is considering ways to break that at some point.
  18. When this library was created, somewhere around 2000 or so, LabVIEW was still more than 5 years away from supporting 64-bit integers in any way. So it could not have been added then even if one had wanted to. And the OpenG movement has lost a bit of traction in the last 15 years or so. Nobody seems to have noticed this deficit, or if they did, they just added the extra cases without reporting it back. The real problem, however, is that these libraries, while not perfect, work well enough for most users that nobody bothers to put in the effort to update them, and the original maintainers of each of those libraries have almost all moved into other positions that often mean little or no daily LabVIEW programming anymore. The OpenG libraries also follow a development model that makes it difficult for other users to push updates into them and release a new version. The corresponding Read Key VI has the same omission, only there the fix is more complicated to perform, since the VI uses the NI INI file class, which still only supports 32-bit integers. So at least for the 64-bit variant, something else needs to be developed, and in doing so it would probably be best to change the other integer types too to use the same method.
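As an illustration of the missing 64-bit case: reading an I64 from an INI file is straightforward when the parser keeps raw strings and converts on demand (a Python sketch, not the OpenG or NI implementation; section and key names are made up):

```python
import configparser

# An INI parser that stores raw strings can hand back the full 64-bit
# range, where a 32-bit-only Read Key implementation would clip the value.
ini_text = "[settings]\nbig = 9223372036854775807\n"   # 2**63 - 1, max I64

cfg = configparser.ConfigParser()
cfg.read_string(ini_text)
value = cfg.getint("settings", "big")
```

The point is that the limitation is not in the INI format at all, only in the integer width of the conversion layer, which is why fixing the OpenG VIs means replacing that layer rather than the file handling.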
  19. It may not help to calm you down at all, but even many of the security experts barely know what they are talking about. It's a special world, and the really tricky part is that what is considered safe enough today is outright inadequate tomorrow. A computer from just 10 years ago wouldn't survive even a single day nowadays when connected to the net without being compromised in some way, and even fully up-to-date computer systems are under continuous attack whenever they are visible in any way to the big dangerous net out there. You can only hope that your network modem will keep your internal network invisible and that the modem itself hasn't been compromised in some way already. Mirai and friends can be used to attack not only network cameras but just about any network appliance.
  20. I think this is it. https://indico.cern.ch/event/306567/attachments/583776/803586/Why_Control_System_Cyber-Security_Sucks__CLA_2014.pptx
  21. You can't do that! Your LabVIEW code is really equivalent to this .NET code, which will definitely throw an error:

    char[] stringdata = { '1', '2' };
    String str = new String(stringdata);
    SqlString sql = (SqlString)(object)str;  // throws an InvalidCastException

A String is not an SqlString by a long shot, and neither is an SqlString a String, as it does not inherit from String at all. It is its own object type. Maybe .NET does some super magic behind the scenes and the C# code you wrote is valid, but purely from a type compatibility point of view, reinterpreting the String object as an SqlString isn't a directly compatible conversion. The proper code construct would be something along the lines of:

    SqlString sql = new SqlString(str);

In LabVIEW, use the SqlString constructor that allows passing in a String parameter for the initialization.
  22. Nope! If you have the corresponding certificate, Wireshark has an option to decrypt the SSL-encrypted data stream directly into fully readable data (which might be binary or not). Obviously the client should only have the certificate containing the public key, which won't make direct decryption possible, and only the server should know the certificate with the private key, but that is already an attack surface.
  23. I'm pretty sure that an IPE structure where the DVR wire was directly wired from the left access node to the right access node, without doing anything else with the wire (except an unbundle to get some data out of the DVR content), was already optimized by LabVIEW to be more or less a NOP (no operation) for the right access node. The only real improvement with the read-only setting is that the DVR doesn't need to be locked for the entire IPE but only for the read access at the left access node.
  24. The performance boost can range from more or less negligible to serious. It really depends on what else you are doing with that DVR. If there is no other access to the same DVR at the same time, the gain from locking the DVR just for the value read rather than for the duration of the entire IPE structure really won't matter at all. There is no other code that could be blocked waiting for the DVR, and consequently nothing that could be slowed down. If you have many concurrent read-only accesses to the same DVR, then you will see a significant improvement. Before the read-only access existed, each DVR IPE access had to wait for any previous IPE access to that DVR to finish. You could achieve similar behaviour before by just using the IPE structure to reference the necessary data in the DVR and doing any further processing outside of the DVR. As such, the only advantage of the read-only access is that the DVR doesn't need to be accessed a second time before leaving the IPE. If a non-read-only IPE access to the DVR is active, however, any read-only access still needs to wait for the write access to finish and can therefore be blocked.
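The read-only semantics are essentially those of a readers-writer lock: many readers may proceed concurrently, while a writer (a non-read-only IPE access) needs exclusive access. A minimal sketch of that idea, simplified for illustration (it does not serialize multiple writers against each other, and it is not LabVIEW's actual implementation):

```python
import threading

# Minimal readers-writer lock sketch: concurrent readers, exclusive writer.
class RWLock:
    def __init__(self):
        self._readers = 0
        self._mutex = threading.Lock()
        self._no_readers = threading.Event()
        self._no_readers.set()               # no readers yet

    def acquire_read(self):
        with self._mutex:
            self._readers += 1
            self._no_readers.clear()

    def release_read(self):
        with self._mutex:
            self._readers -= 1
            if self._readers == 0:
                self._no_readers.set()

    def acquire_write(self, timeout=None):
        # A writer must wait until all readers have left.
        return self._no_readers.wait(timeout)

lock = RWLock()
lock.acquire_read()
lock.acquire_read()                          # second reader enters without blocking
readers_at_peak = lock._readers
writer_would_block = not lock.acquire_write(timeout=0)   # readers still active
lock.release_read()
lock.release_read()
writer_can_enter = lock.acquire_write(timeout=0)
```

The two `acquire_read` calls never wait on each other, which is exactly the gain from many concurrent read-only DVR accesses, while the writer is blocked until the last reader releases.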
  25. I fail to see what Israel has to do with this. While I have worked with ACS controllers from LabVIEW, I only used them through the TCP/IP interface protocol that the controller provides. It's a powerful device with lots of possibilities to implement control functionality inside the controller itself in their BASIC-like programming language (which I assume is what you show in your image). But we never got around to using gantry mode; we rather implemented the coupling of the axes in LabVIEW itself and then sent the corresponding commands. So I can't really help you with this problem, also because I didn't do any of the programming on the ACS controller itself.
