Rolf Kalbermatter

Everything posted by Rolf Kalbermatter

  1. I would tend to say that it is overkill. If you really want to handle this, it couldn't really be more than a 1:1 copy of the complete content to the user selected destination directory. Personally I find this easier to handle in 7-zip or similar on windowed desktops, and on the command line if you work predominantly there (though why would you use LabVIEW then 😀).
  2. I think a better "workaround" is to use the paths from the hierarchy as read from disk to fix up the paths in the hierarchy list after loading and renaming the VIs, before writing it back with the Write VI Hierarchy method. Having to click away warnings sounds pretty unintuitive.
  3. Well yes, I do build my own .opg package with a version of the OpenG Package Builder. And this version indeed does not do many things, and none that are required to support new post-8.0 features like lvlib, lvclass, etc. The only fixing that is done during the build process is renaming the VIs with a postfix and then packing everything up into the opg package, which is basically still the same as a vip. If VIPM indeed fixes up the library names on package creation, which it may not even do intentionally but simply as a side effect of the part where the entire hierarchy is first read into memory and then saved back to the VIs, then there is a serious problem. If the fixup only happens at masscompile time after installation, there should be no problem at all. The part in the OpenG Package Builder that does the postfix renaming in fact uses the same method to load, modify and then save the renamed hierarchy, but I execute it in pre-8.0 LabVIEW, as the VIs are saved in the format of the LabVIEW version in which this step is performed. As all of the wildcard magic in the library name except the shared library ending itself is really only a 2009-and-higher gimmick, it may be that the corresponding function in later LabVIEW versions fixes up the library names on recompilation, and the built package then indeed ends up with an undesired fixed library name. In that case VIPM should in fact restore the wildcard shared library names after renaming and recompilation, before writing the VI hierarchy back to disk, in order to keep the original names. Also, I obviously only use the shared library file ending wildcard and not the others for 32/64-bit library distinction. Instead I install both shared libraries to the target directory and then use a post-install hook that renames the correct shared library for the current platform to the generic name without bitness indication and deletes the other one. That way the VIs can always use the same generic library name without needing to distinguish between bitness there.
  4. Error 1 is not only a GPIB controller error but also the canonical "Invalid Argument" error. And since there is obviously no GPIB node involved here (it's the old 488 and 488.2 GPIB functions that return such errors, not the VISA nodes, so it is actually VERY unlikely that anyone nowadays is even running software that uses them) it must mean that the property node considers the value you pass to it to be an "Invalid Argument". 😀
  5. The header file is indeed not necessary when you have good and complete documentation. However, I haven't seen many such documentations. More often than not the documentation contains typos or remnants from older versions of the library that are not valid anymore. The header file should be considered the canonical documentation of data types and function prototypes, and the accompanying documentation hopefully contains more than just a bare repetition of what a good C programmer could determine from the header alone. It should contain things like what errors a function can return, what special memory allocation certain parameters may need, the exact meaning of every parameter and its possible and/or allowed values, or the minimum memory size a string or array pointer should have for the function to write into. If it is really good, it also documents things like what prerequisites a function may have, such as other APIs that need to be called first and in what order. In the end, however, this last part is best learned from sample code that comes with the library and which you then need to translate to LabVIEW. If you create a library of VIs that resembles the DLL library functions, this is not very difficult. But your VIs should NOT be a one-to-one mapping of parameters on the front panel to function parameters in all cases. In C you may have to pass an array into a function and, as a second parameter, a size. In LabVIEW the array size is part of the array itself, so creating a front panel that contains both an array and a numeric for the array size is pretty stupid (which is what the Import Library Wizard for instance does, because a C header file does not tell the C compiler, and hence the Library Wizard either, which parameter defines the size of the array buffer; see the sketch below). That is left to the programmer to know, either from the naming of the parameters or, much better, because it is documented like that in the documentation.
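
     As a minimal illustration of that array-plus-size pattern, here is a hypothetical C prototype; the function name and parameters are made up for the example, not taken from any real library:

         #include <stdint.h>

         /* Nothing in this header tells the Import Library Wizard that
            'count' is the number of elements that 'buffer' can hold. */
         int32_t ReadSamples(int32_t handle, double *buffer, int32_t count);

     A well designed wrapper VI would expose only the array on the front panel and internally wire the result of Array Size to the count parameter of the Call Library Node.
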
  6. It seems to work fine for the OpenG ZIP library and other libraries I did in the past, so I'm not exactly sure what the problem would be. Of course you need to make sure that the .so, .dll and .framework all end up in the same location relative to the VIs, the shared library name needs to be the same except for a possible lib prefix on the Linux platform, and the calling convention needs to be cdecl on all platforms (where you have a choice; see the header sketch below). Most new platforms only know one possible calling convention, so no choice there anyhow. Windows 64-bit for instance uses a new calling convention called fastcall, but that is the only one supported (and generally used outside kernel space).
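
     A rough sketch of how such a shared library header can declare its exports portably; the macro and function names here are illustrative, not from an actual library:

         #include <stdint.h>

         /* Portable export declaration for Windows, Linux and macOS. */
         #if defined(_WIN32)
           #define MYLIB_EXPORT __declspec(dllexport)
           #define MYLIB_CC     __cdecl  /* only meaningful on 32-bit Windows */
         #else
           #define MYLIB_EXPORT __attribute__((visibility("default")))
           #define MYLIB_CC
         #endif

         MYLIB_EXPORT int32_t MYLIB_CC AddNumbers(int32_t a, int32_t b);

     On 64-bit Windows the compiler simply ignores the __cdecl modifier, because the x64 convention is the only one that exists there.
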
  7. The last step in the description on the sourceforge post could be made simpler: instead of changing the library path in all Call Library Nodes to <some long path>/lvzmq32.so and having to maintain two versions of the VIs because of the different library name ending, you should replace the .so with .*. LabVIEW will then replace the .* with whatever file ending is standard for the current platform (.so on Linux, .dll on Windows, .framework on Mac).
  8. Usually the .lib file is only the import library for the DLL. C can't directly and magically call DLL functions; somehow it needs to know in what DLL they are and how to load them into memory. This can either be done explicitly by calling LoadLibrary() and GetProcAddress(), or by linking the import library that came with the DLL. The import library is simply a precompiled .lib file that does all the LoadLibrary() and GetProcAddress() stuff already. In LabVIEW you can't link in a .lib library but have to use a Call Library Node, which is the LabVIEW interface to LoadLibrary() and GetProcAddress(); a sketch of what that looks like in plain C follows below.
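
     For comparison, this is roughly what the explicit route looks like in C on Windows, i.e. what both the import library and LabVIEW's Call Library Node do for you behind the scenes; the DLL and function names are made up for the example:

         #include <windows.h>
         #include <stdio.h>

         typedef int (__cdecl *AddNumbersPtr)(int, int);

         int main(void)
         {
             /* Load the DLL into memory and look up the function by name. */
             HMODULE lib = LoadLibraryA("mylib.dll");
             if (lib == NULL)
                 return 1;

             AddNumbersPtr AddNumbers =
                 (AddNumbersPtr)GetProcAddress(lib, "AddNumbers");
             if (AddNumbers != NULL)
                 printf("2 + 3 = %d\n", AddNumbers(2, 3));

             FreeLibrary(lib);
             return 0;
         }
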
  9. I never really liked the new combined probe window. The old one I used frequently; yes, sure, you could get into a real mess with probe windows all over the place, and it could be hard to find the one with the new value from the wire you were just single-stepping through. But the new probe window doesn't solve that at all: you still need to find the specific probe entry in the list, click on it to see its value in the graphic probe view, and then get that damn probe window out of the way again ASAP as it covers so many other things. I usually decide then that I can just as well get rid of it, except "Are you really sure you want to close this window?" Damn it! No, of course not, I clicked on the upper right corner by accident and would never ever think about getting rid of this huge monster window on my desktop screen.
  10. I"m not sure it's conflation only. Many seem to be focused specifically on the fact that LabVIEW GUIs don't look like the latest hyped Office version, which of course will be again different over 2 years when yet another modern style design guide claims that everything needs to be high contrast again, or maybe alpha shaded with psychadelic animations (you need to find reason to sell high end GPUs to spreadsheet jugglers). Your second point is indeed one thing I feel LabVIEW could have made more advances. Dynamic control creation while indeed complicated to get into the dataflow paradigma of LabVIEW would be possible with the VI Server reference model, although it would not be classical dataflow programming anymore for such GUIs at least for the part that you use such dynamic instantiation in. XControls was a badly executed project for an in principle good idea, Splitters are a God send for more dynamic UIs together with Subpanels, but to try to edit splitters once they are placed on a frontpanel really can be an exercise in self control. In doing so I feel the same frustration as when I'm forced to edit some Visio drawings. The editor almost seems to know what you want to do, because it always does the opposite of that, no matter what.
  11. Hmm, well that is at least an improvement. Still, what you describe is indeed clunky! What I still don't understand is that so many people complain about how last-century LabVIEW Classic looks, but nobody finds the smooth baby-rounded icons and light-gray-on-more-light-gray colored controls in any way odd. In some ways it reminds me of HP VEE, which I thought was dead 😀. Did NI decide that HP might have done something not so bad after all these years of fighting it as pure evil?
  12. You mean multiple projects? That's something, but I prefer to keep all the different windows wherever I put them, even if they belong to one and the same project or application. And I definitely never tile; my God, what a mess. My windows have the size they need to have to show the code or front panel, and that is seldom half of the screen in any direction. Artificially resizing them to something dictated by the monitor size feels just as evil as being bound to MDI constraints.
  13. It's unnatural and unnecessary to force everything into one single application window frame. I don't even like the leftover MDI artefacts of MS Office applications. They used to force every spreadsheet and Word document into a single main window, probably because they hired the old File Manager user interface designer for their Office products, but since got smarter by allowing at least a separate application window for every document. That makes it at least usable on multi-monitor setups. Macs didn't even have any native support for MDI (and Microsoft and others spent lots of man hours to create this unnecessary "feature" on the Mac for their applications).
  14. No. The developer blames his newborn's colics for that 😀. But from what I understood it doesn't consist of such a difficult thing as it was supposed to be using pling. Technically it's quite simple; the challenge lies in the fact that it has to work on all LabVIEW platforms, including RT, which mainly means a serious amount of testing and testing and testing again. I feel similarly about NXG: lots of change for often pretty vague reasons. There was one point about the old LabVIEW code base: it did contain some very ugly code, a lot of it originally caused by the fact that the C compilers at that time struggled to compile the LabVIEW code base at all. Apple repeatedly had to increase the symbol table space in the C compiler in order to allow NI to compile LabVIEW. That resulted in some code that is probably almost unmaintainable. Still, I don't know if throwing the baby out with the bath water was such a good idea to fix this. A more modular rewrite of various code parts of LabVIEW might have been easier and delivered similar results, and not left classic LabVIEW out in the cold for new features as has happened for almost 10 years now. And as far as the new MDI-style GUI goes, I simply hate it. And the loss of multiplatform support is another big minus. I was under the impression that it did work in LabVIEW for Mac too, but I have no hard evidence for that. I'm not quite sure what that has to do with LabVIEW NXG or not. ODBC is not something that even LabVIEW Classic supports in any way out of the box. Sure, there used to be the one Database Toolkit that was using ODBC and then got bought by NI and redesigned to use ADO instead of ODBC to interface to databases. So are you rather asking about database access in general, independent of ODBC in particular? If so, that should be one thing where NXG is actually easier, as it makes access to .Net APIs simpler, and the .Net database API is all that you need to interface to most databases. Maybe not as trivial as installing the database toolkit and being done, but definitely much easier than many other things you can think up. If you really specifically mean ODBC support, then you shouldn't expect that from NI. ODBC is an old legacy technology that even Microsoft would like to forget if they could.
  15. NI still provides Eclipse installations to cross-compile shared libraries for their NI Linux RT based hardware targets. http://www.ni.com/download/labview-real-time-module-2014/4846/en/ http://www.ni.com/download/labview-real-time-module-2017/6731/en/ I still need to test whether it is mandatory to use both versions depending on the targeted LabVIEW version, or whether shared libraries created in the 2014 version will also work on LabVIEW versions 2017 and 2018. And I agree: compiling C source code for the Intel and ARM versions of the targets is really the same process for just about any software that does not contain assembly code instructions; see the sketch below.
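
     The C source itself is completely target agnostic. A minimal shared library like this sketch (the function name is illustrative) compiles unchanged for both targets; only the cross compiler invoked from the Eclipse toolchain differs, with an invocation along the lines of <toolchain-prefix>-gcc -shared -fPIC minimal.c -o libminimal.so:

         /* minimal.c - builds as a shared library for the Intel and ARM
            NI Linux RT targets alike; no platform specific code needed. */
         #include <stdint.h>

         int32_t ScaleValue(int32_t value, int32_t factor)
         {
             return value * factor;
         }
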
  16. It's definitely fixed in LabVIEW 2016, although it can be a pain. LabVIEW will break all VIs where a cluster or enum typedef is present whose elements contain values that it cannot mutate unambiguously. And you get a dialog that lists all those locations and shows you an old and new view of the data (with a best guess for the new value), where you have to edit/select the new data value and then confirm to use that from now on. This applies to enum values that somehow changed their name (or were removed), as well as to cluster elements with a different name, or a reordering that makes it impossible to reassign the old values unambiguously. Simply removing an element in a cluster or enum does however leave the other elements' data intact, so it does a reassignment based on label/name rather than just the ordinal order of the data. It's a huge improvement, although it can feel a bit painful at times, as an entire hierarchy can seem to break completely because of a minor edit in an enum label.
  17. Yes I know, but the mentioned rule makes even that difficult. First, if your target has no physical DIP switch, you have to use NI MAX to set it into safe mode. -> Bummer. There is probably some ini file setting somewhere that you could change through SSH and then reboot, so NI MAX is probably avoidable, but: SSH requires a device that can run it, which will in almost all cases be a real computer. Maybe the "No Windows box allowed behind this line" rule offers an escape by allowing the use of a Linux box, but then I would consider that rule even more stupid. Either you ban all computers or none; just excluding Windoze makes no real sense. The more important question is what nisystemformat really does. Does it wipe the disk completely, and if so, why not just use dd? If not, there will be data left on the device, which I'm sure is the entire reason to require it to be wiped before leaving that area.
  18. From my limited experience with MIPI this is very critical. The differential serial lanes of a MIPI connection all need to be matched to within a few mm in length to guarantee proper signal transmission. That means that if you design a PCB you generally have to use meandered microstrips to make sure that every connection has exactly the same length. In addition, the MIPI standard is designed as a chip-to-chip interface and is not meant to be routed through long cables. The D-PHY specification limits the maximum lane flight time to 2 ns, so on an FR-4 PCB using matched microstrip lines you get at most 25 to 30 cm of trace length (see the estimate below). The typical FPC flex cable used to connect a camera module to a board has similar electrical characteristics and is therefore not that much different. That length budget includes the traces from the chip to the FPC cable connector, the FPC cable itself, and the traces from the FPC connector to the frame grabber chip.
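
     As a rough sanity check of that 25 to 30 cm figure, assuming a typical effective dielectric constant of about 3.4 for a microstrip on FR-4:

         v     = c / sqrt(eps_eff) = (3.0 x 10^8 m/s) / sqrt(3.4) ≈ 1.6 x 10^8 m/s
         L_max = v x 2 ns ≈ 33 cm

     so after allowing some margin for connectors, vias and the meanders themselves, you land right around the quoted trace length.
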
  19. Well, you can offer them to use a hammer 😀. It's about as sensible as that rule 😉. But there might be rules about bringing in a hammer too. But honestly, Linux has everything on board, and even a slimmed down NI Linux RT should still know the dd command line tool. There you can do the following (replace X with the target drive letter in both cases). Filling the disk with all zeros (this may take a while, as it is making every bit of data 0):

          dd if=/dev/zero of=/dev/sdX bs=1M

      If you are wiping your hard drive for security, you should fill it with random data rather than zeros (this is going to take even longer than the first example):

          dd if=/dev/urandom of=/dev/sdX bs=1M

      The reason one should fill with urandom in case of required security is explained here: http://www.marksanborn.net/howto/wiping-a-hard-drive-with-dd/ If all else fails, you could create a mini USB stick with a minimal Unix install and the dd tool, get it through the security check at said place (hey, it does not contain any Windows 😀), boot from it and wipe the drive from there. Needless to say, such a controller won't boot up anymore after this, and you will need a bootable USB drive to bring it back to life, but that seems to be the price if they want to be sure no secret can sneak out through the door with any device. Edit: Ohh, and you will of course need to take a keyboard with you; somehow you will have to enter those commands on the shell. I haven't tried, but I would expect that if you plug a USB keyboard into the controller, you will actually be able to use it. The next challenge will be to have some form of display. A VT100 terminal may come in handy, as you can connect it to the RS-232 port on the controller if it has one. Otherwise you have to log in blind (good luck) and enter those commands blind too (even more good luck with that). All in all I think the hammer method is still the best one; simply buy a new controller 😀. A headless controller really is headless, and to control it you need some kind of terminal with at least a display and a keyboard. That could be a very simple Linux device, but I fail to see how a Linux computer would be more secure to take in than a Windows one. With these rules, just tell them to get the controller out of there themselves!
  20. You can do the same with any of the other OpenG libraries. Yes, they are in a Subversion repository, so not the exact same experience as with Git repos, but similar enough.
  21. I could see how this might have been confusing. Enterprise has for a long time always seemed to be even more than Professional, especially if you look at Microsoft. So that Enterprise would add the feature to subscribe to custom repositories that you need Professional to create is kind of weird. Unless Enterprise would have been a kind of site license that everyone in a company could install, but was it that?
  22. I think that's a short-term fix but not really something that should be part of every new OpenG library release cycle, nor of a possible future LavaG library release.
  23. I've made a few small changes in the past to the string package in the repository, based on bug reports I saw.

      Revision: 1494
      Author: labviewer
      Date: Friday, 23 May 2014 23:07:42
      Message: Attempt to fix the String to 1D Array.vi function when ignore duplicate delimiters is true.
      ----
      Modified: /trunk/string/source/library/String to 1D Array.vi
      Modified: /trunk/string/tests/TEST - String to 1D Array.vi

      But I'm not familiar with the exact release procedure. When I didn't see any more action, I eventually built a new VIP file, oglib_string-4.2.0.13.vip, and uploaded it to the sourceforge file store in 2017, but that seems not to be enough to make it appear in VIPM, unfortunately.
  24. I think it would be possible for JKI to add, besides the JKI package repository and the NI Tools Network repository, one more well-known repository such as a LAVA URL to the free edition. That would not be a big deal to add, I'm sure. Another option I would definitely consider is a paid version of VIPM that costs a small fee but then allows adding custom repositories to that client. You would still need a full Pro license to create new repositories, but not to access them. The current license model is impossible to defend in a company such as ours, where you not only need a Pro license to create your own company-wide repository, but anybody wanting to access that repository needs that license too, in order to add it to the locations VIPM will check for packages. So: something like the free community edition as now, with the additional LAVA repository added; then an intermediate Developer Client edition that allows subscribing to custom repositories, which everybody in a company could use; and last but not least the full featured VIPM Pro as it is now. The old OpenG package manager could more or less do the first two editions already; it is just outdated and doesn't properly handle the new LabVIEW file types anymore that came out with LabVIEW 8 and later. EDIT: About 2 years ago we were evaluating in our company how to manage internal libraries and the distribution thereof. VIPM was considered too, but the fact that every developer would need a Pro license killed that idea very quickly. If there had been a Developer Client edition that normal LabVIEW developers could use, and which could have been purchased as a site license for a feasible fee, I'm pretty sure we would have been able to get that solution approved together with a few Pro licenses.