Rolf Kalbermatter

Members · Posts: 3,871 · Days Won: 262

Everything posted by Rolf Kalbermatter

  1. That's pretty much how it all works. Of course that doesn't mean that you can't bend the system if you have particularly deep pockets to buy the necessary lawyers to get a court ruling that may sound and feel like the opposite of this. Generally, however, the money involved is not high enough for such things. The only real problem if flarn wanted to sue someone using that VI is of course to find out about the illegal use first, and then to prove that the other person didn't invent it themselves, independently of his posting. But that is an entirely different story. Being in the right doesn't always mean getting that right enforced. Who knows, Oracle might buy his rights some day and then go and sue everybody. 😃
  2. Legally that is indeed how it works, unless you live in a banana republic maybe. 😃 Copyright is gained automatically when a work is created. At least in the US, works first published after March 1, 1989 need not include a copyright notice to gain protection under the law. It legally prevents anyone from copying the work, unless it is accompanied by a license that specifically allows that.
  3. The difference is that Mouse Down is generated first for the control that the user clicks on. Then, if that processes successfully, the Key Focus is changed to the new control, which commits the value to the control that previously had the Key Focus, which in turn results in the Value Changed event for that control. So at the time the Mouse Down event is processed, the Value Change event for the previous control has not been processed yet. When you use the Value Change event for the button instead, it is guaranteed that the Value Change event for the string, which is a result of that control losing the Key Focus, has already been processed.
  4. Very nice tool. I don't generally customize wires but this looks like a fun project. 😃
  5. We use our internal SVN services, so this might not be the answer you're looking for. But I generally refrain from adding the built executables to the repository and prefer to set tags for each release. Yes, this means that I might have to go back to a specific tag and rebuild the executable, which is time consuming and has the potential to not create exactly the same executable anymore, depending on driver installations that happened in the meantime for another project. But this happens very rarely, as most times you want the latest and greatest software version. Obviously, if you develop a product that might be sold to different customers over the years and might require you to reproduce old versions frequently to support a specific client, this approach might be unworkable. And I definitely never ever add the Installer to the repository. That is going to mess with your repository big time, no matter what.
  6. Why do you post this in the LAVA Lounge and not in the LabVIEW->External Code forum? You really need to give more information. You mention .Net, C++ and CLI. That is all possible to combine, though not necessarily the most common way. You talk about native pointers and managed memory in a somewhat ambiguous way that makes it very hard to know what you really have. A memory pointer is not simply a memory pointer. It is important to know how it was allocated and with which exact function. That determines whether a simple pointer is still valid when accessed from a different managed environment or not. Generally, memory allocated in one of the involved managed environments (.Net or LabVIEW) can only be accessed through functions of that managed environment; otherwise you need to do marshalling, which LabVIEW does automatically when you use the .Net functionality in LabVIEW. If your C++ code explicitly allocates memory through Win32 API calls or standard C runtime calls (C++ allocations could be trickier, especially in combination with compilation as a .Net assembly), this memory is not managed by .Net but by your code. If you pass this pointer to LabVIEW and make sure that the pointer remains valid (no realloc() or free() calls at all, ever, between passing the pointer to LabVIEW and trying to access it in LabVIEW), you can simply use the function MoveBlock() as mentioned in those linked articles. Of course you need to understand the difference between a pointer and a pointer reference in order to correctly pass this pointer to the MoveBlock() function. Show us your code and what you think is not working there. One other question is why you use C++ when you want to create a .Net assembly (or the opposite: why create a .Net assembly when you want to use C++). It is not the most logical combination, as your .Net assembly still contains non-IL compiled code, so it is equally binary platform specific as a normal DLL. Creating a real DLL instead might be more straightforward.
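To illustrate the MoveBlock() approach described above, here is a minimal C sketch. The `get_data()`/`free_data()` names are hypothetical; MoveBlock() itself is exported by LabVIEW (respectively the LabVIEW runtime engine), and the body given for it here is only a memcpy() stand-in so the sketch is self-contained.

```c
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical DLL export: allocates a buffer with the C runtime and
   hands the raw pointer to the caller. The pointer stays valid until
   free_data() is called; no realloc()/free() may happen in between. */
int32_t *get_data(int32_t count)
{
    int32_t *buf = malloc((size_t)count * sizeof *buf);
    for (int32_t i = 0; i < count; i++)
        buf[i] = i * i;
    return buf;
}

void free_data(int32_t *buf)
{
    free(buf);
}

/* Prototype of the LabVIEW manager function you would configure in a
   Call Library Node to copy the data into a pre-sized LabVIEW array.
   The body here is only a stand-in for illustration; the real function
   is exported by LabVIEW itself. */
void MoveBlock(const void *src, void *dst, size_t len)
{
    memcpy(dst, src, len);
}
```

In the Call Library Node configuration, src is the pointer value itself (a pointer-sized integer passed by value), while dst is the data pointer of a LabVIEW array that was pre-allocated to at least len bytes — that is the pointer versus pointer-reference distinction mentioned above.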
  7. If you install it in instr.lib or user.lib and do a menu refresh (application property node), it should be added anyway, although with a default icon. But what else would you want to add there? A nicer default icon?
  8. I would tend to say that it is overkill. If you really want to handle this, it couldn't really be more than a 1:1 copy of the complete content to the user selected destination directory. Personally I find that this is easier to handle in 7-zip or similar on windowed desktops, and on the command line if you work predominantly there (though why would you use LabVIEW then 😀).
  9. I think a better "workaround" is to use the paths from the hierarchy as read from disk to fix up the paths in the hierarchy list after loading and renaming the VIs, before writing it back with the Write VI Hierarchy method. Having to click away warnings sounds pretty unintuitive.
  10. Well yes, I do build my own .opg package with a version of the OpenG Package Builder. And this version does indeed not do many things, and none that are required to support new post-8.0 features like lvlib, lvclass, etc. The only fixing that is done during the build process is renaming the VIs with a postfix, then packing everything up into the opg package, which is basically still the same as a vip. If VIPM indeed does fix up the library names on package creation — which it may not even do intentionally, but simply as a side effect of the part where the entire hierarchy is first read into memory and then saved back to the VIs — then there is a serious problem. If the fixup only happens at masscompile time after installation, there should be no problem at all. The part in the OpenG Package Builder that does the postfix renaming uses in fact the same method to load, modify and then save the renamed hierarchy, but I execute it in pre-8.0 LabVIEW, as the VIs are saved in the format of the LabVIEW version in which this step is performed. As all of the wildcard magic in the library name, except the shared library ending itself, is really only a 2009-and-higher gimmick, it may be that the according function in later LabVIEW versions fixes up the library names on recompilation, and the built package then indeed ends up with an undesired fixed library name. In that case VIPM should in fact restore the wildcard shared library names after renaming and recompilation, before writing the VI hierarchy back to disk, in order to keep the original names. Also, I obviously only use the shared library file ending wildcard and not the others for 32/64-bit library distinction. Instead I install both shared libraries to the target directory and then use a post-install hook that renames the correct shared library for the current platform to the generic name without bitness indication and deletes the other one. That way the VIs can always use the same generic library name without needing to distinguish between bitness there.
  11. Error 1 is not only a GPIB controller error but also the canonical "Invalid Argument" error. And since there is obviously no GPIB node involved here (it's the old 488 and 488.2 GPIB functions that return such errors, not the VISA nodes, so it is actually VERY unlikely that anyone nowadays is even running software that uses them) it must mean that the property node considers the value you pass to it to be an "Invalid Argument". 😀
  12. The header file is indeed not necessary when you have good and complete documentation. However, I haven't seen many such documentations. More often than not, the documentation contains typos or remnants from older versions of the library that are not valid anymore. The header file should be considered the canonical documentation of data types and function prototypes, and the accompanying documentation hopefully contains more than just a bare repetition of what a good C programmer could determine from the header alone. It should contain things like what errors a function can return, what special memory allocation certain parameters may need, the exact meaning of every parameter and its possible and/or allowed values, or the minimum memory size a string or array pointer should have for the function to write into. If it is really good, it also documents things like what prerequisites a function may have, such as other APIs that need to be called first and in what order. In the end, however, this last part is best learned from sample code that comes with the library, which you then need to translate to LabVIEW. If you create a library of VIs that resembles the DLL library functions, this is not very difficult. But your VIs should NOT be a one-to-one mapping of front panel controls to function parameters in all cases. In C you may have to pass an array into a function, and as a second parameter a size. In LabVIEW the array size is part of the array itself, so creating a front panel that contains both an array and a numeric for the array size is pretty stupid (which is what the Import Library Wizard for instance does, because a C header file does not tell the C compiler — and hence the Library Wizard either — which parameter defines the size of the array buffer). That is left to the programmer to know, either from the naming of the parameters or, much better, because it is spelled out in the documentation.
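To make the array-plus-size point concrete, here is a hypothetical C function of exactly that shape. In the wrapper VI, only the array would appear on the front panel; the size terminal of the Call Library Node gets wired from the Array Size primitive inside the diagram, never from a separate front panel control.

```c
#include <stdint.h>

/* Hypothetical C API: the pointer carries no length information, so the
   element count must be passed separately. Nothing in the header tells
   a wrapper generator that 'len' describes 'data' - only the
   documentation (or the parameter naming) does. */
double sum_array(const double *data, int32_t len)
{
    double total = 0.0;
    for (int32_t i = 0; i < len; i++)
        total += data[i];
    return total;
}
```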
  13. It seems to work fine for the OpenG ZIP library and other libraries I did in the past. So not exactly sure what the problem would be. Of course you need to make sure that the .so and .dll and .framework all end up in the same location relative to the VIs, the shared library name needs to be the same except a possible lib prefix on the Linux platform, and the calling convention needs to be cdecl on all platforms (where you have choice). Most new platforms only know one possible calling convention, so no choice there anyhow. Windows 64-bit for instance uses a new calling convention called fastcall, but that is the only one supported (and generally used outside kernel space).
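A common way to keep one C source exporting cdecl functions on all three platforms is a pair of macros, sketched here (the macro names `LIBEXPORT`/`LIBCALL` are my own invention, not anything the OpenG ZIP library prescribes):

```c
#include <stdint.h>

/* On Windows, export the symbol from the DLL and force cdecl; on
   Linux/macOS the default (and only) calling convention already
   matches, so the macros reduce to a visibility attribute and nothing.
   On 64-bit Windows the __cdecl keyword is accepted but ignored, since
   there is only one calling convention there anyway. */
#ifdef _WIN32
 #define LIBEXPORT __declspec(dllexport)
 #define LIBCALL   __cdecl
#else
 #define LIBEXPORT __attribute__((visibility("default")))
 #define LIBCALL
#endif

LIBEXPORT int32_t LIBCALL add_ints(int32_t a, int32_t b)
{
    return a + b;
}
```

With the source built this way into lib<name>.so, <name>.dll and <name>.framework in the same location relative to the VIs, a single set of Call Library Nodes configured for cdecl can load all three.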
  14. The last step in the description on the sourceforge post could be made simpler: instead of changing the library path in all Call Library Nodes to <some long path>/lvzmq32.so and having to maintain two versions of the VIs because of the different library name ending, you should replace the .so with .*. LabVIEW will then replace the .* with whatever file ending is standard for the current platform (.so on Linux, .dll on Windows, .framework on Mac).
  15. Usually the .lib file is only the import library for the DLL. C can't directly and magically call DLL functions. Somehow it needs to know in what DLL they are and how to load them into memory. This can be done either explicitly, by calling LoadLibrary() and GetProcAddress(), or by linking the import library that came with the DLL. The import library is simply a precompiled .lib file that does all the LoadLibrary() and GetProcAddress() stuff for you. In LabVIEW you can't link in a .lib library, but have to use a Call Library Node, which is the LabVIEW interface to LoadLibrary() and GetProcAddress().
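A sketch of the explicit route, using the POSIX dlopen()/dlsym() equivalents of LoadLibrary()/GetProcAddress() so it also runs outside Windows (the strlen lookup is just a convenient demo target, since libc is always already loaded):

```c
#include <dlfcn.h>
#include <stddef.h>

typedef size_t (*strlen_fn)(const char *);

/* Resolve strlen() from libc at run time instead of letting the linker
   (or an import library, on Windows) bind it at load time. This is
   essentially what a Call Library Node does on your behalf. */
strlen_fn resolve_strlen(void)
{
    /* dlopen(NULL) returns a handle for the main program and its
       dependencies (libc among them) - roughly GetModuleHandle();
       dlsym() then plays the role of GetProcAddress(). */
    void *self = dlopen(NULL, RTLD_NOW);
    if (!self)
        return NULL;
    return (strlen_fn)dlsym(self, "strlen");
}
```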
  16. I never really liked the new combined probe window. The old one I used frequently. Yes, sure, you could get into a real mess with probe windows all over the place, and it could be hard to find the one with the new value from the wire you were just single-stepping through. But the new probe window doesn't solve that at all: you still need to find the specific probe entry in the list, click on it to see its value in the graphic probe view, and then get that damn probe window out of the way again ASAP, as it covers so many other things. I usually decide then that I can just as well get rid of it, except — "Are you really sure you want to close this window?" Damn it! No, of course not, I clicked the upper right corner by accident and would never ever think about getting rid of this huge monster window on my desktop screen.
  17. I'm not sure it's conflation only. Many seem to be focused specifically on the fact that LabVIEW GUIs don't look like the latest hyped Office version — which of course will be different again in 2 years, when yet another modern style design guide claims that everything needs to be high contrast again, or maybe alpha shaded with psychedelic animations (you need to find a reason to sell high-end GPUs to spreadsheet jugglers). Your second point is indeed one area where I feel LabVIEW could have made more advances. Dynamic control creation, while indeed complicated to fit into the dataflow paradigm of LabVIEW, would be possible with the VI Server reference model, although it would not be classical dataflow programming anymore for such GUIs, at least for the part that uses such dynamic instantiation. XControls was a badly executed project for an in principle good idea. Splitters are a godsend for more dynamic UIs together with Subpanels, but trying to edit splitters once they are placed on a front panel really can be an exercise in self control. In doing so I feel the same frustration as when I'm forced to edit some Visio drawings. The editor almost seems to know what you want to do, because it always does the opposite of that, no matter what.
  18. Hmm, well that is at least an improvement. Still, what you describe is indeed clunky! What I still don't understand is that so many people complain about how last-century LabVIEW Classic looks, but nobody finds the smooth baby-rounded icons and light gray on lighter gray colored controls in any way odd. In some ways it reminds me of HP VEE, which I thought was dead 😀. Did NI decide that HP might have done something not so bad after all these years of fighting it as pure evil?
  19. You mean multiple projects? That's something, but I prefer to keep all the different windows wherever I put them, even if they belong to one and the same project or application. And I definitely never tile — my God, what a mess. My windows have the size they need to show the code or front panel, and that is seldom half of the screen in any direction. Artificially resizing them to something dictated by the monitor size feels just as evil as being bound to MDI constraints.
  20. It's unnatural and unnecessary to force everything into one single application window frame. I don't even like the leftover MDI artefacts of MS Office applications. They used to force every spreadsheet and Word document into a single main window, probably because they hired the old file manager user interface designer for their office products, but since got smarter by allowing at least a separate application window for every document. That makes it at least usable on multi-monitor setups. Macs didn't even have any native support for MDI (and Microsoft and others spent lots of man-hours to create this unnecessary "feature" on the Mac for their applications).
  21. No. The developer blames his newborn's colics for that 😀. But from what I understood, it doesn't consist of such a difficult thing as it was supposed to be using pling. Technically it's quite simple; the challenge lies in making it work on all LabVIEW platforms, including RT, which is mainly a serious amount of testing and testing and testing again. I feel similar about NXG. Lots of change for often pretty vague reasons. There was one point about the old LabVIEW code base: it did contain some very ugly code, a lot of it originally caused by the fact that the C compilers at that time struggled to compile the LabVIEW code base at all. Apple repeatedly had to increase the symbol table space in the C compiler in order to allow NI to compile LabVIEW. That resulted in some code that is probably almost unmaintainable. Still, I don't know if throwing the baby out with the bath water was such a good idea to fix this. A more modular rewrite of various code parts of LabVIEW might have been easier and delivered similar results, and left classic LabVIEW not out in the cold for new features, as has happened for almost 10 years now. And as far as the new MDI style GUI goes, I simply hate it. And the loss of multiplatform support is another big minus. I was under the impression that it did work in LabVIEW for Mac too. But no hard evidence for that. I'm not quite sure what that has to do with LabVIEW NXG or not. ODBC is not something that even LabVIEW Classic supports in any way out of the box. Sure, there used to be the one Database Toolkit that was using ODBC and then got bought by NI and redesigned to use ADO instead of ODBC to interface to databases. So are you rather asking about database access in general, independent of ODBC in particular? If so, that should be one thing where NXG is actually easier, as it makes access to .Net APIs simpler. And the .Net database API is all you need to interface to most databases. Maybe not as trivial as installing the database toolkit and being done, but definitely much easier than many other things you can think up. If you really specifically mean ODBC support, then you shouldn't expect that from NI. ODBC is an old legacy technology that even Microsoft would like to forget if they could.
  22. NI still provides Eclipse installations to cross-compile shared libraries for their NI Linux RT based hardware targets. http://www.ni.com/download/labview-real-time-module-2014/4846/en/ http://www.ni.com/download/labview-real-time-module-2017/6731/en/ I still need to test whether it is mandatory to use both versions depending on the targeted LabVIEW version, or if shared libraries created in the 2014 version will also work on LabVIEW versions 2017 and 2018. And I agree: the process of compiling C source code for the Intel and ARM versions of the targets is really the same for just about any software that does not contain assembly instructions.
  23. It's definitely fixed in LabVIEW 2016, although it can be a pain. LabVIEW will break all VIs where a cluster or enum typedef is present whose elements contain values that it cannot mutate unambiguously. And you get a dialog that lists all those locations and shows you an old and new view of the data (with a best guess for the new value), where you have to edit/select the new data value and then confirm to use that from now on. This applies to enum values that somehow changed their name (or were removed), as well as to cluster elements with a different name or a reordering that makes it impossible to reassign the old values unambiguously. Simply removing an element in a cluster or enum does, however, leave the other elements' data intact, so it does a reassignment based on label/name rather than just the ordinal order of the data. It's a huge improvement, although it can feel a bit painful at times, as an entire hierarchy can seem to break completely because of a minor edit in an enum label.
  24. Yes, I know, but the mentioned rule makes even that difficult. First, if your target has no physical DIP switch, you have to use NI MAX to set it into safe mode. -> Bummer. There is probably some ini file setting somewhere that you could change through SSH and then reboot, so NI MAX is probably avoidable, but: SSH requires a device that can run it, which will in almost all cases be a real computer. Maybe the "No Windows box allowed behind this line" rule offers an escape by allowing a Linux box, but then I would consider that rule even more stupid. Either you ban all computers or none; just excluding Windoze makes no real sense. The more important question is what nisystemformat really does. Does it wipe the disk completely, and if so, why not just use dd? If not, there will be data left on the device, which I'm sure is the entire reason to require it to be wiped before leaving that area.