Rolf Kalbermatter

Everything posted by Rolf Kalbermatter

  1. You are right on all counts. LGPL requires you to separate the LGPL code into external libraries that can be dynamically called and, at least in theory, replaced with a version built from the source code. That made building apps pretty complicated, albeit not impossible. BSD is IMHO one of the most practical open source licenses for allowing commercial use. It's unfortunate that even that seems too troublesome for some, but I wouldn't know of a better solution.
  2. The Transpose can be a free operation, but it doesn't have to stay free throughout the diagram. LabVIEW maintains flags for arrays that indicate, for instance, the order (forward or backward) and whether the array is transposed or not. The Transpose function then sets the according flag (as Reverse 1D Array sets its according flag). Any function consuming the array either has to support that flag and process the array accordingly, or first call a function that normalizes the array anyway. So while Transpose may be free in itself, that doesn't mean that processing a transposed array never incurs the additional processing that goes along with physically transposing the array. I believe it is safe to assume that all native LabVIEW nodes know how to handle such "subarrays", as will probably autoindexing and similar. However, when such an array is passed to a Call Library Node, for instance, LabVIEW will ALWAYS normalize the array prior to calling the external code function. Similar things apply to other array operations such as Array Subset, which doesn't always physically create a new array and copy data into it, but can also create a subarray that only maintains things like the offset and length into the original array. Of course many of these optimizations are voided as soon as your diagram starts to have wire branches, which often require separate copies of the array data in order to stay consistent. A rough idea of what such a subarray descriptor could look like is sketched below.
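     A minimal sketch in C of the idea, purely illustrative and not the actual LabVIEW internals: a subarray is just a descriptor with flags and offsets that refers to the original data, and Transpose only flips a flag. A consumer either honors the flags or normalizes (copies) the data first, which is what happens before every Call Library Node call.

         #include <stdint.h>

         /* Hypothetical subarray view: the data pointer is shared with the
            original array; rows/cols describe the underlying data layout. */
         typedef struct
         {
             double      *data;
             int32_t      rows, cols;
             int32_t      offset;          /* Array Subset style views */
             unsigned int transposed : 1;  /* set by Transpose 2D Array */
             unsigned int reversed   : 1;  /* set by Reverse 1D Array */
         } SubArrayView;

         /* "Free" transpose: flip a flag instead of moving any memory */
         static void Transpose(SubArrayView *v)
         {
             v->transposed = !v->transposed;
         }

         /* A consumer that honors the flag swaps indices on access */
         static double ElementAt(const SubArrayView *v, int32_t r, int32_t c)
         {
             if (v->transposed)
             {
                 int32_t t = r; r = c; c = t;
             }
             return v->data[v->offset + (int64_t)r * v->cols + c];
         }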
  3. The download ZIP button is not meant to download an installer but rather an image of the source files as present in the GitHub project repository. Not sure I would consider this unfortunate, as one might expect users going to GitHub to know a bit about software development and the difference between a source code tree and a build package. SourceForge is in that respect a bit more clearly structured: it has a code section where you can browse the source code and download an image of it, and a files section where the project maintainer usually puts installers or executable packages built from the source tree for end users to download and use. If only it hadn't been acquired by Slashdot and turned into a cash machine with advertisements and download wrappers for popular projects, wrappers that try to force all kinds of adware onto a user's computer.
  4. I would question someone's engineering abilities if they find 150 Euros too expensive for something that can make the difference between a properly working system and one that regularly loses communication and/or trips the computer into a blue screen of death. If an engineer has to spend two hours debugging such an error caused by a no-name serial port adapter, the more expensive device has already paid for itself more than once. And two engineering hours are just the tip of the iceberg, with lost productivity, a bad image towards the customer and who knows what else not even counted in.
  5. Yes, as mentioned in post #12 by James David Powell, the VIPM license text attributes the individual names. The reason is that OpenG started like this more than 15 years ago, and it would be pretty impractical to get agreement from all authors to change that now, since some might not even be involved in LabVIEW work anymore and are impossible to contact. There is definitely nobody who has seriously considered doing that so far, and I'm not volunteering. I would guess that VIPM uses most of the OpenG libraries in one way or another and its license attribution is pretty complete, but I can't speak for the VIPM developers nor for JKI; they would really be the more appropriate people to contact about this. One other thing to consider here: if you only use OpenG inside projects that are used inside your company, your company is your own customer, and you maintaining the source code of the applications on a company provided source code control system (you do that, right???) takes care of all the license requirements of even more stringent open source licenses like the GPL. Of course you have to document such use, as otherwise an unsuspecting colleague may turn over a build of your application to a contractor or other third party and create a license violation that way. Only when you start to develop applications that your company intends to sell, lend, or otherwise make available to third parties without source code will you have to seriously consider the various implications of most open source licenses out there, with the BSD license being definitely one of the most lenient (with the exception of maybe the WTFPL (Do What the Fuck You Want to Public License), which some lawyers feel is so offensive that they dispute its validity). And of course there is Public Domain code, but again some lawyers feel that it is impossible to abandon copyright and that putting code into the Public Domain is therefore an impossibility. Isn't law great, and wouldn't life without lawyers be so easy?
  6. I'm not a lawyer, and as such my advice will definitely not help your company lawyers think differently. But IMHO, if you use OpenG functions you simply need to add, somewhere in your application or at least in your documentation, a reference to that fact together with the OpenG BSD style license text. This license text basically means you can not claim to have written those functions yourself, and you have no right to sue any OpenG developer in any way if your application causes a nuclear meltdown or similar. Not really that much different from commercial software, where you also don't usually get any right to claim damages if the software doesn't perform as you want. If you use the same license text in your application or documentation as is used by VIPM, you should be fairly safe. It lists more or less all the people who were at some point substantially active in providing functions to the Toolkit libraries. More than that really isn't there. You can also add a link to the OpenG Toolkit SourceForge site as an extra service for anyone who wants to check out where this all comes from.
  7. Yes it is. It's not just a few Windows API calls to create a menu and set it as the window's menu; you also need to somehow hook into the Windows message queue and intercept the according WM_COMMAND messages, handling them either directly or by redirecting them as a user event to an event structure. Not impossible to do, but I don't see how this could easily be made into a reusable library; a bare-bones sketch follows below. Also there is the potential that your menu command IDs might conflict with command IDs that LabVIEW is using itself, and that would mess up the whole thing pretty badly. Not sure LabVIEW even uses WM_COMMAND messages at all, since it doesn't really use Windows menus, but there is still a chance that it somewhere, somehow does use them for some reason.
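     A bare-bones sketch in C of what such a hook could look like, assuming you subclass the LabVIEW panel window procedure; the window handle, menu item ID and handler are all illustrative:

         #include <windows.h>

         #define IDM_MYITEM 40001  /* beware of IDs LabVIEW might use itself */

         static WNDPROC prevProc;

         /* Subclassed window procedure: intercept WM_COMMAND for our menu
            item, pass everything else on to the original procedure. */
         static LRESULT CALLBACK MenuProc(HWND hwnd, UINT msg,
                                          WPARAM wParam, LPARAM lParam)
         {
             if (msg == WM_COMMAND && LOWORD(wParam) == IDM_MYITEM)
             {
                 /* handle here, or forward as a LabVIEW user event */
                 return 0;
             }
             return CallWindowProc(prevProc, hwnd, msg, wParam, lParam);
         }

         void AttachMenu(HWND hwnd)  /* hwnd: the LabVIEW panel window */
         {
             HMENU menu = CreateMenu();
             AppendMenuW(menu, MF_STRING, IDM_MYITEM, L"My Item");
             SetMenu(hwnd, menu);
             prevProc = (WNDPROC)SetWindowLongPtrW(hwnd, GWLP_WNDPROC,
                                                   (LONG_PTR)MenuProc);
         }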
  8. It's clearly a bug. The encoding goes well and so does the actual JSON decoding, but somewhere between there and the conversion into a proper LabVIEW string handle things go awry, by using a standard C string function instead of allowing for embedded \0 characters. The sketch below illustrates the class of bug.
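     A minimal illustration in C of the suspected mechanism (the actual code in question is NI's, so this is purely an assumption about the class of bug): treating length-counted data as a C string truncates it at the first embedded '\0'.

         #include <stdlib.h>
         #include <string.h>

         /* Buggy: strdup() stops copying at the first '\0' byte */
         char *buggy_copy(const char *decoded, size_t len)
         {
             (void)len;             /* length is ignored; that's the bug */
             return strdup(decoded);
         }

         /* Correct: honor the explicit length, embedded '\0' and all */
         char *correct_copy(const char *decoded, size_t len)
         {
             char *out = malloc(len);
             if (out)
                 memcpy(out, decoded, len);
             return out;
         }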
  9. FTDI also gets counterfeited regularly. They even had a pretty bad issue when they released a driver through Windows Update that had specific code in it to clear the USB PID/VID of a chip. For genuine FTDI chips that did nothing, since those locations are not writable; for counterfeit chips it cleared the PID/VID and made the chip unusable, although with some low level tools one could fairly easily reprogram them back to a working state, at least if you used a Linux computer for that. On Windows it was more complicated and generally beyond most users' abilities. People got really mad at FTDI, with many promising to never again buy anything that contains FTDI chips, which is strictly speaking a bit hilarious since they hadn't bought anything containing a real FTDI chip in the first place. That the real culprits are the copycats, who sell cheap chips with the FTDI logo and save engineering costs by making their chips use the original FTDI device driver, is of course another story that is hard to sell to the average computer user.
  10. RS-232 may be an old standard, and pretty hard to do wrong, but that can't be said about the USB controller in a USB-to-RS-232 converter. They almost all use the same two types of chips, and the according chip manufacturer has released drivers that do work (most of the time). But these drivers aren't really industrial quality; they are more a reference design that the OEM should actually improve and stress test before releasing a product with that driver. However, most no-name manufacturers compete on price, not on the stability of their product, and they release the product as a copy-paste of the chip manufacturer's reference design with the standard reference driver. Their only added value to the whole thing is a more or less fancy casing around the chip and plug. And then you have the manufacturers who actually use copycat silicon in their products. There is no guarantee that such a product works the same under all circumstances. EMI and other environmental influences require specific considerations that are not really any concern of those copycat manufacturers. The only thing that counts is to sell as many chips as possible with as little expense as possible. There is no brand name that can be damaged, since their operation only works in the shadows anyway, and they have no intention of coming out with their own name for those products, as that would be admitting their IP theft and showing after whom the original IP owner needs to go. USB communication can be made pretty reliable, but that requires knowledge about both electrotechnical and electromagnetic matters as well as how to write a reliable device driver for any modern OS. And of course about the logic design of the chip itself, but that is another story.
  11. I can't say for sure, but at least under MacOS Classic LabVIEW indeed used the standard OS menu. However, at least the 64-bit version of LabVIEW for MacOSX had to undergo a serious rewrite, since Apple discontinued most of the Carbon APIs for 64-bit applications. Everything UI related, and many more things, most likely got moved to the Quartz and Cocoa APIs rather than the Carbon API. So during that rewrite many things could have changed completely. But it is quite probable that they did not draw the menu themselves, since it is really part of the OS and not part of the individual window.
  12. Don't think so. While it seems that LabVIEW does use the Qt framework somehow for some things, that is almost certainly not the case for standard window features like menus, scrollbars and its controls. These all originate from way before LabVIEW 8.0 or so, when some Qt DLL mysteriously appeared in the LabVIEW directory. LabVIEW draws its controls, menus and most anything else, including the scrollbars, itself, in an attempt to provide a multi-platform experience that looks and behaves as much the same as possible everywhere. One of the only things where it relies heavily on the platform itself is fonts.
  13. What Hooovahh already said. Windows APIs meant to work specifically on menus won't work for LabVIEW menus, since LabVIEW renders and handles its menus itself rather than using the Windows menu infrastructure. As far as Windows is concerned, the menu area in a LabVIEW window is simply normal client area.
  14. Nope. Binary shared library dependencies only get copied over during deployment on the Pharlap targets. All other targets need the binary dependencies to be either copied by hand (VxWorks targets only) or explicitly installed with a software install script from within MAX (the latest ZIP Toolkit beta does install these scripts onto the development machine when you install the Toolkit into a 32-bit LabVIEW for Windows version; other LabVIEW versions don't support realtime development anyway).
  15. Not really. PostLVUserEvent() isn't documented in any way that deserves the word documentation. Yes, it is mentioned in the External Code Reference manual, just like all the other public LabVIEW manager functions. But for most of them that documentation seldom goes further than the function prototype, some more or less meaningful parameter names, and a short description of each parameter that mostly repeats what can be derived from the parameter name anyway. The fact that you can register callback VIs for events other than .Net events is not only undocumented but has been mentioned by LabVIEW developers to be a byproduct of the event architecture of LabVIEW that shouldn't be relied upon to always work exactly the same. I never heard anyone mention that this could be used with PostLVUserEvent(), but it is a logical step considering that it does work for other LabVIEW user events, and I woke up this morning with the idea that this might be something to try out here. Nice that Jack confirmed this already, and the extra tidbit about the call being synchronous for callback VIs is interesting information too, although logical if you think about it. Of course it also allows the callback VI developer to easily block the whole system if he ends up accessing the same API again that invoked the callback VI! The sketch below shows the basic C side of posting such an event.
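     For reference, a minimal sketch of the C side, assuming a user event of type int32 created on the diagram and its refnum passed into the external code; the exported wrapper name is made up, while PostLVUserEvent() itself is the real manager function declared in extcode.h:

         #include "extcode.h"

         /* Fire a LabVIEW user event from external code. The data pointer
            must match the user event's data type (here: an int32); LabVIEW
            copies the data into the event queue. */
         __declspec(dllexport) MgErr FireInt32Event(LVUserEventRef *refnum,
                                                    int32 value)
         {
             return PostLVUserEvent(*refnum, &value);
         }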
  16. One possible solution might be what I did in the OpenG ZIP Library. There the open function is polymorphic and lets you select whether the subsequent operations should be performed on a file on disk or on a byte array stream. For the unzip-on-stream operation, the "stream" to read from is passed to the Open function as a LabVIEW string (it really should be a byte array, but traditionally LabVIEW uses strings for APIs that work on byte streams, like TCP/UDP, VISA etc.). For the zip-on-stream operation, the Close function returns a LabVIEW string containing the byte stream data. This isn't pluggable with user provided VIs that provide direct stream access as you intend, and it has the drawback that the entire data has to be in memory during the whole operation, but it is at least possible to implement reasonably.
  17. That's not going to work like that in this case, I'm afraid, without some means of synchronization. The FFmpeg library does not call the callback to send data to the client but to request the next chunk of data from the "stream". As such it is also not a classic callback interface but rather a stream API with a device driver handle that contains specific method function pointers for the various stream operations (open/read/write/seek/close). The library calls these functions and expects them to return AFTER the function has done the necessary work on the "stream". While not exactly impossible, it is a rather complicated and cumbersome interface. You could write VIs that act as the "callback"; as explained above it's not really the normal callback type but rather a driver API with a "handle" containing specific access methods as function pointers. It's a very popular C technique for pluggable APIs, but really only easily accessible from C too (see the sketch after this post). You could also see it as the C implementation of a C++ object pointer. Then compile those VIs into a DLL whose entry points you LoadLibrary() into your LabVIEW cluster mimicking that API structure with the function pointers. But there are many problems with that:
      1) The DLL will run in the LabVIEW runtime version that corresponds to the LabVIEW version used to create it. If your users are going to run this in a different LabVIEW version, there will be a lot of data marshalling between the user's LabVIEW system and the LabVIEW runtime in which the DLL runs.
      2) If you want to make this pluggable with user VIs, your DLL will somehow have to have a way of registering the user's LabVIEW VI server instance with it, so that your VI can proxy to the user VI through VI server, which adds even more marshalling overhead to the whole picture.
      3) Every little change will require you to recreate the LabVIEW DLL, distribute it somehow, and hope it doesn't break with some user's setup.
      4) The whole story of loading the DLL function pointers into a LabVIEW cluster to serve as your FFmpeg library handle is utterly ugly and error prone, and supporting both 32-bit and 64-bit LabVIEW versions will also require entirely different clusters that your code needs to conditionally use depending on the bitness of LabVIEW.
      5) The chance of getting this working reliably is pretty small; it will require lots and lots of work that needs to be rechecked every time you modify anything anywhere in that code, and it allows a user to actually mess up the whole thing if he is careless enough to go into those parts of the VIs and modify anything.
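     To illustrate the "handle with method function pointers" style of pluggable C API mentioned above (the names are illustrative, not the actual FFmpeg declarations):

         #include <stdint.h>
         #include <stdio.h>

         /* Driver handle: the library stores this and calls through the
            function pointers; each call must complete its work on the
            stream before returning. */
         typedef struct StreamHandle StreamHandle;
         struct StreamHandle
         {
             void *context;  /* user data, e.g. a FILE* or a socket */
             int     (*open) (StreamHandle *h, const char *name);
             int64_t (*read) (StreamHandle *h, uint8_t *buf, int64_t len);
             int64_t (*write)(StreamHandle *h, const uint8_t *buf, int64_t len);
             int64_t (*seek) (StreamHandle *h, int64_t offset, int whence);
             int     (*close)(StreamHandle *h);
         };

         /* One possible implementation: a plain file-backed stream */
         static int64_t fileRead(StreamHandle *h, uint8_t *buf, int64_t len)
         {
             return (int64_t)fread(buf, 1, (size_t)len, (FILE *)h->context);
         }

         static int64_t fileSeek(StreamHandle *h, int64_t offset, int whence)
         {
             if (fseek((FILE *)h->context, (long)offset, whence) != 0)
                 return -1;
             return (int64_t)ftell((FILE *)h->context);
         }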
  18. That's more or less how you need to do it with the published LabVIEW APIs. Nothing else will really work easier or better without going the route of undocumented LabVIEW manager calls. NI does have the possibility to call VIs from within C code directly. But that is only used within LabVIEW itself, not in external code, AFAIK, and that functionality may hit bad issues if used from external code. Lua for LabVIEW does something similar, but without using PostLVUserEvent(), as that wasn't really available when Lua for LabVIEW was developed. The Lua for LabVIEW API allows registering VIs in the C interface under a Lua function name. When the Lua bytecode interpreter encounters such a name, the call is passed back to a VI daemon (a background VI process that Lua for LabVIEW starts behind the scenes), and that daemon then pulls the parameters from the Lua stack, calls the VI, and pushes any return values back onto the Lua stack before handing control back to the Lua engine. Quite involved and tricky to handle correctly, but the only feasible way to deal with this problem. There is also a lot of sanity checking of parameters and their types necessary to avoid invalid execution and crashes, as you do want to avoid the situation where a user can crash the whole thing by making an error in the VI interface. The sketch below shows the per-call mechanics in plain C terms.
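     For the per-call mechanics, a minimal sketch using the standard Lua C API; a plain lua_CFunction stands in for the VI daemon proxy, which in Lua for LabVIEW forwards the call to a VI instead of computing the result itself:

         #include "lua.h"
         #include "lauxlib.h"

         /* Pull the parameters off the Lua stack, sanity-check their types,
            do the work, push the results back. */
         static int add_numbers(lua_State *L)
         {
             lua_Number a = luaL_checknumber(L, 1);  /* raises on bad type */
             lua_Number b = luaL_checknumber(L, 2);
             lua_pushnumber(L, a + b);
             return 1;  /* number of return values pushed */
         }

         /* Register the C function under a Lua function name */
         void register_functions(lua_State *L)
         {
             lua_register(L, "add_numbers", add_numbers);
         }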
  19. You have a misunderstanding here. LabVIEW really knows two variant types: the ActiveX variant, which is supposedly what you can also interface to with the cviauto.h file, and the native variant. When you pass a native variant to a Call Library Node configured to accept an ActiveX variant, LabVIEW converts from one to the other, which in fact means creating a complete copy of all the data inside. However, ActiveX variants do not know attributes in the sense the LabVIEW variant does. So those attributes are not only not converted, they can not be passed along with the ActiveX variant in any meaningful way and are therefore simply not present on the ActiveX side. While you can pass a native variant to the C code with the Adapt to Type configuration in the CLN, this doesn't buy you anything, since the C API to access the native variant data is not officially documented by NI. The sketch below shows the ActiveX side of this.
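     On the ActiveX side, what the external code receives is, by my understanding (an assumption worth verifying), a plain Win32 VARIANT, carrying just the data and nothing that could hold LabVIEW variant attributes:

         #include <windows.h>
         #include <oleauto.h>

         /* Read a VARIANT handed over by a CLN configured for ActiveX
            variants; coerce to double where possible. Note there is no
            field anywhere in VARIANT that could carry attributes. */
         __declspec(dllexport) double ReadVariantAsDouble(VARIANT *v)
         {
             VARIANT tmp;
             double result = 0.0;
             VariantInit(&tmp);
             if (SUCCEEDED(VariantChangeType(&tmp, v, 0, VT_R8)))
                 result = tmp.dblVal;
             VariantClear(&tmp);
             return result;
         }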
  20. Which LabVIEW version? According to my own tests some time ago, there was no way to get lvlib, lvclass or similar LabVIEW 8.x-and-later file types into an LLB.
  21. The project directory is fine for DLLs that you declare by name only inside the Call Library Node. However, for Windows the project directory of LabVIEW has absolutely no meaning. Say LabVIEW tries to load "a.dll" in your project directory, and "a.dll" depends on "b.dll" and "c.dll". After LabVIEW calls LoadLibrary() with the correct path for "a.dll", everything is out of LabVIEW's hands and only the Windows search path rules apply. That means Windows will not search in the directory your project file was loaded from, but in the directory where the current executable is located. For a built application this is the directory where your myapp.exe is located, but when you work in the LabVIEW IDE it is the install directory of LabVIEW itself, where labview.exe is located. One workaround is sketched below.
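     One way around this, sketched under the assumption that you load "a.dll" yourself through a small wrapper rather than directly through the CLN, is to extend the DLL search path before loading; SetDllDirectory() is the documented Win32 tool for this:

         #include <windows.h>

         /* Make b.dll and c.dll next to a.dll resolvable, regardless of
            where labview.exe or myapp.exe lives. */
         HMODULE LoadWithDependencies(const wchar_t *projectDir,
                                      const wchar_t *dllPath)
         {
             HMODULE mod;
             SetDllDirectoryW(projectDir);  /* add dir to the search order */
             mod = LoadLibraryW(dllPath);
             SetDllDirectoryW(NULL);        /* restore the default order */
             return mod;
         }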
  22. This is probably one of your problems. DLLs that are directly called from LabVIEW VIs can be configured to be moved into any folder by the Application Builder (the default is the "data" folder), as the Application Builder will adjust the library path in the Call Library Node to point to that location when building the application. However, secondary dependencies are not resolved by LabVIEW but either by Windows or, very seldom, by the DLL explicitly. Windows knows absolutely nothing about a "data" folder and will NOT search in there for DLLs unless you happen to add that directory to the PATH environment variable, which is not a good solution anyhow. Instead you need to move these secondary DLLs into the same directory as your executable. That is always the first directory Windows searches when asked to load a DLL. I usually modify the Application Builder script to install all DLLs into the main directory instead of into a separate data subdirectory.
  23. Well, I never worked with ATM directly, but I did early on work in a company which made telecommunication products, and one of the products developed there did use ATM. As it would seem, the fact that whatever you must implement uses ATM isn't really the main issue here. There is nothing like a standard API for ATM on modern computer platforms. So the question really boils down to: how is your computer even connected to the ATM network, and do you have documentation about the API for the driver of that card?
  24. I did, using a variant of the factory pattern; a rough equivalent in C terms is sketched below. It was an instrument driver for a USB serial device that could be one of two different device types. I implemented the low level driver for each device as a class derived from the main class, which was the actual interface used by a user of the driver. Depending on a selection by the user, one of the two low level drivers was instantiated and used for the actual implementation of the device communication. Worked fine, except that you need to do some path magic to allow for execution on the RT target. It's mostly the same as what you need to do for execution in a built application, but there is a difference between the paths when you deploy the program directly from the project (during debugging for instance) and when you build an rtexe application and execute that.
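     The shape of that factory, sketched in C with function pointers standing in for the LabVIEW classes (all names are made up for illustration):

         #include <stddef.h>

         /* Common device interface: all a user of the driver ever sees */
         typedef struct Device Device;
         struct Device
         {
             int (*open) (Device *d, const char *resource);
             int (*read) (Device *d, unsigned char *buf, size_t len);
             int (*close)(Device *d);
         };

         typedef enum { DEVICE_TYPE_A, DEVICE_TYPE_B } DeviceType;

         /* Stub methods standing in for the real low level drivers */
         static int stubOpen(Device *d, const char *r) { (void)d; (void)r; return 0; }
         static int stubRead(Device *d, unsigned char *b, size_t n) { (void)d; (void)b; (void)n; return 0; }
         static int stubClose(Device *d) { (void)d; return 0; }

         static Device typeA = { stubOpen, stubRead, stubClose };
         static Device typeB = { stubOpen, stubRead, stubClose };  /* would differ in reality */

         /* Factory: instantiate the implementation the user selected */
         Device *CreateDevice(DeviceType type)
         {
             switch (type)
             {
                 case DEVICE_TYPE_A: return &typeA;
                 case DEVICE_TYPE_B: return &typeB;
                 default:            return NULL;
             }
         }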
  25. That's not a good idea!! The new 64-bit shared library has various changes that will not work at all without the updated ZIP VI library and support files. The VIs as they are in the SourceForge repository are the updated ones. A new package needs to be built, but I have delayed that since there are still some issues with additional functionality I wanted to include. Attached here is an early beta of a new package which adds support for 64-bit Windows. The MacOSX support hasn't been added to the package yet, so that part won't work at all. What it does contain is an installer for support of the NI realtime targets. This RT installer will however only get installed if you install the package into 32-bit LabVIEW for Windows, since that is the only version which supports realtime development so far. Once it is installed you can go into MAX and select to install extra software for your RT target. Then select custom install, and in there should be an OpenG ZIP Toolkit entry which will make sure the necessary shared library is installed on your target. For deflate and inflate alone, replacing the shared library may indeed be enough, but trying to run any of the other ZIP library functions has a very big chance of crashing your system if you mix the new shared library with the old VIs. That package was released in 2011; 64-bit LabVIEW already existed then (since LabVIEW 2009), but VIPM didn't know about 64-bit LabVIEW yet, and one could not even attempt to convince VIPM to make a difference there. Also, the updated package was mainly just a repackaging of the older version from 2009 to support the new VI palette organization, and nothing else.
     oglib_lvzip-4.1.0-b3.ogp