Rolf Kalbermatter

Everything posted by Rolf Kalbermatter

  1. It should, AFAIK, at least if you as the application developer legally own the LabVIEW for Linux development system.
  2. By this reasoning you wouldn't be able to add anything from LAVA either, including this library. I find that a bogus reason. Strictly speaking, even your already existing reuse libraries and any newly developed VIs for a specific project would have to be considered unapproved under this aspect. Yes, you can write unit tests and whatever else to get them stamped as some sort of approved, but so can you with 3rd-party libraries. I'm not saying you are wrong here, but trying to point out that the hysterical fear of contaminating something with open source and whatnot is leading down a very slippery path for sure.
  3. The latest package was done by Jonathan Green. I guess he missed the platform limitations when porting the package file to VIPM. Personally I would say the LargeFile library doesn't really make sense post-8.21 at all, and even in 8.20 only very little. I'm not using VIPM for package generation, so I can't modify the package configuration myself.
  4. The availability for all platforms is definitely an error. I'm not sure when that got into the Toolkit. I only wrote the library years ago and never really packaged it. As to LabVIEW versions: since LabVIEW 8.0 almost all file functions use 64-bit offsets, and since about 8.20 they actually work mostly fine. So it makes no sense to include this library in a Toolkit that targets LabVIEW 2009 and later only.
  5. For (L)GPL software (which applies to the DLL portion of the libraries that have one) you do need to mention it both in compiled apps and in the source code (the second is logical, as you cannot remove existing copyright notices from the source code). All versions of the BSD license also have one clause for source and one for binary distribution, and both of them apply independently of each other. Since the LabVIEW part of OpenG is basically BSD, the answer to your question would therefore be a clear yes! I would consider it enough to mention in the About box the fact that you use the OpenG libraries and then add a license.txt or similar file to the installation where you print the license text. If your app makes use of the OpenG ZIP, LabPython or another library using a DLL, you should strictly speaking also add the relevant LGPL license text and point to the OpenG project on SourceForge, where one can download the source code of those libraries.
  6. Actually it's not! You are right that the shared library will refuse to work if your clock is set after June 2010 or so, by simply posting a dialog at runtime. But the cause of the original refnum problem is that this library makes use of so-called user refnums. These are defined by resource files that get installed by the package, and in order for those refnums to be valid the corresponding resource files have to be loaded by LabVIEW, which it only does at startup (at least I'm not aware of any VI Server method to do that at runtime too, like the Refresh Palettes method VIPM uses after installation of a new package). I'll have a look at the library soon and see if I can do anything to resurrect it, but feedback has been very limited, so I was simply assuming that nobody was using it. Please note that the SSL support of that library is really minimal. It allows you to get an https: connection up and running but lacks any and all support to modify properties and access methods of the SSL context to change its behavior or, for instance, add private certificates to it.
  7. In addition to what asbo said, testing with no-name adapters is anything but conclusive. Some things can work with a particular hardware device, depending on what driver version gets installed, and then stop working after some seemingly unconnected change like a Windows update. So even if NI can conclude one thing, it may only be valid for that exact HW/SW combination on that computer and behave rather differently on other setups. Sometimes NI HW may seem expensive in comparison to no-name, or semi no-name Asian low-cost devices, but there is a difference in what you get. One is a hardware device where the people at NI actually sat down and wrote a specific driver for it, by people who have serious device driver development experience; the other is usually a minimized copy of the reference design from the chip manufacturer, often with a completely unaltered device driver from that same chip manufacturer. However, reference designs are usually not meant to be end-user sellable items but developer platforms to "develop" a product with. The device drivers provided with those reference designs are at best a starting framework but seldom a fully featured device driver. FTDI drivers are an exception in that respect, as they are already pretty feature complete, although support for things like line breaks or no-data detection is still not something I would rely upon from a reference design driver. That's the reason why FTDI-based adapters are usually fairly functional for standard serial port applications, no matter which no-name producer sells them. Most no-name manufacturers wouldn't know what button to push to compile their own device driver!
  8. Yes, hardware exposure to digital logic certainly helps the understanding. If you think of integers as a simple register or counter, the oddball in the mix is rather the signed integer than the unsigned one, due to its use of two's complement! Think about it. The MSB is the sign!
     -128 => 0x80, -1 => 0xFF, 1 => 0x01, 127 => 0x7F
     A naive approach could instead be (offset notation): -128 => 0x00, 127 => 0xFF, or (separate sign bit): -127 => 0xFF, -0 => 0x80, 0 => 0x00, 127 => 0x7F. The reason computers use two's complement is that all the others are rather difficult to implement efficiently in logic, for addition and especially subtraction. I have seen such code too in the past and simply assumed that the original programmer either didn't think further than his nose, or may have used signed integers before and then, in a frenzy to avoid coercion dots, changed them to unsigned without reviewing all the code and noticing the now superfluous positive check. In one or two cases the effective code was in fact in the negative branch of the case structure, which of course could never happen, so that made me scratch my head a bit.
  9. I'm not sure where the claim comes from that understanding unsigned numbers is difficult. It supposedly is the reason that Java doesn't have unsigned integers, since the original Java inventors claimed that nobody understands them correctly anyhow, so a language had better not provide support for them. I still have the feeling that they did not understand it themselves and assumed that what was true for them must be true for everyone. Now if you need to implement parsing of a binary protocol that contains unsigned integers, you really have to go willy-nilly to get the right result in Java.
  10. PID, being a well-defined algorithm, indeed has a high chance of resulting in similar code. But to be safe it is definitely a good idea to work from the textbook description of the algorithm and not from looking at another implementation of it. Open Source development, for instance, usually allows for so-called clean room development. It means it is considered permissible for someone to use reverse engineering practices to produce a textbook description of the interface and its requirements, and for someone else to use that description to implement the code. But the reverse engineer has to be careful not to describe the internal algorithm in more detail than absolutely necessary to allow for a compatible implementation. The implementation of an algorithm can be copyrighted; its workings cannot. That is possibly a case for patent protection, another can of worms. But much of the confusion about copyright also comes from confusing copyright with patent right. One protects the form, or specific implementation, of an idea; the other more the content of the idea itself.
  11. I consider it in fact one of the more successful ones. I'm no Apple fanboy by a long stretch, but not everything Apple does is bad. I have not seen any speech recognition solution yet that didn't have various troubles in one way or another, and some simply did not work at all. Designing algorithms to recognize perfectly recorded voice isn't that complicated, but we usually do not want to go into a multi-million-dollar recording studio to dictate a few commands to our little smartphone.
  12. Well, ask Open Source programmers! Many think that looking at non-free code is already more than enough to endanger an Open Source project. Wine, for instance, has a clear policy there. Anyone who has had access to Windows source code is not welcome to provide any code patches. They can do testing, documentation and such things, but source code patches are refused if the project leaders have any suspicion that the person sending in the patch might have been exposed to that code, either through the Microsoft Shared Source Initiative or the illegally leaked source code from a few years ago. They also state explicitly that someone who has looked at the (incomplete) source code of the C runtime or MFC library, which has come with every Visual Studio installation for many years, should not attempt to provide any patches to code related to those areas. If the submitted code raises suspicions of such influence, it is refused. For many years they even refused code patches from people involved in the ReactOS project, another Open Source project trying to create a Windows-compatible kernel, not building on Linux but sitting directly on top of the BIOS interface, meaning it is a fully featured OS in itself, because some of the contributors to that project have more or less openly admitted to disassembling Windows for reverse engineering purposes. So not only is exposure to source code a serious risk for creating copyright-challenged source code, but so is looking too closely at the compiled product of such source code. Some Open Source programmers even refuse to look at GPL source code, since they believe it poses a risk if you do not plan to release your own source code under the (L)GPL yourself but under a different, possibly more permissive open source license like BSD. Copying GPL source code into anything non-GPL is in any case a sure way to a copyright violation.
Memorizing source code and recreating it is more complicated, but could in many jurisdictions already be a serious legal risk. And very often the question is not who is more right, but who has the longer financial breath to go through all the legal procedures. So be careful about offering to recreate copyrighted code. NI may not be interested in going after you in general, or where you currently live, or for whatever other reason, but many little things like this could build up to something undesirable in the future. You also have to think about such things anyhow. Just doing it always carries the danger of so-called sliding perception: if this hasn't caused problems today, I should be fine going a little further tomorrow, and even further next week, and before you are aware of it you operate in truly dangerous territory.
  13. It may not be very much if the purpose of that project is to actually do some work on the speech recognition algorithms themselves and not just create an application that can do speech recognition. However, many companies have tried to get well-working speech recognition software designed, and more than one of them failed. So it is definitely not trivial, and it is an area of expertise where only very few people know the in-depth details. Most of them probably aren't here on LAVA but on more special-interest boards in that area.
  14. I think you might still end up with interlinking problems, at least in some versions of LabVIEW. I know that LabVIEW will revisit ALL CLNs loaded into memory that link to a specific DLL name if you change one CLN to load this DLL name from a different location. It should of course completely ignore anything inside a conditional disable structure, but I'm sure it didn't in some versions. Also, by simply installing the right DLL for the platform you can avoid the conditional disable structure AND develop on only one platform, without the need to load the VIs on both platforms and make edits. Of course you have to test on the other platform too, but you don't have to load each VI on every platform every time you change something on the CLN.
  15. That's what he seems to point at here: And it indeed adds an extra hassle to building an application with such a library. But I don't think the solution is allowing even more wildcard options to also specify relative path patterns. A 64-bit DLL simply shouldn't be installed in a 32-bit app installation and vice versa. If VIPM supported specifying platform settings for individual files or file groups, that would be quite a moot point, I think. I solved it for myself by creating my own improved version of the OpenG Package Builder, which supports that option. (The improvements were not to support this; it already does! But its user interface and some other options are a bit limited. Also note that VIPM provides quite a different UI for defining a package: more user-friendly, but harder to add the feature of specifying platforms and versions for individual files.)
  16. The proper solution is to use some kind of installation tool like VIPM and install either DLL into the same directory, depending on the bitness of the installation target. Ohh wait, VIPM doesn't support file-specific version settings when building a package! Well, the good old OpenG Package Builder does! And its opg file format is still used and recognized by VIPM too. If you want to stay with VIPM you have to use the PostInstall step to make those modifications after the installation. Your proposed solution has at most some hackish merits. It will cause many problems further down the road, as you will edit the VIs and sometimes forget to follow the very strict VI open protocol needed to avoid them. And even if you won't, others using your library surely will.
  17. Well, for simplicity I consider variable-sized messages with prepended size information similar enough to fixed-size messages, as in both cases the size is known before you start the read operation. And the link to the VISA Abort VI can be found here!
  18. Fixed-block-size binary messages also fall under the general group of terminated messages, as I indicated in my post. You may have to read a data packet in more than one read, to retrieve for instance block length indications for variable-sized data, but each block in itself is always of a specific size that is known before you start the read. Normal device communication is always based on the principle that you send a request and then receive some kind of response, and each response is usually terminated either by a termination character or by well-known block sizes. This in fact only leaves Bytes at Serial Port for situations where the device spews data without having been queried first (or where you simulate a device, which of course needs to check for new commands regularly). Even here, Bytes at Serial Port should normally only be used to determine that there is any data at all, and the more protocol-specific termination method should be used to read that data, instead of using the value from Bytes at Serial Port to read that many bytes. And VISA Read can be aborted, just not with a native VISA node. A little Call Library Node calling viTerminate does actually work fine; I just can't find the post where I put this VI up a few days ago.
  19. Don't do that unless you are writing some kind of HyperTerminal clone. For any real-world instrument communication, using Bytes at Serial Port is in 99.9% of cases a bad choice. If I had a say, I would disable the Bytes at Serial Port property and only make it available if the user enters something like neverReallyCorrectVISAUse=True in LabVIEW.ini. Proper instrument communication should ALWAYS use some kind of termination condition. This can be a termination character (CRLF for RS-232), a handshake signal (EOI on GPIB) or, in the case of binary communication, often a fixed-size protocol. Using Bytes at Serial Port either results in regularly cut-off messages or in VERY complex handling around the Read operation, to analyse any read buffer and store any superfluous data in a shift register or something for use with the next read.
  20. Since you cannot look at the source code of both LabVIEW and, probably more interestingly, the Windows kernel and how it routes SendInput() events along the BIOS keyboard interface, it is hard to say where the problem could be (not that insight into both source codes would likely help much without VERY intense study; this is most likely one of the more delicate parts of the Windows kernel, where a lot of code has accumulated over the years for backwards compatibility, bug circumvention, circumvention of bug circumvention, and so on).
  21. Well, if it were a LabVIEW-only problem it would also happen on non-Parallels installations, and as far as I can see that is not the case. It must have to do with the particular way Parallels generates dead keys in its keyboard virtualization driver and how LabVIEW interprets them. That LabVIEW is most likely not just doing the normal standard processing is obvious, but whether that is illegal in terms of the Windows API and just happens to work in all real-world hardware scenarios, or whether Parallels is messing up somehow when simulating the keyboard BIOS interface, is not possible to say at this point. If you really want to spend more time on it, I would try to install VirtualBox and check with that.
  22. I wouldn't put my hand in the fire and promise that an inline VI will always be inlined, but I was under the impression that, unless you do something else non-standard in the VI settings, this should be the case anyhow. It's possible that a newer LabVIEW version might introduce a complexity threshold above which a VI would no longer get inlined, but I'm not aware of such an option yet (which of course doesn't mean it couldn't already be there)!
  23. Why do you think that the reentrancy setting would still have any influence when VIs are inlined? Basically they then all have their own data space AND code path anyhow (using the data space of the VI they are inlined into and a code path that is copied verbatim into that VI), so there is no possible contention from having to protect the data space and/or code execution from multiple concurrent accesses. And RT has no influence on that. Preallocated is best on RT as it minimizes the amount of memory allocations and reallocations, but since there are no clones that could compete for data space allocations here anyhow, this setting is again irrelevant for inlined VIs. That doesn't mean it has to be irrelevant for the VI that contains the inlined VIs, but I think that should be obvious.
  24. Looks a bit like what I have done with LabPython. I think it has some merit to have a simpler interface, but LuaVIEW was developed to allow both calling Lua scripts from LabVIEW and calling back into LabVIEW from Lua scripts. And that was actually a requirement, not just a wish, for the project at hand. LuaVIEW 2.0 will support binary modules too (it could be made to work in LuaVIEW 1.2.1, as I created luavisa and luainterface for a client project that integrated LuaVIEW 1.2.1, but it was indeed not exactly trivial to do).