Rolf Kalbermatter

Everything posted by Rolf Kalbermatter

  1. I'm not sure where the claim comes from that understanding unsigned numbers is difficult. It is supposedly the reason that Java doesn't have unsigned integers: the original Java inventors claimed that nobody understands them correctly anyhow, so a language had better not provide support for them. I still have the feeling that they did not understand them themselves and assumed that what was true for them must be true for everyone. Now if you need to implement parsing of a binary protocol that contains unsigned integers, you really have to jump through hoops to get the right result in Java.
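In C the distinction is explicit in the type system. A minimal sketch of the sign-extension pitfall that Java code has to work around when parsing binary protocols (function names here are illustrative, not from any real library):

```c
#include <stdint.h>

/* Correct: parse a little-endian unsigned 16-bit value from raw bytes. */
static uint16_t read_u16le(const uint8_t *buf)
{
    return (uint16_t)(buf[0] | (buf[1] << 8));
}

/* The pitfall Java forces on you: with signed bytes, 0x80 sign-extends
   to a negative int and corrupts the assembled value. */
static int read_u16le_buggy(const signed char *buf)
{
    return buf[0] | (buf[1] * 256);   /* buf[1] = 0x80 becomes -128 */
}
```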
  2. PID being a well-defined algorithm, there is indeed a high chance of coming up with similar code. But to be safe it is definitely a good idea to work from the textbook description of the algorithm and not from looking at another implementation of it. Open Source development, for instance, usually allows for so-called clean room development: it is considered permissible for someone to use reverse engineering practices to produce a textbook description of the interface and its requirements, and for someone else to use that description to implement the code. But the reverse engineer has to be careful not to describe the internal algorithm in more detail than absolutely necessary to allow a compatible implementation. The implementation of an algorithm can be copyrighted; the workings of it cannot. That is possibly a case for patent protection, another can of worms. Much of the confusion about copyright also comes from confusing copyright with patent rights: one protects the form, or specific implementation, of an idea, the other more the content of the idea itself.
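Working from the textbook formula rather than from existing code, a clean-room PID step in C might look like this (names and structure invented here, taken from no existing implementation):

```c
/* Textbook PID: u = Kp*e + Ki * integral(e dt) + Kd * de/dt */
typedef struct {
    double kp, ki, kd;   /* proportional, integral, derivative gains */
    double integral;     /* accumulated error */
    double prev_error;   /* error from the previous step */
} pid_state;

double pid_step(pid_state *c, double setpoint, double measured, double dt)
{
    double error = setpoint - measured;
    c->integral += error * dt;
    double derivative = (error - c->prev_error) / dt;
    c->prev_error = error;
    return c->kp * error + c->ki * c->integral + c->kd * derivative;
}
```

Anything beyond this structure (anti-windup, derivative filtering) would again come from the control-theory literature, not from someone else's source.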
  3. I consider it in fact one of the more successful ones. I'm no Apple fanboy by a long stretch, but not everything Apple does is bad. I have not yet seen any speech recognition solution that didn't have trouble in one way or another, and some simply did not work at all. Designing algorithms to recognize perfectly recorded voice isn't that complicated, but we usually do not want to go into a multi-million-dollar recording studio to dictate a few commands to our smartphone.
  4. Well, ask Open Source programmers! Many think that merely looking at non-free code is already enough to endanger an Open Source project. Wine, for instance, has a clear policy there: anyone who has had access to Windows source code is not welcome to provide any code patches. They can do testing, documentation and such things, but source code patches are refused if the project leaders have any suspicion that the person sending in the patch might have been exposed to that code, either through the Microsoft shared source initiative or the illegally leaked source code of a few years ago. They also state explicitly that someone who has looked at the (incomplete) source code of the C runtime or the MFC library, which has come with every Visual Studio installation for many years, should not attempt to provide any patches to code related to those areas. If submitted code raises suspicion of such influence, it is refused. For many years they even refused code patches from people involved in the ReactOS project, another Open Source project trying to create a Windows-compatible kernel, not building on Linux but sitting directly on top of the BIOS interface, meaning it is a fully featured OS in itself, because some of the contributors to that project have more or less openly admitted to disassembling Windows for reverse engineering purposes. So not only is exposure to source code a serious risk for creating copyright-challenged code, but so is looking too closely at the compiled product of such source code. Some Open Source programmers even refuse to look at GPL source code, since they believe it poses a risk if you do not plan to release your own source code under the (L)GPL yourself, but under a different, possibly more permissive open source license like BSD. Copying GPL source code into anything non-GPL is in any case a sure way to a copyright violation.
Memorizing source code and recreating it is more complicated, but in many jurisdictions it could already be a serious legal risk. And very often the question is not who is more right, but who can financially sustain all the legal procedures the longest. So be careful about offering to recreate copyrighted code. NI may not be interested in going after you in general, or where you currently live, or for whatever other reason, but many little things like this could build up to something undesirable in the future. You also have to think about such things anyhow: just doing it always carries the danger of the so-called sliding perception. If this hasn't caused problems today, I should be fine going a little further tomorrow, and even further next week, and before you are aware of it you are operating in truly dangerous territory.
  5. It may not be very much if the purpose of that project is to actually do some work on the speech recognition algorithms themselves and not just create an application that can do speech recognition. However, there have been many companies trying to get well-working speech recognition software designed, and more than one of them failed. So it is definitely not trivial, and it is an area of expertise with only very few people knowing the in-depth details. Most of them probably aren't here on LAVA but on special interest boards dedicated to that area.
  6. I think you might still end up with interlinking problems, at least in some versions of LabVIEW. I know that LabVIEW will revisit ALL CLNs loaded into memory that link to a specific DLL name if you change one CLN to load this DLL name from a different location. It should of course completely ignore anything inside a conditional disable structure, but I'm sure it didn't in some versions. Also, by simply installing the right DLL for the platform you can avoid the conditional disable structure AND develop on only one platform, without the need to load the VIs on both platforms and make edits. Of course you have to test it on the other platform too, but you don't have to load each VI on every platform every time you change something on the CLN.
  7. That's what he seems to point at here: And it indeed adds an extra hassle to building an application with such a library. But I don't think the solution is to allow even more wildcard options to also specify relative path patterns. A 64-bit DLL simply shouldn't be installed on a 32-bit app installation, and vice versa. If VIPM supported specifying platform settings for individual files or file groups, that would be quite a moot point, I think. I solved it for myself by creating my own improved version of the OpenG Package Builder, which supports that option. (The improvements were not to support this; it already does! But its user interface and some other options are a bit limited. Also note that VIPM provides quite a different UI for defining a package: more user friendly, but harder to extend with a feature to specify platforms and versions for individual files.)
  8. The proper solution is to use some kind of installation tool like VIPM and install either DLL, depending on the bitness of the installation target, into the same directory. Oh wait, VIPM doesn't support file-specific version settings when building a package! Well, the good old OpenG Package Builder does! And its opg file format is still used and recognized by VIPM too. If you want to stay with VIPM you have to use the PostInstall step to make those modifications after the installation. Your proposed solution has at most some hackish merits. It will cause many problems further down the road, as you will edit the VIs and sometimes forget to follow the very strict VI open protocol needed to avoid them. And even if you won't, others using your library surely will.
  9. Well for simplicity I consider variable sized messages with prepended size information similar enough to fixed size messages, as in both cases the size is known before you start the read operation. And the link to the VISA Abort VI can be found here!
  10. Fixed-block-size binary messages also fall under the general group of terminated messages, as I indicated in my post. You may have to read a data packet in more than one read, for instance to retrieve block length indications for variable-sized data, but each block in itself is always of a specific size that is known before you start the read. Normal device communication is always based on the principle that you send a request and then receive some kind of response, and each response is usually terminated either by a termination character or by well-known block sizes. This really only leaves Bytes at Serial Port for situations where the device spews data without having been queried first (or where you simulate a device, which of course needs to check for new commands regularly). Even here, Bytes at Serial Port should normally only be used to determine that there is any data at all; the protocol-specific termination method should then be used to read that data, instead of using the value from Bytes at Serial Port to read that many bytes. And VISA Read can be aborted, just not with a native VISA node. A little Call Library Node calling viTerminate does actually work fine; I just can't find the post where I put this VI up a few days ago.
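The "size is known before the read" case can be sketched in C: read a fixed-size header first, then exactly the announced number of payload bytes. `read_fn` stands in for whatever transport read you actually use (VISA Read, TCP Read); all names and the 2-byte length format here are illustrative assumptions:

```c
#include <string.h>
#include <stdint.h>

typedef int (*read_fn)(void *ctx, uint8_t *buf, int n);

/* Read one length-prefixed message: a 2-byte little-endian length,
   then exactly that many payload bytes. Returns payload length or -1. */
int read_message(read_fn rd, void *ctx, uint8_t *payload, int max)
{
    uint8_t hdr[2];
    if (rd(ctx, hdr, 2) != 2) return -1;
    int len = hdr[0] | (hdr[1] << 8);
    if (len > max) return -1;
    if (rd(ctx, payload, len) != len) return -1;
    return len;
}

/* Memory-backed reader standing in for the real transport. */
typedef struct { const uint8_t *data; int pos, size; } mem_ctx;

int mem_read(void *vctx, uint8_t *buf, int n)
{
    mem_ctx *c = vctx;
    if (c->pos + n > c->size) n = c->size - c->pos;
    memcpy(buf, c->data + c->pos, n);
    c->pos += n;
    return n;
}
```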
  11. Don't do that unless you are writing some kind of Hyperterminal clone. For any real-world instrument communication, using Bytes at Serial Port is in 99.9% of cases a bad choice. If I had a say, I would disable the Bytes at Serial Port property and only make it available if the user enters something like neverReallyCorrectVISAUse=True in LabVIEW.ini. Proper instrument communication should ALWAYS use some kind of termination condition. This can be a termination character (CRLF for RS-232), a handshake signal (EOI on GPIB), or, in the case of binary communication, often a fixed-size protocol. Using Bytes at Serial Port either results in regularly cut-off messages or in VERY complex handling around the Read operation to analyse each read buffer and store any superfluous data in a shift register or similar for use with the next read.
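The terminator-based approach can be sketched the same way: accumulate bytes until the termination character arrives and let the read's own timeout do the waiting, instead of acting on whatever byte count happens to be in the buffer at poll time. `next_byte` stands in for a single-byte read with a timeout; all names are invented for illustration:

```c
typedef int (*next_byte)(void *ctx);   /* next byte, or -1 on timeout */

/* Accumulate bytes until `term` is seen; returns message length or -1. */
int read_until(next_byte next, void *ctx, char term, char *out, int max)
{
    int n = 0;
    for (;;) {
        int b = next(ctx);
        if (b < 0) return -1;            /* timeout or link error */
        if ((char)b == term) { out[n] = '\0'; return n; }
        if (n >= max - 1) return -1;     /* message too long */
        out[n++] = (char)b;
    }
}

/* String-backed byte source standing in for the serial port. */
int str_next(void *vctx)
{
    const char **p = vctx;
    return **p ? (unsigned char)*(*p)++ : -1;
}
```

Note that nothing past the terminator is consumed, so no leftover-buffer bookkeeping in a shift register is needed.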
  12. Since you cannot look at the source code of either LabVIEW or, probably more interestingly, the Windows kernel and how it routes SendInput() events along the BIOS keyboard interface, it is hard to say where the problem could be (not that insight into both source codes would likely help much without VERY intense study; this is most likely one of the more delicate parts of the Windows kernel, where a lot of code has accumulated over the years for backwards compatibility, bug circumvention, circumvention of bug circumvention, and so on).
  13. Well, if it were a LabVIEW-only problem it would also happen on non-Parallels installations, and as far as I can see that is not the case. It must have to do with the particular way Parallels generates dead keys in its keyboard virtualization driver and how LabVIEW interprets them. That LabVIEW is most likely not just doing the normal standard processing is obvious, but whether that is illegal in terms of the Windows API and just happens to work in all real-world hardware scenarios, or whether Parallels is messing up somehow when simulating the keyboard BIOS interface, is impossible to say at this point. If you really want to spend more time on it, I would try to install VirtualBox and check with that.
  14. I wouldn't put my hand in the fire and promise that inline VIs will always be inlined, but I was under the impression that, unless you do something else non-standard in the VI settings, this should be the case anyhow. It's possible that a newer LabVIEW version might introduce a complexity threshold above which a VI would not get inlined, but I'm not aware of such an option yet (which of course doesn't mean it couldn't already be there)!
  15. Why do you think that the reentrant setting would still have any influence when VIs are inlined? Basically they then all have their own data space AND code path anyhow (using the data space of the VI they are inlined to and the code path which is copied verbatim into that VI), so no possible contention from having to protect data space and/or code execution from multiple concurrent accesses. And RT has no influence on that. Preallocated is best on RT as it minimizes the amount of memory allocations and reallocations, but since there are no clones that could compete for data space allocations anyhow here, this setting is again irrelevant for inlined VIs. That doesn't mean it has to be irrelevant for the VI that contains the inlined VIs but I think that should be obvious.
  16. Looks a bit like what I have done with LabPython. I think it has some merit to have a simpler interface, but LuaVIEW was developed to allow both calling Lua scripts from LabVIEW and calling back into LabVIEW from Lua scripts. And that was actually a requirement, not just a wish, for the project at hand. And LuaVIEW 2.0 will support binary modules too (this could be made to work in LuaVIEW 1.2.1, as I created luavisa and luainterface for a client's project that integrated LuaVIEW 1.2.1, but it was indeed not exactly trivial to do).
  17. The LabVIEW key handling will definitely not be based on DirectX access but work directly on the events returned by Windows in its internal GetMessage() loop. I haven't seen such an issue yet, but I'm not using Macs much, and definitely not with Parallels. I do use VirtualBox quite regularly, but with a Windows host and Windows and Linux guests. I would love a Mac guest too, but that is just too much trouble to get working reliably. And those VMs sure can do some weird things when passing down events and other stuff to the guest OS. For instance, I can reliably crash my computer completely, without even a BSOD, if I start up the Windows 7 64-bit VM after my host Windows system has been operating for quite some time. After a fresh restart of the host it never crashes.
  18. This thing doesn't exist and would be more or less impossible to compile, as there are potentially many differences between LabVIEW versions. Basically, error handling is best done on the principle that if there is an error, and it is not a very specific error you know to be a legitimate result in this particular situation (e.g. a timeout when communicating with an instrument or over the network), you should always assume a real error and bail out in one way or another. Code that does extensive if (error == xx) else if (error == yy) is always going to be a pain when upgrading to a newer LabVIEW version (or just installing a new IO driver), since these error codes can sometimes change (when, for instance, more descriptive error codes are introduced). That is my principle anyhow; maybe there are others, but I would consider them unmaintainable over a longer time. My code often does suppress timeout errors in the error cluster: it detects the timeout case and then simply goes back to waiting for the next message. Also, when you do network communication you have, besides timeouts, other errors such as connection closed by peer, which are real errors in terms of the communication link, but should most likely be handled explicitly by your protocol handler by closing the connection and reconnecting to the peer, without causing any error report to the higher hierarchy of your application. But in general, unless you know a specific error should be handled in a certain way, you should treat any error as a simple indication to bail out and prompt the user or log the error or something.
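The principle amounts to a small classifier: a short whitelist of errors you know to be legitimate in this situation, and a bail-out default for everything else. The code values below are invented for illustration (they merely resemble LabVIEW's TCP timeout and connection-closed-by-peer codes):

```c
enum { ERR_NONE = 0, ERR_TIMEOUT = 56, ERR_CONN_CLOSED = 66 };

/* 0 = no error, 1 = known benign (retry/reconnect), -1 = bail out. */
int classify_error(int code)
{
    switch (code) {
    case ERR_NONE:        return 0;
    case ERR_TIMEOUT:     return 1;   /* go back to waiting */
    case ERR_CONN_CLOSED: return 1;   /* reconnect to the peer */
    default:              return -1;  /* unknown: report and abort */
    }
}
```

The point is the shape of the default branch: anything not explicitly whitelisted is treated as fatal, so a driver upgrade that introduces new codes fails loudly instead of being silently misinterpreted.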
  19. Well, you are right that Flatten Variant doesn't work. I was misled by a quick trial and had no time to verify further, as I had to leave for private obligations. However, I can't accept defeat, so I remembered another post where someone wanted to have a VT_NULL Variant, and investigating the solution I came up with back then showed an easy and totally official way to do it. The VARIANT memory layout is fully documented on MSDN, and the LabVIEW Call Library Node explicitly supports the ActiveX Variant type. So simply passing the Variant as an ActiveX Variant pointer to a C function that looks at the first two bytes in the structure is all that is needed to get at the VT_ values. Enclosed are the VI to create specifically a VT_NULL variant and the VI to read the VT_ type. I haven't entered the entire range of VT codes into the enum, and since those codes are in fact not continuous it is probably better to use a Ring control instead, but that is an exercise left for whoever wants to use this VI. Get OLE Variant Type.vi NULL Variant.vi
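The trick relies on nothing more than the documented VARIANT layout, whose first 16-bit field is the VARTYPE tag. A minimal C sketch of what such a wrapper function does (the struct below only mirrors the start of the documented layout for illustration; it is not the Windows definition, whose body is a larger union):

```c
#include <stdint.h>

enum { VT_EMPTY = 0, VT_NULL = 1, VT_I4 = 3 };   /* a few VARTYPE codes */

/* Leading fields of the documented VARIANT layout. */
typedef struct {
    uint16_t vt;            /* the VARTYPE tag we are after */
    uint16_t reserved[3];
    uint8_t  data[8];       /* stand-in for the value union */
} mini_variant;

/* What the CLN wrapper does: look at the first two bytes. */
uint16_t variant_type(const void *v)
{
    return *(const uint16_t *)v;
}
```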
  20. I haven't really got the time to look at this right now, but I think Variant to Flattened String would allow you to do what you need. Basically, it returns as the type for ActiveX variants the value 0x84, followed by another int16 that is the actual VT_ value. So it should not be too difficult to extend the OpenG Variant tools to also be able to identify the subtype of ActiveX Variants.
  21. I was thinking about creating both a 32-bit and a 64-bit DLL but would like to keep a single VI interface. A fixed, always 64-bit integer would probably work, except that it is not a normal integer but really a distinct datatype that should not be connected to anything else. The enum-inside-a-datalog-refnum abuse is a nice trick to ensure this, yet most refnums in LabVIEW (except the user refnum) are always 32-bit entities, so that would not be an option. I also need to pass out some kind of refnum to manage the message hook throughout the program. In practice this is the HWND of the hooked window too, but since this refnum is only supposed to be used with functions from that library, there are several ways to deal with this transparently, as the user does not need to be concerned about what the refnum really is. In the message structure, however, I do not have that luxury. The only reason for it to exist in there is to allow a possible user of the library to do something with it at the Windows API level, and I have no intention of providing wrappers for any Windows API calls to work with a private version of this refnum. It may really end up being an always-64-bit sort of hack, done similarly to what LabVIEW does when dealing with pointer-sized variables.
  22. As I'm working on the sidelines on this, I ran into a difficulty. Windows handles are really opaque pointers, and as such they are, unsurprisingly, 32 bits on Windows 32-bit and 64 bits on Windows 64-bit. This is a bit of a problem, as the original Windows Message Queue library contains a Windows handle in its data structure, since it completely mirrors the MSG structure used in the WinAPI. There seems to be only one datatype in LabVIEW that truly mimics this behaviour, and that is the so-called user refnum. That is a refnum that is defined by object description files in the LabVIEW resource directory and as such not documented at all. So the question is now: does anyone know of another LabVIEW datatype that is sure to be truly pointer sized when embedded in a cluster, or alternatively, is there any objection to not including the Windows handle in the message structure?
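The size problem can be made concrete in C. Since HWND is pointer sized, a structure mirroring MSG changes layout between 32-bit and 64-bit builds; the always-64-bit approach discussed above keeps one flat layout on both by widening the handle. This is a sketch under that assumption, not the library's actual structure:

```c
#include <stdint.h>

/* MSG-like structure with the handle widened to a fixed 64 bits, so
   the cluster layout is identical on 32-bit and 64-bit LabVIEW. */
typedef struct {
    uint64_t hwnd;      /* pointer-sized handle, stored widened */
    uint32_t message;
    uint64_t wparam;    /* also pointer sized in the real MSG */
    int64_t  lparam;
} portable_msg;

/* Round-trip between a native handle and the widened field. */
uint64_t widen_handle(void *h)     { return (uint64_t)(uintptr_t)h; }
void    *narrow_handle(uint64_t v) { return (void *)(uintptr_t)v; }
```

The round trip through `uintptr_t` is lossless on both platforms, because 64 bits always has room for the native pointer width.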
  23. You can't do that. The OS condition is meant to be a fixed value that depends on the selected target and nothing else. Allowing it to be overridden would invite all kinds of obscure trouble, as surely there would be people clicking around in ignorance. It seems you will have to revisit your software and change all the conditions into something that depends on a customizable project property.
  24. Extra complication: the function may also expect the array to be allocated by the caller. Whether that is the case, only the documentation for that function can tell. C is simply not expressive enough to describe the finer semantics of parameter passing.
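The ambiguity is easy to demonstrate: the two functions below have the same shape of prototype, yet one expects a caller-allocated buffer and the other allocates and hands ownership to the caller. Nothing in the signatures distinguishes them (both functions are invented for illustration):

```c
#include <stdlib.h>

/* Caller-allocated: `out` must point to at least `max` ints.
   Returns the number of elements actually written. */
int fill_values(int *out, int max)
{
    int n = max < 3 ? max : 3;
    for (int i = 0; i < n; i++) out[i] = i * 10;
    return n;
}

/* Callee-allocated: the function allocates, the caller must free().
   The prototype alone does not reveal this ownership transfer. */
int *make_values(int count)
{
    int *p = malloc(count * sizeof *p);
    if (p)
        for (int i = 0; i < count; i++) p[i] = i * 10;
    return p;
}
```

This is exactly why a Call Library Node configuration cannot be derived from a header file alone: you have to read the function's documentation to know who owns the memory.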