Everything posted by Rolf Kalbermatter

  1. Can you elaborate more about what you mean with sysconfig? For me that is a configuration file under /etc/sysconfig on most *nix systems, not a DLL or shared library I would know of. From the name of it, if it is a DLL, it is probably a highly UI-centric interface meant to be used from a configuration utility, which could explain the synchronous nature of the API, since users are generally not able to handle umpteen configuration tasks in parallel. But then I wonder what in such an API would need to be called continuously in normal operation and not just during reconfiguration of some hard- or software components. As to LabVIEW allocating threads dynamically, that is a pretty tricky thing. I'm pretty sure it could be done, but not without a complete overhaul of the current thread handling. And something like this is an area where even small changes can have far-reaching and sometimes very surprising effects, so I can understand that it's not at the top of the priorities to work on, when you take the advantages and the risks into account. Besides, a simple programming error in your LabVIEW code could easily create a thread collector, and although Windows is pretty good at managing threads, they do consume quite a bit of memory, and their management takes some CPU horsepower too, and once you exhaust the Windows kernel resources you get a hard crash, not a DoS for your own application only. So personally I would prefer my application to run into thread starvation at some point rather than the whole Windows system crashing hard when doing something that uses up too many threads. As to whether it is the task of LabVIEW to make our life easier, I would generally of course agree. However, calling DLLs is a pretty advanced topic already anyhow, so I would really think that someone working on that level can be bothered to use asynchronous APIs if there is a chance that the synchronous ones might block threads for long periods.
  2. But LabVIEW strings are in the system encoding (codepage on Windows)!
  3. Not sure about the .NET details really. But .NET is somehow executed in a special subsystem of LabVIEW, and communication with the rest of LabVIEW does indeed happen through some form of queues, I would assume. No such voodoo is necessary for normal DLL calls though. They just execute in whatever thread and with whatever stack space that thread has assigned at the moment LabVIEW invokes the function. Invocation is a simple "call" assembly instruction after setting up the parameters on the stack according to the Call Library Node configuration. It's as direct and immediate as you can ever imagine. And of course for the duration of the function call the thread LabVIEW has used to invoke the function is completely consumed and unavailable to LabVIEW in any way. The only thing LabVIEW could do is abort the thread altogether, but that has a high chance of leading to unrecoverable complications that only a process kill can clean up.
  4. DLL calls will block the thread they are executing in for the duration of the DLL function call. So yes, if you do many different DLL calls in parallel that all take nastily long to execute, you can of course use up all the preallocated threads in a LabVIEW execution system, even if all the Call Library Nodes are configured to run in the calling thread. However, if your DLL consists of many long-running and synchronous calls, you already have trouble before you get to that point, since your DLL is basically totally unusable from non-LabVIEW programming environments, which generally are not multi-threaded out of the box without explicit measures taken by the application programmer. So I would guess that if you call such DLL functions, you either didn't understand the proper programming model of that DLL, or took the super duper easy approach of only calling into the uppermost, super easy dummy-mode API that only exists to demo the capability of the DLL, not to use it for real! .NET has in addition to that some extra complications, since LabVIEW has to provide a specific .NET context to run any .NET method call safely. So there it is quite easily possible to run into thread starvation situations if you tend to just call into the fully synchronous beginner API level of those .NET assemblies. But please note that this is not a limitation of LabVIEW; in fact, if you call lengthy synchronous APIs in most other environments, you run into serious problems at the second such call in parallel already, if you don't explicitly delegate those calls to other threads in your application (which of course have to be created explicitly in the first place). The problem with LabVIEW is that it allows you to easily call more than one of these functions in parallel, and it doesn't break down immediately, but only after you have exhausted the preallocated threads in a specific execution system. By using lower level asynchronous APIs instead you can completely prevent these issues and do the arbitration on the LabVIEW cooperative multithreading level, at the cost of somewhat more complex programming, but with proper library design that can be fully abstracted away into a LabVIEW VI library or class, so that the end user only sees the API that you want them to use.
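     As a rough illustration of what such a lower level, asynchronous-friendly DLL interface can look like, here is a minimal C sketch. None of these function names belong to any real library; the point is only that every exported call returns quickly, so a Call Library Node never occupies a LabVIEW thread for long and the polling loop lives on the LabVIEW diagram.

        /* Hypothetical asynchronous-friendly DLL interface: the lengthy work runs in a
         * worker thread inside the shared library, and every exported function returns
         * immediately. All names are made up for illustration only. */
        #include <pthread.h>
        #include <stdint.h>
        #include <stdlib.h>
        #include <unistd.h>

        typedef struct {
            pthread_t thread;
            volatile int32_t done;
            int32_t result;
        } MyJob;

        static void *worker(void *arg)
        {
            MyJob *job = (MyJob *)arg;
            sleep(5);                          /* stands in for the lengthy operation */
            job->result = 42;
            job->done = 1;
            return NULL;
        }

        /* called once from LabVIEW to start the operation; returns right away */
        MyJob *MyLib_StartOperation(void)
        {
            MyJob *job = calloc(1, sizeof(MyJob));
            if (job && pthread_create(&job->thread, NULL, worker, job) != 0) {
                free(job);
                job = NULL;
            }
            return job;
        }

        /* cheap status poll; LabVIEW can call this in a loop with a Wait (ms) node */
        int32_t MyLib_IsDone(MyJob *job)
        {
            return job ? job->done : 1;
        }

        /* collect the result and release the handle once MyLib_IsDone() returns 1 */
        int32_t MyLib_FinishOperation(MyJob *job)
        {
            int32_t r = -1;
            if (job) {
                pthread_join(job->thread, NULL);
                r = job->result;
                free(job);
            }
            return r;
        }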
  5. The password protection used in a ZIP archive is not something you can just remove. The entire data stream for a specific file entry is completely encrypted with a hash that is generated from the password. The original password hashing in the original PKZip application has several weaknesses that make it less secure than the theoretical complexity given by the bit size of the hash key. Later versions have fixed some of these weaknesses and added other password algorithms like AES encryption, which by modern standards aren't unbreakable either, but still quite an effort to brute force. The principle of all the password removers is simply to generate various passwords (or use dictionaries), determine the used password encryption algorithm from the directory entry, then decrypt the stream with the corresponding password hash and check if the CRC of the decrypted stream matches the CRC stored in the directory entry. The OpenG ZIP library can do that by simply trying to retrieve a specific file with all possible passwords until you no longer get an error. There is still a small chance that the CRC check matches but the actual content is not correctly decrypted, since the CRC is just a 32-bit integer and there are of course possible collisions, with multiple data streams producing the same CRC. If your ZIP file uses the old non-AES password algorithm, that should simply work with the currently released OpenG ZIP library. If it uses AES, then the next version, which is currently in the works, will support that too.
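     As a hedged sketch of that brute-force principle, the loop essentially looks like the following. The decrypt and dictionary helpers are made-up placeholders for whatever unzip implementation is actually used; only zlib's crc32() is a real function.

        #include <stddef.h>
        #include <stdint.h>
        #include <zlib.h>                                  /* for crc32() */

        /* hypothetical helpers standing in for the real unzip implementation */
        extern int decrypt_entry_with_password(const char *password, uint8_t *out, size_t *outLen);
        extern uint32_t stored_crc_from_directory_entry(void);
        extern const char *next_dictionary_word(void);

        const char *find_password(uint8_t *buf, size_t bufLen)
        {
            const char *candidate;
            uint32_t wanted = stored_crc_from_directory_entry();

            while ((candidate = next_dictionary_word()) != NULL)
            {
                size_t len = bufLen;
                if (decrypt_entry_with_password(candidate, buf, &len) != 0)
                    continue;                              /* candidate clearly wrong */

                /* a matching CRC-32 almost certainly means the right password,
                 * although 32-bit collisions remain possible as noted above */
                if (crc32(0L, (const Bytef *)buf, (uInt)len) == wanted)
                    return candidate;
            }
            return NULL;                                   /* dictionary exhausted */
        }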
  6. Well, LabVIEW is in fact MBCS aware and as such uses whatever MBCS standard is set on the system. That includes UTF-8 on Linux for instance. For most things that is pretty similar to ASCII, but not always. I don't believe it is possible to set Windows to UTF-8 as the default MBCS though. And no, you would not use btowc and mbsrtowcs together. Rather, mbsrtowcs does for a string what btowc does for a single character (well, really more like mbtowc). btowc only works for single-byte characters, which a LabVIEW string doesn't necessarily have to consist of (the Asian language versions are definitely MBCS for sure).
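     A small sketch of that distinction, assuming the C runtime locale has been set to the system MBCS (or UTF-8 on Linux); it uses mbstowcs(), the simpler non-restartable sibling of mbsrtowcs(), and the string literal is of course only meaningful if the source file encoding matches that codepage.

        #include <locale.h>
        #include <stdlib.h>
        #include <stdio.h>
        #include <wchar.h>

        int main(void)
        {
            setlocale(LC_CTYPE, "");              /* pick up the system MBCS/UTF-8 locale */

            const char *mbcs = "Grüße";           /* must be encoded in the current codepage */
            wchar_t wide[64];

            size_t n = mbstowcs(wide, mbcs, 64);  /* whole-string conversion, like mbsrtowcs() */
            if (n != (size_t)-1)
                wprintf(L"converted %zu wide characters\n", n);
            return 0;
        }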
  7. Well, straightforward is a little bit oversimplified. wchar_t on Unix systems is typically an unsigned int, so a 32-bit Unicode (UTF-32) character, which is technically absolutely not the same as UTF-16. The conversion between the two is however a pretty trivial bit of shifting and masking for all but some very obscure characters (from generally dead or artificial languages like Klingon). Also, btowc() is only valid for conversion from the current MBCS (which could be UTF-8) set by the C runtime library's LC_CTYPE setting. Personally, for string conversion I think mbsrtowcs() is probably more useful, but it has the same limitation regarding the C runtime library setting, which is process global and therefore a nasty thing to change.
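     A minimal sketch of that UTF-32 to UTF-16 conversion: characters in the Basic Multilingual Plane copy over unchanged, anything above U+FFFF is split into a surrogate pair with exactly the kind of shifting and masking mentioned above.

        #include <stddef.h>
        #include <stdint.h>

        size_t utf32_to_utf16(const uint32_t *src, size_t srcLen, uint16_t *dst, size_t dstLen)
        {
            size_t out = 0;
            for (size_t i = 0; i < srcLen; i++)
            {
                uint32_t cp = src[i];
                if (cp < 0x10000)                                   /* BMP character, 1:1 */
                {
                    if (out + 1 > dstLen) break;
                    dst[out++] = (uint16_t)cp;
                }
                else                                                /* supplementary plane */
                {
                    if (out + 2 > dstLen) break;
                    cp -= 0x10000;
                    dst[out++] = (uint16_t)(0xD800 | (cp >> 10));   /* high surrogate */
                    dst[out++] = (uint16_t)(0xDC00 | (cp & 0x3FF)); /* low surrogate */
                }
            }
            return out;                                             /* UTF-16 code units written */
        }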
  8. I haven't looked into the Linux side of this very much yet. You should however consider that most Linux systems nowadays already use UTF-8 as the default codepage, so basically the strings are already in Unicode (but 8-bit UTF encoded, rather than the Windows 16-bit Unicode standard). That might make things pretty simple for the situation where the system is already using UTF-8, but could pose problems when a user runs his terminal session in one of the older MBCS encodings, for whatever strange reason. On the Mac you have similar standard OS functions as on Windows, with potentially small variations in the translation tables, since Windows tends not to use the official Unicode collation tables but has slightly different ones. There is a standard VI library on the Mac in vi.lib somewhere that actually helps in calling these functions, by creating Mac OS X CFStringRefs that you can then use with functions like CFStringCreateWithBytes() (similar to MultiByteToWideChar()) and CFStringGetBytes() (similar to WideCharToMultiByte()). All in all, using Unicode in an MBCS environment is a pretty nasty mess, and the platform differences make it even more troublesome. It's the main reason that Unicode support in LabVIEW is still just experimental. Making sure that everything MBCS based keeps working as-is when upgrading to a full Unicode version of LabVIEW is a nightmare. The only way to go about that with reasonable effort is to start over again with a completely Unicode based LabVIEW version and provide some MBCS support for communicating with ASCII based devices, accepting that there is no clean upgrade path for existing projects without some serious refactoring work when dealing with MBCS (or ASCII) strings.
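     For the Windows side of it, the usual two-call idiom for MultiByteToWideChar() looks roughly like this (just a sketch; CP_ACP assumes the input uses the current ANSI codepage, which is what a LabVIEW string normally contains on Windows).

        #include <windows.h>
        #include <stdlib.h>

        wchar_t *ansi_to_wide(const char *src)
        {
            /* first call only asks how many wide characters are needed (incl. terminator) */
            int len = MultiByteToWideChar(CP_ACP, 0, src, -1, NULL, 0);
            if (len <= 0)
                return NULL;

            wchar_t *dst = (wchar_t *)malloc(len * sizeof(wchar_t));
            if (dst && MultiByteToWideChar(CP_ACP, 0, src, -1, dst, len) <= 0)
            {
                free(dst);
                dst = NULL;
            }
            return dst;                                /* caller frees the result */
        }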
  9. Basically, if you have working C code that compiles and operates fine and can be put into an isolated unit to be built and called, then the DLL path is going to be by far the simplest solution, unless your C code is pretty trivial and can be translated easily into LabVIEW code. And an automatic C to LabVIEW translation tool is probably never going to happen. C is simply too low level and its syntax way too terse for there to be any chance of translating it properly into another language without heavy user assistance to interpret and understand what the C code is trying to do. You mentioned that you don't want that answer, but it's simply the only answer I can give. If you don't believe me, you are free to try to prove that it can be done by developing such a translation program. Good luck with that!
  10. Your problem is most likely that you can't open the same serial port two times! When you add an Error Handler.vi to the end of both error cluster wires, you will get a more descriptive error message than just an error number.
  11. For the distribution and archiving of the source code I rely on SourceForge. No need to provide my own download page for that (just yet). Admittedly, SourceForge is heading in a direction that may at some point pose a problem for this. If and when that happens I will bother with it at that time, and not let it deprive me of my sleep already now. In the worst case I have to move it to GitHub or some other place and exchange the ease of SVN for the perils of Git. I might also lose the SVN history in that process, if SourceForge just gets taken off the air by the übercommercial Geeknet Inc. without a chance to properly export everything first.
  12. Basically, I have yet to see anyone bother with the C source at all. Until that happens I see no reason to change anything about the license. There is absolutely nothing evil about the LGPL when the resulting binary is dynamically linked anyway, as is the case with all shared libraries. It doesn't contaminate your LabVIEW application in any way with anything GPL related.
  13. Testing and more testing. The code as it is does sort of work for the things I tried, although I do get an occasional hang where the initial connection seems not to go through, but trying a second time then always seems to work. The pipe idea in Windows is a powerful feature but at the same time also something that I feel they didn't entirely follow through on. It definitely is a niche feature that is seldom used by real code, unlike on Unix (Linux) systems, where pipes are more or less the infrastructure many applications use for all kinds of interprocess communication. And when you look at the C code of the DLL you will notice that the code to do standard IO redirection looks pretty complicated in the Windows case. It certainly has the flair of an additional API that got tacked onto the existing Windows process model in order to provide Unix-like features. In the next few days I'll create a thread here and attach the latest version of an OpenG package for lvpipe. If I get some good feedback about it, with more than just "it works" or "it doesn't work", and preferably some easy to reproduce unit tests (it shouldn't involve installing other software), I might be tempted to actually create a real package and add it to the SourceForge download repository so it is then available from within VIPM.
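     To illustrate why the Windows case feels tacked on, here is a stripped-down sketch of the classic anonymous-pipe redirection pattern (not the actual lvpipe DLL code, and with error handling reduced to the bare minimum): an inheritable pipe, handle inheritance flags and a STARTUPINFO block are all needed before CreateProcess() can even be called.

        #include <windows.h>
        #include <stdio.h>

        int run_with_redirected_stdout(const char *cmdLine)
        {
            HANDLE readEnd = NULL, writeEnd = NULL;
            SECURITY_ATTRIBUTES sa = { sizeof(sa), NULL, TRUE };    /* inheritable handles */

            if (!CreatePipe(&readEnd, &writeEnd, &sa, 0))
                return -1;
            /* the parent keeps the read end, so it must not be inherited by the child */
            SetHandleInformation(readEnd, HANDLE_FLAG_INHERIT, 0);

            STARTUPINFOA si = { 0 };
            PROCESS_INFORMATION pi = { 0 };
            si.cb = sizeof(si);
            si.dwFlags = STARTF_USESTDHANDLES;
            si.hStdOutput = writeEnd;
            si.hStdError  = writeEnd;
            si.hStdInput  = GetStdHandle(STD_INPUT_HANDLE);

            char cmd[1024];
            lstrcpynA(cmd, cmdLine, sizeof(cmd));    /* CreateProcess may modify the buffer */
            if (!CreateProcessA(NULL, cmd, NULL, NULL, TRUE, 0, NULL, NULL, &si, &pi))
            {
                CloseHandle(readEnd);
                CloseHandle(writeEnd);
                return -1;
            }
            CloseHandle(writeEnd);                   /* only the child writes to it now */

            char buf[256];
            DWORD got;
            while (ReadFile(readEnd, buf, sizeof(buf), &got, NULL) && got > 0)
                fwrite(buf, 1, got, stdout);         /* relay the child's output */

            CloseHandle(readEnd);
            WaitForSingleObject(pi.hProcess, INFINITE);
            CloseHandle(pi.hProcess);
            CloseHandle(pi.hThread);
            return 0;
        }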
  14. What LogMan already said. Most people who were active in OpenG have moved on, either into other non-LabVIEW related work, or more into management where the daily coding has been replaced by other tasks that don't involve as much direct LabVIEW programming anymore. There are still some watching the list and trying to keep up with the things that are OpenG related, but other paid work usually takes precedence over something like OpenG. Also many have families now, so when away from work and at home, they often spend their time other than behind the computer. The canonical code repository for all OpenG related stuff has been and still is at https://sourceforge.net/projects/opengtoolkit/. Look for the SVN repository; the CVS repository is an old version of the toolkit from before SourceForge supported SVN. However, while there haven't been any new packages released since 2011, I am still actively working on some of the pet projects that are mine, mainly the lvzip library and to a lesser extent the lvpipe and labpython libraries. I have also committed one or two other bug fixes to other libraries in the past that users have posted here, but didn't go to the trouble of releasing a new package for them, as there were in fact other areas I would have liked to improve on for those packages, but with absolutely no other feedback in those years it also feels a little bit like flogging a dead horse. While I have been a member on SourceForge from the beginning and still am, I do not believe that I have administrative privileges to add other developers to it. And I don't think it is very common to just add random people to open source projects without a little track record of commitment and code style from them. So the best approach would be, in my opinion, to try to post some improvements here in this subforum. If they look reasonable, I'm willing to commit them to the repository with proper credit. After a few such patches I think we could convince the project admins to give that person commit access to the repository. Adding new code to the repository is however only half of the work. You also then have to create a new OpenG or VIPM package of the library and then get the people from JKI to update the list of available packages on their servers, so that VIPM can show them properly for all users. Of course, nobody can prevent you from simply forking the repository and starting your own, as long as you honor the existing licenses. Notice that most of the LabVIEW related code is BSD-style licensed, while the C source code for the binary shared libraries that I have developed in the past is all under the LGPL license. Changing any of that code to any other license without full approval by all of the previous authors is basically not an option. I would however consider forking the repository the absolute last resort. It would most likely rest in an obscure place, with no way to add its released packages to the VIPM list, so most users would never be aware that it exists or be able to install it on their systems.
  15. Unflatten From String happens to consume as many bytes from the stream as make up complete outgoing data elements. It even creates error 74 if the string happens to be too small to create an array with at least one element. It all depends on the original endianness of the data stream. If it comes from a big endian device, then swapped is the same as little endian. It's the classical problem of defining what is swapped with respect to the source (device) or the target (LabVIEW), which is always debatable. Your new code has one big problem: it doesn't swap the 16-bit entities anymore, and both versions have another problem. Double floats are 64-bit values, and depending on the target their 32-bit halves may need to be swapped too, but LabVIEW doesn't have a Swap 32-bit element. There are two big endian conventions for 64-bit values: one where only the bytes within each 32-bit half are swapped, and one where the two 32-bit halves are swapped as well. Depending on which of the two the device is using, Unflatten From String may not be the correct one, but it should be consistent with the full reversal as used in the original code. (Your first version doesn't swap the 32-bit halves, and your second one doesn't even swap the 16-bit elements.)
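     For clarity, a small C sketch of those two 64-bit conventions, using plain byte shuffling so the difference is explicit (it makes no claim about which convention your particular device uses).

        #include <stdint.h>

        static uint32_t swap32(uint32_t v)
        {
            return (v >> 24) | ((v >> 8) & 0x0000FF00u) | ((v << 8) & 0x00FF0000u) | (v << 24);
        }

        /* full reversal: byte 0 <-> byte 7, byte 1 <-> byte 6, ... */
        uint64_t swap64_full(uint64_t v)
        {
            return ((uint64_t)swap32((uint32_t)v) << 32) | swap32((uint32_t)(v >> 32));
        }

        /* per-word swap: the bytes inside each 32-bit half are reversed,
         * but the two halves keep their positions */
        uint64_t swap64_per_word(uint64_t v)
        {
            return ((uint64_t)swap32((uint32_t)(v >> 32)) << 32) | swap32((uint32_t)v);
        }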
  16. And that is not guaranteed to work either! Depending on gateways and other limitations on the ever-changing interweb of IP, an IP packet exceeding the minimum MTU of 576 for IPv4 may be dropped for any reason that any router device on the way may feel like, including scarce internal memory or CPU resources, or just the temporary mood of the day. And a single dropped or corrupted IP fragment in a complete UDP packet will cause the entire UDP packet to be refused (dropped) by the receiving UDP endpoint. Since the IP packet also needs to contain the IP and UDP headers, the actual "safe" maximum UDP payload size is really more like 508 bytes (accounting for extra IP options in the header). The maximum UDP packet size of 64 kB is a Windows feature. Other socket implementations can use lower limits. The 64 kB is also the theoretical maximum, since UDP only uses a 16-bit length field in its header, so it can't really transmit more than 64 kB in a single packet. And UDP itself does not provide a mechanism to fragment and reassemble bigger packets. So I'm pretty sure all the socket implementations out there simply cap any attempt of a user application to transmit more than those 64 kB. You could argue that LabVIEW, as a high level programming environment, should produce an error if you try to send more, but that is a pretty fuzzy idea. Nothing in the UDP implementation guarantees the transmission of any packet size across the network, so why bother to create an error? Also, if the entire route from sender to receiver is guaranteed to be IPv6, all of the above changes considerably. The minimum MTU goes up to 1280 bytes, and by using IPv6 jumbograms UDP can transmit packets larger than 64 kB. So the above limitation doesn't apply, but since the LabVIEW network nodes currently only support IPv4 anyway, this is of no concern in terms of LabVIEW programming.
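     The 508-byte figure is just the following arithmetic, written out here as C constants for reference (the values come from the IPv4 and UDP header definitions, not from any LabVIEW specifics).

        enum {
            MIN_IPV4_REASSEMBLY = 576,  /* RFC 791: every host must accept this much   */
            MAX_IPV4_HEADER     = 60,   /* 20 bytes fixed + up to 40 bytes of options  */
            UDP_HEADER          = 8,    /* source port, dest port, length, checksum    */
            SAFE_UDP_PAYLOAD    = MIN_IPV4_REASSEMBLY - MAX_IPV4_HEADER - UDP_HEADER   /* 508 */
        };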
  17. Well, that's the problem of using layered software. Standard socket operations are normally synchronous and don't have a timeout parameter themselves. In order to program asynchronously, one has to first put the socket into non-blocking mode (O_NONBLOCK) and then use select() or poll() before attempting to do a read() or write(). And the timeout for most synchronous socket operations is an option that can be changed for each socket through the setsockopt() call. VI Server might use asynchronous sockets internally, but it is designed as a synchronous interface to the user. The reason is obviously ease of use, since asynchronous interfaces are usually pretty hard to use and debug. And unfortunately nobody ever thought about adding the possibility of a timeout property on the VI Server refnum class. But that's understandable, since the VI Server refnum is not generally a network connection; it just can be routed over a TCP or ActiveX connection, but can just as well be a LabVIEW process internal protocol layer. A timeout property would only be meaningful for the TCP protocol layer and wouldn't apply to the other two classes. It's the classical problem of abstraction, where you can't and don't want to expose all the lower level details to the higher level interfaces, especially when they are not universal across all the lower level implementations.
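     For reference, a minimal POSIX-flavoured sketch of those two approaches (on Windows the SO_RCVTIMEO value is a DWORD in milliseconds rather than a struct timeval).

        #include <sys/socket.h>
        #include <sys/select.h>
        #include <sys/time.h>
        #include <fcntl.h>
        #include <unistd.h>

        /* variant 1: let the kernel time out the blocking recv() itself */
        int set_recv_timeout(int sock, long seconds)
        {
            struct timeval tv = { seconds, 0 };
            return setsockopt(sock, SOL_SOCKET, SO_RCVTIMEO, &tv, sizeof(tv));
        }

        /* variant 2: switch to non-blocking mode and wait with select() before reading */
        ssize_t recv_with_select(int sock, void *buf, size_t len, long seconds)
        {
            fd_set readable;
            struct timeval tv = { seconds, 0 };

            fcntl(sock, F_SETFL, fcntl(sock, F_GETFL, 0) | O_NONBLOCK);

            FD_ZERO(&readable);
            FD_SET(sock, &readable);
            if (select(sock + 1, &readable, NULL, NULL, &tv) <= 0)
                return -1;                                 /* timeout or error */
            return recv(sock, buf, len, 0);
        }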
  18. Not sure about OPC UA details but generally server to server doesn't make much sense and simply makes setup of the system more complex. Possibly OPC UA allows for some hybrid configuration where a server can be configured to relay data from another server as some sort of gateway between differently secured network segments, but other than that I see no real benefit in complicating the server configuration by adding the possibility to let a server relay data from another server.
  19. The issue is a little more complicated actually, since NI Linux RT isn't a single platform but really two (x64 and ARM). There is no easy solution to provide a virtualized environment for testing on the ARM platform (but with the myRIO you do have a fairly accessible platform for testing). So far we have these variations of platforms for LabVIEW RT: x86 Pharlap ETS, PPC vxWorks 6.1 (LV 8.2.1) and vxWorks 6.3 (>= LV 8.5), ARM NI Linux RT, and x64 NI Linux RT. For all but the Pharlap ETS system, virtual machines for testing are only theoretically feasible, and then only for the NI Linux RT x64 platform. All the others already fail because of a very different CPU architecture, and adding Bochs or something similar to the virtualization effort definitely creates an even bigger problem. There still remains the issue that without an NI sanctioned way of creating an NI Linux RT (x64) image, it's not only legally but also technically pretty much impossible to create a fully functional LabVIEW RT target for virtual machine execution. There are simply too many runtime libraries that need to be copied to the right locations, which are not part of NI Linux RT itself but additional components with specific NI license restrictions. There are also possibilities for some of the x86/x64 systems to run Windows Embedded, but for testing purposes of libraries they are really the same as a standard Windows system if you don't access very obscure features.
  20. I can't speak for others, but for me the NI Linux RT virtualized hardware system would be mostly for testing and debugging LabVIEW software for one of the x86 based cRIO targets without having to always be connected to the actual hardware target. Yes, it doesn't allow full hardware access such as DAQmx, RIO etc., but it would allow some easy testing of the general RT application and, in my case especially, of shared libraries on these platforms.
  21. It's not that easy. You do not pay for a runtime license of the Linux kernel. That is open source and NI provides all the sources to recreate it, if you wish. But NI Linux RT, while a full OS, is simply that! No LabVIEW runtime, NI-VISA runtime, NI-488.2 runtime, NI-DAQmx runtime, NI-RIO, NI-this and NI-that. And for these things, and especially the LabVIEW runtime, NI is free to define whatever license conditions they like, and they decided that running these on non-NI hardware generally requires a paid license from them. You are free to build your own NI Linux RT OS for whatever hardware you like, but you are not free to put any of the NI software, even in runtime form, on it without a license from NI. And no, just because you can redistribute a LabVIEW executable including the LabVIEW runtime on a desktop system without additional runtime licenses does not mean that you have the right to grab a LabVIEW runtime from an NI Linux embedded target and copy it to your own hardware, even if it were technically possible, which for most targets it is not easily, even if they use the same CPU architecture. NI Pharlap isn't that different really. They probably have a royalty-free source code license from Pharlap for this (which would cost A LOT of money, which of course has to be recouped somehow, and probably comes with a similarly expensive yearly maintenance contract). The license cost for the NI Pharlap system on your own hardware is for the most part the same as for the NI Linux software. It's for the LabVIEW runtime and all the other necessary software drivers, not so much for the Pharlap OS itself. NI makes quite a bit of money with their realtime platforms and they want to protect that somehow. And being the copyright owner of all the NI developed software drivers and of LabVIEW, they have every right to do so in pretty much every way they find ok. That you and I would rather have it different isn't really a valid argument against that.
  22. I haven't really looked into this and I'm not aware of such a specific guide. My guess is you have to set up a Linux cross-compile toolchain on your PC, then download the NI Linux RT source code and go through the kernel configuration settings, selecting everything that would make sense for your (virtual) target hardware. That gives you in the end a hopefully working Linux RT kernel that is compatible with NI Linux RT. Now, NI Linux RT is of course a nice thing, but completely useless as a target for LabVIEW RT without also the LabVIEW runtime engine, NI-VISA, NI-DAQmx and NI-half-a-hundred other software libraries and drivers. And here you start to bite the dust. You can't just recreate them for your target system, and you are not allowed to copy them over from another LabVIEW RT system without express consent from NI and, in the case of the LabVIEW runtime engine, an actual license payment too!
  23. In my experience almost all forum searches suck badly in comparison to the standard set by Google. So I would say: Disable it!!
  24. NI's stance on such things (and basically that of any company in the US and most other places) is: we do not comment on internal developments, plans for development or lack thereof, or whatever, unless we are at a stage where it is ready for prime time. It may sound unsatisfying for us mere users, but the reality is that there are simply too many legal and other implications in our modern system, and speaking before the fact can have far-reaching consequences. Basically every employee knows that commenting on unreleased products or technologies without approval from higher management might as well be the signature under a resignation letter with no possibility of appeal. So you can hope for a reaction from someone from NI about this, but unless they have it ready to be announced at the coming NI Week, you can keep waiting until hell freezes over. Even then it is not likely that they will react here.