Rolf Kalbermatter

Members
  • Posts

    3,924
  • Joined

  • Last visited

  • Days Won

    271

Everything posted by Rolf Kalbermatter

  1. I can't help you with this. We have created many cRIO and some sbRIO systems in LabVIEW, and while we see the supply chain disruption too, which makes getting the correct systems shipped in time a real problem, we have not yet considered redesigning any of them without LabVIEW. If we went that route, I would not expect to reuse much of the existing LabVIEW code in any way. The design documents are likely the only thing that will be really helpful, which is one reason to actually write them and not just trust that "LabVIEW code is self-documenting". It seldom is when you look at it a year or more later, unless it is very trivial code, and FPGA code is really never trivial; it typically contains many involved code segments. Even the real-time part would need to be rebuilt with something else, as interfacing LabVIEW to third-party FPGA designs is not easy. You would at least need to replace the entire cRIO shared library with something of your own that interfaces to whatever FPGA architecture you are using.
  2. Smart thinking, I was overthinking this with the conditional compile structure. Just create a 64-bit array and declare the parameter as a pointer-sized integer array, and LabVIEW will happily do the conversion automatically on the 32-bit platform. Just one caveat, although it is of no real significance nowadays. This trick of treating the fd_set structure simply as an array of integers only works on Little Endian with 8-byte compiler alignment. On Big Endian this would go wrong, as the count stays a 32-bit integer no matter what, and the least significant 32 bits of the first 64-bit integer would end up in the wrong place. But the only Big Endian platforms that LabVIEW still had until recently were the VxWorks real-time targets, and their Berkeley socket implementation is mostly Unix-like, although with some interesting deviations. Those usually would not matter if you just compiled a program from C sources with the correct headers, but they could go very wrong when interfacing a LabVIEW VI to them, since LabVIEW has no notion of their header files and simply requires you to play header parser yourself.
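     A rough, untested C++ sketch of why the flat 64-bit integer array overlays the Win64 fd_set correctly on a Little Endian machine (it assumes nothing beyond the standard Winsock headers and default alignment):

         /* Win64, Little Endian: viewing fd_set as an array of 64-bit integers puts
            fd_count in the low 32 bits of element 0 and fd_array[0] in element 1,
            which is why a plain array of two U64 values works as the fd_set parameter. */
         #include <winsock2.h>
         #include <cstdint>
         #include <cstdio>
         #include <cstring>

         int main(void)
         {
             fd_set set;
             FD_ZERO(&set);
             set.fd_count = 1;
             set.fd_array[0] = (SOCKET)0x1234;

             uint64_t view[2];
             memcpy(view, &set, sizeof(view));
             printf("fd_count (low half of view[0]) = %u\n", (unsigned)(view[0] & 0xFFFFFFFFu));
             printf("socket   (view[1])             = 0x%llX\n", (unsigned long long)view[1]);
             return 0;
         }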
  3. That library was reworked by me in 2008, before LabVIEW had any 64-bit version. Unfortunately, SOCKET in Windows is a UINT_PTR, which means that it is a 32-bit integer in 32-bit LabVIEW and a 64-bit integer in 64-bit LabVIEW. But you can NOT port this over to Linux as is. There the file descriptor used for all socket functions is an explicit int, so always a 32-bit value! This is because the Berkeley socket library was originally developed for Unix and built around the generic socket concept in Unix, which traditionally uses int file descriptors. When Microsoft took the Berkeley socket library and adapted it for Windows as the WinSock library, it was already quite revolutionary to have this as a fully 32-bit code library; Windows itself was still mainly a selector-based 16-bit environment. And Microsoft likes to use handles, which are opaque pointers and happened to be 32-bit integers too back then.
     The library as posted on the NI forum does NOT work in 64-bit Windows, and that is not just because of the 64-bit SOCKET handle itself but also because of the fd_set data structure, which is defined as follows:
         typedef struct fd_set {
             u_int  fd_count;                /* how many are SET? */
             SOCKET fd_array[FD_SETSIZE];    /* an array of SOCKETs */
         } fd_set;
     Now in 32-bit LabVIEW there is nothing strange here. The fd_array with SOCKET handles begins at offset 4, so the LabVIEW array of 2 * 32-bit integers works perfectly fine. The first array element corresponds to fd_count and the second element to the first element of fd_array, and as long as fd_count does not contain a value greater than 1, it does not matter that the fd_array does not contain all 64 elements that this structure is declared with. On 64-bit the SOCKET is a 64-bit integer and is naturally aligned on an 8-byte boundary in that structure. So there is a dummy 32-bit filler element between fd_count and fd_array, and the first SOCKET should be a 64-bit integer.
     The solution to make this work in 64-bit LabVIEW, besides changing the SOCKET handle parameter of all functions to be a pointer-sized integer, is to use a conditional compile structure. For the 32-bit case use an array of 32-bit integers as fd_set, just as is used now; for the 64-bit case you need to use an array of 64-bit integers. The rest remains the same. You can even use an array of 2 * 64-bit integers, just the same as the array of 2 * 32-bit integers in the 32-bit case. Since we run on a Little Endian machine under Windows, the least significant 32 bits of the first 64-bit element happen to match the location in memory where fd_count is expected; the extra 4 filler bytes are overwritten by the most significant 32 bits of the first 64-bit array element, which is not a problem. They are really DON'T CARE.
     Your observation that this weird error can happen if the fd_set structure is not correctly initialized was spot on. But not because it is not correctly initialized in the VI (it is), but because the array of 32-bit values has a different layout than what the 64-bit Winsock library expects.
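     A quick C++ check (a minimal sketch, assuming the standard Winsock headers and default packing) makes the layout difference visible when the same file is compiled as 32-bit versus 64-bit:

         /* 32-bit build: fd_count at offset 0, fd_array at offset 4 (SOCKET is 32-bit)
            64-bit build: fd_count at offset 0, 4 padding bytes, fd_array at offset 8 (SOCKET is 64-bit) */
         #include <winsock2.h>
         #include <cstddef>
         #include <cstdio>

         int main(void)
         {
             printf("sizeof(SOCKET)             = %u\n", (unsigned)sizeof(SOCKET));
             printf("offsetof(fd_set, fd_count) = %u\n", (unsigned)offsetof(fd_set, fd_count));
             printf("offsetof(fd_set, fd_array) = %u\n", (unsigned)offsetof(fd_set, fd_array));
             return 0;
         }

     The fd_array offset changes from 4 to 8, and that extra filler is exactly what the flat 32-bit integer array on the LabVIEW side does not account for.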
  4. Did you start your program as Administrator? Raw sockets are a privileged resource on all modern OSes, only available to specially elevated processes, such as those started as root/admin or, under Linux, with an explicit privilege granted to the process. IcmpSendEcho() supposedly calls into a device driver (or maybe a service) to do the actual work. Since those execute in the context of the Windows kernel, they are not restricted by those pesky user-right limitations. By default they execute with System privileges, which is almost like an Administrator and in certain ways even more.
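     For reference, a minimal, untested C++ sketch of the synchronous call (the target address, payload and timeout are just example values; link against iphlpapi.lib and ws2_32.lib):

         #define _WINSOCK_DEPRECATED_NO_WARNINGS   /* allow inet_addr() in this example */
         #include <winsock2.h>
         #include <iphlpapi.h>
         #include <icmpapi.h>
         #include <cstdio>

         int main(void)
         {
             WSADATA wsa;
             WSAStartup(MAKEWORD(2, 2), &wsa);

             HANDLE hIcmp = IcmpCreateFile();            /* no elevation required */
             if (hIcmp == INVALID_HANDLE_VALUE)
                 return 1;

             char sendData[] = "ping payload";
             char replyBuf[sizeof(ICMP_ECHO_REPLY) + sizeof(sendData) + 8];
             IPAddr dest = inet_addr("192.168.1.1");     /* example target */

             DWORD count = IcmpSendEcho(hIcmp, dest, sendData, (WORD)sizeof(sendData),
                                        NULL, replyBuf, sizeof(replyBuf), 1000 /* ms */);
             if (count > 0)
             {
                 PICMP_ECHO_REPLY reply = (PICMP_ECHO_REPLY)replyBuf;
                 printf("status %lu, round trip %lu ms\n", reply->Status, reply->RoundTripTime);
             }
             IcmpCloseHandle(hIcmp);
             return 0;
         }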
  5. That would seem strange. Nagle is an algorithm on the TCP protocol level, not on the IP level. More likely it is a limitation of the synchronous nature of that API. Using the asynchronous IcmpSendEcho2() might give more control. However, it is more complex, as you have to use either a Windows event handle or a callback routine, and the latter is not really a feasible option with the LabVIEW Call Library Node.
  6. On Windows I recommend that function. ICMP is a low-level protocol and can only be implemented in user space by using raw sockets, but that is a privileged resource that can only be opened by elevated processes on Windows, and by a process that is either root (UID = 0) or has the CAP_NET_RAW capability granted on Linux (and likely MacOS X, which also uses the BSD socket library). My network library, which I put up elsewhere on here some 12 years ago or so, did provide raw sockets besides TCP and UDP, and had VIs implementing the ICMP echo command, but that was of very limited value because of these permission issues. The command-line ping utility on Linux is THE way to solve it there. It's unfortunate, but with raw sockets someone really could easily start doing very nasty things, accidentally or on purpose, so I understand why they are protected. Also note that many servers nowadays disable the ICMP Echo command on purpose to fend off attempts at DoS attacks.
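     To illustrate the Linux side (a small generic C++ sketch, nothing library specific): opening the raw socket is the part that fails without root or CAP_NET_RAW, which is exactly why shelling out to ping is the pragmatic answer there.

         #include <sys/socket.h>
         #include <netinet/in.h>
         #include <cerrno>
         #include <cstdio>
         #include <cstring>
         #include <unistd.h>

         int main(void)
         {
             int fd = socket(AF_INET, SOCK_RAW, IPPROTO_ICMP);
             if (fd < 0)
             {
                 /* typically EPERM unless the process is root or has CAP_NET_RAW */
                 printf("raw socket failed: %s\n", strerror(errno));
                 return 1;
             }
             printf("raw socket opened, process is privileged\n");
             close(fd);
             return 0;
         }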
  7. Is that spam to advertise a link or do you have a specific question?
  8. You need to compile it into a DLL to be able to call it. But LabVIEW code IS compiled too and fairly performant. 40 us is not a lot of time for those kinds of mathematical operations. Even if you use a highly optimizing C compiler like the Intel C compiler, you are most likely not going to see huge differences when using that code as a DLL. You can of course try, but you will need a compiler of some sort for this, and it is C++ code, using the standard template library classes. You will also need to write a small C wrapper around it in order to be able to call it from the LabVIEW Call Library Node. As the code is GCC specific I can't help you. If it were compilable with Visual C as is, I might try to create the DLL, but as already said, my hopes that you will see significant performance improvements from the C++ code are not that great.
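     Such a wrapper typically looks something like this (a hypothetical sketch; ProcessSamples and its parameters are made-up placeholders for whatever the actual C++ code exposes):

         #include <vector>

         /* the original C++ routine, assumed to work on a std::vector */
         extern double ProcessSamples(const std::vector<double>& samples);

         /* exported with plain C linkage and plain C data types only,
            so the LabVIEW Call Library Node can call it */
         extern "C" __declspec(dllexport)
         double LV_ProcessSamples(const double* data, int len)
         {
             std::vector<double> samples(data, data + len);   /* copy the LabVIEW array */
             return ProcessSamples(samples);
         }

     In the Call Library Node you would then configure data as an array data pointer and len as a 32-bit integer.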
  9. However, be aware that the GitHub code as it stands is an experiment under construction. Lots of things don't work right yet, and there are many bugs in the underlying shared library, which simply won't do the right thing yet. I'm slowly working on it, but it is a side project and sometimes I just don't feel like debugging C code very much.
  10. Read through this thread, and specifically read Brian's (Hoovah's) response.
  11. It for sure helps. I was really thinking that I was overlooking something here, but with this explanation everything makes sense. The actual code in the DLL is substantially different from 4.0. In 4.0 most of it was just a very thin wrapper around existing LabVIEW functions. But those LabVIEW functions do not support Unicode paths, so I have been refactoring that code substantially to support full Unicode paths in the underlying functions (and creating compatibility wrappers that still use the old LabVIEW paths, which of course won't support Unicode path names). The advantage of using full Unicode throughout the ZIP tools will eventually be that path names can contain characters not present in your current ANSI locale and that path names can be almost arbitrarily long, namely up to 32k characters. The ZIP standard internally already supports UTF-8 encoded file names, so once this is fully working you can also extract and create ZIP files that use UTF-8 filenames. But this complete port to full Unicode support is not yet finished. Most of the actual programming is done, but it needs more testing.
  12. So I did take a look, and yes, it was that function, but no, it doesn't fail only in 64-bit mode but also in 32-bit mode. So I'm a little lost as to why you feel it did work with a UNC path when using it in 32-bit LabVIEW. Going to do some more tests with this and try to clean up a few related things.
  13. Error 1 is the all-generic "invalid parameter" error. It could indeed be a problem in interpreting UNC paths somehow. I'll try to look into that. I haven't really run the code on a 64-bit system with UNC paths yet, but I see where the error 1 seems to come from when looking at the source code. It looks like LabVIEW has changed its stance about what a UNC path represents between 32-bit and 64-bit. I use the function FIsAPathOfType() to check that the passed-in path is an absolute path (I do not want to try to open relative, or of course empty or invalid, paths, as I have no way of knowing what they should be relative to, and I find the idea of using the current directory an atrocity that has absolutely no place in a modern multithreading application). Going to verify that this is the culprit, as it could also come from somewhere else, but it looks suspicious. I know that internally UNC paths are treated as a different type in LabVIEW, but so far it has considered them absolute too (which they are).
  14. What would be the error you get?
  15. Some people would say that that is your problem. Others, that it is bliss. 😀
  16. I'm pretty sure the .NET RTF control is not much more than a fairly simple wrapper around the actual RichTextEdit control, which is essentially a Windows Common Controls component. Pretty much all of the business logic is in the corresponding Windows DLL, and the API is exposed as macros around the Windows messages that you send to the control.
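     In C++ terms that looks roughly like this (a minimal, untested sketch assuming the RICHEDIT50W class from Msftedit.dll; the .NET wrapper ends up sending the same kind of messages to the native control):

         #include <windows.h>
         #include <richedit.h>

         int main(void)
         {
             LoadLibraryW(L"Msftedit.dll");                   /* registers the RICHEDIT50W class */

             HWND hEdit = CreateWindowExW(0, MSFTEDIT_CLASS, L"Hello rich text",
                                          WS_POPUP | ES_MULTILINE, 0, 0, 300, 200,
                                          NULL, NULL, GetModuleHandleW(NULL), NULL);

             CHARFORMATW cf = { sizeof(cf) };
             cf.dwMask    = CFM_BOLD;
             cf.dwEffects = CFE_BOLD;
             /* the "API" is really just messages like EM_SETCHARFORMAT sent to the control */
             SendMessageW(hEdit, EM_SETCHARFORMAT, SCF_ALL, (LPARAM)&cf);

             DestroyWindow(hEdit);
             return 0;
         }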
  17. There is a reason that it is still marked Beta (and likely will remain so for the foreseeable future). It is a telltale sign that even the RichTextEdit control, which is a Microsoft technology, has problems with that setting. Basically, enabling UTF-8 as a codepage feature would be a nice idea IF all Windows applications were properly prepared to work with codepages that can use more than 1 byte per character. But since the simple assumption of 1 byte == 1 character works for all English-speaking countries, there have been many sins committed in this respect and nobody ever noticed. Enabling this feature tries to solve something that can not really be solved, since there is simply too much cruft out there that will fail with it (and yes, LabVIEW also has areas where it will stumble over this).
     Linux is in that respect a bit better off. The Linux developers were never shy about simply abandoning something, presenting people with the facts and telling them: this is how we will do it from now on, take it or leave it, but don't complain if it doesn't work for you in the future if you do not want to follow the new standard. Most desktop distributions nowadays simply use UTF-8 as the standard locale throughout, pretty much what this setting would do under Windows. And distributions simply removed applications that could not deal with it properly.
  18. That grammar sounds almost as bad as what those Nigerian scammers use, who pretend to have embezzled a few million and are now eager to find someone who would be happy to take that money from them. 😀
  19. I don't know about VLAs. I never used one myself. Our company is on a Partner Software Lease contract, which has been an annual subscription-based license for as long as I can remember. In theory I don't have to care about all this as long as I'm employed at Averna, but I do care about LabVIEW, and I think it is a bad move for people who are not under such a company-provided license agreement with NI. The fact that the justifications NI gave for moving to a subscription-only license model almost all sound to me like marketing mumbo-jumbo that tries to turn the entire meaning of words upside down, or are actually completely misconstructed arguments, didn't help at all.
     For normal perpetual licenses it is definitely how it works. If you make use of the NI offer to extend your expiring SSP with up to 3 years of subscription licensing for the price an SSP was in the past (about half of what a yearly subscription costs now), your perpetual license automatically converts to a subscription license. Instead you could choose to buy a new subscription license for the full cost and let your existing SSP expire. In that case you keep the perpetual license from your old license, which gives you the right to install and use LabVIEW 2021, and in addition to that a subscription to the newest LabVIEW version for as long as you keep your subscription active. Once you let the subscription expire you still have the perpetual license for LabVIEW 2021, but you can't (easily) look at any of the VIs you may have created with newer LabVIEW versions under the subscription model.
     For VLAs a different solution may exist, but as I said, I never had to deal with VLAs myself and have absolutely no knowledge about them.
  20. Ahhh well! Yes, that was a choice I made at that point. Without a predefined length I have to loop with ever-increasing (doubling every time) buffer sizes to try to inflate the string. But each time I try with a longer buffer, the ZLIB decoder will start filling the buffer until it runs out of buffer space. Then I have to increase the space and try again. The comment is actually wrong: it ends up looping 8 times, which results in a buffer that is 256 times as large as the input. That should actually still work for data that has been compressed by over 99.6%! The only thing I could think of is to grow the buffer even more aggressively than 2^(x+1), maybe 4^(x+1)? With the current 8 iterations that would offer an inflated buffer of up to 65536 times the size of the input buffer.
     In each iteration the ZLIB stream decoder will work on more and more bytes, and if the buffer is still too small, everything is thrown away and started over again. That is a really performance-intensive operation, and I also do not want to loop indefinitely, as there is always the chance that corrupted bits in the stream throw the decoder off in a way that never terminates, and then your application would loop until it eventually runs out of memory, which is a pretty hard crash in LabVIEW. So if you know that your data is going to be very compressible, you have to do your own calculation and specify a starting buffer size that is big enough. If you do this over the network I would recommend prepending the uncompressed size to the stream anyway. That really helps to not destroy the performance gain you were trying to achieve with the ZLIB compression in the first place.
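     The retry logic is essentially this (a standalone, untested C++ sketch using zlib's one-shot uncompress(); the starting size and the cap of 8 iterations are just the values discussed above):

         #include <zlib.h>
         #include <vector>

         /* returns true on success and leaves the inflated data in 'out' */
         bool InflateWithRetry(const std::vector<Bytef>& in, std::vector<Bytef>& out)
         {
             uLongf cap = (uLongf)in.size() * 2 + 64;         /* initial guess */
             for (int attempt = 0; attempt < 8; attempt++, cap *= 2)
             {
                 out.resize(cap);
                 uLongf destLen = cap;
                 int err = uncompress(out.data(), &destLen, in.data(), (uLong)in.size());
                 if (err == Z_OK)                             /* done, shrink to the real size */
                 {
                     out.resize(destLen);
                     return true;
                 }
                 if (err != Z_BUF_ERROR)                      /* corrupted stream etc.: give up */
                     return false;
                 /* Z_BUF_ERROR: output buffer too small, throw it away and retry bigger */
             }
             return false;                                    /* data too compressible for 8 doublings */
         }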
  21. Without a more qualified statement about how you got to this conclusion, such as what numbers are used, there is no way I can believe this. If you look at other indicators, such as participation in the various forums (NI, LavaG and LabVIEWForum.de), all I can say is that those numbers look VERY much lower than a few years back. So either all those new users that are added year over year are real cracks who do not need any support of any kind, or NI has a secret support channel they can tap into that we mere mortals do not have, or something is totally off. The publicly visible exposure of LabVIEW, just like that of NI itself, has definitely been diminishing tremendously in the last 5 years. Maybe all those new users are inherent user licenses included with the semiconductor test setups that are sold. Buying LabVIEW on the website has recently become an almost impossible exercise, and so has getting informed quotes.
  22. There is another "little" culprit, and it is the most likely reason for this discrepancy. LabVIEW only uses 8-bit ASCII text and accordingly only posts so-called ANSI text (that's what Windows calls it when you use an 8-bit codepage encoding) to the clipboard. Notepad and Notepad++ are definitely Unicode applications. While they may enumerate the clipboard data formats and only request ANSI if there is no Unicode string format in the clipboard, they almost certainly will use the MultiByteToWideChar() Windows API to translate the text, and if they request Unicode anyway, Windows will helpfully translate it for them using that function. But this function will terminate converting a string at the first occurrence of a NULL character (when it is passed the usual -1 length for a C string).
     Most code doesn't bother to check if the translation has consumed all the input bytes. It's also not trivial to do, as the function returns how many code units it placed into the output buffer, but that does not have to match the number of input bytes, since some ANSI encodings can use more than one byte for some characters, and the UTF-16 encoding used by Windows can generate more than one code unit per character for certain very rarely used characters. For instance the MUSICAL SYMBOL G CLEF is outside the 16-bit code range that UTF-16 can represent in a single code unit. So if you want to preserve possible input strings beyond an embedded NULL character, things get fairly hairy when using the Windows conversion function, as you would have to call it repeatedly on each individual text section that is separated by a NULL character. But trying to build your own conversion routine is an even worse idea. Nobody in their sane mind wants to do encoding translations themselves. 😀
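     If you really had to preserve embedded NULs, the calling pattern would look roughly like this (a hypothetical C++ sketch of the repeated-call approach, not something the mentioned applications actually do):

         #include <windows.h>
         #include <string>

         /* convert an ANSI buffer that may contain embedded NUL characters by
            converting each NUL-separated segment separately, since passing -1 as
            the length would stop at the first NUL */
         std::wstring AnsiToWideKeepNuls(const std::string& input)
         {
             std::wstring result;
             size_t pos = 0;
             while (pos <= input.size())
             {
                 size_t end = input.find('\0', pos);
                 if (end == std::string::npos)
                     end = input.size();
                 int segLen = (int)(end - pos);
                 if (segLen > 0)
                 {
                     int wideLen = MultiByteToWideChar(CP_ACP, 0, input.data() + pos, segLen, NULL, 0);
                     if (wideLen > 0)
                     {
                         size_t offset = result.size();
                         result.resize(offset + wideLen);
                         MultiByteToWideChar(CP_ACP, 0, input.data() + pos, segLen, &result[offset], wideLen);
                     }
                 }
                 if (end < input.size())
                     result.push_back(L'\0');        /* preserve the embedded NUL */
                 pos = end + 1;
             }
             return result;
         }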
  23. My projects usually have one or two folders called Tests and Junk. Tests contains VIs that I create to test certain functionality. For instance, in a recent project I created a number of test VIs for various subVIs that I used in an FPGA program. These are typically not real tests in the sense of unit tests, but more a test bed to easily run the VIs interactively and check functionality and improvements as well as the behaviour of the various functions. Into Junk I put VIs that I sometimes create for a quick and dirty test of some function, occasionally also VIs that I might create to help in a forum post while waiting for the FPGA compiler or some tests to finish. Outside of these two folders there is usually almost never any unused VI. I make a point of regularly checking for VIs that are no longer used and simply deleting them (or sometimes moving them into Junk if I think there might be some future possibility that they are needed again), but most are leftovers from earlier attempts at VIs that have since been reworked and are now used in the program, so they can safely go away. And of course everything gets regularly checked into version control, with some more or less useful commit message. 😀
  24. Hmmm, clipboard copy! That has a very good chance of trying to be smart and doing text reformatting. I would definitely drag the entire control with all the data from one VI to the other, which should avoid Windows trying to be helpful. As a control, LabVIEW puts it in an application-private format in the clipboard together with an image of the control. LabVIEW itself can pull the private format out of the clipboard; other applications will not understand that format and will pull the image from the clipboard instead. If you only select the text, LabVIEW will store it as normal ASCII text in the clipboard, and Windows may try to do all kinds of things, including trying to translate it to proper Windows text, which could replace all \r "characters" with \r\n. There is even the chance that the text goes through ASCII to UTF-16 and back to ASCII on its way through the clipboard, and that is not always a fully 100% round-trip translation, even though the results may look optically the same. Text encoding translation is a total PITA to fully understand.
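     You can see what is actually offered with a few lines of C++ (a small sketch; Windows also synthesizes CF_UNICODETEXT from CF_TEXT and vice versa on request, which is where the implicit ANSI/UTF-16 round trip can sneak in):

         #include <windows.h>
         #include <cstdio>

         int main(void)
         {
             if (!OpenClipboard(NULL))
                 return 1;

             UINT fmt = 0;
             while ((fmt = EnumClipboardFormats(fmt)) != 0)
             {
                 char name[256] = "(predefined)";
                 GetClipboardFormatNameA(fmt, name, (int)sizeof(name));  /* named formats only */
                 printf("format %u: %s\n", fmt, name);
             }
             printf("CF_UNICODETEXT offered or synthesized: %d\n",
                    IsClipboardFormatAvailable(CF_UNICODETEXT));
             CloseClipboard();
             return 0;
         }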
  25. I can't guarantee that there is not some problem somewhere in a function, but I didn't find anything in my testing. How did you copy the deflated string? As binary data or as a string? If as a string, are you sure your transfer mechanism didn't do some text translation, such as an automatic \n to \r\n conversion? Did you use the LabVIEW Text File Read and Write functions to write your strings? A deflated stream is not a text string but a byte stream, no matter that LabVIEW lets you display it as a string. That is not a problem for LabVIEW itself, as it does not use special characters such as a terminating NULL character. But if you are not careful and use the Text File Write and Read functions in line-conversion mode, your binary stream of course gets modified, and that destroys the integrity of the binary information the inflate algorithm expects (and checks with CRCs too).
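     As a small C++ illustration of what line conversion does to a byte stream on Windows (the byte values are made up, not a real deflate stream):

         #include <cstdio>

         int main(void)
         {
             /* made-up bytes standing in for a deflated stream; note the 0x0A values */
             const unsigned char data[] = { 0x78, 0x9C, 0x0A, 0x4B, 0x0A, 0x00 };

             FILE* txt = fopen("stream_text.bin", "w");    /* text mode: each 0x0A becomes 0x0D 0x0A */
             FILE* bin = fopen("stream_bin.bin", "wb");    /* binary mode: bytes written verbatim */
             if (!txt || !bin)
                 return 1;
             fwrite(data, 1, sizeof(data), txt);
             fwrite(data, 1, sizeof(data), bin);
             fclose(txt);
             fclose(bin);
             /* stream_text.bin is now 8 bytes instead of 6 and will no longer inflate */
             return 0;
         }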