Everything posted by Rolf Kalbermatter
-
Actually that is wrong. It will still work in 32-bit LabVIEW. The 32-bit Windows Winsock is documented to use 32-bit SOCKET values, and LabVIEW will pass a 32-bit value to the DLL even if the LabVIEW wire is 64-bit, as long as you configure the parameter as a pointer-sized variable in the Call Library Node. This means all is peachy in both 32-bit and 64-bit LabVIEW, no matter what Windows decides to do underneath. With your current setup, the 64-bit value returned by socket() will be down-converted to 32-bit, transported as such through all VIs, and then up-converted back to a 64-bit value each time a WinSock library function is called (if that parameter is correctly configured as a pointer-sized integer; if not, only the lower 32 bits of that parameter are guaranteed to be initialized and the upper 32 bits may contain garbage, which may or may not be a problem). That may go fine as long as the upper 32 bits aren't used, but it may also go wrong. If you change it to a LabVIEW 64-bit integer everywhere in LabVIEW itself and configure those parameters everywhere as pointer-sized integers, it will always go right, until 128-bit Windows comes out of course. But I have a suspicion that by that time we both will not be programming in LabVIEW anymore, and LabVIEW may be an anecdotal remark in the history of programming languages that existed somewhere in a long-ago past.
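To make the width difference concrete, here is a minimal C/C++ sketch of mine (not from the original post) showing why a pointer-sized Call Library Node parameter matches the Winsock declaration on both platforms; widening the handle to a 64-bit transport value mirrors carrying it on a U64 LabVIEW wire:

    /* SOCKET is declared as UINT_PTR: 32 bits wide in a 32-bit process,
     * 64 bits wide in a 64-bit process. Widening it to 64 bits and back
     * is lossless either way. Link against ws2_32.lib. */
    #include <winsock2.h>
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        WSADATA wsa;
        WSAStartup(MAKEWORD(2, 2), &wsa);
        SOCKET s = socket(AF_INET, SOCK_STREAM, IPPROTO_TCP);
        uint64_t wire = (uint64_t)s;   /* transport value, like a U64 wire */
        SOCKET back = (SOCKET)wire;    /* what the DLL receives again */
        printf("sizeof(SOCKET) = %u, round trip ok = %d\n",
               (unsigned)sizeof(SOCKET), (int)(s == back));
        closesocket(s);
        WSACleanup();
        return 0;
    }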
-
It's a common industry standard, so why should NI be an exception? Wanting to download Windows XP for whatever strange reason? Visual Studio 2005 or 2008, just to be a bit obnoxious? It all can be done, but no IT company is trying to make that simple without some special paid support contract. Why? One reason is that people who use those old versions will eventually often need some kind of support too, and providing even the simplest support, such as license activation or help with installation troubles, takes significantly more effort for anything but the latest version.
-
What I mean here is that SOCKET is defined to be a UINT_PTR. This means that on 64-bit Windows it is a 64-bit entity, even nowadays. The fact that the upper 32 bits may or may not be used internally is an implementation detail that we as API users should NOT rely on. So while it MAY work to only transport 32 bits around in the LabVIEW library, this may be something that only works by chance on your machine, or may at some point in the future fail generally because of some under-the-hood design change. While I agree that you could argue that such future changes are not your current problem, and that you would be justified to charge in the future for the modification if it starts to fail, the point is that it may fail even nowadays. We don't know what a SOCKET really is, and even looking at its numeric value and finding that it never goes higher than a few thousand, indicating that it is really more like an index into some internal table, is no guarantee. It could very well be a pointer that happens to usually be mapped to an address below 4GB but may suddenly also be allocated above that limit, and then your 32-bit handling fails. Factually, treating the SOCKET variable as a 32-bit value at even a single point in your program is a bug, as it does not match the published API documentation. The fact that it may not crash or fail for you is no argument to leave that bug in the code.

It's the same thing generally with people who dabble with the Call Library Node: "Look mom, it doesn't crash anymore!!", wrap it up and declare victory is totally the wrong mindset here. Once it doesn't always crash immediately anymore, the real hard work only starts. Not every memory corruption will crash your process immediately; instead it may crash way down the timeline, after you have written and added several dozen more modules to your program, and then the bug hunting is VERY cumbersome, as you have no idea where to start and may instinctively suspect the last changes you made (well, I usually have an idea: if there is a DLL involved anywhere, it has about a 95% chance of being the culprit, even if it never crashed before 😀). The only exception to that rule for me in the past was LabVIEW bindings that came with NI drivers. Those are usually very stable and almost never the cause of a crash. But any other DLL binding, self-developed or from any 3rd party, is a prime suspect in those cases.
-
I think you should. The upper 32 bits of the SOCKET on 64-bit platforms may not be used nowadays, but there is no guarantee that they won't be used in the future, for instance when you get Windows 12 or Windows Jubilee or something, which will only be available as a 64-bit OS anyway. Someone at Microsoft may decide to stick something into those unused 32 bits of the SOCKET, or may decide that it should be a pointer directly after all. Then this library won't work unless you consistently changed the socket handle to be a 64-bit integer in LabVIEW and all Call Library Nodes to use a pointer-sized integer for these parameters. As to byte swapping, it can be an annoyance, but I find that sort of byte twiddling actually entertaining. 😀
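For what that sort of byte twiddling typically looks like around sockets, a small illustrative C/C++ sketch of mine (not from the post): multi-byte values travel on the wire in network (big-endian) order, so a little-endian host has to swap them.

    /* Host <-> network byte order conversion, the classic socket-level
     * byte swapping. Link against ws2_32.lib on Windows. */
    #include <winsock2.h>
    #include <stdio.h>

    int main(void)
    {
        unsigned short port = 8080;
        unsigned short wire = htons(port);    /* host -> network order */
        printf("host 0x%04x -> network 0x%04x -> host 0x%04x\n",
               port, wire, ntohs(wire));
        return 0;
    }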
-
I can't help you with this. We have created many cRIO and some sbRIO systems in LabVIEW, and while we see the supply chain disruption too, which makes getting the correct systems shipped in time a real problem, we have not yet considered redesigning any of them without LabVIEW. If we were to go that route, I would not expect to reuse much of the existing LabVIEW code in any way. The design documents are likely the only thing that will be really helpful, which is one reason to actually write them and not just trust that "LabVIEW code is self-documenting". It seldom is when you look at it a year or more later, unless it is very trivial code, and FPGA code really never is trivial; there are typically many involved code segments in it. Even the real-time part would need to be rebuilt with something else, as interfacing LabVIEW to 3rd-party FPGA designs is not easy. You would at least need to replace the entire cRIO shared library with something of your own that interfaces to whatever FPGA architecture you are using.
-
Smart thinking; I was overthinking this with the conditional compile structure. Just create a 64-bit array and declare the parameter as a pointer-sized integer array, and LabVIEW will happily do the conversion automatically on 32-bit platforms. Just one caveat, although it is of no significance nowadays anymore: this trick of treating the fd_set structure simply as an array of integers only works on Little Endian with 8-byte compiler alignment. On Big Endian this would go wrong, as the count stays a 32-bit integer no matter what, and the lower significant 32 bits of the first 64-bit integer would go in the wrong place. But the only Big Endian platforms that LabVIEW still had until recently were the VxWorks real-time targets, and their Berkeley socket implementation is mostly Unix-like, although with some interesting deviations. Those usually would not really matter if you just compile a program from C sources with the correct headers, but they definitely could go very bad when trying to interface a LabVIEW VI to it, which has no notion of their header files but simply requires you to play header parser yourself.
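As a self-contained C/C++ illustration of that Little Endian trick (my sketch; fd_set64 is a hypothetical stand-in for the 64-bit Winsock fd_set layout discussed in this thread):

    #include <stdint.h>
    #include <string.h>
    #include <stdio.h>

    /* Hypothetical stand-in for the 64-bit Winsock fd_set layout: a 32-bit
     * count, 4 bytes of alignment padding, then 64-bit SOCKET values. */
    typedef struct {
        uint32_t fd_count;       /* offset 0 */
        /* 4 padding bytes here (fd_array is 8-byte aligned) */
        uint64_t fd_array[64];   /* offset 8 */
    } fd_set64;

    int main(void)
    {
        /* The "array of 2 * 64-bit integers" trick from the post above. */
        uint64_t overlay[2] = { 1, 0x12345678 };  /* fd_count = 1, one socket */
        fd_set64 set;
        memcpy(&set, overlay, sizeof overlay);
        /* Little Endian: the low 32 bits of overlay[0] land on fd_count and
         * the high 32 bits overwrite the don't-care padding. On Big Endian
         * the HIGH half would land on fd_count instead, and this breaks. */
        printf("fd_count = %u, fd_array[0] = 0x%llx\n",
               (unsigned)set.fd_count, (unsigned long long)set.fd_array[0]);
        return 0;
    }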
-
That library was reworked by me in 2008, before LabVIEW had any 64-bit version. Unfortunately, SOCKET in Windows is a UINT_PTR, which means that it is a 32-bit integer in 32-bit LabVIEW and a 64-bit integer in 64-bit LabVIEW. But you can NOT port this over to Linux as is. There the file descriptor used for all socket functions is an explicit int, so always a 32-bit value! This is because the Berkeley socket library was originally developed for Unix and built around the generic socket concept in Unix, which traditionally uses int file descriptors. When Microsoft took the Berkeley socket library and adapted it for Windows as the WinSock library, it was already very revolutionary to have this as a fully 32-bit code library; Windows itself was still mainly a selector-based 16-bit environment. And Microsoft likes to use handles, which are opaque pointers, and those happened to be 32-bit integers too back then.

The library as posted on the NI forum does NOT work in 64-bit Windows, and that is not just because of the 64-bit SOCKET handle itself but also because of the fd_set data structure, which is defined as follows:

    typedef struct fd_set {
        u_int  fd_count;               /* how many are SET? */
        SOCKET fd_array[FD_SETSIZE];   /* an array of SOCKETs */
    } fd_set;

In 32-bit LabVIEW there is nothing strange here. The fd_array with SOCKET handles begins at offset 4, so the LabVIEW array of 2 * 32-bit integers works perfectly fine. The first array element corresponds to fd_count and the second element to the first element in fd_array, and as long as fd_count doesn't contain more than 1 as its value, it does not matter that the fd_array doesn't contain all 64 elements that this structure is declared for.

On 64-bit, the SOCKET is a 64-bit integer and is naturally aligned on an 8-byte boundary in that structure. So there is a dummy 32-bit filler element between fd_count and fd_array, and the first SOCKET should be a 64-bit integer. The solution to make this work in 64-bit LabVIEW, besides changing the SOCKET handle parameter of all functions to be a pointer-sized integer, is to use a conditional compile structure: for the 32-bit case use an array of 32-bit integers as fd_set, just as is used now; for the 64-bit case use an array of 64-bit integers. The rest remains the same. You can even use an array of 2 * 64-bit integers, just the same as the array of 2 * 32-bit integers for the 32-bit case. Since we run on a Little Endian machine under Windows, the lower significant 32 bits of the first 64-bit element happen to match the location in memory where fd_count is expected, and the extra 4 filler bytes are overwritten by the higher significant 32 bits of the first 64-bit array element, which is not a problem. They are really DON'T CARE.

Your observation that this weird error can happen if the fd_set structure is not correctly initialized was spot on. But not because it is not correctly initialized in the VI (it is), but because the array of 32-bit values has a different layout than what the 64-bit Winsock library expects.
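A quick way to see that layout difference is a little C/C++ check of mine (not from the post), compiled once as 32-bit and once as 64-bit:

    /* Prints the fd_set layout. As 32-bit, fd_array starts at offset 4;
     * as 64-bit, the 8-byte alignment of SOCKET pushes it to offset 8. */
    #include <stdio.h>
    #include <stddef.h>
    #include <winsock2.h>

    int main(void)
    {
        printf("sizeof(SOCKET)             = %u\n", (unsigned)sizeof(SOCKET));
        printf("offsetof(fd_set, fd_count) = %u\n", (unsigned)offsetof(fd_set, fd_count));
        printf("offsetof(fd_set, fd_array) = %u\n", (unsigned)offsetof(fd_set, fd_array));
        return 0;
    }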
-
Did you start your program as Administrator? Raw sockets are a privileged resource on all modern OSes, only available to specially elevated processes, such as those started as root/admin or, under Linux, processes with an explicit privilege granted. IcmpSendEcho() supposedly calls into a device driver (or maybe a service) to do the actual work. Since those execute in the context of the Windows kernel, they are not restricted by those pesky user right limitations. By default they execute with System privileges, which is almost like an Administrator and in certain ways even more.
-
That would seem strange. Nagle is an algorithm on the TCP protocol level, not on the IP level. More likely it is a limitation of the synchronous nature of that API. Using the asynchronous IcmpSendEcho2() might give more control. However, it is more complex, as you have to use either a Windows event handle or a callback routine, with the second not really being a feasible option with the LabVIEW Call Library Node.
-
On Windows I recommend that function. ICMP is a low-level protocol and can only be implemented in user space by using raw sockets, but that is a privileged resource that can only be opened by elevated processes on Windows, and by a process which is either root (UID = 0) or has the CAP_NET_RAW capability granted on Linux (and likely macOS, which also uses the BSD socket library). My network library, which I put up elsewhere on here some 12 years ago or so, did provide raw sockets besides TCP and UDP, and had VIs implementing the ICMP echo command, but that was of very limited value because of these permission issues. The command line ping utility on Linux is THE way to solve it there. It's unfortunate, but with raw sockets someone really could easily start doing very nasty things, accidentally or on purpose, so I understand why they are protected. Also note that many servers nowadays disable the ICMP echo command on purpose to avoid becoming the target of DoS attacks.
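For reference, a minimal C/C++ sketch of mine of calling that function (the target address and timeout are arbitrary):

    /* Minimal IcmpSendEcho() call; link against iphlpapi.lib and ws2_32.lib.
     * The reply buffer must hold at least one ICMP_ECHO_REPLY plus the
     * request data. */
    #include <winsock2.h>
    #include <iphlpapi.h>
    #include <icmpapi.h>
    #include <stdio.h>

    int main(void)
    {
        HANDLE icmp = IcmpCreateFile();
        if (icmp == INVALID_HANDLE_VALUE)
            return 1;

        char request[] = "ping";
        char reply[sizeof(ICMP_ECHO_REPLY) + sizeof(request) + 8];
        DWORD count = IcmpSendEcho(icmp, inet_addr("192.168.1.1"),
                                   request, sizeof(request), NULL,
                                   reply, sizeof(reply), 1000 /* ms timeout */);
        if (count)
        {
            PICMP_ECHO_REPLY r = (PICMP_ECHO_REPLY)reply;
            printf("Reply, RTT = %lu ms\n", r->RoundTripTime);
        }
        IcmpCloseHandle(icmp);
        return 0;
    }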
-
I worry about NI hardware controller release
Rolf Kalbermatter replied to Thang Nguyen's topic in Hardware
Is that spam to advertise a link, or do you have a specific question?
-
Faster Spline interpolation - c++ dll implementation?
Rolf Kalbermatter replied to Bruniii's topic in Calling External Code
You need to compile it into a DLL to be able to call it. But LabVIEW code IS compiled too, and fairly performant at that. 40 us is not a lot of time to do those kinds of mathematical operations. Even if you use a highly optimizing C compiler like the Intel C compiler, you are most likely not going to see huge differences when using that code as a DLL. You can of course try, but you will need to use a C compiler of some sort for this. And it is C++, using the standard template classes, so you will also need to write a small C wrapper around it in order to be able to call it from the LabVIEW Call Library Node. As the code is GCC specific, I can't help you. If it were compilable with Visual C as is, I might try to create the DLL, but as already said, my hopes that you will see significant performance improvements from the C++ code are not that great.
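Such a wrapper is typically just an extern "C" shim exporting flat functions. A hypothetical C++ sketch of mine (Spline and the spline_* names are placeholders, not the actual classes in that code; the trivial linear interpolation only stands in for the real spline math):

    #include <vector>

    // Placeholder class standing in for the actual C++ spline implementation.
    class Spline {
    public:
        Spline(std::vector<double> x, std::vector<double> y)
            : x_(std::move(x)), y_(std::move(y)) {}
        double operator()(double v) const {
            // trivial linear interpolation as a stand-in for the spline math
            size_t i = 1;
            while (i + 1 < x_.size() && x_[i] < v) ++i;
            double t = (v - x_[i - 1]) / (x_[i] - x_[i - 1]);
            return y_[i - 1] + t * (y_[i] - y_[i - 1]);
        }
    private:
        std::vector<double> x_, y_;
    };

    // Flat C entry points with C linkage, suitable for the Call Library Node.
    extern "C" {

    __declspec(dllexport) void* spline_create(const double* x, const double* y, int n)
    {
        return new Spline(std::vector<double>(x, x + n),
                          std::vector<double>(y, y + n));
    }

    __declspec(dllexport) double spline_eval(void* handle, double v)
    {
        return (*static_cast<Spline*>(handle))(v);
    }

    __declspec(dllexport) void spline_destroy(void* handle)
    {
        delete static_cast<Spline*>(handle);
    }

    } /* extern "C" */

In LabVIEW, spline_create would map to a Call Library Node returning a pointer-sized integer that you then pass to spline_eval and finally to spline_destroy.
-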
However, be aware that the GitHub code as it stands is an experiment under construction. Lots of things don't work right yet, and there are many bugs in the underlying shared library that simply won't do the right thing yet. I'm slowly working on it, but it is a side project, and sometimes I just don't feel like debugging C code very much.
-
Read through this thread, and specifically Brian's (Hoovahs) response.
-
It for sure helps. I was really thinking that I was overlooking something here, but with this explanation everything makes sense. The actual code in the DLL is substantially different from 4.0. In 4.0, most of it was just a very thin wrapper around existing LabVIEW functions. But those LabVIEW functions do not support Unicode paths, so I have been refactoring that code substantially to support full Unicode paths in the underlying functions (and to create compatibility wrappers that still use the old LabVIEW paths, which of course won't support Unicode path names). The advantage of using full Unicode throughout the ZIP tools will eventually be that path names can contain characters not present in your current ANSI locale, and that path names can be almost arbitrarily long, namely 32k characters. The ZIP standard internally already supports UTF8-encoded file names, so once this is fully working you can also extract and create ZIP files that use UTF8 filenames. But this complete port to full Unicode support is not yet finished. Most of the actual programming is done, but it needs more testing.
-
So I did take a look, and yes, it was that function, but no, it doesn't only fail in 64-bit mode but also in 32-bit mode. So I'm a little lost as to why you feel it did work with a UNC path in LabVIEW 32-bit. Going to do some more tests with this and try to clean up a few related things.
-
Error 1 is the all-generic "invalid parameter" error. It could indeed be a problem in interpreting UNC paths somehow. I'll try to look into that. I haven't really run the code yet on a 64-bit system with UNC paths, but I see where the error 1 seems to come from when looking at the source code. It looks like LabVIEW has changed its stance on what a UNC path represents between 32-bit and 64-bit. I use the function FIsAPathOfType() to check that the passed-in path is an absolute path (I do not want to try to open relative, and of course empty or invalid, paths, as I have no way of knowing what they should be relative to, and I find the idea of using the current directory an atrocity that has absolutely no place in a modern multithreading application). Going to verify that this is the culprit, as it could also come from somewhere else, but it looks suspicious, and I know that internally UNC paths are treated as a different type in LabVIEW, though so far it considered them absolute too (which they are).
-
What would be the error you get?
-
Some people would say that that is your problem. Others that it is a bliss. 😀
-
I'm pretty sure the .NET RTF control is not much more than a fairly simple wrapper around the actual RichTextEdit control, which is essentially a Windows Common Controls component. Pretty much all of the business logic is in the corresponding Windows DLL, and the API is exposed as macros around the Windows messages that you send to the control.
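To illustrate what "macros around Windows messages" means here, a small C/C++ sketch of mine (hwnd is assumed to be a rich edit control created elsewhere):

    #include <windows.h>
    #include <richedit.h>

    void SetEditorBackground(HWND hwnd)
    {
        /* richedit.h exposes the control's API as message IDs such as
         * EM_SETBKGNDCOLOR; the business logic runs inside the riched DLL. */
        SendMessage(hwnd, EM_SETBKGNDCOLOR, 0, (LPARAM)RGB(240, 240, 240));
    }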
-
There is a reason that it is still marked Beta (and likely will remain so for the foreseeable future). It is a telltale sign that even the RichTextEdit control, which is a Microsoft technology, has problems with that setting. Basically, enabling UTF8 as a codepage feature would be a nice idea IF all Windows applications were properly prepared to work with codepages that can use more than 1 byte per character. But since the simple assumption of 1 byte == 1 character works for all English-speaking countries, many sins have been committed in this respect and nobody ever noticed. Enabling this feature tries to solve something that cannot really be solved, since there is simply too much cruft out there that will fail with it (and yes, LabVIEW also has areas where it will stumble over this). Linux is in that respect a bit better off. The Linux developers were never shy about simply abandoning something, confronting people with the facts and telling them: this is how we will do it from now on; take it or leave it, but don't complain if it doesn't work for you in the future when you do not want to follow the new standard. Most desktop distributions nowadays simply use UTF8 as the standard locale throughout, pretty much what this setting would do under Windows. And distributions simply removed applications that could not deal with it properly.
-
That grammar sounds almost as bad as what those Nigerian scammers use, who pretend to have embezzled a few millions and now are eager to find someone who would be happy to take that money from them. 😀
-
I don't know about VLAs; I never used one myself. Our company is on a Partner Software Lease contract, which has been an annual subscription-based license for as long as I remember. In theory I don't have to care about all this as long as I'm employed at Averna, but I do care about LabVIEW and think it is a bad move for people who are not under such a company-provided license agreement with NI. That the justifications NI gave for moving to a subscription-only license model almost all sound to me like marketing mumbo-jumbo that tries to turn the entire meaning of words upside down, or are actually completely misconstructed arguments, didn't help that at all.

For normal perpetual licenses, that is definitely how it works. If you make use of the NI offer to extend your expiring SSP for up to 3 years of subscription licensing for the price an SSP was in the past (about half of what a yearly subscription costs now), your perpetual license automatically converts to a subscription license. Instead, you could choose to buy a new subscription license for the full cost and let your existing SSP expire. In that case you keep the perpetual license from your old license, which gives you the right to install and use LabVIEW 2021, and in addition a subscription to the newest LabVIEW version for as long as you keep your subscription active. Once you let the subscription expire, you still have the perpetual license for LabVIEW 2021 but can't (easily) look at any VIs you may have created with newer LabVIEW versions under the subscription model. For VLAs a different solution may exist, but as I said, I never had to deal with VLAs myself and have absolutely no knowledge about them.
-
Ahhh well! Yes, that was a choice I made at that point. Without a predefined length, I have to loop with ever-increasing (doubling every time) buffer sizes to try to inflate the string. Each time I try with a longer buffer, the ZLIB decoder starts filling the buffer until it runs out of buffer space; then I have to increase the space and try again. The comment is actually wrong: it ends up looping 8 times, which results in a buffer that is 256 times as large as the input. That should still work with a buffer that has been compressed by over 99.6%, actually! The only thing I could think of is to increase the buffer even more aggressively than 2^(x+1), maybe 4^(x+1)? With the current 8 iterations, that would offer an inflated buffer of up to 65536 times the input buffer. In each iteration the ZLIB stream decoder works on more and more bytes, and if the buffer is too small, everything is thrown away and started over again, which is a really performance-intensive operation. I also do not want to loop indefinitely, as there is always the chance that corrupted bits in the stream throw the decoder off in a way that it never terminates, and then your application will loop until it eventually runs out of memory, which is a pretty hard crash in LabVIEW. So if you know that your data is going to be very compressible, you have to do your own calculation and specify a starting buffer size that is big enough. If you do this over the network, I would recommend prepending the uncompressed size to the stream anyway. That really helps to not destroy the performance gain that you tried to achieve with the ZLIB compression in the first place.
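The retry strategy described above looks roughly like this in C/C++ against zlib's one-shot uncompress() API (my sketch; the growth factor and names are illustrative):

    #include <zlib.h>
    #include <stdlib.h>

    /* Inflate a buffer of unknown decompressed size: start at 2x the input
     * and double on Z_BUF_ERROR, giving up after 8 tries (max 256x). */
    unsigned char *inflate_unknown(const unsigned char *src, size_t srcLen,
                                   size_t *outLen)
    {
        uLongf destLen = srcLen ? (uLongf)(srcLen * 2) : 64;
        unsigned char *dest = NULL;
        for (int i = 0; i < 8; i++)
        {
            unsigned char *p = (unsigned char *)realloc(dest, destLen);
            if (!p) break;
            dest = p;
            uLongf len = destLen;
            int err = uncompress(dest, &len, src, (uLong)srcLen);
            if (err == Z_OK) { *outLen = len; return dest; }
            if (err != Z_BUF_ERROR) break;  /* corrupt stream etc.: give up */
            destLen *= 2;                   /* too small: throw away, retry */
        }
        free(dest);
        return NULL;
    }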
-
I am taking a sabbatical from LabVIEW and NI R&D
Rolf Kalbermatter replied to Aristos Queue's topic in LAVA Lounge
Without a more qualified statement about how you arrive at this conclusion, such as what numbers are used, there is no way I can believe this. If you look at other indicators, such as participation in the various forums (NI, LavaG and LabVIEWForum.de), all I can say is that those numbers look VERYYYYYY much lower than a few years back. So either all those new users that are added year over year are real cracks who do not need any support of any kind, or NI has a secret support channel they can tap into that us mere mortals do not have, or something is totally off. The publicly visible exposure of LabVIEW, just like that of NI itself, has definitely diminished tremendously in the last 5 years. Maybe all those new users are inherent user licenses included with the semiconductor test setups that are sold. Buying LabVIEW on the website has recently been an almost impossible exercise, and so is getting informed quotes.