mje Posted October 18, 2013

It's not so rare that I deal with operating system calls directly via the Call Library Function Node (CLFN); I'm sure many of us break out these calls from time to time. When dealing with pointers, LabVIEW conveniently allows us to declare a CLFN terminal as a pointer-sized integer (SZ) or an unsigned pointer-sized integer (USZ). The behavior of such terminals is that at run time the value passed in is coerced to the right size depending on the architecture of the host operating system.

For those who are unfamiliar, this is not something that can be resolved beforehand in all cases. A 32-bit LabVIEW application may well run on a 64-bit operating system; it's not until run time that this can be resolved. When using SZ or USZ terminals, LabVIEW treats them all as 64-bit numbers on the block diagram and coerces them down to 32 bits if necessary, depending on the host operating system, when the node executes. To be clear, this works excellently.

But what to do when you need to pass pointers around as part of non-native types? Take for example the OVERLAPPED structure, which bears the following typedef:

typedef struct _OVERLAPPED {
    ULONG_PTR Internal;
    ULONG_PTR InternalHigh;
    union {
        struct {
            DWORD Offset;
            DWORD OffsetHigh;
        };
        PVOID Pointer;
    };
    HANDLE hEvent;
} OVERLAPPED, *LPOVERLAPPED;

Not to get into the details of the Windows API, but the ULONG_PTR, PVOID, and HANDLE types are all pointer-sized values, while DWORD is a "double word" and always a 32-bit value. So if we know we're dealing with a 64-bit host operating system, one possible way to represent this structure in LabVIEW is a cluster of the form:

{
    U64 Internal;
    U64 InternalHigh;
    U32 Offset;
    U32 OffsetHigh;
    I64 hEvent;
}

This assumes that you'd rather interact with the union as a pair of Offset/OffsetHigh values rather than a single Pointer value.
On a 32-bit host operating system, though, we need a different cluster:

{
    U32 Internal;
    U32 InternalHigh;
    U32 Offset;
    U32 OffsetHigh;
    I32 hEvent;
}

So what's the problem here? Well, basically we need to duplicate code for each of these situations. If I have a string of CLFN calls, I need one case for each host OS type. This seems error-prone to me, because the two cases would otherwise be identical apart from the typedef I'm stringing around for the cluster/struct.

So, at long last: do you think there would be value in being able to put a pointer-sized type directly in a cluster?

{
    USZ Internal;
    USZ InternalHigh;
    U32 Offset;
    U32 OffsetHigh;
    SZ hEvent;
}

The behavior of these would be similar to how they behave in the actual CLFN node. For all intents and purposes in LabVIEW they're 64-bit numbers; however, when passing through the CLFN node, their size is coerced when necessary. I'd also go so far as to expect their size to be properly coerced when typecasting, flattening, etc.

Am I way off base here, or would there be an actual use case for this?
LogMAN Posted October 18, 2013

For those who are unfamiliar, this is not something that can be resolved beforehand in all cases. A 32-bit LabVIEW application may well run on a 64-bit operating system; it's not until run time that this can be resolved. When using SZ or USZ terminals, LabVIEW treats them all as 64-bit numbers on the block diagram and coerces them down to 32 bits if necessary, depending on the host operating system, when the node executes.

To clarify something here: a 32-bit application never has to deal with a 64-bit pointer! Therefore you only have to deal with different pointer sizes between 32-bit and 64-bit applications, which of course depends on your LabVIEW IDE at compile time. There is also no way you could call a 64-bit DLL from a 32-bit application.

Now to the question at hand: I think a pointer-sized integer could help in many situations if you want to support both bitness types. However, this is only true if you can be sure that the same library call can in fact link to two separate libraries. For example, kernel32.dll can be called from either a 32-bit or a 64-bit application at the same location (C:\Windows\System32\kernel32.dll), but the 32-bit application is redirected to a different file at runtime. This is very well described here: http://msdn.microsoft.com/en-us/library/aa384187%28v=vs.85%29.aspx

If you want to write an application that supports both types (depending on your LabVIEW IDE), I suggest you handle all data as 64-bit integers and, depending on your bitness, typecast down to 32 bits for 32-bit calls (this is what LabVIEW does with SZ or USZ, afaik). The bitness can be determined by a conditional disable structure, as described here: http://digital.ni.com/public.nsf/allkb/F9770A64A5D5EF4A862576E8005985A8

In my opinion, a pointer-sized integer would be nice to have as a choice for integer representation.
However, it must be made very clear that the number of bits changes depending on the IDE bitness, as using it the wrong way could cause serious problems in some situations. Anyway, I have some situations in mind where I would like to have such a type.
mje (Author) Posted October 18, 2013

Good point, I was totally wrong on that part. Not even going to try to fake my way through justifying what I wrote. I guess that's the magic of Windows-on-Windows: the wizardry that happens behind the scenes allows you to make calls like that that just automatically work. I've not really dealt with this; the overlapped calls were just something I've run into a few times now but never attempted, because synchronized access worked just fine in either case.

So basically there is no real use for this as far as what I've outlined above. I can just throw my cluster in a conditional disable structure and be done with it. Depending on the IDE, it will pick the right cluster, and then we're off to the races. Well, that was far easier than I thought.
Rolf Kalbermatter Posted October 20, 2013

So basically there is no real use for this as far as what I've outlined above. I can just throw my cluster in a conditional disable structure and be done with it. Depending on the IDE, it will pick the right cluster, and then we're off to the races. Well, that was far easier than I thought.

Theoretically there could be some use, making the conditional disable structure unnecessary, but(!) it would violate a very standard paradigm that LabVIEW has kept intact since its inception as a multiplatform development system: a flattened datatype has the same format on all systems! Either that, or the Flatten function would have to treat the special pointer-sized datatype as a 64-bit entity everywhere (and we would have to hope that 128-bit pointers are far enough in the future that this wouldn't be obsoleted at some point, or require a new large pointer type, all for the sake of keeping the flattened format consistent).

Personally, I find this rather academic anyhow, since if you start to deal with API calls taking such parameters, the time is ripe to write an intermediate shared library which translates between this type of structure and a more LabVIEW-friendly parameter list. In there, the compiler will typically take care of any target-specific bitness issues automatically (with some care when writing the C code to not introduce bitness troubles), and the LabVIEW diagram stays clean and proper for all platforms.