Taylorh140 Posted January 4, 2022

Is there a LabVIEW type that changes from 32 to 64 bits depending on whether you're running the 32-bit or 64-bit development environment?
Rolf Kalbermatter Posted January 5, 2022

17 hours ago, Taylorh140 said: Is there a LabVIEW type that changes from 32 to 64 bits depending on whether you're running the 32-bit or 64-bit development environment?

It depends what you mean. The Call Library Node lets you configure parameters as pointer-sized integers to pass data to DLL functions that take pointer-sized values. On the diagram and front panel you always use a 64-bit (un)signed integer control. That should work for almost everything. The only case I can think of where you might need a control that changes size depending on the platform is if you want to pass clusters containing pointers to a Call Library Node. But LabVIEW is a very strictly typed programming environment and doesn't know such a control. The only solution for that is the Conditional Disable structure, where you program the 32-bit and 64-bit versions in different frames and let LabVIEW choose the right one.
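(To make the cluster case concrete, here is a hypothetical C struct of the kind Rolf describes; the name and fields are invented for illustration.)

```c
#include <stdint.h>

/* A minimal sketch of why a cluster containing a pointer cannot have one
   fixed layout: this struct, as a DLL might declare it, changes size and
   field offsets with the platform bitness. */
typedef struct {
    uint32_t length;  /* 4 bytes on both platforms */
    char    *buffer;  /* 4 bytes in a 32-bit DLL, 8 bytes in a 64-bit DLL */
} StringDesc;

/* sizeof(StringDesc) is 8 in 32-bit code but 16 in 64-bit code (the
   pointer is 8-byte aligned), so the matching LabVIEW cluster needs a
   U32 or a U64 in that position depending on the bitness, hence the two
   frames of the Conditional Disable structure. */
```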
Taylorh140 Posted January 5, 2022

So I'm a bit confused right now; perhaps you can set me straight on this. I'm running x64 Windows and 32-bit LabVIEW, but when I'm using a Call Library Function Node to call a DLL from SysWOW64, the "pointer sized integers" are 64 bits? I would expect them to be 32 bits for 32-bit LabVIEW, as I don't think it can interact with 64-bit DLLs regardless of the pointer size. And yes, I'm sure that 64 bits would hold the value. However, since we need to match the C prototype, it makes me think this will cause problems.
Rolf Kalbermatter Posted January 5, 2022

10 minutes ago, Taylorh140 said: I'm running x64 Windows and 32-bit LabVIEW, but when I'm using a Call Library Function Node to call a DLL from SysWOW64, the "pointer sized integers" are 64 bits? I would expect them to be 32 bits for 32-bit LabVIEW, as I don't think it can interact with 64-bit DLLs regardless of the pointer size.

No! What LabVIEW passes to the DLL in this case is the lower 32 bits of the 64-bit value. LabVIEW has no variable-sized numeric types, for a number of reasons. So a pointer-sized integer parameter in the Call Library Node is ALWAYS transported as a 64-bit integer in LabVIEW. That is totally independent of the OS and LabVIEW bitness. The Call Library Node will generate the necessary code to pass the right part of the 64-bit integer to the actual parameter. But on the LabVIEW diagram (and front panel controls) it is ALWAYS a 64-bit integer.
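(In C terms, the behavior Rolf describes looks roughly like the sketch below; some_dll_function is a hypothetical stand-in for the library being called, and this is not actual LabVIEW source.)

```c
#include <stdint.h>

extern void some_dll_function(uintptr_t arg);  /* hypothetical DLL export */

/* The diagram always carries a 64-bit integer; the Call Library Node
   narrows it to the platform's pointer width at the call site. */
void call_with_pointer_sized_arg(uint64_t diagram_value)
{
    uintptr_t arg = (uintptr_t)diagram_value;
    /* 32-bit LabVIEW: uintptr_t is 32 bits wide, so only the lower
       32 bits of diagram_value reach the DLL.
       64-bit LabVIEW: uintptr_t is 64 bits wide, so the whole value
       passes through unchanged. */
    some_dll_function(arg);
}
```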
Taylorh140 Posted January 5, 2022

Well, that's way more useful than I thought it would be. So there's no need to distinguish, and better portability. This makes much more sense.
ShaunR Posted January 5, 2022

One caveat: because, as Rolf states, LabVIEW passes the lower 32 bits (big-endian), you have to be careful when manipulating pointers returned from functions if they are little-endian. You don't come across it very often in LabVIEW, since many APIs have a create or new function and the pointer can be treated as opaque. But there are Windows functions that return pointers that need to be converted.
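(A short C illustration of the caveat being discussed, assuming the flat data holds a 32-bit pointer value: LabVIEW's Typecast reads flat data as big-endian, while x86 Windows stores pointers little-endian.)

```c
#include <stdint.h>
#include <string.h>

/* Big-endian interpretation, which is what LabVIEW's Typecast performs. */
uint32_t typecast_u32(const uint8_t bytes[4])
{
    return ((uint32_t)bytes[0] << 24) | ((uint32_t)bytes[1] << 16) |
           ((uint32_t)bytes[2] << 8)  |  (uint32_t)bytes[3];
}

/* Native interpretation (little-endian on x86), which is what
   Flatten/Unflatten with native byte order performs. */
uint32_t native_u32(const uint8_t bytes[4])
{
    uint32_t v;
    memcpy(&v, bytes, sizeof v);
    return v;
}

/* For a pointer 0xDEADBEEF stored in memory as EF BE AD DE, native_u32
   recovers 0xDEADBEEF while typecast_u32 yields the swapped 0xEFBEADDE. */
```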
Rolf Kalbermatter Posted January 6, 2022

13 hours ago, ShaunR said: One caveat: because, as Rolf states, LabVIEW passes the lower 32 bits (big-endian), you have to be careful when manipulating pointers returned from functions if they are little-endian. But there are Windows functions that return pointers that need to be converted.

Would you care to elaborate on which pointers you mean? I can't think right now of a case where what you say could apply.
ShaunR Posted January 6, 2022

1 hour ago, Rolf Kalbermatter said: Would you care to elaborate on which pointers you mean? I can't think right now of a case where what you say could apply.

Most recently I had to do this with QUERY_SERVICE_CONFIGA.
Rolf Kalbermatter Posted January 6, 2022

29 minutes ago, ShaunR said: Most recently I had to do this with QUERY_SERVICE_CONFIGA.

I think I misunderstood, or you explained it somewhat poorly. There should be no reason to do any byte (word, int) swapping in this case unless you use the Typecast function to typecast a properly sized byte array to or from the cluster. And that would actually not be the right thing to do: Typecast always (de)standardizes a byte stream to or from big-endian format. If you really want to pass a byte array to the function, you can instead use Flatten To String or Unflatten From String with the byte order set to native format; then no swapping is done. Alternatively, I usually create two separate clusters, one with 32-bit integers and one with 64-bit integers for the pointers. Then, using a Conditional Disable structure, I set up two separate calls with 32-bit and 64-bit variants of the Call Library Node and pass the corresponding cluster to each. And yes, it is a pain in the ass to do that for more than one or two call sites, so when there are many of them, and/or other nasties that don't fit well with a direct LabVIEW interface, I tend to create an intermediate shared library that properly translates between LabVIEW-friendly clusters, handles, and whatever else, and the C-specific API datatypes directly in C.
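(As a sketch of that intermediate-shared-library approach applied to ShaunR's example: a wrapper can flatten the pointer-laden QUERY_SERVICE_CONFIGA into fixed-size fields so a single LabVIEW cluster matches both bitnesses. The exported name, buffer sizes, and the subset of fields copied are assumptions, not a definitive implementation.)

```c
#include <windows.h>
#include <stdint.h>

/* LabVIEW-friendly flat struct: no pointers, identical layout in
   32-bit and 64-bit builds (hypothetical name and field selection). */
typedef struct {
    uint32_t serviceType;
    uint32_t startType;
    char     binaryPathName[MAX_PATH];  /* pointer replaced by inline buffer */
} LvServiceConfig;

__declspec(dllexport) int32_t LvQueryServiceConfig(SC_HANDLE service,
                                                   LvServiceConfig *out)
{
    /* The union keeps the buffer correctly aligned for the Win32 struct;
       8K is the documented maximum buffer size for this call. */
    union {
        QUERY_SERVICE_CONFIGA cfg;
        BYTE raw[8192];
    } buf;
    DWORD needed = 0;

    if (!QueryServiceConfigA(service, &buf.cfg, sizeof(buf), &needed))
        return (int32_t)GetLastError();

    out->serviceType = buf.cfg.dwServiceType;
    out->startType   = buf.cfg.dwStartType;
    lstrcpynA(out->binaryPathName,
              buf.cfg.lpBinaryPathName ? buf.cfg.lpBinaryPathName : "",
              sizeof(out->binaryPathName));
    return 0;  /* the caller only ever sees fixed-size fields */
}
```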
ShaunR Posted January 6, 2022

5 hours ago, Rolf Kalbermatter said: unless you use the Typecast function to typecast a properly sized byte array to or from the cluster.

Insightful, and amazing that you identified that from my terse comment. Yes indeed. It seems the source was indeed a typecast and I was unaware of this "feature". I retract my previous comment but won't delete it, for historical reference.

5 hours ago, Rolf Kalbermatter said: Alternatively, I usually create two separate clusters, one with 32-bit integers and one with 64-bit integers for the pointers.

Indeed. However, the function that returns it required a call to create local-heap memory (in the Win32 API), which I then needed to get back into a cluster; otherwise I would have used this method. This, in itself, was unusual, so maybe I'm missing something else in this particular example.
Rolf Kalbermatter Posted January 6, 2022

4 hours ago, ShaunR said: However, the function that returns it required a call to create local-heap memory (in the Win32 API), which I then needed to get back into a cluster; otherwise I would have used this method. This, in itself, was unusual, so maybe I'm missing something else in this particular example.

I would have to see the code in question. But generally I would think that you can call functions like LocalAlloc() with the return value configured as a pointer-sized integer, using a 64-bit integer to transport it on the diagram/front panel. Then, when passing it to an API, pass it again as a pointer-sized variable, and when putting it into a cluster, assign it to either the 32-bit integer or the 64-bit integer respectively. There should be no problem with this, since on a 32-bit platform only the lower 32 bits will have been assigned by the Call Library Node. There might be a sign extension if you happen to configure it as a signed integer, but that should not be a problem either.
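(A minimal C sketch of the round trip Rolf describes, under the assumption that both the return value and the parameter are configured as pointer-sized integers in the Call Library Node.)

```c
#include <windows.h>
#include <stdint.h>

/* The pointer from LocalAlloc() travels through LabVIEW as a 64-bit
   integer.  Widening an unsigned pointer zero-extends, so on 32-bit
   LabVIEW the upper 32 bits are simply zero and the value comes back
   intact when narrowed again. */
int main(void)
{
    /* Return value configured as pointer-sized integer: LabVIEW widens
       it to a U64 on the wire. */
    uint64_t lv_wire = (uint64_t)(uintptr_t)LocalAlloc(LPTR, 256);

    /* Passing it back as a pointer-sized parameter: LabVIEW narrows the
       U64 to the platform's pointer width again. */
    LocalFree((HLOCAL)(uintptr_t)lv_wire);
    return 0;
}
```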
ShaunR Posted January 6, 2022

19 minutes ago, Rolf Kalbermatter said: I would have to see the code in question. But generally I would think that you can call functions like LocalAlloc() with the return value configured as a pointer-sized integer, using a 64-bit integer to transport it on the diagram/front panel.

At the risk of derailing the thread: that's not really what I was talking about. It's that I *have* to use the local heap for the function to fill with data. I then used MoveBlock to copy the data from the local heap to a U8 array, and it was this copy where I was casting to a cluster. My surprise was that the function that populated the memory *required* memory created on the local heap, not just a pointer per se. Now that I know what's happening, I should be able to MoveBlock it directly into a cluster and avoid the cast completely.
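(For reference, copying the heap block straight into the cluster with MoveBlock amounts to a plain byte copy with no reordering; LvConfig below is a hypothetical stand-in for the cluster's C layout, not a definitive mapping.)

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical C layout of a LabVIEW cluster matching the start of
   QUERY_SERVICE_CONFIGA; uintptr_t stands in for the pointer-sized
   integer field chosen per bitness. */
typedef struct {
    uint32_t  serviceType;
    uint32_t  startType;
    uint32_t  errorControl;
    uintptr_t binaryPathName;  /* pointer field, native byte order */
} LvConfig;

/* MoveBlock in the LabVIEW runtime behaves like memcpy: bytes are
   copied verbatim, so pointer fields keep their native little-endian
   layout and no Typecast-style byte swapping ever happens. */
void copy_config(const void *heap_block, LvConfig *cluster)
{
    memcpy(cluster, heap_block, sizeof *cluster);
}
```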