
Network Streams Simplification Demos / 64 Bit Calling DLL Example



Hey all.  So I'm planning on giving a Network Streams presentation to our local user group and was working on some examples and code, and figured I could make it into a full blown package at some point to help simplify the Client/Host connection: automated retries, connection status, and, using VIMs, the ability to specify the Request and Response data as a type def.

The code still needs some work (it is missing documentation and icons) but is in a decent place as far as the examples go.  It requires LabVIEW 2020 or newer, and the last example requires both the 64-bit and 32-bit versions of LabVIEW.  The basic examples all seem to work; the one I'm most interested in feedback on is the 64-bit DLL calling code, but first the others.

Example 1 Client Host Together

This is the simplest demo, showing the Client and Host in a single VI.  All that is needed is to specify the Client and Host identifiers.  Here they are hard coded as "Client Loop" and "Host Loop".  With it a Host can send a request for some work and get a response.  The Client will sit in a loop and get a user event for the work, then send the response.

[Image: Demo1.png]
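If it helps to see the pattern in text form, here is a minimal sketch of that request/response flow in Python, with queues standing in for the "Host Loop" and "Client Loop" Network Stream endpoints. The actual demo is LabVIEW VIs and a user event, so this is only an analogy:

```python
import queue
import threading

request_q = queue.Queue()   # Host -> Client direction ("Host Loop" writer)
response_q = queue.Queue()  # Client -> Host direction ("Client Loop" writer)

def client_loop():
    # The Client sits in a loop waiting for work (the user event in the demo),
    # does the work, and sends the response back.
    while True:
        request = request_q.get()
        response_q.put(f"done: {request}")

threading.Thread(target=client_loop, daemon=True).start()

# Host side: send a request for some work and wait for the response.
request_q.put("do some work")
print(response_q.get())
```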

Example 2 Client Host Separated

The next progression is a Client and Host that are separated in different instances or contexts.  The code is practically the same but each loop now has its own project and can be built into separate EXEs.  Here the IP address is localhost so both Client and Host need to be on the same computer.

Example 3 Remote Client

Here is where it starts getting fun.  The Host is run on Windows (EXE or development), and the Client can be run either as a separate EXE or in the development environment.  This Client can be another Windows PC on the same network or an RT target that has Network Streams installed through MAX.  The example project has an RT target.  And the Host has an IP address control for finding the Client.

Example 4 64 Bit DLL Calling

And this brings me to the real fun.  A 64-bit executable can't call a 32-bit DLL directly.  It either needs some kind of wrapper or another translation layer.  Since Network Streams supports talking between different versions of LabVIEW, it also supports sending data to and from different bitnesses of LabVIEW.  So what you can do is open "Remote Client Test - Host Main.vi" under Example 4 in LabVIEW 64-bit.  Then run the EXE named "Remote Client.exe", which is a 32-bit EXE.  Instead of running this EXE you can also open and run "Remote Client Test - Client Main" in the 32-bit version of LabVIEW.

If both the Client and Host are running and both show a Connected status on the front panel, then you should be able to put a file in the Path control and click "Get MD5 Method 1".  This will send the path of the file from that control (just the path, not the actual data of the file) over a Network Stream to the Client, and tell it to run a series of Call Library Nodes to get the MD5 of the file.  You can also try clicking Method 2, which, instead of doing it in one call, reads the file in chunks on the 64-bit side, then sends that data over Network Streams to the 32-bit side, where the DLL calls are made.
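To make the two methods concrete, here is a rough Python sketch of what the 32-bit side does.  The opcode/length framing, the port number, and the use of hashlib standing in for the MD5 Call Library Nodes are all made up for illustration, and Method 2's chunked streaming is collapsed into a single payload for brevity:

```python
import hashlib
import socket
import struct

def recv_exact(conn, n):
    """Read exactly n bytes from the connection."""
    buf = b""
    while len(buf) < n:
        chunk = conn.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("peer closed early")
        buf += chunk
    return buf

def serve_one_request(port=6340):
    # 32-bit helper side: accept one framed request and answer with the MD5.
    with socket.create_server(("127.0.0.1", port)) as srv:
        conn, _ = srv.accept()
        with conn:
            op = recv_exact(conn, 1)                        # b"P" = path, b"D" = data
            length, = struct.unpack(">I", recv_exact(conn, 4))
            payload = recv_exact(conn, length)
            if op == b"P":   # Method 1: payload is the file path, read it here
                with open(payload.decode(), "rb") as f:
                    digest = hashlib.md5(f.read()).hexdigest()
            else:            # Method 2: payload is the file data sent by the host
                digest = hashlib.md5(payload).hexdigest()
            conn.sendall(digest.encode())
```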

I don't like that to make this work you basically need to duplicate your work.  On the 32-bit side you need to write the code that talks to the DLL, gets the request data from the Host and converts it to what you want, then takes the result and sends it back to the Host as the response.  Then on the Host side you need to write a subVI that takes inputs, flattens them down into the request it sends, then gets the response and unflattens it.  At the moment I used variant attributes because I wasn't being creative.
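As a text analogy of that Host-side subVI, the wrapper boils down to something like this (same made-up framing as the sketch above; the real code flattens LabVIEW type defs via variant attributes rather than raw bytes):

```python
import socket
import struct

def get_md5_method_1(file_path, host="127.0.0.1", port=6340, timeout=30.0):
    # Flatten the inputs into the request, send it, and wait for the response.
    payload = file_path.encode()
    with socket.create_connection((host, port), timeout=timeout) as conn:
        conn.sendall(b"P" + struct.pack(">I", len(payload)) + payload)
        return conn.recv(64).decode()   # hex MD5 string computed on the 32-bit side
```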

I'd love a more streamlined method that could work by specifying DLL inputs and data types.  Or just make it easier for developers, possibly with scripting?  XNodes replacing Call Library Nodes?  Or using them as a template?  Thoughts?

Another issue is needing to run a separate EXE.  I'd love to be able to embed the needed 32-bit EXEs in the 64-bit VIs, and then have a daemon for starting up and stopping that EXE.  This would mean the first call to it takes several extra seconds while the 32-bit EXE starts up in the run-time engine and gets ready to receive a Network Stream request.  In a real application this could easily be done on startup.
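The daemon idea would look roughly like this sketch: start the 32-bit EXE on the first call, poll until it is accepting connections, and tear it down on exit.  The port number and timeout here are placeholders, not something the demo defines:

```python
import atexit
import socket
import subprocess
import time

_helper = None

def ensure_helper_running(exe=r"Remote Client.exe", port=6340, timeout=15.0):
    """Start the 32-bit helper EXE if it isn't already up, and wait until it listens."""
    global _helper
    if _helper is not None and _helper.poll() is None:
        return                                   # already running
    _helper = subprocess.Popen([exe])
    atexit.register(_helper.terminate)           # stop the daemon when we exit
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            socket.create_connection(("127.0.0.1", port), timeout=1.0).close()
            return                               # helper is ready for requests
        except OSError:
            time.sleep(0.5)                      # run-time engine still starting up
    raise TimeoutError("32-bit helper never started listening")
```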

MD5 Details

No, you shouldn't use this method to actually get the MD5 of a file from a 64-bit version of LabVIEW.  There are better alternatives.  But what I wanted was an example of a 64-bit version of LabVIEW getting a 32-bit binary to do some work for it.  By the way, this code is lifted from a discussion here.

Actual Usage?

So as for the actual usage of a thing like this: I have a few pieces of hardware that talk over a 32-bit DLL, and I don't have a way to talk to them once I go to LabVIEW 64-bit.  They aren't critical hardware, and are old, so maybe I should just let it go.  But part of me wants to update these APIs so that all versions and types of LabVIEW can talk to them.  I also have a couple of customer DLLs that are 32-bit and are just black boxes that I need to use for security reasons.

Performance?

Untested, but probably not great depending on the amount of data that gets sent back and forth.

Alternatives?

If anyone knows of good examples of having a 64-bit version of LabVIEW call a 32-bit DLL I'd be interested.  Regardless, the Network Streams portion of this code will likely be published at some point after the user group presentation.  Thanks.

[Attachment: NetworkStreamsTrialand64DLLCalling.zip]


I don't really understand why you want a DLL at all :wacko: but calling a 32 bit DLL from 64 bit is called "thunking" and you really, really don't want to go there.

If it's just a case of choosing a 32-bit or 64-bit DLL depending on the LV bitness then the CLFN wildcards will do that for you (for different platforms too).

On 9/11/2021 at 12:23 AM, ShaunR said:

I don't really understand why you want a DLL at all :wacko: but calling a 32 bit DLL from 64 bit is called "thunking" and you really, really don't want to go there.

If it's just a case of choosing a 32-bit or 64-bit DLL depending on the LV bitness then the CLFN wildcards will do that for you (for different platforms too).

Sometimes you don't really have a choice. But I agree, if at all possible, don't try to do it! In my case it is usually about my own DLLs/shared libraries, so this particular problem doesn't really present itself for me. I just recompile the DLL/shared library in whatever bitness is needed.

Tidbit: While there is indeed thunking, and Windows internally uses it in the SysWOW64 layer that makes the 64-bit kernel API available to 32-bit applications, this mechanism was very carefully shielded by Microsoft to not be available to anything outside of the SysWOW64 layer, and therefore does not provide any thunking facilities for user code between 32-bit and 64-bit code. It generally also only works from 32-bit code calling into 64-bit code and not the opposite at all. I suppose Microsoft wanted to avoid a repeat of the situation when they went from the segmented 16-bit Windows memory model to the 32-bit flat memory model: they documented how the thunking could be done, and everybody started developing all kinds of mechanisms in weird to horrible assembly code to do just that. There was a lot of low-level assembly involved, it had many restrictions and difficulties, and once almost everybody had moved to 32-bit, really everybody tried to forget this episode as quickly as possible. So when going to the 64-bit model they carefully avoided this mistake and simply stated from the start that there was no in-process 32-bit to 64-bit translation layer at all (which is technically incorrect since SysWOW64 is just that, but you can't use its services from application code other than indirectly through calling the official Windows APIs).

The method used here, executing the different bitness code in a separate process and communicating with it through network communication (or possibly some other Inter-Process Communication method), is not really thunking but rather out-of-process invocation. There is no officially sanctioned way of thunking between 32-bit and 64-bit code, although I'm pretty sure that with enough determination, time and grey matter, people have developed their own thunking solutions in assembly. But it would require deep study of the Intel microcode documentation about how 32-bit and 64-bit code execution can interact, and it would probably result in individual assembly thunking wrappers for every single function that you want to call. Definitely not something most people could or would want to do. And to make matters worse, you would never be sure that there are not some CPU models that somehow do something just a little bit different from what you interpreted the specification to be and catastrophically fail on your assembly code thunk.

1 hour ago, Rolf Kalbermatter said:

The method used here, executing the different bitness code in a separate process and communicating with it through network communication (or possibly some other Inter-Process Communication method), is not really thunking but rather out-of-process invocation. There is no officially sanctioned way of thunking between 32-bit and 64-bit code, although I'm pretty sure that with enough determination, time and grey matter, people have developed their own thunking solutions in assembly. But it would require deep study of the Intel microcode documentation about how 32-bit and 64-bit code execution can interact, and it would probably result in individual assembly thunking wrappers for every single function that you want to call. Definitely not something most people could or would want to do. And to make matters worse, you would never be sure that there are not some CPU models that somehow do something just a little bit different from what you interpreted the specification to be and catastrophically fail on your assembly code thunk.

The usual solution for this kind of thing is RPC. I'm still struggling to understand the need for thunking. I might be missing something, but if you are calling a 32-bit DLL on a machine, I expect LabVIEW 32-bit is being used to do it. In the couple of years :ph34r: I've been doing LabVIEW, I've never needed to do this.

I wrote a network thingy a long time ago (Dispatcher) with similar characteristics. It was a publish/subscribe RPC but with an emphasis on data streaming. Servers would tell a broker what functions or channels they supported, and clients would connect or call the functions directly on the servers. I didn't use network streams but maybe they were not available then. It sounds like this is something similar.

Edited by ShaunR
14 minutes ago, ShaunR said:

The usual solution for this kind of thing is RPC. I'm still struggling to understand the need for thunking. I might be missing something, but if you are calling a 32-bit DLL on a machine, I expect LabVIEW 32-bit is being used to do it. In the couple of years :ph34r: I've been doing LabVIEW, I've never needed to do this.

I wrote a network thingy a long time ago (Dispatcher) with similar characteristics. It was a publish/subscribe RPC but with an emphasis on data streaming. Servers would tell a broker what functions or channels they supported, and clients would connect or call the functions directly on the servers. I didn't use network streams but maybe they were not available then. It sounds like this is something similar.

Sometimes you may be forced to develop in 64-bit (image acquisition, large data processing or similar requirements) but also need to interface to a driver whose manufacturer never made the move to 64-bit and possibly never will. The opposite may also be possible: you develop in 32-bit because the majority of your drivers are only available in 32-bit, but one specific driver is only available in 64-bit. If the device protocol is documented and goes over a standard bus like GPIB, serial or TCP/IP, I would always recommend implementing the driver for at least the oddball device in LabVIEW instead of trying to mix and match bitnesses.

If that is not an option, the only feasible solution is to create a separate executable and communicate with it through some IPC (RPC) mechanism.

Edited by Rolf Kalbermatter

I've done this kind of thing, with a 32-bit-only DLL needed from a 64-bit application.  Actually, it was two 64-bit Test Stations that both needed the same info from the 32-bit equipment, so having both be Clients of the 32-bit Server worked well.  I used the TCP capability of Messenger Library, which is very little effort.

One question: why Network Streams?  If you are wrapping things in your own API then why not a standard TCP connection?  What are Network Streams giving you?

Edited by drjdpowell
15 hours ago, drjdpowell said:

One question: why Network Streams?  If you are wrapping things in your own API then why not a standard TCP connection?  What are Network Streams giving you?

Because this whole endeavor started as a Network Streams user group presentation.  In making Network Streams examples I realized their use could be simplified.  Then while creating demos I thought it could be used for this.  The purpose of this post is really to discuss the Network Streams examples, one of which is this 64-bit/32-bit DLL calling business.  I just figured that was the topic most interesting to others.  It certainly could be just TCP stuff instead.

I didn't see any actual examples of 64-bit LabVIEW calling a 32-bit DLL.   But I saw on the forums lots of people asking for such a thing for a variety of reasons.  If anyone has any examples of this I missed please let me know.

So this might go into a larger topic.  But LabVIEW is sorta backwards when it comes to 32 versus 64 bit.  If I go to download Google Chrome, what version is recommended?  But when it comes to LabVIEW, 32-bit is still recommended.  However, I get the feeling NI is going to push 64-bit more, with the recent releases adding more 64-bit tools.  I'd like to make the plunge to 64-bit LabVIEW.  But for actual application stuff I occasionally use 32-bit DLLs, as mentioned in the post.  One case for me is a security DLL we are provided.  But lots of random hardware drivers are just DLL wrappers.  And if that manufacturer doesn't have 64-bit support or has gone away, what options are there in 64-bit LabVIEW?


I've never used Network Streams.  It appears to me to be an API that makes TCP communication easier, for a specific use case of a one-way stream.  But I've seen, on more than one occasion, people use it to build APIs for entirely different use cases, ones that would (in the end) be simpler and more performant to base on straight TCP.


Right, well, this set of code wraps Network Streams, adding synchronous two-way communication, and uses VIMs to allow the Request and Response data types to be typed and more easily used.  It also adds periodic reconnects and a connection status feature that I find helpful.  Maybe XNodes would be better but VIMs are just so much easier to make, especially for a first release.  It is true that Network Streams are themselves wrappers around TCP technology.  And for those that don't know the intricacies of TCP, this is a pretty easy way to set up two-way communication between two pieces of LabVIEW code running on the same machine, on a different machine, or on different targets, just by specifying a Host and Client identifier and specifying the Client IP address (if on a remote system).  I'm certain that any example I give for Network Streams, you could make better with TCP. (EDIT: This comment wasn't meant to be sarcastic in any way.  I haven't done any real TCP development and am sure you are more familiar with it than me.)
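To give a flavor of what the wrapper adds, the Client side behaves roughly like this reconnect loop (sketched with plain sockets and made-up names; the real code uses Network Stream endpoints, VIMs, and typed Request/Response data instead of raw bytes):

```python
import socket
import time

def client_loop(host, port, handle_request, report_status, retry_period=1.0):
    """Keep trying to connect, report Connected status, and answer requests."""
    while True:
        try:
            with socket.create_connection((host, port), timeout=retry_period) as conn:
                report_status(True)            # drives the "Connected" indicator
                while True:
                    request = conn.recv(65536)
                    if not request:
                        break                  # peer went away; drop to the retry path
                    conn.sendall(handle_request(request))
        except OSError:
            pass                               # connection refused or lost
        report_status(False)
        time.sleep(retry_period)               # periodic reconnect attempt
```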


Network Streams implement (and abstract) the concept of Quality of Service without the developer needing to add code for managing intermittent connections. A handshake ensures that all the data transmitted has reached the other endpoint in the order it was sent. Yes, it is probably just a buffer on the sender's side and some ACK replies from the receiver (and an incoming buffer), bundled into a seemingly unidirectional stream. For sure there's overhead compared to straight TCP, but I'm not sure the overhead is that large if you take into account the handshaking you would need to reproduce the same QoS feature.
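My rough mental model of that sender buffer plus ACK scheme, reduced to a few lines (this is only a guess at the behavior, not Network Streams' actual implementation):

```python
# Guess at the sender side: keep every element until the receiver acknowledges it,
# and replay anything unacknowledged after a reconnect, preserving order.
unacked = {}        # sequence number -> payload still awaiting an ACK
next_seq = 0

def send(payload, transmit):
    global next_seq
    unacked[next_seq] = payload          # buffered copy in case of a resend
    transmit(next_seq, payload)
    next_seq += 1

def on_ack(seq):
    unacked.pop(seq, None)               # receiver confirmed delivery

def on_reconnect(transmit):
    for seq in sorted(unacked):          # resend in order after the link comes back
        transmit(seq, unacked[seq])
```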

TCP is more flexible and interoperable.

If you need to add QoS to your app, and your app is all LabVIEW on both sides, then Network Streams make sense for reliable 1-to-1 high speed data transfers. If you send lots of small messages, an IoT protocol over TCP is probably more suited for the task (MQTT, etc.).

 

Edited by Francois Normandin

The problem I have with the Network Streams QoS argument is that TCP already does this, using ACKs and retries to ensure an ordered stream.  And if I really needed to be sure a message is handled, I need to verify it all the way through the application to the final result, not just through the Network Stream message delivery.  For example, if I need to send something to be saved to disk, I need QoS all the way to disk, and would have to implement a custom buffer on the send side, layered on top of Network Streams.

  • 3 months later...

Network streams give you a few things:

1) TCP's guaranteed delivery can be foiled by the OS. I know in some cases you can pull the cable and one side will not get an error, because it put the message in the OS buffer, but the other side will obviously fail to get the message. It's a little tricky at this point to figure out exactly which messages need to be retransmitted. I believe Network Streams can tolerate that disconnect.

2) Explicit buffer sizing at the application layer. TCP uses buffering at the OS layer, which is much harder to poke into. Of course, with Network Streams, memory use is really bad with variable-sized messages, so YMMV.

3) A flush method. This comes in handy if you want the host to do something smart when a message takes too long to get to the other side. This is also useful in cases where the host and client are developed by two different parties and the sender wants to prove their transmission. You can of course roll your own with TCP but that's one more thing TBD.

4) "Connected" property node. I'm guessing there's some sort of heartbeat underneath. You're totally allowed to ignore this

 

 


My point about Network Streams is not that they aren't a useful set of features for some use cases, but that building something different, with contrary features, on top of them usually makes no sense.  For example, the package hooovahh has posted does (if I read it right) use pinging to check messages are being received, and will close the Network Streams if it doesn't receive a response in about 1 second.  This destroys the buffers of any waiting messages and means there is no QoS delivery at all, making all that stuff useless overhead.  Not that it matters, because it is also a Request-Reply system where there can only be one active Request, and thus there is nothing to buffer.  This makes the "explicit buffer sizing" and "flush method" features entirely meaningless.  Network Streams are bringing nothing to the party here but overhead.

Edited by drjdpowell
5 hours ago, hooovahh said:

I'm just glad that I was able to make a synchronous network transport mechanism that uses VIMs, has status, automatic reconnection, and can target applications running on different platforms and different operating systems.  All of this with no networking experience, and the amount of effort needed to make this was pretty minimal. 

Yeah sorry I did not mean to derail things. That sounds really cool and the more VIMs we get into the community the better.

