Using two network cards with LabVIEW



I have a Windows XP PC with dual Gigabit Ethernet network cards in it. I would like to use one of them for reading TCP data from a remote (non-windows) computer, and one of them to write TCP data to the same remote machine.

Any pointers or caveats? Would there be a substantial performance gain in separating out the read and write tasks to different network cards?

Any other pointers on speeding up TCP reads? It currently takes about 50-100ms to read about 500k of data over Gig-E.

thanks,

Neville.


QUOTE(Neville D @ Nov 9 2007, 07:24 PM)

I have a Windows XP PC with dual Gigabit Ethernet network cards in it. I would like to use one of them for reading TCP data from a remote (non-windows) computer, and one of them to write TCP data to the same remote machine...

Hi,

I don't think Windows actually supports having two cards on the same subnet.

The problem, if I remember correctly, is that with two cards on the same subnet, Windows will still route all traffic through one of the cards, even if the other card was specified for the TCP traffic.

/J


QUOTE(Neville D @ Nov 9 2007, 01:24 PM)

I have a Windows XP PC with dual Gigabit Ethernet network cards in it. I would like to use one of them for reading TCP data from a remote (non-windows) computer, and one of them to write TCP data to the same remote machine.

Any pointers or caveats? Would there be a substantial performance gain in separating out the read and write tasks to different network cards?

Any other pointers on speeding up TCP reads? It currently takes about 50-100ms to read about 500k of data over Gig-E.

Windows can't do that. To be honest, I don't think any hardware except perhaps some very specialized dedicated high-speed routers would support it. The IP routing for such a system would get way too complicated, with packets ending up being echoed over and over again.

Also, you would have to have two network cards on both ends anyway, and in that case what prevents you from making them part of two different subnets and maintaining two separate connections, one on each subnet?

I also think you expect a bit too much of Gig-E. The theoretical bandwidth of 1 Gbit/s (about 125 MB per second) is never reached on an end-to-end connection, both due to TCP/IP overhead and limited throughput in the bus interface, and even more so in the TCP/IP stacks. They haven't really been designed for such high speeds and usually can't get anywhere near the theoretical limit. The way data is handled in the LabVIEW TCP/IP nodes also makes them slower than what you can get with more direct access to the socket library, but that access is also a lot tougher to manage than it is in LabVIEW.

Rolf Kalbermatter
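For reference, the "one connection per subnet" idea comes down to binding each socket to the local address of the card it should use before connecting. LabVIEW diagrams can't be shown inline, so here is a minimal Python sketch of the underlying socket call; the real NIC address is hypothetical, and loopback stands in for it so the snippet runs anywhere:

```python
import socket

# Hypothetical setup: the read link's card sits on, say, 192.168.1.x and
# the write link's card on 192.168.2.x.  Binding a socket to a card's
# local address before connecting forces the OS to route that
# connection through that card.  Loopback stands in here so the sketch
# is runnable without real hardware.
READ_NIC_IP = "127.0.0.1"  # in practice: the first card's address

def make_bound_socket(local_ip: str) -> socket.socket:
    """Create a TCP socket whose traffic leaves via local_ip."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.bind((local_ip, 0))  # port 0: let the OS pick an ephemeral port
    return s

sock = make_bound_socket(READ_NIC_IP)
print(sock.getsockname()[0])  # -> 127.0.0.1
```

One such socket per subnet, each connecting to the remote machine's address on that subnet, gives the two independent links Rolf describes.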


QUOTE(Neville D @ Nov 9 2007, 01:24 PM)

Any other pointers on speeding up TCP reads? It currently takes about 50-100ms to read about 500k of data over Gig-E.

Do you have the option of using UDP instead of TCP? UDP should be faster, and a dedicated link between the machines should prevent data loss and reordering.
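The speed difference comes from UDP skipping connection setup and acknowledgements entirely. A minimal Python sketch over loopback (standing in for the dedicated link; LabVIEW's UDP functions wrap the same socket calls):

```python
import socket

# Minimal UDP send/receive over loopback.  A direct NIC-to-NIC link
# behaves similarly: no routers, so loss and reordering are rare, but
# UDP itself still gives no delivery guarantee.
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.bind(("127.0.0.1", 0))  # port 0: OS picks a free port
rx.settimeout(2.0)         # don't hang forever if a datagram is lost
addr = rx.getsockname()

tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
tx.sendto(b"line-scan payload", addr)  # no connection setup, no ACKs

data, _ = rx.recvfrom(2048)
print(data)  # -> b'line-scan payload'
```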


QUOTE(neB @ Nov 12 2007, 04:45 PM)

Disabling Nagle is useful if you have small messages to send over TCP/IP and you want to keep their latency to a minimum.

But disabling Nagle in this case would probably reduce total throughput, since the transfer is quite large.

Just my 2c

/J
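For reference, "disabling Nagle" corresponds to setting the TCP_NODELAY socket option, which is presumably what the Nagle example mentioned in this thread toggles under the hood. A minimal Python sketch of the option itself:

```python
import socket

# Disabling Nagle (TCP_NODELAY) sends small writes immediately instead
# of coalescing them into fewer, larger segments: good for latency on
# small messages, often worse for total throughput on large bulk
# transfers like a 500 kB image.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)

# Read the option back to confirm it took effect.
nodelay = s.getsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY)
print(nodelay != 0)  # -> True
```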


You might want to consider tinkering under the hood a bit and modifying the Nagle example. By increasing the TCP socket's buffer size, you can let the OS buffer more data and prevent the transport protocol from blocking.

I did this to increase the default buffer size for UDP sockets (8192 bytes) and it worked out well.

For all available options for tuning sockets under Windows, Google "winsock2.h".
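The buffer-size tweak described above maps to the SO_RCVBUF socket option (SO_SNDBUF on the sending side). A minimal Python sketch; the 256 kB figure is illustrative, not a recommendation, and on Windows the same option is set through winsock:

```python
import socket

# Enlarging the kernel's socket receive buffer lets the OS queue more
# incoming data before the sender blocks (TCP) or packets get dropped
# (UDP).  The requested size here is purely illustrative.
REQUESTED = 256 * 1024

s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, REQUESTED)

# Read back the size actually granted: some kernels round it up or
# double it for bookkeeping, and all cap it at a configured maximum.
granted = s.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)
print(granted)
```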


QUOTE(ned @ Nov 12 2007, 05:16 AM)

Do you have the option of using UDP instead of TCP? UDP should be faster and the dedicated link between the machines should prevent data loss and mis-ordering.

Hi Ned,

Our system consists of a VME computer (non-LV) that transmits line-scan data to a Windows PC running LabVIEW and Vision. The image data is about 500k, and once processed by the LV PC, the returned results are about 1 kB or so.

We tried UDP, and it definitely is a LOT faster: a few ms to transfer the data. But on the Windows (receive) side, we seem to be missing packets in spite of having a parallel loop that simply reads the data and buffers it to an Img buffer.

To the others that have replied with helpful comments:

Ben, and JFM,

I am in the process of experimenting with Nagle, and it might help on the transmit side (Windows → VME), where the data is quite small and it takes about 10 ms just to open a socket. Disabling Nagle on the VME didn't make any difference.

I will try experimenting with different packet sizes on the VME side.

LVPunk,

I have also tried increasing UDP packet size to 4k and it seems to work OK.

I have just got a quad-core machine and have upgraded to Gig-E. Preliminary tests are encouraging: even with a dual core, Gig-E allows us to use UDP without missing packets.

Also, going from 100 Mbit/s Ethernet to Gig-E seems to offer roughly twice the performance.

Will post more info for reference, later on.

Many thanks to all of you!

Neville.


QUOTE(Neville D @ Nov 13 2007, 12:56 PM)

Hi Ned,

Our system consists of a VME computer (non-LV) that transmits line-scan data to a Windows PC running LabVIEW and Vision. The image data is about 500k, and once processed by the LV PC, the returned results are about 1 kB or so.

We tried UDP, and it definitely is a LOT faster: a few ms to transfer the data. But on the Windows (receive) side, we seem to be missing packets in spite of having a parallel loop that simply reads the data and buffers it to an Img buffer...

Thank you for the update. You may know this already, but a timed loop will let you specify which CPU the code runs on. The timed loop does not have to iterate; it can just hold the code whose CPU assignment you want to control (at least that is how it works on LV-RT). This may help "hedge your bets" in keeping up with the incoming UDP. But short of trying to implement TCP/IP over UDP, there will always be a potential for missed packets.

If you get to a point where it looks like even Gigabit Ethernet can't transport data fast enough, then you may want to look at SCRAMNet by Curtiss-Wright. It screams (both in performance and cost, ouch).

Ben

