
Multiple 1GbE Channel Ethernet Data Streaming



All,

I am trying to concoct a way to utilize multiple 1GbE ports for streaming large amounts of data to a server computer. Let's say I have three 1GbE point-to-point links to the server machine (which can dump the data to a RAM disk for fast writes), so I will be link limited. Is this at all possible? Does anyone have hints for this implementation? In the end it is a file that needs to be moved from the client to a server. Will this parallel multi-1GbE implementation give me increased data throughput?

Example:

The client's Ethernet ports, at 192.168.0.4, 192.168.0.5, and 192.168.0.6,

will be directly linked to 192.168.0.1, 192.168.0.2, and 192.168.0.3, i.e. .4 talks to .1 only. I guess in the end one has to run these as separate processes in such a way that the file gets assembled correctly on the server side? Is there any way to do this dynamically, for a varying number of 1GbE ports?

Any suggestions are appreciated. Thanks,

Peter



I believe IPv6 has some inherent capability for such load balancing (it is somehow done by assigning addresses from a specifically reserved address range to the interfaces, although I'm not sure how that would have to be set up), but with IPv4 this is a lot of hassle. You basically have to program those three links as independent communication channels and do the load balancing yourself somehow: three listen sockets, each bound to the correct interface card address on one side and acting as an individual server, and three client connections. The client has to load the data into memory, split it up, and add some sequencing information in a header, and the server has to accept the data and resequence the packages into a continuous stream. A bit like reimplementing TCP/IP on top of TCP/IP.
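A minimal sketch of that scheme in Python, using the point-to-point addresses from the example above (the port number, chunk size, and framing are assumptions of mine; real code would need error handling, and should write chunks out in order as they arrive rather than buffering the whole file in memory):

import socket
import struct
import threading

HEADER = struct.Struct('>QI')    # sequence number (uint64), payload length (uint32)
CHUNK = 1 << 20                  # 1 MiB per chunk (assumed)
PORT = 5000                      # assumed port, same on every link

def send_file(path, server_addrs):
    # Stripe the file round-robin across one TCP connection per link.
    socks = [socket.create_connection((ip, PORT)) for ip in server_addrs]
    seq = 0
    with open(path, 'rb') as f:
        while True:
            data = f.read(CHUNK)
            if not data:
                break
            sock = socks[seq % len(socks)]        # round-robin striping
            sock.sendall(HEADER.pack(seq, len(data)))
            sock.sendall(data)
            seq += 1
    for sock in socks:
        sock.close()

def recv_exact(sock, n):
    # Read exactly n bytes, or raise EOFError when the peer closes.
    buf = b''
    while len(buf) < n:
        part = sock.recv(n - len(buf))
        if not part:
            raise EOFError
        buf += part
    return buf

def receive_file(path, local_addrs):
    # Accept one connection per interface address and resequence the chunks.
    chunks = {}
    lock = threading.Lock()

    def reader(ip):
        srv = socket.socket()
        srv.bind((ip, PORT))          # bind to this interface's address only
        srv.listen(1)
        conn, _ = srv.accept()
        try:
            while True:
                seq, length = HEADER.unpack(recv_exact(conn, HEADER.size))
                chunk = recv_exact(conn, length)
                with lock:
                    chunks[seq] = chunk
        except EOFError:
            conn.close()

    threads = [threading.Thread(target=reader, args=(ip,)) for ip in local_addrs]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    with open(path, 'wb') as f:       # reassemble in sequence order
        for seq in sorted(chunks):
            f.write(chunks[seq])

The server would run receive_file('copy.bin', ['192.168.0.1', '192.168.0.2', '192.168.0.3']) while the client runs send_file with the same address list, which also covers the question about a dynamic number of ports: the link count is just the length of that list.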

Rolf Kalbermatter



Most modern adapters allow teaming across multiple ports (so you could team your server's adapters):

http://www.intel.com...b/cs-009747.htm

The old way (under Windows) was to create bridged connections. Well, I say old way; you still can, but it's more CPU intensive than letting the adapters handle the throughput themselves. There is an overhead involved, but it is simple to implement and scalable.

Sounds to me like you want 10GbE :)


All,

Good suggestions!

I have tried teaming the adapters already. With each Ethernet card connected point-to-point to an individual server Ethernet card, it does not give me any throughput improvement. I am not sure whether I need a switch that supports link aggregation to see any bandwidth in excess of ~115MB/s. I recently ran the test with the Win32 File I/O read/write benchmarks (downloaded from NI): I mapped three network drives over the three point-to-point Ethernet connections using the map option under Windows, and wrote three independent files to the server RAM disk at the same time (I will try stitching them into a single file later, once I get this bandwidth issue sorted out). I have not gained any bandwidth; it is still ~115MB/s. I am not sure whether the client's three Ethernet cards are capped in terms of bus bandwidth. If I run the same benchmark using just one of the 1GbE links (not three at the same time), I also get 115MB/s.
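For reference, the parallel-write part of that test can be reproduced with a short script. A rough sketch in Python (the drive letters, block size, and file size below are placeholders for whatever the mapped drives and RAM disk allow):

import threading
import time

PATHS = [r'X:\test1.bin', r'Y:\test2.bin', r'Z:\test3.bin']  # three mapped drives
BLOCK = 4 * 1024 * 1024          # write in 4 MiB blocks
TOTAL = 1024 * 1024 * 1024       # 1 GiB per file

def writer(path):
    data = b'\0' * BLOCK
    with open(path, 'wb') as f:
        for _ in range(TOTAL // BLOCK):
            f.write(data)

threads = [threading.Thread(target=writer, args=(p,)) for p in PATHS]
start = time.time()
for t in threads:
    t.start()
for t in threads:
    t.join()
elapsed = time.time() - start
print('aggregate: %.1f MB/s' % (len(PATHS) * TOTAL / elapsed / 1e6))

If the aggregate stays pinned at ~115MB/s no matter how many drives are written at once, a shared bottleneck behind the links is the likely cause.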

I do use Intel adapters. Yes, I would love to go to 10GbE; the problem is that the client platform does not allow for that upgrade so far.

Any suggestions are appreciated.

Peter



Aha.

Sounds like you've reached the PCI bandwidth limitation, which is about 133MB/s max and about 110MB/s sustained. Not a lot you can do about that except change the motherboard for one that supports PCIe.

PCI = 133MB/s Max.

PCIe x1 = 250MB/s Max.

PCIe x4 = 1GB/s Max.

PCIe x8 = 2GB/s Max.

PCIe x16 = 4GB/s Max.
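A quick back-of-the-envelope check (my own figures, in MB/s) shows why three links ending up at single-link speed points at the bus:

# One 1GbE link is 1000/8 = 125 MB/s on the wire, roughly 115 MB/s after
# TCP/IP overhead -- exactly the figure reported. Three links would need
# ~345 MB/s from the bus, far beyond plain PCI's 133 MB/s shared peak.
per_link = 115                        # MB/s usable per 1GbE link
pci_peak = 133                        # MB/s, plain PCI, shared by all slots
print(min(3 * per_link, pci_peak))    # -> 133: the bus, not the links, caps it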


Guys,

I have checked, and the three Ethernet ports sit on the PCIe x1 bus, which should give me 250MB/s. I have never played with jumbo frames. Is that essentially a per-adapter setting?

I think it's time to ping the mobo manufacturer.

Peter

Have you considered using a RAID drive and forgetting about the server?

SCRAMNet from Curtiss-Wright is another data path that allows for fast data transfers. Yes, you have to write your own file transfer code to run on both nodes....

Ben



What's the model number? Some older PC motherboards used a PCIe-to-PCI bridge, effectively giving you the slots but not the bandwidth. The figure you are seeing reeks of PCI.

PCIe x1 cards are relatively rare in comparison to x8 and x16. I'm surprised you have them!



I have actually tested going through RAID on this. I am trying to kill off the latency involved with the client write and server read; I just want to dump directly to the server's memory. The data throughput on the SCRAMNet barely exceeds 200MB/s, and I was hoping for something like 800MB/s ;-)

Peter


The diagram of the mobo bus system can be seen here: http://www.gocct.com/sheets/diagram/pp41x03x.htm and the descriptions at http://www.gocct.com/sheets/pp41003x.htm

It clearly shows a PCI Express x1 bus for the three 1GbE ports. If you have any thoughts, let me know.

Peter



Indeed. But the motherboard is CompactPCI!

You will note all the conversion chips and all the buses showing PCI 33/66. In fact, only Ethernet-to-Ethernet goes through unadulterated. However, you are going through the 6300ESB (a PCI-X-to-PCI controller) to get to the disk.

Also, the SATA disk interface is SATA 150 (about 150MB/s) or IDE (100-133MB/s). Lots of things there to reduce your throughput.
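Putting rough numbers on that chain (my own summary figures, in MB/s): end-to-end throughput is the minimum of every stage the data crosses, so on this board the bridges and disk path, not the NICs, set the ceiling.

# Approximate peak figures from the discussion above, in MB/s.
stages = {
    'three 1GbE links (aggregate)': 345,
    'PCI 33/66 bridge': 133,
    'SATA 150 disk': 150,
}
# The slowest stage on the path sets the end-to-end ceiling.
print('ceiling: %d MB/s' % min(stages.values()))   # -> 133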
