kondratko Posted September 25, 2009

All, I am trying to concoct a way to utilize multiple 1GbE ports for streaming a large amount of data to a server computer. Let's say I have three 1GbE point-to-point links to the server machine (and can dump the data to a RAM disk for fast writes), so I will be link limited. Is this at all possible? Does anyone have hints for this implementation? In the end it is a file that needs to be moved from the client to a server. Will this parallel multi-1GbE implementation give me increased data throughput?

Example: the client Ethernet ports at 192.168.0.4, 192.168.0.5 and 192.168.0.6 will be directly linked to 192.168.0.1, 192.168.0.2 and 192.168.0.3, i.e. .4 talks to .1 only. I guess in the end one has to run these as separate processes in such a way that the file gets assembled correctly on the server side? Is there any way to do this dynamically for a variable number of 1GbE ports?

Any suggestions are appreciated. Thanks

Peter
Rolf Kalbermatter Posted September 26, 2009

I believe IPv6 has some inherent capability for such load balancing (it is somehow done by assigning addresses from a specifically reserved address range to the interfaces, although I'm not sure how that would have to be set up), but with IPv4 this is a lot of hassle. You basically have to program those three links as independent communication channels and do the load balancing yourself somehow. So three listen sockets bound to the correct interface card addresses on one side, acting as individual servers, and three client connections. The client will have to load the data into memory, split it up and add some sequencing information in a header, and the server will have to accept those data and resequence the packets into a continuous stream. A bit like reimplementing TCP/IP on top of TCP/IP.

Rolf Kalbermatter
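To make that concrete, here is a minimal sketch of the scheme in (modern) Python; the same structure maps onto LabVIEW as parallel loops around TCP Open Connection/TCP Write on the client and TCP Listen/TCP Read on the server. The addresses match the example above, while the port number, chunk size and function names are placeholders, MSG_WAITALL is assumed to be available, and error handling and an end-of-stream marker are left out.

import socket, struct, threading

# (client NIC, server NIC) pairs from the example above (placeholders).
LINKS = [("192.168.0.4", "192.168.0.1"),
         ("192.168.0.5", "192.168.0.2"),
         ("192.168.0.6", "192.168.0.3")]
PORT = 5000                      # arbitrary
CHUNK = 1 << 20                  # 1 MiB per chunk, arbitrary
HEADER = struct.Struct("!QI")    # sequence number (u64), payload length (u32)

def send_file(path):
    # One connection per NIC pair; binding the source address forces .4 -> .1 etc.
    socks = [socket.create_connection((srv, PORT), source_address=(cli, 0))
             for cli, srv in LINKS]
    seq = 0
    with open(path, "rb") as f:
        while True:
            data = f.read(CHUNK)
            if not data:
                break
            # Round-robin the chunks over the links, each prefixed with a header.
            socks[seq % len(socks)].sendall(HEADER.pack(seq, len(data)) + data)
            seq += 1
    for s in socks:
        s.close()

def _receive_link(server_nic, chunks, lock):
    # One listener bound to each server NIC address, acting as an individual server.
    with socket.create_server((server_nic, PORT)) as listener:
        conn, _ = listener.accept()
        with conn:
            while True:
                hdr = conn.recv(HEADER.size, socket.MSG_WAITALL)
                if len(hdr) < HEADER.size:
                    break                      # peer closed the connection
                seq, length = HEADER.unpack(hdr)
                payload = conn.recv(length, socket.MSG_WAITALL)
                with lock:
                    chunks[seq] = payload      # resequence by chunk number

def receive_file(path):
    chunks, lock = {}, threading.Lock()
    threads = [threading.Thread(target=_receive_link, args=(srv, chunks, lock))
               for _, srv in LINKS]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    with open(path, "wb") as f:                # stitch the chunks back into one file
        for seq in sorted(chunks):
            f.write(chunks[seq])

Holding every chunk in memory before writing is acceptable for a RAM-disk target; a fuller implementation would write each chunk at its file offset as it arrives and add retry and teardown logic.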
ShaunR Posted September 26, 2009

Most modern adapters allow teaming of multiple ports (so you could team your server's adapters): http://www.intel.com...b/cs-009747.htm

The old way (under Windows) used to be creating bridged connections. Well, I say old way, you still can, but it's more CPU intensive than letting the adapters handle the throughput. There is an overhead involved, but it is simple to implement and scalable.

Sounds to me like you want 10GbE.
Michael Aivaliotis Posted September 27, 2009

Whatever you do, don't waste your time with unknown Ethernet adapters. Use Intel all the way.
kondratko Posted September 27, 2009

All, good suggestions! I have tried the teaming of adapters already. It seems that with each Ethernet card linked point-to-point to an individual server Ethernet card it does not give me any throughput improvement. I am not sure if I require a switch that supports aggregation to see any bandwidth in excess of ~115 MB/s.

I recently tried the test with the Win32 File I/O read/write benchmarks (downloaded from NI). I mapped three network drives over the three point-to-point Ethernet connections using the map-network-drive option under Windows. Over the three drives I was writing three independent files to the server RAM disk at the same time (I will try stitching into a single file later, once I get this bandwidth issue sorted out). It seems that I have not gained any bandwidth increase; it is still ~115 MB/s. I am not sure if the client's three Ethernet cards are capped in terms of bus bandwidth. If I run the same benchmark using just one of the 1GbE links (not three at the same time), I also get 115 MB/s.

I do use Intel adapters. Yes, I would love to go to 10GbE; the problem is that the client platform so far does not allow for that upgrade.

Any suggestions are greatly appreciated.

Peter
ShaunR Posted September 27, 2009

Aha. Sounds like you've reached the PCI bandwidth limitation, which is about 133 MB/s max and about 110 MB/s sustained. Not a lot you can do about that except change the motherboard for one that supports PCIe.

PCI = 133 MB/s max
PCIe x1 = 250 MB/s max
PCIe x4 = 1 GB/s max
PCIe x8 = 2 GB/s max
PCIe x16 = 4 GB/s max
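As a side note on the numbers: ~115 MB/s is also roughly what a single saturated 1GbE link delivers once protocol overhead is subtracted, and it sits just under the 32-bit/33 MHz PCI ceiling, so the benchmark figure alone cannot tell the two limits apart. A rough back-of-the-envelope sketch, assuming a standard 1500-byte MTU and TCP over IPv4:

# Theoretical ceilings only; real sustained figures come in a little lower.
GBE_LINE_RATE = 1_000_000_000       # 1 Gb/s signalling rate

# Per-frame cost on the wire: preamble+SFD (8) + Ethernet header (14) + FCS (4)
# + inter-frame gap (12) = 38 bytes, plus 20 bytes IP and 20 bytes TCP inside.
MTU, WIRE_OVERHEAD, IP_TCP_HEADERS = 1500, 38, 40
payload_per_frame = MTU - IP_TCP_HEADERS            # 1460 bytes of TCP payload
frame_on_wire = MTU + WIRE_OVERHEAD                 # 1538 bytes occupy the link

gbe_payload = GBE_LINE_RATE / 8 * payload_per_frame / frame_on_wire / 1e6
print(f"1 GbE TCP payload ceiling: {gbe_payload:.0f} MB/s")     # ~119 MB/s

pci = 33e6 * 4 / 1e6                                # 33 MHz x 4 bytes per transfer
print(f"32-bit/33 MHz PCI ceiling: {pci:.0f} MB/s")             # ~132 MB/s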
Michael Aivaliotis Posted September 28, 2009

Have you tried jumbo frames?
kondratko Posted September 28, 2009

Guys, I have checked, and the three Ethernet ports are sitting on PCIe x1, which should give me 250 MB/s. I have never played with jumbo frames. Is that essentially a setting on each Ethernet adapter? I think it's time to ping the mobo manufacturer.

Peter
Grampa_of_Oliva_n_Eden Posted September 28, 2009

Have you considered using a RAID drive and forgetting about the server? SCRAMNet from Curtiss-Wright is another data path that allows for fast data transfers. Yes, you have to write your own file transfer code to run on both nodes....

Ben
Mark Yedinak Posted September 28, 2009

You have to be careful if you play with jumbo frames. All of the equipment between your source and destination must support jumbo frames. If some of the equipment doesn't, you can run into problems with dropped data on the network, and this can affect traffic besides your own.
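One rough way to sanity-check a jumbo-frame path before relying on it is to force the don't-fragment bit and see whether a jumbo-sized datagram actually gets out. A Linux-only sketch (the IP_MTU_DISCOVER constants are Linux-specific, and the size and port below are arbitrary placeholders):

import socket

def path_takes_jumbo(dest_ip, size=8000, port=9):
    # With path-MTU discovery forced on, a datagram larger than the path MTU
    # fails immediately with "Message too long" instead of being fragmented.
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.setsockopt(socket.IPPROTO_IP, socket.IP_MTU_DISCOVER, socket.IP_PMTUDISC_DO)
    try:
        s.sendto(b"\x00" * size, (dest_ip, port))
        return True                     # left the host without fragmentation
    except OSError:
        return False                    # known path MTU is smaller than 'size'
    finally:
        s.close()

This only proves the sending host's idea of the path MTU; on a switched path you would still want to confirm the far end actually receives the frames, e.g. with a large ping sent with the don't-fragment flag set.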
ShaunR Posted September 28, 2009

What's the model number? Some older PC motherboards used a PCIe-to-PCI bridge, effectively giving you the slots but not the bandwidth. The figure you are seeing reeks of PCI. PCIe x1 cards are relatively rare in comparison to x8 and x16; I'm surprised you have them!
kondratko Posted September 29, 2009

I have actually tested going through RAID on this. I am trying to kill off the latency involved with the client write and server read; I just want to dump directly into the server's memory. The data throughput on the SCRAMNet only just exceeds 200 MB/s, and I was hoping for something like 800 MB/s ;-)

The diagram of the mobo bus system can be seen here: http://www.gocct.com/sheets/diagram/pp41x03x.htm and the description is at http://www.gocct.com/sheets/pp41003x.htm. It clearly shows a PCI Express x1 bus for the three 1GbE ports. If you have any thoughts, let me know.

Peter
ShaunR Posted September 30, 2009

Indeed. But the motherboard is a CompactPCI board! You will note all the conversion chips and all the buses showing PCI 33/66. In fact only Ethernet-to-Ethernet goes through unadulterated. However, you are going through the 6300ESB (PCI-X to PCI controller) to get to the disk. Also, the SATA disk interface is SATA 150 (which equates to about 180 MB/s) or IDE (100-133 MB/s). Lots of things there to reduce your throughput.