action Posted November 15, 2004 I have built a LabVIEW application that gathers data and sends it over a connection if the link is up; otherwise it buffers the data until the link comes back. When the connection is lost and then comes back online, I miss some packets. As soon as I receive an error out from the TCP/IP VI, I stop sending any new packets. Does anyone know why I lose packets? Since I am using the TCP protocol, I expected that a packet that was sent but not delivered would be resent without me doing this in my application. The number of packets lost seems to depend on how many packets I send per second: the higher the speed, the more packets are lost.
Jim Kring Posted November 16, 2004 One would expect that the packet which generated the error would be lost -- that's what the error is telling you. Are you losing more packets which were sent without generating an error, prior to the disconnect?
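Jim's question gets at a common TCP misconception, which can be demonstrated outside LabVIEW: a successful send only means the OS accepted the bytes into its kernel send buffer, not that the peer received them. If the connection dies before the buffer drains, those "sent" bytes are silently lost and no error is reported for them. A minimal Python sketch of this (a local socket pair stands in for the TCP link):

```python
import socket

# A connected local socket pair, standing in for the TCP connection.
a, b = socket.socketpair()

payload = b"x" * 4096
sent = a.send(payload)   # returns as soon as the kernel buffers the bytes
b.close()                # the peer goes away before reading anything

# send() reported success for all 4096 bytes, yet none were delivered.
print(sent)              # 4096
```

This is why packets sent shortly before the disconnect can vanish without ever producing an error out on the sender's side.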
Mike Ashe Posted November 17, 2004 Are you returning a "packet XXX received" notification? It seems that adding a little bit of info to the buffer you build up (when the connection is down) might work around your problem. I agree that it would be nice to know why you are losing the packets, but making your recovery robust seems more important. The reason I say robust recovery is more important is that I have seen multiple ways that LabVIEW's TCP/IP comms have problems over the years, from spurious timeout errors (you set the timeout to 2 seconds and the read VI returns in 50 ms with a timeout error) to buffer overflow errors when you try to transmit too fast. I have run into so many types of errors that I assume there will be multiple error modes with TCP/IP and implement accordingly. Best of luck!
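Mike's "packet XXX received" notification amounts to an application-level acknowledgment protocol: keep each packet in the pending buffer until the receiver confirms it, and resend anything unconfirmed after a reconnect. A minimal Python sketch of the idea (the line-based framing, the "ACK seq" reply format, and the port number are all made up for illustration, not anything LabVIEW provides):

```python
import socket
import threading

def ack_server(port, received, ready):
    # Receiver: reads "<seq> <payload>" lines and answers "ACK <seq>".
    srv = socket.socket()
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", port))
    srv.listen(1)
    ready.set()
    conn, _ = srv.accept()
    f = conn.makefile("rw")
    for line in f:
        seq, _, payload = line.rstrip("\n").partition(" ")
        received.append(payload)
        f.write(f"ACK {seq}\n")       # confirm each sequence number
        f.flush()
    conn.close()
    srv.close()

def send_with_acks(port, messages):
    # Sender: a packet leaves `pending` only when its ACK arrives, so
    # anything lost during a disconnect is still buffered for resending.
    pending = dict(enumerate(messages))
    sock = socket.create_connection(("127.0.0.1", port))
    f = sock.makefile("rw")
    while pending:
        batch = sorted(pending)
        for seq in batch:             # (re)send everything unacknowledged
            f.write(f"{seq} {pending[seq]}\n")
        f.flush()
        for _ in batch:               # collect one "ACK <seq>" per packet
            _, seq = f.readline().split()
            pending.pop(int(seq), None)
    sock.close()

received, ready = [], threading.Event()
t = threading.Thread(target=ack_server, args=(5005, received, ready))
t.start()
ready.wait()                          # wait until the server is listening
send_with_acks(5005, ["alpha", "beta", "gamma"])
t.join()
print(received)                       # ['alpha', 'beta', 'gamma']
```

The same structure maps onto LabVIEW: the pending buffer is the queue you already build while the link is down, and the ACK read is one extra TCP Read after each write.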