
intermittent TCP error 63


Recommended Posts

I have a LabVIEW 7.0 TCP server exe that monitors a device and responds to TCP messages from remote clients, one client at a time. There's been one report of a LabVIEW 7.1 TCP client intermittently getting an error 63 from the server after several hours; the client sends a status request every 4 seconds.

I am familiar with error 63 and its cause. But in this case, which component in the communication chain is causing the error? The way I understand it, the LabVIEW TCP session looks like: LabVIEW TCP (server exe) <-> Windows IP stack <-> network <-> remote Windows IP stack <-> LabVIEW TCP (client). If the client sends a packet and gets an error 63 response, is that caused by Windows, the LabVIEW application, network conditions, or any of them? (I think the LabVIEW client code should be designed to trap the error 63 and re-transmit the request packet.)
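
In LabVIEW, error 63 means the connection was refused by the server. Since a LabVIEW block diagram can't be pasted here as text, below is a rough Python sketch of the "trap the error and re-transmit" idea; the address, port number, and request message are made-up placeholders.

import socket
import time

HOST, PORT = "192.168.0.10", 6340    # placeholder address of the server exe
REQUEST = b"STATUS?\r\n"             # placeholder status-request message

def send_status_request(retries=3, delay=1.0):
    # Send one request; if the connection is refused (the rough
    # equivalent of LabVIEW error 63), wait briefly and try again.
    for attempt in range(retries):
        try:
            with socket.create_connection((HOST, PORT), timeout=5) as sock:
                sock.sendall(REQUEST)
                return sock.recv(4096)        # server's status reply
        except ConnectionRefusedError:
            if attempt == retries - 1:
                raise                         # give up after the last retry
            time.sleep(delay)                 # back off before re-transmitting

# called by the client loop every 4 seconds
reply = send_status_request()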


Does your server handle multiple incoming connection requests simultaneously? If not, a request may not make it into the listen queue before the timeout occurs. Or one of the sockets, on either the client or the server side, is not disposing of closed IP ports properly, so the socket library runs out of port numbers after a while. A 4-second interval results in about 1000 connections per hour, and therefore 1000 used-up port numbers, so this could be a possible explanation. You could try to log the port number used by your client (and server); if it keeps increasing instead of reusing the same few port numbers, then you may have a problem with TCP/IP ports not being closed properly. Could it be that your TCP Close function gets wired with an error cluster that already indicates an error, which might prevent the close from completely closing the port? I know Close functions are supposed to close independently of the error status, but some functions in the past didn't always do that.
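
LabVIEW diagrams don't translate to text, so here is a rough Python sketch of the kind of check Rolf suggests: log the client's local (ephemeral) port on every connection and make sure the close always runs, even when an earlier call returned an error. The server address is a placeholder.

import socket

HOST, PORT = "192.168.0.10", 6340    # placeholder server address

def one_transaction(request):
    sock = socket.create_connection((HOST, PORT), timeout=5)
    # If this number keeps climbing and old ports never get reused,
    # closed ports are probably not being released properly.
    print("local port in use:", sock.getsockname()[1])
    try:
        sock.sendall(request)
        return sock.recv(4096)
    finally:
        sock.close()    # always close, regardless of any upstream error,
                        # just as TCP Close should run even when its error
                        # input already contains an error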

Another issue I sometimes had, with similar symptoms, was when using DHCP. There, I have to say, what I actually did was keep connections open for a longer period of time, and if the IP address of one of the sides changed during that time, the connection just went into nirvana. Writing to it neither produced an error nor did the data arrive at the other side, which also still believed it had an open connection. Closing and reopening connections solved that problem more or less, but changing to static IPs was what made it really reliable.
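
Rolf's fix was short-lived connections and static IPs. One alternative he doesn't mention, for spotting such "nirvana" connections on a long-lived link, is TCP keepalive, which makes the OS probe an idle connection and report an error once the peer stops answering. A minimal Python sketch, with made-up probe timings:

import socket

sock = socket.create_connection(("192.168.0.10", 6340), timeout=5)   # placeholder address

# Ask the OS to probe the idle connection so a peer that silently vanished
# (e.g. after a DHCP address change) eventually raises an error on the next
# read or write instead of hanging forever.
sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)

# On Windows the probe timing can be tuned per socket:
# (enable, idle time before the first probe in ms, interval between probes in ms)
if hasattr(socket, "SIO_KEEPALIVE_VALS"):
    sock.ioctl(socket.SIO_KEEPALIVE_VALS, (1, 10000, 3000))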

Rolf Kalbermatter
