Showing results for tags 'tcprt'.

Found 1 result

  1. Some of my code is giving me behavior I don't understand. I've been talking with NI tech support, but I'm trying to get a better grasp of what's going on ahead of a project down the road that is going to tax what TCP can transmit.

     I have a PC directly connected to a cRIO-9075 with a cable (no switch involved). I've put together a little test application that, on the RT side, creates an array of 6,000 U32 integers, waits for a connection, and then transmits the length of the array (in bytes) followed by the array itself over TCP every 100 msec. The length and data go out as a single TCP write. On the PC side, I have a TCP open, a read of four bytes to get the length of the data, then a read of the data itself. The second TCP read does not occur if the first TCP read returns an error. Both reads have a 100 msec timeout.

     The error I'm getting is a sporadic timeout (error 56) at the second TCP read on the PC side. When it happens, the next read of the data length pulls from the middle of my data, so everything from there on out is invalid. The error occurs anywhere from seconds to hours after the start of transmission.

     As a sanity check, I did some math on how long it should take to transmit the data. Ignoring the overhead of TCP communication, 6,000 U32s is 24,000 bytes, which at 100 Mbit/s works out to ~2 msec on the wire.

     A workaround seems to be an infinite timeout (value of -1) for the second TCP read, but I'm rather leery of having an infinite (or very long) timeout there. Tech support was able to get this working with 250 msec on the second read. Test VIs uploaded... Test Stream Data.zip
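     For readers without LabVIEW handy, here is a minimal Python sketch of the same length-prefixed framing on the PC side. The address and port are hypothetical, and it assumes the RT side writes the length as a big-endian U32 (LabVIEW's default byte order when flattening data). The essential piece is a read loop that does not give up until the full byte count has arrived:

        import socket
        import struct

        def recv_exact(sock, nbytes):
            # Loop until exactly nbytes have arrived; a short recv() is
            # normal for TCP, which is a byte stream with no message
            # boundaries of its own.
            buf = bytearray()
            while len(buf) < nbytes:
                chunk = sock.recv(nbytes - len(buf))
                if not chunk:
                    raise ConnectionError("peer closed mid-message")
                buf.extend(chunk)
            return bytes(buf)

        # Hypothetical address/port for the cRIO on the direct cable.
        with socket.create_connection(("192.168.1.10", 6340)) as sock:
            sock.settimeout(5.0)  # generous per-recv timeout
            while True:
                # 4-byte big-endian length header, then the payload.
                (length,) = struct.unpack(">I", recv_exact(sock, 4))
                payload = recv_exact(sock, length)  # 6,000 U32s -> 24,000 bytes
                values = struct.unpack(">%dI" % (length // 4), payload)

     The point of recv_exact is that TCP delivers bytes, not messages: a 100 msec timeout that fires partway through a payload leaves the tail of that payload in the receive buffer, and the next four-byte length read then lands inside the data, which matches the misalignment described in the post. Either looping until the full count arrives or choosing a timeout comfortably above the worst-case delivery time (as the working 250 msec value suggests) keeps the header and payload reads in sync.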