
Thanks for the quick replies.

I didn't have time to get back to my problem until now, but it still doesn't really work. Even if I don't use any additional waits in the client VI, the data isn't transferred faster. And I thought I would have to use the waits on the server side to time the creation of data points.

I also thought about the timeouts inside the network queues and had a close look at the code again. How can I actually use them for my timing? I thought their purpose was to check whether data is available within the specified timeout and just return an error if not.

Thanks for any help!!


QUOTE (Sonny @ Jun 4 2009, 08:32 AM)


Again, for higher transfer throughput you will have to transfer larger sets of data. Instead of having a network queue of scalar doubles, have a network queue of an array of doubles. Transfer 100 or 1000 points at a time. This will drastically increase performance. It's not really the data transfer that is slowing things down, it's the overhead of communicating with the target, waiting for a response, etc.
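A quick back-of-the-envelope sketch of why batching helps (Python here as a stand-in, since LabVIEW diagrams can't be shown inline; the 4-byte header is an assumed framing overhead, not the actual protocol's):

```python
import struct

HEADER = 4  # assumed per-message framing overhead (length prefix); illustrative only

def bytes_on_wire_scalar(n_points):
    # One message (and one request/response round trip) per 8-byte double.
    return n_points * (HEADER + 8)

def bytes_on_wire_batched(n_points, batch=1000):
    # One message per batch of doubles: n_points/batch round trips.
    full, rem = divmod(n_points, batch)
    messages = full + (1 if rem else 0)
    return messages * HEADER + n_points * 8

# Packing a whole batch of 1000 doubles into a single payload with struct:
payload = struct.pack("<1000d", *([0.0] * 1000))
print(len(payload), bytes_on_wire_scalar(1000), bytes_on_wire_batched(1000))
# → 8000 12000 8004
```

The framing bytes are minor; the real win is collapsing 1000 request/response round trips into one, which is where the latency goes.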

  • 1 year later...

This is a newbie question, sorry in advance! I'm having trouble with this queuing system when I try to use non-default values for the Server Address (and also when I accidentally created a race between the server queue creation and the client queue creation). It seems that the TCP Open Connection VI is blocking indefinitely, even though the timeout is set to 60 s; I have to forcibly close LabVIEW to kill it. What am I missing?

I am using LabVIEW 8.6 on Linux.

Thanks!


It seems that the TCP Open Connection VI is blocking indefinitely, even though the timeout is set to 60 s; I have to forcibly close LabVIEW to kill it. What am I missing?

A couple of possibilities: 1) Are you using the default 60000 ms timeout setting, or have you hard-coded the number? I've noticed in the past that the TCP functions sometimes ignore default timeout settings. 2) Does the IP you're trying to connect to exist on the network, and is the port listening for connections? If not, hopefully Linux handles that sort of thing more gracefully, but this can definitely tie a Windows machine up.
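For comparison, here is how an explicitly wired connect timeout behaves in a text-language sketch (Python sockets; the 60 s default mirrors the value discussed above, and is an illustration rather than the VI's implementation):

```python
import socket

def open_connection(host, port, timeout_s=60.0):
    """Open a TCP connection that raises on timeout instead of blocking forever."""
    try:
        # The timeout here bounds the connect itself, not just later reads.
        return socket.create_connection((host, port), timeout=timeout_s)
    except socket.timeout as exc:
        raise TimeoutError(f"connect to {host}:{port} timed out after {timeout_s}s") from exc
```

Connecting to an address that silently drops packets is the classic way to hit the connect timeout; a port that actively refuses connections fails fast instead of waiting.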

  • 4 weeks later...
  • 3 months later...
  • 8 months later...

There are multiple serious bugs in this API having to do with timeout and connection dropped behaviors.

Suppose the client calls Dequeue. It sends a message to the server saying, "I want data." The server dequeues data from the local queue and then tries to send it back to the client.

Problem 1: Suppose that in between saying "I want data" and getting that data, the client hits its timeout and stops waiting. The data gets sent anyway, and is sitting in the TCP connection the next time the client does a TCP Read. When the client gets around to calling Dequeue again, it receives the data that was sent the previous time. Each time thereafter, the client is one message behind.

Problem 2: A worse version of Problem 1. After timing out on a Dequeue, the client does any other operation. The data from the timed-out Dequeue will arrive as that operation's reply. In other words, the client times out on a first Dequeue, then does two Preview Queue calls in a row and gets different data.

Problem 3: Suppose that the client TCP connection actually does go dead. The server times out trying to send the data back to the client. The dequeued element is not put back onto the queue, so it gets dropped. Now you have a message that was enqueued but that no client has ever handled.

I'm pretty sure that if you have an unstable connection or use timeouts a lot, there are other bugs lurking in here.
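One common fix for Problems 1 and 2, sketched in Python since the thread's VIs are graphical: tag every request with a sequence number and have the client discard any reply whose tag doesn't match its outstanding request. The transport object and message shapes here are assumptions for illustration, not the actual protocol of this library.

```python
import itertools

class TaggedClient:
    """Discard stale replies left over from timed-out requests."""
    def __init__(self, transport):
        self.transport = transport      # assumed to offer send(msg) and recv(timeout_s)
        self.seq = itertools.count()

    def dequeue(self, timeout_s):
        req_id = next(self.seq)
        self.transport.send(("DEQUEUE", req_id))
        while True:
            reply_id, data = self.transport.recv(timeout_s)  # may raise TimeoutError
            if reply_id == req_id:
                return data
            # Stale reply from an earlier timed-out call: drop it and keep reading.
```

Without the tag, the late reply from a timed-out call is indistinguishable from a fresh one, which is exactly the one-message-behind behavior described above.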


AQ, without thinking about it too closely, I believe that the suggestion I made earlier in the thread (use local wrappers around the primitives and call those wrappers using remote CBR calls) should handle those issues, since the execution is all done in local calls. I think the only problem you might have with that is if your connection drops in the middle of the CBR call: the dequeue might happen, but you wouldn't get the data. That's probably not something you can handle without writing a more serious network queue implementation (i.e. one which uses acknowledgments, although there you probably have the problem of where the ack chain stops). Sounds like an interesting problem.
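For Problem 3 specifically, here is a transport-agnostic sketch of "only discard the element once the send succeeds" (Python; the queue and the send callable are placeholders for illustration, not the actual wrapper VIs):

```python
from collections import deque

def serve_dequeue(queue, send):
    """Pop an element and send it; on a dropped connection, restore it."""
    if not queue:
        return False
    item = queue.popleft()
    try:
        send(item)
    except (TimeoutError, OSError):
        queue.appendleft(item)  # put it back at the front so ordering is preserved
        return False
    return True
```

Note this still can't distinguish "the client never got it" from "the client got it but the acknowledgment was lost", which is the ack-chain problem mentioned above; at-least-once delivery plus idempotent handling on the client is the usual compromise.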


I know the conversation earlier had comments about other ways to make this work, but none of them highlighted that the current way didn't work. I just don't want anyone who Googles this site to think that the Network Queue implementation here is fully functional. For non-timeout cases, it works, but it has some nasty lurking bugs.

  • 4 years later...

It probably is futile to comment on such an old thread, but I pulled down the code and created a server in LabVIEW 2014 and a client in LabVIEW 2011 (simulating two application spaces). The queue count increments and decrements correctly, but the strings I am enqueuing come back from the variants as nothing but empty strings. I checked the variant creation and it is created correctly at the client, but when the items are returned, or when I view the current queue items on the server, the variants are empty.

Does anyone have any advice on this?

I attached the simple samples.

Best,

Mark

LV11-SampleClient.vi

NetworkQueueServerSample.vi


The advice is to debug. Place error indicators and probes. Highlight execution. A quick test shows that you get an error on the conversion on the sending side, presumably because the type descriptor is wrong. You need the type descriptor for a string, because that's your type.

Or, more likely, you're probably just expected to wire the string into the VI directly, since the conversion to a variant should be automatic. You probably also need to set the "return elements" input on the other side to TRUE. That's as much as I can tell from a quick look, because I'm not familiar with this set of VIs.


My first advice, besides debugging as pointed out by Yair, would be to try to communicate between the same LabVIEW version first. While the LabVIEW flattened format is designed to stay compatible across versions, variants are a very special beast with a much more complicated data structure than most other LabVIEW data types. There is a serious chance that flattened variants are not always binary compatible between LabVIEW versions.
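One way to sidestep version-dependent flattening entirely, sketched in Python: serialize to an explicit, self-describing wire format (length-prefixed JSON here) so both ends agree on the layout regardless of which version produced it. The framing is an assumption for illustration, not what the network queue VIs actually do.

```python
import json
import struct

def pack_message(value):
    """Encode a value as a 4-byte little-endian length prefix + UTF-8 JSON body."""
    body = json.dumps(value).encode("utf-8")
    return struct.pack("<I", len(body)) + body

def unpack_message(data):
    """Decode a message produced by pack_message."""
    (length,) = struct.unpack_from("<I", data)
    return json.loads(data[4:4 + length].decode("utf-8"))

msg = pack_message({"type": "string", "value": "hello"})
print(unpack_message(msg))  # → {'type': 'string', 'value': 'hello'}
```

In LabVIEW terms the analogous move is flattening a plain string (or a cluster with an explicit type) rather than shipping an opaque variant between versions.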

Edited by rolfk

