Posts posted by ShaunR

  1. We thought about trying a DLL, but none of the C programmers here are experienced with that sort of thing (they are all UNIX programmers and are having enough issues trying to deal with the Windows world) so we dropped it. We're going to write files to a RAM drive. The data rate is low enough, and the fact that all the data is available once a second rather than spread out makes that a pretty good option. Hopefully. :-)

    My next move is to kick this to NI, I guess. It's such a narrow issue -- I doubt many folks are doing loopback TCP in LabVIEW on a regular basis -- but my concern is that there's some underlying issue with LV that may affect other TCP functions.

    Well. If they have difficulty with DLLs (SOs on Linux) then kernel-level drivers will slay them. The ramdrive.sys driver is no longer available in Windows 7 (hope they weren't thinking of using it ;) ) but there are a few third-party solutions, I think.

    One final thought: turn off the Nagle algorithm. It is known to play hell with things like games and to silently introduce delays in packets sent through the loopback. It is off on my setups for this very reason, although I never saw 2 second delays.
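    Purely as a sketch (not from the original post): "turning off Nagle" boils down to setting TCP_NODELAY on the socket. In C with Winsock it looks something like this; in LabVIEW you would have to reach the same setsockopt through a wrapper DLL or similar.

    /* Minimal sketch: disable the Nagle algorithm on a Winsock socket.
       Link against ws2_32.lib. */
    #include <winsock2.h>

    int disable_nagle(SOCKET s)
    {
        BOOL flag = TRUE;   /* non-zero turns Nagle's coalescing off */
        return setsockopt(s, IPPROTO_TCP, TCP_NODELAY,
                          (const char *)&flag, sizeof(flag));
    }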

  2. And then there's always the fact that this works fine in C without having to change any settings...

    Windows 7 x64 with LV 2009 x64.

    Indeed. My problem was just sheer throughput and it didn't matter what it was written in.

    I know it's curing the symptom rather than the problem (and it will be blocking), but have you tried getting the C read and write stuff compiled into a DLL and using that instead? Just a thought to see if the specific problem goes away.
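    If you go that route, the wrapper only needs a couple of blocking calls. A rough sketch of what such a DLL might export (function names and error handling are purely illustrative, not from any existing code in this thread), callable from LabVIEW via the Call Library Function Node:

    /* Hypothetical Winsock wrapper DLL: write/read an exact number of bytes.
       Returns 0 on success or a Winsock error code. Link against ws2_32.lib. */
    #include <winsock2.h>

    __declspec(dllexport) int tcp_write_all(SOCKET s, const char *buf, int len)
    {
        int sent = 0;
        while (sent < len) {
            int n = send(s, buf + sent, len - sent, 0);
            if (n == SOCKET_ERROR)
                return WSAGetLastError();
            sent += n;
        }
        return 0;
    }

    __declspec(dllexport) int tcp_read_all(SOCKET s, char *buf, int len)
    {
        int got = 0;
        while (got < len) {
            int n = recv(s, buf + got, len - got, 0);
            if (n == 0)
                return WSAECONNRESET;    /* peer closed the connection */
            if (n == SOCKET_ERROR)
                return WSAGetLastError();
            got += n;
        }
        return 0;
    }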

    What do NI say about it (after all it is repeatable by a number of people)?

  3. I hope this doesn't just get appended to my last post. That happens a lot on this site. I must have something set wrong somewhere. Anyway...

    Test results:

    Windows --- LV ------- Result (passed = ran for over 4 hours without failing)
    XP --------- 8.6.1f1 --- PASS
    XP --------- 10SP1 ----- PASS
    7 ---------- 10SP1 ----- FAIL
    7 ---------- 11 -------- FAIL
    7 ---------- 8.6.1f1 --- FAIL

    Anyone else see a trend here? ;)

    FWIW, all Windows 7 boxes have been 64-bit.

    I'm going to run the write and read on 2 separate Windows 7 computers for a Long Time again, since I probably never ran much longer than a couple of hours before stopping with no error. For completeness' sake, I should do a 4 hour test. Because if *that* fails, there's a really big problem somewhere.

    Thanks again for all the help/suggestions and especially thanks to those of you who were running tests for me!

    I have v1.0 of the Desktop Execution Trace Toolkit. This should actually be a small enough program for it not to choke. I can try it even though it doesn't say it works on Win7.

    Network analyzers aren't a lot of help since the data never actually leaves the machine.

    No questions are dumb!

    Hmm. Yes. A bit of a trend, apart from LV 2010. And it may be why I cannot see any problems on my machines (none of the examples have fallen over after running for 29 hrs now :) ). My Windows TCP/IP stack is highly modified from a standard install. It was the only way I could "reliably" get TCP/IP transfer rates of up to 80 MB/sec (not in loopback; across the network). The sorts of things that were changed were the TCP/IP auto-tuning and Chimney Offload. I also had to play with the TCP Optimiser, but can't remember exactly what now. This was in addition to the TX buffers. I wouldn't have thought 25MB/sec would/should be that much of a problem, but I guess it is Windows, eh?

  4. Wow. So maybe it's a Win7/LV10 problem...

    I'm installing LV11 on my Win7 homebox as I type. I'll try it on that.

    BTW, I ran both sides on my XP/LV8.6 machine this afternoon for over an hour with no problem. I'll try a longer test tomorrow.

    I really appreciate you running these LV/OS combo tests.

    I know it's one of those silly questions (especially since the C program would suffer from it too). But it has to be asked.....

    Are you sure the power saving is turned off on the network card(s)?

  5. Every once in a long while I'm presented with a problem I just can't figure out. It's been quite some time; I guess I'm overdue. I've run so many different tests I'm seeing them in my sleep, but here's the summary of tearing my hair out for the past two weeks:

    The basic problem is to write 5MBytes of data from one program to another, on the same computer, every second, via TCP. In its original configuration, this data is literally 5MB all in one TCP write every second, not paced out. It uses payload size to determine the end of the data.

    If the two programs are written in C, it works. I was incorrect in my original statement that if both programs are in LabVIEW it also works. It doesn't, or rather I haven't been able to figure out how to make it work. It does work if both LV programs are on different computers, but not if they are on the same computer. And if the LV is doing the write, the C read works fine. So the issue seems to be the LV read, on the same computer, with either type of write. The two programs connect and send/receive the data for several minutes (2-50). Then both sides stop with various errors. With both sides in LV, most often the read errors out with a timeout (56), and the write errors out saying the system caused a network disconnect (62).

    Here are some things I've tried that made little or no difference:

    running the LV programs together on a different computer

    Intermediate mode read (thanks for the suggestion, Rolf)

    breaking the 5MB write up into 10 500KB writes and 100 50kB writes

    breaking the 5MB write up into 10 500KB writes and pacing them out over 750ms

    reading the 5MB all at once

    breaking the read up into 500kB and 50kB passes

    Shaun, your suggestion to play with the TCP buffer sizes helped, in that instead of failing in a few minutes, it would go for several minutes. Oh, and controlling buffer size on a Windows 7 machine is a PITA. Check out this article if you're doing it on a Win7 platform. I tried every buffer size possible, but it never really helped much more. I even posted the problem to serverfault.com, along with many other questions about Win7 TCP buffers that no one seems to understand, and have gotten a deafening silence in response.

    At this point, I'm going with Plan B. I've configured a ramdisk on the computer and we're just going to write/read files. In retrospect, this may actually be a better solution, but dang it, I want to know why the TCP way isn't working.

    I'm attaching a couple of very simple VIs to demonstrate the problem. Just run them on the same machine, with that machine's IP address (it's an input instead of a default, for testing on 2 different machines). Written with LV 2010 SP1 64-bit on Windows 7. About all I haven't been able to try is a different combo of LV version/OS. The longest these test VIs have ever run has been 50 minutes, and that was much longer than the norm. If anyone has a chance to run these VIs, please let me know how it goes.

    Thanks,

    Cat

    Try it this way....

  6. I guess it won't help then.

    Rolf's got a point, but immediate mode really puts a burden on the CPU since you've got to (wo)man-handle characters as they arrive, then concatenate and terminate the loop on whatever it's supposed to terminate on (number of bytes or term char).

    This is the sort of thing:

    As you can probably see from the snippet, there is a (small) possibility that the first 4 bytes are garbage, or that you've started reading halfway through a string and therefore expect a huge number.
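    For anyone without the image, a rough C rendering of the same idea (the original snippet was a LabVIEW diagram; this is only a sketch, and the 4-byte big-endian length prefix plus the sanity limit are assumptions):

    /* Read a length-prefixed message: 4-byte big-endian size, then payload.
       If we joined the stream mid-message the "length" may be garbage, hence
       the sanity check. Returns a malloc'd buffer or NULL on error. */
    #include <stdint.h>
    #include <stdlib.h>
    #include <winsock2.h>

    #define MAX_PAYLOAD (16u * 1024u * 1024u)   /* arbitrary upper bound */

    static int recv_exact(SOCKET s, char *buf, uint32_t len)
    {
        uint32_t got = 0;
        while (got < len) {
            int n = recv(s, buf + got, (int)(len - got), 0);
            if (n <= 0) return -1;
            got += (uint32_t)n;
        }
        return 0;
    }

    char *read_message(SOCKET s, uint32_t *out_len)
    {
        unsigned char hdr[4];
        if (recv_exact(s, (char *)hdr, 4) != 0) return NULL;

        uint32_t len = ((uint32_t)hdr[0] << 24) | ((uint32_t)hdr[1] << 16) |
                       ((uint32_t)hdr[2] << 8)  |  (uint32_t)hdr[3];
        if (len == 0 || len > MAX_PAYLOAD) return NULL;   /* probably mid-stream garbage */

        char *buf = malloc(len);
        if (buf == NULL) return NULL;
        if (recv_exact(s, buf, len) != 0) { free(buf); return NULL; }

        *out_len = len;
        return buf;
    }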

    So are you using character-terminated messages or prepending a payload size? You haven't said much about the inner workings. Example perhaps?

  7. Each of the data streams is ~8 MB/s, so worst case throughput is 24MB/s (counting P2 --> P3 twice). I've been staring at Resource Monitor a lot. The network (1Gb) is loping, the CPUs are loping.

    Thanks for the vi. Any chance you've got a "Get Buffer" vi? I'd like to know what the buffers are set at before I start playing around with them. I tried to reverse-engineer the wsock32.dll call in "Set Buffer" but can't really test it here on my home box.

    Well, that's not a huge amount. Even the default LV examples should be able to cope with that.

    The default Windows buffer is 8192 (if I remember correctly - I don't have a "get socket option" VI to hand....maybe later in the week). There are a few ways of calculating the optimum size dependent on the network characteristics, but I usually just set it to 65536 (64K) unless it's a particularly slow network (like dial-up). It really makes a difference with UDP rather than TCP (datagram size errors). Note, however, that it only makes a difference if you are setting it on the "Listener" connection. It has no effect on "Open".
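    For what that maps to underneath (a sketch, assuming Winsock; the 64K figure is just the rule-of-thumb value mentioned above, not a recommendation):

    /* Set the receive/send buffer sizes on the listening socket before
       accept(), so accepted connections inherit them - hence "Listener",
       not "Open". Returns 0 on success or a Winsock error code. */
    #include <winsock2.h>

    int set_socket_buffers(SOCKET listener, int bytes /* e.g. 65536 */)
    {
        if (setsockopt(listener, SOL_SOCKET, SO_RCVBUF,
                       (const char *)&bytes, sizeof(bytes)) == SOCKET_ERROR)
            return WSAGetLastError();
        if (setsockopt(listener, SOL_SOCKET, SO_SNDBUF,
                       (const char *)&bytes, sizeof(bytes)) == SOCKET_ERROR)
            return WSAGetLastError();
        return 0;
    }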

    It's strange that the C program doesn't exhibit the same problem. If you are doing the usual write-the-size-then-write-the-data, you can try combining it into one write operation (just concatenate the strings), but I haven't run into a problem with the former. If you have the C code then you can take a peek to see if they are doing anything fancy with the sockets....but I doubt it.

    Are you trying to send and receive on the same port (cmd-resp), or do you have two separate channels, one for sending and one for receiving? If the latter, what part disconnects? The receipt of the data or the send of the data (or arbitrarily both), and what is the error thrown (66?)? If you get timeout errors on a read, then you should see that in the network monitor (Task Manager) as drops, but you say that it "lopes" (had to look that one up..lol). That's normally indicative of a terminated messaging scheme where the terminator gets dropped for some reason.

  8. Yes. Each Event Structure has its own events queue, and it dequeues from that queue in the thread that it is executing that section of the VI. The default is "whatever execution thread happens to be available at the time", but the VI Properties can be set to pick a specific thread to execute the VI, and the Event Structure executes in that thread.

    The "Generate User Events" node follows the same rules -- it runs in whatever thread is running that part of the VI. UI events are generated in the UI thread, obviously, but they're handled at the Event Structure.

    Hmmm. If that is true, how is it reconciled with the events of front panel controls, which surely (neck stretched far) must be in the UI thread? I could understand "User Events" being able to run in anything, but if bundled with a heap of front panel events, is it still true?

  9. I think the thing that has put me off events is that it forces one to use an event structure as the only way to handle messages and then one might actually want an event structure somewhere else in the same VI and I have this prejudice against two event structures on the same diagram... Also, there's no equivalent to the queue/notifier status and flush primitives (although I take note of AQ's warning over race conditions when used in combination with dequeue/wait primitives). I guess what I really want is something that combines elements of everything:

    • Non-lossy like queues
    • One to many mappings like notifiers
    • Something I can feed the reference straight into an event structure and handle 'new element' events.
    • Re-ordering of queued elements so that one can have high priority traffic overtake lower priority.

    Indeed. Events have been screaming for an overhaul for some time. I'm not sure, but I think they may also run in the UI thread, which would make them useless for running in different execution systems and priorities (another reason I don't use them much....just in case).

    I would also add to your list being able to feed VISA sessions straight in so we can have event-driven serial (pet dislike of mine :P ).

  10. Ah yes now that you mention it I see what you mean about doing all the SQL queries yourself with the SQLiteVIEW toolkit. Anyway I'll send a query off to them to find out what are the advantages (if any) of using their package of the SQLite API - maybe it has better performance or something - thanks.

    Chris

    There's lots of info on the SQLite API for LabVIEW's performance here. There's also a lot of the development history too, since LAVA was its birthplace.

    • A multi-connect server and single-connect client that maintains persistent connections with each other. That means they connect, and if the connection breaks they stay up and attempt to reconnect until the world ends (or until you stop one of the end-points :rolleyes:).
    • You can have any number of TCPIP-Link servers and clients running in your LabVIEW instance at a time.
    • Both server and client support TCP/IP connection with other TCPIP-Link parties (LabVIEW), as well as non-TCPIP-Link parties (LabVIEW or anything else, HW or SW). So you have a toolset for persistent connections with anything speaking TCP/IP basically.
    • Outgoing messages can be transmitted using one of four schemes: confirmation-of-transmission (no acknowledge, just ack that the message went into the transmit-buffer without error), confirmation-of-arrival (TCPIP-Link at the other end acknowledges the reception; happens automatically), confirmation-of-delivery (you in the receiving application acknowledges reception; is done with the TCPIP-Link API, the message tells you if it needs COD-ack), and a buffered streaming mode.
    • The streaming mode works a bit like Shared Variables, but without the weight of the SVE. The user can set up the following parameters per connection: Buffer expiration time (if the buffer doesn't fill, it'll be transmitted anyway after this period of time), Buffer size (the buffer will be transmitted when it reaches this size), Minimum packet gap (specifies minimum idle time on the transmission line, especially useful if you send large packets and don't want to hog the line), Maximum packet size (packets are split into this size if they exceed it), and Purge timeout (how long time will the buffer be maintained if the connection is lost, before it's purged).
    • You transmit data through write-nodes, and receive data by subscribing to events.
    • Subscribable system-events are available to tell you about connects/disconnects etc.
    • A log is maintained for each connection; you can read the log when you want or you can subscribe to log-events. The log holds the last 500 system events for each connection (Connection, ConnectionAttempt, Disconnection, LinkLifeBegin, LinkLifeEnd, LinkStateChange, ModuleLifeBegin, ModuleLifeEnd, ModuleStateChange etc.) as well as the last 500 errors and warnings.
    • The underlying protocol, besides persistence, utilizes framing and byte-stuffing to ensure data integrity (a generic byte-stuffing sketch is shown after this post). 12 different telegram types are used, among which is a KeepAlive telegram that discovers congestion or disconnects that otherwise wouldn't propagate into LabVIEW. If an active network device exists between you and your peer, LabVIEW won't tell you if the peer disconnected by mistake. If you and your peer have a switch between you for instance, your TCP/IP connection in LabVIEW stays valid even if the network cable is disconnected from your peer's NIC - but no messages will get through. TCPIP-Link will discover this scenario and notify you, close the sockets down, and go into reconnect-mode.
    • TCPIP-Link of course works on localhost as well, but it's clever enough to skip TCP/IP if you communicate within the same LV-instance, in which case the events are generated directly (you can force TCPIP-Link to use the TCP/IP-stack anyway in this case though, if you want to).
    • Something like 20 or 30 networking and application related LabVIEW errors are handled transparently inside all components of TCPIP-Link, so it won't wimp out on all the small wrenches that TCP-connections throw into your gears. You can read about most of what happens in the warning log if you care though (error 42 anyone? Oh, we're hitting the driver too hard. Error 62? Wait, I thought it should be 66? No, not on Real-Time etc.).
    • The API will let you discover running TCPIP-Link parties on the network (UDP multicast to an InformationServer on each LV-instance, configurable subnet time-to-live and timeout). Servers and clients can be configured individually as Hidden to remain from discovery in this way though.
    • Traffic data is available for each connection, mostly stuff like line-load, payload ratio and such.

    Cheers,

    Steen
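    As an aside (this is not part of TCPIP-Link, whose actual wire format isn't published here): "byte-stuffing", as mentioned in the list above, generally means escaping the frame delimiter inside the payload so the receiver can always find telegram boundaries. A generic SLIP-style sketch in C, with arbitrary delimiter/escape values:

    /* Generic byte-stuffing encoder (SLIP-style). The byte values are
       arbitrary examples, not TCPIP-Link's actual protocol. */
    #include <stddef.h>

    #define FRAME_END  0xC0   /* marks the end of a telegram        */
    #define FRAME_ESC  0xDB   /* escape introducer                  */
    #define ESC_END    0xDC   /* escaped substitute for FRAME_END   */
    #define ESC_ESC    0xDD   /* escaped substitute for FRAME_ESC   */

    /* Encode src[0..src_len) into dst and append the end marker.
       dst must have room for 2*src_len + 1 bytes (worst case).
       Returns the encoded length. */
    size_t stuff_bytes(const unsigned char *src, size_t src_len, unsigned char *dst)
    {
        size_t out = 0;
        for (size_t i = 0; i < src_len; i++) {
            if (src[i] == FRAME_END)      { dst[out++] = FRAME_ESC; dst[out++] = ESC_END; }
            else if (src[i] == FRAME_ESC) { dst[out++] = FRAME_ESC; dst[out++] = ESC_ESC; }
            else                          { dst[out++] = src[i]; }
        }
        dst[out++] = FRAME_END;   /* unambiguous telegram terminator */
        return out;
    }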

    This sounds like a more polished/advanced evolution of the Dispatcher in the CR (I like the use of events here, although I ran into issues with them and decided TCP/IP timeouts were more robust). Many of the features you highlight here (like auto-reconnect, system messages, heartbeat etc.) I've been meaning to add, along with a more bi-directional architecture (although the version I have in my SVN also has control channels as well as the subscriber streaming channels). But on the whole, your stuff sounds a lot more flexible and useful (I'd love to get a peek at your error detection and recovery :) )

  11. Ok, I'm getting this feeling that I'm kind of missing the point here and probably being hopelessly naive, but...

    Aren't two parallel loops exactly what notifiers were invented for? OK, yes they're lossy, but if you are just using it to stop two parallel loops and you know the last message ever posted on the notifier is the 'stop now', then it doesn't matter, surely? The problem with any queue-based design is surely that you're stuffed if someone else grabs the queue and takes your message - or if you want to stop N loops in parallel where N is only known at run time.

    What would be handy for multiple loops distributed over an indeterminate number of parallel running VIs would be a one-to-many queue with priorities - so that you could enqueue an element with an arbitrary priority and have that element delivered to multiple waits, with the elements presented to the wait sorted first by priority and then by enqueue timestamp. Thus each dequeue node could pull entries safe in the knowledge that it wasn't affecting any other dequeue node, could choose to discard elements if it wanted, but would process them in an order determined by the enqueuer that wasn't necessarily FIFO. But I don't see this is necessary for the stated problem...?
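    (Purely illustrative, not an existing LabVIEW feature: the ordering described above - priority first, enqueue timestamp second - is just a comparison rule. In C it might look like this; a real one-to-many priority queue would wrap it in a heap plus per-subscriber read cursors.)

    #include <stdint.h>

    typedef struct {
        int      priority;     /* higher value = delivered first               */
        uint64_t enqueued_us;  /* enqueue timestamp in microseconds, ties FIFO */
        void    *payload;
    } element_t;

    /* <0 : a goes before b, >0 : a goes after b, 0 : equal ordering */
    int element_order(const element_t *a, const element_t *b)
    {
        if (a->priority != b->priority)
            return b->priority - a->priority;            /* higher priority first */
        if (a->enqueued_us < b->enqueued_us) return -1;  /* earlier enqueue first */
        if (a->enqueued_us > b->enqueued_us) return  1;
        return 0;
    }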

    That is really what events are for. However, I have a dislike for them since they cannot be encapsulated easily and maintain genericism. There are a couple of other options though (with queues). You can peek a queue and only dequeue the message if it is meant for that module (this has the downside that if you don't dequeue an element, it stalls). Or my favourite: each "module" has a queue linked to the VI instance name. To close all dependents, you only need to list all VI names and poke (enqueue at opposite end) an exit message on all of them (just a for loop). This becomes very straightforward if all queues are string types and you just compare (or have a case) to detect EXIT, STOP, DIE or whatever to terminate. Some people, however, prefer strict data types.

    But I think you are right. In the absence of events, notifiers are the next best choice for 1 to many messaging. I think that most people prefer to have one or the other rather than both in a loop though. And if a queue is already being used, it makes sense to try and incorporate an exit strategy using it.

  12. I would be happy with the following change - any free for commercial use code (i.e. free, not just trial mode) software that can be downloaded from the LabVIEW Tools Network should be allowed in the CLA exam. It is really frustrating to experienced developers when they can't use their standard tool-set (and one that is ultimately managed by NI) on the CLA exam. It definitely takes critical time away from creating an architecture when you are compelled to recreate architecture components (or even their interfaces) from scratch, especially when you already use them every day.

    I'm probably in the minority here (again :D) but you don't need a software tool-chain to create an architecture. You are only using the LabVIEW IDE as your editor instead of (say) Microsoft Word. The NI exams are specifically designed to be challenging in the time frame provided. However, if people feel the architecture for these fairly simple systems requires a tool-chain just to realise it, then perhaps the proposed architecture is over-complicated for the task (KISS).

  13. Hello again, I have been away due to exams but now I am working very intensively on this project again and I need some help (again).

    My problem is that the control of my Multichannel Analyser (ORTEC's ASPEC-927) cannot be done through TTL pulses after all, and I have to find a way to do it through LabVIEW. I have been searching a lot on the internet but all I have come up with is either a publication, "LABVIEW-BASED MCA EMULATION SOFTWARE FOR ORTEC MULTICHANNEL BUFFERS", or some reference to ActiveX usage for such a task. All I need is to trigger the counting, stop it, and get the data into LabVIEW. If someone has knowledge on the matter, I would like to know what is achievable and what is not. I have no knowledge of ActiveX programming whatsoever, by the way.

    ps. I haven't found anything related to the communication of LabVIEW with Maestro 32, which is the default software provided with the MCA card, nor to "reading" the .chn output files from LabVIEW, so I assume it is out of the question.

    thank you in advance

    Well. A quick search hasn't revealed any drivers others have written. It seems to be a USB device, so you have 2 options.

    1. Write a LabVIEW USB driver from their documentation (very hard and time-consuming)

    2. Use their toolkit to write a high-level LabVIEW instrument driver (I'd go for this).

    Either way, unless someone has already done the work or the manufacturer has supplied something, you will have to write an interface, and option 2 would be the fastest and easiest, but still time-consuming.

    Their toolkit, it seems, comes in two flavours: DLL-based and ActiveX. Personally I would go for DLL-based, but that is only because I hate ActiveX with a vengeance. It specifically states LabVIEW 5.1 (..gulp...) and they have examples, so you could start by hacking and modifying those. You won't be able to use LV 5.1 VIs if you are using an LV version greater than 6 or 7 (I think), so let's hope they have more recent versions.

    The toolkit has two options for programming. For programmers familiar with Dynamic Linked Libraries (DLLs), it provides DLLs and supplemental Windows applications programming interfaces, which can be called from C, C++, or Visual Basic. For programmers using ActiveX Controls, all the functionality can be accessed more conveniently through ActiveX methods, properties, and events. The ActiveX capability makes it easy to program the ORTEC products from LabVIEW (Version 5.1 or later), Visual C++, and Visual Basic. Simple example programs are supplied with both programming options.

  14. thanks for the response, I had already taken a stab at just throwing in a second loop like you did there, except I used "bytes at port" feeding into a case structure where 0 does nothing essentially and default does the read, which works as far as talking out of the serial port like I'd like, but the problem I'm having is syncing the trace window on my FP. In your example you just have the reads showing up in the read buffer indicator, but I'm trying to have the Tx echo in the same indicator also. That's where I was thinking originally I'd need like a QSM or something, unless there is some clever way to echo a write to "read buffer" in your bottom loop in the example you posted.

    Nothing clever. If your device doesn't echo, then you just prepend the command string. But the main problem has been solved; simple, and it took me about 10 mins (note I am using the term char to only update the display once a whole string has been received).

    --> The property node in the read loop is really a local variable. It gets converted when posting a snippet.

    You could put the read vi(s) straight after the write (after the event structure) and it would work fine most of the time and be fully synchronised. But since you don't want the UI to look unresponsive whilst waiting for data; asynchronous reading is generally preferred.

    State machines come into their own when you have multiple states that can be operated in an arbitrary order. So where you might want to use one is, for example, when a response from the device dictates the next command to send, e.g. authentication challenges. But for most implementations where user interaction is involved, an event structure is preferable since all user interaction is handled for you.

  15. There are alternative methods, such as sending the data via queues and using a database, amongst others. It really depends on how fast (what is the acquisition rate?) and how fresh the data needs to be (would the user really notice if data was delayed 100ms as long as it was continuous?). Besides, there are not enough pixels on a screen to represent 20,000 data points, so you don't need all of them to display to the user.
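    (A sketch of that last point, not from the original post: a graph that is only 'buckets' pixel columns wide can show at most a min/max pair per column, so the data can be reduced before it ever reaches the UI.)

    #include <stddef.h>

    /* Reduce n samples to at most 2*buckets values (min and max per bucket).
       'out' must hold 2*buckets doubles; returns the number of values written. */
    size_t decimate_min_max(const double *samples, size_t n,
                            size_t buckets, double *out)
    {
        size_t written = 0;
        for (size_t b = 0; b < buckets && n > 0; b++) {
            size_t start = b * n / buckets;
            size_t end   = (b + 1) * n / buckets;
            if (end <= start) end = start + 1;          /* guard tiny buckets */
            double lo = samples[start], hi = samples[start];
            for (size_t i = start + 1; i < end && i < n; i++) {
                if (samples[i] < lo) lo = samples[i];
                if (samples[i] > hi) hi = samples[i];
            }
            out[written++] = lo;
            out[written++] = hi;
        }
        return written;
    }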

    Of the two choices you have given, I would probably go for the global with a single writer and limited readers (data-pool). That is the old method from before things like queues/events, and it doesn't limit you to running the acquisition in the UI thread. But queues are the most commonly used, since they also break the coupling between the UI and the acquisition and are very efficient. There are some examples shipped with LabVIEW that demonstrate this technique for waveforms.
