ShaunR
Members · 4,871 posts · 296 days won

Everything posted by ShaunR

  1. Improved Stability: I would be very interested to see a similar graph of the CARs raised by customers for the different versions.
  2. Well, if they have difficulty with DLLs (SOs on Linux), then kernel-level drivers will slay them. The randrive.sys driver is no longer available in Windows 7 (hope they weren't thinking of using it ), but there are a few 3rd-party solutions, I think. One final thought: turn off the Nagle algorithm. It is known to play hell with things like games, and to silently introduce delays in packet sending through the loopback. It is off on my setups for this very reason, although I never saw 2-second delays.
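For reference, disabling Nagle comes down to one socket option. A minimal sketch in Python (LabVIEW's built-in TCP VIs don't obviously expose this, as far as I know, so there you'd normally do it at the OS level or via a call library node; the function name here is mine):

```python
import socket

def make_nodelay_socket() -> socket.socket:
    """Create a TCP socket with the Nagle algorithm disabled.

    With Nagle on, small writes can be coalesced and delayed until an
    ACK arrives; TCP_NODELAY forces each send() onto the wire immediately.
    """
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
    return s
```

Reading the option back with `getsockopt` after connecting is a quick way to confirm the setting actually stuck.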
  3. Windows 7 x64 with LV 2009 x64. Indeed. My problem was just sheer throughput, and it didn't matter what it was written in. I know it's curing the symptom rather than the problem (and it will be blocking), but have you tried compiling the C read and write code into a DLL and using that instead? Just a thought, to see if the specific problem goes away. What do NI say about it (after all, it is repeatable by a number of people)?
  4. Hmm. Yes. A bit of a trend, apart from LV2010. And it may be why I cannot see any problems on my machines (none of the examples have fallen over after running for 29 hrs now ). My Windows TCPIP stack is highly modified from a standard install; it was the only way I could "reliably" get TCPIP transfer rates of up to 80 MB/sec (not in loopback; across the network). The sorts of things that were changed were the TCPIP auto-tuning and Chimney Offload. I also had to play with the TCP Optimiser, but can't remember exactly what now. This was in addition to the TX buffers. I wouldn't have thought 25 MB/sec would/should be that much of a problem, but I guess it is Windows, eh?
  5. I know it's one of those silly questions (especially since the C program would suffer from it too), but it has to be asked..... Are you sure power saving is turned off on the network card(s)?
  6. New crash reporter. But it definitely feels more responsive than 2010.
  7. I guess it won't help then. Rolf's got a point, but immediate mode really puts a burden on the CPU, since you've got to (wo)man-handle characters as they arrive, then concatenate and terminate the loop on whatever it's supposed to terminate on (number of bytes or term char). This is the sort of thing: As you can probably see from the snippet, there is a (small) possibility that the first 4 bytes are garbage, or that you start reading half-way through a string and therefore expect a huge number. So are you using character-terminated messages or pre-pending a payload size? You haven't said much about the inner workings. Example, perhaps?
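The snippet referred to above is a LabVIEW diagram and doesn't survive as text, so here is a rough equivalent sketched in Python: read a 4-byte length header, sanity-check it, then read exactly that many payload bytes. The `max_len` guard and all the names are my own additions, not from the original snippet.

```python
import socket
import struct

def recv_exact(sock: socket.socket, n: int) -> bytes:
    """Loop until exactly n bytes arrive (a single recv may return less)."""
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("peer closed mid-message")
        buf += chunk
    return buf

def recv_message(sock: socket.socket, max_len: int = 1 << 20) -> bytes:
    """Read a 4-byte big-endian length header, then the payload.

    The sanity check guards against the failure mode described above:
    joining the stream half-way through a message and interpreting
    payload bytes as a (huge) length.
    """
    (length,) = struct.unpack(">I", recv_exact(sock, 4))
    if length > max_len:
        raise ValueError(f"implausible length {length}; out of sync?")
    return recv_exact(sock, length)
```

The same framing works for the write side: pack the length with `struct.pack(">I", len(payload))` and send it immediately before the payload.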
  8. Well, that's not a huge amount. Even the default LV examples should be able to cope with that. The default Windows buffer is 8192 bytes (if I remember correctly; I don't have getsockopt to hand.... maybe later in the week). There are a few ways of calculating the optimum size depending on the network characteristics, but I usually just set it to 65536 (64K) unless it's a particularly slow network (like dial-up). It really makes a difference with UDP rather than TCP (datagram size errors). Note, however, that it only makes a difference if you are setting it on the "Listener" connection; it has no effect on "Open". It's strange that the C program doesn't exhibit the same problem. If you are doing the usual write-size-then-write-data, you can try combining it into one write operation (just concatenate the strings), but I haven't run into a problem with the former. If you have the C code, you can take a peek to see if they are doing anything fancy with the sockets.... but I doubt it. Are you trying to send and receive on the same port (Cmd-Resp), or do you have two separate channels, one for sending and one for receiving? If the latter, what part disconnects? The receipt of the data or the send of the data (or arbitrarily both), and what is the error thrown (66?)? If you get timeout errors on a read, then you should see that in the network monitor (Task Manager) as drops, but you say that it "lopes" (had to look that one up.. lol). That's normally indicative of a terminated messaging scheme where the terminator gets dropped for some reason.
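Setting the receive buffer amounts to one setsockopt call. A hedged Python sketch (in LabVIEW you would wire the buffer size to the listener as described above; the function name here is mine, and note the kernel is free to round or cap the request):

```python
import socket

def set_receive_buffer(sock: socket.socket, size: int = 65536) -> int:
    """Request a 64 KB kernel receive buffer (Windows defaults to 8 KB).

    The kernel may round or cap the request (Linux, for instance, doubles
    it for bookkeeping), so read the value back rather than trusting the
    call succeeded exactly.
    """
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, size)
    return sock.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)
```

Checking the returned value against what you asked for is the quickest way to spot a system-wide cap getting in the way.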
  9. What sort of throughput are you trying to achieve? You could try and give LV a bit more time to service the TCPIP stack by increasing the buffer size.
  10. Ooooh. You like living dangerously eh? .
  11. Hmmm. If that is true, how is it reconciled with the events of front-panel controls, which surely (neck stretched far) must be in the UI thread? I could understand "User Events" being able to run in anything, but if bundled with a heap of front-panel events, is it still true?
  12. Where do they run then? In the execution system of the vi properties?
  13. Indeed. Events have been screaming for an overhaul for some time. I'm not sure, but I think they may also run in the UI thread, which would make them useless for running in different execution systems and priorities (another reason I don't use them much.... just in case). I would also add to your list being able to feed VISA sessions straight in, so we can have event-driven serial (a pet dislike of mine ).
  14. There's lots of info on the SQLite API for LabVIEW's performance here. There's also a lot of the development history, too, since LAVA was its birthplace.
  15. This sounds like a more polished/advanced evolution of the Dispatcher in the CR (I like the use of events here, although I ran into issues with them and decided TCPIP timeouts were more robust). Many of the features you highlight here (like auto-reconnect, system messages, heartbeat, etc.) I've been meaning to add, along with a more bi-directional architecture (although the version I have in my SVN also has control channels as well as the subscriber streaming channels). But on the whole, your stuff sounds a lot more flexible and useful (I'd love to get a peek at your error detection and recovery ).
  16. That is really what events are for. However, I have a dislike for them since they cannot be encapsulated easily while maintaining genericism. There are a couple of other options, though (with queues). You can peek a queue and only dequeue the message if it is addressed to you (this has the downside that if you don't dequeue an element, it stalls). Or my favourite: each "module" has a queue linked to the VI instance name. To close all dependents, you only need to list all VI names and poke (enqueue at opposite end) an exit message onto all of them (just a for loop). This becomes very straightforward if all queues are string types and you just compare (or have a case) to detect EXIT, STOP, DIE or whatever to terminate. Some people, however, prefer strict data types. But I think you are right: in the absence of events, notifiers are the next best choice for 1-to-many messaging. I think that most people prefer to have one or the other rather than both in a loop, though. And if a queue is already being used, it makes sense to try and incorporate an exit strategy using it.
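The named-queue shutdown pattern above can be sketched outside LabVIEW too. A minimal Python analogue, assuming a hypothetical registry dict standing in for LabVIEW's obtain-queue-by-name (all names here are mine):

```python
import queue
import threading

# Hypothetical registry standing in for LabVIEW's "queue named after the
# VI instance": each module looks up (or creates) its queue by name.
_registry: dict[str, queue.Queue] = {}
_lock = threading.Lock()

def obtain_queue(name: str) -> queue.Queue:
    """Return the queue for this module name, creating it on first use."""
    with _lock:
        return _registry.setdefault(name, queue.Queue())

def broadcast(message: str) -> None:
    """Poke a control message onto every registered module queue
    (the 'just a for loop' shutdown described in the text)."""
    with _lock:
        for q in _registry.values():
            q.put(message)

def worker(name: str) -> None:
    """A module loop: plain string compare decides when to terminate."""
    q = obtain_queue(name)
    while True:
        msg = q.get()
        if msg in ("EXIT", "STOP", "DIE"):
            break
        # ...handle normal messages here...
```

One design note: because every queue is a string type, the exit message rides the same channel as normal traffic, which is exactly the "incorporate an exit strategy into the queue you already have" idea.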
  17. I'm probably in the minority here (again ), but you don't need a software tool-chain to create an architecture. You are only using the LabVIEW IDE as your editor instead of (say) Microsoft Word. The NI exams are specifically designed to be challenging in the time frame provided. However, if people feel the architecture for these fairly simple systems requires a tool-chain just to realise it, then perhaps the proposed architecture is over-complicated for the task (KISS).
  18. Or just supply a marker and we can draw on other peoples T-shirts.
  19. I use ±3 standard deviations from the mean (or zero) to set a dynamic threshold for peak detection of varying signals.
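The idea is simple enough to sketch in a few lines of Python (pure stdlib; the function names and the local-maximum test are my own choices, not from the original):

```python
from statistics import mean, stdev

def dynamic_threshold(signal: list[float], k: float = 3.0) -> float:
    """Threshold at mean + k standard deviations (k=3 as in the text).

    Recomputing this per acquisition block is what makes the threshold
    'dynamic': it tracks the signal's own baseline and noise level.
    """
    return mean(signal) + k * stdev(signal)

def detect_peaks(signal: list[float], k: float = 3.0) -> list[int]:
    """Indices of local maxima that exceed the dynamic threshold."""
    thr = dynamic_threshold(signal, k)
    return [i for i in range(1, len(signal) - 1)
            if signal[i] > thr
            and signal[i] > signal[i - 1]
            and signal[i] >= signal[i + 1]]
```

For zero-mean signals you can drop the `mean(signal)` term, matching the "(or zero)" variant mentioned above.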
  20. Well, a quick search hasn't revealed any drivers others have written. It seems to be a USB device, so you have 2 options: 1. Write a LabVIEW USB driver from their documentation (very hard and time-consuming). 2. Use their toolkit to write a high-level LabVIEW instrument driver (I'd go for this). Either way, unless someone has already done the work or the manufacturer has supplied some, you will have to write an interface, and option 2 would be the fastest and easiest, but still time-consuming. Their toolkit, it seems, comes in two flavours: DLL-based and ActiveX. Personally I would go for DLL-based, but that is only because I hate ActiveX with a vengeance. It specifically states LabVIEW 5.1 (..gulp...) and they have examples, so you could start by hacking and modifying those. You won't be able to use LV5.1 VIs if you are using an LV version greater than 6 or 7 (I think), so let's hope they have more recent versions.
  21. Nothing clever. If your device doesn't echo, then you just pre-pend the command string. But the main problem has been solved; it's simple and took me about 10 mins (note I am using the term char to only update the display once a whole string has been received). The property node in the read loop is really a local variable; it gets converted when posting a snippet. You could put the read VI(s) straight after the write (after the event structure) and it would work fine most of the time and be fully synchronised. But since you don't want the UI to look unresponsive whilst waiting for data, asynchronous reading is generally preferred. State machines come into their own when you have multiple states that can be operated in an arbitrary order. So where you might want to use one is, for example, when a response from the device dictates the next command to send, e.g. authentication challenges. But for most implementations where user interaction is involved, an event structure is preferable, since all user interaction is handled for you.
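The term-char idea (accumulate bytes until the terminator arrives, and only then hand the whole string to the display) can be sketched generically in Python; the names are mine, and `recv_one` stands in for a 1-byte serial read:

```python
from typing import Callable

def read_until(recv_one: Callable[[], bytes],
               terminator: bytes = b"\r",
               max_len: int = 4096) -> bytes:
    """Accumulate bytes until the terminator arrives, then return the
    whole message (terminator stripped), so the caller updates its
    display exactly once per complete string."""
    buf = bytearray()
    while len(buf) < max_len:
        b = recv_one()
        if not b:
            raise ConnectionError("stream closed before terminator")
        if b == terminator:
            return bytes(buf)
        buf += b
    raise ValueError("no terminator within max_len bytes")
```

The `max_len` cap is the safety net for the case where the terminator gets dropped, which otherwise leaves this kind of loop accumulating forever.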