Everything posted by PA-Paul

  1. I'll have to confess to having not measured it. I was very pushed for time so just made sure it looked quick and smooth! I don't have access to the hardware to actually do any characterisation now either. When I next get it back I'll have a look at it. Sorry! Paul
  2. Hi All, So I got the Nagle algorithm switched off using the VIs in asbo's link above. I also got rid of a couple of short command/response exchanges which were being used to get waveforms, and finally I took out the code that chopped the data up into chunks. Everything now seems to work very smoothly, with no stalling or juddering at all. So that looks to be the best way for my application (for now at least), so thanks all for your input! Cheers Paul
  3. Thanks for all of the replies. I'm going to make a couple of tweaks to the code now and see what happens.

     @ShaunR - you're right, I am polling, and I know this will cause a reduction in the possible transfer rate. But what I was seeing (prior to disabling the Nagle algorithm) was a plenty fast enough transfer rate that would run for say 10-20 waveforms (>30 waveforms a second) and then sporadically stall to 2-3 per second (which ties in with the time delay of the delayed ACK). The situation was manageable when the two PCs were connected only with a crossover Ethernet cable, but when I then put a standard switch in between them the fast transfer pretty much died. In hindsight, properly streaming the data might help rather than polling, but I don't need "that much" speed, and I prefer the polling approach in general for this application (live updates are only part of what we need to do). Also, just for info, each point on the waveform is one U8 integer and it gets scaled at the client end.

     @asbo - for this application I have other things which I need to be sure arrive properly as well as the waveforms, so I prefer the TCP approach over UDP for that reason.

     Thanks again for all the insight, it's been very useful. I'll post back when I've finished my tweaks and let you know how I got on! Paul
  4. Phillip, I just found the "TCP NoDelay" property in the "Instr" property node, but you have to wire a VISA name into that property node. I'm using the TCP VIs and only have a TCP refnum. How do I get from one to the other?! Thanks in advance for any info! Paul
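     (For context: what any "NoDelay" setting ultimately toggles is the operating system's TCP_NODELAY socket option on the connection. A minimal sketch of the same idea in a text language, with a hypothetical server address and port, looks like this:)

     ```python
     import socket

     # Illustrative only: LabVIEW's TCP refnum wraps an OS socket like this
     # one, and "NoDelay" settings flip the same option on that socket.
     sock = socket.create_connection(("192.168.0.10", 6340))  # hypothetical server
     sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)  # disable Nagle
     ```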
  5. Thanks for the info Ben, I will endeavour to read up on it more through the Dark-Side, although right now I have bigger fish to fry on this project - some juddering on waveform transfer is better than the thing not turning on in the first place! I found this http://b.snapfizzle.com/2009/09/windows-7-nagles-algorithm-and-gaming/ following your first post and gave it a go. It disables the Nagle algorithm in Win 7, and it seems to have improved things significantly. The only minor thing to note is that it seems you need to have this set at both machines in my setup before you can get the two computers to even talk... (I found this out since I'd set it up on machines A and B and got it working, then replaced B with C before I'd modified the settings on C, and the network didn't even get going...) Since in this project the PCs are likely to be isolated from any other network environments, I'm not worried about it impacting anything else... Otherwise I might have done a bit more research before trying it! Thanks again for your help Paul
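     (For reference, the tweak that article describes adds two DWORD values, TcpAckFrequency=1 and TCPNoDelay=1, under each network interface's TCP/IP parameters key, taking effect after a reboot. A hedged sketch of automating it, applying the values to every interface, might look like this - assuming Windows and administrator rights:)

     ```python
     import winreg

     # Sketch of the Win 7 registry tweak from the linked article (an
     # assumption based on that article, not an official API):
     # TcpAckFrequency=1 disables delayed ACKs, TCPNoDelay=1 disables
     # Nagle's algorithm, set per network interface GUID.
     BASE = r"SYSTEM\CurrentControlSet\Services\Tcpip\Parameters\Interfaces"

     with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, BASE) as interfaces:
         n_subkeys = winreg.QueryInfoKey(interfaces)[0]  # one subkey per NIC
         for i in range(n_subkeys):
             guid = winreg.EnumKey(interfaces, i)
             with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, rf"{BASE}\{guid}",
                                 0, winreg.KEY_SET_VALUE) as key:
                 winreg.SetValueEx(key, "TcpAckFrequency", 0, winreg.REG_DWORD, 1)
                 winreg.SetValueEx(key, "TCPNoDelay", 0, winreg.REG_DWORD, 1)
     ```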
  6. Hi Ben, Thanks for the reply. I should first point out that this is my first foray into the world of writing LabVIEW applications which send data across a network via TCP/IP. I have used the LabVIEW STM (Simple TCP Messaging) VIs in building my application. They simply put some wrappers around the built-in LV TCP VIs to help manage the commands and data types sent over the network. Similarly, I'm not overly experienced with detailed network settings: I can set up a PC to run on a specific IP/subnet etc, and in this case have had to manually set the link speed of the NICs on the PCs (the fibre link will only work at 100Mb/s and won't allow the PCs to autonegotiate speed).

     I guess the simplest question which I should start with is: should I expect to have to manage the size (i.e. length) of the data string I write to the "TCP Write" VI? Or should I be able to expect LabVIEW/Windows to take whatever I write to that VI and send it in the most appropriate way over the network? (Basically, I want to keep my life and this code as simple as possible whilst making it work as well as it can! See the sketch below for the kind of thing I mean.)

     I looked at the Wikipedia info on Nagle's algorithm (seemed like as good a place to start as any!) and it mentions issues arising from doing lots of writes to the TCP port and then reading. Might it be sensible for me to split my "commands" and "responses" within my code, so I use one port for the "commands" from the client and a separate port for the "responses" from the server? Might that help?

     Next then: might I need to make changes to the specific PC setups for networking (I saw something on being able to disable the Nagle algorithm in Windows 7 via the registry)?

     Finally, I still don't really understand why sticking an unmanaged switch between the two PCs (the two PCs are the only devices on the switch) causes such a marked difference in behaviour... Any more thoughts? I've not used Wireshark before, what information would I be looking to get from it? Thanks again for your help! Paul
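     (One way to picture the write-size question: the stack will happily segment one large write for you, whereas many small back-to-back writes in a request/response pattern are exactly what Nagle's algorithm batches up and delays. A minimal sketch - not the STM library itself - using a hypothetical 8-byte header:)

     ```python
     import socket
     import struct

     # Sketch: build the whole message (header + payload) and hand it to the
     # stack in ONE write; TCP segments it as needed. Two separate small
     # writes (header, then data) is the pattern that interacts badly with
     # Nagle's algorithm plus delayed ACKs on the receiving side.
     def send_message(sock: socket.socket, msg_id: int, payload: bytes) -> None:
         header = struct.pack(">II", msg_id, len(payload))  # hypothetical framing
         sock.sendall(header + payload)
     ```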
  7. Hi All, I thought I'd posted on this before, but can't find anything so I may have imagined it. I have a client-server application whereby the remote server acquires waveforms (typically 1000-16000 data points long) and fires them up on demand to the client. The waveforms are actually sent as a "header" cluster and then a 1D array of U8 integers (i.e. the waveform values). The actual data acquisition takes approximately 4ms (which involves getting the data off a digitiser in the server computer), but the time taken to send the waveform "up the line" seems to vary hugely.

     In some circumstances I want to stream these waveforms up from the server to the client. I do this by putting the client "request waveform" VI into a loop and allowing it to run as quickly as it likes (although I have tried limiting the max rate with a Wait Until Next ms Multiple inside the loop). I very seldom get a good constant rate of waveform transfer between the two PCs. The waveforms seem to stutter on the client PC, running smoothly for 10-100 waveforms or so at a good 50 per second, then stuttering down to 1 per second at worst.

     The PCs are networked either by a crossover network cable (with both PCs' Ethernet adapters set to 100Mb/s, full duplex), or via a fibre-optic to Ethernet adapter (again limited to 100Mb/s). In this setup the performance is at least bearable, but if I then try to run the setup with a network switch in between (Netgear FS105 for info) the situation worsens to where I only get 1 or 2 waveforms a second. I tried chopping the waveforms up into smaller packets, sending them up and then rebuilding the waveform. This did have a positive effect (and in fact is how I get acceptable performance without the switch in place).

     Is there any way of optimising the process of sending data over a TCP connection? Any ideas on this would be gratefully received! Thanks Paul
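     (For anyone sketching the same header-plus-U8-array protocol in a text language: the receiving side has to loop until the full message has arrived, because a single TCP read can return fewer bytes than requested. A hedged sketch, reusing the hypothetical 8-byte framing from above:)

     ```python
     import socket
     import struct

     def recv_exact(sock: socket.socket, n: int) -> bytes:
         """Read exactly n bytes, looping until the stream delivers them all."""
         buf = b""
         while len(buf) < n:
             chunk = sock.recv(n - len(buf))
             if not chunk:
                 raise ConnectionError("peer closed the connection")
             buf += chunk
         return buf

     def recv_waveform(sock: socket.socket) -> bytes:
         # Hypothetical framing: message id + payload length, then the
         # payload (the 1D array of U8 values, scaled at the client end).
         msg_id, length = struct.unpack(">II", recv_exact(sock, 8))
         return recv_exact(sock, length)
     ```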
  8. Hi All, This post is largely to highlight the post I made on the remote forum (http://lavag.org/topic/14845-lv-app-crashing-on-close-of-tcp-connection/page__view__findpost__p__89021). But I'm not sure the issue is only related to the fact that the app in question is acting as a server in a TCP-based setup. The application does seem to work OK in the IDE (no error messages); however, I have seen some random complete LabVIEW crashes (i.e. LabVIEW spontaneously disappears from the screen with no error messages or anything!). Has anyone seen anything like this and got any thoughts? I'm clutching at straws at the moment and really need some ideas! (Feel free to ask questions too!) Thanks! Paul
  9. This issue is really bugging me - I tried building the server application in LV 2011 and ran that, but still got random application crashes with lvrt.dll listed as the faulting module. Anyone seen any application crashes like this? Thanks!
  10. Hi all, I have a system which comprises essentially two PCs. One (the server) runs a LabVIEW app which controls a set of hardware, DAQ etc. The other communicates with the first over a TCP/IP connection (using the LabVIEW STM package VIs from NI) to control it and then process the data.

     With both PCs running from code (i.e. within LabVIEW) I have few problems; for the most part it works very well. When I run the server side from a built exe, though, I have issues. The first run of the application on the server runs fine: I can connect to it from the client PC and I can control it and get data, all with no issues. However, if I disconnect the client (either by hitting the big red abort button in the IDE or through a proper closedown and release of the TCP/IP connection), the server registers the disconnect fine, but if I then try to reconnect, I get a Windows error (see errors 1 and 2 in the attachments) saying that lvrt.dll has crashed. It doesn't "always" happen, but I can't find anything in particular that causes it. I've double-checked the error handling that results from a closedown of the client, but can see nothing wrong when running from the source.

     The whole application was written in LV 8.6.1 and it doesn't seem to matter if the server is running on XP or Windows 7. Anyone got any ideas? I'm stuck here and have a deadline approaching! Thanks in advance!
  11. (For info, this is a kind of cross post with NI.com here: http://forums.ni.com/t5/LabVIEW/Is-it-possible-to-access-motherboard-s-built-in-temperature/m-p/1662280/highlight/false#M594131) Thanks for the info. Unfortunately, CoreTemp only appears to return the CPU core temperatures. I'm really after the ambient board temperature (which I can get from SpeedFan, for example, but can't see how to get into LabVIEW). Any other thoughts? Thanks Paul
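     (One generic avenue worth a hedged mention: Windows exposes ACPI thermal zones through WMI, which LabVIEW can reach via its .NET/ActiveX support. Not every motherboard publishes a useful zone, so treat this as an assumption to verify rather than a guaranteed route. In Python the same query looks like:)

     ```python
     import wmi  # third-party package ("pip install wmi"); Windows-only

     # Query the ACPI thermal zone class. CurrentTemperature is reported in
     # tenths of a kelvin; whether a zone reflects true ambient board
     # temperature depends entirely on the motherboard/BIOS.
     w = wmi.WMI(namespace="root\\wmi")
     for zone in w.MSAcpi_ThermalZoneTemperature():
         celsius = zone.CurrentTemperature / 10.0 - 273.15
         print(zone.InstanceName, round(celsius, 1), "degC")
     ```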
  12. Hi All, Does anyone know if it's possible to access motherboard temperature sensors using LabVIEW, and if so, how?! Thanks! Paul
  13. Hi All, I have a temperature measurement problem that I'm trying to solve and wondered if anyone has any suggestions... We're building a "system" comprising a PC motherboard and a few other bits and bobs in a closed box. One of the "bits" is an NI PCIe-6320 multifunction DAQ card. We want to monitor the ambient temperature within our box and check for overheating. One thought I had was to use one of the unused analogue inputs and throw a thermocouple at it to monitor the temperature. However, it seems the 6320 does not support cold junction compensation, and since the card is going to sit inside the very environment I'm trying to measure, I don't think I can do that. So I was wondering if I can use a thermistor instead, using one of the 5V sources on the card to provide the excitation voltage... has anyone had any experience doing that? If so, any tips?! Any other ideas also welcome! Thanks! Paul
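     (The thermistor route is straightforward arithmetic once the divider is wired: excite a series resistor plus thermistor from the 5V source, read the midpoint with an analogue input, and convert resistance to temperature with the beta-parameter model. A sketch under assumed component values - a typical 10k NTC part; substitute your datasheet's numbers:)

     ```python
     import math

     V_EXC = 5.0          # excitation from the DAQ card's 5 V source
     R_FIXED = 10_000.0   # assumed series resistor (ohms); thermistor on the low side
     BETA = 3950.0        # assumed thermistor beta constant (kelvin)
     R0, T0 = 10_000.0, 298.15  # assumed: 10 kOhm at 25 degC

     def thermistor_temp_c(v_measured: float) -> float:
         # Divider: v_measured = V_EXC * R_therm / (R_FIXED + R_therm)
         r_therm = R_FIXED * v_measured / (V_EXC - v_measured)
         # Beta model: 1/T = 1/T0 + ln(R/R0) / BETA
         inv_t = 1.0 / T0 + math.log(r_therm / R0) / BETA
         return 1.0 / inv_t - 273.15

     print(thermistor_temp_c(2.5))  # midpoint voltage -> 25.0 degC for these values
     ```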
  14. I'm going to confess I've never really got my head around using MAX to set up virtual channels and the like. I create tasks in LV and then set up physical channels etc in the code when I need them (hence needing the device name etc programmatically). Should I be doing something different? crelf: can you expand a little on what you mean? Thanks for the help guys! Paul
  15. Hi Guys, I've had this "problem" several times and just wondered if anyone here can give me any new ideas. I'm writing an application that uses an NI PCIe-based DAQ card (6320 if you're interested). Whilst I'm working on the application, the DAQ card is in my computer and therefore I know what its "device name" is, so I can easily configure everything to work with it and use the right channel names etc. However, if I then want to deploy my application to someone else along with a DAQ card, I have no way of knowing what the device name will be in their system. If I'm deploying this to multiple users with multiple systems each, I can't tell whether they may already have another DAQ device in their system etc, and there's a risk things could get messy... So how do I get my software to talk to the right DAQ card?

     The way I've dealt with this in the past: I've supplied the DAQ card and software, and I use an ini file for the software which contains the serial number of the DAQ card. The software then finds the device name corresponding to that serial number and populates a cluster I can then read out from wherever I need to (see the sketch below for the idea). This works, but assumes I know the serial number of the device. I can't guarantee that the "customer" will have LabVIEW or NI MAX, so I also can't assume that they can access the serial number easily to enter it into a control...

     Is there a better way to do this? Or a more general way even... If you rename a device in NI MAX, it's only a local change (I think?!), which is a bit of a pain as you can't then give your device a specific name on the system. Any thoughts gratefully accepted! Paul
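     (The serial-number lookup itself is a few property reads - in LabVIEW, the DAQmx System and Device property nodes. The same walk in DAQmx's Python API, with a hypothetical serial number, looks like this:)

     ```python
     import nidaqmx.system  # "pip install nidaqmx"; requires the NI-DAQmx driver

     TARGET_SERIAL = 0x1A2B3C  # hypothetical value read from the ini file

     def find_device_name(serial: int) -> str:
         # Walk every DAQmx device on the local system, matching on serial number.
         for dev in nidaqmx.system.System.local().devices:
             if dev.serial_num == serial:
                 return dev.name  # e.g. "Dev1", usable in channel strings
         raise LookupError("no DAQ device with that serial number was found")

     print(find_device_name(TARGET_SERIAL))
     ```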
  16. John Lokanis seems to have something built into the LabVIEW app-building interface, is that a feature of an LV release after 8.6?
  17. Ton - is there a tool like that available for "general use" (i.e. where can I get it?!) Thanks
  18. John - I don't have a pre/post build action option in my app builder, is that a new thing? (I'm on LV 8.6) Ton - Is there any way to make the lvlib's version auto-increment? My llb file already contains the lvlib associated with the plugin VI, so I could potentially use that.... Thanks for your help. Paul
  19. Hi All, We have an application which uses a plugin architecture. The plugins are written in LabVIEW and distributed to the end users as a .llb file, with the top-level VI in the llb being the main VI that is called as the plugin. At the moment I use the source distribution build specification in the LabVIEW project (LV 8.6) to create the plugins. Unlike executable and installer builds, this does not allow me to set any kind of version number. Is it actually possible to version-number an llb file? Can anyone suggest a sensible way of implementing a (preferably automated!) version numbering method for these files so I can keep track of which version of the plugin is being used? (A sketch of one possible scheme follows below.) Thanks in advance for your help. Paul
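     (One hedged scheme, since an llb has no native version field: keep the version in a small text file distributed inside the llb next to the top-level VI, and bump the build number with a script run before each source distribution build. A sketch, with a hypothetical file name:)

     ```python
     import configparser
     import pathlib

     VERSION_FILE = pathlib.Path("plugin_version.ini")  # hypothetical file built into the llb

     def bump_build_number() -> str:
         """Increment the build field of a major.minor.build version string."""
         cfg = configparser.ConfigParser()
         cfg.read(VERSION_FILE)
         if "plugin" not in cfg:
             cfg["plugin"] = {"version": "1.0.0"}
         major, minor, build = (int(x) for x in cfg["plugin"]["version"].split("."))
         cfg["plugin"]["version"] = f"{major}.{minor}.{build + 1}"
         with VERSION_FILE.open("w") as fh:
             cfg.write(fh)
         return cfg["plugin"]["version"]

     print(bump_build_number())  # e.g. "1.0.1" on the first run after 1.0.0
     ```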
  20. Hi Ben, Thanks for the reply, can you clarify what you mean by running the laser code as an "active object"? Thanks
  21. Hi all, I have an irritating VISA timeout error which I can't seem to track down. Any help would be most gratefully received. I'm controlling a tuneable laser over an RS232 link. The laser in question accepts byte-code instructions, not all of which lead to responses from the laser. I have written a VI which communicates with the laser and can do the 4 main things I need: turn the laser on and off, set the laser wavelength, check the laser wavelength and set the laser to sweep through its wavelength range. As a standalone VI I have no problems running this at all. It seems completely stable (as far as anything can be!).

     In my main application I call the laser control VI by reference (since I have a couple of different lasers I want to use with the same main application). The main app commands the laser to sweep through its wavelengths and then tune to a specific wavelength. It then monitors a couple of external parameters via a USB DAQ card and from time to time will send the laser to a new wavelength depending on the results of the monitoring. And it's here I seem to have an issue: the laser initialises fine and I can make it sweep through its wavelengths fine, but when the laser has to tune to a new wavelength, it tunes once or twice and then gives me a VISA timeout error on a VISA Read (which is part of the "set wavelength" command). Within the main application I also have the ability to manually force the laser to tune (using the exact same set wavelength code within the app) and this seems to work reliably.

     I've tried running the laser control VI in a loop as fast as possible to see if I can make it fail, but I can't. The only way I've been able to make it fall over myself was writing a simple program that called the laser control VI by reference and then commanded wavelength changes in a loop (alternating between two wavelengths). This setup was actually stable until I probed the error wires into and out of the set wavelength VI... at which point I noticed that the comms with the laser were suffering if I moved things around on the screen (I have a TX and RX LED on the laser itself so can see if things are being sent/received). Without the probes there was no problem: the LEDs were flashing at a nice constant rate. With the probes in place the flashing was much more stuttery, especially if I moved the probes on screen, and adding the probes also seemed to induce the timeout error...

     If anyone has seen anything like this before or can think of other things I can try, I'd be most appreciative! Thanks all Paul
  22. I installed VisualSVN server on our windows small business server here and it works a treat. Extremely easy to set up and allows off site access as well. We use the free version as the extras in the corporate version look to be for companies a bit bigger than us (we only have 2 of us really developing code). Anyway, it works great! Paul
  23. Thanks for the replies. I ended up installing LV 8.6 under win7 and it seems ok - I had to do the install manually from the distributions folders on the disk, but other than that it went fine. So all is now working as it should be, Cheers Paul
  24. Hi All, I recently had to upgrade my laptop PC to one running Windows 7 (Pro 64-bit). For work I still have to use LabVIEW 8.6, which doesn't support Windows 7, so, since I'm running Win 7 Pro, I opted to install LabVIEW 8.6 on Windows XP in the Virtual PC mode. I can connect USB devices to the virtual machine no problem, but with the USB DAQ card I'm trying to use (NI USB-6211), the device is detected when it's plugged in for the first time and the driver starts to install, but then it says there was an issue and stops... The device doesn't show up in NI MAX, and in the device manager there's a yellow ? next to it with the status listed as "This device cannot start". Anyone come across this, or a way around it (short of sticking to LV 2010 on Win 7)? Thanks Paul
  25. Thanks! Got that working great. Not into classes I'm afraid. I feel I ought to be, but just don't have time to go down that route (I'm a physicist by trade and training, so I do my best to write good "g" but I've never got into oop (well - "oops" I can do!)) Thanks again. Paul