
Neville D

Everything posted by Neville D

  1. QUOTE(Gabi1 @ Jan 25 2008, 11:27 AM) ??? Why not? Post your code. Neville.
  2. QUOTE(Tim_S @ Jan 25 2008, 03:57 AM) See this link: http://forums.lavag.org/New-Freeware-from-Moore-Good-Ideas-t8756.html — it's supposed to be 500x faster. I would have loved to try it out, but I have moved away from INI files and only store config data in an XML format. Neville.
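For reference, the INI-vs-XML distinction mentioned above can be sketched outside LabVIEW with Python's standard library; the section and key names here are hypothetical:

```python
import configparser
import xml.etree.ElementTree as ET

# The same settings expressed once as INI text and once as XML.
ini_text = "[camera]\nexposure_ms = 20\ngain = 1.5\n"
xml_text = "<config><camera exposure_ms='20' gain='1.5'/></config>"

# INI: flat sections of key/value pairs.
ini = configparser.ConfigParser()
ini.read_string(ini_text)
print(ini["camera"]["exposure_ms"])   # -> 20

# XML: hierarchical, so nested config needs no key-name mangling.
root = ET.fromstring(xml_text)
cam = root.find("camera")
print(cam.get("gain"))                # -> 1.5
```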
  3. QUOTE(yiweihua2002 @ Jan 22 2008, 05:20 PM) I have heard that the reason the ini VI's are slow is that the native LV VI's themselves are a bit ancient and clumsy. You can try these VI's from David Moore (http://www.mooregoodideas.com/goodLabViewStuff.htm) that are supposed to be much faster, since they replace the native VI's. I haven't used them before. Neville.
  4. QUOTE(BrokenArrow @ Jan 23 2008, 09:48 AM) What MUX are you using? Maybe it is slow in parsing all the commands that NI switch is sending it. Maybe you need to figure out lower level commands to the MUX to make it switch in the pattern you want, but faster. For example if Relay "a" needs to be closed for operation 1 and 2, then don't send that command twice or something like that. Or there might be a mode where you can pre-program the device to "remember" the sequence of relays for each command you send it. Then you just activate that mode instead of sending it a high-level command. In the past, I have worked with relay devices in VXI and yes, when you need to switch 6-8 relays to connect a single channel, then it does take a significant amount of time to send the commands (using GPIB-VXI module in those days) and for the relays to actually switch themselves. I had to program at the register level for some of the modules to make it switch faster. Neville.
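The "don't send the same relay command twice" idea above is just state caching. A minimal sketch of the pattern in Python; the `send` callback standing in for the real MUX interface is hypothetical:

```python
def make_relay_driver(send):
    """Wrap a raw send(channel, closed) function so commands are only
    issued when a relay's commanded state actually changes."""
    state = {}  # channel -> last commanded state

    def set_relay(channel, closed):
        if state.get(channel) == closed:
            return False          # already in that state; skip the bus traffic
        send(channel, closed)     # only talk to the MUX on a real change
        state[channel] = closed
        return True

    return set_relay

# Usage: count how many commands actually hit the (fake) bus.
sent = []
set_relay = make_relay_driver(lambda ch, st: sent.append((ch, st)))
set_relay("a", True)    # operation 1 needs relay "a" closed -> sent
set_relay("a", True)    # operation 2 needs it too -> skipped
set_relay("a", False)   # real state change -> sent
print(len(sent))        # -> 2
```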
  5. QUOTE(crelf @ Dec 7 2007, 12:12 PM) Sorry, I saw this too late.. just returned from Sydney yesterday.. Maybe see you at NI-WEEK 08 (corporate budgets permitting..) Neville.
  6. QUOTE(Aristos Queue @ Dec 21 2007, 09:50 AM) Try looking at Vision examples. Most of them still use stacked sequence structures. But other than that, the examples are reasonably well-written. Neville (just back from vacation)
  7. Hey everyone, last post from me in a while..!! Off to sunny New Zealand for a month and then 3 days in Sydney after that!!! Talk to you all in January (and for all our colleagues in the Northern hemisphere.. don't sprain your backs shoveling snow!) :beer: :beer: Neville.
  8. QUOTE(Neville D @ Jan 10 2006, 02:30 PM) Better late than never..!! Here is a picture of a VI I use in the RT code. If an error is generated in the RT controller, this VI is run and it automatically self-reboots the controller. http://lavag.org/old_files/monthly_12_2007/post-2680-1197050147.jpg Neville.
  9. QUOTE(rolfk @ Nov 27 2007, 12:51 PM) You're right.. I forgot to mention, I built my own tool that does that as well as reboots the targets, reads remote ini settings, opens remote front panels, FTPs remote files to and from the targets automatically, etc. I just call the NI utility (suitably modified) into a sub-panel of my main utility and go from there. One downside of the NI library is that many of the VI's are locked, so minor adjustments in some cases are not possible. Still, it's pretty usable. Neville.
  10. QUOTE(CraigGraham @ Nov 27 2007, 01:20 AM) Uhh.. I think that's a bit of an over-reaction. Like I said before, it's trivial to replace an exe (of the same name) with a new one on any of NI's real-time platforms. You don't need a full-blown development environment to download the code. Do it manually through IE, use the FTP functions in NI-MAX, or else use the Internet Toolkit VI's to build your own application that (1) renames the original app (so it can be deleted), (2) deletes the remote app, (3) downloads the new app, and (4) reboots the cFP so that the new code starts running. QUOTE(CraigGraham @ Nov 27 2007, 01:20 AM) Re using IE to go in and delete the existing exe- that'll just stop it doing anything from power on until the App Builder exe shoves the transient runtime code down. Well, I'm not sure what you're up to, but how else do you expect to replace a running exe?? You will have to stop it momentarily to replace it and then reboot to run the new app. Again, if you replace the file with IE you don't need the app builder etc. Just send them the new startup.exe file and ask them to replace it in the ni-rt/startup folder by dragging & dropping, and that's it. Neville.
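The four steps above map directly onto plain FTP operations. A hedged Python sketch using `ftplib`-style calls (the `_CallRecorder` stand-in, target address, and paths are hypothetical; the final reboot is done separately, e.g. from MAX):

```python
import io

class _CallRecorder:
    """Stand-in for ftplib.FTP used to demonstrate the call sequence
    without touching a real target."""
    def __init__(self):
        self.calls = []
    def cwd(self, d):
        self.calls.append(("cwd", d))
    def rename(self, old, new):
        self.calls.append(("rename", old, new))
    def delete(self, name):
        self.calls.append(("delete", name))
    def storbinary(self, cmd, fobj):
        self.calls.append(("stor", cmd))

def deploy_startup(ftp, new_exe, remote_dir="/ni-rt/startup"):
    """Replace the startup exe on an RT target over FTP.
    `ftp` is an ftplib.FTP-style object; `new_exe` is an open binary
    file. Rebooting the target (step 4) happens outside this function."""
    ftp.cwd(remote_dir)
    ftp.rename("startup.exe", "startup.exe.old")   # 1: rename the original
    ftp.delete("startup.exe.old")                  # 2: delete the old app
    ftp.storbinary("STOR startup.exe", new_exe)    # 3: upload the new app

ftp = _CallRecorder()   # against a live target: ftp = ftplib.FTP("192.168.1.10")
deploy_startup(ftp, io.BytesIO(b"new exe bytes"))
print([c[0] for c in ftp.calls])   # -> ['cwd', 'rename', 'delete', 'stor']
```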
  11. QUOTE(CraigGraham @ Nov 26 2007, 05:58 AM) I'm not sure of a way to do it programmatically while loading an exe, but you could just get them to use IE to ftp into the cFP and rename the old exe using windows and delete it, then do the install again. Neville.
  13. QUOTE(Phantom Lord @ Nov 20 2007, 04:35 AM) I haven't looked at your code, but in theory, you could replace while loops with timed loops and use "Abort timed loop" to step out of it. Neville.
  14. QUOTE(Jeffrey Habets @ Nov 19 2007, 03:42 PM) From my local NI rep, they adjust prices every quarter to reflect the exchange rate. Also, there is a built-in adjustment (increase) to European prices to reflect the higher cost of doing business there. Neville.
  15. QUOTE(ramleo_cathy2000 @ Nov 15 2007, 09:40 PM) I don't think that's possible due to limitations of having only one FIFO, scan-clock etc. Neville. PS: Why are you posting the same question twice?
  16. QUOTE(Usman Shafiq @ Nov 15 2007, 01:35 AM) If you have a query, why don't you post it? If it's not related to this thread, then start a new thread. N.
  17. QUOTE(kobika @ Nov 15 2007, 08:10 AM) I am not sure what you are doing. How do you have NI Vision and Vision Runtime on the SAME computer?? The PC that you need to run the exe on should only have the vision runtime activated. Have you installed the whole vision package on it?? Also, what do you mean "doesn't run"? What happens? What error messages? Please be clear in your questions and what you have tried (see http://www.catb.org/%7Eesr/faqs/smart-questions.html), else no one will respond to your queries if they have no information to work with. Are you using any IMAQ Camera-related VI's? That may require a separate licence (I am not sure about this). But maybe you should get NI involved if some of the VI's are generating errors and not others. Neville.
  18. Try a manual reboot of the PXI system. Are you sure you are not in safe mode? (there is a dip switch on the controller to set it in safe mode, if you haven't changed it, then it probably is NOT in safe mode). Connect a monitor to the PXI controller and see what error messages if any are displayed on it. Try removing the other modules (except controller) from the chassis and seeing if error messages disappear or behaviour changes. If still nothing then follow instructions to re-install the software by putting in safe mode. These should be in the manual for your controller. Neville.
  19. QUOTE(ned @ Nov 12 2007, 05:16 AM) Hi Ned, Our system consists of a VME computer (non-LV) that transmits line scan data to a windows PC running LabVIEW and Vision. The image data is about 500k, and once processed by the LV PC, the returned results are about 1KB or so. We tried UDP, and it definitely is a LOT faster.. a few ms to transfer the data. But on the windows (receive) side, we seem to be missing packets in spite of having a parallel loop that simply reads the data and buffers to an Img buffer. To the others that have replied with helpful comments: Ben and JFM, I am in the process of experimenting with Nagle, and it might help on the transmit side (windows->VME) where the data is quite small and it takes about 10ms just to open a socket. Disabling Nagle on the VME didn't make any difference. I will try experimenting with different packet sizes on the VME side. LVPunk, I have also tried increasing UDP packet size to 4k and it seems to work OK. I have just got a quad-core machine and have upgraded to Gig-E. Preliminary tests are encouraging. The Gig-E with even a dual core allows us to use UDP without missing packets. Also, going from 100Mbps Ethernet to Gig-E seems to offer roughly twice the performance. Will post more info for reference, later on. Many thanks to all of you! Neville.
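For reference, the two socket-level knobs discussed in this thread (disabling Nagle on the TCP sender, and enlarging the UDP receive buffer so a burst of image packets isn't dropped before the reader loop drains them) look like this at the BSD-socket level; a Python sketch with illustrative sizes:

```python
import socket

# TCP side: disable Nagle so small command/result packets go out
# immediately instead of being coalesced while the stack waits for more.
tcp = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
tcp.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
nodelay = tcp.getsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY)

# UDP side: ask the OS for a bigger receive buffer so ~500k bursts of
# image data can queue in the kernel while the application catches up.
udp = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
udp.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 4 * 1024 * 1024)
rcvbuf = udp.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF)

print(nodelay != 0)   # -> True
tcp.close()
udp.close()
```

Note the OS may clamp the requested buffer size (e.g. `net.core.rmem_max` on Linux), so the value read back is whatever the kernel actually granted.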
  20. QUOTE(kobika @ Nov 13 2007, 06:32 AM) You need an additional run-time licence for Vision applications. Once you buy that, you will get a serial number that you enter to activate your vision runtime licence on the PC where you are installing the executable with vision code. Also, even if you don't have the licence yet, you can still run your application for 30 days, but each time you start the application, the licence manager will ask if you want to run in demo mode or activate the licence. This window usually gets hidden behind your app window. Click on 30-day trial (demo mode) and your app should run fine if everything else is OK. Neville.
  21. I have a Windows XP PC with dual Gigabit Ethernet network cards in it. I would like to use one of them for reading TCP data from a remote (non-windows) computer, and one of them to write TCP data to the same remote machine. Any pointers or caveats? Would there be a substantial performance gain in separating out the read and write tasks to different network cards? Any other pointers on speeding up TCP reads? It currently takes about 50-100ms to read about 500k of data over Gig-E. thanks, Neville.
  22. I skimmed through your code a bit. I notice you are reading TC modules as well. As far as I remember they are pretty slow to update. Do you think that might possibly be affecting your update rate? The relay modules (if they are mechanical) are quite slow as well. Another thing: do you really need data at 500Hz from so many channels? You could either read it slower, or read & react to the faster data rates but transmit data up at a slower rate. Maybe average the results every 10 reads and transmit that. Your 3 loops are running in parallel quite fast without a breather for any of them, so they may start to interfere with each other. Have you tried getting rid of one of the loops to see if that speeds things up? Maybe you could interleave the TCP and DAQ loops, i.e. read DAQ, transmit out, then read DAQ again (to prevent parallel loops from starving each other of time). Like I said before, the FieldPoint isn't the most powerful platform. Also, DAQ on it is implemented differently from other platforms. DAQ involves the processor actually reading bits of info from the various modules, which is why, if you have a lot of modules, lots of communication and high data rates, it may not be able to keep up. I would try 1 module at the full 1.7kHz w/TCP comm and see if you get that working. Then keep adding channels until it isn't able to keep up. Sorry, I haven't worked with FieldPoint in a few years, but from what I remember it worked very well for low update rates. I never tried updates faster than about 10Hz, and viewed the results over a web browser to the front panel. We built a hydrogen fueling station, and for fault tolerance, we hooked up the User Interface directly to the FP hardware via relay modules, and the software reacted to those button presses rather than comm over TCP to a host PC. The host was simply a way to monitor the action. Neville
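The "average the results every 10 reads and transmit that" idea above is just block decimation; a minimal Python sketch:

```python
def block_average(samples, n):
    """Average consecutive groups of n samples (dropping any incomplete
    tail), so e.g. 500 Hz data can be sent upstream at 50 Hz."""
    out = []
    for i in range(0, len(samples) - n + 1, n):
        block = samples[i:i + n]
        out.append(sum(block) / n)
    return out

print(block_average([1, 2, 3, 4, 5, 6], 3))  # -> [2.0, 5.0]
```

This both cuts the TCP traffic by a factor of n and smooths out sensor noise, while the fast loop can still react to every raw sample locally.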
  23. I haven't looked at your VI's, but the sample rates for FieldPoint are quite slow. Are you sure 500Hz is possible for the hardware? If it is, then you might try reading more samples and sending those in one shot instead of point by point, i.e. read 100 samples and beam them across via TCP. Note that in terms of performance of RT targets, FieldPoint targets have the worst performance. Neville.
  24. QUOTE(sachsm @ Nov 5 2007, 02:37 PM) Viewing the IMAQ image display remotely has always been immensely problematic.. it seems with every upgrade of Vision, the remote display either works or, if it had been working previously, stops working. What version of IMAQ/Vision are you using? I think with LV 8.2.1 and Vision 8.5 the display didn't seem to work, but with Vision 8.5 and LV 8.5 it works fine. I use it without bothering to check the value-changed stuff. It seems to work without causing any serious performance issues. I did notice that if the images update rapidly, the remote display might miss updates (since this might be handled by the lower-priority Web Server task on the remote machine). Prior to that, with older versions of Vision the bug re-appeared (LV 8.2.1 + Vision 8.2.1 -> bug). As far as I can remember, with Vision 7.1 and LV 7.1.1 it worked.. but I could be wrong. If you really want bullet-proof image display, you will have to compress the image using the JPEG Encode and Decode VI's (found on NI's website), transmit it using either your own TCP/IP routines or a shared variable, and display on the Host (Client) side using additional code. Note that the JPEG Decode VI has a memory leak bug, but that is a story for another day. Neville.
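The compress-then-transmit approach above can be sketched as length-prefixed framing over TCP. In this Python sketch, zlib stands in for the JPEG Encode/Decode VI's (which aren't available outside LabVIEW); the framing is the part that matters, since the receiver must know exactly how many bytes to read from the stream:

```python
import struct
import zlib

def pack_image(raw):
    """Compress an image buffer and prepend a 4-byte big-endian length,
    so the receiver knows how many bytes belong to this frame."""
    payload = zlib.compress(raw)
    return struct.pack(">I", len(payload)) + payload

def unpack_image(data):
    """Inverse of pack_image: read the length header, then decompress."""
    (length,) = struct.unpack(">I", data[:4])
    return zlib.decompress(data[4:4 + length])

frame = pack_image(b"\x00" * 500_000)   # a flat 500k 'image' compresses well
print(len(frame) < 5_000)               # -> True
print(unpack_image(frame) == b"\x00" * 500_000)   # -> True
```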
  25. QUOTE(sachsm @ Nov 5 2007, 12:57 PM) I don't think value properties or local variables (as relating to updating the front panel) work with RT. In general I try to avoid property nodes when working with RT. I usually just keep updating the indicators even if there is no client connection; when there is one, everything updates normally. Are you using any exotic indicators (IMAQ image display)? Neville.
