
Tom Limerkens

Members
  • Posts

    24
  • Joined

  • Last visited

Profile Information

  • Gender
    Male
  • Location
    Hasselt, Belgium

LabVIEW Information

  • Version
    LabVIEW 2015
  • Since
    1997

  1. Hi, We've had the same problems with the cRIO platform when NI switched the CPU from Intel to PowerPC and the RT OS from ETS to VxWorks. Apparently, when you want to access a TCP packet received by the cRIO two separate times, it takes about 200 ms to access it the second time, depending on the CPU type and load. Our typical RT architecture was something like this before (UDP):
     Sender side:
     - Prepare the data to send in a cluster
     - Flatten the cluster to a string
     - Get the string length
     - Typecast the string length to 4 bytes of text
     - Prepend the string length before the data
     Receiver side:
     - Wait for UDP data, length 4 bytes (the string length)
     - Read the 4 bytes, format them to an I32 string length
     - Read the remaining string length
     - Unflatten the data string to a cluster and process it
     While this worked flawlessly on Windows, and in RT on FieldPoint, Compact FieldPoint and the first series of cRIOs (the Intel line), the code suddenly broke when NI switched its OS to VxWorks on the newer cRIO targets. Perfectly working code suddenly had to be rewritten, just because a newer type of cRIO was selected. But anyhow, we found a workaround which proved to work. General description:
     Sender side:
     - Only use TCP; UDP proved to be too unreliable on cRIO
     - Prepend the string length before the flattened string, and send it with a single 'send' function
     - If the total string size exceeds 512 bytes, split the string into 512-byte packets and send them with separate 'send' functions
     Receiver side:
     - Only access every received package once, to keep cycle times fast
     - Read TCP bytes with the 'immediate' option, then check the received package size against the expected package size (the first 4 bytes)
     - If more bytes are expected than were received, read the remaining bytes with the 'standard' option
     For us this proved to be a working solution, one which does not take 200 ms to read a package from the TCP/UDP stream.
     But it affects the real-time behaviour of the OS: TCP gets more priority in the OS scheduler than UDP did, causing more jitter on loop times. I'm still trying to figure out how I have to explain this kind of thing to customers who choose cRIO as their product/industrial platform. Try to explain that the manufacturer changes the TCP stack behaviour between two different cRIO versions. That the underlying OS has changed is not of their interest. It proves that NI has no clue how automation is done in a mature industrial market, as opposed to a research environment or prototyping work. But let's not turn this into a rant on NI ;-) I'm sorry I can't send you any code, but that is under company copyright. If you have any questions, let me know, I'll try to help. Tom
  2. Bujjin, I never worked with the protocol, but the following sites have some interesting documents and whitepapers: http://www.dnp.org http://en.wikipedia.org/wiki/DNP3 Tom
  3. Paul, I would not rely solely on the backup capabilities of your server. We have a similar setup to yours, and our backup strategy for the SVN server is as follows:
     1. Every project has its own repository on the VisualSVN server
     2. Every night, an automatic script runs the hotcopy command on every single repository, and then zips the hotcopy
     3. The zipped hotcopy is stored on the backup tape
     This way we have a stable backup of each repository, and can restore it to any SVN server without a problem. We are now looking at an additional offsite backup with SyncSVN, but have not found a reliable solution yet. General tip: backup is there to recover from disaster and hardware failure. Build it into your procedures, test those procedures, and validate them at regular intervals; hardware tends to become obsolete (tapes, drives, ...). Kind Regards, Tom ESI-CIT Group
  4. I would be very careful about using a VGA extension cable; like you mention, not all pins are connected, and we have had some issues using VGA cables with non-VGA devices (SICK barcode scanners). There are companies that offer straight-through DB15HD cables; I would go for such an option, or make one myself. Success, Tom Edit: Typo
  5. Just my 2 cents, if it needs to be affordable... On the cRIO backplane side, you could go for an NI 9977 module (about 30 USD/piece) and alter it so you can use it as a 'backplane connector'. On the module side, couldn't you replace the fixing screws with some type of standard 'screw nuts' like those used on DB9 extension cables? The protocol over the DB15HD connector lines is SPI, but I have no idea about the speed; if you don't extend it too far, I wouldn't expect a real problem... Tom
  6. I definitely qualify as a lurker, I guess, having been a member for a couple of years. I mostly check out the advanced forums to see what can be done with LabVIEW, and use the search when I run into problems myself, leaving the posting to my Dutch colleague Rolf. Great to see some of them presenting live here at NI-Week.
  7. Assuming you work in Windows and want to map/unmap network drives dynamically: it can be done quite easily through the command prompt using the 'net use' commands. Type 'net use /?' for details. Here are some examples:
     To connect: net use T: \\companyserver\measurementdatashare mypassword /USER:mydomain\myaccount
     To disconnect the drive mapped above: net use T: /DELETE
     Success, Tom
  8. Hi Shane, if I were you, I'd check out the following articles on raw USB communication using VISA. They cover the very basics of how to communicate with a USB device, and also the different ways to communicate: interrupt, bulk, etc... http://zone.ni.com/devzone/cda/tut/p/id/4478 http://digital.ni.com/public.nsf/allkb/E3A...6256DB7005C65C9 http://digital.ni.com/public.nsf/allkb/1AD...6256ED20080AA3C http://zone.ni.com/devzone/cda/epd/p/id/3622 If you have control over, or knowledge of, the USB device's code, that will make it easier. Hope this helps? Tom
  9. David, if it is an option, you can try setting a fixed IP on your Windows PC. That disables DHCP. Tom
  10. When building an application, remember to enable 'Pass all command line arguments to application' in the 'Application Settings' tab of the application builder. Otherwise no arguments will be accessible in the EXE. Tom
  11. Hi Matt, now that is an interesting finding. Were there special strings, patterns or characters in the descriptors? If you can share them with us, maybe we can avoid such strings in the future, or at least know when the problems arise. Tom
  12. Just an idea: maybe you can combine LVOOP with XControls, so you can put your 'per valve/gauge' user-interaction intelligence in the XControl, and keep the data in LVOOP objects. I don't have much experience with LVOOP yet, but I'm sure there are some wireworkers who can point you in the right direction. Tom
  13. Hi, it seems you can do it with the MS ADO ActiveX component. The procedure is described at http://support.microsoft.com/kb/230501 Success, Tom
  14. Matt, just one more thought: could it be that the WDM development installed a different USB stack that VISA is not compatible with? Did you try installing just the VISA runtime on a non-contaminated PC, and then installing VISA as a driver for your device, using the INF file created on the development PC? Success, Tom
  15. The effect we had was that, if a bulk transfer was started on a USB device, a single frame of information was sent, and then the USB host closed the connection. We could not figure out exactly what the problem was, because when we used another (open-source) raw USB communication framework, it worked fine. A colleague pointed me to the VISA changes on the NI site, and in VISA 3.4 a lot of USB stuff was changed. I don't know too much about the USB protocol itself. We found out about the problem using a USB monitor; I assume you are using one as well. If you can find out at which part in the protocol something goes wrong, it will make it easier to track down. Maybe the 3.1 version was more tolerant... Tom
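The chunked, length-prefixed framing scheme described in post 1 can be sketched in text form. The original is a LabVIEW diagram, so this is a minimal Python sketch, not the poster's code; the function names and the `MAX_CHUNK` constant are illustrative, with 512 taken from the packet size mentioned in the post (the `>I` format matches LabVIEW's big-endian flatten/typecast behaviour):

```python
import struct

MAX_CHUNK = 512  # maximum bytes per 'send' function, per the post


def frame_message(payload: bytes) -> list[bytes]:
    """Prepend a 4-byte big-endian length, then split into <=512-byte chunks."""
    stream = struct.pack(">I", len(payload)) + payload
    return [stream[i:i + MAX_CHUNK] for i in range(0, len(stream), MAX_CHUNK)]


def parse_stream(stream: bytes) -> bytes:
    """Receiver side: read the 4-byte length header, then exactly that many
    payload bytes (the 'read remaining bytes with the standard option' step)."""
    (length,) = struct.unpack(">I", stream[:4])
    payload = stream[4:4 + length]
    if len(payload) != length:
        raise ValueError("incomplete message: expected %d bytes" % length)
    return payload
```

A sender would pass each chunk to one TCP send; the receiver reads the first 4 bytes, compares the announced size with what arrived immediately, and reads the rest in one further call, so each packet is touched only once.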
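The nightly hotcopy-and-zip routine from the SVN backup advice in post 3 could look roughly like this. This is a sketch under assumptions, not the poster's actual script: the helper names and the repository/backup paths are made up for illustration, and `svnadmin` must be on the PATH.

```python
import shutil
import subprocess
from pathlib import Path


def hotcopy_command(repo: Path, dest: Path) -> list[str]:
    """The svnadmin invocation that makes a consistent copy of a live repo."""
    return ["svnadmin", "hotcopy", str(repo), str(dest)]


def backup_repository(repo: Path, backup_root: Path) -> Path:
    """Hotcopy one repository, zip the copy, and return the zip's path."""
    dest = backup_root / repo.name
    subprocess.run(hotcopy_command(repo, dest), check=True)
    archive = shutil.make_archive(str(dest), "zip", root_dir=dest)
    shutil.rmtree(dest)  # keep only the zip for the nightly tape job
    return Path(archive)


def backup_all(repo_root: Path, backup_root: Path) -> list[Path]:
    """One zipped hotcopy per repository under repo_root (step 2 of the post)."""
    return [backup_repository(r, backup_root)
            for r in sorted(repo_root.iterdir()) if r.is_dir()]
```

The zips produced by `backup_all` are what would then be picked up by the tape backup (step 3); as the post says, the restore path should be tested at regular intervals, not just the backup path.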