Everything posted by Tim_S

  1. Is this what you're trying to do? Reentrant to subpanel.zip
  2. I do have a resolution, but not an explanation. Sending of commands/responses is still through TCP; however, the fast data transfer has been switched to a network stream. I was able to get 600,000 U32 per second to transfer overnight without issue (better performance than I need). Network streams are built on TCP, so this has NI tech support and me scratching our heads as to why one works well and the other does not.
  3. Normalizing the communication bus (Ethernet/IP, ModbusTCP, Profibus, CAN, Flexray, UDP, etc.) is where I have found LVOOP to be useful. Could I do this without LVOOP? Certainly. Is it a good use of the LVOOP tool? I believe so. The hardware manufacturers I've been working with don't seem to have heard of SCPI or consider it outdated. The types of devices I've worked with are drives, particle counters, valve manifolds, remote I/O, and weather stations. Most of the hardware is from customers' approved components list.
  4. How immune is an IR beacon to rain? Hrm... Sparkfun has an RFID reader board for $25 plus the longest-range reader (180 mm max) for $35. That could be attached to an Arduino with wireless. It's not the sort of thing that will cover a room, but it could scan to open a gate.
  5. Wow, this sounds like a really awesome event. I'm curious how long a battery in an active RFID tag would last (all of the passive tags I've seen have a range under 2"); my Google search started coming up with inventory tracking systems and laundry tags (passive, with a range of ~6').
  6. I would not say that LVOOP should be avoided unless clearly called for, but (like any tool) used appropriately. In the case of the modules, a parent object could define the interface to the larger application and carry into a child object common data/settings. (See plug-in architecture.) Since you intend for this to change, grow, and mutate without you, as generic a way as possible for modules to communicate would be best. Two ways that have been done before are using a cluster of an enum and variant, or a string. You may want to look at SCPI commands to get some ideas for using strings (a rough sketch of the string approach appears after the post list).
  7. Update: Reversing the data (Windows PC as server, cRIO as client) ran overnight without issues. Both NI tech support and I were able to reproduce the behavior with the shipping examples "simple TCP client" and "simple TCP server". NI tested using a dual-processor cRIO, which did not show the behavior. The next test I've been asked to try is to monitor CPU usage on the cRIO. The hypothesis is that CPU usage momentarily goes very high, causing the TCP communication to get delayed. We haven't worked out what could be causing the CPU usage to spike.
  8. My $0.02... The Process' job is to run the process as fast as possible. As such, I see the UI requesting a change in the Process (for which the Process has the option of saying yes, no or go to hell). The UI needs the Process for data, but the Process should not need the UI to run (I can think of exceptions to this).
  9. Amen. I checked the driver on my development PC and have verified that Jumbo Packet is disabled. I haven't found much on Jumbo Frames with cRIO, but what I have found indicates it is only available with Windows and not RT. I am starting to think the 'standard corporate IT load and policies' are causing some instability in long-term TCP communication. Unfortunately, I don't think I can prove that or change it.
  10. Update on this... This issue only appears on my development PC, in both the development environment and the run-time engine. I was able to set up executables to test non-development PC-to-PC and PC-to-cRIO for extended periods of time; these tests ran overnight without issue. I'm still working with technical support on this issue.
  11. I'm not understanding the request to locally cache rather than use a structured query or report within the database. Querying 30,000 records that are joined, filtered, sorted, folded, spindled, and mutilated is what databases are meant to do (even when adding records every 10-15 seconds). The computer the database is on does need to have enough oomph to keep up with all that is being asked of it, of course. These look like reports that are normally produced once per day, so performing the report during breaks and non-production periods can resolve any issues with loading without the need for a local copy. (A rough sketch of letting the database build the report appears after the post list.) Something you may want to look into is database replication. I've not worked with it myself, but have seen it used to keep two databases (local and main) synchronized. This may allow you to keep a 30-day subset local.
  12. Switched to Standard mode to read whatever is available at the timeout. Of what does get received, the data is correct. I've Wiresharked the connection. This is looking like something lower-level, as there are responses to the TCP packets of "Reassembly error, protocol TCP: New fragment overlaps old data (retransmission?)". I'm now getting errors (codes 56 and 66) on the RT side where I wasn't before. A coworker dropped off a PLC I'm supposed to talk Ethernet/IP to. I was able to get Ethernet/IP going for 5+ minutes without any errors using the same cable and PC. For grins, I changed from a 6000-element array to 50 elements (which should fit within one packet). The errors in the test routine went away. I'm still seeing reassembly errors in Wireshark, but those appear to be for the RT front panel (in development mode). Creeping up the array size, the issue returns as soon as the message spans two packets.
  13. I've just tried that with no improvement. I went back through the emails with tech support and see that they have tried that as well. Correct. Once I'm out of sync, the only recourse with this method is to stop everything and restart from a known point. What is confusing about the loss of bytes is that the length and data are a single write, so they should show up at "the same time". They will be multiple packets, but there should not be collisions or packet loss.
  14. Posted for the next person to run into this (it could always be me!). Backstory: I was testing the performance of some RT code and it was coming out lousy. I tracked the slow performance down to where I was reading and writing a linear buffer in a VI. The write was taking 88 msec and the read was taking over 300 msec. I brought the code over to the Windows side and was getting pretty bad performance there as well (the write was 1 msec). [Omit lots of head scratching, improvement attempts, call to NI tech support, obligatory waving of dead chicken...] There were two things that improved the performance:
      1. Eliminating the bundle/unbundle of cluster elements
      2. Wiring through case structures
      There is some cost to the bundle/unbundle; I didn't determine how much improvement eliminating it created. My solution was to eliminate my cluster and use a shift register for each element of the cluster. Tech support came up with using the in place element structure. Wiring through the case structure created the biggest benefit for me. Tech support explained the case structure and memory allocation: "The compiler is going to create duplicate copies of the array when branching and when going into a case structure to make sure that we operate on good array data every time (regardless of which case is executed)..." My test code was branching a 1M element U32 array, so I was getting a serious performance hit from the copy. This copy does not show up when using the Show Buffer Allocations tool (LV 2012). Tech support was able to identify the allocation using the Desktop Execution Trace Toolkit. All said and done, the relative performance on Win7-64 with LV2012-32:
      Original: 1046 usec
      Using IPE: 31 usec
      Using shift registers: 26 usec
      Tech support organized the back-and-forth and sent a project demonstrating the difference; I've included that. (A rough Python analogy of the copy cost appears after the post list.) Buffer Testing Project.zip
  15. Some of my code is giving me behavior I'm not understanding. I've been talking with NI tech support, but I'm trying to better understand what's going on for the foreseeable project down the road that is going to tax what TCP can transmit. I have a PC directly connected to a cRIO-9075 with a cable (no switch involved). I've put together a little test application that, on the RT side, creates an array of 6,000 U32 integers, waits for a connection, and then starts transmitting the length of the array (in bytes) and the array itself over TCP every 100 msec. The length and data are a single TCP write. On the PC side, I have a TCP open, a read of four bytes to get the length of the data, then a read of the data itself. The second TCP read does not occur if there are any errors with the first TCP read. Both reads have a 100 msec timeout. The error I'm getting is a sporadic timeout (error 56) at the second TCP read on the PC side. This causes the next read of the data length to be from my data, so I get invalid data from there on out. The error occurs from seconds to hours after the start of transmission. As a sanity check, I did some math on how long it should take to transmit the data. Ignoring the overhead of TCP communication, it should take ~2 msec for the write to transmit. A workaround seems to be to have an infinite timeout (value of -1) for the second TCP read. I'm rather leery of having an infinite (or very long) timeout in the second read. Tech support was able to get this working with 250 msec on the second read. (A rough sketch of the length-prefixed read appears after the post list.) Test VIs uploaded... Test Stream Data.zip
  16. You've asked the equivalent of 'how do I build a bridge?'. There's a lot more information and work that goes into that question than we can really go into here. Your local suppliers for motors and drives can help you with selecting components appropriate for the task. An individual supplier will route you toward their own product line. I normally don't get involved with motor selection, but you'll need information on the physical envelope, worst-case pressures, the type of valve you're using, etc., to calculate how much horsepower you need (what is the metric equivalent for motors?) and what speed you need to run at. The more you've clarified what you need the system to do, the better (or at all) a supplier can help you.
  17. Well, it depends on the drive connected to the stepper. The myDAQ cannot control a stepper directly; there have to be some components (a drive) in between. The drive could have an Ethernet, serial, PROFIBUS, etc., interface. It could have a couple of TTL inputs (e.g., go forward, go reverse). The stepper motor itself doesn't matter, but the drive controlling it does.
  18. 1. You will need some sort of hardware that the software (LabVIEW) talks to. National Instruments sells a variety of hardware that works well with their software. There is other hardware (the Arduino you mentioned is one example) that would give you the same ability to control. What hardware you pick (and how you control it) is dependent on what your other requirements are. Do you need to switch within seconds, milliseconds, or microseconds? What is the load rating (looks like you have that)? Etc...
      2. Whether to use separate power sources is a controls issue. If you have power supplies only capable of handling one solenoid with inrush current, then yes, you will need separate supplies. If you have one supply capable of handling all three with inrush current, then you're fine with one. There is also the impact from a safety risk analysis.
  19. Just taking a quick look at your diagram, there should be shift registers on the for loop for the connection reference and the error cluster.
  20. This is a forum for the LabVIEW programming language, not the Lava line of phones.
  21. For searching, you may wish to create a separate dataset from the tree control that better allows searching. This dataset would need to include the tag name of the entry in the tree. Navigating through a tree is not the fastest thing in the world.
  22. I don't remember LV 2010 well enough to know how to do this there, but there is a means to debug an executable, which is what enabling debugging in an exe allows. With LV2012, there is an additional flag to wait for the debugger.
  23. I was going someplace, but then went back and re-read your post. Alternatively:
      - Does this happen if you run the code in the development environment?
      - Does this happen if you turn off debugging (I've seen weird behavior like that)?
      - Have you watched the code in the executable in debug?
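Regarding post 6: a minimal sketch of the string-based module command idea. It's written in Python only because a LabVIEW diagram can't be pasted as text; the command names and the dispatch scheme are hypothetical, not taken from any actual module.

    # Hypothetical SCPI-style commands: "MODULE:SETPOINT 42.0", "MODULE:STATUS?"
    def handle_command(command, state):
        """Parse a space-delimited command string and act on the module's state."""
        name, _, argument = command.partition(" ")
        if name == "MODULE:SETPOINT":
            state["setpoint"] = float(argument)
            return "OK"
        elif name == "MODULE:STATUS?":
            return "RUNNING" if state["running"] else "STOPPED"
        return "ERROR: unknown command"

    state = {"setpoint": 0.0, "running": True}
    print(handle_command("MODULE:SETPOINT 42.0", state))  # -> OK
    print(handle_command("MODULE:STATUS?", state))        # -> RUNNING

The appeal of strings over a shared enum/variant cluster is that a new module can add commands without touching a common typedef, which suits an application meant to grow without you.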
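Regarding post 11: the point is to let the database do the joining, filtering, and aggregating rather than caching 30,000 records locally. A self-contained sqlite3 sketch with a made-up schema (the real tables, columns, and database engine will differ):

    import sqlite3

    # Made-up schema purely for illustration.
    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE parts (id INTEGER PRIMARY KEY, lot TEXT);
        CREATE TABLE measurements (part_id INTEGER, value REAL, recorded_at TEXT);
        INSERT INTO parts VALUES (1, 'A'), (2, 'B');
        INSERT INTO measurements VALUES (1, 10.5, datetime('now')), (2, 9.8, datetime('now'));
    """)

    # The daily report: join, filter to the last day, aggregate, sort -- all in the database.
    report = conn.execute("""
        SELECT p.lot, COUNT(*) AS samples, AVG(m.value) AS mean_value
        FROM measurements AS m
        JOIN parts AS p ON p.id = m.part_id
        WHERE m.recorded_at >= datetime('now', '-1 day')
        GROUP BY p.lot
        ORDER BY p.lot
    """).fetchall()

    for lot, samples, mean_value in report:
        print(lot, samples, mean_value)
    conn.close()

Run once per day during a break or non-production period, a query like this keeps the load on the database server and avoids maintaining a local copy.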
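Regarding post 14: the in-place element structure and case-structure copies are LabVIEW diagram constructs, so the following is only a loose NumPy analogy of the underlying point, that an unneeded copy of a 1M-element array dwarfs the cost of the work actually being done on it:

    import numpy as np
    import timeit

    buffer = np.zeros(1_000_000, dtype=np.uint32)

    def update_with_copy(buf):
        local = buf.copy()   # analogous to the compiler duplicating the branched array
        local[0] += 1
        return local

    def update_in_place(buf):
        buf[0] += 1          # analogous to operating in place (IPE / shift register)
        return buf

    print("with copy:", timeit.timeit(lambda: update_with_copy(buffer), number=100))
    print("in place: ", timeit.timeit(lambda: update_in_place(buffer), number=100))

The absolute numbers will differ from the LabVIEW measurements above; the point is only the order-of-magnitude gap once the copy is removed.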
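Regarding post 15: the wire format is a 4-byte length followed by the flattened U32 array, sent as a single TCP write every 100 msec. Below is a rough Python sketch of the PC-side read loop; the host, port, and use of big-endian byte order (LabVIEW's default when flattening) are assumptions, and the actual VIs are in the attached zip. The key detail is looping until the full byte count arrives, since TCP can deliver one message across several reads.

    import socket
    import struct

    HOST, PORT = "192.168.1.10", 55555   # hypothetical cRIO address and port

    def recv_exact(sock, count):
        """Read exactly 'count' bytes, looping because TCP may return partial data."""
        data = b""
        while len(data) < count:
            chunk = sock.recv(count - len(data))
            if not chunk:
                raise ConnectionError("connection closed")
            data += chunk
        return data

    with socket.create_connection((HOST, PORT), timeout=5) as sock:
        while True:
            (length,) = struct.unpack(">I", recv_exact(sock, 4))   # 4-byte length prefix
            payload = recv_exact(sock, length)                     # the U32 array
            values = struct.unpack(">%dI" % (length // 4), payload)
            print(len(values), "values received, first =", values[0])

Here recv_exact blocks until all bytes arrive (or the connection drops), which sidesteps the short-timeout problem described in the post, at the cost of the same concern about waiting indefinitely.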