Everything posted by mwebster
-
Beware that breaking a USB cable out through a set of relays and having it work on the other side is likely to be non-trivial, especially if you're talking USB 2.0+. I've personally had a difficult time just trying to splice a USB cable, even when taking care to leave only ~2" unshielded (the shield was bridged over, but didn't completely enclose the spliced area). If I recall, this was for an external hard drive with "A" connectors on both ends; the computer would recognize the device but couldn't transfer data. From a few internet forum posts, several people have done some pretty hacktastic cable splicing jobs on things like mice and tablets that worked just fine, so it might just be a data rate thing.
-
That's a relatively slow signal, so sample rate isn't a problem. Looks like 100 Hz would probably work; 1 kHz almost definitely would. However, none of NI's DAQ cards are going to measure that kind of voltage directly. There are DMM cards that would, but that's probably more than is called for here. What you need is some kind of signal conditioner between those high voltages and your DAQ card. You might go with a resistive voltage divider feeding into a Dataforth isolation module, which also protects the DAQ in case a wire comes loose and applies the full voltage straight to its input. Pretty much any DAQ card NI sells should be able to read the conditioned signal. Accuracy is going to be a product of the DAQ, the isolation module, and the tempco and drift of the divider resistors, but depending on what you're doing with the signal after acquisition, that may not be much of an issue.

I might not need to say this, but make sure you seal all the high-voltage stuff away behind finger-safe barriers in your test system. 400 V isn't like regular old wall voltage.

Regards, Mike
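P.S. To put rough numbers on the divider idea, here's a back-of-the-envelope sketch (Python; the resistor value and the 5 V full-scale into the conditioning stage are assumptions for illustration, not a recommendation for a specific DAQ or isolation module):

```python
# Minimal sketch: sizing a resistive divider to bring ~400 V down to an assumed
# 5 V full scale ahead of the isolation module, then scaling back in software.
V_IN_MAX = 400.0      # worst-case input voltage, volts
V_OUT_MAX = 5.0       # assumed full scale into the conditioning stage, volts

R_TOP = 1_000_000.0   # assumed high-side resistor, ohms (large, to limit divider current)
R_BOT = R_TOP * V_OUT_MAX / (V_IN_MAX - V_OUT_MAX)   # ideal bottom resistor

ratio = R_BOT / (R_TOP + R_BOT)
power_mw = V_IN_MAX ** 2 / (R_TOP + R_BOT) * 1e3

print(f"Bottom resistor ~ {R_BOT / 1e3:.1f} kohm, ratio = {ratio:.5f}")
print(f"400 V in -> {V_IN_MAX * ratio:.2f} V out, divider dissipation ~ {power_mw:.0f} mW")

def volts_actual(v_measured, divider_ratio=ratio):
    """Recover the high-voltage value from the divided-down reading."""
    return v_measured / divider_ratio
```

The real accuracy budget still comes from the resistor tempco/drift and the isolation module, as noted above; the math just shows the scaling.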
-
I'll have to go hunting for the example code again. The EXE call is actually a really good idea, thanks for that. I like easy, more and more every day... And the registry piece I need to access is in HKLM\SYSTEM, definitely not user-accessible. It's basically a USB COM port that, for the device it interfaces to, needs to have its latency timer set to 1 ms instead of the default 16 ms. I would just set it manually once, but if you unplug it and replug it into a different USB port, it gets a new COM port address and a new registry entry that contains the default value once again.
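Roughly, the write involved looks like this (a Python winreg sketch standing in for whatever the helper EXE ends up doing; the FTDI-style key path is an assumption about the adapter, and the device-instance string is made up):

```python
# Sketch only: set a USB-serial port's LatencyTimer to 1 ms under HKLM.
# Assumes an FTDI-style adapter whose per-instance settings live under
# HKLM\SYSTEM\CurrentControlSet\Enum\FTDIBUS\...\Device Parameters.
# Writing under HKLM requires elevation, which is the whole UAC problem above.
import winreg

DEVICE_KEY = (r"SYSTEM\CurrentControlSet\Enum\FTDIBUS"
              r"\VID_0403+PID_6001+A12345678A\0000\Device Parameters")  # hypothetical instance

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, DEVICE_KEY, 0,
                    winreg.KEY_SET_VALUE) as key:
    # LatencyTimer is in milliseconds; drop it from the default 16 to 1
    winreg.SetValueEx(key, "LatencyTimer", 0, winreg.REG_DWORD, 1)
```

The catch mentioned above is that the instance key changes when the device lands on a different USB port, so the write has to be repeated for the new entry.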
-
I'm wondering if anyone here has done this before. I have an application that (sometimes) needs to modify a system-level Windows registry key. In Windows 7, I've figured out how to get the application to request "elevated" access mode by modifying the .manifest file. However, there should be a way to get the UAC prompt to come up only if and when I actually need to modify the registry key (99% of the time after the application is first run, it will not need to be touched). I've found some example code on the Microsoft site, but it's all C++/C#. Has anyone done this before in LabVIEW? Regards, Mike
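P.S. One common way to get the prompt only on demand is to put the one privileged operation into a small helper EXE and launch it with the shell's "runas" verb, so UAC only appears when that helper actually runs. A minimal sketch of the calling side (Python ctypes here as a stand-in for a System Exec / .NET call from LabVIEW; the helper EXE name is hypothetical):

```python
# Sketch: launch a (hypothetical) helper EXE elevated via ShellExecute "runas",
# so the UAC prompt only appears when the registry actually needs to be touched.
import ctypes

def run_elevated(exe_path, params=""):
    """Ask the shell to run exe_path elevated; returns True if the launch succeeded."""
    result = ctypes.windll.shell32.ShellExecuteW(
        None,        # no parent window
        "runas",     # request elevation (triggers the UAC prompt)
        exe_path,    # the helper that does the one privileged write
        params,      # e.g. which COM port to fix
        None,        # default working directory
        1)           # SW_SHOWNORMAL
    return result > 32   # ShellExecute returns a value > 32 on success

if __name__ == "__main__":
    run_elevated(r"C:\MyApp\SetLatencyTimer.exe", "COM5")  # hypothetical helper
```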
-
No, I didn't. It was more of an annoyance in that program than anything else and only happened intermittently. In fact, I'm not sure I ever saw it happen in the compiled version, but I didn't play around with it that much trying to break it either.
-
The sequencing was done already; without it, the performance was substantially worse. The time-critical code is taking up ~65% of the CPU cycles, but it's "interrupting" every 5 ms to do its work, so maybe it's inefficient context switching that's killing it. I may try redoing this with straight TCP/IP in the future, but I just wanted to ask around to see if anyone else had experienced this. You can pass a lot more data, a lot faster, as straight doubles. Something about the variant/cluster packaging just makes the SVE reads so much slower... Mike
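If I do go the TCP route, the "straight doubles" idea would look roughly like this (sketched in Python rather than LabVIEW's TCP primitives; the packet layout and field count are made up): pack a fixed, known set of doubles into one flat binary block so the receiver never unpacks variants or clusters.

```python
# Sketch of flat-doubles-over-TCP: fixed-size, fixed-order packets of doubles,
# no variant/cluster (de)serialization on either end. Layout is hypothetical.
import socket
import struct

N_DOUBLES = 44                       # e.g. all setpoints/gains flattened in an agreed order
PACKET_FMT = ">%dd" % N_DOUBLES      # big-endian doubles
PACKET_SIZE = struct.calcsize(PACKET_FMT)

def send_command(sock, values):
    """Flatten the command values into one fixed-size binary packet."""
    sock.sendall(struct.pack(PACKET_FMT, *values))

def recv_command(sock):
    """Read exactly one packet and unpack it back into a tuple of doubles."""
    buf = b""
    while len(buf) < PACKET_SIZE:
        chunk = sock.recv(PACKET_SIZE - len(buf))
        if not chunk:
            raise ConnectionError("peer closed the connection")
        buf += chunk
    return struct.unpack(PACKET_FMT, buf)
```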
-
LV2011 RT, cRIO-9076, using NSVs to share data back to the PC and send commands to the RIO. I'm getting very slow reads of the NSVs on the RIO (on the order of 250-350 ms). The shared variable engine is hosted on the RIO.

I've got a critical timed loop reading from FIFOs and writing to scan engine I/O variables, and reading from scan engine I/O and writing to FIFOs. I have a non-time-critical communication loop that reads from NSVs and writes to FIFOs, and reads from FIFOs and writes to NSVs. The time-critical loop spins like a top, 1-2 ms tops reading/writing FIFOs to/from I/O vars. The read-FIFO -> write-NSV direction is slow (30-40 ms), but not nearly as slow as reading from NSVs and writing to FIFOs.

Things I have tried:
- Disconnected typedefs from all the NSVs. This was necessary to deploy a built executable to the RIO (and have it work, that is); some new bug in LV2011 according to NI.
- Changed all typedefs to variants and recast them on the RIO. This sped things up by a third or so.
- Split the work up into subVIs for analysis purposes. The casting and writing to FIFOs is very fast; it's definitely the read operation that's being pokey.

Current workaround: I'm using an Updated boolean to tell the RIO when to actually read the NSVs so that my average loop time doesn't suffer so much. This works, but I want to know the whys and wherefores of it being so slow.

Further details, exactly what I'm reading:
- 15-element boolean array (this is very fast by itself; I'm now reading it on every loop, not just when Updated is true, with no problem)
- 4 "position command" clusters (currently cast to variant):
  - Enum target
  - Enum controlMode
  - Enum Channel
  - Enum PV
  - Double command
  - Double man control
  - Cluster Control_Parameters:
    - Enum PV_Type
    - Cluster PID_gains: Double Kc, Double Ti, Double Td
    - Cluster Setpoint_range: Double high, Double low
    - Cluster output_range: Double high, Double low
- 1 "test command" cluster (currently cast to variant):
  - Enum trigger channel
  - Enum trigger direction
  - Double lowerLimit
  - Double upperLimit
  - Boolean Start
  - Boolean Stop

Those enums are 16-bit, so we're talking about 2624 bits in the 4 control clusters + 162 in the test command + 15 in the boolean array = 2801 bits, or about 350 bytes. Call it 500 with some structure padding from the cluster organization. Why would it take 300+ ms to read less than 1 KB of data?

Best regards, Mike
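As a sanity check on that byte count, here's the same tally in Python (a sketch only; it mirrors the bit math above and ignores whatever per-element overhead LabVIEW's variant/NSV flattening actually adds):

```python
# Rough size tally for the data read from the NSVs, mirroring the bit math above.
# Ignores variant overhead and cluster padding, which is exactly the point:
# even padded out, the payload is well under 1 KB.
ENUM_BITS, DOUBLE_BITS, BOOL_BITS = 16, 64, 1

position_command = 4 * ENUM_BITS + 2 * DOUBLE_BITS     # target, controlMode, Channel, PV, command, man control
control_parameters = ENUM_BITS + 7 * DOUBLE_BITS       # PV_Type + PID_gains(3) + Setpoint_range(2) + output_range(2)
one_control_cluster = position_command + control_parameters   # 656 bits

test_command = 2 * ENUM_BITS + 2 * DOUBLE_BITS + 2 * BOOL_BITS  # 162 bits
bool_array = 15 * BOOL_BITS                                     # 15 bits

total_bits = 4 * one_control_cluster + test_command + bool_array
print(total_bits, "bits =", total_bits / 8, "bytes")   # 2801 bits, ~350 bytes
```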
-
Greetings, I have an intermittent problem with multiple axes on a waveform graph control. I have a graph where I'm displaying multiple, optional plots with different units (voltage, current, torque, angular position). I coded up a boolean selector array to allow the user to select which plots they want to display (some 3 whole months before the feature was introduced in 2011; this is 2010 SP1, btw). I process this array when it changes and make unselected plots transparent. At the same time, I analyze which units are being displayed and, if for example there are no torque signals, I turn off the torque axis (via YScale.Visible) to make more room on the graph.

My problem is that when this is done dynamically, sometimes two axes wind up being drawn overlapping each other: the units for one overwrite the units of the other, and/or one of the scales gets pushed much further off to the side than necessary, eating up a lot of graph space. The only way I've found to fix this (in the dev environment) is to stop the program, right-click the scale, tell it to "swap sides", and then tell it to swap sides again. (I should probably mention I have two scales on the left and two on the right.) This fixes it every time. However, I've been unable to locate a property that replicates this behavior, or even just a 'redraw' type command to recompute the axis display positions. Oh, also the YScale.Visible changes and the transparent coloring of unselected plots are done with DeferPanelUpdates turned on.

Can anyone think of a way to fix this, short of manually playing around with Bounds & Position settings? Best Regards, Mike
-
That was it. Separating those into their own loops (even with the added overhead of re-indexing the arrays) brought the execution time to ~2x vs. 4-5x. It's funny, though: even when allowing those new loops to execute in parallel instead of serially, the improved performance is still there. I guess it executes a few hundred iterations of one, then yields to the other, resulting in a lot fewer thread swaps.

So, for an array of ~20k elements: prefiltering = 110 ms, old post-filtering = 440 ms, new post-filtering = 235 ms.
Built into an EXE: prefiltering = 98 ms, old post-filtering = 220 ms, new post-filtering = 215 ms.

It's pretty interesting that the built EXE yielded that much optimization. Thanks for your help, Mike
-
I'm scratching my head a bit trying to figure out why one of these two ways of doing something is ~5 times slower than the other. I'm generating a command curve: an array of angles for a motor. I translate these to 2 of the 3 motor phases for output to a motor driver, then take each of those phases and pass it through a zero-order hold and a low-pass filter. If, instead, I zero-order hold and low-pass filter the command and then translate that into the phases, it's about 5 times faster. Now, I'd be happy to just leave it at that, but there are some artifacts if I do it this way near the trigonometric asymptotes (short-duration spikes where there shouldn't be any).

I know I'm calling the ZOH and the LP filter twice as often the first way, but I can't understand why it's 4-5 times slower instead of twice as slow. Even more puzzling: comparing the filter off (give it an input <= 0) to on in prefilter mode, the filter only adds about 50% overhead, vs. 400% when post-filtering. Anyone got any bright ideas, or have I just made some stupid wiring mistake?

The attached VIs are LV2010 and require the MathScript module to be installed to run (actually, you could get past that requirement by opening the Command to DAQ output tester, removing the Generate command curve VI, and replacing it with an initialized array of, say, 20,000 doubles; the behavior is the same). FilterSineOrSineFilter.zip
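For anyone who doesn't want to open the VIs, the shape of the two approaches is roughly this (a NumPy sketch; the sine-based phase translation, hold factor, and Butterworth coefficients are placeholders, not the actual VI's math):

```python
# Rough sketch of the two orderings being compared; all numbers are placeholders.
import numpy as np
from scipy.signal import butter, lfilter

angle = np.linspace(0.0, 200.0 * np.pi, 20000)   # stand-in command curve (radians)
b, a = butter(2, 0.05)                           # placeholder low-pass filter

def zoh(x, hold=4):
    """Crude zero-order hold: repeat each sample `hold` times."""
    return np.repeat(x, hold)

# Way 1 (post-filtering, the slow one): translate to two phases first,
# then ZOH + low-pass each phase separately -> the filter runs twice.
phase_a = np.sin(angle)
phase_b = np.sin(angle - 2.0 * np.pi / 3.0)
out_a = lfilter(b, a, zoh(phase_a))
out_b = lfilter(b, a, zoh(phase_b))

# Way 2 (pre-filtering, the fast one): ZOH + low-pass the command once,
# then translate the filtered command to the two phases.
angle_f = lfilter(b, a, zoh(angle))
out_a2 = np.sin(angle_f)
out_b2 = np.sin(angle_f - 2.0 * np.pi / 3.0)
```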