Everything posted by Gary Rubin

  1. You are correct that the second wait is preventing the acquisition from occurring every 100ms. One option would be to do the following (see the sketch below):
     - Get rid of the sequence structure.
     - Put the I/O stuff in a case structure which only executes every fifth iteration.
     Depending on how long you want to run, you might not want to store it all up and then export at once. Instead, write to file as you go. Check out this example provided in LV7.1: Cont Acq&Graph Voltage-To File(Binary).vi
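     Not LabVIEW, but a minimal Python sketch of the same structure - one timed loop, with the file I/O gated to every fifth iteration and data written as you go (read_sample() and the file name are made up for illustration):

         import time

         SAMPLE_PERIOD = 0.1                # acquire every 100 ms

         def read_sample():                 # hypothetical stand-in for the DAQ read
             return time.time()

         buffer = []
         for i in range(50):
             t_next = time.monotonic() + SAMPLE_PERIOD
             buffer.append(read_sample())
             if i % 5 == 4:                 # the "case structure": I/O on every fifth pass
                 with open("acq_log.txt", "a") as f:
                     f.write("\n".join(str(v) for v in buffer) + "\n")
                 buffer.clear()
             time.sleep(max(0.0, t_next - time.monotonic()))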
  2. It depends on how you define "more popular", I guess: http://en.wikipedia.org/wiki/Decimal_point
  3. I know that "Power" is a very expensive operation, so this doesn't surprise me.
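     A quick way to convince yourself of that outside LabVIEW (illustrative Python, not a LabVIEW benchmark) - compare a power operation against a plain multiply:

         import timeit

         print("x**2:", timeit.timeit("x ** 2.0", setup="x = 1.2345", number=1_000_000))
         print("x*x: ", timeit.timeit("x * x",    setup="x = 1.2345", number=1_000_000))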
  4. Thank you for providing your test code in LV7.1. I found a couple of other interesting things. When I replace the separate Sine and Cosine functions in your subVIs with the single function that calculates both, I get a noticeable speed increase (more so for the normal subVI than for the one at subroutine priority). When I do the same replacement in the inline case, however, I get no improvement. Also, when I replaced the indexed inline call with an array inline call (see attached image), the array version was slower. I believe this is due to the large number of array allocations made necessary by all the branches in the wires. I find it interesting that this discussion, along with a previous one, is making me realize that NI's optimization application notes are not the gospel I once took them for. Instead, the appropriate optimization technique (i.e., indexing vs. complete arrays) depends a lot on the code itself. (A sketch of the sine/cosine comparison follows.) Regarding the use of subroutine priority - I seem to remember that one shouldn't play around with priorities, as it tends to interfere with multithreading. That goes back a few years, though - was it just something related to the way multithreading was done on non-HT processors and/or Win2k? Gary
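     For reference, the shape of that comparison in Python (my own illustration, not the LabVIEW benchmark): cmath.exp(1j*x) returns cos(x) + i*sin(x), so a single call yields both values, analogous to LabVIEW's combined Sine & Cosine function. Whether it actually wins depends on the runtime, just as the inline case above showed no gain:

         import cmath
         import math
         import timeit

         angles = [i * 0.001 for i in range(10_000)]

         def separate():                    # two library calls per angle
             return [(math.sin(a), math.cos(a)) for a in angles]

         def combined():                    # one call per angle: cos(a) + j*sin(a)
             return [(z.imag, z.real) for z in (cmath.exp(1j * a) for a in angles)]

         print("separate:", timeit.timeit(separate, number=100))
         print("combined:", timeit.timeit(combined, number=100))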
  5. I'm very interested in what you're doing, but unfortunately, I do not have LV8. Would it be possible to post your test VI in a LV7.1 version? Thanks, Gary
  6. At the risk of sounding dense, what's the problem? Using my hand calculator and the CS and Capacitance values, I get the same answer as your code. The quotient is very close to the CS value, but it doesn't seem that there's anything wrong with the division.
  7. I would just suggest replacing your formula node with this subVI. Download File:post-4344-1148048856.vi
  8. What kind of values do you have for X1 and X2? Can you provide a code snippet?
  9. Right-click on the plot and select Visible Items > Plot Legend. Then right-click on the appropriate plot legend entry with the finger icon, select Point Style, and choose a bigger dot. Set interpolation to get rid of the lines, and increase the line width to make the dots even bigger. I've done something similar using a picture control behind an X-Y graph with a transparent background. The picture control, which I think I found in the LabVIEW examples, contains the concentric rings and radial lines. Putting a transparent indicator on top of another front-panel object is a no-no from a performance standpoint, but I figured it was better than creating the circles as plots within the XY graph, and it doesn't seem to be too bad.
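     The picture-control trick is LabVIEW-specific, but the same layering idea - draw the static rings and radials first, then put the data dots on top - looks like this in a Python/matplotlib sketch (illustrative only):

         import math
         import matplotlib.pyplot as plt

         fig, ax = plt.subplots()
         for r in (1, 2, 3):                          # concentric rings
             ax.add_patch(plt.Circle((0, 0), r, fill=False, color="0.8"))
         for deg in range(0, 180, 30):                # radial lines, drawn as diameters
             a = math.radians(deg)
             ax.plot([-3 * math.cos(a), 3 * math.cos(a)],
                     [-3 * math.sin(a), 3 * math.sin(a)], color="0.8", lw=0.5)
         ax.scatter([1.2, -0.5, 2.0], [0.8, 2.1, -1.0], s=60)   # data dots on top
         ax.set_aspect("equal")
         plt.show()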
  10. Yuri, Thank you for providing the LV7.1 version of your test. I've duplicated your test in my own benchmarking VI (attached) and get similar results (the local is about a factor of 2 faster than the terminal). I'm still very surprised by this. You are correct that the citation I made from the NI app note referred to memory efficiency, not execution time, but my previous experience has always been that the two are very closely related. One thing I did notice is that when I show buffer allocations in my VI, there is an array allocation associated with the terminal, but not with the local variable. This seems inconsistent with NI's recommendations, which say that local variables create copies of the data. Are there any NI lurkers out there who can comment on this? Gary Download File:post-4344-1147350446.vi
  11. Yuri, Unfortunately, I can't run your code (I'm on LV7.1), but I am quite surprised by your tables. Do I understand correctly that your results are consistently better when you pass data via a local than when you wire directly into a terminal? I am quite surprised by this, especially considering the following from the LabVIEW
  12. We are doing LabVIEW-based radar data processing, which may fit your stream-processing definition. We have a non-NI A/D which acquires data and passes it to a bunch of DSPs for initial processing. The data is read from the DSPs by LabVIEW, where we do further processing and display. For such applications, we always use two parallel loops - one acquisition loop and one processing loop - with an LV2-style global passing data from the acquisition loop to the processing loop (sketched below). Obviously, the average processing-loop speed has to be fast enough not to fall farther and farther behind the data acquisition, but the parallel loops with the LV2-style global are critical in preventing processing or display latencies from causing a buffer overflow in the DSPs.
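     In text form, the pattern looks roughly like this Python sketch (an analogy, not our LabVIEW code - the queue plays the role of the LV2-style global, decoupling the two loops):

         import queue
         import threading
         import time

         data_q = queue.Queue()

         def acquisition_loop():
             for i in range(100):
                 data_q.put([float(i)] * 8)      # stand-in for reading the DSPs
                 time.sleep(0.01)                # fixed acquisition rate
             data_q.put(None)                    # sentinel: acquisition finished

         def processing_loop():
             while True:
                 block = data_q.get()
                 if block is None:
                     break
                 _ = sum(block)                  # stand-in for processing/display

         threading.Thread(target=acquisition_loop).start()
         processing_loop()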
  13. Hello all, For as long as I can remember, the VI profiler has shown a max run time of 15.6ms (sometimes 31.2ms), even for VIs that run very fast. I've seen this with at least LV 7.1 on Win2K and WinXP; I don't recall whether it also goes back to LV 6.x. In trying to get a handle on what's happening here, I've created the attached VI. I've found that I do see unexpectedly long loop times, but not the 15.6ms. Based on that, I'm thinking that the 15.6ms number is a product of whatever Profile is doing. I'm wondering, however, what exactly is causing these loop-time variations. Is it simply the non-determinism of Windows? Is some of it due to LabVIEW's memory management, etc.? If I were using Linux, would I see more consistent loop times? Thanks, Gary Download File:post-4344-1146232012.vi
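     For what it's worth, 15.6ms is suspiciously close to the default Windows timer tick of 1/64 s (15.625ms), which would support the idea that it's an artifact of the profiler's clock granularity rather than of the code being profiled. A quick illustrative (non-LabVIEW) way to see the OS timer granularity from Python:

         import time

         samples = []
         for _ in range(200):
             t0 = time.perf_counter()
             time.sleep(0.001)               # ask the OS for a 1 ms sleep
             samples.append((time.perf_counter() - t0) * 1000.0)

         # On an older Windows box the minimum tends to land near the timer tick.
         print(f"requested 1 ms -> min {min(samples):.2f} ms, max {max(samples):.2f} ms")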
  14. Is this what you're trying to do? Download File:post-4344-1144348534.vi
  15. I used to play in WAFC, then in an informal pickup game around the Mall on Sundays. I haven't played for a couple years now -- family obligations, like you said... Gary
  16. [quote name='peteski' date='Apr 4 2006, 09:10 AM' post='11294']I think that the real problem is that the following wording in the help file is misleading:[/quote] Pete, I would certainly agree. I think that's where our disconnect has been. BTW, you mentioned something in one of your other posts about playing ultimate. Do you play in the DC area? Gary
  17. I've got a binary search VI. I had to modify it to do the interpolation between the two nearest points if the search key is not found. What I was wondering in the original post was whether there was a reason that the Threshold 1D Array function is not a binary search, given the caveat in LabVIEW Help that the function only works correctly on a sorted array. Maybe this post would have been more appropriate for the Wish List forum, as a suggestion that in a future release the function could use a binary search.
  18. I think the one I'm using came from NI.
  19. The reason I would use the Threshold function, or a modified binary search, is precisely that the timetags are asynchronous. Say I've got position measurements at 1:30:00 and 1:30:01, and measurement data collected at 1:30:00.2. A plain binary search would give me the position at either 1:30:00 or 1:30:01, but what I really want is the interpolated location at 1:30:00.2 (sketched below). Granted, this is piecewise-linear interpolation, and I could do better with higher-order curve fitting, but if I want one unique position point per measurement point, then a standard binary search is not sufficient. Am I missing something in your thinking? Gary
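     Here's the idea as a Python sketch (my own illustration, not the NI VI - Threshold 1D Array returns this kind of fractional index, and Interpolate 1D Array consumes it):

         import bisect

         def fractional_index(sorted_x, key):
             """Binary-search sorted_x; return the fractional index of key."""
             i = bisect.bisect_left(sorted_x, key)
             if i == 0:
                 return 0.0
             if i == len(sorted_x):
                 return float(len(sorted_x) - 1)
             x0, x1 = sorted_x[i - 1], sorted_x[i]
             return (i - 1) + (key - x0) / (x1 - x0)

         def interpolate(y, frac_idx):
             """Linearly interpolate y at a fractional index."""
             i = int(frac_idx)
             if i >= len(y) - 1:
                 return y[-1]
             return y[i] + (frac_idx - i) * (y[i + 1] - y[i])

         times = [0.0, 1.0]            # position timetags: 1:30:00 and 1:30:01
         positions = [100.0, 110.0]    # positions at those times
         idx = fractional_index(times, 0.2)   # measurement taken at 1:30:00.2
         print(interpolate(positions, idx))   # -> 102.0, the interpolated position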
  20. I typically use the Threshold 1D Array function along with Interpolate 1D Array to line up data based on asynchronous time stamps. Because I'm using time as my reference array, it's always ascending. In fact, there's a comment in LabVIEW's help for the Threshold 1D Array function to the effect that the array must be sorted in ascending order. Is that more along the lines of the usage that you were thinking of? Gary
  21. I'm not sure if I'm posting this in the appropriate forum or not... I use the Threshold 1D Array function quite often. According to the LabVIEW Help description of this function, and based on my experience, it only works for sorted data. I just did a timing test, comparing the speed of this function to that of a standard sequential array search, and found it to be slower for array lengths of 500, 1000, 2000, and 10000 (the shape of the test is sketched below). This tells me that, although the function only works for sorted data, it is using a sequential search rather than a binary search. Why is this? Can anyone (from NI?) explain why the Threshold 1D Array function was implemented this way? I guess I'll be spending some time modifying my binary search VI and going back to my code and doing some replacing... Gary
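     The timing test, redone as an illustrative Python sketch (not the original LabVIEW VI - bisect gives the O(log n) behavior I'd have expected Threshold 1D Array to use):

         import bisect
         import random
         import timeit

         data = sorted(random.random() for _ in range(10_000))
         keys = [random.random() for _ in range(1_000)]

         def sequential():                   # O(n) scan per key
             for k in keys:
                 next((i for i, v in enumerate(data) if v >= k), len(data))

         def binary():                       # O(log n) search per key
             for k in keys:
                 bisect.bisect_left(data, k)

         print("sequential:", timeit.timeit(sequential, number=10))
         print("binary:    ", timeit.timeit(binary, number=10))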
  22. Sorry, I should have been more specific. I'm using native file dialogs, so this is the Windows file dialog, not the LV one.
  23. I am using LV7.1 and have a question about the browse button on a path control. I can provide a start path and pattern, which will be used by the WinXP file dialog that pops up when I click the browse button. Is there any way that I can also specify the view that I want for the file dialog (e.g., list, details, thumbnails)? I realize that I could write my own file dialog, but I'd rather not do that if I don't have to. Thanks, Gary
  24. I think I was able to do about 2kHz max with a considerably higher duty cycle. That was DMA'ing the data straight to another card, though, so I didn't have to wait for the OS/LabVIEW to deal with the data arrays. It'd be a bit different with multiple-record mode. In that case, you store several triggers' worth of data and offload it all at once. That lets you acquire at a much higher PRF for short periods of time, with the penalty that it takes even longer to offload the data when you're done with the acquisition.