Everything posted by Aristos Queue

  1. QUOTE (Gary Rubin @ Apr 1 2008, 02:56 PM) Nope. He's just not working on LV core anymore. He spent a couple years on the Mindstorms NXT project -- which was top secret and he couldn't talk about it, so he got out of the habit of posting. :-) He's still very much at NI.
  2. QUOTE (Jim Kring @ Apr 2 2008, 04:18 PM) Several of them are subroutine. In general we do NOT make anything be subroutine because of the inability of users to debug in such VIs. If you have a need for subroutine priority for something in vi.lib, clone it and make it subroutine. This has been a point of debate in the past, but the explorability of LV has always been given precedence over performance as far as that setting is concerned.
  3. I created major problems for LabVIEW when I made the "Trim Whitespace.vi" be reentrant a few versions ago. Just launching LabVIEW's Getting Started Window spawned 150+ copies of that subVI. It was a major memory hog. In LV8.5, that subVI is no longer reentrant. The tradeoffs between the thread synchronization and the memory usage were such that not being reentrant is better. Now, in LV8.5 we also introduced pooled reentrancy. The LabVIEW R&D team has been seriously debating a recommendation that the vast majority of non-state-maintaining, pure-function VIs should be changed to pooled reentrancy. In the long term, this may be something we encourage. The "Trim Whitespace.vi" is a prime candidate for this status... we didn't do that in 8.5 because the feature was brand new --- since that subVI could be used in tons of user VIs around the world already, we didn't want any potential bugs in the new feature to impact existing VIs.
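
LabVIEW itself is graphical, but the pooled-reentrancy idea maps loosely onto a textual instance pool: callers borrow a pre-built data space, use it, and return it, rather than each call site owning a permanent clone. A minimal C++ sketch of that pattern only; the names (InstancePool, Scratch) are invented for illustration and are not LabVIEW's internals:

```cpp
#include <mutex>
#include <string>
#include <vector>

// Hypothetical per-call data space, standing in for a clone VI's state.
struct Scratch {
    std::string buffer;  // e.g. working storage for a string operation
};

// A simple pool: borrow an instance if one is free, otherwise build one.
// Pooled reentrancy behaves analogously: clones are shared among call
// sites instead of one clone existing per call site.
class InstancePool {
public:
    Scratch acquire() {
        std::lock_guard<std::mutex> lock(m_);
        if (free_.empty()) return Scratch{};   // grow the pool on demand
        Scratch s = std::move(free_.back());
        free_.pop_back();
        return s;
    }
    void release(Scratch s) {
        std::lock_guard<std::mutex> lock(m_);
        free_.push_back(std::move(s));         // returned for reuse
    }
private:
    std::mutex m_;
    std::vector<Scratch> free_;
};
```
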
  4. QUOTE (Gavin Burnell @ Mar 31 2008, 04:03 PM) I believe that yes, you could make something like that work.
  5. QUOTE (jzoller @ Mar 31 2008, 02:43 PM) Yep. That would be the actual memory manager that I referenced in my original post.
  6. QUOTE (hepman @ Mar 31 2008, 02:30 PM) No, it's not.
  7. QUOTE (Val Brown @ Mar 31 2008, 01:45 PM) Basically the subVI will have to reallocate all the stuff it just deallocated every time it executes. Very very time consuming. The only time when Request Deallocation is advantageous is when you have a subVI that has a very large array pass through it and you don't expect to call that subVI again for a very long time (we're talking arrays on the order of a million elements and delays between calls of at least a few full seconds). In those cases, there can be some advantages to going ahead and deallocating the subVI after every call.
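
In textual terms, the tradeoff is "keep the scratch buffer warm" versus "free it after every call". A rough C++ sketch of that cost difference, under the assumption of a simple reusable buffer (function names are hypothetical):

```cpp
#include <cstddef>
#include <vector>

// Keeps its allocation between calls, like a normal LabVIEW subVI:
// after the first call, same-size inputs need no further allocation.
void process_keep(std::vector<double>& scratch, std::size_t n) {
    if (scratch.size() < n) scratch.resize(n);  // grow only when needed
    // ... work on scratch[0..n) ...
}

// Frees everything on exit, like a subVI with Request Deallocation:
// every call pays the full allocation cost again.
void process_dealloc(std::size_t n) {
    std::vector<double> scratch(n);  // allocated fresh on each call
    // ... work on scratch[0..n) ...
}   // scratch destroyed here; memory returned immediately
```
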
  8. LabVIEW doesn't have a garbage collection system -- there isn't a "memory manager" that is in charge of periodically deallocating data. (I put "memory manager" in quotes because there is something that we call the memory manager, but it doesn't do the job you're describing.) Let's take a VERY simple case: Suppose you have a subVI that takes an array and an int32 N as input. The subVI's job is to concatenate N items onto the end of the array. When that subVI loads into memory, its arrays are all size zero, so they take little memory. Now call the subVI, passing in an array of size 5 and 10 for N. The front panel control will allocate space to have a copy of the 5 element array for its display. The indicator will allocate to display a 15 element array. Various terminals on the block diagram may allocate to have buffer copies of the array (use the Show Buffer Allocations tool to see these allocations). So now your VI uses more data than it did before. The subVI will not release that data when it finishes running. Those terminals stay "inflated". If you call the subVI again with a size 5 array, those allocations will be reused for copies of the new array. If you call with a smaller array, then LV will release the memory that it doesn't need. If you call with a larger array, LV will allocate more. If you're running a test VI over and over again with the same inputs, you should see the data size remain constant after the first execution of the VIs because after that point, all the data space that the VI needs is fully allocated. If you're seeing a growth of the amount of memory (which you are seeing), it is because you're processing ever larger arrays or because you're reserving system resources and never releasing them. The common example of this is opening a queue or notifier reference and never closing it. Every Obtain Queue call allocates 4 bytes for a new refnum, and those 4 bytes will only be returned when the reference gets Released. LV will release the reference for you when the VI goes idle, but if you're calling in a loop, you should be releasing the resources manually or they'll just build up. Another common leak is allocating reentrant clones using Open VI Reference that you're never closing. You can force the subVIs to deallocate as soon as they finish running, so they're back in the pristine "just loaded into memory" state by dropping the Request Deallocation primitive onto the diagram. Doing so can reduce the amount of memory that LV uses at any one time, but that generally results in really bad performance characteristics.
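
The refnum-leak pattern described above can be sketched in C++ as an unmatched resource handle. This is a toy analogy only (the table and function names are invented, not LabVIEW's implementation):

```cpp
#include <cstdio>
#include <unordered_map>

// Toy refnum table, standing in for LabVIEW's queue references.
// Each "obtain" allocates an entry; only "release" removes it.
static std::unordered_map<int, const char*> g_refs;
static int g_next = 1;

int obtain_queue(const char* name) { g_refs[g_next] = name; return g_next++; }
void release_queue(int ref)        { g_refs.erase(ref); }

int main() {
    for (int i = 0; i < 1000; ++i) {
        int q = obtain_queue("work");
        // ... use the queue ...
        release_queue(q);  // omit this line and the table grows forever,
                           // just like obtaining queue refs in a loop
                           // without ever releasing them
    }
    std::printf("live refs: %zu\n", g_refs.size());
}
```
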
  9. Please post this to the forums on ni.com so that an AE can check into the problem and file a bug report if one hasn't already been filed. A known workaround may already exist. Generally, for a crash sort of bug, I'd recommend always starting with ni.com since the AEs monitor those forums and have full access to the bug tracking database.
  10. QUOTE (pdc @ Mar 28 2008, 07:54 AM) Please check if this reproduces in LV8.5. If it does, please go to ni.com and file a bug report for this. It's worth investigation.
  11. Swap the order of your numeric and your string in the cluster, and I'll bet the Search 1D Array prim slows down to be a lot closer to your for loop. Currently the primitive is comparing that integer first, and if that doesn't match, it goes on to the next element. Comparing integers is always faster than comparing strings, so it's doing a lot less work than your for loop. No, I don't know of any generic solution within LV. I'm pretty sure that various users have written various public tools to do something like this, but I'm not sure. Perhaps one of them will post here.
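
The point about element order can be made concrete with a short-circuiting comparison: the cheap field is checked first, so most mismatches bail out before the expensive one runs. A C++ sketch, with the cluster modeled as a struct (names are illustrative):

```cpp
#include <string>

// Stand-in for a LabVIEW cluster of {numeric, string}.
struct Rec {
    int         id;
    std::string name;
};

// Comparing the integer first means most non-matching elements are
// rejected without ever running a string comparison.
bool equal(const Rec& a, const Rec& b) {
    return a.id == b.id      // cheap check, usually fails fast
        && a.name == b.name; // expensive check, rarely reached
}
```
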
  12. QUOTE (Norm Kirchner @ Mar 25 2008, 01:23 PM) The reason it has to have focus is because otherwise it is ambiguous which numeric on the front panel you want LabVIEW to decrement.
  13. QUOTE (Yuri33 @ Mar 25 2008, 11:26 AM) Now we're straying into particulars of specific prims... I think the following is correct but I'm not certain. I believe that the Reshape Array has to reallocate in order to have space to store the size of the second dimension as part of the array data.
  14. QUOTE (Daklu @ Mar 24 2008, 02:48 PM) They're just posts in the middle of other discussion threads. Nothing centralized.
  15. QUOTE (rolfk @ Mar 24 2008, 07:43 AM) It would also be possible if the Call DLL node was able to explicitly mark a terminal as "takes a subarray". If the full array was being passed, that would just be the initial pointer, index 0, stride 1, length entire array. But it would only work for calls that expected all of this information. It would be a lot of work for minimal benefit, but it could be done.
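
If such a "takes a subarray" terminal existed, the entry point a DLL would have to export might look something like this C++ sketch. To be clear, this is entirely hypothetical -- no such Call Library option exists:

```cpp
#include <cstddef>

// Hypothetical signature for a DLL entry point that accepts a subarray
// description directly instead of a flat array. A full array would
// arrive as: base pointer, index 0, stride 1, length = array size.
extern "C" void process_view(const double*  base,
                             std::size_t    index,
                             std::ptrdiff_t stride,
                             std::size_t    length) {
    for (std::size_t i = 0; i < length; ++i) {
        double value =
            base[(std::ptrdiff_t)index + (std::ptrdiff_t)i * stride];
        (void)value;  // ... operate on each logical element ...
    }
}
```
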
  16. QUOTE (Tomi Maila @ Mar 22 2008, 11:43 AM) Not at this time. The full array is always exposed to external code. If a subarray is passed to a DLL call, LV will go ahead and allocate a new array that is what the subarray represented and pass the array to the DLL call.
  17. There is a tool in LabVIEW called the "Show Buffer Allocations" tool. You can find it in a VI's menu at Tools>>Profile>>Show Buffer Allocations.... It is a useful tool for doing memory optimizations because it shows little dots every place that the LV compiler made a buffer allocation. Some people call these "copy dots" because people *think* these indicate where LV is making a copy of the data. About 90% of the time, that is accurate. But there are some cases where a "buffer allocation" and a "copy" are not the same thing at all. Today I want to mention one that I haven't posted before. When using arrays, one of the major times when buffer allocations are of interest to programmers, not all buffer allocations are copy dots. Whenever possible, LabVIEW will do an operation on an array by creating a "subarray". If you pay close attention to the Context Help window, you'll sometimes see wires that are of "subarray" type. A subarray is simply a reference to another array, storing an index into that array, a stride, and a length. This is a very efficient way to do some pretty complex operations without actually making new arrays, such as decimate array, split array, and reverse array. That last one returns an index to the last element of the array and a stride of -1. The return of a subarray is only possible when LV knows that no other operation is going to be modifying the value of the array, so it is safe for the reference to the array in memory to be shared. Now, take a look at this diagram. Notice the buffer allocations: The "Split 1D Array" node has two output buffer allocations. A lot of people would think, "Look at those copy dots. That means LV is making two new arrays, one for the first part of the array, and one for the second part of the array." Not true. The buffer allocations are *for the subarrays*. Remember I said that a subarray records a reference to the array, a starting index, a stride, and a length. Those four items of data have to be stored somewhere. The buffer allocation is the allocation to store those four items. It is not a copy of the entire array. The output of the Build Array, on the other hand, is a full array allocation. To see what is being allocated at any given buffer allocation, look at the type of the wire. And don't call them "copy dots." :-)
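
A hedged C++ sketch of what that four-field descriptor could look like, using Reverse 1D Array as the example (last element as the start, stride of -1). The struct and names are illustrative, not LabVIEW's real layout:

```cpp
#include <cstddef>
#include <vector>

// Illustrative subarray descriptor: a reference to another array plus
// an index, a stride, and a length. This is what the "copy dot" on a
// Split/Reverse/Decimate output actually allocates -- four small
// fields, not a duplicate of the array data.
struct SubArray {
    const std::vector<double>* source;
    std::ptrdiff_t index;   // starting position in the source
    std::ptrdiff_t stride;  // step between logical elements (may be < 0)
    std::size_t    length;  // number of logical elements

    double at(std::size_t i) const {
        return (*source)[index + (std::ptrdiff_t)i * stride];
    }
};

// Reverse 1D Array without copying: start at the last element, stride -1.
SubArray reverse_view(const std::vector<double>& a) {
    return {&a, (std::ptrdiff_t)a.size() - 1, -1, a.size()};
}
```
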
  18. Decimate Array is going to blow any custom decimation using a For Loop out of the water performance-wise. Decimate Array is able to operate without creating any copies of data. The new "array" under the hood is a subarray (as you can see by looking in the context help next time you're hovering over an output wire of that function) that stores a starting index and a stride into the original array. The implementations posted later in this thread that use the array prims should be on par with Decimate Array. PS: Since you're looking at optimizations for this diagram, you might be interested in this tidbit. I've been meaning to post it for a while: http://forums.lavag.org/Another-reason-why...ons-t10406.html
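
Under the same illustrative descriptor idea, decimation is just a stride: output k starts at element k and steps by the number of outputs. A minimal self-contained sketch (hypothetical types, not LabVIEW internals):

```cpp
#include <cstddef>
#include <vector>

// Decimate-by-n without copying: output lane k is a view that starts
// at index k and steps through the source in strides of n.
struct Lane {
    const std::vector<double>* source;
    std::size_t start;
    std::size_t stride;

    double at(std::size_t i) const { return (*source)[start + i * stride]; }
    std::size_t length() const {
        return (source->size() - start + stride - 1) / stride;
    }
};

Lane decimate_lane(const std::vector<double>& a, std::size_t k, std::size_t n) {
    return {&a, k, n};
}
```
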
  19. QUOTE (gosor @ Mar 21 2008, 05:20 AM) Oh, I'm pretty sure you're not the only one. In fact, not that I've seen the customer feedback data or anything, but I wouldn't be surprised if NI were working on something exactly like this for an upcoming release. :-)
  20. QUOTE And *why* are you able to get speed and performance from the clones that you can't get from templates? Because they share just about everything with the original master VI. There isn't a separate copy made -- they share the data directly for pretty much everything except the operate data of controls/indicators and the flags for which wires have breakpoints. That includes the tip strips. If we were going to have independent tip strips, we would have to have independent controls to host those tip strips, which means independent panes to host those controls, which means independent front panels, which basically means template VIs. Some readers are now ready to jump in with something like, "or you could store the tip strip in the dataspace and then change the code for the master VI so that tip strips are always looked up in the dataspace instead of being a part of the control." Under that scenario, we've now expanded the size of the dataspace (so that there's more info to clone even for those clone VIs that never show their front panels), and we've included the tip strips in the part of the data that has to download to realtime and other targets (where panels don't exist), etc, etc, etc. The tip strip data belongs with the front panel. The front panel is single sourced for performance, performance that 99% of the time, users greatly appreciate. LabVIEW generally does a pretty good job of hiding the "computer science" aspects of programming from you, but this is one of those times when you just have to accept -- if you want independence, you have to be willing to pay the time expense to make the copy. Or, in the words I once heard it described, "Wishes come true, not free."
  21. QUOTE (Michael_Aivaliotis @ Mar 19 2008, 03:26 PM) Wasn't meant to be. I was trying to be helpful. He wanted a way to instantiate templates without the project window. That's the only way I know to do it. The other possible method is to open the .vit and then do Save As:Copy, but that method doesn't duplicate all the subVIs that may be linked templates that also need to be instantiated. Come to think of it, it's a bit strange to me to even have a .vit in the palettes. I guess it would be useful if you build a lot of templates to be able to drop templated subVI calls, but that doesn't seem to be common.
  22. QUOTE (Justin Goeres @ Mar 19 2008, 10:49 AM) Ohhhh. Gotcha. Yes, that is very different. The solution is to have a parent class that doesn't have the numeric. The parent implements Read/Write Numeric.vi using the narrowest data type -- in this case, double -- but these two VIs do not actually do anything. Then you have two child classes. One of these children has a double and the other has a complex as its private data. You implement Read/Write Numeric on both of these. Now, that only allows you to store double values into the complex field. If you want to use the widest type in the parent -- complex -- then you have to lose data whenever you store into the class that only stores doubles. Does that make sense? What are you trying to build?
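
A loose C++ sketch of that hierarchy: the parent declares do-nothing Read/Write methods in the narrowest type, and each child stores its own representation. The class names are invented for illustration; this is an analogy, not a LabVIEW class:

```cpp
#include <complex>

// Parent: declares the interface in the narrowest type (double),
// holds no numeric data, and does nothing itself.
class Numeric {
public:
    virtual ~Numeric() = default;
    virtual void   write(double) {}
    virtual double read() const { return 0.0; }
};

// Child 1: private data is a double.
class DoubleNumeric : public Numeric {
    double value_ = 0.0;
public:
    void   write(double v) override { value_ = v; }
    double read() const override { return value_; }
};

// Child 2: private data is a complex; only the real part is reachable
// through the parent's double-typed interface.
class ComplexNumeric : public Numeric {
    std::complex<double> value_;
public:
    void   write(double v) override {
        value_ = std::complex<double>(v, value_.imag());
    }
    double read() const override { return value_.real(); }
};
```
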
  23. File>>New... (not the same as File>>New VI). Select Browse and choose your template.
  24. I used to have the high score in both Lightening and Flowers. Both of these games have now been removed, I assume because they have the bug that allows cheating. I just want you all to know that my high scores in Lightening, where I was almost 100,000 points ahead of the next player, and Flowers, where I was about 2000 points ahead, were not achieved by any cheating. I can't help it if I have an obsessive nature when playing games that leads me to play until I can do such things as predict which card will come next or identify the advantages of removing from certain columns of flowers over certain others.
  25. In your picture you say: "Hence, the call will be for parent instead of child" It will not. What would be the point of dynamic dispatching if it did??? The node you're pointing at is Write Directory.vi, which I'm going to abbreviate as WD.vi for the rest of this post. WD.vi takes a parent class as input. Child class data can travel on parent class wires. The data that actually goes into WD.vi is *child* data. That means that if the child has an override for WD.vi -- which ClassB1 does -- then the override will be invoked. If the child does not have its own override -- which is the case for ClassB2 -- then the parent implementation will be used. Now, suppose that somewhere on the block diagram of ClassA:WD.vi you make a call to Read Numeric.vi. If the data on the wire is ClassA or ClassB1, then the invoked VI will be ClassA:Read Numeric.vi. But, if the data is ClassB2, then it will invoke ClassB2:Read Numeric.vi. You object to having to do a To More Specific cast. You should *not* have to do this. If you do, it means something is wrong with your entire design. Here's why: a) By making the output of the first node not be a dynamic dispatch output, you're saying, "The data coming out of this node might be any of these types. From this point forward, I should only be calling functions that are defined on ClassA." The child classes may override those methods, and the overrides will be invoked, but fundamentally you've said that there's no way to depend upon the specific type of the output. b) Any time you find yourself in a situation where you must use To More Specific, there is an extremely high probability that there is a problem in your class hierarchy design. Either you need to add another dynamic dispatch method so that the child classes have a chance to specify custom behavior as part of generic algorithms, or you've got something inheriting from something else that really shouldn't be. To More Specific gets used commonly when the class hierarchy is wrong for a given situation, but for some reason you can't change the hierarchy (perhaps because you have to maintain compatibility with some other system, or perhaps the classes are password protected and you don't have the password, etc.).
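
Those dispatch rules translate directly into virtual calls. A C++ sketch using the same names from the post (ClassA, ClassB1, ClassB2, WD for Write Directory, Read Numeric), purely as an illustration of the rules, not of LabVIEW's implementation:

```cpp
#include <cstdio>

class ClassA {
public:
    virtual ~ClassA() = default;
    // Parent implementations; children may or may not override.
    virtual void wd()           { std::puts("ClassA:WD"); }
    virtual void read_numeric() { std::puts("ClassA:Read Numeric"); }
};

class ClassB1 : public ClassA {
public:
    void wd() override { std::puts("ClassB1:WD"); }  // overrides WD only
};

class ClassB2 : public ClassA {
public:
    void read_numeric() override { std::puts("ClassB2:Read Numeric"); }
};

// Takes a "parent class wire"; the data traveling on it decides what runs.
void call_wd(ClassA& obj) { obj.wd(); }

int main() {
    ClassB1 b1; ClassB2 b2;
    call_wd(b1);       // ClassB1 has an override -> ClassB1:WD
    call_wd(b2);       // ClassB2 does not        -> ClassA:WD
    b2.read_numeric(); // child override invoked  -> ClassB2:Read Numeric
}
```
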