Everything posted by Aristos Queue

  1. This is correct behavior, not a bug. Read here: http://forums.lavag.org/Rings-and-Enums-ca...l&pid=33648
  2. My 15" Mac PowerBook has served me spectacularly well for over a year. I do develop VIs on that laptop, including smallish projects. I don't have cause to build gigantic apps in G, so I can't tell you if it scales well, but I've been satisfied. I do find myself wishing for a more direct way to pull the Project window to the front. The longer I work on a laptop, the more I want a "floating window" option for the project window, or a shortcut key for bringing it to front. But other than that, no complaints.
  3. Your article does not specify which LabVIEW versions can create these corruptions. There were two corruptions known in LV8.2, both of which were fixed in 8.2.1. There was one corruption associated with 8.5 that was fixed in 8.5.1. I do not have any open bug reports of any corruptions since then. Please edit your article to specify which versions of LabVIEW are capable of creating these corruptions so that future readers don't continue to wonder if these are still issues. It's irresponsible, in my opinion, to post an article such as this one without providing the version information. This article will still be available for reading -- and thus indexed by search engines -- years from now. Without a version number, this article becomes a source of FUD. Your article opens with "a recent post on LAVA made me realize that this may still be useful." I'd just like to point out that the recent post on LAVA that you cited explicitly mentions that, no, he is not using LV Classes. If you know of any way by which the project and/or class files can become corrupted that is not addressed by LV8.5.1, please report it ASAP so it can be fixed.
  4. QUOTE (neB @ May 15 2008, 02:18 PM) Actually, there isn't any claim that these two implementations are doing the same work. They are doing the same job. That example is a highlight of two different architectures. There are several performance accelerations possible in both implementations, but those optimizations obscure the behavior of the code. The efficiency of the implementations is not something I've ever dug into. I'll take a look at the memory leak claim sometime next week. [LATER] Ok, I couldn't resist digging into the memory leak tonight. I'm not seeing it. I tried both LV8.2 and 8.5. But I am seeing something that might make you think you're seeing a memory leak. Run the VI once. It allocates memory. When it finishes running, it deallocates some memory, but does not return to its initial amount. Run the VI again. It allocates more memory than it did the first time. Then it deallocates some. Repeat a few times. After a few runs, you'll reach an equilibrium state where the amount it starts with is the same as the amount it finishes with. So, no memory leak. But the curious behavior deserves some explanation. Here's what I'm fairly certain is happening: LV classes save memory by having only one copy of the default value of the class in memory. Any instance of the class that is the default value just shares that one copy. So if a terminal gets a non-default value, we allocate a new space in memory to hold that value. We don't bother deallocating the terminal once we have bothered to allocate it (if we paid for the effort of allocating it, we might need it again the next time the subVI is called). Since this VI has random input, not every code path is exercised on every execution, so on successive executions there will be some terminals that get allocated for the first time. Eventually, all the code paths have been allocated, and we reach equilibrium. (A minimal sketch of this allocation pattern appears after this list.)
  5. QUOTE (Dan Bookwalter @ May 14 2008, 03:12 PM) As I've mentioned before: a VI that does dynamic dispatching is just a plain VI with a mark on its conpane. The Control VI or the Global VI -- these are special types of VIs. But there is no "dynamic dispatch VI" type. To the best of my knowledge, there are only two places where a dynamic dispatch VI cannot be used but a static dispatch VI can: as one of the VIs inside a polymorphic VI, and as the VI called by a Call By Reference node.
  6. QUOTE (PaulG. @ May 13 2008, 01:12 PM) What if the DLL is a LV-built DLL? Do the VIs in the DLL count as a driver?
  7. QUOTE (jasonw @ May 13 2008, 08:40 AM) This is a scripting property. It is unavailable in LV8.0 or later unless you have a scripting license, which is not for sale. It is available in earlier LV versions only with undocumented config tokens.
  8. QUOTE (crelf @ May 10 2008, 11:46 AM) Good point. As a "cast of thousands", the avatars are a good idea. As just a handful, not such a good idea. I withdraw my suggestion.
  9. Not that I know of. The 3D picture control can load ASE, VRML, and STL geometries. I don't find anything in the online help about OBJ files. There might be a third-party library written to display such files. You might search the web for such a thing.
  10. QUOTE (neB @ May 9 2008, 04:20 PM) Can you make the heads be avatars?
  11. There's a config token for this. In your config file put: MaxHelpDescLength=10000 or whatever length you want. This sets the maximum number of characters that will be displayed before truncation in the Context Help. If set to a non-positive number (e.g., -1), there is no maximum. If set to 50 or less, the maximum is 50. Of course, this token won't be set on other machines that you install onto. If you're making a built app with Context Help, you can put this token into your app's custom config file. If this is a toolkit that you're building, your readme file may need to tell your users to add the config token. ... assuming, of course, that you want Context Help that long, which is highly unlikely. If you've got that much text, it probably belongs in the help for the control/indicator/VI. (A sketch of the config file lines appears after this list.)
  12. Today, I am working from home. I have my Mac Pro laptop, a working wireless connection, a stack of peanut butter sandwiches, and a stable build of an unreleased version of LabVIEW. I am sitting under a shade tree in the back yard, on a day in the low 80s (Fahrenheit), with an ever-so-slight breeze, in a comfy chair. I have my shoes off and resting in the cool, green grass. Honestly, I don't think paid work gets better than this.
  13. QUOTE (Tomi Maila @ May 8 2008, 09:44 AM) Acknowledged. :-)
  14. QUOTE (george seifert @ May 7 2008, 07:53 AM) As mentioned in another post, no, a single processor can only do one thing at a time. But if one VI does a serial "add add add" and another VI does a serial "multiply multiply multiply", the processor might very well execute "add multiply add add multiply multiply" or any other interleaving. The VIs as a whole do run concurrently, though the individual nodes are executed one at a time.
  15. Are there any objections from you if DevDays presenters in other cities use material from your presentation? If not, I'll add it to my "list of resources for presenters" that I keep getting asked for.
  16. If you have a single processor and you start two VIs running, they will run at the same time, with the OS swapping threads back and forth between them. If you have a dual processor and you start two VIs running, hopefully your OS is smart enough to give one thread to each CPU and just let them run. If you have a single processor and you start one VI running with two parallel sections, they will run at the same time, with the OS thread swapping back and forth between the sections. If you have a dual processor and you start one VI running with two parallel sections, they will run at the same time and hopefully your OS gives one section to each CPU. More specifically, if you have N processors and you start M VIs running with K parallel sections among them, LV will spawn N*4 threads and pass those threads out to as many parallel pieces of code, on the same or disparate VIs, as it can. If K > N*4 (i.e. there are more parallel sections than threads), then as each thread finishes a "clump" of code (decided by the LV compiler) it will pick up another clump that hasn't had processor time, making sure no section starves for processor attention. All of this without you having to break a sweat figuring out when to do thread spawning, joining, and synchronization. It is the single most beautiful aspect of LabVIEW and dataflow programming, and it is a level of thread scheduling that virtually no other programming language can achieve without significant input from the programmer. As for timed loops and determinism, it is my understanding that if you have N processors, you can have up to N timed loops and LabVIEW Real-Time will keep them deterministic. (A rough sketch of this pooled scheduling appears after this list.)
  17. I'm surprised that I've never seen any art that combines "dataflow" with "LAVA". I've done several pieces for NI presentations that are LV primitives in rivers. Seems like G code flowing down the side of a volcano would be a natural graphic for LAVA.
  18. The check for NaN and the check for precision are interesting ideas. The precision test seems the most reasonable because otherwise, why pick 0.2 as your split? It's a fairly arbitrary value.
  19. Check out the guest's hardhat. http://www.youtube.com/watch?v=OKO0IpPHnSw I've got no idea how you embed the video into a LAVA post, by the way.
  20. QUOTE (netta @ May 5 2008, 05:08 PM) Looking at the code, there is probably an upper bound around 2^30 VIs, but I don't see anything that should limit it below that. I have seen 8000 VIs in a project. You may be running out of memory on your system. Save As is going to require all VIs to be loaded into memory so that all the saved paths (such as paths to subVIs) can be redirected to the new locations.
  21. QUOTE (MartinGreil @ May 2 2008, 04:43 PM) <snip> QUOTE The LabVIEW generated code would be much more efficient than using a SubVI. The above assertions can all be said to be "sort of" true insofar as it depends greatly upon how the subVI is written. They are not true of all subVIs, even all reentrant subVIs, under all conditions. Specifically, I'm not convinced they would have to apply to the specific subVI under discussion if it were written correctly. [Attached image: http://lavag.org/old_files/monthly_05_2008/post-6703-1209759395.jpg]
  22. QUOTE (Omar Mussa @ May 2 2008, 03:42 PM) Hm... I don't have any CARs on anything like that at this time. If you can replicate it, please post it on ni.com so it gets CAR'd.
  23. QUOTE (Omar Mussa @ May 2 2008, 03:29 PM) How would that fix the problem? We're not talking about saving or not saving. We're talking about whether LV should *ask* you to save before the item leaves memory. This is a question of calculating whether the class will be leaving or not. For running VIs, we can calculate which ones will be leaving memory. For a class, we cannot make that calculation until after the VIs are halted *and* already removed from memory. So those VIs have to have already asked about their unsaved changes before we even know if the class will be going or staying.
  24. QUOTE (PaulG. @ May 2 2008, 12:23 PM) It's that word "sometimes" that is the key. :-) I think the most important statement was "a screen that ships without a mouse ships broken." Because after you watch that episode of "Lost", you probably want to go Google some bizarre sequence of numbers or figure out the significance of a strange symbol they found somewhere on the island. Or write to the authors and suggest that next episode, they should make a harder puzzle for the audience than merely the sequence of the masses of each of the planets in the solar system times the atomic weights of the first nine elements on the periodic table. The small bits of feedback are in many ways more important than the major projects, and television misses even that small bit of feedback. And if the content doesn't justify the ads, then you will go work on that VI. And you never know... a running LV diagram might just show up in an episode of Lost. :-)
  25. I was asked this question today in e-mail, and I figured others might like to know the answer. Question: Why do LV classes often ask to save after everything else has asked to save? If I close a VI's panel, I'll frequently get a Save Changes dialog for the VI and its subVIs, and then a second dialog box for LabVIEW classes. Why can't you offer just one Save Changes dialog for everything that is leaving memory together? Answer: When you close a VI, we know all the subVIs that will be unused as a result of closing that VI, so we can ask about them all at once. Classes, however, cannot leave memory until all of their data instances have left memory, and we can't know whether closing any given VI will actually close out the last instance of the data. A VI might not have any reference to the class in its code, but it might still have an instance of the class stored in some variant somewhere. If all VIs were idle, we could search every terminal value and every control/indicator, check inside every variant, look in the VI's tag table and a couple of other places where VIs store data, determine how many instances of the class will leave with this set of VIs, and see whether that accounts for all the instances of the class remaining in memory. Then we would know that the class is leaving memory with the VIs, and we could ask about the class in the same Save Changes dialog as the other VIs/libraries. But if *any* VI is still running, the number of instances can be constantly fluctuating. While VIs are running, data can be hiding in impossible-to-search locations, such as a queue or a notifier, and these extra spaces only get disposed of when the VI goes idle. And a running VI might create new instances in locations that we already checked. So it is basically impossible to count instances while the VIs are running. Since we don't abort running VIs until after we have asked whether or not to save the changes, we can't yet know if the class will be leaving memory. I said that if all the VIs were idle, we could do the search for data. But that search is extremely slow, and since it doesn't work in the case where any VI is running, we just don't bother doing it in any case. Thus, when you close a VI, the classes frequently end up getting their own Save Changes dialog box. The only time we can fold classes in with everything else and offer a single Save Changes dialog box is when you're closing the project. In that case, because every VI in the project will be closed, we know that all the data instances of the class will be disposed of, without having to account for them one by one. (A short sketch of this decision appears after this list.)
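
To illustrate the allocation behavior described in item 4, here is a minimal Python sketch. The terminal names, the number of terminals, and the 50% chance that any given code path runs are illustrative assumptions of mine, not LabVIEW's actual implementation; the point is only the shape of the memory curve: allocation happens on first use, nothing is deallocated afterward, and the total levels off once every path has been exercised.

    import random

    class SubVITerminals:
        """Illustration only: lazily allocated per-terminal data spaces."""

        def __init__(self, names):
            self.names = names
            self.buffers = {}  # terminal name -> data space allocated for it

        def run_once(self):
            # Random input means only some code paths execute on any given run.
            for name in self.names:
                if random.random() < 0.5:
                    # Allocate on first non-default use; keep it for reuse later.
                    self.buffers.setdefault(name, bytearray(1024))
            return len(self.buffers)

    terminals = SubVITerminals(["a", "b", "c", "d", "e"])
    for run in range(1, 9):
        print("run %d: %d terminals allocated" % (run, terminals.run_once()))
    # The count climbs for the first few runs, then reaches equilibrium:
    # no leak, just one-time allocations that are retained.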
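
For item 11, this is roughly what the relevant lines of the config file would look like once the token is added. I'm assuming the standard labview.ini layout here; for a built application, the section name is typically the application's name rather than [LabVIEW], and 10000 is just an example value.

    [LabVIEW]
    MaxHelpDescLength=10000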
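
Item 16 describes a fixed pool of threads sharing however many parallel "clumps" of code exist. Here is a rough Python sketch of that scheduling idea; the 4-threads-per-CPU figure comes from the post, but treating clumps as plain functions pulled from a queue is my simplification, not how LabVIEW's execution system is actually built.

    import os
    import queue
    import threading

    def run_clumps(clumps, threads_per_cpu=4):
        """Run more parallel work items than there are threads in a fixed pool."""
        pool_size = (os.cpu_count() or 1) * threads_per_cpu
        work = queue.Queue()
        for clump in clumps:
            work.put(clump)

        def worker():
            while True:
                try:
                    clump = work.get_nowait()
                except queue.Empty:
                    return          # no clumps left; this thread is done
                clump()             # finish one clump...
                work.task_done()    # ...then immediately pick up another

        threads = [threading.Thread(target=worker) for _ in range(pool_size)]
        for t in threads:
            t.start()
        for t in threads:
            t.join()

    # Example: 100 parallel sections, far more than the pool size, all get served.
    run_clumps([lambda i=i: print("clump", i, "done") for i in range(100)])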
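
Finally, the decision described in item 25 comes down to a simple rule. This is a hypothetical Python rendering of the reasoning in that post, with made-up names, not LabVIEW's source:

    def class_shares_save_dialog(closing_whole_project, any_vi_running):
        """Can a class be asked about in the same Save Changes dialog as the VIs?"""
        if closing_whole_project:
            # Every VI will close, so every data instance will be disposed of:
            # the class is certainly leaving memory and can be asked about now.
            return True
        if any_vi_running:
            # Instances may be hiding in queues, notifiers, or freshly created
            # data, so they cannot be counted; the class gets its own dialog later.
            return False
        # Even with all VIs idle, the exhaustive search for instances is very
        # slow, so it is skipped and the class still gets a separate dialog.
        return False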