Everything posted by Aristos Queue

  1. QUOTE (PaulG. @ May 7 2009, 01:13 PM) Because the i terminal returns the number of iterations that have completed, and it returns its value during the current iteration. If you want the iteration count after the loop runs, wire the output of the N terminal, not the i terminal. QUOTE (Gary Rubin @ May 7 2009, 12:09 PM) Why does a numeric constant in the BD default to I32, while a control/indicator on the FP defaults to a double? Because most people want double type for the numeric. But it is really nice to drop the BD constant and then, if you need floating point, to type in a value using a decimal point and have the constant automatically change to double type. If it dropped as double by default and you typed 3, we wouldn't be able to assume that meant you wanted an integer. We'd probably do the same thing on the FP except that most people never type into the thing that drops onto the FP... they just leave the value as zero (and you wouldn't want it coercing to a new type every time you typed a value into it while you were debugging!).
  2. RED ALERT. This is a LV crash waiting to happen. This is a very very very dangerous piece of code: QUOTE Let me explain what you have done: You have taken a control reference of one control and used the Type Cast to unconditionally change the type of the wire without changing the value of the wire. You are now calling operations on a cluster refnum that are only defined for a boolean refnum. The Value property itself may work at the moment, but you are one change in LV's source code away from crashing entirely if someone in R&D writes any code that assumes "if the type is this then I can do this" where "this" is an operation not defined for clusters. I don't care how cool you think this is. It is fundamentally unsafe code. Normally if LV crashes, that means there's a bug that LV R&D should be fixing. Hard casting refnums is one of only two ways I know of for users to write code that can crash LV itself without it being a bug (the other being calls into a badly written DLL). You *can* get lucky on individual functions and not crash, but even then you cannot be sure that you are getting back meaningful results. Unfortunately, sometimes it doesn't crash and sometimes the results are meaningful, which can only encourage you to do this sort of thing more. If you want to cast refnums, you should use the To More Specific and To More Generic primitives. If those prims give you broken wires for a given cast, then that is a cast YOU ARE NOT MEANT TO BE ABLE TO DO. The Type Cast allows this hard, unrestricted casting because there are cases where you are only using the refnums as lookup table numbers or something like that. It is not meant for changing types and then actually invoking methods.
  3. QUOTE (jpdrolet @ May 6 2009, 10:50 AM) But the association is 0 !== 1 as in "zero not equals one".
  4. You can also use the Semaphore palette to protect the calls to your DLL. Wrapping the DLL call in a subVI is usually the easier solution, but if you're calling different functions within the DLL, then sometimes using the Semaphores is simpler (see the sketch after this list).
  5. QUOTE (Mark Yedinak @ May 5 2009, 12:59 PM) Or delegation... an implementation of Delegation in LabVIEW is available here.
  6. That's a flaw in our documentation scheme. All four of the "Available with..." fields in the help apply ONLY to methods and properties of VI refnums, but those four fields get printed for help for all of our properties and methods. Ignore them. All VIs In Memory correctly returns all the VIs in that application instance, including standard VIs, control VIs, poly VIs, global VIs, and custom probe VIs. I don't know what it does with the pseudo VIs (such as the clipboard VI or the proxy VIs that are created when one app instance makes a remote call to another app instance). Those aren't really VIs as customers know them, but they have attributes that make them look like VIs to many of LV's underlying subroutines, and they do appear in the VI Hierarchy Window, so maybe they're in the list too, but I can easily see them not being there. [LATER EDIT] I talked with our tech writers. This bug in our documentation process has already been reported and will be fixed in the next version of LabVIEW.
  7. QUOTE (ShaunR @ May 4 2009, 03:17 PM) When it needs a copy, it makes a copy. When it does not it does not. An easy example -- if you unbundle data from one object and bundle it into another object, clearly a copy is made along the way. If you unbundle using the Inplace Element Structure, modify the value and then put the value right back into the object using the bundle node on the other side of the structure, no copy is made.
  8. QUOTE (ShaunR @ May 4 2009, 04:19 AM) Ok, he has a large data structure somewhere. But why is that structure being duplicated? Is he legitimately running out of memory because he is updating that structure using local variables? Has he drawn some wire branch such that LV cannot inplace one copy of the data all the way through his program?
  9. QUOTE (ShaunR @ May 3 2009, 03:37 AM) Setting aside the bug with LV classes, the primitive DOES work the way it says in the LV help. That's the problem. A "slight performance hit" would be putting it mildly. For very simple VIs, that primitive can introduce slowdowns of hundreds of percent, and for complex VIs, it can be significantly more. If a user is having problems with too much memory usage in his/her LabVIEW application, there are many things I would suggest he/she attempt before trying that primitive. It is, in fact, dead last on my list of solutions to consider. Don't misunderstand me -- there are situations where it is the right solution. But those situations are rare. QUOTE His problem is plain to me. Large data set remains in memory when not needed. I'm guessing that invoking the properties and methods also creates copies of that data which remain in memory. And although you might be correct in your diagnosis, I would like to hear confirmation of that from Ernest. And even if you are correct in that there is a large data allocation somewhere that is being copied too often, you have no evidence that it has anything to do with LV classes other than that it happens to be occurring while running one of the member VIs. Problems with large waveforms, strings or arrays can appear in many places. I get multiple bug reports each week assigned to my team that something is wrong with LV classes simply because there's a member VI somewhere in the VI hierarchy without anyone doing any analysis of where the problem is truly coming from.
  10. QUOTE (ShaunR @ May 2 2009, 09:50 AM) A) Known issue: in LV8.2, 8.5 and 8.6, the Request Dealloc prim does not affect allocations of LV class objects. So any objects allocated in the dataspace of VIs -- not just member VIs, but any VIs -- will not be deallocated by the prim. That does not mean they are leaked. We simply leave the allocations in place. This bug will be fixed in the next release of LV. (It is not listed on the official Known Issues page because it was discovered relatively recently. This prim is very rarely used, as demonstrated by the fact that it took 3 versions of LV before anyone noticed it wasn't having any effect.) B) Using this primitive often creates more problems than it solves performance-wise. I have a strong suspicion that it would not be desirable to use this even if it did affect LV class instances. Besides, Ernest already found some manual mechanism that he tried... I'm curious what that mechanism is. I think we need to know that before we can recommend solutions to the problem he is seeing. If he really does want the entire VI to leave memory, for example, the Request Dealloc prim is not the solution.
  11. Step 1. Use Edit >> Select All on the front panel to select all the controls/indicators/free labels/etc. Step 2. There's a ring on the front panel just to the right of the Run Arrow that is the font ring. Click it and select a larger font size. All the selected objects will get larger. ALTERNATIVE: Drop all your controls/indicators from the System palette. The controls in that palette are tied to the operating system settings. I assume that the default font on a touch screen is fairly large, so when you load those VIs on a touch screen, the text will be larger. Note that system controls also change color based on your OS settings.
  12. Hello, Ernest. I am the lead architect for the OO features. I do not understand your questions, in particular these two lines: QUOTE QUOTE The only way I could found to free the memory was to do it manually. In the first, if a VI calls a subVI, the subVI typically stays in memory. Why do you think there would be any difference in behavior if the subVI is a member of a class? Could you be more explicit about the unexpected behavior you are observing and why it is unexpected? In the second, LV doesn't have a "delete" or "freemem" command, so I am not sure to what you are referring. Could you describe this manual method that you're using?
  13. All the internal benchmarks that I've done show a substantial speed improvement for all queue operations from 7.1 to 8.5. I didn't redo benchmarks for 8.6. There is a new queue primitive in 8.6, but to the best of my memory none of the functionality of the existing primitives changed. QUOTE (Gary Rubin @ May 1 2009, 08:45 AM) On Windows (and I believe all the desktop platforms) the timing cannot be any more exact than milliseconds. There just isn't a more precise timer available. The fact that the time reported is zero means that it is taking less than 1 millisecond to execute, so the sum of all iterations is still zero. That's why it makes sense to benchmark in aggregate but not individual operations (see the timing sketch after this list). If you want more precise timing, you need LV Real Time running on a real-time target. QUOTE (PJM_labview @ May 1 2009, 11:43 AM) I have had a lot of questions about the profiler since LV 8.0. I believe something was changed somewhere so that it does no longer behave quite the same way that in previous LV version I don't believe it had any significant changes since at least LV 6.1 other than displaying VIs from multiple app instances, added in LV 8.0. I could be wrong -- it isn't an area that I pay close attention to feature-wise. QUOTE (jzoller @ May 1 2009, 02:18 PM) (Edit: thinking more on the above, unless the queue has a lock-on-read, it seems unlikely. Sorry.) I'm not sure if you're asking about this, but, yes, there is a mutex that must be acquired by the thread when it reads the queue.
  14. QUOTE (Diego Reyes @ May 1 2009, 05:52 PM) I'm not commenting on the pros/cons of the two architectures. I just want to point out that if you choose the 2nd idea, you don't have to worry about closing control references. Control references all close when you close the VI that owns those controls. You have to be careful to close any App references that you open and close any VI references that you open, but not control references (the same applies, if you use scripting, for all the block diagram references). Having an App reference open will keep LV from quitting. Having a VI reference open will keep the VI from leaving memory. But having a control reference open won't keep anything in memory. Those go stale as soon as the VI unloads.
  15. QUOTE (Black Pearl @ Apr 29 2009, 06:48 AM) "Run when called" literally means "when called as a subVI." If it is run as a top level VI (i.e., when you use the Run method of a VI), it doesn't open the FP. If you run it using the Call By Reference node, that calls as if it is a subVI, but it does not sound like you want the rest of the behavior that comes with that. So, yes, the right way for a top-level VI is to use the FP.Open method.
  16. The property node does exist, but it doesn't work in the runtime engine. You can get the version in a very strange way... take an instance of the class and flatten it to a string. Then read this post for instructions on how to get the version number out of the flattened string. That's the only mechanism I can propose that will work in the runtime engine. QUOTE (Aristos Queue @ Apr 28 2009, 01:37 PM) I thought of another option... you could read the XML file directly. The class version number is stored in plain text in the .lvclass file.
  17. QUOTE (normandinf @ Apr 28 2009, 12:13 PM) I'm surprised you couldn't hear my cry of frustration back in 2006 when I realized that someone had coupled the library interface to the project interface... unfortunately, we're tied to it now and there's no way to create a separate refnum hierarchy independent of ProjectItem without breaking all user VIs. LV Classes have tried to expose functionality for the runtime engine that we think is useful in various utility VIs. The most important two are in the palettes (one appeared in 8.5, the other in 8.6). You can find others here: vi.lib\Utility\LVClass. Please note that the ones not in the palettes may be removed/refactored/renamed between LV versions. This is basically my hiding place for lvclass functionality that multiple customers ask for but for which I don't have time to make a nice, stable, documented interface.
  18. QUOTE (Jim Kring @ Apr 27 2009, 05:49 PM) Yep.
  19. Yair posted an example where events were fired A then B but the event structure was handling them B then A, which contradicts the stated behavior of the event structure that it handles events in the order they occur. QUOTE (Yair @ Apr 27 2009, 02:37 PM) If you look at the block diagram, you'll see this: You'd expect that the user event would be handled before the control value change event. I followed up on this. It is "regrettable but correct behavior." In other words, not intended behavior but a logical result from the rules of the system, so not really a bug. When events fire, they are placed in a queue. They record a timestamp of the time they occurred. The timestamp has millisecond resolution. There are separate event queues for dynamically registered events and statically registered events. In Yair's demo, the user event is registered dynamically and the value change is registered statically, so there are two queues. The event handler looks at each of its queues and takes the earliest timestamp. In this case, the timestamps are identical and the statically registered event wins (a small model of this two-queue behavior appears after this list). Workarounds: 1. Dynamically register for both the user event and the value change event. [This is the preferred workaround.] 2. Add a millisecond delay after the firing of the user event before firing the value change event to guarantee different timestamps. [This is a hack.]
  20. QUOTE (Yair @ Apr 27 2009, 02:37 PM) Regrettable but not really a bug because of... well, I'll post the response in a new topic. Response posted here: http://forums.lavag.org/Out-of-order-event...-of-t13913.html
  21. QUOTE (ShaunR @ Apr 26 2009, 11:53 PM) Loop A reaches a Dequeue node. If there is data already in the queue, it gets the data out and continues running the rest of the diagram. If there is no data, it goes to sleep until data is supplied by another section of the diagram running an Enqueue node. (See the sketch after this list.) QUOTE And (just a point to fill my knowledge gap) Are you saying that 2 while loops (or 50 even) will run in their own separate threads? Generally, yes. Any parallel node in LV (meaning any two nodes where the outputs of one do not flow through wires to inputs of the other) will run in parallel. We only spawn N threads where N is "number of CPU cores x 4". This includes while loops. If the number of parallel nodes exceeds the number of threads, the threads will use cooperative multitasking, meaning that a thread occasionally stops work on one section of the diagram and starts work on another section to avoid starvation of any parallel branch. If you have two parallel nodes and you have two available CPU cores, each node will execute truly in parallel, one on each core. This is the key reason why LV is such an incredibly powerful language on modern computers: from the raw syntax of the language, parallelism can be identified and acted upon without requiring explicit coding from the programmer (whereas in other languages the programmer must manually create calls to Fork and Join and manage all of the communication between those). QUOTE There's a big difference between events and queues. (I think your comparison is to events as the rest of your post is about them although Aristos is talking about queues and notifiers I beleive). Events can happen in any order an at any time I think I heard someone say before Events are fired from the user clicking/typing or from one of four nodes: the user event firing node, the Value (signaling) property node, the FP.Close invoke node and the AppExit node. I believe this is a comprehensive list. So the events can be *generated* in any order. But that's no different from saying any two parallel branches can Enqueue into the same queue in any order. When the events are *handled* they are always handled in the order they were generated. In the same way, data is always dequeued from the queue in the order it was enqueued. QUOTE API? Thats windows speak...lol. You weren't a C++ programmer in an earlier life were you? API is a pretty common term in any programming language when you have different modules or libraries. 21 years ago, when I started programming before Windows even existed, the term API was used for the C libraries, as in "I created a new .c file that runs faster than the UNIX version, but it has the same API."
  22. QUOTE (jdunham @ Apr 26 2009, 12:29 PM) Let me assure you -- it really does go to sleep. There is no CPU load at all from a sleeping queue prim. There is no polling for data. Thread A executes and goes to sleep on a queue. Thread B runs free. When Thread B puts data into the queue, it wakes up thread A. In fact, it doesn't even put the data into the queue. It puts it directly into the output of Thread A and A wakes up and starts running as if it had just gotten the data out of the queue.
  23. QUOTE (Yair @ Apr 25 2009, 12:48 PM) The constant's value is always zero. That's the key thing to realize -- if you change a control whose value is non-zero into a constant, for refnums, you always get zero. QUOTE All this circumvents the main issue I and others brought up - should such a thing be asked, at any level of certification? I would say that this is both arcane and unreliable knowledge, as NI could change it, and doesn't have any real value, other than as a trivia question (which as we can see needs to be checked before it can be answered correctly). Ideally, any advanced developer should know to use the Not a... primitive, although I'll admit to using the type-cast-and-compare-to-0 method myself in production code in the past. It better not be something that NI could change. That's equivalent -- to me -- to saying you can't rely on the behavior of the Add primitive because LV might decide to make it do subtraction. These behaviors are exactly the sort of behaviors that I consider "language" as opposed to "editor". Both "what does the Not a... prim do" and "what is the value of a refnum constant" ought to be things that a LV programmer can absolutely depend on continuing to work in future LV versions. If a change were to occur, this is the sort of thing that would require (a) significant mutation code such that existing VIs continue to operate as they did before, (b) a change to the diagram drawing such that the new primitive or the non-zero refnum constant looks substantively different from the existing prim or constant, (c) "save for previous" functionality, and probably (d) some way to continue to drop the previous functionality in the new LV version. I did botch the original post, but that was slop on my part, and it shouldn't factor into whether or not this is defined LV behavior. Runtime functionality cannot be "just trivia." The value of the constant is in the same category as the behavior of an output tunnel of a For Loop that executes zero times. LV users' ability to read the diagram and understand what was happening would be severely impacted by changes in that functionality, in addition to the runtime logic changes. The functionality of the primitive falls in the same category as knowing that the "Search 1D Array" primitive does a linear search, not a binary search, and knowing how "Sort 1D Array" behaves on a heterogeneous array of LV classes. These are runtime details of these functions that have to be spec'd out and nailed down explicitly. For both of these categories, any variation in that functionality from NI has to be compensated for by mutation code between LV versions and with changes in draw style on the diagram to highlight the difference.
  24. QUOTE (Yair @ Jun 27 2008, 06:33 AM) I realized the correct answer to this: The name of the class can be a whole string of PStrs connected together when the class is itself inside an owning library. So if the name of the class is "MyLib.lvlib:InnerLib.lvlib:MyClass.lvclass", then the string would be WXMyLib.lvlibYInnerLib.lvlibZMyClass.lvclass, where W = the length of the whole string and X, Y and Z = the lengths of their respective substrings. (A small parsing sketch of this layout appears after this list.)
  25. Write linker info is not recommended unless you are one of two developers on the LV R&D team and even then only if you are writing pretty much exclusively to VIs that don't link to anything "special" -- by which I mean anything other than another VI. (So no shared variables, libraries, classes, elemental I/O nodes, mathscript files, state charts, xcontrols, properties of xcontrols, xnodes, DLLs, pink hearts, orange stars, green clovers or purple horseshoes.) It exists predominantly to read a VI's existing linker information and then make a few very specific changes to that info and then write it back. It is not a panacea for cross-link repairs, app building or updating relative links.
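
A text-language sketch of the Semaphore pattern from post 4 above. LabVIEW code is graphical, so Python stands in here; the "DLL" is a stand-in function and all names are invented for illustration. The point is simply that every call into the non-reentrant library acquires the same semaphore first, so two parallel loops can never be inside the library at the same time.

import threading
import time

dll_semaphore = threading.Semaphore(1)    # plays the role of the Acquire/Release Semaphore VIs

def unsafe_dll_call(name):
    # Stand-in for a call into a non-thread-safe DLL.
    time.sleep(0.01)
    print("finished", name)

def call_function_a():
    with dll_semaphore:                   # Acquire Semaphore
        unsafe_dll_call("FunctionA")      # protected region; released on exit

def call_function_b():
    with dll_semaphore:
        unsafe_dll_call("FunctionB")

# Two "loops" running in parallel, as two LabVIEW while loops would.
threads = [threading.Thread(target=call_function_a),
           threading.Thread(target=call_function_b)]
for t in threads:
    t.start()
for t in threads:
    t.join()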
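
A sketch of the "benchmark in aggregate" advice from post 13, again with Python standing in for LabVIEW. A single queue operation completes far below the resolution of a millisecond tick counter, so timing one call reports zero; timing many calls between two timestamps and dividing gives a usable per-operation figure. The iteration count and the queue used here are arbitrary choices for illustration.

import queue
import time

q = queue.Queue()
N = 1_000_000

# Timing a single operation with a millisecond-resolution counter reports 0.
start_ms = int(time.monotonic() * 1000)
q.put(0)
q.get()
print("one enqueue/dequeue:", int(time.monotonic() * 1000) - start_ms, "ms")

# Timing N operations in aggregate and dividing gives a meaningful average.
start_ms = int(time.monotonic() * 1000)
for i in range(N):
    q.put(i)
    q.get()
elapsed_ms = int(time.monotonic() * 1000) - start_ms
print("average per enqueue/dequeue:", elapsed_ms / N, "ms")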
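
A small model of the two-queue behavior described in post 19. This is an illustration of the rules as stated there, not LabVIEW's actual implementation: dynamically and statically registered events sit in separate queues, each event carries a millisecond timestamp, the handler takes the earliest timestamp across both queues, and the static queue wins ties. When two events fire within the same millisecond, the one fired second can therefore be handled first.

from collections import deque

dynamic_q = deque()   # dynamically registered events (e.g. a user event)
static_q = deque()    # statically registered events (e.g. Value Change)

def fire(q, name, timestamp_ms):
    q.append((timestamp_ms, name))

def handle_next():
    # Take the event with the earliest millisecond timestamp; static wins ties.
    if dynamic_q and (not static_q or dynamic_q[0][0] < static_q[0][0]):
        return dynamic_q.popleft()
    if static_q:
        return static_q.popleft()
    return None

# User event A fired first, Value Change B fired second, in the same millisecond:
fire(dynamic_q, "User Event A", 100)
fire(static_q, "Value Change B", 100)
print(handle_next())   # (100, 'Value Change B') -- handled first despite firing second
print(handle_next())   # (100, 'User Event A')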
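
A sketch of the Dequeue behavior described in posts 21 and 22, using Python's queue module as a stand-in for LabVIEW's queue primitives. The consumer's get() call blocks (it sleeps without polling or burning CPU) until the producer's put() supplies an element, mirroring the "goes to sleep until data is supplied by an Enqueue node" behavior.

import queue
import threading
import time

q = queue.Queue()

def consumer():           # "Loop A" containing a Dequeue Element node
    while True:
        item = q.get()    # sleeps here whenever the queue is empty
        if item is None:  # sentinel value used to stop this sketch
            break
        print("dequeued", item)

def producer():           # "Loop B" containing an Enqueue Element node
    for i in range(3):
        time.sleep(0.5)   # the consumer is asleep during these gaps, using no CPU
        q.put(i)          # wakes the consumer
    q.put(None)

t = threading.Thread(target=consumer)
t.start()
producer()
t.join()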
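
A parsing sketch of the layout described in post 24: the qualified class name is a Pascal string (one-byte length prefix) whose contents are themselves a sequence of Pascal strings, one per owning library plus the class itself. It assumes the outer length counts everything after it, including the inner length bytes, and single-byte lengths as in the post; it illustrates the described layout only, not a general reader for LabVIEW flattened data.

def flatten_qualified_name(components):
    # W + (X name1 Y name2 Z name3 ...)
    inner = b"".join(bytes([len(c)]) + c.encode("ascii") for c in components)
    return bytes([len(inner)]) + inner

def parse_qualified_name(data):
    total = data[0]                  # W: length of everything that follows
    names = []
    pos = 1
    while pos < 1 + total:
        n = data[pos]                # X / Y / Z: length of one component
        names.append(data[pos + 1:pos + 1 + n].decode("ascii"))
        pos += 1 + n
    return names

flat = flatten_qualified_name(["MyLib.lvlib", "InnerLib.lvlib", "MyClass.lvclass"])
print(flat)
print(parse_qualified_name(flat))    # ['MyLib.lvlib', 'InnerLib.lvlib', 'MyClass.lvclass']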