Everything posted by Aristos Queue

  1. QUOTE(van18 @ Feb 28 2008, 04:14 PM) Yes, that's correct.
  2. Oh, this is a fun one to answer. Short version: The LV Basics answer is "there is no way to know," because in a dataflow language there is no guarantee which branch of a parallel split will execute first.

Long version: QUOTE(crelf @ Feb 28 2008, 03:00 PM) No. "VIs will execute when they have all of their inputs satisfied" should be "VIs CAN execute when they have all of their inputs satisfied." That is, they are then placed on the run queue for the next available thread to execute. There's no telling whether the next available thread will grab the top Tone Measurements or the Filter Signal, and, if it grabs Filter Signal, there's no way to know that it won't go ahead and execute the bottom Tone Measurements before going back to do the top Tone Measurements. Further, if there are multiple threads in your operating system, each thread may grab one branch, and then it is up to the operating system to decide which thread gets to run first. It is possible that, depending upon how the underlying code of these nodes is written, the LV compiler MAY choose to always have one specific execution order for these nodes, but there is nothing that guarantees that order. Even if you managed to get the compiler into a state where it chose a schedule that guaranteed the execution order, a slight shift in nodes, even downstream nodes, could make the compiler make a different choice.

So, what choice will the compiler make in this particular example? We have Tone Measurements on the top branch and Filter Signal on the bottom branch. Let's assume (because it is reasonable to do so, not because I actually checked the implementation) that the Filter Signal's input and output terminals are inplace to each other. The Tone Measurements node is thus a "read only" node -- it uses the value of the input to calculate its outputs, but it doesn't actually modify the input. The Filter Signal does modify the value of its input and passes it through as an output.
Because the Tone Measurements node is a synchronous node (it doesn't do any waiting for hardware or timer interrupts, just math operations), I suspect that the LabVIEW compiler will declare all three of these nodes to be a single "clump" (aka unit of selection for threads on the run queue), and will schedule them to always run the top Tone Measurements, then Filter Signal, and then the bottom Tone Measurements. But this is a guess from me based on what I know of the compiler and what I know of the description of those three nodes. In general, I would stick with answer number 3: there is no way to know. Here is one that is definitely undecidable -- there is no way to know which of these Tone Measurements will execute first: http://lavag.org/old_files/monthly_02_2008/post-5877-1204235944.png
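The scheduling nondeterminism described above is not unique to LabVIEW. Here is a minimal sketch in Python (an analogy only -- LabVIEW is graphical and its clumping is invisible to the user) of the same idea: two parallel branches are handed to the runtime, both will complete, but the source code says nothing about their relative order.

```python
# Two "parallel branches" handed to the scheduler; the order in which
# they append to the list is up to the runtime, not the source code.
import threading

order = []
lock = threading.Lock()

def branch(name):
    # Each branch just records when it got scheduled.
    with lock:
        order.append(name)

t1 = threading.Thread(target=branch, args=("top Tone Measurements",))
t2 = threading.Thread(target=branch, args=("Filter Signal",))
t1.start(); t2.start()
t1.join(); t2.join()

# Both branches always run, but their relative order can vary run to run,
# so any code that asserts a particular order has a latent bug.
print(sorted(order))
```

Note that the only safe assertion is about the *set* of branches that ran, never their order -- which is exactly the "there is no way to know" answer.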
  3. QUOTE(Tomi Maila @ Feb 28 2008, 08:04 AM) If the tagged object does get copied, the tags will be copied as well. That could be a problem depending on the type of application you're working on.
  4. QUOTE(brent99 @ Feb 22 2008, 04:06 PM) No, you can't blame OO. The OO code isn't in the real time engine yet. That's still in development. Many people have made this assumption. For that matter, the project isn't compiled into the runtime or realtime engines, so it isn't that either. For reference, when the OO code does become part of realtime, we expect it to add one or two K to the runtime engine size at most. This is still a ways in the future, but it is on the road map.
  5. This is the sort of issue that would be better posted to the ni.com DevForums. The NI application engineers there will be able to dig deeper into it and figure out if you've got a bug or not.
  6. There's no zoom in LabVIEW. On Macintosh, the operating system allows you to zoom any application, so you can make the wires/nodes/controls as big as you want. But if you're working under MSWindows or Linux, the only solution is to lower your screen resolution.
  7. You could drop a picture ring constant onto the diagram -- there's a scripting method for setting the picture in a picture ring.
  8. The queues and notifiers are the tools explicitly designed for this sort of communication. You can acquire a named queue in both VIs and have one wait for the other to enqueue the data (similar for notifier).
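The named-queue pattern above can be sketched outside LabVIEW. This Python sketch is an analogy, not the LabVIEW API: the `obtain_queue` helper and its name-to-queue dictionary are hypothetical stand-ins for LabVIEW's name-based Obtain Queue lookup.

```python
# A shared, named queue lets one "VI" block until another enqueues data.
import queue
import threading

named_queues = {}  # stand-in for LabVIEW's named-queue registry

def obtain_queue(name):
    # Obtaining a queue by name returns the same queue in every caller,
    # which is what lets two independent callers rendezvous on it.
    return named_queues.setdefault(name, queue.Queue())

def producer():
    obtain_queue("data").put(42)

t = threading.Thread(target=producer)
t.start()
value = obtain_queue("data").get()  # blocks until the producer enqueues
t.join()
print(value)
```

The key property is the blocking `get`: the consumer simply waits, with no polling, until the producer hands over the data -- the same reason queues and notifiers beat globals-plus-polling for cross-VI communication.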
  9. Not that I'm biased or anything, but you know, LabVIEW classes behave just like clusters, and their constants are a fixed size icon. There've been requests for a smaller icon size, but if you're looking for fixed size, there it is. Of course, then you're subject to the complaints about diagram values being hidden. Win some, lose some. ;-)
  10. QUOTE(guruthilak@yahoo.com @ Feb 19 2008, 12:54 AM) The whole point of a functional global is you don't need a semaphore. Only one VI can call any given subVI at a time -- if you want more than one to be able to call simultaneously, make the VI reentrant, which is, in effect, creating separate VIs for every caller. But for a non-reentrant subVI, only one caller can call at a time, so you don't need semaphores. To dthomson: Sorry, I don't have anything to say to help with your issue. I've never seen a problem with this and I'd have to see some code to make any guesses about what is wrong.
  11. QUOTE(guruthilak@yahoo.com @ Feb 19 2008, 10:23 AM) Not while the VI is running it won't (as Michael said). While running, ctrl+click on any node sets and clears breakpoints. Sorry, Michael ... I don't think there's any shortcut like the one you're looking for.
  12. QUOTE(crelf @ Feb 19 2008, 08:41 AM) Huh? They do the same thing. Just one is external and one is internal. Both of them stop all execution paths in the VI and its hierarchy. Now, admittedly, doing the Stop inside the VI means that you're more in control of knowing what execution is being stopped, as opposed to the Abort, which has no idea what point the execution has reached. But they're both stops of execution of this VI hierarchy.
  13. Amazon.com lists a very recent printing run that I have not seen, so it might be true that the print issue has been cleared up. You might try asking your local Borders Books to order one for you. Double check that their policy hasn't changed, but the last time I had an issue like this, Borders had a policy that allowed you to request them to custom order a book and didn't obligate you to actually purchase it if you decided it wasn't what you wanted. -- Stephen
  14. QUOTE(neB @ Feb 15 2008, 02:29 PM) Aborting any VI is always dangerous if you're working with hardware. If you don't clean up after yourself, you have problems. That's why I try never to use hardware. :-)
  15. QUOTE(Daklu @ Feb 15 2008, 11:18 AM) Some of the interesting unsolved problems (http://en.wikipedia.org/wiki/Millennium_Prize_Problems) require a PhD in physics, not math. :-)
  16. QUOTE(Gavin Burnell @ Feb 15 2008, 07:15 AM) Instead of using a subVI, open a VI reference to that VI and use the "Run VI" method. This starts the VI running as if it were a separate top-level VI. (Don't use Call By Ref -- that's the same as a subVI node as far as being part of the caller's hierarchy.) Since it is running as a separate top-level VI, you can call Abort VI on it.
  17. QUOTE(vugie @ Feb 15 2008, 08:48 AM) HECK YES I'M INTERESTED! This is something I've toyed with for years, but if you've got VIs that actually make this work, that would be spectacularly cool. Are you planning on posting the VIs?
  18. QUOTE(jfazekas @ Feb 14 2008, 11:03 AM) Usually these examples are intended for intra-machine communication of some sort. I don't know the particular one to which you refer, but it would be my guess that it was written for one machine to publish and the other machine to subscribe. Over a TCP/IP or Datasocket link (or your favorite protocol) the data sent is just strings which the other side interprets.
  19. A) Tomi isn't quite correct in his reply. A shift register does not always get a buffer allocation. It starts off with a buffer allocation, but we try to consolidate these down. B) Your instincts are correct that we ought to be able to consolidate in this situation. EXCEPT... your inputs to the Initialize Array function are all constants. That means that Initialize Array has been constant folded and the buffer is there in your VI's save image. To avoid stepping on the value of the constant when we run the loop, the shift register makes its own copy. If you change any of those initial constants into controls, the buffer copy will go away.
  20. Your post left me completely confused because of this line: QUOTE I thought you were talking about comparing "Flatten To String" against "Variant To Flattened String" in order to do real transmission from one application instance to another application instance (say, over a TCP/IP link or somesuch). You're just talking about handing data from one VI to another VI on the same machine. So OF COURSE handling as variant is substantially faster. Why? VARIANTS AREN'T FLAT. The data is picked up off the wire whole, with all the hair hanging off of it (arrays of clusters of arrays of clusters of arrays of... etc) and put in the variant along with the type of the wire. Then when we convert it back to data, there's just a check that the type descriptors are compatible and we're done. When flattening to a string, there's the whole traversal of the type descriptor to create the flattened string and then unflattening requires not just traversal of the type descriptor but also parsing of the string and memory allocation. You got a 50% speed difference in your test. That's with a simple array. The more complex the type the greater the difference between these will be. But the original article that you linked to is talking about something entirely different.
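The in-process vs. over-the-wire distinction above can be sketched outside LabVIEW. This Python sketch is only an analogy (pickle standing in for Flatten To String, an object reference standing in for the variant hand-off), not the LabVIEW implementation.

```python
# In-process hand-off vs. flattening for transmission.
import pickle

data = [{"samples": list(range(100))} for _ in range(100)]

# In-process hand-off: nothing is copied or parsed -- the receiver gets
# the data "with all the hair hanging off of it," like a variant.
same_process_handoff = data  # just a reference

# Cross-process transmission: the whole nested structure must be walked
# and serialized ("flattened")...
flat = pickle.dumps(data)

# ...and the receiver must parse the flat bytes and allocate fresh
# memory to rebuild the structure.
rebuilt = pickle.loads(flat)

print(same_process_handoff is data)  # same object, zero-cost hand-off
print(rebuilt == data)               # equal value, but fully re-allocated
```

The deeper the nesting (arrays of clusters of arrays...), the more the traversal, parsing, and allocation dominate -- which is why the gap between the two approaches grows with type complexity.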
  21. QUOTE(guruthilak@yahoo.com @ Feb 11 2008, 05:32 AM) Are you actually building this into an EXE or DLL? If so, the problem would be that VIs don't have block diagrams in a built application. The runtime engine doesn't support any of the functions for inspecting the diagram. You would definitely get an "invalid object reference" in that case.
  22. QUOTE(brian175 @ Feb 9 2008, 09:00 AM) Pretty much. And Tomi's timing info about variant attributes vs. named queues also surprises me. The variant code has a single array implementation for the map that makes it reallocate when it runs out of space. That means that when inserting an element, the variant attribs should suffer a major penalty that the others (both named queues and OO maps) don't have to take. On fetch, they should all be just about equivalent -- the algorithms are pretty much equivalent. Definitely worth some exploration. Other tasks are going to pull my attention away from this topic for a couple months, but I've filed some notes to myself to dig into it when I free up again. [EDIT] Tomi, I thought of something that could be penalizing the OO versions. The recursion has to allocate a pool of cloned VIs. Try running your 10000 test and then, without stopping the VI, run the 10000 test a second time. See if this improves the speed.
  23. This bug only affects LVClasses that use certain data types in the private data cluster. Those include the DAQ Tag data types and certain refnum types including VISA, DAQ and IVI refnums. They do not include any VI Server refnums (app, VI, ctrl), nor queues, notifiers, files or datalogs. I know that isn't an exhaustive list of refnum types, but it covers the big categories.
  24. QUOTE(brian175 @ Feb 7 2008, 06:27 AM) a) I haven't looked at your test code at all yet, but something really smells about the LVVariant speeds because this is something I've benchmarked many times and it has a pretty nasty growth curve*. Staying constant for creation between 1000 and 2000 is pretty close to completely unexpected. b) Did you use Jason Durham's revised version of the Map that has the AVL balancing code? If not, please see the very first post in this thread which has been edited to have both my original and his modified version. Without the tree balancing, it would be easy to get some pretty nasty times (as I said in my original post). c) If you are using Durham's version, then there may be one other problem. Do you insert random keys or keys in some already sorted order? The worst case scenario for the tree balancing is going to be inserting already sorted information into the tree -- you'll trigger the tree balancing code A LOT. There's an optimal insert order if your data is already sorted to avoid the tree balancing, which is to start by inserting the middle element of your list, then the middle element of each of the halves, then the middle element of each quarter, etc. ANY map implementation has situations in which its performance seriously degrades, and almost all of them are optimized for random key inserts. *as of LabVIEW 8.5... I have long hoped that someone would improve the performance of this method, and I heard someone might be working on it for a future LV version.
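The "insert middles first" order described in (c) can be sketched in Python (illustrative only, not the LabVIEW Map code): recursively emit the middle element of each half, so a naive binary search tree built in that order comes out balanced without triggering rebalancing.

```python
# Given already-sorted keys, produce an insert order that keeps a naive
# BST balanced: middle of the list first, then the middles of the two
# halves, then the middles of the quarters, and so on.
def balanced_insert_order(sorted_keys):
    if not sorted_keys:
        return []
    mid = len(sorted_keys) // 2
    return ([sorted_keys[mid]]
            + balanced_insert_order(sorted_keys[:mid])
            + balanced_insert_order(sorted_keys[mid + 1:]))

print(balanced_insert_order([1, 2, 3, 4, 5, 6, 7]))
# -> [4, 2, 1, 3, 6, 5, 7]
```

Inserting in this order, each key lands at the root of an empty subtree of the final balanced shape, so a self-balancing tree like the AVL version never needs to rotate -- as opposed to inserting 1, 2, 3, ... in order, which degenerates into a linked list (or forces a rotation at nearly every insert).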
  25. QUOTE(normandinf @ Feb 6 2008, 05:11 PM) Yes, it is to be expected. I don't know what factor you should expect, but it will definitely be slower. By-ref means that LV can't do any inplaceness optimization, and it means there has to be synchronization overhead even when no read/write conflict exists. The cardinal rule of LV: if you want performance, DON'T BREAK THE DATAFLOW.