
Aristos Queue

Members
  • Posts

    3,183
  • Joined

  • Last visited

  • Days Won

    204

Everything posted by Aristos Queue

  1. YEEE HAW! I think I found it! The key word in the above sentence is "three". In the "enqueue, dequeue, enqueue" version, the VI tries to enqueue, and if there's room it immediately enqueues without releasing the lock on the queue. In the "Get Status, dequeue, enqueue" version, the VI checks Get Status and, if it finds that there is enough room in the queue, it enqueues... BUT if there are three enqueuers operating on the same queue, one of the others may have made the same Get Status check and decided it had enough room -- thus taking the very last spot in the queue! For example: the queue has a max size of 5, and there are currently 4 elements already in the queue. At time index t, these events happen:
     t = 0: VI A does Get Status and the value 4 returns.
     t = 1: VI B does Get Status and the value 4 returns.
     t = 2: VI B does Enqueue.
     t = 3: VI A does Enqueue -- and hangs because there's no space left.
     If you're going to use Get Queue Status, you have to make your test "if (current elements in queue >= (max queue size - number of enqueue VIs)) then { dequeue }". Subtle!!!!
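The guard at the end of that post can be sketched as a plain function. This is a minimal Python translation under stated assumptions: the function name is invented, and it only models the arithmetic of the test, not the LabVIEW queue itself.

```python
def should_dequeue_first(current: int, max_size: int, num_enqueuers: int) -> bool:
    # Conservative test from the post: once this many elements are present,
    # another enqueuer that ran the same Get Status check may already have
    # claimed the remaining space, so treat the queue as full.
    return current >= max_size - num_enqueuers

# The scenario above: max size 5, 4 elements already queued, three
# enqueuing VIs. Don't trust the lone free slot.
print(should_dequeue_first(4, 5, 3))   # True
# With only 1 element queued, three enqueuers still fit safely.
print(should_dequeue_first(1, 5, 3))   # False
```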
  2. The ideal would be to get to the point where we don't have to have everything in memory, even for the reasons Michael cited. The project is designed to scale from the simple "couple VIs utility" to the VERY LARGE APP. As such, it does not load into memory all the VIs listed in the project (though subcomponents, such as LVClasses and XControls, do at this time). What could be done when you open the project is for the project to do a quick glance at each of the VI files to get specific information about them and cache that information -- such as which VI calls which other VIs as subVIs. Then editor actions could load VIs as needed. If I renamed A.vi to B.vi, the project would know all the callers of A.vi and could load them into memory right before doing the rename so that the project stays up-to-date. The project today has a dual existence as both an application definition and a deployment/file tracking tool. If dependency tracking were added, this would push the project more toward being an application definition and less of a deployment tool. There's quite a bit of pressure back and forth between app development LV features and target deployment LV features over what the project's job actually is. There was a sign on someone's desk a while back that said, "If you build something truly useful, people will use it for things you never intended then criticize you for shortsightedness." This has sort of happened within our team with the project. The couple of original designers had an idea for the project, but now that it's there, a lot of other developers are suggesting, "Hey, I could use that to help with feature XYZ if I just tweak the project like this..." The future should be interesting.
  3. Nope. If I understood that I might have attempted to modify the inplaceness algorithm. There is an enlightened being whom we call Chief Architect, at whose feet we sit to better learn the secrets of inplaceness. :worship: Perhaps I will have an answer in a few years.
  4. Norm: Did you ever investigate Reentrant Panels (LV8.0 and later) and whether they were a better solution for you than VI Templates?
  5. You can use Flush Queue to dequeue all of the elements. If you want N elements and need to account for the possibility that there aren't that many elements in the queue, there's a slightly easier way: put the Dequeue in a While Loop with the Timeout terminal wired with zero. It will dequeue one element; if no element is available, it will immediately time out. Wire the output Timeout terminal to the stop terminal of the While Loop. You can OR in a test to see if the "i" terminal of the While Loop has reached your desired count. This way you don't have to get the Get Queue Status prim involved, and you'll save yourself some thread synchronization overhead. The attached VI is written in LV8.2. Download File:post-5877-1159034903.vi
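The zero-timeout dequeue loop translates naturally to other languages. A hedged Python sketch using the standard queue module, where get_nowait plays the role of a zero-timeout Dequeue (the function name is my own):

```python
import queue

def dequeue_up_to(q: queue.Queue, n: int) -> list:
    """Pop at most n elements, stopping early if the queue runs dry.

    Mirrors the pattern in the post: a loop around a zero-timeout
    dequeue that stops on timeout OR when the count is reached.
    """
    out = []
    for _ in range(n):
        try:
            out.append(q.get_nowait())  # zero-timeout dequeue
        except queue.Empty:             # "timed out" -> stop the loop
            break
    return out

q = queue.Queue()
for i in range(3):
    q.put(i)
print(dequeue_up_to(q, 5))  # [0, 1, 2] -- only three were available
```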
  6. :laugh: Ah! That's why you're so picky! You started with the most stable version of LV ever. We've got to get you some perspective! All joking aside: The beta period gave LV classes a pretty hard pounding and hammered out a lot of bugs. The version that finally shipped has shown itself to be fairly stable so far, but as we expected, there are problems. This was a pretty wide-ranging "feature" of LabVIEW (if you can call all of the features of OO a single feature of the language!). Even as much exposure as our beta program gives to LV, there are a lot of aspects of LV, and there's no way to test all the possible system interactions. I'm actually pretty pleased with the general system stability, but the instability of any first release is what leads to "dot-zero" aversion for many customers. It's nice to have customers who find enough utility in the new stuff to put it through its paces despite the occasional hiccup.
  7. There is nothing that I know of that would make a VI that is a member of a LV class interact with shared variables in any way different from any other VI. To the best of my knowledge there is no special functionality anywhere in the shared variables relating to LV classes. If you manage to create a VI that replicates the problem, you should report it as a bug to NI. I have absolutely no knowledge as to what those error codes mean if they're coming from a shared variable. Nothing LV class related, I'm pretty sure. If you move the VI outside of the LV class, does the shared variable have the same behavior? :!: As for the VI that always needs saving... this is a known issue. I'd list a bug report number here but I don't have the database in front of me at this time. My suggestion is that if a VI is a member of an LV class and thinks it needs to save, go ahead and let it save. I know that's not desirable behavior, but not saving it has been shown to lead to corruption over time (the info saved in the VI drifts out of sync with the info in the .lvclass file). This bug will be fixed, I assure you.
  8. Bundle/Unbundle have several special inplaceness optimizations that classes did not attempt to take advantage of in this first release. Do you remember LV6.0.1 followed quickly (weeks) by 6.0.2? That was caused by a mistake in someone changing the inplaceness algorithm. It is a hard algorithm to get exactly right, and if you get it wrong, you can end up with values that change randomly as a VI executes. I decided that having a functionally correct implementation of objects in this first release was better than trying to get involved in the optimizations --- the optimizations will be added in future LV versions. Lesson one of Computer Science 101: Make it work first, then optimize later (or never). In this case, we're going to do "later."
  9. There are several aspects of the editor that will need work over time. The find/replace system is one. At this time, there are many places in the editor, like the Hierarchy Window, that do not know to take dynamic calls into account. Regarding crelf's comment that such finding would have to be based on name: we can do the find better than that. The classes do know their inheritance relationships, so when you search for X.lvlib:Y.vi, we can differentiate which Y.vi calls might possibly invoke X.lvlib:Y.vi specifically. It's just going to take time to find all of these places that want special treatment for dynamics. Some of them are going to need UI tweaks, since it is equally possible in some cases that you want to treat dynamic nodes as possible calls to a set of VIs, and other cases where you're going to want to talk about just the specific VI.
  10. It's not that expensive if you use a dual system the way queues do: look up normally using the number and only occasionally using the string. Only the Obtain Queue uses the string. The others all do numeric lookup using the refnum. Add a function that translates string to "pointer" to your system and you're good to go.
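The dual-lookup scheme can be sketched in a few lines of Python. All names here are invented for illustration; the idea is only that the string lookup happens once (like Obtain Queue), after which every operation uses a cheap integer handle:

```python
class Registry:
    """Toy name registry: resolve a string once, index by handle thereafter."""

    def __init__(self):
        self._by_name = {}   # string -> handle (consulted only at "obtain")
        self._objects = []   # handle -> object (fast numeric lookup)

    def obtain(self, name: str) -> int:
        # The one place a string lookup happens, like Obtain Queue.
        if name not in self._by_name:
            self._by_name[name] = len(self._objects)
            self._objects.append([])          # backing storage for this name
        return self._by_name[name]

    def get(self, handle: int):
        # All other operations index directly, like the other queue prims.
        return self._objects[handle]

reg = Registry()
h = reg.obtain("temperatures")
reg.get(h).append(23.5)
print(reg.get(h))                        # [23.5]
print(reg.obtain("temperatures") == h)   # True: same name, same handle
```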
  11. You're probably faster for lookup times than the variant, but you'll pay a bigger penalty at insert time. If you use the Search 1D Array prim to find what position to do the insert, that's an O(n) penalty (though you might have written your own binary search on sorted data to get this down to O(log n)). When you add a value to the array, possibly in the middle of the array, you're going to reallocate the array to make room for the new element. That's a lot of data movement. Compare that with the O(log n) time needed to add a variant attribute to the tree with zero data movement. The same comparison applies for removing an element from the lookup table. So if your lookup table is pretty stable from the time it is built until you're done with it, your method probably has advantages. If you're adding entries and deleting entries a lot, then I wager the variant attrib is going to beat you hands down, even including the expense of converting your doubles to strings to do the key lookup.
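The trade-off can be sketched in Python, assuming (as the post does) a sorted-array table keyed by doubles versus an attribute-style table keyed by strings. bisect stands in for the binary search, and a dict stands in for the variant attribute tree; this is a shape comparison, not a benchmark:

```python
import bisect

# Sorted-array lookup table: O(log n) search, but each insert shifts
# every element after the insertion point -- the O(n) data movement.
keys, values = [], []

def array_insert(k, v):
    i = bisect.bisect_left(keys, k)
    keys.insert(i, k)        # O(n) movement on both lists
    values.insert(i, v)

def array_lookup(k):
    i = bisect.bisect_left(keys, k)
    return values[i] if i < len(keys) and keys[i] == k else None

# Attribute-style table: no data movement on insert or delete.
attrs = {}

for k, v in [(3.0, "c"), (1.0, "a"), (2.0, "b")]:
    array_insert(k, v)
    attrs[str(k)] = v        # the post notes double keys become strings

print(array_lookup(2.0), attrs["2.0"])   # b b
```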
  12. Known issue... the performance hit is due to excessive recompiling. Already being dealt with.
  13. Hm... perhaps LV should encourage computer vendors to make mice with a pressure gauge. If we notice both mouse buttons hit simultaneously with extreme force repeatedly, we'll automatically launch a web page that lists all the major help forums (ni.com, Info-LabVIEW, LAVA, etc.).
  14. If you want one that really makes you want to claw your eyes out, take a look at the output terminal of the Type Cast node... *(type *)&x Changing these names once they're established is nontrivial. Unless they're really factually wrong, such changes tend to be lower priority than all the other stuff that needs working on in LV.
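That C expression reinterprets raw bytes rather than converting values, which is exactly what Type Cast does to a wire. A Python sketch of the same bit-level trick, using the standard struct module (a stand-in, not what LabVIEW itself does internally):

```python
import struct

# *(type *)&x in C reinterprets the bytes of x as another type with no
# numeric conversion. Packing a float and unpacking the same bytes as an
# unsigned int shows the raw IEEE-754 pattern:
bits = struct.unpack("<I", struct.pack("<f", 1.0))[0]
print(hex(bits))  # 0x3f800000: the single-precision bit pattern for 1.0
```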
  15. Think back to elementary school when you first did division, before you knew about decimals. What is 7 divided by 2? Answer: 3 remainder 1. The 3 is the quotient; the 1 is the remainder. Computing the quotient this way is known as integer division; computing the remainder is modular division.
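The same arithmetic in Python, where divmod returns both parts at once:

```python
# 7 divided by 2: quotient 3, remainder 1.
quotient, remainder = divmod(7, 2)
print(quotient, remainder)   # 3 1

# Equivalently, // is integer division and % is the modulo (remainder):
print(7 // 2, 7 % 2)         # 3 1
```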
  16. For possibly the same reason that I sometimes get the *evil* idea to implement if (username="xyz") { integer wires are orange; double wires are blue; } We all have temptations that we have to suppress.
  17. Curious that you should bring up queues of arrays today.... For reasons of my own, I was reviewing the behind-the-scenes code of the queue primitives today. One of the biggest use cases I have for them in my own programming is tree traversal. For example, control reference traversal: enqueue all the controls on the front panel, then in a loop, dequeue them one by one to do something to them. If the one you dequeue is a tab control or a cluster, enqueue all the sub controls. Continue looping until the queue is empty. Now, what I usually do is create a queue of my base type (for example, control refnum). When I have an array of elements, I drop a for loop and enqueue each element one by one. That way they're all available for dequeue. I got to thinking -- there are some times when I'm enqueuing each individual element and then at the dequeue I'm building an array of elements back up (maybe a filtered list of the elements, for example). In these cases, it seems like it might be beneficial to have two queues, one that handles single elements and one that handles arrays of elements, so that I never bother tearing down the array structure if I'm just going to build it again. Of course, this is for traversals where the order of traversal doesn't matter, since you would then dequeue from the lone element queue until it was empty and then dequeue from the array queue. Since the queues try to simply take ownership of the wire's data and not make a copy of the data (unless the enqueue wire is forked to some other node, in which case it has to make a copy), it might make sense in some cases to let the enqueue take ownership of the entire array of elements. I don't have any specific examples at this point. And I don't have any evidence that this would ever be advantageous. It's just one of those passing hunches to think about....
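The control-traversal pattern described above is a breadth-first search driven by a queue. A minimal Python sketch with an invented toy tree (collections.deque stands in for the LabVIEW queue; the node names are made up):

```python
from collections import deque

# Toy "front panel": each container maps to its sub-controls.
tree = {
    "panel":   ["tab", "button"],
    "tab":     ["slider", "cluster"],
    "cluster": ["led"],
    "button":  [], "slider": [], "led": [],
}

q = deque(["panel"])          # enqueue the top-level item(s)
visited = []
while q:                      # continue looping until the queue is empty
    node = q.popleft()        # dequeue one element and process it
    visited.append(node)
    q.extend(tree[node])      # a container? enqueue its sub-elements

print(visited)  # ['panel', 'tab', 'button', 'slider', 'cluster', 'led']
```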
  18. Adapted from The Tempest, by William Shakespeare
  19. I looked long and hard at your picture. A few thoughts...
     a) Are you worried about the unnecessary loss of data? There's no "critical section" protecting the "enqueue, dequeue, enqueue" sequence. Suppose the producer VI tries to enqueue and fails. Ok, so it goes to the dequeue to make room. In the time it takes to do this, the consumer VI has dequeued an element. There's no need for you to do the dequeue, but you don't know that. I don't think this matters -- most lossy streams, such as video conference communication packets, don't really care which packets get dropped. If it does matter, a semaphore acquire/release around the "enqueue, dequeue, enqueue" and the same semaphore acquire/release around the dequeue in the consumer loop would fix the problem.
     b) I think I can suggest a better-performing way of doing the "enqueue, dequeue, enqueue." Your current implementation will fork the data being added to the queue and will hurt performance for large data elements. Try this: the Get Queue Status primitive generates no code for any terminal that is unwired. So if you do not wire "Elements Out", it will not duplicate the contents of the queue, nor will it take the time to evaluate any of the unwired terminals. Fetching the current element count is very fast, and this avoids ever forking your data wire. Forking the wire is a big deal, since a forked wire prevents the Enqueue from taking advantage of one of its biggest speed optimizations and guarantees a copy of the data will be made at the fork. (PS: The 0 that I've wired to the timeout input of the dequeue inside the case structure is important... you might detect that the queue is full, so you go to dequeue an element... in the time between when you detect that the queue is full and the dequeue, the consumer loop might speed ahead, dequeue all the remaining elements, and leave the queue empty. If the timeout terminal of the dequeue is unwired, the dequeue would hang indefinitely waiting for someone to enqueue data. These are the sorts of gotchas that multithreading opens up for you.)
     c) I know you said "and don't tell me about variants." Although probably not the solution at the moment for whatever it is that you're working on, as time goes on I would expect those utility VIs that you discuss to be writable with LabVIEW classes, where there is no extra data allocation when you upcast. Over time I believe that users will find a lot of utility in rewriting functionality, particularly generic data communications systems like the queues or LV2 globals, using LabVIEW classes for maximum code reuse and minimum "genericization" overhead. Just a thought... I'm downplaying the possibilities here since I've lately been accused of suggesting LV classes as *the* silver bullet for all of LV's problems. I want to keep expectations realistic, but I do think there's benefit in this arena.
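The "enqueue, dequeue, enqueue" pattern with the zero timeout can be sketched in Python. queue.Queue stands in for the LabVIEW queue, and put_nowait/get_nowait are the zero-timeout operations; the helper name is invented:

```python
import queue

def lossy_enqueue(q: queue.Queue, item):
    """Enqueue, dropping the oldest element if the queue is full.

    If the first put fails, pop one element with a zero timeout -- the
    consumer may already have drained the queue, so never block here --
    then retry the put.
    """
    try:
        q.put_nowait(item)
    except queue.Full:
        try:
            q.get_nowait()    # make room; zero timeout, as in the post
        except queue.Empty:
            pass              # consumer raced ahead and emptied the queue
        q.put_nowait(item)

q = queue.Queue(maxsize=2)
for i in range(4):
    lossy_enqueue(q, i)
print(list(q.queue))  # [2, 3] -- the two oldest items were dropped
```

Note that, exactly as point (a) warns, this single function has no critical section around the full-check and the retry; for a truly race-free version across multiple producers you would wrap the whole body in a lock (the semaphore in the post).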
  20. That's correct. Opening a project simply opens a file listing. It does not load all the VIs into memory (the project is designed to be a manager for very large projects where you might not want all the VIs in memory simultaneously). The same is true for a library -- .lvlib files do not load all their VIs into memory. .xctrl and .lvclass files do load all their member VIs into memory. VIs load their entire subVI hierarchy into memory, and polymorphic VIs load all of their instances into memory.
     Summary:
     File extensions that do not load all of their referenced files into memory: .lvproj, .lvlib
     File extensions that do load all referenced files into memory: .vi, .ctl, .vit, .ctt, .xctrl, .lvclass
  21. Mass compile won't fix this bug -- the whole problem is that LV doesn't think there's any reason to recompile. Mass compile does a load, checks whether anything has changed, and only recompiles if it needs to. To fix the Icon Probe bug, find this token in your .ini file: ProbeDefaultCache. Delete the problem probe from the list of paths. If you don't have one or two top-level VIs that will load everything, you can write a quick VI Server routine to load all the VIs in a directory. Once everything is in memory, use the ctrl+shift+click on the run arrow that I mentioned before. None of this, by the way, is correct expected behavior. *sigh*
  22. Jimi: Try loading your project into memory, get all your VIs into memory (by opening the FP of all your top-level VIs) and then do ctrl+shift+click on the run arrow. This will force a recompile of all VIs in memory. I'm tracking a bug where you make an edit to the class' private data control and save the change. We recompile all the VIs that bundle/unbundle the class. Then you exit LV without saving all the VIs. When you next load, we don't seem to be noticing that the types are out of date and so we aren't recompiling the bundle/unbundle VIs automatically. So they have stale memory addresses and may crash when run. Forcing a recompile of all the VIs in memory catches these guys and brings them up-to-date. This was reported to R&D (# 40667H1W) for further investigation.
  23. Slight modification to Michael's example... I found out recently that I could connect an array of strings to Concatenate String. So to easily handle an array of doubles...
  24. Sorry for the confusion... yes, I mean that no other VI can obtain a reference. You can pass the reference to a subVI, or send it to another VI through some other communications mechanism, but basically only those VIs to which you publish that refnum can use that refnum.