Everything posted by Aristos Queue

  1. Heh... didn't mean to conflate those two sentences. I didn't intend to cast doubt on your diagnostic prowess.
  2. I don't understand what you mean here... the notifiers have a well-defined subtype. You can't attach two different types of notifiers to the same node simultaneously, since there's only one input. When I hook up a notifier of int32s to the W4N prim, the terminals are int32 type. If I delete that wire and hook up a notifier of strings, the terminals are string type. If I create a subVI that takes a notifier of int32 as input, I cannot wire any other type of notifier to that terminal. Only a notifier of int32 will connect. I can't bundle different types of notifiers together in an array (putting the mouse cursor on the other side of the Red X on the broken wire shows details of the other type -- a notifier of int32). You've got a lot of good ideas, Jimi, but I can't even begin to imagine what you're referring to on this one. Safe mode???
Oh, and as for the "new prim vs. added terminal" -- I thought about it some more. Adding a terminal wouldn't work -- this is behavior you'd have to pick at compile time and not have changing at run time. If you put one of these nodes in a loop and on the first few calls said "don't remember every refnum's history" and then changed it for later iterations to "remember every refnum's history", there'd be no way to reconstruct the data of previous calls except by having stored it even though it wasn't being used (a big performance hit). It would get messy. So I think that adding a new prim entirely would be the way to go (not that I'm planning on getting to this any time soon, let me remind you). We'd just have to make sure the new icon was distinctive. Other options would be a popup configuration (not something I like) or a terminal that requires that it be wired with a constant (which would be odd). There could be an on-diagram configuration [such as the polyVI selector ring], but that sort of thing cuts off access to the node's bottom terminals and takes more space on the diagram.
  3. Believe it or not, I really don't want to argue implementations... you asked for examples of recursion, and I tried to provide one. I gave a simple case as an exemplar of an entire class of problems. Your version, although I concede it is simpler, does not match the function definition... if I'm trying to confirm that your code matches the specification of the function, my first question is "Where are the two 'minus one' calls?" The fact that you found a way to provide the value without ever having the -1 calculation means that you had to think about how to encode the function in a VI and get the same mathematical result, as opposed to simply encoding the function as specified. My general case remains. Instead of Fibonacci, let's try Ackermann: http://en.wikipedia.org/wiki/Ackermann_function
A(m, n) = n + 1 if m = 0; A(m - 1, 1) if m > 0 and n = 0; A(m - 1, A(m, n - 1)) if m > 0 and n > 0.
If you want to write a program to print out the table of values for the Ackermann function, you can do it without recursion (all recursive functions can be expressed iteratively), but you'll be hard pressed to show (other than by empirical printing of the table) that you'd gotten it right.
I went through a bunch of LV's C++ code and LV examples to find other recursion examples...
In LV C++ code, we use recursion for assigning depth indices to a cluster in LV (here's pseudocode): AssignToClust(cluster, int32 *curdepth) { for (i = 0; i < number_of_elts_in_cluster; ++i) { cluster[i] = *curdepth; *curdepth += 1; if (cluster[i] is a subcluster) AssignToClust(cluster[i], curdepth); } }
You know the int16 arrays that come out of the Flatten Variant primitive? Those are type descriptors. In the C++ code, we use a lot of recursion for traversing those (for actions such as "typedef changed names, find all embedded type descriptors and change the name").
Related to the previous example: generation of XML for LabVIEW data is a series of recursive calls among multiple functions, not just a single function. ClusterToXML might call ArrayToXML might call ClusterToXML might call ArrayToXML might call I32ToXML. Each adds a header before calling the nested recursion function and then adds a tail after the nested function returns. Doing this with a stack implementation would be very non-trivial.
The routine to search LV palettes for a particular function is recursive: check this palette; if not found, check subpalettes sequentially. Believe it or not, we do this every time you right-click on a node so that we can show you the source palette for the node. It continues to amaze me just how many calculations can be squeezed into the time between the right-click and the appearance of the popup menu.
Here's an example that ships with LV that uses a stack implementation but would be simpler if we had recursion: c:\program files\national instruments\LabVIEW 6.1\examples\general\queue.llb\Queue Stack - Solve Maze.vi
Downloading VIs to RT targets -- download the top-level VI, then download each subVI (repeat recursively). This one is easy to implement with a queue of VIs -- enqueue the top-level VI, then repeat "dequeue one VI, enqueue all its subVIs, download that VI to the target" until the queue is empty. The hard ones are the ones where you need results from multiple recursions to take the next step. Like this next one...
Math string parsing: given an arbitrary string of numbers and the four basic math operations, evaluate the value of the expression. For example, (1+3)*3-4/4. Generally the Evaluate function is "search left to right to find the first * or /. Split the string and Evaluate each half, and either * or / the results. If no * or / is found, do the same for + or -. If neither is found, return the value of the string." Adding parentheses makes this a bit trickier, but you get the idea.
Game theory: pruning a game tree involves recursive evaluation of future states of the game. "On turn n you can take possible moves 1 through m. G(n,m) is a good position if you have won the game or if the majority of possible G(n+1, [1..m]) moves are good positions..." Evaluate G(1, [1..m]) to decide the best first move.
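To make the Ackermann example concrete in text form (LabVIEW is graphical, so this is only an analogue): a minimal C++ sketch, with names of my own choosing, that transcribes the definition above directly into a recursive function and prints a small table of values.

```cpp
#include <cstdint>
#include <iostream>

// Direct transcription of the definition:
//   A(0, n) = n + 1
//   A(m, 0) = A(m - 1, 1)              for m > 0
//   A(m, n) = A(m - 1, A(m, n - 1))    for m > 0, n > 0
// Values explode quickly; keep m small.
uint64_t ackermann(uint64_t m, uint64_t n)
{
    if (m == 0)
        return n + 1;
    if (n == 0)
        return ackermann(m - 1, 1);
    return ackermann(m - 1, ackermann(m, n - 1));
}

int main()
{
    // Print a small table of values, as described in the post.
    for (uint64_t m = 0; m <= 3; ++m)
        for (uint64_t n = 0; n <= 3; ++n)
            std::cout << "A(" << m << "," << n << ") = "
                      << ackermann(m, n) << '\n';
}
```

The recursion maps line for line onto the mathematical definition; an explicit-stack version obscures that correspondence.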
  4. Used by someone else? Except for LV, none of my code gets used by someone else generally. But I have used it in graphics applications (specifically screen saver apps) that others could've written. The Fibonacci sequence underlies the golden ratio that is so useful for layout and design. Whether or not I have used that particular math expression is beside the point... I've actually had many math expressions of the form "F(x) = F(g(x)) with some recursive base case" over the years that have been in deployed code. Newtonian approximation of functions comes to mind. As for the code, what does function recursion add? Readability. Your VI works... I think. It took me 20 minutes of staring to figure out what the heck a divide operation had to do with calculating this value. Compare with this:
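(The VI image that followed "Compare with this:" is not reproduced here.) As a hedged text-language illustration of the "F(x) = F(g(x)) with some recursive base case" shape mentioned above, here is a recursive Newton-style approximation in C++; the square-root example and all names are my own, not from the original post.

```cpp
#include <cmath>
#include <iostream>

// Recursive Newton iteration for sqrt(a):
//   F(x) = x                    if the guess has converged (base case)
//   F(x) = F((x + a / x) / 2)   otherwise, i.e. F(x) = F(g(x))
double newton_sqrt(double a, double guess, double tolerance = 1e-12)
{
    double next = 0.5 * (guess + a / guess);   // g(x)
    if (std::fabs(next - guess) < tolerance)   // base case
        return next;
    return newton_sqrt(a, next, tolerance);    // recursive case
}

int main()
{
    std::cout << newton_sqrt(2.0, 1.0) << '\n'; // prints ~1.41421
}
```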
  5. Oh, no need for a poll. We can all guess how they'd reply -- the same way we all replied. Yeah, I had to go drilling into this because I forgot the rules of a single node with different refnums. The current behavior is actually documented in the online help (not in detail, but it is there). It's just been a while since I looked into it (refactoring the notifiers was the very first project I had when starting at NI years ago). This isn't so much a reentrant/non-reentrant issue. It's a question of how a single node that gets different refnums at different times behaves. Jimi's original demo was a problem of a Wait for Notification inside a For Loop. Nothing about reentrancy there. The current node always looks to the future -- this can actually be used to ensure that processes fire in a forward-looking order. What looks like a bug in Jimi's two demos (either because each notifier only fires once or because two notifiers have dependencies upon each other) becomes a feature when notifiers are firing independently multiple times. I've had theories before that "if I change XYZ, no older code will notice." I no longer maintain such fantasies, and changing the runtime behavior of the notifiers sets off big flashing alarm bells in my head. I'd rather try to add a new node or a terminal for specifying behavior on the current node. Considering that they are exactly the same code, I'd be surprised if there were any speed difference. The notifiers are implemented as a single-element queue that simply overwrites its first element instead of locking on enqueue. It isn't like the code is cloned or anything -- they are literally running the same code.
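For readers more comfortable with text code, here is a rough C++ sketch of the single-element-overwriting idea described above. This is not LabVIEW's actual implementation, and it does not model the real primitive's timeouts or its "look to the future" semantics; it only illustrates a single slot whose send overwrites rather than waits for space.

```cpp
#include <condition_variable>
#include <mutex>
#include <optional>

// Single-slot "mailbox": Send always overwrites the slot instead of
// waiting for space; Wait sees only the most recent value, with no
// per-reader history. Illustrative only -- not LabVIEW's code.
template <typename T>
class Mailbox
{
public:
    void Send(T value)                       // analogue of Send Notification
    {
        std::lock_guard<std::mutex> guard(lock_);
        slot_ = std::move(value);            // overwrite, never block
        changed_.notify_all();
    }

    T Wait()                                 // analogue of Wait on Notification
    {
        std::unique_lock<std::mutex> guard(lock_);
        changed_.wait(guard, [this] { return slot_.has_value(); });
        return *slot_;                       // latest value only
    }

private:
    std::mutex lock_;
    std::condition_variable changed_;
    std::optional<T> slot_;
};
```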
  6. Sorry... it took me a while to get back to this post. I really do say this is not a bug. The notifiers were not designed to work in this case; in fact, they were explicitly designed against this case for performance reasons -- the bookkeeping of which message was seen last for each individual refnum is a performance hit that the vast majority of apps shouldn't have to pay. In the 5 versions of LV released so far with the notifier prims (not to mention several prior versions with the VIs-around-CIN-node), this is the first time that anyone has complained (to my knowledge) about the lack of support for this case. As for the question of whether or not the Wait for Notification behavior should change, I'd have to say "no," if for no other reason than we'd be breaking behavior going back multiple versions of LV. As for the question of whether new prims could be added (a "Wait for Notification with Fine-Grain Memory"), it probably could be done, but I'd worry about the confusion of having two Waits in the palette... it'll require delicacy. I doubt that this will become a priority anytime soon, especially since other synchronization tools exist which I think can be hacked to do the work. I'll put it on the back burner of suggestions (which is actually a good place to be... there are good suggestions that have actually fallen off the back of the stove, rolled across the kitchen floor, out the cat door, and are sitting in the garden -- LV gets a lot of good suggestions each day!)
  7. The classic would be calculating Fibonacci numbers: F(0) = 1, F(1) = 1, F(n) = F(n-1) + F(n-2). Coming up with a data structure to do this recursion is annoying, especially when the function definition is already known.
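In text form, the definition maps directly onto a recursive function; a minimal C++ sketch (my own, purely illustrative):

```cpp
#include <cstdint>

// Direct transcription of the definition above. Exponential time as
// written -- fine for illustration; memoize before using it in earnest.
uint64_t fibonacci(uint64_t n)
{
    if (n == 0 || n == 1)
        return 1;                                // F(0) = F(1) = 1
    return fibonacci(n - 1) + fibonacci(n - 2);  // F(n) = F(n-1) + F(n-2)
}
```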
  8. I opened up the VI and took a look. Christopher is correct. The subVI's configuration doesn't matter. It's the caller VI that has to be configured to not do automatic error handling. You have a subVI that is returning an error, and that error is not handled. You can either turn off auto error handling in the caller or handle the error -- wiring the output terminal to anything is sufficient, as shown:
  9. They are typed. Strongly typed. We're not talking about the type of the VI. We're talking about the type of the call to the VI.
  10. It depends on your goal. If the goal is getting a project done for work, then, yes, ask if there's already a wheel. But if the goal is pushing yourself, you've got to pretend you're in a high school physics course. Every experiment you try is something done hundreds of times before, but you haven't done it yourself, and until you do you don't really understand the implications for the next higher-level experiment. Anyone using LV can use the Sort 1D Array primitive. But can you write the code that does the sort? These are the sorts of challenges that prove you know a language. You might try these... High School Level http://www.acsl.org/acsl/96-97/pdf/allstar/progs.pdf (In the PDF above, the problems start on page 7.) College Level http://www.mcs.vuw.ac.nz/~david/acm/ None of these is going to stress LV's parallel processing capacities -- in any language other than LV, spawning and collecting threads is a graduate-degree level concept.
  11. Welcome to the LV matrix... "I look at all these wires and nodes and structures... I don
  12. Quick safety tip for all you who seek to peek under the skirts of LV: external nodes and XNodes are not the same thing, though they both serve the same goal. External nodes were LV R&D's first attempt at nodes that script themselves. They have issues and have been supplanted by XNodes, our second attempt. XNodes are better, in theory*. *"In theory", from the Latin "en theos" meaning "with God" or, more specifically, "assuming He Who Built The System doesn't change His mind."
  13. That's correct. In fact, you can't create, edit, or save in an exe (ah, boolean logic jokes...)
  14. Scripting only works in a dev environment, not the runtime engine.
  15. If the code that uses this text file is written in LV, then how about putting the text file into a string constant on a VI and password-protecting the block diagram? Instead of using a text file, call LV to get the text. If you need editing capability, then this isn't as helpful. You could also hide your entire block of text by writing it into the LabVIEW .ini file. I'm trying to think of some clever way of using LV to solve this issue... there's not much I can suggest if you're not in a development environment. We leave all of our "save" capabilities out of the runtime engine.
  16. I forgot to mention in my solution post yesterday that the correct synchronization tool to be using in this case is a Rendezvous. Create a Rendezvous of size N and let each reentrant clone post that it has reached the Rendezvous, then proceed past that point. If you need to transmit data in this case, you can collect the data together in a common array (protected with a Semaphore or a functional global).
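For comparison with text languages, a Rendezvous of size N behaves much like a barrier. A hedged C++20 sketch of the pattern described above, with illustrative names (not LabVIEW's API):

```cpp
#include <barrier>
#include <mutex>
#include <thread>
#include <vector>

int main()
{
    constexpr int N = 4;                  // number of reentrant "clones"
    std::barrier rendezvous(N);           // Rendezvous of size N
    std::vector<double> results(N);
    std::mutex results_lock;              // stands in for the Semaphore / functional global

    std::vector<std::thread> clones;
    for (int i = 0; i < N; ++i)
        clones.emplace_back([&, i] {
            double data = i * 1.5;        // each clone produces some data
            {
                std::lock_guard<std::mutex> guard(results_lock);
                results[i] = data;        // collect into the common array
            }
            rendezvous.arrive_and_wait(); // post, then wait for all N
            // ...all clones proceed past this point together...
        });

    for (auto &clone : clones)
        clone.join();
}
```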
  17. Yeah, I've been meaning to comment on that last bug report you filed...
  18. Oh, there's nothing you can't do if you know enough about locking to understand that you need to put it in a method -- assuming you have permission to add a method to the class. You might not be able to add a method if you don't have the password to the class, if the class is a shared component among many groups that cannot be modified, if the class has been validated by your ISO9000 inspection and can't change further without having to redo the inspection, or if changing the class would change some important aspect of compatibility. Other reasons may exist; these are just the ones I can think of off the top of my head. The classic example is reading tables of a database. There are tables that are updated only rarely but read from frequently, so to maximize performance, you don't lock the table for a read operation, only for a write. Locking on reads can make many database projects perform so badly as to be unusable.
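The database example is essentially the reader-writer locking pattern. A hedged C++ sketch (illustrative names only) of "many readers may proceed at once, writers get exclusive access"; the post's suggestion of taking no lock at all on reads goes one step further than this:

```cpp
#include <shared_mutex>
#include <string>
#include <vector>

// "Don't penalize the frequent readers": many readers may hold the lock
// simultaneously, while a writer takes it exclusively.
class Table
{
public:
    std::vector<std::string> ReadAll() const
    {
        std::shared_lock<std::shared_mutex> read_lock(lock_);   // shared: readers don't block each other
        return rows_;
    }

    void Append(std::string row)
    {
        std::unique_lock<std::shared_mutex> write_lock(lock_);  // exclusive: blocks readers and writers
        rows_.push_back(std::move(row));
    }

private:
    mutable std::shared_mutex lock_;
    std::vector<std::string> rows_;
};
```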
  19. Just turn on Execution Highlighting on your favorite VI, sit back, and watch complexity arise from simple steps.
  20. Excellent. I think we're both on the same page now.
  21. Suppose I have a LV2-style global that stores a double-precision floating point value and supports these operations: "Round to nearest integer", "Get Value", "Add 1", "Divide by 2", and "Multiply by 2". Then I write this code: 1) Round to nearest integer; 2) Get the value; 3) if the value is odd, then Add 1; 4) if the value is not equal to 2, then Divide by 2. My expectation is that I will always end up with an even number for my result (I realized I had to put the "not equal to 2" check in for this example, since 2/2 is 1). For whatever reason, I need to make sure the value in there is two. Now I do these steps in two parallel sections of code (a and b). Remember, each individual operation is protected, but the aggregate is not protected. So I end up with this:
Initial value in LV2-style global: 5.3
1a) Round to nearest integer (was 5.3, now 5.0)
2a) Get the value (it returns 5.0)
1b) Round to nearest integer (was 5.0, now 5.0)
2b) Get the value (it returns 5.0)
3a) If the value is odd (we got 5.0), then Add 1 (was 5.0, now 6.0)
3b) If the value is odd (we got 5.0), then Add 1 (was 6.0, now 7.0)
4a) If the value is not equal to two (we got 5.0, so we assume it is now 6.0), then Divide by 2 (was 7.0, now 3.5)
4b) If the value is not equal to two (we got 5.0, so we assume it is now 6.0), then Divide by 2 (was 3.5, now 1.75)
A completely different result occurs if the user puts a semaphore around the code so that these instructions are forced to happen as a complete block -- the final answer comes out to 2.0. This is what I'm talking about -- each individual operation is locked, but the overall operation is not locked.
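Here is a hedged C++ analogue of the scenario above (names and structure are mine): each individual operation takes a lock, but the four-step sequence is only atomic if the caller wraps it in an outer lock, the text-language equivalent of the semaphore.

```cpp
#include <cmath>
#include <functional>
#include <mutex>
#include <thread>

// Each individual operation is protected, mirroring the LV2-style global.
struct Global
{
    double value = 5.3;
    std::mutex op_lock;     // protects each single operation
    std::mutex block_lock;  // the "semaphore" around the whole sequence

    void Round()     { std::lock_guard<std::mutex> g(op_lock); value = std::round(value); }
    double Get()     { std::lock_guard<std::mutex> g(op_lock); return value; }
    void AddOne()    { std::lock_guard<std::mutex> g(op_lock); value += 1.0; }
    void DivideBy2() { std::lock_guard<std::mutex> g(op_lock); value /= 2.0; }
};

// Steps 1-4 from the post. Without the outer lock, two threads can
// interleave exactly as traced above and leave 1.75 behind.
void MakeItTwo(Global &global, bool use_semaphore)
{
    std::unique_lock<std::mutex> whole_block(global.block_lock, std::defer_lock);
    if (use_semaphore)
        whole_block.lock();                     // make the aggregate atomic

    global.Round();                             // 1) round to nearest integer
    double v = global.Get();                    // 2) get the value
    if (static_cast<long long>(v) % 2 != 0)
        global.AddOne();                        // 3) if the value is odd, add 1
    if (v != 2.0)
        global.DivideBy2();                     // 4) if not equal to 2, divide by 2
}

int main()
{
    Global global;
    std::thread a(MakeItTwo, std::ref(global), /*use_semaphore=*/true);
    std::thread b(MakeItTwo, std::ref(global), /*use_semaphore=*/true);
    a.join();
    b.join();
    // With use_semaphore == true the final value is 2.0; with false,
    // the interleaving traced above can leave 1.75 instead.
}
```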
  22. The problem is not with two actions happening concurrently. The problem is with actions from other parallel threads happening in between successive actions in this thread. There's the rub, and I will contend that it is a lot more significant than just 2% of the problem. Further, there are plenty of situations with read operations where preventing the parallelism would work against the performance of the system.
  23. In replying to this thread, let me first of all say "Thank you." This particular thread, though it may have strayed from Jimi's original post, is the most coherent discussion of OO in LabVIEW thus far, and it has highlighted a lot of topics in words I hadn't been able to put forth. I've been staying out of the by-reference debates for the last couple of weeks except on technical points. I've made my opinions known, and I was happy to step back and see what the community proposed. Besides, you guys/gals can be a bit intimidating. Let me tell you, being a LV developer is a lot easier when you just let LV stand on its own behind the corporate mask :ninja: rather than associating your own name with a feature. And the paycheck is the same either way. What makes me want to wade back into the fray? One singularly interesting comment from JFM:
I would like to point out that each one of the reference examples is a particular refnum type with a specific set of operations available on that refnum. None of these is an implementation of a generic "reference to data" that lets you plug in your own arbitrary operations. That's a distinction that I want to highlight.
In the discussion about lossy queues the other day on LAVA, user Kevin P had an implementation for lossy queues that involved "try to enqueue the data; if you fail (timeout), then dequeue one element, then enqueue again (this time knowing you have space)." For various reasons, I proposed another solution: "Get queue status; if there isn't space, then dequeue, then enqueue." Both Kevin P's solution and mine DO NOT WORK if there are multiple VIs posting to the same queue. Why? Because between the status check and the final enqueue, another writer can slip in its own enqueue and take the space you just confirmed or freed up. Kevin's solution fails for similar reasons. This is a case where you need a locking level for a whole series of operations. To implement a lossy queue with today's primitives, you need to use Semaphores to prevent ALL accesses to the queue from the time you start the "test queue size" operation (whether you're doing that through Get Queue Status or by enqueuing and seeing if you time out) all the way through your dequeue to make space and the final enqueue.
The Queue and Notifier APIs are very complete reference-based APIs. They have locking exclusions for each queue/notifier for every operation on that queue/notifier. And yet they require a whole new level of locking in order to do this "bunched" operation. Semaphores are what is available today. I've contemplated a "Lock Queue" primitive. What would a "Lock Queue" primitive have to do? Well, it would acquire some sort of lock that would prevent other queue operations from proceeding, and it might return a completely new queue refnum -- one that refers to the exact same queue but is known to be the one special refnum that you can pass to primitives so they won't try to acquire the queue's lock. What if a programmer forgot to invoke "Unlock Queue"? His code would hang. Or, for example, the producer loop might acquire the lock and just keep infinitely producing, starving the consumer loop of ever having time to run. So, yes, it could be done, I think, but the burden would be on the programmer to know when he/she needs to lock the queue for multiple operations and remember to unlock it again.
Any reference system must leave the burden on the programmer to handle locking/unlocking. If such facilities are not provided, then there is no way to string multiple functions together on the same refnum and guarantee that the reference isn't being changed by some other parallel section of code.
All the implementations of GOOP I have ever seen have this problem. The naive approach -- "just make every method of the class exclusive with regard to every other method for a given piece of data" -- does not handle this sort of locking (and introduces its own issues when one method wants to call another method as a subVI). My point in this case is that the existence of refnum APIs in LabVIEW is not sufficient to say that a general by-reference system can be created. I'm certainly not ruling it out. But the "acquire-modify-release" mechanism has to be custom written for all the modifications that the reference's author thinks will ever need to be handled atomically. If the author didn't think of a particular functionality, or didn't get around to including it, then there's no parallel-safe way to do it.
LV2-style globals provide safety for whatever operations are in the LV2-style global, but try to string multiple operations together and you have problems. Global VIs guarantee that it is safe to read entire large cluster values from the globals without a write occurring mid-way through the read. But other than this primitive locking, they're wide open. So if you "read the value, increment the value, store the value back in the global," you have problems.
Whatever general reference system LV puts forth someday cannot automatically deal with all the locking issues that everyone seems to think a native reference implementation would handle. References can co-exist with dataflow, but only if the author of each particular refnum type is very careful about protecting the resources to which the refnums refer. It is not something that can (in the technical engineering sense) be handled automatically. It is, however, functionality that you can build when you construct your particular refnum type -- using native by-value OO to build that refnum. You can create a class that has the behavior you want on the wire, with all the locking and unlocking you're desiring, with whatever functionality you design into it. References co-exist with dataflow because the refnums themselves are dataflow, and because the authors of the refnum types have tried to provide as complete an API as possible. There is no magick bullet that will make generic references safe for parallel execution -- not in LabVIEW, not in any programming language.
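As a hedged text-language sketch of the lossy-queue discussion above (illustrative C++ only; the class, method names, and the outer lock standing in for the Semaphore or the hypothetical "Lock Queue" are my own): each queue operation is internally locked, yet the lossy enqueue still needs a lock that spans the whole check-dequeue-enqueue sequence when multiple producers share the queue.

```cpp
#include <cstddef>
#include <deque>
#include <mutex>

// Bounded queue where every individual operation locks internally,
// mirroring what the Queue primitives give you.
template <typename T>
class BoundedQueue
{
public:
    explicit BoundedQueue(size_t max) : max_(max) {}

    bool Full()                                   // analogue of Get Queue Status
    {
        std::lock_guard<std::mutex> guard(op_lock_);
        return data_.size() >= max_;
    }

    bool TryEnqueue(T value)
    {
        std::lock_guard<std::mutex> guard(op_lock_);
        if (data_.size() >= max_)
            return false;                         // stands in for an enqueue timeout
        data_.push_back(std::move(value));
        return true;
    }

    bool TryDequeue(T &out)
    {
        std::lock_guard<std::mutex> guard(op_lock_);
        if (data_.empty())
            return false;
        out = data_.front();
        data_.pop_front();
        return true;
    }

    std::mutex &WholeQueueLock() { return whole_queue_lock_; } // the "Semaphore" / hypothetical Lock Queue

private:
    size_t max_;
    std::mutex op_lock_;          // per-operation lock
    std::mutex whole_queue_lock_; // lock spanning a sequence of operations
    std::deque<T> data_;
};

// Lossy enqueue: check for space, make space if needed, then enqueue.
// Without the outer lock, a second producer can enqueue between Full()
// and TryEnqueue(), so the final enqueue can still fail; with the outer
// lock the three steps become atomic.
template <typename T>
void LossyEnqueue(BoundedQueue<T> &queue, T value)
{
    std::lock_guard<std::mutex> whole(queue.WholeQueueLock());
    if (queue.Full()) {
        T discarded;
        queue.TryDequeue(discarded);  // drop the oldest element
    }
    queue.TryEnqueue(std::move(value));
}
```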
  24. Global VIs are not in any way thread safe. It is one of the reasons that LV advocates functional globals (aka LV2-style globals) whenever you're doing parallel work.
  25. Jimi and JFM: Can you both post code showing what you're concerned about? I find it hard to follow these conversations in text without actual points in VIs to look at. I'll try to answer what I can.