Everything posted by Aristos Queue
-
This is not a bug. After analyzing the full test suite, I can say definitively that it is not behavior that should change. The explanation is not simple, so it may be worth expanding the documentation to talk about this case explicitly.

Short answers:
0) This does not affect queues at all.
1) When waiting for multiple notifiers, use the Wait for Multiple Notifications primitive. It exists to handle a lot of the complexity that I'm about to talk about.
2) There's an example program showing how to wait for ALL of a set of multiple notifiers at "<labview>\examples\general\WaitForAll.llb"

Long answer: It is very easy to think that this is a bug. I was tempted to agree until I watched what was happening in the actual assembly code and realized that the Wait for Notification nodes were correctly ignoring some notifications.

Terminology:
Notifier: a mailbox, a place in memory where messages are stored.
Notifier Refnum: a reference to one of these mailboxes, used on the block diagram to access the mailbox.
Notification: a message sent to a notifier.
Node: an icon on the block diagram; in this explanation, we're interested mostly in the nodes that represent functions for operating on notifiers.
W4N: shorthand for the "Wait for Notification" node.
W4MN: shorthand for the "Wait for Multiple Notifications" node.

The situation in the posted example (leaving out all the reentrant subVI stuff):
1) N notifiers are created.
2) A loop iterates over the list of the N, doing W4N on each one.
3) In another section of code, Send Notification is called for each of the N in a random order.

Expected result: The listener loop will check the first notifier. If it already has a notification available, the node will return the posted notification. If the notifier does not yet have a notification, the node sleeps until the notification is sent. Then the loop repeats, proceeding to the second refnum, and so on.
Whether the notification posted to a given notifier in the past or we wait for it to post doesn't affect any other notification. So we expect the loop to finish after all N notifications have arrived.

Observed result: The loop hangs. The W4N node doesn't return for one of the notifiers, as if the notifier did not have a notification. We know it does have a notification, but the node doesn't seem to be catching the message... almost as if the "ignore previous?" terminal had been wired with TRUE.

The explanation: Each notification -- each message -- has a sequence number associated with it. Each W4N and W4MN node remembers the sequence number of the latest notification it returned. In the For Loop, we're testing N notifiers. Remember that these notifiers are getting notifications in a random order. So the node always returns the message for the first notifier. But the second notifier will return only 50% of the time -- depending randomly on whether it got its notification before or after the first notifier. If the second notifier got its notification before the first notifier, then the node will consider the notification in the second notifier to be stale -- the notification has a sequence number prior to the sequence number recorded for this node.

The sequence number behavior is key to how the node knows not to return values that it has already seen. A given node does not record a sequence number for each refnum. Storing such information would be an expensive performance penalty, and thus of negative value in most cases (the vast majority of the time, notifier nodes are used either with the same refnum every time, or with cascading refnums that are accessed in a particular order, or where lossy transmission is acceptable). In the cases where you need to hear every message from a set of notifiers, that's what the W4MN node is for -- it records the last message it heard, but when it listens again, it returns all the new messages.
In the particular example, the subVIs doing the posting of notifications are a bunch of cloned copies of a reentrant VI. Each reentrant subVI waits for a single "start" notification, and then sends its "done" notification in response. The VIs are cloned using Open VI Ref and then told to run using the Run VI method.

If we put a "Wait Milliseconds" prim wired with 50 after the Run VI method, the bug appears to go away. This makes the situation seem like a classic race condition bug -- a slight delay changes behavior. In this case, adding the delay has a very important effect -- it makes sure that each reentrant subVI goes to sleep on the start notification IN ORDER. So they wake up in order. So they send their done notifications in order. In other words, adding the delay removes the randomness of the notification order, and so our loop completes successfully. And that's why this isn't a bug.
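Since LabVIEW diagrams can't be pasted as text, here is a minimal Python model of the sequence-number mechanism described above. The `Notifier` and `Wait4NotificationNode` classes and their method names are my own invention for illustration, not any LabVIEW API; the point is only that a node which remembers one "newest seen" sequence number will treat an out-of-order message as stale:

```python
import itertools

# Global, monotonically increasing counter shared by all notifiers,
# mirroring the per-message sequence numbers described above.
_seq = itertools.count(1)

class Notifier:
    """A mailbox holding only the latest message and its sequence number."""
    def __init__(self):
        self.message = None
        self.seq = 0

    def send(self, message):
        self.message = message
        self.seq = next(_seq)

class Wait4NotificationNode:
    """Models a single W4N node: it remembers the newest sequence number
    it has returned, and treats anything older as stale."""
    def __init__(self):
        self.last_seen = 0

    def try_wait(self, notifier):
        # Returns the message, or None if the node considers it stale/absent.
        if notifier.seq > self.last_seen:
            self.last_seen = notifier.seq
            return notifier.message
        return None

# Two notifiers receive messages "out of order" relative to the listener loop.
n1, n2 = Notifier(), Notifier()
n2.send("second")   # n2 is notified first...
n1.send("first")    # ...then n1, so n1 carries the higher sequence number.

node = Wait4NotificationNode()
assert node.try_wait(n1) == "first"   # newest sequence number: returned
assert node.try_wait(n2) is None      # older sequence number: stale -> "hangs"
```

The final line is the posted symptom in miniature: the second notifier really does hold a message, but its sequence number predates the one the node already returned.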
-
Jimi: I looked at your code. WOW. This is an interesting situation. I was knee deep in assembly code, ready to agree with you that it was a bug, when I realized what was going on. In a classic sense, it is not a bug, because the code is operating exactly as intended, and, indeed, exactly as documented. But the consequences in this seemingly simple case are amazing. It might even be considered a bug because we might want to change the documented behavior of the Notifier Wait to account for something like this. It's going to take me a while to write this up... I'll post it in the other thread later today or tomorrow. PS: The situation does only affect notifiers, not queues.
-
Take a look at <labview>\vi.lib\Utility\Wait for All Notifications.llb\Wait for All Notifications.vi. This shows a correct implementation of how to wait for an entire set of notifications to fire. I'm not sure what is wrong in your code, but maybe by code comparison you can find it. I'm still hunting around, but I just started looking at this and it may take a while to untangle.
-
I'm pretty sure there's no Notifier bug. The way you guys described it over there, it sounded like correct code. The Notifiers are explicitly not guaranteed to catch every notification. If you post twice and listen once, only the last message gets heard. I didn't say that earlier because there wasn't any code posted to actually inspect, but please don't spread FUD around unless we have something to actually inspect. I close 90% of reported Queue/Notifier bugs as "not a bug" because they're actually programming errors -- someone forgot to wire a terminal, or wired the wrong wire, or was doing something in two separate loops that wasn't protected, etc. In 8.0 I have one confirmed thread-lock involving Obtain Queue by name in a tight loop with Enqueue. In 8.2, there are no known Queue or Notifier bugs at this time. Edit: You've posted code on the Notifier bug... I missed that earlier... I'll take a look.
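To illustrate the "post twice, listen once" point in plain code: a notifier is just a one-slot mailbox, so a second send overwrites the first. This is a Python sketch with invented names, not LabVIEW's API:

```python
class Notifier:
    """One-slot mailbox: holds only the most recent message."""
    def __init__(self):
        self.message = None

    def send(self, message):
        self.message = message   # overwrites any unread message

    def read_latest(self):
        return self.message

n = Notifier()
n.send("first")
n.send("second")                 # the first message is now gone
assert n.read_latest() == "second"
```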
-
The failure of the Call By Ref node to do the automatic downcasting is not a bug. Jimi and I discussed this earlier regarding the dynamic dispatch VIs, which do not automatically downcast their outputs. The ability of a subVI node to automatically downcast outputs is based on the block diagram of the VI and whether or not that block diagram preserves run-time type. Since the dynamic dispatch subVIs don't know what diagram will be invoked (and new classes that do not support the automatic downcasting could be dynamically loaded into memory at any time), we can't do the downcast. The same is true for the Call By Ref node. The VI you happen to be calling could do the downcast, but the CBR doesn't know what VI that will be until you are actually running.
-
Suggestions for improving this in the future are welcome. Here's how "probes with classes" support stands today:
1) The probe is built for the wire type. Since any type of data may come down that wire at run time, the wire type is the most specific probe we can put up. And so we do.
2) The probe does show the specific type that comes down the wire, but none of the more specific data, because there's no place in the probe to show this data. We discussed all sorts of options for probes that add/remove fields when a specific type comes down the wire. This creates a very jumpy probe (bad user experience).
3) You can create custom probes for class types. You might write a probe for a specific class hierarchy that displays a subpanel for the specific classes. This is A LOT of effort to go to for a probe, so I'd only recommend doing this for a class that you're going to release as a library for other users to use in development.
4) Why was the current solution found to be acceptable for this release? Whenever you have child data traveling on a parent wire, the vast majority of the time you don't care what the values are in the child fields. The problem being debugged generally has to do with one of the fields of the parent class. In the other, smaller percentage of the time, you can put debug code on your diagram that does a "to more specific" cast of the object and then probe the output of that wire. I hope this helps.
-
The need for lock in get-modify-set pass
Aristos Queue replied to bsvingen's topic in Object-Oriented Programming
Did you ever use the POSIX library or any thread spawn command? If not, the reason you didn't have to have locking is that your entire program executed serially, with no parallelism whatsoever. In C++ you have to explicitly declare new threads, where to spawn them and where to merge them back together. Unless you spawned new threads manually, this isn't a valid example. -
The need for lock in get-modify-set pass
Aristos Queue replied to bsvingen's topic in Object-Oriented Programming
Exactly which languages are you talking about where the locking is not necessary? I regularly program in Java, C, and C++. In the past I've worked in Pascal, Haskell, Basic, and Lisp. In all of these languages, if the programmer spawns multiple threads, the programmer must handle all locking. These languages have no support for automatically protecting data from modification in separate threads. The closest you come is the "synchronized" keyword in Java, but even that doesn't handle all the possible woes, particularly if you're trying to string multiple "modify" steps together in a single transaction. Am I misunderstanding what you're referring to? -
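For the record, here is the kind of manual locking those textual languages require once you spawn threads yourself. This is a Python sketch (function and variable names are mine): a shared counter that totals correctly only because every read-modify-write holds a lock.

```python
import threading

counter = 0
lock = threading.Lock()

def increment_many(n):
    global counter
    for _ in range(n):
        # The lock makes the read-modify-write sequence atomic. Without it,
        # two threads can read the same value and one increment gets lost.
        with lock:
            counter += 1

threads = [threading.Thread(target=increment_many, args=(100_000,))
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

assert counter == 400_000  # correct only because every access was locked
```

The language gives you the lock primitive, but deciding where to hold it is entirely the programmer's job -- which is the point above.
-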
The need for lock in get-modify-set pass
Aristos Queue replied to bsvingen's topic in Object-Oriented Programming
The design is as it is for purely technical reasons. I'd be ashamed if we actually changed a good design to accommodate a patent application. -
Need help with inheritance and encapsulation
Aristos Queue replied to Lee Robertson's topic in Object-Oriented Programming
Argh. ISP changed the mapping of the URLs... For all URLs in the previous post, replace http://www.outriangle.org/jabberwocky/<filename> with http://jabberwocky.outriangle.org/<filename> -
Need help with inheritance and encapsulation
Aristos Queue replied to Lee Robertson's topic in Object-Oriented Programming
You don't prevent the coercion! You embrace it! I've been meaning to post these for the last couple of days, but I haven't gotten around to it. There's a write-up that goes with them, but it isn't ready yet. But take a look. This is the original solution that doesn't work in a runtime engine: http://www.outriangle.org/jabberwocky/Fact...ingClassRef.zip This is the modified solution that should work in a runtime engine: http://www.outriangle.org/jabberwocky/Factory_UsingVIs.zip In both cases, open the enclosed .lvproj file and then take a look at "DEMO Factory.vi". You can explore from there. -
Share your favorite tips and shortcuts
Aristos Queue replied to m3nth's topic in Development Environment (IDE)
Enough folks liked this that red is now the default color in LV8.2. New tip that may not be known:

Common frustration: You have a structure node on the diagram. Drop a string control inside it and start typing. At some point the string reaches the boundary of the node. If you have autogrow turned on, your structure will start stretching. If you have autogrow turned off, your string will vanish under the edge of the structure. Either way, not desirable. So you click away from the string, resize it, maybe turn on scrollbars, then start typing again.

But... from LV7.0 forward... After you drop the string control, type for a while. When the string reaches the max width that you want, hit shift+Enter. Keep typing. The string will now word-wrap at the current width and grow vertically. When you reach the maximum desired height, hit shift+Enter again. The vertical scrollbar will appear and the string will now scroll instead of growing any further. Sorry... the trick doesn't work on free labels. Perhaps someday... -
The need for lock in get-modify-set pass
Aristos Queue replied to bsvingen's topic in Object-Oriented Programming
This depends on what you think of as "an object." I contend you're very used to doing this -- you just don't recognize it. Waveform, timestamp, matrix -- these are object types. They are complex data types with well-defined operations for manipulating the data. They may aggregate many pieces of data together, but they are exposed to the world as a single coherent data type. NI isn't going to be replacing these types with LVClasses any time soon -- we want LabVOOP to be seasoned before such core components become dependent upon it. But if we were developing these new today, they would be classes. Every cluster you create is an object definition -- with all of its data public and hanging out for any VI to modify. These are the places you should be looking to use LVClasses. And you don't want references for these. I tend to think of integers as objects because I've been working that way for 10+ years. You don't add two numbers together. One integer object uses another integer object to "add" itself. Forking a wire is an object copying itself. An array is an object. Consider this C++ code (cleaned up so it compiles):

#include <vector>
typedef std::vector<int> IVec;

// Copy x, append the contents of y, and return the result by value.
IVec DoStuff(const IVec &x, const IVec &y) {
    IVec z(x);
    z.insert(z.end(), y.begin(), y.end());
    return z;
}

int main() {
    IVec a, b;
    a.push_back(10);
    b = a;                    // by-value: b becomes an independent copy of a
    IVec c = DoStuff(a, b);
}

Consider the line "b = a;" This line duplicates the contents of a into b. This is by-value syntax. The syntax is there and valuable in C++. Java, on the other hand, doesn't have this ability. That language is exclusively by reference. If you use "b = a;" in Java, you've just said "b and a are the same vector." From that point on, "b.add(20);" would have the same effect as "a.add(20);" The by-value syntax is just as meaningful for objects. In fact, in many cases, it is more meaningful. But you have to get to the point where you're not just looking at system resources as objects but data itself as an object. Making single specific instances of that data that reflect specific system resources is a separate issue, but it is not the fundamental aspect of dataflow encapsulation. -
Six LabVIEW 8.0 features no one talks about
Aristos Queue replied to Aristos Queue's topic in Development Environment (IDE)
There are some P4 settings you can put in for remote depots that create a local mirror of the depot and only sync back occasionally. I've only heard about these settings from our international R&D folks (the ones in Shanghai who have to access the P4 servers here in Austin, TX), so I can't give you any more info than that, but it might be worth skimming the P4 help documents. -
The need for lock in get-modify-set pass
Aristos Queue replied to bsvingen's topic in Object-Oriented Programming
Jaegen and JFM both ask effectively the same question: why hasn't anyone built by-value classes before? It wasn't possible. Only fundamental changes to the LV compiler make this possible. There just wasn't any way for any user to create classes that would fork on the wire and enforce class boundaries for data access. Doing inheritance by value was right out.

As for the "99.9%" estimate -- all the fever about the need for by-ref classes with locking is about doing things that you cannot do effectively with LabVIEW today. The by-value classes are all about improving all the code you can already write with LV today. It's about creating better error code clusters, numerics that enforce range checks, strings that maintain certain formatting rules, and arrays that guarantee uniqueness. Suppose you have a blue wire running around on your diagram, and suppose that this 32-bit integer is supposed to represent an RGB color. How do you guarantee that the value on that wire is always between 0x00000000 and 0x00FFFFFF? A value of 0x01000000 can certainly travel on that wire, and someone could certainly wire that int32 to a math function that outputs a larger-than-desired number. In fact, most of the math operations probably shouldn't be legal on an int32 that is supposed to be a color -- if I have a gray 0x00999999 and I multiply by 2, I'm not going to end up with a gray that is twice as light or any other meaningful color (0x01333332). Every VI that takes one of these as an input is going to have to error check the input, because there's no way to enforce the limitation -- until LV classes.

My entire drive has been to highlight that software engineers in other languages have found that organizing the functions in your language around the objects of the system, instead of the tasks of the system, creates better software. When we apply this design style to a language IT SHOULD NOT CHANGE THE LANGUAGE NATURE. In fact, OO design can be done in LabVIEW WITHOUT LabVOOP in LV7.1.
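As a sketch of that RGB idea in textual form -- a Python class invented here for illustration, where the point is the enforced invariant, not the language:

```python
class RGBColor:
    """A by-value 'class' that guarantees its data is always a valid
    24-bit RGB value -- the invariant an open int32 wire cannot enforce."""
    __slots__ = ("_value",)

    def __init__(self, value):
        if not 0x000000 <= value <= 0xFFFFFF:
            raise ValueError("RGB value out of range: %#x" % value)
        self._value = value

    @property
    def value(self):
        return self._value

    def lighten(self, amount):
        # A color-meaningful operation: brighten each channel, clamped,
        # instead of letting raw integer math overflow into nonsense.
        r = min(0xFF, ((self._value >> 16) & 0xFF) + amount)
        g = min(0xFF, ((self._value >> 8) & 0xFF) + amount)
        b = min(0xFF, (self._value & 0xFF) + amount)
        return RGBColor((r << 16) | (g << 8) | b)

gray = RGBColor(0x999999)
assert gray.lighten(0x33).value == 0xCCCCCC   # stays a valid color
try:
    RGBColor(0x01000000)                       # out-of-range value rejected
except ValueError:
    pass
```

The constructor refuses out-of-range values and the only math offered is math that means something for colors, so no caller ever needs to re-check the invariant.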
At NI Week presentations from 2002 through 2005, I and others gave presentations that talked about the importance of using typedef'd clusters and disciplining yourself to only unbundle those clusters in specific VIs. I talked about creating promises to yourself not to use certain VIs outside of certain other VIs. With LV8.2, we have OO programming. LabVOOP is a set of features that let the language and programming environment enforce these "data walls," letting you formally define the boundaries of data. Everyone keeps looking at LabVOOP as the silver bullet to solve the problem they've been having with LabVIEW -- references. The greatest value of LabVOOP is not in solving any new problems. It's in changing the fundamental design of all the previous VIs you've ever written. As time goes forward, as the UI experience of LabVOOP improves, I think this will become more obvious.

My opinion, not shared by most of National Instruments, is that eventually -- in about 5-10 years -- we will have LV users who learn "New>>Class" before they learn "New>>VI", because the VI is not the primary component of the language that a user should be thinking about if they are going to use LV for building applications. If they are going to use LV as a measurement tool -- dropping a bunch of Express VIs, creating one-off utilities for charting values from a DAQ channel -- then they will be more interested in VIs and may never create a class.

Jaegen, you asked if I can refute the need for references. No, I can't. As I said, improving references would help LV. But it is an issue for all LabVIEW data, not just LabVIEW class data: locking references for arrays, for numerics, for refnums themselves (ha!), AND for classes. Classes certainly have some special needs when we do references (primarily the automatic deref in order to do a dynamic dispatch method call), but "by reference" is a conversation for LabVIEW as a whole, not for LabVOOP by itself.
I saw a beautiful thing yesterday -- an internal developer with a several hundred VI project he's built since August in which all the VIs are in classes. He's used the classes to create a very flexible string token parser and code generator. He tells me that the code is much easier to edit than his previous version that didn't use classes. In fact, he scrapped the old code entirely to do this rewrite. ... And not a reference in sight. -
The need for lock in get-modify-set pass
Aristos Queue replied to bsvingen's topic in Object-Oriented Programming
No, a native implementation would not take care of this. Let me lay that myth to rest right now. If we had a native by-reference implementation, the locking would have been left completely as a burden to the programmer. Would you suggest we put an "acquire lock" and "release lock" around every "unbundle - modify - bundle" sequence? When you do an unbundle, how do we know that you'll ever do a bundle? There are lots of calls to unbundle that never reach a bundle node, or, if they do, it's buried in a subVI somewhere (possibly a dynamically specified subVI!). There are bundle calls that never start at an unbundle.

We could've implemented a by-reference model and then had the same "Acquire Semaphore" and "Release Semaphore" nodes that we have today, only specialized for locking data. Great -- now the burden on the programmers is that every time they want to work with an object, they have to obtain the correct semaphore and remember to release it -- and given how often Close Reference is forgotten, I suspect that the release would be forgotten as well. Should you have to do an Acquire around every Multiply operation? A Matrix class would have to if the implementation was by reference. Oh, LV could put it implicitly around the block diagram of Multiply, but that would mean you'd be Acquiring and Releasing between every operation -- very inefficient if you're trying to multiply many matrices together. So, no, LV wouldn't do that sort of locking for you. There is no pattern for acquire/release we could choose that would be anywhere near optimal for many classes.

The by-reference systems used in the GOOP Toolkit and other implementations do not solve the locking problems. They handle a few cases, but by no means do they cover the spectrum. The by-value system is what you want for 99.9% of classes. I know a lot of you don't believe me when I've said that before. :headbang: It's still true. :ninja: -
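To see why auto-locking each primitive operation wouldn't save the programmer, here is a Python sketch (class and method names invented): every get and set takes an internal lock, yet the get-modify-set sequence as a whole is still not atomic.

```python
import threading

class RefCounter:
    """A by-reference value whose primitive operations each take an
    internal lock, the way an auto-locking by-reference class might."""
    def __init__(self):
        self._lock = threading.Lock()
        self._value = 0

    def get(self):
        with self._lock:
            return self._value

    def set(self, v):
        with self._lock:
            self._value = v

# Simulate two threads interleaving a get-modify-set, step by step:
c = RefCounter()
a_read = c.get()     # "thread A" reads 0 (lock taken and released)
b_read = c.get()     # "thread B" reads 0 before A writes back
c.set(a_read + 1)    # A writes 1
c.set(b_read + 1)    # B also writes 1 -- A's increment is lost
assert c.get() == 1  # two increments happened; the counter only reached 1
```

The only fix is a lock held across the whole transaction, which is exactly the burden that, as argued above, would land back on the programmer.
-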
The need for lock in get-modify-set pass
Aristos Queue replied to bsvingen's topic in Object-Oriented Programming
This is a reply to the original post, not to any of the subsequent comments. I want to focus on "when would this be a problem in a real situation?"

Simplest case: unique IDs. I have a database that can be accessed simultaneously by many people, such as a banking system. At many branches around the country, people may simultaneously open new accounts. Each account should have a unique account number. The counter that stores the "next available account number" needs to have the "get modify set" lock -- get the current value, increment it, save it back for the next request. Otherwise you may have two accounts that both get the same value, both increment, and then both set the new value back.

Wikipedia is a site where anyone can edit an encyclopedia entry and which does *not* support a locking mechanism. If I view a page, I might see something wrong, so I edit the page to fix it. Meanwhile, someone else notices a different part of the same article and realizes they can add some additional information. They start editing. They submit their new information, then I submit my fact correction. Their new information is lost, because mine is the final version of the article checked in and I didn't have their changes in my edited version.

These are cases of data editing. You can have more interesting locking requirements... take this forum, for example. Suppose I post a message which asks a question: "Can I open a VI in LabVIEW?" You see the message and start composing your reply to answer the question. Before you get a chance to post your message, I edit my original post so it asks the negative of my original question: "Is it impossible to open a VI in LabVIEW?" I hit re-submit, and then you hit submit on your post. You submitted "Yes" because you were answering the original question. But the "last edited" timestamp on my post is before the timestamp on your reply, so it looks like you're answering my second question.
Which then leads to a ton of posts from other people telling you you're wrong. This is a case where you and I are not modifying exactly the same entry (we both have separate posts), but we're both editing the same conversation thread.

Any time two systems may be updating the same resource -- DAQ channel settings, OS configuration, database tables, etc. -- you have a danger of these "race conditions" occurring. It's a race because if the same events had occurred in a different order -- an order that a lock would've enforced -- the results would be very different. These are the hardest bugs in all of computer science to debug because they don't necessarily reproduce... the next time you do the test, the events might occur in a different interleaved order and so the results come out correct.

Further, if you do start applying locks, you have to have a "mutex ranking" that says you can never try to acquire a "lower rank" lock if you've already acquired a higher rank lock; otherwise you'll end up deadlocked. The ranking of the locks is entirely the programmer's rule of thumb to implement -- no implementation of threads/semaphores/mutexes or other locking mechanism that I've ever heard of will track this for you, since run-time checking of lock acquisition order would be a significant performance overhead to the system. -
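To make the mutex-ranking rule concrete, here is a toy Python sketch (all names invented) that tracks each thread's acquisition order and refuses an out-of-rank acquire. Real systems leave this as a programmer convention precisely because of the run-time overhead mentioned above; this is just what the convention means:

```python
import threading

class RankedLock:
    """A lock with a rank; a thread may only acquire locks in increasing
    rank order. That rule makes the classic deadlock cycle (A holds 1 and
    wants 2, B holds 2 and wants 1) impossible."""
    _held_ranks = threading.local()

    def __init__(self, rank):
        self.rank = rank
        self._lock = threading.Lock()

    def acquire(self):
        held = getattr(self._held_ranks, "ranks", [])
        if held and held[-1] >= self.rank:
            raise RuntimeError("lock rank violation: would risk deadlock")
        self._lock.acquire()
        self._held_ranks.ranks = held + [self.rank]

    def release(self):
        self._lock.release()
        self._held_ranks.ranks.pop()

low, high = RankedLock(1), RankedLock(2)
low.acquire(); high.acquire()      # increasing rank: allowed
high.release(); low.release()

high.acquire()
try:
    low.acquire()                   # decreasing rank: refused
except RuntimeError:
    pass
finally:
    high.release()
```
-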
With the exception of "Application Exit" (in LV8.2) and user-defined events, no events of the Event Structure can be triggered programmatically. If a panel is closed programmatically, that does not fire a Panel Close event. This is important for VIs that filter the panel close event, then do some clean-up work, and then close their own panels (or do the same on behalf of another VI). Similar logic applies to Value Changed (where the value gets changed, and in the event code the value is somehow modified [perhaps pinned to be within a certain range]). The system breaks down if the programmatic work triggers events. App Exit was added as an exception predominantly because it *can't* trigger itself (once LV is exiting, it's exiting!) and it provides a way for VIs to guarantee a time to do their clean-up work (such as resetting hardware) in built applications.
-
If it isn't in the palettes, it doesn't always get checked. Only real zealots go digging into all the subVIs in vi.lib. Not that I'm referring to anyone on this forum or anything.
-
Now there's a signature... "I know enough LabVIEW to be dangerous." Maybe Michael could use that for the new LAVA users. Instead of "2 more posts to go" it could be "Enough LAVA to be dangerous."
-
Bingo. Your thoughts accord with mine, sir. In fact, all of the type propagation information needed to support this function is available today in the compiler. The problem is the user interface. We wouldn't be able to just popup on a terminal and say "This terminal is...>> XYZ" because what we would need is a way to specify which terminal, possibly a fairly complex mapping. And if we implement the simple form, you'd probably next want us to support the case of the MinMax.vi -- given two class inputs that produce two class outputs, the top one being the greater and the bottom one being the lesser, you'd want the outputs to become the nearest common parent to two input types. So to really handle this you need a fairly complex map. And then there's the enforcement on the block diagram, with some very arcane error reports, such as, "Although data from X and Y both propagate to Z, the runtime data type of Y is not preserved at the Event Structure tunnel." The ones we have for dynamic output terminals are confusing enough. With the dynamic in/out, I was able to implement the background color of the wire to indicate whether or not the data was successfully propagating to the output FPTerm. With multiple sets of propagations on the diagram, I'd have to have some annotation on the wire probably involving a numeric code of which FPTerms were involved in setting the data on the wire. I tried pulling this together a couple years ago during LabVOOP development. It's a continuing problem. At this point I'm actually considering something more akin to an "Advanced Conpane Configuration" dialog which would allow you to fully define the mapping of input terms to output terms, specify 5 different behaviors for unwired input terminals and about 3 other really useful behaviors that a simple "click on conpane click on control" cannot specify. 
I'm starting to think there exists a rather elegant solution to the type propagation problem with the addition of a single new primitive to the language. I've been drawing sketches of a "Preserve Runtime Type" prim, which has a look/feel identical to the "To More Specific" node, but instead of casting to the wire data type, it casts to the type of the data on the wire at runtime. It's a little weird to explain why such a node is necessary, but the short description is that with it you could wrap a dynamic dispatch in a static VI and then do the runtime type cast on the output of the dynamic subVI call to preserve the type... then make the static VI be your public interface. (Sorry, I don't have the sketches in a .png form to post right now.)
-
XControl & .NET Framework
Aristos Queue replied to Joachim's topic in Application Design & Architecture
You can't use a .Net refnum as the data type of the XControl, nor pass such refnums to/from the host VI through Properties of the XControl. Using .Net functionality purely internal to the implementation of the XControl should work fine. If you can't get that to work, I suggest posting to ni.com to get feedback from some of the .Net folks -- neither XControls nor .Net are my typical domain. -