Posts posted by Aristos Queue

  1. If the code that uses this text file is written in LV, then how about putting the text file into a string constant on a VI and password protecting the block diagram? Instead of using a text file, call LV to get the text. If you need editing capability, this isn't as helpful.

    You could hide your entire block of text by writing it into the LabVIEW .ini file.

    I'm trying to think of some clever way of using LV to solve this issue... not much I can suggest if you're not in a development environment. We leave out all of our "save" capabilities in the runtime engine.

  2. I forgot to mention in my solution post yesterday that the correct synchronization tool to be using in this case is the Rendezvous. Create a Rendezvous of size N and let each reentrant clone post that it has reached the Rendezvous, then proceed past that point. If you need to transmit data in this case, you can collect the data together in a common array (protected with a Semaphore or a functional global).
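
    Since LabVIEW diagrams can't be pasted as text, here is a rough Java analogue of that pattern (a sketch of mine, not anything shipped with LabVIEW): a CyclicBarrier plays the role of the size-N Rendezvous, and a lock-protected list plays the role of the common array.

    ```java
    import java.util.ArrayList;
    import java.util.List;
    import java.util.concurrent.CyclicBarrier;

    public class RendezvousDemo {
        static final int N = 4;                            // number of "reentrant clones"
        static final CyclicBarrier rendezvous = new CyclicBarrier(N);
        static final List<Integer> results = new ArrayList<>(); // the shared "common array"

        public static void main(String[] args) throws InterruptedException {
            Thread[] clones = new Thread[N];
            for (int i = 0; i < N; i++) {
                final int id = i;
                clones[i] = new Thread(() -> {
                    int data = id * 10;                    // stand-in for each clone's work
                    synchronized (results) {               // semaphore-style protection of the array
                        results.add(data);
                    }
                    try {
                        rendezvous.await();                // post arrival, block until all N arrive
                    } catch (Exception e) {
                        throw new RuntimeException(e);
                    }
                    // Code past this point runs only after every clone has reached the rendezvous.
                });
                clones[i].start();
            }
            for (Thread t : clones) t.join();
            System.out.println("All clones passed the rendezvous: " + results);
        }
    }
    ```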

  3. OK, so if I understand you correctly, you are thinking of something with these actions:

    1. Lock the data and do something

    2. Do something else and unlock the data

    Well, if you want to execute these actions you can call these methods in another method that has locking enabled. After all, this sequence of actions on this object can be considered a new action. Why not make it a method then? There must be a reason why you want to leave the data in a locked state, come back to it later, and unlock it. What actions would you like to perform that you cannot do in a method?

    Oh, there's nothing you can't do if

    1. you know enough about locking to understand that you need to put it in a method, and
    2. you have permission to add a method to the class. You might not be able to add a method if you don't have the password to the class, the class is a shared component among many groups that cannot be modified, the class has been validated by your ISO9000 inspection and can't change further without having to redo the inspection, or changing the class would change some important aspect of compatibility. Other reasons may exist; these are just the ones I can think of off the top of my head.

    Can you give an example of a situation where you think locking during the method execution does not work?

    The classic example is reading tables of a database. There are tables that are updated only rarely but read from frequently, so to maximize performance, you don't lock the table for a read operation, only for write. Locking on read can make many database projects perform so badly as to be unusable.
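
    In text languages this read-mostly pattern is usually expressed with a readers-writer lock. A minimal Java sketch of the idea (my illustration, not any LabVIEW or database API): any number of readers proceed concurrently, and only a writer takes the exclusive lock.

    ```java
    import java.util.HashMap;
    import java.util.Map;
    import java.util.concurrent.locks.ReentrantReadWriteLock;

    public class ReadMostlyTable {
        private final Map<String, String> rows = new HashMap<>();
        private final ReentrantReadWriteLock lock = new ReentrantReadWriteLock();

        // Reads share the lock: frequent readers never block each other.
        public String read(String key) {
            lock.readLock().lock();
            try {
                return rows.get(key);
            } finally {
                lock.readLock().unlock();
            }
        }

        // Writes are exclusive: the rare updater waits for readers to drain,
        // then blocks everyone until the update is done.
        public void write(String key, String value) {
            lock.writeLock().lock();
            try {
                rows.put(key, value);
            } finally {
                lock.writeLock().unlock();
            }
        }
    }
    ```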

  4. OK, thanks. I can put a semaphore around the block to serialize the operation, and the problem can be solved. What I wondered about was whether, if I put the whole operation as a block inside a LV2 global, the LV2 global will protect the operation just as if the operation were protected by semaphores. That is what my three examples do, and so far I have seen nothing to suggest otherwise.

    Excellent. I think we're both on the same page now.

  5. But now I just read this (from another thread):

    This makes me confused again. So I will ask the question a bit differently. A functional global à la LCOD has lots of actions. Every one of these actions exists inside the global, and the action itself is a normal block diagram that can consist of calls to subVIs and so on. Is a call to that global safe? Do I run any risk of "inter-global-confusion" if two or more calls to the same instance happen simultaneously?

    Suppose I have a LV2-style global that stores a double-precision floating point value and supports these five operations:

    "Round to nearest integer"

    "Get Value"

    "Add 1"

    "Divide by 2"

    and

    "Multiply by 2"

    Then I write this code:

    1) Round to nearest integer

    2) Get the value

    3) if the value is odd, then Add 1

    4) if the value is not equal to 2, then Divide by 2

    My expectation is that I will always end up with an even number for my result (I realized I had to put the "not equal to 2" in for this example since 2/2 is 1). For whatever reason, I need to make sure the value in there is two. Now I do these four steps in two parallel sections of code (a and b). Remember, each individual operation is protected, but the aggregate is not protected. So I end up with this:

    Initial value in LV2-style global: 5.3

    1a) Round to nearest integer (was 5.3, now 5.0)

    2a) Get the value (it returns 5.0)

    1b) Round to nearest integer (was 5.0, now 5.0)

    2b) Get the value (it returns 5.0)

    3a) if the value is odd (we got 5.0) then Add 1 (was 5.0, now 6.0)

    3b) if the value is odd (we got 5.0) then Add 1 (was 6.0, now 7.0)

    4a) if the value is not equal to two (we got 5.0, so we assume it was 6.0) then Divide by 2 (was 7.0, now 3.5)

    4b) if the value is not equal to two (we got 5.0, so we assume it was 6.0) then Divide by 2 (was 3.5, now 1.75)

    A completely different result occurs if the user puts a semaphore around the code so that these instructions are forced to happen as a complete block -- the final answer comes out to 2.0.

    This is what I'm talking about -- each individual operation is locked, but the overall operation is not locked.
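
    The same trap is easy to reproduce in a text language. In this Java sketch (mine, for illustration only) every operation is individually synchronized, just like the actions of the functional global, yet two threads interleaving the four steps can end at 1.75 exactly as in the trace above; only the outer lock around the whole sequence guarantees 2.0.

    ```java
    public class Lv2GlobalDemo {
        private double value = 5.3;                    // the "LV2-style global"
        private final Object outerLock = new Object(); // plays the role of the semaphore

        // Each individual operation is protected, like the actions of the global.
        public synchronized void roundToNearest() { value = Math.rint(value); }
        public synchronized double get()          { return value; }
        public synchronized void addOne()         { value += 1.0; }
        public synchronized void divideByTwo()    { value /= 2.0; }

        // The four-step sequence from the post.
        public void runSequence() {
            roundToNearest();                  // 1) round to nearest integer
            double v = get();                  // 2) get the value
            if (((long) v) % 2 != 0) {         // 3) if the value is odd, add 1
                addOne();
                v += 1.0;                      //    what we now assume is in the global
            }
            if (v != 2.0) divideByTwo();       // 4) if (we assume) it isn't 2, divide by 2
        }

        public static void main(String[] args) throws InterruptedException {
            Lv2GlobalDemo g = new Lv2GlobalDemo();
            // Remove the synchronized blocks and these two "sections of code" can
            // interleave as in the 1a/2a/1b/2b trace, finishing at 1.75 instead of 2.0.
            Thread a = new Thread(() -> { synchronized (g.outerLock) { g.runSequence(); } });
            Thread b = new Thread(() -> { synchronized (g.outerLock) { g.runSequence(); } });
            a.start(); b.start();
            a.join();  b.join();
            System.out.println(g.get());       // with the outer lock: always 2.0
        }
    }
    ```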

  6. Why do you think that? This acquire-modify-release is the same every time. Pure code replication.

    No indeed, but it can handle the 98% standard cases. Simply to prevent modifications that are being made from getting screwed up because two of those actions happen in parallel. That's the most common problem.

    The problem is not with two actions happening concurrently. The problem is with actions from other parallel threads happening in between successive actions in this thread. There's the rub, and I will contend that it is a lot more significant than just 2% of the problem. Further, there are plenty of situations with read operations where preventing the parallelism would work against the performance of the system.

  7. In replying to this thread, let me first of all say "Thank you." This particular thread, though it may have strayed from Jimi's original post, is the most coherent discussion of OO in LabVIEW thus far. And it has highlighted a lot of topics in words I hadn't been able to put forth.

    I've been staying out of the by-reference debates for the last couple of weeks except on technical points. I've made my opinions known, and I was happy to step back and see what the community proposed. Besides, you guys/gals can be a bit intimidating. Let me tell you, being a LV developer is a lot easier when you just let LV stand on its own behind the corporate mask :ninja: rather than associating your own name with a feature. And the paycheck is the same either way. ;)

    What makes me want to wade back into the fray? One singularly interesting comment from JFM:

    The by-ref design already exists in LabVIEW: we have VI Server, queues, notifiers, file I/O, XControls, etc. For me it is therefore clear that by-ref can co-exist with dataflow. The existing by-ref GOOP implementations also show that OOP is possible with a by-ref approach in LabVIEW.

    Whether our by-ref wish is in line with the nature of LV, I leave to others, but since we have a multithreaded system we need synchronization.

    A by-ref system that solves synchronization is, in my opinion, easier to debug than a system using globals etc., since I can follow my references to find all access nodes.

    I would like to point out that each one of the reference examples is a particular refnum type with a specific set of operations available on that refnum. None of these is an implementation of a generic "reference to data" that lets you plug in your own arbitrary operations. That's a distinction that I want to highlight.

    In the discussion about lossy queues the other day on LAVA, user Kevin P had an implementation for lossy queues that involved "try to enqueue the data; if you fail (timeout), then dequeue one element, then enqueue again (this time knowing you have space)." For various reasons, I proposed another solution: "Get queue status; if there is no space, then dequeue, then enqueue."

    Both Kevin P's solution and mine DO NOT WORK if there are multiple VIs posting to the same queue. For example:

    The queue has max size of 5. There are currently 4 elements already in the queue.

    At time index t, these events happen:

    t = 0, VI A does Get Status and the value 4 returns

    t = 1, VI B does Get Status and the value 4 returns

    t = 2, VI B does Enqueue

    t = 3, VI A does Enqueue -- and hangs because there's no space left

    Kevin's solution fails for similar reasons.

    This is a case where you need a locking level for a whole series of operations. To implement a lossy queue with today's primitives, you need to use Semaphores to prevent ALL accesses to the queue from the time you start the "test queue size" operation (whether you're doing that through Get Queue Status or by enqueuing and seeing if you time out) all the way through your dequeue to make space and the final enqueue.
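
    Here is that "lock the whole series" idea sketched in Java (my own illustration; LossyQueue is a hypothetical name, not an existing class): one semaphore is held across the status check, the space-making dequeue, and the enqueue, so no other producer can slip in between.

    ```java
    import java.util.ArrayDeque;
    import java.util.concurrent.Semaphore;

    public class LossyQueue<T> {
        private final ArrayDeque<T> queue = new ArrayDeque<>();
        private final int maxSize;
        // Plays the role of the LabVIEW Semaphore guarding the whole series of operations.
        private final Semaphore guard = new Semaphore(1);

        public LossyQueue(int maxSize) { this.maxSize = maxSize; }

        // Lossy enqueue: if the queue is full, discard the oldest element to make room.
        public void lossyEnqueue(T item) throws InterruptedException {
            guard.acquire();                     // lock BEFORE testing the size...
            try {
                if (queue.size() >= maxSize) {   // "Get Queue Status"
                    queue.pollFirst();           // dequeue one element to make space
                }
                queue.addLast(item);             // enqueue, now guaranteed to have room
            } finally {
                guard.release();                 // ...and unlock only after the final enqueue
            }
            // Without the guard, two producers can both see size 4 and both enqueue,
            // and one of them hangs (or the queue overfills), as in the t=0..3 trace.
        }
    }
    ```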

    The Queue and Notifier APIs are very complete reference-based APIs. They have locking exclusions for each queue/notifier for every operation on that queue/notifier. And yet they require a whole new level of locking in order to do this "bunched" operation. Semaphores are what is available today. I've contemplated a "Lock Queue" primitive. What would a "Lock Queue" primitive have to do? Well, it would acquire some sort of lock that would prevent other queue operations from proceeding, and it might return a completely new queue refnum -- one that is to the exact same queue but is known to be the one special refnum that you can pass to primitives and they won't try to acquire the queue's lock. What if a programmer forgot to invoke "Unlock Queue"? His code would hang. Or, for example, the producer loop might acquire the lock and just keep infinitely producing, starving the consumer loop of ever having time to run. So, yes, it could be done, I think, but the burden would be on the programmer to know when he/she needs to lock the queue for multiple operations and remember to unlock it again.
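
    To make that contemplated primitive a bit more concrete, here is one shape such an API could take, sketched as a Java interface (entirely hypothetical -- no such primitive exists in LabVIEW today):

    ```java
    // Hypothetical sketch only; names and semantics are my invention.
    public interface LockableQueue<T> {
        void enqueue(T item) throws InterruptedException; // takes the queue's internal lock
        T dequeue() throws InterruptedException;          // takes the queue's internal lock

        // Acquires a lock that blocks all other operations on this queue, and returns
        // the one special refnum-like handle whose operations skip the internal lock.
        LockedQueue<T> lock() throws InterruptedException;

        interface LockedQueue<T> {
            void enqueue(T item);
            T dequeue();
            // Forget to call this and every other user of the queue hangs.
            void unlock();
        }
    }
    ```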

    Any reference system must leave the burden on the programmer to handle locking/unlocking. If such facilities are not provided, then there is no way to string multiple functions together on the same refnum and guarantee that the reference isn't being changed by some other parallel section of code. All the implementations of GOOP I have ever seen have this problem. The naive approach -- just make every method of the class exclusive with regard to every other method for a given piece of data -- does not handle this sort of locking (and introduces its own issues when one method wants to call another method as a subVI).

    My point in this case is that the existence of refnum APIs in LabVIEW is not sufficient to say that a general by-reference system can be created. I'm certainly not ruling it out. But the "acquire-modify-release" mechanism has to be custom written for all the modifications that the reference's author thinks will ever need to be handled atomically. If the author didn't think of a particular functionality, or didn't get around to including it, then there's no parallel-safe way to do it. LV2-style globals provide safety for whatever operations are in the LV2-style global, but try to string multiple operations together and you have problems. Global VIs guarantee that it is safe to read entire large cluster values from the globals without a write occurring midway through the read. But other than this primitive locking, they're wide open. So if you "read the value, increment the value, store the value back in the global," you have problems. Whatever general reference system LV puts forth someday cannot automatically deal with all the locking issues that everyone seems to think a native reference implementation would handle.

    References can co-exist with dataflow, but only if the author of each particular refnum type is very careful about protecting the resources to which the refnums refer. It is not something that can (in the technical engineering sense) be handled automatically. It is, however, functionality that you can build when you construct your particular refnum type -- using native by-value OO to build that refnum. You can create a class that has the behavior you want on the wire, with all the locking and unlocking that you're desiring, with whatever functionality you design into it. References co-exist with dataflow because the refnums themselves are dataflow, and because the authors of the refnum types have tried to provide as complete an API as possible. There is no magic bullet that will make generic references safe for parallel execution -- not in LabVIEW, not in any programming language.

  8. But regarding the locking: are my assumptions correct that within the global the get-modify-set pass is locked? I mean, since there is no traditional get-mod-set pass at all, only an ordinary functional global VI call, I would believe that this is thread safe. Will there be any change if the modify VI is reentrant?

    Global VIs are not in any way thread safe. It is one of the reasons that LV advocates functional globals (aka LV2-style globals) whenever you're doing parallel work.

  9. This is not a bug. :D After analyzing the full test suite, I can say definitively that it is not behavior that should change. :D:D The explanation is not simple, so it may be worth expanding the documentation to talk about this case explicitly.

    Short answers:

    0) This does not affect queues at all.

    1) When waiting for multiple notifiers, use the Wait for Multiple Notifications primitive. It exists to handle a lot of the complexity that I'm about to talk about.

    2) There's an example program showing how to wait for ALL of a set of multiple notifiers at "<labview>\examples\general\WaitForAll.llb"

    Long answer:

    It is very easy to think that this is a bug. I was tempted to agree until I watched what was happening in the actual assembly code and I realized that the Wait for Notification nodes were correctly ignoring some notifications.

    Terminology:

    Notifier: a mailbox, a place in memory where messages are stored.

    Notifier Refnum: a reference to one of these mailboxes, used on the block diagram to access the mailbox

    Notification: a message sent to a notifier

    Node: an icon on the block diagram; in this explanation, we're interested mostly in the nodes that represent functions for operating on the notifiers

    W4N: shorthand notation for "Wait for Notification" node.

    W4MN: shorthand notation for "Wait for Multiple Notifications" node.

    The situation in the posted example (leaving out all the reentrant subVIs stuff):

    1) N notifiers are created.

    2) A loop is iterating over the list of the N, doing W4N on each one.

    3) In another section of code, Send Notification is called for each of the N in a random order.

    Expected result: The listener loop will check the first notifier. If it has a notification available already, then the node will return the posted notification. If the notifier does not yet have a notification, the node sleeps until the notification is sent. Then the loop repeats, proceeding to the second refnum, and so on. Whether the notification posted to a given notifier in the past or we wait for it to post doesn't affect any other notification. So we expect the loop to finish after all N notifications have arrived.

    Observed result: The loop hangs. The W4N node doesn't return for one of the notifiers, as if the notifier did not have a notification. We know it does have a notification, but the node doesn't seem to be catching the message... almost as if the "ignore previous?" terminal had been wired with TRUE.

    The explanation:

    Each notification -- each message -- has a sequence number associated with it. Each W4N and W4MN node has memory of the sequence number for the latest notification that it returned. In the For Loop, we're testing N notifiers. Remember that these notifiers are getting notifications in a random order. So the node always returns the message for the first notifier. But the second notifier will return only 50% of the time -- depending on whether its notification randomly arrived before or after the first notifier's. If the second notifier got its notification before the first notifier, then the node will consider the notification in the second notifier to be stale -- the notification has a sequence number prior to the sequence number recorded for this node.

    The sequence number behavior is key to how the node knows not to return values that it has already seen. A given node does not record "sequence number for each refnum". Storing such information would impose an expensive performance penalty and thus would be of negative value in most cases (the vast majority of the time the notifier nodes are used with either the same refnum every time, or with cascading refnums that are accessed in a particular order, or where lossy transmission is acceptable). In the cases where you need to hear every message from a set of notifiers, that's what the W4MN node is for -- it records the last message it heard, but when it listens again, it returns all the new messages.
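
    A rough text-language model of that bookkeeping (Java; my reconstruction of the documented behavior, not NI's actual implementation): notifications are stamped from one global sequence counter, and each wait node keeps a single "last sequence number returned" rather than one per refnum.

    ```java
    import java.util.concurrent.atomic.AtomicLong;

    public class NotifierModel {
        // One global sequence counter shared by every notifier.
        static final AtomicLong SEQUENCE = new AtomicLong();

        static class Notifier {                 // the mailbox
            long lastSeq = -1;                  // sequence number of its latest notification
            String message;

            synchronized void send(String msg) {
                message = msg;
                lastSeq = SEQUENCE.incrementAndGet();
                notifyAll();
            }
        }

        // Models one W4N node: ONE remembered sequence number across all refnums it sees.
        static class WaitNode {
            long lastReturnedSeq = -1;

            String waitFor(Notifier n, long timeoutMs) throws InterruptedException {
                synchronized (n) {
                    if (n.lastSeq <= lastReturnedSeq) {
                        n.wait(timeoutMs);      // notification looks stale; sleep for a newer one
                    }
                    if (n.lastSeq <= lastReturnedSeq) {
                        return null;            // still stale: the node "hangs" (times out here)
                    }
                    lastReturnedSeq = n.lastSeq;
                    return n.message;
                }
            }
        }

        public static void main(String[] args) throws InterruptedException {
            Notifier first = new Notifier(), second = new Notifier();
            second.send("B");                   // random order: the SECOND notifier fires first
            first.send("A");
            WaitNode node = new WaitNode();
            System.out.println(node.waitFor(first, 100));  // "A" (sequence 2)
            System.out.println(node.waitFor(second, 100)); // null: "B" is sequence 1, stale
        }
    }
    ```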

    In the particular example, the subVIs doing the posting of notifications are in a bunch of cloned copies of a reentrant VI. Each reentrant subVI waits for a single "start" notification, and then sends its "done" notification in response. The VIs are cloned using Open VI Ref and then told to run using the Run VI method. If we put a "Wait Milliseconds" prim wired with 50 after the Run VI method, the bug appears to go away. This makes the situation seem like a classic race condition bug -- slight delay changes behavior. In this case, adding the delay has a very important effect -- it makes sure that each reentrant subVI goes to sleep on the start notification IN ORDER. So that they wake up in order. So they send their done notifications in order. In other words, adding the delay removes the randomness of the notification order, and so our loop completes successfully.

    And that's why this isn't a bug. ;)

  10. Edit: You've posted code on the Notifier bug... I missed that earlier... I'll take a look.

    Jimi: I looked at your code. WOW. This is an interesting situation. I was knee deep in assembly code, ready to agree with you that it was a bug, when I realized what was going on. In a classic sense, it is not a bug, because the code is operating exactly as intended, and, indeed, exactly as documented. But the consequences in this seemingly simple case are amazing. It might even be considered a bug because we might want to change the documented behavior of the Notifier Wait to account for something like this.

    It's going to take me a while to write this up... I'll post it in the other thread later today or tomorrow.

    PS: The situation only affects notifiers, not queues.

  11. Take a look at

    <labview>\vi.lib\Utility\Wait for All Notifications.llb\Wait for All Notifications.vi

    This shows a correct implementation of waiting for an entire set of notifications to fire. I'm not sure what is wrong in your code, but maybe by code comparison you can find it. I'm still hunting around, but I just started looking at this and it may take a while to untangle.

  12. I wonder if this can be somehow related to the notifier problem reported on this forum last week.

    I'm pretty sure there's no Notifier bug. The way you guys described it over there, it sounded like correct code. The Notifiers are explicitly not guaranteed to catch every notification. If you post twice and listen once, only the last message gets heard. I didn't say that earlier because there wasn't any code posted to actually inspect, but please don't spread FUD around unless we have something to actually inspect. I close 90% of reported Queue/Notifier bugs as "not a bug" because they're actually programming errors -- someone forgot to wire a terminal or wired it with the wrong wire, or was doing something in two separate loops that wasn't protected, etc. In 8.0 I have one confirmed thread-lock involving Obtain Queue by name in a tight loop with Enqueue. In 8.2, there are no known Queue or Notifier bugs at this time.

    Edit: You've posted code on the Notifier bug... I missed that earlier... I'll take a look.

  13. I found what I believe is a bug (though I am not sure). The action "do" method is within the global. The global is called by a reference node. When I wire the action class to the reference node, the input is OK. The output, however, will always be the parent class, and I have to manually cast the wire to the specific class. If this were an ordinary VI (not a Call By Reference node), I would not need to do this manually; it happens automatically.

    The failure of the Call By Reference node to do the automatic downcasting is not a bug. Jimi and I discussed this earlier regarding the dynamic dispatch VIs, which do not automatically downcast their outputs. The ability of a subVI node to automatically downcast outputs is based on the block diagram of the VI and whether or not that block diagram preserves run-time type. Since the dynamic dispatch subVIs don't know what diagram will be invoked (and new classes that do not support the automatic downcasting could be dynamically loaded into memory at any time), we can't do the downcast. The same is true for the Call By Reference node. The VI you happen to be calling could do the downcast, but the CBR doesn't know what VI that will be until you are actually running.

  14. 2- Is it possible to probe the actual data on the input of Draw.vi in the for loop? If not, is this planned for future versions? I only get a probe that tells me the actual datatype...

    3- In reference to number 2, how can i look at the data that flows through if the probe does not work?

    Suggestions for improving this in the future are welcome. Here's how "probes with classes" support stands today:

    1) The probe is built for the wire type. Since any type of data may come down that wire at run time, the wire type is the most specific probe we can put up. And so we do.

    2) The probe does show the specific type that comes down the wire, but none of the more specific data because there's no place in the probe to show this data. We discussed all sorts of options for probes that add/remove fields when a specific type comes down the wire. This creates a very jumpy probe (bad user experience).

    3) You can create custom probes for class types. You might write a probe for a specific class hierarchy that displays a subpanel for the specific classes. This is A LOT of effort to go to for a probe, so I'd only recommend doing this for a class that you're going to release as a library for other users to use in development.

    4) Why was the current solution found to be acceptable for this release? Whenever you have child data traveling on a parent wire, the vast majority of the time you don't care what the values are in the child fields. The problem being debugged generally has to do with one of the fields of the parent class. The rest of the time, you can put debug code on your diagram that does a "To More Specific" cast of the object and then probe the output of that wire.

    I hope this helps.

  15. C++, actually. I made a matrix class system in C++ once. It was part of a larger engineering application, and the way it worked was to store 2D arrays as 1D arrays and call core Fortran LAPACK solvers (Fortran stores all arrays in 1D due to performance gains). It started on an SGI using gcc, but later moved to Windows using Watcom compilers (Fortran and C). Anyway, I can't remember ever considering a locking mechanism to protect anything within a class.

    Did you ever use the POSIX library or any thread-spawn command? If not, the reason you didn't have to have locking is that your entire program executed serially, with no parallelism whatsoever. In C++ you have to explicitly declare new threads, where to spawn them and where to merge them back together. Unless you spawned new threads manually, this isn't a valid example.
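
    The equivalent in Java, as a minimal sketch (the original example was C++/POSIX, but the point is identical): no second thread exists until you explicitly create one, and the moment you do, unprotected shared state starts to race.

    ```java
    public class ExplicitThreads {
        static int counter = 0;                 // shared, unprotected state

        public static void main(String[] args) throws InterruptedException {
            // THIS is the explicit declaration: no parallelism exists until you write it.
            Thread worker = new Thread(() -> {
                for (int i = 0; i < 1_000_000; i++) counter++;  // races with the loop below
            });
            worker.start();                     // spawn the thread here...
            for (int i = 0; i < 1_000_000; i++) counter++;
            worker.join();                      // ...and merge it back together here.
            // Usually prints less than 2,000,000: the unlocked increments collide.
            System.out.println(counter);
        }
    }
    ```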

  16. I think that is a get-modify-set pass that could easily be protected or locked in a native implementation of GOOP, just as it is locked in openGOOP. But probably more important is the fact that if by-ref in general were natively implemented in the same manner as in other languages, there would be no need for a get-modify-set pass at all.

    Exactly which languages are you talking about where the locking is not necessary? I regularly program in Java, C, and C++. In the past I've worked in Pascal, Haskell, Basic, and Lisp. In all of these languages, if the programmer spawns multiple threads, the programmer must handle all locking. These languages have no support for automatically protecting data from modification in separate threads. The closest you come is the "synchronized" keyword in Java, but even that doesn't handle all the possible woes, particularly if you're trying to string multiple "modify" steps together in a single transaction.

    Am I misunderstanding what you're referring to?
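
    A concrete Java illustration of the limitation I mean (my own example, with a hypothetical Account class): both methods are synchronized, yet the two-step check-then-withdraw is still a race unless the caller strings the steps together under one outer lock.

    ```java
    public class Account {
        private long balance = 100;

        public synchronized long getBalance()       { return balance; }
        public synchronized void withdraw(long amt) { balance -= amt; }

        // Each method above is individually thread safe, but this compound
        // "transaction" is not: two threads can both pass the check and then both
        // withdraw, driving the balance negative. The methods' own locks cannot
        // help; the caller must hold one lock across the entire sequence.
        public void safeWithdraw(long amt) {
            synchronized (this) {               // one lock held across BOTH steps
                if (getBalance() >= amt) {      // check...
                    withdraw(amt);              // ...then act, with no gap in between
                }
            }
        }
    }
    ```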

  17. I just wonder if all this is due to some patent-related issues. For instance, there is no good reason that all subVIs should have both a front panel and a block diagram. In 99% of cases, VIs are not used as virtual instruments, but as functions and subroutines. All you need for functions and subroutines is a block diagram and an icon with connectors. Also, a by-value object will not change anything of the basics. A by-ref object would be impossible to protect in any form because it already exists (openGOOP, dqGOOP, etc.), and a native implementation would require a storage that is not a VI to be efficient, and then it blows the patents. Just some wild guesses :)

    The design is as it is for purely technical reasons. I'd be ashamed if we actually changed a good design to accommodate a patent application.

  18. How do you prevent the coercion???

    :D You don't prevent the coercion! You embrace it!

    I've been meaning to post these for the last couple of days, but I haven't gotten around to it. There's a write-up that goes with them, but it isn't ready yet. But take a look.

    This is the original solution that doesn't work in a runtime engine:

    http://www.outriangle.org/jabberwocky/Fact...ingClassRef.zip

    This is the modified solution that should work in a runtime engine:

    http://www.outriangle.org/jabberwocky/Factory_UsingVIs.zip

    In both cases, open the enclosed .lvproj file and then take a look at "DEMO Factory.vi". You can explore from there.

  19. I don't see this posted; if it is, oh well. Make your coercion dots stand out by making them bright red:

    Enough folks liked this that red is now the default color in LV8.2.

    New tip that may not be known:

    Common frustration: you have a structure node on the diagram. Drop a string control inside it and start typing. At some point the string reaches the boundary of the node. If you have autogrow turned on, your structure will start stretching. If you have autogrow turned off, your string will vanish under the edge of the structure. Either way, not desirable. So you click away from the string, resize it, maybe turn on scrollbars, then start typing again.

    But... from LV7.0 forward...

    After you drop the string control, type for a while. When the string reaches the max width that you want, hit shift+Enter. Keep typing. The string will now wordwrap at the current width, and grow vertically. When you reach the maximum desired height, hit shift+Enter again. The vertical scrollbar will appear and the string will now scroll instead of growing any further.

    Sorry... the trick doesn't work on free labels. Perhaps someday...
