
The need for a lock in the get-modify-set pass



This topic has been discussed in some other threads, but only as branches off the original topic, so I am starting a new thread here, since there are some things I just don't get.

As I have understood it by now, a by-ref GOOP requires a get-modify-set (GMS) pass. I can see that, in theory, the same GMS can take place at the same time for the same instance in different places, and then it is unknown what the data will actually be set to. I can also see that the set (only) method can be used at the same time as the GMS for the same instance in some other place, and then it is also unclear what will happen. To resolve this, a lock is required so that data is not read, or at least not set, anywhere else during the GMS pass.

So far so good, but what is not clear (to me) is under which circumstances this can actually occur in a real application. Then, if it ever does occur, what will the problem be? And last but not least, will the lock mechanism actually solve the collision problem?

IMO the program flow should make certain that this never occurs in the first place, at least not for the same instance. I mean, the main reason to have a program flow is, after all, to make sure that data is set and read in the correct order. A program is not a set of random events, even though randomness can be a part of the constraints. Using "ordinary" call-by-value LV programming, it is physically impossible to set the value in different places (at least on a per-wire basis). A by-ref system makes this possible, and that is often the main reason you would want one. But when switching to a by-ref system, one of the unavoidable drawbacks is that you have to explicitly program in such a way that data is set and read in the correct places, and at the correct time, or else your program will not work as intended. This is true for simple (by-ref) get and set, and is totally independent of any locking mechanism you have.

The queue primitives are by-ref, but they solve this issue by being synchronous. They lock the data completely, effectively halting execution altogether until some data is available to read, and until the data has been read. Thus, queues are not only by-ref primitives, they are also program-flow primitives, making 100% sure that data is read and written in the correct sequence by completely taking control of program flow. Isn't it therefore also true that the only way of making a by-ref system 100% "collision safe" is to let the by-ref system take control over program flow? I think so, because any other alternative will either require some explicit control function (that takes full control), or data can be both set and read independent of the implicit diagram flow. So basically what I am saying is that queue primitives (and notifiers to some extent), used as primitives and as intended, are the only by-ref system that prevents "read and write collisions", and they do so by taking full control. Any other alternative requires that the program flow is explicitly designed so that collisions will not occur (if a collision would be a problem, that is). A simple locking mechanism for the GMS pass will therefore not solve the real problem, namely inadequate program-flow control by the programmer when using an asynchronous by-ref system, but will only hide the programming error from being seen.

The collision problem is not solved by locking the GMS pass. If there is a collision between the set and the GMS pass, that collision will be there whether or not the GMS pass is protected. For instance, suppose a call to the GMS method and a call to the set method happen simultaneously for the same instance. Two things can happen:

1. The GMS executes first; it locks out the set method until it is finished. Then the set method executes.

2. The set method executes first, then the GMS method.

The resulting data will be either the data from the set or the data from the GMS, and which one wins is random, so the collision is not resolved.
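To make the argument concrete, here is a minimal sketch (in Python rather than LabVIEW, since a text example is easier to post; the names are mine) of a locked GMS method and a locked set method racing on the same shared instance. The lock makes each operation atomic, yet the final value still depends on which one the scheduler runs first:

```python
import threading

shared = {"value": 0}
lock = threading.Lock()

def gms_double():
    # Get-modify-set, atomic thanks to the lock...
    with lock:
        shared["value"] = shared["value"] * 2

def set_to_ten():
    # ...and a plain "set", also under the lock.
    with lock:
        shared["value"] = 10

t1 = threading.Thread(target=gms_double)
t2 = threading.Thread(target=set_to_ten)
t1.start(); t2.start()
t1.join(); t2.join()

# The lock prevented a torn update, but the result is still either
# 10 (set ran last) or 20 (set ran first, then the double): the
# ordering ambiguity described above survives the lock.
print(shared["value"])
```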

If the GMS method were not protected, the result would be exactly the same:

1. Set executes first, then GMS.

2. GMS executes first, then set.

3. GMS starts, then the set (only) executes before the set inside the GMS; the result is the same as in 1.

In a queue-based by-ref system, I can see that the program can accidentally hang, or data can disappear, if the GMS pass is not protected or locked, but locking will not solve any collision problems. In a non-queue-based ref system, data cannot disappear and the program will not hang, but the collision problem is still there as well. The collision problem is a program-flow problem that exists independent of any locking, and can only be solved by using ordinary queue primitives (in normal synchronous mode instead of any type of asynchronous by-ref system) or by explicitly designing the program so that collisions cannot occur (the only way, IMO). A locking mechanism will not change anything regarding the results of a collision, although it probably will protect the program from hanging when using a queue-based asynchronous ref system.

Well, it would be nice to hear any comments on this. As I am no expert, I just do not understand what all this locking is about. :)


Here is an example that hopefully clarifies what I mean (or shows how completely I have misunderstood this :blink: ).

An object, A, has two members, a and b. In addition there is a value, c, that is used to modify those members with two methods, f and g.

A collision can only be a problem if a GMS calls the same instance, A1, the two members are dependent, and the sequence of calling f and g is not arbitrary. They are dependent if the modification consists of at least a = f(b) and/or b = g(a). For instance:

With f = b + c and g = a*c, and a = 2, b = 3 and c = 4, the results will be:

a = f = 3 + 4 = 7 and b = g = 7*4 = 28

or

b = g = 2*4 = 8 and a = f = 8 + 4 = 12

This shows that the sequence of calling them produces different results. Therefore a collision of two methods (the same method, or two or more different ones) with dependent member functions is a problem no matter what, because the sequence in which they are called will influence the result. Protecting (locking) these functions will not solve the collision problem, simply because the problem is there independent of the protection.
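The same arithmetic, written out as plain code (Python here, purely to make the order dependence explicit):

```python
# Members a and b, modified by f: a = b + c and g: b = a * c.
def run_f_then_g(a, b, c):
    a = b + c          # f first: new a = 3 + 4 = 7
    b = a * c          # g second: uses the NEW a -> b = 7 * 4 = 28
    return a, b

def run_g_then_f(a, b, c):
    b = a * c          # g first: uses the OLD a -> b = 2 * 4 = 8
    a = b + c          # f second: uses the NEW b -> a = 8 + 4 = 12
    return a, b

print(run_f_then_g(2, 3, 4))   # (7, 28)
print(run_g_then_f(2, 3, 4))   # (12, 8)
```

Both orders are "correct" executions of f and g; only the program flow can decide which result was intended.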

If the two members are independent, then there is no problem with collisions at all, because they do not influence each other. If the members are dependent but the sequence is arbitrary (which must cover only a very few odd cases), then I'm not sure what will happen, but that doesn't really matter for now.

It is not enough to protect (lock) member functions f and g. To prevent problems with collisions, the only solution is to make sure that f and g never collide; they must be sequenced to execute in the correct order by the program flow. Locking them will not ensure that the sequence is correct, but will only prevent them from executing, or reading members, simultaneously.

Therefore, locking the functions has only two possible results:

1. It has no effect at all because the members are independent.

2. It does not solve the real problem, which is arbitrary sequencing when a collision occurs, and therefore has no effect.

Locking is therefore an unnecessary construct that only bogs down GOOPs, IMHO of course :)


I'm by no means an expert in this, but here goes...

If we have a counter, c, in the attributes, together with two methods:

1. Init

Resets the counter value to 0

2. Increase counter

Increases the counter value by 1.

With locking we always know that the counter will start at zero and continue counting until we call init again, e.g.

0, 1, 2, 3, 4, 0, 1, 2, 3...

Without locking, the increase counter wouldn't wait for init to finish, and the above sequence could be

0, 1, 2, 3, 4, 0, 5, 6, 7...

This can happen if the Increase Counter does not detect the second Init.

The program flow determines how many times the IncreaseCounter will be called between the first and second Init, but the main thing is that we know that the counter will really be a counter, i.e. contain numbers in ascending order.
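A text-language sketch of this counter (Python here; the class and method names are mine) shows what the lock buys: the whole get-modify-set of each method is one atomic unit, so concurrent increments can never lose a count or interleave with init:

```python
import threading

class Counter:
    """Counter with an init and an increase action, as described above."""
    def __init__(self):
        self._lock = threading.Lock()
        self._value = 0

    def init(self):
        with self._lock:           # cannot interleave with increase
            self._value = 0

    def increase(self):
        with self._lock:           # get-modify-set as one atomic unit
            self._value += 1
            return self._value

counter = Counter()
workers = [threading.Thread(target=counter.increase) for _ in range(100)]
for w in workers:
    w.start()
for w in workers:
    w.join()
print(counter._value)  # 100: no increment was lost
```

Without the lock, a thread could read the old value, be interrupted by init, and then write back a stale incremented value, producing the 0, 1, 2, 3, 4, 0, 5, 6, 7... sequence.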

/J


It seems that you are correct about this one. A lock here will indeed solve the problem. The reason is that a collision between init and/or a multitude of increment methods does not cause any problem with the sequence of execution. The member depends only on itself, so if two increment methods collide, it is irrelevant which one executes first.

Your lock works because your member is independent of the program flow. This also means that it will work with no wires attached, and will therefore be much more efficient written as a simple functional global with an init and an increment action. The problem still exists for members that are not mutually independent.


I don't think locking is about keeping your program executing in the right order; for me it is a way to avoid race conditions: in order to modify data, you must first get the data from memory.

With a LV2 global, you can do the same thing, but then you must protect your global by having only one GMS VI, and this VI must be non-reentrant.

LabVIEW will then schedule the GMS access automatically. If you allow more than one GMS method, then the protection against race-conditions is gone.

With a get/lock method you can lock data, perform actions on the data in subsequent VIs; and then update the object when the action is done.

This allows us to create methods that update only a single attribute, leaving the others intact, resulting in an easier API for the user compared to one single GMS VI in the LV2-global case, at least when we have many attributes to handle.
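A rough sketch of that get/lock ... set idea in Python (the `RefObject` class and its `modify` context manager are my own invention, not any GOOP toolkit's API): the data is handed out under a lock, the caller performs its actions, and the object is updated when the block exits, so a method can touch one attribute and leave the rest alone:

```python
import threading
from contextlib import contextmanager

class RefObject:
    """By-reference object with a checkout-style get/lock ... set API."""
    def __init__(self, **attrs):
        self._lock = threading.Lock()
        self._data = dict(attrs)

    @contextmanager
    def modify(self):
        # "get/lock": other callers block for the duration of the
        # with-block; the "set" happens implicitly when the block exits.
        with self._lock:
            yield self._data

    def get(self, key):
        with self._lock:
            return self._data[key]

scope = RefObject(wave=None, settings=None, timestamp=None)
with scope.modify() as data:       # update one attribute, keep the others
    data["timestamp"] = "refreshed"
print(scope.get("timestamp"))      # refreshed
```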

But as I said, I'm not an expert.

/J

I don't think locking is about keeping your program executing in the right order; for me it is a way to avoid race conditions: in order to modify data, you must first get the data from memory.

Exactly, I agree 100%. But when the members are mutually dependent, locking will not fix the result of a race condition. The problem is still there despite all the locking you can think of. The only way to solve it is to make sure no race condition can ever occur. Therefore, locking is not even necessary in most cases.

With a LV2 global, you can do the same thing, but then you must protect your global by having only one GMS VI, and this VI must be non-reentrant.

LabVIEW will then schedule the GMS access automatically. If you allow more than one GMS method, then the protection against race-conditions is gone.

I don't understand this. In a functional global, all the members are internal to the global; there is no get and set, only unbundle and bundle at most (but even that isn't necessary). If the global is non-reentrant, then you can have a lot of different actions. However, you can also use reentrant globals and call them through a reference node; there will still be no get and set. I use this all the time, no problems yet.
With a get/lock method you can lock data, perform actions on the data in subsequent VIs; and then update the object when the action is done.

This allows us to create methods that update only a single attribute, leaving the others intact, resulting in an easier API for the user compared to one single GMS VI in the LV2-global case, at least when we have many attributes to handle.

Yes, I agree that it can be problematic to change and add things in a global. But with LVOOP this is actually solved; I have tried it and it works perfectly. An example can be found here.


What about this problem (which I mentioned in another thread)?

Let's assume each method is derived from a template method and retrieves the object data at the beginning and stores it at the end.

Now, if you have three methods:

- refresh (stores timestamp of refresh in object, and calls the following 2 methods)

- retrieve scope wave (reads wave from scope and stores it in object)

- retrieve scope settings (reads scope settings and stores them in object)

Would this mean that each method retrieves and then stores the data? Rather inefficient. And there needs to be an exception for the first method, because it should store the updated data before calling the others. Or read and store after calling the submethods...

What do you think about this ? What have been your solutions on GOOP (and maybe other LV OO systems) ?

Let me make this statement: A referencing OO system does not need a Retrieve-Modify-Store system where you retrieve all data and store all data.

Joris

Let me make this statement: A referencing OO system does not need a Retrieve-Modify-Store system where you retrieve all data and store all data.

But in OO someone can always create a dynamic VI that overrides your method which doesn't need locking. If a locking mechanism isn't available in the base class, then an override method that would need it cannot operate.

Let me make this statement: A referencing OO system does not need a Retrieve-Modify-Store system where you retrieve all data and store all data.

I agree on this one, if for no other reason than that functional globals outperform all the GOOPs I know of. The more I think about it, the more LV2OO seems to be the way to go. Here you can modify exactly the member you want with no need for any GMS pass. All members are protected within the global, and it is really easy, fast and unproblematic to add new actions (once everything is set up). The only concern is performance, since the dynamic-dispatching thing does not seem too efficient as of today, and because the LV2OO style adds more VI calls (which, because of the dynamic dispatch, cannot have subroutine priority). But if I have understood Aristos correctly, the performance will improve in future versions, maybe with stateless VIs?

I don't understand this. In a functional global, all the members are internal to the global; there is no get and set, only unbundle and bundle at most (but even that isn't necessary). If the global is non-reentrant, then you can have a lot of different actions. However, you can also use reentrant globals and call them through a reference node; there will still be no get and set. I use this all the time, no problems yet.

Just to confirm, you are speaking about an LV2 global that not only stores data, but contains the methods to act on it?

If this is the case then I agree that in the simple cases, i.e. numeric operations etc. this is the way to go.

On the other hand, if you only use the LV2 global as storage, then you must have get/set methods to act on the data, and then race conditions are likely to occur if no locking mechanism is used.

Using the "LV2 global with actions" approach when building instrument drivers etc. becomes way too cluttered. And you can only perform one action at a time, since actions are performed within the global itself (and the global cannot be reentrant, or must at least be encapsulated in a non-reentrant VI). Using GOOP with locking, methods can be called almost simultaneously.

Don't get me wrong, I like and use LV2 globals all the time: to pass data, as intermediate buffers, for data filtering, etc. I'm just not ready to throw locking out.

I also don't think we would have had this discussion if NI had gone the by-reference path instead, since the native implementation would take care of this.

/J


Yes, LV2 globals a la LCOD. The way I use them in a very large application is to make them reentrant and call them with a call-by-reference node. Then I can have as many as I want, all by ref.


This is a reply to the original post, not to any of the subsequent comments. I want to focus on "when would this be a problem in a real situation."

Simplest case: unique IDs. I have a database that can be accessed simultaneously by many people, such as a banking system. At many branches around the country, people may simultaneously open new accounts. Each account should have a unique account number. The counter that stores the "next available account number" needs the "get modify set" lock -- get the current value, increment it, save it back for the next request. Otherwise you may have two accounts that both get the same value, both increment, and then both set the new value back.
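That duplicate-account-number interleaving can be written out step by step (a Python sketch; the hand-interleaved lines stand in for two branch offices whose get-modify-set sequences overlap):

```python
import threading

next_id = 0
lock = threading.Lock()

# Hand-interleaved get-modify-set WITHOUT the lock: two branches both
# "get" before either "sets", so both hand out account number 1.
a = next_id        # branch A: get -> 0
b = next_id        # branch B: get -> 0
next_id = a + 1    # branch A: modify + set -> 1
next_id = b + 1    # branch B: modify + set -> 1 again (duplicate ID)

def new_account_number():
    """With the lock held across get-modify-set, IDs stay unique."""
    global next_id
    with lock:
        next_id += 1
        return next_id

ids = [new_account_number() for _ in range(5)]
print(ids)  # [2, 3, 4, 5, 6] -- strictly increasing, no duplicates
```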

Wikipedia is a site where anyone can edit an encyclopedia entry, and it does *not* use a locking mechanism. If I view a page, I might see something wrong, so I edit the page to fix it. Meanwhile, someone else notices a different part of the same article and realizes they can add some additional information. They start editing. They submit their new information, then I submit my fact correction. Their new information is lost, because mine is the final version of the article checked in, and I didn't have their changes in my edited version.

These are cases of data editing. You can have more interesting locking requirements... take this forum for example.

Suppose I post a message which asks a question: "Can I open a VI in LabVIEW?" You see the message and start composing your reply to answer the question. Before you get a chance to post your message, I edit my original post so it asks the negative of my original question: "Is it impossible to open a VI in LabVIEW?" I hit re-submit, and then you hit submit on your post. You submitted "Yes" because you were answering the original question. But the "last edited" timestamp on my post is before the timestamp on your reply, so it looks like you're answering my second question. Which then leads to a ton of posts from other people telling you you're wrong. This is a case where you and I are not modifying exactly the same entry (we both have separate posts) but we're both editing the same conversation thread.

Any time two systems may be updating the same resource -- DAQ channel settings, OS configuration, database tables, etc. -- you have a danger of these "race conditions" occurring. It's a race because if the same events had occurred in a different order -- an order that a lock would've enforced -- the results would be very different. These are the hardest bugs in all of computer science to debug because they don't necessarily reproduce... the next time you run the test, the events might occur in a different interleaved order and the results come out correct. Further, if you do start applying locks, you have to have a "mutex ranking" that says you can never try to acquire a "lower rank" lock if you've already acquired a higher-rank lock, otherwise you'll end up deadlocked. The ranking of the locks is entirely the programmer's rule of thumb to implement -- no implementation of threads/semaphores/mutexes or other locking mechanism that I've ever heard of will track this for you, since run-time checking of the lock-acquisition order would add significant performance overhead to the system.
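As a sketch of that mutex-ranking rule (the `RankedLock` class here is hypothetical, written in Python for illustration): each lock carries a rank, and a runtime assertion rejects any acquisition that would violate the ordering -- exactly the check the post notes real locking implementations omit for performance reasons:

```python
import threading

class RankedLock:
    """Lock with a rank; checks the ordering rule at acquisition time."""
    _held = threading.local()               # ranks held by this thread

    def __init__(self, rank):
        self.rank = rank
        self._lock = threading.Lock()

    def __enter__(self):
        held = getattr(self._held, "ranks", [])
        # The programmer's rule of thumb, enforced: never acquire a
        # lower-ranked lock while holding a higher-ranked one.
        assert not held or held[-1] < self.rank, "lock ordering violation"
        self._lock.acquire()
        self._held.ranks = held + [self.rank]
        return self

    def __exit__(self, *exc):
        self._held.ranks.pop()
        self._lock.release()

accounts_lock = RankedLock(rank=1)
audit_lock = RankedLock(rank=2)

with accounts_lock:       # rank 1 before rank 2: allowed everywhere,
    with audit_lock:      # so the circular wait needed for deadlock
        pass              # can never form
```

Acquiring in the opposite order (rank 2, then rank 1) trips the assertion instead of silently setting up a future deadlock.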


What kind of speed can you get using call by ref?

Have you considered encapsulating each global you need in a non-reentrant VI (given that you know how many you need in your application)?

/J

Yes LV2 globals ala' LCOD. The way i use them in a very large application i have is to make them reentrant, then i call them with a call by ref node. Then i can have as many as i want, all by ref.

I wrote a notifier-based mutex library for locking. It performs about as well as queues. Notifiers used directly can exceed queue performance by a small margin. The problem with implementing a mutex using notifiers, however, is the bug I found when I tried to test the scalability of my mutex implementation: the notifiers may get missed. The mutex system I wrote is attached below; the bug is reported here.

Download File:post-4014-1159987981.zip

Edit: LV 8.0 version; the previous one was accidentally for LV 8.2.

I also wrote another mutex library, based on occurrences. There is a bug in this library, as I didn't know at first that occurrences act like constants, i.e. an occurrence created in a loop doesn't really generate multiple occurrences but only one. The occurrence is used to notify other threads that a mutex may have become available. Since all mutexes share exactly one occurrence, freeing a single mutex causes all threads waiting on any mutex to start trying to acquire one. This doesn't cause a functional failure, i.e. everything should operate correctly, but the mutex engine may become overloaded when all the waiting threads start trying to acquire a mutex simultaneously.

This other implementation is here. A mutex is called a semaphore in this implementation.

Download File:post-4014-1159987236.zip

To fix the occurrence bug, one could perhaps create a predefined number of occurrences instead of only one. For this purpose I wrote a VI which generates an array of 256 occurrences, all of them different. It may suffice to use 256 occurrences in a loop instead of only one. It doesn't really matter if several threads try to acquire the mutex simultaneously, as long as not all the threads do.

Download File:post-4014-1159988026.vi

Edit: LV 8.0 version; the previous one was accidentally for LV 8.2.

The occurrence implementation of the mutex performs about the same as the queue and notifier mutexes as long as the total number of mutexes in the system is low. When the number exceeds several thousand, the occurrence mutex starts to perform better. This performance advantage may disappear when the bug is fixed.

I couldn't test the scalability issues when several threads try to access the same mutex simultaneously. I tried to, but I ran into the bug I mentioned above. Then I gave up, as I was only able to get about 6-8 simultaneous threads successfully accessing the mutex before a notifier was incorrectly missed.

since the native implementation would take care of this.

No, a native implementation would not take care of this. Let me lay that myth to rest right now.

If we had a native by-reference implementation, the locking would have been left completely as a burden on the programmer. Would you suggest we put an "acquire lock" and "release lock" around every "unbundle - modify - bundle" sequence? When you do an unbundle, how do we know that you'll ever do a bundle -- there are lots of calls to unbundle that never reach a bundle node, or, if they do, it's buried in a subVI somewhere (possibly a dynamically specified subVI!). There are bundle calls that never start at an unbundle. We could've implemented a by-reference model and then had the same "Acquire Semaphore" and "Release Semaphore" nodes that we have today, only specialized for locking data. Great -- now the burden on the programmers is that every time they want to work with an object, they have to obtain the correct semaphore and remember to release it -- and given how often Close Reference is forgotten, I suspect that the release would be forgotten as well. Should you have to do an Acquire around every Multiply operation? A Matrix class would have to if the implementation were by reference. Oh, LV could put it implicitly around the block diagram of Multiply, but that would mean you'd be acquiring and releasing between every operation -- very inefficient if you're trying to multiply many matrices together. So, no, LV wouldn't do that sort of locking for you. There is no pattern for acquire/release we could choose that would be anywhere near optimal for many classes.

The by-reference systems used in the GOOP Toolkit and other implementations do not solve the locking problems. They handle a few cases, but by no means do they cover the spectrum.

The by-value system is what you want for 99.9% of classes. I know a lot of you don't believe me when I've said that before. :headbang: It's still true. :ninja:


This thread is officially getting interesting (not that it wasn't already :) ).

Would a native by-reference GOOP implementation take care of locking? I think Aristos did a good job of answering this. This exact topic came up here at work when we first discovered that LVOOP was only by-value. We came to the conclusion that C++, while it does support pointers, doesn't do any "native" locking either - though we weren't 100% sure about this. Does it? This of course led to a discussion about the fact that multi-threaded applications are much more difficult in C++, and therefore less common, thus reducing the need for locking (Yay LabVIEW :thumbup: ), but I digress... Essentially, it seems that the locking feature built into all of the existing non-native GOOP frameworks is an added feature.

Here is my real question though: Do you really mean 99.9%? Right now, I think we need by-reference classes, with locking. Here's why:

We have 2 main threads - HMI and Control. The Control thread uses controller classes, calling several member functions for each class on every iteration. The HMI thread can asynchronously (i.e. in response to user interaction events) call "Set" member functions for the same class instances. We need references in order for both loops (and actually even separate tasks within each loop) to access the same class instances. We need locking to prevent race conditions between the periodic Control loop and the asynchronous HMI loop calls. Can you refute this? I would love for you to, because we would really like to use the inheritance and cleanliness that LVOOP provides. (Note: I have yet to find time to investigate any of the various by-reference LVOOP frameworks proposed here and elsewhere - do any of them support inheritance cleanly?)

Jaegen

No, a native implementation would not take care of this. Let me lay that myth to rest right now.

Thanks for the clarification.

Anyway, I asked in another thread that it would be nice to know how many users will actually use LVOOP out of the box, compared to the number of users who are going to wrap LVOOP in some kind of reference system. Have you made such a poll at NI?

Regarding by_value vs. by_reference:

If the by_value need is 99.9% (within the LabVIEW community), why hasn't anyone already created this? It should not be more difficult than creating the by_ref systems that exist today.

Personally, I think that the current implementations show that the by_ref systems are used more, but this is just my opinion.

I haven't really played around with LVOOP, because it doesn't support all platforms (RT), and I need that portability, but I do like what I've seen of inheritance etc.

/J

If the by_value need is 99.9% (within the LabVIEW community), why haven't anyone already created this? It should not be more difficult than creating the by_ref systems that exists today.
Here is my real question though: Do you really mean 99.9%? Right now, I think we need by-reference classes, with locking.

Jaegen and JFM both ask effectively the same question.

Why hasn't anyone built by-value classes before? It wasn't possible. Only fundamental changes to the LV compiler made this possible. There just wasn't any way for a user to create classes that would fork on the wire and enforce class boundaries for data access. Doing inheritance by value was right out.

As for the "99.9%" estimate -- all the fever about the need for by-ref classes with locking is about doing things that you cannot do effectively with LabVIEW today. The by-value classes are all about improving all the code you can already write with LV today. It's about creating better error-code clusters, numerics that enforce range checks, strings that maintain certain formatting rules, and arrays that guarantee uniqueness. Suppose you have a blue wire running around on your diagram, and suppose that this 32-bit integer is supposed to represent an RGB color. How do you guarantee that the value on that wire is always between 0x000000 and 0x00FFFFFF? A value of 0x01000000 can certainly travel on that wire, and someone could certainly wire that int32 to a math function that outputs a larger-than-desired number. In fact, most of the math operations probably shouldn't be legal on an int32 that is supposed to be a color -- if I have a gray 0x00999999 and I multiply by 2, I'm not going to end up with a gray that is twice as light or any other meaningful color (0x01333332). Every VI that takes one of these as an input has to error-check the input, because there is no way to enforce the limitation -- until LV classes.
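A sketch of such a "data wall" in Python (the class and method names are mine, not LabVIEW's): the raw integer is hidden, construction re-checks the 24-bit invariant, and "multiply by 2" becomes a per-channel operation that yields a meaningful color instead of an out-of-range integer:

```python
class RGBColor:
    """A 'data wall' around an int color: the value can only change
    through methods that re-check the 0x000000..0x00FFFFFF invariant."""

    def __init__(self, value):
        if not 0x000000 <= value <= 0xFFFFFF:
            raise ValueError("not a 24-bit RGB color: %#x" % value)
        self._value = value

    def lighten(self, factor):
        # Meaningful color math instead of raw integer multiplication:
        # scale each channel separately and clamp it to 0xFF.
        r = min(0xFF, int(((self._value >> 16) & 0xFF) * factor))
        g = min(0xFF, int(((self._value >> 8) & 0xFF) * factor))
        b = min(0xFF, int((self._value & 0xFF) * factor))
        return RGBColor((r << 16) | (g << 8) | b)

gray = RGBColor(0x999999)
print(hex(gray.lighten(2)._value))   # 0xffffff: each channel clamped
```

Callers never need to range-check, because no code path can produce a value like 0x01000000 inside the class boundary.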

My entire drive has been to highlight that Software Engineers in other languages have found that organizing the functions in your language around the objects of the system instead of the tasks of the system creates better software. When we apply this design style to a language IT SHOULD NOT CHANGE THE LANGUAGE NATURE. In fact, OO design can be done in LabVIEW WITHOUT LabVOOP in LV7.1. At NI Week presentations for 2002 through 2005, I and others gave presentations that talked about the importance of using typedef'd clusters and disciplining yourself to only unbundle those clusters on specific VIs. I talked about creating promises to yourself not to use certain VIs outside of certain other VIs. With LV8.2, we have OO programming. LabVOOP is a set of features that let the language and programming environment enforce these "data walls", to let you formally define the boundaries of data.

Everyone keeps looking at LabVOOP as the silver bullet to solve the problems they've been having with LabVIEW -- references. The greatest value of LabVOOP is not in solving any new problems. It's in changing the fundamental design of all the previous VIs you've ever written. As time goes forward, as the UI experience of LabVOOP improves, I think this will become more obvious. My opinion, not shared by most of National Instruments, is that eventually -- in about 5-10 years -- we will have LV users who learn "New>>Class" before they learn "New>>VI", because the VI is not the primary component of the language that a user should be thinking about if they are going to use LV for building applications. If they are going to use LV as a measurement tool -- dropping a bunch of Express VIs, creating one-off utilities for charting values from a DAQ channel -- then they will be more interested in VIs and may never create a class.

Jaegen, you asked if I can refute the need for references. No, I can't. As I said, improving references would help LV. But it is an issue for all LabVIEW data, not just LabVIEW class data. Locking references for arrays, for numerics, for refnums themselves (ha!), AND for classes. Classes certainly have some special needs when we do references (primarily the automatic deref in order to do a dynamic dispatch method call), but "by reference" is a conversation for LabVIEW as a whole, not for LabVOOP by itself.

I saw a beautiful thing yesterday -- an internal developer with a several hundred VI project he's built since August in which all the VIs are in classes. He's used the classes to create a very flexible string token parser and code generator. He tells me that the code is much easier to edit than his previous version that didn't use classes. In fact, he scrapped the old code entirely to do this rewrite. ... And not a reference in sight. :)

Link to comment

Can I sense some frustration in Aristos' last two answers... :throwpc: Perhaps NI is releasing an update for 8.2 soon and the developers are stressed under a tight schedule. After all, Aristos has been quite silent for the last week or so. :) After having used LVOOP for about a month now, I think it's in many ways a great leap forward in the LabVIEW programming paradigm. :thumbup:

Back to the subject. Software objects often represent entities similar to objects in our physical world. They have an identity, some properties and some behaviour. Many software objects, especially in automation and networked environments, also need to be reactive to impulses from the environment. I think this by-ref discussion is a bit misleading. I think we should be talking about objects with identity, objects that as individuals can communicate with the environment and react to impulses from it. The environment is then formed from other objects, physical hardware, the network, the user etc. References are just under-the-hood stuff that perhaps the user doesn't even need to know exists.

There are plenty of innovative ways to manage the concurrency of such objects. One I find very interesting is the concept of reactive objects in O'Haskell. The fundamental way O'Haskell guarantees there are no collisions is that only one method of a (by-ref) object can be active at a time. All other methods of the same object must wait until this first method exits before they get their chance to run. I'll copy a few paragraphs from the O'Haskell web page below.

Object modeling

The object model [1] offers a remarkably good strategy for decomposing a complex system state into a web of more manageable units: the state-encapsulating, identity-carrying entities called objects. Objects are abstractions of autonomous components in the real world, characterized by the shape of their internal state and the methods that define how they react when exposed to external stimuli. The object model thus inherently recognizes a defining aspect of interactive computing: systems do not terminate, they just maintain their state awaiting further interactions [2]. Not surprisingly, object modeling has become the strategy of choice for numerous programming tasks, not least those that must interface with the concrete state of the external world. For this reason, objects make a particularly good complement to the abstract, stateless ideal of functional programming.

The informal object model is naturally concurrent, due to the simple fact that real world objects communicate and ``execute their methods'' in parallel. On this informal plane, the intuition behind an object is also entirely reactive: its normal, passive state is only temporarily interrupted by active phases of method execution in response to external requests. Concurrent object-oriented languages, however, generally introduce a third form of state for an object that contradicts this intuition: the active, but indefinitely indisposed state that results when an object executes a call to a disabled method.

The view of indefinite blocking as a transparent operational property dates back to the era of batch-oriented computing, when interactivity was a term yet unheard of, and buffering operating systems had just become widely employed to relieve the programmer from the intricacies of synchronization with card-readers and line-printers. Procedure-oriented languages have followed this course ever since, by maintaining the abstraction that a program environment is essentially just a subroutine that can be expected to return a result whenever the program so demands. Selective method filtering is the object-oriented continuation of this tradition, now interpreted as ``programmers are more interested in hiding the intricacies of method-call synchronization, than preserving the intuitive responsiveness of the object model''.

Some tasks, like the standard bounded buffer, are arguably easier to implement using selective disabling and queuing of method invocations. But this help is deceptive. For many clients that are themselves servers, the risk of becoming blocked on a request may be just as bad as being forced into using polling for synchronization, especially in a distributed setting that must take partial failures into account. Moreover, what to the naive object implementor might look like a protocol for imposing an order on method invocations, is really a mechanism for reordering the invocation-sequences that have actually occurred. In other words, servers for complicated interaction protocols become disproportionately easy to write using selective filtering, at the price of making the clients extremely sensitive to temporal restrictions that may be hard to express, and virtually impossible to enforce.

Existing concurrent object-oriented languages tend to address these complications with an even higher dose of advanced language features, including path expressions [3], internal concurrency with threads and locks [4, 5], delegated method calls [6], future and express mode messages [7], secretary objects [8], explicit queue-management [6, 9], and reification/reflection [10]. O'Haskell, the language we put forward in this document, should be seen as a minimalistic reaction to this trend.

Preserving reactivity

The fundamental notion of O'Haskell is the reactive object, which unifies the object and process concepts into a single, autonomous identity-carrying entity. A reactive object is a passive, state-encapsulating server that also constitutes an implicit critical section, that is, at most one of its methods can be active at any time. The characteristic property of a reactive object is that its methods cannot be selectively disabled, nor can a method choose to wait for anything else than the termination of another method invocation. The combined result of these restrictions is that in the absence of deadlocks and infinite loops, objects are indeed just transiently active, and can be guaranteed to react to method invocations within any prescribed time quantum, just given a sufficiently fast processor. Liveness is thus an intrinsic property of a reactive object, and for that reason, we think O'Haskell represents a quite faithful implementation of the intuitive object model.

Concurrent execution is introduced in O'Haskell by invoking asynchronous methods, which let the caller continue immediately instead of waiting for a result. A blocking method call must generally be simulated by supplying a callback method to the receiver, which is the software equivalent of enclosing a stamped, self-addressed envelope within a postal request. Two additional features of O'Haskell contribute to making this convention practically feasible. Firstly, methods have the status of first-class values, which means that they can be passed as parameters, returned as results, and stored in data structures. Secondly, the specification of callbacks, their arguments, and other prescribed details of an interaction interface, can be concisely expressed using the statically safe type system of the language.

There is, however, an important subcase of the general send/reply pattern that does not require the full flexibility of a callback. If the invoked method is able to produce a reply in direct response to its current invocation, then the invoking object may safely be put on hold for this reply, since there is nothing but a fatal error condition that may keep it from being delivered. For such cases, O'Haskell offers a synchronous form of method definitions, thus making it possible to syntactically identify the value-returning methods of foreign objects that truly can be called just as if they were subroutines.

With these contrasting forms of communication in mind, we may draw an analogy to the cost-effective, stress-reducing coordination behaviour personified by a top-rated butler(!). Good butlers ask their masters to standby only if a requested task can be carried out immediately and under full personal control (or, if necessary, using only the assistance of equally trusted servants). In all other cases, a good butler should respond with something like an unobtrusive ``of course'' to assure the master that the task will be carried out without the need for further supervision, and in applicable cases, dutifully note where and how the master wants any results to be delivered. Only an extraordinarily bad butler would detain his master with a ``one moment, please'', and then naively spend the rest of the day waiting for an event (e.g. a phone call) that may never come.

Reactive objects in O'Haskell allow only ``good butlers'' to be defined. We believe that this property is vital to the construction of software that is robust even in the presence of such complicating factors as inexperienced users, constantly changing system requirements, and/or unreliable, long-latency networks like the Internet. As an additional bonus, reactive objects are simple enough to allow both a straightforward formal definition and an efficient implementation. Still, the model of reactive objects is sufficiently abstract to make concurrency an intuitive and uncomplicated ingredient in mainstream programming, something which cannot be said about many languages which, like Java, offer concurrency only as an optional, low-level feature.

For more detailed information, see the O'Haskell homepage.
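To make the quoted ideas concrete, here is a hedged C++ approximation of two of them: each object as an implicit critical section (at most one method of an instance active at a time), and a value-returning request made asynchronous by handing the object a callback instead of blocking for a reply. All names here are mine; this is of course a sketch in conventional C++, not how O'Haskell actually implements reactive objects.

```cpp
#include <functional>
#include <mutex>

class ReactiveCounter {
public:
    void increment() {
        // Every method locks this object's own mutex, so the object
        // behaves as an implicit critical section.
        std::lock_guard<std::mutex> guard(mtx_);
        ++count_;
    }

    // Asynchronous-style query: the caller continues immediately; the
    // reply is delivered through the supplied callback (the "stamped,
    // self-addressed envelope" of the quoted text).
    void requestValue(std::function<void(int)> onReply) {
        int snapshot;
        {
            std::lock_guard<std::mutex> guard(mtx_);
            snapshot = count_;
        }
        // A real system would queue this on another thread; here we
        // simply invoke the callback directly.
        onReply(snapshot);
    }

private:
    std::mutex mtx_;
    int count_ = 0;
};
```

Note that no method of this object can block indefinitely waiting for an event, which is exactly the "good butler" property the quoted text argues for.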

The main point I'm trying to make is that there really is a need for objects that have an identity, and that semaphores and locking are not the only way to implement this. In that sense I think Aristos is wrong here. There may be a need for a by-reference system for the whole of LabVIEW, but there is a different need for objects with identity in LVOOP, and the way to implement these two things doesn't need to be the same. Objects are a different kind of entity and therefore allow more sophisticated concurrency control methods than can be worked out for pure data.

Link to comment

I must say I support JFM and Jaegen. When LVOOP was first mentioned I hoped it would be a native implementation close to what Endevo made with their GOOP implementation. Using references feels natural when you are dealing with objects, not only in general, but because that's what we have been doing for a long time in LV: we open a reference to a VI, control or other object and use invoke nodes to run methods... Likewise we would like to create our own objects, open a reference to each one, and get the feeling that that reference points to that object no matter where we might refer to it(!).

Except for the really basic users, people already need to learn how to break out of the dataflow paradigm very early when they use LabVIEW. It does not take long before they require more than one loop and a way to share data between them, and voila -- the wire does not do the job anymore. The user then typically starts to learn the use of locals and globals and easily gets into trouble. Later he needs to create software that breaks out of the "Virtual Instrument" paradigm, because users of modern software are not used to being limited to just a couple of windows -- they want to be able to pull up as many trend windows as they like, for example, not be limited as if they were looking at a physical box... This introduces the need to create multiple instances of VIs and make them run in parallel.

NI continues to sell LabVIEW as something to use in the lab to get the data from DAQ cards on screen, maybe filter it and write it to disk -- all in the development environment of course, not a built application. On the other hand, LabVIEW is becoming more and more advanced and is used to create stuff that does not belong in a lab, nor can it be described as a virtual instrument (only very simple programs use VIs as VIs; in 99.9% of cases they are functions). In the case of LVOOP I think the advanced users of LabVIEW hoped that NI would aim to please them and not think too much about new users, simply because new users would never understand it anyway, nor have much use for it. Ironically, it is much easier to understand object orientation if it is by reference... if you really think of it as an object, it is very difficult to wrap your head around the fact that you are creating a new instance of the object if you branch a wire...

Link to comment
....if you really think of it as an object it is very difficult to wrap your head around the fact that you are creating a new instance of the object if you branch a wire...

Maybe instead of calling it object oriented, we could call it object disoriented :ninja: ;) Joking aside, with LVOOP we can in fact make a whole new and different set of reference types. Instead of just being a dumb typedef'd datalog reference, the reference itself can now carry a whole lot of information.
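As a rough illustration of what such a "smarter" reference type might carry, here is a hedged C++ sketch: a reference that holds not just a bare refnum but also a type name and a validity flag. Everything here is invented for illustration; it is not any actual LabVIEW or GOOP API.

```cpp
#include <string>

// Hypothetical "smart" object reference: the reference itself carries
// extra information beyond the raw refnum.
class ObjectRef {
public:
    ObjectRef(int refnum, std::string typeName)
        : refnum_(refnum), typeName_(std::move(typeName)), valid_(true) {}

    int refnum() const { return refnum_; }
    const std::string& typeName() const { return typeName_; }
    bool isValid() const { return valid_; }

    // Invalidate the reference instead of leaving it dangling, so
    // later uses can be detected.
    void close() { valid_ = false; }

private:
    int refnum_;
    std::string typeName_;
    bool valid_;
};
```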

Link to comment
if you really think of it as an object it is very difficult to wrap your head around the fact that you are creating a new instance of the object if you branch a wire...

This depends on what you think of as "an object." I contend you're very used to doing this -- you just don't recognize it.

Waveform, timestamp, matrix -- these are object types. They are complex data types with well defined operations for manipulating the data. They may aggregate many pieces of data together, but they are exposed to the world as a single coherent data type. NI isn't going to be replacing these types with LVClasses any time soon -- we want LabVOOP to be seasoned before such core components become dependent upon it. But if we were developing them anew today, they would be classes. Every cluster you create is an object definition -- with all of its data public and hanging out for any VI to modify. These are the places you should be looking to use LVClasses. And you don't want references for these.
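The cluster-versus-class distinction above can be sketched in C++ terms (the names and members here are invented for illustration): a cluster is like a struct whose fields anyone can modify, while an LV class hides its data behind a "data wall" and exposes only defined, by-value operations.

```cpp
// Like a typedef'd cluster: all data public, any VI can "unbundle" it.
struct PointCluster {
    double x;
    double y;
};

// Like an LV class: private data, manipulated only through its methods.
class PointClass {
public:
    PointClass(double x, double y) : x_(x), y_(y) {}

    // By-value operation: returns a new, translated copy rather than
    // mutating shared state.
    PointClass translated(double dx, double dy) const {
        return PointClass(x_ + dx, y_ + dy);
    }

    double x() const { return x_; }
    double y() const { return y_; }

private:
    double x_;  // only this class's own methods may touch these
    double y_;
};
```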

I tend to think of integers as objects because I've been working that way for 10+ years. You don't add two numbers together. One integer object uses another integer object to "add" itself. Forking a wire is an object copying itself. An array is an object. Consider this C++ code:

#include <vector>

typedef std::vector<int> IVec;

IVec DoStuff(const IVec &x, const IVec &y) {
    IVec z(x);                              // copy-construct z from x
    z.insert(z.end(), y.begin(), y.end());  // append y's elements to z
    return z;
}

int main() {
    IVec a, b;
    a.push_back(10);
    b = a;                    // by-value: duplicates a's contents into b
    IVec c = DoStuff(a, b);
    return 0;
}

Consider the line "b = a;". This line duplicates the contents of a into b. This is by-value syntax. The syntax is there and valuable in C++. Java, on the other hand, doesn't have this ability. That language is exclusively by reference for objects. If you write "b = a;" in Java, you've just said "b and a are the same vector." From that point on, "b.push_back(20);" would have the same effect as "a.push_back(20);".
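Both semantics can actually be demonstrated side by side in C++ (this comparison is mine, added for illustration): assignment copies the vector, while a C++ reference aliases it, which is roughly what Java's "b = a;" means for objects.

```cpp
#include <vector>

using IVec = std::vector<int>;

// By-value: b starts as a duplicate of a, so growing b leaves a alone.
inline bool copyIsIndependent() {
    IVec a{10};
    IVec b = a;        // duplicates a's contents
    b.push_back(20);   // does not affect a
    return a.size() == 1 && b.size() == 2;
}

// By-reference: r IS a, so growing r grows a -- the Java-like behaviour.
inline bool referenceAliases() {
    IVec a{10};
    IVec& r = a;       // r and a are the same vector
    r.push_back(20);   // a sees the change
    return a.size() == 2;
}
```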

The by-value syntax is just as meaningful for objects. In fact, in many cases, it is more meaningful. But you have to get to the point where you're not just looking at system resources as objects but at data itself as an object. Making single specific instances of that data that reflect specific system resources is a separate issue, but it is not the fundamental aspect of dataflow encapsulation.

Link to comment

I understand what you mean, Aristos, and it actually made me realise that perhaps the whole problem is that the developers were too used to object orientation from text-based languages when they implemented LVOOP ;-)

As you say, you have been working in C++ for a decade and are used to considering even integers as objects... and that's perfectly natural, because that is what you learn to do when you program in C++; similar to the "the wire is the variable" concept, the by-value implementation of LVOOP may seem natural. To a G programmer an integer is not an object, it is just a variable.

Let's say that you want to teach someone what object orientation is and how he should design an object-oriented program... how do you do it? One of the core concepts is that he should try not to think of functions, but of real-life objects. If he wants to create a car simulator, he should think of what kind of objects a car is made of and create the equivalent in code: he would need to make an engine object with its attributes and methods, consisting of smaller objects with their attributes and methods, etc. Just as with a real car, he will need to construct one if he needs a car.

After constructing a car, he has in his mind a reference to the result: "my Toyota", and if he wants someone to do anything with his car he tells them to do this and that with "my Toyota". Even though he tells two different people at two different locations to do something to his Toyota, they will always work on the same car: they get the reference to a specific car and work on that same car. This is a very simple thing to understand. In a text-based language we can write "myToyota" to refer to the object we created; in LV he will wire the reference where he needs it. Now consider the case where this newcomer has created a car class and drags a car object onto the diagram. He may want two loops to do something with that car, so he wires the car to two different places (just like he does after creating a queue, e.g. ... he can see an image of the queue... people standing in line... in his mind, the wire refers to that queue) - but wait, what are you saying - the branching of the wire created two cars??? Which one is "my Toyota" and what is the second car... What have you done to my Toyota!!!??? :-)
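The "my Toyota" intuition maps directly onto reference semantics in text-based languages. As a hedged sketch (the Car class and its members are invented here), wrapping an object in a shared handle gives it exactly this kind of identity: two holders of the handle always work on the same car.

```cpp
#include <memory>
#include <string>

class Car {
public:
    explicit Car(std::string name) : name_(std::move(name)) {}
    void drive(int km) { odometer_ += km; }
    int odometer() const { return odometer_; }
    const std::string& name() const { return name_; }
private:
    std::string name_;
    int odometer_ = 0;
};

// A "reference" to one specific car: copying the handle does NOT copy
// the car, so every holder refers to the same instance.
using CarRef = std::shared_ptr<Car>;
```

Handing `CarRef` to two different loops is the text-language equivalent of branching a wire that still points at the one and only "my Toyota".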

Of course, this is how e.g. a user of GOOP would feel, and it may just be a question of time to get used to the by-value concept instead... but I still think he will miss the by-reference possibilities frequently. The by-reference example that ships with LV also indicates that you guys have thought about that.

Link to comment

I think much of this discussion is about definitions. Most of the time when people disagree strongly about something, they really disagree about definitions. I think NI made a bad (marketing) choice in calling LabVOOP object-oriented programming, as Object Oriented Programming really means a somewhat different thing. LabVOOP is not object-oriented programming in the traditional sense, since it cannot deal with the concept of real-world objects, or "My Toyota" as Mads put it. What we are dealing with in LabVOOP is intelligent dataflow rather than OOP. OOP is about solving software problems with real-world-alike objects, and LabVOOP is about something else. Why not call it intelligent dataflow?

This intelligent dataflow is a very nice paradigm, but it is not really OOP. If we had a different name for it, developers wouldn't argue that it is missing something but would embrace it as a great new paradigm addition to LabVIEW. LabVOOP can deal with many programming problems, but these are not the same set of problems that OOP deals with, although the two sets somewhat overlap. Instead of continuing to talk about what is wrong with LabVOOP, we should start talking about how we can improve LabVIEW to allow a programming paradigm in which encapsulated objects have identity, such as "My Toyota", and how these objects can interact and react to the messages of other objects.

Link to comment
