The need for a lock in the get-modify-set pass



A funny analogy came into my nerd mind. You know those sci-fi movies with parallel universes: the parallel universes all share the same history, but from some point on they separate, and their futures are alike yet still different. No universe knows that the parallel worlds really exist; each one thinks it is unique. LVOOP is like such a sci-fi movie. Each object knows nothing about the other objects and has no way of knowing that it is not the original My Toyota, but one of the parallel yet different alternate futures of the original My Toyota. EDIT: So perhaps LabVOOP should be thought of with the analogy of real-world objects in a world branching into parallel universes, just as OOP is traditionally thought of with the analogy of real-world objects.

I just wonder if all this is due to some patent-related issues. For instance, there is no good reason that all subVIs should have both a front panel and a block diagram. In 99% of cases, VIs are not used as virtual instruments, but as functions and subroutines. All you need for functions and subroutines is a block diagram and an icon with connectors. Also, a by-value object does not change any of the basics. A by-reference object would be impossible to protect in any form because it already exists (openGOOP, dqGOOP, etc.), and a native implementation would require a storage that is not a VI to be efficient, and that blows the patents. Just some wild guesses :)

I just wonder if all this is due to some patent-related issues. For instance, there is no good reason that all subVIs should have both a front panel and a block diagram. In 99% of cases, VIs are not used as virtual instruments, but as functions and subroutines. All you need for functions and subroutines is a block diagram and an icon with connectors. Also, a by-value object does not change any of the basics. A by-reference object would be impossible to protect in any form because it already exists (openGOOP, dqGOOP, etc.), and a native implementation would require a storage that is not a VI to be efficient, and that blows the patents. Just some wild guesses :)

The design is as it is for purely technical reasons. I'd be ashamed if we actually changed a good design to accommodate a patent application.

Is the C++ code that I posted OO? If so, my definition stands. QED.

I tried to answer this comment of Aristos', but as I was editing this message, Aristos removed the comment. This forum definitely needs a topic-locking mechanism, or alternatively we should have multiple instances of each topic, the LabVOOP way, so that everybody can edit their own topic instance :D:rolleyes:

There are entities in programming that don't have an identity. Integers belong to this class of entities; you cannot distinguish two instances of the number 7 from each other. These entities can be represented using LabVOOP objects. There are, however, many more entities that do have an identity, such as My Toyota. These entities cannot be represented using LabVOOP objects. I think an OOP language is a language that is capable of representing practically any object using objects of the language. If this is considered a necessary requirement for an OOP language, then LabVOOP is not an OOP language.

No, a native implementation would not take care of this. Let me lay that myth to rest right now.

If we had a native by-reference implementation, the locking would have been left completely as a burden on the programmer. Would you suggest we put an "acquire lock" and "release lock" around every "unbundle - modify - bundle" sequence? When you do an unbundle, how do we know that you'll ever do a bundle? There are lots of calls to unbundle that never reach a bundle node, or, if they do, it's buried in a subVI somewhere (possibly a dynamically specified subVI!). There are bundle calls that never start at an unbundle.

We could've implemented a by-reference model and then had the same "Acquire Semaphore" and "Release Semaphore" nodes that we have today, only specialized for locking data. Great -- now the burden on the programmers is that every time they want to work with an object, they have to obtain the correct semaphore and remember to release it -- and given how often Close Reference is forgotten, I suspect that the release would be forgotten as well.

Should you have to do an Acquire around every Multiply operation? A Matrix class would have to if the implementation were by reference. Oh, LV could put it implicitly around the block diagram of Multiply, but that would mean you'd be acquiring and releasing between every operation -- very inefficient if you're trying to multiply many matrices together. So, no, LV wouldn't do that sort of locking for you. There is no pattern for acquire/release we could choose that would be anywhere near optimal for many classes.
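To make the Matrix example concrete, here is a minimal C++ sketch (hypothetical code, not LabVIEW and not NI's implementation; the class and method names are made up) of a by-reference style matrix where each operation grabs a lock. Chaining several multiplies then pays the acquire/release cost on every step, even when only one thread ever touches the data.

#include <cstddef>
#include <mutex>
#include <vector>

class Matrix {
public:
    Matrix(std::size_t rows, std::size_t cols)
        : rows_(rows), cols_(cols), data_(rows * cols, 0.0) {}

    // Copies get a fresh mutex; std::mutex itself cannot be copied.
    Matrix(const Matrix& other)
        : rows_(other.rows_), cols_(other.cols_), data_(other.data_) {}

    // Every multiply acquires and releases the lock "just in case" another
    // thread holds the same reference. A real by-ref design would also have
    // to lock 'other' and worry about lock ordering; omitted for brevity.
    Matrix multiply(const Matrix& other) const {
        std::lock_guard<std::mutex> guard(mutex_);
        Matrix result(rows_, other.cols_);
        for (std::size_t i = 0; i < rows_; ++i)
            for (std::size_t j = 0; j < other.cols_; ++j) {
                double sum = 0.0;
                for (std::size_t k = 0; k < cols_; ++k)
                    sum += data_[i * cols_ + k] * other.data_[k * other.cols_ + j];
                result.data_[i * other.cols_ + j] = sum;
            }
        return result;
    }

private:
    std::size_t rows_, cols_;
    std::vector<double> data_;
    mutable std::mutex mutex_;
};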

The by-reference systems used in the GOOP Toolkit and other implementations do not solve the locking problems. They handle a few cases, but by no means do they cover the spectrum.

The by-value system is what you want for 99.9% of classes. I know a lot of you don't believe me when I've said that before. :headbang: It's still true. :ninja:

I think the get-modify-set pass could easily be protected or locked in a native implementation of GOOP, just as it is locked in openGOOP. But probably more important is the fact that if by-ref in general were natively implemented in the same manner as in other languages, there would be no need for a get-modify-set pass at all.


Hehe, parallel universes indeed - wrap your head around that, newbies! :unsure:

As bsvingen so elegantly put it, LVOOP makes me feel object disoriented :o

Mads

A funny analogy came into my nerd mind. You know those sci-fi movies with parallel universes: the parallel universes all share the same history, but from some point on they separate, and their futures are alike yet still different. No universe knows that the parallel worlds really exist; each one thinks it is unique. LVOOP is like such a sci-fi movie. Each object knows nothing about the other objects and has no way of knowing that it is not the original My Toyota, but one of the parallel yet different alternate futures of the original My Toyota. EDIT: So perhaps LabVOOP should be thought of with the analogy of real-world objects in a world branching into parallel universes, just as OOP is traditionally thought of with the analogy of real-world objects.
I think the get-modify-set pass could easily be protected or locked in a native implementation of GOOP, just as it is locked in openGOOP. But probably more important is the fact that if by-ref in general were natively implemented in the same manner as in other languages, there would be no need for a get-modify-set pass at all.

Yes, that's what I meant as well: immediate actions on the referenced object. Why retrieve and store it in the first place? That was only necessary for GOOP, but not for NI! The GOOP-ers could not access the stuff under the bonnet of LabVIEW, but NI can.

The differences between the currently implemented static objects and referenced objects are very small. The diagram of a method would look the same. A big plus would be that no object-set and object-get methods, or wrappers, would be required. That's all just extra code that we need to carry around every time, which conflicts with one of the reasons to go for OO: preventing code duplication.

Having a referencing wire would allow for constructors and destructors. I completely agree with Jimi that the current system is not true OO. The state of the object should always represent something in the real world, and this is currently not enforceable. With constructors/destructors, if I destroyed an object of class IniFile, I could flush the file data and close the file nicely. Or for a driver you could free resources (e.g. close a port). And on creation the state should be known as well. Generally speaking, you can keep things consistent. That is impossible now; the programmer can screw things up easily. If he branches a Keithley2000 wire and performs different operations on each object, the objects do not know of each other and the state of the multimeter is incorrectly known. The language construction should prevent this from being possible.
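A rough C++ analogy of the IniFile behaviour described here (a hypothetical class with made-up names; the INI parsing is deliberately naive): the constructor ties the object's state to a real file, and the destructor flushes and closes it exactly once.

#include <fstream>
#include <map>
#include <string>

class IniFile {
public:
    explicit IniFile(const std::string& path) : path_(path) {
        // Constructor: the object only comes into existence with the file
        // loaded, so its state always reflects something real.
        std::ifstream in(path_);
        std::string line;
        while (std::getline(in, line)) {
            const auto pos = line.find('=');
            if (pos != std::string::npos)
                values_[line.substr(0, pos)] = line.substr(pos + 1);
        }
    }

    // Destructor: flush the data back and close the file nicely.
    ~IniFile() {
        std::ofstream out(path_);
        for (const auto& kv : values_)
            out << kv.first << '=' << kv.second << '\n';
    }

    void set(const std::string& key, const std::string& value) {
        values_[key] = value;
    }

private:
    std::string path_;
    std::map<std::string, std::string> values_;
};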

I hope NI changes its mind and changes the implementation to ref-wires in LV9. There would be no real problems with that; programs written in 8.20 would still work. On conversion to 9.0, wire branches would be a problem, but it could be solved by inserting a copy constructor at that point. The class would need a copy constructor for that, but it could be added automatically on conversion. So the door is still open.

Joris

Yes, that's what I meant as well: immediate actions on the referenced object. Why retrieve and store it in the first place? That was only necessary for GOOP, but not for NI! The GOOP-ers could not access the stuff under the bonnet of LabVIEW, but NI can.

The differences between the currently implemented static objects and referenced objects are very small. The diagram of a method would look the same. A big plus would be that no object-set and object-get methods, or wrappers, would be required. That's all just extra code that we need to carry around every time, which conflicts with one of the reasons to go for OO: preventing code duplication.

I'm already regretting adding this, as I can see the number of email alerts I'll get after this, and I'm far from being an OO expert, and I don't have 8.2, but here goes anyway:

If I understand Joris correctly, then I would reiterate his point in different wording - you define properties for a class and then, instead of needing the GMS pass, you simply wire it into a property node and select the property that you want, just as you would do for any object in the VI server hierarchy. NI should definitely be able to do this.

As an addition to this, if I remember correctly, XControls do allow you to create user-accessible properties. Has anyone considered using XControls to do GOOP?

I only came up with this now, but here's what I thought: the control only serves as a means for holding the properties; you place the control in a VIT and create an instance for each object (since I don't think you can have XControls in an array, or, if you could, they would probably have identical properties).

Then, you use the reference (which should hopefully be strictly typed) to wire into a property node. I wonder if this can be done and how it would perform (I don't have 8.0 either)?

Maybe I should try this on my 8.2 eval at some point, just to see if it can be done.

As an addition to this, if I remember correctly, XControls do allow you to create user-accessible properties. Has anyone considered using XControls to do GOOP?

I actually looked into this in LV8, since this means that new methods become public the very instant they are created. And you also get good protection for your data members.

The reason I stopped working with this is that an XControl also inherits all the general control attributes, and the access methods are therefore cluttered with unnecessary information.

Which also meant that I had to create wrappers to get a clean user interface.

The usage of "property node" and "invoke node" is more in line with my ideas of a native OO implementation in LabVIEW, so maybe it is time to give it another shot in LV8.20.

But then we might lose all the goodies in LVOOP (inheritance/dispatch, etc.)?

Yes, that's what I meant as well: immediate actions on the referenced object. Why retrieve and store it in the first place? That was only necessary for GOOP, but not for NI! The GOOP-ers could not access the stuff under the bonnet of LabVIEW, but NI can.

Good point!

/J

I think the get-modify-set pass could easily be protected or locked in a native implementation of GOOP, just as it is locked in openGOOP. But probably more important is the fact that if by-ref in general were natively implemented in the same manner as in other languages, there would be no need for a get-modify-set pass at all.

Exactly which languages are you talking about where the locking is not necessary? I regularly program in Java, C, and C++. In the past I've worked in Pascal, Haskell, Basic, and Lisp. In all of these languages, if the programmer spawns multiple threads, the programmer must handle all locking. These languages have no support for automatically protecting data from modification in separate threads. The closest you come is the "synchronized" keyword in Java, but even that doesn't handle all the possible woes, particularly if you're trying to string multiple "modify" steps together into a single transaction.

Am I misunderstanding what you're referring to?
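For what it's worth, the "string multiple modify steps together" problem looks the same in C++ as with Java's synchronized. A small hypothetical sketch (the Counter class and increment function are made-up names): every accessor locks correctly on its own, yet the read-modify-write as a whole is still a race.

#include <mutex>

class Counter {
public:
    int get() const {
        std::lock_guard<std::mutex> guard(mutex_);   // each call is protected...
        return value_;
    }
    void set(int v) {
        std::lock_guard<std::mutex> guard(mutex_);
        value_ = v;
    }
private:
    int value_ = 0;
    mutable std::mutex mutex_;
};

void increment(Counter& c) {
    // ...but the get-modify-set sequence as a whole is not: another thread
    // can run between get() and set(), and one increment is silently lost.
    int v = c.get();
    c.set(v + 1);
}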

Exactly which languages are you talking about where the locking is not necessary? I regularly program in Java, C, and C++. In the past I've worked in Pascal, Haskell, Basic, and Lisp. In all of these languages, if the programmer spawns multiple threads, the programmer must handle all locking. These languages have no support for automatically protecting data from modification in separate threads. The closest you come is the "synchronized" keyword in Java, but even that doesn't handle all the possible woes, particularly if you're trying to string multiple "modify" steps together into a single transaction.

Am I misunderstanding what you're referring to?

C++, actually. I made a matrix class system in C++ once. It was part of a larger engineering application, and the way it worked was to store 2D arrays as 1D arrays and call core Fortran LAPACK solvers (Fortran stores all arrays in 1D for performance reasons). It started on an SGI using gcc, but later moved to Windows using Watcom compilers (Fortran and C). Anyway, I can't remember ever considering a locking mechanism to protect anything within a class.
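As an illustration of the "store 2D arrays as 1D arrays" point (a sketch only, with made-up names; the actual LAPACK call is left out): Fortran routines expect one contiguous, column-major buffer, which a thin C++ wrapper can provide without any locking at all.

#include <cstddef>
#include <vector>

class Matrix2D {
public:
    Matrix2D(std::size_t rows, std::size_t cols)
        : rows_(rows), data_(rows * cols, 0.0) {}

    // Element (i, j) in column-major order, the layout Fortran expects:
    // index = i + j * rows.
    double& at(std::size_t i, std::size_t j) { return data_[i + j * rows_]; }

    // Raw pointer to the contiguous buffer, suitable for handing to a
    // Fortran LAPACK solver.
    double* raw() { return data_.data(); }

private:
    std::size_t rows_;
    std::vector<double> data_;
};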

I think LAPACK, BLAS and derived routines can be found in parallelized versions now, using multiple threads. I still don't see any need for locking as long as you don't access the same memory, since you don't have a get-modify-set pass but operate directly on the individual data. I mean, what is the point of having multithreaded/multiprocessor applications if you implement a lock that effectively serializes the execution, like it is done in the available GOOPs? Then you would be much better off, with much better performance, using a single thread/processor. The only consideration is the relatively few inter-thread calls, but they too can be handled by ensuring that all inter-thread routines write to separate memory locations.

The by-value approach of LabVIEW ensures that we always operate on different data in parallel loops, and therefore multithreading is a relatively simple thing to implement, since the memory-collision considerations are solved, or more precisely they are irrelevant. The same thing could, however, easily be implemented in C++ when using only call by value, but this would result in two major problems:

1. Loss of performance due to constant memory allocation/deallocation.

2. No way of effectively handling inter-thread (read: inter-loop) calls.

As I see it, these are also the two major problematic issues of LabVIEW, and particularly number 2 is something that puzzles all newcomers to LabVIEW after a week or two. Problem 1 can only be solved by using by-ref. Problem 2 can be solved in many ways depending on what the program actually does, and locking can be one solution (or at least part of the solution).

But, as I said, I'm no expert. Maybe I have too simplistic a view on this.

But, as I said, I'm no expert. Maybe I have too simplistic a view on this.

I think implementing by-ref objects in LabVIEW without simultaneously implementing a decent synchronized-access mechanism would be irresponsible of NI. You are right that synchronized access is not needed in producer-consumer patterns, but in most cases of multithreaded access to a shared resource some sort of synchronization mechanism is needed, or at least transient data corruption eventually results. If such a synchronization mechanism is not implemented simultaneously with by-ref objects, developers tend to start using by-ref objects in an unsafe manner. In the best case the user experience is such that it guides, but doesn't force, developers to use synchronization when accessing shared resources. Also, synchronization doesn't have to perform slowly: for example, transaction-based synchronization mechanisms do not suffer from the weaknesses of mutex-based synchronization.
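One concrete example of what "transaction based" can mean, sketched in C++ (this is only an illustration of the general idea, not a proposal for how NI should implement it; the names are made up): instead of holding a mutex, a thread builds the new value from a snapshot and commits it with compare-and-swap, retrying if someone else got there first.

#include <atomic>

std::atomic<int> shared_value{0};

void add(int delta) {
    int expected = shared_value.load();
    // Commit only if nobody changed the value since we read it;
    // on failure 'expected' is refreshed and the transaction retries.
    while (!shared_value.compare_exchange_weak(expected, expected + delta)) {
        // retry
    }
}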

I started a new thread about how to implement synchronized access to shared objects in LabVIEW. I don't think that discussion fits under the topic of this thread.

C++, actually. I made a matrix class system in C++ once. It was part of a larger engineering application, and the way it worked was to store 2D arrays as 1D arrays and call core Fortran LAPACK solvers (Fortran stores all arrays in 1D for performance reasons). It started on an SGI using gcc, but later moved to Windows using Watcom compilers (Fortran and C). Anyway, I can't remember ever considering a locking mechanism to protect anything within a class.

Did you ever use the POSIX threads library or any thread-spawning command? If not, the reason you didn't have to have locking is that your entire program executed serially, with no parallelism whatsoever. In C++ you have to explicitly declare new threads: where to spawn them and where to merge them back together. Unless you spawned new threads manually, this isn't a valid example.


First, for the locking mechanism:

Well, it would be nice to hear any comments on this. As I am no expert, I just do not understand what all this locking is about. :)

I believe that Aristos cleared up that issue: http://forums.lavag.org/index.php?showtopi...amp;#entry17439

And now some OO (ObjectionsOverload):

Using references feels natural when you are dealing with objects, not only in general, but because that's what we have been doing for a long time in LV...

Did it never occur to you that you were using the wrong tool to do OO, and in the process making things more complicated than they would be with an OO tool?

The by-value syntax is just as meaningful for objects. In fact, in many cases, it is more meaningful. But you have to get to the point where you're not just looking at system resources as objects but data itself as an object. Making single specific instances of that data that reflects specific system resources is a separate issue, but is not the fundamental aspect of dataflow encapsulation.

Could data just be seen as data?

Let's say that you want to teach someone what object orientation is and how he should design an object-oriented program... how do you do it? One of the core concepts is that he should try not to think of functions, but of real-life objects. If he wants to create a car simulator he should think of what kind of objects a car is made of and create the equivalent in code; he would need to make an engine object with its attributes and methods, consisting of smaller objects with their attributes and methods, etc. etc...

Is a motor really an object, or a function that transforms air and gas inputs into mechanical force output?

All you need for functions and subroutines is a block diagram and an icon with connectors.

And then I'm back to the old text-language methods, where I would have to code a UI to test my function... Also take note that when built into an exe, the front panels that are not needed are removed...

Having a referencing wire would allow for constructors and destructors. I completely agree with Jimi that the current system is not true OO. The state of the object should always represent something in the real world, and this is currently not enforceable. With constructors/destructors, if I destroyed an object of class IniFile, I could flush the file data and close the file nicely.

I strongly believe that OO is not a representation of the "real" world, but more a representation of the "ideal" world that some people think they live in, or would like to live in.

Example: In the OO world you create an object called "plastic bag"; you use it, then destroy it. It's gone, evaporated, no traces, bye-bye...

In the world I live in, you create an object called "plastic bag"; you use it, then destroy it. It pollutes...

Some people see objects and believe that they exist and generate data; others believe that the data exists and the objects are a particular state of the data... If we live in an object world, who decides where the boundaries are?

This discussion reminds me of all the conflicts that arose when an old theory was challenged by a new one that enabled a simpler approach and a larger domain of applicability in transforming our surroundings. In particular, the conflict of the flat world vs. the spherical world that went on a couple of centuries ago.

And I found this website that could give some encouragement to the OO advocates in continuing their mission of propagating the "everything is an object" concept. Here is an excerpt from their mission statement:

http://www.alaska.net/~clund/e_djublonskop...arthsociety.htm

But why? Why do we say the Earth is flat, when the vast majority says otherwise? Because we know the truth.


In the quarrelsome or humorous corner today, are we, Jacemdom?

Is a motor really an object, or a function that transforms air and gas inputs into mechanical force output?

It is an object if you choose to view it with an object-oriented mind; that's the whole point.

It is an object if you choose to view it with an object-oriented mind; that's the whole point.

I don't see objects when I model (software or any other modeling)... I see data and functions that act upon it... I don't unite them as one to create objects... I see them separated...

LabVIEW Object-Oriented Programming: The Decisions Behind the Design

For those who choose object designs,

we want wire and node

to morph naturally

into class and method.

I created a similar relation about 5 years ago, to be able to differentiate the LV concepts from the OO ones. It went:

In LabVIEW there are containers (which hold data: wires, globals, queues, etc.) that are similar to the properties of objects, and there are functions (which transform the data: VIs, VITs, LV blocks) that are similar to the methods of objects. A definition of a container is a domain (a type-def cluster), which is similar to an OO class.

So I created a way of modeling the architecture of an app entirely using a naming convention representing this. For more details see:

http://forums.lavag.org/index.php?s=&s...ost&p=17238

As for

In the quarrelsome or humorous corner today, are we, Jacemdom?

I don't know how to qualify the corners, but one thing I know is that I'm not in the same corner as those who believe that LV lacks native by-ref modeling tools. If NI wants to add them in the future, so be it, as long as they never forget the original simplicity of the dataflow implemented by the fathers of LV.

Emotionally, I can say that all the discussions and complaints I have read on this forum since LV 8.2 came out with new tools to push forward the designs of those who have used LV for what it was, a DATAflow language, have made me :angry: ... and I also believe that I am the only one responsible for getting :angry: , as I was the one who read all the info, knowing in advance I would get :angry: ...

I have now let it out :o and can continue in my corner :ninja: preaching about DATAFLOW (I don't believe that wires can solve everything; sometimes data needs to be shared, and LV has a lot of different tools to do so, with no need for by-ref OO).

Can "by ref" OO offer, you have bug, follow the wire?

I see in your tests that you create and destroy the mutexes almost as often as you use them. I would believe that a more accurate test would consist of creating and destroying them once, but using them often.

I fully agree. When I was writing such a test, I encountered a bug in LabVIEW's notifier implementation. I couldn't go on with the test, as notifications were missed. So notifiers cannot be fully trusted.

I don't see objects when I model (software or any other modeling)... I see data and functions that act upon it... I don't unite them as one to create objects... I see them separated...

No one should force you to use OO. Using OO should never be a goal in itself. All the programs I've written that use OO aspects don't do so everywhere, only for the parts where it's useful. But I don't think the usefulness of OO is something that should be discussed here. It's already in, and it will never leave LabVIEW.

And obviously you can build OO systems without native language support. "C with classes" was the first implementation on top of C; it was done with nothing but #defines and the like.

I don't know how to qualify the corners, but one thing I know is that I'm not in the same corner as those who believe that LV lacks native by-ref modeling tools. If NI wants to add them in the future, so be it, as long as they never forget the original simplicity of the dataflow implemented by the fathers of LV.

I agree with you that that's essential. I believe there is a lot of power in dataflow; the route of the data on the diagram tells you so much. But dataflow only works locally, within a VI and towards its subVIs. For the app as a whole, dataflow is not apparent. We use functional globals all the time (I hope you do as well!) to store more complicated things that need to be accessed from multiple places. A functional global does not follow the dataflow paradigm, as it stores data in shift registers. So at that point the dataflow ends (or starts). You can use dataflow up to the functional global, and even inside the functional global, but once you place that functional global in multiple VIs the dataflow is disturbed. The same modified data does not flow in and out; a lot of data may be stored inside the VI.
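For readers who don't use LabVIEW daily, here is a loose C++ analogy of a functional global (hypothetical code with made-up names; in LabVIEW the state actually lives in an uninitialized shift register of a non-reentrant VI): one routine owns the state, and every caller goes through it rather than through a flowing wire.

#include <mutex>

enum class Action { Set, Get, Increment };

int functional_global(Action action, int input = 0) {
    static std::mutex mutex;  // non-reentrant: one caller at a time
    static int state = 0;     // plays the role of the shift register
    std::lock_guard<std::mutex> guard(mutex);
    switch (action) {
        case Action::Set:       state = input;  break;
        case Action::Increment: state += input; break;
        case Action::Get:       break;
    }
    return state;
}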

Dataflow is a great concept, but there are limits to it. The trick is to go past the limits in the best way. A functional global is one such way, the event structure - a fascinating solution by NI - is another, and native referencing OO would be yet another.

Can "by ref" OO offer, you have bug, follow the wire?

You mean you are afraid debugging is going to be more difficult? Then NI should add features to keep that possible. It is quite important to keep it "rapid development".

Joris

You mean you are afraid debugging is going to be more difficult? Then NI should add features to keep that possible. It is quite important to keep it "rapid development".

That is my question... How can you make that possible? How can you make a "follow the data wire" debugging scheme in a by-ref design? I don't see how one can follow the data in a by-ref design...

That is my question... How can you make that possible? How can you make a "follow the data wire" debugging scheme in a by-ref design? I don't see how one can follow the data in a by-ref design...

You could attach a probe to the wire and see what the values of the object's attributes are... You could open a diagram and set a breakpoint. Maybe you could have a front panel and block diagram per instance of an object...

In the diagram, things don't look different at all. It's only the behaviour at the start and end of an object's life where things are different. The current system creates a new object when you branch a wire, and removes an object from memory once the data in the wire is no longer used (this is the simple version of the story ;) ). A referencing system creates an object on request (which allows for a constructor) and removes it on request (which allows for a destructor; it may also allow for automatic destruction when an object is no longer used, but that is tricky business). The fact that the programmer needs to request a new object prevents having "parallel universes" by accident. The "request to create" could in practice be a constructor placed on the block diagram, just like a method is placed now.

Joris

You could attach a probe to the wire and see what the values of the object's attributes are... You could open a diagram and set a breakpoint. Maybe you could have a front panel and block diagram per instance of an object...

In the diagram, things don't look different at all. It's only the behaviour at the start and end of an object's life where things are different. The current system creates a new object when you branch a wire, and removes an object from memory once the data in the wire is no longer used (this is the simple version of the story ;) ). A referencing system creates an object on request (which allows for a constructor) and removes it on request (which allows for a destructor; it may also allow for automatic destruction when an object is no longer used, but that is tricky business). The fact that the programmer needs to request a new object prevents having "parallel universes" by accident. The "request to create" could in practice be a constructor placed on the block diagram, just like a method is placed now.

Joris

Using an analogy, would the by-ref implementation in LabVIEW be like designing a sewer system that lets the water and the treatment plant go through the pipes together? :blink:

Using an analogy, would the by-ref implementation in LabVIEW be like designing a sewer system that lets the water and the treatment plant go through the pipes together? :blink:

Jacemdom, you don't seem to want to use by-ref objects in LabVIEW. So may I ask, what is your solution for abstracting real-world objects such as files, hardware devices, a specific internet connection, or a front panel object, if you don't think references are a proper way of referring to these objects? There may be a way that I don't know about. If your answer is "do not use abstraction", then how do you refer to these objects other than by using references?

Jacemdom, you don't seem to want to use by-ref objects in LabVIEW. So may I ask, what is your solution for abstracting real-world objects such as files, hardware devices, a specific internet connection, or a front panel object, if you don't think references are a proper way of referring to these objects? There may be a way that I don't know about. If your answer is "do not use abstraction", then how do you refer to these objects other than by using references?

From: Implementing synchronized access to shared objects

I know that everything cannot be connected with wires and that sometimes a "shared dataspace" is needed between parallel processes, but I create that shared space when I need it and only put the data that needs to be shared there.

Basically, I see them as shared/parallel resources that need sharing mechanisms, be they direct or pointed (refnums). And I try to avoid them when possible, and surely won't start creating more... I don't see the file refnum or other refnums as the file itself, but just one more piece of data that is needed by a function to accomplish its task...

Now that I have answered your question, could you respond to mine:

Using an analogy, would the by-ref implementation in LabVIEW be like designing a sewer system that lets the water and the treatment plant go through the pipes together? If not, how would the sewer example need to be modified to fit?

Basically, I see them as shared/parallel resources that need sharing mechanisms, be they direct or pointed (refnums). And I try to avoid them when possible, and surely won't start creating more... I don't see the file refnum or other refnums as the file itself, but just one more piece of data that is needed by a function to accomplish its task...

It seems after all this debate that we fully agree on this issue. I also appreciate the dataflow nature of LabVIEW; I definitely don't want modifications that would go against it. If you read my previous posts in this forum, I have been pushing to see new features that would bring LabVIEW even closer to real dataflow, to reach the performance gains of pure dataflow. Instead of LabVOOP I would have liked to see LabVIEW evolve more towards the features of functional programming languages, which are pretty close to dataflow languages in many respects. There are many alternative ways to provide modularity and data abstraction in a programming language. Among all the possible solutions to this problem, National Instruments chose to support concepts familiar from object-oriented programming, so that's what we have to live with. As concepts resembling object orientation have been chosen as the de facto abstraction and modularity tools in LabVIEW, we just have to verify that they can cope with the programming problems of LabVIEW developers. We are not likely to see another abstraction mechanism built into LabVIEW for a while. LabVOOP will be the abstraction and modularity layer from now on.

The only thing related to this discussion of by-ref objects is that I want to see a decent way to abstract real-world objects. I wouldn't do this if I really didn't need to, but I do need to. It may be that the present way of abstracting real-world objects in LabVIEW is sufficient in your projects, but it definitely runs into constant problems in my projects, where I need to refer to a huge number of real-world objects shared by different parts of the application. And since I cannot avoid this, I just need to deal with it. It would definitely help if, instead of me dealing with the issue every time by myself, LabVIEW gave me more sophisticated tools to deal with it. It doesn't make LabVIEW any less dataflow. LabVIEW currently uses references in multiple places, to refer to files, front panel objects, etc. I would just need a way to create my own abstract references, so that I could refer to my own file types and other real-world objects I need to refer to. I can somehow manage with the current tools; LabVOOP is an excellent help in abstracting these real-world objects. But I still need to use queues as the reference mechanism, and this could be built into LabVIEW, providing an easier-to-use and more efficient reference mechanism. It's all about efficiency of software development and nothing else. I only want to see LabVIEW features that help our software development projects become easier to work through and easier to maintain. Nobody forces you to use these features, as you don't seem to have as strong a need for them as others do. Still, I cannot see how it can annoy you that we have different needs in our software development projects than you do, and LabVIEW supporting these features would in no way make it a less efficient tool for your needs.

Now that I have answered your question, could you respond to mine:

Using an analogy, would the by-ref implementation in LabVIEW be like designing a sewer system that lets the water and the treatment plant go through the pipes together? If not, how would the sewer example need to be modified to fit?

It must be my lack of English language skills, but I didn't really understand this example of yours. So I cannot really answer.

