Jacemdom Posted May 10, 2007
Hi. Here is an exploration of an alternative/complementary direction to LVOOP that can be chosen to enhance/accelerate code creation and manageability in LabVIEW. I deliberately posted this on Info-LabVIEW, LAVA and NI's forums, in order to reach the largest base of opinions possible. I propose that everyone who is a member of LAVA posts there, to try and limit the redundancy. I did not want to post it only on LAVA and go to Info-LabVIEW and NI's forum to say "Go see this on LAVA", because not everyone wants to become a member of LAVA and I would like to get their opinions too. I plan on gathering all the information and publishing it on an FTP site.
MS Word version ftp://all:Password1@ftp.cephom.ca/AnotherVIEW.doc
Browser version ftp://all:Password1@ftp.cephom.ca/AnotherVIEW.htm
robijn Posted May 10, 2007
Hi, Very interesting. I miss one important goal of object orientation: the match between the real world and the information. OO does not only facilitate this by being able to make a hierarchy of characteristics ("classes") but also by guaranteeing some things about the life of an object (constructor, destructor, consistency of data). The latter is missing in your article. I think it is important that the relation between the real world and its abstraction in the form of objects is correct and protected, to prevent getting "parallel universes" as we have discussed here earlier on LAVA. This means an object X needs to have defined actions in a given sequence on it, so that it is always possible to have a consistent state that also matches the state of the real world. The programmer of the class should be in full control of everything that happens with objects instantiated from the class. The only way to achieve that is by having some kind of referencing system (I haven't heard of any other solution). Seeing your list of references you must have considered this, so my question is: why do you not consider this a problem? Joris
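A minimal C++ sketch, added here purely for illustration, of the lifecycle guarantee Joris is describing: because only the class's constructor and destructor create and tear down an object, the class author controls its state from birth to death. The SerialPort class and all of its members are hypothetical, not taken from the AnotherVIEW document.

```cpp
#include <stdexcept>

// Hypothetical class: the constructor and destructor guarantee that an
// object only ever exists in a consistent, fully initialized state.
class SerialPort {
public:
    explicit SerialPort(int portNumber) : port(portNumber), open(true) {
        if (portNumber < 0) throw std::invalid_argument("bad port");
        // ...acquire the hardware resource here...
    }
    ~SerialPort() { if (open) close(); }   // cleanup is guaranteed
    void close() { open = false; /* ...release the resource... */ }
private:
    int  port;   // callers cannot corrupt these fields directly
    bool open;
};

int main() {
    SerialPort sp(3);   // it is impossible to obtain a SerialPort that skipped construction
    return 0;
}
```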
Jacemdom (Author) Posted May 10, 2007
QUOTE(robijn @ May 9 2007, 11:21 AM) OO does not only facilitate this by being able to make a hierarchy of characteristics ("classes")
Hierarchy of classes = hierarchy of clusters.
QUOTE(robijn @ May 9 2007, 11:21 AM) but also by guaranteeing some things about the life of an object (constructor, destructor, consistency of data)
Constructor, destructor and life expectancy are irrelevant concepts to the dataflow paradigm. More details can be found in the "decisions behind the design" document. Data exists as long as it is needed.
QUOTE(robijn @ May 9 2007, 11:21 AM) Seeing your list of references you must have considered this, so my question is: why do you not consider this a problem?
Could you come up with a concrete example of the problem you are talking about?
Yair Posted May 10, 2007
Sciware GOOP already handles inheritance (including overriding methods) in this fashion and works as far back as 6.1, so the wheel was already invented. :laugh: It's not by-value, but that seems to be a good thing for most people. BTW, typedefs weren't always around. I believe they've only been around since about version 5 or so.
Jacemdom (Author) Posted May 10, 2007
QUOTE(yen @ May 9 2007, 03:52 PM) Sciware GOOP already handles inheritance (including overriding methods) in this fashion and works as far back as 6.1, so the wheel was already invented. :laugh: It's not by-value, but that seems to be a good thing for most people. BTW, typedefs weren't always around. I believe they've only been around since about version 5 or so.
Staying by-value is the actual intention of this exploration...
robijn Posted May 10, 2007
QUOTE(Jacemdom @ May 9 2007, 07:14 PM) Could you come up with a concrete example of the problem you are talking about?
I have an axis. I can control a stepper motor to change the angle that the axis is at. For that reason I have created a class A and created one object of it, X. Now, when I call X.goto(30) the axis rotates to a position of 30 degrees. The object stores the position of this axis because next time we need to rotate from this angle to a new angle. For example, if we need to go to 45 degrees, we need to rotate only 15 degrees. That means the current state is stored in the object and used to calculate the new movement. In C, Java and other languages that have referencing class systems, the programmer of the class has means to make sure that he always has full control of the state of the object. With dataflow, like in LVOOP, if you connect a wrong wire your object data gets mixed up, gets lost and/or gets corrupted. It is very, very easy to make an enormous mess of your object data in this way. Let alone what happens if you have parallel access to an object that is stored in some repository. Retrieving twice and then storing twice results in the loss of one of the two mutations, and can only be prevented in an easy way by locking, but this may give deadlocks. Preventing them is quite difficult. I guess a search on "parallel universe" should give more examples of these problems. Joris
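A minimal C++ sketch of the axis example, added for illustration and using hypothetical names (Axis, gotoDeg, rotate): the object remembers the last commanded angle so that each new command can be turned into a relative move.

```cpp
// Hypothetical sketch of the axis example: the object stores the last
// commanded angle so each new command only rotates by the difference.
class Axis {
public:
    void gotoDeg(double target) {
        double delta = target - currentDeg;  // e.g. 45 - 30 = 15 degrees
        rotate(delta);                       // drive the stepper motor
        currentDeg = target;                 // state stays consistent
    }
private:
    void rotate(double degrees) { /* ...send pulses to the stepper... */ }
    double currentDeg = 0.0;  // only the class itself may touch this
};

int main() {
    Axis x;
    x.gotoDeg(30.0);  // rotates 30 degrees from 0
    x.gotoDeg(45.0);  // rotates only the remaining 15 degrees
    return 0;
}
```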
Jacemdom (Author) Posted May 10, 2007
QUOTE(robijn @ May 9 2007, 05:08 PM) I have an axis. I can control a stepper motor to change the angle that the axis is at. For that reason I have created a class A and created one object of it, X. Now, when I call X.goto(30) the axis rotates to a position of 30 degrees. The object stores the position of this axis because next time we need to rotate from this angle to a new angle. For example, if we need to go to 45 degrees, we need to rotate only 15 degrees. That means the current state is stored in the object and used to calculate the new movement. In C, Java and other languages that have referencing class systems, the programmer of the class has means to make sure that he always has full control of the state of the object. With dataflow, like in LVOOP, if you connect a wrong wire your object data gets mixed up, gets lost and/or gets corrupted. It is very, very easy to make an enormous mess of your object data in this way. Let alone what happens if you have parallel access to an object that is stored in some repository. Retrieving twice and then storing twice results in the loss of one of the two mutations, and can only be prevented in an easy way by locking, but this may give deadlocks. Preventing them is quite difficult. I guess a search on "parallel universe" should give more examples of these problems. Joris
If you have a single VI to set the angle of that stepper motor, it could remember the last state in itself (shift register) and nobody could corrupt it. Sometimes other shareable "containers" of data are required by parallel processes, but I believe that each situation needs to be considered and that no one way of doing things will solve everything, like putting everything into objects. So for this issue I would either: 1- query the axis to know where it is before sending another command, or 2- store that value in a "set angle" VI in an uninitialized shift register.
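The "uninitialized shift register" pattern is LabVIEW-specific; a rough textual analogue, added here for illustration and assuming nothing beyond standard C++, is a function that owns static state, so the angle can only be changed through that one interface. Note that LabVIEW serializes calls to a non-reentrant VI, whereas the C++ version would need a mutex for parallel callers.

```cpp
#include <cstdio>

// Rough C++ analogue of a LabVIEW functional global variable (an
// uninitialized shift register in a non-reentrant VI): the last commanded
// angle lives inside one function, so callers cannot corrupt it directly.
double setAngle(double target) {
    static double currentDeg = 0.0;      // persists between calls
    double delta = target - currentDeg;  // relative move to perform
    currentDeg = target;
    // ...command the stepper to rotate by 'delta' here...
    return delta;
}

int main() {
    std::printf("%.1f\n", setAngle(30.0));  // prints 30.0
    std::printf("%.1f\n", setAngle(45.0));  // prints 15.0
    return 0;
}
```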
Aristos Queue Posted May 12, 2007
I have read your document. Will comment later. Maybe a lot later given current work load, but I wanted you to know I had read it.
QUOTE(yen @ May 9 2007, 02:52 PM) It's not by-value, but that seems to be a good thing for most people.
If by "most people" you include C++ and Java programmers, then yes, I agree. If we restrict "people" to be the set of LV programmers, then I think your statement needs to be amended: "It's not by-value, but most people seem to think that's a good thing."
Yair Posted May 12, 2007
QUOTE(Aristos Queue @ May 11 2007, 05:35 AM) If by "most people" you include C++ and Java programmers, then yes, I agree. If we restrict "people" to be the set of LV programmers, then I think your statement needs to be amended: "It's not by-value, but most people seem to think that's a good thing."
OK, I'm amending my statement: It's not by-value, but most people seem to think that's a good thing, because they're used to by-ref OOP in LV (file functions, TCP functions, queue functions) and find it very easy to understand and work with. I include myself in that group, since I still don't really understand by-value OOP (I'm still trying, though. I just have a limited amount of time for it since I don't work with 8.x) and my experience with GOOP has been with hardware, which requires concurrent access to the same instance from different places in the code. When I split a numeric wire, I don't mind what LV does with the "object". It can duplicate it or schedule it so that it can reuse the buffer, since all the data is in the wire. When I split a reference pointing to a file, however, I definitely do not want LV to create another copy, because I only have a single file. In any case, we shouldn't turn this into another "explain LVOOP" thread. I guess I will just need to learn by learning.
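A small C++ sketch of the distinction Yair describes, added for illustration with hypothetical types: copying a value duplicates the data, while copying a reference-like handle still designates the same single file.

```cpp
#include <memory>
#include <vector>

struct Waveform   { std::vector<double> samples; };  // by-value data
struct FileHandle { int descriptor; };                // refers to one real file

int main() {
    Waveform a{{1.0, 2.0, 3.0}};
    Waveform b = a;            // splitting a data wire: an independent copy,
    b.samples[0] = 99.0;       // 'a' is untouched

    auto f  = std::make_shared<FileHandle>(FileHandle{42});
    auto f2 = f;               // splitting a reference wire: still ONE file,
                               // both handles operate on the same resource
    return 0;
}
```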
Jacemdom (Author) Posted May 14, 2007
Made an update to the document; the new text is between the ******* markers below.
5.3.3.3 AnotherVIEW Implementation
All data would be public by default and it could be possible to restrict access to certain elements in the hierarchy by right-clicking on it and selecting "restrict access" to those VIs, and consequently the unbundle and bundle functions would only allow this data to be accessed when used in those VIs. More complex behavior could be added directly in the configuration of the "eTD" used in conjunction with "eBundle" and "eUnbundle".
*******The data that would need to be accessed by parallel processes just needs to be "encapsulated" in a functional global variable. A piece of software could consist of basically two main hierarchical clusters: one for the dataflow in a wire, and the other in a global for parallel processes that need access to it, if required.*******
Also, TinyURL links for those who were not able to use the previous hyperlinks:
MS-Word http://tinyurl.com/ys7hof
HTML http://tinyurl.com/2698ob
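A loose C++ analogy, added for illustration and not part of the AnotherVIEW proposal itself, for "public by default, with access optionally restricted to named VIs": in textual form this maps roughly to a private member plus explicitly named friend functions that are the only ones allowed to bundle/unbundle it. All names are hypothetical.

```cpp
// Editorial illustration only: restricting a field so that only a named
// set of functions may read or write it, roughly like the proposal's
// "restrict access to those VIs". In C++ this maps to a private member
// plus explicitly named friend functions.
class PowerLevel {
    double watts = 0.0;                                   // restricted element
    friend void   setPowerLevel(PowerLevel&, double);     // the only allowed accessors
    friend double getPowerLevel(const PowerLevel&);
};

void   setPowerLevel(PowerLevel& p, double w) { p.watts = w; }
double getPowerLevel(const PowerLevel& p)     { return p.watts; }

int main() {
    PowerLevel p;
    setPowerLevel(p, 1.5);                    // allowed: named accessor
    return getPowerLevel(p) > 0 ? 0 : 1;      // any other code cannot touch p.watts
}
```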
LAVA 1.0 Content Posted May 15, 2007
Hi, In OOP you also need the ability to encapsulate. Not just encapsulation of data, but also encapsulation of functionality. With your idea, how can I make two drivers (or, in general, objects) for two hardware devices, and choose which object to use at run time? (I know this can be done with dynamic references, but that's not OOP!) If you find a way to embed this into your idea, I think you'll end up with pretty much the same as LabVOOP. Also, in 5.3.2.3 you say: "Then the unbundle would let you see what was "initialized"". That seems easy at first. But how can LabVIEW determine whether the wire is initialized at compile time (or worse: during edit time)? Let's say we have a wire coming from a case structure. The false case does the initialization, the true case doesn't. LabVIEW can't know which case is used, so it cannot disable the unbundle elements correctly. The same applies to class wires coming from sub VI outputs, the connector pane (sub VI inputs), locals, globals, property nodes, variants, type casts or flattened strings. How would you make a sub VI that acts on the class? The user can wire both uninitialized and initialized wires to the sub VI; during editing, the wire will most likely be uninitialized. Regards, Wiebe.
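A minimal C++ sketch of the capability Wiebe is asking about, added for illustration with hypothetical driver names: the caller works against a common interface, and the concrete driver is chosen at run time.

```cpp
#include <memory>
#include <string>

// Hypothetical drivers sharing one interface; which one is used is
// decided at run time -- the essence of the dynamic dispatch asked about.
struct Scope {
    virtual ~Scope() = default;
    virtual double readVoltage() = 0;
};
struct VendorAScope : Scope { double readVoltage() override { return 1.0; } };
struct VendorBScope : Scope { double readVoltage() override { return 2.0; } };

std::unique_ptr<Scope> makeScope(const std::string& model) {
    if (model == "A") return std::make_unique<VendorAScope>();
    return std::make_unique<VendorBScope>();
}

int main() {
    auto scope = makeScope("A");   // choice made at run time, not compile time
    return scope->readVoltage() > 0 ? 0 : 1;
}
```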
Aristos Queue Posted May 16, 2007
Dear Jacemdom, Only two sections of your document seem to need a reply. 5.3.2.3 is your idea of defining the entire class hierarchy in a single control of a cluster of clusters. 5.3.3.3 is your idea that all data would be public. The 5.3.2.3 idea is similar to how the classes work under the hood. You, the user, create each class independently. Behind the scenes, each class is represented by a cluster of clusters -- all the clusters of the ancestors clustered together with the current level of data. As I said, it is similar, but we don't create the branching cluster structure that you propose. Something like that structure is being toyed with for getting classes onto targets that have to have fully preallocated data -- every object on those targets would be a composite of fields such that the whole was capable of containing all possible descendants (obviously no dynamic loading on those platforms). The idea is a good one, but it doesn't really work for the editing environment. Problems with having them all in the same cluster up front:
1) You couldn't have multiple users creating child classes of a parent, since they'd both need to be editing the same file. Integration would be hell.
2) You couldn't dynamically load classes -- every descendant would always be in memory.
3) The parent implementations would be open and visible to the child implementations. You'd lose the independence of separating implementation from interface.
As far as "all data would be public"... you are hereby banned from using the word "encapsulation" ever again. The disadvantages of public data and the advantages of private data have been so thoroughly talked about in so many forums that I'm not going to go into them here. We do need to make the process of creating accessor VIs simpler. But making public data would be a disservice in the extreme to developers (yes, that's my opinion, and yes, I just stated it as fact, and you're not going to make much headway with any counter argument).
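A rough sketch, added for illustration, of the "cluster of clusters" representation described above, using hypothetical C++ structs (not LabVIEW internals): each class's data embeds its ancestors' data, without the single all-descendants cluster that the proposal suggests.

```cpp
// Rough illustration of "each class is a cluster of clusters":
// a child's data record embeds its parent's record, so the wire
// carries all ancestor data plus the current level's fields.
struct InstrumentData {            // parent class private cluster
    char address[32];
};
struct ScopeData {                 // child class
    InstrumentData parent;         // ancestor cluster nested inside
    double timebase;               // current level of data
};

int main() {
    ScopeData s{};                 // one object = parent cluster + child fields
    s.parent.address[0] = '\0';
    s.timebase = 1e-3;
    return 0;
}
```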
Jacemdom (Author) Posted May 18, 2007
QUOTE(Aristos Queue @ May 15 2007, 12:00 AM) Dear Jacemdom, Only two sections of your document seem to need a reply. 5.3.2.3 is your idea of defining the entire class hierarchy in a single control of a cluster of clusters. 5.3.3.3 is your idea that all data would be public.
Reply under construction... estimated delivery date 1 to 5 days...
Aristos Queue Posted May 18, 2007
QUOTE(Jacemdom @ May 17 2007, 12:24 PM) Reply under construction... estimated delivery date 1 to 5 days...
As a short reply, let me know if there's another section that you want me to specifically comment on. As I read it, the rest seemed to be an explanation of how you came to your suggestions and the advantages of those suggestions.
Jacemdom (Author) Posted May 18, 2007
QUOTE(Aristos Queue @ May 17 2007, 04:17 PM) As a short reply, let me know if there's another section that you want me to specifically comment on. As I read it, the rest seemed to be an explanation of how you came to your suggestions and the advantages of those suggestions.
Those two points were the ones to be challenged, and the arguments were right on target.
Jacemdom (Author) Posted May 22, 2007
QUOTE(Jacemdom @ May 17 2007, 01:24 PM) Reply under construction... estimated delivery date 1 to 5 days...
Delivery date revised to 1 to 10 days from now...
Jacemdom (Author) Posted June 1, 2007
QUOTE(Jacemdom @ May 21 2007, 11:48 AM) Delivery date revised to 1 to 10 days from now...
Delivery date re-revised to 1 to 20 days from now...
Jacemdom (Author) Posted June 16, 2007
QUOTE(Aristos Queue @ May 15 2007, 12:00 AM) The idea is a good one, but it doesn't really work for the editing environment. Problems with having them all in the same cluster up front: 1) You couldn't have multiple users creating child classes of a parent, since they'd both need to be editing the same file. Integration would be hell.
What if development were organized so that there is a central or master architect on every project who designs the overall architecture of the software and the clusters at the same time, much as there is probably already a lead architect in organized development teams? This architecture would basically consist of a hierarchy of domains and associated actions to be performed on them, by entities of particular natures. It is believed that this is the basis of all computer programming: performing actions (functions) on data. This data can be organized in a hierarchical way and a list of actions can be created for each of them, thus designing the entire app. A standard naming scheme was created to support this idea, and at the same time it added some unanticipated benefits, like never having two VIs in memory with the same name, as no two VIs can have the same name in this scheme because they would do the same thing. It also allows one to verify that the problem is coherently solved, by the fact that if a function name cannot be found, it is because: 1- there is a missing domain, or 2- the frontier(s) between some domains is/are not where it/they should be, or 3- a function does more or less than what it is supposed to do (a function called something like Acquire.vi that also saves the acquisition data to file). It also allows dynamically called VIs to be included automatically in a build, as they are associated with a dynamic nature. The nomenclature of what are called entities (VIs, TDs, globals, etc.) consists of 3 basic parts (a small sketch of the resulting names follows after this post):
1- The nature: What is it? (TD = type definition, GLB = global, FVI = flow VI, DVI = dynamic VI, UVI = UI VI, etc.)
2- The domain: On what does it act? Example: the function generator's (FG) power level, the oscilloscope's scale.
3- The action: What does it do? Set the scale, set the level, etc.
This translates into FVI_FG_Power_Level_Set and FVI_Oscilloscope_Scale_Set. In this implementation the 3 parts are separated by underscores, but any convention can be used, as long as it is a standard within a particular team. So the basic job of the principal architect would consist of generating the overall domain architecture, and then concurrent development could start. He would also be responsible for updating those clusters, as they would be directly linked to the overall design, and it would serve the purpose of making sure that he knows everything that is going on at that level. The cluster would also be constructed as a hierarchy of type definitions, allowing work to be done on sub-domains (child classes) if necessary. Creating all the data definitions/containers needed by a piece of software is not a long task compared to the creation of functions, at least in LabVIEW (dropping a control in a cluster). The architect could also decide to leave a particular domain TD empty to let someone else decide on the particular data needed by that domain. As the data structures are all created during the design phase, concurrent development of code could then proceed and would mainly consist of function creation using the already defined inputs and outputs.
This methodology is currently in use and has been under development for the last 7 years. It has proved to clarify, accelerate and enhance development, while keeping in line with the original dataflow implementation. The functionalities discussed in AnotherVIEW could allow this approach to be pushed even further.
QUOTE(Aristos Queue @ May 15 2007, 12:00 AM) 2) You couldn't dynamically load classes -- every descendant would always be in memory.
As the proposed approach clearly separates data from functions, and the data cluster type def is the structural equivalent of a class, this means that yes, all classes would be in memory; but what would that memory cost be, if all the data contained mainly consisted of null/empty values? It would basically only leave the cluster definitions in memory. Would that be significant in today's multi-gigabyte RAM systems, or even in multi-megabyte systems? You could still load the functions dynamically, significantly reducing the memory usage, as the majority of bytes reside in function definitions vs. data container (TD) definitions.
QUOTE(Aristos Queue @ May 15 2007, 12:00 AM) 3) The parent implementations would be open and visible to the child implementations. You'd lose the independence of separating implementation from interface.
Is this valid considering that in dataflow the data is naturally separated from the functions, in contrast to OO design, where methods and properties are merged in one object?
QUOTE(Aristos Queue @ May 15 2007, 12:00 AM) As far as "all data would be public"... you are hereby banned from using the word "encapsulation" ever again. The disadvantages of public data and the advantages of private data have been so thoroughly talked about in so many forums that I'm not going to go into them here. We do need to make the process of creating accessor VIs simpler. But making public data would be a disservice in the extreme to developers
As stated in the document: QUOTE(AnotherVIEW v2) All data would be public by default and it could be possible to restrict access to certain elements in the hierarchy by right-clicking on it and selecting "restrict access" to those VIs, and consequently the unbundle and bundle functions would only allow this data to be accessed when used in those VIs. More complex behavior could be added directly in the configuration of the "eTD" used in conjunction with "eBundle" and "eUnbundle".
Therefore creating private data.
QUOTE(Aristos Queue @ May 15 2007, 12:00 AM) We do need to make the process of creating accessor VIs simpler.
Is the added debugging complexity also being worked on, especially probing, which I believe to be a drawback of the chosen implementation? I believe that the ability to follow and look into the wire has been one of the main strengths of LabVIEW, and losing that decelerates my ability to write working, tested code.
QUOTE(Aristos Queue @ May 15 2007, 12:00 AM) Something like that structure is being toyed with for getting classes onto targets that have to have fully preallocated data -- every object on those targets would be a composite of fields such that the whole was capable of containing all possible descendants (obviously no dynamic loading on those platforms).
Does this mean that standardizing everything to this idea could simplify the architecture of LabVIEW itself? Could you have dynamic loading on those platforms, if dynamic loading only consisted of dynamically loading functions?
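The Nature_Domain_Action scheme from the post above, rendered as function names for illustration; C++ is used purely as neutral text, and every name beyond the two given in the post is hypothetical.

```cpp
// Editorial sketch of the Nature_Domain_Action scheme from the post.
// "FVI" = Flow VI (the nature), then the domain, then the action.
// The first two names come from the post; the others are hypothetical.
void FVI_FG_Power_Level_Set(double /*watts*/)           { /* ... */ }
void FVI_Oscilloscope_Scale_Set(double /*voltsPerDiv*/) { /* ... */ }
void FVI_Oscilloscope_Waveform_Acquire()                { /* ... */ }

int main() {
    FVI_FG_Power_Level_Set(1.0);
    FVI_Oscilloscope_Scale_Set(0.5);
    FVI_Oscilloscope_Waveform_Acquire();
    return 0;
}
```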
Aristos Queue Posted August 30, 2007
I'm going to come down pretty negative on this idea. I hope it is apparent that I'm objecting on technical grounds and not personal ones. Yes, LabVOOP is my baby, and it can be easy for me to get defensive about design decisions. I've taken the time to really look at Jacemdom's suggestion, and to make sure my mind is clear of any prideful prejudices. I think I can clearly reject this idea on technical merit. Why am I prefacing this? Because this discussion is over text, which doesn't convey conversation well at all. So I'm going out of my way to say that I appreciate users who contemplate and suggest alternate implementations. I really do. But I'm going to be very direct in my objections to this idea. I want everyone to understand that it is the idea that I'm knocking, not the person. Why do I say all this? Because I like being able to provide feedback on customer suggestions, to give the background details on why LV won't be implementing it, but I have found that not all customers take it well, and I've learned to just say "interesting idea, we'll think about it" whether the idea is good or bad so as to not give any feedback whatsoever. But Jacemdom's is a worthy suggestion that I'd like to respond to in full, because of the unique tack it takes for language design. I have no interest in starting a flame war, and I hope my comments are taken in that light. With all that in mind...
QUOTE(Jacemdom @ Jun 15 2007, 09:34 AM) What if development were organized so that there is a central or master architect on every project who designs the overall architecture of the software and the clusters at the same time, much as there is probably already a lead architect in organized development teams?
Even if you have such a central architect, you can't expect the central architect to do all the implementing. As various members of a team create various branches of the hierarchy, you have them integrating their changes into a central single VI. Even with the graphical merge of LV 8.5, that's still creating a lot of contention for that VI. As time goes by, you may have a large enough hierarchy that no single person is even capable of describing all the possible classes -- I know of many hierarchies like this in C++ code, and one getting close to this already in G code. There would be many hierarchies that would easily expand beyond a single screen in their complexity. Also, what about when there is no "team" at all? How does a LV user extend a class developed by NI? How do multiple OpenG developers collaborate to extend each other's work? The design that requires such a central repository for the hierarchy necessarily limits the hierarchy extension only to those who can edit the hierarchy file. If Albert develops a class hierarchy and gives it to both Bob and Charlie, under your scheme, if Bob and Charlie both develop a new class (by editing the central cluster) they cannot deploy on the same machine, since their clusters would be unable to be in memory at the same time. Further (and this one is critical), there would be no way for the parent class to be redesigned without impact on the child classes. The parent needs to be an independently replaceable unit. That allows for the complete overhaul of the private portions of that class without disturbing any of the child classes. Indeed, with the current LV scheme, the parent can be replaced even in a run-time engine without requiring the children to recompile at all.
Although there are merits to your single cluster concept as a deployment step for optimization, as a development environment I just can't see it as viable at all.
QUOTE This architecture would basically consist of a hierarchy of domains and associated actions to be performed on them, by entities of ... <snip> ... principal architect would consist of generating the overall domain architecture, and then concurrent development could start.
All of the above is a significant organization of *people* required to make the software architecture viable. That creates significant barriers to usage.
QUOTE This methodology is currently in use and has been under development for the last 7 years. It has proved to clarify, accelerate and enhance development, while keeping in line with the original dataflow implementation. The functionalities discussed in AnotherVIEW could allow this approach to be pushed even further.
Within a single organization, a Central Architect system can be very viable, but there are many other styles of programming team that are just as effective. You work in the Cathedral but others work in the Bazaar. I do not see how the single cluster concept makes an all-powerful architect's job easier; I do see how it makes the communal developers' work nigh on impossible.
QUOTE As the proposed approach clearly separates data from functions, and the data cluster type def is the structural equivalent of a class, this means that yes, all classes would be in memory; but what would that memory cost be, if all the data contained mainly consisted of null/empty values?
The memory cost is all the implementations of all the functions that are never invoked, possibly including some very large DLLs. Tying all the classes together would make many architectures I've already seen with LabVOOP not available at all. The core software can install a hierarchy. A new module can come along later and install an extension. This is the basis of the new Getting Started Window in LabVIEW 8.5. In fact, many modules can install extensions. Having all possible modules in memory is a major size bloat and is not worth it if you don't use those modules. With the Getting Started Window, the classes are small, so even if you have every possible module installed, the impact is small, but the same type of architecture can be applied to many cases, and in a lot of these cases the effect would be devastating. Take the error code cluster refactoring that I posted to the GOOP forum a couple of months back. There may be specific apps that want very complex error handling routines that load all sorts of external DLLs for graph and chart displays or e-mail communication. These should not be loaded every time the user brings General Error Handler.vi into memory. There are many hierarchies where every single operation is defined for every single class in the hierarchy. I'm willing to bet that it is way more common to have a method at every level than for the majority to be empty.
QUOTE It would basically only leave the cluster definitions in memory. Would that be significant in today's multi-gigabyte RAM systems, or even in multi-megabyte systems? You could still load the functions dynamically, significantly reducing the memory usage, as the majority of bytes reside in function definitions vs. data container (TD) definitions.
What gigabytes of RAM? The RT targets have 64k. Or 8k. An FPGA fabric is very limited. Hitting a PDA requires a thoroughly stripped-down runtime engine.
QUOTE QUOTE 3) The parent implementations would be open and visible to the child implementations. You'd lose the independence of separating implementation from interface. Is this valid considering that in dataflow the data is naturally separated from the functions, in contrast to OO design, where methods and properties are merged in one object?
OK. This is a completely bogus argument. And it is the root misunderstanding of all the by-reference vs. by-value debate. Let's get something clear, everyone: In C++, if I declare an object of type XYZ like this: XYZ variable; the language does not suddenly clone all the functions in memory so that this object has its own copy of all the functions. The functions are still separate from the data, insofar as the assembly code for them occupies the same region of memory whether I have one instance of the class or 1000 instances of the class. The ONLY merging of functions with data is in the header file. Which is equivalent to the .lvclass file. The binding between data and functions is EXACTLY THE SAME between Java, C++ and LabVIEW. And Smalltalk. And just about any other OO language you'd like to name (I would say "all other OO languages", but I leave room for someone having developed a language such as Befunge for OO). Yes, my argument 3 is valid. Very much valid. Any time you have children being designed such that they depend upon a particular implementation of the parent, you have a violation of the most basic tenet of OO: encapsulation of data.
QUOTE Therefore creating private data.
See previous post on why ever having public or protected data is a very bad idea. I don't care that you *can* create private data under your scheme. I object to the idea that you *can* create public data. The default direction is really not under contention here. You can default it to public or default it to private -- but the fact that it can ever be set to public, whether as the default or by deliberate change, is bad.
QUOTE QUOTE We do need to make the process of creating accessor VIs simpler. Is the added debugging complexity also being worked on, especially probing, which I believe to be a drawback of the chosen implementation? I believe that the ability to follow and look into the wire has been one of the main strengths of LabVIEW, and losing that decelerates my ability to write working, tested code.
I think you changed topics here... give me a second... When you say "the chosen implementation", are you referring to the need to create accessors? When I first read this, that seemed to be what you're referring to here. That I would disagree with. The debugging challenge is dramatically simplified by requiring the accessor VIs, because you have a bottleneck to catch all data value changes as they happen, rather than trying to set breakpoints and probes in places scattered throughout the code. But on re-reading, I think you're actually asking about the ability to display in the probe the full child data when the child is traveling on a parent wire. That is a feature I've asked my team to work on. It does pose quite a challenge, and I wouldn't expect to see it soon. But, having said that, I have yet to see much call for it. The majority of the time, if I'm debugging a parent wire, it is the parent's data cluster that I care about. The child's cluster doesn't get changed by parent operations and is rarely of interest at that point in the code. So, yes, it is an interesting problem worthy of attention, and there are cases where it would be useful.
But I've spent the last year looking over the shoulders of LVClass developers, and I haven't seen this create an impediment to development. This isn't the same level of impediment as, for example, the Error Window feedback.
QUOTE Does this mean that standardizing everything to this idea could simplify the architecture of LabVIEW itself? Could you have dynamic loading on those platforms, if dynamic loading only consisted of dynamically loading functions?
No. You couldn't have dynamic loading at all. The whole point is to use this for targets such as FPGA where there is only a single deployment to the target and all possible classes are known at compile time.
SUMMARY: In short, the central repository of data definition for an entire hierarchy is, in my opinion, unworkable for development. It is a useful concept for deployment only. Tying an entire hierarchy together limits extensibility and places restrictions on the types of software teams that can work on the software. I hope all the above makes sense.
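A small C++ demonstration, added for illustration, of the "XYZ variable;" point made above: declaring more objects only adds per-instance data; the compiled member functions exist once and are shared by every instance. The body of the XYZ class here is hypothetical.

```cpp
#include <cstdio>

// Hypothetical class echoing the "XYZ variable;" point above: declaring
// more objects only adds per-instance data; the compiled code for the
// member functions exists once, shared by every instance.
class XYZ {
public:
    void setValue(int v) { value = v; }
    int  getValue() const { return value; }
private:
    int value = 0;   // the only thing duplicated per instance
};

int main() {
    XYZ a, b;            // two objects, one copy of setValue/getValue code
    a.setValue(1);
    b.setValue(2);
    std::printf("%d %d\n", a.getValue(), b.getValue());  // prints "1 2"
    return 0;
}
```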
robijn Posted August 30, 2007
QUOTE(Aristos Queue @ Aug 29 2007, 03:24 AM) See previous post on why ever having public or protected data is a very bad idea. I don't care that you *can* create private data under your scheme. I object to the idea that you *can* create public data.
I think there are two reasons why you would want public data: 1. ease of use (no methods needed to do the work); 2. speed (no method calls needed to do the work). I came to understand NI's way of thinking when I was told that the idea was to do much more optimization for methods. That solves the second point. But for the ease of use I think there is no solution. There is a workaround to have public attributes: in Pascal they were called properties, and they are also known as getters and setters. They can be used to seamlessly replace an attribute later in the class's life with a pair of methods to set and get. Users (i.e. other classes/functions) of the attribute will not notice the change. There is one fundamental problem with that: it is impossible to get decent error reporting. So maybe avoiding getters and setters is best anyway. That leaves us with the current solution. I support it.
QUOTE(Aristos Queue @ Aug 29 2007, 03:24 AM) But on re-reading, I think you're actually asking about the ability to display in the probe the full child data when the child is traveling on a parent wire.
That data is really very useful, because you often need to know the state of the parent to understand the state of the child. Hmm, I don't understand why collecting this data is very difficult... It's a matter of checking what classes the current class consists of and displaying all their data. Actually I would think you have already done this work while building the object data list. You have no child-parent attribute overrides, so that simplifies the situation.
QUOTE(Aristos Queue @ Aug 29 2007, 03:24 AM) No. You couldn't have dynamic loading at all. The whole point is to use this for targets such as FPGA where there is only a single deployment to the target and all possible classes are known at compile time.
I think I miss something here. Dynamic loading of classes is a very important feature. For example, for instrument drivers. It has to be available. But I think that was not the real reason this point was brought up in the discussion here, was it? Was it only to be able to create an object of a given class by specifying what type it should be? There is another way for that: a case structure. Indeed, then all classes need to be known from the start. But they will be anyway on an FPGA, because it cannot load code. And then they use exactly the described amount of memory. Was this what you meant, Jacemdom? Joris
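A short sketch, added for illustration, of the getter/setter workaround Joris mentions, with hypothetical names. C++ has no Pascal-style property syntax, so callers do see the accessor calls, but the drawback he points out still shows: the setter has no natural place to report an error.

```cpp
// Hypothetical illustration of replacing a public attribute with
// getter/setter accessors: the field becomes private and all access
// goes through the pair of methods below. Note the setter could
// validate its input, but has no natural channel to report a failure,
// which is the error-reporting problem mentioned above.
class FunctionGenerator {
public:
    void   setLevel(double dBm) { level = dBm; }   // could validate here...
    double getLevel() const     { return level; }  // ...but how to report failure?
private:
    double level = 0.0;   // was once a public field
};

int main() {
    FunctionGenerator fg;
    fg.setLevel(-10.0);
    return fg.getLevel() < 0 ? 0 : 1;
}
```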
Yair Posted August 30, 2007
QUOTE(Aristos Queue @ Aug 29 2007, 04:24 AM) If Albert develops a class hierarchy and gives it to both Bob and Charlie
What happened to Alice? Did she finally retire? http://xkcd.com/177/
Jacemdom (Author) Posted September 8, 2007
Reply part 1
QUOTE(Aristos Queue @ Aug 28 2007, 09:24 PM) Because this discussion is over text, which doesn't convey conversation well at all
Live/face to face = multiple communication channels (visual, sound, etc.) at an infinite refresh rate. Emails/forums = one communication channel (visual) with, at best, an average practical refresh rate of 0.0016 Hz (10 minutes from send to response). In this thread, the average refresh rate is a magnificent 0.000001509414 Hz, or 184.03 hours... It is slower... but it has its advantages, like letting things settle down and allowing thoughts to develop at their own pace. It is also less stressful, as slower acquisition rates put less stress on the hardware...
QUOTE(Aristos Queue @ Aug 28 2007, 09:24 PM) an expected central architect that knows everything
AnotherVIEW does not require a central architect that knows everything.
QUOTE(Aristos Queue @ Aug 28 2007, 09:24 PM) "a central single VI"
AnotherVIEW does not require a central single VI.
QUOTE(Aristos Queue @ Aug 28 2007, 09:24 PM) "the central architect to do all the implementing"
AnotherVIEW does not require a central architect to do all the implementing.
QUOTE(Aristos Queue @ Aug 28 2007, 09:24 PM) "integrating their changes into a central single VI"
AnotherVIEW does not require integrating changes into a central single VI.
QUOTE(Aristos Queue @ Aug 28 2007, 09:24 PM) "to limit the hierarchy extension only to those who can edit the hierarchy file"
AnotherVIEW does not require limiting the hierarchy extension only to those who can edit the hierarchy file.
QUOTE(Aristos Queue @ Aug 28 2007, 09:24 PM) "a single cluster concept"
AnotherVIEW is not a single cluster concept.
QUOTE(Aristos Queue @ Aug 28 2007, 09:24 PM) "Tying all the classes together"
AnotherVIEW does not require tying all the classes together.
QUOTE(Aristos Queue @ Aug 28 2007, 09:24 PM) "children being designed such that they depend upon a particular implementation of the parent"
AnotherVIEW does not require children being designed such that they depend upon a particular implementation of the parent.
Part 2 in 0.1 to 74 days...
Jacemdom (Author) Posted October 6, 2009
Part 2 in 0.1 to 74 days... I will rephrase that to "Part 2 in 0.1 to 74 VENUS days"... When I wrote this analysis it was mostly based on theoretical knowledge... part 2 will be written after having added more practical knowledge using LVOOP... So I have now approximately 47 years left to come up with part 2...
Jacemdom (Author) Posted October 16, 2009
Yes, LabVOOP is my baby, and it can be easy for me to get defensive about design decisions.
If LVOOP is your baby then I must now admit that you are a pretty good father, and I would love to meet the mother! What seemed to be an ugly baby when I first saw it has grown into a remarkable and attractive young programming tool! It took some time to see it, but it will now be impossible to go back for me! The end of AnotherVIEW to LVOOP...
Jacemdom (Author) Posted January 28, 2019
On 10/17/2009 at 7:32 AM, Jacemdom said: It took some time to see it, but it will now be impossible to go back for me!
That was before the arrival of malleable VIs. My LV code is now 100% class free 😂😉