
Leaderboard

Popular Content

Showing content with the highest reputation on 07/19/2010 in all areas

  1. OK, I finally finished a version of a document I have been promising to write. We put it on our site: OOMessagingCommandPatternStatePattern. In it we present examples of the following in LabVIEW:
     - messaging with LabVIEW objects
     - Command Pattern (with an XML configuration file application example)
     - State Pattern
     Hopefully the examples will be helpful to some readers, and promote further discussion on scalable application architectures.
    3 points
  2. There are four "interesting" cases of a class containing itself in its private data cluster. The cluster could contain...
     1. ... the class cube directly [this is functionally identical to the class cube inside a nested cluster]
     2. ... the parent class cube directly, with the default value set to an instance of the child class
     3. ... the class cube in an array
     4. ... the class cube in a refnum (DVR, Queue, Notifier, Datalog, User Event, User Event Registration)
     If we're talking about direct inclusion (case #1), Daklu got the answer right in his post above. The same infinite allocation that he describes would happen in case #2. So cases 1 and 2 are off the table permanently as "logically impossible to ever implement" (unless LV ever introduced a lazy-evaluation language syntax, which would be a very different programming model from anything that exists today -- I mention the possibility only so that someone doesn't say, "If it is impossible, how does Lisp or Haskell do it?").
     For case #3, if the default value of the array is non-empty, you have the same problem as cases 1 and 2; but if the default value of the array is empty, it would be fine. Case #4 never has to allocate the actual data of the class (it just allocates space for a reference cookie, a cookie whose default value is always Not A Refnum), so it should work. That, I think, is the heart of Mikael's question: why doesn't this work in LabVIEW?
     What we found is that we could get some significant performance boosts during load -- notably dynamic loading into an already running hierarchy -- if a class hierarchy has no circular dependencies (meaning it does not reference itself, even indirectly). There's a lot of work involved in identifying which class can instantiate and reserve first. If all the self-references are changed out for parent-class references, users can still write all the functionality (albeit with a slightly modified syntax from the one they would generally try first) and LV can simplify the load and reserve algorithm considerably.
     I looked through various lists of recursive data structures. The most common that I saw were composition patterns, such as the one in the Graphics shipping example in LV, where Collection inherits from Graphic and includes an array of Graphic. Those are unaffected because they already include their parents, not themselves. The others were the linked lists, trees, and graphs. Honestly, I expected most of these to be written once by someone who was an expert in LVOOP, and then most users would just use them as libraries. That expectation was predicated on other features of LabVIEW existing which have not come to fruition. Even so, it appears (based on AE reports and my looking at user code when I get a chance to visit customers) that very few LV users ever attempt to build anything recursive, much less a recursive data structure, so I wager most of those who attempt it are power LV users who generally know about LAVA and will post questions (like Mikael's) and find these answers.
     Even better: define a parent class that is not recursive, with dynamic dispatch methods for anything that would be recursive, then define a child class that overrides all the necessary VIs. That way you avoid the downcast. As of LV 2010, dynamic dispatch is faster than downcast (although the compiler team is working on some new optimizations for future LV versions that may change the balance of power here). The <labview>\examples\LVOOP\Graphics\Graphics.lvproj project has an example of the composition pattern; a text-language sketch of the same idea follows below.
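     [Editor's note] LabVIEW block diagrams can't be reproduced as text, so here is a rough C++ analogue of the four cases and of the composition pattern. All class names (Node, Graphic, Circle, Collection) are illustrative and are not LabVIEW or shipping-example APIs; treat this as a sketch of the idea, not the implementation.

     ```cpp
     #include <iostream>
     #include <memory>
     #include <vector>

     // Cases 1 and 2: a class containing itself by value cannot exist;
     // the type would need infinite storage. This is the analogue of the
     // class cube sitting directly in its own private data cluster.
     // struct Bad { Bad child; };   // error: field has incomplete type

     struct Node {
         std::vector<Node> children;   // case 3: legal (C++17), default is empty
         std::shared_ptr<Node> next;   // case 4: only the reference is allocated,
                                       //         and it defaults to null
     };

     // Composition pattern from the Graphics example: the container class
     // includes an array of its *parent*, not of itself, so the hierarchy
     // has no circular dependency and no downcast is ever needed.
     class Graphic {
     public:
         virtual ~Graphic() = default;
         virtual void draw() const = 0;            // dynamic dispatch
     };

     class Circle : public Graphic {
     public:
         void draw() const override { std::cout << "circle\n"; }
     };

     class Collection : public Graphic {
         std::vector<std::unique_ptr<Graphic>> items;   // array of parent type
     public:
         void add(std::unique_ptr<Graphic> g) { items.push_back(std::move(g)); }
         void draw() const override {
             for (const auto& g : items) g->draw();     // dispatches to children,
         }                                              // including nested Collections
     };

     int main() {
         Collection c;
         c.add(std::make_unique<Circle>());
         c.add(std::make_unique<Collection>());   // a Collection can hold Collections
         c.draw();                                // prints "circle"
     }
     ```

     Note how a default-constructed Node allocates nothing recursive (empty array, null reference), which is exactly why cases 3 and 4 are implementable while cases 1 and 2 are not.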
    1 point
  3. Just to be clear, I don't agree with the above programming methodology, although to be honest that's what I used to do when I started using LabVIEW; I consider it a very bad way to program now (as I am sure I have mentioned before). It is an assumption, I guess, that is made when talking about using a QSM etc.
     I may have dialog boxes shown based on the state of possibly multiple classes within an application. I guess in this case, and for a simple example, it would be based on the configuration of the application and an event occurring rather than an enum, e.g. attendedMode = TRUE AND exceptionOccurred = TRUE. But you still need an architecture that sits behind that dialog box -- the engine/process that runs it. And whatever the implementation, I agree its function should be solely to present information to the user and possibly allow for user input, not affect execution logic. Using your presented example: if the system is running in "Normal Mode" and an exception occurs that you want to present to the user, then I imagine an implementation could be as follows:
     1. Pass the Object as an input to the dialog box
     2. Run a method to format the information to a string (possibly do this before passing to the dialog)
     3. Get the Exception string from the Object
     4. Display the string in the dialog box
     5. Allow the user to enter a comment in a "Comments Box"
     6. If the user presses OK, set the comment information on the Object
     7. If the user presses Cancel, set a predefined default comment (to indicate it was cancelled by the user) / leave blank etc.
     8. Pass out the Object that now contains the exception information and now a user comment
     Depending on other state data, the application logic may then decide to:
     1. Discard this data
     2. Log this data to a network share
     3. Log this data locally
     Now, if the application was configured with attendedMode = FALSE, the application would make the decision not to show the dialog box in the first place; it would instead run methods to format the exception data to a string and maybe set a default comment to indicate it was not attended, etc. But the application would still be using the same class methods called by the dialog box -- so that functionality is encapsulated and reusable.
     Therefore, with all functionality encapsulated, it has nothing to do with the dialog or its implementation. It's separated. Put simply, the dialog should call Get methods to access Object data and Set methods for user input, etc. So all the above options have absolutely nothing to do with how the data is displayed to the user. And the architecture of the dialog used (to display the data to the user) has nothing to do with the application making the decision to:
     1. Show a standard dialog box
     2. Show a high-contrast dialog box, OR
     3. Not show a dialog box
     4. Other (expanded for more functionality)
     Now, I am sure you can implement the application logic using advanced design patterns that incorporate LVOOP -- and that is what I am really interested in (and what the community is calling for at present). But I still consider the QSM an important building block of my application. For example, in order to implement the dialog box you get a new requirement (and for want of a better example) that says "the user comment must be at least 20 characters long" (as management think this will make the attending user less likely to just press "a" <enter> and carry on). So, with the dialog open, the user types in a comment that is 10 characters long. Your dialog (e.g. on a Comment Value Change event) would then:
     1. Check the entry
     2. Decide whether it is valid (>= 20 chars)
     3. If valid, enable the OK button
     4. If not valid, display some text (contextual help) on screen saying why it failed
     In order for the dialog box to do either of the above, one way would be for it to queue a message to itself to either enable the OK button or display a message based on the result of the validity check of the comment (a sketch of this self-queuing idea follows below). Therefore, using a QSM would be a great way to do this IMO, and it would be an easy way to sequence these methods/commands/states. And I also see no difference between using a QSM and splitting this out into two loops in the same VI, whereby a Producer (user event loop) sends a message to the Consumer (working loop). Whilst that may have the advantage of a simpler design, it is more work to implement and it has limitations in sharing statefulness between the two loops. The JKI QSM implementation can be considered a Producer/Consumer combined into a single loop. Additionally, as this UI functionality is encapsulated within the QSM, it can be broken down further if required based on changing requirements, etc. It can also be reused throughout an application (depending on whether it runs in parallel, but considering a standard dialog), just like any other subVI/method, etc.
     So maybe, in order to implement the above:
     1. I start by creating the dialog using a QSM, without the need for any subVIs, typedefs or additional classes, as it is so simple and quick. The UI works and I don't need to spend much time on it.
     2. Changes occur to requirements and I refactor to make it tighter, using subVIs for reuse, typedefs, etc.
     3. More expanding features mean that I need to start using classes or possibly XControls to encapsulate UI functionality, etc.
     4. A new requirement means I really need to take it up a notch and do something advanced, e.g. a skinnable UI for Standard and High Contrast views that would need to share an engine/process for functionality reuse, etc.
     Now, this may be implemented a totally different way, but one way could be using multiple QSMs, whereby the original QSM gets stripped of FP objects and becomes the engine/process in a UI class that would now have two UIs (both implemented as QSMs).
     That's the thing I like: the flexibility of it all when I use the JKI QSM. If I had started with option 4 when I really only needed option 1 (and, for example, it turned out the application only ever needed option 1), then IMO I would have wasted a lot of time (the customer's and mine) implementing something I didn't need. And I am not bottlenecked in my approach, as I can refactor the implementation at any time to account for requirement changes.
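     [Editor's note] As a text-language illustration of the self-queuing step described above, here is a minimal C++ sketch of a QSM-style dialog loop: the Comment Value Change handler validates the entry, then queues an EnableOk or ShowHelp message back onto its own queue. The 20-character rule comes from the post; the message names and everything else are made up for illustration and are not the JKI QSM or any LabVIEW API.

     ```cpp
     #include <iostream>
     #include <queue>
     #include <string>
     #include <utility>

     // Messages the dialog's queued state machine can process.
     enum class Msg { CommentChanged, EnableOk, ShowHelp, Exit };

     struct Dialog {
         std::queue<std::pair<Msg, std::string>> q;  // the QSM's own queue
         bool okEnabled = false;

         // Enqueue a message; handlers use this to message the dialog itself.
         void post(Msg m, std::string data = "") { q.push({m, std::move(data)}); }

         void run() {
             while (!q.empty()) {
                 auto [msg, data] = q.front();
                 q.pop();
                 switch (msg) {
                 case Msg::CommentChanged:
                     // Validate, then queue the follow-up state to ourselves.
                     post(data.size() >= 20 ? Msg::EnableOk : Msg::ShowHelp);
                     break;
                 case Msg::EnableOk:
                     okEnabled = true;
                     std::cout << "OK button enabled\n";
                     break;
                 case Msg::ShowHelp:
                     okEnabled = false;
                     std::cout << "Comment must be at least 20 characters\n";
                     break;
                 case Msg::Exit:
                     return;
                 }
             }
         }
     };

     int main() {
         Dialog d;
         d.post(Msg::CommentChanged, "too short");                      // -> help text
         d.post(Msg::CommentChanged, "a comment long enough to pass");  // -> enable OK
         d.run();
     }
     ```

     The single loop plays both producer and consumer, which mirrors the point above about the JKI QSM: the event handler and the worker share state (okEnabled) for free because they live in the same structure.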
    1 point