
Hello,

Bit of an odd one, and I am probably doing something fundamentally wrong; however, I am fairly new to OOP.  Essentially I have a hierarchy of 3 classes, say A, B and C.  I have a dynamic dispatch VI called "ModifyUI.vi" in each class, and I am calling this VI and its parents in parallel (probably not a great idea, but it is my end goal).  After each "ModifyUI.vi" has completed doing what it needs to, I would then like to collect each VI's data, as they were all run in parallel, back into the initial class that called them all.

I have attached an overview of what I am trying to achieve in a mockup of my problem.  I am willing to change tack as long as the same goal is reached (modify all class data in parallel).

Thanks in advance

Craig

Lava Temp.zip

Overview.png


Why are you looking to merge data? What is the intention? (I'm thinking something like configuration?)

Ignoring the merge data requirement for the moment... Not sure why you have a parent, child and grandchild. Looking at this, I see a parent and three child objects with the methods:

  1. Initialize: Passes in the communication user event and sets up anything needed in the specific child
  2. Start: Accepts a subpanel reference, inserts the ModifyUI.vi and starts the ModifyUI.vi running
  3. Stop: Stops the ModifyUI.vi
  4. Cleanup: Any tasks needed to clean up the specific child

 


Hi Tim,

Thanks for taking the time to have a quick look at this with me.

It is essentially configuration of a plugin-type structure with a class hierarchy A->B->C.  Each plugin's "classC:ModifyUI.vi" modifies its own internal data.  It also calls, via Call Parent Method, "classB:ModifyUI.vi", which takes care of handling metadata, which in turn calls "classA:ModifyUI.vi", which takes care of globally configured data.  I am able to load every "ModifyUI.vi" in the hierarchy in parallel such that the user essentially just sees a configuration page.  All good so far.

My initial reasoning for having the hierarchy in this manner is that the child object is held in a list of other child objects, or "plugin Tests".  The test is merely concerned with its data for running a test or taking a measurement etc.  Its parent method deals with metadata associated with that test: "limits", "Requirements" and other references associated with that test instance.  The top level in the hierarchy deals with other flags such as "on fail options", "abort", "repeat conditions", "test type" etc.

Merging the data when there were only two classes was fine, as I only needed one VI in the child class, which was static dispatch.  However, merging the data between N classes throughout the hierarchy requires dynamic dispatch, and this is where this structure is currently falling over.

If I were to set up the hierarchy as you suggest, would I not break the ability to run the "plugin Test" and be able to call upon its parent data for handling the test instance?  Apologies, I should have explained this in my initial post, as it is pretty much a big part of what I am trying to achieve.  I am in this twilight zone of trying to keep my classes by value.  I understand I could send the modified data via queues to a collection VI, say with a rendezvous.  However, I think I am missing something in OOP which is blinding me to the correct way of achieving what I am trying to implement.

Craig


Instead of splitting and merging actual object data, split and share a DVR of the object to the UI and have both the UI and the caller utilise the DVR instead of the bare object (Yes, IPE everywhere can be annoying).  That way you can simply discard all but one (it's only a reference, disposing it is only getting rid of a pointer) and continue with a single DVR (using a Destroy DVR primitive to get the object back) after the UI operation is finished.
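LabVIEW code is graphical, but the shared-reference idea shoneill describes can be sketched in text.  A rough Python analogue (all names invented for illustration): a lock-guarded shared object stands in for the DVR plus In Place Element structure, each parallel "ModifyUI" writes into the same instance, and no merge step is ever needed.

```python
import threading

# Hypothetical stand-in for a LabVIEW class whose data the UI modifies.
class Config:
    def __init__(self):
        self.fields = {}

# The shared instance plus lock plays the role of a DVR: every access
# happens under the lock, so updates are properly serialised.
config = Config()
lock = threading.Lock()

def modify_ui(level, value):
    # Each parallel "ModifyUI" writes into the one shared object.
    with lock:
        config.fields[level] = value

threads = [threading.Thread(target=modify_ui, args=(lvl, lvl.upper()))
           for lvl in ("classA", "classB", "classC")]
for t in threads:
    t.start()
for t in threads:
    t.join()

# No merge step: all writers shared one reference, so the data is
# already collected in a single place.
print(config.fields)
```

The trade-off, as noted above, is that every access must go through the lock (the IPE in LabVIEW terms), which serialises operations that were previously free to run in parallel.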


Hi Shoneill,

Thanks for the input.  I was thinking about DVRs but initially thought that I should be able to do this with a traditional "split wire and merge" approach.  If you are saying that DVRs are the way forward then I am guessing I have hit an actual hurdle, rather than missing something easy, which is what I was fearing.  I have not played with DVRs much, which is another reason for my initial hesitation, although this seems to be the perfect scenario to give them a go.  I will refactor the mockup I have made and keep you both updated.

Craig


For simple atomic accessor access, splitting up actual objects and merging MAY work, but once objects start doing consistency checks (perhaps changing Parameter X in Child due to the setting of Parameter Y in Parent) then you can end up with unclear inter-dependencies between your actual objects.  When merging, the serialisation of setting the parameters may lead to inconsistent results, as the order of operations is no longer known.  When working with a DVR, you will always be operating on the same data and the operations are properly serialised.  Of course it's of benefit to have some way of at least letting the UI know that the data in the DVR has changed in order to update the UI appropriately... but that's a different topic (almost).


After looking at your code, I can see what it is you are trying to do.  It looks like an attempt at the old "Magic Framework of Massive Reusability".  It's nice to think you'll have to build this only once and then reuse it everywhere.  You'll have these convenient VIs that automatically launch at all three class levels and allow updating the data at their level for all the objects in your array.  The question new LVOOP or even just new LabVIEW programmers usually ask is something close to: "Why do I seem to keep building the same program over and over again?" and/or "Isn't there some way I can build some magic modular framework once and use it everywhere?"  Many have tried, including the Actor Framework.  Some of the newer ones work well, but all are messy at the bottom, in my opinion.  Even if you were to get this hierarchy to work, how convenient would it be to use, really?  Everywhere it goes, it will need three subpanels.  My advice would be to forget the "ModifyUI" method for your classes and take a traditional approach of putting the controls for modifying your class data all at the same level on your main front panel.  You'll suffer some aggravation at having to build that top-level VI over and over again, but your programming style will be easy to understand and debug.  Keeping it simple is usually the best approach.

UI_snippet.png


Hi All,

The DVR Method seems to be working well, although it has raised a couple of questions on best practice.

Currently I take a reference of the object in the parent class and cast this in the children, using this thread for guidance: https://forums.ni.com/t5/LabVIEW/Combining-LVOOP-DVR-with-Asynchronous-Dynamic-Dispatch-and/td-p/2254600. I am thinking this is the correct way to go about this.

Also, I am using the Preserve Run-Time Class primitive in the "Test plugin" class to retrieve my object and maintain the run-time class on my DD wire.  It is my first time using this primitive and I just wanted to double-check I am not misusing it.

The attached mockup works for demonstration but has an issue with the subpanels not loading.  I am not concerned with this, as my main application does this in a different manner anyway (it was just the quick and dirty way to get this example running).

Thanks for all the help

Craig

Lava_Temp.png

Lava Temp.zip


Hi Smarlow,

Thanks for the comments.  I am also not a fan of the one-framework-fits-all approach, and in my particular instance the three panels are pretty static and won't be loaded in different places all over my code.  However, using the techniques above I have ended up with quite a dynamic, flexible configuration screen.  I often take the approach that the most direct effort (while still using patterns) is often the best, and avoid using bigger frameworks.  I also have experience in automation machines, which don't tend to like being shoe-horned into a large fits-all framework.

This is more of an exercise to learn some OOP and, as it happens, some DVR stuff also.  I am actually using a "framework" that I have already written and used successfully, but it was a bit clunky at storing configuration data, so I was looking to see if I could simplify this process both on the configuration screen and the "Run Sequence" screen.  I think now that I have these objects stored and their data readily at hand, it will help in the creation and execution of new tests.

I don't know; all this is subjective, and I am all ears.

A_Panel.png

Newer version of the configuration screen, using the techniques above.

old_Panel.png

Old version of the configuration screen, with data stored in the tree control.

Edited by CraigC
Annotated Images

Yes, I can see you've done quite a nice job with the front end, and it looks very flexible.  After I posted, I had some additional thoughts about what you were trying to do.  It was just hard to see where you were going from your original post.  Actually, your final solution looks good.  

8 hours ago, CraigC said:

My initial reasoning for having the hierarchy in this manner is that the child object is held in a list of other child objects, or "plugin Tests".  The test is merely concerned with its data for running a test or taking a measurement etc.  Its parent method deals with metadata associated with that test: "limits", "Requirements" and other references associated with that test instance.  The top level in the hierarchy deals with other flags such as "on fail options", "abort", "repeat conditions", "test type" etc.

Since I didn't see it mentioned elsewhere, it sounds like you might be in need of https://en.wikipedia.org/wiki/Composition_over_inheritance

A measurement isn't a type of limit check, and a limit check isn't a type of execution behavior. It may be that you want to split this up into executionstep.lvclass which contains an instance of thingtoexecute.lvclass (your measurement) and an instance of analysistoperform.lvclass (your limit checks), or something along those lines.
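Since LabVIEW classes are graphical, here is a rough Python sketch of the composition smithd proposes (the class names mirror his hypothetical `.lvclass` names; the limit values are invented): the execution step *has* a measurement and an analysis rather than inheriting from either, so each piece can vary independently.

```python
# Composition over inheritance: an ExecutionStep owns its parts.
class Measurement:                   # thingtoexecute.lvclass analogue
    def run(self):
        return 3.3                   # e.g. a voltage reading

class LimitCheck:                    # analysistoperform.lvclass analogue
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi
    def check(self, value):
        return self.lo <= value <= self.hi

class ExecutionStep:                 # executionstep.lvclass analogue
    def __init__(self, measurement, analysis):
        self.measurement = measurement   # composed, not inherited
        self.analysis = analysis
    def execute(self):
        value = self.measurement.run()
        return value, self.analysis.check(value)

step = ExecutionStep(Measurement(), LimitCheck(3.0, 3.6))
print(step.execute())  # (3.3, True)
```

Swapping in a different measurement or a different analysis never touches the other class, which is exactly what the inheritance chain A->B->C made awkward.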


Hi smithd,

Thanks for the advice.  I have had a quick look at composition vs inheritance and it does indeed look like this is more suited to my needs.  My initial reasoning for inheritance was that during the execution of a test plugin its parent data should be readily available and bound as such to the test subject (to test limits etc.).  However, it looks like the same objective is achievable if I use composition, so I will give it a go and see what works best.  Better to try new things now at the early stage of learning!

Thanks

Craig

  • 1 month later...
On 4/21/2017 at 2:59 AM, shoneill said:

Instead of splitting and merging actual object data, split and share a DVR of the object to the UI and have both the UI and the caller utilise the DVR instead of the bare object (Yes, IPE everywhere can be annoying).  That way you can simply discard all but one (it's only a reference, disposing it is only getting rid of a pointer) and continue with a single DVR (using a Destroy DVR primitive to get the object back) after the UI operation is finished.

Why would you bring DVRs into this problem? (That question goes to shoneill... I'm surprised that's the tool he would reach for in this case. Yes, it can work, but it is an odd choice since it forces the serialization of the parallel operations.)

Consider what you would do if you had a cluster and you wanted to update two of its fields in parallel. You'd unbundle both, route them in parallel, then bundle them back together on the far side.  Do the same with your class.

If the operations are truly parallel, then the information inside the class can be split into two independent classes nested inside your main class. Write a data accessor VI to read both nested objects out, call their methods, then write them back in. Or use the Inplace Element Structure to read/write the fields if you're inside the class' scope.

If there are additional fields that are needed by both forks, unbundle those and route them to both sides.
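The unbundle/parallel-update/bundle pattern AQ describes can be sketched in Python (all class and field names are invented for illustration; Python passes objects by reference, so this only demonstrates the *partitioning* idea, not LabVIEW's by-value wire semantics): the class data is split into two independent nested objects, each fork touches only its own part, and nothing needs merging afterwards.

```python
import threading
from dataclasses import dataclass, field

# Two independent nested "objects" inside the main class, so parallel
# forks never touch the same data.
@dataclass
class MetaData:
    limits: list = field(default_factory=list)

@dataclass
class TestData:
    readings: list = field(default_factory=list)

@dataclass
class TestPlugin:
    meta: MetaData = field(default_factory=MetaData)
    data: TestData = field(default_factory=TestData)

plugin = TestPlugin()

# "Unbundle" the two independent fields and update them in parallel.
def edit_meta(meta):
    meta.limits.append((0.0, 5.0))

def edit_data(data):
    data.readings.append(4.2)

forks = [threading.Thread(target=edit_meta, args=(plugin.meta,)),
         threading.Thread(target=edit_data, args=(plugin.data,))]
for f in forks:
    f.start()
for f in forks:
    f.join()

# "Bundle" back together: plugin already holds both updated fields, and
# since no field was touched by more than one fork, there is no merge.
```

The key property is that the partition is disjoint: because each fork owns a distinct field, the result is deterministic without any locking or reference sharing.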

Untitled.png

Edited by Aristos Queue

How the DVR is structured, and whether the DVR is encapsulated or not, is a design choice based on the requirements (one of which could be the parallel operation AQ points out).

The DVR is simply a method to remove the awkward requirement of "branch and merge" mentioned in the OP.  I've done some similar UI-Model things in the past and I've found using by-ref objects simply much more elegant than by-val objects.  DVRs are the normal way to get this.  Whether we use a DVR of the entire class or the class holds DVRs of its contents is irrelevant to the point I was trying to make: instead of branching, modifying and merging, just make sure all instances are operating in the same shared space.

12 hours ago, shoneill said:

How the DVR is structured, and whether the DVR is encapsulated or not, is a design choice based on the requirements (one of which could be the parallel operation AQ points out).

The DVR is simply a method to remove the awkward requirement of "branch and merge" mentioned in the OP.  I've done some similar UI-Model things in the past and I've found using by-ref objects simply much more elegant than by-val objects.  DVRs are the normal way to get this.  Whether we use a DVR of the entire class or the class holds DVRs of its contents is irrelevant to the point I was trying to make: instead of branching, modifying and merging, just make sure all instances are operating in the same shared space.

I see. While I concede the method works, I'm willing to wager that splitting the necessary parts of the class in parallel and then bringing them back together -- not forking the class and then merging!* -- will have better performance and be less error prone for the *vast* majority of applications. I'd definitely keep shoneill's method in my back pocket as an option, but it wouldn't be the first tool I'd reach for.

 

* I think we'll all agree that forking the class and then merging is messy and error prone. I've never seen that be a good strategy, as the original question at the top of this thread discovered.

7 hours ago, MikaelH said:

By using reference objects you don't need to merge the data. It will solve lots of your problems, and maybe create some new ones if you don't know what you are doing.

 

This is true.  One would think that with proficiency, this problem-trading (old ones replaced with new ones) would shift in our favour.  My experience is that the number of problems stays approximately constant but the newer ones become more and more obscure and hard to find.

This is a bit of a pessimist (realist?) view I will admit.  Truth is that we just keep looking until we find problems.  And if we can't find any problems, then we convince ourselves we've missed something. :wacko:


I think going with by-reference objects shouldn't be considered difficult or hard. Most other languages use references, so why shouldn't a LabVIEW developer succeed at that task?

We have around 20 LabVIEW developers at the office here, all using by-reference objects without any problems. Half of them are just users of the by-reference instrument driver layer (200+ drivers). And since all the NI hardware drivers (DAQmx, Vision, VISA, File I/O) are already by-reference, it makes sense to them.

So if you can handle the Queue VIs, you can handle a by-reference LabVIEW class.

One of the current applications I'm working on has 36 motor driver objects of different types, more than 50 digital input/output objects, vision cameras, and of course a bunch of standard instruments, and they are all by-reference objects.

In my case I wouldn't dare to go with by-value objects here ;-)

Edited by MikaelH
On 6/17/2017 at 3:11 AM, Aristos Queue said:

References: the last weapon in the arsenal of the G master but the first refuge of the novice.

There are two broad categories of objects/classes:

  • Values (e.g. numbers, strings, URLs, timestamps, images*, tuples/collections of these)
  • Identities/entities (e.g. file, physical I/O channel, physical device, database, GUI control/indicator/window, state machine)

Conceptually, values can be copied but identities cannot. (To illustrate what I mean: Branching a string wire can create a new copy of the string, but branching a DAQmx Channel wire cannot create a new channel)

I would use references as a last resort for "value objects", but as a near-first resort for "identity objects".

 

*IMAQ images are by-reference, but they didn't strictly have to be.

On 6/17/2017 at 3:11 AM, Aristos Queue said:

by rights, his reference-heavy G code should be both more buggy and less performant than I know it to be.

Genuinely curious: Why do you expect reference-heavy G code to be less performant?

Edited by JKSH

Hello,

Time for a little update maybe.

On 21/04/2017 at 5:23 PM, smithd said:

Since I didn't see it mentioned elsewhere, it sounds like you might be in need of https://en.wikipedia.org/wiki/Composition_over_inheritance

This little comment helped me realise the solution for what I was attempting to do.  By using composition, the problem of merging classes back into a hierarchy was resolved.  I did end up using DVRs, as again the need to merge was removed.  I could have just used clusters to split and merge from the outset, but I have learned a little OOP along the way, so all is good.  Also, using DVRs made saving and restoring data into named attributes fairly simple (using the unique tree tag as a name).

On 21/04/2017 at 5:23 PM, smithd said:

A measurement isn't a type of limit check, and a limit check isn't a type of execution behavior.

I still feel a little uncomfortable about putting things into a hierarchy, and although smithd is technically correct... measurements can all have limits, and every test has execution behaviour.  Having said this, composition is fine and there is no real need for a hierarchy here.

 

On 14/06/2017 at 8:29 PM, Stagg54 said:

Looks like you are doing sequencing.  Instead of reinventing the wheel, perhaps consider using TestStand...

Hi Stagg54,

Yes, we use TestStand on other projects.  The main focus of this sequencer was to allow hardware engineers who are not familiar with TestStand or LabVIEW the ability to make sequences quickly and easily from within the same application.  I am not trying to build the monolithic one-size-fits-all application.  This sequencer does not cover a lot of the features available within TestStand and is much more limited in its capabilities.  It is designed to run on specific hardware (at the moment), with the philosophy of being easily modified to target other pieces of test equipment as the need arises.

 

Thanks for all the input; I have much more of an insight into OOPy stuff now.  I think my initial downfall was trying to be too strict on the design by using only by-value and forcing the design into a hierarchy, when the final solution didn't quite fit either of those methodologies.

Craig

Also of note, not being able to hide columns or set their width to 0 in tree controls is most frustrating!!

Edited by CraigC
  • 1 month later...
On 6/19/2017 at 11:27 PM, JKSH said:

Genuinely curious: Why do you expect reference-heavy G code to be less performant?

A) Compiler cannot apply most optimizations around references. To name a few examples: There's no code movement to unroll loops. Constant folding cannot happen. Look-ahead is impossible. Many more.

B) Lots of overhead lost to thread safety (acquiring and releasing mutexes) to prevent parallel code from stepping on itself.

C) Time spent on error checking -- verifying refnums are valid is a huge time sink in many ref-heavy programs, despite LV making those checks as lightweight as possible.

D) Unnecessary serialization of operations by programmers. Even programmers consciously watching themselves sometimes forget or get lazy and serialize error wires rather than allowing operations to happen in parallel and then using Merge Error afterward.

E) A constant fight with dataflow. Making one piece of data by-ref often means other pieces become by-ref in order that the programming models remain consistent, so things that could be by-value are changed to match the dominant programming paradigm. Sometimes this is because programmers only want to maintain one model, but often it is because they need data consistency and cannot get it with a mixed model.

----------------------------------------

You mentioned "identity objects" as a category of objects. Although you are correct, that is a good division between objects, the identity objects category is one that I see more and more as an anti-pattern in dataflow. Doing synchronous method calls on a by-ref object comes with so many downsides. Functional programming languages don't allow that sort of programming at all, even when they have a class inheritance type system... it's harder and harder for me to see any reason to have it in LabVIEW. We won't take it out, for the same reason that we don't take out global variables -- it's conceptually easy and a lot of people think they need it. But I no longer believe it is necessary to have at all and, indeed, I believe that programs are better when they do not ever use by-reference data. Communications with processes should be through command queues, preferably asynchronous, and operations on data should be side-effect-free by-value operations.
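The command-queue style AQ advocates can be sketched in Python (a minimal, hypothetical example; the command names and state are invented): a process owns its state, callers send asynchronous commands over a queue, and every operation is a side-effect-free function from old state to new state.

```python
import threading
from queue import Queue

# Pure transition function: returns a new state, never mutates a reference.
def handle(state, command):
    kind, payload = command
    if kind == "add":
        return state + [payload]
    if kind == "clear":
        return []
    return state

# The process is the sole owner of its state; nobody else can touch it.
def process(commands: Queue, results: Queue):
    state = []
    while True:
        command = commands.get()
        if command is None:          # shutdown sentinel
            results.put(state)
            return
        state = handle(state, command)

commands, results = Queue(), Queue()
worker = threading.Thread(target=process, args=(commands, results))
worker.start()

# Callers communicate asynchronously; no shared reference, no locks.
for cmd in [("add", 1), ("add", 2), ("clear", None), ("add", 3)]:
    commands.put(cmd)
commands.put(None)
worker.join()

final = results.get()
print(final)  # [3]
```

Because `handle` is pure and only the worker ever holds the state, there is nothing to protect with a mutex and no reference whose validity must be checked on every call.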

On 6/19/2017 at 5:12 PM, MikaelH said:

I think going with by-reference objects shouldn't be considered difficult or hard. Most other languages use references, so why shouldn't a LabVIEW developer succeed at that task?

Most languages (Haskell, Racket, Erlang, Lisp) that are successful in taking full advantage of multicore parallel programming do not allow references. References are a holdover from serial processing days.

Computer science has had a better way to program since the 1940s, but von Neumann saddled us with the IBM hardware architecture, and the direct addressing of assembly, and then C and its successors, won out on performance at a time when such things were critical. Hardware has moved beyond those limitations. It's time for software to grow up.

"Procedural programming is a monumental waste of human energy." -- Dr. Rex Page, Univ. of Oklahoma, 1985 and many times since then.
(And in this context, "procedural" is "method and reference-to-objects oriented".)

Edited by Aristos Queue
2 hours ago, Aristos Queue said:

"Procedural programming is a monumental waste of human energy." -- Dr. Rex Page, Univ. of Oklahoma, 1985 and many times since then.
(And in this context, "procedural" is "method and reference-to-objects oriented".)

It's a good job that LabVIEW is a dataflow programming language then! :P

Edited by ShaunR