
Recommended Posts

Hello,

Bit of an odd one, and I am probably doing something fundamentally wrong, as I am fairly new to OOP. Essentially I have a hierarchy of 3 classes, say A, B and C. I have a dynamic dispatch VI called "ModifyUI.vi" in each class, and I am calling this VI and its parents' versions in parallel (probably not a great idea, but it is my end goal). After each "ModifyUI.vi" has completed doing what it needs to, I would then like to collect each VI's data (as they were all run in parallel) back into the initial class that called them all.

I have attached an overview of what I am trying to achieve in a mockup of my problem. I am willing to change tack as long as the same goal is reached (modify all class data in parallel).
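Roughly, in text form, what I have is something like this (a Python analogy of the LabVIEW structure; the class and field names are just made up for illustration):

```python
import copy
import threading

class A:                      # top level: globally configured data
    def __init__(self):
        self.global_cfg = {}
    def modify_ui(self):      # edits only the fields owned by A
        self.global_cfg["on_fail"] = "abort"

class B(A):                   # middle level: metadata
    def __init__(self):
        super().__init__()
        self.metadata = {}
    def modify_ui(self):
        self.metadata["limits"] = (0, 10)

class C(B):                   # the plugin test: its own measurement data
    def __init__(self):
        super().__init__()
        self.test_data = {}
    def modify_ui(self):
        self.test_data["channel"] = "AI0"

obj = C()

# Branch the "wire": each level of the hierarchy edits its own copy in parallel...
copies = [copy.deepcopy(obj) for _ in range(3)]
editors = (A.modify_ui, B.modify_ui, C.modify_ui)
threads = [threading.Thread(target=fn, args=(cp,))
           for fn, cp in zip(editors, copies)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# ...and this is the part I am stuck on: collecting the three edited copies
# back into the one object that launched them (the "merge" in the overview).
merged = C()
merged.global_cfg = copies[0].global_cfg
merged.metadata = copies[1].metadata
merged.test_data = copies[2].test_data
```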

Thanks in advance

Craig

Lava Temp.zip

Overview.png

Link to post

Why are you looking to merge data? What is the intention? (I'm thinking something like configuration?)

Ignoring the merge data requirement for the moment... Not sure why you have a parent, child and grandchild. Looking at this, I see a parent and three child objects with the methods:

  1. Initialize: Passes in the communication user event and sets up anything needed in the specific child
  2. Start: Accepts a subpanel reference, inserts the ModifyUI.vi and starts the ModifyUI.vi running
  3. Stop: Stops the ModifyUI.vi
  4. Cleanup: Any tasks needed to clean up the specific child

 

Link to post

Hi Tim,

Thanks for taking the time to have a quick look at this with me.

It is essentially configuration of a plugin-type structure with a class hierarchy A->B->C. Each plugin's "classC:ModifyUI.vi" modifies its own internal data. It also calls, via Call Parent Method, "classB:ModifyUI.vi", which takes care of handling metadata, which in turn calls "classA:ModifyUI.vi", which takes care of globally configured data. I am able to load every "ModifyUI.vi" in the hierarchy in parallel, such that the user essentially just sees a configuration page. All good so far.

My initial reasoning for having the hierarchy in this manner is that the child object is held in a list of other child objects, or "plugin Tests". The test is merely concerned with its data for running a test or taking a measurement etc. Its parent method deals with the metadata associated with that test: "limits", "requirements" and other references associated with that test instance. The top level in the hierarchy deals with other flags such as "on fail options", "abort", "repeat conditions", "test type" etc.

Merging the data when there were only two classes was fine, as I only needed one VI in the child class, which was static dispatch. However, merging the data between N classes throughout the hierarchy requires dynamic dispatch, and this is where the structure is currently falling over.

If I were to set up the hierarchy as you suggest, would I not break the ability to run the "plugin Test" and call upon its parent data for handling the test instance? Apologies, I should have explained this in my initial post, as it is pretty much a big part of what I am trying to achieve. I am in this twilight zone of trying to keep my classes by value. I understand I could send the modified data via queues to a collection VI, say with a rendezvous. However, I think I am missing something in OOP which is blinding me to the correct way of achieving what I am trying to implement.

Craig

Link to post

Instead of splitting and merging actual object data, split and share a DVR of the object to the UI and have both the UI and the caller utilise the DVR instead of the bare object (Yes, IPE everywhere can be annoying).  That way you can simply discard all but one (it's only a reference, disposing it is only getting rid of a pointer) and continue with a single DVR (using a Destroy DVR primitive to get the object back) after the UI operation is finished.
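Conceptually, the DVR approach amounts to all parties holding one shared reference and taking a lock while they modify the data, so there is nothing left to merge. A rough Python analogy (names made up):

```python
import threading

class DVR:
    """Rough analogue of a LabVIEW DVR: one shared copy of the data plus a lock."""
    def __init__(self, obj):
        self._obj = obj
        self._lock = threading.Lock()
    def in_place(self, fn):
        # analogue of an In Place Element structure: exclusive access while editing
        with self._lock:
            fn(self._obj)
    def destroy(self):
        # hand the data back by value once the UI work is done
        return self._obj

config = {"global": {}, "metadata": {}, "test": {}}   # stand-in for the class data
dvr = DVR(config)

def edit_global(cfg):
    cfg["global"]["on_fail"] = "abort"

def edit_metadata(cfg):
    cfg["metadata"]["limits"] = (0, 10)

def edit_test(cfg):
    cfg["test"]["channel"] = "AI0"

# Every parallel UI task works on the same shared data via the reference...
threads = [threading.Thread(target=dvr.in_place, args=(fn,))
           for fn in (edit_global, edit_metadata, edit_test)]
for t in threads:
    t.start()
for t in threads:
    t.join()

final = dvr.destroy()   # ...so there is one consistent result and nothing to merge
```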

  • Like 1
Link to post

Hi Shoneill,

Thanks for the input. I was thinking about DVRs but initially thought that I should be able to do this with a traditional "split wire and merge" approach. If you are saying that DVRs are the way forward, then I am guessing I have hit an actual hurdle rather than missed something easy, which is what I was fearing. I have not played with DVRs much, which is another reason for my initial hesitation, although this seems to be the perfect scenario to give them a go. I will refactor the mockup I have made and keep you both updated.

Craig

Link to post

For simple atomic accessor access, splitting up actual objects and merging MAY work, but once objects start doing consistency checks (perhaps changing parameter X in the child due to the setting of parameter Y in the parent), you can end up with unclear inter-dependencies between your actual objects. When merging, the serialisation of setting the parameters may lead to inconsistent results, as the order of operations is no longer known. When working with a DVR, you will always be operating on the same data and the operations are properly serialised. Of course, it's of benefit to have some way of at least letting the UI know that the data in the DVR has changed in order to update the UI appropriately... but that's a different topic (almost).

Link to post

After looking at your code, I can see what it is you are trying to do. It looks like an attempt at the old "Magic Framework of Massive Reusability". It's nice to think you'll have to build this only once and then reuse it everywhere. You'll have these convenient VIs that automatically launch at all three class levels and allow updating the data at their level for all the objects in your array. The question new LVOOP or even just new LabVIEW programmers usually ask is something close to: "Why do I seem to keep building the same program over and over again?" and/or "Isn't there some way I can build some magic modular framework once and use it everywhere?" Many have tried, including the Actor Framework. Some of the newer ones work well, but all are messy at the bottom, in my opinion. Even if you were to get this hierarchy to work, how convenient would it be to use, really? Everywhere it goes, it will need three subpanels. My advice would be to forget the "ModifyUI" method for your classes and take a traditional approach of putting the controls for modifying your class data all at the same level on your main front panel. You'll suffer some aggravation at having to build that top-level VI over and over again, but your programming style will be easy to understand and debug. Keeping it simple is usually the best approach.

UI_snippet.png

  • Like 1
Link to post

Hi All,

The DVR Method seems to be working well, although it has raised a couple of questions on best practice.

Currently I take a reference of the object in the parent class and cast this in the children, using this thread for guidance: https://forums.ni.com/t5/LabVIEW/Combining-LVOOP-DVR-with-Asynchronous-Dynamic-Dispatch-and/td-p/2254600. I am thinking this is the correct way to go about it.

Also, I am using the Preserve Run-Time Class primitive in the "Test plugin" class to retrieve my object and maintain the run-time class on my DD wire. This is my first time using this primitive, and I just wanted to double-check I am not misusing it.

The attached mockup works for demonstration but has an issue with the subpanels not loading. I am not concerned with this, as my main application handles this in a different manner anyway (it was just a quick and dirty way to get this example running).

Thanks for all the help

Craig

Lava_Temp.png

Lava Temp.zip

Link to post

Hi Smarlow,

Thanks for the comments. I am also not a fan of the one-framework-fits-all approach, and in my particular instance the three panels are pretty static and won't be loaded in different places all over my code. However, using the techniques above I have ended up with quite a dynamic, flexible configuration screen. I often take the approach that the most direct effort (while still using patterns) is often the best, and I avoid using bigger frameworks. I also have experience with automation machines, which don't tend to like being shoe-horned into a large fits-all framework.

This is more of an exercise to learn some OOP and, as it happens, some DVR stuff as well. I am actually using a "framework" that I have already written and used successfully, but it was a bit clunky at storing configuration data, so I was looking to see if I could simplify the process both on the configuration screen and on the "Run Sequence" screen. I think that now I have these objects stored and their data readily at hand, it will help in the creation and execution of new tests.

I don't know; all this is subjective, and I am all ears.

A_Panel.png

Newer version of the configuration screen, using the techniques described above.

old_Panel.png

Old version of the configuration screen, with data stored in the tree control.

Edited by CraigC
Annotated Images
Link to post

Yes, I can see you've done quite a nice job with the front end, and it looks very flexible.  After I posted, I had some additional thoughts about what you were trying to do.  It was just hard to see where you were going from your original post.  Actually, your final solution looks good.  

Link to post
8 hours ago, CraigC said:

My initial reasoning for having this hierarchy in this manner is that the child object is held in a list of other child objects or "plugin Tests".  The test is merely concerned with its data for running a test or taking a measurement etc.  Its parent method deals with meta data associated with that test, "limits", "Requirements" and other references associated with that test instance.  The top level in the hierarchy deals with other flags such as "on fail options", "abort", "repeat conditions", "test type" etc.

Since I didn't see it mentioned elsewhere, it sounds like you might be in need of https://en.wikipedia.org/wiki/Composition_over_inheritance

A measurement isn't a type of limit check, and a limit check isn't a type of execution behavior. It may be that you want to split this up into executionstep.lvclass which contains an instance of thingtoexecute.lvclass (your measurement) and an instance of analysistoperform.lvclass (your limit checks), or something along those lines.
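A minimal sketch of that split, in Python for brevity (the class names just follow the hypothetical ones above; the real division would depend on your data):

```python
class ThingToExecute:            # the measurement itself
    def run(self):
        return 42.0              # placeholder measurement

class AnalysisToPerform:         # the limit check
    def __init__(self, low, high):
        self.low, self.high = low, high
    def check(self, value):
        return self.low <= value <= self.high

class ExecutionStep:             # owns ("has-a") the other two; no inheritance needed
    def __init__(self, measurement, analysis, on_fail="abort"):
        self.measurement = measurement
        self.analysis = analysis
        self.on_fail = on_fail   # execution behaviour lives here
    def execute(self):
        value = self.measurement.run()
        passed = self.analysis.check(value)
        return value, passed

step = ExecutionStep(ThingToExecute(), AnalysisToPerform(0, 100))
print(step.execute())
```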

Link to post

Hi smithd,

Thanks for the advice. I have had a quick look at composition vs inheritance and it does indeed look like it is more suited to my needs. My initial reasoning for inheritance was that, during the execution of a test plugin, its parent data should be readily available and bound as such to the test subject (to test limits etc.). However, it looks like the same objective is achievable with composition, so I will give it a go and see what works best. Better to try new things now, at the early stage of learning!

Thanks

Craig

Link to post
  • 1 month later...
On 4/21/2017 at 2:59 AM, shoneill said:

Instead of splitting and merging actual object data, split and share a DVR of the object to the UI and have both the UI and the caller utilise the DVR instead of the bare object (Yes, IPE everywhere can be annoying).  That way you can simply discard all but one (it's only a reference, disposing it is only getting rid of a pointer) and continue with a single DVR (using a Destroy DVR primitive to get the object back) after the UI operation is finished.

Why would you bring DVRs into this problem? (That question goes to shoneill... I'm surprised that's the tool he would reach for in this case. Yes, it can work, but it is an odd choice, since it forces the serialization of the parallel operations.)

Consider what you would do if you had a cluster and you wanted to update two of its fields in parallel. You'd unbundle both, route them in parallel, then bundle them back together on the far side.  Do the same with your class.

If the operations are truly parallel, then the information inside the class can be split into two independent classes nested inside your main class. Write a data accessor VI to read both nested objects out, call their methods, then write them back in. Or use the Inplace Element Structure to read/write the fields if you're inside the class' scope.

If there are additional fields that are needed by both forks, unbundle those and route them to both sides.

Untitled.png
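In text form, the same idea looks roughly like this (a Python analogy of the snippet above, with invented field names; the two sub-objects are updated independently in parallel and then written straight back in):

```python
import threading

class MetaData:               # independent sub-object #1 (limits, requirements, ...)
    def __init__(self):
        self.limits = None
    def edit(self):
        self.limits = (0, 10)

class TestData:               # independent sub-object #2 (the measurement itself)
    def __init__(self):
        self.channel = None
    def edit(self):
        self.channel = "AI0"

class PluginTest:
    def __init__(self):
        self._meta = MetaData()
        self._test = TestData()

    def modify_in_parallel(self):
        # "Unbundle" both sub-objects and route them in parallel...
        meta, test = self._meta, self._test
        t1 = threading.Thread(target=meta.edit)
        t2 = threading.Thread(target=test.edit)
        t1.start(); t2.start()
        t1.join(); t2.join()
        # ...then "bundle" them back together on the far side. (In LabVIEW the
        # sub-objects are by value, so this write-back is the essential step.)
        self._meta, self._test = meta, test

p = PluginTest()
p.modify_in_parallel()
```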

Edited by Aristos Queue
Link to post

How the DVR is structured, and whether the DVR is encapsulated or not, is a design choice based on the requirements (one of which could be the parallel operation AQ points out).

The DVR is simply a method to remove the awkward requirement of "branch and merge" mentioned in the OP. I've done some similar UI-model things in the past, and I've found using by-ref objects simply much more elegant than by-val objects. DVRs are the normal way to get this. Whether we use a DVR of the entire class or the class holds DVRs of its contents is irrelevant to the point I was trying to make: instead of branching, modifying and merging, just make sure all instances are operating in the same shared space.

Link to post
12 hours ago, shoneill said:

How the DVR is structured, and whether the DVR is encapsulated or not, is a design choice based on the requirements (one of which could be the parallel operation AQ points out).

The DVR is simply a method to remove the awkward requirement of "branch and merge" mentioned in the OP. I've done some similar UI-model things in the past, and I've found using by-ref objects simply much more elegant than by-val objects. DVRs are the normal way to get this. Whether we use a DVR of the entire class or the class holds DVRs of its contents is irrelevant to the point I was trying to make: instead of branching, modifying and merging, just make sure all instances are operating in the same shared space.

I see. While I concede the method works, I'm willing to wager that splitting the necessary parts of the class in parallel and then bringing them back together -- not forking the class and then merging!* -- will have better performance and be less error prone for the *vast* majority of applications. I'd definitely keep shoneill's method in my back pocket as an option, but it wouldn't be the first tool I'd reach for.

 

* I think we'll all agree that forking the class and then merging is messy and error prone. I've never seen that be a good strategy, as the original question at the top of this thread discovered.

Link to post
7 hours ago, MikaelH said:

By using reference objects you don't need to merge the data. It will solve lots of your problems, and maybe create some new ones if you don't know what you are doing.

 

This is true.  One would think that with proficiency, this problem-trading (old ones replaced with new ones) would shift in our favour.  My experience is that the number of problems stays approximately constant but the newer ones become more and more obscure and hard to find.

This is a bit of a pessimist (realist?) view I will admit.  Truth is that we just keep looking until we find problems.  And if we can't find any problems, then we convince ourselves we've missed something. :wacko:

Link to post

I think going with by-reference objects shouldn't be considered difficult or hard. Most other languages use references, so why shouldn't a LabVIEW developer succeed at that task?

We have around 20 LabVIEW developers at the office here, who are all using by-reference objects without any problems. Half of the guys are just users of the by-reference instrument driver layer (200+ drivers). And since all the NI hardware drivers (DAQmx, Vision, VISA, FileIO) are already by reference, it makes sense to them.

So if you can handle the queue VIs, you can handle a by-reference LabVIEW class.

One of the current applications I'm working on has 36 motor driver objects of different types, more than 50 digital input/output objects, vision cameras and, of course, a bunch of standard instruments, and they are all by-reference objects.

In my case I wouldn't dare to go with by-value objects here ;-)

Edited by MikaelH
Link to post
On 6/17/2017 at 3:11 AM, Aristos Queue said:

References: the last weapon in the arsenal of the G master but the first refuge of the novice.

There are two broad categories of objects/classes:

  • Values (e.g. numbers, strings, URLs, timestamps, images*, tuples/collections of these)
  • Identities/entities (e.g. file, physical I/O channel, physical device, database, GUI control/indicator/window, state machine)

Conceptually, values can be copied but identities cannot. (To illustrate what I mean: Branching a string wire can create a new copy of the string, but branching a DAQmx Channel wire cannot create a new channel)

I would use references as a last resort for "value objects", but as a near-first resort for "identity objects".

 

*IMAQ images are by-reference, but they didn't strictly have to be.
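A tiny sketch of that distinction (Python, with purely illustrative names):

```python
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Limits:                 # a value: copying it is meaningful and harmless
    low: float
    high: float

class DaqChannel:             # an identity: "copying" it cannot create a new channel
    def __init__(self, name):
        self._name = name
        self._handle = object()   # stand-in for a driver/hardware handle

limits_a = Limits(0.0, 10.0)
limits_b = replace(limits_a, high=20.0)   # a new, independent value -- no harm done

chan = DaqChannel("Dev1/ai0")
alias = chan                  # both names refer to the same physical channel
```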

On 6/17/2017 at 3:11 AM, Aristos Queue said:

by rights, his reference-heavy G code should be both more buggy and less performant than I know it to be.

Genuinely curious: Why do you expect reference-heavy G code to be less performant?

Edited by JKSH
  • Like 1
Link to post

Hello,

Time for a little update maybe.

On 21/04/2017 at 5:23 PM, smithd said:

Since I didn't see it mentioned elsewhere, it sounds like you might be in need of https://en.wikipedia.org/wiki/Composition_over_inheritance

This little comment helped me realise the solution for what I was attempting to do. By using composition, the problem of merging classes back into a hierarchy went away. I did end up using DVRs, as those also removed the need to merge. I could have just used clusters to split and merge from the outset, but I have learned a little OOP along the way, so all is good. Also, using DVRs made saving and restoring data into named attributes fairly simple (using the unique tree tag as a name).

On 21/04/2017 at 5:23 PM, smithd said:

A measurement isn't a type of limit check, and a limit check isn't a type of execution behavior.

I still feel a little uncomfortable about putting things into a hierarchy, and although smithd is technically correct... measurements can all have limits, and each test has execution behaviour. Having said this, composition is fine and there is no real need for a hierarchy here.

 

On 14/06/2017 at 8:29 PM, Stagg54 said:

Looks like you are doing sequencing. Instead of reinventing the wheel, perhaps consider using TestStand...

Hi Stagg54,

Yes, we use TestStand on other projects. The main focus of this sequencer is to allow hardware engineers who are not familiar with TestStand or LabVIEW to make sequences quickly and easily from within the same application. I am not trying to build a monolithic one-size-fits-all application. This sequencer does not cover a lot of the features available within TestStand and is much more limited in its capabilities. It is designed to run on specific hardware (at the moment), with the philosophy of being easily modified to target other pieces of test equipment as the need arises.

 

Thanks for all the input; I have much more of an insight into OOPy stuff now. I think my initial downfall was trying to be too strict on the design, by using only by-value objects and forcing the design into a hierarchy, when the final solution didn't quite fit either of those methodologies.

Craig

Also of note: not being able to hide columns in tree controls, or set their width to 0, is most frustrating!!

Edited by CraigC
Link to post
  • 1 month later...
On 6/19/2017 at 11:27 PM, JKSH said:

Genuinely curious: Why do you expect reference-heavy G code to be less performant?

A) Compiler cannot apply most optimizations around references. To name a few examples: There's no code movement to unroll loops. Constant folding cannot happen. Look-ahead is impossible. Many more.

B) Lots of overhead lost to thread safety (acquiring and releasing mutexes) to prevent parallel code from stepping on itself.

C) Time spent on error checking -- verifying refnums are valid is a huge time sink in many ref-heavy programs, despite LV making those checks as lightweight as possible.

D) Unnecessary serialization of operations by programmers. Even programmers consciously watching themselves sometimes forget or get lazy and serialize error wires rather than allowing operations to happen in parallel and then using Merge Error afterward.

E) A constant fight with dataflow. Making one piece of data by-ref often means other pieces become by-ref in order that the programming models remain consistent, so things that could be by-value are changed to match the dominant programming paradigm. Sometimes this is because programmers only want to maintain one model, but often it is because they need data consistency and cannot get it with a mixed model.

----------------------------------------

You mentioned "identity objects" as a category of objects. Although you are correct, that is a good division between objects, the identity objects category is one that I see more and more as an anti-pattern in dataflow. Doing synchronous method calls on a by-ref object comes with so many downsides. Functional programming languages don't allow that sort of programming at all, even when they have a class inheritance type system... it's harder and harder for me to see any reason to have it in LabVIEW. We won't take it out, for the same reason that we don't take out global variables -- it's conceptually easy and a lot of people think they need it. But I no longer believe it is necessary to have at all and, indeed, I believe that programs are better when they do not ever use by-reference data. Communications with processes should be through command queues, preferably asynchronous, and operations on data should be side-effect-free by-value operations.

Link to post
On 6/19/2017 at 5:12 PM, MikaelH said:

I think going with by-reference objects shouldn't be considered difficult or hard. Most other languages use references, so why shouldn't a LabVIEW developer succeed at that task?

Most languages (Haskell, Racket, Erlang, Lisp) that are successful in taking full advantage of multicore parallel programming do not allow references. References are a holdover from serial processing days.

Computer science has had a better way to program since the 1940s, but von Neumann saddled us with the IBM hardware architecture, and the direct addressing of assembly, and then C and its successors, won out on performance at a time when such things were critical. Hardware has moved beyond those limitations. It's time for software to grow up.

"Procedural programming is a monumental waste of human energy." -- Dr. Rex Page, Univ. of Oklahoma, 1985 and many times since then.
(And in this context, "procedural" is "method and reference-to-objects oriented".)

Edited by Aristos Queue
Link to post
2 hours ago, Aristos Queue said:

"Procedural programming is a monumental waste of human energy." -- Dr. Rex Page, Univ. of Oklahoma, 1985 and many times since then.
(And in this context, "procedural" is "method and reference-to-objects oriented".)

It's a good job that LabVIEW is a dataflow programming language then! :P

Edited by ShaunR
Link to post

