
To OOP or not to OOP? Please help: array of clusters or classed data?



Hi All,


I'm trying to build a piece of software, and its data management has become too complex. In short, it acquires data from three channels, performs some calculations, and then displays and reports that data for the user to make a fitting decision. The user can then save the file and later import the saved, worked-up file to view or manipulate it.

The data consist of three interdependent data sets. So far, I have collected the data as an array of clusters (size 3), one element per set, and kept the user input data in the same manner. I am also using a simple producer/consumer design. However, I think that may have been a mistake, and I might be able to do a better job using either an Actor or QMH design, especially since I need to think about adding the acquisition part (which is not yet developed). The thing is that I have not used OOP before at all, and I am not sure whether this is the correct approach or how I should bundle my data, considering that some (though not all) of the data needs to be accessed by other classes if I go with OOP.

For example, should I keep the data as separate class data and use override methods, or keep it as an array in the parent class, use static methods, and only use child classes for graphing? These are the kinds of questions I am struggling with. Also, can anyone help me decide between DQMH, Actor Framework, or just a standard producer/consumer pattern, please? I am completely lost.

Thank you in advance.

 

This is what the front panel looks like:

[image: front panel screenshot]

And this is the data set for the system (which I am keeping as an array); more data will be added to this as the capability grows.

[image: system data set cluster]


Let's try to break it down a bit:

Quote

I'm trying to build a piece of software, and its data management has become too complex.

Why is it too complex? What exactly is complex?

Quote

The data consist of three interdependent data sets. So far, I have collected the data as an array of clusters (size 3), one element per set, and kept the user input data in the same manner.

What's wrong with that? What problems are you actually trying to solve by trying to move into OOP?

Quote

I am also using a simple producer/consumer design. However, I think that may have been a mistake, and I might be able to do a better job using either an Actor or QMH design, especially since I need to think about adding the acquisition part (which is not yet developed).

Why do you think the simple producer/consumer is a mistake? I'd say QMH and the Actor Framework are fancy producer/consumer frameworks: you may get some value from them, but you also get the learning curve. Have you ever used them, or even looked at some example applications?

Quote

The thing is that I have not used OOP before at all, and I am not sure whether this is the correct approach or how I should bundle my data, considering that some (though not all) of the data needs to be accessed by other classes if I go with OOP.

Again - what is wrong with your current way of handling the data? As a side note, going with OOP does not require you to use Actor Framework or QMH, and using either of those frameworks does not automatically make your code object-oriented. Generally speaking, if you're thinking about designing your application in an object-oriented way, you need to actually start with the design part: do some modeling of your application, think about possible objects and their interactions with other objects, and so on. But I'd start with the questions above - what problems are you trying to solve?
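Since LabVIEW G can't be shown inline here, here is a rough text-language sketch (Python, with invented class and field names - none of this is from the actual application) of what "modeling your objects first" could look like for the three datasets described in the thread: silica fitting independently, sample fitting against the silica result via an overridden method.

```python
from dataclasses import dataclass, field

# Hypothetical model only: class and field names are invented for
# illustration, not taken from the original application.
@dataclass
class Dataset:
    raw: list = field(default_factory=list)     # acquired points
    fitted: list = field(default_factory=list)  # fit results

    def fit(self):
        """Default fit; children override this with their own equation."""
        self.fitted = list(self.raw)

@dataclass
class SilicaDataset(Dataset):
    def fit(self):
        # Silica fits independently of the other channels.
        self.fitted = [2.0 * x for x in self.raw]

@dataclass
class SampleDataset(Dataset):
    reference: Dataset = None  # sample fitting depends on the silica result

    def fit(self):
        # Uses the already-fitted reference (silica) data.
        self.fitted = [x - r for x, r in zip(self.raw, self.reference.fitted)]
```

With this shape, a bug in `SampleDataset.fit` cannot corrupt the silica results - which is exactly the isolation the original poster says they are after.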


Hi PiDi,

Firstly, thank you for taking the time.

As I mentioned, I am not a programmer, and I have no notion of how big a data package carried on the shift register on every iteration is acceptable.
So far, I have to carry an array of size 3 (consisting of 20+ arrays of data) for each "fitting" tab's data, i.e. silica, solvent, and sample, separately, carry it through the whole software, and also transfer that data to the next loop, e.g. acquisition loop --> analysis loop --> display/file-I/O loop. I will eventually add more loops as more capabilities are added to this software. I am not sure about this, but I think that this way I have three or more copies of the same data set carried on the register every iteration.
I would also need to run acquisition asynchronously with the fitting section. 
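For what it's worth, the acquisition --> analysis --> display hand-off described above is exactly the shape of a queued producer/consumer pipeline. A hedged sketch in Python, with queues standing in for LabVIEW queues and made-up stage names and data:

```python
import queue
import threading

acq_to_analysis = queue.Queue()
analysis_to_display = queue.Queue()
results = []

def acquisition():
    # Producer: push raw channel data into the analysis queue.
    for i in range(3):
        acq_to_analysis.put({"channel_data": [i, i + 1]})
    acq_to_analysis.put(None)  # sentinel: no more data

def analysis():
    # Consumer of acquisition, producer for display.
    while (item := acq_to_analysis.get()) is not None:
        item["result"] = sum(item["channel_data"])
        analysis_to_display.put(item)
    analysis_to_display.put(None)

def display():
    # Final consumer: collect the calculated results.
    while (item := analysis_to_display.get()) is not None:
        results.append(item["result"])

threads = [threading.Thread(target=f) for f in (acquisition, analysis, display)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Each loop only ever holds the item it is currently working on, so the full data set is not duplicated per iteration the way parallel shift registers duplicate it.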

When it comes to OOP, I thought it would be easier to break the data up for each of silica, solvent, and sample, and use override VIs, since some of the fitting parameters differ between the packages; e.g. silica fits independently, but the sample uses data acquired from silica to fit its data. This way, when I am writing the sample's fitting VIs (OA and CA fitting), I only need to worry about messing up the sample's fitting equation, as long as I know that the silica VI runs with no issues. I assume that is the purpose of using OOP (unless I misunderstood, which is possible). Right now, when I mess up an equation, I have to chase down which data was affected.

This is an example of one of the VIs right now.

[image: "get Izero" VI block diagram]


I have not seen any real-life DQMH or Actor-based applications apart from "TOM's LabVIEW adventures" on YouTube - many thanks to his channel. Unfortunately, I don't know where to look for such applications; the examples packaged with LabVIEW are lacklustre, and there are not many tutorials on the subject as far as I have been able to find.

 

 

I am not sure if this was enough information.

 

Cheers,

 

Mahbod

  • 2 weeks later...

If you are not a programmer, then I would not jump into OOP. I'm not a real programmer myself either, but I have been programming for years and have written some medium-complex data acquisition and processing applications, yet I never dared to try OOP.

Anyway, I don't know "how big a data package carried on the shift register on every iteration is acceptable" either. I think you are okay with shift registers, but you have to be careful about the amount of data, whichever way you do it. For example, save new data to a file (append) regularly and throw away what you saved (or downsample, or whatever you need).
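The "append regularly and throw away what you saved" idea, sketched in Python (the flush threshold and file layout are arbitrary choices for illustration, not a prescription):

```python
buffer = []

def on_new_point(value, path, flush_at=100):
    """Buffer incoming points; append them to disk and clear the buffer
    when it fills. Keeps in-memory history bounded no matter how long
    acquisition runs."""
    buffer.append(value)
    if len(buffer) >= flush_at:
        with open(path, "a") as f:
            f.writelines(f"{v}\n" for v in buffer)
        buffer.clear()  # throw away what was just saved
```

The same pattern translates directly to a shift register holding the buffer array and a conditional "append to file, then empty the array" case.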

As for copying data a lot, I'm not sure; the code looks rather messy. I don't know the full picture, but based on the snippet you posted, maybe you should separate raw data from calculated data (because the raw data is not modified by your snippet, only read). You could also try to "streamline" your VIs - I don't know the proper term - so that the data on the input terminal comes out unaffected on the output terminal (like a reference). This will ruin parallelization of the calculations but can save a few copies (if the VIs can be inlined).


Maybe I misunderstood, and what you posted is just an example. Anyway, that example shows that the different calculations and subVIs run serially because of data dependency, so I would "streamline" the VIs and use In Place Element structures where possible (and, of course, for loops with auto-indexing). In your example there would be no large data copies at all: "Calc DZPV" (modified so data in passes through), then "Cal S and wa" (modified so data in passes through), then an In Place Element structure to index the data, and inside it another In Place Element structure to address the cluster elements you want to change. If the subVIs can be inlined, there's no copy of the whole data array. Maybe you don't even have to inline the subVIs, because the optimizer may be able to optimize away the data copies when passing data to the subVIs - I have no deep knowledge of this.
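The pass-through idea is hard to show outside of G, but here is the gist transliterated into Python, where mutation plays the role LabVIEW's In Place Element structure plays (the function and field names are invented):

```python
def calc_in_place(datasets):
    """Stand-in for a 'streamlined' subVI: the wire in passes through to
    the wire out, so the caller's array is updated rather than copied."""
    for d in datasets:            # like an auto-indexing for loop
        d["s"] = 2.0 * d["raw"]   # like addressing one cluster element in place
    return datasets               # same object out: no copy of the whole array

data = [{"raw": 1.0}, {"raw": 2.0}]
out = calc_in_place(data)
```

Here `out is data` holds: no second copy of the array ever exists, which is the saving the in-place, pass-through approach is chasing.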

