Design advice needed for data acquisition system


Recommended Posts

Hi

I need to build a scalable data acquisition system (it has to handle between 1 and 20 hardware cards at the same time).
The system is polling-based and moves quite a lot of data in and out.

I tried arrays of data, arrays of clusters, then arrays of classes, and with all of them my problem was the multiple data copies while doing a single operation (one input change causes a copy of the whole array of inputs, clusters, or classes). I thought about functional global variables too, but those are hard to scale (with my current LabVIEW experience).

So I decided to use references - the basic idea is below - is it a valid one, or am I pushing my luck with something not recommended?

DataIn is a Boolean array representing a set of inputs (there will be more, numeric ones as well).
In the final solution, Ref_DataIn will probably be a global; the rest of the loops will be in different reentrant VIs (except the creation of the references in frame 1, which will probably be non-reentrant or synchronized).

Also, I would like to know the granularity of LabVIEW with respect to reference variables (Ref_DataIn and its elements in the case below) - do I need to synchronize the reads and writes of the values, or is it fine as it is?
I would like to avoid reading the Value while the data acquisition loops write it - it is not a problem if I read an array with half old, half new values, but it is a problem if I read an empty or half-filled array when this kind of collision occurs.
I have no idea how this works in LabVIEW - in C/C++ I would need to synchronize it.

[image: block diagram of the proposed reference-based approach]

Any other input is welcome. I come from the C/C++ world with more than 20 years of experience; I could make this work in a few days in a DLL and then import it into LabVIEW, but the problem is that others have to access, slightly modify, and easily maintain the whole structure, even on site, which is easier if I leave everything in LabVIEW ...

Thank You ...

Link to post
Share on other sites
3 hours ago, brownx said:

I tried arrays of data, arrays of clusters, then arrays of classes, and with all of them my problem was the multiple data copies while doing a single operation (one input change causes a copy of the whole array of inputs, clusters, or classes). I thought about functional global variables too, but those are hard to scale (with my current LabVIEW experience).

Arrays of clusters and arrays of classes will certainly have a much higher memory footprint than an array of any primitive type. The amount of data copies, however, depends on your particular implementation and isn't affected by the data type. LabVIEW is also quite smart about avoiding memory copies: https://labviewwiki.org/wiki/Buffer_Allocation

Perhaps, if you could show the particular section of code that has high memory footprint, we could suggest ways to optimize it.

3 hours ago, brownx said:

So I decided to use references - the basic idea is below - is it a valid one, or am I pushing my luck with something not recommended?

I don't want to be mean, but this particular approach will result in high CPU usage and low throughput. What you built requires a context switch to the UI thread on every call to one of those property nodes, which is equivalent to simulating keyboard presses to insert and read large amounts of data. Not to mention that it forces LabVIEW to copy all data on every read/write operation... Certainly not the recommended way to do it 😱

3 hours ago, brownx said:

Also, I would like to know the granularity of LabVIEW with respect to reference variables (Ref_DataIn and its elements in the case below) - do I need to synchronize the reads and writes of the values, or is it fine as it is?

You don't need to synchronize reads and writes for property nodes; they are thread-safe.

3 hours ago, brownx said:

I would like to avoid reading the Value while the data acquisition loops write it - it is not a problem if I read an array with half old, half new values, but it is a problem if I read an empty or half-filled array when this kind of collision occurs.
I have no idea how this works in LabVIEW - in C/C++ I would need to synchronize it.

Do you really need to keep all this data in memory in large chunks?

It sounds to me as if you need to capture a stream of data, process it sequentially, and output it somewhere else. If that is the case, perhaps the producer/consumer template, which comes with LabVIEW, is worth looking into.
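For readers more at home in C++ (as the OP is), the producer/consumer template LogMAN mentions maps roughly onto a mutex-protected blocking queue between an acquisition thread and a processing thread. A minimal sketch of that idea - the class and function names here are illustrative, not any LabVIEW or NI API:

```cpp
#include <condition_variable>
#include <mutex>
#include <queue>
#include <thread>

// Thread-safe FIFO: the producer (acquisition loop) pushes samples,
// the consumer (processing loop) pops them - the same roles the two
// loops play in LabVIEW's queue-based producer/consumer template.
template <typename T>
class BlockingQueue {
public:
    void push(T value) {
        {
            std::lock_guard<std::mutex> lock(m_);
            q_.push(std::move(value));
        }
        cv_.notify_one();
    }
    T pop() {  // blocks until an element is available
        std::unique_lock<std::mutex> lock(m_);
        cv_.wait(lock, [this] { return !q_.empty(); });
        T value = std::move(q_.front());
        q_.pop();
        return value;
    }
private:
    std::queue<T> q_;
    std::mutex m_;
    std::condition_variable cv_;
};

// Demo: one producer thread feeding one consumer.
inline int sumSamples(int n) {
    BlockingQueue<int> queue;
    std::thread producer([&] {
        for (int i = 1; i <= n; ++i) queue.push(i);
    });
    int sum = 0;
    for (int i = 0; i < n; ++i) sum += queue.pop();
    producer.join();
    return sum;
}
```

In LabVIEW the queue primitives already provide the locking and blocking, so the template reduces to two loops and an Obtain/Enqueue/Dequeue trio.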

Edited by LogMAN
Link to post
Share on other sites

>but this particular approach will result in high CPU usage and low throughput
The loops are asleep most of the time - the application does not need a refresh rate higher than 50 ms, and it will mostly be used from TestStand, which means context switches on every line :( But your point is valid; TBH I am tempted to make a compact C DLL which could deal with all this stuff with maximum speed and less CPU. However, I am open to any suggestions that would let me stay in the LabVIEW domain.

>It sounds to me as if you need to capture a stream of data, process it sequentially, and output it somewhere else

It is not a stream; it's more like reading inputs and controlling outputs - probably the best description is a driver for hardware like NI-DAQ and NI multiplexers, but with slow data communication (I don't consider 50 ms fast) and much more complex functionality in the HW, though this does not matter from the LabVIEW point of view.

Basically what I need is:
- a generic API (sort of a driver interface) which can deal with two or three types of hardware, 1-10 of each (TCP/IP, UDP, or serial connected). This might scale later.
- all this HW is polling type (no async events in the communication)
- the inputs are mostly digital or analog type, the outputs are digital 
- some of the inputs, once they become active, have to be kept active until the first read, even if they go inactive in the meantime (sort of an interrupt; the HW is prepared to do this, the SW has to be as well)
- the data will have multiple consumers (of the same data too) and I need "last known value" support at any time for any of the data

- due to high debug demand I also need to "cache" the outputs (the last known value that was sent to the HW)

>Do you really need to keep all this data in memory in large chunks?
Not big chunks - I am only interested in the last value (always the newest). The data is not that much - mostly control stuff - roughly 50 inputs, 50 outputs, a few analogs, I2C, SPI, other protocol extensions - that kind of stuff.

Thanks for the input

Edited by brownx
Link to post
Share on other sites

Oh I see, my first impression was that the issue is about performance, not architecture.

Here are my thoughts on your requirements. I assume that your hardware is not pure NI hardware (in which case you can simply use DAQmx).

34 minutes ago, brownx said:

- a generic API (sort of a driver interface) which can deal with two or three types of hardware, 1-10 of each (TCP/IP, UDP, or serial connected). This might scale later.
- all this HW is polling type (no async events in the communication)

Create a base class for the API (LabVIEW doesn't have interfaces until 2020, a base class is the closest thing there is). It should have methods to open/close connections and read data. This is the Read API.

For each specific type of hardware, create a child class and implement the driver-specific code (TCP/IP, UDP, serial).

Create a factory class, so that you can create new instances of your specific drivers as needed. The only thing you need to work out is how to configure the hardware.
I can imagine using a VISA Resource Name (scroll down to VISA Terminology) for all drivers, which works unless you need to use protocols that VISA doesn't support (TCP/IP, UDP, and serial are supported though). Alternatively create another base class for your configuration data and abstract from there.

Of course, the same should be done for the Write API.
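The base-class-plus-factory shape LogMAN describes translates directly into any OO language; here is a compact C++ sketch of the Read API side. Everything here (class names, the `makeReader` table, the stub return values) is illustrative, not an existing LabVIEW, VISA, or driver API:

```cpp
#include <map>
#include <memory>
#include <stdexcept>
#include <string>
#include <vector>

// Abstract "Read API" - the role of LogMAN's base class. Each hardware
// flavor (TCP/IP, UDP, serial) overrides the transport-specific parts.
class HwReader {
public:
    virtual ~HwReader() = default;
    virtual void open(const std::string& resource) = 0;
    virtual std::vector<double> readAnalog() = 0;
    virtual void close() = 0;
};

class TcpReader : public HwReader {
public:
    void open(const std::string&) override { /* connect socket */ }
    std::vector<double> readAnalog() override { return {1.0, 2.0}; }  // stub data
    void close() override { /* shut down socket */ }
};

class SerialReader : public HwReader {
public:
    void open(const std::string&) override { /* open COM port */ }
    std::vector<double> readAnalog() override { return {3.0}; }  // stub data
    void close() override { /* close COM port */ }
};

// Factory: maps a driver name (e.g. read from a config file) to a concrete
// class, so adding hardware type #4 touches only this table plus one new
// subclass - none of the consumers change.
inline std::unique_ptr<HwReader> makeReader(const std::string& kind) {
    if (kind == "tcp")    return std::make_unique<TcpReader>();
    if (kind == "serial") return std::make_unique<SerialReader>();
    throw std::runtime_error("unknown hardware kind: " + kind);
}
```

In LabVIEW the same structure is a parent class with dynamic-dispatch VIs, child classes per transport, and a factory VI that selects which child to instantiate from the configuration.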

35 minutes ago, brownx said:

- the inputs are mostly digital or analog type, the outputs are digital 

The easiest way is to have two methods, one to read analog values and one to read digital values. Of course, hardware that doesn't support one or the other will have to return sensible default values.

Alternatively, have two specific APIs for reading analog/digital values. However, due to a lack of multiple inheritance in LabVIEW (unless you use interfaces in 2020), hardware that needs to support both will have to share state somehow.

35 minutes ago, brownx said:

- some of the inputs, once they become active, have to be kept active until the first read, even if they go inactive in the meantime (sort of an interrupt; the HW is prepared to do this, the SW has to be as well)

It makes sense to implement this behavior as part of the polling thread and have it cache the data, so that consumers can access it via the API. For example, a device reads all analog and digital values, puts them in a single-element queue and updates them as needed (dequeue, update, enqueue). Consumers never dequeue. They only use the "Preview Queue Element" function to copy the data (this will also allow you to monitor the last known state). This is only viable if the dataset is small (a few KB at most).
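The single-element queue trick boils down to "one mutex-protected slot that the worker overwrites and consumers copy out". A rough C++ analogue, with an illustrative `IoSnapshot` layout (the real field set would follow the hardware):

```cpp
#include <cstdint>
#include <mutex>
#include <vector>

// The worker's cached "last known state" of one device.
struct IoSnapshot {
    std::uint64_t digitalIn = 0;   // bit-packed digital inputs
    std::vector<double> analogIn;  // last analog readings
};

// C++ analogue of the single-element queue: the polling thread replaces
// the one stored element (dequeue, update, enqueue), and consumers only
// copy it out - the role of LabVIEW's "Preview Queue Element".
class LatestValueCache {
public:
    void update(const IoSnapshot& s) {      // called by the polling worker
        std::lock_guard<std::mutex> lock(m_);
        latest_ = s;
    }
    IoSnapshot preview() const {            // consumers get a private copy
        std::lock_guard<std::mutex> lock(m_);
        return latest_;
    }
private:
    mutable std::mutex m_;
    IoSnapshot latest_;
};
```

As in the LabVIEW version, every `preview()` is a full copy, which is why this only pays off while the snapshot stays small.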

32 minutes ago, brownx said:

- the data will have multiple consumers (same data too) and I need a "last known value" support at any time to any of the data
- due to high debug demand I also need to "cache" the outputs (last known value which was sent to the HW)

Take a look at notifiers. They can have as many consumers as necessary, each of which can wait for a new notification (each one receives their own copy). There is also a "Get Notifier Status" function, which gives you the latest value of a notifier. Keep in mind, however, that notifiers are lossy.
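In C++ terms, a notifier behaves roughly like a lossy latest-value slot with a version counter: "Send Notification" overwrites the value and wakes all waiters, "Get Notifier Status" reads the latest value without blocking, and "Wait on Notification" blocks for the next update. A sketch of that behavior (an analogy only, not how LabVIEW implements it internally):

```cpp
#include <condition_variable>
#include <mutex>

// Rough C++ analogue of a LabVIEW notifier: keeps only the newest value
// (lossy), any number of consumers can wait for the next update, and the
// latest value can be read at any time ("Get Notifier Status").
template <typename T>
class Notifier {
public:
    void send(const T& value) {
        {
            std::lock_guard<std::mutex> lock(m_);
            latest_ = value;   // an unread older value is overwritten: lossy
            ++version_;
        }
        cv_.notify_all();      // every waiting consumer wakes with a copy
    }
    T status() const {         // latest value, non-blocking
        std::lock_guard<std::mutex> lock(m_);
        return latest_;
    }
    T waitNext(unsigned long lastSeen) const {  // block until a newer value
        std::unique_lock<std::mutex> lock(m_);
        cv_.wait(lock, [&] { return version_ > lastSeen; });
        return latest_;
    }
    unsigned long version() const {
        std::lock_guard<std::mutex> lock(m_);
        return version_;
    }
private:
    mutable std::mutex m_;
    mutable std::condition_variable cv_;
    T latest_{};
    unsigned long version_ = 0;
};
```

The lossiness is visible in `send`: two sends with no read in between leave only the second value, exactly the caveat above.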

Link to post
Share on other sites

Seems logical, but it is not that simple with TestStand, where I need a TestStand call (usually a VI in a step-by-step script) which does nothing except request the last known value of an IO, then closes (and the script will make a decision based on that value, or issue a fail or a pass).

This VI instance did not exist before and will cease to exist after its execution, so the "Obtain Notifier" will probably run long after the IO change I am interested in.

I have to check whether a VI like this can be a "consumer" of a GPIO value - if I have to introduce a middle stage which listens through a notifier to every IO change and saves a "snapshot" of the current state to provide to a TestStand call, then I am back to square one :) In a monolithic app I would definitely go with notifiers/queues, but that's not the case with TestStand, so I am still puzzled about which way to go... And I cannot go a completely TestStand-oriented way either, since in rare cases we use this stuff in monolithic applications too...

Seems like I'll dig deep into notifiers, but I'll learn something new I guess :) I have used queues before, not notifiers, but they seem close.

Link to post
Share on other sites

Sorry, I'm not familiar with TestStand. I'd assume that there is some kind of persistent state and perhaps a way to keep additional code running in the background, otherwise it wouldn't be very useful. For example, it should be possible to use Start Asynchronous Call to launch a worker that runs parallel to TestStand and which can exchange information via queues/notifiers whose references are maintained by TestStand (and that are accessible by step scripts). In this case, there would be one step to launch the worker, multiple steps to gather data, and one step to stop the worker.

Maybe someone with more (any) experience in TestStand could explain if and how this is done.

Link to post
Share on other sites

TestStand can call a VI just as LabVIEW would; it also permits storing values in script variables (LabVIEW has a TestStand API which can access TestStand variables too).

It also permits a parallel thread which can hold a looping VI that persists during the whole test. I have done this many times.

However, to have the data from the persistent thread readable from other parts of the test (like "read IO1") would mean a different VI accessing the variables of this persistent VI - which takes me back again to references, global variables, or functional globals. Are you suggesting that instead of the data references I should use a notifier reference?

As far as I understand, the notifier can send a notification of a value change (and keeps only the last one), but it will not give me a way to "read IO1" unless I keep the last value in a local variable on the receiver end, in the persistent VI. Thus I will still end up mirroring the data in the background; the only difference is that I will not loop continuously, but rather run only when something changes, right? Sort of event-based programming instead of continuous loops.

I could also write the values directly to TestStand, but this API will be used 20% in standalone LabVIEW applications, so it has to work without TestStand as well (not to mention that the LabVIEW TestStand API is slow, and I would like to keep the connection between TestStand and LabVIEW "on demand" instead of always updating a lot of stuff even when I don't need it).

 

 

Link to post
Share on other sites
1 hour ago, brownx said:

Are you suggesting that instead of the data references I should use a notifier reference?

Yes, either notifier or queue. You can store the notifier reference, or give the notifier a unique name (i.e. GUID), which allows you to obtain an existing notifier by name if it already exists in the same address space (application). Here is an example using notifiers:

[image: LV_NotifierWorker.png - notifier-based worker example]

Queues work the same way.

1 hour ago, brownx said:

As far as I understand, the notifier can send a notification of a value change (and keeps only the last one), but it will not give me a way to "read IO1" unless I keep the last value in a local variable on the receiver end, in the persistent VI.

Either your notification always contains the value for "read IO1", in which case the latest value also contains it, or you need to inform the worker about which channel to read. For example, by sending a message to your worker that includes the desired channel name, as well as a reply target. For things like this, the Queued Message Handler template (included with LabVIEW) or the Messenger Library are probably worth looking into.

[image: LV_CommandWorker.png - command/reply worker example]
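The "message with a reply target" idea has a close C++ analogue in a request object carrying a promise that the worker fulfills. A toy sketch - `Request`, `serveOne`, and the `hwImage` map are all hypothetical names for illustration, not part of any LabVIEW or Messenger Library API:

```cpp
#include <future>
#include <map>
#include <string>
#include <thread>

// A message sent to the worker: which channel to read, plus a reply
// target the worker fulfills - the role of the reply queue/notifier in
// a LabVIEW queued message handler.
struct Request {
    std::string channel;
    std::promise<double> reply;
};

// Toy worker body: answers one request from its cached "hardware image".
inline void serveOne(Request& req,
                     const std::map<std::string, double>& hwImage) {
    auto it = hwImage.find(req.channel);
    req.reply.set_value(it != hwImage.end() ? it->second : 0.0);
}

// Caller side: post the request, then block until the worker replies.
inline double readChannel(const std::string& name) {
    const std::map<std::string, double> hwImage = {{"IO1", 3.3}, {"IO2", 0.0}};
    Request req;
    req.channel = name;
    std::future<double> answer = req.reply.get_future();
    std::thread worker([&] { serveOne(req, hwImage); });  // stands in for the worker loop
    double value = answer.get();   // caller blocks until the reply arrives
    worker.join();
    return value;
}
```

A one-shot step script behaves like `readChannel` here: it exists only for the duration of one request/reply round trip, while the worker (and its cached state) persists.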

1 hour ago, brownx said:

Thus I will still end up mirroring the data in the background; the only difference is that I will not loop continuously, but rather run only when something changes, right? Sort of event-based programming instead of continuous loops.

How much data are we talking about?

Yes, there is some copying going on, but since the data is requested on demand, the overall memory footprint should be rather small, because memory is released before the next step starts. If you really need to gather a lot of data at once (i.e. 200 MB or more), there is the Data Value Reference, which gives you a (thread-safe) reference to the actual data. DVRs, however, should be avoided whenever possible, because they limit the compiler's ability to optimize your code. Not to mention that they break dataflow, which makes the program much harder to read...

Link to post
Share on other sites

>How much data are we talking about?

Not that much - around 110 Booleans basically (digital IOs) and fewer than 10 analog values (doubles) per hardware.
Since I have fewer than 64 inputs and 64 outputs, I can even use one 64-bit unsigned integer for the inputs and another for the outputs, and deal with the "BOOL" on the reader side with Number to Boolean Array.
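The packing idea above is a couple of lines in C++; the unpack step is what LabVIEW's "Number to Boolean Array" would do on the reader side (helper names here are illustrative):

```cpp
#include <cstdint>
#include <vector>

// Set or clear one digital input in the packed 64-bit word.
inline std::uint64_t setInput(std::uint64_t word, unsigned bit, bool on) {
    return on ? (word | (std::uint64_t{1} << bit))
              : (word & ~(std::uint64_t{1} << bit));
}

// Read one input back out by bit index.
inline bool getInput(std::uint64_t word, unsigned bit) {
    return (word >> bit) & 1u;
}

// Unpack the low `count` bits - the "Number to Boolean Array" step.
inline std::vector<bool> toBoolArray(std::uint64_t word, unsigned count) {
    std::vector<bool> bits(count);
    for (unsigned i = 0; i < count; ++i) bits[i] = getInput(word, i);
    return bits;
}
```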

>Either your notification always contains the value for "read IO1",
Since the HW responds with all its IOs packed into one response anyway, it's probably much easier to do a "get everything" and then just mask the bit or bits at the other end.

I also need a background loop which does maintenance stuff, so it's not enough to be completely on demand (e.g. one of the inputs is usually Abort, which stops everything).
Probably the easiest way to go is to have a worker feed the notifier with real HW data every 50 ms; in that case I don't even need a read-IO command, since a complete read will be performed periodically anyway.
The only setback would be that a read of IO 1 could wait up to 50 ms.

The only thing which will be a pain is the interrupt IOs - I'll have to deal with them somehow, but I will figure it out - probably I can handle that in the background reader...

Thank You for the feedback - it helped a lot ... 

 

 

Link to post
Share on other sites
6 hours ago, brownx said:

Not that much - around 110 Booleans basically (digital IOs) and fewer than 10 analog values (doubles) per hardware.
Since I have fewer than 64 inputs and 64 outputs, I can even use one 64-bit unsigned integer for the inputs and another for the outputs, and deal with the "BOOL" on the reader side with Number to Boolean Array.

You are trying to optimize something that really isn't a bottleneck. Even if each bit were represented by an 8-bit integer, the total size of your data would be less than 200 bytes per hardware. Even with 100 devices, only 20 KB of memory is needed for all those inputs and outputs (analog and digital). In the unlikely event that there are 1000 consumers at the same time, each of which has its own copy, it will barely amount to 20 MB...

As a C/C++ programmer I feel the urge for memory management, but this is really not something to worry about in LabVIEW, at least not until you hit the upper MB boundary.

6 hours ago, brownx said:

Since the HW responds with all its IOs packed into one response anyway, it's probably much easier to do a "get everything" and then just mask the bit or bits at the other end.

It might seem easier at first glance, but now all your consumers need to know the exact order of inputs and outputs (by index), which means you need to update every consumer when something changes. If you let the worker handle it, however (e.g. with a lookup table), consumers can simply "address" inputs and outputs by name. That way the data structure can change independently. You'll find this to be much more flexible in the future (e.g. for different hardware configurations).
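The lookup table amounts to a name-to-bit-index map owned by the worker; a minimal C++ sketch (the channel names and indexes are made up - in this thread's scenario they would come from the INI file):

```cpp
#include <cstdint>
#include <map>
#include <stdexcept>
#include <string>

// Consumers ask for "ABORT" by name; the worker resolves it to a bit
// index, so the wire format can change without touching any consumer.
class ChannelMap {
public:
    explicit ChannelMap(std::map<std::string, unsigned> channels)
        : channels_(std::move(channels)) {}

    // Look up one named digital input in the packed word from the HW.
    bool read(const std::string& name, std::uint64_t packedInputs) const {
        auto it = channels_.find(name);
        if (it == channels_.end())
            throw std::out_of_range("unknown channel: " + name);
        return (packedInputs >> it->second) & 1u;
    }
private:
    std::map<std::string, unsigned> channels_;
};
```

In LabVIEW this would be a variant-attribute or map lookup inside the worker's command handler, keyed by the channel names loaded at startup.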

6 hours ago, brownx said:

I also need a background loop which does maintenance stuff, so it's not enough to be completely on demand (e.g. one of the inputs is usually Abort, which stops everything).

I'd probably use another worker that regularly (i.e. every 100 ms) polls the state of the desired input and sends the stop signal if needed.

6 hours ago, brownx said:

Probably the easiest way to go is to have a worker feed the notifier with real HW data every 50 ms; in that case I don't even need a read-IO command, since a complete read will be performed periodically anyway.
The only setback would be that a read of IO 1 could wait up to 50 ms.

That, and the fact that the worker has to poll continuously even if there is no consumer. It is also not possible to add new features to such a worker, which can be problematic in case someone needs more features...

6 hours ago, brownx said:

The only thing which will be a pain is the interrupt IOs - I'll have to deal with them somehow, but I will figure it out - probably I can handle that in the background reader...

Suggestion: Keep them in a separate list as part of the worker. For example, define a list of interrupt IOs (addresses) that the worker keeps track of. On every cycle, the worker updates the interrupt state (which is a simple OR condition). Consumers can use a special "read interrupt state" command to get the current state of a specific interrupt (you can still read the regular input state with the other command). When "read interrupt state" is executed, the worker resets the state.
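The OR-accumulate-then-clear-on-read behavior is a few lines in any language; a C++ sketch of the latch LogMAN describes (class and method names are illustrative):

```cpp
#include <cstdint>

// Latched ("interrupt") inputs: each poll ORs new activity into the
// latch, and "read interrupt state" clears what it returns, so a short
// pulse between two reads is never lost.
class InterruptLatch {
public:
    void poll(std::uint64_t inputs) {  // called every worker cycle
        latched_ |= inputs;            // the simple OR accumulation
    }
    std::uint64_t readAndClear() {     // the "read interrupt state" command
        std::uint64_t out = latched_;
        latched_ = 0;                  // worker resets the state on read
        return out;
    }
private:
    std::uint64_t latched_ = 0;
};
```

Since only the worker touches the latch (consumers go through its command queue), no extra locking is needed beyond what the queue already provides.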

Now that I think about it, there are quite a few things I might just use in my own I/O Server... 😁

Link to post
Share on other sites

>As a C/C++ programmer I feel the urge for memory management, but this is really not something to worry about in LabVIEW,
I have the same background, with 20+ years of writing all kinds of code, from servers to low-level 16-bit half-assembly :) Maybe I am just getting older and dumber, but I need much more time to think in pipes than in C lines - I would have written this whole thing in C++ in 1-2 days and long forgotten it :))

>at least not until you hit the upper MB boundary.
Not really - memory was not the reason I want a clean setup; it's more the syncing and multithreading.

The old solution had only 1 HW at a time - a simple global or functional global written by one thread and read by the rest was enough; speed and memory were not an issue, since the application is slow and the memory footprint is low, and whenever I needed a faster read I could read the HW directly for the rest of the pins.

But once 1 HW is replaced with 10, and a few threads with many, this solution no longer works. Either I use references, which make things hard to follow, or I use some async piped solution.

You confirmed this as well - notifiers for data and a queue for commands might be the best solution; now I just have to make a drawing to see all the elements in one place, to check that I did not forget something...

>Now that I think about it, there are quite a few things I might just use in my own I/O Server... 
It's not open source, is it? :))

>It might seem easier at first glance, but now all your consumers need to know the exact order of inputs and outputs (by index),
The output and input names change for every application, so they change anyway (rearranged, and sometimes with a different function and a different name). And the configuration is usually loaded from an INI file at runtime (index, name, everything), which makes it a pain to use named clusters, especially combined with TestStand.
TestStand also has other limitations - too long to describe - which make it easier to have indexes on the consumer side, like ABORT_SIGNAL = 10, which can easily be changed from the INI file at runtime, while a ring or enum would be messed up by TestStand at every change.

 

Link to post
Share on other sites

I didn't read the full conversation, but thought I'd point out to the OP that it is a common mistake for programmers coming to LabVIEW to identify LabVIEW's UI widgets (controls/indicators) as "variables", and control references (actually references to the UI widgets) as "data references". They also confuse the UI widgets with the data the widget is acting as a UI for (which I suspect the OP has done when talking about having many data copies when using "arrays" and "clusters").

Performant data-handling code will not utilize the User Interface at all, and if a "reference" were to be needed, it would probably be a DVR.

Link to post
Share on other sites
4 minutes ago, drjdpowell said:

I didn't read the full conversation, but thought I'd point out to the OP that it is a common mistake for programmers coming to LabVIEW to identify LabVIEW's UI widgets (controls/indicators) as "variables", and control references (actually references to the UI widgets) as "data references". They also confuse the UI widgets with the data the widget is acting as a UI for (which I suspect the OP has done when talking about having many data copies when using "arrays" and "clusters").

Performant data-handling code will not utilize the User Interface at all, and if a "reference" were to be needed, it would probably be a DVR.

Hm - I definitely made this mistake... Even my classes have their private data in clusters formed from Booleans, etc. - the same sort of elements you get on the UI.

Can you give me a good LabVIEW example that shows how to use real data vs. UI elements?

I did Core 1 and 2 and this was never mentioned (except functional globals), and I still have the feeling that I'm missing something - this could be the reason I still kind of don't like LabVIEW :)

Link to post
Share on other sites
25 minutes ago, brownx said:

Hm - I definitely made this mistake... Even my classes have their private data in clusters formed from Booleans, etc. - the same sort of elements you get on the UI.

Can you give me a good LabVIEW example that shows how to use real data vs. UI elements?

I did Core 1 and 2 and this was never mentioned (except functional globals), and I still have the feeling that I'm missing something - this could be the reason I still kind of don't like LabVIEW :)

You do use the same "normal" controls in the class private data as you would in your GUI. This is 100% OK, and you can use whatever you like, as they will never be visible at runtime; they are just a visual representation of your data types.

What you choose to show on the GUI is totally unrelated to what data you choose to have in the classes.

 

Link to post
Share on other sites

Remember that you are a user, and you only look at the user interface. When you "open a class", a cluster UI widget is loaded to represent the class data. But that widget is not the actual cluster. When you "open a subVI", the corresponding front panel is loaded with UI widgets corresponding to the block diagram terminals. But for most subVIs, that front panel is never loaded if it is not opened.

This is unfortunately counterintuitive: because every subVI has a user interface (front panel), a new programmer will think it is significant, while an experienced LabVIEW programmer intuitively understands that the front panel of most subVIs doesn't really exist at runtime.

 

Link to post
Share on other sites
1 minute ago, Neil Pate said:

You do use the same "normal" controls in the class private data as you would in your GUI. This is 100% OK, and you can use whatever you like, as they will never be visible at runtime; they are just a visual representation of your data types.

What you choose to show on the GUI is totally unrelated to what data you choose to have in the classes.

 

I C ...

Well, in my case the UI is almost nonexistent - I work under layers of TestStand scripts based on different hardware, and the result is never a UI (unless you count PASSED/FAILED and the test log as a UI, with some minimal interactions like start/stop/exit, etc.).

So I don't think in UI; the data I am working with is almost never shown (maybe only on the debug UI, where I want to see some details if something goes wrong :) )

Link to post
Share on other sites
Just now, brownx said:

I C ...

Well, in my case the UI is almost nonexistent - I work under layers of TestStand scripts based on different hardware, and the result is never a UI (unless you count PASSED/FAILED and the test log as a UI, with some minimal interactions like start/stop/exit, etc.).

So I don't think in UI; the data I am working with is almost never shown (maybe only on the debug UI, where I want to see some details if something goes wrong :) )

You don't really need to worry about performance of the GUI anyway until you are getting into real-time updating of graphs with hundreds of MB of data. Even then it can be done if you are careful.

Link to post
Share on other sites
9 minutes ago, drjdpowell said:

This is unfortunately counterintuitive: because every subVI has a user interface (front panel), a new programmer will think it is significant, while an experienced LabVIEW programmer intuitively understands that the front panel of most subVIs doesn't really exist at runtime.

For me a subVI is equivalent to a function call in a scripting language - this is how TestStand uses them too (you just see a line of script representing a VI, with a bunch of inputs and outputs representing the data in and out, and never even see the UI).
But I get what you mean ... 

4 minutes ago, Neil Pate said:

You don't really need to worry about performance of the GUI anyway until you are getting into real-time updating of graphs with hundreds of MB of data. Even then it can be done if you are careful.

I don't care too much about performance, since my application is a slow one with a small amount of data - what is complex is the distribution of the data, the multithreaded access, and the need for scalability.
This is why I thought there should be a better way than "arrays of arrays of arrays of classes and references", which is a pain to maintain and to scale.

Ok - let's talk about my example - it's basically an IO driver:
- I need to support a set of hardware, each with N inputs and M outputs (it doesn't matter what kind; there will be many)
- it has to scale easily, to add more types and more units of the same type
- it has to be multithreaded and permit multiple instances of the same HW
- the data has to be accessible from a single standalone VI call (hard to explain unless you guys are using TestStand, which is a bit different from having one monolithic LabVIEW app)

How would you guys implement it? I don't need complete examples, just a few keywords to give me a new viewpoint...

For now what I see is a set of classes, held only in an array (and maybe not even that is needed), using one persistent VI per instance, a notifier to get the data from it, and a queue to send commands to it (this will replace the arrays of arrays of arrays I didn't like anyway). Do you see anything wrong with this, or do you have a completely different idea?

 

Link to post
Share on other sites
10 minutes ago, brownx said:

For me a subVI is equivalent to a function call in a scripting language - this is how TestStand uses them too (you just see a line of script representing a VI, with a bunch of inputs and outputs representing the data in and out, and never even see the UI).
But I get what you mean ... 

I don't care too much about performance, since my application is a slow one with a small amount of data - what is complex is the distribution of the data, the multithreaded access, and the need for scalability.
This is why I thought there should be a better way than "arrays of arrays of arrays of classes and references", which is a pain to maintain and to scale.

Ok - let's talk about my example - it's basically an IO driver:
- I need to support a set of hardware, each with N inputs and M outputs (it doesn't matter what kind; there will be many)
- it has to scale easily, to add more types and more units of the same type
- it has to be multithreaded and permit multiple instances of the same HW
- the data has to be accessible from a single standalone VI call (hard to explain unless you guys are using TestStand, which is a bit different from having one monolithic LabVIEW app)

How would you guys implement it? I don't need complete examples, just a few keywords to give me a new viewpoint...

For now what I see is a set of classes, held only in an array (and maybe not even that is needed), using one persistent VI per instance, a notifier to get the data from it, and a queue to send commands to it (this will replace the arrays of arrays of arrays I didn't like anyway). Do you see anything wrong with this, or do you have a completely different idea?

 

Well, now you are really getting into the need for a proper architecture, with a HAL and clonable/reentrant actors and such. Not something that can easily be described in a few sentences.

Link to post
Share on other sites
21 minutes ago, Neil Pate said:

Well, now you are really getting into the need for a proper architecture, with a HAL and clonable/reentrant actors and such. Not something that can easily be described in a few sentences.

If it's that complex, I'd rather do it in C++ - a few days (I've already done such stuff, so I'm not starting from 0) - and just export the thing to LabVIEW :)) But the guys following me will go crazy if any change is needed :D

The HAL is not that complex in my case - just think of a set of Booleans and numeric values; the rest is taken care of by the HW, so there is no need to implement a huge abstraction layer in LabVIEW...

Classes, multiple inheritance, class factories, etc. are not a problem - I am used to them. What I am not that familiar with are the notifiers, but with LogMAN's and NI's examples it will be fine.

I've done complex HAL systems in C++, home automation/iot server with modules loadable by dll or jar or activex or whatever, everything abstract including the programming language too, scalability is high, etc. etc. - this is what You mean? 

If yes I probably don't need that complex - there will be less than 10 types of HW supported and the consumer of the data will exactly know what hardware he is using too, so there is no need of that complexity. 
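To make the "thin HAL" idea concrete, a minimal sketch in C++ (mirroring what a small LabVIEW class hierarchy would look like): one abstract base exposing only boolean inputs and numeric outputs, with each of the fewer-than-10 HW types deriving from it. The names (`HwCard`, `SimCard`) are illustrative; a real subclass would wrap a vendor API.

```cpp
#include <cstddef>
#include <memory>
#include <string>
#include <vector>

// Hypothetical thin HAL: only what the consumers need, nothing more.
class HwCard {
public:
    virtual ~HwCard() = default;
    virtual std::string type() const = 0;
    virtual std::vector<bool> readInputs() = 0;              // N booleans
    virtual void writeOutput(std::size_t ch, double v) = 0;  // M numerics
};

// One concrete type; a real one would wrap the vendor's C++ API.
class SimCard : public HwCard {
public:
    explicit SimCard(std::size_t nIn, std::size_t nOut)
        : inputs_(nIn, false), outputs_(nOut, 0.0) {}
    std::string type() const override { return "Sim"; }
    std::vector<bool> readInputs() override { return inputs_; }
    void writeOutput(std::size_t ch, double v) override { outputs_.at(ch) = v; }
    double output(std::size_t ch) const { return outputs_.at(ch); } // inspection hook
private:
    std::vector<bool> inputs_;
    std::vector<double> outputs_;
};
```

Consumers hold a `std::unique_ptr<HwCard>` (in LabVIEW terms: a wire of the parent class type), so the scan loop never needs to know which card it talks to.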

Edited by brownx
1 hour ago, brownx said:

If it's that complex I'd rather do it in C++ ...

if you can get this all done in C++ in a few days I am mighty impressed 🙂

6 minutes ago, Neil Pate said:

if you can get this all done in C++ in a few days I am mighty impressed 🙂

Not all that dynamic stuff - that was a 5-person project running for years and always evolving :))

Forgot to say that most of those HW have C++ APIs and C++ test applications (some written by me, so I know them) - thus I already have over 70% ready in C++.
I would just need to add a class structure with the right inheritance to put them together, which is not that big of a task ...

It is tempting, however I think I'll go LabVIEW, since this stuff will be used by LabVIEW programmers - they need to be able to maintain it and slightly modify it to tailor it to their needs.
If I go C++ they will curse me till the end of my life and I will get calls all day :))

Which means I will have to take a deep breath and go class factory - I don't need dynamic VI loading; a class factory with a selectable HW type should do ...
Pfff ... I was hoping to get fresh input to get past the array of arrays with some LabVIEW magic I don't know yet, but I was also hoping not to get into complex dynamic HAL stuff ... :)) So I will aim somewhere in the middle ...
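A "class factory with a selectable HW type" and no dynamic loading can stay very small. Here is a hedged C++ sketch of the shape (in LabVIEW the registry would typically be a case structure or a map of type name to class constant); `Card`, `CardA`, `CardB` are placeholder names, not real drivers.

```cpp
#include <functional>
#include <map>
#include <memory>
#include <stdexcept>
#include <string>

// Hypothetical class factory: a registry from type name to constructor,
// so adding a new card type is one registration line - no dynamic
// VI/DLL loading needed.
class Card { public: virtual ~Card() = default; virtual std::string type() const = 0; };
class CardA : public Card { public: std::string type() const override { return "A"; } };
class CardB : public Card { public: std::string type() const override { return "B"; } };

class CardFactory {
public:
    using Maker = std::function<std::unique_ptr<Card>()>;
    void add(const std::string& name, Maker m) { makers_[name] = std::move(m); }
    std::unique_ptr<Card> make(const std::string& name) const {
        auto it = makers_.find(name);
        if (it == makers_.end()) throw std::runtime_error("unknown HW type: " + name);
        return it->second();  // fresh instance; callers own it
    }
private:
    std::map<std::string, Maker> makers_;
};
```

Because `make` hands back a fresh instance each time, "more cards of the same type" is just calling the factory N times with the same name.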

1 hour ago, brownx said:

Ok - let's talk about my example - it's basically an IO driver ...

All my applications are developed with a package I wrote several years ago called "Messenger Library": https://labviewwiki.org/wiki/Messenger_Library  There are a few YouTube videos linked from that wiki that go through an actual example application.   It's motivated by the Actor Model, and so I take inspiration from frameworks like Erlang, Akka or the C++ Actor Framework.  The core concept is "messaging" to gain the benefits you list.
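For readers unfamiliar with the actor-model "messaging" idea mentioned here, a minimal C++ sketch (not Messenger Library itself, just the core concept it is built on): a request carries its own reply channel, so the caller never touches the actor's internal state. A `std::promise` stands in for Messenger Library's reply address; the query strings are made up for illustration.

```cpp
#include <condition_variable>
#include <future>
#include <mutex>
#include <queue>
#include <string>
#include <thread>

// Hypothetical actor: a thread with a mailbox. Each request carries the
// channel its reply should be sent on.
struct Request {
    std::string query;
    std::promise<std::string> reply;
};

class Actor {
public:
    Actor() : runner_(&Actor::run, this) {}
    ~Actor() { send("stop"); runner_.join(); }
    std::future<std::string> send(std::string query) {
        Request r;
        r.query = std::move(query);
        auto fut = r.reply.get_future();   // caller keeps the reply end
        {
            std::lock_guard<std::mutex> lk(m_);
            box_.push(std::move(r));
        }
        cv_.notify_one();
        return fut;
    }
private:
    void run() {
        for (;;) {
            std::unique_lock<std::mutex> lk(m_);
            cv_.wait(lk, [&] { return !box_.empty(); });
            Request r = std::move(box_.front()); box_.pop();
            lk.unlock();
            if (r.query == "stop") { r.reply.set_value("bye"); return; }
            r.reply.set_value(r.query == "ping" ? "pong" : "?");
        }
    }
    std::mutex m_;
    std::condition_variable cv_;
    std::queue<Request> box_;
    std::thread runner_;
};
```

The benefit for the scalability question above: N actors means N independent mailboxes, so there is no shared array of state to copy or lock globally.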


I wrote a simple "connector" for TestStand a few years ago.

It was just a VI launcher with a standard calling convention (from TestStand, for configuration of the called VI) and a standard output back to TestStand. It would dynamically call any VI with the appropriate front-panel connectors, with parameters supplied from TestStand. You could call any VIs (DVMs, frequency generators, etc.) which were wrapped in a normalising VI that created a standard interface for the launcher to call and formatted the data into the standard format to be returned. All configuration, reporting and execution was in TestStand. It took about 5 minutes to write the launcher and about 5 minutes to write a wrapper VI for each device. The caveat here is that there was already a device VI to wrap.

This is a similar technique to VI Package Manager, which runs the pre- and post-install VIs.
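The launcher-plus-wrapper idea above can be sketched in a few lines of C++, under obvious assumptions: every instrument hides behind one standard signature (config string in, result string out), and a tiny dispatcher looks wrappers up by name. `Launcher`, `WrapperFn` and the "DVM" example are illustrative names only.

```cpp
#include <functional>
#include <map>
#include <string>

// Hypothetical standard calling convention: config in, result out.
// Each "wrapper VI" becomes one function of this shape.
using WrapperFn = std::function<std::string(const std::string& config)>;

class Launcher {
public:
    void registerWrapper(const std::string& device, WrapperFn fn) {
        wrappers_[device] = std::move(fn);
    }
    std::string call(const std::string& device, const std::string& config) const {
        auto it = wrappers_.find(device);
        return it == wrappers_.end() ? "ERROR: no wrapper" : it->second(config);
    }
private:
    std::map<std::string, WrapperFn> wrappers_;
};
```

As in the TestStand connector described above, adding a new instrument costs only one small wrapper conforming to the convention; the launcher itself never changes.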

