
plugin HAL



Hi,

I need help with the redesign of an old framework:

A sequencer has a generic plugin front panel for each family of instrument drivers.

Each of the generic plugins manages a front panel while interacting with a generic family driver, which is actually a DLL server that accesses the instrument.

That DLL server accesses a DLL that wraps the manufacturer's VIs behind the family's standard VIs.

For example:

Instrument manufacturer Init VI -> instrument family standard Initialize VI -> specific instrument standard DLL -> generic family driver server DLL -> generic family front panel plugin -> sequencer

The result is the ability to build a sequence with a generic family front panel + generic driver, and to replace the generic driver with a specific one once it is ready.

It shortens development time, since the sequence and the driver can be written in parallel while maintaining a bug-free standard that keeps all the other benefits of the sequencer and the already-written front-end logic.

At the moment the family driver server is written in C, because it dates back to when LabVIEW lacked many features.

Since this reminded me of the Actor Framework HAL example, I'm thinking about an actor that will hold the family front panel and an LVLIBP generic driver, with inheriting LVLIBPs that use the manufacturer's VIs to implement the standard family VIs.

Is there a simpler solution that would let me reuse as much of the framework as possible?

Thanks in advance.

Link to comment

For my part, I use a VI Server call per function per instrument.

The VI that is called dynamically is in another .exe.

This .exe holds all the drivers for all the instruments. Instrument selection and availability are handled through a .ini file.
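For illustration, a hypothetical fragment of such a .ini file might look like this (the section and key names here are made-up placeholders; the real keys are whatever the framework defines):

```ini
; hypothetical entries -- real keys depend on the framework
[PSU1]
Model     = "Keysight E3632A"
Resource  = "ASRL4::INSTR"
Available = TRUE

[Eload1]
Model     = "Chroma 63600"
Resource  = "GPIB0::5::INSTR"
Available = FALSE
```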

Currently I support 22 different hardware types: JTAG, Eload, PSU, Hipot, I2C, CAN bus, SPI, oscilloscope, switch, DAQ, Ethernet, serial port, etc.

Each instrument type supports multiple manufacturers and models.

When a call is made through VI Server, the capability of the selected instrument is evaluated, and an error is reported if the request is out of range.

This is roughly what I designed here. I decided to stay 100% LabVIEW.

Link to comment

Let's see if I understood you correctly:

You have an exe that supports all the device families, while the specific instruments are listed in an ini file.

In the exe, if the logic wants to set, for example, the Eload to CC 0.2 A and load = on, it will call "init", "set cc" and "set load" via a dynamic VI Server call by reference to a driver you built for the specific model, and that driver must follow that family's API?

If the VI Server call can't see the correct inputs/outputs from the API, you'll get an error for that specific function for that instrument in the exe (greyed out, maybe).

I too wish to keep it all LabVIEW. 

Question 1: Is my description correct?

Question 2: In this scenario, how do you free the memory, prevent one driver from making the whole application hang, or even work with several instruments in parallel?

Question 3: If the driver you are calling uses a global, for example, and instead of a DLL that stays in memory (holding that global) from init to close you call a simple VI, how do you keep the connection ID alive?

Link to comment

The call is done in the following steps:

Init: create a reference to every function VI for that instrument (kept in a get/set).

Function call: from the get/set, use the function's reference to send parameters to or get parameters from the instrument.

Close: close all the references from the get/set.
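Since LabVIEW is graphical, here is a rough text-language sketch of that open/call/close pattern in Python; the dictionary stands in for the get/set holding the VI references, and every name is a hypothetical placeholder.

```python
# (instrument, function) -> callable; plays the role of the get/set VI
_refs = {}

def init(instrument, functions):
    """'Open a reference' to every function VI of that instrument."""
    for name, fn in functions.items():
        _refs[(instrument, name)] = fn

def call(instrument, function, *args):
    """Look up the stored reference and invoke it with the parameters."""
    return _refs[(instrument, function)](*args)

def close(instrument):
    """Release all references belonging to that instrument."""
    for key in [k for k in _refs if k[0] == instrument]:
        del _refs[key]

# One prototype serves every PSU: the caller just picks the PSU number.
init("PSU3", {"enable_output": lambda on: print("PSU3 output:", on)})
call("PSU3", "enable_output", True)
close("PSU3")
```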

 

In the .exe that holds all the instruments, I have every instrument function in a separate re-entrant VI.

The physical init and close of the instrument are done by the executable, not by the caller.

This allows the caller to have the status of the instrument initialization at every function call, since I keep the error in another get/set. The time to initialize the instrument is spent by this executable, not by the caller.

This architecture allows me to auto-recover from any communication failure. It also allows me to have only one prototype for calling any instrument that shares the same function. For example, power supplies 0, 1, 2, 3, 4, 5, 6... all have an enable-output function; by setting the PSU# in the function call, I can do whatever I want on whatever instrument...

 

I do have 4 layers in my HAL, but I think the other two won't be necessary for you. The other two layers are:

Communication layer: a power supply can physically have 3 different outputs. I can access them virtually at the same time through different VI Server calls without sending 3 commands at the same time on the serial port (I made a queue at this level; see the sketch after this list).

Position layer: I have a test framework that lets me test anything in batch mode, unit mode or even asynchronous mode. For example, if I have 16 positions on my tester used by 16 different .exes, I don't want to program the 16 .exes differently. I just add one layer with a config file that tells which position uses which PSU#, for example.
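To show just the queue discipline of that communication layer, a minimal Python sketch (assuming nothing about the real implementation; the port write is stubbed out): several virtual outputs funnel into one worker, so only one command at a time reaches the physical serial port.

```python
import queue
import threading

port_q = queue.Queue()  # everything bound for the one physical port

def port_worker():
    while True:
        channel, cmd, reply = port_q.get()
        # ...here the real code would write cmd to the serial port
        # and read the instrument's answer...
        reply.put(f"OK ({channel}: {cmd})")  # stubbed response
        port_q.task_done()

threading.Thread(target=port_worker, daemon=True).start()

def send(channel, cmd):
    """Called concurrently for each virtual output; blocks until its turn."""
    reply = queue.Queue(maxsize=1)
    port_q.put((channel, cmd, reply))
    return reply.get()

# three "simultaneous" callers are serialized onto the one port
print(send("OUT1", "VOLT 3.3"))
print(send("OUT2", "VOLT 5.0"))
```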

So yes... a crazy design, but look at what I can do now: FCT, burn-in, programming, Hipot stations and much more, all with the same GUI. Only the test sequences (.exe) are called, depending on the product I want to test. The product is selected by the serial number and a call to the MES system. That lets me test different product part numbers and versions on the same tester with the same GUI. :)

3 years of design for this crazy system... and now I cannot see anyone in the industry doing something better than that. A product test development that uses over 300 measurement points and 20 different instruments with various technologies takes me only one week, and it's deployed and validated in production.

 


Benoit

Link to comment

Not trying to brag, but I can do about the same. Only in my case the caller, which is a sequencer with lots of functionality, calls a DLL server for each family (think of it as a different exe per family in your case), while the GUI of that server is loaded as a plugin into the sequencer.

With all the benefits, it sometimes slows down development, since I can't just work with a new device using the manufacturer's VIs; I first need to fit it into my standard (but then the full-featured framework saves me time, since I don't need to rewrite the logic and GUI, I just reuse them).

Now that you know the Actor Framework, would you have done it differently using HAL/MAL + LVLIBP?

Do you sell your tool? Is there a video demonstrating it? Is there an easier way to add a driver while reusing a standard logic/GUI for that instrument's family?

Link to comment

The sequencer that you use, is it TestStand?

Actually, the design I did gives me more advantages than what is offered by NI, so no, I wouldn't design it differently.

When I want to add a power supply model/manufacturer to the list, I only have to modify the middle layer of the HAL by adding a case in each function. That case holds the driver from the manufacturer (LabVIEW, .dll or other).

I can't show you a video right now; I'm not sure my company would allow that. In the case of your sequencer, I believe no other software will call your driver, so I think you can remove the VI Server call layer. To keep the object in memory, you just need to keep the reference open; a get/set can do that. I think TestStand can also keep references in memory, but I'm not sure; I haven't used TestStand in a while. The cost is too high, and the development time for the same test is too long compared with my test framework.

Benoit

Link to comment

The sequencer is not TestStand; it is the main framework I was talking about.

I thought that by caller you referred to a sequencer.

However, this framework is getting old; thus, I had 2 options:

1. Recompile this huge framework for LV2018.

2. Take the main advantage it has over TestStand (standard drivers for most of the device families out there, far more than 22, with many drivers from different manufacturers already implemented), pull it out of my framework, and bring it into TestStand.

I'm trying to explore this second option in this post.

You got me interested when you talked about being able to run a batch file of tests against this exe without a sequencer, more like a step recorder, and to allow different batches to run simultaneously.

In my case, the sequencer does this kind of work.

I was never able to do that with your kind of engine exe unless I used AutoIt.

I love the concept of Xilinx Vivado. It is scripted in Tcl/Tk: you can record a test you do manually, and it will create a Tcl/Tk script you can run on a machine that doesn't have Vivado (which costs a lot).

How do you do it? Do you use Actors and record the messages?

Link to comment

OK. So let me see if I've got this straight.

You have a DLL "server", written in C, that represents a family of devices; say a DMM "server" DLL that encompasses Keithley, Agilent et al. (and maybe even different model numbers within manufacturers). This DLL server is a "translator" that unifies the interfaces between different models and calls the appropriate device-specific initialisation functions, so that your "sequencer" just calls InitDVM (say), sets a range and reads the values without any knowledge of the underlying, instrument-specific formatting.

You are asking whether the DLL "server" can be rewritten in LabVIEW rather than the C implementation? And, because "actors" are dynamic instantiations of objects, whether they could be used for this?

Link to comment

More or less.

The DLL server for each family handles the logic of that family, and it expects you to give it another DLL of VIs that implements its API functions for a specific instrument model.

The sequencer declares which instruments are available and opens standard panels to configure test steps using those instruments, among the other utilities that the sequencer comes with.

The question is about the DLL C server, but also about the entire design. What would you change in it? Would adding simple panels for device families help a factory-floor non-programmer write his/her own tests in TestStand?

I know you hate OO, and probably actors in particular, so I won't narrow you down.

Having a standard panel to handle DMMs, and loading different implementations into that panel without having to change logic, datatypes or GUI, is the specific case I have to deal with.

I thought OO inheritance might fit here: the standard panel will use API functions from an LVLIBP, and the specific instruments will inherit from the actor in this LVLIBP.

However, you could just as well have Python and MySQL run a test sequence on a station, making API calls that are implemented for the specific station's instruments according to an XML file.

The reason I have that DLL C server in my framework is that I didn't want the general panel in the sequencer to hold the connection to the instrument and the low-level logic, mainly because I wanted to be able to kill it and free the memory. Correct me if I'm wrong, but this is easily done when the engine that keeps the hardware connection alive is external, in a different scope than the sequencer exe.

Link to comment
1 hour ago, 0_o said:

More or less. [...]

The reason for my clarification wasn't really to do with architecture; it was more to find out what can be replaced piecemeal. Full refactors always run into problems, but if you have well-defined partitions, you can replace parts over time with far less risk.

Two things spring to mind.

Firstly: if the C server is dynamically loading the instrument-specific DLL dependencies based on instrument selection, then that can't really be done from LabVIEW. If you are planning on using an actor architecture to try to replicate this kind of behaviour, you will be jumping through many sub-optimal hoops just to achieve a similar outcome.

Secondly:

If these are DMMs, SAs, VNAs etc., then they probably support SCPI. That was invented to take out all the instrument-specific driver requirements, so you should only need one driver (TCP/IP?), and that means you can just create simple text files with the raw SCPI commands for different configurations, regardless of manufacturer/model, and squirt them directly to the device. The device will tell you what is allowed and not allowed (a DVM won't have a current range, for example). Once you have a command squirter, the "sequencer" just becomes a file manager, and you can use your own custom sequencer or TestStand; it doesn't matter, although TestStand UIs are horrendous for production. Most modern devices also have recipe storage, which is an initial configuration squirt (say at shift start or device replacement) and then just a recipe-select command. It would be a judgement call as to how much lives in the device and how much in your files.
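For what it's worth, a command "squirter" can be tiny. A minimal Python sketch, assuming a raw-socket SCPI device on the conventional port 5025; the host address, file name and commands are made-up examples:

```python
import socket

def squirt(host, command_file, port=5025, timeout=5.0):
    """Send each SCPI line from a text file to the device, reading back
    replies for query commands (those containing '?')."""
    with socket.create_connection((host, port), timeout=timeout) as s:
        f = s.makefile("rw", newline="\n")
        for line in open(command_file):
            cmd = line.strip()
            if not cmd or cmd.startswith("#"):   # skip blanks/comments
                continue
            f.write(cmd + "\n")
            f.flush()
            if "?" in cmd:                       # queries produce a reply
                print(cmd, "->", f.readline().strip())

# e.g. a configuration file might contain:
#   *IDN?
#   CONF:VOLT:DC 10,0.001
#   READ?
squirt("192.168.1.50", "dmm_setup.scpi")
```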

If not all devices support SCPI, then I would look at reusing the current code for only those particular devices, with a view to replacing them with SCPI devices at the first opportunity.

You can do all that in any architecture you like (OOP or otherwise) and it will be simple, easy to maintain and easily configurable (maybe too easily configurable?). You can make it more complicated than I have outlined based on specific needs, but it will work regardless. Addressing is the one thing you have to solve in your own code, and that's it.

Link to comment
On 1/24/2019 at 8:12 PM, ShaunR said:

Firstly: if the C server is dynamically loading the instrument-specific DLL dependencies based on instrument selection, then that can't really be done from LabVIEW. If you are planning on using an actor architecture to try to replicate this kind of behaviour, you will be jumping through many sub-optimal hoops just to achieve a similar outcome.

Can you elaborate? Which hoops, for example?

Link to comment
4 hours ago, 0_o said:

Can you elaborate? Which hoops, for example?

Well, the CLFN can't dynamically load. By that I mean you cannot load a DLL, get pointers to the function calls, and call the functions with the appropriate arguments. If you could do that, it would give you the capability to map similar functions (e.g. initialisation) across multiple DLLs with a single call from the application. The closest you can get is to apply a path at run time to the CLFN, which is more like a just-in-time static load of a single function. So for each function you want to call, you will have to have a CLFN, supply it with a path to the particular DLL that the function resides in, and have one for each "similar" function in each of the DLLs you want to call. If all the DLLs have the same binary interface, then it's not a problem (think of the same DLL compiled 64-bit and 32-bit). If they are all different, then you end up with a huge number of VIs with CLFNs (one for each DLL function variant).
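To make the contrast concrete, this is the dynamic-load pattern in a textual language (Python's ctypes; the DLL names, function name and prototype are all hypothetical). The lookup-by-name step is the part a CLFN cannot do, which is why it needs one CLFN per function per DLL variant:

```python
import ctypes

def init_instrument(dll_path, func_name, resource):
    lib = ctypes.CDLL(dll_path)          # LoadLibrary/dlopen at run time
    fn = getattr(lib, func_name)         # GetProcAddress/dlsym by name
    fn.argtypes = [ctypes.c_char_p]      # assumed prototype
    fn.restype = ctypes.c_int            # assumed: 0 means success
    return fn(resource.encode())

# Map the "same" init across different vendors' DLLs with one call site:
for dll in ("acme_dmm.dll", "globex_dmm.dll"):   # hypothetical names
    status = init_instrument(dll, "InstrInit", "GPIB0::22::INSTR")
```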

Link to comment

Thus, I thought about leaving the DLL behind and moving to LVLIBP actors.

The specific instruments will inherit from the top-level LVLIBP, which acts as an API.

In the sequencer's panel, I call the API functions but load, through inheritance, the specific driver instance.

This way I'll even remove a layer: the sequencer's panel is still there, but it calls the specific inheriting LVLIBP that overrides the low-level functions while still getting the high-level functions from the parent API LVLIBP actor.
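As a hedged text-language sketch of that layering (Python standing in for the LVLIBPs; every class and method name is a hypothetical placeholder), the panel codes only against the parent API while dynamic dispatch routes to the child's low-level overrides:

```python
class DmmFamily:
    """Parent API lvlibp: owns the high-level logic."""
    def measure_voltage(self):
        self._configure()        # low-level steps dispatch to the child
        return self._read()
    def _configure(self):
        raise NotImplementedError
    def _read(self):
        raise NotImplementedError

class AcmeDmm(DmmFamily):
    """Inheriting lvlibp for one specific model."""
    def _configure(self):
        print("ACME-specific setup")
    def _read(self):
        return 1.234             # would wrap the manufacturer's VIs

def panel_read(dmm: DmmFamily):
    """The sequencer's panel only ever sees the parent API."""
    return dmm.measure_voltage()

print(panel_read(AcmeDmm()))     # swap in any other child, no GUI changes
```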

Link to comment
17 hours ago, 0_o said:

Thus, I thought about leaving the DLL behind and moving to LVLIBP actors. [...]

It depends on what the DLL server is doing for you. You did say that the reason for the DLL server in the first place was LabVIEW limitations; many of them still exist. If the low-level drivers for specific devices are so obnoxious that they have features that can only be implemented in C/C++ (like callbacks), then you will get stuck. It is for these reasons that Rolf prefers wrapper DLLs between LabVIEW and other DLLs.

If, however, you go the SCPI route then you can implement it all in LabVIEW, packed libraries or not.

Link to comment

The current version of the sequencer uses VISA and SCPI. It even allows me to add functions to the API through the SCPI standard. The drivers can use DLLs and .NET DLLs, but those calls can't be extended beyond the API.

I wanted the redesign partly because new drivers come with .NET DLLs, and extending the API through SCPI won't always work.

Inheritance, on the other hand, will allow me to extend the API, but it gets more complex if the GUI panel is also inherited; what happens to the extra controls if I replace the driver in the sequencer?

I know LVLIBPs are fragile, but I'm used to them, and so far I can't see why I must use a C DLL as a layer between the sequencer and the driver.

Can you refer me to Rolf's original post on wrapper DLLs for LabVIEW? There are lots and lots of them out there.

Link to comment
5 hours ago, 0_o said:

The current version of the sequencer uses VISA and SCPI. It even allows me to add functions to the API through the SCPI standard. The drivers can use DLLs and .NET DLLs, but those calls can't be extended beyond the API.

I'm not really sure what this means. SCPI is a command syntax (strings), usually sent over TCP/IP, so where you talk about adding functions "through the SCPI standard"... it's a little bewildering. The aim is to get completely away from DLLs and drivers where possible. Where that's not possible, one sometimes makes an SCPI-compliant intermediary translator for that device. Is this what your server DLL is doing?

5 hours ago, 0_o said:

Inheritance, on the other hand, will allow me to extend the API, but it gets more complex if the GUI panel is also inherited; what happens to the extra controls if I replace the driver in the sequencer?

I have no view on LVLIBPs. I don't use them; I know some who do. If a developer wants to use them, that's up to them. I do have very strong views on .NET, ActiveX and DLLs, though, and the rule of thumb is: avoid whenever possible. SCPI is one way I avoid them in multi-device architectures, because it only requires TCP/IP and string manipulation for hundreds of devices. But here you are basically talking about device-specific property pages. I can see the OOP argument for it; I've yet to see a robust implementation that doesn't require a complete rewrite of the base classes as soon as a new device needs to be supported, but I understand the "theoretical" appeal. Personally, I prefer a database solution to this.

6 hours ago, 0_o said:

Can you refer me to Rolf's original post on wrapper DLLs for LabVIEW? There are lots and lots of them out there.

I was merely pointing out Rolf's preference for DLL wrappers around other DLLs, which greatly simplifies the LabVIEW interface code. That is in contrast to my preference for direct implementation in LabVIEW without intermediate wrappers. Both have their pros and cons. For me, I just don't want to have to recompile another intermediary every time the upstream binary changes, and I'd rather let the user replace the binary directly if they really want to. That means more complicated LabVIEW code but less forward maintenance.

I'm sure Rolf will chime in if there is some specific information you require from his implementations.

Link to comment
5 minutes ago, ShaunR said:

Personally, I prefer a database solution to this.

This is actually the real alternative I'm considering:

I thought about leveraging MongoDB with Kibana, and Celery with XML-RPC.

Python will load from the DB an API sequence that suits the specific station and the specific test, while the API driver is also preloaded onto the station according to the DB (it can be a Python driver or whatever).

It bypasses the whole OO HAL architecture.

As an OO devotee, it is hard for me to go down the database path, but I must admit it is much less breakable.

However, that would make me give up most of my sequencer and rewrite much of it from scratch.

Link to comment
11 hours ago, 0_o said:

I thought about leveraging MongoDB with Kibana, and Celery with XML-RPC.

Python will load from the DB an API sequence that suits the specific station and the specific test, while the API driver is also preloaded onto the station according to the DB (it can be a Python driver or whatever).

I'm curious how this would work in more detail, if you could share. It sounds interesting, but it also sounds like... a lot. For example, I'm not sure where Kibana fits in, and I'm curious what Mongo gets you that you couldn't get with a more common database like Postgres.

This kind of reminded me of what you are describing. It wasn't really applicable for me, but it came to mind when I read your post. It looks like their "driver" can (at least in this case) just be an ini file.

Since it sounds like you're open to Python, this has always been on my "neat, but I don't have a use for it now" list: https://airflow.apache.org/
It's a task graph, and you can use something like Celery, ZeroMQ or AMQP to move data to those tasks.

 

Also, on the topic of Kibana + databases, this tool is way cool: https://github.com/apache/incubator-superset
It's basically a graphical SQL editor in Python/HTML which talks to any SQL DB Python can talk to, with a bunch of cool visualizations. I've not used it in production, but it's similar to (and 1000x better than) something we had built in-house; I demoed it to some folks and they liked it almost as much as I did. I know it's not related to this topic at all, but I like it.

Link to comment

Python is like a candy store, and it is so easy to deploy a package and start using it.

I'll try to explain, but keep in mind that I prefer working with LabVIEW end to end, mainly because of maintenance 5 years from now. I don't want to have to employ a C, Python or whatever-language programmer forever and ever.

Building a well-documented and automated tool in one language makes development somewhat limited, but the result should be much more stable in the long run, without issues arising from bad communication between departments.

Let's say you have several generic test benches, with different devices in them, that can test different UUTs.

There are versioned generic drivers for those test-benches that can be operated via an API.

I'll open the browser and communicate with the server back and forth through XML-RPC. The server will decide which generic driver to deploy, and through Celery it will decide which UUT is going to be tested where, when and by whom.

The server will send the relevant flow of the UUT test.

The DB with the matchmaking of station + UUT + test sequence is MongoDB; since the structures are not as rigid as in SQL, it fits the development in a much more harmonic way.

Notice that this way a control in a function doesn't have to carry all the inheritance limitations of an OO HAL. You simply deploy the relevant test sequence.
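To make the matchmaking concrete, a hypothetical document shape and lookup with pymongo (the collection layout and every field name are invented for illustration, and it assumes a MongoDB instance on localhost):

```python
from pymongo import MongoClient

db = MongoClient("mongodb://localhost:27017")["factory"]

# One document ties a station, a UUT and its test sequence together.
db.test_plans.insert_one({
    "station": "bench-07",
    "uut": {"part_number": "PN-1234", "version": "B"},
    "driver": {"name": "psu_generic", "version": "2.1"},
    "sequence": [
        {"api": "psu.set", "args": {"volts": 3.3, "amps": 0.2}},
        {"api": "dmm.read", "limit": {"min": 3.2, "max": 3.4}},
    ],
})

# The server picks the plan that suits the specific station and UUT.
plan = db.test_plans.find_one({"station": "bench-07",
                               "uut.part_number": "PN-1234"})
```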

Finally, Kibana will create reports from the collected results using recipes. These are fast, again because the MongoDB is tailored to the development and doesn't enforce an SQL schema that might be slow for a future query the design was not optimized for, and that is nearly impossible to optimize at this stage of the development.

Factory-floor results accumulate fast, and in a matter of 5 years it will be hard for an SQL query to run within the timeouts and optimizations you set when you designed the system.

A Kibana recipe tailored to a MongoDB that is in harmony with the software design will be fast even over huge datasets.

  

Link to comment

None of that really solves your current problem; if anything, it makes it worse, with assumptions and complete rewrites. Your risk assessment should be screaming at you. If you consider exploitation to be separate from acquisition, then a natural partition will form where you can add different exploitation methods as required at a later date.

Link to comment
3 hours ago, 0_o said:

I understand it will require more rewriting, and I prefer not to go down that path.

I'm not sure I understood what you meant by exploitation and acquisition (I guess I understand the latter).

I was referring to:

On 1/30/2019 at 8:51 AM, 0_o said:

Finally, Kibana will create reports from the collected results using recipes. These are fast, again because the MongoDB is tailored to the development and doesn't enforce an SQL schema that might be slow for a future query the design was not optimized for, and that is nearly impossible to optimize at this stage of the development.

Factory-floor results accumulate fast, and in a matter of 5 years it will be hard for an SQL query to run within the timeouts and optimizations you set when you designed the system.

Which is exploitation of acquired data: reporting, statistical analysis and trending.

But I guess you misunderstood my reference to databases for property pages. This is where I leverage a relational database to give different "views" of devices and easily present the tests and configurations that are applicable to those devices. A relational database is ideal for this purpose. Somewhere on lavag.org I did a simple example a while back of converting an ini-file configuration system to a database, which is a similar concept (I can't seem to find it now).
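For illustration only, a minimal sketch of that relational idea using SQLite from Python; the schema is invented and assumes nothing about the original lavag.org example:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE device     (id INTEGER PRIMARY KEY, model TEXT, family TEXT);
CREATE TABLE capability (device_id INTEGER REFERENCES device(id),
                         name TEXT, units TEXT, min REAL, max REAL);
INSERT INTO device     VALUES (1, 'DMM-100', 'DMM');
INSERT INTO capability VALUES (1, 'voltage range', 'V', 0, 1000),
                              (1, 'current range', 'A', 0, 10);
""")

# The "view" of one device drives what its configuration page shows.
for row in con.execute("""SELECT c.name, c.units, c.min, c.max
                          FROM capability c
                          JOIN device d ON d.id = c.device_id
                          WHERE d.model = 'DMM-100'"""):
    print(row)
```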

Link to comment
