

0_o last won the day on January 31


About 0_o

  • Rank
    Very Active

Profile Information

  • Gender
    Not Telling

LabVIEW Information

  • Version
    LabVIEW 2016
  • Since

Recent Profile Visitors

1,106 profile views
  1. 0_o

    plugin HAL

    I understand it will require more rewriting, and I'd prefer not to go down that path. I'm not sure what you meant by exploitation and acquisition (I think I understand the latter).
  2. 0_o

    plugin HAL

    Python is like a candy store: it is so easy to deploy a package and start using it. I'll try to explain, but keep in mind that I prefer working with LabVIEW end to end, mainly because of maintenance five years from now. I don't want to employ a C, Python, or whatever-language programmer forever. Building a well-documented and automated tool in one language makes the development somewhat limited, but the result should be much more stable in the long run, without issues arising from bad communication between departments.

    Let's say you have several generic test benches with different devices in them that can test different UUTs. There are versioned generic drivers for those test benches that can be operated via an API. I'll open the browser and communicate with the server back and forth through XML-RPC. The server will decide which generic driver to deploy, and through Celery it will decide which UUT is going to be tested where, when, and by whom. The server will send the relevant UUT test flow.

    The DB with the matchmaking of station + UUT + test sequence is MongoDB; since the structures are not as rigid as in SQL, it fits the development in a much more harmonic way. Notice that this way a control in a function doesn't have to carry all the inheritance limitations of an OO HAL: you simply deploy the relevant test sequence.

    Finally, Kibana will create reports from the collected results using recipes, which again are fast, because the MongoDB is tailored to the development and doesn't enforce an SQL design that might be slow for a future query the schema was never optimized for, and which is nearly impossible to optimize at that stage of the development. Factory-floor results accumulate fast, and within five years it will be hard for an SQL query to finish within the timeouts and optimizations you set when you designed the system. A Kibana recipe tailored for a MongoDB that is in harmony with the software design will stay fast even over huge datasets.
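    The station + UUT + sequence matchmaking described above can be sketched in a few lines; this is a hypothetical illustration where plain dicts stand in for MongoDB documents, and all field and function names (`find_sequence`, `matchmaking`, the station/UUT values) are invented for the sketch:

```python
# Hypothetical sketch: matchmaking of station + UUT + test sequence using
# schemaless documents. Dicts stand in for MongoDB records; nothing here is
# from a real system.

# Each document pairs a station and a UUT with the test sequence to deploy.
matchmaking = [
    {"station": "bench-01", "uut": "board-A", "sequence": ["init", "set_cc", "measure"]},
    {"station": "bench-02", "uut": "board-B", "sequence": ["init", "sweep", "measure"]},
]

def find_sequence(station, uut):
    """Return the test sequence for a station/UUT pair, or None if unmatched."""
    for doc in matchmaking:
        if doc["station"] == station and doc["uut"] == uut:
            return doc["sequence"]
    return None

print(find_sequence("bench-01", "board-A"))  # ['init', 'set_cc', 'measure']
```

    The point of the schemaless form is that adding a new field to one document (say, an operator name) requires no migration of the others.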
  3. 0_o

    plugin HAL

    This is actually the real alternative I'm considering: I thought about leveraging MongoDB with Kibana, and Celery with XML-RPC. Python will load an API sequence from the DB that suits the specific station and the specific test, while the API driver is also preloaded onto the station according to the DB (it can be a Python driver or whatever). It bypasses the whole OO HAL architecture. As an OO devotee, it is hard for me to go the SQL path, but I must admit it is much less breakable. However, it would make me give up most of my sequencer and write much of it from scratch.
  4. 0_o

    plugin HAL

    The current version of the sequencer uses VISA and SCPI. It even allows me to add functions to the API through the SCPI standard. The drivers can use DLLs and .NET DLLs, but those calls can't be extended beyond the API. I wanted the redesign partly because new drivers come with .NET DLLs, and extending the API through SCPI won't always work. Inheritance, on the other hand, will allow me to extend the API, but it is a bit more complex if the GUI panel is also inherited, and what happens to the extra controls if I replace the driver in the sequencer? I know LVLIBPs are fragile, but I'm used to them, and so far I can't see why I must use a C DLL as a layer between the sequencer and the driver. Can you point me to Rolf's original post about wrapper DLLs for LabVIEW? There are lots and lots of them out there.
  5. 0_o

    plugin HAL

    Thus, I thought about leaving the DLL behind and moving to LVLIBP Actors. The specific instruments will inherit from the top-level LVLIBP that acts as an API. In the sequencer's panel I call the API functions but load, through inheritance, the specific driver instance. This way I'll even remove a layer: the sequencer's panel is still there, but it calls the specific inherited LVLIBP, which overrides the low-level functions while still getting the high-level functions from the parent API LVLIBP Actor.
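    The inheritance scheme described above (parent API with high-level functions, child overriding only the low-level ones) is essentially the template-method pattern. A minimal Python sketch of the same idea, with all class and method names hypothetical and a canned reading instead of real instrument I/O:

```python
# Sketch of the parent-API / child-driver split, assuming a DMM family.
# DmmApi plays the role of the top-level API LVLIBP; Keysight34461A plays
# the role of a specific-instrument child LVLIBP. Names are illustrative.
from abc import ABC, abstractmethod

class DmmApi(ABC):
    """Parent API: high-level logic lives here and is inherited unchanged."""

    def measure_voltage(self):
        # High-level function: only calls the overridable low-level primitives.
        self._write("MEAS:VOLT?")
        return self._read()

    @abstractmethod
    def _write(self, cmd): ...

    @abstractmethod
    def _read(self): ...

class Keysight34461A(DmmApi):
    """Child driver: overrides only the low-level functions."""

    def __init__(self):
        self.last_cmd = None

    def _write(self, cmd):
        self.last_cmd = cmd   # a real driver would talk VISA/SCPI here

    def _read(self):
        return 1.234          # canned reading for the sketch

dmm: DmmApi = Keysight34461A()  # the sequencer holds only the parent type
print(dmm.measure_voltage())    # 1.234
```

    The sequencer's panel would only ever reference `DmmApi`, so swapping in another child driver changes nothing upstream.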
  6. 0_o

    plugin HAL

    Can you elaborate on which hoops, for example?
  7. 0_o

    plugin HAL

    More or less. The DLL server for each family handles the logic of that family, and it expects you to give it another DLL of VIs that implements its API functions for a specific instrument model. The sequencer declares which instruments are available and opens standard panels to configure test steps using those instruments, among the other utilities the sequencer comes with.

    The question is about the C DLL server but also about the entire design. What would you change in it? Would adding simple panels for device families help a factory-floor non-programmer write his/her own tests in TestStand? I know you hate OO, and probably Actors in particular, so I won't narrow you down. Having a standard panel to handle DMMs and loading different implementations into that panel without having to change logic, datatypes, or GUI is the specific case I have to deal with. I thought OO inheritance might fit here: the standard panel will use API functions from an LVLIBP, and the specific instruments will inherit the Actor in this LVLIBP. However, you could just as well have Python and MySQL run a test sequence on a station while making API calls that are implemented for the specific station's instruments according to an XML file.

    The reason I have that C DLL server in my framework is that I didn't want the general panel in the sequencer to hold the connection to the instrument and the low-level logic, mainly because I wanted to be able to kill it and free the memory. Correct me if I'm wrong, but this is easy when the engine that keeps the hardware connection alive is external, in a different scope than the sequencer EXE.
  8. 0_o

    plugin HAL

    The sequencer is not TestStand; it is the main framework I was talking about. I thought that by "caller" you referred to a sequencer. However, this framework is getting old, so I had two options:

    1. Recompile this huge framework for LV2018.
    2. Take the main advantage it has over TestStand (standard drivers for most of the device families out there, many more than 22, with many drivers from different manufacturers already implemented), pull it out of my framework, and bring it into TestStand.

    I'm trying to explore the second option in this post. You got me interested when you talked about being able to run a batch file of tests against this EXE without a sequencer, more like a step recorder, and to allow different batches to run simultaneously. In my case, the sequencer does that kind of work; I was never able to do it with your kind of engine EXE unless I used AutoIt. I love the concept of Xilinx Vivado: it is written in Tcl/Tk, and you can record a test you perform manually and it will create a Tcl/Tk script you can run on a machine that doesn't have Vivado (which costs a lot). How do you do it? Do you use Actors and record the messages?
  9. 0_o

    plugin HAL

    Not trying to brag, but I can do about the same. Only in my case the caller, which is a sequencer with lots of functionality, calls a DLL server for each family (think of it as a different EXE for each family in your case), while the GUI of that server is loaded as a plugin into the sequencer. With all the benefits, it sometimes slows down development, since I can't just work with a new device using the manufacturer's VIs; I first need to fit it into my standard (but then the full-featured framework saves me time, since I don't need to rewrite the logic and GUI, I just reuse them).

    Now that you know the Actor Framework, would you have done it differently using HAL/MAL + LVLIBP? Do you sell your tool? Is there a video demonstrating it? Is there an easier way to add a driver while reusing a standard logic/GUI for that instrument's family?
  10. 0_o

    plugin HAL

    Let's see if I understood you correctly: you have an EXE that supports all the device families, while the specific instruments are listed in an INI file. In the EXE, if the logic wants to set, for example, the Eload to CC 0.2 A with load = on, it will call "init", "set cc", and "set load" via a dynamic VI Server call by reference to a driver you built for the specific model, and that driver must follow that family's API? If the VI Server call can't see the correct inputs/outputs from the API, you'll get an error for that specific function for that instrument in the EXE (greyed out, maybe). I too wish to keep it all LabVIEW.

    Question 1: Is my description correct?
    Question 2: In this scenario, how do you free the memory, or prevent one driver from making the whole application hang, or even work with several instruments in parallel?
    Question 3: If the driver you are calling uses a global, for example, and you didn't call a DLL that stays in memory with the global from init to close but rather a simple VI, how do you keep the connection ID alive?
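    The pattern read out of the description above (look a driver up by model name, check it against the family API before use, keep the connection ID alive inside the driver instance from init to close) can be sketched in Python. Everything here is hypothetical: the registry, the `FakeEload` driver, and the required-method list are invented for the sketch:

```python
# Hedged sketch: dynamic driver lookup with an API conformance check,
# standing in for a VI Server call-by-reference against a family API.
FAMILY_API = ("init", "set_cc", "set_load", "close")  # required Eload methods

class FakeEload:
    """Hypothetical driver: the instance holds the connection id, not a global."""
    def init(self):          self.conn_id = 42    # pretend session handle
    def set_cc(self, amps):  self.amps = amps
    def set_load(self, on):  self.on = on
    def close(self):         self.conn_id = None

DRIVERS = {"FakeEload": FakeEload}   # stand-in for the ini-file listing

def load_driver(model):
    """Instantiate a driver only if it implements the whole family API."""
    cls = DRIVERS[model]
    missing = [m for m in FAMILY_API if not callable(getattr(cls, m, None))]
    if missing:
        # analogous to the greyed-out/broken call you'd get from VI Server
        raise TypeError(f"{model} does not implement: {missing}")
    return cls()

eload = load_driver("FakeEload")
eload.init(); eload.set_cc(0.2); eload.set_load(True)
print(eload.conn_id)   # 42: alive until close(), because the instance holds it
```

    This also answers Question 3 in spirit: if the connection ID lives in a long-lived object rather than in a VI's global space, it survives between calls without needing a DLL pinned in memory.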
  11. 0_o

    plugin HAL

    Please ask for clarification if something is unclear. By "family" I mean DMM/DIO/scope/...
  12. Hi, I need help with the redesign of an old framework. A sequencer has a general plugin front panel for each family of instrument drivers. Each general plugin manages a front panel while interacting with a general family driver, which is actually a DLL server that accesses the instrument. That DLL server accesses a DLL that wraps the VIs from the instrument manufacturer behind family-standard VIs. For example:

    Instrument manufacturer Init VI -> instrument family standard Initialize VI -> specific instrument standard DLL -> general family driver server DLL -> general family front panel plugin -> sequencer

    The result is the ability to build a sequence with a general family front panel + general driver, and then replace the general driver with a specific driver when it is done. It shortens development time, since the sequence and the driver can be written in parallel while maintaining a bug-free standard that gets all the other benefits of the sequencer and the already-written front-end logic. At the moment the family driver server is written in C, because it was written when LabVIEW lacked many features.

    Since this reminded me of the Actor HAL example, I'm thinking about an Actor that will hold the family front panel and an LVLIBP general driver, with inheriting LVLIBPs that use the manufacturer's VIs to implement the standard family VIs. Is there a simpler solution that will let me reuse as much of the framework as possible? Thanks in advance.
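    The wrapping chain described in that post (manufacturer VI -> family-standard VI -> driver server -> panel -> sequencer) is an adapter stack; collapsed to two Python layers it can be sketched as follows, with all class and method names invented for the illustration:

```python
# Hypothetical sketch of the family-standard wrapper layer: the sequencer
# only ever sees the standard API, and each vendor gets its own thin adapter.
class ManufacturerDmm:
    """Stands in for the raw vendor driver with its own vocabulary."""
    def open_session(self):
        return "sess-1"
    def read_dc_volts(self, sess):
        return 3.3

class FamilyStandardDmm:
    """Family-standard wrapper: the only API the sequencer talks to."""
    def __init__(self, vendor):
        self._vendor = vendor
        self._sess = None
    def initialize(self):
        # maps the family-standard 'Initialize' onto the vendor's call
        self._sess = self._vendor.open_session()
    def measure(self):
        return self._vendor.read_dc_volts(self._sess)

# Swapping vendors later means writing a new adapter, not a new sequence.
dmm = FamilyStandardDmm(ManufacturerDmm())
dmm.initialize()
print(dmm.measure())   # 3.3
```

    The sketch shows why the sequence and the driver can be written in parallel: the sequence is coded against `FamilyStandardDmm`'s surface while the vendor adapter is filled in separately.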
  13. I'm referring to the migration tool between LabVIEW and LabVIEW NXG. If there are features that are not yet supported, or if my code uses an old LLB, for example, that won't be supported, or if the migration tool doesn't work properly and can't migrate some features correctly, that means I won't be able to move to NXG even if I wanted to. A new thread here with NXG issues will help make sure NXG is ready for us in a year or two, when we have to migrate. I'm going to try to migrate a very big project (just to check, in theory, what happens) and list the errors I get. Should I post it to the dark side or here?
  14. You convinced me not to try NXG at this time. My main concern is not being able to open a large project without it being broken. Should we start a new NXG thread with failed migration attempts, so that NI will make sure we'll be able to migrate smoothly in a year or two?
  15. Hi, LV NXG has gotten some new features that finally make me consider it, more so because of the LV2018 release bugs and the fact that I can't run LV2017 alongside LV8.5. It supports DAQmx, Vision, FPGA, SVN, TestStand, and much more. Mainly, I hate feeling left behind, stuck at LV2016, especially when it comes to Python support.

    However, I don't see a thread here about NXG with active users, so I don't know what is not yet supported or what the learning curve will be. Will I, again, be NI's guinea pig? I understand they have a code migration tool, yet I don't know how good it is for huge projects. Does it use VIs at all? Does it support scripting and OO? Can I use llb/lvlib/lvlibp/...? What are the benefits? Is the FP resizable?

    Migrating upstream is one thing; what about downgrading? Even regular LabVIEW can't handle a downgrade (just try writing a VI in 2016 with a conditional indexing tunnel and saving it back to 8.5, which didn't have this feature; LV8.5 will crash).

    Yours truly, Guinea pig SN: 23145211.
