Rolf Kalbermatter


Posts posted by Rolf Kalbermatter

  1. On 6/1/2024 at 12:36 PM, sam said:

    It's been a while since I've worked on sbRIO devices. Usually MAX is how you set up an sbRIO, but that option seems to be gone. Am I missing something? This sbRIO had 2019 on it before I formatted the drive; now I can browse to it, but the options to install are no longer present. I downloaded System Configuration/setup from NIPKG manager and all *.ipk files, but I'm a bit lost on how I can put an OS on this sbRIO.

    Any help is appreciated.

     

    Editing  to add:

    Some information: only LabVIEW 2019 and older can be installed the old way:

    https://knowledge.ni.com/KnowledgeArticleDetails?id=kA03q000001DthyCAC&l=en-US

    Installation for 2020 and newer is done differently. What I'm concerned about is whether all sbRIOs are supported with the new method. I'm guessing not:

    https://knowledge.ni.com/KnowledgeArticleDetails?id=kA03q000000x2jQCAQ&l=en-US

    end of edit.

    You have consistently hidden the actual type number of your sbRIO in your pictures. I think I can see a 963x in one place, but I'm not sure. Most 963x models, except the 9637 and 9638, are VxWorks based, and for those LabVIEW 2019 is the latest version with support. It also requires you to install CompactRIO software no later than 19.6. The 9637 has been supported since LabVIEW 2015 and CompactRIO 15.5, but the 9638 requires at least LabVIEW 2019 and CompactRIO 19.5.

  2. 11 hours ago, Dan Bookwalter N8DCJ said:

    If I build an exe using LabVIEW 2019 Pro that utilizes the Report Generation Toolkit, can it be run on a PC with just the LabVIEW runtime engine? And some of these PCs may have LabVIEW 2019 Full installed on them. Will it run in that case?

    Regards

    Dan

    It should. The Report Generation Toolkit VIs are either built in (HTML Report) or access the corresponding Word or Excel ActiveX component. However, the Office ActiveX component is version sensitive. LabVIEW uses dynamic dispatch to access these interfaces, but uses early binding for that: it determines the actual method IDs to call at compile time, rather than at runtime based on the method name. The consequence is that calls are slightly faster, but any version difference in the ActiveX interface sends the LabVIEW method call into the abyss. So the Office installation on the target machine has to be the same version as the one on the machine on which you built the application. The change to use late binding would have been fairly trivial for anyone with access to the LabVIEW source code, but alas it was never considered a pressing enough issue to let a developer spend a few hours on it.

    If I had had to do it, I would probably have left the early binding option in there and added a retry with late binding at runtime if the initial method ID call failed. Or, even fancier, given the ActiveX method call a menu option to select whether it should do early binding, late binding, or a combination of both with a retry if the early-bound call initially fails.
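    To illustrate the difference, here is a minimal sketch in C of what late binding amounts to at the COM level (assuming you already hold an IDispatch pointer to the Office object; the InvokeByName helper is purely illustrative). The method ID is looked up by name at call time through GetIDsOfNames, so a version difference in the interface turns into a catchable error instead of a call into the abyss:

    #include <windows.h>
    #include <oleauto.h>
    
    /* Late binding in plain C: resolve the method ID by name at runtime
       instead of baking a compile-time DISPID into the calling code. */
    HRESULT InvokeByName(IDispatch *disp, LPOLESTR name)
    {
        DISPID dispid;
        DISPPARAMS noArgs = { NULL, NULL, 0, 0 };
        HRESULT hr = disp->lpVtbl->GetIDsOfNames(disp, &IID_NULL, &name, 1,
                                                 LOCALE_USER_DEFAULT, &dispid);
        if (FAILED(hr))
            return hr;    /* method name unknown in this interface version */
    
        return disp->lpVtbl->Invoke(disp, dispid, &IID_NULL, LOCALE_USER_DEFAULT,
                                    DISPATCH_METHOD, &noArgs, NULL, NULL, NULL);
    }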

  3. 5 hours ago, Łukasz said:

    Thank you for your reply, 
    passing a struct seemed very intuitive, and I considered a similar approach. However, I couldn't find any examples of how to pass an array of structs, which is the main blocker for this idea. Have you seen this approach used anywhere?

    Pretty simple, unless you need to resize the array in the C code (a sketch of that follows below).

    You can let LabVIEW create the necessary code for the function prototype and any datatypes. Create a VI with a Call Library Node, create all the parameters you want and configure their types. For parameters where you want LabVIEW datatypes passed to the C code, choose Adapt to Type. Then right-click on the Call Library Node and select "Create C code". Choose where to save the resulting file, and voilà.

    This would then look something like this:

    /* Call Library source file */
    
    #include "extcode.h"
    
    #include "lv_prolog.h"
    /* Typedefs */
    typedef struct {
    	LStrHandle key;
    	int32_t dataType;
    	LStrHandle value;
    } TD2;
    
    typedef struct {
    	int32_t dimSize;
    	TD2 elt[1];
    } TD1;
    typedef TD1 **TD1Hdl;
    #include "lv_epilog.h"
    
    void ReadData(uintptr_t connection, TD1Hdl data);
    
    void ReadData(uintptr_t connection, TD1Hdl data)
    {
    
    	/* Insert code here */
    
    }

    Personally I do not like the generic datatype names, so I always rename them along these lines:

    /* Call Library source file */
    
    #include "extcode.h"
    
    #include "lv_prolog.h"
    /* Typedefs */
    typedef struct {
    	LStrHandle key;
    	int32_t dataType;
    	LStrHandle value;
    } KeyValuePairRec;
    
    typedef struct {
    	int32_t dimSize;
    	KeyValuePairRec elt[1];
    } KeyValuePairArr;
    typedef KeyValuePairArr **KeyValuePairArrHdl;
    #include "lv_epilog.h"
    
    void ReadData(uintptr_t connection, KeyValuePairArrHdl data);
    
    void ReadData(uintptr_t connection, KeyValuePairArrHdl data)
    {
        int32_t i = 0;
        KeyValuePairRec *p = (*data)->elt;
      
        for (; i < (*data)->dimSize; i++, p++)
        {
             /* access the fields of the current element here: */
             /* p->key, p->dataType, p->value */
        }
    }
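    If you do need to resize the array inside the C code, the handle has to be resized through the LabVIEW memory manager. Here is a minimal sketch of growing such an array of clusters, assuming the typedefs from above (the ResizeKeyValueArr name is mine; DSSetHandleSize and the extcode.h types are the real manager API):

    #include <stddef.h>
    
    static MgErr ResizeKeyValueArr(KeyValuePairArrHdl data, int32_t newSize)
    {
        int32_t i, oldSize = (*data)->dimSize;
        /* resize the handle itself to hold newSize elements */
        MgErr err = DSSetHandleSize((UHandle)data,
                        offsetof(KeyValuePairArr, elt) + newSize * sizeof(KeyValuePairRec));
        if (err)
            return err;
        for (i = oldSize; i < newSize; i++)
        {
            /* NULL is a valid handle for an empty LabVIEW string */
            (*data)->elt[i].key = NULL;
            (*data)->elt[i].dataType = 0;
            (*data)->elt[i].value = NULL;
        }
        (*data)->dimSize = newSize;
        return mgNoErr;
    }

    When shrinking instead, the string handles of the elements you drop need to be released with DSDisposeHandle() first, or they will leak.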

     

  4. Well, this is one hell of an API to tackle.

    amqp_bytes_t actually seems to be a struct similar to the contents of a LabVIEW handle, with a size_t element indicating how many bytes the following pointer points at. That in itself is already nasty if you want to support both 32-bit and 64-bit LabVIEW, since size_t is a pointer-sized unsigned integer and the pointer after it is of course pointer sized too!

    Then you have amqp_field_value_t, which is in principle a library-specific variant. Basically you want an element that consists of a binary-string key and a variant value, except that the variant manager API in LabVIEW, while present, is basically undocumented. Well, it's not totally undocumented: the NI developers let a header file slip through in the GPU Toolkit download that actually declares quite a few of the functions. Of course, function declarations are hardly real documentation. They only give the function signatures and don't explain anything about how the functions need to be used. So there is in fact a lot of trial and error involved, and the very real danger that the variant datatype and its related functions change at the simple whim of any LabVIEW developer, since the fact that the API is not officially documented makes it "subject to change" at any time, for any reason, including the simple desire to change it.

    The only reason not to do so is that existing NI libraries such as the OPC UA Toolkit, which internally makes use of that API, would also need to be reviewed and changed in order not to crash with a new LabVIEW version. Since NI has a bit of a habit of releasing Toolkits synchronized to LabVIEW versions (albeit sometimes with a year-long delay, or with an entire version missing), this is not an impossible restriction: it would not only limit the documented version compatibility but also act as a technical barrier against version-incompatible Toolkit installations.

    Even if you used LabVIEW variants as the value of the key-value pair, you would need to do some binary translation, since a LabVIEW variant is not the same as your amqp_field_value_t variant.

    Personally, I would likely use a cluster with a LabVIEW string as the key element and a flattened string of the binary data with an extra integer for datatype indication. Then, in the C code, translate these elements to the amqp_bytes_t and amqp_field_value_t data elements. If you allow a simple 1-to-1 mapping of the field_value element, things could be fairly straightforward.

    Something like this:

    typedef struct
    {
        LStrHandle key;
        LStrHandle value; // really the flattened binary data
        int32 datatype;   // native LabVIEW datatype; could get rather nasty if you want to support complex
                          // datatypes and not just scalars and a string, as you would need to allow for a
                          // hierarchical datatype description such as the i16 typedef array LabVIEW itself uses
    } KeyValueRec;
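    On the C side, the string handles then map onto amqp_bytes_t without copying; a minimal sketch (LStrLen()/LStrBuf() are the extcode.h accessors, and the helper name is mine):

    static amqp_bytes_t LStrToAmqpBytes(LStrHandle h)
    {
        /* aliases the LabVIEW buffer instead of copying it; the result must
           not outlive the handle (use amqp_bytes_malloc_dup() if the library
           keeps a reference to the data beyond the call) */
        amqp_bytes_t b = { 0, NULL };
        if (h && *h)
        {
            b.len = (size_t)LStrLen(*h);
            b.bytes = LStrBuf(*h);
        }
        return b;
    }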

    If you use a native LabVIEW Variant it would instead look like:

    typedef struct
    {
        LStrHandle key;
        LvVariantPtr value; // Native LabVIEW variant
    } KeyValueRec;

    But as mentioned the API to actually access LvVariant from C code is completely undocumented.

  5. 1 hour ago, ShaunR said:

    Not so much mind boggling - I used to support VxWorks :frusty:.  It's not just Apple OS's though. Linux is similar. The same mind-set pervades both ecosystems. I used to support Mac, Linux and Windows for my binary based products because LabVIEW made it look easy. Mac was the first to go (nobody used it anyway) then Linux went (they are still in denial about distribution).

    VxWorks is quite special. It looks on many fronts like a Posix platform, but that is only a thin and incomplete layer above the lower-level and very specialized APIs. Programming to that lower-level interface is sometimes required for specific operations, but documentation was only available as part of the very expensive developer platform with the accompanying compiler. It's of academic interest now, since VxWorks has been deprioritized by Wind River in favor of their own Linux-based RT platform. And NI stopped using it long ago and never moved beyond version 6.3 of the OS. It was anyhow only intended for the PowerPC hardware, since they moved to that platform at a time when power-efficient embedded targets were not really an option on x86-based hardware. But with the PowerPC losing pretty much all its markets, it was a dead end (at some point in time it was the most used embedded CPU solution; many printers and other devices, whose users never ever saw anything of the internal hardware, were running on PowerPC).

    It was hard to port any reasonably sized code to VxWorks, because the higher-level APIs were often very similar to other Posix platforms like Linux but did not always work exactly the same way, or did not provide certain functionality at that level. Accessing the lower-level API was very difficult because of the very limited documentation that could be found without investing an arm and a leg in the developer platform from Wind River. But once that porting was done, fairly little maintenance was required, both because the API stayed fairly consistent and because NI didn't move to a different version (except from VxWorks 6.1 to 6.3 between LabVIEW 8.2 and 8.5).

  6. 3 minutes ago, ShaunR said:

    It doesn't have to. Just back-save (:D) to a version that supports the OS then compile under that version. If you are thinking about forward compatibility then all languages gave up looking for that unicorn many years ago.

    Unfortunately, Apple manages to break backwards compatibility with earlier versions almost consistently, for anything but the most basic "Hello World" application. And yes, that is only a mild exaggeration of the current state of affairs. For an application like LabVIEW there is almost no hope of being compatible across multiple OS versions without some tweaks. Partly this is caused by legacy code in LabVIEW that uses OS functions in ways Apple declared deprecated versions ago; partly it is simply because that is considered quite normal among Apple application developers. For someone used to programming against the Windows API, this situation is nothing short of mind-boggling.

     

  7. On 5/15/2024 at 9:57 AM, ShaunR said:

    This wouldn't be much of an issue since you could always use an older version of LabVIEW to compile for that customer. However. Now LabVIEW is subscription based so hopefully you have kept copies of your old LabVIEW installation downloads.

    It seems they are going to make normal ordering of perpetual licenses possible again. While the official stance was that perpetual licenses were gone, the reality was that you could still order them, but you had to be VERY insistent, and lucky enough to know the right local NI sales person, to be able to do so. That will of course not help with a current Macintosh version of LabVIEW. Still, maybe some powers that be might decide that reviving that is also an option. I kind of doubt it, as I have experience trying to support Mac versions of LabVIEW toolkits that contain externally compiled components, and the experience is nothing short of "dramatic". But if a client teased NI convincingly about ordering a few thousand seats of LabVIEW if there were a Mac version available, I'm sure they would think very hard about it. 😁

  8. 2 hours ago, LogMAN said:

    There is also OpenG LabPython Library Toolkit for LabVIEW - Download - VIPM by JKI. IIRC it required a license and I'm not sure if it works with Python 3 and newer versions of LabVIEW.

    It's Open Source (on SourceForge), and I started developing it more than 25 years ago. There never was any license involved, but yes, at that time Python 2.2 or thereabouts was the current version. I did some updates to also make it work in 2.3 and 2.5, and made minor attempts to support 2.7, but by then I had lost interest in tinkering with it, as I was getting more involved with Lua for LabVIEW, and two scripting solutions next to each other seemed a bit excessive to work with.

    The shared library necessary to interface Python with LabVIEW definitely won't work out of the box with Python 3. There were simply too many changes in Python 3 to the internals as well as the datatype system for it to work without changes to the shared library interface code (the switch to Unicode strings instead of ASCII is only one of them, but a quite far-reaching one). Also, there is absolutely no support for Python environments such as those offered by Anaconda and the like.

    The main reason for starting LabPython was actually that I had been trying to reverse engineer the script host interface that LabVIEW had introduced to interface with HiQ, and later Matlab. When searching for an existing scripting language with an embedding interface for integration into other applications, to use as a test case, I came across a project called Python, which was still somewhat obscure at the time. I didn't particularly like Python, and the fact that its inventor Guido van Rossum was Dutch did not affect my choice. And when I reached out to the Python community about how to embed Python in another application, I was frankly told that while there was an embedding API available in Python, there was little to no interest in supporting it, and I was pretty much on my own trying to use it. It still seemed the most promising option, as it was Open Source and actually had a real embedding API. I did not even come across Lua at that time, although before version 5.0 Lua anyway had fairly limited capabilities for integration into other applications.
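    (For context: the embedding side of the Python C API boils down to just a few calls. A minimal Python 3 host program today still looks like this sketch; back then the calls were the Python 1.5/2.x equivalents, but the principle was the same.)

    #include <Python.h>
    
    int main(void)
    {
        Py_Initialize();    /* start the embedded interpreter */
        PyRun_SimpleString("print('hello from embedded Python')");
        Py_Finalize();      /* shut it down again */
        return 0;
    }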

    So I developed a Python script server for that interface to allow integration of Python, and even got help from someone inside NI who was friendly enough to give me the function prototype declarations such an interface needed to support in order for LabVIEW to recognize the server and not crash when trying to load it. After it was done and started to work, I was less than thrilled by the fact that the script was actually a compile-time resource, so it could not be altered by the user of a LabVIEW application, only by its developer. As more of an afterthought, I added a programmatic interface to the already existing shared library, and the main functionality of LabPython was born.

    As those old LabVIEW script nodes were deprecated several years ago by NI, it would definitely not be a wise choice to build anything new based on that tech. I'm not even sure whether LabVIEW 2023 and newer would still allow LabPython to be loaded as a script server. But its programmatic interface should still be usable, although for quite a few reasons not with Python 3 without some serious tinkering in the C code of the shared library interface.

  9. 47 minutes ago, crossrulz said:

    That sounds like the LabVIEW Solution Builder, which I use.  It works quite well.

    That's it! It didn't work for our use case, as it can't really work around LabVIEW being unable to support two different platforms loaded at the same time. As such, it had no really significant advantages over the MGI Solution Builder in the way we had started using it.

  10. On 4/29/2024 at 10:43 AM, eberaud said:

    My team and I didn't have any of those excuses back in 2012: we were running on Windows only, 'My Computer' target only, and were only dealing with LabVIEW 2011. We were probably guilty of starting to code without a clear enough understanding of how PPLs work.

    We thought we could create a plugin architecture based on PPLs to avoid having close to a hundred plugins built inside the executable. But this created a two-way dependency that made it impossible. I later realized that every dependency of the PPL needed to itself be in a PPL, but this felt like a lot of work and we gave up!


    Not sure about 2011 to be honest, but no, you do not have to have all dependencies included in a PPL. You can have a PPL depend on other PPLs and configure the build to exclude that dependency from your PPL build, so that it remains external. This of course has to be done bottom-up, which is quite some work. Only PPL dependencies and other binary dependencies can be excluded from being pulled into a PPL. So if you have code that needs to be shared between your PPL and other PPLs or your exe, that code needs to be in its own PPL, so each of them can refer to it.

    Yes, it is not trivial, and you need to plan before you start programming. You need a clear hierarchy overview and must be able to cleanly modularize your code into different PPLs.

    Tools like the MGI Solution Builder definitely help with that, as you can script the creation of a whole hierarchy of PPLs to be compiled in the correct order. Someone at NI was busy creating another solution that could build PPLs and, in the process of building them, also relink any dependencies on lvlibs into dependencies on lvlibps, but that never quite got finished.

    Well, basically your program never reads anything from the serial port, so sending anything like *IDN? to it is totally superfluous and even wrong. As soon as the device starts up, it starts to spew a line of text every 500 ms, no matter whether anyone is listening or not.

    Basically, at startup you want your LabVIEW program to:

    - initialize the serial port with the correct parameters, leaving the termination character enabled as you do now

    - do one read of at least 100 bytes, possibly even multiple times, to make sure the serial port input buffer is cleared of any partial or old data

    - do NOT send anything to the device

    - then do a VISA Read of at least something like 100 bytes, every 500 ms. DO NOT USE Bytes at Serial Port!!!!!!!

    You should see a string like "Temperature: <number> °C | Humidity: <number> % | Air Quality: <number>".

    The degree sign ° and the pipe symbol | might however pose a problem. Not sure what locale your Arduino device uses, but it may not be the same as your Windows computer, and then those characters will not match what you expect.
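    For reference, the same sequence expressed against the VISA C API looks roughly like this sketch (the resource name, baud rate, and buffer size are assumptions; error handling is mostly omitted for brevity):

    #include <stdio.h>
    #include "visa.h"
    
    int main(void)
    {
        ViSession rm, port;
        char buf[256];
        ViUInt32 got;
    
        viOpenDefaultRM(&rm);
        viOpen(rm, "ASRL3::INSTR", VI_NULL, VI_NULL, &port);
        viSetAttribute(port, VI_ATTR_ASRL_BAUD, 9600);
        viSetAttribute(port, VI_ATTR_TERMCHAR, '\n');
        viSetAttribute(port, VI_ATTR_TERMCHAR_EN, VI_TRUE);
        viSetAttribute(port, VI_ATTR_TMO_VALUE, 2000);
    
        /* flush any partial or stale data first; nothing is ever sent */
        viRead(port, (ViBuf)buf, sizeof(buf) - 1, &got);
    
        for (int i = 0; i < 20; i++)    /* one line per read, ended by the termchar */
        {
            if (viRead(port, (ViBuf)buf, sizeof(buf) - 1, &got) >= VI_SUCCESS)
            {
                buf[got] = 0;
                printf("%s", buf);
            }
        }
        viClose(port);
        viClose(rm);
        return 0;
    }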

    I can definitely echo that. PPLs work fairly well as long as you only use one platform (Windows x86 and x64 are two different platforms in that respect). Basically a PPL is quite similar to a DLL in that respect: it is binary compiled code and only works on the LabVIEW platform it was created on. In addition you also have to watch out for LabVIEW versions, although with the feature to make a PPL loadable in a newer LabVIEW version (since about 2017 or so) this is slightly less of a problem, but not entirely. There are possible issues with executing a PPL in a newer LabVIEW version than the one it was created in.

    Where things really get wonky is if you want to support multiple platforms in LabVIEW. Different platform versions of PPLs in the same project are absolutely out of the question. You can't have a project that references a PPL under your My Computer target and the same PPL under a Realtime target in that project (the same in name only; they obviously need to have been recompiled for each target). LabVIEW will get into a tizzy about that and render both targets broken, since it will try to match the two incompatible PPLs to both targets. But it is even worse than that! Even if you separate the two targets into their own projects, you have to be extremely careful never to load both at the same time. For some reason the context isolation between LabVIEW targets (including targets in different project files, which should be fully isolated in theory) simply doesn't work for PPLs. It seems that LabVIEW maintains only one global list of loaded PPLs across all possible contexts, and that of course messes royally with the system. Instead, PPLs should be managed per context they are referenced in, with no sharing at all between contexts.

    There is also an unfinished feature in LabVIEW that allows installing PPLs and other support files in target-specific subdirectories, so that you could theoretically have PPLs for all the different targets on disk and reference them with the same symbolic path, which then resolves to the target-specific PPL. But it has many bugs and doesn't quite work as intended on some platforms, and as long as PPLs are not managed per context, it is of limited usefulness even if it fully worked.

     

    And what is the program on your ESP32 doing? Does it even listen on the corresponding serial port? Does it know what to do when it sees an *IDN?<new line> on that port? What does it send back when it sees that command?

    The ESP32 is a capable microcontroller board, but it needs a program that implements the reading of your sensors, reacts to commands from your LabVIEW program, and sends something back. And that program needs to be implemented by you in one of the supported programming languages for the ESP32. Most likely you will want to use ESP-IDF as a plugin in either Eclipse or VSCode.
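    To give an idea of what that ESP32 side involves, a minimal ESP-IDF command responder over the UART might look something like this sketch (the UART number, baud rate, and identification string are my assumptions, not anything your board already does):

    #include <string.h>
    #include "freertos/FreeRTOS.h"
    #include "driver/uart.h"
    
    void app_main(void)
    {
        uart_config_t cfg = {
            .baud_rate = 115200,
            .data_bits = UART_DATA_8_BITS,
            .parity    = UART_PARITY_DISABLE,
            .stop_bits = UART_STOP_BITS_1,
            .flow_ctrl = UART_HW_FLOWCTRL_DISABLE,
        };
        char line[64];
        int pos = 0;
    
        uart_param_config(UART_NUM_0, &cfg);
        uart_driver_install(UART_NUM_0, 256, 0, 0, NULL, 0);
    
        for (;;)
        {
            uint8_t c;
            if (uart_read_bytes(UART_NUM_0, &c, 1, portMAX_DELAY) != 1)
                continue;
            if (c == '\n')                    /* complete command received */
            {
                line[pos] = 0;
                pos = 0;
                if (strcmp(line, "*IDN?") == 0)
                    uart_write_bytes(UART_NUM_0, "MyCompany,MySensor,0,1.0\n", 25);
            }
            else if (c != '\r' && pos < (int)sizeof(line) - 1)
                line[pos++] = (char)c;
        }
    }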

    The only thing I have found to work is to maintain separate projects for 32-bit and 64-bit and have each of them build into a separate location on disk. Anything else is going to mess up your projects almost beyond repair.

    That applies both to the projects that build your PPLs (possibly one project per PPL, as in the case I worked on) and to the applications using those PPLs. Using symlinks to map the build locations of the PPLs to a common path that is referenced by all the projects making use of PPL dependencies (including PPL build projects) helps with maintenance, as they all only need to reference a general location for dependencies rather than an architecture-specific location.

    5.0.1, and in the meantime 5.0.2, have since been released. One issue, though not really new, it existed before: don't disable the mass compile after install. It may take some time, but it sure fixes stale shared library paths in the VIs, and I have so far not found a way to make those paths fix up automatically at package creation, since the path seems to need to be absolute.

    The two possible approaches I'm currently considering:

    1) use a so called Symbolic Path (/<user.lib>/_OpenG.lib/lvzip/lvzip.*).

    Disadvantage:

    - only works if installed into default location

    2) use Specify Library Name on the diagram for the Call Library Node and calculate its path at runtime.

    Disadvantage:

    - makes the shared library not a visible component of the VIs, so the shared library needs to be added explicitly to every application/shared library/assembly/source distribution build in order to be available there

    - extra execution time for the dynamic calculation of the path

    If you use the chroot trick that NI/Digilent did for the Linx/Hobbyist Toolkit, it is theoretically doable, but far from easy peasy. And you still have to sit down with the NI/Emerson lawyers, as I told you before.

    However, I doubt you want to run your Haibal library in a specially compiled Debian kernel running in a chroot inside your Jetson Linux distro. That is necessary because the entire NI library tree and the LabVIEW runtime are compiled for the ARM softeabi that NI uses for their Xilinx XC7000-based realtime targets.

    And no, you can NOT run a LabVIEW VI without the LabVIEW runtime on the target! Period!

    And the fact that NI put the NI Linux RT source online has nothing to do with wanting to be nice to you, or letting you build your own LabVIEW realtime hardware; it is simply the easiest way to comply with the GPL license requirements for the Linux kernel. But those license requirements do not extend to any software they create to run on that kernel, since the kernel license has an explicit exemption for that. Without that exemption there would simply not be any commercial software running on Linux.

    And I understand what you want, but that does not make it feasible. I WANT to win the jackpot in the lottery too, but so far that has never happened and quite certainly never will. 😀

    If and how the DLL uses exceptions is completely up to the DLL, and there are actually several ways this could work, but only the DLL writer can tell you (if it is not explained in the documentation). Windows has a low-level exception handling mechanism that can be used from the Win32 API. It is however not very likely that a DLL would make use of that. Then you have structured exception handling, or its namesakes from different C++ compilers. And here things get interesting, since each compiler builder was traditionally very protective of their own implementation and very trigger-happy about suing anyone trying to infringe on the related patents. This meant that GCC for a very long time could not use the Microsoft SEH mechanism and therefore developed its own method that avoided the Microsoft patents. So if your DLL uses exceptions and doesn't handle them before returning from a function call to the caller, you might actually be in a bit of a bind, as you really need to know what sort of exception handling was used. And if you use a different compiler than what was used for the DLL, you will most likely be unable to properly catch those exceptions anyhow, causing even more problems.

    Basically, a DLL interface is either C++ oriented, and then needs to be interfaced with the same compiler that was used for the DLL itself anyhow, or it is a standard C interface, and then it should NEVER pass unhandled exceptions to the caller, since that caller potentially has no way to properly catch them. One exception are Win32 exceptions, which the LabVIEW Call Library Node actually is prepared to catch and turn into the well-feared 1097 error everybody likes so much, unless you disable the error handling level completely in the Call Library Node configuration dialog. 😁

    Your example code, while probably condensed and not the whole thing, does however ignore very basic error handling that comes into play long before you even get to potential exceptions. There is no check that Load_Extern_DLL() returns a valid HANDLE. Neither do you check that the function pointers you get from that DLL are all valid.

    p2_CallbackFunction_t is rather superfluous and a bit misleading. The _t ending indicates a type definition, but in fact you simply declare a stack variable and assign the reference to the callback function to it. Luckily you then pass the contents of that variable to the function, so the fact that that storage is on the stack and will disappear from memory as soon as your function terminates is of no further consequence. But you could just as well pass the callback function itself to that function and completely forget about the p2_CallbackFunction_t variable declaration.

    Once your function returns, you indeed have no way to detect exceptions from the DLL anymore, as there is no stack or call chain on which such an exception could be passed up. The way this should be done is for the DLL to handle all exceptions internally and pass an error indication through the callback function in an error variable. It can't use the callback function to pass up exceptions either, since the callback function is called by the DLL: exception handling can't go from the callback to LabVIEW, only from the callback to its caller, which is indeed, yes ... big drumroll ... the actual DLL.

    If your DLL doesn't catch all exceptions properly and handle them by translating them to an error code of some sort, passed through the callback or some other means to the calling application, then it is not finished in terms of asynchronous operation through callbacks. Exceptions can only pass up through the call chain, but if there is no call chain, such as with a callback function, there is no path to pass exceptions up either.
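    For a plain C DLL on Windows, fencing off exceptions at the boundary can be done with Win32 structured exception handling. A hedged sketch of what the DLL side should look like (MSVC-specific; all names here are illustrative, not from your code):

    #include <windows.h>
    #include <stdint.h>
    
    typedef void (__stdcall *DataCallback_t)(void *ctx, int32_t err,
                                             const uint8_t *data, int32_t len);
    
    static void DoAcquisitionWork(void *ctx, DataCallback_t cb)
    {
        uint8_t sample[4] = { 0 };
        /* ... real acquisition work here, which may fault ... */
        cb(ctx, 0, sample, sizeof(sample));   /* deliver data with no error */
    }
    
    __declspec(dllexport) void __stdcall StartOperation(void *ctx, DataCallback_t cb)
    {
        __try
        {
            DoAcquisitionWork(ctx, cb);
        }
        __except (EXCEPTION_EXECUTE_HANDLER)
        {
            /* translate the exception into a plain error code delivered
               through the callback; nothing ever propagates to a caller
               (like LabVIEW) that has no way to catch it */
            cb(ctx, (int32_t)GetExceptionCode(), NULL, 0);
        }
    }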

  18. 8 minutes ago, jhoskins said:

    That would be a viable option if I was only displaying information, but it also controls the equipment. I know you do not know my system and are only offering help and I greatly appreciate it. Thank you and I always learn new things when I post on here.

    Hmm, not trying to criticize you, but having 100 (or even 25) little windows that all display data and allow control too seems to me a pretty difficult UX. It's definitely not something I would immediately turn to. I would more likely have something like a list box that shows the information for each device, possibly in a tree structure, let the user select one, and then make the controls for that device available in a separate section of the screen, where the controls are specific to the selected device. Shaun's example, while technically nice, shows the difficulty of that approach very well, even without much user control. The graph in there is pretty much way too small for any usable feedback.

  19. 4 minutes ago, jhoskins said:

    How else would you suggest showing the information needed at one time on the main front panel? I use the DQMH framework, clone the VI and put that VI in a subpanel on the main UI. I have been doing it this way for a long time now and have tested it up to 200 without seeing any issues. Updates coming in are really slow, as the cameras send out status updates every minute. Realistically it is around 25 panels, but it depends on how many cameras need to be monitored and controlled. But let's not get caught up in that; in general I would just like to learn to use Francois' tool to programmatically align controls and indicators in a grid pattern based on screen size, similar to what Shaun is showing above. (Nice VI by the way, mind sharing?)

    I would likely use a Table or MultiColumn List Control.

  20. 4 minutes ago, ShaunR said:

    Booh with bells on. :lol: Hello 7 hr build time. 

    The whole ADS library overhead in an application adds about 0.0-something seconds to the total build time of any project. As long as you have a linear hierarchy in object class dependencies, there is virtually no overhead at all beyond what you would have for the involved number of VIs anyhow. Once you happen to create circular dependencies, the app builder will go into overtime to resolve everything properly, and your build times start to grow into the sky. At some point you can get all kinds of weird build errors that are sometimes almost impossible to understand.

    Untangling class hierarchies is in that case a very good (and usually also extremely time intensive) refactoring step to get everything to build nicely again.

  21. 3 hours ago, ShaunR said:

    Booooh! :D

    For this type of functionality it is absolutely not Booh. 😀

    It is OOP in the sense that it uses LabVIEW classes as an easy means of instantiating different driver implementations at runtime. One interface (or a thin interface class in pre-2020 LabVIEW), and then one child implementation to call the DLL, one to call VISA USB Raw, and possibly others. On top of that, one driver as a class to do MPSSE I2C and another for MPSSE SPI. That way it is very easy to plug in a different low-level driver depending on what interface you want to use: the D2XX DLL interface on Windows, the D2XX .so interface or D2XX VISA USB Raw on Linux and LabVIEW RT. With a little extra effort in the base class implementation for a proper factory pattern, the choice of which actual driver to load can easily be made at runtime.

    I did the same with other interfaces such as Beckhoff ADS: one driver using LabVIEW native TCP, another interfacing the Beckhoff ADS DLL, one for Beckhoff ADS .Net, and one for Beckhoff ADS ActiveX, although the ActiveX one has a bug in its type library that tells LabVIEW to use a single byte passed by reference as the data buffer for the read function, and there is no way to override the type library information except by patching the ActiveX DLL itself. The type library should declare that parameter as an array datatype but instead simply declares it as a byte passed by reference. To a C compiler those are the same thing, but for a high-level type description they are two very different things.

    The base class implements the runtime instantiation of the desired driver and the common higher level functionality to enumerate resources and read and write data IO elements using the low level driver implementation. For platforms that don't support a specific "plugin" you simply don't install that class.
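    Expressed in C terms purely as an analogy (the real implementation is LabVIEW classes, and all names here are illustrative), the pattern amounts to one table of function pointers per driver backend and a factory that selects a table at runtime:

    #include <stddef.h>
    #include <stdint.h>
    
    /* one "child class" per backend: a table of operations */
    typedef struct DriverOps {
        int32_t (*open)(const char *resource, void **handle);
        int32_t (*read)(void *handle, uint8_t *buf, int32_t len);
        int32_t (*write)(void *handle, const uint8_t *buf, int32_t len);
        void    (*close)(void *handle);
    } DriverOps;
    
    typedef enum { kDriverD2xxDll, kDriverVisaUsbRaw } DriverKind;
    
    /* each table is filled in by the respective backend's source file */
    extern const DriverOps g_d2xxDllOps;
    extern const DriverOps g_visaUsbRawOps;
    
    /* the factory: pick the concrete driver at runtime, like the base
       class instantiating the desired child class in LabVIEW */
    const DriverOps *SelectDriver(DriverKind kind)
    {
        switch (kind)
        {
            case kDriverD2xxDll:    return &g_d2xxDllOps;
            case kDriverVisaUsbRaw: return &g_visaUsbRawOps;
            default:                return NULL;
        }
    }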
