Posts posted by ShaunR

  1. Nowadays I use a DB file for settings, which means you can mutate from version to version with a single, non-application-specific query and do other things like revert to defaults without having to write application-specific code. I'm also leaning further towards having a single "config" DB file for all applications, which works great, especially if you have multiple applications (it's like using the Windows registry, but it works on all platforms and you can copy it!).
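
    A minimal sketch of the idea in C, assuming SQLite as the config file and a hypothetical "settings" table keyed by app, version and key (the table layout and function name are illustrative, not from any shipped library):

        /* One table serves every application. Migrating settings to a new
           version is a single, non-application-specific query. */
        #include <sqlite3.h>

        static const char *kMigrate =
            "INSERT OR IGNORE INTO settings (app, version, key, value) "
            "SELECT app, ?2, key, value FROM settings "
            "WHERE app = ?1 AND version = ?3;";

        int migrate_settings(sqlite3 *db, const char *app,
                             const char *from, const char *to)
        {
            sqlite3_stmt *stmt = NULL;
            if (sqlite3_prepare_v2(db, kMigrate, -1, &stmt, NULL) != SQLITE_OK)
                return -1;
            sqlite3_bind_text(stmt, 1, app,  -1, SQLITE_TRANSIENT);
            sqlite3_bind_text(stmt, 2, to,   -1, SQLITE_TRANSIENT);
            sqlite3_bind_text(stmt, 3, from, -1, SQLITE_TRANSIENT);
            int rc = sqlite3_step(stmt);    /* SQLITE_DONE on success */
            sqlite3_finalize(stmt);
            return rc == SQLITE_DONE ? 0 : -1;
        }

    Reverting to defaults is the same trick: copy the rows from a reserved "defaults" version back over the current one.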

     

    You can do something similar with INI files by having a global INI directory somewhere outside your applications (as bmoyer is suggesting) with a sub-directory structure based on the app name and version. Loading and saving is just a matter of building a path from the app name and version (i.e. non-application-specific). This doesn't get around mutation, but it means that if you un-install or re-install you can always get back to the same point, as you, in effect, build up a history even if the user deletes the entire application directory.
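
    The path building really is non-application-specific; a sketch (the directory layout is illustrative):

        /* <config root>/<app>/<version>/settings.ini - nothing in here is
           application specific, so one function serves every app. */
        #include <stdio.h>

        void build_ini_path(char *out, size_t len, const char *root,
                            const char *app, const char *version)
        {
            snprintf(out, len, "%s/%s/%s/settings.ini", root, app, version);
        }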

  2. I wasn't around when we made this decision, but I would guess the rationale was something like the following:

     

    Backwards compatibility would be a big burden for us. Every time we made a change to the execution system, we would have to consider how older code would behave with the change.  It would increase our testing burden and make execution bugs (the worst kind of bugs!) more likely.  It would make some kinds of big architectural changes (like the DFIR compiler rewrite) even more scary, and we'd be less likely to take on that risk.  It would make the run-time engine bigger.

     

    Now the C runtime is backwards compatible (I think?), but I'd imagine they aren't adding as many new features as we are.  The pain is also eased because you get a C runtime installed with the operating system.

     

    OK. Played a bit more with your and Rolf's comments in mind.

     

    I will modify my original statement to:

     

    If you create the library using the LabVIEW build process, then the user should have that version of the run-time. The main reason for this, however, has more to do with the supported features that you may put in the dynamic library than with the compilation process (although NI tends to use different compilers with each version, so it is also a safer bet for that reason). Therefore it is possible to load a LabVIEW 2009 (32-bit) built library in an executable built in LabVIEW 2012 (32-bit) with the 2012 run-time, as long as you have BOTH run-time engines installed, but it will not be possible to load a 2012-built one in 2009 with the 2009 run-time if you have used features that are unavailable. This just pushes the maintenance overhead downstream to the installer. Similarly, a dynamic library built in 2009 can be loaded in the IDE of, say, 2012 with the appropriate run-times. If you do this, however, you need to test, test, and test some more.

     

     

    Dynamic libraries, however, are still nowhere near as bad as diagram-less VIs (LabVIEW dynamic libraries being slightly worse from a deployment perspective than C ones, it seems).

  3. I'm trying to understand if there is a licensing problem using a MySQL database with the NI DB Connectivity Tool. The ODBC driver provided with it is GPL-licensed, and I think that if you link your code with it there can be a problem if you plan to use a license different from the GPL.

    Moreover, I find that the NI DB Connectivity Tool is quite slow.

    I think you'll find that "linking" has a very specific meaning for GPL licensing, rather than "connecting", which is probably what you are thinking of (and what ADO facilitates).

  4. This is not entirely correct.  Let's say you have a LV 2009 32-bit built library (i.e. a .dll).  You always need a 2009 32-bit run-time engine to load this!  The one exception is if you're in the LV 2009 editor on Windows, but in that case you should have the run-time engine installed anyway so it's a moot point.

     

    It is not the case that a LV 2012 run-time engine can load a LV 2009 built library, no matter what features the built library uses. The same is true vice versa - the versions have to match for it to be able to load. (although 2009 SP1 and 2009 count as the same version for these purposes)

    Can you expand on that, since that has not been my experience?

     

    Are we talking about MSVC dependency linking being the reason, or is there something else?

     

    ......later, after playing a bit......

     

     

    So that's pretty definitive. It looks like it checks. But I would still like to understand what the issues are, i.e. what makes a LabVIEW DLL different from a C DLL, apart from feature support.

  5. We were planning on wrapping the DLL with the VIs and not exposing our users to the pain... but basically what you are telling me is that, from now on, I would have to keep building a new version of the LabVIEW driver for each version of LabVIEW, because the DLL would be version specific.

     

    I think I ought to clarify this. I assume you came to this conclusion from Rolf's comparison with panel-removed VIs. It's not actually as bad as that. Dynamic libraries in themselves aren't so much version-specific as platform-specific.

     

    A dynamic library can be loaded in any version of LabVIEW with a caveat.

     

    If the library was written purely in C, you could load it in any version of LabVIEW, and you wouldn't need the LV run-time for non-LabVIEW users (this you know).

     

    If you create the library using the LabVIEW build process, then the user should have that version of the run-time. The main reason for this, however, has more to do with the supported features that you may put in the dynamic library than with the compilation process (although NI tends to use different compilers with each version, so it is also a safer bet for that reason). Therefore it is possible to load a LabVIEW 2009 (32-bit) built library in an executable built in LabVIEW 2012 (32-bit) with the 2012 run-time, but it will not be possible to load a 2012-built one in 2009 with the 2009 run-time if you have used features that are unavailable. This just pushes the maintenance overhead downstream to the installer. Similarly, a dynamic library built in 2009 can be loaded in the IDE of, say, 2012. If you do this, however, you need to test, test, and test some more.

  6. No, 2012 is not a cut point, just the latest version at the time. We aim to maintain it as long as is practical, and we have at this point maintained backward load support longer than at any other time in LV's history, so far as I can tell. I suspect the current load support will go on for quite some time because there's not really a problematic older feature that all of us in R&D want to stop supporting.

    Phew. Many thanks for clarifying. :thumbup1:

  7. We just last year walked a LV 4.0 VI all the way to LV 2012. It opened and ran just fine. You have to open it in LV 6.0, then LV 8.0 then LV 2012, as those are the defined load points, but the mutation paths have been fully maintained.

    Off topic (apologies).

    Is 2012 a load point? Or just that you loaded it finally in 2012? More generally, at what version is it planned that 2009 VIs will not be loadable?

  8. How can I test the DLL? We wanted to use it in our LabVIEW code so we would have a single DLL for everyone and like I said earlier, to make it easier to replace in the future if they decided to build a DLL in C. But it seems, from what I am reading, that this will be more pain than gain, right?

    For everyone? Including Linux, Mac, Pharlap and VxWorks? If you are going to support all the platforms that LabVIEW supports, then you will need 6 dynamic libraries, and you can't use LabVIEW to create some of them. Two are just for Windows.

     

    However, if you are committed to making the LabVIEW driver dependent on a dynamic library (which is the case if you plan to replace it later with a native C implementation), then you are unavoidably making a rod for your own back. Avoid dynamic libraries in LabVIEW if you can - here be monsters (at least you didn't say you wanted .NET or ActiveX ... :D ).

  9. Wait, I am building the DLL, does this mean that I have to build two versions of the DLL, one in a LabVIEW 32 bit version and one in a LabVIEW 64 bit version?

    Well, you don't have to. But if you don't, then those with LabVIEW 64-bit won't be able to use it (that's assuming you are only supporting Windows ;) ).

     

    You are much better off leaving the LabVIEW code for LabVIEW users (then it will work on all platforms, including Linux, Mac etc.), just compiling a 32-bit DLL for non-LabVIEW people, and worrying about the 64-bit one when someone (non-LabVIEW) asks/pays for it.

  10. C calling convention it is and we won't try to get fancy with memory management.

    Thank you guys, I think we are going to learn a lot (probably more than what we wanted/expected) about DLLs ;)

    And make sure they supply you with the 32-bit & 64-bit versions. Most suppliers think that 32-bit alone is sufficient, since 32-bit software can be used on Windows. However, LabVIEW 64-bit cannot load 32-bit libraries!
  11. Eventually the DLL will do serial calls, so yes we will be dealing with byte arrays that might have nulls in between.

     

    Make sure you specify to whoever is writing it that it must be "Thread-safe". Otherwise you will have to run it in the UI thread.
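
    For reference, "thread-safe" here just means that any shared state inside the library is protected, so LabVIEW is free to call it from any thread of its execution system. A minimal sketch (POSIX mutex shown; on Win32 a CRITICAL_SECTION would do the same job):

        #include <pthread.h>

        static pthread_mutex_t g_lock = PTHREAD_MUTEX_INITIALIZER;
        static int g_call_count = 0;          /* example of shared state */

        int do_work(void)
        {
            pthread_mutex_lock(&g_lock);      /* serialize shared access */
            int n = ++g_call_count;
            pthread_mutex_unlock(&g_lock);
            return n;
        }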

     

    Actually using fully managed mode is even faster as it will often avoid the additional memory copy involved with the MoveBlock() call. But at least in C and C++ it's only an option if you can control both the caller and callee, or in the case of a LabVIEW caller, know exactly how to use the LabVIEW memory manager functions in the shared library.

     

    That probably limits it to just you then :D
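
    For the curious, a rough sketch of what that "fully managed mode" looks like from the library side, using the LabVIEW memory manager functions declared in extcode.h (error handling trimmed; check the signatures against your LabVIEW version before trusting this):

        #include "extcode.h"

        /* The library resizes the LabVIEW-owned string handle itself and
           writes straight into it, so the LabVIEW caller needs no second
           copy afterwards. */
        MgErr fill_lv_string(LStrHandle *h, const char *src, int32 len)
        {
            MgErr err = NumericArrayResize(uB, 1, (UHandle *)h, len);
            if (err != mgNoErr) return err;
            MoveBlock(src, LStrBuf(**h), len);  /* payload into the handle */
            LStrLen(**h) = len;                 /* set the length field */
            return mgNoErr;
        }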

  12. Would defining the string as a "Pascal String Pointer" remove the need to know in advance how large the string needs to be?

     

    No, well... not really. It depends. If you are not going to have nulls in your data, then you could use the C string and not worry about it. However, I'm guessing that because you are looking at string-length bytes (Pascal-style strings can be no more than 255 bytes, by the way) you are intending to pass arbitrary-length binary data that just happens to be strings.


    There are two ways of transferring variable length data to/from a library.

    1. Memory is allocated by LabVIEW and the library populates this memory with the data (the library needs to know the size, and the resultant data must be no more than that passed in - create and pass an array, like the ol' for loop of bytes).
    2. Memory is allocated by the library and LabVIEW accesses this memory (LabVIEW needs to know the size, and the resultant data can be any size - MoveBlock).

     

    Either way, one or the other needs to know the size of the allocated memory.

     

    The general method is case no. 2, since this does not require pre-allocation, is unlikely to crash because the data is too big, and only requires one call for allocation and size. You call the function and get the size as one of the returned parameters and a pointer (uintptr_t) as the other, then use MoveBlock to get the data (since the size will be known at that point from the size parameter). You will also need a separate function to release the memory. This also happens to be the fastest :)

     

    The CDECL calling convention is the one of choice, as STDCALL is Windows-specific (you are intending to port this to other platforms..... right?)
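
    A sketch of what case no. 2 might look like with cdecl exports (function names hypothetical): the library owns the memory, LabVIEW reads the size and pointer, MoveBlocks the data out, then hands the pointer back for release:

        #include <stdint.h>
        #include <stdlib.h>
        #include <string.h>

        /* Allocate the result and report its size; the caller copies the
           data out with MoveBlock using the returned pointer and size. */
        int acquire_data(uintptr_t *ptr, int32_t *size)
        {
            int32_t n = 512;                 /* however big the data is */
            unsigned char *buf = malloc(n);
            if (!buf) return -1;
            memset(buf, 0xAB, n);            /* stand-in for real payload */
            *ptr  = (uintptr_t)buf;
            *size = n;
            return 0;
        }

        /* The library allocated it, so the library frees it. */
        void release_data(uintptr_t ptr)
        {
            free((void *)ptr);
        }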

  13. Apart from Daklu's sound advice, you might also check that you are using the High Performance Ethernet driver. Not so much for bandwidth, but more for CPU usage.

     

    Missing pieces of the image (or entire images) are usually due to bandwidth saturation/collisions. Just because a camera is capable of supplying images at a frame rate and resolution doesn't mean that it can all be squirted over a LAN interface. You generally have to play with the camera settings to get things working nicely. Whilst the "theoretical" maximum of 1 GbE is 125 MB/s, in reality I have never achieved more than about 100 MB/s reliably (assuming jumbo frames are enabled), and on a 100 Mb interface you will be lucky to get 10 MB/s (the rule of thumb is about 80% of interface speed).

     

    If jumbo frames aren't being used (the default frame size is usually 1500 bytes) or are not supported by the interface, then this is usually the bandwidth restriction, and you will have to go down to lower resolutions and frame rates, as the packet overhead crucifies the performance (note that if you are going through a router or switch, jumbo frames will also have to be turned on for those devices and match the packet size of the LAN interface).

  14. I'm still split on what to think about the summit. I completely understand wanting to up the ante with respect to who can attend otherwise the summit could blow up into something way out of hand, but part of me also is disgusted at the closed nature of the summit. I firmly believe it's not in anyone's best interest to keep these good ideas sealed behind an ivory tower.

     

    Like Comic-con (nerds) without the babes in lycra :D You shouldn't lose sleep over it ;)

  15. Shaun, I think you are getting at what I was actually just thinking. For instance, this is all VISA communication (for now, and I expect that won't change). I was thinking of having a file; then if the device changed, just update the commands in the file for that device, point the software at the file to be read, then send out the commands through VISA. Now I don't need a class for each device; I can just have a "VISA black-box device" which has a read and write that just wraps the VISA read/write functions and sends out the appropriate commands.
    I've done it many times, even to the point of a complete scripting language for one client. It works well as long as you don't have to handle state too much and it is sequential. You end up with files of the form

    CHECK_VOLTS, TCPIP->READ->MEAS:VOLT:DC?, %.2f Volts, 10, -10
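
    A sketch of a parser for one of those lines; my reading of the fields (name, route/command, display format, upper/lower limits) is a guess at the example above:

        #include <stdio.h>
        #include <stdlib.h>

        typedef struct {
            char   name[32];
            char   command[64];    /* e.g. "TCPIP->READ->MEAS:VOLT:DC?" */
            char   format[16];     /* e.g. "%.2f Volts" */
            double hi, lo;         /* pass/fail limits */
        } TestStep;

        int parse_step(const char *line, TestStep *s)
        {
            char hi[32], lo[32];
            /* " %N[^,]" skips leading blanks, reads up to the next comma */
            if (sscanf(line, " %31[^,], %63[^,], %15[^,], %31[^,], %31[^,\n]",
                       s->name, s->command, s->format, hi, lo) != 5)
                return -1;
            s->hi = strtod(hi, NULL);
            s->lo = strtod(lo, NULL);
            return 0;
        }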

  16. Well, you haven't really given much detail, but if a parent class is too much hassle, I would guess anything else would be too.

    However, there are some simple edge-case scenarios where you can make a flexible system that can be extended without writing additional code at all.

    Consider a DVM that is CMD->Response for all operations. By defining a "translation" file you can convert operations to commands (or a series of commands) and expected results, so that you can use a simple parser and just choose the file depending on the device. If a new DVM is used (but the tests are the same), then you just create a new translation file. You can extrapolate this technique to the tests themselves too. However, it all depends on the system, what you need to do, and the complexity/flexibility required.

    None of that has to do with classes or VIs, however. It's more a design choice to abstract hardware dependencies away from the programming environment.

  17. Now, with "Must Implement", we have the ability to contractually require functionality in subclasses while maintaining the ability to extend parent class functionality, having acknowledged that Dynamic Dispatch is the incorrect tool when distinct function interfaces are desirable.
    Of course. A polymorphic VI has the feature that, although you must have identical conpane layouts and directions, you can have different terminal types and numbers of defined terminals. However, the method is selected by the type wired to it or by the developer's selection. In my example, I would just be requiring that the class behaves like a polymorphic VI at run-time, with method selection dependent on the object rather than the data type wired to it. (In fact, I see no semantic difference between a polymorphic VI and a DD class except the choice mechanism.)
  18. Yet this "classic labview behaviour" does not exist as you suggest (in any manner other than adherence to convention), since there's no concept of the "Call Parent Method" node or an inheritance relationship defined. Call this pedantic, but those relationship/constructs/contracts do help establish intent for subclass design, and ensure certain essential parent abilities are called from the subclass.
    Well, I would argue it does exist. A VI is calling the parent (the sub-VI). You just don't have your hands tied by the syntax of classes.

    Is a VI not "inheriting" from a sub-VI (the Create SubVI menu command)? Can you not "override" the sub-VI's behaviour? Are you not "encapsulating" by using a sub-VI? Can you not "compose" a sub-VI's inputs/outputs?

    I have always argued that the only thing LV classes bring to the table above and beyond classic LabVIEW is Dynamic Dispatch and a way of organising that makes sense to OOP mindsets. If you are not using DD, then all you have is a different project style and a few wizards that name sub-VIs for you.

    If you look at your example, then you are simply calling a sub-VI. However, you are restricted in doing so by a peculiarity of the implementation as a class.

  19. I'm still not getting this (emphasis added by me)

     

    These two issues would be addressed by a "Must Implement" feature, shown below (note: this is not valid syntax with LabVIEW today, but through image manipulation we're able to demonstrate the proposed syntax)

    [Image: mock-up of the proposed "Must Implement" syntax]

    It is valid syntax if you don't use classes, since it is simply calling a common sub-VI. You seem to be (in this example) arguing for classic LabVIEW behaviour (which already exists).
