Posts posted by ShaunR

  1. Cheers Brian, this put me on the right path. It turns out 255.255.255.255 is a valid UDP broadcast address, but the datagram won't make it beyond the local network - that is, it won't make it past the network adapter - so it's not very useful for most situations. The correct way to do this is to broadcast to the address formed by masking with the complement of the subnet mask the adapter is operating on:

     

    [attachment: client2.png]

     

    Works like a charm.

    You might want to take a look at the Transport.lib as well. It has an example of UDP Multicast and some nice features such as encryption, compression, timestamps and payload size (you can use payloads larger than 1500 bytes on Windows and Linux).
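    Transport.lib is LabVIEW code, but the directed-broadcast calculation itself is easy to sketch in any language. Here is a minimal Python sketch (the function names and the example subnet are mine, not from the post) that ORs the host address with the complement of the subnet mask and sends a datagram to the result:

```python
import ipaddress
import socket

def directed_broadcast(ip: str, mask: str) -> str:
    """Compute the subnet-directed broadcast address: the host IP
    ORed with the bitwise complement of the subnet mask."""
    ip_int = int(ipaddress.IPv4Address(ip))
    mask_int = int(ipaddress.IPv4Address(mask))
    bcast_int = ip_int | (~mask_int & 0xFFFFFFFF)
    return str(ipaddress.IPv4Address(bcast_int))

def send_broadcast(payload: bytes, port: int, bcast_addr: str) -> None:
    """Send a UDP datagram to the directed broadcast address."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    # SO_BROADCAST must be enabled before broadcast sends are allowed.
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    try:
        sock.sendto(payload, (bcast_addr, port))
    finally:
        sock.close()

# Example: on a /24 network the directed broadcast is x.y.z.255.
addr = directed_broadcast("192.168.1.42", "255.255.255.0")
```

    Datagrams sent to this address reach every host on the adapter's subnet, which is usually what you actually want instead of 255.255.255.255.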

  2. Hey,

     

    I am going to be developing a lot of .lvlibs for reuse by others, and I would like to quickly generate documentation for them. I started programming a tool that recursively looks through a library using the "Owned Items[]" property, iterating through those to find folders, .vis and .ctls. Then, using the "VI Documentation VIs", the tool would output a Word document containing VI icons, terminal lists, and descriptions. Does a similar tool already exist? I would gladly use one if it does.

     

    C

    Yes. You can export a complete hierarchy to a couple of formats (including HTML) from within the IDE. Look in the Help under "Print Dialog Box".

  3. Dave: Yes, that is basically it.  When the parent quits, a local user event is fired to tell the UI loop to quit.

    Yes, all my "actors" inherit from that parent.  Think of it like the Actor Core in AF.  The parent handles all messages for that 'Actor'.  It also contains all state data.  Inside the parent code, I use dynamic dispatch to execute the various messages received.  The UI loop is a 'helper', not another 'actor'.

    I have implemented override-able error handling within the parent message code.  My 'helper' UI loop simply sends errors to the message handler to be dealt with.

     

    It occurs to me that my only solution might be to have the 'helper' UI loop call the error handler directly instead of passing it to the message handler.  That might mean I cannot reuse my error handler method as easily as I wanted to.

     

    That also still leaves me with the need to stagger the shutdown for the error logger so it can catch any shutdown errors from the other processes.

    Why not just have a "Shut-down" actor which knows what order to shut things down?
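    A minimal sketch of that idea, in Python rather than LabVIEW (all names here are hypothetical): the shutdown actor is the single place that encodes the stop order, so the error logger can be stopped last and catch any shutdown errors from the others.

```python
# Toy model of a dedicated "Shut-down" actor whose whole job is
# knowing the order in which to stop the other processes.

shutdown_log = []

class Actor:
    def __init__(self, name):
        self.name = name
        self.running = True

    def quit(self):
        self.running = False
        shutdown_log.append(self.name)   # record when we were stopped

class ShutdownActor:
    # The one place that encodes shutdown ordering:
    # the error logger goes last so it can catch shutdown errors.
    ORDER = ["ui", "slide_controller", "error_logger"]

    def shutdown(self, actors: dict):
        for name in self.ORDER:
            actors[name].quit()

actors = {n: Actor(n) for n in ("error_logger", "ui", "slide_controller")}
ShutdownActor().shutdown(actors)
```

    A real implementation would send quit messages and wait for acknowledgements, but the ordering logic lives in exactly one place either way.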

  4. I don't use Git either, but I expect the only LabVIEW specific files that should be ignored are those listed in the original post. I also like Shaun's idea to ignore the various OS-specific database caches.

     

    I agree with Jack about not excluding binaries. Some of my LabVIEW source code directly calls binaries that I've created in C++/C#. The C source is not part of these packages; rather, they just include the binaries produced by another package.

    Perhaps it is because I also maintain the binaries (generally). These are usually under a separate repository and pulled in when needed (not as easy in Git as in SVN) so that up-issuing them doesn't require up-issuing the LabVIEW source (separate tree).

    That's probably specific to my workflow, so perhaps not a good generic recommendation.

  5. If you have a sequence engine, then it becomes fairly straightforward to dictate what should happen when. This is one of the reasons why I use messaging with a queue for commands and events for responses.

    Most of my apps operate in the real world, where you need to do things like set a slide to the home position before turning the power off. It's an extension of your error-handling problem and is one of sequencing.

    The standard architecture that I have adopted is that only the sequence engine is allowed to issue commands; all other modules can only be listeners to other modules, and cannot interact with or act on them. An example of this would be a UI which can listen to the slide controller and show the position, but when the move button is pressed, the command must be sent to the Sequence Engine, which will then tell the slide to move (and probably a load of other things too). This topology does not have inherent broadcast capabilities, since each message must be placed on the appropriate module's queue and has to be considered and programmed by the designer. This is, however, trivial, and the pros far outweigh the cons, since you dictate the order of commands and precisely target the processes. It is very rare that a broadcast control message can be acted upon without consideration to other processes.
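    The topology described above can be sketched in a few lines of Python (a toy model, not LabVIEW code; all class and module names are hypothetical): modules expose a command queue that only the sequence engine writes to, while anyone may listen to a module's events.

```python
import queue

class Module:
    def __init__(self, name):
        self.name = name
        self.commands = queue.Queue()   # only the sequence engine writes here
        self.listeners = []             # other modules may listen to events

    def publish(self, event):
        # Events fan out to whoever registered; listeners cannot command.
        for listener in self.listeners:
            listener.append((self.name, event))

class SequenceEngine:
    def __init__(self, modules):
        self.modules = modules

    def command(self, target, cmd):
        # All commands funnel through the engine, so ordering
        # (e.g. "home the slide before power-off") lives in one place.
        self.modules[target].commands.put(cmd)

slide = Module("slide")
ui_log = []                     # the UI listens to the slide's events
slide.listeners.append(ui_log)

engine = SequenceEngine({"slide": slide})
engine.command("slide", "home")   # the engine decides what happens when
slide.publish("position=0")       # the UI sees this, but cannot command
```

    The UI's "move" button would likewise send a request to the engine, never directly to the slide.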

  6. (I'm sure you--MJE--know most of the stuff I say below, but I'm spelling it out for the benefit of other readers.)

     

    I propose that implementing an actor with the expectation that its message queue is a de facto job queue violates one of the fundamental principles of sound actor design. Message queues and job queues have different requirements because messages and jobs serve different purposes.

    <snip>

     

    There's a lot of gray between an actor and a helper loop.  The implementations can look very similar.  I try to keep my helper loops very simple to avoid race conditions often found in QSMs.  They are very limited in what operations they perform.  They don't accept any messages and send a bare minimum of messages to the caller.  ("Exited" and "Here'sYourData.") 

    This is my fundamental objection to the Actor Framework. It blurs the line between messages and processes, and it effectively funnels your application architecture into being the Actor Framework itself.
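    The helper-loop discipline described above - accept no messages, send only a bare minimum back to the caller ("Here'sYourData", "Exited") - can be sketched in Python (a toy model; the "doubling" job is purely illustrative):

```python
import queue
import threading

def helper_loop(jobs, reply: "queue.Queue"):
    """A 'helper', not an 'actor': it has no inbound message queue.
    It just works through its jobs and reports back minimally."""
    for job in jobs:
        reply.put(("HeresYourData", job * 2))   # illustrative work: double it
    reply.put(("Exited", None))                 # the only other message it sends

reply = queue.Queue()
t = threading.Thread(target=helper_loop, args=([1, 2, 3], reply))
t.start()
t.join()

messages = []
while not reply.empty():
    messages.append(reply.get())
```

    Because the loop accepts nothing, there is no command-response interleaving to race on, which is the point of keeping helpers this limited.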
  7. Deprecation? As a common solution? Do you work in an environment where revisions of classes take two years between iterations and where you can support all the old functionality during that time? I definitely do not. Backside functionality of a component is revised on a monthly basis. I mean, sure, deprecation *sometimes* works for widely distributed libraries of independent content, but that is a non-starter for most component development within an app.

     

    As for them making changes to your own code, that's one of the strong arguments for distributing as binaries, not as source code. Myself, I prefer the "distribute as source and let them fork the code base if they want to make changes but know that they are now responsible for maintaining that fork for future updates." But I understand the "binaries only" argument. It solves problems like this one.

    Deprecation as opposed to deletion. If you just delete it you will break any existing code anyway. It's nice to give developers a heads up before just crashing their software ;)

     

    What have binaries got to do with anything? That's just saying use it or use something else.
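    As an aside, the "deprecate before you delete" approach is easy to illustrate outside LabVIEW. A Python sketch (class and method names are hypothetical): the old method keeps working but warns, giving downstream developers a heads-up instead of just crashing their software.

```python
import warnings

class Slide:
    def move_to(self, position):
        return f"moving to {position}"

    def goto(self, position):
        """Deprecated: use move_to() instead. Kept for a release or
        two so existing callers get a warning rather than a break."""
        warnings.warn("goto() is deprecated; use move_to()",
                      DeprecationWarning, stacklevel=2)
        return self.move_to(position)

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    result = Slide().goto(10)     # still works, but warns
```

    Only after a deprecation period does the method actually get deleted, at which point nobody should still be calling it.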

  8. And, as for the "private" argument -- the other reason for having private methods, like private data, is because they are the parts of the parent class that the parent may reimplement freely without breaking children in future revisions. They are often precisely the parts that you do not want children using as cut points because then you cannot change those function signatures or delete those methods entirely in future releases without breaking the child classes.

     

    Of course you can change or delete them. You just need to "deprecate" them first (which to me you should always do anyway).

     

    If I have defined them as protected, it's not my problem if their code breaks child classes. They have made a conscious decision to override my bulletproof one for whatever reason sounds sane in their mind, so they should be aware of the consequences.

     

     

    A trivial one... I have a private piece of data, which, you admit, is useful to keep as private. I may implement private data accessors for that piece of data because it can aid the development and maintenance of the class itself to be able to breakpoint and range check in those accessors. But if I make the accessors protected or public, I have substantially limited my ability to change the private data itself.

     

    There are lots of others, IMHO, but that seems to me to be an easy one.

    You haven't changed anything (and don't try to bring public in as an equivalent - it's not). Similarly to my previous paragraph, they should understand what the consequences are since they understand why they are doing it. By making it private you are denying them the opportunity to add, in your example, logging to that accessor. So what will they do? Hack your code! When it finally all falls to pieces three weeks later, after they have forgotten about the hack and have put in a bug report for your class (which you won't be able to replicate), you will eventually find that out if/when they send you the code.

     

    You don't stop them from doing anything by making it private. What you do is force them to modify your code to make it fit their use case. Bear in mind also that it is only on rare occasions that this is required, but the argument is that if they wish to do so, however unsavory it may be, then they should be able to without modifying the original, tested code. Then it's their problem, not yours.
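    The point about protected accessors can be sketched in Python, where "protected" is only a naming convention (all names here are hypothetical): the child adds its logging in an override, without ever touching the parent's tested code.

```python
class Parent:
    def __init__(self):
        self._value = 0            # "private" state stays in the parent

    def _set_value(self, v):       # protected accessor: children may override
        if not 0 <= v <= 100:
            raise ValueError("out of range")
        self._value = v

class LoggingChild(Parent):
    def __init__(self):
        super().__init__()
        self.log = []

    def _set_value(self, v):
        self.log.append(f"set to {v}")   # the child's addition...
        super()._set_value(v)            # ...without modifying parent code

c = LoggingChild()
c._set_value(42)
```

    Had the accessor been truly private, the only way to get that logging would have been to hack the parent's source.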

  9. Since subclasses inherit state from the parent, it could be desirable to ensure the parent object is constructed properly by imposing 'Must Call Parent' in addition to 'Must Implement'. (Any parent enforcing 'Must Implement' without specific functional requirements such as this is probably better designed without the contract, allowing the subclass designer the freedom to construct the object with a constant and setters.) And 'Must Call Parent' can also ensure atomicity on construction when it's important to fully construct the object before invoking any methods on it.

     

    It sounds like the overzealous parent class designer you describe is taking contracts to an extreme, and has crossed the line of "strategically-placed contracts to make the class easy to use correctly and hard to use incorrectly".

     

    Just for clarity: do you suggest 'Private' scope should not exist, or should developers just consider 'Protected' way more often?

     

    Personally? More the latter (but I have heard reasonable arguments for the former). For example, in languages where you declare the scope of variables, it's imperative to define variables that maintain state as private (this restricts creating debugging classes). Methods, on the other hand, should generally be protected so that you don't restrict the ability to affect behaviour, and I have never seen (or can think of) any reason why any should be private. Even those that the developer sees as private "may" be of use to a downstream developer.

    There are two different types of reuse, and two different "ease of use" requirements, and they oppose each other. So the answer is that you put as many restrictions on the class as makes sense for the intended usage.

    I think that here we fundamentally disagree. There is only "re-use"; one "instance", if you like: can it be re-used without modification? Re-purposing without modification goes a long way towards that, and the more restrictions, the less it can be re-purposed. One is aimed at the user, the other at downstream developers, but they are not in opposition (we are not looking at Public vs Private). When re-purposed, you (as the designer) have no idea of the use-case, regardless of what you "intended". Suffice to say a developer has seen a use case where your class "sort of" does what he needs, but not quite. Placing lots of restrictions just forces downstream developers to make copies with slight modifications, and that is anathema to re-use.

    As for "ease of use", well, that is subjective. What is easy for you may not be easy for me, especially if it is a use-case that was conceived when your crystal ball was at the cleaners :D

  10. No class (or sub vi) ever declares what it is to be used for.  It only declares what it does, and it does that in code.  What it is used for, or how it is used, is entirely up to the person writing the calling code, not the person designing the class.

    This is also the crux of the Private vs Protected debate. Which is it better to do: put on so many restrictions that they have to edit your code for their use case (and you will get all the flak for their crap code), or make it easy to override/inherit so they can add their own crap code without touching your "tested to oblivion" spaghetti - regardless of what you think they should or shouldn't do?

  11. Nowadays I use a DB file for settings, which means you can mutate from version to version with a single, non-application-specific query and do other stuff like revert to defaults without having to write application-specific code. I'm also leaning further towards having a single "config" DB file for all applications, which works great, especially if you have multiple applications (it's like using the Windows registry, but it works on all platforms and you can copy it!).

     

    You can do something similar with INI files and have a global INI directory somewhere outside your applications (as bmoyer is suggesting) which has a sub-directory structure with the app name and version. Loading and saving is just a matter of building a path using the app name and version (i.e. non-application-specific). This doesn't get around mutation, but it means that if you un-install or re-install you can always get back to the same point, as you in effect build up a history even if they delete the entire application directory.
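    The single config-DB idea can be sketched with SQLite (a minimal Python sketch; the table layout and key names are my own, not from the post): one key/value table keyed by application and version, so carrying settings forward to a new version is a single, non-application-specific query.

```python
import sqlite3

# One "config" DB shared by all applications, keyed by app and version.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE config
                (app TEXT, version TEXT, key TEXT, value TEXT,
                 PRIMARY KEY (app, version, key))""")

# Save v1.0 settings for one application.
conn.executemany("INSERT INTO config VALUES (?,?,?,?)",
                 [("MyApp", "1.0", "timeout", "5"),
                  ("MyApp", "1.0", "port", "6342")])

# "Mutating" to v2.0 is one generic query: copy the old version's rows.
# No application-specific code needed; reverting to defaults is similar.
conn.execute("""INSERT INTO config
                SELECT app, '2.0', key, value FROM config
                WHERE app = 'MyApp' AND version = '1.0'""")

rows = conn.execute("""SELECT key, value FROM config
                       WHERE app='MyApp' AND version='2.0'
                       ORDER BY key""").fetchall()
```

    Because older versions' rows are kept, the file also accumulates the same kind of history the versioned INI directory gives you.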

  12. I wasn't around when we made this decision, but I would guess the rationale was something like the following:

     

    Backwards compatibility would be a big burden for us. Every time we made a change to the execution system, we would have to consider how older code would behave with the change.  It would increase our testing burden and make execution bugs (the worst kind of bugs!) more likely.  It would make some kinds of big architectural changes (like the DFIR compiler rewrite) even more scary, and we'd be less likely to take on that risk.  It would make the run-time engine bigger.

     

    Now the C runtime is backwards compatible (I think?), but I'd imagine they aren't adding as many new features as we are.  The pain is also eased because you get a C runtime installed with the operating system.

     

    OK. Played a bit more with your and Rolf's comments in mind.

     

    I will modify my original statement to:

     

    If you create the library using the LabVIEW build process, then the user should have that version of the run-time. The main reason for this, however, is more to do with the supported features that you may put in the dynamic library than with the compilation process (although NI tends to use different compilers with each version - so it is also a safer bet for that reason). Therefore it is possible to load a LabVIEW 2009 (32-bit) built library in an executable built in LabVIEW 2012 (32-bit) with the 2012 run-time, as long as you have BOTH run-time engines installed, but it will not be possible to load a 2012-built one in 2009 with the 2009 run-time if you have used features that are unavailable. This just pushes the maintenance overhead downstream to the installer. Similarly, a dynamic library built in 2009 can be loaded in the IDE of, say, 2012 with the appropriate run-times. If you do this, however, you need to test, test, and test some more.

     

     

    Dynamic libraries, however, are still nowhere near as bad as diagram-less VIs (LabVIEW dynamic libraries being slightly worse from a deployment perspective than C ones, it seems).

  13. I'm trying to understand if there is a licensing problem using a MySQL database with the NI DB Connectivity Tool. The ODBC drivers provided with it are GPL-licensed, and I think that if you link your code with them, there can be a problem if you plan to use a license different from the GPL.

    Moreover I find that NI DB Connectivity Tool is quite slow.

    I think you'll find that "linking" has a very specific meaning in GPL licensing, rather than "connecting", which is probably what you are thinking of (and what ADO facilitates).

  14. This is not entirely correct.  Let's say you have a LV 2009 32-bit built library (i.e. a .dll).  You always need a 2009 32-bit run-time engine to load this!  The one exception is if you're in the LV 2009 editor on Windows, but in that case you should have the run-time engine installed anyway so it's a moot point.

     

    It is not the case that a LV 2012 run-time engine can load a LV 2009-built library, no matter what features the built library uses. The same is true vice versa - the versions have to match for it to be able to load (although 2009 SP1 and 2009 count as the same version for these purposes).

    Can you expand on that, since that has not been my experience?

     

    Are we talking about MSVC dependency linking being the reason, or is there something else?

     

    ......later, after playing a bit......

     

     

    So that's pretty definitive. It looks like it checks. But I would still like to understand what the issues are, i.e. what makes a LabVIEW DLL different from a C DLL apart from feature support.

  15. We were planning on wrapping the DLL with the VIs and not exposing our users to the pain... but basically what you are telling me, is that from now on, I would have to keep building a new version of the LabVIEW driver for each version of LabVIEW, because the DLL would be version specific.

     

    I think I ought to clarify this. I assume you came to this conclusion from Rolf's comparison with panel-removed VIs. It's not actually as bad as that: dynamic libraries in themselves aren't so much version-specific as they are platform-specific.

     

    A dynamic library can be loaded in any version of LabVIEW with a caveat.

     

    If the library was written purely in C, you could load it in any version of LabVIEW and you wouldn't need the LV run-time for non-LabVIEW users (this you know).

     

    If you create the library using the LabVIEW build process, then the user should have that version of the run-time. The main reason for this, however, is more to do with the supported features that you may put in the dynamic library than with the compilation process (although NI tends to use different compilers with each version - so it is also a safer bet for that reason). Therefore it is possible to load a LabVIEW 2009 (32-bit) built library in an executable built in LabVIEW 2012 (32-bit) with the 2012 run-time, but it will not be possible to load a 2012-built one in 2009 with the 2009 run-time if you have used features that are unavailable. This just pushes the maintenance overhead downstream to the installer. Similarly, a dynamic library built in 2009 can be loaded in the IDE of, say, 2012. If you do this, however, you need to test, test, and test some more.
