shoneill

Members
  • Posts: 867
  • Joined
  • Last visited
  • Days Won: 26

Posts posted by shoneill

  1. You can also check out a specific file at a specific revision by using the svn cat command and redirecting into a file:

    svn cat -r revision URL > Filename


    I use this to investigate files: detect files marked as modified or conflicting, check out the base revision and compare it against the working copy using LVCompare.

     

    I use this to detect "false" changes due to RT deploys, conditional disable changes in compiled code and so on.  Files which are marked as modified or conflicting but which show no LVCompare differences versus their base reference URL can be reverted without danger, and this makes commits much easier to handle when working on RT projects.  A rough sketch of how this check can be scripted is below.
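
    A minimal Python sketch of scripting that check, assuming a standard svn command-line client and that LVCompare.exe accepts the two VI paths as arguments (the install path and helper names below are made up):

    import os, subprocess, tempfile

    LVCOMPARE = r"C:\Program Files\National Instruments\Shared\LabVIEW Compare\LVCompare.exe"  # hypothetical install path

    def base_copy(vi_path):
        # Dump the BASE revision of a working-copy file into a temp file via "svn cat".
        fd, tmp = tempfile.mkstemp(suffix=".vi")
        with os.fdopen(fd, "wb") as f:
            f.write(subprocess.check_output(["svn", "cat", "-r", "BASE", vi_path]))
        return tmp

    def modified_vis():
        # List working-copy VIs that svn reports as modified (M) or conflicting (C).
        out = subprocess.check_output(["svn", "status"], text=True)
        return [line[8:].strip() for line in out.splitlines()
                if line[:1] in ("M", "C") and line.lower().endswith(".vi")]

    for vi in modified_vis():
        # Open LVCompare so a human can decide whether the change is "false" and safe to revert.
        subprocess.run([LVCOMPARE, os.path.abspath(vi), base_copy(vi)])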

  2. Well let's just call it an educated guess.  With a lot more "guess" than "educated".  But I DO know that each target has specific implementations of most FPGA nodes, so I could well imagine something like this being handled in the background.  Perhaps the XNodes are themselves created by other XNodes depending on the target being chosen.  Again, I have NO data leading me to think this, it's just intuition.  And guessing, did I mention that already? :lol:

  3. ^^ This

     

    This is why NI needs to keep spending major time in refactoring / bugfixing the current IDE.  Stuff like this happens to me all the time.

     

    Spent two days trying to find a seemingly impossible behaviour?  Oh, never mind, it's just that the IDE wasn't deploying VIs properly to the RT system... it was using an outdated file (correct local, incorrect remote).

    • Like 1
  4. Using the strategy pattern for a specific set of functions doesn't have to extend into the entire application.  It can itself be encapsulated in its own sub-system.  The essence of the strategy pattern doesn't even have to use LVOOP; it can be done with vanilla LabVIEW (but I'm not sure that would make sense).

     

    The Strategy pattern is NOT an architecture framework, it's an approach to solving a problem, the scope of which is left up to the programmer.  A rough non-LVOOP sketch of the idea follows below.
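
    As a loose illustration of that last point, a minimal non-OOP sketch in Python (standing in for, say, an enum-driven case structure in vanilla LabVIEW; all function and key names are made up):

    # Non-OOP "strategy": a plain lookup of interchangeable functions,
    # roughly what a case structure driven by an enum would do in vanilla LabVIEW.
    def read_serial(device):
        return f"serial read from {device}"

    def read_tcp(device):
        return f"tcp read from {device}"

    READ_STRATEGIES = {"serial": read_serial, "tcp": read_tcp}

    def read(device, mode):
        # The caller picks the behaviour by key; the calling code never changes.
        return READ_STRATEGIES[mode](device)

    print(read("DMM-1", "tcp"))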

  5. Slightly unrelated behaviour:

     

    When working on RT, I'd often get bad deploys (but without error) so that the code actually running on the RT would be different from the code the IDE thinks is running on the RT.

     

    It seems that the "is running" determination is sometimes a bit less reliable than it should be, especially when dealing with multiple contexts.  I think there are some race conditions in the code associated with this aspect of the IDE.

  6. Sounds like the Strategy Pattern.

     

    If you have a lot of shared code between the different modes, make an abstract base mode class with the read method in there (to be overridden by child classes) and inherit from that for the different modes.  Then use that class in your device class, which may itself have a completely different inheritance tree.  A rough sketch follows below.
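
    A minimal text-language sketch of that structure (Python standing in for LVOOP; all class and method names here are illustrative):

    from abc import ABC, abstractmethod

    class Mode(ABC):                       # abstract base "mode" class
        @abstractmethod
        def read(self, device):            # to be overridden by each concrete mode
            ...

    class VoltageMode(Mode):
        def read(self, device):
            return f"{device.name}: voltage reading"

    class CurrentMode(Mode):
        def read(self, device):
            return f"{device.name}: current reading"

    class Device:                          # lives in its own, unrelated hierarchy
        def __init__(self, name, mode: Mode):
            self.name, self.mode = name, mode

        def read(self):
            return self.mode.read(self)    # delegate to whichever mode was injected

    print(Device("DMM-1", CurrentMode()).read())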

  7. I don't really see any other use case, for my code at least.  I find it useful when I input an object to a generic function, do something to it and then return it.  I as the programmer know the object's type does not change, but LV sometimes cannot know this explicitly, so here "Preserve Run-Time Class" prevents objects exiting the sub-VI from being automatically cast (at edit time) to a more generic class.

  8. "Preserve Run-Time Class" will CHANGE the type of your object to match that of the selector (The RUN-TIME type of the object is guaranteed to match) whereas "To more specific" will leave it unchanged or return an error (and when it's successful, the EDIT-TIME type changes to match the selector).

     

    If the input types of your "Preserve Run-Time Class" do not match exactly, the output object will be different from the input object.  This can be very dangerous if you have any references / resources initialised in the incoming object.  What I don't know is how LV retains / removes certain fields of the objects if a type change is required within a tree of inheritance, or whether it just returns a default object (I think it's a default object).

     

    I only use "Preserve Run-Time Class" within either DD functions or functions which accept a static Object input but output the same type.  By wiring the output to "Preserve Run-Time Class" (and knowing that my code should ALWAYS return the same type as the input) LV then auto-adapts the output type of the sub-VI to match the input, even if the VI connectors are not DD.

     

    In your example, you would most definitely use "to more specific" rather than "Preserve run-time class".

     

    Shane.


    If your object is a visitor to NI Week:

     

    Lets say the inheritance hierarchy is

    NI Week Visitor -> NI Week Presenter -> Jeff K

     

    Right?  Jeff K is a presenter and a visitor, but clearly not all visitors are Jeff K.

     

    Using "To more Specific" on a Presenter with "Jeff K" as a selector will either allow you access to the Jeff K functionality on the SAME person or it will fail because the person just looked like him but isn't him and we all get to have a laugh.

    If you use "Preserve Run-Time Class" and the guy isn't actually Jeff K, the function will actually clone Jeff K for you to make sure that the object returned actually IS Jeff K, but because memories and skills are not cloneable, he's pretty much useless to you int hat state.  On the other hand, using "Preserve Run-Time Class" on Jeff K (who's presenting himself as a visitor only) ends up being similar to "To more specific Class" as it doesn't actually change the object.

     

    So the function "Preserve Run-Time Class" is to be used only when the TYPE of the object being forced is more important than the data contained within.

    • Like 1
  9. Thanks.

     

    I found those two links yesterday.  I still think that there are certain types of code smell which are typical of LabVIEW, and it would benefit the community if we could categorise and name them.  The immediate follow-on to that is detailing ideas on how to actually refactor them into better code.

     

    Please note that I'm not necessarily looking for resources for myself, I just happened to think that it's weird that we don't have any kind of resources in that direction.

  10. I think the way from UML to design pattern is more or less documented, but the aspect of refactoring I am missing is what to do when you have inherited a bloody mess of entangled (non-LVOOP) code from someone: getting from THERE to a proper decision of what, when and how to refactor it.

     

    I'm talking about large code bases which have grown organically over years and comprise several hundred (or even thousands) of VIs.  Many of these VIs make (to me) some VERY dubious design choices, but the ten-million-dollar question is: where to start refactoring?  There are several aspects of the code which I would love to change, but they touch nearly every VI in the hierarchy.  Making changes there breaks everything.  Knowing where to start unravelling the ball of twine is sometimes the difference between success and abject failure.

  11. I have just bought and am slogging through the aforementioned book on refactoring.

     

    While it focusses mainly on OOP (not a problem for me), I was interested to read the author's point that the scope of the information is really rather limited due to the specific idiosyncrasies of particular languages or applications.  I immediately thought of LabVIEW (I wonder why that would be) and wondered whether there were ever any ideas to approach refactoring along the same lines as this book.

     

    I know we've been receiving lots of helpful information regarding the seminal Gang of Four book on design patterns, but I think the missing link for many users who have existing code is how to get from here to there.

     

    Due to LabVIEW's different programming paradigm, refactoring often looks a bit different, and we have probably got our own set of spaghetti nightmares.  Add to this the aesthetic (readability) aspect and a significantly different approach is required.

     

    While I have my own refactoring experience I was wondering if there are any such standardised approaches to refactoring in LabVIEW?

  12. We created something similar.

     

    We first put all FPGA communications into sub-VIs.  Then we created a simulation loop for the RT where we called a model VI instead of the actual IO VI.  Because we kept the FPGA reference encapsulated within the sub-VI, any code which didn't actually make use of the FPGA VI ran fine.

     

    So I would recommend simply encapsulating the communications to your FPGA target, and things will work better.  A rough sketch of the idea is below.
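
    A rough sketch of the idea (Python standing in for the sub-VI layer; the driver call inside RealFpgaIO is hypothetical):

    from abc import ABC, abstractmethod

    class FpgaIO(ABC):                         # stands in for the sub-VI that hides the FPGA reference
        @abstractmethod
        def read_channels(self):
            ...

    class RealFpgaIO(FpgaIO):
        def __init__(self, reference):
            self._ref = reference              # FPGA reference never leaks outside this class
        def read_channels(self):
            return self._ref.read()            # hypothetical driver call to the real target

    class SimulatedFpgaIO(FpgaIO):
        def read_channels(self):
            return [0.0, 1.0, 2.0]             # model data instead of real IO

    def rt_loop(io: FpgaIO, iterations=3):
        for _ in range(iterations):
            print(io.read_channels())          # the loop neither knows nor cares which one it got

    rt_loop(SimulatedFpgaIO())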

  13. You are correct, but in most AF systems, the message simply calls a method in the Actor where the real work is done.  So, the code that launches B would be in A.  But that does not mean you could not put the code in the message 'Do' only and isolate B from A.  But then you need to ask how the message is being called.  In most cases, A is calling itself to create B due to some state change or other action.  In that case, some method in A needs to send the message to itself, and then A has a static link to the message which has a static link to B.  This is exactly what happened to me and took a while to understand what was happening since there is no way to visualize this.

     

    If the code for launching B is in A, then yes, A is dependent on B - that's the way the system is.  If A needs to have knowledge of B in order to be able to function, then those two modules are coupled whether you like it or not.

     

    With the code for launching B in a message, you are right that a static copy of that message on the BD of A will still cause a link.  Can you interact with B as a plain Actor?  If so, then you could also load the messages which are associated with certain state changes (used for deciding when to launch B) via a factory method - as a base Actor - and then your static dependencies are gone.

     

    I've done this kind of thing before, where I have a pre-determined state machine (actually based on User Events from specific UI elements in my case - nary an Actor in sight) and then I provide the objects (commands, messages, whatever you want to call them) to the running code at run-time to tell it "When A happens, do this".  The list of Events is static (this is the API) but the actual actions performed at any stage remain reconfigurable, and any specific dependencies disappear.  You have a skeleton of a state machine where the meat is provided at run-time, as sketched below.
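
    A minimal sketch of that "skeleton plus run-time meat" idea (Python; the event names and bound actions are made up):

    # Skeleton state machine: the set of events is fixed (the API), but the
    # actions bound to each event are supplied by the caller at run-time,
    # so the skeleton has no static dependency on the modules doing the work.
    class Skeleton:
        EVENTS = ("start", "measure", "stop")          # the static API

        def __init__(self):
            self._actions = {}

        def bind(self, event, action):
            if event not in self.EVENTS:
                raise ValueError(f"unknown event: {event}")
            self._actions[event] = action              # "when A happens, do this"

        def handle(self, event):
            self._actions.get(event, lambda: None)()

    sm = Skeleton()
    sm.bind("start", lambda: print("launching module B"))   # could be a factory-loaded command object
    sm.handle("start")
    sm.handle("stop")                                        # nothing bound: silently ignored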

  14. Regarding your original post: maybe I have the wrong end of the stick here, but here goes...

     

    Your Actor A has a base "Do" implementation which gets called on any message being received, right?  The only thing adding B as a dependency is the message set up to do this (and its associated dynamic dispatch "Do").

     

    I was of the opinion that this would make Actor B a dependency of the specific message for launching B and NOT of Actor A itself.  Or do you somehow have the messages stored within A?

     

    I don't currently use the AF because I too found it not a correct fit for what I do (and have been pondering an event-based system for quite some time now - unfortunately without concrete results).

     

    Shane.
