Everything posted by shoneill

  1. Oh, and BTW, I'm currently programming on FPGA targets with, you guessed it, lots and lots of objects. I certainly don't see how I could achieve similar flexibility and scalability without utilising objects. The fact that LabVIEW does a full hierarchy flat compilation of all objects (and thus all dynamic dispatch calls must be uniquely identifiable) makes some very interesting techniques possible which simply can't be done anywhere NEAR as elegantly without objects. Or is that not OOP in your book?
  2. I didn't say that the interfaces may comprise objects, but that the systems themselves may, even if their interfaces are object-free. I suppose the word "contain" is perhaps more precise than "comprise".
  3. But, like others here, I don't get your point regarding the evils of OOP (both in general and specifically in connection with this topic). I bet a lot of the Windows subsystems you are used to interfacing with may or may not comprise objects. What difference does this make?
  4. Interesting discussion. Seeing how I utilise user events for inter-process communication a lot, spawning callbacks dynamically (which then perhaps write to a notifier, or whatever method is preferred) means it should be rather simple to implement this feature. I'm hugely in favour of callbacks for this functionality either way, due to the ability to properly hide the user event refnum from the listener - a major leak in the otherwise very useful implementation of user event registrations. I might just give it a try at some stage.
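The appeal of hiding the event refnum from the listener can be sketched in Python - a hypothetical analogue, not LabVIEW code, with all names invented for illustration: the publisher keeps its listener registry private, so a listener only ever hands over a callable and can never fire the event or unregister other listeners.

```python
# Hypothetical analogue of user-event callbacks: the "refnum" (the
# publisher's internal listener list) is never exposed to listeners.
class Publisher:
    def __init__(self):
        self._listeners = []          # private: listeners never see this

    def register(self, callback):
        """A listener hands over a callable; it gets no event reference back."""
        self._listeners.append(callback)

    def fire(self, payload):
        """Only the owner of the Publisher can generate events."""
        for cb in self._listeners:
            cb(payload)

received = []
pub = Publisher()
pub.register(received.append)         # listener writes to its own queue/notifier
pub.fire("new data")
print(received)                       # ['new data']
```

The design point is the same as in LabVIEW: because the listener never holds the event reference, it cannot misuse it, which plugs the leak described above.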
  5. I have a vague memory of hearing (or reading) at one stage that one COULD implement a different flavour of the channels if required. I can't remember where or when I came across that information.
  6. I really don't like defer front panel updates (unless working with tree controls), so for me that makes it a PITA.
  7. There is a PITA way of getting what you want. Set the array index to the element you want to set focus to. Set the visible size of the array to a single element (the one you want to set). Set the focus. Set the size of the array back to the original settings. Reset the index of the array (if required). Like I said, PITA. By forcing the array to show only a single element, the control reference to the array element will be forced to the one you want. I haven't tried doing this in advance and then using several array element references, but I think they will all change in unison (as the array element is a property of the array).
  8. So one thing I have learned here is that calling a VI by reference is WAY faster than I thought. When did that change? I may have last tested in LV 6.1.
  9. So, questioning my previously held notions of speed regarding calls by reference, I performed a test with several identical VIs called in different ways: 1) DD method called normally (DD cannot be called by reference), 2) static class method called by reference, 3) the same static method VI called statically, 4) standard VI (not a class member) called by reference, 5) the same standard VI called statically. All VIs have the SAME connector pane controls connected, return the same values and are run the same number of times. I do NOT have the VI profiler running while they are being benchmarked, as this HUGELY changes the results. Debugging is enabled and nothing is inlined. The class used for testing initially had NO private data, but I repeated the tests with a string as private data along with an accessor to write to the string. Of course, as the actual size of the object increases, so does the overhead (presumably a copy is being made somewhere), but the same trend is observed throughout. I can't attach any images currently, LAVA is giving me errors..... Results were: 1) 0.718us per call (DD) - 1.334us with string length 640, 2) 0.842us per call (non-DD, Ref) - 1.458us with string length 640, 3) 0.497us per call (non-DD, Static) - 1.075us with string length 640, 4) 0.813us per call (Std, Ref) - 1.487us with string length 640, 5) 0.504us per call (Std, Static) - 1.098us with string length 640. It appears to me that calling a VI by reference versus statically adds approximately 0.3us to the call (nearly doubling the overhead). Given this, a single DD call is actually slightly more efficient than calling an equivalent static member by reference (or a standard VI by reference, for that matter). Of course, we're at the limit of what we can reliably benchmark here.
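The benchmarking methodology above - call the same do-nothing routine many times and divide the elapsed time by the call count - can be sketched in Python as a rough analogue (not the original LabVIEW benchmark; the function names are invented). Subtracting an empty-loop baseline isolates the per-call overhead from the loop cost itself:

```python
import time

def per_call_overhead(func, n=1_000_000):
    """Estimate per-call overhead in seconds: time n calls of func,
    subtract an empty-loop baseline, and divide by n."""
    t0 = time.perf_counter()
    for _ in range(n):
        pass                          # empty-loop baseline
    baseline = time.perf_counter() - t0

    t0 = time.perf_counter()
    for _ in range(n):
        func()                        # the call whose overhead we measure
    total = time.perf_counter() - t0
    return (total - baseline) / n

def do_nothing():
    pass

print(f"{per_call_overhead(do_nothing, 100_000) * 1e6:.3f} us per call")
```

As the post notes, at sub-microsecond scales results are noisy and anything else running (like a profiler) distorts them, so several repetitions and a quiet machine are needed for trustworthy numbers.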
  10. Ah crap, really? I keep forgetting that. Well, it was basically a discussion where Stephen Mercer helped me out with benchmarking DD calls and making apples-to-apples comparisons: LVOOP non-reentrant: 260ns overhead; LVOOP reentrant: 304ns overhead; LVOOP static inline: 10ns overhead; standard non-inlined VI: 78ns overhead; case structure with specific code instead of DD call: 20.15ns overhead; "manual DD" (case structure with non-inlined non-reentrant VIs): 99ns overhead. A direct apples-to-apples comparison of a DD call vs a case structure with N VIs within (manually selecting the version to call) showed that whatever DD is doing, it is three times slower (in overhead, NOT execution speed in general) than doing the same thing manually. Again, bear in mind this measures the OVERHEAD of the VI call only; the VIs themselves are doing basically nothing. If your code takes even 100us to execute, then the DD overhead is basically negligible.
  11. Try setting the DD VI to not be reentrant...... The tests I made were with non-reentrant VIs (and with all debugging disabled), and I saw overheads in the region of 1 microsecond per DD call. I have had a long discussion with NI over this over HERE. If DD calls really are by-reference VI calls in the background, that would be interesting, but I always thought the overhead of such VI calls was significantly more than 1us. Maybe I've been misinformed all this time.
  12. I don't understand what you are trying to show there, to be honest.....
  13. CAVEAT: I can't open the code provided as I don't have LV 2016 installed, so I don't know HOW the OP is calling the VIs by reference. I'm assuming it's over the Connector pane with a strictly-typed VI reference?
  14. Being someone on the side of "Why is LVOOP so slow?", I can't let this stay uncorrected. While DD calls ARE slow, they most certainly do NOT utilise Call VI by Reference in the background; they're too fast for that. My benchmarks have shown that the overhead for a pure DD call is in the region of 1us per call. Note that if a child calls its parent, that's TWO DD calls (and therefore in the region of 2us overhead). Please note this is purely the OVERHEAD of the call; the actual call may take considerably longer. But even if the code does NOTHING (kind of like here), the 1-2us are pretty much guaranteed. So LVOOP is slower than it should be, but I don't know if I'd equate it with calling VIs by reference. That's way worse, I think.
  15. I also understood the opposite of what you apparently meant (and also thought "hey, you CAN do that with NI").
  16. Check to see if any line intersects with any other. For N line segments, this requires N(N-1)/2 pairwise comparisons. Easiest way to do it. Once you know which lines intersect, you can remove the in-between lines and move the start / end points of the two directly affected lines to the intersection point.
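The pairwise check can be sketched in Python (helper names are invented, and the collinear-overlap edge cases are skipped for brevity): a standard orientation test decides whether two segments cross, and a double loop covers all N(N-1)/2 pairs.

```python
def orientation(p, q, r):
    """Sign of the cross product (q-p) x (r-p): +1 counter-clockwise,
    -1 clockwise, 0 collinear."""
    v = (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])
    return (v > 0) - (v < 0)

def segments_intersect(a, b, c, d):
    """True if segment a-b properly crosses segment c-d
    (ignoring collinear-overlap special cases)."""
    return (orientation(a, b, c) != orientation(a, b, d) and
            orientation(c, d, a) != orientation(c, d, b))

segs = [((0, 0), (2, 2)), ((0, 2), (2, 0)), ((3, 3), (4, 3))]
# Pairwise check: N(N-1)/2 comparisons for N segments.
crossings = [(i, j) for i in range(len(segs)) for j in range(i + 1, len(segs))
             if segments_intersect(*segs[i], *segs[j])]
print(crossings)  # [(0, 1)] - only the first two segments cross
```

Once `crossings` is known, the intersection point of each crossing pair gives the new start/end point for the two affected lines, as described above.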
  17. So it's not a bug in the node per se but a bug in the mind of the person who thought this is how it should operate.....
  18. A poly VI works as long as at least the connector pane pattern is the same, even if the data types of the individual connectors are different, this is true. If the code is rapidly changing, the maintenance of the poly VI can be a pain, though. Apart from that, I agree it's a very nice way to kind of get the best of both worlds.
  19. I spent many hours trying to solve similar problems, only to come to the conclusion that the solution I was looking for makes no sense logically. Asking dynamic dispatch to take care of which VI to call is only valid if the connector pane is invariant. As soon as the connector pane is no longer invariant, dynamic dispatch alone cannot know what to do. You can either move to a generic data input type (a solution I personally am not fond of) or you can implement some VIs to pass in arguments BEFORE calling "Initialise" (as others have pointed out). The second version has the advantage of allowing a new "Initialise" to be called on the object whenever you might want, as all of the parameters are internalised. I wouldn't do this as it doesn't solve your problem; it actually only makes it worse, because instead of casting from Variant to whatever data you require in your ACTUAL Initialise function, you are trying to cast objects, which is (AFAIK) less efficient. Doing this with classes versus variants brings nothing new to the table: you can still wire in the wrong configuration class and get run-time errors. The option of internalising the parameters into the object before calling "Initialise" can at least catch type errors (even if it can't catch missing VIs to set certain properties). Of course, the other option is to have a dedicated "Initialise" method for each class and ditch dynamic dispatch for this function altogether. By designing "Initialise Device X" and "Initialise Device Z" methods independently of each other, you can have the connector pane exactly as you require it, with inputs declared required and strict typing. Of all the methods mentioned here (assuming re-initialisation is not a big need), I would simply create non-dynamic-dispatch initialise VIs.
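The "ditch dynamic dispatch for Initialise" option can be sketched in Python (all class names here are hypothetical, not from the thread): each device type gets its own statically typed initialiser with exactly the inputs it needs, while the shared behaviour stays dynamically dispatched.

```python
class Device:
    def read(self):                       # shared API, dynamically dispatched
        raise NotImplementedError

class DeviceX(Device):
    def initialise(self, port: str, baud: int):   # X-specific, statically typed
        self.port, self.baud = port, baud
        return self

    def read(self):
        return f"X@{self.port}"

class DeviceZ(Device):
    def initialise(self, address: int):           # Z-specific signature
        self.address = address
        return self

    def read(self):
        return f"Z@{self.address}"

# Each initialiser has exactly the inputs it needs: no variant casting,
# no run-time errors from wiring in the wrong configuration object.
devices = [DeviceX().initialise("COM3", 9600), DeviceZ().initialise(42)]
print([d.read() for d in devices])    # ['X@COM3', 'Z@42']
```

The trade-off matches the post: you lose a uniform `initialise` entry point, but wrong-type wiring becomes an edit-time error rather than a run-time one.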
  20. Ooh, James, that's another interesting feature of RT deploys I forgot to mention. Random broken VIs in classes which run perfectly well on their own. I've seen this on many occasions also. I mentioned these issues to some R&D guys at NI week and they seemed surprised at the information. Where does all the information go when we send it to NI?
  21. Below follow my personal experiences and frustrations; YMMV. I have been cursing this "feature" for a few years and have tried raising the issue with NI. My dream would be context-free VIs. There are certainly a few issues with saving VIs when using RT targets. When working on multiple targets, apparently LV switches out certain libraries depending on the target. Using one VI on a Windows target and the "same" VI on an RT target may actually reference a different VI in the background, with LV switching out target-specific VIs as required. The user doesn't get any feedback on this; it's all controlled by the IDE. Unfortunately, the VI format is such that certain information about this specific VI is saved in the source code of the VI (path, connector pane and so on). So when deploying to an RT target, a save is forced, which again forces the VI to produce a different source than before (other things can also lead to unnecessary changes to source code). Going back to the Windows target will then prompt a save when closing the file because..... I don't actually know why exactly. Does the Windows version of the file receive a notification that the file was since saved from a different context? The very same thing happens with conditional disables: the currently active conditional disable is saved in the source code of a VI, even though upon loading this is guaranteed to be overwritten anyway. All of these things make code reuse over several targets a royal PITA. I was hoping any future developments would fix these problems, but I'm as yet unsure if NI has understood how annoying this behaviour is. I think any such pollution of source code by target-specific data (which is enumerated again on load anyway) should be avoided like the plague. I have since written a VI which uses SVN command-line tools to do an automatic LVCompare of the current on-disk VI hierarchy versus its counterpart in the repository, auto-reverting if LVCompare detects no change.
Shane PS: Regarding saving of VIs between Windows and RT..... The "funny" thing is when this all happens to VIs which are set to be inlined and actual code changes have been made. My experience is that LV will mark the inlined VI as being changed, but not the owning VI (whose compiled code actually contains the "old" version of the inlined VI). When deploying, LV will deploy the new inlined VI (which is a pointless exercise, as that VI as such is never called) but will still deploy the old version of the parent VI (which now contains out-of-date compiled code). This leads to all kinds of fun, with VIs running on RT not being marked as such in the IDE and vice versa. Yay. Productivity. My workflow has adapted to go through the entire VI hierarchy from the sub-VI to the main VI, one by one, saving each and every VI so that LV will actually propagate the changes in inlined code to the actual application being deployed.
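The auto-revert helper described in the post above could be sketched in Python (this is a hypothetical reimplementation, not Shane's VI): parse `svn status` for modified LabVIEW files, then compare each against its repository copy and revert the ones with no real change. Only the parsing step is shown; the comparison tool's name and exit-code behaviour are assumptions and would need checking against the actual tooling.

```python
def modified_vis(svn_status_output: str):
    """Return paths of locally modified .vi/.ctl files from `svn status`
    output. Lines whose first column is 'M' are modified working-copy files."""
    paths = []
    for line in svn_status_output.splitlines():
        if line[:1] == "M":
            path = line[1:].strip()
            if path.lower().endswith((".vi", ".ctl")):
                paths.append(path)
    return paths

status = """\
M       src/Main.vi
M       src/Config.ctl
?       build/output.log
M       README.md
"""
print(modified_vis(status))   # ['src/Main.vi', 'src/Config.ctl']
# For each path one would then run a diff tool (assumption: it signals
# "no difference" via its exit code) and `svn revert <path>` when only
# the target-specific noise described above has changed.
```

This keeps the repository free of the no-op saves that cross-target work forces, at the cost of an extra compare pass before each commit.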
  22. Reproducibility is the problem. As soon as we strip things down to get a format suitable to submit as a problem report, the problem invariably goes away. This has been the case with several things we have observed over the years, from weird RT deploys (the RT guys had never heard of our issue at all when I asked about it at NI Week) to corrupt FP control references in typedefs created by QD. Reduce the code, and the problem goes away. Great. How does NI propose we document these problems without sending a complete VM with LabVIEW installed AND the complete source code for our project? Sometimes these problems just get swallowed whole by the inconvenience of trying to demonstrate their existence.
  23. BTW, a tip for anyone who might want to contribute next year: if your camera has a large enough capacity (mine could record over 30 hours in one sitting), it's quite feasible to set the camera up in one room and then actually go see a different presentation. This way you can optimise your personal preferences separately from your preferences for recording. I say this because Mark was too quick reserving a seat in Room 15 for the advanced track, and without multi-tasking like that I would never have gotten to see Altenbach's presentation. There were also enough power sockets available that battery power was not required. I don't know if that's always the case, but it was here at least. Additionally, each room had an audio mixer at the front of the room; it should be possible to connect the mic input of a camera to one of the outputs of the mixer to optimise sound quality (if NI allows that) instead of using a microphone from the back of the room.