Everything posted by shoneill
-
Oh, this is why I would make use of Callbacks. You can delegate handling of the event to a callback VI which can send the new data via Queue, Notifier, Occurrence, global variable, file, smoke signals, whatever. I would still recommend a queue interface for the user (programmer), but leverage the subscribe / unsubscribe feature of events purely for the act of distribution to N listeners.
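The distribution scheme described above (one publish point fanning out to N subscriber queues, with subscribe/unsubscribe handled for you) can be sketched in Python. This is a minimal analogue, not LabVIEW code; the `Broadcaster` name and its methods are hypothetical, invented for illustration.

```python
import queue
import threading

class Broadcaster:
    """Fan-out sketch: each subscriber gets its own queue, and a
    publish delivers the message to every registered queue - the
    same effect as a LabVIEW user event with N registered listeners."""

    def __init__(self):
        self._lock = threading.Lock()
        self._subscribers = []

    def subscribe(self):
        """Register a new listener; returns the queue it should read."""
        q = queue.Queue()
        with self._lock:
            self._subscribers.append(q)
        return q

    def unsubscribe(self, q):
        """Remove a listener's queue; it receives no further messages."""
        with self._lock:
            self._subscribers.remove(q)

    def publish(self, msg):
        """Deliver msg to ALL currently subscribed queues."""
        with self._lock:
            for q in self._subscribers:
                q.put(msg)

b = Broadcaster()
q1, q2 = b.subscribe(), b.subscribe()
b.publish("hello")
```

Each listener still consumes through an ordinary queue interface, as recommended above; only the act of distribution is centralised.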
- 16 replies - Tagged with: many to many messaging, queues (and 2 more)
-
-
I had a quick look. I thought it was very over-engineered and offered no significant advantage over simply using the Queue primitives themselves, but then I realised that you force ALL messages to ALL subscribers, something which Queues normally don't do. If you want to present different Queues for publishing and subscribing, you can utilise Event Callbacks for the broadcasting / subscription / unsubscription instead of having to code it yourself. I have half a working version running; maybe I can post it tomorrow.
-
On a slightly different note: in the original discussion leading up to this side thread, the ball was set rolling by the mention of "If you derive Sequence and Step from a List object." This involves inheritance, with "derived class" referring to the class which has inherited from some base class. But it has just occurred to me that the later text (in this thread), "show you an example without classes at all just to show that using lists has nothing to do with inheritance", seems to contradict that. Perhaps we have our lines crossed? Perhaps you meant to refer to object composition in your original "derive" comment? In that case we're actually talking about the same thing. The entire "Inheritance vs Composition" question is based on the typical interpretation of the term "derived class", which is pretty well established to involve inheritance. If it was not meant in that sense, then the entire discussion is moot. That would also explain your surprise at the discussion being about composition vs inheritance at all.
-
By the way, I recently came across several discussions on the best way to implement the "Composite pattern". Some claim that the interface for adding, removing and listing sub-objects (the interface, not the actual code - this still leaves the concrete implementation the freedom to implement whichever method of grouping it requires) should be part of the base object type. This still feels like interface bloat to me: I am forcing a single object (say, a Circle) to implement methods like "Add Child" or "Remove Child" even though those methods make no sense for that object. You can't add anything to a Circle without violating its identity as a circle.

Others claim that two differing interfaces are required, one with and one without the extra methods. The addition of an "Is composite?" query then allows the user to determine whether the object is comprised of sub-objects or not (or whether it's even capable of being, even if it currently is not). A static cast could then be made to allow access to the extra methods if really required.

I tend to think sometimes, within certain application spaces, that a sequence of test routines (for example) is not simply a collection: the ordering is important and, more often than not, leaving out a single step will negate the benefit of the entire sequence. Can this be aligned with the composite pattern? Is there a different name for this? It's like using the composite pattern during object creation but not beyond that. A Sequence can be created using single steps or other sequences, which in turn can consist of single steps or other sequences, which in turn... This seems to obey the ideas of the composite pattern, but the ability to access, modify or delete sub-objects afterwards can be quite detrimental in these situations. Has anyone else got an opinion on this?
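The "two interfaces plus an Is composite? query" variant described above can be sketched as follows. A minimal Python illustration with hypothetical names (`Shape`, `Circle`, `Group`); the point is that only the composite subtype exposes child management, so a leaf like Circle is never forced to implement "Add Child".

```python
class Shape:
    """Base interface: deliberately NO child-management methods,
    so a Circle never has to implement add_child()."""
    def area(self):
        raise NotImplementedError
    def is_composite(self):
        # the "Is composite?" query: clients check this before
        # "casting" to the richer Group interface
        return False

class Circle(Shape):
    """A leaf: composing children into it would violate its identity."""
    def __init__(self, r):
        self.r = r
    def area(self):
        return 3.14159 * self.r ** 2

class Group(Shape):
    """The composite: only this subtype carries the extra methods."""
    def __init__(self):
        self._children = []
    def is_composite(self):
        return True
    def add_child(self, child):
        self._children.append(child)
    def remove_child(self, child):
        self._children.remove(child)
    def area(self):
        return sum(c.area() for c in self._children)

g = Group()
g.add_child(Circle(1))
g.add_child(Circle(2))
```

For the ordered-sequence case raised above, one could freeze the child list after construction (e.g. build via a constructor argument and expose no add/remove at all), which matches the idea of "composite during creation but not beyond".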
-
Any chance of a version for LV 2012 or does it utilise newfangled features?
-
I have recently started using LVOOP on my FPGA targets to implement "pipes". I declare a datatype and then have a base class with a Read and a Write method. I can then choose at compile time whether these will be Registers, Handshakes, BRAM FIFOs or whatever. By defining these items outside of the sub-VIs and passing them in as arguments, I can dynamically link parallel loops on the FPGA, essentially creating an interface between processes. Using this together with multiple clock domains allows for some pretty cool re-use of code.
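The "pipe" idea above - one abstract Read/Write interface, with the concrete transport chosen when the object is created and injected into the loops - maps onto a textual sketch like this. Python analogue only, with hypothetical names (`Pipe`, `Register`, `Fifo`); it obviously ignores FPGA timing, but shows the structural point.

```python
from abc import ABC, abstractmethod
from collections import deque

class Pipe(ABC):
    """Abstract base: loops see only read() and write(), never the
    concrete transport - the analogue of the LVOOP pipe base class."""
    @abstractmethod
    def write(self, value): ...
    @abstractmethod
    def read(self): ...

class Register(Pipe):
    """Register semantics: read returns the most recent write."""
    def __init__(self):
        self._value = None
    def write(self, value):
        self._value = value
    def read(self):
        return self._value

class Fifo(Pipe):
    """FIFO semantics: reads consume writes in order."""
    def __init__(self):
        self._buf = deque()
    def write(self, value):
        self._buf.append(value)
    def read(self):
        return self._buf.popleft()

def producer(pipe: Pipe):
    # a "loop" that only knows the Pipe interface
    pipe.write(42)

def consumer(pipe: Pipe):
    return pipe.read()
```

Swapping `Register()` for `Fifo()` at construction time changes the transport without touching either loop, which is the compile-time choice described above.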
-
Well, the first thing I would recommend is VI-defined Registers. I have used these and it's cool being able to simply instantiate X versions of a given register in code. I try to keep as much as possible out of the project definition. DMA channels and so on need to be project-defined, but normal FIFOs, Registers, Block RAM and so on can be defined within the VIs. I have also used LVOOP (calling mainly static methods, although FPGA DOES support Dynamic Dispatch - but only if the concrete class at every node is discernible at compile time).
-
How do you get a strictly-typed VI Reference out of your LV File object here? Your LVFileRefs is generic, right? Do you choose the "Get VI Reference" manually? In my earlier post, I didn't mean to add the LVFileRefs object into the Children, I meant for it to stay in the Parent. If it's in the Parent, it's also in the Children. I was a bit unclear on that and you seem to have picked up on something I didn't mean to imply, sorry. I simply wanted to state that a "chain of responsibility" for creating the concrete implementation objects does not preclude composition in any way. Do you have the factory method for instantiating your child objects as part of your parent class? I would much prefer the chain of responsibility because, using dynamic loading of classes, it allows instant scalability. The way you are doing things, you have chained your parent to each of its children, unless I have misunderstood something. Giving a parent class knowledge of a concrete child implementation seems wrong to me.
-
How about using the "Chain of Responsibility" for part of your factory? Have all of the child classes accept a refnum and "ask" them if they can instantiate on that refnum. That way you can make your way through the list of known children until you find one willing to instantiate on the wire. This way you have kept the instantiation part in the class but still have the ability to retain the refnum itself within the class in question. You can still use composition with your LVFileRefs object held within the individual classes, but you need not expose the entire object to the user (especially not requiring the user to re-wire the object back in!). So for the part of your post where you say "NewRefs had to have an object to operate on": you have ALL child objects to operate on, but the first one which says "Yup, I can work with that refnum" gets passed on. If none want to take responsibility, the factory method fails. If several want to take responsibility, the first to report willingness wins.
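The chain described above - walk the list of known children, first one to accept the refnum wins, fail if none accept - looks roughly like this in Python. A sketch with invented names (`FileHandler`, `can_handle`, the `.txt`/`.bin` dispatch rule); the real dispatch criterion would of course inspect the actual refnum.

```python
class FileHandler:
    """Base class: each child decides for itself whether it can
    claim a given refnum (here crudely modelled as a path string)."""
    @classmethod
    def can_handle(cls, refnum):
        return False
    def __init__(self, refnum):
        # the refnum stays encapsulated inside the class that claimed it
        self.refnum = refnum

class TextFileHandler(FileHandler):
    @classmethod
    def can_handle(cls, refnum):
        return refnum.endswith(".txt")

class BinaryFileHandler(FileHandler):
    @classmethod
    def can_handle(cls, refnum):
        return refnum.endswith(".bin")

# the list of known children; with dynamic loading this could be
# discovered at run time instead of hard-coded
KNOWN_CHILDREN = [TextFileHandler, BinaryFileHandler]

def factory(refnum):
    """Walk the chain: the first child willing to take the refnum
    wins; if none accept responsibility, the factory fails."""
    for cls in KNOWN_CHILDREN:
        if cls.can_handle(refnum):
            return cls(refnum)
    raise ValueError(f"no handler accepts {refnum!r}")
```

Note that the parent class never names a concrete child; only the (potentially dynamically populated) list does, which is what gives the instant scalability mentioned above.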
-
Making the functionality in your non-DVR approach a method of the File class is probably the best way to go. Whenever I have the thought "But the user needs to do X", then I know it needs an object method to do that, because most users are stupid (myself included; in fact, especially myself).
-
Old content remains in lvclass files
shoneill replied to Steen Schmidt's topic in Object-Oriented Programming
Every class (as opposed to every object) stores its default value. Even when writing to a class via a static accessor, the overhead in combination with storing whether that object has default data or not leads to writes being slower than for similarly designed clusters. This "default data" behaviour also raises its head when flattening to XML, as only the OBJECT data is saved, not the DEFAULT OBJECT data. So every object points to a default instance of its own class in memory where the default values are stored. For this reason, mutation history may be more important than it appears: the objects themselves, as saved to disk, have no idea of any changes made to the non-saved default values of previous generations if those are not stored within the mutation history. I find it a very unwieldy construct, but my opinion on the matter is most likely rather insignificant.
-
Any particular reason why you're getting so annoyed about this particular topic? I have no stake to claim here nor any territory to defend. But it is becoming clear that instead of trying to understand my issues with the topic at hand, you're trying to reduce me to someone who either is or is not capable of having this discussion with you. Well (back to my strange disclaimer), I care not what you think of my credentials. By the way, I never attached the authority (of the authority fallacy) to yourself - it's clear you're not the expert on the topic (and that's not meant in an inflammatory way) - but rather to the author of the NAMOS document. I do not blindly accept what he has written there just because he's done some good things. That would be incorrect, and I would be surprised if you did either. I have a look at it and identify good and bad points. If what I think are bad points can be shown to be otherwise, I have learned and I can adjust my thinking accordingly. But at the moment I'm getting precious little feedback as to why you have such an issue with my stance, as opposed to having a problem with me even daring to take a stance. I have given points where I think the proposed work is weak. My gripe with the topic stands. If you agree, fine; if you disagree, tell me why. This is all completely irrelevant to the discussion at hand. If you want to continue to insult ME instead of discussing the merits or non-merits of my responses to your question, then so be it. Fire away.
-
Yes, I read it. I didn't refer to it (again) because I had already addressed this as a central problem in my first post. Either way, while he lists off some possible solutions to the problem, the absence of an actual fixed implementation leaves way too much room for wriggling to make the idea fit whatever you want it to. Hence my deus ex machina reference: take the problems you are experiencing, put them in a new box and call it a solution. Just don't open the box! If Project Open Cobalt is indeed based on the ideas of NAMOS, then I too would be interested in hearing about the details of actually implementing the ideas, where problems arose and where changes were required in order to achieve their goals. But then we'd be having a very different discussion. The idea of embedding the time scale "pseudo-time" in the peer-to-peer communications (as per Project Open Cobalt) is but one of a number of possible real-world-to-pseudo-time options the author mentions in his 1978 paper. If they have chosen a specific implementation, then the reasons for choosing it over the others may enlighten me. It makes a world of difference to me whether pseudo-time was the actual basis for TeaTime or whether it simply served as an inspiration for the implementation of a similar idea. There are lots of new things I learn all the time; most of these things are in themselves really old. I imagine that's true for all of us. I'm very open in admitting the limits of my current experience and (alleged) knowledge. Within the limits of this discussion, I fail to see what relevance the 30 years, or my past exposure (or non-exposure) to pseudo-time, have to do with the correctness of the idea or indeed my observed problems with it. It seems to be invoking the authority fallacy, something I have very purposefully and deliberately referred to in my "strange disclaimer". You could almost think I saw that one coming...
-
The following is my opinion based on my experience and having read some of the documents linked to earlier in the thread. I am not an expert, but I fear no expert opinion. I am also not deferent to people who claim to know more than me, because knowing more is not always the same as knowing better. If this leads people to think I'm talking about things I know nothing about, then so be it. Regarding my "cop-out": I dislike having discussions on purely theoretical ideas because it allows people to make silly statements like "Just assume X is true". I dislike that kind of discussion if it is apparent that such assumptions are essentially "deus ex machina" assumptions. I believe the essence of "pseudo-time" is such a thing, a sleight of hand (which the author amusingly uses as an example in his NAMOS document). It pretends to solve problems it has simply shifted to a different location. I may be wrong, but that's my take on it at the moment. Regarding the focus on CAP: you, not I, brought the ability to simultaneously satisfy C and A and P into the equation when you stated that a paper by David P. Reed debunked the idea that you can't have C and A and P in the same system "if you consider consistency to be able see data at the same pseudo time rather than absolute time". I find this argument highly suspicious, for the same reasons I think the whole idea of "pseudo-time" is suspicious. In a distributed system, pseudo-time suffers from the exact same propagation and synchronisation problems as other data. Even the formulation of "seeing data universally in pseudo-time" simply side-steps the entire issue. In a centralised system it may have merit, but not, as far as I can see, in a distributed system. My opinion on the idea of pseudo-time is that it's an interesting mental exercise but solves nothing in the real world, where response times are important and where results obtained by interfacing with the system cannot be revised afterwards.
"Oops, I shouldn't actually have put my hand in the blender - turns out it WAS switched on!" For data which stays completely WITHIN the bounds of the system, it might be fine (i.e. journalling), but as soon as you mix in real-world interfaces (where incorrectly reported values cannot be corrected after the fact), some of the benefits (and the consistency) are lost. Reading some of the NAMOS paper you linked to reveals a host of inconsistencies. In one place, pseudo-time defines the order of execution so that incorrect operations can be detected; in other places he talks of tolerating out-of-order pseudo-times where the real time determines the correct order. I fail to see how both can be used without completely destroying the whole concept. Even the creation of pseudo-time itself is pre-destined to be a central, shared, exclusively accessed resource in order to ensure that each message can be guaranteed a unique identifier, i.e. a global counter. This might work on a single machine, but once you cross physical boundaries (TCP) this approach has problems. This is also acknowledged at the very end of the document: "- that it can't handle a high degree of contention among the transactions acting on a particular object. We have suggested above a strategy that mixes together NAMOS and some sort of centralized transaction scheduling discipline." And linking all of this to exclusivity (page 63), "To say that no other program can interfere with the queue during the execution of the program means that the pseudo-temporal environment must provide a means to reserve a range of pseudo-times for exclusive use of the program, so that no other executing program can access the queue object in that range of pseudo-times", is essentially calling for a lock. It's not a global lock; it's a lock on a particular time T (or range T1 to T2).
So, taking the ever-increasing "pseudo-time" identifier, a centralised numbering system and the ability to refer to any current or older version of data, it sounds a lot like Subversion. For data which changes very infrequently this might be a workable approach, but for "lively" systems the memory requirement for an approach like this would be prohibitive. Not to mention the work required to handle conflicting updates. Dynamic deadlocks also open up a whole new group of problems. Also, the lock mentioned on page 63 becomes a lock on specific version numbers in Subversion (which leads us right back to deadlocks). I'm not saying the idea is without merit, but I don't see the revolution. The concepts exist and are in widespread use today - not for messaging in the Actor Framework, for example, but it's not completely new either. The combination of such a mechanism with a distributed messaging system would be interesting if it weren't for the aforementioned issues... So yes, it is a journalling messaging system, but problems still arise in ensuring the integrity of the pseudo-time entity itself, given the nature of distributed systems. The same problem observed in ensuring consistent data across machine boundaries exists also for ensuring the consistency of pseudo-time (latency, partitioning etc.). A system which aims to solve such problems cannot rely on itself being somehow magically impervious to the very same problems it is trying to solve. And as usual, if I am wrong I'll happily take any newly-gained information on board. As always.
-
"Execute when all inputs have been satisfied" is synchronous. If we do this, then we lose availability in the presence of partitioning (inputs never arrive). I can't comment on the overall idea because I find it rather theoretical. You can't simply define a different time scale, because not all of the actors are operating on that time scale. The invention of "pseudo-time" sounds terribly like a purely theoretical construct, as does the rebuttal attributed to David P. Reed, which works by assuming that everything is in "pseudo-time" (it isn't). I like the description HERE. Based on this, if we define a "pseudo-time", we can look in the logs and see that at time X the caller in Chapter 3 DID give his details and that they WERE in the system. Unfortunately, the caller in Chapter 3 does not conform to "pseudo-time"; he's calling in absolute time and expects an answer in absolute time, so the answer he got still frustrates him. Defining any given arbitrary time scale does not solve any of this customer's problems, because he is not bound by that scale.
-
This is the problem, though. I admit to having a much less than full grasp of the topic, but it seems to me that this consideration is a purely theoretical machination. In the real world, users and agents interact with a distributed system in absolute time, not pseudo-time. Translation between pseudo-time and absolute time is not free (or trivial) and will automatically run into the issues outlined in the CAP theorem. So by simply formulating the problem in pseudo-time versus absolute time, nothing has changed: you can't shift the interaction with the outside world to pseudo-time, and the CAP theorem still holds.
-
Generate occurrence in FOR loop returns same reference
shoneill replied to eberaud's topic in LabVIEW General
Destroying an Occurrence: the logic goes like this. You wire up a "Wait on Occurrence" with a Timeout of -1. This controls the loop execution - no polling due to timeouts. When do we exit this loop? With Queues and Notifiers we can destroy the reference and use this to let the listening processes know that the communications channel has been closed. With Occurrences you can't do this unless you destroy the occurrence. There is no error out for an occurrence, therefore Timeout = TRUE for an Occurrence set to wait forever becomes a quit condition. I'm not condoning the usage (I'd be closer to condemning it); I have never used it and would never choose it over other methods. But apparently there were times where existing code needed modification for such a quit method and this was the solution. Just to add another 2c: I'm actively REMOVING occurrences from our software whenever I need to refactor something. Not that anyone should think I'm a weird Occurrence fetishist or anything.
-
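The destroy-the-reference shutdown pattern described above (a listener blocks forever, and closing the channel is what tells it to exit) can be sketched in Python. Since Python queues have no "destroy" that errors out waiting readers, this analogue uses a None sentinel to play that role; the `listener` name is invented for illustration.

```python
import queue
import threading

def listener(q, results):
    """Blocks indefinitely on the queue (the analogue of Wait with
    Timeout = -1, so no polling). A None sentinel stands in for the
    destroyed reference and is the loop's only exit condition."""
    while True:
        msg = q.get()          # blocks forever; no timeout polling
        if msg is None:        # channel "destroyed": quit condition
            break
        results.append(msg)

q = queue.Queue()
results = []
t = threading.Thread(target=listener, args=(q, results))
t.start()

q.put("data")
q.put(None)   # shut the listener down cleanly
t.join()
```

With LabVIEW Queues and Notifiers the destroyed reference surfaces as an error in the waiting node, which is cleaner than the Occurrence timeout trick the post describes.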
Generate occurrence in FOR loop returns same reference
shoneill replied to eberaud's topic in LabVIEW General
I have seen a VI which actually destroys an occurrence. It's certainly not supported and I don't think it's included in a LV install either. I think it allowed a piece of code which was waiting on an occurrence with timeout "-1" to exit with the timeout flag set to TRUE, giving the loop an exit strategy. But I never actually used it, so don't quote me on that.
-
Are you launching the class member from within the class itself?
-
You can put a reference in each level of the hierarchy of your class. I don't see another way to do it. Level 1 functions will use the Level 1 reference. Level 2 functions use the Level 2 reference (which exposes Level 1 and Level 2 functions). Calls to Level 1 functions from Level 2 objects will still use the Level 1 reference. This isn't a problem, I think. The only other way I can think of is continuously doing a dynamic cast between different statically linked references and the single base reference stored in the base class. I wouldn't recommend doing this.
-
My no is defined purely and simply by the fact that I will not be at NI Week.
-
See, that's your problem right there... No, seriously: this happens with every XControl. It's like they have a subpanel border around them. I've grown to just accept it as it is. You have to manually re-position the label.