Everything posted by Aristos Queue
-
> for(imstuck): "And why would you create a child class of DAQ Output that doesn't write to the output of a DAQ, no matter what the data type is?"

Ah. Suddenly the issue makes sense. You might not have a single Write function on the child. It might have a compound way of setting up the write. That's an edge case, granted, but valid nonetheless. But let's ignore that edge: it is hard to see how the parent class gets involved. It can't specify the connector pane (conpane). It may not even be proper to specify the function name (i.e., one child might have "Write Boolean.vi" and another "Write Double.vi"). Overall, this seems mostly like the child class adding new functionality the parent knows nothing about. The closest it comes to the parent would be something like a tag on the poly VI: "all children must add one function of some name and conpane to this poly VI". Generally this doesn't seem like a real problem, since the lack of a Write VI will be caught at compile time... or, rather, at code-writing time since, as you point out, the class isn't particularly useful without this part. :-)
-
Well, that's unfortunate... I went to take a look at the Kickstarter... got this message: Drive with Dash is the subject of an intellectual property dispute and is currently unavailable. No need to check the servers — the rest of Kickstarter is doing just fine. If you are interested in this project, please check back later. Thanks for your patience.
-
for(imstuck): The problem you're having is the same one Jack is having. I'll be honest... I don't get the problem. It seems like a problem I *should* be having in my Character Lineator. In order to support the LabVIEW built-in data types, I either create a single function (for the primitive types) or I create a class that defines the serialization for the compound types (i.e., Waveform, Timestamp, etc., which are built out of lesser components). Those classes have to have some functions that are common across all the classes, but each with different types in the conpane. It seems like that is where I should be feeling a need for Must Implement, but I just don't see any difficulties whatsoever.

Because I don't see the problem of needing to define any sort of Must Implement, part of me wonders if I've just been working in software so long without this concept that I can't actually see the need for it. Or it's possible that LV has something like what you're asking for, and I'm just instinctively using it (because I created it) and you guys are missing it because I named it something weird. Jack has tried to explain it to me, and that didn't work, so maybe you can give it a try.

a) You clearly expect the parent class to be able to say something like "a child must implement this VI". But the parent class cannot specify what the conpane of that VI should be... not the types, not even how many terminals it needs. So what exactly does the parent class specify? Is it simply "children must have a VI of this name" and nothing else?

b) Let's say that a child doesn't implement this VI. Is the child class not runnable somehow? Or is it just not usable in its calling context?

c) If the child class *does* implement this VI, is it a VI that is somehow invoked by the parent or by a framework that uses the parent? I don't see how that can be, since the conpane isn't even specified, UNLESS part of saying "you must implement it" is also "and you must add a call to it to this case structure" or something like that. Is that the case? Or is this "must implement" VI something that a caller of the child class would use? If it is just something the user of the class would use, why is it of concern to the parent class whether that VI gets written or not? How is the existence or non-existence of this function something the parent even knows to specify?
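For contrast, most text-based OO languages answer question (a) by making the signature part of the contract: an abstract method fixes exactly what a child must implement, and the check happens at compile time (or, in Python, at instantiation). A minimal sketch, with invented class names:

```python
from abc import ABC, abstractmethod

class DAQOutput(ABC):
    @abstractmethod
    def write(self, value: float) -> None:
        """Every child must implement this, with exactly this signature."""

class VoltageOutput(DAQOutput):
    def write(self, value: float) -> None:
        print(f"writing {value} V")

VoltageOutput().write(1.5)   # fine: the contract is satisfied
# DAQOutput() raises TypeError: abstract 'write' was never implemented
```

Because the parent fixes the signature, "must implement" is enforceable; LabVIEW's inability to fix the conpane is exactly what makes the feature hard to specify there.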
-
I've given some more thought to my "add a straightjacket later" idea. It's a bad idea. ShaunR's comments provide a good starting point for explaining why.

Quick summary of the "add a straightjacket later" idea: all classes would be written with no restrictions (everything public, no requirements for override, an anarchist paradise). Then you would apply, as a separate file, restrictions to the class for a particular project or compilation. "Within this project, this function is private; this one is protected and is Must Override; this one is Must Call Parent; this one must have exactly three Feed Forward nodes; this one must meet O(n) performance characteristics..." and so on. (Note: that last one, about O(n) performance constraints, isn't something any compiler I know of can prove on its own at this time, but the ACL2 theorem prover is getting close, so within the next 20 years I expect many programming languages will be able to apply such constraints to interfaces.)

Forget classes for a minute. Every function (aka VI in LabVIEW) has preconditions. The function will not execute correctly unless those preconditions are met. There's a choice that a developer faces on every individual function: do you document the preconditions in a comment, or do you check them at the beginning of the function?

True story: there was a particular math function that only worked on ascending arrays of data, and it checked that the data was ascending and returned an error if it was not. A developer realized that the work that function was doing was exactly what he needed for a completely different math operation when used on a descending array. He couldn't use the function because it asserted the array must be ascending. And, in this case, he couldn't borrow the code either, because it was a compiled DLL he was calling into.
Was the author of the original function wrong to check that the array was ascending and return an error? Those of you arguing that everything should be public are making this exact argument, whether you realize it or not. The function was written with knowledge of its calling context. And in that context, it was an error to have any other type of array.

Let's take this back to access scope. Take this little snippet of code: it is a member of a class, and it finds the first value in an array that is between two range limits. What happens if "Range Limit 1" is greater than "Range Limit 2"? No values get found. So the programmer screwed up, right? It needs a check at the beginning to flip the range around if the limits are passed in backwards, right? Oh, and a value might not be found in that array... it might be empty. Probably need to add a Boolean output for "found?". Right?

No. It turns out that the class limits access to these fields. This array *always* has at least one element in it whenever this VI is called. And Range Limit 1 is always less than Range Limit 2. How can that possibly be? Because this VI is private. It is only called by callers who have already checked that the array is non-empty and has valid refnums in it.

This is NOT a case of putting too many restrictions on the code. This is a case of putting exactly the right restrictions on the code. The code for bounds checking and range checking and a whole lot of other pre-condition testing is NOT something you can add later. If you're not going to add the restrictions until later, you have to include handling in the public version for all the possible values of the inputs. But all that pre-condition checking is a performance hit. An *unnecessary* performance hit if the code knows the conditions under which it will be called. This is not some random library function. It is a member of this class. And it is proper to make it a private function to limit the conditions under which it may be invoked.
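The division of labor described above can be sketched textually (Python standing in for G; class and method names are invented): the public entry point validates once, and the private helper is then free to assume its preconditions hold.

```python
class RangeFinder:
    def __init__(self, values):
        # Public construction validates once...
        if not values:
            raise ValueError("values must be non-empty")
        self._values = list(values)

    def first_in_range(self, limit_a, limit_b):
        # ...and the public method normalizes the range, so the
        # private helper never sees a backwards pair of limits.
        lo, hi = (limit_a, limit_b) if limit_a <= limit_b else (limit_b, limit_a)
        return self._first_in_range_unchecked(lo, hi)

    def _first_in_range_unchecked(self, lo, hi):
        # Private: assumes self._values is non-empty and lo <= hi.
        # No bounds checks, no "found?" flag -- callers guarantee both.
        for v in self._values:
            if lo <= v <= hi:
                return v
        return None
```

The unchecked helper is cheaper precisely because its restricted access scope makes the precondition checks redundant, which is the point of the paragraph above.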
So "preconditions of a function" was one problem with the whole idea that caller requirements are separable from the code. The next is more interesting.

In thinking about the idea of the straightjacket, I kept coming back to the above paragraph. I also looked through various code bases I've worked on over the years, in G, C++, and C# (I didn't bother digging into any of my old Java code). What I found was this: there are cases where a completely wide-open initial base class is created... or, just as likely, an interface. Nothing private, just the functions that are needed to do the job, sometimes with placeholders for functions that may be filled in later. But then someone will create a "canonical implementation" of that base class or interface for a specific scenario. And that one does have all the restrictions placed onto it, specifically because it is designed to be the canonical implementation that *most* of the children inherit from. It isn't just "oh, for this project I need to have these limitations". It is actually "I am implemented in such a way that I only work properly under these limitations."

An easy example is a class with Init and Uninit functionality. The first base class may not do anything in its implementations. The first child -- aka the canonical base class for most of the other children -- overrides Init to register the object with another module of the application. This canonical implementation does the registration in its own implementation so that children derived from it do not have to do it manually themselves. But children might override Init and never override Uninit. The children will have a functionality mismatch if they override Init and fail to call the parent class implementation... the Uninit will throw an error trying to unregister from the core module when it was never registered in the first place. What point is there to leaving this as a runtime error? None.
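That mismatch is easy to reproduce in a textual sketch (Python standing in for G; the registry module and class names are invented): the child that skips the parent's Init only fails later, at Uninit time.

```python
class Registry:
    members = []

class Actor:
    """Wide-open first base class: Init/Uninit do nothing."""
    def init(self): pass
    def uninit(self): pass

class RegisteredActor(Actor):
    """Canonical base class: registers itself so children don't have to."""
    def init(self):
        Registry.members.append(self)
    def uninit(self):
        Registry.members.remove(self)   # raises if init() never registered us

class GoodChild(RegisteredActor):
    def init(self):
        super().init()                  # honors "Must Call Parent"
        self.ready = True

class BadChild(RegisteredActor):
    def init(self):
        self.ready = True               # forgot super().init()

good = GoodChild()
good.init()
good.uninit()                           # fine: it was registered
bad = BadChild()
bad.init()
# bad.uninit() raises ValueError at runtime -- it was never registered
```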
Better instead to mark the canonical Init as "Must Call Parent". And there are many other variations on this theme.

In short, yes, classes are designed for the environment in which they expect to be called. You *can* write classes that are fully agnostic of their calling environments, and those classes are completely valid for things like the public classes of published libraries. But anytime you have implementation code, you have calling conventions. Every function has its requirements; every class has its higher-level requirements. You write code to be independent to the degree that it needs to be or that you want it to be.

You complain that the guy downstream will just modify your code and complain when he breaks something? That could happen even if you make everything public! That is not an argument against putting proper restrictions on code. It's an argument against downstream users of a library modifying the library and then calling it your fault, and that's a discussion to have with your manager, not with your architect.

So, if you happen to have a library that is *designed* to have all of its functions open for anyone to call, where every function fully expects to be called across the full range of its input values, great! But you design for that, and your code looks totally different than if you design for a more limited calling environment. And in those other cases, your requirements are better checked by the compiler than by the runtime system -- you get better runtime performance, downstream developers spend less time figuring out what to code, and you have fewer edge-case errors springing up in deployed systems. If you aren't properly adding restrictions -- access scope, Must Call Parent, Must Override, etc. -- to your functions, you are wasting your developers' time and your end users' time.
I stand by my original statement: Usage requirements of a class, whether for callers or for inheritors, are best spelled out in the code such that the compiler can enforce them.
-
Ah, yes, another OO architecture question
Aristos Queue replied to GregFreeman's topic in Object-Oriented Programming
Another possibility: Look up "visitor pattern"... it's covered on the LV OO design patterns page over on ni.com. Basic idea: One "Do On DAQmx" object "visits" each object in your array and collects information about the task(s) to be performed. After visiting the entire array, it does the actions. For any objects that are not DAQmx objects, it tells them to go ahead and perform their action immediately. Various variations on this theme exist. One might be the right solution for you. -
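The visitor idea above, sketched in Python as a textual stand-in for G (class names are invented): the visitor batches DAQmx work for a combined task and lets everything else act immediately.

```python
class Instrument:
    """Non-DAQmx objects perform their action as soon as they are visited."""
    def accept(self, visitor):
        visitor.visit_generic(self)
    def perform(self):
        print("doing non-DAQmx action")

class DAQmxTask(Instrument):
    def __init__(self, channel):
        self.channel = channel
    def accept(self, visitor):
        visitor.visit_daqmx(self)

class DoOnDAQmx:
    """Visits each object, collecting DAQmx work to perform all at once."""
    def __init__(self):
        self.channels = []
    def visit_daqmx(self, task):
        self.channels.append(task.channel)   # collect for one combined task
    def visit_generic(self, obj):
        obj.perform()                        # non-DAQmx: act immediately
    def finish(self):
        print(f"starting one DAQmx task on {self.channels}")

v = DoOnDAQmx()
for item in [DAQmxTask("ai0"), Instrument(), DAQmxTask("ai1")]:
    item.accept(v)
v.finish()
```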
For Loop Pass Through Utility *Cross Post Link*
Aristos Queue replied to Norm Kirchner's topic in VI Scripting
A) There's a minor bug in your existing implementation... you need to close your VI reference. I've fixed this in my version (I moved the open of the VI Ref inside the "BD" case and then closed that ref at the end of that frame).

B) The new version that I've put up here detects whether any of the wires are Error Clusters and, if so, turns on the Conditional Stop terminal for the For Loop and wires it. If there are multiple Error Cluster wires, it adds a Merge Error node before the Conditional Stop.

My version is incomplete in one way... if you use this on a For Loop that already has a Conditional Stop terminal wired, it does not add the Or node to blend the existing stop conditions with the error cluster. I started to write that code but got bored. It wasn't the main use case for this VI -- it is intended predominantly for new For Loops -- but I've left my start on that code inside a diagram disable structure in case someone wants to finish it. BEFORE: AFTER: This new version still needs the other subVI from the original post. Scripting - For Loop Pass Through_With Conditional Term.vi -
> So that must be documented somewhere No, it doesn't have to be documented anywhere. And as much as you are willing to bet that it exists as a spreadsheet somewhere, I'm willing to make the counterbet. Why would I make the counterbet? Because I was working on this exact problem last month, and the only definitive way I found to identify whether a given primitive/datatype/control/subroutine/configuration was valid on a given target was to let the target syntax checker run. An individual node may be generally known to work on a target, but not in one of its configurations. Or only if the project is configured a particular way. The edge cases are rare relative to the total number of nodes, but there are still plenty of them.
-
Wait... so you want to be able to do this *without* loading the VI on the actual target? I'm fairly certain that even *we* could not write something that said a priori whether or not a VI would work on a given target, because each target defines what it accepts. That design allows us to write new features into core LV and have targets add support for them over time. Functions that work on one FPGA may not work on all FPGAs... functions that exist on one desktop target may not exist on all desktop targets. It is up to the particular target to decide whether to accept the code when the VI loads on that target.

So, no, I don't think there's any way to do this short of loading the VI on the target and seeing whether it is broken and, if it is, removing it (assuming your goal is to include only non-broken VIs). And if you build any sort of caching scheme for that information, you would want to invalidate the cache whenever the LV version bumps. The documentation is a broad categorization of yes vs. no, but any specific target may differ from the general declarations; loading a VI on the target and seeing if it is broken is the only reliable check. (Notice that the table doesn't say anything about FPGA, for example.)
-
Kudos value in the Idea Exchange
Aristos Queue replied to GregSands's topic in LabVIEW Feature Suggestions
I have not tested whether the type test or my tweak below performs better, but the tweak does fix the undetected coercion problem. -
This is easy. Write a VI with a conditional disable structure on the block diagram, with a frame for each of the cases you want to identify: "Desktop", "RT", or "FPGA". Then have each frame output the appropriate value of an enum. When you call that subVI in your code, you can test that enum value. Now, as for why you would want to do that instead of putting the conditional disable in the calling code so as to avoid the runtime check, I have no idea. But assuming there is a good reason, the solution above should solve your problem.
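A rough textual analogue of that pattern (Python; the per-target frames of the conditional disable structure become a platform check here, purely for illustration):

```python
import enum
import sys

class Target(enum.Enum):
    DESKTOP = "Desktop"
    RT = "RT"
    FPGA = "FPGA"

def current_target() -> Target:
    # In G this would be one conditional disable structure with a frame
    # per target, each frame emitting its enum value. The platform check
    # below is only a stand-in so this sketch runs anywhere.
    if sys.platform.startswith(("linux", "darwin", "win")):
        return Target.DESKTOP
    return Target.RT

# Callers test the enum value at run time:
if current_target() is Target.DESKTOP:
    print("desktop-only code path")
```

As the post notes, resolving the case in the caller's own conditional disable structure avoids this runtime branch entirely.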
-
[Discuss] BlinkingLED
Aristos Queue replied to LAVA 1.0 Content's topic in Code Repository (Certified)
mtat76's concerns are true but irrelevant. No matter what you do, there is work going on in the UI thread, and any blinking LED is going to have to do some work in the UI thread in order to affect the UI. This is not an "unfortunate aspect", it's the way things are designed to work. LabVIEW timeslices the UI thread to all requesting parties. Can you do things that hang the UI? Sure. Such a hang will hang all of your UI processes, not just this XControl. By and large, when those processes occur, they are at times when your user is not paying any attention to the rest of the UI. -
My lvlib paths/URLs are obviously absolute
Aristos Queue replied to flintstone's topic in Development Environment (IDE)
Rolf has the right answer... there is no relative path from one drive to another drive, so the path that gets stored is an absolute path. -
Fabiola de la Cueva: "I hide the configuration file in a hidden directory with a strange name. I do that to avoid the 'users with initiative' problem." [i.e., debugging why the system is broken only to discover that someone has been fingerpoking the setup.] That little phrase, "users with initiative"... so much more tactful than the terms I've heard developers use over the years. Not that I would ever say that about any of *my* users. :-)
-
Actor-Queue Relationship in the Actor Framework
Aristos Queue replied to mje's topic in Object-Oriented Programming
So when that case arises, Filter would send a message to Nested to handle it, perhaps packaging as a single message a whole block of messages that led to the problem. Doing so means that Nested assumes it is launched by a Filter for correct operation, since it does not include its own filtration system (something that would be redundant under the design of a private Nested actor class). Any other holes? -
Actor-Queue Relationship in the Actor Framework
Aristos Queue replied to mje's topic in Object-Oriented Programming
I am not sure I'm following your terminology. But if I'm reading this right, your example isn't quite what I was thinking, and tweaked, I don't think the race condition exists. Check my logic, please... Here's the original launch stack... Caller Actor | Filter Actor | Nested Actor Nested Actor is given Caller Actor's direct queue so Nested can send to Caller directly... the advantage here is that Filter then only has to worry about messages coming one way and not about sometimes having to pass them up to Caller. Now, here's the fix that I think stops your race condition: Filter should *never* send messages to Caller. Filter's job is all about shielding Nested. If Filter wants to say something to Caller, that's actually a real job, so have it send a message to Nested that says, "Please tell Caller..." Does that close the hole? All the state knowledge is now in Nested, in one place. -
There's a "mini-language" approach that I've seen done. You give your child classes (TCP/UDP/CAN/etc) a method "What To Do About This Error.vi"... that VI takes the error in and returns an array of string output. Each string is a command like "Log the error", "Drop the error", "Retry Operation", "Consign The Network To The Flames Sanctum Ignious Eternum", etc. The framework then does the commands one by one (like processing a message queue in one of the string-based state machines). Variations on that system abound... you can make the mini-language be an array of message classes, each of which invokes a particular method on the framework. Sometimes you can get away with having the "What To Do About This Error.vi" call methods of the framework directly, but most people don't like that because giving the child classes scope of the methods generally means the child can call methods at other times, potentially destabilizing the system. The string and msg class approaches strictly limit what the child can request and when.
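A minimal sketch of the string-command variant (Python standing in for G; the command strings and class names are invented): the framework interprets the child's returned commands one by one, so the child can only request actions the framework chooses to expose.

```python
class TCPLink:
    """A child class: decides WHAT should happen, but cannot do it itself."""
    def what_to_do_about_error(self, err):
        if err == "timeout":
            return ["Log the error", "Retry Operation"]
        return ["Log the error", "Drop the error"]

class Framework:
    """Processes the command list like a string-based state machine queue."""
    def __init__(self, link):
        self.link = link
        self.log = []
        self.retries = 0

    def handle(self, err):
        for cmd in self.link.what_to_do_about_error(err):
            if cmd == "Log the error":
                self.log.append(err)
            elif cmd == "Retry Operation":
                self.retries += 1
            elif cmd == "Drop the error":
                pass   # swallow it
            # unknown commands are ignored: the child cannot escalate

fw = Framework(TCPLink())
fw.handle("timeout")
```

Because the framework owns the interpreter, the child's vocabulary is strictly limited, which is the stability argument made above.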
-
Actor-Queue Relationship in the Actor Framework
Aristos Queue replied to mje's topic in Object-Oriented Programming
I haven't worried about this aspect much because it seemed to me that any actor that needs to shield itself from incoming messages from its caller can be implemented as two actors, one that listens to the outside world and drops abusive requests, and an inner one that actually does work. The inner one doesn't even necessarily have to route through the outer one for outbound messages. Thus separating the two queues is trivial for those actors that happen to need it, but most don't. Is there something wrong with that approach? -
Actor-Queue Relationship in the Actor Framework
Aristos Queue replied to mje's topic in Object-Oriented Programming
Here's the most detailed one: https://decibel.ni.com/content/message/33454#33454 There have been others, but they've been tangents in the middle of other threads. It's basically just me listing off the same workarounds that have been listed here, but a bit more detail about the options. So far, everyone I've pointed to that thread has found something there that works for them. -
Actor-Queue Relationship in the Actor Framework
Aristos Queue replied to mje's topic in Object-Oriented Programming
What use cases are you trying to solve? I want paragraphs describing particular functionality that you cannot achieve with the AF as it stands before we introduce new options.

I created the AF in response to one repeated observation: many users need to write parallel actor-like systems, but it takes a lot of time to design one that is actually stable, and it is incredibly easy to destabilize one with the addition of features. I've built a few of these systems, both with the AF and with other communications systems, and they are *hard* to debug, simply because of the nature of the problem. The more options that exist, the more you have to check all the plumbing when considering what could be wrong. We need the plumbing to be invisible!

I made "learnability" one of the AF's top priorities. I get mocked for that claim sometimes ("You call this learnable?!"), but when compared to the nature of the problem, yes, it is a very approachable solution. Introducing options is a bad thing unless we are solving a real need. So don't tell me "I can't do filtering on the queue," because that's a solution. Instead, tell me "I can't process messages fast enough" or "I need to handle only one copy of a given message every N seconds." Then we can talk through how best to implement it. In the case of filtering, there's a fairly long thread on the AF forums about various ways to do this with the current AF, and general agreement that those are *good* ways, not hacks or workarounds compensating for a hole in the AF.

Yes, the AF demands a particular programming style. But does that style limit the types of applications users are able to write? That's the real question. The consistency is part of what makes an AF app learnable -- all the parts work the same way. If there is something that cannot be written at all with the AF, that's when we talk about introducing a new option. So, please, spell out for me the functionality you're trying to achieve.
In terms of filtration, I think that's been amply (and successfully) answered. In terms of proxying, take a look at version 4.3. If there's something else, let me know. -
I posted a reply to MJE, but one section of my post there is relevant here:

> When I add a flexibility point, I prefer to do it in response to some user need, not just on the off chance that
> someone might need it, because every one of those points of flexibility for an end user becomes a point of
> inflexibility for the API developer, and, ultimately, that limits the ability of the API to flex to meet use cases.

That pretty much sums up what I've learned over my years of programming for component development. I had actually been putting together notes -- based on this conversation -- for a concept I was calling "a class' optional straightjacket". Where it falls short -- at the moment -- is module interoperability. Suppose we say that "all actors will instantly stop all activity upon receiving an Emergency Stop message", and we make that an optional straightjacket for actor classes. An actor that chose not to live by the straightjacket isn't just a loose actor; it is actually not an actor... its ability to be reused in other systems is actually decreased. It is, in a sense, unreliable.

That led me to consider "a class that chooses not to wear the straightjacket *is not a value of the parent class*". Bear with me here, because these notes are a work in progress, but you brought it up. If a parent class has a straightjacket that lets it be used in an application, and a child class chooses not to wear the straightjacket, then the child can still inherit all the parent class functionality, but it cannot be used in a framework that expects classes wearing the straightjacket. This makes it very different from an Interface or a Trait, because the parent class *does* wear the straightjacket, but inherited children do not necessarily do so. Thoughts?
-
Actor-Queue Relationship in the Actor Framework
Aristos Queue replied to mje's topic in Object-Oriented Programming
Before everything else: have you looked at experimental version 4.3? Does the option to add actor proxies satisfy your use cases? If that does not address them...

There's a whole lot of thinking behind the walls in the Actor Framework. I'll try to walk through it. Up front, I want to say that I'm totally open to changing parts of the AF... lots of it has already changed over the last two years of user feedback. These are the arguments for why it is the way it is now. They are not necessarily reasons why it has to stay that way.

1) Assertions of correctness. Can you guarantee the correctness of a message queue that drops messages? Maybe, but not necessarily... the message that gets dropped might be the Stop message. Allowing the pluggability of arbitrary communications layers into the framework breaks the assertions that allow the framework to make promises. I've tried to make sure that no one can accidentally reintroduce the errors that the AF is designed to prevent (a slew of deadlocks, race conditions, and failures-to-stop, documented elsewhere). "The queue works like this" is a critical part of those assertions. What I found was that too much flexibility was *exactly* the problem with many of the other communications frameworks. When people tried to use them, they quickly put themselves in a bind by using aspects of the system without understanding the ramifications. This is an area where even very seasoned veterans have shown me code that works most of the time but fails occasionally... generally because of weird timing problems that cropped up from mixing different types of communications strategies.

2) Learnability of apps written with the AF. My goal was to build a framework that could truly be used by a wide range of users, such that a user studying an app written with the AF knows certain basics for certain. I wanted debugging to be straightforward.
I wanted a module written as an actor to be usable by other apps written as hierarchies of actors. Plugging in an arbitrary communications link causes problems with that.

3) Prevent Emergency Priority Escalation. I went to a great deal of trouble to prevent anyone from sending messages other than Emergency Stop and Last Ack as emergency-priority messages. Lots of problems arise when other messages start trying to play at the same priority level as those two. In early versions of the AF, I didn't have the priority levels at all, and when I added them, the successful broadcast of a panic stop was a major problem that I kept hearing about from users developing these systems. An actor that mucks with this becomes an actor that breaks the overall promise of the system to respond instantly to an emergency stop. "But I don't want my actor to respond to an emergency stop instantly!" Well, tough. Don't play in a system that uses emergency stops... play in a system that only sends regular stops or has some other custom message for stopping. Actors are much more reusable in other applications when they obey the rules laid down for all actors.

4) Maximize Future Feature Options. The Priority Queue class is completely private specifically because it was an area I expected to want to gut at some point and put in something different. Maybe it gets replaced with primitives if LabVIEW introduces a native priority queue. Maybe it gets an entirely different implementation. I did not want anyone building anything that depended upon it, because that would limit my ability to swap it for some other system entirely or to open up the API in a different way in the future. I firmly believe in releasing APIs that do *exactly* what they are documented to do and keeping as much walled off as possible, so that once user experience feeds back to say, "This is what we would like better," you don't find yourself hamstrung by some decision you didn't intend to make just yet.
When I add a flexibility point, I prefer to do it in response to some user need, not just on the off chance that someone might need it, because every one of those points of flexibility for an end user becomes a point of inflexibility for the API developer, and, ultimately, that limits the ability of the API to flex to meet use cases.

5) Paranoid about performance. Dynamic dispatching is fast on a desktop machine. Very low overhead. But I was writing a very low-level framework. Every dispatch, every dynamic binding to a delegate, gets magnified when it is that deep in the code. I kept as much statically linked as possible, adding dynamic dispatching only when a use case required it.

6) Auto Message Dropping Is A Bad Idea. There's a long discussion about message filtration in the http://ni.com/actorframework forum. It's generally a bad idea to try to make that happen with any sort of "in the queue" system for any sort of command system. The better mechanism is putting the filtration into the handler by using state in the receiver... things like "oh, I've gotten one of these recently and I'm still working on it, so I'll toss this new one." Or by introducing a proxy message handler... a secretary, you might say... who handles messages. Putting the proxy system together is what I was working on with people to build the networking layer that I published in January as version 4.2. (I added a cut point in response to a use case.)

7) Lack of use cases for replacing the queues means lack of knowledge about the right way to add that option. Who is the expert about the type of communications queue? The sender? The receiver? Or the glue between them? MJE, you mention querying the actor object for which type of queue to use. Is the actor really the one that should have that expertise? Perhaps Launch Actor.vi should have a "Queue Factory" input allowing the caller to specify what the comm link should be.
Honestly, I don't know the right way to add it because no actual application that I looked at when modeling the AF had any need to replace the queue. What they generally needed instead was one type of queue instead of the three or four they were using (i.e. communications through a few queues, some events, a notifier or two, and some variables of various repute). And I just noticed Daklu's signature. In light of this discussion, it makes me giggle: -
IMHO, the answer is "yes" when initially writing the code and "hell yes" when you're releasing version 2.0 and the requirements have changed. Every error caught by the compiler saves months of runtime errors, including errors that potentially are not found until end users are seeing them.
-
Deprecation? As a common solution? Do you work in an environment where revisions of classes take two years between iterations and where you can support all the old functionality during that time? I definitely do not. Back-end functionality of a component is revised on a monthly basis. Sure, deprecation *sometimes* works for widely distributed libraries of independent content, but it is a non-starter for most component development within an app.

As for them making changes to your own code, that's one of the strong arguments for distributing binaries, not source code. Myself, I prefer "distribute as source and let them fork the code base if they want to make changes, but know that they are now responsible for maintaining that fork through future updates." But I understand the "binaries only" argument. It solves problems like this one.