Everything posted by Daklu

  1. Thanks for the feedback, everyone. That's what I thought, but since I rarely use variants I don't have a good understanding of how other developers use them. Hopefully I'll get LapDog.Messaging v2.1 out soon.
  2. A while back James posted this question. So... I'm working on an update for LapDog.Messaging. Among other things, the update adds a library of native array types. I'm strongly considering adding a VariantMessage based on James' arguments on that thread. My question... Is there sufficient value in having a VariantArrayMsg included in the array message library to justify the additional overhead? I prefer to keep the library reasonably small so projects don't get bogged down with lots of unused classes, and since an array of variants can just be stuffed inside a regular VariantMessage, I'm not seeing a lot of value in a VariantArrayMsg. Thoughts?
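A minimal sketch of the trade-off, with Python standing in for the LabVIEW classes (the class names echo the LapDog naming but are purely illustrative, not the actual library):

```python
from typing import Any

class Message:
    def __init__(self, name: str):
        self.name = name

class VariantMessage(Message):
    """Carries any single payload, much like a variant."""
    def __init__(self, name: str, data: Any):
        super().__init__(name)
        self.data = data

# A dedicated VariantArrayMsg would only wrap a list; the generic
# variant-style message already handles that case directly:
msg = VariantMessage("new readings", [3.2, "overrange", None])
```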
  3. I thought containment was just a general term encompassing aggregation and composition. Wikipedia implies containment is a specific form of composition, which is in turn a specific form of aggregation... (Italics mine.) I'm not particularly fond of that topology and I'm not sure where the author got it from. The two linked references don't appear to support it. MSDN has its own definitions, though they appear to be tied to specific features available in COM. Me too. I feel like sometimes I get nit-picky with terminology, but it sure gets hard to understand what others are talking about when we can't agree on the definitions of fundamental concepts.
  4. Here's how I do it. I think it qualifies as aggregation but I'm interested to hear what you think. Suppose I have two classes, Car and Engine. I don't want to instantiate a Car object without a valid Engine implementation, so on Create Car.vi I add a required input for Engine (which I instantiate with its own creator method.) Furthermore, the Car.Destroy method returns the Engine object that was present when the car was destroyed. I can create an Engine object and manipulate it before creating the Car object. I can also manipulate the Engine object after the Car object is destroyed. What I can't do is directly manipulate the state of the Engine object inside the car via DVR or references. If the Engine needs to be manipulated I either write Car methods that delegate to Engine methods, or I write accessors for the Engine object. I prefer delegation as it preserves encapsulation, but if I discover I'm writing a delegating Car method for every Engine method I likely have a design flaw. At that point I'd probably go back to accessors for the Engine object (or perhaps have an Engine input terminal on the relevant Car methods.) The user calls Car.GetEngine to retrieve the object, performs operations on it, and calls Car.SetEngine to put it back in the Car.
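A rough text-language sketch of that pattern, with Python standing in for the LabVIEW classes (method names mirror the post; everything here is hypothetical, not actual project code):

```python
class Engine:
    def __init__(self, displacement_l: float):
        self.displacement_l = displacement_l

    def start(self) -> str:
        return f"{self.displacement_l} L engine started"


class Car:
    def __init__(self, engine: Engine):
        # "Create Car.vi" with a required Engine input: no Car without a valid Engine.
        self._engine = engine

    def start(self) -> str:
        # Delegation: the Car exposes behavior by forwarding to its Engine.
        return self._engine.start()

    def destroy(self) -> Engine:
        # "Car.Destroy" hands the contained Engine back to the caller.
        return self._engine


engine = Engine(2.0)      # create and manipulate the Engine first
car = Car(engine)         # then aggregate it into the Car
print(car.start())        # interact with the Engine only through Car methods
engine = car.destroy()    # get the Engine back when the Car goes away
```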
  5. (Hadn't seen this post before...) This doesn't make sense to me. As far as LV is concerned, a default object is just as real as a non-default object. I'm guessing what you're really looking for is a way to know if the bundled object has actually been configured. With that in mind, here are other options I use at various times:
     LabVIEW doesn't have constructors, but one of the first things I do when I create a new class is add a "Create MyClass" method to it. I add required inputs for any data members the class needs to operate correctly. By convention all my objects are instantiated using the creator method instead of by dropping a class constant. In your case, your "Create OwningObject" vi would have a required input terminal for "Owned.lvclass," and users will not be able to instantiate an owning object without also giving it a configured owned object, assuming they are following the convention. (This avoids your problem 90% of the time.)
     Ditch the comparisons and let your owning object perform its operations on the default object. Ideally I give the default object some reasonably simple and useful functionality. If that's not possible and the operation fails you can return a descriptive error message. Ultimately it's up to the developer to make sure they are using your api correctly. (This covers another 9% of the cases.)
     Use LVObject as a placeholder for the owned class in the owning class. Create a private vi to retrieve and downcast the object to the owned class. If the downcast fails you'll know the user has not set the object correctly. This is very similar to what James suggested, but you don't have to create a separate parent class that doesn't do anything. (Another 0.9% covered.)
     Include an "IsConfigured" parameter in the owned (not owning) object. I'm not particularly fond of this approach, but sometimes it is necessary.
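A minimal sketch of the placeholder-and-downcast option, with Python standing in for LVObject and the owned class (names are made up for illustration):

```python
class Owned:
    def do_work(self) -> str:
        return "working"

class OwningObject:
    def __init__(self):
        self._owned: object = object()   # generic placeholder, like LVObject

    def set_owned(self, owned: Owned) -> None:
        self._owned = owned

    def _get_owned(self) -> Owned:
        # Private accessor: a failed "downcast" tells us it was never configured.
        if not isinstance(self._owned, Owned):
            raise TypeError("Owned object was never set; call set_owned first")
        return self._owned

    def run(self) -> str:
        return self._get_owned().do_work()
```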
  6. Is this a design question or a refactoring question? i.e. Are you rewriting from the ground up or trying to incrementally refactor the current app into actors?
     Regarding refactoring, I've had a few projects where I've tried refactoring from by-ref data (usually functional globals/action engines) to by-value data passed via messages. I think success depends a lot on the specific application and how well the original developer separated the functional components. The by-ref code I've seen is often highly coupled, making it hard to refactor to a by-value paradigm without making big, sweeping changes. I believe it is possible to do it incrementally, I just haven't been successful doing it yet.
     If it's a design question, here's how I would approach it... The Instrument Actor would replicate the instrument as much as possible. In other words, if the instrument doesn't store the most recent 5 minutes of data internally, the Instrument Actor does not either. Once you have that actor fleshed out and working you can decide how you want to implement the data buffer. The buffering functionality would be in a separate class. It could be an Instrument Actor child class, or a class that wraps the Instrument Actor (aggregation,) or a class sitting in between the Instrument Actor and the rest of the application (also known as man-in-the-middle.) I prefer aggregation or man-in-the-middle over inheritance as I think it's much easier to independently test the buffering code using those techniques. If your buffering is built into the Instrument Actor (or its child class) directly you'll have to have an instrument connected to test it. Many others prefer inheritance. The best decision depends a lot on your circumstances and goals.
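A minimal sketch of the "wrap the Instrument Actor" option in Python (the class and method names are hypothetical): the buffering lives in its own class, so it can be tested against a fake instrument instead of real hardware.

```python
from collections import deque

class InstrumentActor:
    def read(self) -> float:
        # In the real system this would talk to the instrument.
        raise NotImplementedError

class BufferedInstrument:
    def __init__(self, instrument: InstrumentActor, max_samples: int):
        self._instrument = instrument
        self._buffer = deque(maxlen=max_samples)   # e.g. the last 5 minutes of data

    def poll(self) -> float:
        value = self._instrument.read()
        self._buffer.append(value)
        return value

    def recent(self) -> list:
        return list(self._buffer)

# Testing the buffer needs only a stub, not a connected instrument:
class FakeInstrument(InstrumentActor):
    def __init__(self):
        self._next = 0.0
    def read(self) -> float:
        self._next += 1.0
        return self._next

buffered = BufferedInstrument(FakeInstrument(), max_samples=3)
for _ in range(5):
    buffered.poll()
print(buffered.recent())   # [3.0, 4.0, 5.0]
```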
  7. Just because it's an accepted idea doesn't mean it's a good one. I took the liberty of downloading the code you provided for the user on the other thread and poked around. Before I comment on the code, let me first say this... Your solution is based on the code supplied by the other user, who was trying to address a very specific problem. Sometimes we help people bandage the cut on their arm without pointing out the railroad spike buried in their skull. I understand that. In that particular case it sounds like the solution you gave him does the job. I also understand examples are necessarily simplistic and cannot cover all the cases we are likely to encounter. Still, I'm not sure it's an example of a good general purpose solution.
     First, the purpose of your Not A Refnum test is simply to initialize the references on startup. If anyone calls the AE's Close action the entire thing breaks down. The NAR test isn't particularly helpful for keeping the system up and running. The limitations of that AE would be far more clearly communicated by replacing Not A Refnum with an Is First Call function. (Or even better, an explicit "Allocate References" action.)
     Second, using an AE as a poor man's mutex can't prevent a race condition as long as the refnum is exposed to other code. The only way to verify a race condition does not exist is by inspecting the code and making sure no operations are performed on that reference anywhere other than in the AE. Imagine how much fun that will be on a large project.
     There are other things that smell too (one event refnum attached to multiple event structures(!?), no clear owner of the references, etc.) but they are sidebars to the question of pretesting references.
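A minimal Python sketch of the second point, assuming a made-up guarded container: the "mutex" only protects code that goes through it, and any code holding the raw reference can bypass the guard entirely.

```python
import threading

class GuardedQueue:
    def __init__(self):
        self._lock = threading.Lock()
        self._items: list = []

    def push(self, item: str) -> None:
        with self._lock:            # serialized access, like going through the AE
            self._items.append(item)

    def raw(self) -> list:
        return self._items          # exposing the reference defeats the guard

q = GuardedQueue()
q.push("safe")                      # protected access
q.raw().append("unsafe")            # bypasses the lock; a race waiting to happen
```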
  8. Yeah, but are they semantically equivalent? If you were to explain the context of your api to a typical LV user would they naturally expect registering a value of zero is the correct way to unregister an entry, or would you have to explain it to them? I still vote for symmetric calls in collections--it makes the api much easier to understand. --- Edit --- Here's a question for you... Upon unregistration are you removing the entry from the table or just disabling it? I'd expect Unregister to remove the item while Register (0) would disable it but leave it in the table. (Though my preference would still be to provide specific enable/disable methods.)
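A minimal sketch of the symmetric-call preference in Python (registry shape and method names are hypothetical): explicit calls instead of a magic "register zero to unregister" convention, and separate enable/disable if the entry should stay in the table.

```python
class Registry:
    def __init__(self):
        self._entries: dict = {}
        self._enabled: set = set()

    def register(self, name: str, value: int) -> None:
        self._entries[name] = value
        self._enabled.add(name)

    def unregister(self, name: str) -> None:    # removes the entry entirely
        self._entries.pop(name, None)
        self._enabled.discard(name)

    def disable(self, name: str) -> None:       # keeps the entry but deactivates it
        self._enabled.discard(name)
```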
  9. I was curious about the performance difference between the left implementation in the first image and the left implementation in the second image, so I whipped up a quick benchmarking test. (Unfortunately I'm still working through some premium membership issues so I can't upload any files or images.) I created an array of 250,000 queues and iterated through them once using Not A Refnum, then again comparing them against a queue constant using the Is Equal function. Not A Refnum took 39 ms while Is Equal took ~1 ms. So yeah, Not A Refnum definitely takes significantly longer. But Not A Refnum still only takes ~150 nanoseconds per call, so is it worth worrying about? Perhaps if you're running in a really tight loop, but I have to admit I'm rarely concerned about a time hit that small. (As a side note, creating all those queues took 53.1 seconds for an average of 212 microseconds each. That's 1,000 times longer than the Not A Refnum function.)
     --------------------------------
     To phrase AQ's point in a slightly different way, it's a question of pretesting versus posttesting. In general, pretesting feels cleaner to me. I find it easier to reason through the code when fewer errors are possible. However, operations on a reference can't be pretested without exposing yourself to a race condition. The only thing you can do is attempt the operation and then see if it worked, as AQ shows in the "Combined Good Usage" example.
     I don't think it is a practice that should be encouraged. In fact, it should probably be actively discouraged. You're still exposing yourself to a race condition. Let's use your "Performance Problem" snippet as an example and ignore the performance issues you raised. The purpose of that snippet is to guarantee the output terminal contains a valid refnum. You are correct that we get the desired behavior in those cases where the input terminal has a zero refnum. It will fail the test, allocate a new queue, and because nobody else has that new refnum it is guaranteed to be valid. But if the input terminal already has a valid refnum on it, all bets are off. You end up with the exact same race condition as in your "Bad Usage" example. The only place we can safely use that snippet is in situations where we know the input queue will have a zero refnum, and if we know that there's no reason to check the refnum in the first place.
     Rather than trying to list the specific examples of where it is and isn't okay to use Not A Refnum, I'd just go with: pretesting a reference before performing an operation on it creates race conditions. It doesn't matter what function is used for the pretest (Not A Refnum, Is Equal, etc.) or what you're actually testing for (valid refnum, specific data values, etc.,) if you're pretesting to decide execution flow there is a race condition. (Unless, as you pointed out, there is nothing happening in parallel.)
     No, but it would make writing code with race conditions easier and more visually pleasing, possibly increasing the number of users who encounter that race condition.
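A minimal Python sketch of pretest versus posttest on a shared reference (the shared queue and both function names are hypothetical): the pretest version has a check-then-act race, while the posttest version just attempts the operation and handles the failure afterwards.

```python
import queue

shared = queue.Queue()   # elsewhere, another thread may set this to None ("release" it)

def pretested_enqueue(item):
    # Check-then-act: another thread can release `shared` between the test and the put,
    # so the operation can still fail even though the test just passed.
    if shared is not None:
        shared.put(item)

def posttested_enqueue(item):
    # Attempt the operation and find out afterwards whether it worked.
    try:
        shared.put(item)
        return True
    except AttributeError:   # shared was None when we tried
        return False
```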
  10. Sounds fun. Can you share what the nodes are/were?
  11. I'm also curious about the terminology. Why is "Register" so important? If you're stuck with that terminology I agree with mje--I'd much rather have an explicit Unregister method than magic inputs that make the Register method unregister an entry. (Besides, are you *positive* nobody will ever want to register a value of zero?) What kind of mapping are you doing? 1 to 1? 1 to many?
  12. Yeah, but Chris has a Hollywood smile and killer sig, so we get extra points for that. (Besides, you'd need more than a simple majority to justify breaking an existing api.) I think that would be an appropriate solution.
  13. When I'm feeling particularly snarky I call it Error ID: 10T. ("Idiot") (I haven't yet built that error into an application... maybe someday...)
  14. One of the dangers of jumping straight into the Actor Framework is you can end up writing code without understanding the subtleties or impact of your design decisions. In some respects that's what has happened with the QSM--it's super easy to write relatively complex code that on the surface looks great, but it's also super easy to create an unholy mess of things, and most people who use it don't understand what issues they should be looking out for. From what I have seen of the AF it is nowhere near as error-prone as the QSM. It may turn out my concerns are unfounded. I actually hope that is the case. However, for better or worse the AF appears to be on the cusp of becoming the next QSM for the LVOOP crowd (meaning commonly accepted as a one size fits all solution,) and given all the crazy problems I've seen with QSM implementations that worries me.
      Assuming you already understand OOP fundamentals (and it appears you do,) Head First Design Patterns is the first book I recommend for people starting to learn how to design OOP applications. After that the list opens up quite a bit. (Object Thinking, Design Patterns, The Object Oriented Thought Process, Practical API Design, etc...)
      Personally, I'd avoid using exam examples as the basis for anything other than learning how to pass the exam. Constraints imposed by the exam conditions (like the time limit) can lead to short-term design decisions that are not immediately clear to users unfamiliar with OOP, and those decisions might not be compatible with your requirements. I know I wasn't particularly happy with the code I turned in for my CLA exam. (Mostly... I thought the code was great on the exam I failed. I thought the code sucked on the exam I passed.) That said, I have not looked at AQ's sample test implementation. It may be a beautiful example of how to design a sustainable LVOOP application. *shrug*
  15. I also tend to think of a piece of data existing independent of any given vi, so from that perspective I agree with you. However, I'm not sure that's necessarily the correct way to think about it. As I understand it, each vi is allocated its own data space in memory. No other vi has permission to edit that data space. When a parent vi passes data into a sub vi, the sub vi's data space gets its own, independent, copy of the data.** The "name" of that data space is defined by the wire name, which in turn is defined by the terminal name. In other words, there is no independent "name" attribute for individual pieces of data (unless you explicitly add one,) only names of data spaces. So while part of me thinks it would be more clear to call it Get Terminal Name, doing so also kind of obscures the underlying nature of dataflow. (**Yes, we know compiler optimizations often are able to eliminate unnecessary data copies so strictly speaking it might not be true. But it's still useful to think about it that way.)
  16. The easiest way is probably to deploy your Series and Model classes as a source distribution to a predefined directory and have the application look for them there. If you're uncomfortable with having the source code directly visible on disk, I know others who have successfully deployed plugins as llbs, though I have no experience with that. Personally I think packed project libraries sound like more trouble than they're worth for deploying plugins. Depending on your situation, Shawn's suggestion may be a better approach and save you lots of time in the long run. Maybe the database contains a list of all the critical parameters for each product and the UI presents the list to the operator, giving them an opportunity to fill in the data. That way you don't have to keep going back to edit the source code--you can just update the database.
  17. Dependency management has not been discussed much on LAVA or in any of the NI material I've seen. Understanding its importance was one of those "Aha!" moments for me. Now creating a component dependency map is one of the very first architectural tasks I do, both on new projects and when taking over an existing code base. The VI Hierarchy window has the ability to collapse a library down to a single icon. That can be very useful for figuring out your dependencies in an existing project. I don't have an easy "one size fits all" fix for breaking dependency cycles. The best solution depends on many project-specific considerations. However, here is an (incomplete) list of general guidelines...
      1. Avoid dependency cycles. (I know I said it before, but IMO it ranks pretty high on the "rules not to break" list.)
      2. Reuse components should *never* depend on application-specific components. Ideally reusable components have no other dependencies, though occasionally one may depend on another reusable component.
      3. The dependency map doesn't need to be a tree. In other words, it's okay for 2 or more higher level components to depend on the same low level component.
      4. Custom data types, such as classes and typedefs, tend to create lots of dependency links as the data is passed around the application. Off the top of my head here are three ways to work around that:
      4a. Create a low-level, app-specific library for your custom data types and have all your non-reuse components depend on that.
      4b. Use LabVIEW's native data types to exchange information. A 2d string array makes a great lookup table for passing arbitrary information. (Note: Often using native types to exchange information increases the opportunities for runtime errors. It takes diligence to ensure you are adequately testing those possibilities.)
      4c. Use "adapters" to convert from one data type to another. This can be especially useful when your reuse component communicates in generic or custom data types and you want to use a different, application specific data type. (A rough adapter sketch follows this post.)
      [Note: In this thread, when I use the word "dependency," I am referring only to static, or compile-time, dependencies. Other types of dependencies include dynamic dependencies (depending on a dynamically loaded vi via path information) and definition dependencies (depending on a schema, protocol, or other definition not explicitly defined by code.) Managing those dependencies is important but beyond the scope of what I am discussing here.]
      ---------------------
      [Edit - It will be well worth your time to learn about dependency injection. Don't know how I managed to forget mentioning that...]
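A minimal Python sketch of guideline 4c (all names here are hypothetical): an adapter depends on both the application-specific type and the reusable component, so neither of those has to depend on the other. The adapter also illustrates dependency injection, since the reusable logger is passed in rather than created internally.

```python
class Measurement:                      # application-specific data type
    def __init__(self, channel: str, value: float):
        self.channel = channel
        self.value = value

class GenericLogger:                    # reusable component; knows only strings
    def log(self, line: str) -> None:
        print(line)

class MeasurementLoggerAdapter:
    """Converts the app-specific type into the reusable component's generic type."""
    def __init__(self, logger: GenericLogger):
        self._logger = logger           # dependency injection: the logger is supplied

    def log_measurement(self, m: Measurement) -> None:
        self._logger.log(f"{m.channel},{m.value}")

adapter = MeasurementLoggerAdapter(GenericLogger())
adapter.log_measurement(Measurement("AI0", 3.14))
```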
  18. No, you are not misusing libraries as far as I can tell, and the "problem" you are seeing is one of the reasons I use them. In short, you have discovered an error in your design. When building an application you have to be aware of the dependencies you are creating between components. Currently your low level components are statically dependent on higher level components. Sometimes that is useful (such as when implementing a HAL or plug-in frameworks) but a reasonable rule of thumb to start with is, "let high level components depend on low level components, but not the other way around." Grab a piece of paper and draw a rectangle for each component in your app. Next draw arrows from each component to every other component that it statically depends on. This is your dependency map. Do you see any cycles? (My guess is you'll find several.) You'll want to change your code to break those cycles. That's why you aren't able to load a single library without loading the whole application.
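A minimal Python sketch of the paper exercise (component names are made up for illustration): list each component's static dependencies and check the map for cycles.

```python
deps = {
    "UI": {"Logic"},
    "Logic": {"DAQ", "Logging"},
    "DAQ": {"Logging"},
    "Logging": {"UI"},   # a low level component depending on a high level one -> cycle
}

def has_cycle(graph) -> bool:
    visiting, done = set(), set()
    def visit(node) -> bool:
        if node in done:
            return False
        if node in visiting:          # we came back to a node still being explored
            return True
        visiting.add(node)
        for dep in graph.get(node, ()):
            if visit(dep):
                return True
        visiting.discard(node)
        done.add(node)
        return False
    return any(visit(n) for n in graph)

print(has_cycle(deps))   # True: UI -> Logic -> Logging -> UI
```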
  19. Agreed. Local variables in and of themselves aren't bad... I use them frequently in my FP display loop. This is the important thing to remember. Indicators and controls are ways for users to see and change the internal state data--they should not be used to store the state data. As long as I follow that rule I haven't had any problems with local variables.
  20. *flap* *flap* *flap* Did somebody say QSM? Actually the CLD is one of the few places I do recommend using QSMs. The exam problems seem to be designed for it and the code is thrown away after it's graded. There's no concern for creating a sustainable application, and putting effort into architectural concerns takes time away from the things that earn points.
  21. John Lokanis and I ran into Michael and Justin from JKI at the airport. They're walking through the terminal, bags trailing behind them, and I blurt out.... "You guys just arriving?" Good job Dave... here's your sign.
  22. I don't really know how they are related as I don't have access to the .Net source code. If I were to guess I would say they might be siblings, but they're actually interfaces--not concrete classes--and they may not be in any sort of inheritance hierarchy. Here's a snippet showing the full property nodes before and after the typecast. Oddly, the SteppedLevelGenerator property node (before typecasting) always returns error 1172 while the FrequencySweepGenerator node (after typecasting) does not. [Grrr... my premium membership incorrectly expired and I'm over the freebie upload quota. I'll try to upload it once I get my membership sorted out.]
  23. I'm using a custom .Net api on a current project, and for reasons that are too lengthy to explain I've resorted to using a typecast to change a .Net reference wire from one .Net object type to a different, but similar .Net object type. (Property nodes have similar fields, but in a different order.) It works okay while smoke testing, but I'm not very comfortable with it at all. I don't know the underlying mechanics of how LV maps property nodes to .Net accessors, leaving me wondering what kinds of failures I'm likely to see. Anyone have insight they'd like to share?