Everything posted by mje

  1. I dislike using user.lib at all, pretty much for all of the above reasons. I usually use an export of whatever library is needed to a folder within the project that needs it. So yes, my libraries are duplicated all over the place. It does make for some tricky check-ins sometimes (most notably, compiled changes to library VIs that use conditional disable structures). It does suck not having all those things in a menu, though.
  2. Yeah, I was aware variants aren't reference types; there's a fundamental difference between a reference, a pointer, and a handle. A variant is a little more sophisticated than just a binary data dump, though. It's aware of native types and can intelligently convert between compatible scalars (I'm not sure how well that translates to arrays and clusters). I guess this thread just brings up one of my major beefs with the LabVIEW IDE: NI really needs a good profiling tool that lets you watch how your memory footprint is laid out. What I wouldn't give for the equivalent of a watch and locals window to browse my stack and heap data spaces at breakpoints. QUOTE (jlokanis) At the risk of going off-topic... I admit predicting memory allocation when dealing with queues is rather nebulous to me. Anyone got a good handle on that, maybe a link or reference? I completely missed that you were dealing with previews. I remember reading somewhere that that primitive will *always* return a copy, which makes sense: not copying would open a whole can of worms with regard to synchronization and thread safety (a rough C++ analogy follows below). It is entirely possible to use a queue and never deal with copies; see the NI singleton implementation, if I recall correctly. What I don't understand is how that plays with fixed-size queues. I was also under the impression queues hang on to their element buffer space (excluding heap allocations for arrays, reference types, etc.). Those two strategies seem mutually exclusive to me, and I'm not sure how they're mixed or when one is used as opposed to the other.
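
     A rough C++ analogy for why a preview must copy (assumed names, not LabVIEW internals): a thread-safe queue whose peek returns the front element by value, because handing out a reference would let one task read the element while another pops and destroys it.

        #include <deque>
        #include <mutex>
        #include <optional>
        #include <utility>

        template <typename T>
        class SafeQueue {
        public:
            void enqueue(T value) {
                std::lock_guard<std::mutex> lock(m_);
                q_.push_back(std::move(value));
            }

            // Preview: copies the front element while the lock is held.
            // Returning a reference would be unsafe once the lock drops.
            std::optional<T> peek() const {
                std::lock_guard<std::mutex> lock(m_);
                if (q_.empty()) return std::nullopt;
                return q_.front();  // copy
            }

            // Dequeue: the element leaves the queue entirely, so it can
            // be moved out instead of copied.
            std::optional<T> pop() {
                std::lock_guard<std::mutex> lock(m_);
                if (q_.empty()) return std::nullopt;
                T out = std::move(q_.front());
                q_.pop_front();
                return out;
            }

        private:
            mutable std::mutex m_;
            std::deque<T> q_;
        };
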
  3. I thought variants were really just handles, so a U32, or maybe a U64 nowadays? So you're saying that running a variant down a wire from the queue copies the entire variant to the wire, attributes and all? The article linked below (see the very bottom) implies otherwise to me. I'd have expected LV to be smart enough NOT to make a copy of the actual variant unless you modify the data or attributes. http://zone.ni.com/reference/en-XX/help/37...data_in_memory/ -m
  4. I see this as well. I always attributed it to LabVIEW and/or the OS doing some caching. The effect is more pronounced when working off network drives, but locally there's still an observable difference.
  5. QUOTE (Matthew Zaleski @ Mar 21 2009, 07:41 PM) The short answer is it doesn't, but there are subtleties. Assuming GetNextLine() is not re-entrant, only one task will ever be able to call the method at a time, since there will be only one copy of the VI in memory. So there's some built-in synchronization right there, but it's not really OOP that's buying you that; the principle is the same for any VI with the default reentrancy setting. Something to consider: you have multiple methods that operate on the file reference, like SeekToLine(), and maybe other "fluff". So even though LabVIEW will keep you from calling GetNextLine() simultaneously in two places, you'll likely still need to build some kind of synchronization mechanism to ensure that, say, GetNextLine() and SeekToLine() are not called at the same time from different tasks (a rough C++ sketch follows below). There are many ways to do this: mzu mentioned the singleton pattern, and properly building a semaphore into your class will also work. Also be aware of what happens when wires are split if you have non-reference data in your class, or if references change.
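
     The sketch (method names from the post; the implementation is hypothetical): a single mutex guards every method that touches the file position, so GetNextLine and SeekToLine can never interleave, even when called from different tasks.

        #include <fstream>
        #include <mutex>
        #include <string>

        class LineReader {
        public:
            explicit LineReader(const std::string& path) : file_(path) {}

            bool GetNextLine(std::string& line) {
                std::lock_guard<std::mutex> lock(m_);  // serialize file access
                return static_cast<bool>(std::getline(file_, line));
            }

            void SeekToLine(long n) {
                std::lock_guard<std::mutex> lock(m_);  // same lock as above
                file_.clear();
                file_.seekg(0);
                std::string skip;
                for (long i = 0; i < n && std::getline(file_, skip); ++i) {}
            }

        private:
            std::mutex m_;       // one lock shared by all file-touching methods
            std::ifstream file_;
        };
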
  6. Let's face it, clusters can get absolutely huge from a real-estate perspective, both on the front panel and the block diagram. The only solution I've ever used to avoid the ugliness of a huge cluster is to always use a hidden control, even for constants in the diagram; that way I get my nice little icon regardless of the physical size of the cluster. I have one problem with this though, and that's when dealing with clusters that are connected to a VI's connector pane. I consider it very poor form to hide anything that's connected, but often enough I have to, just to preserve the sanity of my panels (as well as myself). My only solution so far has been to just leave a comment saying "ControlX is hidden" or something like that, so anyone browsing the panel knows to look for it in the diagram if they want to locate it. What I'd really like is to be able to represent any cluster as an icon, so it takes up a fixed (and small) amount of space. Bonus points for being able to switch between icon and full form on the fly so I can edit the data. Double bonus points if this can apply to BD constants too, since there's nothing worse than having a typedef'd constant resize in a diagram and obliterate some formatting.
  7. QUOTE (Jeff @ Mar 18 2009, 05:42 PM) This. Best feature ever. A close second is conditional disable structures (or rather the definition of symbols). The only reason they're not first is that I don't think they belong exclusively in the project scope to begin with... you really ought to be able to define them in classes/libraries in my opinion, possibly even at the VI level. Just have the hierarchy sort out any conflicts and give me a clear way to see which scopes are overriding what.
  8. Buffer hunting

    My original thought was to slap in an in-place structure and see what happens. Sure enough, it's the winner. The curious part is the allocation that appears, but I remember something AQ said once in a thread a few months back (can't seem to track it down): it amounts to buffer allocations not always being used. I'd hazard a guess that in this case the buffer allocation must be possible, since there's no way to know the sizes of the arrays ahead of time. However, under the tested conditions they're all matched in size, which allows the in-place structure to reuse the 3 buffers and forgo the allocation completely. The question that then pops into my mind is why the native behavior of the add prim is such that, when not operating in place and working with two arrays, the allocation goes on an input. Notice that when operating in place it moves to an output. That little nugget seems to suggest to me what might be preventing the optimization of the original code. LabVIEW's usually pretty smart about such optimizations; the fact that it doesn't work in this case I find a bit surprising. As an aside, I was fiddling around with this for the last twenty minutes or so and got much the same results that jdunham summarized in his image. I added another case, though, which would be similar to bsvingen's alternative case, but using native LV code (the allocating/in-place distinction is sketched in C++ below). It does invoke an allocation of a non-arrayed DBL. I was surprised at how slow it was: the worst of any of the cases (though I never considered the swap bytes prim). I suppose the reason is that although the allocation is likely on the stack (always the same size, just a single DBL value), it still needs to iteratively copy the value from that memory location into the buffer, as opposed to writing directly to it. I'd guess I had traded an array allocation for an array copy in that case.
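
     The in-place distinction, sketched loosely in C++ (this is the concept, not LabVIEW's compiler): the allocating form produces a third buffer, while the in-place form reuses one of the inputs when the sizes match.

        #include <cassert>
        #include <vector>

        // Allocating form: the result gets its own, newly allocated buffer.
        std::vector<double> add(const std::vector<double>& a,
                                const std::vector<double>& b) {
            assert(a.size() == b.size());
            std::vector<double> c(a.size());  // the "buffer allocation"
            for (std::size_t i = 0; i < a.size(); ++i) c[i] = a[i] + b[i];
            return c;
        }

        // In-place form: 'a' is both an input and the output, so no new
        // buffer is needed -- the analogue of the in-place structure.
        void add_inplace(std::vector<double>& a,
                         const std::vector<double>& b) {
            assert(a.size() == b.size());
            for (std::size_t i = 0; i < a.size(); ++i) a[i] += b[i];
        }
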
  9. I have a class that stores some data in functional globals (FGs): essentially data that might be considered class/static data in other languages (a C++ analogy follows below). The FGs are privately owned by the class. I'm worried about the lifetime of the VIs, though, as ultimately I'm not familiar with how classes are handled and I'm dealing with a bunch of dynamically loaded VIs. If I'm not mistaken, a VI can unload from memory if the "owning" VI unloads, even if it's referenced elsewhere, which is one of the things you need to keep in mind when dealing with FGs. Do I run the same risks when dealing with class methods? That is, does the VI being owned by a class change the behavior at all? Can a class partially unload if the VI that brought the VIs into memory unloads? Heck, can an entire class unload if the referencing VI dies? -michael
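
     For comparison, here's what class/static data looks like in C++ (Counter is a hypothetical example): static storage lives for the whole program regardless of which caller touched it first, whereas an FG's lifetime follows LabVIEW's VI loading rules, which is exactly the concern above.

        #include <mutex>

        class Counter {
        public:
            static int increment() {
                std::lock_guard<std::mutex> lock(mutex_);
                return ++count_;  // shared by every user of the class
            }

        private:
            static std::mutex mutex_;
            static int count_;
        };

        // Definitions: static members exist once, for the program's lifetime.
        std::mutex Counter::mutex_;
        int Counter::count_ = 0;
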
  10. Right, the behavior you're seeing is by design; it's fundamental to dynamic dispatch (or polymorphism, as it's usually called in object-oriented design), and it has to be that way to follow the object paradigm. Text-based languages handle this dilemma by allowing you to qualify the method name in some way (Parent::SetSpeed, or AirPlane::SetSpeed; see the C++ sketch below). I'm aware of no such mechanism in LabVIEW, nor can I think of any hack to get it done in LabVIEW.
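
     The qualification mechanism in C++, for reference (AirPlane::SetSpeed is the example named in the post; the rest of the sketch is hypothetical):

        #include <iostream>

        class AirPlane {
        public:
            virtual ~AirPlane() = default;
            virtual void SetSpeed(double knots) {
                std::cout << "AirPlane::SetSpeed " << knots << "\n";
            }
        };

        class Jet : public AirPlane {
        public:
            void SetSpeed(double knots) override {
                // Qualifying the name bypasses dynamic dispatch and calls
                // the parent implementation directly.
                AirPlane::SetSpeed(knots);
                std::cout << "Jet::SetSpeed " << knots << "\n";
            }
        };

        int main() {
            Jet j;
            AirPlane& plane = j;
            plane.SetSpeed(420.0);        // dynamic dispatch: Jet's override
            j.AirPlane::SetSpeed(420.0);  // qualified: parent version only
        }
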
  11. QUOTE (Ton @ Mar 5 2009, 05:37 AM) I'll nitpick here: it's really a problem with LabVIEW's implementation of events specifically. Other frameworks don't necessarily behave this way. Events can be designed such that multiple signals of the same event overwrite the previous signals... I can't remember what this is called. A common example is mouse-move events in some architectures: if you're busy handling something, they won't continue to pile up on top of one another; you'll just get a single mouse-move event with the most recent coordinates when you finally get around to handling it (a sketch of the idea follows below). Often I've wished that LabVIEW had an option to support this type of architecture out of the box, or at least provided more control over the event queue so that it could be put together manually.
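
     This behavior is usually called event coalescing. A minimal C++ sketch (hypothetical types, not any particular framework's API): a newer event of a given type overwrites the pending one instead of queuing behind it.

        #include <map>
        #include <mutex>
        #include <optional>
        #include <string>

        struct Event {
            std::string type;  // e.g. "MouseMove"
            int x = 0, y = 0;  // only the latest coordinates matter
        };

        class CoalescingQueue {
        public:
            void post(const Event& e) {
                std::lock_guard<std::mutex> lock(m_);
                pending_[e.type] = e;  // replaces any unhandled event of this type
            }

            std::optional<Event> next() {
                std::lock_guard<std::mutex> lock(m_);
                if (pending_.empty()) return std::nullopt;
                auto it = pending_.begin();
                Event e = it->second;
                pending_.erase(it);
                return e;
            }

        private:
            std::mutex m_;
            std::map<std::string, Event> pending_;  // one slot per event type
        };
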
  12. QUOTE (crelf @ Mar 2 2009, 05:24 PM) I'd argue the opposite: doing otherwise is plain dirty. A second class, or another VI, should never need knowledge of the class exposed to it at all; that's part of the reason a class has a private and a public interface. QUOTE (jdunham @ Mar 2 2009, 06:20 PM) Well I would think it really is the responsibility of your class to figure out how to transform itself for file reading and writing. This is called Serialization (sorry if you already knew that). If any other code knows how to serialize your data, then you've broken encapsulation, because that other code has to know the structure and contents of your object. This. Keeping your data encapsulated in one place is very important, and it's one of the reasons for adopting an object-oriented design in many projects. Only your class should know how to rebuild its internal structure from the data it deserializes from disk, and similarly what to throw away when serializing to disk. Define an interface (preferably an actual dynamic dispatch) that is called by your chess game. The game will in turn call it on each contained piece, and so on, until it's all done. Keeping it dynamic helps a lot when you have long inheritance chains: each class handles its own data, then passes the request on to its ancestor (a C++ sketch follows below).
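
     The chained idea in C++ (ChessPiece/Pawn are hypothetical stand-ins for the game's classes): each class writes only its own fields, then delegates the rest to its ancestor, so nothing outside the class ever sees its layout.

        #include <iostream>
        #include <ostream>

        class ChessPiece {
        public:
            virtual ~ChessPiece() = default;
            virtual void serialize(std::ostream& out) const {
                out << row_ << ' ' << col_ << ' ';  // base-class data only
            }

        protected:
            int row_ = 0, col_ = 0;
        };

        class Pawn : public ChessPiece {
        public:
            void serialize(std::ostream& out) const override {
                ChessPiece::serialize(out);  // ancestor writes its fields first
                out << hasMoved_ << ' ';     // then this class adds its own
            }

        private:
            bool hasMoved_ = false;
        };

        int main() {
            Pawn p;
            p.serialize(std::cout);  // callers never see the internal layout
        }
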
  13. The problem (at least on Windows) is that some paths are, shall we say, non-relativistic: they just can't be resolved relatively. Use as many ".." tokens as you wish, but they won't be able to change drives, since the Windows file system doesn't mount drives at a navigable location (and no, My Computer doesn't count; go ahead, try to navigate there in a console). Any solution you implement would need to be aware of this limitation (a sketch of the check follows below).
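
     A sketch of the check in C++17 (relative_if_possible is a hypothetical helper): two paths only have a relative form when they share a root, e.g. the same drive letter on Windows.

        #include <filesystem>
        #include <iostream>
        #include <optional>

        namespace fs = std::filesystem;

        std::optional<fs::path> relative_if_possible(const fs::path& from,
                                                     const fs::path& to) {
            // "C:" vs. "D:": different roots mean no number of ".." tokens helps.
            if (from.root_name() != to.root_name()) return std::nullopt;
            return to.lexically_relative(from);  // purely lexical resolution
        }

        int main() {
            auto ok  = relative_if_possible("C:/projects/app", "C:/libs/util");
            auto bad = relative_if_possible("C:/projects/app", "D:/data");
            std::cout << (ok  ? ok->string()  : "<no relative path>") << "\n";
            std::cout << (bad ? bad->string() : "<no relative path>") << "\n";
        }
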
  14. Well, I really liked your idea, ragglefrock, and it turned out that working it into the framework was simple enough. Attached is an updated 8.6 file, and a legacy 8.5 file. The OpenG Error Tools dependency is still there. There's still no example dealing with callbacks, but as before, MessagePump::SendMessage.vi still wraps a callback up if you want to take a look. Basically, callbacks have been abstracted into a base class called... get ready for it... Callback. Crazy, I know. The class defines a dynamic method, DoCallback.vi, which is called by the framework after every message has been processed and by default does nothing (a C++ sketch of the idea follows below). Derive the class, implement the method, and voilà! Instant signaling of whatever synchronization routine you wish. I threw in three classes encapsulating notifiers, user events, and even one for a VI call, but nothing's stopping a user from creating others. All in all, I think I like this change: it hasn't added much complexity, but it has definitely enhanced the utility of the class. By the way, is it normal that I can't edit the original post to update the zip files for download? -m
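
     A C++ rendering of the Callback abstraction (hypothetical sketch; the real thing is a LabVIEW class with a dynamic-dispatch DoCallback.vi): the framework calls the hook after each message, and the default implementation does nothing.

        #include <iostream>

        class Callback {
        public:
            virtual ~Callback() = default;
            virtual void doCallback() {}  // default: no-op, like DoCallback.vi
        };

        // Derive and override to signal whatever synchronization routine you
        // wish; printing here stands in for a notifier or user event.
        class PrintCallback : public Callback {
        public:
            void doCallback() override { std::cout << "message processed\n"; }
        };

        void processMessage(Callback& cb) {
            // ... handle the message ...
            cb.doCallback();  // invoked by the framework after every message
        }

        int main() {
            PrintCallback cb;
            processMessage(cb);
        }
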
  15. Here's an LV8.5 version. While doing the export, I realized there's a dependency on the OpenG Error Tools, so you'll need that installed. This goes for the downloads in the original post as well.
  16. QUOTE (ragglefrock @ Feb 19 2009, 12:05 AM) Great idea. Initially I was using events as a callback, since the first few implementations ran alongside event-based producer loops, but I realized the necessity for a whole loop construct might be limiting the use of the class. However, your idea of supporting multiple callback types via a dispatch really intrigues me... I'll have to do some exploration and see what comes out. Thanks for the feedback! -m
  17. Wow, thanks for the replies...to be honest I never expected it would be doable. Looks like I'm delving into unknown territory for me here, this might take some time. Thanks for the pointers all of you, and thanks a bunch for the VIs AQ. I'll have to look into scripting, it's been something I've been avoiding so far, but it ought to be fun!
  18. Ah, it's interesting to know how that's done. The "common starting point" really must be replicated. It's a set of two nested case structures, shown below. It's really just a tiny thing, but man, would it be nice to have any override already include that code in the diagram. Not only is it convenient, it also demonstrates the intended use of the method.
  19. I've created a framework class which I've used a few times, and tweaked it enough that I think it might be reasonably stable. It's a simple message pump/loop class. It essentially implements the consumer half of what LabVIEW calls a Producer/Consumer Design Pattern, but it can fit into several frameworks when you think about it. In its most basic usage, you define a child class with some arbitrary private data, then override ProcessMessage.vi to operate on that data in response to messages (a C++ sketch of the pattern follows below). Messages are strings, and parameters are passed to (and returned from) the loop as an Object. This means you'll likely have supporting data classes defined that allow you to pass relevant data to and from your message handling loop. Attached are two files: one contains only the class files; the other contains the class files plus a project with 3 examples to get you going on what the class does and how to use it.

    Some thoughts: I'm not sure the reentrancy of some of the VIs is the right thing to do. The MainLoop has to be reentrant, since it must be able to exist in multiple places if more than one instance is running. Similarly, GetMessage has to be reentrant, since it's the VI the loop blocks on when the queue is empty, and every running loop needs its own copy of it. For the other VIs (ProcessMessage, InvokeCallback, Idle, HandleError), I'm not convinced reentrancy is needed. I'm leaning towards yes, for reasons I won't get into here, but I'll see what any of you have to say, if anything.

    I also flip-flopped a few times on how to handle parameters. Objects are still "messy" in LabVIEW, in my opinion, mostly owing to the fact that native data types don't inherit from Object. So even for simple parameters, you're forced to define very simple classes, which I often call "static data classes" since they're often just wrappers around a single native LabVIEW type. Using a variant might be simpler, but I figured that since this is an OOP-based approach to a messaging framework, it's probably more elegant to keep it all object based. Every time I write another static data class, though, I curse this decision; there are even a few of them in the examples. See the Int32 class... take a wild guess as to what it does.

    I still haven't checked the overhead involved in SendMessage.vi, where a notifier is created and destroyed with each call, but I can't seem to think of an elegant way of reusing a notifier where a previous value won't run the risk of triggering a race condition. I'd really like to use an enum for a message but, again, LabVIEW types not inheriting from Object kills that idea. I thought of using my own class, but that just gets messy when using the pump VIs on the block diagram, since there's no native way to deal with class constants. -Michael
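
     A compact C++ sketch of the pump (hypothetical rendering; the real implementation is LabVIEW classes): the loop blocks until a message arrives, dispatches it to a handler the subclass overrides, and exits on a designated quit message.

        #include <condition_variable>
        #include <mutex>
        #include <queue>
        #include <string>

        class MessagePump {
        public:
            virtual ~MessagePump() = default;

            void send(std::string msg) {          // cf. SendMessage.vi
                {
                    std::lock_guard<std::mutex> lock(m_);
                    q_.push(std::move(msg));
                }
                cv_.notify_one();
            }

            void mainLoop() {                     // cf. MainLoop
                for (;;) {
                    std::unique_lock<std::mutex> lock(m_);
                    cv_.wait(lock, [this] { return !q_.empty(); });  // GetMessage
                    std::string msg = std::move(q_.front());
                    q_.pop();
                    lock.unlock();
                    if (msg == "quit") break;
                    processMessage(msg);          // subclass's override runs
                }
            }

        protected:
            virtual void processMessage(const std::string& msg) = 0;

        private:
            std::mutex m_;
            std::condition_variable cv_;
            std::queue<std::string> q_;
        };
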
  20. The default implementation of an override VI simply wires all the terminals up to a call to the parent VI. I have a framework class where a method almost always has to be overridden, and I find myself duplicating code every time I create an override VI for a new class. Let me clarify: the code of each VI will ultimately be different, but they all share a common starting point, not unlike how, whenever you create a new Producer/Consumer VI from a template, the familiar loop structures are already laid out. So I think this is a different issue from what Jim posted about earlier today. I'd like to define a template such that whenever a method is overridden, the new override has a starting bit of code in it beyond the current default. Defining the parent dynamic VI as a VIT doesn't seem to do the trick. Is there a way to do this?
  21. Let me make sure I understand your situation: you have multiple sibling classes that define a method containing identical code? As you said, ideally this should go into the parent class, but if that's not a possibility, perhaps re-thinking the inheritance chain might do the trick? It seems to me that having identical siblings breaks one of the fundamental principles of object-oriented design: you're essentially implementing an interface identically in all classes...
  22. Well, the posted solution obviously works; I'm more of the school that things should work out of the box. If I fix it on my PC, it does nothing to fix it on any other PC that might be looking at the code. In my eyes, it's not a fix at all, it's a kludge. I'm not saying this is a bug on NI's part: their objects resize as the font metrics change, which is the behavior I expect. I do believe it's handled poorly, though. For a cross-platform development environment that is so dependent on graphical presentation, I'd expect said presentation to be independent of something as silly as the choice of system fonts. NI should use fonts installed with the RTE as the default for display to avoid such issues, so that everything renders similarly everywhere. I'm actually very surprised they don't do this, since they go to so much effort not to use any OS window models for controls and such. For the most part, controls and layouts that don't derive from the System palette look quite consistent across operating systems from what I've seen. The default reliance on the OS for fonts seems like an oversight. This has bothered me for some time, as I used to have similar issues while sharing development between XP and Linux, or even between multiple XP systems where accessibility settings differ. -m
  23. I code on XP at work and Vista at home. I find this issue very annoying, as it makes a mess of all but the simplest VIs; ones with lots of property/invoke nodes and associated constants can become a pretty good mess of overlapping goodness. I've become accustomed to leaving plenty of white space in my diagrams now, but that still doesn't solve the "problem" of wires getting bent. I've thrown the housekeeping rule of keeping wire bends to a minimum right out the window, since the condition and position of a wire depend on the last OS the VI was compiled on. Things can get messy around CINs or VIs that use 6x6-type connector panes.
  24. It's probably been a decade since I've played with MFC, but this thread brings a few ideas to mind. I'm rusty enough that I won't be able to provide a solution, but I can point out a few things that might get you rolling, since restricting yourself to Windows system calls seems to be OK. SetWindowPos (user32.dll) is the function you need to move a window using system calls (a minimal sketch follows below); you'll need to get your window's handle first, and I've seen discussions on how to do that here on LAVA. It ultimately just boils down to another function call into user32, I think. Note there are some funky session-isolation security issues that creep up on Vista; I have absolutely no clue as to whether or not they'll come into play. WM_MOVE is one of the related signals you might wish to receive; the message is sent to a window after it has been moved. The caveat is I have absolutely no idea how to hook into it from LabVIEW. It's trivial in MFC, where you use a C++ macro, but... In general, the MSDN reference for Windowing: Windows gives a pretty good overview of how the system manages window resources. I wouldn't be surprised if there's a higher level of abstraction available through .NET calls.
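
     A minimal Win32 sketch of the move call (C++; assumes you already have the HWND, which is the separate problem mentioned above):

        #include <windows.h>

        bool MoveWindowTo(HWND hwnd, int x, int y) {
            // SWP_NOSIZE keeps the current size and SWP_NOZORDER keeps the
            // current Z order, so only the position changes.
            return SetWindowPos(hwnd, nullptr, x, y, 0, 0,
                                SWP_NOSIZE | SWP_NOZORDER) != 0;
        }

        // In MFC, WM_MOVE is routed through a message-map macro, e.g.
        // ON_WM_MOVE() dispatching to OnMove(int x, int y). Catching the
        // same message from LabVIEW would mean subclassing the window
        // procedure, which is well beyond a simple DLL call.
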