ragglefrock
Everything posted by ragglefrock
-
QUOTE (crelf @ Feb 20 2009, 12:46 PM) I tried to figure this out once, but there was just too much vertical knowledge involved.
-
MessagePump: A Messaging Framework Class
ragglefrock replied to mje's topic in Application Design & Architecture
Nicely done! I like the encapsulation above all: it's a simple framework to use at the top level. I also really like the idea of passing unknown messages on to the parent class. This is an interesting way to handle messages. You could probably architect it another way that wouldn't require each Message Handler override to call the parent method in a default case, but right now I can't think of exactly how to do that. Another area to explore might be supporting other kinds of callbacks, such as user events, which are useful for code that already has an event structure set up. Perhaps there's a way to dynamically dispatch that functionality.
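Roughly, the "pass unknown messages to the parent" idea looks like this in C++ terms (class and message names are invented for illustration; this is a sketch of the pattern, not the MessagePump code itself):

```cpp
// Each override handles the messages it knows and defers everything else to
// its parent's handler, so unknown messages bubble up the class hierarchy.
#include <iostream>
#include <string>

class MessageHandler {
public:
    virtual ~MessageHandler() = default;
    virtual void Handle(const std::string& msg) {
        std::cout << "Base handler: ignoring '" << msg << "'\n";
    }
};

class UiHandler : public MessageHandler {
public:
    void Handle(const std::string& msg) override {
        if (msg == "Update Display") {
            std::cout << "UiHandler: updating display\n";
        } else {
            MessageHandler::Handle(msg);   // default case: call the parent
        }
    }
};
```
-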
QUOTE (turbophil @ Feb 18 2009, 12:06 PM) You can have the best of both worlds. Look into the Call Setup property for a subVI: right-click the subVI on the caller's block diagram and select Call Setup from the shortcut menu. From there you'll see a few options: Load with Callers, Reload for Each Call, and Load and Retain (or something close to that). Reload for Each Call is basically equivalent to the Call by Reference you're trying to implement, but it's as easy to debug and deploy as a subVI. Try using this setting on both RT and Windows. Alternatively, you can use the Load and Retain option (or whatever it's called), which means the subVI isn't loaded until it's first called, but is then kept around in case you need to call it again. Debugging is great: you can step into it, set breakpoints, and access its front panel and block diagram just like a regular subVI. And deployment is easy, too: it's seen as a dependency and automatically deployed to the target or included in a build with everything else, so you don't need to add it as a dynamic VI in the App Builder.
-
How do I copy an LVOOP method to a sibling class?
ragglefrock replied to Jim Kring's topic in Object-Oriented Programming
QUOTE (Jim Kring @ Feb 18 2009, 01:41 PM) For the scripting approach, the VIs in <LabVIEW>\resource\Framework\Providers\LVClassLibrary look promising, if you can figure out how to use them correctly without seeing their block diagrams, which obviously could be tricky. From their names, they seem to perform a lot of the operations you're trying to do. They appear to be the project providers that handle class operations in a LabVIEW project. For instance, there's a VI in the New Override folder called CLSUIP_ReplaceDynDispatchCtls.vi that looks like it could be useful, but without seeing the BDs it's hard to tell exactly... I'd personally prefer a more code-oriented approach if I could devise one. For instance, you could put the Flatten to String method in the parent, with the data held not in a variant but in a generic Data class. Every child class then contains a child of that generic Data class holding the child-specific data, so you're never dealing with variants. The difficulty is that you'd need to create a unique Data class for every child object and go through the trouble of creating accessors, too.
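In rough C++ terms, the composition idea looks like this (all names are invented for illustration; an LVOOP version would hold the Data object by value in the parent's private data rather than through a pointer):

```cpp
// Sketch of the "generic Data class" idea: the parent defines Flatten once,
// delegating to a polymorphic Data member, and each child installs its own
// Data subclass instead of packing child-specific data into a variant.
#include <memory>
#include <string>

class Data {
public:
    virtual ~Data() = default;
    virtual std::string Flatten() const { return ""; }
};

class MotorData : public Data {          // one Data subclass per child class
public:
    double speed = 0.0;
    std::string Flatten() const override { return "speed=" + std::to_string(speed); }
};

class Device {                           // the common parent
public:
    explicit Device(std::unique_ptr<Data> d) : data(std::move(d)) {}
    std::string Flatten() const { return data->Flatten(); }   // defined once
private:
    std::unique_ptr<Data> data;
};
```
-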
How do I copy an LVOOP method to a sibling class?
ragglefrock replied to Jim Kring's topic in Object-Oriented Programming
QUOTE (Jim Kring @ Feb 18 2009, 12:30 PM) Sorry about avoiding answering your actual question, but you might also consider a Delegation Pattern, described as such in the link: "To have two independent classes share common functionality without putting that functionality into a common parent class." -
what is the counterpart in labview for VB Byte type
ragglefrock replied to menghuihantang's topic in Calling External Code
I don't know anything about VB, but you could probably use a Numeric return type in your Call Library Function Node with the representation Unsigned 8-bit Integer. For simple data types in DLL calls, the exact type generally doesn't matter when passing data back and forth, as long as the sizes match.
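As a concrete illustration of the mapping (the function name here is made up):

```cpp
// Hypothetical DLL export: a VB Byte is an unsigned 8-bit value, so on the
// C/C++ side it is just uint8_t, and in the Call Library Function Node the
// parameter/return is a Numeric configured as Unsigned 8-bit Integer.
#include <cstdint>

extern "C" uint8_t GetStatusByte(uint8_t channel)
{
    return static_cast<uint8_t>(channel + 1);
}
```
-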
QUOTE (bsvingen @ Jan 21 2009, 03:28 PM) You can find function prototypes and more in Chapter 6 of Using External Code in LabVIEW (http://www.ni.com/pdf/manuals/370109b.pdf).
-
Inplace Structure Array Size Management
ragglefrock replied to Norm Kirchner's topic in Application Design & Architecture
QUOTE (Norm Kirchner @ Jan 20 2009, 07:56 PM) You definitely can. See my last pic with the for loop. In this case the Inplace Element Structure is doing nothing different from just using Bundle and Unbundle by Name. And plain bundles are definitely not faster than their By Name counterparts: the LV compiler figures out at edit time exactly which operations to perform in each case, so there's no run-time penalty. The only way one could take more time than the other is if you used a Bundle by Name and overwrote the same field twice in one bundle, and there's no good reason to do that. -
Inplace Structure Array Size Management
ragglefrock replied to Norm Kirchner's topic in Application Design & Architecture
I think you're overthinking this. First of all, the Inplace Element Structure doesn't produce different behavior for a Subtract node inside it than if you just used Unbundle and Bundle; in particular, it doesn't affect the zero padding at all. Take your original example with the plain Unbundle/Bundle nodes and replace them with Unbundle by Name and Bundle by Name: voila, your buffer allocation is gone. Or simply keep the plain Bundle nodes and wire your two scalar values across, and the buffer allocation disappears again. I realize this doesn't produce a different functional result, but for whatever reason it helps LV analyze the inplaceness of the diagram. The best way to make sure the Waveform being modified doesn't have its array resized by the subtraction would be to manually loop through its elements in a for loop, which is certainly less efficient than using the Subtract node's built-in behavior. -
Best Architecture for Time-Based Events?
ragglefrock replied to lvb's topic in Application Design & Architecture
QUOTE (Yair @ Jan 5 2009, 01:17 PM) The Timeout architecture would be perfect and easy to implement, but I don't know of any APIs that give microsecond timeout resolution. -
Best Architecture for Time-Based Events?
ragglefrock replied to lvb's topic in Application Design & Architecture
To get <1 ms resolution you'll probably have to end up polling, unless you're running on RT, where you could use a Timed Loop with the MHz clock. That loop would constitute your timer, and its period would be set to the relative alarm time in microseconds. Every time it executes, it could signal via an RT FIFO that the event occurred. This avoids polling and gives great timing. The problem comes when you want to let the user adjust the timing or pause the timer: to do that, you have to get a message to the Timed Loop. That's easy enough with another FIFO, but the loop only executes on its period, so if you send a pause command, the loop has to wait until the next alarm fires before processing it. That doesn't seem to fit your requirements, and it becomes especially noticeable when the events are far apart. For instance, if your alarm fires every 5 seconds, you could wait up to 5 seconds before being able to pause or adjust the alarm. -
I'll throw in another useful aspect of XControls: they're great in custom probes, because they allow the probe's panel to remain in an active state after the data has passed through the probed wire. That's very handy for analyzing reference-based data, because you don't have to know exactly what you're looking for before the data passes through the wire!
-
QUOTE (mesmith @ Dec 9 2008, 11:47 AM) The downside here is that it's difficult to know what the proper name will be if the spawned slave VIs are reentrant clones of the same VI. You'd need some sort of counter to know which reference to obtain; for instance, your queue name might be Slave Ref %d, where %d is an incrementing number unique to each spawned slave. But then you need to pass that unique number to the slave somehow, which defeats the purpose. If the spawned slave VIs are all different VIs, then this is a perfectly good solution. Just a thought...
-
QUOTE (Greg Hupp @ Dec 9 2008, 11:12 AM) Before you call the Run VI method, call the Set Control Value (Variant) method on the VI reference and set the queue reference inputs by name. As long as the slave VI has queue reference controls of the correct data type and name, this will work. One caveat: if you build your system into an executable, you can't remove the front panels of the slave VIs, because they need to be present for Set Control Value to work.
-
What is the perfect use for the Semaphore?
ragglefrock replied to BrokenArrow's topic in LabVIEW General
QUOTE (BrokenArrow @ Nov 17 2008, 02:14 PM) I'll do it for $52 -
What is the perfect use for the Semaphore?
ragglefrock replied to BrokenArrow's topic in LabVIEW General
Also note that global variables (not FGVs) do NOT give you the same protection a semaphore does! It's entirely possible for two or more parts of the application to take control of the shared resource at the same time, which is known as a race condition. For example, suppose you have a boolean global called Don't Write that you set to true while one part of the code is writing to a shared file. While that global is true, other parts of the program don't write anything and wait for it to go false. Sounds safe? Wrong... It's possible (and in the long run probable) that two parts of the program will check the global at essentially the same time, both read false, and both set it to true and start writing. The worst part is that this rarely happens, which is bad because now you have an intermittent bug that's hard to reproduce, and therefore hard to debug and hard to prove you've fixed later. The only way to make the global-variable approach safe would be to put semaphores around every use of the global, but at that point you've pretty much eliminated the reason for using a global at all. Hope this helps!
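To illustrate the race outside LabVIEW, here's a minimal C++ sketch of the difference between a plain flag and a real lock (names are invented; the same logic applies to a boolean global on a block diagram):

```cpp
// Why a flag is not a lock: two threads can both read dont_write as false
// before either sets it to true, so both enter the critical section. A mutex
// or semaphore makes the check-and-acquire step atomic.
#include <iostream>
#include <mutex>

bool dont_write = false;              // the unsafe "Don't Write" global
std::mutex write_lock;                // the safe alternative

void unsafe_write(const char* msg)
{
    while (dont_write) { /* wait */ } // check...
    dont_write = true;                // ...then set: another thread can slip
                                      // in between these two lines
    std::cout << msg << '\n';         // the shared resource
    dont_write = false;
}

void safe_write(const char* msg)
{
    std::lock_guard<std::mutex> guard(write_lock);  // atomic acquire/release
    std::cout << msg << '\n';
}
```
-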
May I be excused? My brain is full (out of memory)
ragglefrock replied to torekp's topic in LAVA Lounge
QUOTE (torekp @ Oct 10 2008, 08:09 AM) If you're reading file contents into one giant array or string, the issue you have to deal with is not how much memory is left, or how much physical memory your specific computer has. The one important factor is how much contiguous memory LabVIEW's process has left out of its 2GB allotment. Like most other programming environments, LabVIEW can't split an array into separate pieces in memory. So it's very possible that although LabVIEW is only using 500MB of its 2GB, there isn't any single free region in that memory larger than, say, 350MB. If so, LabVIEW can't allocate a 350MB array, even though it has plenty of total memory left. The lesson here is that knowing the total amount of memory left isn't very useful, and determining how much of that remaining memory is contiguous is very difficult. Instead, it's much better to architect your application to use multiple smaller arrays rather than one big one. Try reading your file in 100,000-element chunks instead of all at once. You might then store the separate chunks in one queue for easy, efficient access later on, or statically keep multiple shift registers on a while loop for the various parts. It will be much, much less likely that you'll ever get out-of-memory messages. Other programming languages address this problem with data structures such as linked lists, which store data elements separately rather than in one big chunk. Each element contains the relevant data and a pointer to the next element, so you can always traverse the structure to reach any desired piece of data, and since the data isn't contiguous you almost never worry about running out of memory. The downside, of course, is that it takes a lot longer to get to a specific element within the structure (so-called random access). With an array it's easy, because you know where the array starts and how big each element is, making it trivial to "jump" to the desired element instantly.
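A minimal C++ sketch of that chunked, linked-list idea (the struct and field names are invented):

```cpp
// Each node owns one chunk of the file plus a pointer to the next node, so no
// single contiguous allocation is ever needed; only each chunk is contiguous.
#include <memory>
#include <utility>
#include <vector>

struct Chunk {
    std::vector<double> samples;    // e.g. 100,000 values read from the file
    std::unique_ptr<Chunk> next;    // null for the last chunk
};

// Append a newly read chunk to the end of the list and return the new tail.
Chunk* append_chunk(Chunk* tail, std::vector<double> samples)
{
    tail->next = std::make_unique<Chunk>();
    tail->next->samples = std::move(samples);
    return tail->next.get();
}
```
-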
Programatically down casting strict typecast control
ragglefrock replied to HChandler's topic in LabVIEW General
You can also use the Scan From String primitive to get the index of a particular enum string. It already works with any enum, so there's no reason to build a generic subVI for it. -
Question about shared clones
ragglefrock replied to John Lokanis's topic in Development Environment (IDE)
QUOTE (jlokanis @ Sep 2 2008, 03:01 PM) Here's the reasoning behind shared clones not having uninitialized shift registers. The problem isn't that they share a data space. Each clone will get its own data space, just like any other reentrant VI. The problem is that it's usually impossible to safely predict which clone each specific subVI call will use. It might not be the same clone you used last time for that specific subVI call on your diagram. So the data space you use with the data from the uninitialized shift registers is not guaranteed to be the same data that subVI call set last time. You are sort of correct in saying this is less of a problem because your subVI calls don't ever end, but go on forever (to some extent). So you don't really care about which clone's data space you get "next time," since there probably isn't a next time. But that brings up this question: why have uninitialized shift registers at all for a VI call that runs only once and doesn't ever stop? Why does that help you? It sounds like you might as well have initialized shift registers to maintain the data while that subVI call is in place, but not necessarily to save it for the future. -
QUOTE (vugie @ Aug 29 2008, 03:37 AM) LabVIEW 8.6 has a small new feature that lets a CLFN specify an input parameter as a pointer-sized integer; LabVIEW adapts the data to the pointer size of the operating system currently in use (32- or 64-bit). The idea, I believe, is to always use an I64 or U64 for the pointer data on the diagram and let LabVIEW decide whether to use only 32 bits of it or all 64. If you can't use LV 8.6, you'll probably have to use Conditional Disable Structures to configure separate calls for 32- and 64-bit systems, or write a wrapper DLL to do the same kind of data management. I can't think of a great option for switching between single- and double-precision floating point numbers; your ideas seem as good as any I can come up with right now, unfortunately. The LV 8.6 release notes are at http://digital.ni.com/manuals.nsf/websearch/1CEFD3AEAB830B3886257451006BD8BD (the pointer-sized integer feature is on page 45).
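A hypothetical call to illustrate the mapping (the function name is invented): the C side keeps its natural pointer type, and the matching CLFN parameter is the one configured as a pointer-sized integer, carried as a 64-bit integer on the diagram.

```cpp
// 'buffer' is the parameter you'd mark pointer-sized in the CLFN; on a
// 32-bit target LabVIEW passes only the lower 32 bits of the wire value.
#include <cstdint>

extern "C" double SumBuffer(const double* buffer, int32_t count)
{
    double total = 0.0;
    for (int32_t i = 0; i < count; ++i)
        total += buffer[i];
    return total;
}
```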
-
QUOTE (crelf @ Aug 28 2008, 09:48 AM) Absolutely correct. The right thing to do now is to use a Static VI Reference (http://zone.ni.com/reference/en-XX/help/371361B-01/glang/static_vi_ref/).
-
QUOTE (rolfk @ Aug 27 2008, 04:53 PM) I'm not sure Type Cast is smart enough to use the data in place; I actually doubt it. I personally saw significant overhead (~15 µs) typecasting a double into a U64: same size, but slow, especially since I was doing it in a loop. If it were a true C-style cast, it would have taken essentially no time at all. Some functions are smart in the sense you described, such as U8 Array to String and vice versa; those are free functions with edit-time behavior only. Type Cast, I believe, is always a genuine run-time function. I actually ended up writing my own DLL function in C to cast from double to U64 to get around the performance hit. The DLL function wasn't in-place and copied the source data over to the destination, and it was still a lot faster than Type Cast.
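A sketch of that kind of helper (the function name is invented): reinterpret the bits of a double as a U64 by copying the bytes, with no flatten/unflatten step and no byte swapping.

```cpp
// Bit-for-bit reinterpretation of a double as an unsigned 64-bit integer.
#include <cstdint>
#include <cstring>

extern "C" uint64_t DoubleToU64Bits(double value)
{
    uint64_t bits = 0;
    std::memcpy(&bits, &value, sizeof bits);   // same 8 bytes, new type
    return bits;
}
```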
-
QUOTE (rolfk @ Aug 26 2008, 02:34 AM) In addition, Typecasting is much slower since it implicitly flattens the source data to string, then unflattens it into the target data type. That's a relatively slow operation, which I've found out the hard way. It's not really a big deal until you start doing it thousands of times in a loop, however.
-
QUOTE (vugie @ Aug 14 2008, 04:02 AM) I am not sure there's a safe way to pass a pointer to LabVIEW-allocated data and have it be continuously updated after the original function call completes. LabVIEW might think the memory is not being used anymore and deallocate it or overwrite it. So I believe you might have to write at least one wrapper function to do a malloc for the struct type and return a pointer to the memory. After that, you won't need any additional functions to simply read the value. LabVIEW has a built-in function in LabVIEW.exe called MoveBlock. You call it much like a DLL function in the Call Library Function Node, except you type in LabVIEW instead of a DLL name. MoveBlock can copy from a source location (the malloc'd struct from the simulation) to a target location (a LabVIEW cluster with 6 doubles). See chapter 6 of this PDF for more info on the function prototype for MoveBlock. There's possibly also an appropriate function that resembles malloc that you can call directly from LabVIEW to avoid needing a wrapper DLL at all. Check out AZNewPtr for instance. If I understand the second problem correctly, you need to get a function pointer to a VI to pass to the simulation DLL. This is rather tricky, but someone found a roundabout way to do this using LabVIEW's .NET callback functionality. Check out the solution here.
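A rough sketch of the wrapper idea, assuming a struct of six doubles as described above (all names are invented): allocate the state on the C side, hand the pointer back to LabVIEW as a pointer-sized integer, and then use MoveBlock to copy sizeof(SimState) bytes into a cluster of six doubles whenever a fresh snapshot is needed.

```cpp
// Minimal wrapper DLL: the simulation writes into this struct continuously,
// and LabVIEW only ever sees the pointer plus MoveBlock copies of its bytes.
#include <cstdlib>

struct SimState {
    double values[6];      // the six values the simulation keeps updated
};

extern "C" SimState* AllocSimState()
{
    return static_cast<SimState*>(std::calloc(1, sizeof(SimState)));
}

extern "C" void FreeSimState(SimState* p)
{
    std::free(p);
}
```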