Everything posted by ragglefrock

  1. QUOTE(neB @ Jan 4 2008, 02:31 PM) You could use your action engine to abstract away the fact that you're using TCP to transmit the data to the subVI.
  2. QUOTE(tcplomp @ Nov 26 2007, 01:43 PM) I don't think User Events are the best way to signal your XControl, at least not how it is set up currently. You are successfully firing and registering for the User Events, but the problem is that the XControl doesn't wake up automatically to handle the User Events. Remember that XControls are by definition NOT continuously running VIs. There's an external process of some sort that decides when to wake them up so they can run and handle their UI events, then go idle again. User Events don't wake them up apparently. They are getting successfully queued up, however, so as soon as some other event wakes your XControl up, it's ready to handle all the User Events at once. There have been posts on this forum about how to successfully use User Events with XControls. Maybe someone figured out a good solution. I don't know of one, personally. I would encourage you to use some other communication mechanism for your XControl such as a Value Signaling property node for an internal/hidden control on the Facade VI. It's a little clumsy, but XControls definitely process those messages. Otherwise, you need some sort of way to ping the XControl periodically just so that it will wake up to handle any incoming messages from User Events. This sort of defeats the purpose of an event-based design, I realize...
  3. QUOTE(jlokanis @ Nov 13 2007, 03:36 PM) I saved the VI I posted in 8.2, so you should be able to look at it. This doesn't require the inplace structure and should by itself give you a big memory usage improvement. If you do rethink this, then I would suggest finding a way to write the important data to file, and then be able to read the file and build a tree-friendly data structure from that. Your current architecture requires you to read and then update things, which would be difficult with File IO, so it'd only work if you find a way to simply write data to file without worrying about what's already been written. For instance, if you have a property called Symbol for a tree that you want to update occasionally, then you should write the tree tag and the symbol to the file as a pair. Then if you need to update it later, you don't change what you wrote to the file the first time; you just append a new symbol and tree tag pair to the file. When you get around to displaying the tree, you read all the messages from the file and only take the most recent message for each tree tag. In other words, you might read two messages updating the same tree symbol, and the second message would simply overwrite the first (see the sketch below). This would help you completely avoid memory allocation issues. The downside is that you have to spend a little extra time catching up when the user's ready to view the tree.
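     As a rough illustration of that append-then-replay idea (not LabVIEW; the file format and names below are made up), here's a minimal Python sketch where each update is appended as a tag/symbol pair and the reader keeps only the most recent pair per tag:

     # Minimal sketch of the append-only idea above. Hypothetical format:
     # one "tag<TAB>symbol" pair per line; all names are for illustration only.
     def append_update(path, tree_tag, symbol):
         """Record a new value for a tree tag without touching earlier records."""
         with open(path, "a") as f:
             f.write(f"{tree_tag}\t{symbol}\n")

     def latest_values(path):
         """Replay the log; later pairs overwrite earlier ones for the same tag."""
         latest = {}
         with open(path) as f:
             for line in f:
                 tag, symbol = line.rstrip("\n").split("\t", 1)
                 latest[tag] = symbol   # the second message overwrites the first
         return latest

     # Usage: write twice for the same tag, read back only the newest value.
     append_update("tree_log.txt", "item_42", "old symbol")
     append_update("tree_log.txt", "item_42", "new symbol")
     print(latest_values("tree_log.txt"))   # {'item_42': 'new symbol'}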
  4. QUOTE(jlokanis @ Nov 13 2007, 02:11 PM) Well, if you need to add data, you need to add data. That memory has to come from somewhere. There is a distinction, however, between resizing an existing buffer and allocating a completely new one. You can't allocate a new buffer with the inplace element structure, but you can resize that buffer. LabVIEW's in charge of resizing buffers for you so you don't have to think about it. I'm pretty sure (although this is hearsay) that what it does is resize the buffer to be bigger than you requested so that there's room to grow. Then when you grow beyond those bounds, it resizes again. That way it avoids constant allocation. But if you don't trust that or want to customize it, you could implement the same functionality. Just initialize a very large array of elements and have a field in your cluster that denotes where you are in that array and its size. Then use Replace Array Subset instead of Build Array until you run out of room. Then use Reshape Array to resize the array to be twice as big, for instance. That's a sustainable model that requires relatively little memory allocation and should run smoothly (see the sketch below). Regarding the Variant Database Norm is speaking of: yes, I believe LabVIEW will be forced to make copies of items when you check them out. An overkill method might be to store many single-element queue refs in the Variant Database instead of the data directly. Then you're copying a queue refnum (32 bits) instead of the whole thing. That's kind of a pain, I admit.
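     Not LabVIEW, but here's a short Python sketch of the same preallocate-and-double bookkeeping (the class and field names are invented for illustration); in LabVIEW the list would be the preallocated array in your cluster, written with Replace Array Subset and grown with Reshape Array:

     # Sketch of the preallocate-and-double strategy described above.
     class GrowableBuffer:
         def __init__(self, initial_capacity=1024):
             self.data = [None] * initial_capacity   # one big allocation up front
             self.count = 0                          # how much of it is in use

         def append(self, element):
             if self.count == len(self.data):
                 # Out of room: double the capacity so reallocations stay rare.
                 self.data.extend([None] * len(self.data))
             self.data[self.count] = element         # "replace", not "build"
             self.count += 1

         def used(self):
             return self.data[:self.count]

     buf = GrowableBuffer(initial_capacity=4)
     for i in range(10):
         buf.append(i)
     print(buf.used())   # [0, 1, ..., 9] with only two resizes along the way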
  5. QUOTE(jlokanis @ Nov 12 2007, 04:31 PM) The new Inplace element structure would indeed help avoid extra memory allocations, but you can also get about 90% of the improvements you'd see here without it, simply by keeping an important thing in mind regarding LabVIEW's inplaceness algorithm, which I'll explain. LabVIEW is really good at operating on data inplace when you unbundle something, then bundle it back in. However, LabVIEW can't be certain when it's safe to operate inplace if you either unbundle or bundle conditionally. That's exactly what you're doing, and it's causing LabVIEW to back up the entire data structure for every operation, which becomes exceedingly costly. To avoid this, try to always unbundle and bundle regardless of the situation. In other words, don't place one of these nodes in a Case Structure when you don't have the partner node in that same Case Structure. The better solution in your case would be to always unbundle the data, but then to decide whether you bundle back in the same untouched data or the modified data. LabVIEW's a lot better at processing that. It's true that the inplace element structure would save you an extra allocation when you index and then replace an array subset. Here LabVIEW always makes a temporary copy. However, I'm guessing this is not the cause of the big problem you are seeing. I'm attaching a modified version of your VI below. Note that I really didn't spend much time analyzing this except to make sure the buffer allocation dots went away for the main Tree Data structure. You should definitely double-check everything to make sure it functions the same as it did.
  6. QUOTE(yen @ Oct 8 2007, 03:48 PM) I thought I had it, never mind....
  7. QUOTE(Aristos Queue @ Sep 12 2007, 09:53 AM) This is also the approach taken by many new programs such as iTunes and iPhoto (Apple isn't the only one with the idea). They really don't want users thinking about the file structure at all. That should be completely hidden. For instance, if I crop a picture in iPhoto, iPhoto will make a backup of the file in case I want to revert the changes a month later, and put the backup in some folder on disk I've never heard of. But I don't think about all this complicated file management, I just hit Undo. On the other hand, there's a (potentially) big difference between users of basic Apple programs and engineers, who might want that integral control over everything, even if it hurts them.
  8. QUOTE(crelf @ Sep 12 2007, 10:06 AM) I prefer this method of getting an array's size.
  9. QUOTE(Aaron @ Sep 11 2007, 02:25 PM) If you have a reference to the other VI that's running, you can wire that VI reference into a property node to get a reference to the Panel. Then wire the Panel reference into another property node and get a reference to all the controls (Controls[ ]) on the panel. You might have to loop through those controls looking for the right label, and then cast it from a generic control reference into the desired type (ring, slider, etc.).
  10. QUOTE(LV Punk @ Sep 7 2007, 12:24 PM) I agree with only showing as much data as necessary. You could use custom scrollbar controls to emulate real scrollbars for the table. When their values change, get the new bounds for the large table and replace the visible data (see the sketch below). This way the table never really stores more than, say, 200 cells' worth of data. Here's an 8.5 example (non-XControl, sorry!)
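     A small Python sketch of the windowing math (the sizes and names here are arbitrary): the full dataset stays outside the table, and only the slice the scrollbars currently point at gets written into the visible control.

     # Sketch of the "only show what's visible" idea from the post above.
     def visible_window(full_data, first_row, first_col, rows_visible, cols_visible):
         """Return the slice of the big dataset the table should display."""
         last_row = min(first_row + rows_visible, len(full_data))
         return [row[first_col:first_col + cols_visible]
                 for row in full_data[first_row:last_row]]

     # Pretend the scrollbar's Value Change event just reported row 9500:
     full_data = [[f"r{r}c{c}" for c in range(50)] for r in range(10000)]
     print(visible_window(full_data, first_row=9500, first_col=0,
                          rows_visible=4, cols_visible=3))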
  11. QUOTE(Eugen Graf @ Sep 6 2007, 07:43 AM) I'm confused as to why the Read case needs to queue itself up. Reading the Queue happens every iteration anyway, so it doesn't serve any purpose to have a specific Read case to tell it to do so. Can you explain what you're trying to accomplish? Do you only want to process messages after hitting Connect?
  12. [Accidentally deleted this post while I was editing it, here we go again...] Here's an example that does a Graph-type container in a very simplistic, data-flow safe manner (sketched below). I'm not claiming it's elegant, but please let me know if it fails to fit any requirements. It's just an array of nodes, where each node contains data and links to other nodes. The links are really just indices into the overall array. To add nodes, you just append them to (or insert them into) the array. You don't really delete nodes; you just delete the links to the node you're removing and flag its index as the first node to replace when adding a new node. If you want, you can empty out its memory by replacing it with a dummy item. It all seems data-flow safe, though. If you branch the array wire, you get a copy of the graph. There are no real tricks involved. The downside, I guess, is that you don't really reclaim memory when you delete objects. The array will never shrink in size. You could maybe look at Variant Property storage if you wanted to circumvent this limitation.
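     For what it's worth, here's a loose Python sketch of that array-of-nodes layout (field names invented): links are indices into one list, and deleted slots are flagged for reuse instead of being removed, so the array never shrinks.

     # Sketch of the array-of-nodes graph described above.
     from dataclasses import dataclass, field

     @dataclass
     class Node:
         data: object = None
         links: list = field(default_factory=list)   # indices of neighbor nodes
         in_use: bool = True

     class ArrayGraph:
         def __init__(self):
             self.nodes = []    # branching/copying this list copies the whole graph
             self.free = []     # indices flagged for reuse

         def add(self, data):
             if self.free:                      # reuse a deleted slot first
                 idx = self.free.pop()
                 self.nodes[idx] = Node(data)
             else:
                 idx = len(self.nodes)
                 self.nodes.append(Node(data))
             return idx

         def link(self, a, b):
             self.nodes[a].links.append(b)

         def delete(self, idx):
             # Remove links pointing at idx, then flag the slot for reuse.
             for n in self.nodes:
                 if n.in_use:
                     n.links = [l for l in n.links if l != idx]
             self.nodes[idx] = Node(in_use=False)   # dummy item empties its memory
             self.free.append(idx)

     g = ArrayGraph()
     a, b, c = g.add("A"), g.add("B"), g.add("C")
     g.link(a, b); g.link(b, c)
     g.delete(b)
     print([(n.data, n.links) for n in g.nodes if n.in_use])   # [('A', []), ('C', [])]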
  13. QUOTE(Neville D @ Aug 10 2007, 04:51 PM) It only really makes sense if you want to execute states at a specific rate (maybe not very likely), or if you want to run your state machine on a specific processor. In that case you can just set your Timed Loop period to zero and use it somewhat like a regular while loop. There will be some extra overhead in this approach versus a regular while loop. If you want to be able to run your state machine on a specific processor in LV85, another option is to put your existing state machine with a regular while loop inside a single-frame Timed Sequence Structure and set its processor affinity. This is simpler and probably has less overhead than the option above, but it won't let you dynamically change processors while your state machine executes.
  14. QUOTE(ooth @ Jul 29 2007, 10:12 PM) Yup. Linux might have different scheduling behavior than Windows, but it's not a Real-Time OS. At any time it could take the CPU away from LabVIEW and do whatever it sees necessary. [There is development on a Linux RT-type system, but I don't know much about that. Your standard Linux desktop, though, isn't RT.] Operating Systems like Windows treat priority settings for user code as requests, not commands. You request a priority, and all things being equal Windows will do its best to abide by that request. But it makes no promises. I heard somewhere that one of the highest priority processes in the system at all times is the code that keeps track of the mouse and updates the cursor. Microsoft was sick of people calling tech support saying their computer was frozen with Windows 95 and before, when in fact there were just higher priority tasks executing than dealing with the mouse. Now you can move your mouse around all you like when your computer is hanging and feel all warm and fuzzy inside. Really, there's no difference
  15. QUOTE(Ben @ Jul 24 2007, 11:12 AM) Ben, I don't consider myself a picture control guru, but the only thing that comes to mind here (and you might know this already) that would help you drag items in a picture control around is to have each item in the picture be its own picture data. Then you have an array of picture control items, one for each object. Then to render the actual picture that gets displayed, simply use the Concatenate Strings function to concatenate (overlay) all your pictures. You can even just wire the array of picture items into a single-slot Concatenate Strings function, and it will output one picture item with all the items overlaid. The advantage here is that moving one item doesn't force you to redraw or recalculate the rest of the items. Just index out that particular picture, convert it to a Flattened Pixmap, adjust its XY coordinates and bounds, and put it back into the array. If you look inside the Picture Control functions, a lot of them do this to overlay pictures. I did this with a calendar app I was doing as a hobby so that I didn't have to redraw the grid and everything just to update the date numbers, or items on a particular date. Hope this is what you meant.... Here's an example of what I meant in terms of overlaying images (plus a rough layering sketch below). To add dragging to this, you would have to keep track of where each item was so the user could select and "drag" it with the mouse.
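     There's no text equivalent of LabVIEW picture strings, so this Python sketch is only an analogy of the layering idea (all names invented): each item lives in its own layer, rendering just stacks the layers, and dragging one item touches only that layer.

     # Rough analogy of the one-picture-per-item approach described above.
     def composite(layers, width=20, height=5):
         """Overlay every layer onto one canvas; later layers draw on top."""
         canvas = [[" "] * width for _ in range(height)]
         for layer in layers:
             for (x, y), glyph in layer.items():
                 canvas[y][x] = glyph
         return "\n".join("".join(row) for row in canvas)

     def move(layer, dx, dy):
         """Adjust only this layer's coordinates; other layers stay untouched."""
         return {(x + dx, y + dy): g for (x, y), g in layer.items()}

     layers = [{(1, 1): "A"}, {(5, 2): "B"}]   # one layer per draggable item
     layers[0] = move(layers[0], 3, 0)         # drag item A; B is never re-rendered
     print(composite(layers))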
  16. QUOTE(torekp @ Jun 28 2007, 06:30 AM) Good catch. Actually most of the time difference you were seeing was the overhead of a subVI call. Changing the subVI's execution to subroutine reduces the time taken from 1300ms or so down to around 80ms, and with some further optimizations you can get it down to around 48ms. Copying and pasting the code itself directly into the loop reduces it further to around 16ms. Still, the Quotient & Remainder function only takes around 5ms. The remaining difference appears to simply be buffer allocations. My "optimized" algorithm had something like 5 or 6, while the Quotient & Remainder function purports to only need 2. QUOTE(ragglefrock @ Jun 28 2007, 09:48 PM) OK, further update. I can get my homemade code attached below to run exactly as fast as Quotient & Remainder. Sometimes a little faster (10-15ms faster over 100,000,000 iterations), but that's probably just timing error. Here's what I did: 1. Turn off debugging. This was the last piece that allowed my code to catch up in speed. 2. My calculation for the quotient was okay, but the code for the modulus was very slow. I was using multiplication, which is fairly expensive (not on the order of division, but more than addition and bit shifting). Here's my code below for you to examine. The performance is really about the same. So sorry for claiming I could do better, but this isn't that bad.
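     The VI itself isn't attached here, so the following Python fragment is only a guess at the kind of shift-and-mask trick hinted at above; it assumes the divisor is a power of two (2**k), which is the usual case where bit operations can replace the multiply in the remainder calculation.

     # Hypothetical shift/mask version of Quotient & Remainder for a power-of-two divisor.
     def quotient_remainder_pow2(x, k):
         divisor = 1 << k
         q = x >> k                 # quotient: shift instead of divide
         r = x & (divisor - 1)      # remainder: mask instead of x - q*divisor
         return q, r

     print(quotient_remainder_pow2(1234567, 10) == divmod(1234567, 1024))   # True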
  17. QUOTE(Ben @ Jun 14 2007, 06:20 AM) I agree with that in principle, but I don't think the default value created by the VI with an unwired terminal would be a constant value. Maybe copied from a constant, but not a constant itself. Here I'm just guessing, but it's the basis of my original answer.
  18. QUOTE(Tomi Maila @ Jun 13 2007, 11:26 AM) I would doubt that. The inplaceness algorithm just determines if it's safe to reuse the input buffer for the output data. Whether that input buffer comes from a default value when unwired or a wired input value shouldn't make any difference. In any case, the Hide the Dots game would quickly verify whether there is a difference.
  19. QUOTE(Thang Nguyen @ Jun 13 2007, 06:37 PM) TDMS VIs won't directly give you a time-based subset of your data, but all the information is there to do this with a little work yourself. Here are two approaches, an easy but inefficient one and a more correct method (both assume you wrote your data to the TDMS file as waveforms): Easy but Inefficient: Read the whole channel from file and use the Waveform Subset VI to extract the desired portion. Bad for very large channels. More Correct Method: If you write a waveform to a TDMS file, the underlying data is really stored as an array with a special channel property called wf_increment to store the dt value. You can read this property for your channel and get the double-precision dt value. The property for the t0 value I believe is called wf_offset (view the channel in the TDMS Viewer to get the exact names). Then, based on t0 and dt, you calculate the offset into the channel array where the desired waveform subset begins and how long the subset will be (see the sketch below). Then use the TDMS Read inputs Count and Offset to specify how many data points to read out, and finally build a waveform from this array with the new t0 and output it.
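     Here's the index arithmetic in a small Python sketch (no TDMS library involved; the function name and example numbers are made up): given the channel's t0 and dt, it returns the Offset and Count values you'd wire into TDMS Read, and the subset's new t0 is then t0 + offset * dt.

     # Sketch of the offset/count math described above.
     def tdms_subset_indices(t0, dt, subset_start, subset_duration):
         """Return (offset, count) in samples for the requested time window."""
         offset = max(0, int(round((subset_start - t0) / dt)))
         count = int(round(subset_duration / dt))
         return offset, count

     # Channel written at 1 kHz starting at t0 = 0.0; we want 0.5 s starting at t = 2.0 s.
     offset, count = tdms_subset_indices(t0=0.0, dt=0.001,
                                         subset_start=2.0, subset_duration=0.5)
     print(offset, count)   # 2000 500 -> wire these to TDMS Read's Offset and Count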
  20. QUOTE(brianafischer @ Jun 13 2007, 08:55 PM) David's ideas are probably the best way to go about tackling this. It's very reliable, and one of his solutions really points out the true nature of this problem. It's unlikely you really need an unlimited number of buttons. How many buttons can a user process visually? There's usually some max, even if it's 50 or more. So just statically create 50 buttons, and only show the number you need at that time. Here's another example you could look into if you have LV8.x (http://community.ni.com/examples/linked-object-list-in-labview-8-0-using-xcontrols/), but I'd warn you that it's overkill and pushes the limits of what LV's built to do. But hey, that's half the fun...
  21. QUOTE(Aitor Solar @ May 25 2007, 02:12 AM) Hmmm... you might consider as a workaround creating your own property for your XControl called myDisable. It would have the same inputs (0, 1, 2) as the regular Disable property for a control, but all it would do is turn around and disable the actual controls on the XControl panel instead of disabling the XControl itself. It would have the same effect, but technically your XControl itself would not be disabled, so the Value property wouldn't be affected. Obviously the problem with this approach is that if you're disabling a number of controls together using an array of references, then you'd have to check which references aren't normal controls and call your version of the Disabled property node instead of the regular one. Just a thought...
  22. QUOTE(Michael_Aivaliotis @ May 15 2007, 01:44 PM) It seems to me your best option would be to put the tedious work into a utility VI that you keep in your project. It could flush out the folder in the project, scan the folder on disk, and then add all those items (recursively if necessary) into the project. I'm not sure what to tell you about forgetting to do this. Can you programmatically trigger the Application Builder to build a build spec such as your installer? I can't remember. If so, you can have your utility VI do the syncing and the building all at once. This could be your one source for building the installer. Ton mentioned in the other thread using .NET events to synchronize everything, but that seems to me to require having some service running 24/7 that might even keep your project open forever. Seems like overkill, and .NET events for folder changes have a finite buffer that might overflow if you add a large number of files at once.
  23. QUOTE(ned @ May 15 2007, 10:34 AM) I'm not sure if LabVIEW folding the loop has any effect on buffer allocations. Even if it does, my guess is that LabVIEW would still have more difficulty determining which two tunnels go together in this case if you don't use shift registers. For simple algorithms it might be trivial, but when traversing multiple cases of a case structure, the path might not be clear. At best LabVIEW might be able to find the inplace path through the loop, but using shift registers is a big hint to LabVIEW. Stick with shift registers. I was surprised to learn that LabVIEW can inplace various other tunnel forms, such as an input and output auto-indexing tunnel! I never knew that and am very glad to know, since this is the quickest way to operate on array elements. You still run the risk that LabVIEW won't recognize the path through the loop, though, so complex algorithms might even benefit from a single-cycle loop inside the for loop with shift registers. Don't quote me on that, as I've never seen it in practice. Just a thought
  24. QUOTE(crelf @ Apr 25 2007, 06:50 PM) Note that Jim's method won't work if you have Wait Until Done set to True on the run call. That establishes the same caller hierarchy and produces the same error when trying to abort the dynamic VI. If you need these plugins to return data to you and still be able to abort them, you'll probably still need to dig into them to establish some event-driven communication back to the caller.
  25. QUOTE(Jim Kring @ Apr 7 2007, 10:21 AM) Another editing step that takes longer with for loop FGs is disabling the auto-indexing that occurs by default. It's generally not the desired behavior with FGs. That's the reason I've stayed away from them, personally.