
ragglefrock

Members
  • Posts

    105
  • Joined

  • Last visited

Everything posted by ragglefrock

  1. QUOTE (Darren @ Aug 11 2008, 12:55 PM) Hmmm... not at all what I see. I have Load in Background checked, but regardless of whether I wait 10 minutes after launching LabVIEW (first time or not), it still takes 45 seconds or so. Perhaps a slight exaggeration, but definitely in line with what Ben's seeing. I also have many toolkits and modules installed. I will try Load on Launch and see if that helps, but my LabVIEW launch time is already pretty high (much, much higher than 8-12 seconds). Do you think SCC is somehow playing a role in this?
  2. Is there some setting to preload the Quick Drop data in the background? The first time I use it, it takes something like 45 seconds to load. It's hard to motivate myself to use this time-saving feature when I have to burn 45 seconds on the first use (per LabVIEW context). I have Load Palettes in Background checked in the Tools>>Options dialog, but that doesn't seem to have any effect. Ideas?
  3. QUOTE (jpdrolet @ Aug 6 2008, 11:53 AM) You could also use the actual system time to see if more than 2^32 ms had elapsed since the last call.
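The rollover check in post 3 can be sketched outside LabVIEW. This is a loose Python analogy, not LabVIEW code: the u32 millisecond tick counter wraps every 2^32 ms (about 49.7 days), and the wall-clock system time tells you how many wraps occurred between two readings. The function name and calling convention are made up for illustration.

```python
WRAP = 2**32  # a u32 millisecond tick counter wraps every ~49.7 days

def elapsed_ms(prev_tick, now_tick, prev_time_s, now_time_s):
    """Elapsed milliseconds between two u32 tick readings.

    Uses the wall-clock system time (in seconds) to work out how many
    2**32 ms wraps occurred between the two readings, so the result
    stays correct even across counter rollover."""
    tick_delta = (now_tick - prev_tick) % WRAP        # elapsed ms, modulo the wrap
    wall_delta_ms = (now_time_s - prev_time_s) * 1000  # approximate true elapsed ms
    wraps = round((wall_delta_ms - tick_delta) / WRAP)  # whole wraps that occurred
    return tick_delta + wraps * WRAP
```

The wall clock only needs to be accurate to within about 24 days for the wrap count to round correctly, so ordinary system time is more than good enough.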
  4. QUOTE (crelf @ Jul 28 2008, 05:11 PM) Merging error wires based on where you click is a great idea. I'm not a fan of express VIs for a number of reasons. I don't want to imply that this Wish List idea is an extension of express VIs. It should just be an editor ease-of-use concept. I personally dislike express-stuff because I don't think it really teaches people how to fully use LabVIEW. There's very much a limit of what you can do with express VIs, and once you reach it, you aren't really prepared to start programming with "normal" LabVIEW functions. You don't have a good concept of bundling, unbundling, arrays, etc. I'm sure I could explain that better, too. What else would you expect to happen when you wire one error wire into another besides merging them? (Actually, I think I may have answered my own question. It's common to do this when inserting a function between two others already connected by an error wire.)
  5. QUOTE (Tom Bress @ Jul 28 2008, 02:50 PM) Good point. There could be a right click option similar to Reorder Controls in Cluster. Visually, it could always be oriented such that the top error gets priority.
  6. There's a really cool feature with Dynamic Data Type that allows you to merge signals really easily on the block diagram. Ignoring the fact that I never actually use DDT, here's how it goes: If you wire a DDT directly onto another DDT wire, LabVIEW automatically inserts the Merge Signals function instead of producing a broken wire. Pretty cool. It would be even cooler if LabVIEW would do the same if you wire two error wires together. LabVIEW could insert a Merge Errors function that actually scales to the number of error wires to be merged. It's very common to have parallel operations that both have error outputs that need to be merged. This would provide a very easy way to do so. It also seems possible since the error wire is a specific data type that LabVIEW recognizes.
  7. I'll throw in my pet peeves about the editing environment. They're small individually, but over the course of the day they cost me a lot of time and patience!

First, there are no keyboard shortcuts for specific alignments of block diagram objects. Yes, I know you can hit Ctrl-Shift-A to redo the last alignment, but that requires me to remember what the heck the last alignment was, which is often not what I want to use this time. I would be willing to memorize a few extra keyboard shortcuts to get the alignment I want (left, right, up, down). For instance, use the numeric keypad and have Ctrl-Shift-4 for left alignment, Ctrl-Shift-6 for right alignment, etc. I do this about 50 times a day (slight exaggeration), and it's always a pain point not to have an automated method.

Second, creating block diagram space in one dimension only. I very rarely want to create space in two directions, because that can completely screw up the alignment of the whole block diagram. I almost always want to create space in one direction, so I have to very carefully make sure I only Ctrl-drag in one direction. One pixel off and I have to redo the operation. Very annoying.

Ok, I'll throw in another one, which is definitely more like a feature request than a pet peeve, but it's close enough: when you have an array control or constant, you can add or remove dimensions through the right-click menu (very cool). But you can't bridge the gap between scalar and vector items in the same manner. So it's always annoying when I have an array control and I want to replace it with a scalar of the same type. I usually end up deleting the control and starting from scratch, which makes me reconnect it to the Connector Pane. Same goes for a constant. And I'm tired of dropping down empty array shells to create arrays of things. What would be really cool would be a right-click menu option to handle this: right-click a scalar of an appropriate type (non-waveform, non-XControl) and you get an option to Change to Array; right-click a 1D array and instead of Remove Dimension you see Change to Scalar. This would make life easier, and conceptually I think it's a pretty straightforward continuation of the Add/Remove Dimension idea.
  8. QUOTE (neB @ Jul 2 2008, 02:26 PM) The easiest thing to do is to have one root folder that contains a class with any number of subfolders for methods, typedefs, etc. I would personally never have a class member at a directory level lower than the lvclass file, and I wouldn't ever keep two lvclass files in the same directory. In fact, I generally prefer to mirror the lvclass structure in the project on disk.
  9. I think right-clicking the class and selecting Save As will do exactly what you want. It copies the member VIs, preserves the inheritance settings, and maintains the directory organization and everything.
  10. QUOTE (Eugen Graf @ Jun 24 2008, 04:52 PM) The question's a little confusing. How are you using your LVOOP class in C++? Are you calling a LabVIEW-built DLL that creates and uses LVOOP classes? To flatten the LVOOP class appropriately, you will need to use the LabVIEW Flatten to String function. You can create a wrapper VI that calls this function with a class input and create an exported DLL function to call this from C++.
  11. QUOTE (Yen @ Jun 7 2008, 03:11 PM) I don't think that's a valid test. You're opening two references to the queue, so even if you stop one of the VIs and let it go idle (or even close it), you still have a top-level VI running that holds a reference to the queue. I couldn't manage to create a VI that failed, but I believe the danger is that if you only use Obtain Queue once, from a VI that eventually goes idle, then that queue reference won't stay valid in other parts of the program that continue running after the VI that created the queue goes idle. But again, I tried to reproduce this and was unable to...
  12. QUOTE (rolfk @ Jun 6 2008, 12:31 AM) OK, I've answered my own question regarding this worry. It seems the easiest thing to do would be to make the wrapper VIs on the server that access the actual queue resource reentrant with Shared Clones (assuming you have LV 8.5). This makes life so much easier! It allows the client to simply open a regular remote reference to the wrapper VIs without thinking about whether additional instances ever need to be opened. The server is in charge of opening additional instances as needed. For instance, if one client is blocking trying to enqueue data into a full queue, and another client tries to enqueue data onto an empty queue, the server will open another instance of the local enqueue VI. It's pretty awesome... Attached are the examples I put together. There's only a subset of the queue functions, but the idea is easily extendable. One assumption I made is that the server VIs must be in memory before the client tries to access them.
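The shared-clone behavior in post 12 can be loosely sketched with threads: one worker per request, so a request blocked on a full queue never serializes access to other queues. This is a rough Python analogy only; the thread-per-request model is my stand-in for LabVIEW's shared clones, and all names are made up.

```python
import queue
import threading

# Two independent "queue resources" hosted on the server.
queues = {"A": queue.Queue(maxsize=1), "B": queue.Queue(maxsize=1)}
queues["A"].put("old")  # queue A starts out full

results = []

def enqueue(name, item):
    # Each request runs in its own "clone" (thread), so a request
    # blocked on full queue A doesn't stall access to queue B.
    queues[name].put(item, timeout=5)
    results.append((name, item))

# One client blocks forever trying to enqueue into full queue A...
blocked = threading.Thread(target=lambda: queues["A"].put("x"), daemon=True)
blocked.start()

# ...while another client's enqueue into empty queue B still completes.
worker = threading.Thread(target=enqueue, args=("B", "y"))
worker.start()
worker.join()
```

With non-reentrant wrappers there would be only one `enqueue` execution at a time, and B's request would wait behind A's blocked one, which is exactly the worry in post 13.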
  13. QUOTE (rolfk @ Jun 6 2008, 12:31 AM) Here was my worry: Suppose the server creates two queues, A and B, with one or more clients connected to those queues. The server has a set of wrapper VIs (let's suppose for now they're non-reentrant) to access all its queue data. Suppose queue A is full and B is only half full. A client tries to enqueue an item into A with infinite timeout, so it blocks. Another client tries to enqueue an item into B, which should succeed, but the wrapper VI on the server is blocking on A, so B has to wait for access. It seems to me, to allow full parallel access to multiple queue resources, we'd need one reentrant set of queue wrapper VIs instantiated per queue. Still, this does seem better in many ways than dealing with all the threading and TCP manually.
  14. QUOTE (Aristos Queue @ Jun 3 2008, 01:28 PM) And to add to this statement, having a cluster of elements is a common way to mask a real (non-user) event. For instance, if a UI event like Mouse Over returns data for Coordinates and the Panel Ref, then I can create a User Event with these two data elements in a cluster and register it in the same case as the real UI event. This lets me fire UI event code programmatically. Not sure this justifies the disjoint behavior, but it's another tool with event programming.
  15. QUOTE (Götz Becker @ Jun 1 2008, 01:54 PM) Yes, that's one piece that isn't implemented quite correctly. There are really two timeouts that should be accounted for: the timeout for sending the TCP data and the timeout for processing the queue command. Theoretically these should probably be added together into one composite timeout. QUOTE (Yen @ Jun 1 2008, 11:55 AM) Stupid question - wouldn't the implementation be simpler if you used wrappers for the queue functions and then used remote CBR calls to call those wrappers like shown in the other thread? That's an interesting idea. I'll think about it. You'd have the overhead of VI Server calls, but my implementation has its own overhead as well. You'd also have to think about the reentrancy of the wrappers, so that multiple clients could have parallel access to the server's queue resources. Just a thought...
  16. A previous post about sharing a FG between two separate LabVIEW projects (or computers) got me thinking about an example I had been working on, but at some point I lost focus on it. I thought I'd throw it out here in case it helps or interests anyone, and hopefully to get some solid feedback on how the idea could be made better and more applicable. The idea is to wrap the LabVIEW Queue functions into a library that allows users to share queue-based data over a network with basically the same API as the normal Queues. In the example I'll post below there are only a few differences between regular queues and the new Network Queue:
  • Network queues only transmit variant data. Users would have to wrap the functions to get specific data types in and out, but it wouldn't be hard.
  • There are two functions to obtain the queue instead of one: Obtain Queue (Client) and Obtain Queue (Server). There is one and only one server where the actual queue is hosted, and there can be any number of remote or local clients.
  • In the current architecture, if a client calls a function that blocks (like trying to dequeue from an empty queue with a timeout), then all other function calls from that same client will also block until that one completes, even if they wouldn't normally block (like a Flush or Get Status function). This is because there's only one pipeline for all function calls for each specific client.
  • I'm using LabVIEW Classes, which restricts the functions from being used on RT. There's no particular reason other than encapsulation that I'm using LVOOP, so we could strip that dependency out.
So there are a few limitations, but basically it's a pretty useful library that makes it pretty simple to share data, distribute data over Producer/Consumer type design patterns, and so on, without having to use Shared Variables, which for the most part have the same limitations and dangers as regular Global Variables. So take a look at the attached class. It's meant to be extracted into user.lib. It's written in LabVIEW 8.5. Please let me know what you think! Network_Queue_Class.zip
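The "variant data only" limitation in post 16 just means clients wrap and unwrap their specific types at the edges. A loose Python sketch of that idea, with `pickle` standing in for LabVIEW variants and a plain local queue standing in for the network queue (all names hypothetical):

```python
import pickle
import queue

# Stands in for the Network Queue, which only carries generic "variant" payloads.
net_q = queue.Queue()

def enqueue_typed(q, value):
    """Wrap any typed value into the generic payload before enqueueing."""
    q.put(pickle.dumps(value))

def dequeue_typed(q):
    """Recover the original typed value on the way out."""
    return pickle.loads(q.get())
```

A thin typed wrapper like this per data type keeps the core library generic while giving callers a strongly-typed API, which is essentially what the post suggests users would do.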
  17. QUOTE (Jim Kring @ May 8 2008, 06:43 PM) My guess is that they didn't implement this automatic binding because installers can be used to install any number of applications from different Build Specifications, not just one. That might not be the most common use-case, but it's certainly possible. They'd have to somehow allow the user to specify what application's version takes priority, and maybe that opens a whole can of worms. But I agree it'd be helpful!
  18. QUOTE (neB @ May 3 2008, 09:44 AM) My basis for thinking this is that the type isn't specific to one VI, but to a connector pane pattern. Two VIs with totally different inplaceness specifications could have the same connector pane and use the same type specifier. LabVIEW can't know in advance what the behavior will be, and it needs to compile the code a certain way in edit mode. I don't think it can make any adjustments when the code runs for actual inplaceness of the called VI.
  19. QUOTE (rolfk @ May 2 2008, 05:44 AM) I believe one big difference with calling a VI by reference rather than as a subVI is that the strictly-typed VI reference doesn't contain information about inplaceness like a VI has when it calls a subVI. The main VI will have to alter the order of execution or perhaps make backups before calling a VI by reference, because it can't know what will happen to the data during the call.
  20. QUOTE (orko @ Apr 22 2008, 06:42 PM) I would vote for storing N 1D arrays rather than flattening the array to string. The disadvantage there is that it makes it difficult to read back only part of the data, or to scan through the data looking for certain events, especially from other programs that can't handle LabVIEW's flattened string data natively. Another idea would be to use Reshape Array and create long 1D arrays out of the 2D arrays. You simply need to store the dimensions of the array in order to read it back properly. One good way to do that would be to use custom channel properties. This would pretty much require, however, that you always write the data to file in the same dimension-size chunks. http://lavag.org/old_files/monthly_04_2008/post-5171-1209001157.png
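The reshape idea in post 20 boils down to storing the dimensions alongside the flattened data. A minimal Python sketch, with plain lists standing in for LabVIEW arrays (the channel-property storage for `rows` and `cols` is assumed, and the function names are made up):

```python
def flatten_2d(data):
    """Flatten a rectangular 2D array into (rows, cols, flat 1D array).
    rows and cols would be stored as custom channel properties alongside
    the 1D data so the shape can be recovered on read-back."""
    rows = len(data)
    cols = len(data[0]) if rows else 0
    flat = [x for row in data for x in row]
    return rows, cols, flat

def reshape_to_2d(rows, cols, flat):
    """Rebuild the 2D array from the stored dimensions."""
    return [flat[r * cols:(r + 1) * cols] for r in range(rows)]
```

This only round-trips cleanly if every write uses the same chunk dimensions, which is exactly the constraint the post points out.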
  21. QUOTE (7J1L1M @ Mar 11 2008, 09:03 PM) You can easily do this if the string you are replacing is exactly the same size as the string you are writing. If you set the file position to a certain point and then start writing, this will overwrite whatever is already there. You don't really erase data, you just replace it. So if the replacement part is the same size as the old part, your job should be done. Since this is string data, and not formatted numeric data, it may be unlikely that you will be so lucky as to have equal sizes. In that case, your best option is to pipeline the process: read the sections past the replacement point a chunk at a time and rewrite them at their new offsets. You can arrange it so that you only load, say, 100,000 bytes at a time, which should be more efficient. If you have full control over the file format used here, you might try adapting some insights from the NI TDMS format. In that format, all data is always appended to the end of the file. With the exception of defragmenting the file, you never erase existing parts of the file. You just use clever indexing to invalidate them and append valid parts to the end. This might be a lot of work, but maybe it'll trigger some idea in your head. The end result is that you never have to load more than a small index portion at the beginning of your file in order to dump data at the end.
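The same-size overwrite from post 21 is a one-seek operation in any language. A minimal Python sketch of that equal-size case (the function name is made up; the unequal-size case would need the chunked pipeline described above):

```python
def overwrite_in_place(path, offset, replacement):
    """Overwrite len(replacement) bytes at `offset`, leaving the rest
    of the file untouched. Only safe when the replacement is exactly
    the same size as the data it replaces."""
    with open(path, "r+b") as f:  # r+b: read/write without truncating
        f.seek(offset)            # set the file position...
        f.write(replacement)      # ...and overwrite what's there
```

Opening with `r+b` is the key detail: `wb` would truncate the file, and appending modes ignore the seek position for writes.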
  22. QUOTE(atilla @ Feb 14 2008, 08:29 AM) I don't think anyone's quite hit on the right answer yet. The reason you're seeing the buffer allocation dot on the shift register for the data being modified is that LabVIEW has to allocate a buffer to copy the constant data into data that can be modified. You are initializing the 8 MB array using constant values. The output of that Initialize Array function is always going to be the same, so LabVIEW constant-folds it by putting the actual array into the executable code at compile time. The result is exactly the same as if you had placed an array constant with a million elements in it on the block diagram. So LabVIEW sees that you are going to modify this array with the Replace Array Subset in the loop. LabVIEW won't let you modify the constant copy built into the executable code itself, because that would change the VI's compiled code. You wouldn't be able to run the program again and get the same results! So LabVIEW allocates a mutable buffer and copies the constant data in. Shift registers don't always allocate a buffer for their own use; they can certainly reuse buffers where appropriate. Here, however, it is not appropriate to modify constant data. To avoid the copy, you need to keep LabVIEW from constant-folding the 8 MB value, so that the array gets allocated at run time instead of being baked in at compile time. You can do this by changing either the array-size parameter or the array-element value parameter to a control with the proper default value set. Then LabVIEW can't know with certainty that the initialized array will be the same every time, so it won't constant-fold. The result will be one 8 MB array instead of two.
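The copy-before-modify behavior in post 22 is LabVIEW-compiler-specific, but the core idea shows up in most languages: data baked into the program as a constant is immutable, so modifying it means allocating a mutable copy first. A loose Python analogy (not the actual LabVIEW mechanism; names made up):

```python
# Immutable data "baked into" the program, like a constant-folded array.
TEMPLATE = (0,) * 8

def modified():
    buf = list(TEMPLATE)  # allocate a mutable buffer and copy the constant in
    buf[0] = 42           # now the write is safe; TEMPLATE itself is untouched
    return buf
```

If Python allowed writing through the tuple, every later use of `TEMPLATE` would see the mutated value, which is exactly the "can't run the program again and get the same results" problem the post describes.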
  23. QUOTE(Aristos Queue @ Feb 8 2008, 01:51 PM) I ran the tests he had posted with the updated balanced tree and saw times twice as long as what he posted. Not sure if that had anything to do with me running it on a Mac (it seems like it really shouldn't).
  24. QUOTE(Aristos Queue @ Jan 28 2008, 01:46 PM) Pretty sure that works. At least with 8.5 and LV Class members.
