Everything posted by Daklu

  1. QUOTE (Aristos Queue @ Mar 21 2008, 04:24 PM) Are your other posts regarding buffer allocation dots stored in a central location? I checked the wiki but didn't find anything.
  2. For my current project I have string data that I need organized into a tree structure. Since there is already a tree control, I decided to use it and created a bunch of support VIs to do the operations I need, including decomposing the tree into a variant and vice versa. Now that I've done that, I've discovered it's much too slow. I suppose it shouldn't surprise me; there is an awful lot of overhead that isn't really necessary for my simple needs, and there are tons of operations using references. So my question is: what is a good way to simulate a tree structure without using the tree control? Two solutions immediately come to mind:
Store the Tree as an Array of Clusters - Each cluster in the array corresponds to a single tree node. The cluster contains the data string, the index of the parent node, and an array of indices to the child nodes. There seem to be lots of advantages to this method over the tree control: easier (and faster?) conversions between the data type and variants (with an array I might be able to skip variants altogether), no jockeying with tags, acting on the data directly instead of through references, etc. (A rough text-language sketch of this idea follows at the end of this post.)
Store the Tree as Variant Attributes - I found an .llb in the Code Repository written by John Lokanis that uses this type of implementation, although I haven't looked closely enough to see how well it fits my needs.
Additional stream-of-thought questions:
If I use an array, what's the best way to manage it? If I fill it sequentially, I imagine it will get rather large and sparse after repeated add/remove operations. Hashing seems a bit overkill, assuming I could even come up with a decent algorithm. Perhaps preallocating an array and running it through a compression routine when it fills up? Hmm, if I preallocate an array with default clusters, how would I keep track of which element is the 'next empty' element as the array gets passed around? Reserve element 0 as a pointer? Encapsulate the whole thing as a variant and track the pointer with an attribute? (ugh)
If I implement it as variant attributes, I'll have to convert the variant every time I want to do an operation on the tree. How does variant conversion compare to array access in terms of speed?
Are there any other inherent advantages to arrays or variants that make one "better" than the other for this?
Thanks.
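Here is a rough text-language sketch of the array-of-clusters idea (Python, purely for illustration; all names are hypothetical). Each element holds the node's string plus its parent index and child indices, and removed slots go onto a free list so the array doesn't grow large and sparse:

    # Sketch of the "array of clusters" tree layout; names are placeholders.
    class ArrayTree:
        def __init__(self):
            # Element 0 is the root; parent index -1 means "no parent".
            self.nodes = [{"data": "root", "parent": -1, "children": []}]
            self.free = []  # indices of removed slots, reused by the next add

        def add(self, parent, data):
            node = {"data": data, "parent": parent, "children": []}
            if self.free:
                idx = self.free.pop()
                self.nodes[idx] = node
            else:
                idx = len(self.nodes)
                self.nodes.append(node)
            self.nodes[parent]["children"].append(idx)
            return idx

        def remove_leaf(self, idx):
            # Only handles leaf nodes: detach from the parent, then recycle the slot.
            parent = self.nodes[idx]["parent"]
            self.nodes[parent]["children"].remove(idx)
            self.nodes[idx] = None
            self.free.append(idx)

    tree = ArrayTree()
    a = tree.add(0, "child A")
    b = tree.add(0, "child B")
    tree.add(a, "grandchild")
    tree.remove_leaf(b)  # the "child B" slot is reused by the next add

In LabVIEW terms the dictionary would be a cluster and the free list just another array carried alongside the node array, which is one possible answer to the 'next empty element' question that doesn't require reserving element 0 as a pointer.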
  3. Given that we've only been using Labview in my group for a little under two years, all the projects I can think of are in either 8.2 or 8.5. Any new projects I create are done in 8.5, although that is not the case for everyone here. Keep in mind, however, that I am the local "expert," having used Labview for all of 1.5 years. Most of the users here have not heard of OpenG, nor would they know what to do with it if I sent them a link.
  4. QUOTE (neB @ Mar 11 2008, 07:28 AM) Not a pocket programmer, but I frequently code with my elbow on the desk and my chin resting in my left hand. It's terribly inconvenient to have to move my head so I can ctrl-shift to the grabby tool.
  5. Since we're all professional programmers and do things strictly by the book, such as designing the application before coding, I'm curious what tools you use during the design process. On my current project I started with Excel but switched to Visio when it became apparent the application complexity (~100 VIs) wasn't easily captured in Excel. By using the shape data fields I can get Visio to sort of work, but it's difficult to model nested structures, variant attributes, and a few other things. What I have now is a flowchart layout with blocks for each event case and each processing loop state. I'd like to be able to easily see notes associated with each block, such as what data it generates or changes. I'd also like to easily see notes describing the data types I'm passing between blocks, including variant attributes, type of variant, nested structures, etc. [As I typed up this post I realized writing an architecture.vi would work much better than Visio. I don't think it's quite what I'm looking for but it's definitely a step up.]
  6. Thanks for all the info and tips. I understand Labview references much better now.
  7. (Apologies if I misuse the terminology. I'm still a little fuzzy on the language of Labview references.) Labview documentation implies that you only need to close references that you specifically open using the Open VI Reference primitive. Is this acceptable programming? I tend to use a lot of VI Server references to front panel objects in my code. (i.e., I right-click on the control and create a reference, property node, or invoke node.) What are the implications of the following:
Branching the reference. When I branch, should I close each branch?
Sending the reference to a sub vi. Should the sub vi send the reference back out for the calling vi to close? Is it okay to have the sub vi close it, or simply leave it dangling?
References within references. If I use a reference to a cluster to get an array of references to all the controls in the cluster, should I close the references in the array as well as the cluster reference?
Aborting a vi. What happens to references during application development when I abort the vi rather than gracefully closing it?
Reading over my questions, I guess it comes down to how Labview deals with copying references and garbage collection. I understand the concept of pointers in text languages but I'm not sure how Labview handles it.
  8. QUOTE(T_Schott @ Mar 5 2008, 05:34 AM) If I'm grouping a large number of components which it makes logical sense to put in a sub vi, that is definitely the way to go. Usually when I'm wishing for it I have a small number of components (~3-5) where adding a sub vi would add to the overall complexity of the project. For instance, in my current project I have a sub vi I use in several places. The output of the sub vi is usually immediately wired to an Unbundle by Name so I can get the data I need. For readability it makes sense to group the sub vi with the unbundle node to keep them aligned, but it makes no sense to create a series of sub vis for each unbundle combination I might need. QUOTE(BobHamburger @ Mar 5 2008, 09:23 PM) A little clever application of scripting could solve this issue. Sounds like a good code challenge. *Ding* You're it! I'm looking forward to seeing what you come up with. QUOTE 2) Ctrl-shift-drag when performed inside a set of grouped objects would... A) deal with the group as a single object, or B) just blow-up the group? Ctrl-shift-drag makes a copy of whatever is selected. Selecting an item in a group selects the entire group. Therefore, ctrl-shift-drag copies the entire group, just like what happens on the front panel. (That's how I would deal with it anyway.)
  9. It would be handy if I could group objects on the block diagram the same way I can on the front panel. It would make laying out readable code a bit easier. Not a huge deal, but a nice touch.
  10. Makes me wonder about the "seasoned" programmers the OP mentioned--especially the one who insisted NI's answer was wrong. Moral of the story: When in doubt, ask LavaG.
  11. Just in case anyone was waiting for this to start... I've decided this idea isn't a worthy Labview challenge. It was ridiculously easy for me, a certified Labview noob, to implement a Graham scan and find the convex hull. Optimization techniques might be a bit interesting, but then I have to worry about differences being lost in the noise and perhaps optimizations working better on some computers than others. The Graham scan was so easy to implement that any attempt to create a "Limited" class would be like running a Formula 1 race with no tires. Boring. Ah well, I keep thinking about the Labview challenge... maybe something good will come to mind.
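For anyone unfamiliar with it, the Graham scan really is just a sort by polar angle plus one stack pass; here is a minimal Python sketch of the algorithm (not a LabVIEW VI, and the function names are mine):

    # Minimal Graham scan sketch: convex hull of 2D points given as (x, y) tuples.
    import math

    def cross(o, a, b):
        # z-component of (a - o) x (b - o); positive means a counter-clockwise turn
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    def graham_scan(points):
        # The lowest point (ties broken by x) is guaranteed to be on the hull.
        pivot = min(points, key=lambda p: (p[1], p[0]))
        others = sorted((p for p in points if p != pivot),
                        key=lambda p: (math.atan2(p[1] - pivot[1], p[0] - pivot[0]),
                                       (p[0] - pivot[0]) ** 2 + (p[1] - pivot[1]) ** 2))
        hull = [pivot]
        for p in others:
            # Pop any point that would make a clockwise (or straight) turn.
            while len(hull) >= 2 and cross(hull[-2], hull[-1], p) <= 0:
                hull.pop()
            hull.append(p)
        return hull

    print(graham_scan([(0, 0), (2, 0), (2, 2), (0, 2), (1, 1)]))
    # -> [(0, 0), (2, 0), (2, 2), (0, 2)]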
  12. QUOTE That's a problem. Why do you have two different event structures? Best bet is to restructure your program to eliminate the second one. Regarding your specific question, it sounds like you might have branched something and one of the loops is operating on a copy. Beyond that, I'd have to see your block diagram to tell what's going on.
  13. QUOTE(crelf @ Feb 19 2008, 12:11 PM) Wish I could lay claim to it, but that's actually the op's idea. It's a little difficult to explain what I'm asking for. I want to be able to select or otherwise indicate the code segment I am interested in. All the code is executed, but only the selected code is "debugged." Any non-selected code that executes while I'm stepping through my selected code isn't highlighted, doesn't display values, doesn't change my view to that location, etc. I created a vi that illustrates the problem I'm hoping this would address. Open the block diagram, click Highlight Execution, and step through the math loop.
  14. QUOTE(gmart @ Feb 19 2008, 03:10 PM) How much more feedback do you want? I can copy and paste the post as many times as it takes to convince management. :laugh:
  15. I'd like to see a way to have block diagram constants take up way less space. I'll create large cluster typedefs and use them to initialize arrays, with Bundle by Name, or with Variant to Data. Sometimes I'll have real values in them but usually I'm just using them as a typedef. The problem is they take too much space on the block diagram. I know I can change Autosizing to None, which I do; however, every time I make a change to the typedef Autosizing resets to Size to Fit and I have to go back and resize all my constants. I even had to turn off the Auto Grow feature because I got tired of having my block diagrams blow up. Something like a control-sized icon specifically for constants or typedefs would be fabulous.
  16. QUOTE(PatNN @ Jan 31 2008, 06:28 PM) I'd be thrilled if I could simply ignore sections of code while I'm debugging. For instance, if I'm stepping through code that contains parallel loops I'm constantly being shifted around the block diagram to view the next code step when I'm only interested in the code in *this* loop. It gets frustrating.
  17. Late response, but what the heck...
QUOTE(robijn) True, the comparison is not perfect. Keeping procedures and block diagrams smaller than one screen are both considered best practices. However, the nature of sequential text languages means having to scroll through code is not as much of a hindrance as scrolling through a block diagram. Text is easy to digest in pieces. Reading a book isn't difficult even though the text is split across 400 pages. Images are difficult to digest in pieces. Imagine trying to play a game of chess with the restriction that you can only view a 2x2 block of squares at any given time. The game gets much more difficult. The graphical and data flow nature of Labview requires a better understanding of the entire vi--you need to have a better grasp on the "bigger picture" so to speak. Generally speaking the consequences for violating this best practice are more severe in Labview than in text languages, which equates to Labview effectively enforcing a specific coding style.
QUOTE(robijn) I think this feature would be considered by many as an invitation for satellite view programming... That would result in even worse block diagrams than what I've seen sometimes. The current limitation doesn't stop people from writing bad code. It can certainly make understanding bad code harder though. I suppose the question ultimately comes down to whether you think the language should enforce coding conventions. Personally I don't think it should. One other area zooming would help me is when I'm changing my block diagram layout. Often once I get a VI to be functionally correct I see ways to change the layout to make it more readable. Having to scroll around the screen to find my code segments as I'm repositioning them makes this more difficult.
QUOTE(TobyD) Where do you work!?! I had to fight for months to get one 24" monitor! Currently contracting near you on the east side of the lake. This particular coworker actually brought in his own monitors. He also brought in two of his own computers, his own binocular microscope, his own Metcal soldering station, and various other personal tools. I guess that's a benefit of being 45+ and single... too much disposable income. :laugh:
QUOTE(AQ) None of the MDI environments attempted thus far has been as good as the current scenario of individual windows. I appreciate the difficulties you mentioned in the previous thread and certainly believe you. Let me refine my request by saying I'd like to see *much* better window management. Perhaps instead of throwing icons all over the taskbar there could be a "Windows" tab in Project Explorer with a twist-down "Show Project Explorer" button on the toolbar.
  18. QUOTE(neB) Official? And here I thought it was just a loose, casual competition between forummers. You're going to scare me away with talk of 'officialness.' QUOTE(neB) If you are willing to sponsor the challenge (write rules, evaluate results, etc)... I'll sponsor it, but it will take me a bit of time to formulate the rules. I'll try to have it ready for kickoff by the end of the month. Off the top of my head, here are a few things I am thinking. ('Contestant' refers to the person submitting the VI; 'Competitor' refers to the VI itself.)
Have two classes for competitors: an open class where anything goes (Wikipedia here I come!), which I'm guessing will end up being a contest of who can squeeze the most performance out of their VI, and a limited class with some sort of restrictions that force us to use less optimal solutions. Maybe only allowing a certain number of sub VIs or structures, or ... ? (Consider this a request for ideas.)
Patterns will consist of a few randomly generated patterns created by me at the time of the contest and crafted patterns submitted by each contestant. Open patterns and limited patterns will be kept separate. I will publish the random pattern generator prior to the contest.
Each contestant will be allowed multiple competitors in either class; however, each contestant is only allowed to submit two crafted patterns per class. No special sequences allowed to identify your own patterns. Your VI must evaluate your patterns honestly.
Scoring will be rank-based, with points awarded to roughly the top half of the competitors. VIs that do not produce the correct answer will not be eligible for points.
An upper time limit will be imposed based on a general brute force algorithm. Any competitor that does not finish within the time limit will not be eligible for points.
The number of points in each pattern is an open question. I have no idea how long these VIs will take to run. Ideally the easy patterns will be ~5 seconds, with the difficult patterns in the 30 second range.
I also have nothing to offer as a prize. I might be able to create a trophy icon or some such thing, but it's unlikely to be anything a person would be proud to put in their sig.
  19. Correct, this is not the travelling salesman problem. I didn't want to do something that had been analyzed to death. QUOTE(Yuri33 @ Feb 14 2008, 11:16 PM) Well see, now you're spilling all your secrets. I suppose there are already algorithms on the internet to determine the convex hull, which makes the challenge less interesting. That's part of the reason I suggested having people submit crafted patterns; if you can create a pattern that confuses the common algorithms while making your own robust to it, you have an advantage. Maybe this isn't so much a "coding challenge" as a "critical thinking challenge with coding." Can you identify weaknesses in the algorithm and overcome them to improve it? Mostly though, I'd really just like to see another coding challenge--whatever it is. I got my start in Labview by stumbling across the Tic Tac Toe challenge on the internet in '06. I thought it was fun and found it a good way to keep things fresh and interesting. [Edit] Eh... after googling "convex hull" I discovered I failed in my quest to choose something that hadn't been analyzed to death. The best thing about the internet is the amount of information instantly available. The worst thing about the internet is the amount of information instantly available. Have all the interesting problems (that don't require a PhD in mathematics) been solved?
  20. Very, VERY cool. Keep it up! I'm sure you meant the name to be said as "VI preview." At first glance I read it as "Viper View," with viper being spelled as vipre. (For non-native English speakers, a viper is a kind of venomous snake.)
  21. This is a problem I've been thinking about for a while and as far as I know there aren't any canned routines to solve it, making it a potentially interesting Coding Challenge problem. Imagine an XY plot of any number of points placed arbitrarily; something like a shotgun blast pattern. The goal is to find the maximum distance between any two points. Simple, eh? For 10 or 20 points, yes. For 100 points, not bad. What about 1,000 or 10,000 points? The brute force method is O(n^2); can you find something better? I'm thinking of throwing several random patterns with various numbers of points at each algorithm as well as having each person submit a pattern of their own choosing. It could be random or it could be created specifically to take advantage of weaknesses in other algorithms. For scoring I would probably advocate some sort of rank-based scoring as opposed to cumulative time. Comments?
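For a concrete baseline, the brute force approach is just a double loop over every pair of points; here is a minimal Python sketch (the function name is mine). Since the farthest pair always lies on the convex hull, computing the hull first and then walking it with rotating calipers is one way to beat O(n^2).

    # O(n^2) baseline for the "farthest pair" problem: check every pair of points.
    import math

    def diameter_brute_force(points):
        best = 0.0
        for i in range(len(points)):
            for j in range(i + 1, len(points)):
                dx = points[i][0] - points[j][0]
                dy = points[i][1] - points[j][1]
                best = max(best, math.hypot(dx, dy))
        return best

    print(diameter_brute_force([(0, 0), (3, 4), (1, 1), (-2, 5)]))  # ~5.385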
  22. Great article Jim! That earns a printout and permanent thumbtack on my cubicle wall! QUOTE I've set up sync links with my user.lib directory and the directory I use for working on projects. When I come across a need for a general purpose vi I code it up, add it to a _MySolutionsForEverything directory, and add it to the subpalette I want. Since I'm still a Labview noob my personal toolkit changes quite often. Mostly adding functions, but occasionally refactoring, renaming, or just deleting ones that don't work very well. I do all this on my c: drive and sync it up with my flash drive when finished so I can copy it down to my other dev computers. It's not a perfect solution, but it's miles better than what I was doing before, which was trying to do toolkit development directly on my flash drive. For a while I experimented with using my VIs directly from the flash drive rather than my c: drive, but that failed miserably. Mostly I would make changes on my flash drive and copy them down to my c: drive. Ultimately toolkit dev on my flash drive didn't work so well: when I made a change to my toolkit I'd have to copy it down to my c: drive, modify the palette, and then copy the new .mnu file back up to my flash drive. I do the project work a little differently--all my project dev work is done directly on my flash drive. I sync with a folder on my dev computers as a backup and just in case I forget to bring my flash drive. Based on the information in this thread I'll probably add a few more files or directories to my sync list.
QUOTE It's a private method: I haven't done anything with built-in private methods before. How do I find it? (And if it's private, how is it that we can call it from outside whatever class or library it's in?)
--------------------------
In case anyone else ever has this problem, I figured out what was wrong with my Favorites palette. You have to be in Category or Icon view to add VIs to your Favorites palette. I knew this, so although I use Tree view by default I would switch over to Icon view using the View button at the top of the palette when I wanted to add something to my Favorites. Labview doesn't like this. You have to go into the options menu and change the default view if you want to add to your Favorites palette. I also created subpalettes in my Favorites palette (which is easy to do) but haven't found a way to actually get any primitives into the subpalettes. Is there a way to put Labview primitives on a palette of my choosing?
  23. In the spirit of tips and tricks, here's what I've learned about customizing the palettes. I think it's all correct, but it's very possible I've made a mistake somewhere. I find the help files rather vague. I trust the more experienced users will set me straight.
\LabVIEW Data\x.x\Palettes\ - This directory contains .mnu files for built-in palettes you have customized via the palette editor. VIs and controls are not stored here. Labview checks this directory on startup to see if there are any custom .mnu files that override the default files. The custom .mnu files are stored in a directory structure that mimics the location of the original .mnu file. In general I believe we should avoid editing these files directly. (I have deleted them before to re-enable the default palettes.)
\Program Files\National Instruments\LabVIEW x.x\menus\ - This seems to be where all the default .mnu files are stored. I believe the "categories" subdirectory holds Functions palettes and the "Controls" subdirectory holds Controls palettes. There are several other subdirectories that don't seem to correlate to locations on the palettes. Some addons put menu files here (such as the OpenG toolkit), but I don't know if there are any conventions on what we should put here.
\Program Files\National Instruments\LabVIEW x.x\user.lib\ - Controls and VIs dropped here are automatically detected and placed in the User Libraries subpalette. This is the easiest way to add custom VIs and controls to your palette. Since the User Libraries palette is auto-populated by this directory, you cannot use the palette editor to remove controls, VIs, or subpalettes that exist in user.lib. NI recommends most custom VIs go here. If you create a subdirectory in user.lib, that subdirectory will automatically show up as a subpalette, and any VIs in the subdirectory will show up in the subpalette. If you create a new .mnu file and place it in user.lib, that .mnu file will show up in User Libraries as a subpalette. This is useful if the palette structure you want doesn't match your VI directory structure.
\Program Files\National Instruments\LabVIEW x.x\instr.lib\ - VIs dropped here are automatically detected and show up in the Functions -> Instrument I/O -> Instrument Drivers subpalette. I'm not sure what happens with controls dropped here. This directory auto-populates the Instrument Drivers palette, so, like the User Libraries palette, you can't remove subpalettes or VIs from the Instrument Drivers palette. NI recommends you put (you guessed it...) instrument drivers here.
\Program Files\National Instruments\LabVIEW 8.5\vi.lib\ - This seems to be where most of the shipped VIs reside. It's possible NI addons or modules (is there a difference?) are put here too. Maybe third party VIs as well?
\Program Files\National Instruments\LabVIEW 8.5\vi.lib\addons\ - This directory appears to auto-populate the Addons palette in the same way the user.lib directory does, with the same restrictions other synched directories have.
Although I said you can't remove subpalettes and VIs from palettes that are automatically populated, I lied. If you right-click on a subpalette icon such as User Libraries while in the palette editor, a context menu will appear with the option "Synchronize with directory" checked. If you uncheck that option you can go into User Libraries and delete things to your heart's content. You need to re-enable the option if you want any future VIs in user.lib to automatically appear in the User Libraries palette. (Of course, if the items you deleted from the palette are still in user.lib they will show up on the palette again.) I frequently use this technique to remove missing subpalettes (the ones with the big question marks) that I otherwise can't get rid of.
Like tcplomp mentioned, in a directory that is synched with a palette, prepending a subdirectory or filename with an underscore ("_") will prevent Labview from putting that item in the palette. (In fact, I think I learned that from his Code Capture tool.) This can be a good way to keep your User Libraries palette from getting out of control.
Many of the menu directories have a 0 byte file named readonly.txt. I believe if you delete that file you get more control over the palettes contained in that directory. I haven't experimented with this much, so use at your own risk.
There are a few things I'm still trying to figure out:
I'd like to organize my VIs and add subpalettes to my Favorites palette. Unfortunately I broke something somewhere and now my Favorites palette remains perpetually empty. Resetting my palettes to default didn't help.
I'm sure there are conventions on which directory (user.lib, vi.lib, and vi.lib\addons) to use when you are distributing VIs, but I don't know them. I've seen things put in all three. Anyone have ideas on the rules here?
I'm under the impression that you can add items to the Tools menu by putting a .mnu file in the right place. No idea if that's right or not. In any case the Tools menu looks to be reserved for VIs that are executed (wizards, etc.) rather than VIs used in a block diagram.
In Labview documentation I've seen references to "addons," "modules," and "toolkits." Are these synonyms or is there a real difference between the three? Does the "correct" distribution location depend on what your VIs are categorized as?
tcplomp, you mentioned firing app.palettes.refresh a couple of times. I've looked for it in the application property and invoke nodes but haven't found it. Could you explain how to do that?
  24. QUOTE(Neville D @ Feb 8 2008, 09:22 AM) I develop on multiple machines too and use sync software to keep projects and palettes updated across them. When I'm done with a programming session on one computer I sync with a flash drive and use that to sync with the other computer. A couple sync programs are SyncToy (free) and Syncback ($30).
  25. QUOTE(Yen @ Feb 7 2008, 12:55 PM) There's the rub. This is for a piece of test equipment that will very likely need to be expanded in the future... near and/or far. It is probable that someone else will rearrange the order of the tabs. Since I reference the tab value all over the place, it would be easy to miss a required change somewhere. This really isn't *that* big of an issue. I have a workaround: indexing the tab control out of the array before I send it to a sub vi. Mostly I'm trying to make it idiot-proof when it comes to maintenance and expansion. (It's pretty embarrassing to break your own code and not know why. Or worse, break it and not know I broke it.) Thanks for the system/Labview control tip. I'll be sure to change it. QUOTE(Doon) The only problem I can foresee in my implementation is if something were to rearrange the array -- otherwise, smooth sailing. Ding! I know some idiot (like me) is going to change things around and screw it up. Like I said, not a big deal. It's just easier to maintain and understand my code. As I sit here and think about your post, I realize I'm trying to make my data self-aware. If my data knows what it is, then my sub vis and process loop states can operate on it correctly rather than requiring me to tell them what's coming. I have no idea if that's a good idea or not... I didn't even realize I was doing it until now. I'll have to think on that for a bit.