Daklu
Everything posted by Daklu

  1. QUOTE (Maca @ Jul 18 2008, 01:02 AM) Dunno... what's ETS? QUOTE (Ben) If that device was supported by traditional DAQ then RDA may help. By "that device" I presume you mean the 7340 controller card? And, um... what's RDA? I'm feeling rather right now...
  2. I have a test system with an NI-7340 motion control card and a 3-axis robot. I'd like to do dev work on my computer but need to execute code on the test computer. Since I am frequently executing code during development, I was wondering if there's a way to expose the motion controller in the test computer as a network resource available to my dev computer? I couldn't find anything along those lines in NI-MAX. I looked at Remote Front Panels but that appears to be geared toward finished applications rather than developing them. Remote desktop is an option, but it's too slow for working in a graphics-intensive dev environment. Any ideas?
  3. As mentioned above, patents typically aren't suitable for software. But here's a bit more information on patents... I worked for a patent/trademark lawyer about 15 years ago. At that time the patent process started at ~$5000; more if any difficulties were encountered during the filing process. If I recall correctly, to maintain your patent you have to defend it, meaning you need to keep up to date with competing products and take them to court if need be. If you don't do it you lose the rights to your patent. Essentially you open your wallet to your lawyer for as long as you want to maintain your patent. "You should research copyright, trade secret, and trademark to decide if one of those is a better way to protect your software." I believe trademarks are only for logos, product names, brands, company names, etc. e.g., General Mills can put "Cheerios" under trademark, but not the cereal itself.
  4. Since we can't look at the block diagram, can you tell us how the maze will be defined? At first I thought the front panel defined the maze, but creating walls by changing the colors doesn't seem to do anything. It looks like the front panel array is simply a representation of a maze predefined on the block diagram.
  5. I'm sure we've all experienced customers who continually want "just one more feature" and the bizarre code that often results from trying to accommodate their wishes. While trying to fix a bug in a program built in that way I ran across this code snippet. After figuring out what it does I had to paste it in a test vi just to make sure I wasn't missing something. (The developer who created this program is quite competent. I think it's just a case of getting into details too much and missing the bigger picture.)
  6. I am familiar with object oriented concepts but have never implemented anything beyond very simple examples. Now I have to rewrite a battery gas gauge application that is heavily tied to a specific gas gauge and specific I2C interface to work with multiple gas gauges and I2C interfaces. The lightbulb in my head flickered on and I thought this would be a perfect place to try out LVOOP. First question was triggered by a comment from AQ on another thread: QUOTE It sounds like my problem is very similar. Currently I have a Battery base class that only has Get/Set property (private data) methods. I plan on having Battery A and Battery B both inherit from this base class. The Battery base class isn't meant to be used directly, so any other methods I implement there are meaningless and must be overridden. I do this by having the base class return errors for any vis that are supposed to be overridden in child classes. Labview doesn't directly support virtual (must inherit) vis so I was wondering if there is an "official NI recommended" way of accomplishing this? Second question is a little more complicated. I believe I need to create an I2C Interface base class that Interface A (NI 8451) and Interface B will both inherit from. (Similar to the Battery classes.) I spent some time embedding I2C interface information in the battery classes so, for instance, a battery object could read data from the gas gauge, convert it to human readable format, and spit it out for the user. If we were tied to a single I2C interface I could do that but it gets rather complicated trying to do that with different interfaces. Now, I'm thinking of keeping the Battery classes and Interface classes completely separate and having them interact within the application vi. For example, suppose the user clicks a button to read a specific register. The battery object would send its I2C address and register location to the interface object, which would do the actual communication with the battery. 
The interface object returns the data and passes it to the battery object's 'Decode Data' vi, which might convert it to a cluster. (It seems a little odd to have a battery object that can't read its own information, but what the heck...) I *think* I'm on the right track but wanted to get some feedback before I invest too much time in this path. There are lots of things I still haven't figured out, like how to deal with the different data types from Battery A and Battery B. Currently my dynamic dispatch vis output variants and I have typedefs in each battery class defining its data. How do I find the class the object is instantiated from at runtime so I can convert the variant into the right data type? I'm also puzzling over how to present the data and options that are specific to each battery to the user. Sub panels? [Note: It's amazing how much clearer problems become when I spend an hour trying to describe them to others.]
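Since G diagrams can't be pasted here, the Battery/Interface separation described in that post can be sketched in a text language. This is a rough Python analogue, not the actual LVOOP code; every class, method, and register value below is made up for illustration (the "must inherit" vis map loosely to abstract methods):

```python
from abc import ABC, abstractmethod

class I2CInterface(ABC):
    """Base interface class: knows how to talk on the bus, nothing about batteries."""
    @abstractmethod
    def read_register(self, address: int, register: int) -> bytes: ...

class Battery(ABC):
    """Base battery class: knows its own bus address and how to decode raw data."""
    i2c_address = 0x0B  # hypothetical gas gauge address
    @abstractmethod
    def decode_data(self, raw: bytes) -> dict: ...

class SimulatedInterface(I2CInterface):
    """Stand-in for a concrete interface (e.g. an NI 8451 child class)."""
    def read_register(self, address, register):
        return bytes([register, 0x10])  # canned data for the sketch

class BatteryA(Battery):
    def decode_data(self, raw):
        return {"voltage_mV": int.from_bytes(raw, "little")}

# Application layer: the battery supplies its address and decoding,
# the interface object does the actual communication.
def read_battery_register(battery: Battery, bus: I2CInterface, register: int) -> dict:
    raw = bus.read_register(battery.i2c_address, register)
    return battery.decode_data(raw)

print(read_battery_register(BatteryA(), SimulatedInterface(), 0x09))
```

Instantiating `Battery` or `I2CInterface` directly raises a TypeError, which is the text-language equivalent of the "base class vis return errors unless overridden" pattern the post describes.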
  7. QUOTE (PaulG. @ May 28 2008, 05:24 PM) Most newer keyboards have an application key (usually) between the right-hand Ctrl and Alt key. It duplicates a right mouse click for whatever is selected.
  8. Topic says it. When I right click the NI 8451 vis or I2C Configuration property node, the options for creating a control, indicator, or constant are grayed out. The only way I've been able to get a Configuration Refnum as a control or indicator is to open NI's 8451 vis and copy from there. Isn't there an easier way?
  9. I've resolved the problem. The root cause is one of the following, I'm not sure which: (1) the project file became corrupted, (2) Labview got into some weird state that required a restart or reboot, or (3) I had open control/project/vi files that had not been saved, thus breaking everything. It was probably #2, meaning Labview was saving me from my own idiocy. QUOTE Thanks for the tip AQ, that has helped a lot.
  10. QUOTE It's not a scoping issue. This happens when the typedef and vis that use the typedef are within the same class. (But yes, I did try rescoping it anyway. ) QUOTE However in addition to what Justin said, it may be that you are having some sort of type recursion that is not allowed. Hmm... that could be. I'm using essentially the same data in my typedef as I am in my control private data, though I'm not using the typedef as the control data. One thing that is annoying is the number of errors in the error list that spawn whenever anything goes wrong. It makes it difficult for me to figure out what the real problem is. QUOTE Is it preferred/discouraged to put several classes within the same lvproj? I think the lingo is throwing me off a bit. Classes and Project Libraries use the Project Explorer but are not projects themselves--at least I haven't been considering them projects since they do not have a .lvproj file. Is this correct? The class hierarchy I'm developing is intended to be used by other developers in several battery testing systems. I intend the classes to be copied to the application directory rather than placed in user.lib. On the one hand, these classes aren't a project in the sense that there isn't a complete application when finished. On the other hand, all the classes are related so having them in an lvproj makes it easier to manage the group. If I have the class tree in an lvproj, does another developer need to use the entire class tree or can they extract a single branch of the tree, assuming there are no sibling dependencies? (I'm thinking of the kind of restrictions that go along with project libraries... use one vi and you get them all.) This really isn't a critical issue. Whether or not I place the classes in an lvproj seems to be a matter of personal preference. I'm really just trying to figure out if there are standards or best practices with this.
  11. So I'm dipping my toe into the chilly waters of LVOOP and encountered something that doesn't work the way I expect. Is it a bug? Is it a 'feature?' Is Labview protecting me from my own idiocy? In my class I have a vi that takes a cluster of data as an input. I would like this cluster to be a typedef that I could use internally in the class as well as have available to class users. However, any class vis that use the typedef as an input, as well as the typedef itself, break when the typedef is in the class structure. If I disconnect the typedef on member vis it works, or if I place the typedef outside the class hierarchy it works. Is this expected behavior? Other questions I've been unable to find answers on: -Is it possible to create a class or method as 'must inherit?' -NI examples always show classes and lvlibs contained within a lvproj. Is there any particular reason for this other than being able to set up build specifications? -Is it preferred/discouraged to put several classes within the same lvproj? i.e. I have a Battery class which will serve as a parent for several specific battery child classes. (i.e. Energizer, Duracell, etc.) Currently I am planning on putting them all in the same lvproj. Good? Bad?
  12. QUOTE (BrokenArrow @ May 13 2008, 05:45 PM) Hence the inclusion of "almost" and my comment, "I suppose there may be some trivial cases where a single vi could serve as a driver." QUOTE (BrokenArrow @ May 13 2008, 05:45 PM) But I assert that it isn't a driver, but is a LabVIEW VI. Why can't it be both? "VI" and "driver" are not mutually exclusive. A vi is simply a bunch of G code saved as a single file. It refers to where the code is. A driver refers to what the code does. The case you presented is both a vi and a driver. I'm certainly not claiming my definition is the best there is, but it works well for me. I constantly struggle with "where should this code go?" when writing code. Keeping that definition in mind helps me make better decisions. It also easily answers questions in AQ's post. QUOTE (PaulG) "The actual dll or C code is the "driver". Or am I full of pixie dust?" If you're going to follow that path, is the source code the driver or the compiled code? If it calls the OS API to perform low level functions is the operating system the driver? Drivers written in .NET and compiled into intermediate code? Drivers written in interpreted languages? What a mess... The dll is a driver--so is the Labview code wrapped around it. I see no problem with drivers having multiple layers. To refer to my translator example, suppose I need to do business with someone from Russia but there are no English-Russian translators available. My Swedish translator has a friend who speaks Russian so we use both of them to bridge the gap. My translator handles the English/Swedish translation and his friend handles the Swedish/Russian translation. Can I claim one of them is the "real" translator? QUOTE (AQ) What if the DLL is a LV-built DLL? Do the VIs in the DLL count as a driver? It waddles, quacks, and floats on lakes... must be a driver.
  13. To me a driver is a piece of software whose sole purpose is to pass information from an application's business layer to/from a piece of hardware. If the driver contains business logic it is no longer just a driver and becomes an integrated part of the application. Or as I call it... a PITA. I see a driver performing much the same role as a translator. If I need to conduct business with someone in Japan, I find a translator so we can communicate. If the translator is making business decisions on his own, then when I get another translator to do business in Sweden my system breaks down. I need to train the Swedish translator to make the same decisions as the Japanese translator. I restrict the translator's function to translation. As to the original question, "When is a VI a 'driver,'" I'd say a single vi is almost never a well-written driver. (I suppose there may be some trivial cases where a single vi could serve as a driver.) A good driver is generally a collection of vis. If your hardware has a single vi as a driver it likely needs to be broken up or rewritten.
  14. QUOTE (Jim Kring @ Apr 9 2008, 05:38 PM) I was just poking around trying to figure out the whole Labview versioning system and come up with a good scheme to keep track of applications. It surprised me that the installer had its own version number, but when I experimented with changing the numbers I couldn't find where the installer version number is used. I also discovered the VI property History:Revision Number isn't available when built into an executable. It makes it a little difficult to correlate what I see on the dev screen with what I see in compiled applications.
  15. QUOTE (crelf @ Apr 9 2008, 06:21 PM) Heh, I hadn't noticed that. I'll have to go back and fix that. (Upon graduation they should give every engineer an English major to act as editor for life... it would help with the homeless problem. :laugh: )
  16. I've been creating a Labview Library using VIs from NI's USB-8451 Interface module. I noticed that when I use the ni845xControl.ctl in a vi and I wire it up so it can execute, I can't open the vi through the library interface. When I try, Labview doesn't respond but eventually will crash. What I find odd is that I can open the vi through Windows' File Explorer just fine. Also, if the vi is wired incorrectly (the arrow is broken) it will load correctly. In the attached file the vi containing the control as a constant was loading correctly, but then it stopped. I searched the known issues for 8451 and lvlib but didn't find anything. I also tested it on a second computer with the same results. Is there something obvious I'm missing?
  17. Currently I'm going through all our Labview tools created over the last 2 years and archiving them in our source control system. Most of them are simply vi trees dumped in a single directory with many outdated and unused vis present. I'm switching them all over to 8.5 projects so I can create build specifications to ease my task. My questions are: When storing projects in source control, what do you include as part of your project as opposed to leaving under dependencies? VIs from user.lib? instr.lib? vi.lib? DLLs? When you check in code do you create a zip build and check that in? (As near as I can tell you need to do that to collect the vis not in your project directory, unless you want to find them and add them all manually.) Does your strategy change when you are checking in code for a project currently under dev versus closing a project? Many of these projects are being archived and if they are ever checked out again it's likely it will be by a different user on a different computer with different addons installed, etc.
  18. I did some benchmarking to compare the standard unbundle method to JFM's boolean mask method. Initially the unbundle method was faster with increased benefit if the item was near the beginning of the array. This makes sense as it will stop searching as soon as the item is found while the masking method iterates through the entire array twice before searching. However, later testing with slightly different test code had the mask method consistently faster. Odd. Unfortunately my thumbstick appears to have digested my code.
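The two search strategies compared in that benchmark can be mocked up outside of G. This is a minimal Python sketch of the same trade-off, not the original benchmark code; the data, sizes, and function names are invented. The early-exit loop corresponds to the unbundle-and-compare approach, the mask function to the boolean-mask approach that touches the whole array before searching:

```python
import timeit

# Hypothetical 'array of clusters' as (number, string) tuples.
records = [(i, f"name{i}") for i in range(10_000)]
target = "name123"  # near the front, where early exit should win

def early_exit(recs, key):
    # Unbundle-and-compare loop: stops as soon as the match is found.
    for i, (_, name) in enumerate(recs):
        if name == key:
            return i
    return -1

def full_mask(recs, key):
    # Boolean-mask method: builds a mask over the entire array first,
    # then searches the mask for the first True.
    mask = [name == key for _, name in recs]
    return mask.index(True) if True in mask else -1

print("early exit:", timeit.timeit(lambda: early_exit(records, target), number=200))
print("full mask: ", timeit.timeit(lambda: full_mask(records, target), number=200))
```

As in the post, the relative timings depend on where the match sits: the early-exit loop does less work for matches near the front, while the mask method's cost is roughly constant regardless of match position.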
  19. QUOTE (Norm Kirchner) It will never be displayed as a tree although I will display parts of the tree in combo boxes occasionally. 99% of the time I use the tree I don't care about displaying it. QUOTE (Norm Kirchner) The reason I ask is what is the overhead you're speaking of, because to grab values/properties from the tree is very fast depending on what you're doing, and w/ the Variant based DB that I have integrated, if you're extracting and modifying information and adding information, the only overhead is the usual time associated w/ the variant operations. [WARNING - Long boring application details follow] This will be a little easier to explain if I give you an example of the data hierarchy and explain a little bit about how I implemented the application. One particular data file type I work with is organized into the following 'filters.' (I'll explain the naming in a minute.): Test Number -> Speed -> Angle -> Offset Each data file may have multiple Test Numbers, each of which may contain multiple Speeds, each of which may contain multiple Angles, each of which may contain multiple Offsets. It is important that I am able to have cousins with the same name, since I will have (for example) data with the same angle but different speeds. There will be several data points associated with each unique filter combination. When I load the data I decode a 'tag' value recorded with each data point to determine what filter combination that data point belongs to. I group all the points with identical filter combinations into an array and bundle that array with a tree representation of the filter combination. This cluster I refer to as a 'Level 1 Element.' (Creative naming, I know.) I store all the data that has been loaded in an array of L1 elements. (Since multiple files of the same type of data are loaded, I prepend the filters above with a file alias filter to uniquely identify the data file, but we'll ignore that.) My application is for data analysis. 
For example, I'll take a set of data and calculate the min, max, mean, and st dev of the error. The data set may include a single L1 element, or it may include an entire data file. Users need to be able to choose how much data to include in the data set when displaying data and calculating metrics. I populate combo boxes with appropriate filter values and users use the combo boxes to select the data they wish to view, hence 'filters.' Undoubtedly some of the overhead has to do with poorly written code and bad architecture on my part. These may not apply to your code, but some of the inefficiencies I've encountered are: Since I couldn't store a tree as a native type in the L1 element I had to store it as a variant. But, since you can't convert a tree directly into a variant and maintain the structure, I built a pair of vis to convert between a variant and a tree control. Any time I want to operate on a tree I have to convert the variant back into a tree before I do anything. Let's say I have a data file with 10,000 data points and there are 100 unique filter combinations in this data set. With each data point I have to search through my L1 elements to find the one this point should be added to. On average I will have to search through 50 L1 elements before I find the one I want. That means I have to convert a variant to a tree 500,000 times just to load that data. Every time the user changes a filter combo box I have to do a variant -> tree conversion for each L1 element as I search the data store for the right data. (My initial implementation attempt stored the data hierarchically instead of as an array with a unique filter combination. That would have improved data searching efficiency but it got very confusing very quickly.) All the operations I'm interested in require a reference to the control. As I understand it reference operations are inherently slower than operating directly on data. I might have to build thousands of individual tree branches for a given data set. 
It appears to add up quickly. Simply deleting all the nodes in a tree seems to take an unusually long time. (Which I have to do often seeing as how 'Reinit to Default' only changes the selected value in the tree and doesn't alter the structure at all.) My tree nodes are defined by the path, not by the node value. Due to my kissing cousins (cousins with the same name) knowing a node string doesn't do me any good unless I know the entire path. Because the tree control doesn't allow identical tags they have no value to me beyond using them for add/delete/etc. operations. I spend a lot of time finding the tag <-> string relationships, especially when checking a tree to see if a branch exists. I'll be the first to admit I didn't spend a lot of time looking through your api. Perhaps I dismissed it too quickly without giving it a fair shake. I'll take another look at it. With the unique requirements for my tree my hunch is that a simple, application-specific tree will work better than shoehorning it into a multipurpose tree api. (Of course, my hunches often get me in trouble...) QUOTE (jdunham) Don't be scared. You don't have to do anything with the classes. I have looked at it since my last post and it clearly belongs in the deep end of the pool. (Where'd I put my floaties?) Assuming I could decipher it enough to use it, I'm not sure that implementation would work since it appears to be a binary tree. Features like auto-balancing would also really mess me up. QUOTE (jlokanis) Let us know how you end up solving the problem! This is what I've cooked up so far. It is far from a general tree implementation but I think it will do what I need. I haven't tested it much nor benchmarked it yet so I don't know if it is better than what I've got. Changing the tree implementation is fairly major application surgery; I'll need to test the various solutions before wielding the knife. Download File:post-7603-1206578444.zip
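For readers following the filter-hierarchy discussion above: one way to sidestep the variant-to-tree round trips entirely is to key the data store on the full filter path instead of storing tree controls at all. This Python sketch is only an illustration of that idea, not the poster's attached solution; the column names and sample values are invented. Tuples as keys handle the 'kissing cousins' problem, since two paths that share a leaf value are still distinct keys:

```python
from collections import defaultdict

# Data store keyed by the full filter path: (Test Number, Speed, Angle, Offset).
# (1, 30, 45, 0) and (2, 30, 45, 0) are distinct keys even though the
# Speed/Angle/Offset 'cousins' share the same names.
store = defaultdict(list)
COLS = ("test", "speed", "angle", "offset")

def add_point(test, speed, angle, offset, value):
    # O(1) insert: no search through L1 elements, no tree conversion.
    store[(test, speed, angle, offset)].append(value)

def filter_values(column, **fixed):
    """Unique values available for one filter column, given the user's
    current combo-box choices (e.g. all Angles present for test=1)."""
    idx = COLS.index(column)
    out = set()
    for key in store:
        if all(key[COLS.index(name)] == val for name, val in fixed.items()):
            out.add(key[idx])
    return sorted(out)

add_point(1, 30, 45, 0, 0.12)
add_point(1, 30, 90, 0, 0.08)
add_point(2, 30, 45, 0, 0.15)
print(filter_values("angle", test=1))
```

Loading a point becomes a hash insert rather than a linear search through L1 elements, and populating a combo box is a single pass over the keys with no control references involved.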
  20. QUOTE (Aristos Queue @ Mar 26 2008, 01:18 PM) That's a good tip. Thanks! I hadn't thought about the order in which the cluster elements would get searched. Kind of a moot point in this particular case though as the search prim doesn't work for what I'm trying to do. (I coded it up as an example of how I imagine it would work.)
  21. In my code I often have arrays of clusters in which I need to find the element that matches one piece of data in the cluster but the other pieces of data don't matter. For example, if I have an array of clusters in which each cluster contains a number and a string, I'll need to find the element in which the string matches a specific value but the number doesn't matter. (Often I won't even know the value of the number.) I've always handled this by iterating through the array, unbundling each cluster, and comparing the strings. (Upper code line.) My benchmarking shows the search array prim to be ~20% faster than iterating for arrays that don't use clusters. Is there a way use a wildcard in place of a value in a cluster that would allow me to use the search array prim? (Lower code line.)
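The search described in that post, matching one cluster field and treating the rest as don't-cares, is easy to show in text form. This is a Python sketch of the iterate-and-compare approach (the 'upper code line'), with made-up data; G has no direct equivalent of a wildcard element for the search-array primitive:

```python
# Hypothetical 'array of clusters': each element is a (number, string) pair.
records = [(7, "alpha"), (3, "beta"), (9, "gamma")]

def find_by_string(recs, key):
    # Match on the string field only; the numeric field is a 'don't care'.
    # Returns the index of the first match, or -1 if not found.
    return next((i for i, (_, s) in enumerate(recs) if s == key), -1)

print(find_by_string(records, "beta"))   # → 1
print(find_by_string(records, "delta"))  # → -1
```

Because only one field participates in the comparison, the loop never needs to know (or construct) a value for the number, which is exactly what a wildcard in the search primitive would have provided.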
  22. QUOTE (krypton @ Aug 8 2006, 12:05 AM) Certificate before a job, or job before a certificate? My personal experience is there's no way to predict what will impress a prospective employer, especially coming right out of college. For instance, when I graduated college my first boss hired me in part because I had earned my private pilot license after 11 months of training. He thought that demonstrated perseverance and dedication. He was primarily looking for someone with good work habits and certain personality characteristics. Everything I needed to know I learned on the job. Once you've been in the industry for a while certificates may be useful or even expected if you are looking for a Labview specific job, but I think they have limited value when looking for your first job. Rather than getting a certificate, I'd do projects, write a one page summary about each one, and give those to your prospective employers. When documenting your projects be sure to include what problems you had and how you solved them. *Every* interview I have been on has asked me about how I overcame challenges. My $.02 [Edit] However, there's nothing wrong with getting the CLAD if the cost is not prohibitive. You can also purchase the course kits for intermediate training. They are expensive for a student budget but much more affordable than instructor-led training.
  23. QUOTE (Aristos Queue @ Mar 24 2008, 07:08 PM) Well then... I went ahead and started a new wiki page to store the information. I started it by simply copying and pasting your post. I don't have time at the moment to search for the other posts but hopefully I can do that soon. http://wiki.lavag.org/Buffer_Allocation
  24. QUOTE (crelf @ Mar 20 2008, 04:11 PM) I did see that and looked at it a bit, but it also uses the tree control and carries the overhead associated with it that I'm trying to avoid. (I wish I had seen that before I cooked up my version though.) QUOTE Have you checked out the Map Class? Interesting, but... QUOTE (AQ) THE CODE IN THIS .zip FILE IS REALLY ADVANCED USE CASE OF LabVIEW CLASSES. Novices to classes may be scared off from LabVIEW classes thinking that all the code is this 'magickal'." ...I'm scared. I understand the concepts of classes but have little practical experience with them. The discussion alone is over my head, much less the code. Maintaining it could be difficult. I'll have to play around with it when I'm ready to rewrite the tree section. QUOTE (jlokanis) The variant tree structure is fast for reading but slow for writing. So, if you build the data tree once and then read from it often, this is a good approach. Lots of reading and writing. I guess I'll either have to go with AQ's map class or an array of clusters.