Everything posted by jdunham

  1. QUOTE (jorwex @ Aug 7 2008, 07:34 AM) That's the true sign of a newbie. No, there's no good reason. It's just a complicated feature set that mostly works and doesn't generate too many complaints so it has remained the same. I think you figured out why I didn't want to type up a whole set of instructions. I'm glad you got it working. One final tip: you can use the Control Editor to specify the exact size of the plot area. LabVIEW scales the axes automatically, but if you match the plot area to the size of your X and Y dimensions, then the rendering of the intensity graph will be faster and more accurate. If you're off by a few pixels, then LV has to do a 2D interpolation of the whole data set.
  2. Well, a simple example I just made does not reproduce the problem. Basically I was passing App.MenuLaunchApp and App.MenuLaunchVI into the "Open VI Reference" function and getting error 1004, VI not in memory. However, it's working fine right now. I'll be sure to post back if I can recreate the problem. OK, I figured out my problem. I tried to re-open the VI reference by name, but the second time, I did not wire the proper application reference. Thanks for listening!
  3. QUOTE (Darren @ Aug 6 2008, 12:58 PM) Thanks for the quick reply. I have two projects open, one of which contains the VI I launched from, and one which doesn't. MenuLaunchApp is returning whichever project is frontmost. (still using LV 8.5, sorry). I'll try to get a test VI together. Jason
  4. I wrote some VIs which launch from the Tools menu (by the well-known trick of storing the VIs in the "C:\Program Files\National Instruments\LabVIEW 8.5\project\" folder). When I select the menu item, there is a property node for App.MenuLaunchVI, which gives the VI name, and there is App.MenuLaunchApp. I was hoping that gave the application instance which owns the launch VI, but it just gives the topmost lvproj which is open. Does anyone know how to get the AppInstance which owns the launch VI? Jason
  5. My pet peeve is the file dialog box. It's great that it starts in the last folder you accessed. However, only one last position is remembered, and it's shared between opening up VIs and other code from the File menu, versus opening data files from various runtime methods of opening files. Only a crazy person keeps programs and data anywhere near each other on the hard drive, and it's very irritating to have to navigate back and forth. This gets even worse if you have parallel branches of similar code (managed with a professional source code control system, of course). Right after you switch branches, it is very easy to browse into the wrong working copy on your hard drive, since the folder structure is usually identical, and LV's file dialog box will almost always start you in the wrong one. With the miracle of lvproj "application instances" you can have both trees open at the same time, and the file dialog box will work as hard as possible to encourage you to cross-link your code. Often, loading the VIs will throw warnings which you ignore at your peril, but the right place to fix this is in the file dialog box's starting place. My wishes: 1. For loading VIs, it should always start in the lvproj folder or one of its subfolders. 2. For loading data files with File VIs, path control browse buttons, executing the file dialog function, etc., it should maintain a separate "last used folder" and start there. I have already submitted something like this to the LV product suggestion center, but I encourage others to do so too.
  6. QUOTE (Max1971 @ Jun 25 2008, 06:06 AM) You are on the right track. I don't see why those examples shouldn't help, though I could be more specific if you had links to those examples. The main reason DAQmx is better than the old driver is that the same functions work whether your tasks are AI, AO, or Counters. In the DAQmx example ...\LabVIEW 8.5\examples\DAQmx\Synchronization\Multi-Function.llb\Multi-Function-Ctr Retrigg Pulse Train Generation for AI Sample Clock.vi, you can see that the counter channel starts counting based on a trigger from another task. You should be able to use that same subVI (...\vi.lib\DAQmx\configure\trigger.llb\DAQmx Start Trigger (Digital Edge).vi) to start your counter task from another counter task's start or input signal. Good luck.
  7. One probable cause is a LabVIEW annoyance. If you are editing your code in the BranchA working copy, and then you close it and open your BranchB working copy by double-clicking the lvproj file or grabbing it from the recent files list, then when you use the Open... or Save As... functions, you usually end up browsing in the folders of the wrong branch! This is because LabVIEW only remembers the most recent folder you used with the file dialog box, whether you are opening a VI or accessing a data file when your program is running (I've complained many times about this). Of course in many cases, the wrong branch looks indistinguishable from the right branch, so if you don't inspect the file dialog box to see the complete path, it's very easy to save or load VIs from the wrong working copy. You have to be very vigilant. There is a lot NI can do to help this, but it doesn't seem to be much of a priority. I think they don't want to second-guess the user's on-disk folder layout, but this makes it very easy to make a mistake. OK, so if you are already cross-linked, the best thing to do is to move the working copy to a completely different place on the hard drive to make sure LabVIEW's relative path links can't find the cross-linked VIs from the other working copies. Make sure the working copy is moved to a different level (number of folders from the root) to confuse the links. Then when you load, you will still have to re-link a lot of the VIs yourself, but when you are done hopefully everything is fixed. Of course you want to immediately Save All and then do an SVN commit. Godspeed
  8. QUOTE (James N @ Jun 18 2008, 10:00 AM) First off, you should not be having these problems. You don't need to switch to Perforce. Something is going wrong. I don't agree with James; LabVIEW should only be keeping relative, not absolute, paths to each VI. I keep checkouts on my hard drive from a half-dozen active branches of the same code base, and they don't get cross-linked. Of course I am somewhat careful to make sure there are no links outside a given working copy except to the LabVIEW installation folder (vi.lib). I can open up multiple branches at the same time, as long as I rename one of the lvproj files (I usually revert afterwards rather than checking the renamed lvproj into SVN). I can move the working copies of any branch anywhere on my computer and it will open just fine, though all developers on our team do keep the same hard drive layout to make builds easier. Make sure you are double-checking the repository location of files you are opening by checking the Subversion tab of the Windows file properties display. Look at your VI hierarchy with full paths turned on, and make sure that all VIs are inside your current working copy or else in vi.lib. Good luck, and if you have some more information or some samples, we can probably help some more.
  9. QUOTE (menghuihantang @ Jun 17 2008, 12:55 PM) Use parallel while loops. That's what LabVIEW is great at. Keep each loop as simple as possible and minimize the interaction with the other loop. If you need to share data or events, you are going to need to use some kind of communication system outside of the dataflow, like local or global variables, queues, notifiers, functional globals, datasockets. Your best bet is probably to have one loop for TCP/IP listening, one for user event listening, and then a third loop where all the action takes place, fed by your own events generated in the other loops and then passed into the main execution loop. Queues are the favorite way to do this. Do a search on "producer-consumer architecture" on this site and at zone.ni.com.
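     The parallel-loop pattern above isn't unique to LabVIEW; since block diagrams can't be shown in text, here is a rough Python analogue of the same producer-consumer idea (names and payloads are purely illustrative): two producer loops push messages into a shared queue, and a single consumer loop does all the work.

     ```python
     import queue
     import threading

     events = queue.Queue()  # the shared channel into the consumer loop

     def tcp_listener():
         # Producer loop 1: a real app would block on a TCP read here.
         events.put(("tcp", "hello from socket"))

     def ui_listener():
         # Producer loop 2: a real app would block on user events here.
         events.put(("ui", "button pressed"))

     def consumer(n_events):
         # The single "action" loop: all state changes happen here,
         # fed only by messages from the producers.
         handled = []
         for _ in range(n_events):
             source, payload = events.get()
             handled.append((source, payload))
         return handled

     producers = [threading.Thread(target=tcp_listener),
                  threading.Thread(target=ui_listener)]
     for t in producers:
         t.start()
     for t in producers:
         t.join()

     handled = consumer(2)
     print(handled)
     ```

     In LabVIEW the queue plus event structure plays the role of `queue.Queue` here; the point in either language is that the producers never touch shared state directly.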
  10. QUOTE (Eugen Graf @ Jun 13 2008, 12:44 PM) If you are frequently using a typedef which contains an enum describing the type of data along with a variant containing the data itself, then you have a prime candidate for using LabVIEW lvclass objects (LabVOOP) instead. With LabVOOP, you can put different data types (different child classes) on the same wire without having to have an enum and lots of case structures whenever the variant data is interpreted. Instead, you drop your code into dynamically dispatched VIs, and at run time, the correct DD VI is called based on the type of data in the wire. This should work very well in combination with the use of queues to pass the data around.
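     The enum-plus-variant versus dynamic-dispatch trade-off exists in any object-oriented language. As a hedged, text-form analogue of what dynamic dispatch buys you (class names here are made up for illustration), compare this to carrying an (enum, variant) pair and switching on the enum in every case structure:

     ```python
     class Message:
         # Parent class: analogous to the parent lvclass on the wire.
         def handle(self):
             raise NotImplementedError

     class TemperatureReading(Message):
         def __init__(self, value):
             self.value = value
         def handle(self):
             # Child-specific code: replaces one case of the case structure.
             return f"log temperature {self.value}"

     class ErrorReport(Message):
         def __init__(self, code):
             self.code = code
         def handle(self):
             return f"alert operator, error {self.code}"

     # One list (or queue) of Message objects replaces (enum, variant) pairs;
     # the right handle() is chosen at run time, like a dynamic-dispatch VI.
     inbox = [TemperatureReading(21.5), ErrorReport(404)]
     results = [msg.handle() for msg in inbox]
     print(results)
     ```

     Adding a new data type means adding a new child class, with no existing case structure to edit — the same maintainability argument made for LabVOOP above.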
  11. QUOTE (marp84 @ Jun 11 2008, 07:59 PM) I would just try to reload your Device Driver CD from your LabVIEW 8.0 install disks. At the beginning of the install, there are checkboxes for all the components to be installed. Make sure that "LabVIEW 8.0 Support" (or something similar) is activated under the DAQmx item. Good luck.
  12. QUOTE (Justin Goeres @ Jun 10 2008, 05:52 AM) I did feel bad about jumping all over the optimization point without offering anything constructive. Now that there is more information available, I can see you're doing it exactly right, Eugen. Code it up the easiest way first, and see whether it's fast enough, and only try to mess with it if it has a serious problem. QUOTE (Eugen Graf @ Jun 10 2008, 06:44 AM) I have to program an application which postprocesses some data, so it should be like MS Excel. My program should read raw data from flash over 7 Slots x 15000 Pages x 60 Datasets (each dataset contains ca. 10 doubles as binary) and show this data in a table. I don't show all the data in one table, but showing one of 7 Slots is enough to make my PC slower and eat some RAM. The problem is not only that this data is duplicated; much more the problem is the table with data, because I convert binary data to ASCII! That was the first part of the program. The second part should read the saved table, convert it back to doubles (for plots) and do postprocessing. After postprocessing I have to show the raw data and postprocessed data in one table. One row contains about 25 values! And this data is twice in RAM: doubles for plots and ASCII for the table. I would consider keeping the storage of the huge datasets separate from the user interface. Instead of keeping them in ASCII, you could keep all the data in native binary, stored in a functional global (uninitialized shift register). You can't look at all that data at once anyway. You could just pull small segments of the data out of the storage and write them to the table. If someone modifies the table, you catch the event and update the storage in the right place. If the user scrolls the table or changes pages, then you go back to the storage, and get the new section of data and toss it into the table.
Even if they scroll around a lot and your code has to do a lot of gymnastics it may still be quite a bit faster than keeping all the data in the table, and using the table as a storage mechanism. Jason
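     The "store binary, format only what's visible" idea sketches easily in any language. Here is a minimal Python illustration (the data and window sizes are invented for the example): the numeric store stands in for the functional global, and the formatting function is what you'd call each time the user scrolls or changes pages.

     ```python
     # Hypothetical sketch: keep numeric data in one store and format only
     # the rows currently on screen, instead of holding the whole dataset
     # as ASCII strings in the table.

     data_store = [[row * 10 + col for col in range(3)] for row in range(15000)]

     def visible_rows(first_row, n_rows):
         # Convert just the on-screen window to strings, like repopulating
         # the table when the user scrolls or changes pages.
         window = data_store[first_row:first_row + n_rows]
         return [[f"{v:.2f}" for v in row] for row in window]

     page = visible_rows(100, 2)
     print(page)
     ```

     The ASCII copy now costs a few dozen cells instead of millions, no matter how large the underlying dataset grows.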
  13. QUOTE (Xrockyman @ Jun 9 2008, 03:30 PM) Sure, the NI DAQ products work great as hardware counters. LabVIEW doesn't really have a true interrupt handling system which can be connected to a DAQ event, so you will probably need to poll the counter. There are lots of examples for reading from the counters in the LabVIEW Examples. If you have tried writing some code and it's just not working, then try to post some code and you might get some more specific help to point you in the right direction. Salut, Jason
  14. QUOTE (Eugen Graf @ Jun 9 2008, 03:55 PM) This absolutely applies to LabVIEW. The Java page was just the first one that came up on Google. People way smarter than me swear by these rules, in all languages. As my co-worker often says "the compiler is smarter than you". It's just not a good use of your time to try to outwit the development system, especially if it makes your code harder to read or harder to maintain. You should do what you are good at, which is coming up with nifty things to ask the computer to do for you, and you should let the computer do what it's good at, which is getting it done really freaking fast. I think you will find that programs with clean designs and clean diagrams actually run pretty fast. QUOTE (crelf @ Jun 9 2008, 03:27 PM) Wow - that's one hell of a generalisation. Even the page that it links to is over simplified. Whilst I agree, at least in part, with the sentiment, I've gotta give my standard response to generalisations like this: "it depends" Saying "Don't optimize as you go" is like saying don't write using punctuation. Also "...making sure that the code is clean... and understandable" can be considered forms of optimisation. I don't think that's a useful definition of optimiZation (c'mon, crelf, you live in North America now ). Using punctuation is like having clean, easy-to-read LabVIEW diagrams that humans can understand, which should be the primary design consideration. Human labor time is much more precious than computer time, so it doesn't make sense to optimize unless there is a problem (i.e. the user interface is sluggish for other humans), and if there's a problem, you can usually fix it by profiling the code and fixing it in just a very few places. If you write code that other humans (maybe yourself two years from now) are going to waste time understanding or debugging because it's so confusing, then where is the optimization in that? 
I think there's a lot of value in having clean code and doing sensible things, but Eugen's original question was about whether he should make his code messy in order to fix a 'problem' that was just speculative. BTW I learned all of this the hard way!
  15. QUOTE (Eugen Graf @ Jun 9 2008, 01:37 PM) 1. That's what wires are for, so use them! Putting data out on the connector pane does not necessarily make a copy. 2. The first rule of optimization (http://www.cs.cmu.edu/%7Ejch/java/rules.html) is: Don't do it! Your computer uses the same amount of electricity (in general) whether or not your code is efficient. From the above link: QUOTE Write your program without regard to possible optimizations, concentrating instead on making sure that the code is clean, correct, and understandable. If it's too big or too slow when you've finished, then you can consider optimizing it.
  16. QUOTE (georgekm @ Jun 6 2008, 08:05 AM) (Assuming you are using DAQmx) Generally a timing/trigger line is not digitized and stored, as far as I know. The easiest thing is to just run the trigger line to an unused analog channel so that it's easy to plot. Of course you will likely see a one-sample delay in the digital plot, because you won't start sampling until after the line goes high (assuming a typical trigger configuration). You can also wire the encoder line into a counter input and do a buffered counter operation at the same rate as the analog input. You will want to set the counter task to start and clock with the analog input scan clock, and make sure it starts and is armed before your analog operation starts. You should be able to find some DAQmx examples for synchronized tasks. You will still have to call separate functions to get the analog data versus the digital data, and you will want to set the DAQmx.Read properties of the slave task so that you know you are reading the same samples as from the master task. See this post for DAQmx Read properties: http://forums.lavag.org/Independent-A-D-sampling-is-it-possible-t10987.html&p=46138#
  17. QUOTE (rolfk @ Jun 4 2008, 10:05 PM) You inspired me, so I submitted this. Other people should go submit related ideas at the Product Suggestion Center (http://digital.ni.com/applications/psc.nsf/default?OpenForm). Vote early and vote often! :thumbup: QUOTE (LabVIEW Product Suggestion) Numeric controls and indicators are only one line, so they resize when the font is resized. String controls get larger when the fonts get larger, but are not reduced in size when the font is reduced. This makes it difficult when the VI is moved to a new OS, or loaded on a different computer with different font display settings. I realize this is made more complicated since there is some limited support for multiple font sizes in the same control (not used in 99.99% of cases). It's a real pain to get string inputs to line up (especially arrays of strings) when the display changes. Our current workaround is to set the fonts in stone and put Tahoma 13 in the INI files for all our built applications, but this is very limiting. What I would suggest: add a string property "Scale to font size" and, if this property is active, a property "NumLines" (this would be analogous to "NumRows" in an array display). Those properties could also be combined by considering NumLines=-1 to be a fixed user-defined size (the current behavior). If a string input is changed to "limit to single line" = TRUE, then default NumLines to 1.
  18. QUOTE (rolfk @ Jun 4 2008, 01:02 PM) I think we were trying to say the same thing. It would be great if you could set a string control (especially if input is limited to a single line) to scale with the font the way the numerics do.
  19. QUOTE (BrokenArrow @ Jun 3 2008, 06:11 AM) I looked at your image. I think the fonts are exactly the same, but they are rendered to a different pixel size in Vista than they were in XP. Remember that integers can only be one line, so LabVIEW will always resize the numeric for the specific font size. For strings, the control itself does not resize automatically. As a test, select all of those objects and change the font size to 8 or 9 or something. The numeric array will get a lot smaller, but the string arrays won't change, even though their fonts do.
  20. QUOTE (BrokenArrow @ Jun 2 2008, 02:57 PM) I think it is the same issue. The first thing I had to do when I opened LabVIEW in Vista was change the default fonts from the Vista fonts (Segoe UI, which doesn't exist on XP) back to Tahoma, and then my existing GUIs were not all ruined. I think the folks in that thread could have solved their problems by changing the LabVIEW fonts to non-default (XP-compatible) settings rather than changing the theme. Changing the theme would be equivalent, but changing only the LV fonts would not affect the other applications. It's also possible that there are other NI display bugs, but I haven't seen similar issues, and I have been running Vista for 6 months, with old-style XP fonts, of course.
  21. QUOTE (crelf @ Jun 2 2008, 07:49 AM) The problem is not just Vista versus XP. You will have horrible font problems if your application is viewed on computers with different "font size" display settings in XP. Our solution for the font woes is to make sure that appFont="Tahoma" 13, dialogFont="Tahoma" 13, and systemFont="Tahoma" 13 are in the LabVIEW preferences INI file of each deployed application (and of course in LabVIEW.ini itself). Then you are more or less immune to different display configurations modifying your button sizes and text field extents. See http://digital.ni.com/public.nsf/allkb/65F...62562C70073BE06 and related links for more info. The downside is that it's harder to make an application that looks like native XP when run on XP and looks like native Vista when run on Vista. It's a small price to pay for all the time you save.
  22. QUOTE (george seifert @ May 22 2008, 11:52 AM) You can totally do this. Hardware scanning is nearly always recommended. And yes, you keep track of your own buffer pointers. The main trick is that in its default mode, DAQmx Read will remember the last point where it read from and start there. If you read from separate threads, you need to change the DAQmx.Read properties. I decided it was easier to describe all this in LabVIEW, so here you go. These are just code fragments, not tested or anything, but I've used these methods in many applications. (Image: http://lavag.org/old_files/monthly_05_2008/post-1764-1211516168.png)
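     The "keep track of your own buffer pointers" idea is independent of DAQmx. As a conceptual sketch (plain Python, not a driver call — consumer names and data are invented), each reader keeps its own mark into a growing sample buffer, which is the effect you get in DAQmx by overriding the default read position via the Read properties:

     ```python
     # Conceptual sketch: a growing sample buffer with an independent
     # read mark per consumer, so two readers don't steal samples
     # from each other the way two default-mode reads would.

     sample_buffer = []                    # samples appended by the acquisition
     read_marks = {"plot": 0, "logger": 0}  # one mark per consumer

     def acquire(samples):
         sample_buffer.extend(samples)

     def read_new(consumer):
         # Each consumer resumes from its own mark and then advances it.
         start = read_marks[consumer]
         data = sample_buffer[start:]
         read_marks[consumer] = len(sample_buffer)
         return data

     acquire([1, 2, 3])
     first = read_new("plot")      # plot gets the first three samples
     acquire([4, 5])
     second = read_new("plot")     # plot gets only the new samples
     third = read_new("logger")    # logger still sees everything so far
     print(first, second, third)
     ```

     In the real driver the buffer is a fixed-size circular buffer managed by DAQmx, so a slow reader can be overwritten; the per-reader offset bookkeeping is the part this sketch illustrates.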
  23. QUOTE (James N @ May 8 2008, 09:06 AM) Note that the Find dialog lets you limit your text search to the block diagram only and labels versus data, which can be really handy when looking for more common words. The search box will (thankfully) remember your choice, which can trip you up next time, but generally I only search on block diagrams. This is really handy if you are searching for specific bundling and unbundling of items in a cluster. The item name will be on every front panel of every VI using the cluster, but often it will be in just a handful of places on diagrams.
  24. QUOTE (Jim Kring @ Apr 23 2008, 12:44 PM) I think it's more like how a 'six-figure income' is anything over $1E5.
  25. QUOTE (Dirk J. @ Apr 17 2008, 02:16 PM) I don't understand how you can "Create an instance" with previously initialized data, since LabVIEW classes are by-value. Are you just branching the parent wire and then casting it as the child? I think if you post some example code, you will get more responses.