Everything posted by jdunham

  1. QUOTE (jlokanis @ Jan 6 2009, 02:15 PM) I don't think there's a true right and wrong way, but if you create all those objects at the top level and pass them down, you end up with an awful lot of wires. On our team we use mostly named Qs/Ns so that they are globally available, and use unnamed objects only when necessary. We wrap the obtain (open) function in a subvi so that the name and the data type are only in one place. We only use a few of those, and many subvis take the Q/N reference as an input. Whether to pass with a wire or use the open function is mostly a judgment call. QUOTE (jlokanis @ Jan 6 2009, 02:15 PM) The plus is I would not be copying the ref value each time I branch a wire, and each sub-vi would have its own private copy of the ref instead of everyone using the same ref number for the underlying queue/VI/notifier/etc. The minus is I would have to incur the cost of calling all the obtain/open and release/close operations in every VI. Honestly I don't think either of these matters at all. Copies of references are cheap and fast, and cleaning them up is easy. Obtaining a new reference is slower, but as long as you're not doing it in a tight loop, it's not a big deal. It's much more important to keep your code bug-free (maintainable) and memory-leak free (close all your references) than to worry about the performance of these references.
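The "wrap the obtain (open) function in a subvi" idiom described above translates directly to text languages; here is a minimal Python sketch of the same pattern (the `obtain_status_queue` name and the module-level registry are hypothetical, purely for illustration):

```python
import queue

# Sketch of the "obtain by name" pattern: a single wrapper owns the
# queue's name and element type, so callers anywhere in the program
# get the same queue without having to pass a wire (reference) around.
_registry = {}

def obtain_status_queue():
    """Return the shared 'status' queue, creating it on first call."""
    name = "status"              # the name lives in exactly one place
    if name not in _registry:
        _registry[name] = queue.Queue()
    return _registry[name]

q1 = obtain_status_queue()
q2 = obtain_status_queue()
assert q1 is q2                  # every caller sees the same queue
```

As in the post, the trade-off is a dictionary lookup per call versus threading a reference through every subvi; either is cheap unless done in a tight loop.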
  2. Hi John, thanks for posting. When I evangelize LabVIEW to my co-workers and others, sometimes I say "It's quite possible that in another 50 years, everyone will program computers this way, and text languages will be archaic." I really believe that's possible, for a variety of reasons that we all know and love (dataflow, built-in parallelism, modularity). BUT, then I have to say "but I don't think National Instruments can get us there, since it's designed and marketed as a niche language for a niche market." Before I make you feel any worse, I don't really think this is NI's fault, or that it's easy to fix. Conquering the world is a mighty big challenge (Do I hear someone chanting "Open Source... Open Source..." in the background?). However, I wish I heard more from NI about this kind of long-term vision, and yes, I have been to plenty of NI Weeks. But yeah, we need better deployment tools and all that other stuff on your list of New Year's Resolutions.
  3. QUOTE (brianafischer @ Jan 5 2009, 01:45 PM) The problem is that in the real world, the Windows/Mac/Linux OSes don't guarantee response time (that is, they are not "deterministic"). If you use some LV timeout operation and ask for 10 msec, you'll *probably* get control back in about 10 ms (unless Windows starts re-indexing your hard drive, or someone opens Excel, or...). Even if you could ask for a 10 µsec response, the system's ability to return to your task in that time is a crapshoot. This problem can be solved with LabVIEW Real-Time, and the special hardware needed to run it, but that toolset is kind of spendy. The next step down is to use the LabVIEW Timed Loop and a DAQ card with a counter/timer, but you are still at the mercy of the OS. The lack of determinism is why it's useful to use the OS system clock for your timing. If your timeout is continually late (it will never be early), the errors will accumulate. But if you constantly check the time of day or the millisecond tick count, then you can correct for this. On the other hand, polling always gets a bad name, but your polling loop may not be all that resource-intensive. Don't forget that the CPU is usually running NOOPs most of the time anyway, or else the OS is doing low-level polling operations. If your task is low-priority, polling is probably not a big deal, unless you are trying to do power management. Of course, if your task is low priority, you won't get the deterministic response you crave. The other issue is figuring out what to poll to measure time. You might be able to poll your CPU's cycle counter (the register that counts clock cycles; on x86 it's the time-stamp counter), but I don't remember how to do that.
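The drift-correction idea above (check the absolute clock each iteration instead of trusting the timeout) can be sketched in Python; `run_periodic` is an invented helper, not a LabVIEW or OS API:

```python
import time

# Sketch (not LabVIEW): a polling loop that corrects for timer lateness
# by scheduling each iteration against the absolute clock. A plain
# sleep(period) accumulates error, because timeouts are only ever late;
# sleeping until "start + n*period" lets each iteration absorb the
# previous one's lateness instead of compounding it.
def run_periodic(task, period_s, iterations):
    start = time.monotonic()
    for n in range(1, iterations + 1):
        task()
        deadline = start + n * period_s      # absolute target, no drift
        remaining = deadline - time.monotonic()
        if remaining > 0:                    # we can only ever be late
            time.sleep(remaining)

ticks = []
run_periodic(lambda: ticks.append(time.monotonic()), 0.01, 5)
```

This still won't give sub-millisecond guarantees on a desktop OS (the sleep can overshoot arbitrarily), but the error stays bounded per-iteration rather than accumulating.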
  4. We have a pretty big app, and we build it frequently without crashing LV. I would just use a divide and conquer approach. Take the top-level VI and delete half of the subvis, patch up the loose wires, and build it. If it doesn't crash, delete the other half of the VIs and see how that goes. Naturally you will want to try eliminating all calls to various features like VISA, GOOP (if possible), et cetera. It sounds tedious, and it is, but in one afternoon you should be able to finger the culprit. Then NI Tech support should be able to help you if the fix is not obvious at that point. Good luck!
  5. QUOTE (brianafischer @ Jan 5 2009, 09:23 AM) You seem to have two questions. One is about <1 ms timing, which is hard but not impossible. How are you going to measure that time? That should dictate at least part of your design. I don't think either of your use cases bears any relation to sub-millisecond timing. The other question is whether a stopwatch-like app should use relative time or absolute time. I would vote for absolute time because your timing code may have bugs (which you will eventually fix, of course), whereas the OS time is generally correct already.
  6. QUOTE (Jim Kring @ Dec 31 2008, 11:20 AM) Given this, I would amend my recommendation so that you export first, and then revert your working copy or check out a fresh one, and then copy the properly linked version back over your working copy.
  7. Just another data point: I have been using SVN since about the same time (2004) and have not experienced this problem at all. QUOTE (Ton @ Dec 31 2008, 09:21 AM) Exporting is a good idea, but my recommendation would be to:
      • commit your svn working copy, if it's not too hopelessly broken
      • svn-export your files to another location on disk so that there is no SVN at all
      • load up the exported copy and fix all the linking
      • save all the changed VIs, of course
      • *carefully* copy your exported hierarchy back onto the original files, replacing everything; this should be a single drag-and-drop with the Windows Explorer
      • commit your fixed working copy back to the repository
  8. Which ADO functions are you using? Last time we used the NI toolkit it was extremely slow, but we have our own wrappers to ADO and our stuff is working well enough. The main thing is to use the ADO.GetRows method. I'm on a deadline today, but I'll try to see if I can get more details on how to make it fast. Jason
  9. QUOTE (Phillip Brooks @ Dec 22 2008, 07:30 AM) That looks pretty cool. This is the one I got for traveling, the Acer X163Wb (http://www.amazon.com/Acer-X163Wb-display-widescreen-dynamic/dp/B00154JMEK). It's a widescreen 1366x768, and you can get it for under $120 nowadays. I just kept the original box, which has a briefcase-style handle, and it's totally ready for travel, since it's small and light.
  10. QUOTE (gleichman @ Dec 20 2008, 07:42 PM) I was thinking about this after a recent business trip when I bought the smallest second monitor I could find. (I've decided I'm never traveling without it again). Of course the drawback is that you need AC power, and it got me thinking about native dual-screen laptops. I thought that they would use a hinged design so that you open the laptop with its horizontal hinge, and then a second screen would fold out with a vertical hinge, with maybe an extendable leg to stabilize it in the open position. I guess the sliding design makes it easier to run with just one screen when necessary (like you're on an airplane or else you need more than 20min of battery life). Cool post!
  11. QUOTE (PJM_labview @ Dec 19 2008, 09:12 AM) What about VIs in disabled structures? Normally they are not compiled, but maybe if they are re-saved in another location, then they are touched.
  12. OK, this is all guessing... QUOTE (Nate @ Dec 18 2008, 11:11 AM) It's certainly checked at compile time, because if you violate the rules, your VI will have a broken run arrow. I doubt it needs to be checked at runtime. QUOTE (Nate @ Dec 18 2008, 11:11 AM) ... If a method in this child class attempts to access a private data member in the parent class, what prevents it? The child VI can't access private parent data. The child VI will have a broken run arrow. If you try to load it dynamically in any way, you should get the normal error code for trying to run a broken VI. QUOTE (Nate @ Dec 18 2008, 11:11 AM) I'm hoping the only thing that prevents it is the fact that the child class should be broken at design time by the LabVIEW compiler, and attempting to load it using the Get LV Class Default Value should gracefully report an error that the child class has errors, but I'd like get a solid answer to this question. Oh, you beat me to it. Seems easy enough to test (I have time to write posts, but not to write code, sorry). I can't give you a solid answer since I'm not privy to the internals, but I just can't imagine this working any other way. Dynamically called VIs can be run by systems which don't even have compilers (using the run-time engine) so there's pretty much no way this could be evaluated differently at runtime (again, this is speculation). QUOTE (Nate @ Dec 18 2008, 11:11 AM) ... but I confess I have not attempted to do any benchmark testing. Well, let us know what you find out.
  13. QUOTE (Val Brown @ Dec 18 2008, 10:45 AM) Well, what if there were a separate, open-source library of a lot of functions which really should have been put into vi.lib in the first place? Of course OpenG does exist, and it is your free choice whether to use it or not. It's not worth getting mad or griping about it. Just decide whether adding it to your toolkit makes sense for the realities of your situation and move on. Of course discussing that choice in this forum is just fine, I just don't get the apparent frustration you are expressing.
  14. QUOTE (Val Brown @ Dec 17 2008, 04:22 PM) Great idea, thanks. QUOTE (Val Brown @ Dec 17 2008, 11:24 AM) It's just that my experience has been with traditional LV dataflow and that's what I feel most comfortable with at this point. I think there are two interesting topics of discussion. One is your point about different flavors of LV-OOP, which is kind of interesting, and the other is about getting away from pure dataflow, which segues into debates about by-value or by-reference programming. For OOP, I am trying to use the native ("LabVOOP") structure as much as possible, both to reduce dependencies, and to use by-value programming as much as possible rather than by-reference (Endevo GOOP and OpenGOOP) approaches. I'm sure there is merit to using the other stuff, and I expect we'll hear from their fans shortly. Once you start to question the use of by-reference objects, then you have to question the use of locals, globals, event structures, queues, and notifiers. I think all of them are tremendously useful, but I have to say it's a lot easier to grow bugs in code that is not strict and simple dataflow. I think you could argue that all of those are deviations from the concept of pure dataflow programming, and yet now that we have these tools, LabVIEW programs can get more sophisticated and useful. The first time I used a by-reference object, I was thrown for a loop. I had an IMAQ image, and I branched the wire, ran a filter on one side and compared it to the original. I couldn't figure out why there was no difference, but of course the data was never copied because my wires only contained references to the image. Many years later, my code is filled with queues and refnums and whatnot, and I know that branches of the wires all refer to the same data, but it feels like the purity is gone.
  15. (Topic: SM2060) QUOTE (rachelanne @ Dec 17 2008, 01:17 AM) I think that's one more time than most other people reading this board. Did you contact Signametrics?
  16. QUOTE (PJM_labview @ Dec 17 2008, 03:35 PM) It looks more like a warning than an error. Translating from English to real English, it says, "I know you asked me to save everything, and I did, but two seconds from now when I claim you still have unsaved changes, please don't write nasty emails to NI Tech Support"
  17. QUOTE (lenny wintfeld @ Dec 17 2008, 02:41 PM) Welcome to LabVIEW! QUOTE (lenny wintfeld @ Dec 17 2008, 02:41 PM) 1. How do I create the initialization object shown in the "LabVIEW for Everyone" example program? And as an aside, how would I get LabVIEW to describe that "thing" to me? The easiest way is to pop up on the shift register -> Create -> Constant. You can also drop an empty array constant from the array palette, then drop the control ref constant inside it (you can also drag it back out of the existing one to see the empty array constant). You can also use the "Initialize Array" function with no size input (length=0) to get an empty array; that's what I usually do. If you have the wiring tool on, and the context help window open, it will tell you the data type of each wire when you hover over it with the cursor. That's how you figure out what the differences are. QUOTE (lenny wintfeld @ Dec 17 2008, 02:41 PM) 2. The object I created has the same type as the "LabVIEW for Everyone" object (both are a "1D array of Control Refnum"). But based on the error message, mine has nothing that it refers to. What does the "LabVIEW for Everyone" object refer to? Using Build Array puts one element into your array, but it's basically a NULL reference not pointing to any particular object. Your downstream functions are complaining about that. QUOTE (lenny wintfeld @ Dec 17 2008, 02:41 PM) 3. In my version of the program with the object that causes the problem not wired into the shift register at all (e.g. just "floating"), the program runs just fine! Am I just lucking out with an uninitialized shift register? If not, what is the purpose of the initialization object? Well, you can drop an Array Size function on that array and see what happens. You should notice that the array size grows forever; it's a memory leak. You don't notice any effect because that array is filled with redundant control references. When you disable the same control 400 times, it's still disabled.
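The leak described here is hard to show without a block diagram, but the same mistake is easy to write in any language. A Python sketch, where the `controls` list stands in for the array of control refnums (all names invented for illustration):

```python
# Sketch of the bug described above: re-appending the same references
# on every loop iteration makes the array grow without bound (a leak),
# while the program *appears* to work, because disabling a control that
# is already disabled has no visible effect.
controls = ["ref_a", "ref_b"]    # stand-ins for control refnums

leaky = []
for _ in range(400):
    leaky.extend(controls)       # grows forever: 400 * 2 = 800 elements
assert len(leaky) == 800         # the "Array Size grows forever" symptom

# The fix: build the array once (the shift register's initializer),
# then reuse it inside the loop instead of appending to it.
initialized = list(controls)
for _ in range(400):
    pass                          # operate on 'initialized'; don't append
assert len(initialized) == 2
```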
  18. QUOTE (willemjs @ Dec 17 2008, 12:51 PM) http://lavag.org/old_files/monthly_12_2008/post-1764-1229552508.png' target="_blank"> It gets a little more complicated if you have multiple projects (lvproj files) open.
  19. QUOTE (Val Brown @ Dec 17 2008, 11:24 AM) Well at risk of both a thread hijack and a mild flame war, I suggest you take another look at LabVIEW's built-in OOP (usually called LabVOOP). It's pure dataflow. You could also call it clusters on steroids. As I see it, it adds two features. One is to hide the implementation (i.e. the messy details) of each component of your code from the others. This makes it more likely to avoid bugs, because without this, it's very easy to change a subvi or a typedef and have it break some other code which you were not paying attention to at that time. The other benefit is dynamic dispatch. If you've ever passed data in a variant (or a binary string, or a cluster with optional fields), and then accompanied that with an enum which tells receiving code how to unpack the variant, then you are better off using LabVOOP/Dynamic Dispatch because it does the same thing with less code and less chance for error.
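The variant-plus-enum pattern and its dynamic-dispatch replacement can be sketched side by side in Python for comparison (the shape classes are invented for illustration; LabVIEW's actual mechanism is LabVOOP dynamic dispatch, which works analogously but graphically):

```python
# Style 1: tag plus payload. A tag tells the receiver how to unpack
# the data; the tag and the unpacking code must be kept in sync by hand.
def area_tagged(msg):
    kind, payload = msg
    if kind == "circle":
        return 3.14159 * payload ** 2
    elif kind == "square":
        return payload ** 2
    raise ValueError(kind)

# Style 2: dynamic dispatch. Each class carries its own unpacking
# logic, so there is no tag to keep in sync and no way to mis-tag
# a payload -- less code and less chance for error, as the post says.
class Circle:
    def __init__(self, r): self.r = r
    def area(self): return 3.14159 * self.r ** 2

class Square:
    def __init__(self, s): self.s = s
    def area(self): return self.s ** 2

shapes = [Circle(1.0), Square(2.0)]
areas = [s.area() for s in shapes]    # dispatch chosen per-object at runtime
```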
  20. QUOTE (shoneill @ Dec 17 2008, 08:35 AM) I think this is right on, but when I tried to make my first XControl, even with about 15 years of LabVIEW experience, it took me at least a full day or two to get it working to my satisfaction. I could easily see spending a week or more on a moderately complicated control, even after getting familiar with the tool. I see lots of benefits to making reusable UI components, but I think the economics don't justify making a lot of XControls. I would love to hear any success stories, though. Anyone? Bringing this back to the original question, I would focus on OOP first, since it can change the way you think about programming, usually for the better.
  21. QUOTE (Tomi Maila @ Dec 11 2008, 06:22 AM) QUOTE (Tomi Maila @ Dec 15 2008, 12:25 PM) The important factor will be development speed and quality and ability to flexibly adapt to changing needs. I also tend to prefer agile and other iterative processes, where all the requirements are not known upfront. I think you partly answered your own question. Sometimes you will have to educate your customer that your services are different than those of a plumber, accountant, mechanic, or lawyer. Unlike all of those, you are generally doing something which has never been done before and is not 100% predictable. The customer, or the people they answer to, can get confused about that.
  22. QUOTE (normandinf @ Dec 13 2008, 11:42 AM) I didn't see where anyone suggested looping a string with array subset. I'm not sure why a string would be used at all. crelf stored the patterns in a spreadsheet, but it seemed like he was doing that for lack of a better idea. Presumably the patterns would be loaded from a file just once, so speed wouldn't matter, but chris didn't really provide enough details. The most efficient way to store the pattern and the mask is to pack the booleans into integers. The pattern was 128 bits, so you could use four U32s for each pattern (and 4 more for the mask), but you're probably right that integers or booleans are fast enough.
  23. QUOTE (Mark Yedinak @ Dec 12 2008, 03:43 PM) I don't think you can force a cluster to do this. The cluster border will scale, but the contents won't. However, it should be easy to unbundle the cluster items for the purpose of displaying them. Then if you want them to scale in different ways, use splitter bars and panes to do it (I think that's exactly why they were added to LabVIEW, though I haven't tried). Look at C:\Program Files\National Instruments\LabVIEW 8.6\examples\general\controls\splitter.llb\Multi-Panel Front Panel using Splitter Bars.vi That example just has the graph scaling with its pane, but if you set the other controls to scale with their panes, you start to see the possibilities. Good luck.
  24. QUOTE (crelf @ Dec 12 2008, 02:28 PM) The standard way to do that is with bitmasks. QUOTE (crelf @ Dec 12 2008, 02:28 PM) Match: pattern = 1 0 1 0 . 0 1 0, instrument = 1 0 1 0 1 0 1 0 So in this case, your criterion consists of a mask: 1111 0111 (in other words, 1 = care, 0 = don't care) and a pattern: 1010 0010 (in this case, make sure the don't-care bits are also zero), and your code looks like: (instrument AND mask) =? pattern though it might be safer to do this (it only helps if you trust the mask more than the pattern): (instrument AND mask) =? (pattern AND mask) Of course LabVIEW will handle integers or boolean arrays with equal aplomb. You can use the mask with other boolean operations to test for just the zeros, or just the ones, or toggle selected bits.
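The masked compare above, worked in Python with the 8-bit values from the post ('.' is the don't-care position):

```python
# Bitmask matching as described above: 1 = care, 0 = don't care.
mask    = 0b11110111   # care about every bit except bit 3
pattern = 0b10100010   # the don't-care bit is held at 0

def matches(instrument):
    # Masking *both* sides is the safer form: a stray 1 in a
    # don't-care position of the pattern can't cause a false mismatch.
    return (instrument & mask) == (pattern & mask)

assert matches(0b10101010)       # differs only in the don't-care bit
assert not matches(0b00101010)   # differs in a care bit
```

For the 128-bit patterns mentioned earlier, the same comparison works unchanged on Python's arbitrary-precision integers; in LabVIEW you would AND and compare four U32s per pattern.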