jdunham

Everything posted by jdunham

  1. I mostly hate it, but I leave it on. It's annoying when doing initial testing. However, very occasionally it finds an error cluster which I forgot to wire, sometimes in the middle of a crowded diagram, and that makes it worth it.
  2. This seems like you may be asking for help with your homework or university lab work. We're happy to help, but you would need to post your code and what you've tried, and a better description of why it's not working. And if it's not homework, you need to do that anyway. Good luck,
  3. QUOTE (pallen @ Aug 20 2008, 07:27 AM) I really don't think this is worth the trouble. It's an annoying nightmare that never ends. I had one situation where I had LV 8.2.1, and the customer was sticking with 8.2 (since NI wanted to charge $ for that .1 upgrade). It's pretty much impossible to get work done in the new version without saving everything, since closing any front panel will hit you with a dialog box. Then if you save everything locally, it becomes difficult to see which VIs actually got modified with your changes. I was using version control, so it looked like everything needed to be checked in. Sure, you can figure it out from file modification times, or keep good notes, or keep saving the entire hierarchy to a new location with "Save for Previous Version...", but I found it very anti-productive, and I finally got the end-user to upgrade and put me out of my misery. If I were doing it again and had a disk space issue, I would copy LabVIEW 8.2 itself to an external hard drive and try to run LabVIEW off that, rather than try to maintain a program in the wrong version. I think you'll regret losing your 8.5.
  4. Back on topic: one question I like is to make a diagram (I have a standard one), remove all of the comments and most of the labels, and ask the interviewee to figure out what it does. More specifically, they have to choose good labels and write appropriate comments. This tells me whether he/she can read diagrams, figure stuff out, choose sensible variable names and write clear comments. At the same time it's a lot faster than writing code from scratch. It's really a different skill set than writing, so it can't replace coding samples and other tests, but it's helpful to see how the person thinks and how they will dive into our existing codebase.
  5. QUOTE (Aristos Queue @ Aug 19 2008, 08:21 PM) Sometimes I think NI is listening to the wrong customers. QUOTE (Aristos Queue @ Aug 19 2008, 08:21 PM) That was eight years ago. This developer went on to develop LV classes, propelled to some degree by wanting a structure in LV that could express the associative map that had been implemented all those years ago. And now, today, there are two very different map implementations: and (search for "LabVIEW Generic Container Map") Well, if you run into that developer again, please tell him that eight years is a long time to wait, and we're still waiting (since the first one has a performance bug and the second one is languishing in pre-release status). Lest anyone think I am totally hijacking the thread, I am assuming that Heiko is implementing some kind of associative array, since that's what the OpenG Dictionary is for. At some point, after so many people have reinvented the wheel, I hope we can get a native implementation with a great API and fast performance. Thanks, AQ, for your efforts to move this forward!
  6. QUOTE (Aristos Queue @ Aug 19 2008, 04:36 PM) I was writing my reply when his was posted and I didn't see it until I submitted. :-) Wow, I don't usually slip in there first. BTW, wouldn't it be great to have a native associative array or hash table in LabVIEW? It could be just like the OpenG dictionary but with super-fast lookups and native polymorphism. It would fit in really well with the queue and notifier palettes. Just an idea (whose time has come, and is maybe a bit overdue).
  7. QUOTE (Ton @ Aug 19 2008, 11:35 AM) Ton is partly right that those questions can affect how well one performs a job. You could argue that people with children are less likely to work overtime, but then people without a spouse or children are more likely to switch jobs or move to a different city for personal reasons. At any rate, it's almost moot. In the USA it is just not OK to ask about marital status or even citizenship during the hiring process. (If US citizenship is required, you can ask a yes/no question about whether they are eligible.) All you can do is state what level of effort and/or travel the position will require. If you can still ask more personal questions in Europe, then fine, but this limitation has not yet caused the collapse of the US economy (I prefer to blame that on our elected officials).
  8. I wasn't at NI Week, but I would take that comment to mean that LabVIEW has no way of unloading the public and private VIs and CTLs which make up your class. A specific instance of a class (data on a wire) should be instantiated and garbage collected according to the same rules as normal LabVIEW data. If you put your class initializer routine in a for loop and ran it a million times, then those million copies would not persist until LabVIEW quits, but only until the data on the wires have run their course.
  9. QUOTE (LV_FPGA_SE @ Aug 19 2008, 08:01 AM) The Windowing functions are also commonly used for this purpose. (Palettes -> Signal Processing -> Windows) The Cosine window is pretty good. It's easier than a ramp because you don't have to calculate the starting and stopping. It gets trickier if your signal spans multiple iterations of the loop, and if you can't pre-generate the whole signal at once, but it doesn't seem like you are in that situation. (A rough sketch of the windowing idea follows below.)
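     Not LabVIEW, but for anyone who wants the idea in text form, here is a minimal sketch in Python/NumPy; the sample rate and tone frequency are made up:

        import numpy as np

        # Taper a generated tone with a raised-cosine (Hann) window so it
        # starts and ends at zero instead of switching on and off abruptly.
        fs = 10_000                      # sample rate in Hz (made-up value)
        t = np.arange(fs) / fs           # one second of samples
        tone = np.sin(2 * np.pi * 440 * t)

        window = np.hanning(tone.size)   # raised-cosine window, zero at both ends
        tapered = tone * window          # element-wise multiply, same idea as in LabVIEW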
  10. QUOTE (NeverDown @ Aug 18 2008, 10:09 AM) How are you doing the resolve? Are you just editing the VI? I haven't noticed any problems. You can also manually resolve conflicts. Usually when a VI conflict happens, you will get three files instead of one: MyFile.vi, MyFile.vi.r1234, and MyFile.vi.r1200. If you just delete the two extra files, Tortoise will think you have resolved the conflict. If that doesn't work either, then you may have more serious problems with your working copy.
  11. I looked at your VI. You are doing a continuous generation and not monitoring the error cluster. I think your DAQmx write buffer is regularly running out of samples (especially since your buffer size is one), though I only glanced at the code. One solution: don't use buffered output at all. You should be able to output immediate non-buffered values to the analog out, though it is possible they removed this functionality when NI-DAQ turned into DAQmx a few years ago. Second possibility: somewhere in the analog output task property nodes there is an option to regenerate the analog buffer, which basically implements a circular buffer in the hardware FIFO. If you have that, then having a one-sample buffer continuously generated will do what you want. Sorry I don't have time to find the exact property today, but there's a rough sketch of the regeneration idea below. Third possibility (not recommended): use a much larger buffer and make sure that you don't fall behind. This is probably not worth the trouble, since it is more programming and will still probably fail sometimes. Good luck.
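     Here is roughly what the regeneration approach looks like in text form, using the much newer nidaqmx Python package rather than LabVIEW; the device name, rate, and waveform are placeholders, and I believe the LabVIEW equivalent is the DAQmx Write property RegenMode set to Allow Regeneration:

        import numpy as np
        import nidaqmx
        from nidaqmx.constants import AcquisitionType, RegenerationMode

        waveform = np.sin(np.linspace(0, 2 * np.pi, 1000))    # placeholder buffer

        with nidaqmx.Task() as task:
            task.ao_channels.add_ao_voltage_chan("Dev1/ao0")   # placeholder channel
            task.timing.cfg_samp_clk_timing(
                rate=10_000.0,
                sample_mode=AcquisitionType.CONTINUOUS,
                samps_per_chan=waveform.size,
            )
            # Let the driver keep re-outputting the same buffer (a circular
            # buffer), so the task never starves even if the software loop stalls.
            task.out_stream.regen_mode = RegenerationMode.ALLOW_REGENERATION
            task.write(waveform, auto_start=False)
            task.start()
            input("Generating; press Enter to stop.")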
  12. QUOTE (zorro @ Aug 14 2008, 02:37 PM) [this should be moved to the Hardware Forum] I think we'd need to know more about how you are generating this signal. I assume you are using an E-series National Instruments multifunction I/O card, but I couldn't find a model 3016 anywhere at ni.com. If you are using E-series and the DAQmx drivers, then is the task being interrupted? If you are continuously writing to the card in a software loop, then maybe you have a bug in your code which manifests itself only occasionally. Please post your VI which controls the analog output channel. (If you haven't encapsulated this into a subVI, now is a good time to do so!) If you are using the E-series card in the correct way, then you should not see any dropouts.
  13. QUOTE (Yair @ Aug 14 2008, 10:53 AM) No, but I sure will give it a try. That's why I posted to the forum. Have you been using this successfully with an SVN workflow?
  14. QUOTE (patufet_99 @ Aug 14 2008, 09:46 AM) OK, one last try. When you read the analog input, you should be able to know the cumulative DAQmx scan mark of the beginning of your current buffer, and thus the scan mark of your trigger event, which is some short time later. You can then call the property DaqmxRead.TotalSampPerChanAcquired. Don't be put off by the mysterious name; this extremely useful property tells you the current value of your analog task's scan clock. So this value minus your trigger event scan mark tells you how long it's been since the trigger event in real time (of course you have to use your sample rate to convert this to seconds or milliseconds). Then you can use "Wait (ms)" to wait the remaining time required for your constant delay (the arithmetic is sketched below). Obviously you would test this a few times and pick a constant delay large enough so that your remaining wait time is always a positive number. With this technique, and no obvious performance/design problems in your code, you should get repeatable accuracy of 2-5 ms (just a guess). If you want nanosecond accuracy and/or hard real-time performance, you can accomplish the same thing with the counter/timers on your board and some fancy DAQmx setup. Please PM me if you need help with this. Have fun, Jason
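     The arithmetic, in plain Python with made-up numbers (if you were using the later nidaqmx Python package, my assumption is that the same counter lives at task.in_stream.total_samp_per_chan_acquired):

        import time

        sample_rate = 1000.0        # samples per second (made-up)
        desired_delay_ms = 50.0     # the constant delay you want after the trigger event

        trigger_sample = 12_345     # cumulative sample index where the trigger landed
        total_acquired = 12_370     # the task's running sample count (TotalSampPerChanAcquired)

        elapsed_ms = (total_acquired - trigger_sample) / sample_rate * 1000.0
        remaining_ms = desired_delay_ms - elapsed_ms    # how much of the delay is left

        if remaining_ms > 0:
            time.sleep(remaining_ms / 1000.0)           # the "Wait (ms)" step
        # ...now set the digital output; the total delay since the trigger stays ~constant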
  15. I also want to mention that Subversion's awesomeness also includes efficient binary deltas for file changes. What that means is that if you're someplace with a slow internet connection, and your co-worker fixes a nasty bug in your biggest disk-hogging GUI VI, then many times your SVN update will still be really fast, because SVN will only need to send the newest part of the file to you. I don't know how Perforce does this, but some other systems either have to send the whole file, or can only do this with text files. Also, SVN works pretty well offline. If you are on a plane, you can still revert to the last server version without an internet connection.
  16. QUOTE (gmart @ Aug 13 2008, 05:34 PM) Yes! So the problem is if I have a typedef, and it's used in 70 calling VIs, I'm going to open and inspect all of the callers to see whether changes are needed. Let's say I need to change four of the callers. When I close the other 66 VIs, I have to be very careful and decline to save the changes. If I go ahead and save the callers, then when I check my code back into SVN, it's very hard to remember which four of the VIs had real human-initiated changes. If I just check all 70 VIs into source control, and then my co-worker tries to check in the three he changed, they will have conflicts. He'll ask me where my changes are, and I'll rack my brain to remember which ones matter. That just doesn't work. We can use LVdiff, but you can waste 5-15 minutes per VI on that, and it's easy to make a mistake. So instead, we make sure to check in all the recompiles with a separate check-in. The commit log has special text, and we have a tool that can tell us whether a given VI has real changes or just recompiles. We have written special merge tools to figure out whether there are real conflicts, because SVN thinks that many of the VIs are conflicted. Since merging is a pain, we only do it once a week or so, and without this system and this tool we would have to manually resolve hundreds of conflicts, many of which are just an artifact of LabVIEW's VI storage system. Even with our special tricks, I still have the problem that if I actually try to use the code (maybe I want to be radical and test my changes), those unsaved VIs will keep popping up save dialog boxes every time a panel is closed. That's so frustrating that pretty soon I will have to bite the bullet and save all the VIs. I try pretty hard to save my work as I do it, so at this point *hopefully* I can check in my true changes. After checking them in, I will recompile everything and check that in with the separate check-in. Then I can get my testing done, crossing my fingers that I don't have to back out the changes. The alternative, which I use sometimes, is to change the typedef and check that in, then save all and check in everything, and then find the callers which need changing and save those. This has the same problem: if the implementation is not so great and needs a lot of changes, or backing out to old versions, then I have a big conflict-resolution mess on my hands when I go to merge with my co-workers. So some of our tools make this easier, but pretty often (like now) I feel that the time spent on these hassles nearly offsets the amazing productivity gains we get by using LabVIEW in the first place. When you throw in all the tools we had to write to make real automated builds with good SVN-integrated versioning, I wonder if we should have ported to a different language long ago. OK, ranting aside, my purpose in writing this was to see if other people are having the same problem and if there are any innovative solutions we could share, and secondly to see if anyone at NI (gmart, I guess that's you, thanks for listening!) has the faintest glimmer that this is a problem that could benefit from better internal tools. It's not so much the SVN source code integration (well, that would be great), but a way of telling whether a VI has real changes versus typedef-induced recompiles, and support for turning off the automated saving in this very-frequently-occurring situation. <Zippy> Are we off-topic yet? </Zippy>
  17. QUOTE (gmart @ Aug 13 2008, 03:15 PM) So if I don't want to check out the callers, because I don't want to recompile just yet, can I get any work done, or will I constantly be asked to save changes every time one of the callers leaves memory? I think that has kept me from making this transition so far. I want to delay saving for any VI which didn't see a real change, so that I can limit my SVN check-in to the files which I actually worked on. Once I save the recompiles, it's very difficult to track which VIs have changes that matter and which don't. I only want to check in VIs with true human-initiated changes, so that I can minimize conflicts with my co-workers, and I can figure out which VIs were edited if there is a new bug. If I save every VI that wants to be saved and check in all the "changes", then merging with other branches is very, very difficult. My current solution to this is to refuse to save VIs which I haven't personally modified, but I have to say it's a pain in the neck. I suspect the SCC stuff doesn't help with this, but I would like to raise awareness and hold out some slim hope that others have solved this a bit more elegantly.
  18. QUOTE (Tom Bress @ Aug 13 2008, 01:58 PM) I totally agree that if you are working alone, you don't need any kind of locking (check-in/check-out). But even if you are a lone developer, you may find yourself merging. If you have a certain snapshot (SCC revision) which is known to work, and maybe you've released it to someone who is paying you money, you should always tag that. Then you can do new development in your main version (the trunk), breaking the system horribly. If the end-user needs a small fix, you make a branch from your tag, release the update right away, and then merge your fixes back into the trunk when you have the time. In Subversion there is no difference between a branch and a tag, but to stay sane, you should keep them separate, and never edit source code in the tag area. Any fixes should happen in your branch area.
  19. Our group has several developers and over 3000 VIs, and we are using SVN with no SCC integration, using a copy/merge model rather than check-in/check-out. On the whole it works well, but merging can really be a pain. We have some custom tools for ignoring recompiles in a merge, but it can be a headache. I also wrote my own tool with similarities to Ton's project scanner (no fancy GUI or anything). I'd rather scrap that and use a LAVA open-source tool. I would be interested in opinions about whether a switch to check-in/check-out would be useful (sorry to hijack the thread somewhat). The benefit would be the SCC integration (PushOK), the project scanner, and any LVProj support, and the risk would be that having worked on the merge model for so long (4 years) would make the changeover too difficult for the team. How does the SCC plug-in handle times when you change a typedef and several hundred VIs automatically recompile?
  20. QUOTE (patufet_99 @ Aug 13 2008, 02:29 AM) You don't have to wait for an entire buffer to fill before you read the data. If you read the data as fast as possible, you can probably sample each point and test your trigger conditions a lot more often. Then you can control the digital output immediately, and you would have a repeatable 1-2 ms latency. (A rough sketch of that polling approach follows below.)
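     A rough sketch of that polling idea in text form, using the much newer nidaqmx Python package rather than LabVIEW; the device, rate, threshold, and chunk size are all placeholders:

        import nidaqmx
        from nidaqmx.constants import AcquisitionType

        THRESHOLD = 2.5   # volts, placeholder trigger level

        with nidaqmx.Task() as task:
            task.ai_channels.add_ai_voltage_chan("Dev1/ai0")   # placeholder channel
            task.timing.cfg_samp_clk_timing(10_000, sample_mode=AcquisitionType.CONTINUOUS)
            task.start()
            triggered = False
            while not triggered:
                # Read only ~1 ms of data at a time so the trigger test runs
                # often, instead of waiting for a large buffer to fill.
                chunk = task.read(number_of_samples_per_channel=10)
                triggered = any(sample > THRESHOLD for sample in chunk)
            # ...set the digital output here as soon as the loop exits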
  21. QUOTE (patufet_99 @ Aug 12 2008, 11:36 PM) I think we need some more information. Are you using an NI hardware product? E-series? M-series? What is your sample rate (it seems like it must be slow if your waiting scheme is working)? If your process is continuous, then what are the conditions for resetting the digital output? If your NI card supports buffered digital output, then you should be able to set the trigger for that task to the analog trigger signal. If not, then you should be able to set up one of the counter/timers either in pulse generation mode or terminal count mode, and trigger that from the analog signal. Either way you should be able to get 50-nanosecond response times for an E-series card, and you should be able to add any delay you want and it will be very precise. Either of those methods will require you to learn about DAQmx timing, triggering, and synchronization, but there are plenty of examples in the NI Example Finder. Probably none will do exactly what you want, but if you get stuck, you can ask more detailed questions or post your code. Good luck.
  22. QUOTE (Antoine Châlons @ Aug 8 2008, 08:10 AM) If you search the LabVIEW help for "extended mantissa", it comes right up. It says that on Windows, Linux, and Intel Mac, EXT is 80-bit, and on PowerPC Mac it is 64-bit (same as DBL). You should be able to cast the EXT value to a string and count the bytes, or send an email to support@ni.com; either of those would get you an authoritative answer. (A quick empirical check is sketched below.)
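     If you'd rather poke at it than trust the help file, here is an analogous check in Python/NumPy; the assumption is that np.longdouble maps onto the platform's extended type, which it does on typical x86 builds:

        import numpy as np

        # Storage size vs. actual precision of the extended type on this machine.
        print(np.dtype(np.longdouble).itemsize)   # bytes of storage (often padded to 12 or 16)
        print(np.finfo(np.longdouble).nmant)      # mantissa bits (63 for x87 80-bit extended)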
  23. QUOTE (sachsm @ Aug 7 2008, 05:56 PM) That's already available in the OpenG lvdata package. The VI is called Get Strings from Enum__ogtk.vi. It takes a variant as input and gives you both the list of strings, and the current value's string.
  24. I'm not understanding why you don't use the typedef enum in the SubVI as well. Why pass the value as a string? The more strong typing you have in your code, the fewer bugs you'll have. Other than that it seems like a harmless idea. It's sort of like how you can wire any kind of data into a variant. It's pretty handy, but it can also lead to bugs. I would prefer to be forced to use the "To Variant" function, so that my own worst enemy (me) would have a harder time making wiring mistakes which don't break the wires. I also don't understand why NI doesn't use any typedef enums with its property nodes, like enabled/disabled/grayed-out. At the very least that property node ("Disabled") could create a ring when you pop up on it, instead of just an integer. I assume you already know that the Format Into String function is an easy way to get the enum's current string value, though it would be nicer for enums to have their own special conversion function. Scan From String does a pretty good job in the other direction too. (The round trip is sketched in a text language below.)
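     For what it's worth, the enum/string round trip described above looks like this in a text language; this is a Python analogy, not LabVIEW, and the enum names are made up:

        from enum import Enum

        class ButtonState(Enum):      # plays the role of the typedef'd enum
            ENABLED = 0
            DISABLED = 1
            GRAYED_OUT = 2

        state = ButtonState.DISABLED
        as_text = state.name                 # "DISABLED" -- like Format Into String
        round_trip = ButtonState[as_text]    # back to a strongly typed value -- like Scan From String
        assert round_trip is state           # a misspelled name raises KeyError instead of passing silently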