Everything posted by jdunham

  1. Well I'm not going to google around for formal definitions of dataflow, but to me dataflow means that the order of operations is determined solely by the availability of input data. Of course most mainstream languages set operation order based on the position of the instruction in the source code text file, with the ability to jump into subroutines and then return, so that's the cool thing: dataflow is totally different from normal programming languages. (I know you knew that already.) The basic syntax of LabVIEW is that data flows down a wire between nodes. I'm calling that "pure dataflow". Any time data goes from one node to another without a wire (via queue, notifier, global, local, shared variable, writing to disk then reading back, RT FIFO, DVR, etc.), you are not using pure dataflow. All those things are possible and acceptable in LabVIEW because it turns out that it's just too hard to create useful software with pure dataflow syntax alone. One thing I do take as a LabVIEW axiom is "Always prefer a pure dataflow construction". In other words, don't use queues, globals, and all that unless there is no reasonable way to do it with just wires. Well anyway, that's what I meant. If you call your diagram "pure dataflow" then you don't have to agree with anything else I said. Of course your diagram is perfectly valid LabVIEW, because LabVIEW is not a pure dataflow language; it's a dataflow language with a bunch of non-dataflow features added to make it useful. You could say the data is "flowing through" the queue, but for me the dataflow concept starts to lose its meaning if you bandy the term around like that. So my definition of pure dataflow is different from Steve's in the original post, but it's a definition that is more useful for me in my daily work of creating LV diagrams. Sorry for the confusion. Jason
  2. One option is MSDN Operating Systems; it's what I have, though it sounds like I could be leveraging it more. A $699 subscription gets you 10 license keys for testing for most OSes, and I think you can request more if you need them. It's meant for testing, not for sharing with friends, but testing is exactly what you are doing. In addition, Microsoft has knowledgeable salespeople you can call on the phone (or chat with online). I think there are thousands of people who are doing the same thing with virtualization, so they should be able to point you in the right direction. However, you may get better results if you don't actually call them Nazis when you discuss your situation with them.
  3. I agree you could make LabVIEW "more dataflow" by changing the semantics of the language; that is, by changing the rule that a node with multiple outputs releases them all at the same time. However, I don't see that as being useful. So many of the advanced programming techniques like the queued message handler, notifiers, the event structure, and even the lowly global variable are useful in LabVIEW precisely because they violate dataflow. If you could do it all with dataflow, you wouldn't have to fiddle with those advanced techniques. The dataflow paradigm is really great for LabVIEW, but in its pure state it doesn't really get the job done. I don't think any more purity would help.
  4. Please vote for conditional array indexing! http://forums.ni.com/t5/LabVIEW-Idea-Exchange/Conditional-auto-indexing/idi-p/918682 http://forums.ni.com/t5/LabVIEW-Idea-Exchange/quot-Conditional-Append-quot-at-array-indexing-tunnel/idc-p/1116902 see also http://lavag.org/topic/7574-conditional-append/page__p__44719entry44719 There is an OpenG function which does this, but a native polymorphic version would be so much better. A lot of times I have an array of clusters, and I want to filter the array by testing the value of one of the elements (there's a rough Python analogy of this after the list). This feature would really help clean up the code. Jason
  5. I find it hard to believe that anyone claims never to overwrite errors. I'm not sure how you detect the presence or absence of a file without trying to open it, getting error 7, and then filtering it. I suppose you could list the containing folder, but that seems a lot harder. There's nothing wrong with the former method of generating an error and then clearing it. In addition, there are plenty of routines that can throw a User Cancel error as a normal part of user interaction. So maybe the clarification is that you should never overwrite an unexpected error. We have a Filter Error function that is useful (see the sketch after this list); I've never figured out why something like it hasn't been added to vi.lib. The Clear Errors function clears all errors, but you should only be clearing expected ones! As far as generating errors goes, we have a function that dumps the calling path into the error source, which I'm sure is pretty common. It also lets you wire in anything, and it uses the OpenG Lvdata tools to serialize the arbitrary data to a string, so you get a data dump in the error message.
  6. I still use TeraTerm, which is ancient but works just fine. http://en.sourceforge.jp/projects/ttssh2/ Jason
  7. Thought of that too! Oh sure, that's what they all say... Anyway, as long as you understand the potential for quagmire, you are slightly better prepared to avoid it. Good luck! Jason
  8. I have noticed that if I do a Find operation in the LabVIEW Project, and the project is large, then any change makes LabVIEW unresponsive for 30-120 seconds, and the Z-order of the open windows is shuffled around. This does not include "Find and Replace" (Ctrl-F in a VI) or "Find All Instances" (right-clicking on an icon), but it does include "Find in Project" (Ctrl-F in an open project) or "This VI in Project" (Ctrl-Shift-E in a VI). If I close LV and reopen it, the problem goes away until I use the Find function again (which I have learned not to do). It does not always happen, but if many windows are open, it is more likely to occur. I don't have a CAR # for this, because I can't always reproduce it. Bruce, does this match anything you experienced?
  9. There are some ideas at http://lavag.org/topic/13706-text-labels-on-x-axis-of-plot/
  10. In their infinite wisdom, Microsoft have determined that file extensions are too hard for you, so the default is to hide them and show you an icon instead. This is one of the first things I turn off on a new machine. In the Windows Explorer menu (press "Alt" if no menu is showing, another dubious UI improvement), choose Tools -> Folder Options -> View -> Advanced Settings and uncheck "Hide extensions for known file types". Since the extension is really part of the file name, I don't understand where the obsession with hiding it comes from, but things will make more sense after you turn off that option.
  11. Yeah, there's lots of suck to go around. One tip: when you open WinZip, use "Run as Administrator" (it's in the right-click menu), and then you will be able to write into restricted folders such as \Program Files. Don't run 64-bit LabVIEW if you want to build EXEs which can be run on normal (32-bit) Windows.
  12. Well that does look like a pretty sweet device. However, you're going to need three of them and a USB hub, so that's about $200, and you could have bought an NI 6601 ($400), a $60 ribbon cable, and a cheap connector block from http://www.daqstuff.com/68_pin_daq.htm and been off to the races. That's a little more money, but you wouldn't have to write your own LabVIEW drivers for the device. Of course you'd still have to wire up your own differential level shifter chip, so you may have done the right thing after all. The NI card should have much higher data rates, but that may not be important for your project. Anyway, good luck with it.
  13. Hi Stuart: One use for a plugin/OOP architecture is debugging/logging. I have a VI that I sprinkle around in my code that logs messages and errors to a text file. I think a lot of programmers have something like that, and I think your students can understand the value. So if you make it object oriented, it's much easier to change the way it behaves; for example, I have a version that emails me when there is an error. You could also use it to post status messages to a web page, or put up certain error codes in dialog boxes. But all the code that calls this logging VI has no need to know how it works (there's a rough Python sketch of the idea after this list). If I distribute the code, I just have to distribute the base class, and the user can ignore it or write their own descendants of the class to meet their own needs. For this implementation, I keep the object in a global, so that I don't have to pass the lvclass wire through all the hundreds of routines that call it. Jason
  14. That's double-clicking, to be precise. That will also pan the block diagram so that the terminal is on-screen rather than off to the side. You can exploit that by resizing the diagram window to a couple of centimeters on each side. Then when you double-click on the control, it will "zoom" right to the terminal location. Then you can restore the window size and see where the heck you are. If you see the marching ants but the terminal is invisible, it may be behind the edge of a loop or structure. Those act like real-world glass windows: you can see stuff in the field of view, but other stuff in that universe which may be tucked off to the side is not visible until you move it into the "glass" part of whatever structure you are looking "through". Of course you probably have lots of structures within structures, which will add confusion, but you should be able to work it out. Jason
  15. So wait, why aren't you using an NI counter/timer card? They are awesome at this. Each encoder should be able to use one of the counters, and the other quadrature channel typically goes into the digital I/O port. Your local NI sales rep can usually do a much better job than I can of walking you through the nitty-gritty of picking the right card and connectors. You may need an RS-422-to-TTL level shifter IC. You can either get one for $1.50 from Digi-Key, or if you are allergic to chips, you can get one packaged in a $45 device from B&B Electronics and get another $20 of connectors to make it work with your breakout box.
  16. I don't know the details of your application, but is it necessary to run all your code together in the same application? Sure, it would be nice if LabVIEW were re-architected to make this all work, but it shouldn't be necessary. You can run a separate instance of LabVIEW (copy the EXE to LabVIEW2.exe) and run your servers there, or just build them into a separate application. In the real world, your client/server should be running as a service. We use FireDaemon to accomplish this for our client/server system, which can accept an unlimited number of connections. Services generally don't provide a user interface directly, so it's not unreasonable for NI not to have planned for this. You have a manager or client app which uses files, a database, or TCP/IP on the localhost to pass events and data to the server (there's a bare-bones Python sketch of that after this list). In general this should make your application more robust. Edit: OK, I just read your posts over on the dark side. You already know all the stuff I wrote, and AQ told you that this is not going to change anytime soon. Why are we having this discussion?
  17. Error -603 is thrown when some LabVIEW code looks in the Windows Registry for a key and doesn't find it. Sounds like maybe the key names changed between versions. Hopefully you can look for registry access items within the LabVIEW source code and use regedit.exe to browse the registry to see whether you can find the key it's looking for (there's a small Python sketch of checking for a key after this list). Good luck.
  18. Whoops, I screwed that up as well! Right, but you should at least consider using the packager to distribute your own tools between your projects, or be ready to come up with an equivalent workflow. It's OK, I'm just being snarky. Have fun!
  19. Well I think you've figured out that you want to keep your library code in a separate folder and keep it under managed development with the help of a source code control system. I use Subversion, but I don't think the choice of system matters all that much. Where the real problem arises is in making sure that improvements to your library don't break your old projects. Even if you think it won't happen, it certainly will, and Murphy's law dictates that it will happen in some dangerous or at least embarrassing way. So what is important is a workflow where you link your active projects to your main library "trunk" code, and then, when your projects are feature-complete and headed into a predominantly testing phase, you freeze the version of the library code used in that project into its own branch. Then if you do find a problem in the frozen branch and you fix it there, you have to remember to merge it back to the trunk and to any other project which might be vulnerable, and don't forget that merging really sucks in any language and is truly awful with LabVIEW. Or else you have to have the self-control to fix it in the library and push new versions of just those libraries you fixed into the otherwise frozen code. If you don't have granularity in your library versioning, then you're back to the risk of fixing one thing and breaking two more. Of course, as you push a fix into your frozen code, you should check the check-in logs and make sure you understand all the other changes you've made to the library's trunk and decide whether those other changes are putting your code at risk. Obviously the amount of effort you put into all of this depends on the risk tolerance of your project, but you are better off enforcing good discipline in your workflow before you start programming the reactor core control rods than after. If you're unlucky enough to be a freelance consultant, you'll probably end up having active projects stuck in different versions of LabVIEW, which will just make the problem worse. Either you stick with developing library code in the oldest extant version (which is a bummer) or else you get even more careful about handling your changes. So I think VIPM (google it!) is supposed to help quite a bit with this, and I'm sure the JKI guys will pipe up with more insight. Sheepishly I admit that I don't yet use it for my own reusable code. However, I would say that using any tool is not as important as understanding the basic problems around versions, reliability, backwards compatibility, and unintended consequences. The point is acknowledging that copies of your library code will proliferate and managing that, rather than pretending it won't happen because you're such an awesome coder and your designs come out so well the first time. Jason
  20. You could also use graph cursors or annotations, and position them just under your bars. That would look decent enough if you set the Y scale minimum to -1 rather than zero, and then you could put the labels at Y = 0.5 or something. You'd fill the bars down to zero rather than down to -infinity. You would have a few compromises with the appearance, but the zooming &c. would all work. Bar Graph With X Labels.vi
  21. What if the metric were the actual percentage of requests handled within 2 days, or something similar? I bet LAVA would score well without anyone having to give a future commitment on support, which is not really feasible with open software.
  22. Seems like it would be easier to use several graphs, hide all the Y axes except for the first, and use user events to keep the plots on the same Y scale. If you were insane, and had nothing better to do for the rest of the month, you could make it into a sweet XControl. Depending on how many traces you need to display, you can hide some of the graphs and recompute the others' widths to fill the plot area.
  23. It costs thousands of dollars to process a clearance, and this cost is borne by the employer. I doubt you'll have much luck trying to get a clearance for a consulting gig. Plus, your employer is probably not stupid: if you ask for a clearance when you don't need one, it will be obvious that you want it to help you go look for another job. Maybe you can just ask for a salary boost and more challenges and skip the need to moonlight.
  24. I usually use Notifiers rather than User Events. Many of my notifier subscribers don't run GUI loops at all. Otherwise, I don't think there's too much difference, but the APIs are very similar between Queues and Notifiers, and it might clean up your code (there's a rough Python sketch of the queue-vs-notifier difference after this list).
  25. If you are updating the array with new values, you shouldn't need the In Place Element Structure. The Replace Array Subset function should not make a copy in most instances (there's a NumPy analogy of this after the list). In-place really only shines if you are taking data out of the array, modifying it, and stuffing it back in.
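
A few rough Python sketches of the ideas above; LabVIEW itself is graphical, so these are analogies, not LabVIEW code. First, the conditional-indexing idea from post 4: the record fields and the 10.0 threshold are invented for illustration, and the point is just that only elements passing the test end up in the output array.

```python
# Invented data standing in for an array of clusters.
measurements = [
    {"channel": "A", "value": 3.2},
    {"channel": "B", "value": 12.7},
    {"channel": "C", "value": 9.9},
    {"channel": "D", "value": 15.1},
]

# Conditional indexing: test one element of each "cluster" and keep only the passers.
passing = [m for m in measurements if m["value"] > 10.0]

print(passing)  # [{'channel': 'B', 'value': 12.7}, {'channel': 'D', 'value': 15.1}]
```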
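
For the error filtering described in post 5, a minimal sketch assuming a (code, source) tuple stands in for the LabVIEW error cluster and that error 7 (file not found) is the only expected code; the helper name and the file-existence example are mine, not anything shipped with LabVIEW.

```python
EXPECTED_CODES = {7}  # e.g. LabVIEW error 7: file not found

def filter_error(error, expected=EXPECTED_CODES):
    """Clear the error only if its code is expected; pass anything else through."""
    code, source = error
    if code in expected:
        return (0, "")   # cleared: no error
    return error         # unexpected errors propagate untouched

def check_file_exists(path):
    """Detect a file by trying to open it, then filtering the expected error 7."""
    try:
        open(path, "rb").close()
        return True, (0, "")
    except FileNotFoundError:
        return False, filter_error((7, f"open {path}"))

print(check_file_exists("definitely_missing.txt"))  # (False, (0, ''))
```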
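
The plugin logger from post 13, sketched under the assumption that a base class plus overriding descendants captures the same idea; class and method names are invented. Callers only ever touch the base class, so swapping in the email or dialog version changes behavior without touching them.

```python
class Logger:
    def log(self, message: str) -> None:
        """Default behavior: append the message to a text file."""
        with open("app.log", "a") as f:
            f.write(message + "\n")

class EmailLogger(Logger):
    def log(self, message: str) -> None:
        super().log(message)                       # keep the file record too
        print(f"(pretend we emailed: {message})")  # stand-in for a real SMTP call

class DialogLogger(Logger):
    def log(self, message: str) -> None:
        print(f"ERROR DIALOG: {message}")          # stand-in for a popup

# The code that reports errors never knows which descendant is installed.
current_logger: Logger = EmailLogger()

def do_some_work():
    current_logger.log("Something went wrong in do_some_work")

do_some_work()
```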
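
For the client/server split in post 16, a bare-bones sketch of passing one event to a headless server over TCP on localhost; the port number and message format are made up for illustration.

```python
import socket
import threading

PORT = 50007            # arbitrary localhost port for the example
ready = threading.Event()

def server():
    """Headless 'service' side: accept one connection and handle one event."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind(("127.0.0.1", PORT))
        srv.listen()
        ready.set()
        conn, _ = srv.accept()
        with conn:
            print("server handled event:", conn.recv(1024).decode())

t = threading.Thread(target=server)
t.start()
ready.wait()

# The manager/client side: fire an event at the server and move on.
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
    cli.connect(("127.0.0.1", PORT))
    cli.sendall(b"START_TEST channel=3")

t.join()
```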
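
Relating to the registry hunt in post 17, a small Windows-only sketch using the standard winreg module; the key path shown is just a placeholder, not the actual key behind error -603.

```python
import winreg  # Windows-only standard-library module

def key_exists(root, subkey):
    """Return True if the registry key can be opened for reading."""
    try:
        with winreg.OpenKey(root, subkey):
            return True
    except FileNotFoundError:
        return False

# Placeholder path -- substitute whatever key the failing code actually looks up.
print(key_exists(winreg.HKEY_LOCAL_MACHINE, r"SOFTWARE\National Instruments"))
```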
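
On the Queue/Notifier comparison in post 24, a rough contrast assuming queue.Queue stands in for a LabVIEW queue and a tiny hand-rolled class (names invented) for a notifier: a queue delivers every element exactly once, while a notifier only carries the latest value for any number of readers.

```python
import queue
import threading

# Queue: every element is delivered exactly once, to exactly one reader.
q = queue.Queue()
q.put("cmd 1")
q.put("cmd 2")
print(q.get(), q.get())  # cmd 1 cmd 2

# Notifier: only the latest value matters, and any number of readers can see it.
class Notifier:
    def __init__(self):
        self._cond = threading.Condition()
        self._value = None

    def send(self, value):
        with self._cond:
            self._value = value
            self._cond.notify_all()   # wake every waiting subscriber

    def wait(self, timeout=None):
        with self._cond:
            self._cond.wait(timeout)  # block until a new value (or timeout)
            return self._value

n = Notifier()
n.send("status: idle")
n.send("status: running")             # overwrites; readers only ever see the latest
print(n.wait(timeout=0.1))            # status: running
```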
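
And for post 25, a NumPy analogy (shapes and values invented) of replacing a subset of an array in place versus an operation that allocates a new array.

```python
import numpy as np

data = np.zeros(10)

# Analogue of Replace Array Subset: writes into the existing buffer, no copy.
data[3:6] = [1.0, 2.0, 3.0]

# By contrast, concatenation allocates a brand-new array every time.
data2 = np.concatenate([data[:3], [9.0, 9.0, 9.0], data[6:]])

print(data)
print(data2)
```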