Everything posted by Guillaume Lessard

  1. Adding to JFM's comment: The module is fairly expensive; you can run the RT OS on a standard PC either in parallel with Windows or on its own. There is also a variety of custom NI hardware available. Another (cooler, IMO) option would be the FPGA module, but that's even more expensive! Depending on your tolerance for cost and time, you might be better off requesting an evaluation package for some embedded processor and programming that directly. They usually come with a board with some voltage outputs and a PC-based compiler. It would probably be more time-consuming, but certainly less expensive. =/
  2. Do you need to decide during each loop iteration whether you'll be turning the laser on or off? If so, what you need is LabVIEW Real-Time, so that you can perform computations during each iteration of a timed loop. On the other hand, if what you need is a relatively slow on-off modulation of a fast-blinking laser (with two states, "blinking" and "off"), let me suggest a 555 timer circuit in astable mode, gated through the 555's reset pin (which effectively acts as an enable). You'd use your parallel port to drive that pin. http://en.wikipedia.org/wiki/555_timer_IC Never underestimate the power of the 555! (The standard astable formulas are sketched after this list of posts.)
  3. I'm late to this party, and this is a reply to Jimi's initial post. But first a comment to Aristos: LabVIEW already provides a host of facilities that allow a careless programmer to make it deadlock. Why oh why is adding one more an unthinkable action??? I've made my programs deadlock in plenty of ways, and I don't blame LabVIEW for letting me! Basically, multithreading is hard *even* in LabVIEW! My non-LabVIEW experience of multithreading is with Java, and I found that its facilities do fix most problems; one of my close friends is an expert at multi-process and multi-thread systems, and -- compared to the field of non-experimental languages -- he finds Java's facilities pretty good. I have yet to encounter anything to make me disagree! In any event, when synchronized blocks don't do it, you have to proceed to the next step and explicitly use mutexes and notifications -- LabVIEW already has those. (There's a rough Java sketch of both levels after this list of posts.) Back to my reply to Jimi: Your special synchronized wire is something quite complicated and unlike typical LabVIEW behaviour. It could all start with an automatic synchronization behaviour on methods, such that when the compiler finds a bundling operation on object data, the method would automatically synchronize on the object data. Obviously, methods that only read object data would *not* synchronize. Now, if a programmer chooses, they could turn off the automatic synchronization on any write method; a "synchronization structure" would then appear around the bundling operation. This new type of structure would allow for improved granularity on data synchronization. Tunnels into such a structure would synchronize on the object data wire; whether more synchronization options than that are needed is left for further discussion. Note that since the synchronization is a (rectangular) structure on a (finite) block diagram, there's automatically an exit point. Now, for more complicated stuff, you've got your mutexes and whatnot, and careful application of all that should manage to cover pretty much all cases. If not, there is probably a need to redesign ;b Feel free to tear it apart, but that seems good enough to me. As long as I'm allowed to stay away from the underlying implementation details, that is!
  4. On my first contact with LabVIEW back in 1997, I tried to see it a bit like you argue; being an *analog* electrical engineer by training, I thought about circuits. However, having been well trained in circuit design and analysis by some wizards of analog electronics, I just could *not* *understand* LabVIEW. That's because LabVIEW is *nothing* like a circuit. Not even a digital circuit; not even a digital circuit with the clock abstracted away. And a water distribution system is even more complex than a standard electrical circuit, because it has to contend with scaling physics across and along a single pipe that an electrical circuit never even has to deal with... Of course you can model such a system in LV if you want, but it's not going to look simple, and you're in fact going to be operating on queues and buffers, not simple dataflow. For an idea of the consequences of correctly modeling an analog circuit on a computer, I suggest something like "Discrete-Time Signal Processing" by Oppenheim and Schafer.
  5. I have used an RT-FIFO as a lossy queue on the Windows side, but you lose some functionality compared to the normal queues. That's unfortunate. It has led me to wish for exactly what LV Punk wishes for. Still, it can be a handy shortcut.
  6. You can do multiple repositories, importing a shared "utilities" repository as an "external". Unfortunately, I don't think there's an interface for that in TortoiseSVN; I handle my externals by ssh-ing into a unix box and using the svn command-line tool: % svn propedit svn:externals <target directory> (a rough example of the property's contents is sketched after this list of posts). The advantage of using externals rather than branching for shared VIs is that you can "freeze" them at a given revision; on the other hand, it makes updating your working copy substantially slower. Also, there's probably little reason to use multiple repositories; you can have:
     <root>/Project1/trunk
     <root>/Project1/branches
     <root>/Project1/tags
     <root>/Project2/trunk
     <root>/Project2/branches
     <root>/Project2/tags
     <root>/Utilities/trunk
     <root>/Utilities/branches
     <root>/Utilities/tags
     ...all in the same repository. You can also use externals within the same repository in order to keep everything in sync (I do this for some simulation code I run in Igor). Hmm. I hope this doesn't raise more questions than it answers...
  7. I have binary files (LabVIEW and otherwise) that have been revised 200-odd times, and I've never had problems; in that respect, I doubt the fault is with LabVIEW. However, I have only used subversion 1.2 and 1.3. Could a bug in your old version of SVN be the cause of this?
  8. Exactly! I'm convinced that LabVIEW would be better and more accepted if the language had been developed for its own sake, but the money in that just wasn't obvious enough. Oh well. Jimi, on the topic of functional programming: I think that functional programming tends to be only barely readable in the standard 1D representation of text. I can't quite imagine how that could be represented in 2D without making it an utter and complete mess... do you have something in mind?
  9. Do I know that LabVIEW's approach could actually change the world (of programming)? No. I do think that it has the most potential of any language I know... The language and the engine of LabVIEW have always been part of a high-priced, targeted application, so it has never been appealing for non-DAQ people to try it. As a consequence, its graph-dataflow system has effectively been shielded from widespread use. Java didn't change the world because, in the end, it wasn't much more than what it sought to replace. The VM did a whole lot more world-changing than Java itself did. However, Java got an extremely ambitious treatment from Sun and, as a result, gained a big foothold without being particularly revolutionary. Some ideas deserve enormous ambition. I happen to think that, in the past, NI has not had enough ambition for LV's language. I'm not necessarily right, but is any of this not clearly opinion and speculation? I certainly do not mean "unambitious". I would agree they've been "very ambitious", which is less ambitious than "insanely ambitious". It's all a matter of degree! I'm sure my argument would go over much better around a table and beverages...
  10. "Nowadays the business of selling compilers [...] is pretty much extinct, so that boat has been missed." 10 years ago, however, there might have been a chance! I agree. There are a few non-dataflow "graphical programming" tools out there, and I think the reason they don't have much of a following is that graphical programming without dataflow is not a good marriage. I speculate that the same may explain the low acceptance of dataflow text languages. Dataflow and graphs? Obviously good. However, it's proprietary and will be for years to come. The proprietor has a somewhat limited view of the usefulness of its product, resulting in a comparatively tiny user base: compare and contrast LabVIEW with Java in terms of number of deployed apps. Hmmm. Java is proprietary too, but they made a huge effort to push it everywhere. It wasn't different enough to really change the world, so it didn't. It's still everywhere, though! I guess that's the source of my disappointment: NI did not try to change the whole world. They just successfully changed DAQ. Has this lack of ambition doomed graphical programming to remain a fringe phenomenon?
  11. "introduced" being the key word here. The next must-have app will not be written in G. How many programs on shareware.com (or any number of similar sites) were done in a graphical language? Probably not zero, but the proportion is awfully close to that. Any large websites powered by a back-end written in a graphical language? That's zero. Why is there no truly general-purpose graphical programming tool out there? Is it really only suitable for niche applications? Really I wonder.
  12. I'm not talking about making money from a standardized language; there would be no reason to give away the IP for free. I'm talking about making money from selling compilers. In the 90s, compiler makers made money selling compilers... Borland did just that for many years, until they put all their eggs in the wrong basket and forced many of their hobbyist customers to look elsewhere. Metrowerks did pretty well until they got snapped up by Motorola. A more general-purpose graphical programming system (not tied to LabVIEW) might have been able to gain traction and make money. NI could have done that 10 years ago. If it had happened, the language could have evolved much faster! As it is, G is just one feature of LabVIEW, and it gets improved for the sake of LabVIEW -- not for G's own sake. LabVIEW had so much more stuff than just the compiler (and such a large profit margin) that it was always too expensive for "just" programmers, especially the hobbyist kind. Nowadays the business of selling compilers to hobbyists is pretty much extinct, so that boat has been missed. Graphical programming could have changed the world already. Instead it has pretty much only changed data acquisition, and the rest of the programming world is still trying to figure out how to make their programs multithreaded.
  13. I do think that NI shot themselves in the foot by never allowing G to become separate from LabVIEW (i.e., a G compiler being standalone, with LabVIEW built on top of it). They effectively didn't allow G to be perceived as a general-purpose language by tying it to DAQ in people's minds. They condemned it to being a specialty NIche item, and it's probably going to stay that way for a while. With G as a general-purpose language generating revenue by itself, we would have had GOOP as a language feature for years already. We would also probably have fast ways to handle data references, rather than the painfully slow queue-dequeue method we have to use to imitate them. I'd call that money left on the table. A lot of it.
  14. First, in order to maximize your precision, make sure your measurement range matches what you're measuring. If you're measuring a value between -1 and +1 V with a -10 to +10 V range, you're potentially throwing bits away. Second, the lower-order bits generally measure noise-related fluctuations. Unless you have a really exquisitely built low-noise circuit, I wouldn't pay attention to anything below 0.1 millivolt. Note that this still gives you a dynamic range of about 4 orders of magnitude; not bad! Have you quantified your electrical noise? A millivolt rms or more is quite common! The card cannot know whether its measurement has 14 bits' worth of good information or whether some of them simply measured a fluctuation in the noise. That's *your* job, as a user. The card reports, you interpret. Finally, single-precision floats have 24 bits' worth of mantissa, so it's hard to see how assigning 14 bits from a 16-bit integer to a single would lose any information... (there's a quick check of this after this list of posts).
  15. Agreed. I just thought an illustration would help. I'd also wired my example before I got to your reply... why let it go to waste?
  16. I think that most of it comes naturally as a side effect of good graphical style: eliminate artificial timing/data dependencies between parts of your block diagram and you're on your way there. Two loops are probably a good idea given the existence of event structures and threading, even in cases where you get no parallelism. UIs certainly feel better that way...
  17. The following sort of logic should do the trick, replacing the "reading" controls with input from your motor, and assuming that you take a reading more often than the counter can possibly roll over (a text version of the same logic is sketched after this list of posts). Of course, in LV7 you don't have 64-bit integers for free, but you can either build them yourself with two 32-bit integers or use doubles; their mantissa has 53 bits of precision, so you won't lose count for a good long time.
  18. I looked for such things. Note that when I remove the 3D Curve indicator entirely, my VI still runs correctly (it's a useful display, but it's just a display). That shows that there aren't any dangling references: there would be errors otherwise. That, and the property nodes for that control are rather inscrutable, since it's ActiveX. I'd be much happier using it if it were a normal LabVIEW graph! (Thanks for the warnings trick; I found a couple of weird things that way. Not stuff that affected this problem, though.) I'm using the 3D curve graph pretty much in its simplest possible form: (if I right-click on the terminal on the BD, there is no find menu, just "find control". There really is nothing other than that little section of the BD that uses the indicator.) ::sigh::
  19. Unfortunately, that still doesn't work. I'm completely baffled. That's also part of my problem: I have failed to replicate it. That made me think I might be able to do something by removing the control and adding it back, but no dice.
  20. I have a problem where a VI refuses to save the "plot style" property of its 3D Curve Graph (the ActiveX 3D curve from the "graphs" palette). That's in the CWGraph3D property-editing window, tab "plots", subtab "style", menu "plot style". It correctly saves the graph's other properties; the ones I've tried, at least. The "plot style" property is correctly saved in other VIs I slapped together for a test. But in the VI that matters (the one that has all of my overly complex UI code), it just won't stick. (All this while the plot template has all the properties I want my curve to have; I'd have thought that would be enough!) I tried deleting the graph, saving the VI and then adding the graph back. Still won't stick. Has anyone ever seen such a graph object misbehave so badly? Has anyone got a trick to make my graph behave? I hope so, because it's starting to feel like a nearly bottomless time sink!!
  21. There is a thread here where people have shared some nice VIs. jpdrolet's "loop continue" is excellent for stopping parallel loops, and it might work well for you. I use it all the time now...
  22. Some time ago I made the mistake of using multiple event structures to handle events from multiple tabs. Since each tab represented mutually exclusive modes of operation, I thought it was a good idea. I ended up with a really painful program to deal with, and when the inevitable shared-by-all-modes control was needed, all kinds of issues came up. I will have to rewrite it with a single event structure, not that I know exactly the right approach to accomplish that with minimal work... Unless you have a really clever way to deal with multiple event loops, I'd say don't do it! (...and if you do find a way to make it work well, I'd like to see it, too!)
  23. What are you measuring? Unless you need to act on each and every data point, you should be able to do buffered acquisition that is triggered by your pulse; then there shouldn't be any dependence on the OS. One can do lots of precisely timed DAQ with E-Series cards without needing any real-time OS!! The DAQmx drivers do happen to be more lightweight than the Traditional NI-DAQ ones. If you can afford the time, switching to DAQmx would somewhat future-proof your program. Agreed. And LabVIEW Real-Time 7.1 introduced a fantastic VI for use with DAQmx: "Wait for Next Sample Clock". This puts your hardware in the driver's seat, and jitter goes down to nearly zero. If you must use software timing in LabVIEW Real-Time, then you have to live with jitter in the microsecond range.
  24. In the "false" case, simply wire the input to the output as a pass-through. Then it won't reset anymore and you'll have the monotonically increasing value you expect. You shouldn't have selected "use default if unwired" option of the output tunnel from your case structure -- in fact, I would personally recommend against using it in general, as it tends to hide the data flow (as well as errors.) Cheers, Guillaume Lessard
  25. NI has a "Simple TCP Messaging Protocol" set of VIs that help in implementing such a thing with a limited amount of pain. See here. I've used (and modified) it with much success in my real-time client/server project. Guillaume Lessard
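Code sketches referenced in the posts above

For post 2: the blink rate of a 555 in astable mode follows from the standard textbook approximations. The component values below are purely hypothetical; a minimal sketch in Java:

     // Standard 555 astable approximations; the component values are hypothetical.
     public class AstableEstimate {
         public static void main(String[] args) {
             double r1 = 10e3;   // ohms
             double r2 = 47e3;   // ohms
             double c  = 100e-9; // farads

             double freq = 1.44 / ((r1 + 2 * r2) * c);  // about 138 Hz
             double duty = (r1 + r2) / (r1 + 2 * r2);   // about 0.55 output-high fraction

             System.out.printf("f = %.1f Hz, duty = %.2f%n", freq, duty);
         }
     }

Pick R1, R2 and C for the blink rate you want, and use the parallel port line to hold the reset pin low whenever the laser should stay off.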
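For post 3: a rough Java sketch of the two levels mentioned there -- the easy case where the language serializes access for you, and the explicit mutex-plus-notification case you fall back on when that isn't enough. The class and method names are invented for illustration:

     import java.util.concurrent.locks.Condition;
     import java.util.concurrent.locks.ReentrantLock;

     // Level 1: the language serializes writers for you.
     class SyncCounter {
         private int value;
         synchronized void increment() { value++; }
     }

     // Level 2: explicit mutex + notification, for when a thread has to wait on a
     // condition -- roughly what LabVIEW's semaphores and notifiers already provide.
     class Mailbox {
         private final ReentrantLock lock = new ReentrantLock();
         private final Condition hasMessage = lock.newCondition();
         private String message;

         void post(String msg) {
             lock.lock();
             try {
                 message = msg;
                 hasMessage.signalAll();   // wake up anyone blocked in take()
             } finally {
                 lock.unlock();
             }
         }

         String take() throws InterruptedException {
             lock.lock();
             try {
                 while (message == null) {
                     hasMessage.await();   // releases the lock while waiting
                 }
                 String msg = message;
                 message = null;
                 return msg;
             } finally {
                 lock.unlock();
             }
         }
     }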
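For post 6: with the externals syntax that Subversion 1.2/1.3 understand, the property body opened by "svn propedit svn:externals" is just one line per external, in the form "local-directory [-r REV] URL". The repository URL and revision number below are made up for illustration:

     Utilities                http://svn.example.com/repo/Utilities/trunk
     Utilities-frozen  -r148  http://svn.example.com/repo/Utilities/trunk

The first line tracks the head of the shared Utilities trunk; the second pins a copy at revision 148, which is the "freeze" mentioned in the post.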
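For post 14: a quick brute-force check that ADC codes survive the trip through a single-precision float, since a float's 24-bit mantissa holds any integer of up to 24 bits exactly:

     // Every 16-bit integer (so certainly every 14-bit code) is exactly
     // representable in a 32-bit float, so the cast loses no information.
     public class FloatCheck {
         public static void main(String[] args) {
             for (int code = -(1 << 15); code < (1 << 15); code++) {
                 if ((int) (float) code != code) {
                     System.out.println("lost information at " + code); // never prints
                 }
             }
             System.out.println("all 16-bit codes round-trip exactly");
         }
     }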
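For post 17: a text version of the rollover logic described there, as a rough Java sketch, assuming an unsigned 32-bit counter coming from the motor and readings taken more often than one full rollover period:

     // Accumulates a monotonically increasing position from a counter that wraps
     // at 2^32. Correct as long as at most one wrap happens between readings.
     public class RolloverCounter {
         private long previous;  // last raw reading, 0 .. 2^32 - 1
         private long total;     // 64-bit running total

         public long update(long raw) {
             long delta = raw - previous;
             if (delta < 0) {             // the counter wrapped since the last read
                 delta += 1L << 32;
             }
             total += delta;
             previous = raw;
             return total;
         }
     }

In LV7 the running total would live in a pair of 32-bit shift registers or in a double, as described in the post; Java's 64-bit long just makes the sketch shorter.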