Everything posted by bsvingen

  1. I don't have LabVIEW on this computer, but the Fibonacci number is very easy to implement iteratively in LabVIEW. Here is a C version from Wikipedia:

```c
unsigned fib(unsigned n) {
    unsigned first = 0, second = 1;
    /* this while loop exits when n reaches 0;
       it is equivalent to while (n != 0) { n--; ... } */
    while (n--) {
        unsigned tmp = first + second;
        first = second;
        second = tmp;
    }
    return first;
}
```

Just make a while loop with three shift registers: one to hold first, one to hold second, and one to hold n (in other words, make a LV version of the C code). Fibonacci numbers are the typical example of recursion because they are always written recursively. But to implement a "standard library Fibonacci routine" with recursion is complete insanity, because the iterative version is so much more efficient. F(100) would probably take years (seriously and literally) to calculate with naive recursion on a PC (coded in C), while it takes some nanoseconds iteratively. When the recursive routine is simply a straight linear iteration written recursively, there is no difference in efficiency, because both routines scale as O(N) (under the assumption that the cost of the recursive call itself is zero), and the recursive routine then becomes more compact and "to the point". In LV the cost of a recursive call is far from zero, and the only extra thing an iterative version needs compared with a recursive one is an extra shift register in the loop.
  2. There are some things here that just can't be left standing uncorrected. First: F(0) = 0, not 1. Jace, your procedure in your last post does not calculate the Fibonacci numbers; just try with the number 4 as input. While recursion can be simple to implement (assuming the language supports it), it can be incredibly slow if implemented naively on a non-optimizing compiler. For instance, a naive recursive implementation of the Fibonacci number scales as O(2^n) while an iterative procedure scales as O(n). This means that calculating F(32) requires approximately 32 operations when implemented iteratively, but on the order of 4294967296 operations when implemented recursively. Even when implemented recursively and optimized it will take at least 1024 operations (scales as O(n^2)). See Wikipedia for more. For other things like trees, which do not have these terrible scaling problems, I do not see why recursion should be simpler than using shift registers. When the procedure is of order O(N), the function can be used as is, also in an iterative procedure.
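The scaling difference described above is easy to see in C. This is a sketch of both versions for comparison (fib_recursive and fib_iterative are just illustrative names, not from the original posts):

```c
/* Naive recursive version: each call spawns two more calls,
   so the total number of calls grows exponentially with n. */
unsigned fib_recursive(unsigned n) {
    if (n < 2)
        return n;                     /* F(0) = 0, F(1) = 1 */
    return fib_recursive(n - 1) + fib_recursive(n - 2);
}

/* Iterative version: one pass through a loop, O(n) -- in LV terms,
   a while loop with shift registers holding first and second. */
unsigned fib_iterative(unsigned n) {
    unsigned first = 0, second = 1;
    while (n--) {
        unsigned tmp = first + second;
        first = second;
        second = tmp;
    }
    return first;
}
```

Both agree on every input, but fib_recursive's call count blows up exponentially with n, while fib_iterative runs the loop exactly n times.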
  3. The same problem exists when using call-by-reference nodes. Broken wires are, after all, OK, because then it is much easier to find the problem. But when these red dots appear, it is very hard to track down. The solution is also awkward, because it involves deleting the reference node and inserting a new one, after the cause of the red dot has been found (it seems that the by-reference node remembers the last type and will not change it unless there is a broken wire).
  4. Nice I'm just wondering what the LabVIEW version of this poetry would look like :beer:
  5. You are right about that. I used your idea of sending a locking queue with the reference to modify the pointer version. It turns out that this makes it possible to very easily make two versions just by changing the "modify" VI. I have called one _serial and the other _lock. The serial version is just an ordinary VI, and therefore executes in serial fashion. It doesn't use the locking queue at all, and therefore becomes very efficient. The other one uses the queue to lock the get-modify-set pass, and is set to reentrant and executes in parallel. Edit: added a newer version that has an additional serial modify pass which is internal (thus more speed). Download File:post-4885-1160807656.zip To sum it up (when using the pointer system):

Internal locking (modify internally in the global; there is no get-modify-set pass at all): This is both the fastest and the safest, since deadlock and so on is impossible. It will always execute in serial for the same reference and in parallel for different references. The downside is that if one reference waits for some hardware measurement to become available, this will block all the other references as well (when using the pointer system, but not necessarily when using an FG). For all other cases, however, this will be the fastest.

Locking by serializing the get-modify-set pass: A get-modify-set pass is used, and that pass is placed in a non-reentrant VI. The pass will always operate in serial mode, so the pass is locked with respect to other passes through the same VI. Since only the pass is serialized, this version will not block other references, or the get method. This version is also very fast.

Locking by queue: A queue is used for locking, and is passed with the reference for each unique reference. The get-modify-set pass is now reentrant and operates in parallel; the queue locks the pass.
Basically this method does nothing more than the serialized GMS pass, because all it does is serialize the pass with a lock instead of relying on the VI properties. There is however one advantage, and that is timeout. Using a queue for the lock enables the use of timeouts, which in some circumstances is necessary. Another advantage is that it will actually operate in parallel on dual-processor systems (and in general with regard to internal timers). It is much slower than the other two methods.
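For readers more at home in textual languages, the queue-as-lock idea maps onto an ordinary lock with a timeout around the get-modify-set pass. A minimal sketch in C with POSIX threads, under the assumption of a hypothetical RefGlobal type and gms_increment operation (polling with trylock so the caller can give up, the way a dequeue with timeout can):

```c
#include <pthread.h>
#include <time.h>

/* Hypothetical by-ref "global": a value plus the lock that travels
   with the reference, as in the queue-based locking scheme. */
typedef struct {
    pthread_mutex_t lock;
    int value;
} RefGlobal;

/* One get-modify-set pass, serialized by the lock. Returns 1 on
   success and 0 on timeout, mirroring the timeout behavior a
   queue-based lock provides. */
int gms_increment(RefGlobal *g, int delta, int timeout_s) {
    time_t deadline = time(NULL) + timeout_s;
    while (pthread_mutex_trylock(&g->lock) != 0) {
        if (time(NULL) > deadline)
            return 0;                 /* could not get the lock in time */
    }
    g->value += delta;                /* the whole pass runs under the lock */
    pthread_mutex_unlock(&g->lock);
    return 1;
}
```

The timeout is the one thing the serialized-VI variant cannot offer: a non-reentrant VI call simply blocks until the current caller finishes.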
  6. I know. After reading the manual, it is quite clear. A reentrant sub VI called by a CBR node will always execute in serial order for the same reference. I initially thought you meant that two calls with two different references can execute in parallel.
  7. Yes, but not if you use the LCOD approach. See "A Software Engineering Approach to LabVIEW" by Jon Conway and Steve Watts, where the LCOD principle is explained. LCOD stands for LabVIEW Component Oriented Design. With LCOD there is no retrieve-modify-store, because the modification is done inside the functional global, directly on the variables. See also this post.
  8. I have been reading the manual (for a change) and unless the manual is factually wrong, we already have full locking capability on by default. Non-reentrant sub VIs only operate serialized, one at a time, so it is impossible to create any problems, at least if all parallel code is placed in non-reentrant sub VIs. Also, a reentrant functional global called by a call-by-reference node (the closest thing to a call-by-ref object in "native" LV) will *never* operate in parallel for the same reference. I'm just mentioning this because it makes it quite clear that "someone" has done the job of thinking this through and made some decisions. The result of that thinking is more along the lines of "better safe than sorry". A by-ref object will be different (as I have understood by now). However, until recently I never really understood what the problem was. I have used call-by-ref LCODs for years, and since that is a reentrant functional global called by CBR, it is 100% safe because it always operates serially (for the same reference). But isn't the obvious answer to the synchronisation debate then quite clear? A call-by-ref object must be made 100% safe in the same manner as a reentrant VI called by a reference node is 100% safe for the same reference, and in the same manner as the default for sub VIs is non-reentrant. Then, if you want a "super-reentrant" by-ref object, you are on your own and must use the locking and synchronisation VIs that already exist, but use them manually. I mean, if you are so good a programmer that you actually know under which circumstances multithreading is an advantage, then you also know what to do and how to do it with regard to making it safe. The rest of us wouldn't notice the difference, and would be more than happy with a by-ref system even if it were 100% serialized.
  9. I don't know what the jargon "non-CS people" means, but I look at it like this: dataflow is by itself very easy to understand. What is not easy to understand is why dataflow should make programming any easier or more intuitive. Also, I don't think making analogies between dataflow and real-world processes will make dataflow easier to understand, because those analogies will be much too inaccurate to be good analogies.
  10. OK, thanks. I can put a semaphore around the block to serialize the operation, and the problem can be solved. What I wondered about was whether, if I put the whole operation as a block inside a LV2 global, the LV2 global will protect the operation just as if the operation were protected by semaphores. That is what my three examples do, and so far I have seen nothing to suggest otherwise.
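The LV2-global idea — the whole operation happening inside one serialized block, rather than a get and a set exposed to the caller — has a direct analogue in textual languages: a function that owns its state and performs the complete modification while holding a lock. A hedged sketch in C (counter_add is a made-up example action, not from the posts):

```c
#include <pthread.h>

/* Analogue of an LV2/functional global: the state lives inside the
   function, and the entire modify operation happens while the lock
   is held, so callers never see a separate get and set and there is
   no get-modify-set window to protect externally. */
int counter_add(int delta) {
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
    static int state = 0;            /* the data owned by the "global" */
    pthread_mutex_lock(&lock);       /* serialize, like a non-reentrant VI */
    state += delta;                  /* whole operation inside the block */
    int result = state;
    pthread_mutex_unlock(&lock);
    return result;
}
```

Two simultaneous callers can never interleave their read-modify-write steps, which is exactly the protection a non-reentrant functional global gives for free.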
  11. I checked this. You also have to set modify_FGA.vi to reentrant, and it will take 100 ms (I forgot that earlier). Do this, and the VIs will operate in parallel.
  12. I work with similar systems in power plants, and have made an application in LabVIEW that analyzes all the flows, turbines, pumps etc. Your analogy is not correct. Dataflow is completely different from water flow, or any other physical flow for that matter. There are several reasons for this. First, the basic properties (temperature, pressure, density, velocity) change along the "wire" (pipe). This means the pipe-wire analogy does not hold: a pipe is a function, an icon, just like the other functions. Secondly, physical properties cannot copy themselves, since this would violate the basic law of mass conservation. OK, maybe I sound like I am arguing just for the sake of arguing, but the mass conservation principle is in fact quite important. It is the reason why dataflow cannot be used for abstraction of physical processes. This abstraction requires a by-ref construct where you can control this basic physical property. A pipe is an object, with properties such as length, diameter and so on. Then it has state variables, such as temperature, pressure etc. distributed along its length; it has functions that relate all these variables in both space and time; and most importantly it has communication that talks to the neighboring elements. In LabVIEW it is this (very abstract and mathematical) communication that is the wire, or more precisely the wire is only a reference to the communication process, and the start and end tell the process who shall communicate. The rest is an object, one single instance of several.
  13. Yes, the pointer version will do this (also with the external locking, since it is not reentrant), but the LCOD version will not, since it is reentrant, I think. I like this approach of data and ref(s) in the same wire.
  14. When talking about a global here, I mean an LV2 global or functional global. I take your answer as a yes then. But now I just read this (from another thread): This makes me confused again. So I will ask the question a bit differently. A functional global à la LCOD has lots of actions. Every one of these actions exists inside the global, and the action itself is a normal block diagram that can consist of calls to sub VIs and so on. Is a call to that global safe? Do I run any risk of "inter-global confusion" if two or more calls to the same instance happen simultaneously? I think a functional global is safe, because if the opposite were true, then no call to any sub VI (functional global or not) would be safe, and the whole concept of multithreading in LV would be meaningless. But right now I am a bit confused.
  15. OK, I think I understand the call-by-ref downcast thing. The call-by-ref node has no way of knowing which of the child classes (or the parent) the called VI will eventually call. But regarding the locking: is my assumption correct that within the global the get-modify-set pass is locked? I mean, since there is no traditional get-modify-set pass at all, only an ordinary functional global VI call, I would believe that this is thread safe. Will anything change if the modify VI is reentrant?
  16. Let's assume the article is correct; then there exists no incentive for NI to open the code. They have a monopoly in this small niche of graphical programming (at least they have no competitive products to care about), and as a commercial entity they are right where they want to be, and can fully exploit the commercial synergy between LabVIEW (G) and their acquisition hardware. If I were NI, I don't think I would even consider making LabVIEW a more general language, because then there would be competition, and the competition would probably initially come in areas outside the core business of DAQ. I mean, just think about it for a second or two: if LabVIEW had a native by-ref GOOP with decent efficiency, it would become just as general purpose as, for instance, Java, Visual Basic or C#. LVOOP doesn't really make it more general purpose; instead it enhances the dataflow paradigm by making data-objects. I think if we want a general purpose graphical language, we have to make it ourselves from scratch.
  17. Far from it. Or more precisely, you are of course free to do it, but you run the risk that the license is worth nothing when tested in court. If something in the license can be regarded as being in opposition to common practice or to the wording of the law, it will be worth nothing. At the very least you must have a lawyer look through it and approve it.
  18. Can't you just issue licenses individually? That way NI (for instance) can never obtain a valid license unless you explicitly send them one. I don't think you can stop others from doing derivative work no matter what, since this won't affect the original work in any way regarding copyright (unless the derivative work is considered only a copy or plagiarism). Patents are very different: you cannot make a derivative work using other patents as building blocks unless you have obtained a legal right to use those patents in your work.
  19. First, I don't consider myself a "power user"; I just think LabVIEW is an excellent tool that I use every day for taking measurements, analyzing and displaying data, and for general programming. It is only during the last 5 years that I have used LabVIEW as my preferred programming language; before that, LV was almost exclusively for data acquisition and logging, and I used FORTRAN and C/C++ (more C than C++) for other tasks. I am not 100% sure why I "got stuck" with LV, but I think it has something to do with the visual programming, the block diagrams that just fit my brain better than a textual language. Secondly, what exactly is DATAFLOW? Is dataflow something special? Maybe I think in too simplistic terms here (and will get arrested by jimi and aristos), but IMO any language that uses call by value exclusively is dataflow. C is natively call by value, and if I use C with only call-by-value syntax, it becomes dataflow. When using call by value exclusively, all that ever flows is the data; hence it is dataflow almost by definition. Java is also dataflow. There is no other way to program in Java except call by value, until you use OO, where objects are called by reference. Fortran is not dataflow, since it is impossible to use call by value exclusively. So, to say that G is a visual block diagram version of Java with the objects stripped off is in fact a very accurate description IMO (although I know a lot of people will disagree). I have used one of the GOOPs a single time, and that was dqGOOP. It was mostly for fun, and because the dqGOOP wire was more pleasant and consistent (the same color throughout) compared with what the result would have been using an ordinary queue. Functionality-wise, queues would do the exact same thing. I have been more of an LCOD person, but as I recently discovered, LCODs by ref are in fact just as much a GOOP as any other GOOP. So in reality I have been using GOOP for several years.
The brute force and simplicity of by-ref you only really discover when using Fortran. When I discovered that LabVIEW actually made copies of large arrays on branching the wire, I thought this must be one huge major bug. How was I going to do anything remotely close to efficient when this language actually makes copies of arrays? Wasn't this language supposed to be THE language for data acquisition and analysis of data? Well, the only way was to use one large loop and shift registers, or, as I found out (much) later, to use the LCOD principle. But still today, I just don't understand why there is no native by-ref (and this has nothing to do with GOOP), why there is no way of making a real global that can be accessed efficiently like I can in both C and Fortran. And today, as then, the only reason I can think of is that LabVIEW is made by software engineers who are more hooked on satisfying some language philosophy than on being aware of, and fixing, the actual and very unfortunate consequences of that philosophy in practical situations. It's almost ironic. The only way to handle arrays effectively in an analysis, which is one of the main things LabVIEW is supposed to do, is to build an object-oriented-like construct that doesn't natively exist in the language, in order to make an efficient by-ref global that also does not natively exist. But the main problem is that once we figure this out and find a working solution, we are happy and don't think about the shortcomings any more. The main point is that all these shortcomings of the language could be removed entirely if there were a native by-ref GOOP.
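The by-ref behavior described above is simply the C default for arrays: a function receives only a pointer, so the caller's storage is modified in place and nothing is copied, no matter how large the array is. A small illustrative sketch (scale_in_place is a hypothetical helper, not from the posts):

```c
#include <stddef.h>

/* An array argument in C decays to a pointer, so this function
   works on the caller's storage directly; passing a million-element
   array costs one pointer copy, not a copy of the data. */
void scale_in_place(double *a, size_t n, double factor) {
    for (size_t i = 0; i < n; i++)
        a[i] *= factor;              /* modifies the original array */
}
```

Branching the equivalent LV wire into two sinks can force a copy of the whole array; here only the pointer is duplicated, which is the efficiency the post is asking for in a native by-ref construct.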
  20. I thought the timed loop was the easy way to prevent the buffer from filling up and the logging from lagging behind. At least it gives you the possibility to monitor the performance. Anyway, I have just looked at your code rather briefly, not studied it, but it looked to me like some structuring, and placing it in one single loop, would make it much more readable.
  21. Well, then it is obvious that you have not coded much with native LVOOP. When using LVOOP, the number of VIs doubles, because for every single class you need a get and a set wrapper due to the protection principle. Pre-LV8.2 you only needed a bundle/unbundle in these cases. Add inheritance to this, and the number of VIs doubles again. This isn't necessarily a bad thing, because when changing something that doesn't work, it assures that you can change only the things you want to change, without touching the things that do work. I think it works like this: the simpler you make things, the harder they are to change later. The more granulated your code is, the more bits and pieces it consists of, the simpler it will be to change things later. So it is more a question of what kind of simplicity you want. IMO a by-product of LVOOP is poorer performance (mostly due to all the added VIs), and this is very unfortunate because it wouldn't have to be like this if the compiler were more advanced.
  22. After reading your post more carefully and looking at your code, it seems to me that what you need most is a basic cleanup of your code. That is just a matter of structuring your wires, replacing sub VIs etc. so it looks like a nice printed circuit board. IMO the size really doesn't matter as long as it is structured, but that is more a matter of taste. I am not that organized myself; as long as I can relatively quickly follow what I have done, I'm OK. The timed loop is essential for continuous logging, or you will sooner or later end up with buffer problems and timing glitches between the n samples you log during one iteration of the loop. If I understood your code correctly, you are not really doing continuous logging, but more of a batch process. You are logging some data, then this data is sent for analysis and then for storage and display, and then you log some more data, etc. I would prefer doing this in one single loop, since this will clear your diagram of the clutter of all the queues that really are not necessary (if I understood your diagram correctly, that is). Maybe you could use a flat sequence within the loop to visually separate the different processes, and put some more of your code in sub VIs.
  23. My experience is to have one loop for logging and saving data. That loop has to be a timed loop. If you have to do analysis "on the fly" on the raw data, do that in the same loop as well, provided the analysis is not too demanding and/or the data throughput not too high. The same goes for displaying. This works up to a point. Beyond that, put only logging and saving in that loop (if you have to save all the data); otherwise do only logging in that loop and average the data for saving in another loop (send only the averaged data with a queue or FG; do not use point-by-point averaging but use a counter to average every 10, 100 or whatever). For displaying you can use the same loop as saving, but display only a small fraction by averaging or decimating; don't use point-by-point here either. The basic idea is to minimize the workload by decimating and/or averaging, and to separate the logging loop from the other loops. But it depends on the requirements. If you have to save all the data, then you have to do it, and this will restrict your performance. Displaying can always be decimated, and analysis can be done after the logging is finished. There is no simple answer to this; you have to analyze what absolutely has to be done on the fly, and then cut down on the rest.
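The counter-based averaging suggested above (emit one value per block of N samples instead of a running point-by-point average) can be sketched like this; Averager, averager_feed and the block size of 10 are illustrative choices, not from the original posts:

```c
/* Counter-based block averaging for the save/display loop:
   accumulate SAMPLES_PER_BLOCK raw values, emit one average, reset.
   The fast logging loop only does one add and one compare per sample. */
#define SAMPLES_PER_BLOCK 10

typedef struct {
    double sum;
    int count;
} Averager;

/* Feed one raw sample. Returns 1 and writes *out each time a full
   block has been accumulated; returns 0 in between. */
int averager_feed(Averager *a, double sample, double *out) {
    a->sum += sample;
    if (++a->count == SAMPLES_PER_BLOCK) {
        *out = a->sum / SAMPLES_PER_BLOCK;
        a->sum = 0.0;
        a->count = 0;
        return 1;
    }
    return 0;
}
```

Feeding the samples 0 through 9 emits a single averaged value, 4.5, so the consumer loop sees one-tenth of the data rate.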