Everything posted by bsvingen

  1. My experience is to have one loop for logging and saving data. That loop has to be a timed loop. If you have to do analysis "on the fly" on the raw data, do that in the same loop as well, provided the analysis is not too demanding and/or the data throughput not too high. The same goes for displaying. This works up to a point. Beyond that, put only logging and saving in that loop (if you have to save all the data); otherwise do only logging in that loop and average the data for saving in another loop (send only the averaged data with a queue or FG; do not use point-by-point averaging, but use a counter to average each 10, 100 or whatever). For displaying you can use the same loop as saving, but display only a small fraction by averaging or decimating; don't use point-by-point here either. The basic idea is to minimize the workload by decimating and/or averaging, and to separate the logging loop from the other loops. But it depends on the requirements: if you have to save all the data, then you have to do it, and this will restrict your performance. Displaying can always be decimated, and analysis can be done after the logging is finished. There is no simple answer to this; you have to analyse what absolutely has to be done on the fly, and then cut down on the rest.
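The counter-based averaging described above (average each block of N samples rather than keeping a point-by-point running average) can be sketched in a Python analogue. This is a hypothetical illustration, not LabVIEW code; the names `block_average` and `display_queue` are mine:

```python
from queue import Queue

def block_average(samples, n):
    """Average every n samples using a counter, instead of a
    point-by-point running average. Returns the decimated stream."""
    out = []
    acc = 0.0
    count = 0
    for s in samples:
        acc += s
        count += 1
        if count == n:          # counter hit: emit one averaged point
            out.append(acc / n)
            acc = 0.0
            count = 0
    return out

# The logging loop would push block averages into a queue for the
# display/save loop, cutting that loop's workload by a factor of n.
display_queue = Queue()
for avg in block_average(range(100), 10):
    display_queue.put(avg)
```

The point of the counter is that the heavy loop only hands off one value per N samples, so the display/save loop sees a fraction of the raw throughput.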
  2. I think this would be great. But what would probably happen is this: NI would still make and sell their own compiler, and they would probably support exactly the features that suit their hardware. So there would be two versions of LV: the NI version, a commercial compiler specially targeted at NI hardware, and the open version. Then, who would be the users of the open version?
  3. I have made some alternative GOOPs that do not require external locking of the get-modify-set pass (I believe, but I leave it for others to decide for themselves). There are three alternatives: 1. FG. A functional global made reentrant and called by a reference node. A "new" call consists of opening a new instance of the reentrant VI. All the methods are set in the "action" typedef enum. I have used this method for several years; I always thought of it as an LCOD variant, but I see now that it is just as much a GOOP variant. There is no need for locking because all the methods are resolved internally (no get-modify-set). Normally I have used this with no wrappers, but when using LV 8.2 it is much more convenient to make the whole system LVOOP with the reference as a class, and then there is no way around a wrapper. Performance-wise it is slightly faster than dqGOOP in get, set and modify, but much slower in create and delete (performance in create and delete is however irrelevant in most cases, at least in my applications, and for a normal number of instances, 100 or so, this really makes no difference. I think the problem is that create/delete scales very poorly as the number of instances increases). 2. Pointer. Here I use an LV 8.2 version of the pointer system (see CR). Otherwise this is very similar to the FG version; locking is also internal here. Performance-wise this is the fastest. It is superior in get/set, but only slightly faster than FG in modify. Create and delete are of course several orders of magnitude faster than in FG. Since this is the fastest method, I will gradually use it instead of the FG I use now. Even with an external lock (which becomes an advantage programmatically), the performance in modify is almost exactly that of the FG. For both these cases I have also made an external lock in the modify method: there, a normal get-modify-set pass is used, with a queue to lock the pass.
The lock doesn't really change that much, maybe only 50% slower in the tests, and it is still faster than dqGOOP for some reason. However, programmatically a lock has an advantage since I don't have to modify the core when I add methods. (I normally don't mind modifying the core myself if this results in better performance, but it is much more difficult to debug.) 3. FG LV2OO. This is an FG version where I use LV2OO (a LabVIEW 2 style global with action classes made in LVOOP; see Aristos' post about the subject) instead of the traditional LV2 global (or functional global). This has a performance penalty (approximately a factor of 3 compared with FG). Programmatically it has some advantages. The lock is now internal, and there is still no need to modify the core. To add more methods I simply add more "do" actions, inherited from the parent action class. I can also modify the get and set methods without any changes to the core, which is not possible in the FG; in fact I can modify almost anything without that resulting in modifications to anything else. Programmatically this is therefore the preferred method, but for me the performance penalty is too large (although it is only about a factor of 2.5 slower than dqGOOP, and thus much faster than the old NI GOOP and OpenGOOP). I found what I believe is a bug (though I am not sure). The action "do" method is within the global; the global is called by a reference node. When I wire the action class to the reference node, the input is OK, but the output will always be the parent class, and I have to manually cast the wire to the specific class. If this were an ordinary VI (not a call by reference node) I would not need to do this manually; it happens automatically. Here are the files. A similar dqGOOP is included for comparison. It is all LV 8.2. Download File:post-4885-1160384003.zip
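The "methods resolved internally" idea — a single entry point dispatching on an action, so the state never leaves the global and no external get-modify-set pass exists — maps roughly onto this Python sketch. The class and action names are mine, a loose analogue rather than the posted LabVIEW code:

```python
import threading

class FunctionalGlobal:
    """One entry point, action-dispatched. State stays inside the
    'global', so callers never perform an external get-modify-set
    pass; every modify happens atomically inside the call."""
    def __init__(self):
        self._state = 0
        # The lock plays the role of a non-reentrant VI's implicit
        # serialization of callers.
        self._lock = threading.Lock()

    def call(self, action, value=None):
        with self._lock:
            if action == "init":
                self._state = value
            elif action == "increment":
                self._state += 1
            elif action == "get":
                return self._state
            else:
                raise ValueError(f"unknown action: {action}")

fg = FunctionalGlobal()
fg.call("init", 5)
fg.call("increment")
```

Because the modify is resolved inside `call`, no second caller can ever observe the state between a get and a set — which is the property the FG variant gets for free.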
  4. I see in your tests that you create and destroy the mutexes almost as often as you use them. I believe a more accurate test would consist of creating and destroying them once, but using them often.
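The benchmarking point above — separate one-time creation cost from per-use cost — can be illustrated with a small Python timing sketch (a hypothetical analogue; the original tests were in LabVIEW):

```python
import threading
import timeit

def create_each_time(n):
    # Mixes mutex creation/destruction cost into every iteration,
    # which is the pattern being criticized above.
    for _ in range(n):
        lock = threading.Lock()
        with lock:
            pass

def create_once(n):
    # Creates the mutex once and measures only acquire/release,
    # the realistic usage pattern.
    lock = threading.Lock()
    for _ in range(n):
        with lock:
            pass

t_each = timeit.timeit(lambda: create_each_time(1000), number=10)
t_once = timeit.timeit(lambda: create_once(1000), number=10)
```

Comparing `t_each` and `t_once` shows how much of a naive benchmark is really measuring allocation rather than locking.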
  5. C++ actually. I made a matrix class system in C++ once. It was part of a larger engineering application, and the way it worked was to store 2D arrays as 1D arrays and call core Fortran LAPACK solvers (Fortran stores all arrays in 1D due to performance gains). It started on an SGI using gcc, but later moved to Windows using Watcom compilers (Fortran and C). Anyway, I can't remember ever considering a locking mechanism to protect anything within a class. I think LAPACK, BLAS and derived routines can be found in parallelized versions now, using multiple threads. I still don't see any need for locking as long as you don't access the same memory, since you don't have a get-modify-set pass but operate directly on the individual data. I mean, what is the point of having multithreaded/multiprocessor applications if you implement a lock that effectively serializes the execution, as is done in the available GOOPs? Then you would be much better off, with much better performance, using one single thread/processor. The only consideration is the relatively few inter-thread calls, but they too can be solved by ensuring that all inter-thread routines write to separate memory locations. The by-value approach of LabVIEW ensures that we always operate on different data in parallel loops, and therefore multithreading is a relatively simple thing to implement, since the memory-collision considerations are solved, or more precisely are irrelevant. The same thing could easily be implemented in C++ using only call by value, but this would result in two major problems: 1. Loss of performance due to constant memory allocation/deallocation. 2. No effective way of handling inter-thread (read: inter-loop) calls. As I see it, these are also the two major problematic issues of LabVIEW, and particularly number 2 is something that puzzles all newcomers to LabVIEW after a week or two. Problem 1 can only be solved by using by-ref.
Problem 2 can be solved in many ways depending on what the program actually does, and locking can be one solution (or at least part of the solution). But, as I said, I'm no expert. Maybe I have too simplistic a view on this.
  6. I think it is the get-modify-set pass that could easily be protected or locked in a native implementation of GOOP, just as it is locked in OpenGOOP. But probably more important is the fact that if by-ref in general were natively implemented in the same manner as in other languages, there would be no need for a get-modify-set pass at all.
  7. I just wonder if all this is due to some patent-related issues. For instance, there is no good reason that all subVIs should have both a front panel and a block diagram. In 99% of cases, VIs are not used as virtual instruments but as functions and subroutines, and all you need for functions and subroutines is the block diagram and the icon with connectors. Also, a by-value object will not change any of the basics. A by-ref object would be impossible to protect in any form because it already exists (OpenGOOP, dqGOOP etc.), and a native implementation would require a storage that is not a VI to be efficient, and then it blows the patents. Just some wild guesses.
  8. Maybe instead of calling it object oriented, we could call it object disoriented :ninja: Joke aside, with LVOOP we can in fact make a whole new and different set of reference types. Instead of just being a dumb typedef'ed datalog reference, the reference itself can now carry a whole lot of information.
  9. The speed is actually not bad. Slightly faster than dqGOOP (but much slower than the pointer system) in similar tests with get and set.
  10. Yes, LV2 globals à la LCOD. The way I use them in a very large application of mine is to make them reentrant and then call them with a call-by-reference node. Then I can have as many as I want, all by ref.
  11. That was pretty darn cool :thumbup:
  12. Take a look at the programming popularity index. LabVIEW is far down at number 33; FORTRAN is number 21 and is twice as popular. I have tried D for some DLLs, and I really like it (more of a C/C++ where all the mess is taken out and replaced with order and functionality). I also have to try this F#.
  13. This is just plain bull. Microsoft has a big team (100, 1000 people?) making their flight simulator, and they still cannot come up with something close to X-Plane. X-Plane is made by one single man, and it is written entirely in C :worship: And there are other examples: Linux, for instance, the first C++ compiler, lcc. One single person can be more productive in LabVIEW; things go much faster from idea to a working application. But from there on I think things even out, and finally the one who started with C actually becomes more productive, once all the limitations in LabVIEW really start to become problematic.
  14. I agree on this one, if for no other reason than the fact that functional globals outperform all GOOPs I know of. The more I think about it, the more LV2OO seems to be the way to go. Here you can modify exactly the member you want with no need for any GMS pass. All members are protected within the global, and it is really easy, fast and unproblematic to add new actions (once everything is set up). The only concern is performance, since dynamic dispatching does not seem to be too efficient as of today, and because the LV2OO style adds more VI calls (which, because of the dynamic dispatch, cannot have subroutine priority). But if I have understood Aristos correctly, the performance will increase in future versions, maybe with stateless VIs?
  15. Exactly, I agree 100%. But when the members are mutually dependent, locking will not solve the result of a race condition. The problem is still there despite all the locking you can think of. The only way to solve it is to make sure no race condition can ever occur. Therefore, locking is not even necessary in most cases. I don't understand this. In a functional global all the members are internal to the global; there are no get and set, only unbundle and bundle at most (but even that isn't necessary). If the global is non-reentrant, you can have a lot of different actions. However, you can also use reentrant globals and call them by a reference node; there will still be no get and set. I use this all the time, no problems yet. Yes, I agree that it can be problematic to change and add things in a global. But with LVOOP this is actually solved; I have tried it and it works perfectly. An example can be found here
  16. It seems that you are correct about this one. A lock here will indeed solve the problem. The reason is that a collision between init and/or a multitude of increment methods does not cause any problem with the sequence of execution. The member depends only on itself, so if two increment methods collide, it is irrelevant which one executes first. Your lock works because your member is independent of the program flow. This also means that it will work with no wires attached, and will therefore be much more efficient when written as a simple functional global with an init and an increment action. The problem still exists for members that are not mutually independent.
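The case above — a member that depends only on itself, so colliding increments commute and a lock alone is sufficient — can be demonstrated with a Python counter (my own sketch, not the LabVIEW code under discussion):

```python
import threading

class Counter:
    """Increment commutes with itself: the order of colliding
    increments doesn't affect the final value, so a lock around the
    read-modify-write is all the protection needed."""
    def __init__(self):
        self._value = 0
        self._lock = threading.Lock()

    def increment(self):
        with self._lock:        # protects the read-modify-write
            self._value += 1

    def value(self):
        with self._lock:
            return self._value

c = Counter()
threads = [
    threading.Thread(target=lambda: [c.increment() for _ in range(1000)])
    for _ in range(4)
]
for t in threads:
    t.start()
for t in threads:
    t.join()
# Whatever the interleaving, the result is deterministic: 4 * 1000.
```

This is exactly the "member independent of program flow" situation: no ordering constraint exists, so locking fully solves the collision.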
  17. Here is an example that hopefully will clarify what I mean (or show how completely I have misunderstood this). An object, A, has two members, a and b. In addition there is a value, c, that is used to modify those members with two methods f and g. A collision can only be a problem if a GMS calls the same instance, A1, the two members are dependent, and the sequence of calling f and g is not arbitrary. They are dependent if the modification consists of at least a = f(b) and/or b = g(a). For instance, with f = b + c and g = a*c, and a = 2, b = 3 and c = 4, the results will be: a = f = 3 + 4 = 7 and b = g = 7*4 = 28, or b = g = 2*4 = 8 and a = f = 8 + 4 = 12. This shows that the sequence of calling them will produce different results. Therefore a collision of two methods (they can be the same method, or two or more different ones) with dependent member functions is a problem no matter what, because the sequence in which they are called will influence the result. Protecting (locking) these functions will not solve the problem of collision, simply because the problem is there independent of the protection. If the two members are independent, there will be no problem with collisions at all, because they will not influence each other. If the members are dependent but the sequence is arbitrary (which must cover only some very few odd cases), then I'm not sure what will happen, but that doesn't really matter for now. It is not enough to protect (lock) member functions f and g. To prevent problems with collisions, the only solution is to make sure that f and g never collide; they must be sequenced to execute in the correct order by the program flow. Locking them will not ensure that the sequence is in the correct order; it will only prevent them from executing, or reading members, simultaneously. Therefore, locking the functions has only two possible results: 1. It has no effect at all, because the members are independent. 2. It does not solve the real problem, which is arbitrary sequencing at collision, and therefore has no effect. Locking is therefore an unnecessary construct that only bogs down GOOPs, IMHO of course
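The arithmetic in the example above can be checked directly. Each function below is internally atomic (as a lock would make it), yet the two call orders still disagree; the lock picks neither order:

```python
def f_then_g(a, b, c):
    a = b + c      # f: a depends on b
    b = a * c      # g: b depends on the *new* a
    return a, b

def g_then_f(a, b, c):
    b = a * c      # g first: b depends on the *old* a
    a = b + c      # f: a depends on the new b
    return a, b

# With a=2, b=3, c=4, as in the text:
#   f then g gives (7, 28); g then f gives (12, 8).
```

Both outcomes are "correct" executions of locked methods, which is the point: locking serializes f and g but does not choose which serialization you get.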
  18. This topic has been discussed in some other threads, but only by branching off the original topic, so I'm starting a new thread here since there are some things I just don't get. As I understand it by now, a by-ref GOOP requires a get-modify-set (GMS) pass. I can see that in theory the same GMS can take place at the same time for the same instance in different places, and then it is unknown what the data will actually be set to. I can also see that the set (only) method can be used at the same time as the GMS for the same instance in some other place, and then it is also unclear what will happen. To resolve this a lock is required so that data is not read, or at least not set, anywhere else during the GMS pass. So far so good, but what is not clear (to me) is under which circumstances this can actually occur in a real application. Then, if it ever occurs, what will the problem be? And last but not least, will the lock mechanism actually solve the problems of collision? IMO the program flow should make certain that this never occurs in the first place, at least not for the same instance. I mean, the main reason to have a program flow is after all to make sure that data are set and read in the correct order. A program is not a set of random events, even though randomness can be a part of the constraints. Using "ordinary" call-by-value LV programming, it is physically impossible to set the value in different places (at least on a per-wire basis). A by-ref system makes this possible, and that is often the main reason to want one. But when switching to a by-ref system, one of the unavoidable drawbacks is that you have to explicitly program in such a way that data is set and read at the correct places, and at the correct time, or else your program will not work as intended. This is true for simple (by-ref) get and set, and is totally independent of any locking mechanism you have.
The queue primitives are by-ref, but they solve this issue by being synchronous. They lock the data completely, effectively halting execution altogether until some data is available to read, and until data has been read. Thus, queues are not only by-ref primitives; they are also program-flow primitives, making 100% sure that data is read and written in the correct sequence by completely taking control of program flow. Isn't it therefore also true that the only way of making a by-ref system 100% "collision safe" is to let the by-ref system take control over program flow? I think so, because any other alternative will require some explicit control function (that will take full control), or data can be both set and read independent of the implicit diagram flow. So basically what I am saying is that queue primitives (and notifiers to some extent), used as primitives and as intended, are the only by-ref system that will prevent "read and write collision", and they do so by taking full control. Any other alternative requires that the program flow is explicitly made so that collisions will not occur (if collision is a problem, that is). A simple locking mechanism for the GMS pass will therefore not solve the real problem, namely inadequate program-flow control by the programmer when using an asynchronous by-ref system, but will only hide the programming error from being seen. The problem with collision is not solved by locking the GMS pass. If there is a collision between the set and the GMS pass, that collision will be there whether or not the GMS pass is protected. For instance, say a call to the GMS method and the set method are made simultaneously for the same instance. Two things can happen: 1. The GMS executes first; it then locks out the set method until it is finished, and then the set method executes. 2. The set method executes first, then the GMS method.
The resulting data will either be data from the set or data from the GMS, and it will be random which, so the collision is not solved. If the GMS method were not protected, the result would be exactly the same: 1. Set executes first, then GMS. 2. GMS executes first, then set. 3. GMS starts, then the set (only) executes before the set inside the GMS; the result will be the same as 1. In a queue-based by-ref system, I can see that the program can accidentally hang, or data can disappear, if the GMS pass is not protected or locked, but locking will not solve any collision problems. In a non-queue-based ref system, data cannot disappear and the program will not hang, but the collision problem is still there as well. The collision problem is a program-flow problem that exists independent of any locking, and can only be solved either by using ordinary queue primitives (in normal synchronous mode instead of any type of asynchronous by-ref system) or by explicitly making the program so that collisions will not occur (the only way, IMO). A locking mechanism will not change anything regarding the results of a collision, although it probably will protect the program from hanging when using a queue-based asynchronous ref system. Well, it would be nice to hear any comments on this, as I am no expert; I just do not understand what all this locking is about.
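The set-versus-GMS collision described above can be sketched in Python. The class names are mine; the point is that even with the whole pass under a lock, the final value still depends on which of the two colliding calls the scheduler runs last:

```python
import threading

class Ref:
    """By-ref cell with a locked get-modify-set pass. The lock makes
    each pass atomic, but when set() and get_modify_set() collide,
    which one 'wins' is still decided by arbitrary scheduling."""
    def __init__(self, value):
        self._value = value
        self._lock = threading.Lock()

    def set(self, value):
        with self._lock:
            self._value = value

    def get_modify_set(self, fn):
        with self._lock:        # lock spans the whole GMS pass
            self._value = fn(self._value)

    def get(self):
        with self._lock:
            return self._value

r = Ref(10)
# If set(0) and get_modify_set(lambda v: v + 1) are fired from two
# loops at once, the lock guarantees atomicity of each call, yet the
# final value is either 0 (set ran last) or 1 (GMS ran last):
# the ordering problem the text describes remains.
```

Running the two calls in either order sequentially reproduces both "random" outcomes deterministically, which is exactly why the lock alone cannot be said to solve the collision.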
  19. Of course, in making a call-by-ref LVOOP you need a get-modify-set pass. Then you need to protect that *pass* so that no other VI tries to set while it's being modified, etc. But that is, as far as I can see, the only time you will ever need such protection, and it is a very peculiar construct, probably more of a typical example of how NOT to use pointers. In other programming languages there would be no need for such a pass, because the data is edited directly. Still, I wonder how often, if ever, you will modify the same instance in different places simultaneously. Will this ever become a real issue, or is it more of an academic problem? In the other general ref, I have been experimenting with occurrences. There I set an occurrence for each pointer each time data is set, so that data can be read in another loop for arbitrary pointers fully synchronously (similar to a queue). This can be done with seemingly no performance penalty (to set the occurrence), and I wonder therefore whether something similar could be done for the pass. An occurrence could for instance be set each time a pass is finished, and the pass would never start before an occurrence has been acquired. One problem is what to do for the first-ever pass.
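The occurrence-per-pointer idea — fire an occurrence whenever data is written, so a reader blocks until data arrives — maps roughly onto a per-slot event. This is a hypothetical Python analogue (`SyncSlot` is my name), and the timeout also handles the "first-ever pass" problem mentioned above by simply making the reader wait:

```python
import threading

class SyncSlot:
    """One value plus one 'occurrence': the writer sets the event,
    the reader waits on it, giving queue-like synchronous reads on
    an otherwise asynchronous by-ref slot."""
    def __init__(self):
        self._value = None
        self._ready = threading.Event()

    def write(self, value):
        self._value = value
        self._ready.set()       # fire the occurrence

    def read(self, timeout=None):
        # Blocks until the first-ever write (or any later write
        # after the occurrence was consumed).
        if not self._ready.wait(timeout):
            raise TimeoutError("no data written yet")
        self._ready.clear()     # consume the occurrence
        return self._value
```

This sketch ignores the case of two writes landing between a reader's wait and clear; a real implementation would need the same pass protection the post discusses.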
  20. The get/set methods are already O(1), since they are just ordinary array indexing. I made a binary search tree from the pointers to test the performance against Variant attributes. According to Aristos, Variants use a red-black binary tree to store/find named attributes. Attributes as native LV objects (using a red-black tree) will be faster by a factor of 3. The performance in LV 8.2 will be about 50% better for both attributes and the binary tree (attached is the 7.1.1 version). Download File:post-4885-1159604035.zip
  21. OK. Yes, I see that create can be sped up. As the test runs now, it's not far from the worst possible case in "create". The test does 10000 iterations, and when half are done, the bool array searches linearly through 5000 to 10000 indexes 5000 times before it hits; still, this is faster than creating new queues. Yes, I agree. But why would you want to do that? And why would you want to do it simultaneously in several places for the same element? This is after all only a pointer system, a functional global with a reference. I use it for passing data.
  22. Thanks for the comments. About random-access performance: all this is, is an array inside a while loop. Accessing an array requires O(1) operations no matter how long it is, and no matter which index you try to access. The only thing that takes time is allocating new elements, or more precisely increasing the array. Array performance in LabVIEW is pretty good in general, and this is precisely why the performance of this system is as good as it is. The bool array is only used in allocating and freeing, not in get/set. But of course, a random-access test would be nice, and more precise and to the point. About concurrency: if what you are saying is true, then all subVIs in LabVIEW are ticking bombs just waiting to create disaster. So I just don't buy it, sorry. (I'm not saying you are wrong, but if calling a function, a subVI, is not safe, then what is?) You must also remember that this system is totally asynchronous, while queues MUST be synchronous to function properly. I see it like this: it's an ordinary LV2 global with an array, and it will perform just like any other LV2 global with arrays. About concurrency and stacks, I really don't know how to test this; can it be tested?
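The structure described above — an array for the data, with allocation bookkeeping (the bool array) used only in create/delete so that get/set stay O(1) array indexing — can be sketched like this. `PointerStore` and its free-list are my own illustration, not the posted code:

```python
class PointerStore:
    """Pointers are plain array indices, so get/set are O(1);
    only growing the backing array costs anything. The free list
    plays the role of the bool array: it is touched only when
    allocating and freeing, never on get/set."""
    def __init__(self):
        self._data = []
        self._free = []            # recycled indices

    def create(self, value):
        if self._free:
            idx = self._free.pop()   # reuse a freed slot
            self._data[idx] = value
        else:
            self._data.append(value) # grow the array
            idx = len(self._data) - 1
        return idx

    def get(self, ptr):
        return self._data[ptr]       # plain O(1) indexing

    def set(self, ptr, value):
        self._data[ptr] = value      # plain O(1) indexing

    def delete(self, ptr):
        self._data[ptr] = None
        self._free.append(ptr)
```

A free list avoids the linear search the previous post describes for the bool array, at the cost of a little extra memory; either way the get/set path never pays for it.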
  23. I have never used a PXI chassis, but have you tried flushing the file more often? You could try flushing the file at every iteration first, to see if that helps, or whether the opposite happens and it gets even worse. I would guess that the jitter is caused by the file buffer filling up and writing lots of data at once, so flushing the file more often may help.
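The flush-per-iteration experiment suggested above looks like this in a Python sketch (file path and loop are mine; the trade-off is smaller, more frequent writes in exchange for per-iteration overhead):

```python
import os
import tempfile

# Hypothetical log file in the temp directory, for illustration only.
path = os.path.join(tempfile.gettempdir(), "log_flush_test.txt")

with open(path, "w") as f:
    for i in range(100):
        f.write(f"{i}\n")
        f.flush()              # push buffered data to the OS each iteration,
                               # instead of one big burst when the buffer fills
        os.fsync(f.fileno())   # optionally force it all the way to disk
```

If the jitter disappears with per-iteration flushing, the buffer burst was the culprit; if it gets worse, the per-write overhead dominates and a middle ground (flush every N iterations) is worth trying.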
  24. Take a look at this. Is this the can of worms you are *pointing* to? Well, thanks to JFM there is at least some error checking in there, while it is still very efficient (twice the speed of queues). I have another one too, made in LVOOP and using Variants. That one is very general, but the more general it becomes, the lower the efficiency, and then you end up with a just-as-good alternative in an asynchronous queue system. But it would be fun to compare the speeds of those two.
  25. Strange. I thought I had fixed that. Maybe it was OK before the fix. Edit: It's fixed now (I only use get/set myself).