Posts posted by bsvingen

  1. Exactly which languages are you talking about where the locking is not necessary? I regularly program in Java, C, and C++. In the past I've worked in Pascal, Haskell, Basic, and Lisp. In all of these languages, if the programmer spawns multiple threads, the programmer must handle all locking. These languages have no support for automatically protecting data from modification in separate threads. The closest you come is the "synchronized" keyword in Java, but even that doesn't handle all the possible woes, particularly if you're trying to string multiple "modify" steps together in a single transaction.

    Am I misunderstanding what you're referring to?

    C++ actually. I made a matrix class system in C++ once. It was part of a larger engineering application, and the way it worked was to store 2D arrays as 1D arrays and call the core Fortran LAPACK solvers (Fortran stores all arrays in 1D for performance reasons). It started on an SGI using gcc, but later moved to Windows using Watcom compilers (Fortran and C). Anyway, I can't remember ever considering a locking mechanism to protect anything within a class.

    I think LAPACK, BLAS and derived routines can be found in parallelized versions now, using multiple threads. I still don't see any need for locking as long as you don't access the same memory, since you don't have a get-modify-set pass but operate directly on the individual data. I mean, what is the point of a multithreaded/multiprocessor application if you implement a lock that effectively serializes the execution, as is done in the available GOOPs? Then you would get much better performance from a single thread/processor. The only consideration is the relatively few inter-thread calls, but they too can be handled by ensuring that all inter-thread routines write to separate memory locations.

    The by-value approach of LabVIEW ensures that we always operate on different data in parallel loops, and therefore multithreading is a relatively simple thing to implement, since the memory collision considerations are solved, or more precisely irrelevant. The same thing could, however, easily be implemented in C++ using only call by value, but this would result in two major problems:

    1. Loss of performance due to constant memory allocation/deallocation.

    2. No way of effectively handling inter-thread (read: inter-loop) calls.

    As I see it, these are also the two major problem areas of LabVIEW, and number 2 in particular is something that puzzles all newcomers to LabVIEW after a week or two. Problem 1 can only be solved by using by-ref. Problem 2 can be solved in many ways depending on what the program actually does, and locking can be one solution (or at least part of the solution).

    But, as I said, I'm no expert. Maybe I have too simplistic a view on this.

  2. No, a native implementation would not take care of this. Let me lay that myth to rest right now.

    If we had a native by-reference implementation, the locking would have been left completely as a burden to the programmer. Would you suggest we put an "acquire lock" and "release lock" around every "unbundle - modify - bundle" sequence? When you do an unbundle, how do we know that you'll ever do a bundle -- there are lots of calls to unbundle that never reach a bundle node, or, if they do, it's buried in a subVI somewhere (possibly a dynamically specified subVI!). There are bundle calls that never start at an unbundle. We could've implemented a by-reference model and then had the same "Acquire Semaphore" and "Release Semaphore" nodes that we have today, only specialized for locking data. Great -- now the burden on the programmers is that every time they want to work with an object, they have to obtain the correct semaphore and remember to release it -- and given how often Close Reference is forgotten, I suspect that the release would be forgotten as well. Should you have to do an Acquire around every Multiply operation? A Matrix class would have to if the implementation were by reference. Oh, LV could put it implicitly around the block diagram of Multiply, but that would mean you'd be Acquiring and Releasing between every operation -- very inefficient if you're trying to multiply many matrices together. So, no, LV wouldn't do that sort of locking for you. There is no pattern for acquire/release we could choose that would be anywhere near optimal for many classes.

    The by-reference systems used in the GOOP Toolkit and other implementations do not solve the locking problems. They handle a few cases, but by no means do they cover the spectrum.

    The by-value system is what you want for 99.9% of classes. I know a lot of you don't believe me when I've said that before. :headbang: It's still true. :ninja:

    I think it is the get-modify-set pass that could easily be protected or locked in a native implementation of GOOP, just as it is locked in OpenGOOP. But probably more important is the fact that if by-ref in general were natively implemented in the same manner as in other languages, there would be no need for a get-modify-set pass at all.
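
    The point made in the quote above, that locking each individual get and set does not make a multi-step transaction atomic, can be sketched in Python; the class and the interleaving are invented for illustration:

```python
import threading

class LockedCell:
    """A value whose get and set are each individually locked --
    the analogue of locking every unbundle/bundle on its own."""
    def __init__(self, value):
        self._value = value
        self._lock = threading.Lock()

    def get(self):
        with self._lock:
            return self._value

    def set(self, value):
        with self._lock:
            self._value = value

# Two "get - modify - set" transactions interleaved step by step.
# Every get() and set() is properly locked, yet the combined
# transaction is not atomic: B's update is lost.
cell = LockedCell(10)

a = cell.get()      # transaction A reads 10
b = cell.get()      # transaction B reads 10
cell.set(b + 5)     # B writes 15
cell.set(a * 2)     # A writes 20 -- B's +5 has vanished

print(cell.get())   # 20, not 30: per-operation locks did not help
```

To make the transaction safe, a single lock would have to be held across the whole get-modify-set span, which is exactly the acquire/release burden the quote describes.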

  3. I just wonder if all this is due to some patent-related issues. For instance, there is no good reason that all subVIs should have both a front panel and a block diagram. In 99% of cases, VIs are not used as virtual instruments but as functions and subroutines. All you need for functions and subroutines is the block diagram and the icon with connectors. Also, a by-value object will not change anything of the basics. A by-ref object would be impossible to protect in any form because it already exists (OpenGOOP, dqGOOP etc.), and a native implementation would require a storage that is not a VI to be efficient, and that blows the patents. Just some wild guesses :)

  4. ....if you really think of it as an object it is very difficult to wrap your head around the fact that you are creating a new instance of the object if you branch a wire...

    Maybe instead of calling it object-oriented, we could call it object-disoriented :ninja: ;) Joking aside, with LVOOP we can in fact make a whole new and different set of reference types. Instead of just being a dumb typedef'd datalog reference, we can now put a whole lot of information into the reference itself.

  5. Just to confirm, you are speaking about an LV2 global that not only stores data, but contains the methods to act on it?

    If this is the case then I agree that in the simple cases, i.e. numeric operations etc. this is the way to go.

    On the other hand, if you only use the LV2 global as storage, then you must have get/set methods to act on the data, and then race conditions are likely to occur if no locking mechanism is used.

    The "LV2 global with actions" approach becomes way too cluttered when building instrument drivers etc. And you can only perform one action at a time, since actions are performed within the global itself (and the global cannot be reentrant, or at least must be encapsulated in a non-reentrant VI). Using GOOP with locking, methods can be called almost simultaneously.

    Don't get me wrong, I like and use LV2 globals all the time: to pass data, as intermediate buffers, for data filtering etc. I'm just not ready to throw locking out.

    I also don't think we would have had this discussion if NI had gone the by-reference path instead, since the native implementation would take care of this.

    /J

    Yes, LV2 globals à la LCOD. The way I use them in a very large application of mine is to make them reentrant, then call them with a Call By Reference node. Then I can have as many as I want, all by ref.

  6. This is just plain bull. Microsoft has a big team (100, 1000 people?) making their flight simulator, and they still cannot come up with something close to X-Plane. X-Plane is made by one single man, and it is written entirely in C :worship: And there are other examples: Linux for instance, the first C++ compiler, lcc.

    One single person can be more productive in LabVIEW; things go much faster from idea to a working application. But from there on I think things even out, and in the end the one who started with C actually becomes more productive once all the limitations in LabVIEW really start to become problematic.

  7. Let me make this statement: A referencing OO system does not need a Retrieve-Modify-Store system where you retrieve all data and store all data.

    I agree on this one, if for no other reason than the fact that functional globals outperform all GOOPs I know of. The more I think about it, the more LV2OO seems to be the way to go. Here you can modify the exact member you want with no need for any GMS pass. All members are protected within the global, and it is really easy, fast and unproblematic to add new actions (once everything is set up). The only concern is performance, since the dynamic dispatching does not seem to be too efficient as of today, and because the LV2OO style adds more VI calls (which, because of the dynamic dispatch, cannot have subroutine priority). But if I have understood Aristos correctly, the performance will increase in future versions, maybe with stateless VIs?

  8. I don't think locking is about keeping your program executing in the right order; for me it is a way to avoid race conditions: in order to modify data you must first get data from memory.

    Exactly, I agree 100%. But when the members are mutually dependent, locking will not solve the problems a race condition causes. The problem is still there despite all the locking you can think of. The only way to solve it is to make sure no race condition can ever occur. Therefore, locking is not even necessary in most cases.

    With an LV2 global, you can do the same thing, but then you must protect your global by having only one GMS VI, and this VI must be non-reentrant.

    LabVIEW will then schedule the GMS access automatically. If you allow more than one GMS method, then the protection against race-conditions is gone.

    I don't understand this. In a functional global all the members are internal to the global; there is no get and set, only unbundle and bundle at most (but even that isn't necessary). If the global is non-reentrant, you can have a lot of different actions. However, you can also use reentrant globals and call them by a Call By Reference node; there will still be no get and set. I use this all the time, no problems yet.
    With a get/lock method you can lock data, perform actions on the data in subsequent VIs, and then update the object when the action is done.

    This allows us to create methods that only update a single attribute, leaving the others intact, resulting in an easier API for the user compared to one single GMS VI in the LV2 global case, at least when we have many attributes to handle.

    Yes, I agree that it can be problematic to change and add things in a global. But with LVOOP this is actually solved; I have tried it and it works perfectly. An example can be found here.

  9. It seems that you are correct about this one. A lock here will indeed solve the problem. The reason is that a collision between init and/or a multitude of increment methods does not cause any problems with the sequence of execution. The member depends only on itself, so that if two increment methods collide, it is irrelevant which one executes first.

    Your lock works because your member is independent of the program flow. This also means that it will work with no wires attached, and will therefore be much more efficient written as a simple functional global with init and increment actions. The problem still exists for members that are not mutually independent.
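
    The init/increment case above, where locking alone is sufficient because the member depends only on itself, can be sketched in Python (the class is a made-up stand-in for the functional global):

```python
import threading

class Counter:
    """A counter whose increment is order-independent: it never
    matters which of two colliding increments runs first, so a
    lock around the read-modify-write is all the protection needed."""
    def __init__(self):
        self._value = 0
        self._lock = threading.Lock()

    def init(self):
        with self._lock:
            self._value = 0

    def increment(self):
        with self._lock:
            self._value += 1   # read-modify-write held under one lock

    def value(self):
        with self._lock:
            return self._value

counter = Counter()
threads = [threading.Thread(target=lambda: [counter.increment() for _ in range(1000)])
           for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter.value())  # always 8000, regardless of interleaving
```

Contrast this with the dependent-member example in the next post, where the final result depends on execution order and no lock can fix that.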

  10. Here is an example that hopefully will clarify what I mean (or show how completely I have misunderstood this :blink: ).

    An object, A, has two members, a and b. In addition there is a value, c, that is used to modify those members via two methods f and g.

    A collision can only be a problem if two GMS passes call the same instance, A1, the two members are dependent, and the sequence of calling f and g is not arbitrary. They are dependent if the modification consists of at least a = f(b) and/or b = g(a). For instance:

    With f = b + c and g = a*c, and a = 2, b = 3 and c = 4, the results will be:

    a = f = 3 + 4 = 7 and b = g = 7*4 = 28

    or

    b = g = 2*4 = 8 and a = f = 8 + 4 = 12

    This shows that the sequence of calling them will produce different results. Therefore a collision of two methods (the same method, or two or more different methods) with dependent members is a problem no matter what, because the sequence in which they are called will influence the result. Protecting (locking) these functions will not solve the problem of the collision, simply because the problem is there independent of the protection.

    If the two members are independent, there will be no problem with collisions at all, because they will not influence each other. If the members are dependent but the sequence is arbitrary (which must cover only some very few odd cases), then I'm not sure what will happen, but that doesn't really matter for now.

    It is not enough to protect (lock) the member functions f and g. To prevent problems with collisions, the only solution is to make sure that f and g never collide; they must be sequenced to execute in the correct order by the program flow. Locking them will not ensure that the sequence is in the correct order, but will only prevent them from executing, or reading members, simultaneously.

    Therefore, locking the functions has only two possible outcomes:

    1. It has no effect at all because the members are independent.

    2. It does not solve the real problem, which is arbitrary sequencing at collision, and therefore has no effect.

    Locking is therefore an unnecessary construct that only bogs down GOOPs, IMHO of course :)
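
    The order-dependence argument above can be checked directly. This is a minimal sketch, with f, g, a, b and c exactly as in the worked example:

```python
def run(order, a=2, b=3, c=4):
    """Apply f: a = b + c and g: b = a * c in the given order,
    returning the final (a, b)."""
    for step in order:
        if step == "f":
            a = b + c
        elif step == "g":
            b = a * c
    return a, b

print(run("fg"))  # f first: a = 3 + 4 = 7,  then b = 7 * 4 = 28  -> (7, 28)
print(run("gf"))  # g first: b = 2 * 4 = 8,  then a = 8 + 4 = 12  -> (12, 8)
```

Both orderings are individually "safe", so a lock that merely serializes them in an arbitrary order cannot decide which result is the intended one; only the program flow can.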

  11. This topic has been discussed in some other threads, but only by branching off the original topic, so I am starting a new thread here, since there are some things I just don't get.

    As I have understood it by now, a by-ref GOOP requires a get-modify-set (GMS) pass. I can see that in theory the same GMS pass can take place at the same time for the same instance in different places, and then it is unknown what the data will actually be set to. I can also see that the set (only) method can be used at the same time as the GMS pass for the same instance in some other place, and then it is also unclear what will happen. To resolve this, a lock is required so that data is not read, or at least not set, anywhere else during the GMS pass.

    So far so good, but what is not clear (to me) is under which circumstances this can actually occur in a real application. Then, if it ever does occur, what will the problem be? And last but not least, will the lock mechanism actually solve the problems of the collision?

    IMO the program flow should make certain that this never occurs in the first place, at least not for the same instance. I mean, the main reason to have a program flow is after all to make sure that data are set and read in the correct order. A program is not a set of random events, even though randomness can be a part of the constraints. Using "ordinary" call-by-value LV programming, it is physically impossible to set the value in different places (at least on a per-wire basis). A by-ref system makes this possible, and that is often the main reason to use one. But when switching to a by-ref system, one of the unavoidable drawbacks is that you have to explicitly program in such a way that data is set and read at the correct places, and at the correct time, or else your program will not work as intended. This is true for simple (by-ref) get and set, and is totally independent of any locking mechanism you have.

    The queue primitives are by-ref, but they solve this issue by being synchronous. They lock the data completely, effectively halting execution altogether until some data is available to read, and until the data has been read. Thus, queues are not only by-ref primitives; they are also program-flow primitives, making 100% sure that data is read and written in the correct sequence by completely taking control of the program flow. Isn't it therefore also true that the only way of making a by-ref system 100% "collision safe" is to let the by-ref system take control over the program flow? I think so, because any alternative will either require some explicit control function (that takes full control), or data can be both set and read independent of the implicit diagram flow. So basically what I am saying is that the queue primitives (and notifiers to some extent), used as primitives and as intended, are the only by-ref system that will prevent "read and write collisions", and they do so by taking full control. Any other alternative requires that the program flow be explicitly made so that collisions cannot occur (if a collision would be a problem, that is). A simple locking mechanism for the GMS pass will therefore not solve the real problem, namely inadequate program-flow control by the programmer when using an asynchronous by-ref system, but will only hide the programming error from being seen.

    The problem of the collision is not solved by locking the GMS pass. If there is a collision between the set and the GMS pass, that collision will be there whether or not the GMS pass is protected. For instance, a call to the GMS method and the set method is made simultaneously for the same instance. Two things can happen:

    1. The GMS pass executes first; it then locks out the set method until it is finished. Then the set method executes.

    2. The set method executes first, then the GMS method.

    The resulting data will either be data from the set or data from the GMS pass, and it will be random which, so the collision is not solved.

    If the GMS method were not protected, the result would be exactly the same:

    1. Set executes first, then GMS.

    2. GMS executes first, then set.

    3. GMS starts, then set (only) executes before the set in the GMS pass; the result will be the same as 1.

    In a queue-based by-ref system, I can see that the program can accidentally hang, or data can disappear, if the GMS pass is not protected or locked, but locking will not solve any collision problems. In a non-queue-based ref system, data cannot disappear and the program will not hang, but the collision problem is still there as well. The collision problem is a program-flow problem that exists independent of any locking, and it can only be solved by using the ordinary queue primitives (in normal synchronous mode instead of any type of asynchronous by-ref system) or by explicitly writing the program so that collisions will not occur (the only way, IMO). A locking mechanism will not change anything regarding the results of a collision, although it probably will protect the program from hanging when using a queue-based asynchronous ref system.

    Well, it would be nice to hear any comments on this. As I am no expert, I just do not understand what all this locking is about. :)

  12. Of course, in making a call-by-ref LVOOP you need a get-modify-set pass. Then you need to protect that *pass* so that no other VI tries to set while it's being modified etc. But that is, as far as I can see, the only time you will ever need such protection, and it is a very peculiar construct and is probably a typical example of how NOT to use pointers. In other programming languages there would be no need for such a pass, because the data is edited directly. Still, I wonder how often, if ever, you will modify the same instance in different places simultaneously. Will this ever become a real issue, or is it more of an academic problem?

    In the other general ref system, I have been experimenting with occurrences. There I set an occurrence for each pointer each time data is set, so that data can be read in another loop for arbitrary pointers fully synchronously (similar to a queue). This can be done with seemingly no performance penalty (to set the occurrence), and I wonder therefore if it would be possible to do something similar for the pass. An occurrence could, for instance, be set each time a pass finishes, and the pass would never start before an occurrence has been acquired. One problem is what to do for the first ever pass.

  13. I don't think that the create/dispose methods are that critical; you only do this once, but the get/set methods are called more often.

    To make the create method more or less independent of the number of elements in the array, one can implement a linked list that holds the indices of the free elements, each pointing to the next.

    To create a new pointer, just remove the first element from that list, i.e. no search for the next free element. Dispose then adds the freed pointer to the end.

    I'll see if I can find some old linked list implementation on my disks.

    /J

    The get/set methods are already O(1), since it is just ordinary array indexing.

    I made a binary search tree from the pointers to test the performance against Variant attributes. According to Aristos, Variants use a red-black binary tree to store/find named attributes. Attributes as native LV objects (and using a red-black tree) will be faster by a factor of 3. The performance in LV 8.2 will be about 50% better for both attributes and the binary tree (attached is the 7.1.1 version).

    Download File:post-4885-1159604035.zip
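
    The free-list idea quoted above (O(1) create and dispose, no linear search for a free slot) can be sketched like this; the class and method names are made up for illustration:

```python
from collections import deque

class PointerPool:
    """Fixed-size storage with O(1) create/dispose via a free list:
    create pops the first free index, dispose appends it to the end,
    so no linear search over a 'used' flag array is ever needed."""
    def __init__(self, size):
        self.data = [None] * size
        self.free = deque(range(size))  # indices of the free slots

    def create(self, value):
        index = self.free.popleft()     # O(1), no search
        self.data[index] = value
        return index

    def dispose(self, index):
        self.data[index] = None
        self.free.append(index)         # O(1)

    def get(self, index):
        return self.data[index]        # plain array indexing, O(1)

pool = PointerPool(4)
p = pool.create("hello")
print(pool.get(p))   # hello
pool.dispose(p)
q = pool.create("world")
```

This is the same structure as the bool-array version, except that the free list replaces the linear scan for the next free index.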

  14. OK. Yes, I see that create can be sped up. As the test runs now, it's not far from the worst possible case in "create". The test does 10000 iterations, and when half are done, the bool array searches linearly through 5000 to 10000 indices 5000 times before it hits; still, this is faster than creating new queues.

    When the user calls the Get Data operation, he gets the content of the stack, but the data also stays in the stack. If the user wants to modify the data in the stack, he/she needs to get it using Get Data, modify it outside your VIs, and then write it back.

    Yes, I agree. But why would you want to do that? And why would you want to do it simultaneously in several places for the same element? This is after all only a pointer system, a functional global with a reference. I use it for passing data.

  15. Thanks for the comments. About random access performance: all this is, is an array inside a while loop. Accessing an array requires O(1) operations no matter how long it is, or which index you try to access. The only thing that takes time is allocating new elements, or more precisely growing the array. Array performance in LabVIEW is pretty good in general, and this is precisely why the performance of this system is as good as it is. The bool array is only used in allocating and freeing, not in get/set. But of course, a random access test would be nice, more precise and to the point.

    About concurrency: If what you are saying is true, then all subVIs in LabVIEW are ticking bombs just waiting to create disaster. So I just don't buy it, sorry :D (I'm not saying you are wrong, but if calling a function, a subVI, is not safe, then what is?). You must also remember that this system is totally asynchronous, while queues MUST be synchronous to function properly.

    I see it like this: it's an ordinary LV2 global with an array, and it will perform just like any other LV2 global with arrays. About concurrency and stacks, I really don't know how to test this; can it be tested?

  16. I have never used a PXI chassis, but have you tried to flush the file more often? You could first try flushing the file at every iteration, to see if that helps, or if it does the opposite and gets even worse. I would guess that the jitter is caused by the file buffer filling up and then writing lots of data at once, so flushing the file more often may help.
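
    The idea of flushing every iteration, sketched in Python (the file name and the data are invented); the point is to trade a small, steady per-write cost for avoiding one large, jitter-inducing buffer dump when the buffer fills up:

```python
# Write one sample per iteration and flush immediately, instead of
# letting the runtime buffer many iterations and dump them at once.
def log_samples(samples, path):
    with open(path, "w") as f:
        for sample in samples:
            f.write(f"{sample}\n")
            f.flush()          # push this iteration's data out now

log_samples([1.0, 2.5, 3.7], path="demo.log")
```

Whether this helps on a given PXI target is exactly what the suggested experiment (flush every iteration vs. not at all) would reveal.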

  17. Yeah right, and get all the problems of C... no thanks. That's why you can't manually create a ref in Java. If you have an object (= ref) in Java, then it is always pointing to an existing object (unless the reference is explicitly made NULL). Besides, in LV, what should the reference then point to? To an object in a shift register? Which you could then silently modify without the "owner" knowing about it? That's a can of worms.

    Well, don't we all have semaphores on our VIs that control parallel execution of the same VI? We use this mechanism to make sure calls to functional globals (LV2-style globals) are serialized correctly and fairly. This shows it can be VERY efficient.

    Joris

    Take a look at this :D Is this the can of worms you are *pointing* to? :D

    Well, thanks to JFM there is at least some error checking in there while still being very efficient (twice the speed of queues). I have another one too, made in LVOOP and using Variants. That one is very general, but the more general it becomes, the lower the efficiency, and then you end up with a just-as-good alternative in an asynchronous queue system. But it would be fun to compare the speeds of those two.

  18. Hi all,

    I am encountering an error while building an application. The error description is pasted below:

    Error 1 occurred at Invoke Node in ABAPI Update Libraries.vi->ABAPI Dist Build LLB Image.vi->ABAPI Copy Files and Apply Settings.vi->EBEP_Invoke_Build_Engine.vi->EBUIP_Build_Invoke.vi->EBUIP_Build_Invoke.vi.ProxyCaller

    Possible reason(s):

    LabVIEW: An input parameter is invalid.

    I don't know why this error occurs. Has anybody encountered this type of error?

    By the way, the VI runs without any problem; it is only when I try to create an EXE that this error occurs.

    Any help on this is greatly appreciated.

    labviewcatfish

    I have had the same error. The "reason" was that the builder just didn't understand a static VI reference for some odd reason. When I changed the static VI reference to an ordinary "Open VI Reference" it worked OK. I told NI about this, but the really nasty bug IMO was that the error string gave me no explanation of where or what was "wrong", and that everything worked OK in the LV environment. However, your bug could be something else.

  19. Hi everyone,

    I want to point out that any "retrieve - modify - store" system has a problem: when/where do you retrieve and store the data when calling another method of the current class?

    I see two possibilities.

    1. You do it in a wrapper VI. This has the disadvantage that each method needs a wrapper VI, so you effectively double the number of VIs. Code replication. If you want to call a method of the current class, you don't call the wrapper VI but the method VI directly, and you hand it the object data directly.

    2. You do it in the method VI. Code replication there too. Before you call another method of the current class, you need to store the current object (if you have made any modifications), because the other method retrieves the object again.

    Neither way supports calling two methods in parallel, because they would not modify the same object but two different copies, and on return of the methods both data sets would be saved to the same object in the repository, so one set of modifications would be lost.

    The way to prevent this would be a system that immediately stores any change into the object, like you would handle an object in any other language. Much more intuitive, less prone to error, and much, much less code replication. Think outside the GOOP box!

    Joris

    This problem will always be here due to the call-by-value nature of LabVIEW. The only practical way of storing data by ref is to put it in a cluster and store that cluster (an LVOOP object is more or less an advanced cluster). If you want to modify any data in that cluster, you have to get the cluster before you can do anything with the individual data. One alternative is to store each individual piece of data in an LV2 global instead of a cluster. This would probably work, but would be very impractical, because you would have to make get/set actions for every single element, plus wrappers, because the LV2 global would have to be reentrant and would have to be called by a Call By Reference node. Besides, you can't use LVOOP objects in such a configuration.

    Anyway, I have still not seen any satisfactory reason why LabVIEW just has to be call by value and not by ref. The parallelism argument just does not hold water, because the main (often the only) reason for doing parallel runs is to be able to read/write the same data in different places. This can only be done by using some kind of by-ref system.
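
    The lost-update problem Joris describes in the quote (two parallel methods each retrieve a copy, and the last store wins) can be sketched like this; the repository and method names are invented for illustration:

```python
# Each method retrieves a full copy of the object, modifies it, and
# stores the whole copy back -- the retrieve-modify-store pattern.
repository = {"A1": {"a": 2, "b": 3}}

def retrieve(ref):
    return dict(repository[ref])   # a copy, like a by-value get

def store(ref, obj):
    repository[ref] = obj

# Method 1 and method 2 both retrieve before either one stores:
obj1 = retrieve("A1")
obj2 = retrieve("A1")
obj1["a"] = 100     # method 1 modifies member a
obj2["b"] = 200     # method 2 modifies member b
store("A1", obj1)
store("A1", obj2)   # overwrites method 1's whole copy

print(repository["A1"])  # {'a': 2, 'b': 200} -- the a = 100 update is lost
```

A system that stores each change into the object immediately (true by-ref mutation) would keep both updates, which is exactly Joris's point.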

  20. I think you are being a bit negative here, maybe, because:

    "Always listen to the experts, they'll tell you what can't be done, and why.

    Then go do it."

    :D

    Seriously, there is a lot of truth in those words, but maybe something like IMAQ is not the best project for open source. It is probably better to do it as a "one man show". If you know enough LabVIEW "to be dangerous" and a LOT about image acquisition and analysis, it should be quite possible IMO, but probably much more difficult the other way around. Maybe THE greatest strength of LabVIEW is that by maximizing on all the benefits LabVIEW has to offer (interactive interfaces, graphs, interfaces to literally all kinds of external devices and buses, extremely rapid code development, fast execution (which can sometimes be really tricky, but still), an almost unlimited number of functions and amount of functionality, etc.), you have a lot of headroom to concentrate on your particular field of expertise. The more complicated or less mainstream your field of expertise is, the more good solutions and good applications will be created by understanding that field instead of by clever coding, as long as you know enough LabVIEW to be dangerous :) Of course, you can say the same thing about all programming languages, but for those applications where you can take advantage of the benefits of LabVIEW, this gives you a tremendous advantage.
