Posts posted by robijn

  1. First, current LVOOP cannot handle trees, graphs or similar complicated data structures easily or effectively. I also now think that the by-value <--> by-reference transformation may make things messy and complicated and decrease the readability of the code.

    I have not actually heard of a single real-world application with objects yet. I think I know enough advanced engineers in the field.

    I think by-value objects and by-reference objects are logically different but of equal importance.

    Well, I think they exist for different purposes. You need by-ref for objects to be useful, and you need by-value if you want to be able to put all traditional LV types in classes. By-value classes are not an improvement over a cluster on all points. Their data cannot be visualized on a front panel like with a normal cluster. And that transparency has always been what made LabVIEW so powerful. Program some code, set some values on the FP, run the VI and see if it works. Rapid Development !

    I've not really understood which basic LabVIEW objects NI would like to extend with the classes. I can imagine you want to have the existing Control class structure converted to the LVOOP class structure, so that you can "extend" controls yourself. However, those would all need to be by-ref, not by-value, or you would not be able to refer to a control on the front panel. But would you actually want classes for setting properties of controls ? Or for some type of data ? Maybe Stephen can give us some examples of useful objects derived from traditional LV types...?

    I like your root classes idea for its clarity and because it keeps things fundamentally sound. A distinctive difference in the looks of the wire and icon should also be possible.

    Joris

  2. [attached image: post-1555-1165572405.png]


    Funny, LV Punk's join date is different in Europe than in the US...

    Maybe we should fight for premium membership with a game of tetris or pong. That way we have given in to the argument that competence is more important than #posts for premium membership ;)

    Joris
  3. Getting back to the notifiers: I tuned in a little late, but I get the story. It is important to note that the problem only occurs with "dynamically" wired nodes. If you wire a single ref there's no problem. It's optimized for that and that should remain. But I would think a list per node, as JFM proposed, would only improve the situation and have no bad consequences like breaking old code.

    This discussion is important: it touches on very important general issues, namely code reuse and LabVIEW's compile-time safety mechanisms.

    I think there is an issue in this discussion that nobody has yet mentioned. During development the developer doesn't know in which circumstances his/her module will be used. Only in projects with a single developer are all the code reuse situations somewhat predictable. If the developer writes code that may be used by others as part of some software, notifiers are risky due to the possibility of a program hang. So even though the notifiers function well in the case the developer uses them for, code reuse may lead to a mysterious program hang later on. Especially if one writes a library that may be used by third parties totally independent of the developer, one should not use notifiers because the library may mysteriously hang.

    :thumbup: Indeed so ! You're very right here.

    Then a second issue that has not been discussed here yet. It's more general and not related to notifiers directly. I think one of the best features of LabVIEW is strong typing. I am not a fan of variants; I'd prefer polymorphic types instead of variants. Strong typing guarantees that it is hard to create bugs, as one cannot connect a wire of the wrong type to a node.

    Yes, variants destroy the strict typing that is so important for building good programs. Theoretically it means you need to run the program for each and every possible situation and check that there is no variant converted back from an unexpected type - which is usually impossible to guarantee. That's why I would favor an "untyped" control, whose type(s) would be determined at compile time. Stephen may remember a picture I sent in once... I will post more about it later.
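
    To put the difference in textual terms (a minimal C++17 sketch, not LabVIEW; std::any stands in for a variant, a template for the "untyped until compiled" control):

    #include <any>
    #include <iostream>
    #include <string>

    // Variant-style: the real type is only checked at run time.
    double add_one_any(const std::any &x)
    {
        return std::any_cast<double>(x) + 1.0;   // throws std::bad_any_cast if x holds something else
    }

    // "Untyped" until compiled: the template is typed per call site,
    // so a wrong type is rejected by the compiler, not at run time.
    template <typename T>
    T add_one(T x) { return x + 1; }

    int main()
    {
        std::cout << add_one(41.0) << "\n";      // fine, T = double
        // add_one(std::string("oops"));         // compile error: no matching operator+
        try {
            add_one_any(std::string("oops"));    // compiles, fails only when it runs
        } catch (const std::bad_any_cast &) {
            std::cout << "variant held the wrong type\n";
        }
    }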

    Joris

  4. Honestly I don't think this suggestion is realistic. I also don't understand how the ability to use by-value objects harms those who want to use only by-ref objects.

    I think the question should be the other way round: you should wonder whether having by-ref objects harms those who want to have by-value objects. I don't think that's the case. So by-ref works for everyone. Don't you think ?

    Why do you think the suggestion is unrealistic ?

    It is a misconception, one that returns over and over again, that by-value is simple to work with. It only seems simple; in practice it is not, because you have to build complex structures to use the objects in a practical and safe way. Starters don't manage to build/use those structures and they end up with bad programs that don't do what they want. That is something the language should prevent. So the current implementation is IMHO not good for starters and it is also not good for advanced users.

    Whereas by-ref wires are good for advanced users and they don't make things any more complicated for starters compared to by-value wires.

    <META>It seems you repeat statements made earlier, which I've replied to and to which no re-reply came. So I've repeated my counter-argument.</META>

    C++ has references in addition to values and pointers. I don't think this comparison is flawed.

    Should I conclude that you want to work with references in the same way as you work with references in C++ ? How ?

    Is Java not a better comparison anyway because of the security it offers ?

    Doesn't Java offer enough features for a professional environment ?

    I don't want all those scary C++ features... Pointers, templates (in the C++ implementation), ouch. C++ has so many things you really don't want. If Stroustrup learned anything from the development of C++, it was that he should be very precise about which features he allowed in. He was not happy with the end result: C++ has practically all proposed features, which makes the language difficult to understand. It's important to keep things simple, to stick to a limited number of basic rules. For that reason I am against two different ways to work with objects. It will make things confusing if you can work with an object by-value AND by-ref.

    Joris

  5. By-value implementation allows the developer very efficiently to parallelize the processing as objects share no common memory segments. This becomes a very important fact as we are beginning to see super scalar processors with eight or even sixteen cores. As objects do not share common memory, the computation can easily be distributed around a cluster which will really provide huge benefits over shared memory processing of by-reference implementation.

    There is no reason why the efficient processing would not be possible with a referencing wire. The "sharing memory" problem would only occur if you modified the same data on two different processors, which would not happen very often. As I've indicated many times before, the problem is much worse with the current by-value implementation, because you have to split the data into two objects and later merge the data again. Or alternatively serialize the actions, which requires some self-implemented, error-free locking.

    I do not think that by-value and by-reference implementations are mutually exclusive, e.g., C++ has both variations side by side.

    This comparison is flawed, as C++ has pointers and LV does not. It is obvious that we don't want pointers. It would be fairer to compare to Java, which offers safety similar to LabVIEW's. Java does not have both by-ref and by-value: only primitive variables are by-value; objects are by-ref.

    Instead of blaming the current LabVOOP implementation for its pitfalls, we developers should be constructive and suggest how LabVOOP can be further developed into a more powerful programming language. NI cannot go back and change the current implementation any more, but it can go forward and make it more flexible and even include by-reference objects.

    Indeed, NI can switch to referenced objects without significant problems and all 8.20 code will still run. It will require an upgrade process when upgrading VIs to a higher version, but NI does that for other things as well.

    If we assume the class concept to be more like a class in other languages, it would need a constructor and a destructor. This way the programmer of the class can enforce real-world consistency when another programmer uses his class, i.e. the second programmer can make far fewer mistakes.

    The upgrade process would need to do the following:

    - Insert a copy constructor where an object wire branches.

    - Add a copy constructor to each class that requires the above insertion.

    It would also allow for automatic destruction (maybe a version later). That would require maintaining a reference count for each object. When a reference wire ends and the referenced object is not used anywhere else anymore, the destructor should be called and the object deallocated (just like is already done with queue refs). This ensures you can, for example, properly close a communications channel when the object using it is no longer in use. This can be done in a deterministic way too, no need for the feared background scavengers.
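
    As a textual analogy (a minimal C++ sketch; std::shared_ptr does exactly this kind of reference counting, and the Channel class here is just a hypothetical stand-in for something like a communications channel):

    #include <cstdio>
    #include <memory>

    // Hypothetical resource that must be closed deterministically.
    struct Channel {
        Channel()  { std::puts("channel opened"); }
        ~Channel() { std::puts("channel closed"); }   // destructor = deterministic cleanup
    };

    int main()
    {
        auto a = std::make_shared<Channel>();   // ref count = 1
        {
            auto b = a;                         // "wire branch": ref count = 2
        }                                       // b ends: count back to 1, nothing closed yet
        return 0;                               // a ends: count hits 0, ~Channel runs right here,
    }                                           // no background "scavenger" needed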

    Then all LV8.20 code would still work even though the wires have become referencing !

    A starter programmer would not even see the difference, but advanced programmers could start taking real advantage of classes in LabVIEW.

    Joris

  6. I think I must have entered that fifth dimension a couple of times. One seems to be able to enter it after some :beer: :blink: . From what I recall, things over there were not as logical as in the normal dimensions. It also seems you can start saying and doing things that you would not usually say or do, but I guess that's caused by spatial anomalies or something. But what you say is very interesting; do I understand that I can theoretically step out of this fifth dimension at another location than where I entered it ?

    Joris

  7. Eiffel :)

    Ouch, you Americans should know his name. He built the structure for the Statue of Liberty... :thumbup:

    Joris

    I like Google Earth a lot too. The fact that it's sometimes difficult to make out roads etc makes it even more interesting. Maybe I'll take flying lessons one day...

    Joris

  8. Numeric Add(dyn Numeric A, Numeric B)

    Numeric Negate(dyn Numeric A)

    Numeric Subtract(dyn Numeric A, Numeric B)

    With OO you would have the addition as part of the class. So if I have a numeric I can add something to it.

    Numeric.Add( Numeric A );

    which adds A to the object's own numeric.

    You are treating the object as a functional library. That's OO abuse :nono:

    But if I understand you correctly you want to use dynamic dispatching to decide which of the following methods you need to select in order to do the correct addition.

    Numeric.Add( Double d );

    Numeric.Add( ComplexDouble s );

    But this is function overloading. Using different arguments to reach different functionality. Besides, function overloading is a compile-time-only feature, not a run-time feature (like dynamic dispatching). Maybe some languages do actually support this dynamic function overloading, I don't know of one.

    What you want can be reached with "normal" OO features in the following way:

    You could define an Addable class of which the objects you want to add should inherit, like with Java's "implements" relationship. Then you can create a function:

    Numeric.Add( Addable A );

    The methods implemented in each class inheriting from Addable should perform the necessary conversion, because Addable itself would have only virtual methods. That's what makes Add work.
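
    In textual form the idea could look like this (a C++ sketch; the class names and the toDouble conversion are just illustrations of the pattern, not anything defined by LabVIEW):

    #include <complex>
    #include <iostream>

    // Anything that can present itself as a plain double can be added.
    struct Addable {
        virtual double toDouble() const = 0;    // each subclass implements its own conversion
        virtual ~Addable() = default;
    };

    struct Numeric : Addable {
        double value = 0.0;
        double toDouble() const override { return value; }
        void Add(const Addable &a) { value += a.toDouble(); }   // one Add covers every Addable
    };

    struct ComplexNumber : Addable {
        std::complex<double> value;
        double toDouble() const override { return value.real(); }  // the conversion this subclass chose
    };

    int main()
    {
        Numeric n;       n.value = 1.0;
        ComplexNumber c; c.value = {2.0, 3.0};
        n.Add(c);                        // dispatches through Addable, no overload per type needed
        std::cout << n.value << "\n";    // prints 3
    }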

    Obviously these theoretical examples are quite unrealistic. No one wants to use OO to add two doubles :) . But the mechanism can be used for real-world things as well.

    I think the good functioning of the core OO system is far more important than all kinds of extensions like function and operator overloading, macros and templates... Currently I see no advantage in using LVOOP because the core functionality is impractical.

    Joris

  9. The difference that I can see is that the submitted code runs 3-4 times faster than the Sort 1D Array approach.

    -D

    Hmm, only 1.6 times faster here (P4 1.7 GHz, LV 7.1.1).

    If you multiply the random number by 2^31-1 and store it as an I32 it is only 1.25 times faster.

    The preallocation stuff is quite fast, and the speed of the sort function is amazing.
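
    For what it's worth, the trick translates directly to text code as well (a rough C++ sketch, not the original benchmark; the array size and timing method are arbitrary):

    #include <algorithm>
    #include <chrono>
    #include <cstdint>
    #include <cstdio>
    #include <random>
    #include <vector>

    int main()
    {
        std::mt19937 rng(42);
        std::uniform_real_distribution<double> dist(0.0, 1.0);

        // Preallocate and fill with random doubles.
        std::vector<double> d(1000000);
        for (auto &x : d) x = dist(rng);

        // Same values scaled into I32 range, as in the post: r * (2^31 - 1).
        std::vector<std::int32_t> i(d.size());
        for (std::size_t k = 0; k < d.size(); ++k)
            i[k] = static_cast<std::int32_t>(d[k] * 2147483647.0);

        auto time_ms = [](auto &&f) {
            auto t0 = std::chrono::steady_clock::now();
            f();
            auto t1 = std::chrono::steady_clock::now();
            return std::chrono::duration<double, std::milli>(t1 - t0).count();
        };

        std::printf("sort doubles: %.1f ms\n", time_ms([&] { std::sort(d.begin(), d.end()); }));
        std::printf("sort int32:   %.1f ms\n", time_ms([&] { std::sort(i.begin(), i.end()); }));
    }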

    I like neat small diagrams more :)

    Joris

  10. hmm, this is an old feature. You just have to add "funkyerrorcluster=true" to your labview.ini. (tested with LV 7.0 and 7.1)

    I don't know if it was a mistake or intended that this is now the default setting; whatever, I like it :)

    It was intended. I've had it for years as well; it already worked with LV 6.1, maybe earlier.

    More colour: Coercion dots are also by default red now. Although that does not really help colourblind people, it does help me.

    Joris

  11. I participated in NI's first coding challenge (BitTwiddling) as an internal at Philips Research. It was fun, I learned a lot and we won it :yes::D

    I still find it the nicest coding challenge so far because it was so simple. You could think about it and get new ideas and you just couldn't stop working on it... Our team really liked it, I guess the boss liked it less :shifty:

    Small, simple... yet quite a challenge to do it in the fastest way.

    Or a coding challenge where the target is the fewest LV nodes (or code bytes) instead of the fastest execution ?

    Joris

  12. As for the code, what does function recursion add? Readability. Your VI works... I think. It took me 20 minutes of staring to figure out what the heck a divide operation had to do with calculating this value.

    Yeah. What about a VI option "instantiate on call", creating a data space when the VI is called ?

    Would behave just like the combo {Open VI ref reentrant, call VI, close ref} currently does.

    Could be nice for objects too (no matter whether the wire is containing or referencing)

    Joris

  13. They are typed. Strongly typed. We're not talking about the type of the VI. We're talking about the type of the call to the VI.

    Well, I've seen things change type suddenly when I rewired something... Scary. It feels like you're not in control. Which was apparently true. :unsure:

    OK then. If I read your remark, I think: you should know what is coming out of a call... an object of the type of the indicator. So apparently that is true, but not with static calls.

    So with dynamic VIs, the type is still defined in the connector pane constant. And you want to cast that back to the type you put in... yes that makes sense. :thumbup:

    But then my question is, why don't you do it the same way on static calls ? It is not very "version robust" if the type that comes out of a VI call can change when you change the contents of the VI without changing the type of the indicator. It conflicts with the class-independence you are trying to achieve. A calling VI may become broken while the connector pane is not changed and the called VI is also still executable... Having all the type propagation info is only a snapshot; it does not make things more "version robust".

    You know, I'm not really into this automatic stuff... I'd rather think about my code and downcast manually. The downcast is a normal object downcast to the type of the wire that goes in. I would think that is very acceptable in your code. It consistently adheres to the definition of the called method. And you see what is happening in the diagram, no hidden config windows.
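
    In textual OO terms the manual downcast I mean is just the ordinary one (a C++ sketch; the VI call is represented by a plain function whose "connector pane" only promises the base type, and the class names are hypothetical):

    #include <cassert>
    #include <memory>

    struct Animal { virtual ~Animal() = default; };
    struct Bear : Animal { void growl() {} };

    // Stand-in for a (dynamic) VI call: the connector pane only promises an Animal.
    std::unique_ptr<Animal> callVI(std::unique_ptr<Animal> in) { return in; }

    int main()
    {
        auto out = callVI(std::make_unique<Bear>());

        // Manual downcast to the type I know I wired in; fails visibly if I'm wrong.
        Bear *b = dynamic_cast<Bear *>(out.get());
        assert(b != nullptr);
        b->growl();
    }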

    But if you're sure the system you have is predictable and "version robust", then you have my vote.

    Joris

  14. Bingo.

    Your thoughts accord with mine, sir. In fact, all of the type propagation information needed to support this function is available today in the compiler.

    The problem is the user interface. We wouldn't be able to just popup on a terminal and say "This terminal is...>> XYZ" because what we would need is a way to specify which terminal, possibly a fairly complex mapping. And if we implement the simple form, you'd probably next want us to support the case of the MinMax.vi -- given two class inputs that produce two class outputs, the top one being the greater and the bottom one being the lesser, you'd want the outputs to become the nearest common parent to two input types. So to really handle this you need a fairly complex map.

    Aren't these problems caused by the fact that you do not store type information in the controls and indicators of LVOOP objects on the front panel? If the user specified what type the "object in/out" should be, all these problems would be solved. If I understood things correctly, you adjust the type of the object at edit/view time to the way it is wired. This is unlike strict type declaration in all other OO languages and also unlike the LabVIEW behaviour we had up to now.

    I filed a bug complaining that an indicator was automatically incorrectly adjusting to what I had wired, while it was _me_ who had made the mistake. I had expected a compiler error, but LabVIEW "solved" the type for me - incorrectly. It is not what you would expect from a compiler. But the bug was dismissed stating it was intended behaviour. :(

    With this behaviour you need to solve all the kinds of problems you mention, which you would not have had if you had made the control/indicator keep its object type. Of course sticking to OO rules, so a Bear can be wired to an Animal control, but a Car cannot.
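
    In textual form that rule is nothing more than ordinary static typing (a minimal C++ sketch):

    struct Animal { virtual ~Animal() = default; };
    struct Bear : Animal {};     // a Bear is an Animal
    struct Car {};               // a Car is not

    int main()
    {
        Animal *control = nullptr;
        Bear bear;
        Car  car;

        control = &bear;     // fine: the "Animal control" accepts a Bear
        // control = &car;   // compile error: a Car cannot be wired to an Animal control
        (void)control;
    }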

    You would not have any dynamic call problems then either, as the type is defined in the connector pane constant.

    Can you tell us some more about why you chose this auto-adjusting type behaviour ?

    I tried pulling this together a couple years ago during LabVOOP development. It's a continuing problem. At this point I'm actually considering something more akin to an "Advanced Conpane Configuration" dialog which would allow you to fully define the mapping of input terms to output terms, specify 5 different behaviors for unwired input terminals and about 3 other really useful behaviors that a simple "click on conpane click on control" cannot specify.

    I'm starting to think there exists a rather elegant solution to the type propagation problem with the addition of a single new primitive to the language.

    Argh ! :headbang: Please just make the object control and indicator typed ! This sounds terrible.

    Joris

  15. Wire level synchronization

    What I propose is the following. An unbundle node could either be normal or "synchronized". A synchronized unbundle would guarantee access to the private data members in a synchronized manner. All data wires originating from a synchronized unbundle would be of a synchronized type, somewhat similar to the way a dynamic dispatch wire is of a special dynamic dispatch type. Such a wire must eventually be connected to a bundle node. When the wire is bundled back into the originating object, the synchronization requirement is released.

    Hi Jimi,

    I've let your synced-wire ideas simmer for a couple of days. Your wires may be very nice for situations I have called locking for specialized cases: things you cannot accomplish with some standard ("automatic") locking scheme. Your wires would indicate that the data in the wire is kind of shielded from outside influence.
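
    In textual terms the proposal resembles scoped locking (a C++ sketch as an analogy only, not the LabVIEW mechanism itself; the class and member names are made up): the "synchronized unbundle" hands out access that stays locked for as long as it lives, and "bundling back" releases it.

    #include <mutex>
    #include <string>

    // The object's private data cluster.
    struct PrivateData {
        std::string name;
        int count = 0;
    };

    class MyClass {
    public:
        // "Synchronized unbundle": while the returned guard lives, the private
        // data is shielded from other accessors; destroying it "bundles back".
        class SyncedWire {
        public:
            explicit SyncedWire(MyClass &o) : lock_(o.m_), data_(o.data_) {}
            PrivateData &data() { return data_; }
        private:
            std::unique_lock<std::mutex> lock_;
            PrivateData &data_;
        };

        SyncedWire unbundleSynchronized() { return SyncedWire(*this); }

    private:
        std::mutex m_;
        PrivateData data_;
    };

    int main()
    {
        MyClass obj;
        {
            auto wire = obj.unbundleSynchronized();  // lock acquired
            wire.data().count += 1;                  // exclusive access on this "wire"
        }                                            // bundled back: lock released
    }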

    My own ideas are for a different part of the locking world, for "automatic" locking.

    The O'Haskell synchronization implementation is an example of this. Integrating the synchronization directly into the user interface allows NI to change the mechanisms under the hood when computer science comes up with more advanced methods.

    It is difficult/impossible to make something as complex as access locking implementation-independent. I don't know O'Haskell. I do know that it cost me months to work out and implement my locking idea 4 years ago. I think that for a good implementation you should say "ah, that actually sounds logical" when someone explains in two sentences how it works, no matter how complex it is internally. The best systems can be used in an obvious way. Your wires have the potential to be like that.

    There will probably be technical problems in allowing the user to connect such a synchronized wire to all existing VIs, since these VIs do not support synchronized wires. Therefore the programming model for all nodes that do not support such synchronized wires will be to branch the wire, pass the non-synchronized branch to the node, and then bundle the result back into the synchronized wire.

    Hmm, you are thinking about what needs to be done if the called subVI also needs to use this locked data ? Should be solvable though... I guess it can be tracked whether the calling VI was the VI that contained the synched wire.

    Also, deadlocks can occur because you lock at the object level. I think you have to clear some obstacles on paper to get this working.

    Joris

  16. For those of you who are savvy to the express world: we have the ability to modify the source when running the configure VI. So when the express VI is configured (either automatically when dropped or when told to by the user), the idea would be to modify the source VI to have the same inputs as the caller and drop the VI Server recursion functions in the guts.

    Hey that sounds very interesting ! So you can create the correct connector pane constant ! I've been waiting for that for quite some time...

    Can you create such a constant given a non-typed VI ref ?

    Joris

  17. I have been reading the manual (for a change :P ) and unless the manual is factually wrong, we already have full locking capability on by default. Non-reentrant sub VIs only operate serialized, one at a time, thus it is impossible to create any problems; at least it is impossible if all parallel code is placed in non-reentrant sub VIs.

    Indeed, it is safe from itself. But when you retrieve such an object from some kind of repository (a FG for example) and use it, you get into exactly the same trouble as with a reference. Or actually more problems, because it takes more time (and code) to execute the modifications if you have a retrieve-modify-store system (with a containing wire) than if you have an immediate attribute change (with a referencing wire). And you get lots of code replication.
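
    The textual equivalent of that retrieve-modify-store problem (a C++ sketch; the "repository" is just a global variable, and the data race is deliberate to show the lost updates):

    #include <cstdio>
    #include <thread>

    // Stand-in for a repository / functional global holding a by-value object.
    static int repository = 0;

    // Retrieve-modify-store: read the object, change it, write it back.
    // Two callers doing this in parallel can overwrite each other's update.
    void incrementUnsafe()
    {
        for (int i = 0; i < 100000; ++i) {
            int copy = repository;   // retrieve
            ++copy;                  // modify
            repository = copy;       // store (may clobber a parallel store)
        }
    }

    int main()
    {
        std::thread t1(incrementUnsafe), t2(incrementUnsafe);
        t1.join(); t2.join();
        std::printf("%d (expected 200000)\n", repository);   // usually less: updates were lost
    }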

    That's why I think NI's argument that concurrency problems cannot occur because a containing wire is used is not telling the whole story and is actually incorrect.

    Now you might say: "then just don't use a repository". The problem is that we want the objects to be accessible from multiple places, and then we're back at the reason the whole ref/no-ref debate started.

    But isn't the obvious answer to the synchronisation debate then quite clear? A call-by-ref object must be made 100% safe in the same manner as a reentrant VI called by a reference node is 100% safe for the same reference, and in the same manner as the default for sub VIs is non-reentrant. Then, if you want a "super-reentrant" by-ref object, you are on your own and must use the already available locking and synchronisation VIs, but use them manually. I mean, if you are so good a programmer that you actually know under which circumstances multithreading is an advantage, then you also know what to do and how to do it with regard to making it safe. The rest of us wouldn't notice the difference, and would be more than happy with a by-ref system even if it were 100% serialized.

    I hope more people will see it like this. The basic use (for a starter in LV) should be simple and 100% safe, and under the most common conditions also fast. If you want more performance you can accomplish that by writing your own locking. I hope NI will some day seriously consider the alternative I present; it meets these criteria. I am more than willing to tell them all the details about it, I have no patents on it ;) It is in my own interest that NI finds a good solution.

    Joris

  18. Oh, there's nothing you can't do if

    - you know enough about locking to understand that you need to put in a method

    That's why I propose a simple, understandable system where you can basically lock an action: a method you want to perform on the object. It should always be possible to lock in more specialized ways, but you should have a simple solution for the simple cases. It seems like you don't want to make the distinction between fairly simple operations requiring simple locking and specialized things that need specialized locking.

    - you have permission to add a method to the class.

    If you cannot modify a class, isn't there much more you cannot do ? I know you could extend a class, but that does not give you the freedom to change the locking scheme.

    The classic example is reading tables of a database. There are tables that are updated only rarely but read from frequently, so to maximize performance, you don't lock the table for a read operation, only for write. Locking on read can make many database projects perform so badly as to be unusable.

    Again you don't talk about the option I mentioned twice to disable locking for a specific method. Please take that into consideration.

    Joris

  19. The problem is not with two actions happening concurrently. The problem is with actions from other parallel threads happening in between successive actions in this thread. There's the rub, and I will contend that it is a lot more significant than just 2% of the problem.

    OK so if I understand you correctly you are thinking of something with these actions:

    1. Lock the data and do something

    2. Do something else and unlock the data

    Well, if you want to execute these actions, you can call those methods from another method that has locking enabled. After all, this sequence of actions on this object can be considered a new action again. Why not make it a method then ? There must be a reason why you want to leave the data in a locked state, come back to it later and unlock it. What actions would you like to perform that you cannot do in a method ?

    If the programmer thinks the locking scheme would hinder performance too much, he could add a semaphore to his private class data and use that to perform his own locking. And disable the method locking.
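
    A textual sketch of what I mean (C++; the class and method names are made up): the short methods have "locking enabled", and a sequence of actions on the same object is simply wrapped into a new method that also locks, so nothing can happen in between.

    #include <mutex>

    class Account {
    public:
        // Short methods with "locking enabled".
        void deposit(int amount)  { std::lock_guard<std::mutex> g(m_); balance_ += amount; }
        void withdraw(int amount) { std::lock_guard<std::mutex> g(m_); balance_ -= amount; }

        // The composite action is just another method with locking enabled:
        // it does the same work as deposit()/withdraw(), but under one lock.
        void transferIn(int gross, int fee)
        {
            std::lock_guard<std::mutex> g(m_);
            balance_ += gross;
            balance_ -= fee;
        }

    private:
        std::mutex m_;   // or, as suggested above, a semaphore in the private class data
        int balance_ = 0;
    };

    int main()
    {
        Account a;
        a.deposit(100);
        a.transferIn(50, 2);   // atomic as a whole; no other method can interleave
    }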

    Further, there's plenty of situations with read operations where preventing the parallelism would work against the performance of the system.

    Hmm, that was why I spent a couple of words explaining that it is not required to use this locking at all times...

    Can you give an example of a situation where you think locking during the method execution does not work ?

    Joris

  20. Both Kevin P and my solutions DO NOT WORK if there are multiple VIs posting to the same queue. Why? Because:

    Kevin's solution fails for similar reasons.

    It is very difficult to get these kinds of parallel access systems right. Locking of objects is tricky; before you know it the system creates a deadlock. Traditional GOOP (which locks at the object level) is sensitive to deadlocks. Example:

    1. Method X of object A is called.

    2. At about the same time method Y of object B is called.

    3. The first method (X) needs to call method P of object B. Object B is locked, however. OK, no problem, we wait.

    4. But then this method (Y) calls method Q of object A. DEADLOCK.

    This condition is very hard to predict and a common problem in systems that lock at the object level. This example is only the simplest form in which the problem can occur. I am afraid this cannot be prevented by transactions either, because actions on the real world may have happened already, making a roll-back impossible (someone already mentioned this, jimi I guess ?). And a transaction system is very, very complex. Object-level locking is a problem.
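
    For reference, the textual version of that deadlock (a C++ sketch; the two mutexes stand for the object-level locks of A and B):

    #include <mutex>
    #include <thread>

    std::mutex lockA, lockB;    // object-level locks of objects A and B

    void methodX()                                  // runs "on" object A
    {
        std::lock_guard<std::mutex> a(lockA);       // 1. A is locked
        std::lock_guard<std::mutex> b(lockB);       // 3. needs B (method P), waits...
    }

    void methodY()                                  // runs "on" object B
    {
        std::lock_guard<std::mutex> b(lockB);       // 2. B is locked
        std::lock_guard<std::mutex> a(lockA);       // 4. needs A (method Q): deadlock
    }

    int main()
    {
        std::thread t1(methodX), t2(methodY);
        t1.join(); t2.join();                       // with unlucky timing this never returns
    }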

    Any reference system must leave the burden on the programmer to handle locking/unlocking.

    But should the system not facilitate the programmer ?

    I see no reason why LabVIEW could not do locking for me. I will explain this more.

    My point in this case is that the existence of refnum APIs in LabVIEW is not sufficient to say that a general by-reference system can be created. I'm certainly not ruling it out. But the "acquire-modify-release" mechanism has to be custom written for all the modifications that the reference's author thinks will ever need to be handled atomically.

    Why do you think that ? This acquire-modify-release is the same every time. Pure code replication.
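
    To make the "pure code replication" point concrete (a C++ sketch; the whole acquire-modify-release pattern fits in one generic helper, which is exactly the kind of thing the language itself could generate):

    #include <mutex>
    #include <utility>

    // One reusable acquire-modify-release: lock, apply the modification, unlock.
    template <typename T>
    class Ref {
    public:
        template <typename F>
        void modify(F &&f)
        {
            std::lock_guard<std::mutex> g(m_);   // acquire
            std::forward<F>(f)(value_);          // modify
        }                                        // release

    private:
        std::mutex m_;
        T value_{};
    };

    int main()
    {
        Ref<int> counter;
        counter.modify([](int &v) { ++v; });     // every call site reuses the same three steps
    }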

    Whatever general reference system that LV puts forth someday cannot automatically deal with all the locking issues that everyone seems to think a native reference implementation would handle.

    No indeed, but it can handle the 98% standard cases: simply preventing modifications from getting screwed up because two of those actions happen in parallel. That's the most common problem.

    I think locking at the method level is very practical and understandable. My reasoning for that is quite simple: a method is meant to perform an action, and for that action it is easy to say whether it should lock the data. If you have some tree and need to count the items in the tree, you (probably) don't need to lock. But if you want to modify an item in that tree, you'd probably better lock it. This is very understandable.

    Further, you don't want to lock long-running actions that call many other short-running modifying methods. As long as the short-running methods lock in a safe way, the long-running method does not need to lock at all. It is already safe.

    Then, what should you lock while the locking method runs?

    The most logical thing is the object, but then you can get deadlocks (as explained above).

    For my thesis 4 years ago I created an object system (screenshot). It did locking at the repository level, so the whole repository would be locked when a method requested a lock. That sounds like a terrible thing, but in practice it is not so bad. After all, only a limited number of methods are locking methods; most of them don't lock at all. The methods that do lock are usually all short-running methods, so the lock is off again in the blink of an eye. No problem at all. It is the responsibility of the programmer of this system to indicate which methods should lock. Or maybe better the other way round: he should uncheck the "exclusive" checkbox for the methods that don't need locking. This way the system works without any inconsistencies even for a starter. It will not slow down the slightest bit as long as no methods are executed in parallel, and starters don't often do that.

    Please let me know what you think of this.

    Joris
