LAVA 1.0 Content

Everything posted by LAVA 1.0 Content

  1. The discussion in this thread is really off topic. The discussion here seems to be whether OOP is needed in LabVIEW or not. I specifically started a new topic to discuss ways to implement synchronization of shared resources between multiple threads. I'd appreciate it if you dear virtual architects could stick to the topic, or at least close to it. If you would like to discuss OOP in general, please start a new topic for that issue. Especially if you would like to blame one another, start a new topic in the LAVA lounge about why X.X.X. is a bad programmer or whatever. It is very hard for people to find relevant information on this forum if the topic doesn't describe the discussion at all.
  2. I wonder if this can be somehow related to the notifier problem reported on this forum last week.
  3. Naturally I cannot share my, or rather our, work projects on this forum.
  4. I believe the coffee breaks are due to recompiling the code. I've seen this slow editing behaviour in a class private data control that is referenced by hundreds of VIs. Whenever I change the control, LabVIEW recompiles all the VIs referencing this control. It does so even when I just move something a little in the control. It's pretty annoying, I must agree. However, why the recompiling happens and how it can be avoided cannot be said with the information you have provided so far. You must either provide us the VI or contact your local National Instruments support (if you have a support contract or if you can identify a definite bug in LabVIEW). Verify that your VIs are not referring to any VIs in the LabVIEW 7.1.1 world. Then try this trick as well. Make sure all the VIs in your project are open simultaneously. You can verify this by dropping all the VIs as subVIs into one block diagram. Then press Ctrl+Shift+click on the run button. This forces a recompile of all the code. Save everything at the end. And yes, change the .vit to .vi, as you don't need a .vit in LabVIEW to open multiple references to the same file. Also make sure that you use 0x08 (prepare for reentrant call) in Open VI Reference if you are about to call multiple instances of the same VI.
  5. It seems after all this debate that we fully agree on this issue. I also appreciate the dataflow nature of LabVIEW. I definitely don't want modifications that would go against the dataflow nature. If you read my previous posts on this forum, I have been pushing to see new features that would bring LabVIEW even closer to real dataflow to reach the performance gains of pure dataflow. Instead of LabVOOP I would've liked to see LabVIEW evolve more towards the features of functional programming languages, which are pretty close to dataflow languages in many respects. There are many alternative ways to provide modularity and data abstraction in a programming language. Among all the possible solutions to this problem, National Instruments chose to support concepts familiar from object-oriented programming. So that's what we have to live with. As object-oriented-resembling concepts were chosen to be the de facto abstraction layer and modularity tool in LabVIEW, we just have to verify that they can cope with the programming problems of LabVIEW developers. We are not likely to see another abstraction mechanism built into LabVIEW for a while. LabVOOP will be the abstraction and modularity layer from now on. The only thing I want to see related to this discussion of by-ref objects is a decent way to abstract real-world objects. I wouldn't do this if I really didn't need to, but I just need to do it. It may be that the present way of abstracting real-world objects in LabVIEW is sufficient in your projects, but it definitely runs into constant problems in my projects, where I need to refer to a huge number of real-world objects shared by different parts of the application. And since I cannot avoid this, I just need to deal with it. It would definitely help if, instead of me dealing with the issue every time by myself, LabVIEW gave me more sophisticated tools to deal with it. It doesn't make LabVIEW any less dataflow.
LabVIEW currently uses references in multiple places, to refer to files, front panel objects etc. I would just need a way to create my own abstract references so that I could refer to my own file types and the other real-world objects I need to refer to. I can somehow manage with the current tools; LabVOOP is an excellent help in abstracting these real-world objects. But still, I need to use queues as the reference mechanism, and this could be built into LabVIEW, providing an easier-to-use and more efficient reference mechanism. It's all about efficiency of software development and nothing else. I only want to see LabVIEW features that help our software development projects become easier to work through and easier to maintain. Nobody forces you to use these features, as you seem not to have as strong a need for them as others do. Still, I cannot see how it can annoy you that we have different needs in our software development projects than you do; LabVIEW supporting these features would in no way make it a less efficient tool for your needs. It must be my lack of English language skills, but I didn't really understand this example of yours. So I cannot really answer.
  6. To find out more about the technique Stephen is referring to, you can always try to read Stephen's patent application: Type propagation for automatic casting of output types in a data flow program. I have read quite a few NI patent applications, or at least the abstract and the first claim. Stephen's application is still quite good, but those applications about automatically generating a graphical program based on user input... It seems NI is trying to patent scripting many years after they introduced it.
  7. There are two major reasons why companies prohibit users from installing software on their own. First, information security. Second, users tend to get their workstations messed up easily, which increases support costs. So from the company IT department's point of view, it's a good thing that users cannot bypass the policy by using LabVIEW applications instead of applications written in other languages. If you go and tell your IT security people that we use LabVIEW 7.1.1 because it can bypass the security restrictions you have enforced, what do you think they will do? So I don't think that company security policies can or should be bypassed by modifying LabVIEW installers so that they will install on security-restricted workstations where normal applications won't install. If the policy is a problem and causes too much trouble in your work, it's the policy that should be modified, not LabVIEW. If LabVIEW applications could easily bypass all security policies, it could force companies to restrict the usage of LabVIEW applications altogether. Of course this would be an extreme measure, but companies have their security policies there for a reason, and if a security policy causes trouble, discuss the policy with the IT department.
  8. Jacemdom, you don't seem to want to use by-ref objects in LabVIEW. So may I ask what your solution is for abstracting real-world objects such as files, hardware devices, a specific internet connection or a front panel object, if you don't think references are a proper way of referring to these objects? There may be a way that I don't know about. If your answer is do not use abstraction, then how do you refer to these objects other than by using references?
  9. Hi, this year LabVIEW celebrates its 20th birthday. A development environment that started as a virtual instrument development tool has grown into an outstanding general purpose visual programming language. Throughout LabVIEW's history it has been debated whether LabVIEW is a general purpose programming language or not. It definitely is a general purpose programming language, but maybe not as general as it could be. Every LabVIEW power user knows that even though LabVIEW has very many outstanding features, it also has very many shortcomings not present in the mainstream programming languages and development environments. Requirements for LabVIEW functionality keep rising as the number of LabVIEW users constantly grows. During the history of programming we have seen that it is indeed quite challenging to develop an excellent programming language. All the most popular languages are developed as a joint effort of the computer science community. This ensures that the design decisions made are in agreement with the state-of-the-art techniques of the present time and that all the possible shortcomings are considered when these design decisions are made. LabVIEW as a proprietary programming language doesn't have the power of the computer science community. It may be extremely hard to recruit the best minds of the community to work exclusively for National Instruments, if only for purely geographical reasons. As a result NI presumably cannot keep up with the pace required to keep LabVIEW a high quality general purpose programming language. Until now NI has had central patents protecting the graphical design of LabVIEW. The most central of these patents have expired or are soon to expire, leaving the field of graphical programming languages open for competition.
From this perspective it would be a very wise move for National Instruments to open the source code of the LabVIEW programming language and release the patents protecting the language for the use of the community helping to develop LabVIEW. National Instruments should see this more as an opportunity than a threat. Opening the source of LabVIEW would attract new free-of-cost developers to develop LabVIEW. LabVIEW would improve and gain more reputation as a general purpose programming language. Students and computer scientists around the world would get acquainted with LabVIEW, as the barrier of expensive licenses wouldn't be there. LabVIEW has many features that are expected of a future general purpose programming language, such as natural support for multithreading. However, it lacks many features that are required of a future general purpose programming language. As an open source language, LabVIEW could develop towards a generally accepted programming language of the next decade. As the LabVIEW user community grows, NI could also take advantage of its expertise in LabVIEW integration. The growth of the LabVIEW user community means that the number of potential customers for National Instruments measurement and automation products grows, and NI can take full business benefit from it. NI can still sell proprietary measurement and automation software and hardware for integrating NI hardware with LabVIEW, managing measurement data and managing other measurement and automation related tasks. Only LabVIEW as a general purpose programming language would no longer be a source of income. Still, NI could go on selling a development environment for LabVIEW. Open LabVIEW would also open new opportunities for NI. It could benefit from its expertise in software and hardware integration in the field of embedded computing. Embedded computing is growing fast as every device around us gains a microprocessor.
If LabVIEW were used as a general and most popular programming language for embedded computing, it would allow NI to sell a number of new products and solutions to the embedded computing industry. In addition, if the LabVIEW source were opened, NI could still keep the source closed for the LabVIEW data acquisition and automation modules. This way NI could go on selling LabVIEW to the majority of the present customer base, who are measurement and automation professionals and to whom the open source LabVIEW wouldn't offer enough capability. Of course there is also a risk that other measurement and automation industry companies could take advantage of open source LabVIEW. If however LabVIEW is released under both a commercial and a GPL license, NI can very well protect itself against competition in the field of measurement and automation. The GPL license guarantees that if another company in the industry integrates their software with LabVIEW, that other software immediately becomes GPL-licensed software and open source as such. Open source LabVIEW is an opportunity for the LabVIEW community, for the computer science community in general and, last but not least, for National Instruments. The LabVIEW community gets an improved general purpose programming language with all the state-of-the-art techniques and functionality. The computer science community gets the opportunity to start developing programming techniques and paradigms for visual programming, this way solving the shortcomings of text-based programming languages and bringing programming to everyone. National Instruments would gain major advantages from LabVIEW becoming a generally accepted programming language. The increasing user base would get NI a number of new potential customers and the opportunity to start selling new kinds of products and solutions. On the other hand, if NI does not open the LabVIEW source and tries to keep its monopoly, I predict that it may be hard to keep up with the recent developments in programming language technology.
General purpose programming languages will eventually offer all the benefits of LabVIEW and much more. NI will lose its position not only in LabVIEW but also as a provider of measurement and automation hardware. I'd like to hear what you, the community, think of this issue. -jimi-
  10. Hi, I'd like to change the default class method templates for Dynamic VI and for Static VI. Is this possible? Where can I find the default templates? The current Dynamic VI template block diagram is unnecessarily tight and small (and the controls are not aligned, which annoys my aesthetic eye).
  11. I fully agree. When I was writing such a test, I encountered a bug in the LabVIEW notifier implementation. I couldn't go on with the test, as notifications were missed. So notifiers cannot be fully trusted.
  12. I think implementing by-ref objects in LabVIEW without simultaneously implementing a decent synchronized access mechanism would be irresponsible of NI. You are right that synchronized access is not needed in producer-consumer patterns, but in most cases of multithreaded access to a shared resource some sort of synchronization mechanism is needed, or sooner or later transient data corruption results. If such a synchronization mechanism is not implemented simultaneously with by-ref objects, developers tend to start using by-ref objects in an unsafe manner. In the best case the user experience is such that it guides but doesn't force developers to use synchronization when accessing shared resources. Also, synchronization doesn't have to be slow. For example, transaction based synchronization mechanisms do not suffer from the weaknesses of mutex based synchronization. I started a new thread about how to implement synchronized access to shared objects in LabVIEW. I don't think that discussion fits under the topic of this thread.
  13. Hi, many objects in object-oriented programming have an identity, such as a file, a front-panel object or a hardware device. These objects cannot be modelled using present LabVOOP objects, as LabVOOP objects get copied when the wire is branched; multiple different wires cannot all represent a single object. This issue has been irritating the community of LabVIEW users since the release of LabVOOP a few months ago. It seems that there is a huge demand for objects with unique identity, i.e. by-reference objects, in LabVIEW. The central reason why LabVOOP probably doesn't have these objects is the difficulty of implementing synchronized access to these objects from multiple parallel threads. The problem of synchronized access can be divided into two separate topics. First, how the synchronization should be implemented in the LabVIEW runtime engine. Second, how this synchronization mechanism should be visible to the developer. I'd like to start this thread to discuss these two issues. Synchronization under the hood: traditionally people talk about locking an object and about a get-modify-set pass when accessing the object. Locking is traditionally done by acquiring a mutex for an object, modifying the object and releasing the mutex so that other threads can access the same object instance. This is how inter-thread synchronization is traditionally done. However, besides mutex based locking, the computer science community has also invented different kinds of methods for synchronizing access to objects. One way to get object-level synchronization is to modify the runtime engine so that it only allows a single method of a synchronized object to run at any time. This mechanism of synchronization is implemented in programming languages like O'Haskell, which is a Haskell variant with object oriented features. Different transactional mechanisms have also been successful.
In transactional mechanisms multiple threads are allowed to access a synchronized object simultaneously. As each method accessing an object commits its changes, it verifies that no other thread has modified the object simultaneously in a manner that would break the transaction. If such a modification has occurred, everything is rolled back. Transactional mechanisms do not suit every possible situation, as not everything can be rolled back. For example, it's hard to roll back an action that somehow modifies the physical world. User experience of synchronization: how the synchronization is implemented in LabVIEW shouldn't be directly visible to the developer end-user. The developer should understand the general concepts of synchronization to take full advantage of it, but in general the synchronization mechanism should be integrated directly into the development environment. There should in general be no need to acquire a mutex by calling an acquire mutex node; instead the end-user should be able to specify which data needs synchronized access in a more sophisticated way. In the following I propose a mechanism for integrating the synchronized access of by-ref objects into the development environment of LabVIEW. The proposal is very preliminary, but I hope it breaks the ice and the community will start innovating on how NI should implement synchronization support in the user interface of LabVIEW. Wire level synchronization: only methods can access object private data members. In synchronized access to the object, it's the methods accessing the private data members that need to be synchronized. The private data members are accessed by applying an unbundle node to the class wire, and data is written back to the object using a bundle node. What I propose is the following. An unbundle node could be either normal or "synchronized". A synchronized unbundle would guarantee access to the private data members in a synchronized manner.
All data wires originating from a synchronized unbundle would be of synchronized type, somewhat as a dynamic dispatch wire is of a special dynamic dispatch type. Such a wire must eventually be connected to a bundle node. When the wire is bundled back to the originating object, the synchronization requirement is released. These synchronized wires would look somewhat different from normal wires, so that the developer instantly knows that the wire is synchronized. The developer can branch the wire, but only one wire branch can own the synchronized type. The developer could easily select which wire is synchronized by Ctrl+clicking the wire. Such a wire can be considered a combination of data and a mutex, even though mutexes don't need to be the underlying synchronization method. The wire just guarantees that there is a mechanism in the runtime engine that makes sure access to the wire data is synchronized. There is a need to wire data originating from a non-synchronized wire to a synchronized wire so that it can replace the private data member of the class. This is accomplished with a new node, similar to the bundle node, that would allow replacing the data in a synchronized wire with data originating from a non-synchronized wire. The synchronized wire can be connected to front panel controls of a special synchronized type. This way the synchronized wire can originate from a method and allow passing the synchronized data to the calling VI and back to another method. This is practical, for example, in a situation where the developer wants to run different analyses on a data class but doesn't want to rewrite all the existing data analysis tools as class members. So the developer writes a synchronization-acquiring getData method that lets the calling VI access the synchronized data. Then the developer passes this data to an analysis VI and passes the result back to a setData method that writes the result back to the class wire.
There will probably be technical problems in allowing the user to connect such a synchronized wire to all existing VIs, since these VIs were not written with synchronized wires in mind. Therefore the programming model for all nodes that do not support synchronized wires will be branching the wire, passing the non-synchronized wire branch to the node, and then bundling the result back to the synchronized wire. To increase performance and decrease unnecessary buffer copies when a synchronized wire is branched, if the synchronized wire continues directly to the new bundle-to-synchronized-wire node, no buffer copy is made. Discussion: synchronized access to by-ref LabVOOP objects can be implemented by National Instruments in multiple ways. The synchronized access should be divided into two different and independent parts: 1) the user experience of synchronization and 2) the runtime engine synchronization mechanisms. As LabVOOP objects have special properties compared to other LabVIEW data types, optimal user experience can be gained by designing the user experience specifically for LabVOOP objects. From a user experience point of view this synchronization mechanism may not work for other data types. Separating object synchronization from the synchronization of other data types is advantageous for other reasons as well. Due to the fact that object data can only be accessed via object methods, more advanced synchronization methods may be used with objects than with other data types. The O'Haskell synchronization implementation is an example of this. Integrating the synchronization directly into the user interface allows NI to change the mechanisms under the hood when computer science comes up with more advanced methods. Therefore NI could begin with traditional and quite easy mutex-based synchronization and later move to more advanced, perhaps transaction based, synchronization methods or even combinations of multiple different methods.
I hope this topic generates discussion that will help NI to implement an excellent synchronization mechanism in LabVOOP. I hope that all the talented individuals in the community participate in this discussion to help NI reach this goal. I also hope that, if you just have time, you could surf the computer science resources to find out what kinds of new techniques exist for synchronizing access to shared resources. A large community may find much more innovative solutions than a few hired engineers at NI. Let's give NI the power of open source design.
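The transactional scheme described in this post can be sketched outside LabVIEW. Below is a minimal Python sketch (illustrative only; `TxObject` and `atomically` are hypothetical names, not a LabVIEW or NI API) of optimistic get-modify-set: each writer snapshots the object with a version number, and a commit succeeds only if no other writer committed in between; otherwise everything is "rolled back" by simply retrying.

```python
import threading

class TxObject:
    """A minimal optimistic-concurrency (transactional) shared object.

    Readers take a snapshot plus a version number; a commit succeeds only
    if no other writer committed in between, otherwise the caller retries.
    """
    def __init__(self, value):
        self._lock = threading.Lock()   # guards version+value, held only briefly
        self._version = 0
        self._value = value

    def snapshot(self):
        with self._lock:
            return self._version, self._value

    def commit(self, expected_version, new_value):
        with self._lock:
            if self._version != expected_version:
                return False            # another thread committed first: roll back
            self._version += 1
            self._value = new_value
            return True

def atomically(obj, update):
    """Retry the get-modify-set pass until the commit succeeds."""
    while True:
        version, value = obj.snapshot()
        if obj.commit(version, update(value)):
            return

counter = TxObject(0)
threads = [threading.Thread(
               target=lambda: [atomically(counter, lambda v: v + 1)
                               for _ in range(1000)])
           for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(counter.snapshot()[1])  # 4000
```

Note how this matches the post's caveat: the retry loop only works because incrementing a counter is safely repeatable; an action that modifies the physical world could not be rolled back and retried this way.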
  14. I tried to answer this comment of Aristos'. But as I was editing this message, Aristos removed the comment. This forum definitely needs a topic-locking mechanism, or alternatively we should have multiple instances of each topic, LabVOOP style, so that everybody can edit their own topic instance. There are entities in programming that don't have an identity. Integers belong to this class of entities; you cannot distinguish two instances of the number 7 from each other. These entities can be represented using LabVOOP objects. There are however many more entities that do have an identity, such as My Toyota. These entities cannot be represented using LabVOOP objects. I think an OOP language is a language that is capable of representing practically any object using objects of the language. If this is considered a necessary requirement for an OOP language, then LabVOOP is not an OOP language.
  15. A funny analogy came into my nerd mind. You know those SciFi movies with parallel universes. Parallel universes all have the same history, but from some point on they separate and the futures are alike but still different. No one knows that there really are parallel worlds. Each one thinks that he is unique. LVOOP is like such a SciFi movie. Each object knows nothing about the other objects and has no way of knowing that it's not the original My Toyota but one of the parallel yet different alternate futures of the original My Toyota. EDIT: So perhaps LabVOOP should be thought of with the analogy of real-world objects in a world branching into parallel universes, the way OOP is traditionally thought of with the analogy of real-world objects.
  16. I think much of this discussion is about definitions. Most of the time when people disagree strongly about something, they really disagree about definitions. I think NI made a bad (marketing) choice in calling LabVOOP object oriented programming, as Object Oriented Programming really means a somewhat different thing. LabVOOP is not object-oriented programming in the traditional sense, since it cannot deal with the concept of real-world objects, or "My Toyota" as Mads put it. What we are dealing with in LabVOOP is intelligent dataflow rather than OOP. OOP is about solving software problems with real-world-like objects, and LabVOOP is about something else. Why would we not call it intelligent dataflow? This intelligent dataflow is a very nice paradigm, but it is not really OOP. If we had a different name for it, developers wouldn't argue that it is missing something, but would embrace it as a great new paradigm addition to LabVIEW. LabVOOP can deal with many programming problems, but these problems are not the same set of problems that OOP can deal with, although the problem sets somewhat overlap. Instead of continuing to talk about what is wrong with LabVOOP, we should start to talk about how we can improve LabVIEW to allow a programming paradigm in which encapsulated objects have identity, such as "My Toyota", and in which these objects can interact and react to the messages of other objects.
  17. Hi Rolf! I think you have better sight than you expect. If we get great REAL 3D monitors, i.e. ones that can show depth from focus, have stereopsis and show parallax shift when you move your head, then Rolf (and all of us) could get a decent 3D experience on our computer monitors. :thumbup: As for the required 3D technique, I have no idea if anything like this has been done. :question: /Lars-G
  18. Your consumer loop (bottom) waits on a notifier; the default timeout is -1 (wait forever). Your top loop has a timeout of 100 ms, and sends a Timeout notification every 100 ms. You don't need to send a timeout unless the bottom loop needs to perform some periodic action; it will just "wait forever". That's the nice thing about user events: no polling and no no-op type loops! You should consider using typedefs for your enums; if you add or rename a case later on, you would otherwise have to edit all the enum constants in all the different cases on your block diagram (yuck!). I usually name the typedef control to match the VI name. Looks good! :thumbup:
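The point above, that a consumer waiting forever on a notifier needs no polling and no periodic timeout messages, can be illustrated outside LabVIEW. Here is a Python analogue (illustrative only, not LabVIEW code): a consumer thread blocks on a queue with no timeout, does nothing until the producer actually sends something, and exits on a Stop message.

```python
import queue
import threading

events = queue.Queue()

def producer():
    # Send a message only when there is actual news -- no periodic "Timeout"
    # heartbeats are needed just to keep the consumer alive.
    for name in ("Init", "Acquire", "Stop"):
        events.put(name)

def consumer(log):
    while True:
        msg = events.get()   # block forever (the analogue of a -1 timeout)
        log.append(msg)
        if msg == "Stop":
            return

log = []
t = threading.Thread(target=consumer, args=(log,))
t.start()
producer()
t.join()
print(log)  # ['Init', 'Acquire', 'Stop']
```

The consumer burns no CPU while idle; it simply sleeps inside `get()` until a real message arrives, which is exactly the "wait forever" behaviour recommended above.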
  19. Can I sense some frustration in Aristos' last two answers... Perhaps NI is releasing an update for 8.2 soon and the developers are stressed under a tight schedule. After all, Aristos has been quite silent for the last week or so. After having used LVOOP for about a month now, I think it's in many ways a great leap forward in the LabVIEW programming paradigm. :thumbup: Back to the subject. Software objects often represent entities similar to objects in our physical world. They have an identity, some properties and some behaviour. Many software objects, especially in automation and networked environments, also need to be reactive to impulses from the environment. I think this by-ref discussion is a bit misleading. I think we should be talking about objects with identity and objects that as individuals can communicate with the environment and react to impulses from the environment. The environment is then formed from other objects, physical hardware, the network, the user etc. References are just under-the-hood stuff that perhaps the user doesn't even need to know exists. There are plenty of innovative ways to manage concurrency of such objects. One which I think is very interesting is the concept of reactive objects in O'Haskell. The fundamental way O'Haskell guarantees there are no collisions is that only one method of a (by-ref) object can be active at a time. All other methods of the same object must wait until this first method exits before they get their chance to run. Perhaps I'll copy a few paragraphs from the O'Haskell web page below. For more detailed information see the O'Haskell homepage. The main message I'm trying to convey is that there really is a need for objects that have an identity, and that semaphores and locking really are not the only way to implement this. In that sense I think Aristos is wrong here.
There may be a need for a by-reference system for the whole of LabVIEW, but there is a different need for objects with identity in LVOOP, and the way to implement these two things doesn't need to be the same. Objects are different kinds of entities and therefore allow more sophisticated concurrency control methods than can be worked out for pure data.
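The O'Haskell rule described above, that only one method of an object can be active at a time, is essentially the classic monitor pattern, and it can be sketched in Python (illustrative only; `synchronized` and `Account` are hypothetical names, not part of O'Haskell or LabVIEW). Every public method is serialized on one per-object lock, so the object's author never deals with explicit semaphores at the call site.

```python
import functools
import threading

def synchronized(method):
    """Allow only one method of the object to run at a time (monitor style)."""
    @functools.wraps(method)
    def wrapper(self, *args, **kwargs):
        with self._monitor_lock:
            return method(self, *args, **kwargs)
    return wrapper

class Account:
    """All public methods are serialized on one per-object lock,
    mimicking the one-active-method rule described for O'Haskell."""
    def __init__(self):
        self._monitor_lock = threading.RLock()  # reentrant: methods may call methods
        self.balance = 0

    @synchronized
    def deposit(self, amount):
        self.balance += amount

acct = Account()
threads = [threading.Thread(target=lambda: [acct.deposit(1) for _ in range(1000)])
           for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(acct.balance)  # 4000
```

Four threads hammering the same object cannot interleave inside `deposit`, so no update is lost; the caller never sees the lock, which matches the post's point that references and locking can stay below the hood.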
  20. Looks good. As LabVIEW stores a boolean as a U8 internally, you don't in general need to cast it to an I32 first and then back to a U8. You can use adapt to type, and the boolean will be passed natively as a U8.
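The same one-byte representation can be seen from the C side of a library call. A small Python `ctypes` sketch (illustrative only, not LabVIEW code) shows that a true/false flag fits natively in a single unsigned byte, which is why the extra round trip through a 32-bit integer adds nothing but two needless conversions.

```python
import ctypes

# A boolean flag destined for a C 'unsigned char' parameter: True stores
# as 1 in a single byte, with no widening to a 32-bit integer required.
flag = ctypes.c_uint8(True)

print(ctypes.sizeof(flag))  # 1
print(flag.value)           # 1
```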
  21. Yes! I still recall my first acronym lookup when I found the site: it was "YADA" (Yet Another Damn Acronym!) I remember this because I needed to get into the BIOS of an HP Vectra PC (Late 80s) and the HP tech told me the password was "YADA". I asked, and he told me what it stood for. Since then, I always try YADA as a password; but alas I haven't gained access to anything with it since...
  22. I wrote a notifier based mutex library for locking. It performs about as well as queues. Notifiers used directly can exceed queue performance by a small margin. The problem with implementing a mutex using notifiers is however the bug that I found when I tried to test the scalability of my mutex implementation; notifications may get missed. The mutex system I wrote is attached below; the bug is reported here. Download File:post-4014-1159987981.zip Edit: LV 8.0 version, the previous one was accidentally for LV 8.2. I also wrote another mutex library based on occurrences. There is a bug in this library, as I first didn't know that occurrences act like constants, i.e. an occurrence created in a loop doesn't really generate multiple occurrences but only one. The occurrence is used to notify other threads that a mutex may have become available. Since all mutexes share this single occurrence, freeing one mutex causes all threads waiting on any mutex to start trying to acquire one. This doesn't cause a functional failure, i.e. everything should operate correctly, but the mutex engine may become overloaded when all the threads start trying to acquire a mutex simultaneously. This other implementation is here. A mutex is called a semaphore in this implementation. Download File:post-4014-1159987236.zip To fix the occurrence bug, one may perhaps create a predefined number of occurrences instead of only one. For this purpose I wrote a nice VI which generates an array of 256 occurrences, all of them different. It may suffice if 256 occurrences were used in a loop instead of only one. It doesn't really matter if several threads try to acquire the mutex simultaneously, as long as not all the threads try to do it. Download File:post-4014-1159988026.vi Edit: LV 8.0 version, the previous one was accidentally for LV 8.2. The occurrence implementation of the mutex performs about the same as the queue and notifier mutexes as long as the total number of mutexes in the system is low.
When the number is more than several thousand, the occurrence mutex starts to perform better. This performance advantage may disappear when the bug is fixed. I couldn't test the scalability issues when several threads try to access the same mutex simultaneously. I tried to do it, but I ran into the bug I mentioned above. Then I gave up, as I was only able to successfully get about 6-8 simultaneous threads accessing the mutex before a notification was incorrectly missed.
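The structure of a mutex built on a wait/notify primitive, the design both libraries above are variations of, can be sketched in Python using a condition variable as the stand-in for a notifier (illustrative only; `NotifierMutex` is a hypothetical name and this is not the attached LabVIEW code). The comment in `acquire` marks the spot where a lost notification, like the bug reported above, would leave a waiter hung.

```python
import threading

class NotifierMutex:
    """A mutex built on a wait/notify primitive, roughly mirroring the
    notifier-based locking library described in the post."""
    def __init__(self):
        self._cond = threading.Condition()
        self._held = False

    def acquire(self):
        with self._cond:
            while self._held:        # if a notification were silently lost,
                self._cond.wait()    # this wait could hang forever -- the bug
            self._held = True

    def release(self):
        with self._cond:
            self._held = False
            self._cond.notify()      # wake one waiter (vs. notify_all, which
                                     # mimics the shared-occurrence stampede)

m = NotifierMutex()
shared = []

def worker(i):
    m.acquire()
    try:
        shared.append(i)             # critical section
    finally:
        m.release()

threads = [threading.Thread(target=worker, args=(i,)) for i in range(8)]
for t in threads: t.start()
for t in threads: t.join()
print(len(shared))  # 8
```

Swapping `notify()` for `notify_all()` reproduces the overload behaviour described for the single shared occurrence: every waiter wakes and races for the lock, correct but wasteful.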
  23. What if we had a built-in unit test tool, something like the old LabVIEW Unit Validation Test Procedure, but instead an integral part of the development environment? And if the test vectors and the test results became part of the VI file, version control of them and the VI would be simple. I hope this wish can be refined by the community :thumbup: as I think it is only a rough outline of the possibilities. /Lars-G
  24. This is not completely true. Queues do become invalid after all the VIs that reference a specific queue stop running, or when the Destroy option is used when releasing the queue. There have been numerous discussions on Info-LabVIEW regarding the creation and persistence of LabVIEW queues, most recently as outlined in these two messages: http://sthmac.magnet.fsu.edu/infolabview/I...3/10/14/24.html http://sthmac.magnet.fsu.edu/infolabview/I...-08-05_005.html Use the Advanced Search of the Info-LabVIEW Archive located HERE, and search on queue for the year 2006. There's LOTS of info on this subject.
  25. Good if queues are not affected. After all I use them more than notifiers.