Everything posted by bsvingen

  1. Whenever you change the size of an array, memory needs to be allocated or deallocated. Therefore, if the array dimension entering the tunnel on one side is different from the dimension on the other side, LabVIEW creates a buffer. Replace Array Subset conserves the dimension, while Build Array changes it. This has to be so even though the dimension does not really change in your VI, because LV has no way of knowing that the dimension of the new data is the same as the dimension of the deleted data.
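The distinction above can be sketched with a C analogy (this is an illustration of the in-place vs. reallocation difference, not LabVIEW's actual internals): writing into an existing buffer needs no allocation, while changing its length forces one.

```c
#include <stdlib.h>
#include <string.h>

/* In-place "Replace Array Subset": the array keeps its size, no new buffer. */
static void replace_subset(double *arr, size_t at, const double *src, size_t n) {
    memcpy(arr + at, src, n * sizeof *src);
}

/* "Build Array" (append): the size changes, so a (re)allocation is needed. */
static double *build_array(double *arr, size_t *len, double value) {
    double *bigger = realloc(arr, (*len + 1) * sizeof *bigger);
    if (bigger == NULL)
        return arr;              /* keep the old buffer on failure */
    bigger[(*len)++] = value;
    return bigger;
}
```

The function names are illustrative; the point is that the second operation can return a different buffer, which is exactly the copy LabVIEW must guard against at the tunnel.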
  2. This is very interesting indeed. I have downloaded the code and looked at it. It took me some minutes to grasp what was going on, but I *think* I'm starting to understand it. My first impressions are these: LV2-style globals are the only way to efficiently store data in LV. Queues can be used, and they are very efficient (for small and moderately sized data), but they are inherently synchronous; making them asynchronous requires some work, and the efficiency drops rapidly. However, the main advantage of LV2-style globals (also compared with all of the GOOPs) is that they are the only structure in which large, complicated data can be operated on efficiently in an interacting fashion. For instance, if you have several large arrays, and arrays of clusters, and want to operate on them on a per-element basis without unnecessary buffer allocations and copies, LV2 globals are the only programming style that can be used. In addition, if you want to read *some* of the data in the globals at varying frequency in other locations, for storing to file, plotting in graphs, sending over the internet, etc., then reentrant LV2-style globals called by reference are unbeatable (I usually use queues to send the references at init to wherever they are needed). The price for all this is that making changes to a large and complicated global is a difficult and very error-prone task, where it is easy to add bugs even to code that previously worked well. To simplify this would be a huge step forward. I'm a bit concerned about the efficiency of the LV2OO. I will try to have something up and running and compare it with a normal LV2 global (and dqGOOP). Another thing is that LV2 globals are very straightforward, and therefore easy to use and understand. The LV2OO looked esoteric in comparison, but maybe I just need to get used to dynamic dispatching and such and it will become less esoteric. :thumbup:
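For readers unfamiliar with the pattern, an LV2-style (functional) global can be sketched in C as a function with static local storage: the data persists between calls, and all access goes through one entry point, which is why per-element operations on it need no copies. The names and the action enum here are illustrative, not taken from the posted code.

```c
/* Analogy sketch: the static variable plays the role of the uninitialized
   shift register in an LV2-style global; the function is the single VI
   through which every read and write passes. */
typedef enum { FG_SET, FG_GET } fg_action;

double lv2_global(fg_action action, double value) {
    static double store = 0.0;   /* persists across calls, like the shift register */
    if (action == FG_SET)
        store = value;
    return store;                /* always return the current data */
}
```

Because the data never leaves the function except on an explicit GET, updates can be done in place, which is the efficiency argument made above.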
  3. Well, it doesn't seem like this is of utmost interest to many people here, but I think I'm converging towards something useful. I have made some changes to the core LV2 global. It is dynamic, but there are also minor internal changes that, in some strange way (probably due to LV memory handling/buffer allocations?), improve performance quite a lot. I have also made the LVOOP different, so that it takes Variants as input. I have renamed it to Pointer Class, because I think that is a better description, and I have added some pointer arithmetic (which can be used for very cool, but buggy?, code). When using LVOOP for the pointer and Variants for the data, the performance increases even more. A simple by-ref GOOP made from this pointer class is almost twice as fast as dqGOOP in get/set, and up to 16 times faster than the locked dqGOOP class (get). So, I'm converging toward the pointer class. It is the fastest, it makes pointers/references to *anything*, including LVOOP objects, and it is much more secure, because the LV2 global is protected (only member functions can call the global) and the pointer/reference wires can only be used by the member functions. It is also more elegant than the others, IMO. Download File:post-4885-1156527660.zip Download File:post-4885-1156527684.zip
  4. Here is another version of the general Variant reference system. In this version it is not necessary to preset the LV2 global. The arrays will grow according to the number of refs that are created. It is initiated at first use. This means that the LV2 global is completely invisible; it just hangs in memory somewhere and does its job. I also don't think it is really necessary to de-initialize the global, because LV will take care of that in any case when the program finishes. However, it is made so that it can still be initialized if that is wanted. Download File:post-4885-1156455614.zip
  5. In the NI white paper about LVOOP it was mentioned that a by-ref system tree could be made by using the parent object in the class of a child object. I didn't quite understand how, but it should be possible (according to the paper).
  6. To get an idea of the performance of the LV2-style reference system, I have made some test cases using the dqGOOP test cases as templates. In contrast to what I wrote last night, it IS possible to create arrays from clusters with strings (I have no idea why it didn't work last night; I got a strange error saying that LV could not determine the wire type. Maybe that was in LV8, because I started with that? All these test cases are for LV8.2). There are six test cases: dqGOOP (locked), dqGOOP, variant global ref, string global ref, LVOOP global ref, and a "specific" ref for the actual strict type def cluster. They are all similar in performance except for the locked dqGOOP, which is much slower. There is little penalty in using a Variant compared to a specific typedef (only about 30% slower in set), and there is no practical difference between Variant and string. dqGOOP (unlocked) has a few percent better performance in get/set, but the global refs are fully asynchronous. Download File:post-4885-1156428559.zip
  7. I made two examples yesterday that mimic a chunk of memory in a computer, using a single LV2-style global with an array inside. It is a ref-making system that makes real references for any kind of variable, one version using Variants and another using LVOOP. It is very efficient, particularly the LVOOP version, and compared with using queues it is fully asynchronous. The other thread is here: LV2 style ref
  8. File Name: General reference system File Submitter: bsvingen File Submitted: 11 Sep 2006 File Updated: 18 Sep 2006 File Category: LabVIEW OOP (GOOP) An LVOOP class that creates and controls reference-data pairs. The basic idea is to simulate random access memory and malloc()/free() by using one single LV2-style global. The data can be of any type (including any LVOOP class and strict typedefs) and can be allocated in any order. The only restriction on the number of references is the physical memory of the computer. The global stores data in a Variant array and uses a separate Bool array to control access and free/protect the data. The "address" to the data that is stored in the reference (LV object) is the index in the Variant array where the data is stored in the LV2 global. By using Variant as the type, it is possible to get a reference to all types, and there is no need to manually change a typedef, and possibly the names, in every VI when more than one typedef will be used. A small performance penalty is introduced due to the type conversions to/from Variant. However, due to the inherent effectiveness of LV2-style globals, this method is still 2-3 times faster than an equivalent scheme made with wrappers around queue primitives, but slightly slower than using queue primitives directly (no wrappers around the queues; see the example provided). For an even faster reference system, twice the speed of queue primitives, see the separate "Pointer System" in this Code Repository. In contrast to queues, which can be regarded as synchronous references to data, this reference system is fully asynchronous. Allocation and freeing of data is done dynamically. Although this reference system may at first glance bear some resemblance to "classic" GOOP, it is by no means a GOOP system. The main purpose of the reference system is to have a fully asynchronous and fully reusable system for read and write access to one single instance of data in more than one place.
All the VIs in the library are documented. Click here to download this file
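The malloc()/free() scheme described above can be sketched in C. This is an illustrative stand-in, not the posted library: the real submission stores LabVIEW Variants in an LV2 global and grows its arrays on demand, whereas here a plain void* stands in for the Variant and a fixed pool keeps the sketch short. The "address" handed back is just the slot index, exactly as in the description.

```c
#include <stdlib.h>

#define POOL_SIZE 256

static void *data_pool[POOL_SIZE];   /* stands in for the Variant array */
static int   in_use[POOL_SIZE];      /* the Bool array that guards each slot */

/* "malloc": find a free slot, claim it, return its index as the reference. */
int ref_alloc(void *value) {
    for (int i = 0; i < POOL_SIZE; i++) {
        if (!in_use[i]) {
            in_use[i] = 1;
            data_pool[i] = value;
            return i;
        }
    }
    return -1;                       /* pool exhausted */
}

/* Read and write through the reference; refuse access to freed slots. */
void *ref_get(int ref)          { return in_use[ref] ? data_pool[ref] : NULL; }
void  ref_set(int ref, void *v) { if (in_use[ref]) data_pool[ref] = v; }

/* "free": clear the flag so the slot can be handed out again. */
void  ref_free(int ref)         { in_use[ref] = 0; }
```

Because any caller holding an index can read or write at any time, access is fully asynchronous, unlike a queue, where one dequeue blocks the next reader.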
  9. The "XControl bug" (pro edition only) is very irritating. When I purchased this license (I have had other licenses in the past from other companies), it was when version LV7.0 had just come out. The pro edition back then only had extra stuff for site management and cooperative development that I had little use for; otherwise the Full Development System had everything needed for, well, a "full development". So I purchased the Full Development System + Application Builder. In the last few days I have read from NI officials (the white paper about LVOOP and a post here from NI R&D) that XControl is in fact needed for proper automatic initialization of objects and that XControl is "the single reason to upgrade to LV8.0". To me they are saying that the Full Development System is NOT a "full development system" anymore, starting from LV8.0.
  10. XControl works only in the Professional version of LabVIEW (at least in 8.0; I have only the Full Development version + Application Builder). If it is meant to be used, it has to be made available, at least now that LVOOP has been released.
  11. To be more correct, it is an explicit constructor that is not needed, because the objects are already made in the VIs as controls or indicators (you can still have "init" functions, but they won't work as constructor functions do in ordinary languages; they are just ordinary VIs that fill the object with data). I disagree with your point that LVOOP is good for beginners. Students today are seldom taught FORTRAN, C/C++ or Pascal. They are taught Java and Matlab. When you start with Java, OOP is learned from the start (the way OOP is meant to work). Then, when they start using LabVIEW, I think they will scratch their heads more than once trying to figure out how these call-by-value objects are to be used. Personally, I can see that LVOOP will protect data and make programs more organized (that is also very much due to the project builder), but I still don't see how LVOOP will make the program structure better.
  12. As long as the connector pane is the same, we can still use the same init name for the init method for all inherited members. I can see that it will be a rather irritating restriction to keep the connector panes equal, especially when there are a lot of inherited functions and you want to change one of them, but it will work (you will have to wire a correct object constant to the init function, though, and therefore I don't think it is wise to initialize objects this way, but rather use the overriding mechanism for other tasks, such as plug-ins for instance). Another option is to use "To More Specific Class" in the more specific init functions (with specific names and without dynamic dispatch); that way you cannot use the wrong init function, even by accident due to different connector panes, you don't need to wire the correct object into the function, and you always get the correct object out. But the main point still remains: you do not need to explicitly initialize objects in an "init" function when all objects are call by value and inside a dataflow-paradigm environment like LV. The objects will be initialized at first use no matter how you look at it, because they aren't really initialized in the correct sense of the word; they are only given values, since they already exist in the VIs as controls or indicators, just like any other LV types. You just make a VI that takes the input you need, and out pops the correct object.
  13. I still think you are pointing to a shortcoming that does not really exist in a call-by-value, dataflow "world". The object would not necessarily be pointing to a file; rather, it would be the file itself, or more precisely the data in that file. The file path or ref may only be a part (input) of the method "write data" or "read data". Still, in that data object there may very well be a control for the file ref or path, so that the cluster looks like: [DATA] [file ref]. The "initial" or default values would be empty data and "not a ref". IMO it would be just as strange to initialize the file ref to anything other than "not a ref" as it would be to initialize the data to anything other than empty data (all objects are always 100% determined, since they are call by value). Now, if you want something else, you just write a member VI that initializes the object with whatever values you want. There is no need to create the object in any form up front; the VI inputs all kinds of non-object data and outputs the object itself. A "read data" or "set data" method may therefore be all the constructor you need. If you use by-ref, you have to create the *ref* before you can create the object. And since a ref without an object is totally meaningless, you have to be 100% sure that all refs point to valid objects, and therefore you need a constructor. The only time I see that this may in some strange cases cause problems is if the object is created as a constant and used directly with no other considerations, but this would be the same as opening a call-by-ref object with default parameters on the constructor, or using an arbitrary floating-point constant in calculations.
  14. I don't think anything prevents you from doing this, either by normal programming or by using an XControl in those places where you need such functionality. But the main point is that in dataflow, "objects" are created when data for the "object" (controls) arrives, or at branches (copies), and then everything is 100% determined.
  15. IMO the LV8.2 objects are just strict type defs with their own sets of wires that only VIs "belonging" to the wires can act on. In other words, they are some sort of protected clusters, since they can only be called by value. However, they also have inheritance and overriding. All in all, considering that they shall be used in a dataflow setup and be consistent with all the other existing types as well as the dataflow paradigm, I think NI has done a good job. In a dataflow language there is no need for explicit constructors/destructors and copy methods, because all this is an implicit part of the language (or rather a non-existing part, because there are no variables in the normal sense of the word). I think LV8.2 objects function exactly as they should. The main problem is that LV is a graphical DATAFLOW language, and not the GENERAL graphical language that most of us would like it to be. What it lacks most is an effective way to store data, and effective pointers and references.
  16. I think the dataflow paradigm is a main issue. For data logging and control, and somewhat also for data analysis, dataflow in a graphical language is a good choice, because it simplifies the programming by making the language very intuitive and straightforward in relation to what you are trying to do. As a general-purpose language, I don't see any benefit of the dataflow paradigm at all; it only complicates and bogs things down, since everything is passed by value and there is no way to effectively store data without breaking the dataflow. If G were to become an open standard, I think the first thing that would be scrapped is the dataflow paradigm, leaving only the graphics, where execution order is mainly determined by the passing of error clusters and references/messages, and with a much more effective pointer/reference system. But I don't think this would be beneficial (in intuitive terms) compared with the dataflow paradigm for most control and logging applications, for which LabVIEW is made and used. For instance, GOOP as implemented in all the different versions I have seen uses passing of references, which is a huge step away from the dataflow paradigm on which the G language is built. These GOOP implementations (along with ordinary LV2-style globals) simplify the program structure, but they do so because they step out of the dataflow paradigm, and consequently they will confuse more than help in cases where dataflow will do just fine (although they pinpoint the weakness of the dataflow paradigm for general-purpose programs). If G were to be opened, I think it would very soon evolve to a state where it would no longer be intuitive and straightforward to use in most data logging and control applications, but would be much better suited for general-purpose programming than is the case today.
  17. Christ, and I used a whole day just to delete and recreate graphs. Suggestion: what about a separate folder here on LAVA for solutions like the one posted? No discussions or remarks, just the solution. I think that would be great.
  18. You don't need a CD. The Application Builder is already installed. The only thing you have to do is enter the license number for the builder in the license manager, and voila.
  19. You are not calling the correct function. SendInput should return a UINT, while your call does not have a return value (you have set it to void). From MSDN: UINT SendInput( UINT nInputs, LPINPUT pInputs, int cbSize); Add UINT (whatever kind of int that is) as the return value, and it should work. Maybe you should also make the call reentrant.
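To show the shape of the call the Call Library Function Node must match, here is a compile-anywhere sketch. The real UINT/INPUT/LPINPUT types come from <windows.h>; the minimal stand-ins and the stub body below are assumptions made so the sketch is self-contained, and the INPUT payload is elided. The point is the return value: SendInput returns the number of events it actually injected (0 signals failure), so declaring the return type as void discards that and mismatches the declared signature.

```c
/* Stand-ins for the Windows SDK types (illustrative, not the real SDK). */
typedef unsigned int UINT;
typedef struct { int type; /* ... keyboard/mouse payload elided ... */ } INPUT, *LPINPUT;

/* MSDN form:  UINT SendInput(UINT nInputs, LPINPUT pInputs, int cbSize); */

/* Stub with the same shape so the call pattern can be shown; the real
   function lives in user32.dll. */
UINT SendInput(UINT nInputs, LPINPUT pInputs, int cbSize) {
    (void)pInputs; (void)cbSize;
    return nInputs;              /* stub: pretend every event was injected */
}

/* Caller that checks the return value instead of discarding it. */
int send_ok(UINT n, LPINPUT events) {
    return SendInput(n, events, (int)sizeof(INPUT)) == n;
}
```

In the Call Library Function Node this corresponds to setting the return type to an unsigned 32-bit integer rather than void.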
  20. I had the same problem when I wanted to save to LV7.1 format from LV8.0 and had made graphs in LV8.0. I think it's because of a major bug in LV8.0 concerning graph properties, causing properties in the property window to differ from the actual properties that can be set directly in the graph window (I think?). LV8.0.1 does not have this problem. The only solution I found was to make completely new graphs from scratch (a real pain, but it worked).
  21. That's the theoretical part of it. But in a practical situation the data acquisition is handled by dedicated hardware, so it's already parallel and multiprocessed. Besides, analysis demands much more computing power than acquisition (seen from the PC), so the end result is still at most 50% utilization of the PC's dual-core processor. I have not yet seen LV use both cores within one loop (maybe it uses several threads, but never both cores). This means that it is *extremely* hard to write an analysis package that uses both cores effectively, since you have to parallelize it by hand. Detaching the user interface from the rest of the program therefore seems to me to be the only thing the dual cores can be used for in a practical setting. But then, this can be very useful in situations where you have lots of graphs and such, and it guarantees 100% responsiveness.
  22. Thanks for the info. When I got this PC some 3-4 months ago I was a bit puzzled about why LV only used at most 50% (only one core) in seemingly all the applications I tried, since LV was supposed to be "inherently" multitasking, etc. But then I forgot about the whole "problem", since all the other applications also seemed to be using only one core. Today I got NI News with an article about multicore processors, and I got curious again.
  23. I don't really know how that happened. I pushed "Add Reply", and my reply ended up inside my first post, although I wrote it at least 10 minutes after the first. Anyway, are there any rules to this multitasking/multicore thing? Are two loops the only method, or are there more?
  24. This link from the NI site describes LabVIEW's multicore functionality. But I have a dual-core PC, and I have not yet seen a LabVIEW application that runs on both cores simultaneously. Are there some special tricks I have to do in the programming? Two completely separate loops or something? Yes, two separate loops will do it (just tested). Rather cool, actually, to watch the CPU use when one loop is turned off and only one core is running. But this brings up some questions about the usefulness of this in a practical application where I need to transfer data between the loops.
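The two-independent-loops observation maps directly onto threads in a textual language. As a hedged analogy (POSIX threads in C; the worker and iteration counts are illustrative), two loops with no data dependency between them are free to be scheduled on separate cores, which is what the CPU meter shows:

```c
#include <pthread.h>

typedef struct { long iterations; double result; } loop_arg;

/* One independent "LabVIEW loop": no shared data, so nothing serializes it. */
static void *busy_loop(void *p) {
    loop_arg *a = p;
    double acc = 0.0;
    for (long i = 0; i < a->iterations; i++)
        acc += (double)i;
    a->result = acc;
    return NULL;
}

/* Launch two loops; with no dependency between them, the OS scheduler can
   place each thread on its own core. */
void run_two_loops(loop_arg *a, loop_arg *b) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, busy_loop, a);
    pthread_create(&t2, NULL, busy_loop, b);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
}
```

The moment the loops must exchange data, a queue or similar mechanism is needed, which reintroduces exactly the coupling question raised above.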
  25. I think this is more a matter of effort vs. (potential) income/loss than anything else. If you think the potential income is large, then why give it out for free? If you think the potential income is low, then why charge for it? If it is something in between, then you could just give it out for free for non-commercial use, charge for commercial use, and hope that most people are honest, and/or charge enough so that you can legally protect it in court if you have to. It also depends on whether your VIs are of an "industrial" or highly technical character (will only or mostly be used by corporations and specialists) or whether they can readily be used by "most" people, i.e. they are more of a generic character. IMO, highly technical and special software can be priced (very) high, and there is no point in giving it out for free, while more generic software can only be sold much cheaper, and it is the potential quantity that can be sold that decides whether it is worthwhile to charge for it.