LAVA 1.0 Content

Everything posted by LAVA 1.0 Content

  1. Oh, this is really clever! :worship: We should come up with a name for this concept. A by-reference variant?
  2. From a variant you get protection against typecasting to the wrong type. Perhaps you can combine the two by first casting to a variant and then typecasting to a datalog ref. You get a strictly typed wire that you can only cast back to the original datatype.
  3. Yes, yes. That's not what I meant. I just needed a datatype like a variant that would perform better memory-wise than a variant. I don't really need typecasting; I'll always cast back to the original type. Of course automatic type conversion would be great.
  4. Good point. So writing data to a queue and transforming the queue to a variant will be the way to go. I use the LabVIEW profiler to measure memory consumption.
  5. This is how the LAVA forum presented this message to me... Read 0 times - last comment by crelf Hmmm... I wonder how crelf can comment on the message without reading it
  6. All the alternative implementations suggested by both LV Punk and Aristos Queue may fail to enqueue the TCP buffer into a concurrently accessed queue. To be 100% certain that you succeed in enqueueing an element, you have to be able to test-and-set in an atomic manner. An atomic operation in computer science refers to a set of operations that can be combined so that they appear to the rest of the system to be a single operation. However, you cannot guarantee an atomic test-and-set unless you rely on hardware or operating-system test-and-set memory operations. In LabVIEW you can access these OS-level operations only by using semaphores or limited-size queues: you would need to lock the queue using either a semaphore or another queue, then perform the atomic dequeue+enqueue operation, and finally release the lock. This, however, doesn't sound very wise, since your performance would be lower than in a single-threaded application. You could also force all queue operations to happen in a single thread by placing them in a single non-reentrant VI, but this too would reduce your data throughput. To achieve a practical level of quality you can:
     - Dequeue elements when the queue is almost full, as Aristos suggested. However, to be on the safe side, you should start dequeueing a little earlier than what Aristos suggested.
     - Dequeue an element if the queue is full, then try to enqueue. If this fails, repeat dequeueing elements until you succeed in enqueueing. To increase the success rate, first try to dequeue one element, and if you still cannot enqueue, dequeue two elements, and so on.
     - Instead of dequeueing only one element, dequeue multiple elements so that for any practical purpose there is enough room in the queue.
     - Use multiple queues, one for each VI, so that collisions never occur.
     - Use only a single VI, so that collisions never occur.
     To get more information about atomic operations, google "atomic operation", "test-and-set" and perhaps also "semaphore". If you get interested, also google "software transactional memory" for an alternative, lock-free approach to concurrent operations.
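     The dequeue-on-full retry strategy described above can be sketched outside LabVIEW. Here is a minimal Python analogue using a bounded `queue.Queue`; the function name is illustrative, and note that (as the post says) the drop-then-retry pair is not atomic overall, so it is lossy by design:

```python
import queue

def lossy_enqueue(q: queue.Queue, item):
    """Enqueue into a bounded queue; if it is full, drop the oldest
    element and retry until the enqueue succeeds (lossy, not atomic)."""
    while True:
        try:
            q.put_nowait(item)
            return
        except queue.Full:
            try:
                q.get_nowait()  # drop the oldest element to make room
            except queue.Empty:
                pass  # a concurrent consumer emptied the queue; just retry

q = queue.Queue(maxsize=3)
for i in range(5):
    lossy_enqueue(q, i)
# the queue now holds the 3 newest items: 2, 3, 4
```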
  7. Perhaps I'll try to verify this. Meanwhile I also thought of a partial answer. One can typecast queues to any reference and then typecast the reference back to the appropriate queue. This way one can use queues to hold many kinds of data. The memory penalty is 2x compared to buffers; still, it's better than what variants can provide. Such a general queue reference, together with some sort of type string, can hold any type of data, which can then be correctly typecast back. See the image below.
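     A rough Python analogue of this pattern (all names hypothetical) pairs an opaque queue reference with a type string and checks the tag before "casting" back, so a wrong-type cast fails loudly instead of corrupting data:

```python
import queue

class TypedRef:
    """Opaque reference plus a type tag, mimicking a typecast queue refnum."""
    def __init__(self, q, type_name):
        self._q = q
        self.type_name = type_name

def to_generic(q, type_name):
    # erase the element type, keeping only a descriptive tag
    return TypedRef(q, type_name)

def from_generic(ref, expected_type):
    # refuse to 'typecast' back to the wrong datatype
    if ref.type_name != expected_type:
        raise TypeError(f"ref holds {ref.type_name}, not {expected_type}")
    return ref._q

q = queue.Queue()
q.put([1.0, 2.0, 3.0])
ref = to_generic(q, "DBL array")
same_q = from_generic(ref, "DBL array")  # same queue object, no copy
```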
  8. I agree that currently you need to have your VIs in memory to be able to handle inter-VI referencing. I would, however, prefer a more intelligent system. Each VI should contain a cross-reference table listing all its cross-references. Instead of loading all VIs into memory, only these tables would be loaded to construct a project-wide cross-reference (hash) table. When something is changed, LabVIEW would look up this table to check which other VIs the change affects. Those VIs would then be opened on demand and recompiled. If recompiling in turn affects other VIs, those would be opened and the process would go on. To speed things up, VIs once opened would not need to be closed until the project closes or too much memory is consumed (the user could specify a limit). There should also be a way to remove VIs from memory, excluding the cross-reference table, to free more memory for VIs at run time. This kind of system would produce the same functionality that you, Michael, are looking for, but would scale up much better as the number of VIs increases.
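     The propagation step described here — finding every VI transitively affected by a change via the cross-reference table — is essentially a reverse-dependency walk. A minimal sketch, with invented names and a toy table:

```python
def affected_vis(callers: dict, changed: str) -> set:
    """Given a map of VI -> set of VIs that call it (the cross-reference
    table), return every VI that must be recompiled when `changed` changes."""
    dirty, stack = set(), [changed]
    while stack:
        vi = stack.pop()
        for caller in callers.get(vi, ()):
            if caller not in dirty:
                dirty.add(caller)
                stack.append(caller)  # a recompiled caller may affect its own callers
    return dirty

# hypothetical project: Math.vi calls Add.vi; Main.vi and Report.vi call Math.vi
xref = {
    "Add.vi": {"Math.vi"},
    "Math.vi": {"Main.vi", "Report.vi"},
}
```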
  9. I'm waiting for my new workstation to arrive (Core 2 Duo E6600 + 4GB + 1TB RAID). But seriously, you should be aware that the real problem is that LabVIEW project loading time doesn't scale linearly with the number of VIs. The first 50 VIs take about the same time to load as the last VI alone when there are around 1000 VIs referenced in an LVOOP project.
  10. It seems we want different things - I really don't want all my 1000 VIs to be loaded into memory on project open. :thumbdown: This however is what happens in LabVIEW 8.20 (see this thread). So here is what you can do to force your VIs to load at project open in LV 8.20. Add a class to your project (all my classes belong to libraries, but I don't think this matters). Then add a method to the class and drop all the VIs you want automatically loaded onto the block diagram of this method. That's it. Since LabVIEW loads the class and everything it refers to on project start-up, all your VIs are in memory instantly. Well, not instantly. If you have around 1000 VIs referenced, as I do, it may take around 10 minutes to open the project. And it consumes quite a lot of memory too.
  11. Me... picky? No, most people are just so indifferent :laugh: I'm sure the next LV version will be the most stable ever. Can you btw clarify why the first cluster consumes less memory than the second cluster, and why enqueueing & dequeueing a buffer makes one extra copy?
  12. Just looked at this thread today after seeing your update. I've been writing large binary files recently with 8.0.1; in one test I created 38.9 GBytes of data over 66 hours. The data was 128-byte strings with a U32 counter as the first 4 bytes. This was received over a TCP/IP connection, passed to my logging function via a queue, then flushed and written to file once a second. One thing I noticed early on was that the Prepend Array or String Size boolean didn't behave as I expected it to. I was using Flush Queue and receiving an array of TCP strings. I wired the array into the Data input and a False into Prepend Array or String Size, and still got 4 additional bytes in front of each of my 128-byte strings (0x0080). I had to put Write to Binary File inside an auto-indexed loop to get the "true raw data" to disk. I verified the integrity of the transferred data by simply reading the logged file back 128 bytes at a time and monitoring the counter; I actually saw the U32 roll over back to zero. I've been looking at your example and was able to modify it and "make it work". It might not be fast or pretty, but it can write a file > 2 GB and read it back... You mentioned that you hadn't tested with other data types, so I thought I would point out that with 8.0.1 and Windows XP you CAN write to LARGE files using simple strings. Download File:post-949-1158953469.vi Now, if I didn't see SO many bugs (like with tables) I might consider actually working in 8.20
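     The integrity check described — reading the log back 128 bytes at a time and following a U32 counter that may roll over — can be sketched in Python. This assumes the record layout described in the post (a 4-byte counter heading each 128-byte record) and big-endian byte order, which is what LabVIEW uses when flattening by default:

```python
import io
import struct

RECORD = 128

def verify_log(f) -> int:
    """Check that each 128-byte record's leading U32 counter increments
    by one (modulo 2**32); return the number of records verified."""
    count, expected = 0, None
    while (rec := f.read(RECORD)):
        assert len(rec) == RECORD, "truncated record"
        (ctr,) = struct.unpack(">I", rec[:4])  # big-endian U32
        if expected is not None:
            assert ctr == expected, f"counter gap at record {count}"
        expected = (ctr + 1) % 2**32  # handles the roll-over back to zero
        count += 1
    return count

# build a small fake log whose counter rolls over, then rewind it
log = io.BytesIO()
for ctr in (2**32 - 2, 2**32 - 1, 0, 1):
    log.write(struct.pack(">I", ctr) + b"\x00" * (RECORD - 4))
log.seek(0)
```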
  13. In LabVIEW Object-Oriented Programming (LVOOP), dynamic methods cannot be reentrant. You have to make your method static by opening its connector pane, right-clicking the dynamic dispatch terminals and selecting Required or Recommended instead of Dynamic Dispatch. However, you cannot override such a method. If you need to be able to override your method, you can write a dynamic method that returns a VI reference to a static method. You then call this static method using a Call By Reference node. This way you can dynamically determine the correct static method to call, and that method can be reentrant. If all of the above was nonsense to you, you should study LVOOP in more detail. Start by opening the LabVIEW help.
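     The indirection described here — a dynamically dispatched method that only hands back a reference to a static method, which is then called by reference — maps onto returning a callable from an overridable method. A loose Python analogue with invented names:

```python
class Instrument:
    def get_worker(self):
        """Dynamic-dispatch step: each subclass returns its own static worker."""
        return Instrument._measure

    @staticmethod
    def _measure(value):          # the 'static, reentrant' method
        return value

class Scope(Instrument):
    def get_worker(self):         # override picks a different static method
        return Scope._measure

    @staticmethod
    def _measure(value):
        return value * 2

def run(device, value):
    worker = device.get_worker()  # dynamic step resolves the right method
    return worker(value)          # the 'Call By Reference' step
```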
  14. No, I don't remember; LV 7.1 is the first version I've ever used, and I really started my LV career with LV 8.0. But good to hear that we may expect better optimization later on. Until then, I'll use queues to store my large arrays.
  15. You understand me at least half correctly. What I didn't understand was why using a class without a queue consumed 3x the size of the data. Well, I must say I don't understand why getting a queue element consumes memory instead of just modifying the target buffer reference so that it points directly to the queue element. I also don't understand why the cluster bundle in the second example consumes more memory than the cluster bundle in the first example, since again LabVIEW could just modify the cluster element's buffer reference to point directly to the data buffer that is "written" to the cluster. After all, internally LabVIEW refers to arrays using datatype** or datatype***.
  16. It's funny... You either have people who totally love LV 8.x or people who (totally?) hate it... But one thing is for sure: it's the version that has created the most discussion to date... and the most
  17. Hi, I made some tests to find out how LVOOP consumes memory, and I noticed something I don't really understand. It seems that an LVOOP class wire consumes more memory than a cluster used in exactly the same way. And I don't just mean that there is a little more metadata attached. No, it seems as if LabVIEW makes an extra copy of the class private data compared to the cluster private data. Take a look at the image below. In the first case LabVIEW doesn't make buffer copies, so 8 MB of memory is consumed. In the second case, LabVIEW copies the initial buffer to the cluster; that is, one memory copy is made and a total of 16 MB is consumed. This is already a bit weird, since instead of making a buffer copy, LabVIEW could simply change the buffer the cluster data points to. Well, it doesn't. In the third case, when a class constant is used, LabVIEW makes two buffer copies, so total memory consumption increases to 24 MB. This is what I really don't understand: why does it need two copies? It seems like an extremely inefficient use of memory. What am I not understanding? EDIT: I was naturally using LabVIEW 8.2. EDIT: I made a third test using queues as the storage format for data inside an object. It seems that this is a more efficient way of storing large arrays in LabVIEW objects than LabVIEW's internal way of storing data. In the example below, only 16 MB of memory is consumed, at least if the profiler correctly profiles the memory usage of queues. Weird, I must say!
  18. Does anyone have experience with the SourceHaven SVN implementation? I've thought of it as an alternative to open-source SVN.
  19. Why not look at: LAVA > Code Repository > User Interface > CLL Dialog.zip ? Lars-G
  20. Hi, I have multiple C functions that I need to interface. I need to support numeric scalars, strings and booleans, plus 1-4 dimensional arrays of these. The programming problem I'm trying to avoid is that I have multiple different functions in my DLLs that all take as input, or return, all these datatypes. I can create a polymorphic interface for all these functions, but I end up with about 100 interface VIs for each of my C functions. This was still somehow acceptable in LabVIEW 8.0, but in LabVIEW 8.2 all these polymorphic VIs in my LVOOP project get read into memory at project open. It now takes about ten minutes to open the project and some 150 MB of memory is consumed instantly. I still need to expand my C interface library, and LabVIEW simply doesn't scale up to meet the needs of my project anymore. I currently reserve my LabVIEW datatypes using the DSNewHandle and DSNewPtr functions. I then initialize the allocated memory blocks correctly and return the handles to LabVIEW. The LabVIEW compiler interprets the Call Library Function Node terminals for my memory block as a specific data type. So what I thought was the following. I don't want the LabVIEW compiler to interpret the data type at compile time. What I want to do is return a handle to the memory structure together with some metadata describing the data type. All of my many functions would then return this kind of handle. Let's call this a data handle. I can later convert this handle into a real datatype either by typecasting it somehow or by passing it back to C code and expecting a certain type as a return. This way I can reduce the number of needed interface VIs to around 100 total, which is still acceptable (i.e. LabVIEW 8.2 doesn't freeze). So I practically need functionality similar to a variant. I cannot use variants, since I need to avoid making memory copies; when I convert to and from a variant, my memory consumption increases threefold. I handle arrays that consume almost all available memory and I cannot accept that memory is consumed ineffectively. The question is: can I use DSNewPtr and DSNewHandle to reserve a memory block but not return a LabVIEW structure of that size? Does LabVIEW garbage collection automatically decide to dispose of my block if I don't return it from my C code immediately but only later, at the next call into C? Regards, -jimi-
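  The "data handle" idea — an opaque handle plus metadata describing the datatype, converted only on demand and without copying — might be sketched like this in Python. All names here are hypothetical; the real implementation would hand out DSNewHandle-allocated blocks from the C side, with the type descriptor as the metadata:

```python
import array

class DataHandle:
    """Opaque buffer plus a type descriptor; the buffer is shared, never copied."""
    def __init__(self, buf, descriptor):
        self.buf = buf              # raw memory block (stand-in for a DS handle)
        self.descriptor = descriptor  # metadata, e.g. (type name, rank)

def resolve(handle, expected):
    """Late type check, replacing compile-time terminal interpretation."""
    if handle.descriptor != expected:
        raise TypeError(f"handle is {handle.descriptor}, not {expected}")
    return handle.buf               # same object back: no copy, unlike a variant

raw = array.array("d", [1.0, 2.0, 3.0])
h = DataHandle(raw, ("DBL", 1))     # a 1-D array of doubles
data = resolve(h, ("DBL", 1))       # data is raw itself, not a copy
```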
  21. i2dx, I share your comments. A thread in the forum on the NI site suddenly switched to this topic as a result of a member's signature. It is hilarious to read, but may also require :beer: . I even got a crazy icon out of it. I hope you don't mind that I quoted some passages from this thread; it described the issues too well. Here is the thread: http://forums.ni.com/ni/board/message?boar...message.id=2792 I apologize for not asking permission before posting over there. :worship: This thread has been very interesting and informative to read. Thanks!! :thumbup: JLV
  22. Well, I've been busy, but I decided to work through it this morning... upon your request. The picture below is the front panel of the attached example (edit 2: was block diagram). There is a parent class and a child class; both have their own private data. The Create Child method calls the Create Parent method. The Copy method is dynamically dispatched to the correct one, and this method then calls its parent's method. The Close method is also dynamically dispatched to the correct one, and it too calls its parent's method. This way the class hierarchy is correctly traversed for object creation, copying and disposal. Inheritance is implemented using LVOOP's built-in inheritance. You can download my reference implementation below. Download File:post-4014-1158838778.zip Edit 2: old one is here Download File:post-4014-1158829705.zip EDIT: One more thing. All private parent and child methods need to have different names, because LVOOP doesn't allow overriding private methods either. So private methods are not really private: they also affect the descendant and ancestor classes by narrowing the namespace that can be used to name methods. This may cause problems, since the developer of an ancestor class in particular cannot know in general what methods exist in all the descendant classes developed by other people. So if the ancestor class developer decides to change the implementation and creates a new private method, descendant classes may break even though the class interface stays the same. EDIT 2: I modified the wire color of the child class so it can be distinguished from the parent class. The Create Child method looks as follows: The Close Child method looks as follows: The Copy Child method looks as follows:
  23. Yes, it really should. If you filter away the 0 Hz component, which is the same thing as the DC component, you end up with a signal that has no DC component. So if you band-pass filter your signal to include only frequencies in a certain region, and 0 Hz doesn't belong to this region, you do not get any DC in your filtered signal. If you want to keep your DC component together with your 8k component, you can filter your signal with a low-pass at 0.5 Hz as well as the band-pass you had, and add the two signals. Then you'll have your DC together with your 8k signal.
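  The add-the-DC-back trick can be demonstrated with a toy frequency-domain filter in plain Python (a sketch using a naive DFT, not the LabVIEW filter VIs; DFT bin numbers stand in for the frequency bands, with bin 0 playing the role of DC):

```python
import cmath
import math

def dft(x):
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * math.pi * k * n / N) for k in range(N)).real / N
            for n in range(N)]

def keep_bins(X, bins):
    # zero every bin not in `bins` (include mirrored bins for a real signal)
    return [X[k] if k in bins else 0 for k in range(len(X))]

N = 32
# DC level of 3 plus a tone sitting in bin 4
sig = [3.0 + math.sin(2 * math.pi * 4 * n / N) for n in range(N)]
X = dft(sig)

bandpassed = idft(keep_bins(X, {4, N - 4}))       # band-pass drops bin 0: mean ~ 0
with_dc = idft(keep_bins(X, {0, 4, N - 4}))       # low-pass (bin 0) + band-pass summed
```

Summing the low-pass (DC-only) output with the band-pass output reconstructs the original signal, which is exactly the point of the post.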
