
External memory allocation and management using C



Hi,

I have multiple C functions that I need to interface. I need to support numeric scalars, strings and booleans, and 1-4 dimensional arrays of these. The programming problem I'm trying to avoid is that I have multiple different functions in my DLLs that all take as input or return all of these datatypes. I could create a polymorphic interface for all these functions, but I end up with about 100 interface VIs for each of my C functions. This was still somewhat acceptable in LabVIEW 8.0, but in LabVIEW 8.2 all these polymorphic VIs in my LVOOP project get read into memory at project open. It now takes about ten minutes to open the project, and some 150 MB of memory is consumed instantly. I still need to expand my C interface library, and LabVIEW simply doesn't scale up to meet the needs of my project anymore.

I currently reserve my LabVIEW datatypes using the DSNewHandle and DSNewPtr functions. I then initialize the allocated memory blocks correctly and return the handles to LabVIEW. The LabVIEW compiler interprets the Call Library Function Node terminal carrying my memory block as a specific data type.
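For readers unfamiliar with what such a handle looks like, here is a minimal C sketch of the memory layout behind a LabVIEW handle for a 1-D array of doubles: a pointer to a pointer to a block that starts with the dimension size. The `AllocDblArrayHdl` name and the malloc-based allocation are stand-ins for illustration only; a real DLL would call DSNewHandle from LabVIEW's extcode.h instead.

```c
#include <stddef.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Layout of a LabVIEW 1-D double-array handle: dimension size followed
 * immediately by the elements. The handle itself is a pointer to a
 * pointer to this block. */
typedef struct {
    int32_t dimSize;     /* number of elements                      */
    double  elt[1];      /* first element; the rest follow in place */
} DblArray;
typedef DblArray **DblArrayHdl;

/* Stand-in for DSNewHandle plus initialization: allocate the block,
 * set the dimension size, and zero the elements. */
static DblArrayHdl AllocDblArrayHdl(int32_t n)
{
    DblArray *p = malloc(offsetof(DblArray, elt) + (size_t)n * sizeof(double));
    DblArrayHdl h = p ? malloc(sizeof p) : NULL;
    if (!h) { free(p); return NULL; }
    p->dimSize = n;
    memset(p->elt, 0, (size_t)n * sizeof(double));
    *h = p;
    return h;
}
```

After initializing the block, the DLL returns the handle through a Call Library Function Node terminal configured as a 1-D array of DBL, and LabVIEW treats it as native data.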

So here is what I thought. I don't want the LabVIEW compiler to interpret the data type at compile time. What I want to do is return a handle to the memory structure together with some metadata describing the data type. All of my many functions would then return this kind of handle. Let's call this a data handle. I can later convert this handle into a real datatype, either by typecasting it somehow or by passing it back to the C code and expecting a certain type in return. This way I can reduce the number of needed interface VIs to about 100, which is still acceptable (i.e. LabVIEW 8.2 doesn't freeze).
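On the C side, such a "data handle" could be little more than an opaque handle paired with a type tag. The sketch below is purely illustrative (none of these names exist in any LabVIEW API); it shows how one generic accessor per concrete type could replace a whole family of per-type interface VIs.

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical type tag travelling with each opaque handle, so the
 * caller can defer interpretation until it casts back to a real type.
 * All names here are illustrative, not part of any LabVIEW API. */
typedef enum {
    DH_DOUBLE, DH_INT32, DH_STRING, DH_BOOLEAN
} DataElemType;

typedef struct {
    DataElemType elemType;   /* scalar element type             */
    int32_t      nDims;      /* 0 = scalar, 1-4 = array ranks   */
    void        *handle;     /* opaque LabVIEW handle (UHandle) */
} DataHandle;

/* One generic accessor replaces a family of per-type VIs: the caller
 * asks for a concrete type/rank and gets NULL on a mismatch. */
static void *GetDoubleArrayHandle(const DataHandle *dh, int32_t dims)
{
    if (dh->elemType != DH_DOUBLE || dh->nDims != dims)
        return NULL;
    return dh->handle;
}
```

Every C function would then accept and return `DataHandle` values, and only the final cast-back step needs to know the concrete type.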

So I practically need functionality similar to what a variant provides. I cannot use variants, since I need to avoid making memory copies; when I convert to and from a variant, my memory consumption triples. I handle arrays that consume almost all available memory, and I cannot accept memory being used ineffectively.

The question is: can I use the DSNewPtr and DSNewHandle functions to reserve a memory block without returning a LabVIEW structure of that size? Will LabVIEW garbage collection automatically decide to dispose of my block if I don't return it from my C code immediately, but only later at the next call to the C code?

Regards,

-jimi-

If I remember correctly, LabVIEW will automatically dispose of the memory allocated by the DSxxx functions when the VI is done.

Perhaps I should verify this. Meanwhile I also thought of a partial answer. One can typecast queues to any reference and then typecast the reference back to the appropriate queue type. This way one can use queues to hold many kinds of data. The memory penalty is 2x compared to buffers. Still, it's better than what variants can provide. Such a general queue reference, together with some sort of type string, can hold any type of data, which can then be correctly typecast back. See the image below.

post-4014-1159039400.png?width=400


I think it will be more efficient to convert the queue ref to a variant instead of typecasting the ref. I haven't tried it, so I don't know for sure. Just a question: how do you measure the memory?

EDIT: It will use the same memory, but it won't be "transformed" from variant to anything but the original type (at least not without the utility VIs for variants).

I think it will be more efficient to convert the queue ref to a variant instead of typecasting the ref. I haven't tried it, so I don't know for sure. Just a question: how do you measure the memory?

EDIT: It will use the same memory, but it won't be "transformed" from variant to anything but the original type (at least not without the utility VIs for variants).

Good point :) So writing the data to a queue and converting the queue reference to a variant will be the way to go. I use the LabVIEW profiler to measure memory consumption.


Well, it seems that typecasting the contained types in queues does not work after all. I made a setup identical to your picture, and although it seems to work for the number zero, put anything else (3, 4, whatever) in the dbl array and the int array will consist of just garbage :(

The utility variant VIs (in ..\vi.lib\utility\variant.something) that seem to be able to set the type etc. will in fact do that, BUT they will also destroy the value. I have no idea if this is a bug or a feature. Aristos Queue may know, I hope? If they set the type in variants but at the same time destroy the values, it seems to me that they have very limited use, and I'm not sure what they can be used for.

You can still do the same thing with variants, but the conversion must be done by flattening, then unflattening with the correct typedef. The problem is that this will also produce the exact same garbage for numbers other than zero.

Here is a vi (LV8.2) of the two versions of variant conversion using queue.

Download File:post-4885-1159168331.vi

Well, it seems that typecasting the contained types in queues does not work after all. I made a setup identical to your picture, and although it seems to work for the number zero, put anything else (3, 4, whatever) in the dbl array and the int array will consist of just garbage :(

Yes, yes. I didn't mean this. I just needed a datatype like a variant that performs better memory-wise than a variant. I don't really need typecasting; I'll always cast back to the original type. Of course automatic type conversion would be great :)


Jimi,

I think that was a pretty cool example, you actually typecasted the values contained in the queue by typecasting the reference!

The problem with the garbage output in the example is that you input 64-bit elements and try to extract 32-bit elements, hence the strange result with every second element set to 0.

Change the representation from I32 to I64 and the output will be an I64 array, where each element is the DBL value typecast to I64.
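The effect can be reproduced in plain C. The sketch below (all names illustrative) flattens a double to big-endian bytes, which is the byte order LabVIEW's Typecast uses, and reads them back as two I32s; for a small value such as 3.0 the second word comes out 0, matching the zeros seen in the garbage array.

```c
#include <stdint.h>
#include <string.h>

/* Flatten one double to 8 big-endian bytes, then reinterpret those
 * bytes as two big-endian int32s - the same reinterpretation the
 * queue-reference typecast performs on each 64-bit element. */
static void SplitDoubleToI32Pair(double d, int32_t out[2])
{
    uint64_t bits;
    uint8_t  bytes[8];
    memcpy(&bits, &d, sizeof bits);            /* raw IEEE-754 bits  */
    for (int i = 0; i < 8; i++)                /* flatten big-endian */
        bytes[i] = (uint8_t)(bits >> (56 - 8 * i));
    for (int w = 0; w < 2; w++)                /* read two I32 words */
        out[w] = (int32_t)(((uint32_t)bytes[4 * w]     << 24) |
                           ((uint32_t)bytes[4 * w + 1] << 16) |
                           ((uint32_t)bytes[4 * w + 2] << 8)  |
                            (uint32_t)bytes[4 * w + 3]);
}
```

For 3.0 the IEEE-754 bit pattern is 0x4008000000000000, so the two I32 words are 0x40080000 and 0: the exponent and high mantissa bits land in one word and the all-zero low mantissa bits in the next, which is why every second I32 element was zero.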

In my opinion you should stick with the typecasting, mainly because I prefer the protection you get from the DataLogRefnum.

/J

In my opinion you should stick with the typecasting, mainly because I prefer the protection you get from the DataLogRefnum.

With a variant you get protection against typecasting to the wrong type. Perhaps you can combine the two by first casting to a variant and then typecasting to a datalog ref. You get a strictly typed wire that you can only cast back to the original datatype.

With variants you get a lot of other possibilities for automation later on, for instance in conversion. See the attached VI where int32 arrays are automatically cast to a double array (the default type). It will also be safer, because you can raise errors when wrong queue/ref types accidentally come in.

Oh, this is really clever! :worship: We should come up with a name for this concept. By-reference variant?


I agree that variants can store any type of data and that you can use variants as references. But with DataLogRefnums you get broken wires if you accidentally connect an unsupported wire to the reference input; with variants you won't.

Since variants accept all data types, your VI will still run, but with an error, and in some cases you will have a hard time finding this error.

Regarding performance, typecasting will outperform variants, even with the additional type info.

I put a loop around the enqueue/dequeue operations (setting the element size down to 100).

Looping 1000 times takes 62000 ms using variants, and only 3 ms with typecasting.

/J

Forgot the modified version...

With variants you get a lot of other possibilities for automation later on, for instance in conversion. See the attached VI where int32 arrays are automatically cast to a double array (the default type). It will also be safer, because you can raise errors when wrong queue/ref types accidentally come in.

Download File:post-4885-1159183482.vi

Download File:post-5958-1159186325.vi


Well, I agree that some of the system VIs in LabVIEW do have abysmal performance, but I think you are a bit unfair when quoting those numbers. Attached is a much more balanced test. Here I use one variant attribute to store the type (for easy retrieval). My numbers are 42 ms for the variant and 37 ms for the typecast with type. The variant is only slightly slower, but still has all the benefits of variants (full type info and versatility).

Download File:post-4885-1159198639.vi

Here is the "original" test with no type conversions showing variant and typedef with equal performance. Anyway, i'm not saying variant are better. Using variant will be a very different way of passing the data, but they do have equal performance both memory-vise and speed-vise with similar typedef. Variants also have several other very attractive features that only variants have.

Download File:post-4885-1159199482.vi


I just have to add one more thing :) When running the version with all the system VIs in a test similar to the previous tests, I get a value around 220 ms. That is when all the system VIs are called. IMO the system VIs are not that abysmal after all, and they can add a lot of functionality. The problem is that they are locked (impossible to optimize) and poorly documented, so using the OpenG variant VIs for the same purpose could probably be a better choice, although I'm not sure if they have the same functionality.

Download File:post-4885-1159207550.vi


I too think that those numbers were strange, but I didn't have time to restart my computer to perform the test again.

I did restart LabVIEW and run the test with similar result.

The purpose of my test was to confirm that typecasting is faster than variant conversion, not to reject the variant datatype.

Actually I had not seen the VIs you used on the variant reference, and I do see them as handy.

I will run the test again tomorrow, so stay tuned...

/J


I repeated my test from yesterday, this time with the queue size set to 1 in order to rule out memory issues.

The test was run 1000 times, and the results were a bit :oops: different than yesterday.

Variants = 100ms

typecast = 2ms

Which means that in terms of performance variants will not do as well as typecasting (at least for references).

I don't know the reason for the strange result I got yesterday, sorry for that post :headbang: .

/J


If you mean my first post, I don't understand either, but the result of my second test is more what I would expect.

A typecast should just change the way a piece of memory is interpreted, but conversion to a variant must involve data copying, since the size of the variant is different from that of the reference.
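The distinction can be sketched in C (the names below are illustrative, not LabVIEW internals): reinterpreting existing bytes costs nothing and shares the original storage, while building a variant-style container must copy the payload in next to its type header.

```c
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* A variant-like container, for illustration only: a type code plus a
 * byte-for-byte copy of the flattened data. Constructing it necessarily
 * copies the payload, whereas a typecast merely reinterprets bytes that
 * already exist. */
typedef struct {
    int32_t typeCode;    /* caller-defined tag for the stored type */
    size_t  size;        /* payload size in bytes                  */
    uint8_t data[];      /* flexible array member: the copied data */
} FakeVariant;

static FakeVariant *WrapAsVariant(int32_t typeCode,
                                  const void *src, size_t size)
{
    FakeVariant *v = malloc(sizeof(FakeVariant) + size);
    if (!v) return NULL;
    v->typeCode = typeCode;
    v->size = size;
    memcpy(v->data, src, size);   /* the unavoidable copy */
    return v;
}
```

Because the wrapper owns its own copy, later changes to the original data never show up inside it, which is exactly the extra memory traffic a plain reinterpretation avoids.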

There are also 4 VIs that must run on the variant reference; these add to the overall timings.

/J

I don't understand how you get those results. Converting a queue ref to a variant is just as efficient as typecasting it. Do you do anything other than convert the ref?

Download File:post-5958-1159251387.vi


I liked the way you converted the queue reference to variant, and then extracted the information out of the variant.

The purpose of the test was to see how much overhead, compared to jimi's original post (with type added), was introduced by using this way of passing any queue reference in a variant.

With that in mind, I think the results I posted today are relevant.

As I said previously, I still like DataLogRefnums better due to the wiring protection; e.g. in _4 and _5 there is no check that the variant actually holds a queue reference.

/J

But this will be like comparing apples and oranges; besides, you are not sending only a ref anymore. Look at my examples 4 and 5, which are a much better comparison.
As I said previously I still like DataLogRefnums better due to the wiring protection

Perhaps I'll rejoin this topic, which seems to attract only Scandinavian interest. What I was thinking is that I'll embed the typecast queue into a LVOOP class as private data. This way I may not get the best possible performance, but I do get data protection similar to DataLog Refnums. In addition, I can hide the implementation from the user. I'll start by typecasting to a DataLog Refnum as I originally suggested. Later, if I feel this is inadequate for my purposes, I'll change to a variant as besvingen suggested. I'm thinking of implementing the class similar to my last suggestion in the topic Refactoring the ReferenceObject example in LV 8.2.

-jimi-

Perhaps I'll rejoin this topic, which seems to attract only Scandinavian interest. What I was thinking is that I'll embed the typecast queue into a LVOOP class as private data. This way I may not get the best possible performance, but I do get data protection similar to DataLog Refnums. In addition, I can hide the implementation from the user. I'll start by typecasting to a DataLog Refnum as I originally suggested. Later, if I feel this is inadequate for my purposes, I'll change to a variant as besvingen suggested. I'm thinking of implementing the class similar to my last suggestion in the topic Refactoring the ReferenceObject example in LV 8.2.

-jimi-

My experience with LVOOP for simple data is that it's actually more efficient than typecasting (and variants). I don't know why; maybe the reason is that with LVOOP there is no typecasting at all? It would be nice to know exactly what is going on. I imagine it works like this:

Typecast: data is first flattened, then unflattened to the actual type

Variant: data is flattened and the type info is stored along with the flattened data ??

LVOOP: data is just set into the cluster ??


LVOOP is efficient because it doesn't require any conversion, at least according to Aristos Queue.

But since LVOOP is not available on RT targets at the moment, I can't use it.

I do not know how variants are stored in memory, but I think you are pretty close. Data should be flattened and some info is added.

I really would like to see NI implement a genericDataType, and data to reference functions.

The genericDataType should accept anything without any conversion, and should be very similar to a variant. This feature basically already exists, but only for LVOOP classes.

Data to reference functions should only mark a wire so that data is not copied when forked (should also change appearance).

/J

My experience with LVOOP for simple data is that it's actually more efficient than typecasting (and variants). I don't know why; maybe the reason is that with LVOOP there is no typecasting at all? It would be nice to know exactly what is going on. I imagine it works like this:

Typecast: data is first flattened, then unflattened to the actual type

Variant: data is flattened and the type info is stored along with the flattened data ??

LVOOP: data is just set into the cluster ??

If I rememeber correctly, LabVIEW will automatically dispose the memory allocated by the DSxxx functions when the VI is done.

Instead you could use the Application Zone (AZ) functions, as the application data will be kept from call to call.

Nope, DS handles are NOT automatically disposed of other than at application exit. The same applies to AZ handles. If they were disposed of at the end of your VI's execution, you would have real trouble returning them to a caller.

The only real difference between DS and AZ is that AZ handles are relocatable between calls, which means that you need to lock them explicitly if you want to dereference them, and unlock them afterwards. LabVIEW maintains all data that gets exposed to the diagram, except path handles, in DS handles.

However, on all modern platforms there is really no difference between AZ handles and DS handles, since locked handles can still be relocated by the lower-level memory manager of the system without causing general protection errors, thanks to the advanced memory management support in today's CPUs. I believe the actual AZ and DS distinction was only really necessary to support the old classic Mac OS.

I do not know how variants are stored in memory, but I think you are pretty close. Data should be flattened and some info is added.

According to some postings on Info-LabVIEW from people who should know, no flattening is actually done.

Rolf Kalbermatter

Nope, DSHandles are NOT automatically disposed other than at application exit.

If I pass a buffer of anything to a CIN/DLL on one terminal and return the buffer handle on another terminal, won't LabVIEW auto-dispose of the buffer once it is no longer used on the block diagram? I'll have to verify this as well, since it is a different case from the one Rolf answered.

