
Some 32/64-bit LV questions


Recommended Posts

I believe I can have both LV2009 32-bit and LV2009 64-bit installed on the same 64-bit machine, and that using LV2009 32-bit I can create a 32-bit application on a 64-bit machine.

I'm assuming I should NOT open and run my 32-bit code under 64-bit LV because that would recompile it into 64-bit.

I'm trying to wrap my head around the best way to write in 32-bit but still be able to build both 32-bit and 64-bit applications from the same code. Or would it just be easier to have a dedicated 64-bit machine, copy my 32-bit development code to it, and build there?

Last but not least, this very interesting article on the topic on the NI site says there is a performance hit running a 32-bit app on a 64-bit system. Anyone have any Real World LabVIEW experience with this and know what that hit might more-or-less be?

(BTW, try searching on "64-bit" on this site. Well, actually, don't. I got real frustrated and then went to Google.)

Link to comment

Apps built in 32-bit LV will be 32-bit; apps built in 64-bit LV will be 64-bit. If a VI was saved by 32-bit LabVIEW, it will be recompiled in 64-bit LabVIEW, and vice versa. Built apps take on the bitness of the LabVIEW used to build them, so even if the source VIs were saved in 32-bit, when built by 64-bit LabVIEW the output will be 64-bit (and again vice versa with 64-bit source and 32-bit LabVIEW).

So in theory, you could use the same source with 32 and 64-bit LabVIEW and your output will be as expected.
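If you ever want to double-check what a build actually produced, the bitness is right there in the executable's PE header. A minimal Python sketch (standard library only; the path at the bottom is just a made-up example):

```python
import struct

def exe_bitness(path):
    """Return '32-bit' or '64-bit' by reading the PE COFF machine field."""
    with open(path, "rb") as f:
        dos_header = f.read(64)
        if dos_header[:2] != b"MZ":
            raise ValueError("not a Windows executable")
        # e_lfanew (the offset of the PE header) lives at offset 0x3C
        pe_offset = struct.unpack_from("<I", dos_header, 0x3C)[0]
        f.seek(pe_offset)
        if f.read(4) != b"PE\x00\x00":
            raise ValueError("PE signature not found")
        machine = struct.unpack("<H", f.read(2))[0]
    return {0x014C: "32-bit", 0x8664: "64-bit"}.get(machine, hex(machine))

# Hypothetical path to a built LabVIEW application:
print(exe_bitness(r"C:\builds\MyApp\MyApp.exe"))
```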

Link to comment

Last but not least, this very interesting article on the topic on the NI site says there is a performance hit running a 32-bit app on a 64-bit system. Anyone have any Real World LabVIEW experience with this and know what that hit might more-or-less be?

No particular experience with LabVIEW, but I wouldn't expect much more of a performance hit than for most other 32-bit applications running on a 64-bit OS. Nothing definitive, but since moving to 64-bit Win7 (from 32-bit Win7) I've seen maybe 10-15% slower performance in various applications (Office, for example). I suspect it depends on what your application will actually be doing...
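For what it's worth, 32-bit apps on 64-bit Windows run under the WoW64 compatibility layer rather than full emulation, so the hit tends to be modest. If you're curious whether a given process is running that way, a quick ctypes sketch like this will tell you (Windows-only; IsWow64Process has been in kernel32 since XP SP2, as far as I know):

```python
import ctypes
import struct

def running_under_wow64():
    """True when a 32-bit process is running on 64-bit Windows via WoW64."""
    kernel32 = ctypes.windll.kernel32
    if not hasattr(kernel32, "IsWow64Process"):
        return False  # very old Windows without WoW64 support
    flag = ctypes.c_int(0)
    kernel32.IsWow64Process(kernel32.GetCurrentProcess(), ctypes.byref(flag))
    return bool(flag.value)

print("This process is", struct.calcsize("P") * 8, "bit")
print("Running under WoW64:", running_under_wow64())
```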

Link to comment

So in theory, you could use the same source with 32 and 64-bit LabVIEW and your output will be as expected.

I'm more concerned about the input. :) More specifically (maybe you've already answered this, but the caffeine just hasn't kicked in for me yet): if I build my 32-bit code using 64-bit LV, will 64-bit LV make my code 64-bit first? I.e., do I have to protect that code to keep it 32-bit during a 64-bit build?

I suspect it depends on what your application will actually be doing...

Yeah, I admit it was a pretty open-ended question. I'm looking to run on a 64-bit box in order to be able to access more than the measly 2GB memory (or call it 1GB contiguous on a good day) LV can use on a 32-bit box.

Link to comment

I was under the impression the whole 32/64-bit thing was no different from Windows/Mac/Linux/etc. A given VI has source that has absolutely no affiliation with platform or bitness, and riding alongside the source is (among other things) compiled code for whatever platform/bitness it's being edited on. If it's opened in an environment where the compiled bits don't match, it's recompiled as necessary. Pretty much what gmart said. The whole bitness question is kind of moot; LabVIEW has been handling multi-platform like this for ages, and bitness is no different.

Want a Win64 build? Build the app in LV64. Mac? Build it in LVMac. Win32 exe? Build in Win32. One source to rule them all.

The exceptions are where you have bit- or platform-specific code of your own. But that's what conditional disable symbols etc. are for.
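In a text language the same trick is just a branch on pointer size. A rough Python sketch of the pattern (the driver DLL names are invented, purely to show the idea):

```python
import ctypes
import struct

POINTER_BITS = struct.calcsize("P") * 8   # 32 or 64, depending on the process

# Hypothetical driver DLL names, just to show the pattern -- the same role a
# Conditional Disable structure plays in LabVIEW source.
dll_name = "mydriver_x64.dll" if POINTER_BITS == 64 else "mydriver_x86.dll"

try:
    driver = ctypes.WinDLL(dll_name)   # Windows-only; swap for CDLL elsewhere
except OSError:
    driver = None                      # fall back or raise a clearer error
print(f"{POINTER_BITS}-bit process, would load {dll_name}")
```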

Link to comment

but the caffeine just hasn't kicked in for me yet

One day we will have robots following us about to inject happy drugs. Sorry, been reading too much Sluggy Freelance lately...

Yeah, I admit it was a pretty open-ended question. I'm looking to run on a 64-bit box in order to be able to access more than the measly 2GB memory (or call it 1GB contiguous on a good day) LV can use on a 32-bit box.

Curious... what are you doing that you need a 2GB chunk at once? Not that I haven't had Windows take down LV for exceeding 2GB...

Tim

Link to comment

Curious... what are you doing that you need a 2GB chunk at once? Not that I haven't had Windows take down LV for exceeding 2GB...

Multi-minute runs at a 64k sample rate across multiple channels, to track transient noise as it travels around a test vessel.

The main problem is not the amount of memory my data takes, it's all the copies of my data that LabVIEW makes, and holds on to. If a data set is 750MB, that shouldn't be a problem, but LV will make multiple copies of it and blow up the memory.

I currently decimate like crazy on the longer runs -- which of course slows everything down, and is a matter of taking an educated guess (a WAG, really) at how much contiguous memory is available and keeping my fingers crossed.
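To put rough numbers on it (the channel count and run length here are guesses, not taken from your post): 4 channels x 64,000 S/s x 5 minutes x 8 bytes per double is about 614 MB before LabVIEW makes a single copy. A quick NumPy sketch of the sizing and of naive stride decimation:

```python
import numpy as np

CHANNELS = 4          # example values -- not from the original post
SAMPLE_RATE = 64_000  # S/s per channel
MINUTES = 5
BYTES_PER_SAMPLE = 8  # double-precision float

samples_per_channel = SAMPLE_RATE * 60 * MINUTES
raw_bytes = CHANNELS * samples_per_channel * BYTES_PER_SAMPLE
print(f"Raw data set: {raw_bytes / 1e6:.0f} MB")    # ~614 MB

def decimate_by_stride(data, factor):
    """Keep every Nth sample per channel (no filtering -- display use only)."""
    return data[:, ::factor]

data = np.zeros((CHANNELS, samples_per_channel))    # stands in for acquired data
reduced = decimate_by_stride(data, 100).copy()      # copy so the raw array can be freed
print(f"Decimated: {reduced.nbytes / 1e6:.1f} MB")
```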

Link to comment

...The main problem is not the amount of memory my data takes, it's all the copies of my data that LabVIEW makes, and holds on to. If a data set is 750MB, that shouldn't be a problem, but LV will make multiple copies of it and blow up the memory.

I currently decimate like crazy on the longer runs -- which of course slows everything down, and is a matter of taking an educated guess (a WAG, really) at how much contiguous memory is available and keeping my fingers crossed.

I don't know anything about your application, but if the amount of contiguous memory is the bottleneck, I would start looking into using queues or RT-FIFOs to store the data in smaller chunks, maybe by reading data from the DAQ boards more frequently (if you are using DAQ). The benefit is that you can have the same amount of data in the FIFO, but the memory allocated does not necessarily have to be contiguous.

I also believe that the number of buffers of data LabVIEW holds on to can be minimized by using FIFOs; once allocated no more buffers should have to be added.
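A rough text-language sketch of that idea, where every queue element is a modest, independently allocated chunk so nothing ever needs one giant contiguous block (the chunk size and channel count are made up for the example):

```python
from queue import Queue
import numpy as np

CHANNELS = 4                 # assumed for the example
CHUNK_SAMPLES = 1000         # samples per channel per queue element

fifo = Queue()               # stands in for a LabVIEW queue / RT-FIFO

def producer(n_chunks):
    """Acquisition side: push fixed-size chunks instead of one huge array."""
    for _ in range(n_chunks):
        chunk = np.random.rand(CHANNELS, CHUNK_SAMPLES)  # fake DAQ read
        fifo.put(chunk)      # each chunk is its own small allocation
    fifo.put(None)           # sentinel: acquisition finished

def consumer():
    """Processing side: work chunk by chunk, never holding the whole run."""
    total = 0
    while (chunk := fifo.get()) is not None:
        total += chunk.shape[1]
    return total

producer(50)
print("Samples per channel processed:", consumer())
```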

Just my 2c

/J

Link to comment

Multi-minute runs at a 64k sample rate across multiple channels, to track transient noise as it travels around a test vessel.

The main problem is not the amount of memory my data takes, it's all the copies of my data that LabVIEW makes, and holds on to. If a data set is 750MB, that shouldn't be a problem, but LV will make multiple copies of it and blow up the memory.

I currently decimate like crazy on the longer runs -- which of course slows everything down, and is a matter of taking an educated guess (a WAG, really) at how much contiguous memory is available and keeping my fingers crossed.

Ah, the evilness that is copies of data. Have you...

- Reduced the number of subVIs

- Tried using Request Deallocation

- (LV2009) tried passing by-reference data

- Attempted to use smaller chunks at once

- waved the dead chicken

Tim

Link to comment

I don't know anything about your application, but if the amount of contiguous memory is the bottleneck, I would start looking into using queues or RT-FIFOs to store the data in smaller chunks.

Unfortunately, this is a post-run application where the users want to see the entire data set(s) at one time.

I also believe that the number of buffers of data LabVIEW holds on to can be minimized by using FIFOs; once allocated no more buffers should have to be added.

That is true, in a perfect world. In reality, LV often grabs much more memory than it actually needs. While I understand this is a good thing for performance, it is a bad thing for large data sets. I jump thru lots of hoops to *really* empty and remove queues in order to attempt to get back all the memory I possibly can.

Link to comment

Unfortunately, this is a post-run application where the users want to see the entire data set(s) at one time.

Multiple channels at 64kS/s does not seem to be all that useful for display in real time :lol:

We used this technique to be able to run a high-channel-count, high-speed data acquisition on a machine with a limited amount of memory (but then, the customer wasn't interested in looking at the raw data, only in the calculation results). I think each FIFO element contained 1000 samples for each channel, and that we had a secondary Queue for the result data (running average etc.).
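In rough Python form the pattern looks something like this: fold each FIFO element into the running result and drop the raw samples straight away (1000 samples per element as described; the channel count is just a guess):

```python
import numpy as np

CHANNELS = 8            # assumed -- the post doesn't say how many
CHUNK_SAMPLES = 1000    # samples per channel per FIFO element, as described

running_sum = np.zeros(CHANNELS)
samples_seen = 0

for _ in range(200):                                 # stands in for the read loop
    chunk = np.random.rand(CHANNELS, CHUNK_SAMPLES)  # one FIFO element
    running_sum += chunk.sum(axis=1)                 # fold it into the result...
    samples_seen += chunk.shape[1]
    del chunk                                        # ...and let the raw data go

print("Running mean per channel:", running_sum / samples_seen)
```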

/J

Link to comment

Ah, the evilness that is copies of data. Have you...

- Reduced the number of subVIs

- Tried using Request Deallocation

- (LV2009) tried passing by-reference data

- Attempted to use smaller chunks at once

- waved the dead chicken

Other than LV9, I've tried all of the above, especially waving the dead chicken. :)

FWIW, NI folks generally seem to pooh-pooh "Anxious Deallocation", as I've seen it called. I still throw it in occasionally where it might theoretically have some effect, but I personally haven't seen it help much myself.

Multiple channels at 64kS/s does not seem to be all that useful for display in real time

I actually display up to 4 channels of "real time" waveform data (or spectra or LOFAR). We use it for what we call "tap-out": determining whether a particular sensor is hooked up where it's supposed to be, and to watch for transients. But as you suggest, in that case I can break it up into much smaller chunks and memory is not an issue.

Link to comment

FWIW, NI folks generally seem to pooh-pooh "Anxious Deallocation", as I've seen it called. I still throw it in occasionally where it might theoretically have some effect, but I personally haven't seen it help much myself.

I have tried using Request Deallocation in a number of high-data-rate applications and found it to be utterly worthless. It actually made things worse on occasion. I had better luck with the dead chicken.

Link to comment

Other than LV9, I've tried all of the above, especially waving the dead chicken. :)

FWIW, NI folks generally seem to pooh-pooh "Anxious Deallocation", as I've seen it called. I still throw it in occasionally where it might theoretically have some effect, but I personally haven't seen it help much myself.

Can't have too many chickens about.

Can't say I have had good luck with memory deallocation either, but the times I have tried it I wound up redesigning chunks of the code, because what I was doing required going back and checking my sanity.

As a thought, that is a lot of data to display, and you can't have everything visible on a graph anyway. Can you read in from file and decimate to something somewhat sane? This may be quite slow, but trying to keep everything (times N copies) in memory will also slow the machine down.
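The read-and-decimate idea might look roughly like this: stream the file in blocks and keep a min/max pair per block, so transients still show up when the reduced set is plotted (the file path, block size, and single-channel float64 layout are all assumptions):

```python
import numpy as np

BLOCK_SAMPLES = 65_536   # samples read per pass; size is an arbitrary choice

def minmax_decimate_file(path, dtype=np.float64):
    """Stream a raw single-channel file, keeping one (min, max) pair per block."""
    mins, maxs = [], []
    with open(path, "rb") as f:
        while True:
            block = np.fromfile(f, dtype=dtype, count=BLOCK_SAMPLES)
            if block.size == 0:
                break
            mins.append(block.min())
            maxs.append(block.max())
    # Interleave min and max so a plot of the result still shows the peaks
    return np.column_stack((mins, maxs)).ravel()

# Hypothetical file of raw doubles for one channel:
# display_data = minmax_decimate_file(r"D:\runs\vessel_run_042.bin")
```

Min/max decimation keeps the envelope of the signal, which is usually what you want when hunting transients rather than plain stride decimation.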

Tim

Link to comment
