I currently develop my application on Windows 7 using 32-bit LabVIEW 2014.  The IT department wants me to deploy to VMs going forward, and they want the VM OS to be Windows Server 2012 R2 (64-bit).

 

Does anyone use the 64 bit version of LabVIEW?  If so, what OS do you use?

 

Are there any issues with developing in the 32-bit version of LabVIEW but compiling with the 64-bit version for releases?

 

I want to stick with 32-bit for dev work because some tools, like the Desktop Execution Trace Toolkit, Unit Test Framework, and VI Analyzer, are not available for the 64-bit version.

 

My I/O is limited to NI-VISA for TCP/IP communication, PSP for talking to a cRIO over Ethernet, and .NET calls for database and XML communication.  I do have some FieldPoint hardware that I talk to via DataSocket, but that could be moved to the cRIO via PSP. From what I can tell, all of that should work with 64-bit LabVIEW.

 

The application has hundreds of parallel processes but does not collect large amounts of data, just lots of small chunks.  Would it benefit from a 64-bit environment?

 

Also, the application is broken into two parts, a client and a server, and I use VI Server to communicate between the two across the network.  If the client is a 32-bit LabVIEW application, can it use VI Server to talk to a 64-bit LabVIEW application?

 

Thanks for any tips or feedback,

 

-John

 

 

 


Back before I started using databases, I made the switch to 64-bit LabVIEW because one of our applications' memory footprint was getting way out of hand. That's the only real reason to go 64-bit in LabVIEW, if you ask me.

 

Databases have since solved that problem for us, so we can offer 32-bit versions again. It would seem backwards to revert to 32-bit-only deployments, though, so we stuck with supporting both architectures even though there's no real need for 64-bit anymore.

 

We mostly deploy to Win7-64; before that existed, it was Vista-64. We still have a substantial XP-32 base, probably larger than both Win8 architectures combined.

 

Despite 64-bit being our primary deployment target, all development is done in the 32-bit IDE on Win7-64. We only spin up the 64-bit IDE to execute builds. We build mixed-mode installers that deploy the 64-bit binaries where the target supports them, falling back on 32-bit where required.

 

Things I can think of, in no particular order:

 

Last I checked, 64-bit doesn't have full support for all the drivers and toolkits. You seem to be aware of this, but I wanted to make sure it got on the list.

 

LabVIEW 64-bit for Windows is treated as a completely different platform from LabVIEW 32-bit for Windows. It's really no different than jumping to Linux or RT except both platforms have the name "Windows" in them.

 

Different platform means recompiling. May as well start keeping compiled code separate from source code if you're not in that practice already. Alternatively just don't worry about changes that get made in the 64-bit IDE if you're only using it for building.

 

Different platform also means all of your DLL calls via Call Library Function (CLF) nodes can get tricky. Depending on what you're calling, there are a few options:

  • Worst-case scenario is you may need to wrap your CLF nodes in conditional disabled structures such that the right DLL gets called depending on platform.
  • The exception to this is Win32 calls that just magically work due to WoW64. Seriously it's magic. Don't try to think too hard about it.
  • If your DLLs are named appropriately, you may be able to get by with a single CLF node that figures out what to call when compiled.
  • Be aware of the "Pointer-sized Integer" and "Unsigned Pointer-sized Integer" arguments for CLF nodes when dealing with pointers. Do not use fixed-size arguments if your CLF is going to adapt depending on platform.
  • Use 64-bit integers when moving pointer data around on the block diagram. LabVIEW is smart enough to figure out what to do with a 64-bit number when it hits a CLF node with a USZ or SZ terminal compiled on a 32-bit platform.
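The pointer-width issue behind those last two bullets isn't specific to LabVIEW. As a rough analogy (Python/ctypes standing in for a CLF node, not actual LabVIEW code), this sketch shows why the diagram-side integer must be 64 bits wide:

```python
import ctypes

# A 32-bit process has 4-byte pointers; a 64-bit process has 8-byte
# pointers. The same fact drives LabVIEW's "(Unsigned) Pointer-sized
# Integer" CLF terminals: carry pointers on the block diagram as
# 64-bit integers so they survive on either platform, and let the
# terminal narrow them on 32-bit targets.
ptr_size = ctypes.sizeof(ctypes.c_void_p)
print(f"pointer size in this process: {ptr_size} bytes")

# Any native address fits losslessly in an unsigned 64-bit integer,
# whereas a fixed 32-bit integer would truncate it in a 64-bit process.
buf = ctypes.create_string_buffer(16)           # stand-in for DLL data
addr = ctypes.cast(buf, ctypes.c_void_p).value  # the pointer as an integer
print(f"address 0x{addr:x} fits in a U64: {addr < 2**64}")
```

The same round trip works unchanged whether the interpreter is a 32- or 64-bit build, which is exactly the property you want from a CLF wrapper VI.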

We have pretty strict rules against touching the TCP/IP stack, so I have no experience with VI Server between architectures.


> We have pretty strict rules against touching the TCP/IP stack, so I have no experience with VI Server between architectures.

 

VI Server is meant to work between LabVIEW versions and platforms transparently. There shouldn't really be anything that could break.

Well, there used to be properties, such as the platform window handles, that were 32-bit only until LabVIEW 2009. They are now deprecated but still accessible, and if you happen to use them you could run into difficulties when moving to 64-bit platforms and trying to access them, remotely or locally.


One thing you might consider is NOT using LabVIEW 64-bit. We run dozens of Windows Server 2012 R2 (64-bit) servers with 32-bit LabVIEW applications. Unless your application needs LabVIEW 64-bit due to large memory requirements, there is no need at this point to build your apps in LabVIEW 64-bit.

 

That said, if you do decide to use LabVIEW 64-bit all of mje's comments apply. 


Thanks for the info.  Sounds like there is no advantage to 64-bit beyond memory access.  I will have to see how the application performs under stress to find out whether I am RAM limited.

Not just RAM limited in the single process, but RAM limited in the whole OS.  Let's say your application takes up 2 GB of memory when running, which is a lot, no doubt.  If your system can only see 4 GB of the 8 GB installed because it is a 32-bit Windows, that means you only have 2 GB left for all the other processes.  And Windows will start paging RAM to disk whenever it feels like it; if it sees one application using half the usable RAM, it will start paging to disk and performance will suffer.  Only a real-world test will tell whether you are fine with a 32-bit OS.


Careful. You most certainly will not have a full 4 GB to use. In practice I've never got close to the limit because dynamic memory allocations begin failing long before getting there. Chances are if you're using that much memory, it's not with a bunch of scalars. I get nervous when I see memory footprints nearing 2 GB for LabVIEW.
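For concrete numbers behind that caution, here is a small sketch of the user-mode address-space ceilings a 32-bit Windows process can hit (figures from Microsoft's published memory limits; the actually usable amount is lower still because of fragmentation and mapped DLLs):

```python
# User-mode address-space ceilings for a 32-bit Windows process, in GB.
# Which one applies depends on the OS and whether the EXE is marked
# large-address-aware.
limits_gb = {
    "32-bit OS, default": 2,
    "32-bit OS, /3GB switch + large-address-aware EXE": 3,
    "64-bit OS, large-address-aware EXE": 4,
}

app_footprint_gb = 2  # the kind of footprint that starts to feel risky
for config, ceiling in limits_gb.items():
    headroom = ceiling - app_footprint_gb
    print(f"{config}: {ceiling} GB ceiling, {headroom} GB nominal headroom")
```

So a 2 GB footprint leaves zero nominal headroom in the default case, which is why allocations start failing well before the theoretical limit.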


> Careful. You most certainly will not have a full 4 GB to use. In practice I've never got close to the limit because dynamic memory allocations begin failing long before getting there. Chances are if you're using that much memory, it's not with a bunch of scalars. I get nervous when I see memory footprints nearing 2 GB for LabVIEW.

 

From what John describes in the first post, I would be surprised if his application gets even remotely close to 1 GB of memory consumption. In my experience you only get above that when Vision gets involved. That, or highly inefficient programming with large data arrays that get graphed, analysed, and whatnot in a stacked-sequence programming style.  :D


I hope to keep the memory footprint down, but since the application is a test system that simultaneously tests hundreds of DUTs in parallel (each DUT getting its own instance of a test executive), the data consumption can add up.

The current system uses ~9 MB per DUT plus 66 MB of overhead for the whole system.  I suspect the new system will exceed this a bit.  So, assuming 100 MB of overhead and 10 MB per DUT, that puts me at 5.1 GB for 500 DUTs (my target maximum).

So, it is possible that I could benefit from a larger memory space.  Need to get the new system completed and do some testing to confirm this.
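A quick back-of-envelope check of those numbers (values taken directly from the post above):

```python
# Memory budget from the post: ~100 MB fixed overhead plus ~10 MB
# per DUT instance, at the target maximum of 500 DUTs.
overhead_mb = 100
per_dut_mb = 10
max_duts = 500

total_mb = overhead_mb + per_dut_mb * max_duts
print(f"{total_mb} MB ≈ {total_mb / 1000:.1f} GB")  # 5100 MB ≈ 5.1 GB

# That is beyond anything a single 32-bit process can address, so at
# that scale it would take either 64-bit LabVIEW or splitting the
# work across multiple 32-bit processes.
```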


> I hope to keep the memory footprint down, but since the application is a test system that simultaneously tests hundreds of DUTs in parallel (each DUT getting its own instance of a test executive), the data consumption can add up.
>
> The current system uses ~9 MB per DUT plus 66 MB of overhead for the whole system.  So, assuming 100 MB of overhead and 10 MB per DUT, that puts me at 5.1 GB for 500 DUTs (my target maximum).
>
> So, it is possible that I could benefit from a larger memory space.  Need to get the new system completed and do some testing to confirm this.

 

10 MB per DUT, fully multiplied by the number of DUTs! That makes me believe you might have been setting all VIs reentrant just to be on the safe side. While that is possible, and LabVIEW can nowadays handle full reentrancy, it is not a very scalable decision. Reentrancy in parallel VI hierarchies is often unavoidable, but it should be an informed decision made per VI, not a global setting.


Reentrancy is absolutely necessary for all VIs when you instantiate multiple instances of the same code base in memory.  Otherwise, they would constantly block each other trying to access the same VIs.  These instances must run completely asynchronously and simultaneously; without 100% reentrancy this application would be impossible to design.

