
Password Security in LabVIEW


mje


A recent topic on the idea exchange (Allow "Password" Data Type for Prompt User for Input Express VI) got me thinking about security in LabVIEW. Specifically, how do you secure a user interface in LabVIEW that accepts sensitive information such as a password?

No, simply obscuring a password is not the answer. Some things come to mind that would worry me about having to handle authentication in LabVIEW.

  1. Need to make sure any temporary copies of password strings that are generated get properly "cleaned" so the data doesn't stick around in memory.
  2. Keep the password out of any control/indicator such that other tasks can't use VI Server to get a refnum and generate copies of the password, register for value change events, etc.

There are likely other issues; I'm about as far from an expert in security as you can get. I've never had to create an application that requires any form of authentication, but I've wondered about it for a while now, as I find myself using LabVIEW more and more as a platform for things where I wouldn't previously have considered it.

In principle, #1 seems easy because values are mutable. The caveat is that you need a solid handle on which buffer allocations are "real" when you look at your code, so that if you end up having to copy a password, you can be sure you're going to overwrite it. This of course assumes that any primitives you use are secure.
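Rendered as a text-language sketch (Python here, purely as an analog; the `wipe` helper and the choice of a mutable `bytearray` are my own illustration, not anything LabVIEW or the thread provides), the idea behind #1 looks like this:

```python
def wipe(buf: bytearray) -> None:
    """Overwrite a secret in place so the plaintext doesn't linger in
    this buffer. Best effort only: the runtime may already have made
    copies (reallocation, temporaries) that we can't reach from here."""
    buf[:] = b"\x00" * len(buf)

# Keep the password in a mutable bytearray rather than an immutable
# str, so overwriting it is actually possible.
password = bytearray(b"hunter2")
# ... authenticate with it here ...
wipe(password)
assert all(byte == 0 for byte in password)
```

The same caveat from the paragraph above applies: this only cleans the buffer you can see, not any copies the runtime made on its own.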

On the surface #2 seems as simple as storing the current password string as local state information which is not accessible via any control/indicator to keep VI Server snoops from just grabbing the current value.

However, as soon as you want a portable solution, one that leaves authentication to the calling VI rather than doing it at the same level where the password was entered, you're talking about returning a string to the calling VI, which by definition must pass through an indicator because you had a UI loaded.

OK, fine, maybe we can set up some fancy stuff with an anonymous synchronization primitive: I send the password dialog an empty single-element queue, the dialog fills in a value, I retrieve the value from the queue and destroy it. Now I've gotten my password text without passing it through an indicator, but I have no idea how secure these queues are; they're a black (yellow?) box. I can watch them get created in the Desktop Execution Trace Toolkit, but are their values safe? Beats me.
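The single-element-queue handoff described above can be sketched in Python (a text-language analog only, and not a claim about how LabVIEW queues store their data internally; `password_dialog` is a hypothetical stand-in for the dialog VI):

```python
import queue
import threading

def password_dialog(handoff):
    # Stand-in for the dialog VI: the secret is deposited into the
    # caller-supplied queue and never shown through any "indicator".
    handoff.put(bytearray(b"hunter2"))  # hypothetical user entry

# Caller creates an anonymous single-element queue and hands it over.
handoff = queue.Queue(maxsize=1)
dialog = threading.Thread(target=password_dialog, args=(handoff,))
dialog.start()
secret = handoff.get()   # retrieve the one value...
dialog.join()
del handoff              # ...then "destroy" the queue
# use the secret, then overwrite the buffer in place
secret[:] = b"\x00" * len(secret)
```

Whether the queue's internal storage is ever scrubbed is exactly the open question the post raises; the sketch only shows the handoff pattern.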

For that matter, is it possible to lock a VI down such that you can't snoop around in VI Server at all?

I realize this is borderline pedantic for LabVIEW, especially since we're pretty much trained from day one to just let the compiler take care of memory. For the most part that's fine, but when we're talking about security, I think the discussion is warranted. Or is something like this just not the kind of thing one should seriously consider tackling in LabVIEW?

Who knows, I'm breaking my do-not-post-after-10-PM rule, so maybe this is all just crazy talk. Anyone have opinions? Has anyone tackled this before, even outside of a LabVIEW context? What are some of the other issues one might need to consider?


What are you trying to protect, and from whom? If there's an attacker who can read arbitrary memory locations or get access through VI server, seems to me they'd just install a keylogger and get your passwords for everything, not just the LabVIEW application.


While I do think you have a point, it does seem to me that this is at least an interesting topic to discuss: how to make a secure application. I have made some tools with very simple authentication, more for simplifying displays and such than for protecting anything important, but I could certainly see the need for a more robust authentication system. I wish I had answers, but I do like the questions.


A recent topic on the idea exchange (Allow "Password" Data Type for Prompt User for Input Express VI) got me thinking about security in LabVIEW. Specifically, how do you secure a user interface in LabVIEW that accepts sensitive information such as a password?

No, simply obscuring a password is not the answer. Some things come to mind that would worry me about having to handle authentication in LabVIEW.

  1. Need to make sure any temporary copies of password strings that are generated get properly "cleaned" so the data doesn't stick around in memory.
  2. Keep the password out of any control/indicator such that other tasks can't use VI Server to get a refnum and generate copies of the password, register for value change events, etc.

1) is very difficult to guarantee, as LabVIEW does all the memory management automatically behind the scenes. In theory data is often reused, but in practice LabVIEW tends to hold onto a buffer if the wire carrying it doesn't get modified. So you should definitely overwrite any wire that contains a password as soon as the password is no longer required. That would have to be a function that performs an in-place operation, checking the current string length and overwriting the contents with all spaces or similar. The VI needs to be designed so that it really does operate in place: it should have a string input and a string output terminal, and the string should be wired straight through on the top-level diagram, with no terminals inside a case structure or the like.

All the VIs that operate on the password string, to calculate a hash for instance, should be designed the same way: a password input and output, with both controls on the top-level diagram and the string wired through any case or other structures that may be there. If you use loops, make sure to wire it through a shift register, just to be sure.
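The "wire it straight through and clean up" discipline can be sketched in text form (Python, as my own analog of the VI design described above; a real system should use a salted KDF such as PBKDF2 rather than a bare hash):

```python
import hashlib

def hash_and_wipe(pw: bytearray) -> bytes:
    """Derive the value we actually need (a digest), then immediately
    overwrite the password buffer so the cleartext doesn't outlive
    this call. hashlib accepts any bytes-like object, so the mutable
    bytearray never has to be copied into an immutable bytes/str."""
    digest = hashlib.sha256(pw).digest()
    pw[:] = b"\x00" * len(pw)
    return digest

pw = bytearray(b"hunter2")
digest = hash_and_wipe(pw)
# pw is now all zeros; only the non-reversible digest remains.
```

The point mirrors the VI layout above: the secret enters and leaves the function through one mutable buffer, which is scrubbed before the function returns anything.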

2) is impossible in pure LabVIEW, as at least the UI used to enter the password will always contain it. The Password Display option hides the password on screen but doesn't change the string content itself. So if you have access to VI Server, know the name of the password UI VI and the name of the control, and VI Server is not configured to disallow access to that VI, you can get at the clear string the moment it is entered. Installing a keylogger seems like a much simpler and more universal approach, though.

The key to solving 2) is controlling what, if anything, gets served by VI Server. A good approach would be to name all VIs that are supposed to be accessible through VI Server with a specific prefix, and then set up VI Server to only allow access to VIs with that prefix. Of course this means your application's INI file needs to be secure, but if that isn't the case you have much more serious trouble already.
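The prefix idea amounts to a simple allowlist check. A minimal sketch in Python (the `SRV_` prefix and the VI names are made up for illustration; in LabVIEW this lives in the VI Server configuration, not in your diagram code):

```python
SERVED_PREFIX = "SRV_"   # hypothetical naming convention

def is_served(vi_name: str) -> bool:
    """Only VIs that opt in via the agreed prefix are reachable from
    outside; anything else, such as the password dialog, is refused."""
    return vi_name.startswith(SERVED_PREFIX)

assert is_served("SRV_GetStatus.vi")
assert not is_served("PasswordDialog.vi")
```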

NOTE: An interesting tidbit: set a string control to password display, enter a string, select it, and press Ctrl-C, then try to paste it somewhere else. No joy, at least in more recent LabVIEW versions (checked in 2010).


In the past I have avoided all situations that would require me to handle passwords in my code. My code does not implement ISO rules for password protection, and it would take quite some time to develop those features. It would take even more time to hold the meetings certifying my code, the reviews and audits verifying that its protection works properly every time, and the verification procedure to keep records of its functionality.

Instead I would rely on Windows user management, or rather domain-controlled management. If that wasn't an option I would use TestStand user management. If the project wasn't using TestStand I would push for DSC, which adds user management. The point I'm trying to make is that personally I would not use my own code to perform security except as a deterrent, not to prevent unauthorized access that would compromise anything important. My opinion is that it is more cost-effective, and safer, to rely on third-party software that is off the shelf and has already gone through any certifications required.

Of course that being said I understand the constraints people are in and why these might not be viable options.


I agree with hooovahh. In the past, I've seen projects use home-brew solutions to password protection and user authentication, but there was no built-in protection against anyone with a memory dump. Especially because we have so little control over the compiler, I think it'd be extraordinarily tough to make any guarantees that you're keeping your memory clean and safe; in-placeness can help in some areas, but we still never completely know how the compiler is handling our data. It's pretty painless to query Windows credentials, so I support that route of authentication as well; a lot less code to work with, to boot.


While using an external solution such as Windows authentication is sometimes an option, it doesn't automatically solve most of the problems. Unless you just invoke the Windows login dialog itself, you still end up with the password in cleartext in LabVIEW anyway, so most of the concerns still apply.

And (ab)using Windows authentication for your own application is not as trivial as it may seem. There are differences between local logins and domain logins. While you can perform domain logins fairly easily using directory services, this proves very difficult on computers that use a local authentication scheme, or that don't have cached domain login credentials and have to fall back to local authentication.

I have tried to get there with .NET, but the only way that works reliably for both domain and local authentication is a VERY involved Windows API interface that I came across by accident somewhere. And it requires an external DLL, as the Windows APIs involved are really complicated and need to adapt dynamically to whatever APIs are available, depending on the installed Windows version and feature set.

And then you are still screwed if you need to go non-Windows.


What are you trying to protect, and from whom? If there's an attacker who can read arbitrary memory locations or get access through VI server, seems to me they'd just install a keylogger and get your passwords for everything, not just the LabVIEW application.

In the past I have avoided all situations that would require me to handle passwords in my code. My code does not implement ISO rules for password protection, and it would take quite some time to develop those features. It would take even more time to hold the meetings certifying my code, the reviews and audits verifying that its protection works properly every time, and the verification procedure to keep records of its functionality.

The motivation for this thread was purely academic. Hooovahh covered most of the things I was thinking about. What happens if, by landing in a regulatory environment, I suddenly need an ISO-certified way of dealing with authentication? I'm not convinced it's even possible to implement that entirely in LabVIEW, though to be honest, I've not even read the relevant standards.

It's general good practice that makes me curious about this. What if I have an application that runs third-party code, maybe plugins? I want to be sure that when it comes time to run that unknown code, I have a clean memory footprint, such that a malicious bit of code can't scrape old data from memory when run in the context of my application. Or maybe the best idea is to run that code from an entirely different context, a sandbox. This could go so many different ways, and in the end you still need to worry: once that code executes, how can you be sure a new keylogger hasn't been spun up? If my plugins are written in native LabVIEW, there's probably nothing I can do about it, but if I have some form of scripted environment where I provide an API to work with, maybe this concern can be managed. I don't have answers to questions like this, which is why I really wanted to start this discussion.

I'm not trying to argue that someone like me should roll their own solution; I'm way too naive about these matters to do so. What I'm really after is whether it's even possible to create a library that properly manages authentication purely in a LabVIEW environment. If so, what are some of the challenges and considerations that LabVIEW brings up?

This is just a topic that I keep coming back to every other year or two, and I've never come to a satisfactory end of the discussion other than "I doubt it's possible in pure LabVIEW." I thought I'd see if anyone else has ideas. I believe that any authentication would have to be handled by external code, so that my LabVIEW code never even gets access to the password. Really, all my code needs to know is who the user is and their granted permission level, if any.


If someone has unbridled access to the machine, then there is absolutely nothing you can do to protect against discovery by a determined effort (it is just a matter of time). It doesn't matter what the programming environment is, since I could quite easily drop a hacked Windows DLL and then all bets are off. Zeroing memory is a weak (but not inconsequential) way to protect passwords, since I only need to fire up SoftICE and I can see where the password is in memory before you clear it. The hard part is finding it in the first place amongst the thousands of lines of code. As you can probably guess, a dialogue box is an easy way to find where to start, and from there you follow the code to the string message sent to the OS. So it doesn't matter what code you write; that is the crack where I can place the crowbar ;).

The main thing to bear in mind, however, is that a password is a means to an end. A password in itself is of no use; it is the info it guards that is of interest. You could have the most secure program in the world, but it won't be much good if the user writes down the password and puts it on a post-it attached to the monitor. The only purpose of a password dialogue is to prevent someone looking over your shoulder and reading it; no more. If it is a worry, then use a key and lock the PC in a room with no network.

The issue is more about prevention and detection of malicious programs actually getting onto the machine in the first place without your knowledge (reducing the attack vectors) and, if they do get on there, preventing the info they glean from exiting the machine in a meaningful form (like your private PGP keys) or, at the very least, making it difficult to extract meaningful info if it does get out (like your customer database). Isolation from the interweb ( :) ) goes a long way to minimising this, as does not having USB ports (or those ancient things called flippies or something). If a keylogger does get your passwords, then it's not a lot of use if the file that stores them can't be sent to the intended recipient. This is why more emphasis is generally placed on encrypting data: if you assume the passwords are unavailable, there is still a lot you can do to protect private data.


Are the dongles like http://www.smart-lock.com/ much safer?

A few years ago we had the best protection available: spaghetti code... the worst kind!

Our SW was so unreadable that no one could have used it. Seriously, we know of a big company in China that stole our source code, tried to copy its logic, and simply gave up.

However, now that I'm upgrading the code to LVOOP, I'm afraid it won't be that hard anymore. This is the only bad thing I have to say about OO :D


Software is like a fart. Yours is ok, but everyone else's stinks. LVOOP just ensures no-one can tell who farted :)


While using an external solution such as Windows authentication is sometimes an option, it doesn't automatically solve most of the problems. Unless you just invoke the Windows login dialog itself, you still end up with the password in cleartext in LabVIEW anyway, so most of the concerns still apply.

And (ab)using Windows authentication for your own application is not as trivial as it may seem. There are differences between local logins and domain logins. While you can perform domain logins fairly easily using directory services, this proves very difficult on computers that use a local authentication scheme, or that don't have cached domain login credentials and have to fall back to local authentication.

I have tried to get there with .NET, but the only way that works reliably for both domain and local authentication is a VERY involved Windows API interface that I came across by accident somewhere. And it requires an external DLL, as the Windows APIs involved are really complicated and need to adapt dynamically to whatever APIs are available, depending on the installed Windows version and feature set.

My solution to these issues would not be ideal at all, but it would go something like this. You log in to the PC (Windows login), either domain-controlled or local. Then you run the LabVIEW software, or have it in Startup. On start, the application gets the user that is logged in. By looking at the output of "cmd /c set" you can determine whether the PC is on a domain. If it is, use the Windows command-line tool "net" to get the domain groups the user is a member of; if not, use "net" to get the local groups the user is a member of. Using this information the software can enable or disable parts of the UI to restrict the user.
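The "cmd /c set" check boils down to string parsing that any language could do. A Python sketch (my own reading of the post: a domain login reports a USERDOMAIN different from COMPUTERNAME, while a local login reports the machine name for both; this is a heuristic, not verified against every Windows configuration):

```python
def on_domain(set_output: str) -> bool:
    """Parse `cmd /c set` output into key/value pairs and decide
    domain membership: USERDOMAIN differing from COMPUTERNAME
    indicates a domain login. Heuristic only."""
    env = {}
    for line in set_output.splitlines():
        if "=" in line:
            key, value = line.split("=", 1)
            env[key.strip().upper()] = value.strip()
    return env.get("USERDOMAIN", "") not in ("", env.get("COMPUTERNAME", ""))

domain_login = "COMPUTERNAME=TESTPC\nUSERDOMAIN=CORP\nUSERNAME=alice"
local_login = "COMPUTERNAME=TESTPC\nUSERDOMAIN=TESTPC\nUSERNAME=alice"
assert on_domain(domain_login)
assert not on_domain(local_login)
```

From there, `net user <name> /domain` (or plain `net user <name>` for a local account) lists the group memberships, which the application can parse the same way.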

If the user wants to log in as someone else, I would use the command line "shutdown /l" to log them off, at which point they're presented with the Windows login. This way LabVIEW never has the password, and the user never types it in while the PC is logged in (which also deters software keyloggers).

If the user wants to perform user management, adding users or modifying privileges, you can call a Windows program: local users and groups can be edited through Lusrmgr.msc.

None of this relies on .NET or ActiveX, just Windows command-line programs that have been standard for many versions, though of course Microsoft could change them for some reason.

Non-Windows...yes you are screwed.

