
LabVIEW Security Vulnerability



I wonder who would go through the trouble of embedding secret code into a VI when you can just put code on the block diagram and hide it behind a structure.

Quote

If I understand correctly, only VIs that I wrote are safe to use

Well, that's kind of always been the case, regardless of this vulnerability. It's code; you should only run code from trusted sources or after inspection.

 

It's also funny that the vulnerability page shows LabVIEW NXG, which gets rid of the VI format entirely.

Edited by smithd

A co-worker of mine had a hobby of embedding stuff to make LV behave in unexpected ways.

It is not that I'm a scared noob.

The only thing that bothered me in the article was the disregard they got from NI, and the question of whether NI takes security into account at all.


Yeah, this is old news. NI's response is here:

http://www.ni.com/product-documentation/54099/en/

Also a few more outlets discussing it:

https://www.helpnetsecurity.com/2017/08/30/labview-code-execution-flaw/

https://blog.0patch.com/2017/09/0patching-rsrc-arbitrary-null-write.html?m=1

I thought there was a LAVA thread on this last year too but I can't seem to find it.


Well, LabVIEW is a software development platform, just like Visual C and many other development environments. Should Visual C disallow the creation of code because you can write viruses with it?

I can send you a Visual C project and tell you to compile it, and if you are not careful you have just compiled and started a virus. The project may even look totally harmless but contain precompiled library files that disguise themselves as some DLL import library, and/or even a precompiled DLL that you absolutely need to talk to my super duper IoT device that I give away for free :D. Will you use diligent care and not run that project because you suspect something is funky with it?

The only real problem a LabVIEW VI has is that you can configure it to autostart on load. This is a feature from the times when you did not have an application builder to create an executable (and in 1990 security awareness was much lower; heck, the whole internet at that time was basically open, with email servers blindly trusting that the others wouldn't abuse them for malicious or even commercial reasons). If you wanted a noob to be able to use a program you wrote, you could tell him to just click that VI file: LabVIEW would start, the VI would load, and everything was ready to run, without you having to explain that he also had to push that little arrow in the upper-right corner just under the title bar.

The solution to this is NOT to click on a VI file whose contents you do not know, but instead to open an empty VI and drag the suspect VI onto its empty diagram. That way you can open the VI's diagram and inspect it without causing it to autostart.

Removing the autostart feature in later LabVIEW versions would probably have been a good idea, but that was apparently rejected as a backwards incompatibility.

Also, the article on The Hacker News site dates from August 29, 2017 (and all the links I can find on the net about CVE-2017-2779 are dated between August 29 and September 13). They may have gotten a somewhat ignorant response from a person at NI, or they might not have! I long ago stopped believing articles on the net blindly. This site has a certain interest in boosting its own activities, and that may include putting others in a somewhat more critical light than is really warranted.

Here is a more "official" report about the security advisory (note that the link in The Hacker News report has already gone stale), and it mentions, on September 13, 2017, the official response from NI, although the NI document has a publish date of September 22, 2017, probably due to some later revision. If I remember correctly, the original response contained an acknowledgement and stated that NI was looking into this and would update the document once they determined the best course of action. So on September 22, 2017 they probably updated that document to state the availability of the patches.

Note that the Cisco Talos blog post about this vulnerability, which is cited as the source of the article on The Hacker News (but without any link to the actual post), does contain the same claim that NI does not care, but it carries an updated notice from September 17, 2017 that NI has made an official response available. The security report from Cisco Talos itself does not mention anything about NI not caring! It does, however, state that the vulnerability was apparently disclosed to NI on January 25, 2017, so yes, there seems to have been a problem with NI not giving this due diligence until it was publicly disclosed. The original reaction was apparently lacking, but a reaction time of less than one month after the sh*t hit the fan, to produce a patch for a product like LabVIEW, is actually pretty quick. And if someone at The Hacker News really cared about this type of report rather than just blasting others, they would have found the time to edit that page to mention the availability of the patches, contrary to their statement that NI doesn't care!

Edited by rolfk

Thanks for the links.

It is reassuring to see that NI took action and patched it.

What I really wanted to see is whether they care about security.

Personally I'm less afraid of running malicious code. 

The thing that scares me is the LV environment itself being vulnerable and acting as a door for a hacker to access the computer.

My code is used in the manufacturing lines of big companies, which spend big money on security in the IT department.

However, if the management computers and servers are compromised the damage is big, but not as big as a manufacturing line stopping because of compromised automation.

The damage from a hack of a power plant or a Mars robot running LV is much greater than even the company's ERP having to be restored from backup.

If you understand that, then LV should be made much more secure than regular programs.


NI released a number of patches for different versions. Were only those versions affected? Or are those just the versions they applied the patch to, while the issue still exists in previous versions?

The problem I have with these sorts of threads (not necessarily here on LavaG, but in general) is that no one ever defines their threat model.

Who is it you are defending against? What are their capabilities? What are you prepared to give up in the name of security? Do you want Norton Security Suite running on your 500kB embedded platform?

Is it the big bad nation state Stuxnetting your production line? Is it Joe Bloggs in the next cubicle experimenting with a script he found on YouTube?

My baseline threat model is the skill level of a mediocre LabVIEW programmer who can use Wireshark and has too much free time and a TCP connection to the device. Therefore my main worries are more about developers seemingly oblivious to even basic OPSEC than about some alphabetty organisations with thousands of programmers.

Does your websocket or web application have any authentication? Does it even use TLS? What happens if I send a 10GB file via websockets to your server? ;) Are you still using VI Server across your network (which is unencrypted)? Are you sending raw SQL back and forth between databases, unencrypted and with no authentication? These are the sorts of things I see in every company I have ever consulted with, and they would only require 10 minutes with Wireshark to bring down complete production lines and smash robots into conveyor belts.

Until that level of basic security is addressed, it is just obsessive fearmongering to worry about specially crafted VIs that may cause a crash. Hell, LabVIEW crashes all the time.
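To make those first three questions concrete, here is a minimal sketch in Python (textual code is easier to post than a VI snippet) of a homegrown TCP service with the three missing pieces added: TLS, a token check for authentication, and a hard size cap so a 10GB payload is refused up front. The certificate paths, token, and limit are placeholder assumptions, not anything from a real system.

```python
import socket
import ssl

HOST, PORT = "0.0.0.0", 9000
MAX_MESSAGE = 64 * 1024          # hard cap: refuse oversized payloads up front
SHARED_TOKEN = b"replace-me"     # placeholder; load from a real secret store

# "Does it even use TLS?" -- wrap the listener; cert/key paths are placeholders.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.load_cert_chain("server.crt", "server.key")

with socket.create_server((HOST, PORT)) as plain:
    with ctx.wrap_socket(plain, server_side=True) as srv:
        conn, addr = srv.accept()
        with conn:
            # "Any authentication?" -- the first bytes must be the shared token.
            if conn.recv(64).strip() != SHARED_TOKEN:
                conn.close()
            else:
                # "What if I send a 10GB file?" -- the declared length is
                # checked before a single payload byte is read.
                declared = int.from_bytes(conn.recv(4), "big")
                if declared > MAX_MESSAGE:
                    conn.close()
                else:
                    payload = conn.recv(declared)  # sketch; loop for full reads
                    # ... hand payload to the application here ...
```

None of this is exotic; the point is that a bare TCP listener, like an unencrypted VI Server connection, has none of these checks.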

Until developers take ownership of their application security - and by that I mean really think about it rather than just using HTTPS because a webserver requires it - I see no real reason why NI needs to fret about academic attacks, and kudos to them that they responded in the manner they did.

Edited by ShaunR
13 hours ago, ShaunR said:

NI released a number of patches for different versions. Were only those versions affected? Or are those just the versions they applied the patch to, while the issue still exists in previous versions?

It is in the basic RSRC format structure of VIs, and the code that loads that into LabVIEW has basically not changed much since about LabVIEW 2.5. So it is safe to assume that this vulnerability has existed since the beginning of multiplatform LabVIEW. I doubt that it was there in this form in the Mac-only versions 2.2.x and earlier, as VIs were then real Macintosh resource files and LabVIEW used the Macintosh resource manager to load them. (Note that the Mac OS 7 of those times wouldn't even survive a few seconds of being connected to the net! Its security was mediocre by today's standards, and its only protection from modern internet attacks was its lack of standard internet connectivity out of the box; you had to buy an expensive coax network plug-in card to connect it to TCP/IP and install a rather buggy network layer that was redesigned more than once and eventually named Open Transport, because it connected the Mac to a non-Apple-only standard network :D).

NI puts the latest three versions before the current one into maintenance mode and only releases security patches for them, as well as making sure that driver installations like NI-DAQmx etc. support those four versions. Anything older gets unsupported status and is normally not even evaluated to see whether a bug exists in it. If a customer reports a bug in an older version and it gets assigned a CAR, then that older version may be mentioned, but retrospective investigation into anything older than the last four versions is officially not done by NI.

This threat is assigned, at the nist.gov link that I mentioned, Common Weakness Enumeration 787 status, which is an out-of-bounds write, or buffer overflow write error. As such it is pretty easy to use as a DOS attack and potentially also as a privilege escalation through code execution, but the latter is tricky to exploit, and LabVIEW makes it even trickier due to its highly dynamic memory management scheme.
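For illustration, this is roughly what the CWE-787 class looks like in a file parser. The tag/length layout below is hypothetical, not the actual RSRC structure; the `if` is the bounds check whose absence lets a crafted length field turn into an out-of-bounds write in a C-style loader.

```python
import struct

def read_block(data: bytes, offset: int) -> bytes:
    # Hypothetical layout: 4-byte tag followed by a 4-byte big-endian length.
    tag, declared_len = struct.unpack_from(">4sI", data, offset)
    payload_start = offset + 8
    remaining = len(data) - payload_start
    # Without this check, a crafted file can declare a length far beyond the
    # destination buffer, and a C-style memcpy would write past its end.
    if declared_len > remaining:
        raise ValueError(f"block {tag!r} declares {declared_len} bytes, "
                         f"but only {remaining} remain in the file")
    return data[payload_start:payload_start + declared_len]
```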

Edited by rolfk
3 hours ago, rolfk said:

As such it is pretty easy to use as a DOS attack and potentially also as a privilege escalation through code execution

I'm not so sure. For it to affect LabVIEW you have to open it. If that results in a crash, then you have to restart LabVIEW (which will be fine) and then open the VI again for it to crash again. The effect doesn't seem to have LabVIEW-wide permanence, so I'm not sure how you would DOS LabVIEW; rather, a developer would just curse the VI and move on unaffected. I suppose a plug-in architecture may be problematic, but as soon as a plug-in keeps crashing, I expect it would be removed.

The demonstrations also don't show a privilege escalation; rather, some C code calling memcpy. Apart from the glaringly obvious question of how you execute C code within LabVIEW without DLLs etc., the exfiltration would be somewhat problematic. There are several levels of additional complexity required to make that a useful attack. I'm not saying it can't be done, but for a random VI from the net or your email client, there doesn't seem to be much consequence from opening it that we don't experience daily anyway: LabVIEW crashes or out-of-memory errors.

That's why I remain unconcerned about this particular threat, for now, as it seems more of an academic exploit. There are far more egregious methods of DOSing LabVIEW applications, and if it were shown that arbitrary code in the VI could be executed (therefore making exfiltration easier), then I would probably be more concerned.


An application crash is IMHO a DOS. Never mind that it obviously happens due to an actual user action (the opening of the file) rather than stealthily in the background, and I find it irrelevant in this context that LabVIEW can sometimes crash out of the blue without anyone trying to DOS it. :D (That explicitly excludes the crashes that I sometimes cause when testing external code libraries :cool:)

Code execution is another, though as I wrote not very likely, option. Buffer overflow write errors can theoretically be used to execute code that you loaded, through careful manipulation of the file, into a known or easily predictable location in memory. But that is kind of tricky to exploit, and LabVIEW's nature makes it even trickier to get any predictable memory location. Even if you manage to do that, it will almost certainly only work on a specific LabVIEW version.

Edited by rolfk

Yes, the auto-running VIs thing can be annoying, especially when people help out on the forums and are opening code from unknown sources. My LabVIEW Tray Launcher takes over the .vi file extension, and if a VI is set to run on open, it has an option to ask whether you really want to run it or just open it. An installer is also linked to on that page.

2 hours ago, 0_o said:

You guys... All I wanted was to signal NI to take LV's security more seriously.

Instead you started a tutorial here explaining what the real vulnerabilities are and how to exploit them.

T h a n k s  :thumbup1: 

You're welcome. :) Maybe it's time to think beyond the ostrich defence against adversaries?

On 7/9/2018 at 2:02 AM, ShaunR said:

Does your websocket or web application have any authentication? Does it even use TLS? What happens if I send a 10GB file via websockets to your server? ;) Are you still using VI Server across your network (which is unencrypted)? Are you sending raw SQL back and forth between databases, unencrypted and with no authentication? These are the sorts of things I see in every company I have ever consulted with, and they would only require 10 minutes with Wireshark to bring down complete production lines and smash robots into conveyor belts.

I take good care that whatever is in my control is safe to a reasonable degree:

The client app uses a .net secure comm to a server app that checks the request and operates on an MS SQL DB.

However, I like standardization and code reuse, and I hate writing from scratch tools that are already available and approved by the community.

Till now I had to worry mainly about license types.

Should I be afraid of such code more than I already am?

Should I move more parts of my code to a language that treats security as a priority? Does NI take it seriously?

Those were the questions behind my post, since I'm not writing a small laboratory toy; I'm automating production lines.

2 hours ago, 0_o said:

I take good care that whatever is in my control is safe to a reasonable degree:

If you are writing the software, then it is all within your control. What do you consider "reasonable"?

2 hours ago, 0_o said:

The client app uses a .net secure comm to a server app that checks the request and operates on an MS SQL DB.

What is a .net secure comm?

2 hours ago, 0_o said:

Till now I had to worry mainly about license types.

Should I be afraid of such code more than I already am?

What has licencing to do with security, and what are you afraid of, and of whom? (Refer to my previous statement about threat models.)

2 hours ago, 0_o said:

Should I move more parts of my code to a language that treats security as a priority?

Security is language agnostic.

2 hours ago, 0_o said:

Does NI take it seriously?

Kind of. For the most part they think long and hard about certain things, but my biggest bugbear is their deployment of OpenSSL libraries, which is sporadic and doesn't seem to give much priority to staying up to date. I have, on more than one occasion, thought about deploying my own instead of using theirs for this reason.

Edited by ShaunR

The .net WCF is encrypted. You can't sniff it with Wireshark; you can only read it by checking the WCF log on the server directly.

License has nothing to do with IT security. I meant that I felt free to use any BSD/MIT tool from VIPM, yet now I'm starting to think that I should go over that code, and even then it is risky.

Basically I'm afraid that a tool I give to a customer will end up as the door for a hack, and I'll get sued not only for the app I wrote but for a production line stoppage.

Security is not language agnostic, as you can see from the self-executing VIs you talked about and their decision to use OpenSSL.

22 hours ago, 0_o said:

The .net WCF is encrypted. You can't sniff it with Wireshark; you can only read it by checking the WCF log on the server directly.

If you are using "WSHttpBinding" then that is SSL, IIRC. You can sniff the network packets with Wireshark if you have the private key(s).
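The private key is not even the only route. Any TLS client can be told to leak its per-session secrets via the standard SSLKEYLOGFILE mechanism, which Wireshark reads directly. A minimal sketch (Python, with placeholder host and path):

```python
import socket
import ssl

ctx = ssl.create_default_context()
ctx.keylog_filename = "/tmp/tls_keys.log"  # per-session secrets land here

with socket.create_connection(("example.com", 443)) as raw:
    with ctx.wrap_socket(raw, server_hostname="example.com") as tls:
        tls.sendall(b"HEAD / HTTP/1.1\r\nHost: example.com\r\n\r\n")
        tls.recv(1024)

# In Wireshark: Preferences -> Protocols -> TLS -> "(Pre)-Master-Secret log
# filename" -> /tmp/tls_keys.log, and the captured stream decrypts.
```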

22 hours ago, 0_o said:

License has nothing to do with IT security. I meant that I felt free to use any BSD/MIT tool from VIPM, yet now I'm starting to think that I should go over that code, and even then it is risky.

You should always go over others' code regardless of licence (if you have the source). At the very least, one might learn something.

22 hours ago, 0_o said:

Basically I'm afraid that a tool I give to a customer will end up as the door for a hack, and I'll get sued not only for the app I wrote but for a production line stoppage.

That's what Limited Liability Insurance is for, but there are some things you can do to try to mitigate these possibilities (no USB ports, no network access, basic OPSEC, etc.).

22 hours ago, 0_o said:

Security is not language agnostic, as you can see from the self-executing VIs you talked about and their decision to use OpenSSL.

OpenSSL is written in C. You are using .NET (C#?). You could write an SSL protocol provider in LabVIEW if you wanted to. Implementations may have bugs, but the protocols are language agnostic, as are mitigations such as keeping the server room locked, not connecting an application to the internet/network, and not telling people your passwords.
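As a small illustration of that point, the handshake and certificate verification below are the same TLS regardless of what the server end is written in; only the host name is a placeholder:

```python
import socket
import ssl

ctx = ssl.create_default_context()            # certificate checks on by default
ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse legacy protocol versions

with socket.create_connection(("example.com", 443)) as raw:
    # A mismatched, expired, or self-signed certificate raises
    # ssl.SSLCertVerificationError here, whatever language the server is in.
    with ctx.wrap_socket(raw, server_hostname="example.com") as tls:
        print(tls.version())
```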

Security is a huge specialist domain. You will hear about things like "defence in depth", which means not relying on a single mitigation or security feature but having many layers of protection, and "reducing the attack surface", which means making the methods of ingress as small as possible whilst still maintaining functionality. You are thinking about it, which, IMO, already puts you ahead of the average programmer (especially in IoT :frusty:).

There was a very good video at one of the NI meetings at CERN by their IT manager (2014ClaEu_Day3_03_Control System Security). Well worth a watch if you can find it. Good communication with an IT department is paramount, as they are your first and best line of defence and have the expertise. Once an adversary is inside the network, your options are more limited and the adversary has demonstrated a certain skill level.

Oh yes. One final point. DON'T CONNECT ANYTHING TO THE INTERNET!

Edited by ShaunR

1. You can sniff with Wireshark, but you'll understand nothing since it is encrypted, as I said.

2. Not only do I not have time to go over the code of such tools, most of them are password protected anyway. I guess the question here is: what does the community do when it says a tool was verified? What does NI do before it puts a tool or driver in the shop? If it only means they installed it and ran an antivirus scan on the folder, then I can't rely on such verification anymore.

3. I made sure Limited Liability appears in all our contracts. It was missing from some. Thanks!

4. If an ecosystem like LV decides to use a problematic toolbox, and I'm taking the risk since I'm using that ecosystem, then the ecosystem itself is problematic. If you compare that ecosystem with other languages, then the risk is not language agnostic.

5. If I sold a stand-alone app that could be an island disconnected from the world, then this might be nice: do the setup, make an image, and tell the customer to call only if the image fails or they want a new feature. However, in my case the app controls a factory and talks with a central server that talks with a DB server. It is monolithic and thus can't be turned into Lambdas at AWS, and the servers are out of my control.

6. How do I get this: 2014ClaEu_Day3_03_Control System Security ?????

7. What do you think about https://aws.amazon.com/iot/?nc=sn&loc=0

Edited by 0_o
11 hours ago, 0_o said:

1. You can sniff with Wireshark, but you'll understand nothing since it is encrypted, as I said.

Nope! If you have the corresponding certificate, Wireshark has an option to decrypt the SSL-encrypted data stream directly into fully readable data (which might be binary or not). Obviously the client should only have the certificate containing the public key, which won't make direct decryption possible, and only the server should know the certificate with the private key, but that is already an attack surface.

Edited by rolfk

This is the case. Only the server has the private key.

How can you reduce the attack surface of the private key being stolen from the server? That is a task for IT, not for an app developer.

23 hours ago, 0_o said:

How do I get this: 2014ClaEu_Day3_03_Control System Security ?????

?

On 7/17/2018 at 10:44 AM, 0_o said:

This is the case. Only the server has the private key.

So you are not using client certificates?

On 7/16/2018 at 11:11 AM, 0_o said:

Not only do I not have time to go over the code of such tools, most of them are password protected anyway. I guess the question here is: what does the community do when it says a tool was verified? What does NI do before it puts a tool or driver in the shop? If it only means they installed it and ran an antivirus scan on the folder, then I can't rely on such verification anymore.

Verification does not mean "secure". On LavaG it means someone has inspected it, it works without errors, and it seems to work as advertised. Similarly, the NI process is basically a few initial tests to make sure it installs and doesn't impact existing LabVIEW features. It's probably virus-scanned several times whilst on the NI network.

On 7/16/2018 at 11:11 AM, 0_o said:

If an ecosystem like LV decides to use a problematic toolbox, and I'm taking the risk since I'm using that ecosystem, then the ecosystem itself is problematic. If you compare that ecosystem with other languages, then the risk is not language agnostic.

Yes, you are [taking the risk]. That has nothing to do with the "ecosystem". It is the trade-off between your own effort and leveraging other people's effort. It applies to all open source projects. The premise is that security flaws will be identified by having more eyes on the code, not that the code is unequivocally "secure". If you want someone else to take the responsibility, then use a company that specialises in security.

On 7/16/2018 at 11:11 AM, 0_o said:

If I sold a stand-alone app that could be an island disconnected from the world, then this might be nice: do the setup, make an image, and tell the customer to call only if the image fails or they want a new feature. However, in my case the app controls a factory and talks with a central server that talks with a DB server. It is monolithic and thus can't be turned into Lambdas at AWS, and the servers are out of my control.

That is the converse of "monolithic". Many services interacting to produce a cohesive system is what Systems Engineers are for. In that domain there is the concept of "compartmentalise and contain": attempting to design a system so that problems (be they security-related or otherwise) are limited to single modules/services and unintended consequences cannot propagate to other services. This is a kind of "design for damage limitation", and you can do your part for your own contribution.

On 7/16/2018 at 11:11 AM, 0_o said:

What do you think about https://aws.amazon.com/iot/?nc=sn&loc=0

You can't outsource security, and keeping your private keys on someone else's server is a trend that I resist. I stated previously that it was a bad idea to connect anything to the internet, so connecting everything to someone else's servers, which you do not have physical control over, whose location you don't know, and which can be cut off on a whim, is not a viable solution for me.

 

Edited by ShaunR
