
Security? Who cares?


ShaunR


I posted on a separate thread because I thought the two topics differed enough but I'm happy to continue here.

 

I totally missed the SSH option in conjunction with the VPN (for remote access). That was the missing link that pushed us to think of a more creative workaround. Especially with the Linux boxes, I think that SSH and proper configuration of the firewall to only accept local connections should be good enough for us. The only caveat left in the chain is integrating the SSH client within the LabVIEW application. Fortunately, we would only need to support a single platform (Windows). Has anyone done it so that the username/password can be requested from within the application and the tunnel established via a DLL or command line utility? There appears to be a port of OpenSSH for Windows but it seems to run on Cygwin. PuTTY's command line utility (plink) could also be a good contender, since tunnel creation appears to be built into it.

 

Using those standard tools would likely require a lot less development while offering proper encryption.

 

Funny you should mention that. I've just been playing with forward and reverse SSH proxies for the Encryption Compendium for LabVIEW now that it has SSH client capabilities (no cygwin or other nasty Linux emulators :P ).

 

[Attached screenshot: post-15232-0-63713600-1448319676.png]

 

If you have SSH (or TLS for that matter) you don't need a VPN. If you are thinking about requiring a VPN for Remote Desktop, there are flavours of VNC that tunnel over SSH/TLS or use encryption (disclaimer: I haven't used them in anger but know they exist). Once you go for direct TCP/IP through TLS or SSH there isn't much of a reason to use a VPN, apart from IP address hiding, since end-to-end encryption doesn't require control over the entire path of the infrastructure to maintain secrecy.

Edited by ShaunR
Link to comment

Is the screen shot a connection to a RIO unit? Have you been able to test down to port forwarding to the LabVIEW app running on the RIO? I'll try to find some time this week to do my own tests but your toolkit is interesting.

 

For the VPN, it would only be to connect from our office to the client's LAN for example. For the dedicated Host PC, I understand it would only require SSH.

Link to comment

Is the screen shot a connection to a RIO unit? Have you been able to test down to port forwarding to the LabVIEW app running on the RIO? I'll try to find some time this week to do my own tests but your toolkit is interesting.

 

For the VPN, it would only be to connect from our office to the client's LAN for example. For the dedicated Host PC, I understand it would only require SSH.

 

Answered in a PM.

Link to comment

So a quick test definitely confirms that the MyRIO has SSH tunnel support enabled and a connection can be established from Windows using "plink.exe" (Putty) with System Exec. The command to System Exec:

 

    d:\Installed\Putty\plink.exe -ssh admin@192.168.0.66 -pw Passwrd -C -T -L 127.0.0.1:9988:192.168.0.66:8899

(9988 is the port on the Host/PC, 8899 is the port I open in LabVIEW on the target, 192.168.0.66 is the MyRIO address.)

 

Neat, thanks for the heads up Shaun. This is a lot faster and easier than our original idea and it is my answer to your original question about my take on security for remote systems.

 

Sidetracking a bit here, but once we've launched a System Exec in the background ("wait until completion?" set to False), is there any way to get that process back and parse the information (output to stdout) or to stop it?
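For what it's worth, the general pattern being asked about (keep a handle to the background process so its output can be read and it can be stopped later) looks roughly like this outside LabVIEW. This is a minimal Python sketch, not System Exec; the plink path and arguments mirror the command above and are purely illustrative.

```python
import subprocess

# Launch the tunnel in the background and keep the process handle so
# stdout can be parsed and the tunnel can be stopped.
proc = subprocess.Popen(
    [r"d:\Installed\Putty\plink.exe", "-ssh", "admin@192.168.0.66",
     "-pw", "Passwrd", "-C", "-T", "-L", "127.0.0.1:9988:192.168.0.66:8899"],
    stdout=subprocess.PIPE, stderr=subprocess.STDOUT, text=True)

# ... the application now talks to 127.0.0.1:9988 through the tunnel ...

proc.terminate()                       # stop the tunnel when done
output, _ = proc.communicate(timeout=10)
print(output)                          # whatever plink printed to stdout/stderr
```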

Link to comment

So a quick test definitely confirms that the MyRIO has SSH tunnel support enabled and a connection can be established from Windows using "plink.exe" (Putty) with System Exec. The command to System Exec:

 

    d:\Installed\Putty\plink.exe -ssh admin@192.168.0.66 -pw Passwrd -C -T -L 127.0.0.1:9988:192.168.0.66:8899

(9988 is the port on the Host/PC, 8899 is the port I open in LabVIEW on the target, 192.168.0.66 is the MyRIO address.)

 

Neat, thanks for the heads up Shaun. This is a lot faster and easier than our original idea and it is my answer to your original question about my take on security for remote systems.

 

Sidetracking a bit here, but once we've launched a System Exec in the background ("wait until completion?" set to False), is there any way to get that process back and parse the information (output to stdout) or to stop it?

There is a whole thread on LabVIEW and SSH, and one of the posts has your solution. Cat is the expert on SSH now :D

 

I will however reiterate that using a username and password, although exchanged securely, is a lot less desirable than private/public keys. The latter makes it impossible to brute force.

 

There is only one real weakness with SSH - verification. When you first exchange verification hashes you have to trust that the hash you receive from the server is actually from the server you think it is. You will probably have noticed that plink asked you about that when you first connected. You probably said "yeah, I trust it", but it is important to check the signature to make sure someone didn't intercept it and send you theirs instead. Once you hit OK, you won't be asked again until you clear the trusted servers' cache, so that first time is hugely important for your secrecy.
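To illustrate both points (key pairs instead of passwords, and deliberate host key verification), here is a minimal sketch assuming the third-party paramiko package; the host, username and key path are placeholders, not anything from this thread.

```python
import binascii
import os
import paramiko

client = paramiko.SSHClient()
client.load_system_host_keys()                               # the trusted servers' cache (known_hosts)
client.set_missing_host_key_policy(paramiko.RejectPolicy())  # refuse unknown host keys instead of silently trusting them
client.connect("192.168.0.66", username="admin",
               key_filename=os.path.expanduser("~/.ssh/id_ed25519"))  # key pair, no password

server_key = client.get_transport().get_remote_server_key()
print("server fingerprint:", binascii.hexlify(server_key.get_fingerprint()).decode())
client.close()
```

The RejectPolicy forces the first-connection fingerprint check to be done out of band (compare against a fingerprint obtained from the target itself) rather than by clicking "yes" in a prompt.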

Edited by ShaunR
  • Like 2
Link to comment

Whose TCP/IP read functions use the tried and tested "read length, then data" pattern like the following? (ShaunR puts his hand up  :oops:  )

 

[Attached screenshot: post-15232-0-09754900-1448633857.png]

 

What happens if I connect to your LabVIEW program like this?

 

[Attached screenshot: post-15232-0-17848800-1448636926.png]
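Presumably the screenshot shows something equivalent to the following sketch (address, port and the bogus length are illustrative): a client that sends a huge length prefix and then little or nothing else, which a naive "read length, then read that many bytes" implementation will try to honour.

```python
import socket
import struct

# A naive server reads a 4-byte length header and then blindly calls
# read(length).  Claiming ~2 GB with no payload forces such a server to
# allocate for, and wait on, data that never arrives.
TARGET = ("192.168.0.66", 8899)                 # hypothetical host/port

with socket.create_connection(TARGET) as s:
    s.sendall(struct.pack(">I", 0x7FFFFFFF))    # "length" = 2147483647 bytes
    # ...then send nothing else, or trickle garbage to keep the read pending.
```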

 

Does anyone use white lists on the remote address terminal of the Listener.vi?

Edited by ShaunR
Link to comment

Not really like this! My code generally uses a header of a fixed size with more than just a size value, so there is some context that can be verified before interpreting the size value. The header usually includes a protocol identifier, version number and a message identifier before specifying the size of the actual message.

 

If the header doesn't evaluate to a valid message the connection is closed and restarted in the client case. For the server it simply waits for a reconnection from the client.

 

Of course if you maliciously create a valid header specifying your ridiculous length value it may still go wrong, but if you execute your client code on your own machine you will probably run into trouble before it hits the TCP Send node.  :P

 

I usually don't go through the trouble of trying to guess if a length value might be useful after the header has been determined to be valid. Might as well consider that in the future, based on the message identifier, but if you have figured out my protocol you may as well find a way to cause a DoS attack anyways. Not all message types can be made fixed size and imposing an arbitrary limit on such messages may look good today but bite you in your ass tomorrow.  :rolleyes:
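A minimal sketch of that kind of validated header, with the field names, sizes and limits invented purely for illustration (not the poster's actual protocol): everything that can be checked is checked before the length field is trusted, and the connection is dropped on an invalid header.

```python
import socket
import struct

HEADER = struct.Struct(">4sHHI")   # magic, version, message id, payload length (illustrative layout)
MAGIC = b"LVMS"
MAX_PAYLOAD = 1 << 20              # refuse anything over 1 MiB (arbitrary example limit)

def read_exact(conn: socket.socket, n: int) -> bytes:
    buf = b""
    while len(buf) < n:
        chunk = conn.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("peer closed")
        buf += chunk
    return buf

def read_message(conn: socket.socket):
    magic, version, msg_id, length = HEADER.unpack(read_exact(conn, HEADER.size))
    # Validate everything we can *before* trusting the length field.
    if magic != MAGIC or version != 1 or length > MAX_PAYLOAD:
        conn.close()                       # drop the connection, as described above
        raise ValueError("invalid header")
    return msg_id, read_exact(conn, length)
```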

 

And yes I have used white listing on an SMS server implementation in the past. Not really funny if anyone in the world could send SMS messages through your server where you have to pay for each message.

  • Like 2
Link to comment

Not really like this! My code generally uses a header of a fixed size with more than just a size value, so there is some context that can be verified before interpreting the size value. The header usually includes a protocol identifier, version number and a message identifier before specifying the size of the actual message.

Indeed. Transport.lvlib has a more complex header, but it is still a naive implementation, which the example demonstrates. It doesn't check the length and passes it straight to the read, which is the real point of the example.

 

If the header doesn't evaluate to a valid message the connection is closed and restarted in the client case. For the server it simply waits for a reconnection from the client.

 

Of course if you maliciously create a valid header specifying your ridiculous length value it may still go wrong, but if you execute your client code on your own machine you will probably run into trouble before it hits the TCP Send node.  :P

 

Only on 32 bit LabVIEW ;)  64 bit LabVIEW will quite happily send it. However, that is a moot point because I could send the same length header and then send 20 x 107374182-byte packets for the same effect.

 

I usually don't go through the trouble of trying to guess if a length value might be useful after the header has been determined to be valid. Might as well consider that in the future, based on the message identifier, but if you have figured out my protocol you may as well find a way to cause a DoS attack anyways. Not all message types can be made fixed size and imposing an arbitrary limit on such messages may look good today but bite you in your ass tomorrow.  :rolleyes:

 

I agree. But for a generic, reusable TCP/IP protocol, message length by an ID isn't a good solution as you make the protocol application specific. Where all this will probably first be encountered in the wild is with Websockets, so the protocol is out there. Whatever strategy is decided on, it should also be useful for that too, because you have a fixed RFC protocol in that case.

 

Interestingly, all this was prompted by me implementing SSL and emulating the LabVIEW STANDARD, IMMEDIATE, CRLF etc. modes. Because I had to implement a much more complex (buffered) read function in LabVIEW, it became much more obvious where things could (and did) go wrong. I decided to implement two defensive features.

 

  1. A default maximum frame/message/chunk/lemon size that is dependent on the platform - cRIO and other RT platforms are vulnerable to memory exhaustion well below 2GB. (Different mechanism, same result.)
  2. A default rate limit on incoming frames/messages/chunks/lemons - an attempt to prevent the platform TCP/IP stack being saturated if the stack cannot be serviced quickly enough.

 

Both of these are configurable, of course, and you can set it to either drop packets with a warning (default) or raise an error.
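As a rough illustration of those two defaults (the sizes, rates and the warn-vs-error switch below are invented numbers, not the toolkit's real settings), assuming the chunks have already been reassembled into frames by a lower-level read:

```python
import time

MAX_FRAME_BYTES = 16 * 1024 * 1024     # would default smaller on RT targets
MAX_FRAMES_PER_SEC = 1000
ERROR_ON_DROP = False                  # default: drop with a warning

_window_start = time.monotonic()
_frames_in_window = 0

def accept_frame(frame: bytes) -> bool:
    """Return True if the frame should be passed on to the application."""
    global _window_start, _frames_in_window
    now = time.monotonic()
    if now - _window_start >= 1.0:                 # roll the one-second window
        _window_start, _frames_in_window = now, 0
    _frames_in_window += 1
    too_big = len(frame) > MAX_FRAME_BYTES
    too_fast = _frames_in_window > MAX_FRAMES_PER_SEC
    if too_big or too_fast:
        if ERROR_ON_DROP:
            raise RuntimeError("frame dropped: size/rate limit exceeded")
        print("warning: frame dropped")            # drop with warning
        return False
    return True
```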

 

And yes I have used white listing on an SMS server implementation in the past. Not really funny if anyone in the world could send SMS messages through your server where you have to pay for each message.

Yes. This is actually a much more recent brain fart that I've had, and I have plans to add this as a default feature in all my APIs. It's easy to implement and raises an abuser's required competence level considerably.

Edited by ShaunR
Link to comment

All this was prompted by me implementing SSL and emulating the LabVIEW STANDARD, IMMEDIATE, CRLF etc. modes. Because I had to implement a much more complex (buffered) read function in LabVIEW, it became much more obvious where things could (and did) go wrong. I decided to implement two defensive features.

 

  1. A default maximum frame/message/chunk/lemon size that is dependent on the platform - cRIO and other RT platforms are vulnerable to memory exhaustion well below 2GB. (Different mechanism, same result.)
  2. A default rate limit on incoming frames/messages/chunks/lemons - an attempt to prevent the platform TCP/IP stack being saturated if the stack cannot be serviced quickly enough.

 

Both of these are configurable, of course, and you can set it to either drop packets with a warning (default) or raise an error.

 

I wonder if this is very useful. The Berkeley TCP/IP socket library, which is used on almost all Unix systems including Linux, and on which the Winsock implementation is based too, has various configurable tuning parameters. Among them are things like the number of outstanding acknowledge packets as well as the maximum buffer size per socket that can be used before the socket library simply blocks any more data from coming in. The cRIO socket library (well, at least for the newer NI Linux systems; the vxWorks and Pharlap libraries may be privately baked libraries that could behave less robustly), being in fact just another Linux variant, certainly uses them too. Your Mega-Jumbo data packet will simply block on the sender side (and fill your send buffer) and more likely cause a DoS on your own system than on the receiving side. Theoretically you can set your send buffer for the socket to 2^32 - 1 bytes of course, but that will impact your own system performance very badly.

 

So is it useful to add yet another "buffer limit" on the higher level protocol layers? Aren't you badly muddying the waters about proper protocol layer responsibilities with such band-aid fixes? Only the final high level protocol can really make any educated guesses about such limits, and even there it is often hard to do if you want to allow variable sized message structures. Limiting the message to some 64KB, for instance, wouldn't even necessarily help if you get a client that maliciously attempts to throw thousands of such packets at your application. Only the final upper layer can really take useful action to prepare for such attacks. Anything in between will always be possible to circumvent by better architected attack attempts.

 

In addition, you can't set a socket buffer above 2^16-1 bytes after the connection has been established, as the corresponding window sizes need to be negotiated during connection establishment. Since you don't get at the refnum in LabVIEW before the socket has been connected, this is therefore not possible. You would have to create your DoS code in C or similar to be able to configure a send buffer above 2^16-1 bytes on the unconnected socket before calling the connect() function.
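The sender-side blocking described above is easy to demonstrate. A small self-contained sketch (loopback address, buffer sizes and payload size are arbitrary) in which the receiver accepts the connection but never reads, so the kernel buffers fill and the sender's sendall() stalls:

```python
import socket
import threading
import time

def silent_server():
    srv = socket.socket()
    srv.bind(("127.0.0.1", 9000))
    srv.listen(1)
    conn, _ = srv.accept()
    time.sleep(60)                      # hold the connection open, never call conn.recv()

threading.Thread(target=silent_server, daemon=True).start()
time.sleep(0.2)                         # give the listener time to come up

client = socket.socket()
client.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, 64 * 1024)  # keep the demo deterministic
client.connect(("127.0.0.1", 9000))
client.settimeout(5)
try:
    client.sendall(b"\x00" * (16 * 1024 * 1024))    # 16 MB that nobody will read
except socket.timeout:
    print("sendall blocked: kernel buffers full, back-pressure hits the sender")
```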

  • Like 1
Link to comment

I wonder if this is very useful. The Berkeley TCP/IP socket library, which is used on almost all Unix systems including Linux, and on which the Winsock implementation is based too, has various configurable tuning parameters. Among them are things like the number of outstanding acknowledge packets as well as the maximum buffer size per socket that can be used before the socket library simply blocks any more data from coming in. The cRIO socket library (well, at least for the newer NI Linux systems; the vxWorks and Pharlap libraries may be privately baked libraries that could behave less robustly), being in fact just another Linux variant, certainly uses them too. Your Mega-Jumbo data packet will simply block on the sender side (and fill your send buffer) and more likely cause a DoS on your own system than on the receiving side. Theoretically you can set your send buffer for the socket to 2^32 - 1 bytes of course, but that will impact your own system performance very badly.

 

The issue is the LabVIEW implementation (in particular, the STANDARD or BUFFERED modes). The actual socket implementations are not the issue; it is the transition from the underlying OS to LabVIEW where these problems arise. Standard mode waits until the underlying layer has received all 2GB, then dies when returning from the LabVIEW primitive. The solution is to use immediate mode and consume data in whatever size chunks the OS passes up to LabVIEW. At that point you have to implement your own concatenation to maintain the same behaviour and convenience that STANDARD gives you, which requires a buffer (either in your TCP/IP API or in your application).

 

When you have resigned yourself to the complexity increase of buffering within LabVIEW, the possibility to mitigate becomes available, as you can now drop chunks either because they are coming in too fast for you to consume or because the concatenated size would consume all your memory.
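A minimal sketch of that "consume in OS-sized chunks and concatenate yourself" approach, here for a CRLF-style delimited read; the chunk size and the buffer limit are invented stand-ins for the platform-dependent defaults:

```python
import socket

MAX_BUFFERED = 32 * 1024 * 1024            # illustrative cap, not a real default

def read_until(conn: socket.socket, delimiter: bytes = b"\r\n") -> bytes:
    buf = bytearray()
    while delimiter not in buf:
        chunk = conn.recv(4096)            # whatever the stack has right now
        if not chunk:
            raise ConnectionError("peer closed before delimiter")
        buf += chunk
        if len(buf) > MAX_BUFFERED:        # bail out instead of exhausting memory
            raise MemoryError("incoming message exceeds buffer limit")
    msg, _, rest = bytes(buf).partition(delimiter)
    return msg                             # a real buffered read would keep 'rest' for the next call
```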

 

So is it useful to add yet another "buffer limit" on the higher level protocol layers? Aren't you badly muddying the waters about proper protocol layer responsibilities with such band-aid fixes? Only the final high level protocol can really make any educated guesses about such limits, and even there it is often hard to do if you want to allow variable sized message structures. Limiting the message to some 64KB, for instance, wouldn't even necessarily help if you get a client that maliciously attempts to throw thousands of such packets at your application. Only the final upper layer can really take useful action to prepare for such attacks. Anything in between will always be possible to circumvent by better architected attack attempts.

 

I'd be interested in other solutions but yes, it is useful (and effective). The contrived example and more sophisticated attempts are ineffective with (1). If you then put the read VI in a loop with a string concatenate in your application, there is not much we can do about that, as the string concatenate will fail at 2GB on a 32 bit system. So it is better to trust the VI if you are only expecting command/response messages and set the LabVIEW read buffer limit appropriately (who sends 100MB command/response messages? :D). In a way you are allowing the user to handle it easily at the application layer with a drop-in replacement for the read primitive; it just also has a safer default setting.

 

If a client maliciously throws thousands of such packets then (2) comes into play. Then it is an arms race of how fast you can drop packets against how fast they can throw them at you. If you are on a 1Gb connection, my money is on the application ;) You may be DoS'd, but when they stop, your application is still standing.

 

In addition, you can't set a socket buffer above 2^16-1 bytes after the connection has been established, as the corresponding window sizes need to be negotiated during connection establishment. Since you don't get at the refnum in LabVIEW before the socket has been connected, this is therefore not possible. You would have to create your DoS code in C or similar to be able to configure a send buffer above 2^16-1 bytes on the unconnected socket before calling the connect() function.

 

As said previously, it is a LabVIEW limitation. So effectively this proposal does give you the option to increase the buffer after connection (from the application's viewpoint). However, there is no option/setting in any OS that I know of to drop packets or buffer contents; you must do that by consuming them. There is no point in setting your buffer to 2GB if you only have 500MB of memory.

Edited by ShaunR
Link to comment

I still don't really get who you are trying to secure yourself against.

This seems like a lot of work for very little real reason.

I suppose in response to your OP, I am in the don't care category. All my systems that have some kind of network comms are on an intranet, and if somebody else on that network is trying to maliciously ruin my day it really is not my problem.

Where I need to expose things to the Internet I would never use raw tcpip.

In those circumstances I use Web Services with their built in security features.

Link to comment

I still don't really get who you are trying to secure yourself against.

 

I want to secure myself against you, your dog, your friends, your company, your government and your negligence. :D (Obviously not you personally, just to clarify :P).

 

This seems like a lot of work for very little real reason.

 

How much effort I put into that depends on how much value I put on a system, the data it produces/consumes, the consequences of compromising its integrity and the expected expertise the adversary has. If I (or my company/client) don't value a system or the information it contains then there is no need to do anything. If my company/client says it's valuable, then I can change some parameters and protect it more without having to make architectural concessions. The initial effort is high. The continued effort is low. I have the tools, don't you want them too?

 

I suppose in response to your OP, I am in the don't care category. All my systems that have some kind of network comms are on an intranet, and if somebody else on that network is trying to maliciously ruin my day it really is not my problem.

Where I need to expose things to the Internet I would never use raw tcpip.

In those circumstances I use Web Services with their built in security features.

 

When your machine can take arms or legs off, or can destroy thousands of pounds worth of equipment when it malfunctions, people very quickly make it your problem :P "Not my problem" is the sort of attitude I take IT departments to task over, and I don't really want to group you with those reprobates :lol:

 

Your last sentence is a fair one, though. However, web services are slow and come with their own issues. By simply deploying a web server you are greatly increasing the attack surface, as web servers have to be all things to all people and are only really designed for serving web pages. If you are resigned to never writing software that uses Websockets, RTSP or other high performance streaming protocols then you can probably make do. I prefer not to make do but to make it do - because I can. Some are users of technology and some are creators. Sometimes the former have difficulty in understanding the motives of the latter, but everyone benefits. :yes:

Link to comment

Neil, while LV is not used in any financial role that I know of, it is used in many industrial applications. You don't want to be the one that enabled the next Flame/Duqu, the same as Siemens didn't want to be the first, when their engineers said it was not their problem.

Link to comment

So is it useful to add yet another "buffer limit" on the higher level protocol layers? Aren't you badly muddying the waters about proper protocol layer responsibilities with such band-aid fixes?

 

Exactly.

 

If we're talking TCP here (we are), rate limiting through backpressure is what you want, by setting `SO_RCVBUF` with `setsockopt()`.

 

Buried in vi.lib you can find some methods that take a TCP connection reference and expose the underlying file descriptor -- presumably, you could set such options with kernel APIs (I've not tried) -- but in reality using the LabVIEW TCP API is kinda jokey if you're concerned about security (or high performance). Granted, best-in-class security and high performance are not the presiding requirement for many applications, in which case the TCP API is perfectly fine and convenient.
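For reference, setting the receive buffer on an ordinary socket is a one-liner. A minimal sketch (the size and address are illustrative); note that the kernel may round or clamp the value, and that, as discussed above, large windows need to be in place before the connection is established:

```python
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 64 * 1024)   # set before connect()
s.connect(("192.168.0.66", 8899))

# The kernel may adjust the requested size; read back what actually took effect.
print("effective receive buffer:",
      s.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF))
```

With a small receive buffer, a flood of incoming data stalls at the sender (backpressure) instead of piling up locally.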

 

 

I still don't really get who you are trying to secure yourself against.

This seems like a lot of work for very little real reason.

I suppose in response to your OP, I am in the don't care category. All my systems that have some kind of network comms are on an intranet, and if somebody else on that network is trying to maliciously ruin my day it really is not my problem.

Where I need to expose things to the Internet I would never use raw tcpip.

In those circumstances I use Web Services with their built in security features.

 

Response to each sentence: Yikes; yikes; fair enough; seems reasonable; only a 100% maniac would do this; YIKES NO NO NO.

 

In regards to all instruments/control systems being accessible remotely -- whether intranet, or especially public internet -- one must analyze the risk profile of what happens in the physical world if the controller goes up in smoke, and how easy/likely that is to be triggered remotely.

 

The controller is a finite resource. If demanded, it can spend 100% of its finite power either controlling the process, or 100% of its power fending off a DoS or fuzzing attack. Each can starve the other, and both can starve "well-behaved", authorized remote clients/peers attempting to bring order to a railed-out controller.

 

(Here, "attack" is a word shrouded with improper connotation. It's likely that we are ourselves (and our colleagues) are our own (and colleague's) biggest attackers, just by writing dumb bugs. Networked communication is an *excellent* manner of flushing out bugs we were not even aware we were so good at writing, having become accustomed to the reasonably deterministic and reliable medium of memory and CPU for in-process communication, not always programming against such lossy mediums as the physical jankiness the IP stack must deal with.)

 

All this said; the labview web service is woefully incapable of configuring security and QoS parameters for clients. It's reasonable to use behind VPN on an intranet, but I would strongly recommend against putting it on the internet without sitting behind a production-quality HTTP server acting as a reverse proxy.

Link to comment

Exactly.

 

If we're talking TCP here (we are), rate limiting through backpressure is what you want, by setting `SO_RCVBUF` with `setsockopt()`.

 

Buried in vi.lib you can find some methods that take a TCP connection reference and expose the underlying file descriptor -- presumably, you could set such options with kernel APIs (I've not tried) -- but in reality using the LabVIEW TCP API is kinda jokey if you're concerned about security (or high performance). Granted, best-in-class security and high performance are not the presiding requirement for many applications, in which case the TCP API is perfectly fine and convenient.

 

I'm not sure I would agree here fully. Yes, security is a problem as you cannot get at the underlying socket in a way that would allow you to inject OpenSSL or similar into the socket, for instance, so TCP/IP using LabVIEW primitives is limited to unencrypted communication. Performance wise they aren't that bad. There is some overhead in the built-in data buffering that consumes some performance, but it isn't that bad. The only real limit is the synchronous character towards the application, which makes some high throughput applications more or less impossible. But those are typically protocols that are rather complicated (video streaming, VoIP, etc.) and you do not want to reimplement them on top of the LabVIEW primitives but rather import an existing external library for that anyways.

 

Having a more asynchronous API would also be pretty hard to use for most users. Together with the fact that it is mostly only really necessary for rather complex protocols, I wouldn't see any compelling reason to spend too much time on that. I worked through all this pretty extensively when trying to work on this library. Unfortunately the effort to invest in such a project is huge and the immediate needs for it were somewhat limited. Shaun seems to be working on something similar at the moment, but making the scope of it possibly even bigger. :D

 

I know that he prefers to solve as much as possible in LabVIEW itself rather than creating an intermediate wrapper shared library. One thing that would concern me here is the implementation of the intermediate buffering in LabVIEW itself. I'm not sure that you can get similar performance there as doing the same in C, even when making heavy use of the In-Place structure in LabVIEW.

  • Like 1
Link to comment

Exactly.

 

If we're talking TCP here (we are), rate limiting through backpressure is what you want, by setting `SO_RCVBUF` with `setsockopt()`.

 

Buried in vi.lib you can find some methods that take a TCP connection reference and expose the underlying file descriptor -- presumably, you could set such options with kernel APIs (I've not tried) -- but in reality using the LabVIEW TCP API is kinda jokey if you're concerned about security (or high performance). Granted, best-in-class security and high performance are not the presiding requirement for many applications, in which case the TCP API is perfectly fine and convenient.

 

Setting the receive buffer isn't a help in the instance of rate limiting. There is a limit to how many packets can be buffered and they stack up. If you send faster than you can consume (by that I mean read them out through LabVIEW), you hit a limit and the TCP/IP stops accepting - in many scenarios, never to return again, or locking your application for a considerable time after they cease sending. For SSL, the packets (or records) are limited to 16KB, so the buffer size makes no difference; it is not a solution at all in that case. That is probably outside the scope of this conversation, but it does demonstrate it is not a panacea.

 

I'm not saying that setting the low level TCP/IP buffer is not useful. On the contrary, it is required for performance. However, allowing the user to choose a different strategy rather than "just go deaf when it's too much" is a more amenable approach. For example, it gives the user an opportunity to inspect the incoming data and filter out real commands, so that although your application is working hard just to service the packets, it is still responding to your commands.

 

As for the rest. Rolf says it better than I.

 

Having a more asynchronous API would also be pretty hard to use for most users. Together with the fact that it is mostly only really necessary for rather complex protocols, I wouldn't see any compelling reason to spend too much time on that. I worked through all this pretty extensively when trying to work on this library. Unfortunately the effort to invest in such a project is huge and the immediate needs for it were somewhat limited. Shaun seems to be working on something similar at the moment, but making the scope of it possibly even bigger. :D

I don't think the scope is bigger. It is just a move up from the simplified TCP/IP reads that we have relied on for so long. Time to get with the program that other languages' comms APIs did 15 years ago :rolleyes:  I had to implement it for the SSL (which doesn't use the LabVIEW primitives) and have implemented it in the Websocket API for LabVIEW. I am now considering also putting it in transport.lvlib. However, no-one cares about security, it seems, and probably even fewer use transport.lvlib, so it's very much a "no action required" in that case. :D

 

I know that he prefers to solve as much as possible in LabVIEW itself rather than creating an intermediate wrapper shared library. One thing that would concern me here is the implementation of the intermediate buffering in LabVIEW itself. I'm not sure that you can get similar performance there as doing the same in C, even when making heavy use of the In-Place structure in LabVIEW.

 

Funnily enough, it is your influence that prompted my preference to solve as much as possible in LabVIEW, so in a way I learnt from the master but probably not the lesson that was taught :lol:

 

As to performance, I'm ambivalent bordering on "meh". Processor, memory and threading all affect TCP/IP performance, which is why, if you are truly performance oriented, you may go to an FPGA. You won't get the same performance from a cRIO's TCP/IP stack as from even an old laptop, and that assumes it comes with more than a 100Mb port. Then you have all the Nagle, keep-alive etc. settings that affect what your definition of performance actually is.

 

Obviously it is something I've looked at, and the overhead is a few microseconds for binary and a few tens of microseconds for CRLF on my test machines. It's not as if I'm using IMMEDIATE mode and processing a byte at a time :P

Edited by ShaunR
  • Like 2
Link to comment

All this said; the labview web service is woefully incapable of configuring security and QoS parameters for clients. It's reasonable to use behind VPN on an intranet, but I would strongly recommend against putting it on the internet without sitting behind a production-quality HTTP server acting as a reverse proxy.

 

As an addendum, do you have specific issues with using the NI webserver on public networks? Can you detail the specifics of why it is "woefully incapable" for us?

Edited by ShaunR
Link to comment

Do you have specific issues with using the NI webserver on public networks?

 

Unless something has changed, both the Embedded Web Server and Application Web Server are AppWeb by EmbedThis. That web server in and of itself seems pretty capable and pretty sweet, but the wrapper that LabVIEW puts around it strips and sanitizes virtually all of the configuration file options: https://embedthis.com/appweb/doc/users/configuration.html

 

Also, unless something has changed, setting up `SSLCertificateFile` with a private key signed by a bona fide CA (rather than self-signed) took some jumping-thru-hoops.

 

You can poke around and modify the `ErrorLog` and `Log` specs, but only a couple security tokens are honored by not being sanitized away (https://embedthis.com/appweb/doc/users/security.html and https://embedthis.com/appweb/doc/users/monitor.html)

 

Also -- unless something has changed -- the developer experience between dev-time and deploy-time is vastly different in the way the web service actually works. At a very high level, the problem was this: the concept of who calls `StartWebServer()` (maRunWebServer) was hard-coded or something into the way the RTE loads applications, and so it was not invoked when you just run your application from source in the IDE. In that case, there was this weird out-of-band deployment via a Project Provider, where it used the application web server rather than the embedded web server. Where of course, managing the configurations of these two disjoint deployments didn't work at all.

 

YMMV, but I don't use it for anything. libappweb always felt pretty awesome, but it was just shackled (oh; and while libappweb continually improved, the bundled version with lv stayed on some old version)

 

On a side note, the AppWeb docs site got a huge facelift -- looks like Semantic UI; looks/feels spectacular.

  • Like 1
Link to comment

Unless something has changed, both the Embedded Web Server and Application Web Server are AppWeb by EmbedThis. That web server in and of itself seems pretty capable and pretty sweet, but the wrapper that LabVIEW puts around it strips and sanitizes virtually all of the configuration file options: https://embedthis.com/appweb/doc/users/configuration.html

 

Also, unless something has changed, setting up `SSLCertificateFile` with a private key signed by a bona fide CA (rather than self-signed) took some jumping-thru-hoops.

 

You can poke around and modify the `ErrorLog` and `Log` specs, but only a couple security tokens are honored by not being sanitized away (https://embedthis.com/appweb/doc/users/security.html and https://embedthis.com/appweb/doc/users/monitor.html)

 

Also -- unless something has changed -- the developer experience between dev-time and deploy-time is vastly different in the way the web service actually works. At a very high level, the problem was this: the concept of who calls `StartWebServer()` (maRunWebServer) was hard-coded or something into the way the RTE loads applications, and so it was not invoked when you just run your application from source in the IDE. In that case, there was this weird out-of-band deployment via a Project Provider, where it used the application web server rather than the embedded web server. Where of course, managing the configurations of these two disjoint deployments didn't work at all.

 

YMMV, but I don't use it for anything. libappweb always felt pretty awesome, but it was just shackled (oh; and while libappweb continually improved, the bundled version with lv stayed on some old version)

 

On a side note, the AppWeb docs site got a huge facelift -- looks like Semantic UI; looks/feels spectacular.

 

Outside of UX, are there any known issues with things like URL mapping to the LabVIEW code (sanitisation and URL length) or the API keys (poor keyspace)? Are LabVIEW developers even aware that they may need to sanitise and sanity check URL parameters that map to front panel controls (especially strings)?
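For illustration only (the parameter names and limits are invented, and this is not the LabVIEW web service API), the kind of sanity checking being asked about amounts to whitelisting and bounding every parameter before it touches application state:

```python
import re
from urllib.parse import parse_qs

ALLOWED_MODES = {"idle", "run", "stop"}          # whitelist of enumerated values

def parse_request(query_string: str) -> dict:
    params = parse_qs(query_string, max_num_fields=10)   # cap the number of fields
    mode = params.get("mode", ["idle"])[0]
    label = params.get("label", [""])[0]
    if mode not in ALLOWED_MODES:
        raise ValueError("invalid mode")
    if len(label) > 64 or not re.fullmatch(r"[\w .-]*", label):
        raise ValueError("invalid label")        # length limit + character whitelist
    return {"mode": mode, "label": label}
```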

Link to comment

It should NOT be enabled by default as of LabVIEW 2014. NI knows that is a security hole and has WebDAV enabled instead.

 

I can find no statement to that effect. (I don't have a 2014 RIO; only 2013). This was published on Oct 16, 2015 so hopefully it is just an oversight as it references a 2011 document.

Edited by ShaunR
Link to comment
