
ShaunR


Everything posted by ShaunR

  1. Wall street maybe?
  2. I want to secure myself against you, your dog, your friends, your company, your government and your negligence. (Obviously not you personally, just to clarify ). How much effort I put into that depends on how much value I put on a system, the data it produces/consumes, the consequences of compromising its integrity and the expected expertise of the adversary. If I (or my company/client) don't value a system or the information it contains, then there is no need to do anything. If my company/client says it's valuable, then I can change some parameters and protect it more without having to make architectural concessions. The initial effort is high; the continued effort is low. I have the tools - don't you want them too? When your machine can take arms or legs off, or can destroy thousands of pounds worth of equipment when it malfunctions, people very quickly make it your problem. "Not my problem" is the sort of attitude I take IT departments to task over, and I don't really want to group you with those reprobates. Your last sentence is a fair comment, though. However, web services are slow and come with their own issues. By simply deploying a web server you are greatly increasing the attack surface, as web servers have to be all things to all people and are only really designed for serving web pages. If you are resigned to never writing software that uses Websockets, RTSP or other high-performance streaming protocols, then you can probably make do. I prefer not to make do but to make it do - because I can. Some are users of technology and some are creators. Sometimes the former have difficulty understanding the motives of the latter, but everyone benefits.
  3. The issue is the LabVIEW implementation (in particular, the STANDARD and BUFFERED modes). The actual socket implementations are not the issue; these problems arise in the transition from the underlying OS to LabVIEW. STANDARD mode waits until the underlying layer gets all 2GB, then dies when returning from the LabVIEW primitive. The solution is to use IMMEDIATE mode to consume data in whatever size chunks the OS throws up to LabVIEW. At that point you have to implement your own concatenation to maintain the same behaviour and convenience that STANDARD gives you, which requires a buffer (either in your TCPIP API or in your application). Once you have resigned yourself to the added complexity of buffering within LabVIEW, mitigation becomes possible: you can now drop chunks, either because data is coming in too fast for you to consume or because the concatenated size would consume all your memory. I'd be interested in other solutions, but yes, it is useful (and effective). The contrived example and more sophisticated attempts are ineffective against (1). If you then put the read VI in a loop with a string concatenate in your application, there is not much we can do about that, as the string concatenate will fail at 2GB on a 32-bit system. So it is better to trust the VI if you are only expecting command/response messages and set the LabVIEW read buffer limit appropriately (who sends 100MB command/response messages? ). In a way you are allowing the user to handle it easily at the application layer with a drop-in replacement for the read primitive; it just also has a safer default setting. If a client maliciously throws thousands of such packets then (2) comes into play. Then it is an arms race of how fast you can drop packets against how fast they can throw them at you. If you are on a 1Gb connection, my money is on the application. You may be DOS'd, but when they stop, your application is still standing. As said previously, it is a LabVIEW limitation. So effectively this proposal does give you the option to increase the buffer after connection (from the application's viewpoint). However, there is no option/setting in any OS that I know of to drop packets or buffer contents - you must do that by consuming them. There is no point in setting your buffer to 2GB if you only have 500MB of memory.
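The buffered IMMEDIATE-mode approach described above can be sketched outside LabVIEW. This is a minimal Python illustration (not LabVIEW code, and not any shipping API - all names and limits here are made up for the example) of reading a length-prefixed message in small chunks while enforcing a memory cap, so a hostile 2GB header is rejected cheaply instead of exhausting memory:

```python
# Sketch of chunked, capped reading of a length-prefixed message.
# Illustrative only: CHUNK, MAX_MESSAGE and read_message are invented names.
import io
import struct

CHUNK = 4096            # consume whatever the OS hands up, a piece at a time
MAX_MESSAGE = 1 << 20   # refuse anything over 1 MiB instead of dying at 2GB

def read_message(stream, max_message=MAX_MESSAGE):
    """Read a 4-byte big-endian length header, then the payload in chunks.
    Raises ValueError rather than trying to honour a hostile length."""
    header = stream.read(4)
    if len(header) < 4:
        raise EOFError("connection closed before header")
    (length,) = struct.unpack(">I", header)
    if length > max_message:
        # Drop with an error: a 2GB claim costs us four bytes, not 2GB of RAM.
        raise ValueError(f"declared length {length} exceeds cap {max_message}")
    parts = []
    remaining = length
    while remaining:
        piece = stream.read(min(CHUNK, remaining))
        if not piece:
            raise EOFError("connection closed mid-message")
        parts.append(piece)
        remaining -= len(piece)
    return b"".join(parts)

# A well-behaved peer gets through unchanged.
good = io.BytesIO(struct.pack(">I", 5) + b"hello")
print(read_message(good))  # b'hello'
```

A real implementation would read from a socket rather than an in-memory stream, but the shape is the same: the cap is checked before any allocation, which is exactly what the STANDARD-mode primitive cannot do for you.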
  4. Indeed. The Transport.lvlib has a more complex header, but it is still a naive implementation, which the example demonstrates. It doesn't check the length and passes it straight to the read, which is the real point of the example. Only on 32-bit LabVIEW; 64-bit LabVIEW will quite happily send it. However, that is a moot point because I could send the same length header and then send 20 x 107374182-length packets for the same effect. I agree. But for a generic, reusable TCPIP protocol, message length by an ID isn't a good solution, as you make the protocol application-specific. Where all this will probably first be encountered in the wild is with Websockets, so the protocol is out there. Whatever strategy is decided on, it should also be useful for that too, because you have a fixed RFC protocol in that case. Interestingly, all this was prompted by me implementing SSL and emulating the LabVIEW STANDARD, IMMEDIATE, CRLF etc. Because I had to implement a much more complex (buffered) read function in LabVIEW, it became much more obvious where things could (and did) go wrong. I decided to implement two defensive features. A default maximum frame/message/chunk/lemon size that is dependent on the platform - cRIO and other RT platforms are vulnerable to memory exhaustion well below 2GB (different mechanism, same result). A default rate limit on incoming frames/messages/chunks/lemons - an attempt to prevent the platform TCPIP stack being saturated if the stack cannot be serviced quickly enough. Both of these are configurable, of course, and you can set either to drop packets with a warning (default) or to raise an error. Yes. This is actually a much more recent brain fart that I've had, and I have plans to add this as a default feature in all my APIs. It's easy to implement and raises an abuser's required competence level considerably.
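The second defensive default above - rate limiting incoming frames - can be sketched as a simple per-second window counter. This is a Python illustration only (the class name, threshold and drop policy are invented for the example, not anything in Transport.lvlib or LabVIEW):

```python
# Illustrative frame rate limiter: process up to N frames per second,
# drop the rest with a count that could feed a warning. Names are made up.
import time

class FrameRateLimiter:
    def __init__(self, max_frames_per_sec=100, clock=time.monotonic):
        self.max = max_frames_per_sec
        self.clock = clock
        self.window_start = clock()
        self.count = 0
        self.dropped = 0

    def allow(self):
        """Return True if this frame may be processed, False to drop it."""
        now = self.clock()
        if now - self.window_start >= 1.0:
            # New one-second window: reset the budget.
            self.window_start = now
            self.count = 0
        self.count += 1
        if self.count > self.max:
            self.dropped += 1
            return False
        return True

# With a fake clock frozen inside one second, frames beyond the 100th drop.
fake_now = [0.0]
limiter = FrameRateLimiter(max_frames_per_sec=100, clock=lambda: fake_now[0])
results = [limiter.allow() for _ in range(105)]
print(results.count(True), limiter.dropped)  # 100 5
```

Dropping (the default described above) keeps the reader serviceable under flood; switching the `False` branch to raise instead would give the error-on-abuse behaviour.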
  5. Whose TCPIP read functions use the tried and tested "read length, then data" pattern like the following? (ShaunR puts his hand up ) What happens if I connect to your LabVIEW program like this? Does anyone use whitelists on the remote address terminal of the Listener.vi?
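The naive pattern being poked at can be written out in a few lines. A hedged Python sketch (names invented; a LabVIEW version would use the TCP Read primitive the same way): the reader trusts the header, so a hostile client spends four bytes to make the reader try to consume up to 2GB:

```python
# The naive "read length, then read data" pattern, sketched in Python.
import io
import struct

def naive_read(stream):
    (length,) = struct.unpack(">I", stream.read(4))  # trust the header...
    return stream.read(length)                       # ...then allocate/block on it

# Honest peer: works exactly as intended.
print(naive_read(io.BytesIO(struct.pack(">I", 3) + b"abc")))  # b'abc'

# Hostile client: four bytes claiming a ~2GB payload that never arrives.
# (An in-memory stream just returns empty; a real socket read would block
# or try to buffer the full declared length.)
hostile_header = struct.pack(">I", 0x7FFFFFFF)
length_claimed = struct.unpack(">I", hostile_header)[0]
print(length_claimed)  # 2147483647
```

Four bytes in, two gigabytes requested - which is the whole point of the question.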
  6. Yes. Orange = "run in UI thread"; yellow = "run in any thread". Orange: requires the LabVIEW root loop. All kinds of heartache here, but you are guaranteed all nodes will be called from a single LabVIEW thread context. This is used for non-thread-safe library calls - when you use a 3rd-party library that isn't thread-safe, or you don't know whether it is. If you are writing a library for LabVIEW, you shouldn't be using this, as it has obnoxious and unintuitive side effects and is orders of magnitude slower. This is the choice of last resort, but the safest for most non-C programmers who have been dragged kicking and screaming into doing it. Yellow: runs in any thread that LabVIEW decides to use. LabVIEW uses a pre-emptively scheduled thread pool (see the execution subsystems), therefore libraries must be thread-safe, as each node can execute in an arbitrary thread context. Some light reading that you may like - section 9. If you are writing your own DLL then you should be here - writing thread-safe ones. Most people used to LabVIEW don't know how to. Hell, most C programmers don't know how to. Most of the time, I don't know how to and have to relearn it. If you have a pet C programmer, keep yelling "thread-safe" until his ears bleed. If he says "what's that?", trade him in for a newer model. It has nothing to do with your application architecture, but it will bring your application crashing down for seemingly random reasons. I think I see a JackDunaway NI Days presentation coming along in the not too distant future.
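The orange/yellow distinction boils down to shared mutable state. A small Python analogy (not LabVIEW; the function names are invented): a library routine that touches a shared global is like a C DLL with a static buffer - safe only when serialised through one thread (the orange node); guarding the state with a lock is the kind of discipline a yellow "any thread" node demands:

```python
# Thread-safety analogy: unguarded shared state vs. lock-guarded state.
import threading

counter = 0
lock = threading.Lock()

def unsafe_increment():
    """Like a DLL call with static state: read-modify-write, no guard."""
    global counter
    counter += 1

def safe_increment():
    """Serialise access so the call is safe from an arbitrary thread."""
    global counter
    with lock:
        counter += 1

# Eight threads hammer the safe version 1000 times each.
threads = [
    threading.Thread(target=lambda: [safe_increment() for _ in range(1000)])
    for _ in range(8)
]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 8000 - the locked version always lands on the right total
```

The unguarded version may happen to work in light testing, which is exactly why these bugs surface as "seemingly random" crashes later.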
  7. There is a whole thread on LabVIEW and SSH, and one of the posts has your solution. Cat is the expert on SSH now. I will, however, reiterate that using a username and password, although exchanged securely, is a lot less desirable than private/public keys. The latter makes brute forcing practically impossible. There is only one real weakness with SSH - verification. When you first exchange verification hashes, you have to trust that the hash you receive from the server is actually from the server you think it is. You will probably have noticed that plink asked you about that when you first connected. You probably said "yeah, I trust it", but it is important to check the signature to make sure someone didn't intercept it and send you theirs instead. Once you hit OK, you won't be asked again until you clear the trusted servers' cache, so that first time is hugely important for your secrecy.
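The "check the signature" step above amounts to comparing fingerprints. As a sketch, modern OpenSSH displays host keys as `SHA256:` followed by the unpadded base64 of the SHA-256 of the raw key blob; this Python snippet computes that form so it can be compared against a fingerprint obtained out of band (the key bytes below are a made-up stand-in, not a real host key):

```python
# Compute an OpenSSH-style SHA-256 host key fingerprint for comparison.
import base64
import hashlib

def ssh_fingerprint(key_blob: bytes) -> str:
    """SHA256:<unpadded base64 of sha256(key_blob)>, OpenSSH display style."""
    digest = hashlib.sha256(key_blob).digest()
    return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")

# Illustrative only: stand-in bytes, not real key material.
fake_host_key = b"ssh-ed25519 fake key material for illustration"

seen = ssh_fingerprint(fake_host_key)     # what plink shows you on connect
trusted = ssh_fingerprint(fake_host_key)  # what you obtained out of band
print(seen == trusted)  # True only when the server is who you think it is
```

If the two strings differ, someone in the middle may have substituted their own key - which is precisely the first-connection risk the post describes.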
  8. Indeed they are. In fact, it is this that means the function calls must be run in the root loop (orange node). That is really, really bad for performance. If you passed the ref into the callback function as a parameter, then you could turn those orange nodes into yellow ones. This means you can just call the EventThread directly without all the C++ threading (which is problematic anyway) and integrate into the LabVIEW threading and scheduling. The problem then becomes how you stop it, since you can't use the global flag, b_ThreadState, for the same reasons. I'll leave you to figure that one out, since you will have to solve it for your use case if you want "Any Thread" operation. When you get into this stuff, you realise just how protected from all this crap you are by LabVIEW, and why engineers don't need to be speccy nerds in basements to program in it (present company excepted, of course ). Unfortunately, the speccy nerds are trying their damnedest to get their favourite programming languages' crappy features and caveats into LabVIEW. Luckily for us non-speccy nerds, NI stopped progressing the language around version 8.x.
  9. As if by magic, the shopkeeper appeared. (LabVIEW 64 bit) You getting scared yet, Mr Pate?
  10. Answered in a PM.
  11. Funny you should mention that. I've just been playing with forward and reverse SSH proxies for the Encryption Compendium for LabVIEW, now that it has SSH client capabilities (no cygwin or other nasty Linux emulators ). If you have SSH (or TLS for that matter) you don't need a VPN. If you are thinking about requiring a VPN for Remote Desktop, then there are flavours of VNC that tunnel over SSH/TLS or use encryption (disclaimer: haven't used them in anger, but I know they exist). Once you go for direct TCPIP through TLS or SSH there isn't much of a reason to use a VPN, apart from IP address hiding, since end-to-end doesn't require control over the entire path of the infrastructure to maintain secrecy.
  12. The scientific notation is still not 7 digits. It also looks, on the face of it, like the precision of doubles is set to 9 significant digits rather than 15 decimal places.
  13. A nice workaround for those who don't have access to TLS and SSH tools.
  14. The one outstanding reason to use dynamic loading is that you can replace the dependencies without recompiling your DLL (as long as the interface doesn't change). So you can update your driver DLL with a newer version just by overwriting the old one (either manually or with an installer). You will find this a must if you ever piggyback off NI binaries, which change with each release/update. With static bindings it is forever set in stone for that *.lib version until you decide to recompile with the new lib.
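The same idea can be seen from Python via `ctypes`: the library is resolved by name at run time rather than bound at link time, so swapping the file on disk swaps the implementation, provided the declared interface stays stable. A sketch (assuming a Unix-like system with a standard maths library; library names are platform-specific):

```python
# Dynamic loading sketch: resolve the library at run time, declare only
# the interface we depend on. Platform assumption: Unix-like with libm.
import ctypes
import ctypes.util

libm_path = ctypes.util.find_library("m")  # resolved now, not at link time
# Fall back to the symbols already loaded into the process if lookup fails.
libm = ctypes.CDLL(libm_path) if libm_path else ctypes.CDLL(None)

# This declaration is the "interface that must not change": as long as
# cos(double) -> double holds, the library file underneath can be replaced.
libm.cos.argtypes = [ctypes.c_double]
libm.cos.restype = ctypes.c_double

print(libm.cos(0.0))  # 1.0
```

The static-binding equivalent would have baked the specific `.lib` into the binary at compile time - the trade-off the post describes.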
  15. In a production environment the strategy is network isolation, because the engineer has complete control over the devices, infrastructure and users. Where isolation isn't desirable or achievable, VPN seems to be the strategy, which requires the company to have complete control over the devices, infrastructure and users. We have already heard from one engineer who uses mobile devices with a web server and no VPN where there is no complete control, and with Websockets now stable, mobile devices are being connected to LabVIEW systems without web servers. So it was intended as a leading question.
  16. So. Who is using a VPN on their Android or iOS to connect to their LabVIEW software?
  17. Does this mean you have never used this method, but if you were specifically asked to, then this would be a proposal? Yes. That is clearer. You have a (XML?) protocol that contains security tokens.
  18. When you talk of "Package" are you talking about software updates to the cRIO?
  19. This is demonstrated in the TCP/IP examples shipped with LabVIEW. You are effectively creating a simple proprietary protocol with a single header field of "Data Length". Transport extends this further by adding a timestamp and encryption and compression flags to the header.
  20. cData can be passed as an array of U8 and you avoid all that.
  21. A VPN will help to mask your IP and traffic will be encrypted for the entire journey only if both ends are within the VPN network (note I'm not using the phrase "end-to-end" here). If you use a 3rd party, they will potentially have visibility of all traffic so it would be the same trust issue as a cloud service. This probably isn't an option for VxWorks targets as you are pretty much stuck with what NI have installed.
  22. Because it is far more pragmatic to remote into (and send data out of) devices in offshore oil rigs than it is to send a survival trained engineer via helicopter.
  23. what about for cRIO or PXI?
  24. There is an API for interfacing to RDS. As it is a session-based system, you would need to get the session information and use that to create a unique ID to route your data. Your channel setup process is an almost exact description of the Dispatcher handshake. I notice you have only specified a single connection to a client; I think for network streams the endpoints are dedicated to either writing or reading, so it would be uni-directional only. You are also missing the "dealer" in your description, which needs to copy the data to each endpoint if there are multiple clients to a single service. That may or may not be a requirement in your case, but most of the time it is needed, and you might need to consider control contention if multiple clients are ultimately all to have bi-directional or reverse control channels.
  25. I often think about the security of my LabVIEW applications, but I haven't seen much discussion in the LabVIEW community and almost never see consideration given to securing network communications, even at a trivial level. So I am wondering... What do you do to protect your customers'/companies' data and network applications written in LabVIEW (if anything)? How do you mitigate attacks on your TCPIP communications? What attacks have you seen on your applications/infrastructure? Do you often use encryption? (For what and when?) Do you trust cloud providers with unencrypted sensitive data?