Everything posted by ShaunR

  1. SNDCSDLM.dll depends on cvirte.dll. You can check whether it's deployed on the target machine with Dependency Walker. It's always present on development machines, so maybe that's the issue. Something easy to check and eliminate.
  2. What are you queueing? The image array or the U64 reference? (then reading the image in the consumer). You'll need to post your code so we can see what you are doing.
  3. What do you mean "fails"? You are not consuming fast enough?
  4. OK. So I now have a 2D array (2 x 264 bytes). If IDX == 0, the bytes are copied from the first row to the second and the random bytes in the first row are regenerated. If verification fails against the random bytes in the first row, it looks in the second row. If that fails too, then they are hammering the connection and don't deserve to be let in. It means that every 255 client hellos we might have the overhead of one extra SHA1 hash to calculate. We can easily live with that (see the sketch below). Any other ideas?
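     A sketch of just the parts that change, relative to the single-row scheme in the next post (the full sketch is there); RAND_bytes and plain SHA1 are my assumptions for the random source and hash:

         #include <string.h>
         #include <stdint.h>
         #include <openssl/sha.h>
         #include <openssl/rand.h>

         /* Row 0 is the current epoch; row 1 keeps the previous epoch so a
            cookie issued just before a roll-over can still be verified. */
         static unsigned char RNDARR[2][264];
         static uint8_t IDX = 0;

         /* In the generate callback, the roll-over handling becomes: */
         static int epoch_rollover(void)
         {
             if (IDX == 0) {
                 memcpy(RNDARR[1], RNDARR[0], sizeof RNDARR[0]); /* save epoch */
                 if (RAND_bytes(RNDARR[0], sizeof RNDARR[0]) != 1)
                     return 0;                                   /* new epoch */
             }
             return 1;
         }

         /* The verify side tries the current row, then the previous one; the
            worst case is one extra SHA1 every 255 client hellos. */
         static int verify_rows(const unsigned char *cookie,
                                unsigned int cookie_len, uint64_t session_ref)
         {
             unsigned char buf[16], expected[SHA_DIGEST_LENGTH];
             uint8_t idx_v;

             if (cookie_len != SHA_DIGEST_LENGTH + 1)
                 return 0;
             idx_v = cookie[SHA_DIGEST_LENGTH];   /* last byte is the index */
             for (int row = 0; row < 2; row++) {
                 memcpy(buf, &RNDARR[row][idx_v], 8);
                 memcpy(buf + 8, &session_ref, 8);
                 SHA1(buf, sizeof buf, expected);
                 if (memcmp(cookie, expected, SHA_DIGEST_LENGTH) == 0)
                     return 1;
             }
             return 0;   /* neither epoch matched: reject */
         }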
  5. That was a crap idea (awfully complicated and memory-bloaty) so I thought of something else... bear with me (it's easier done than said). Generate an array of 264 cryptographically random bytes (global); let's call it RNDARR. Create an index (global UINT8); let's call it IDX.

     For the generate callback:
       • If IDX == 0, initialise RNDARR with 264 new bytes (so we get new bytes whenever we roll over, as it's a UINT8).
       • Take 8 bytes of RNDARR at IDX.
       • Concatenate those 8 bytes with 8 bytes of the SSL session reference (UInt64 as bytes).
       • SHA1-hash the 16-byte concatenated array (one of the fastest hashes and only 20 bytes).
       • Append IDX to the SHA1 hash and present the 21 bytes as the cookie.
       • Increment IDX.

     For the verify callback:
       • Take the last byte of the cookie and use it as an index; let's call it IDX_V.
       • Take 8 bytes of RNDARR at IDX_V.
       • Concatenate those 8 bytes with 8 bytes of the SSL session reference (UInt64 as bytes).
       • SHA1-hash the 16-byte array.
       • Compare the SHA1 hash with the cookie, ignoring the cookie's last byte.

     So that should mean we have a session-dependent random-number hash that is shared between callbacks. We get a unique hash on every client hello and it doesn't matter if the session is reused, since the hash relies on the 8 random bytes. (I'm still convinced we don't need an HMAC, but we could use one instead of a straight SHA1.) Oh, and it's fast. Very fast. A sketch follows below.

     There is one corner case: when IDX rolls over while a hash is in flight (created with IDX = 255), the array is repopulated with new random data, so the 8 random bytes used for the hash are no longer available for verification. In practice OpenSSL retries, so it's not an issue, but I will think about a proper solution (if you have an idea, let me know).
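     A minimal, single-threaded C sketch of the above, assuming the "SSL session reference" is the SSL_SESSION pointer from SSL_get_session() and using RAND_bytes() as the random source (both assumptions on my part; SHA1() is deprecated in 3.0 in favour of EVP_Digest(), but it keeps the sketch short):

         #include <string.h>
         #include <stdint.h>
         #include <openssl/ssl.h>
         #include <openssl/sha.h>
         #include <openssl/rand.h>

         /* 264 bytes so an 8-byte window starting at any UINT8 index fits. */
         static unsigned char RNDARR[264];
         static uint8_t IDX = 0;

         static int generate_cookie(SSL *ssl, unsigned char *cookie,
                                    unsigned int *cookie_len)
         {
             unsigned char buf[16];
             uint64_t ref = (uint64_t)(uintptr_t)SSL_get_session(ssl);

             if (IDX == 0 && RAND_bytes(RNDARR, sizeof RNDARR) != 1)
                 return 0;                       /* refill on roll-over */

             memcpy(buf, &RNDARR[IDX], 8);       /* 8 random bytes at IDX */
             memcpy(buf + 8, &ref, 8);           /* 8 bytes of session ref */
             SHA1(buf, sizeof buf, cookie);      /* 20-byte hash */
             cookie[SHA_DIGEST_LENGTH] = IDX;    /* append IDX: 21 bytes */
             *cookie_len = SHA_DIGEST_LENGTH + 1;
             IDX++;                              /* UINT8, wraps at 255 */
             return 1;
         }

         static int verify_cookie(SSL *ssl, const unsigned char *cookie,
                                  unsigned int cookie_len)
         {
             unsigned char buf[16], expected[SHA_DIGEST_LENGTH];
             uint64_t ref = (uint64_t)(uintptr_t)SSL_get_session(ssl);
             uint8_t idx_v;

             if (cookie_len != SHA_DIGEST_LENGTH + 1)
                 return 0;
             idx_v = cookie[SHA_DIGEST_LENGTH];  /* last byte is the index */
             memcpy(buf, &RNDARR[idx_v], 8);
             memcpy(buf + 8, &ref, 8);
             SHA1(buf, sizeof buf, expected);
             return memcmp(cookie, expected, SHA_DIGEST_LENGTH) == 0;
         }

     The callbacks would be registered with SSL_CTX_set_cookie_generate_cb() and SSL_CTX_set_cookie_verify_cb().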
  6. I resorted to a global key->value lookup table. I feel dirty.
  7. I need something similar (session based variable that can be accessed by callbacks) for PSK too, it seems.
  8. Hmm. Using the rbio or SSL_CTX isn't good enough. Repeats can occur roughly 1 in 30 times. Need to use a proper random source... somehow.
  9. But you cannot reference it. You only get an index, and the read and write require CRYPTO_EX_DATA.
  10. Having played a bit, it doesn't look that straightforward. The main idea, it seems, is that you create callbacks that allocate and free the CRYPTO_EX_DATA (which is required for the get and set), but if they are all set to NULL in CRYPTO_get_ex_new_index then you must use a CRYPTO_EX_new, which would have to be a global, and there is no way to associate it with the SSL session. This seems a lot harder than it should be, so maybe I'm not seeing something (see the sketch below).
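     For anyone hitting the same wall: with all the callbacks set to NULL, the application can still attach its own pointer to the SSL object via ex_data and free it itself. A minimal sketch (the function and index names are mine):

         #include <stdlib.h>
         #include <string.h>
         #include <openssl/ssl.h>

         static int g_secret_idx = -1;   /* obtained once, process-wide */

         /* All callbacks NULL: the application owns the stored pointer and
            must free it itself (e.g. just before SSL_free). */
         int secret_index_init(void)
         {
             g_secret_idx = SSL_get_ex_new_index(0, "cookie secret",
                                                 NULL, NULL, NULL);
             return g_secret_idx >= 0;
         }

         /* Attach an 8-byte secret to one SSL session. */
         int secret_attach(SSL *ssl, const unsigned char secret[8])
         {
             unsigned char *copy = malloc(8);
             if (copy == NULL)
                 return 0;
             memcpy(copy, secret, 8);
             return SSL_set_ex_data(ssl, g_secret_idx, copy);
         }

         /* Read it back from inside a callback. */
         const unsigned char *secret_lookup(SSL *ssl)
         {
             return SSL_get_ex_data(ssl, g_secret_idx);
         }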
  11. It's not so much safety, but I can have multiple connections (on, say, 127.0.0.1) and I don't want a global shared across all the connections. A random value per callback would be OK, but there is no way to tell the verifying callback what the generator chose (hence they use a global). It would have been preferable to be able to define the cookie to be compared, so that cookie generation could be done in the application rather than inside the callback.

     I'm not sure HMAC is all that useful here either (they use SHA1-HMAC, by the way). Effectively we are just asking "is it mine?" rather than "is it from who it says it is?". They are really relying on the port number from the same address (127.0.0.1, say), and that definitely isn't random and could be repeated.

     What I've done is just SHA1 the result of SSL_get_rbio(ssl). It's not "cryptographically" random, but it is probably random enough for this purpose (this is for DDoS protection rather than hiding secrets; similar reasoning to why we use broken hashes for file integrity) and, unlike their global, it changes on each connect (see the sketch below). I could do the whole HMAC thing using SSL_get_rbio(ssl) as the random input, but I'm not sure it's really worth the overhead. Can you give an argument in favour?
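     Roughly like this (a minimal sketch; the straight SHA1 over the pointer value is exactly as unsophisticated as it sounds):

         #include <string.h>
         #include <openssl/ssl.h>
         #include <openssl/sha.h>

         /* Hash the rbio pointer value as a connection-specific cookie. Not
            cryptographically random, but it changes on every connect. */
         static int generate_cookie(SSL *ssl, unsigned char *cookie,
                                    unsigned int *cookie_len)
         {
             BIO *rbio = SSL_get_rbio(ssl);

             SHA1((const unsigned char *)&rbio, sizeof rbio, cookie);
             *cookie_len = SHA_DIGEST_LENGTH;    /* 20 bytes */
             return 1;
         }

         static int verify_cookie(SSL *ssl, const unsigned char *cookie,
                                  unsigned int cookie_len)
         {
             unsigned char expected[SHA_DIGEST_LENGTH];
             BIO *rbio = SSL_get_rbio(ssl);

             SHA1((const unsigned char *)&rbio, sizeof rbio, expected);
             return cookie_len == SHA_DIGEST_LENGTH
                 && memcmp(cookie, expected, SHA_DIGEST_LENGTH) == 0;
         }

         /* Registered with:
            SSL_CTX_set_cookie_generate_cb(ctx, generate_cookie);
            SSL_CTX_set_cookie_verify_cb(ctx, verify_cookie);       */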
  12. OK. So it seems it's to do with the security level. They are compiled in but disabled at run time, as the SSL_CTX_set_security_level documentation states (that last sentence isn't in the 1.1.1 docs). The default security level is 1; you have to set it to 0 to get the rest (see below). Now we're cooking!
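     I.e. (ctx being whatever SSL_CTX you have already created):

         #include <openssl/ssl.h>

         /* ctx: an already-created SSL_CTX, e.g. from
            SSL_CTX_new(DTLS_server_method()). Level 0 lifts the run-time
            filtering that the default level 1 applies to old cipher suites. */
         void allow_legacy_ciphers(SSL_CTX *ctx)
         {
             SSL_CTX_set_security_level(ctx, 0);
         }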
  13. 3.1.0. They weren't disabled in 1.1.1. That post seems to be specific to Debian, since it says "OpenSSL on Debian 10 is built with TLS 1.0 disabled." You used "no-{tls1|tls1_1}" to disable them at compile time, and that compile option also removes the TLS1 methods from the binary. The TLS1 methods are present in the 3.1.0 binary.
  14. DTLS version 1.0 doesn't work. It looks like they don't compile the TLS 1.0 and TLS 1.1 cipher suites in the default build with legacy. There is an option to compile without certain cipher suites (no-{...}), which implies they should be enabled by default. Additionally, the no-{...} compile options remove the applicable methods for that cipher suite; however, the methods are all available. Compiling with enable-{...} doesn't get around the problem.

     Running

         openssl.exe ciphers -s -tls1_1

     yields an empty list, which also means that TLS 1.0 and TLS 1.1 don't work either. Running

         openssl.exe s_server -dtls1 -4 -state -debug
         openssl.exe s_client -dtls1 -4 -state -debug

     yields a Protocol Version Error (70). DTLS 1.2 is fine, however.
  15. OK. Got it working (sort of). So you don't "need" the callbacks (they're anti-denial-of-service), but if you do use them then you need a way of matching the cookies between the generate and the verify. In the examples they use an application global, which is not very useful as I need it per CTX. I had a similar problem with ICMP echo, which I partially solved by polling data on the socket (peeking) and dequeuing when the "cookie" matched. That's not an option here, though. The callbacks aren't very useful unless the cookie can be stored with the CTX or SSL session... somehow. At least not without a lot of effort to create a global singleton with critical sections and lookups. Any ideas?

     [A little later] Hmmm. Maybe I can use SSL_get_session and use the pointer to generate a cookie? Or maybe a BIO reference that I can get from the session?
  16. I'd be happy if 90% of sales and admin were laid off in every company.
  17. I'm not sure which "convenient wrapper functions" you are referring to, since I had to write quite complicated wrappers around reading TLS packets to get it to behave like the LabVIEW primitives. I think the main issue I have with DTLS is the cookies, which a) are callbacks and b) are not standardized. They also introduced SSL_stateless - a TLS equivalent of DTLSv1_listen - but going by what they state about it, I've no idea what that's about. All the examples I've seen so far use a different method for generating the cookies, so how interoperability is supposed to work, I have no idea. And, unlike TLS, you have to listen before creating a socket and then tell the BIO it's connected (see the sketch below). That's a completely different procedure and not one that's easy to merge. I've also seen reports of DTLSv1_listen being broken in some versions but haven't seen anything about it being addressed. It's a mess!
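     For reference, the shape of that procedure (a sketch assuming an already-bound UDP socket fd and a configured DTLS ctx; the helper name is mine):

         #include <openssl/ssl.h>
         #include <openssl/bio.h>

         /* DTLS accept flow: listen first, then mark the BIO connected.
            fd is an already-bound UDP socket; ctx a configured DTLS SSL_CTX;
            peer comes from BIO_ADDR_new() and receives the client address. */
         SSL *dtls_listen_one(SSL_CTX *ctx, int fd, BIO_ADDR *peer)
         {
             BIO *bio = BIO_new_dgram(fd, BIO_NOCLOSE);
             SSL *ssl = SSL_new(ctx);

             SSL_set_bio(ssl, bio, bio);
             SSL_set_options(ssl, SSL_OP_COOKIE_EXCHANGE); /* cookie cbs */

             /* Returns > 0 once a ClientHello with a valid cookie arrives. */
             while (DTLSv1_listen(ssl, peer) <= 0)
                 ;

             /* Next: connect() the socket to peer, tell the BIO it is
                connected (BIO_CTRL_DGRAM_SET_CONNECTED), then SSL_accept(). */
             return ssl;
         }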
  18. So. DTLS is still not all that great. A few functions you need are macros (and one or two are missing). Despite starting off with a similar API to TLS, you are quickly reduced to BIO manipulations and trickery. Honestly, it should just be a choice of instantiating a DTLS or TLS object but, like so much in OpenSSL, they make you jump through hoops with wildly varying low-level APIs (which they will probably deprecate in the future anyway). I guess my brain just works differently to these developers'. So much of what I write is to hide the complexity that they put in.
  19. CVE vulnerabilities will be logged against LabVIEW. All software written in LabVIEW will potentially be vulnerable until NI roll out a fix; there is little an addon developer can do. If the addon leverages other suppliers' binaries, then that is also a route to vulnerability, but one outside NI's remit: the addon developer would need to keep track of the issues associated with the binary distributions they use. So the answer is "it's complicated".

     One of the issues I have had to deal with (and probably something you may have to investigate, depending on your environment) is that NI distribute OpenSSL binaries. However, their security updates lag years behind the current fixes (currently OpenSSL 1.0.2u, 20 Dec 2019, I believe). That was untenable for me and my clients, so I had to come up with a solution that could react much more quickly. I moved away from the NI OpenSSL binaries to my own, compiled from source (currently OpenSSL 3.1.0, 14 Mar 2023). That means I can update the OpenSSL binaries within weeks, or days for serious issues, if necessary.
  20. It depends what you mean by "vulnerabilities". Buffer overflows, for example, cannot be managed by addon providers (only NI), while a memory leak of a queue reference can. But let's expand the discussion to what security is available for addons at all. Many of the network resources aren't even HTTPS. I did start to write an installer with built-in code signing and verification but haven't worked on it for a while. For code signing and verification it would need to be the installer (JKI) rather than the package provider. LabVIEW packages don't even have basic authenticity verification, which has been a standard feature in package managers for over 20 years.
  21. Is this really what we are up in arms about? It sounds like the sort of thing the LabVIEW haters group would be whinging over. I don't have this problem in 2009.