Everything posted by ShaunR
-
But you cannot reference it. You only get an index and the read and write require CRYPTO_EX_DATA
-
Having played a bit, it doesn't look that straightforward. The main idea, it seems, is that you create callbacks that allocate and free the CRYPTO_EX_DATA (which is required for the get and set). But if they are all set to NULL in CRYPTO_get_ex_new_index then you must use a CRYPTO_EX_new, which would have to be a global, and there is no way to associate it with the SSL session. This seems a lot harder than it should be, so maybe I'm not seeing something.
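For reference, the minimal pattern I've been experimenting with looks like this (a sketch; it assumes the NULL-callback route is legal, as the man page for CRYPTO_get_ex_new_index suggests, and the names are mine):

```c
#include <openssl/ssl.h>

/* Register the index once, typically at start-up. All three callbacks are
 * left NULL, so allocation and freeing of the attached data is up to us. */
static int g_conn_idx = -1;

void init_conn_index(void)
{
    g_conn_idx = SSL_get_ex_new_index(0, "per-connection state", NULL, NULL, NULL);
}

/* Attach application data to one particular SSL connection... */
void attach_conn_state(SSL *ssl, void *state)
{
    SSL_set_ex_data(ssl, g_conn_idx, state);
}

/* ...and fetch it back later, e.g. inside a callback that only gets the SSL*. */
void *fetch_conn_state(SSL *ssl)
{
    return SSL_get_ex_data(ssl, g_conn_idx);
}
```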
-
Ooooh. I shall have a play.
-
It's not so much safety; it's that I can have multiple connections (on, say, 127.0.0.1) and I don't want a global shared by all of them. A random value per callback would be OK, but there is no way to tell the verifying callback what the generator chose (hence they use a global). It would have been preferable to be able to define the cookie to be compared, so that cookie generation could be done in the application rather than inside the callback. I'm not sure HMAC is all that useful here either (they use SHA1-HMAC, by the way). Effectively we are just asking "is it mine?" rather than "is it from who it says it is?". They are really relying on the port number from the same address (127.0.0.1, say), and that definitely isn't random and could be repeated. What I've done is just SHA1 the result of SSL_get_rbio(ssl). It's not "cryptographically" random, but it's probably random enough for this purpose (this is for DDoS mitigation rather than hiding secrets; similar reasoning to why we use broken hashes for file integrity) and, unlike their global, it changes on each connect. I could do the whole HMAC thing using SSL_get_rbio(ssl) as the random, but I'm not sure it's really worth the overhead. Can you give an argument in favour?
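For anyone wanting to see it, that approach as a generate/verify pair looks roughly like this (a sketch, not production code; EVP_Digest is used rather than the deprecated SHA1() call, and error handling is minimal):

```c
#include <string.h>
#include <openssl/ssl.h>
#include <openssl/evp.h>

/* Cookie = SHA1 of the rbio pointer value. Not cryptographically random, but
 * it is different for every connection, which is the property wanted here. */
static int generate_cookie(SSL *ssl, unsigned char *cookie, unsigned int *cookie_len)
{
    BIO *rbio = SSL_get_rbio(ssl);

    return EVP_Digest(&rbio, sizeof(rbio), cookie, cookie_len, EVP_sha1(), NULL);
}

static int verify_cookie(SSL *ssl, const unsigned char *cookie, unsigned int cookie_len)
{
    unsigned char expected[EVP_MAX_MD_SIZE];
    unsigned int expected_len = 0;
    BIO *rbio = SSL_get_rbio(ssl);

    if (!EVP_Digest(&rbio, sizeof(rbio), expected, &expected_len, EVP_sha1(), NULL))
        return 0;
    return cookie_len == expected_len && memcmp(cookie, expected, cookie_len) == 0;
}

/* Registered on the DTLS context:
 *   SSL_CTX_set_cookie_generate_cb(ctx, generate_cookie);
 *   SSL_CTX_set_cookie_verify_cb(ctx, verify_cookie);
 */
```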
-
OK. So it seems it's to do with the security level. They are compiled in but disabled at run-time. SSL_CTX_set_security_level states: That last sentence isn't in 1.1.1. The default security level is 1. You have to set it to 0 to get the rest. Now we're cooking!
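So the run-time fix is just something like this (a sketch; assumes a ctx already created from a method, and depending on how the binary was configured the minimum protocol version may also need lowering):

```c
#include <openssl/ssl.h>

/* ctx created elsewhere, e.g. with DTLS_server_method() or TLS_method().
 * Level 0 removes the run-time restrictions that the default level 1 puts
 * on old protocol versions and weak ciphers. */
void allow_legacy_protocols(SSL_CTX *ctx)
{
    SSL_CTX_set_security_level(ctx, 0);

    /* Depending on how the binary was configured, the minimum protocol
     * version may also need lowering; 0 means "lowest supported". */
    SSL_CTX_set_min_proto_version(ctx, 0);
}
```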
-
3.1.0. They weren't disabled in 1.1.1. That post seems to be specifically about Debian, since it says "OpenSSL on Debian 10 is built with TLS 1.0 disabled." You used "no-{tls1|tls1_1}" to disable them at compile time, and using that compile option also removes the TLS1 methods from the binary. The TLS1 methods are available in the 3.1.0 binary.
-
DTLS version 1.0 doesn't work. It looks like they don't compile the TLS 1.0 and TLS 1.1 cipher suites in the default build, even with legacy. There is an option to compile without certain cipher suites (no-{}), which implies they should be enabled by default. Additionally, the no-{} compile options remove the applicable methods for that cipher suite, yet the methods are all present, so that option wasn't used. Compiling with enable-{} doesn't get around the problem. openssl.exe ciphers -s -tls1_1 yields an empty list, which also means that TLS 1.0 and TLS 1.1 don't work either. Running openssl.exe s_server -dtls1 -4 -state -debug against openssl.exe s_client -dtls1 -4 -state -debug yields a Protocol Version Error (70). DTLS 1.2 is fine, however.
-
OK. Got it working (sort of). So you don't "need" the callbacks (they're anti-denial-of-service), but if you do use them then you need a way of matching the cookies between generate and verify. In the examples they use an application global, which is not very useful as I need it per CTX. I had a similar problem with ICMP echo, which I partially solved by polling data on the socket (peeking) and dequeuing when the "cookie" matched. That's not an option here, though. The callbacks aren't very useful unless the cookie can be stored with the CTX or SSL session ... somehow. At least not without a lot of effort to create a global singleton with critical sections and lookups. Any ideas? [A little later] Hmmm. Maybe I can use SSL_get_session and use the pointer to generate a cookie? Or maybe a BIO reference that I can get from the session?
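One option (a sketch, assuming the ex_data API behaves as the man pages describe) would be to hang a random secret off the SSL_CTX itself and recover it inside the callbacks via SSL_get_SSL_CTX; the index variable and helper names here are just illustrative:

```c
#include <openssl/ssl.h>
#include <openssl/rand.h>
#include <openssl/crypto.h>

#define COOKIE_SECRET_LEN 16

static int g_secret_idx = -1;   /* registered once at start-up */

/* Generate a random cookie secret and hang it off this particular SSL_CTX.
 * Freeing is omitted here; a free callback could be registered instead. */
int set_ctx_cookie_secret(SSL_CTX *ctx)
{
    unsigned char *secret;

    if (g_secret_idx == -1)
        g_secret_idx = SSL_CTX_get_ex_new_index(0, "cookie secret", NULL, NULL, NULL);

    secret = OPENSSL_malloc(COOKIE_SECRET_LEN);
    if (secret == NULL || RAND_bytes(secret, COOKIE_SECRET_LEN) != 1)
        return 0;
    return SSL_CTX_set_ex_data(ctx, g_secret_idx, secret);
}

/* Inside the cookie callbacks, recover the secret from the SSL's own CTX. */
unsigned char *get_ctx_cookie_secret(SSL *ssl)
{
    return SSL_CTX_get_ex_data(SSL_get_SSL_CTX(ssl), g_secret_idx);
}
```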
-
Including solicitation of interest from potential acquirers
ShaunR replied to gleichman's topic in LAVA Lounge
I'd be happy if 90% of sales and admin were laid off in every company.
-
I'm not sure of the "convenient wrapper functions" that you refer to, since I had to write quite complicated wrappers around reading TLS packets to get it to behave like the LabVIEW primitives. I think the main issue I have with DTLS is the cookies, which a) are callbacks and b) are not standardized. They also introduced SSL_stateless (a TLS equivalent of DTLSv1_listen) but, as they state: So I've no idea what that's about. All the examples I've seen so far use a different method for generating the cookies, so how interoperability is supposed to work I have no idea. And, unlike TLS, you have to listen before creating a socket and then tell the BIO it's connected. That's a completely different procedure and not one that's easy to merge. I've also seen reports of DTLSv1_listen being broken in some versions but haven't seen anything about it being addressed. It's a mess!
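For reference, the listen-then-connect procedure I mean looks roughly like this (a sketch; it assumes a ctx created with DTLS_server_method() with the cookie callbacks already registered, and fd is an already-bound UDP socket; error paths and the step of connect()ing the socket to the client are trimmed):

```c
#include <openssl/ssl.h>
#include <openssl/bio.h>

/* Returns a connected SSL* or NULL. */
SSL *dtls_accept_one(SSL_CTX *ctx, int fd)
{
    SSL *ssl = SSL_new(ctx);
    BIO *bio = BIO_new_dgram(fd, BIO_NOCLOSE);
    BIO_ADDR *client = BIO_ADDR_new();

    SSL_set_bio(ssl, bio, bio);
    SSL_set_options(ssl, SSL_OP_COOKIE_EXCHANGE);

    /* Returns > 0 only once a ClientHello with a valid cookie has arrived. */
    while (DTLSv1_listen(ssl, client) <= 0)
        ;

    /* The stock examples then connect() the (or a new) UDP socket to 'client'
     * and tell the BIO it is connected before finishing the handshake, e.g.
     *   BIO_ctrl(bio, BIO_CTRL_DGRAM_SET_CONNECTED, 0, client);             */
    if (SSL_accept(ssl) <= 0) {
        SSL_free(ssl);
        ssl = NULL;
    }
    BIO_ADDR_free(client);
    return ssl;
}
```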
-
So. DTLS still not all that great. A few functions you need are macros (one or two are missing). Despite starting off with a similar API to TLS, you are quickly reduced to BIO manipulations and trickery. Honestly, it should just be a choice of instantiating a DTLS or a TLS object but, like so much in OpenSSL, they make you jump through hoops with wildly varying low-level APIs (that they will probably deprecate in the future anyway). I guess my brain just works differently to these developers. So much of what I write is to hide the complexity that they put in.
-
LabVIEW, VIPM packages and tracking of their vulnerabilities.
ShaunR replied to MzazM's topic in LabVIEW General
CVE vulnerabilities will be logged against LabVIEW. All software written in LabVIEW will potentially be vulnerable until NI roll out a fix, and there is little an addon developer can do. If the addon leverages other suppliers' binaries, then that is also a route to vulnerability, but that would be outside NI's remit; the addon developer would need to keep track of the issues associated with the binary distributions that they use. So. The answer is "it's complicated". One of the issues I have had to deal with (and it is probably something you may have to investigate depending on your environment) is that NI distribute OpenSSL binaries. However, their security updates lag years behind the current fixes (currently OpenSSL 1.0.2u 20 Dec 2019, I believe). That was untenable for me and my clients, so I had to come up with a solution that could react much more quickly. I moved away from the NI OpenSSL binaries to my own, compiled from source (currently OpenSSL 3.1.0 14 Mar 2023). That meant I could update the OpenSSL binaries within weeks, or days for serious issues, if necessary.
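As an aside, if you want to check which OpenSSL binary your code is actually loading at run time (as opposed to the headers it was built against), a minimal sketch like this will tell you:

```c
#include <stdio.h>
#include <openssl/crypto.h>

/* Print the version of the OpenSSL library loaded at run time, which can
 * differ from the version the application was compiled against. */
int main(void)
{
    printf("Runtime: %s\n", OpenSSL_version(OPENSSL_VERSION));
    printf("Number:  0x%lx\n", OpenSSL_version_num());
    printf("Headers: %s\n", OPENSSL_VERSION_TEXT);
    return 0;
}
```
-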
LabVIEW, VIPM packages and tracking of their vulnerabilities.
ShaunR replied to MzazM's topic in LabVIEW General
It depends what you mean by "vulnerabilities". Buffer overflows, for example, cannot be managed by addon providers (only NI), while a memory leak of a queue reference can. But let's expand the discussion into what security, if any, is available for addons at all. Many of the network resources aren't even HTTPS. I did start to write an installer which had in-built code signing and verification but haven't worked on it for a while. For code signing and verification it would need to be the installer (JKI) rather than the package provider. LabVIEW packages don't even have basic authenticity verification, which has been a standard feature in package managers for over 20 years.
-
Is this really what we are up in arms about? Sounds like the sort of thing the LabVIEW haters group would be whinging over. I don't have this problem in 2009
-
Back at around version 2, Linux and Mac were supported but only Windows was distributed (see licensing below). Linux was a real pain to maintain so I dropped that. No one was interested in Mac so I dropped that too. There are several reasons why Linux support is unlikely in the future:
1. The NI licencing scheme only works on Windows.
2. There is quite a lot of Windows-specific stuff now, like the use of the Windows Certificate Store and Windows Messaging for ICMP (off the top of my head, but there's more).
3. Linux is a real pain to maintain for (50% chance of any one distribution of any software working out-of-the-box).
4. LabVIEW may be synonymous with the Dodo in the next year or two.
1 has a solution but I'm tentative. 2 is doable with quite a lot of effort. 3 just isn't worth the aggro and 4... well. Interesting times. There are lots of reasons why *not*. Very few for why it should.
-
Tada! Just released 4.3.0 with MQTT support (and, of course, examples). Have fun!
-
Eric Starkloff horse-trades NI shares. The ones we are really interested in are the 50+% owned by the creators and their families. I still have a feeling all these "over by March/April" claims are a kind of propaganda. NI's behaviour doesn't seem to be that of a company about to throw in the towel; quite the contrary. I'm still quietly hopeful it will all fall through, but hey! I'm a programmer! What do I know about this stuff anyway?
-
Just mention GOOP 3 times and it will summon MikaelH - who can tell you everything about it.
-
Aha! Yes. Apologies.
-
As you are the only one that has commented on it at all (indirectly), I think that's a resounding "don't bother". Also means I don't have to look too closely at DTLS just yet.
-
So. That's a resounding "don't bother" on the CoAP then.
-
If Emerson buy NI, you can look forward to no LabVIEW at all.
-
Here is a List API. It seems complicated, but it's a "managed" list with lots of features. Overkill for what you need, but it demonstrates a point. You could use it for your data, but then you'd have a dependency, and I'm guessing you don't want dependencies at this point.

You'll notice it has two items in the cluster: Name and Variant (the variant is a general type; think of it as your cluster). The important part is that it has a "Name". This is a poor replacement for the cluster element name (functionally), but it does enable lookups, even by regular expressions. It serves a very similar purpose but operates at run-time instead of design time. The name is just a string label. It can be anything. Granted, it's not as simple as an enum, but it gives much more flexibility and doesn't require design-time editing of the control to make changes. The trade-off is complexity. Now you have a way of making a list behave like a database, but you need specific functions to look up and return data. This, from one of the list examples, returns all items with a numerical label:

Intuitively you will realise that having duplicate data is not very useful unless you can distinguish between the original and the manipulated. Up until now you have used the cluster with an unbundle, which has served you well, but you are now finding that you need to edit the cluster every time you add a new variant of your data. The label gives you that ability at run-time instead of design time, with a small increase in complexity. However, your biggest issue is compartmentalization: separating config from data. Now, what if the List (aka data array) wasn't tied to a particular data type? Then, by thinking carefully about the labels you use, you would be able to differentiate between the different data types, different devices and different configs.

There are other ways to approach this. But from where you are at present I would suggest this way, at least until you are more confident with your abilities. The database I suggested earlier requires a different mind-set, and classes are a huge learning curve. This is a modest step from where you are to where you want to be.
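If it helps to see the shape of the idea outside LabVIEW, here is a rough sketch of the same concept in C: a list whose items carry a string label plus an opaque payload, with lookup by label. The names are purely illustrative; the real List API is a LabVIEW library.

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Each item carries a name (the run-time "label") and an opaque payload,
 * standing in for the Name + Variant pair in the cluster described above. */
typedef struct item {
    char        *name;
    void        *payload;
    struct item *next;
} item_t;

typedef struct { item_t *head; } list_t;

static void list_add(list_t *list, const char *name, void *payload)
{
    item_t *it = malloc(sizeof(*it));   /* error handling omitted */
    it->name = strdup(name);
    it->payload = payload;
    it->next = list->head;
    list->head = it;
}

/* Lookup by label: the label does the job the cluster element name used to,
 * but it can be chosen (and searched) at run time. */
static void *list_find(const list_t *list, const char *name)
{
    for (item_t *it = list->head; it != NULL; it = it->next)
        if (strcmp(it->name, name) == 0)
            return it->payload;
    return NULL;
}

int main(void)
{
    list_t list = { NULL };
    double setpoint = 42.0;
    const char *port = "COM3";

    /* Careful label naming separates config from data at run time. */
    list_add(&list, "config.device1.port", (void *)port);
    list_add(&list, "data.device1.setpoint", &setpoint);

    printf("port = %s\n", (char *)list_find(&list, "config.device1.port"));
    printf("setpoint = %g\n", *(double *)list_find(&list, "data.device1.setpoint"));
    return 0;
}
```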