

Posts posted by ShaunR

  1. DTLS version 1.0 doesn't work.

    It looks like they don't compile the TLS 1.0 and TLS 1.1 cipher suites into the default build, even with legacy enabled. There is an option to compile without certain cipher suites (no-{}), which implies they should be enabled by default. Additionally, the no-{} compile options remove the applicable methods for that cipher suite; however, the methods are all available. Compiling with enable-{} doesn't get around the problem. Running openssl.exe ciphers -s -tls1_1 yields an empty list. This also means that TLS 1.0 and TLS 1.1 don't work either.

    Using openssl.exe s_server -dtls1 -4 -state -debug and openssl.exe s_client -dtls1 -4 -state -debug yields a Protocol Version Error (70).


    DTLS 1.2 is fine, however.

  2. OK. Got it working (sort of).

    So you don't "need" the callbacks (they exist to mitigate denial of service), but if you do use them then you need a way of matching the cookies between generate and verify. In the examples they use an application global, which is not very useful as I need it per CTX. I had a similar problem with ICMP echo, which I partially solved by polling data on the socket (peeking) and dequeuing when the "cookie" matched. That's not an option here, though.

    The callbacks aren't very useful unless the cookie can be stored with the CTX or SSL session ... somehow. At least not without a lot of effort to create a global singleton with critical sections and lookups.

    Any ideas?

     

    [A little later]

    Hmmm. Maybe I can use SSL_get_session and use the pointer to generate a cookie? Or maybe a BIO reference that I can get from the session?
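    For what it's worth, the usual way to make generate and verify agree without an application global is to derive the cookie from the client's transport address using a secret stored on the context itself (roughly what RFC 6347 recommends). A minimal Python sketch of the idea - DtlsContext and the method names are mine, not OpenSSL's:

```python
import hmac
import hashlib
import os

class DtlsContext:
    """Stands in for a per-CTX object: the cookie secret lives with the
    context, not in an application global."""
    def __init__(self):
        self._secret = os.urandom(32)  # fresh random secret per context

    def generate_cookie(self, peer_addr: str, peer_port: int) -> bytes:
        # Cookie = HMAC(per-context secret, client transport parameters).
        msg = f"{peer_addr}:{peer_port}".encode()
        return hmac.new(self._secret, msg, hashlib.sha256).digest()

    def verify_cookie(self, cookie: bytes, peer_addr: str, peer_port: int) -> bool:
        # Recompute from the same secret and compare in constant time;
        # no state about the outstanding cookie needs to be stored.
        expected = self.generate_cookie(peer_addr, peer_port)
        return hmac.compare_digest(cookie, expected)

ctx = DtlsContext()
c = ctx.generate_cookie("192.0.2.1", 5684)
print(ctx.verify_cookie(c, "192.0.2.1", 5684))            # True
print(ctx.verify_cookie(c, "192.0.2.2", 5684))            # False: different peer
print(DtlsContext().verify_cookie(c, "192.0.2.1", 5684))  # False: different context secret
```

    In the C API the equivalent would presumably be to hang the secret off the CTX with SSL_CTX_set_ex_data and fetch it inside the cookie callbacks via SSL_get_SSL_CTX plus SSL_CTX_get_ex_data, though I haven't verified that exact pattern against a current OpenSSL.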

  3. On 4/2/2023 at 3:20 PM, Rolf Kalbermatter said:

    I can clearly see what you are talking about. DTLS is a second-class citizen, as you do indeed have to do BIO trickery. For TLS, OpenSSL has convenient wrapper functions.
     

    However, I appreciate the OpenSSL developers' troubles in trying to shove an intermediate layer into the socket interface and, to make matters worse, trying to get it to work on multiple platforms. While both Linux sockets and WinSock are in principle based on BSD sockets, and this works surprisingly smoothly on all these platforms with the same source code for basic TCP and even UDP sockets, things soon get nasty when trying to do advanced things such as what OpenSSL needs to do. Under Windows this should really be solved with a socket filter driver installed into WinSock, but that is considerably different from how things would be done on BSD, and pretty much impossible on Linux without integrating it into the kernel or trying to hack it into pcap.

    OpenSSL is clearly a compromise. The alternative would be OS specific protocol filter drivers and there are very few of them and none that supports multiple OSes.

    I'm not sure which "convenient wrapper functions" you refer to, since I had to write quite complicated wrappers around reading TLS packets to get them to behave like the LabVIEW primitives.

    I think the main issue I have with DTLS is the cookies, which a) are callbacks :frusty: and b) are not standardized. They also introduced SSL_stateless - a TLS equivalent of DTLSv1_listen - but, as they state:

    Quote

    TLSv1.3 is designed to operate over a stream-based transport protocol (such as TCP). If TCP is being used then there is no need to use SSL_stateless()

    So I've no idea what that's about.

    All the examples I've seen so far use a different method for generating the cookies, so how interoperability is supposed to work, I have no idea. And, unlike TLS, you have to listen before creating a socket and then tell the BIO it's connected. That's a completely different procedure, and not one that's easy to merge.

    I've also seen reports of DTLSv1_listen being broken in some versions, but haven't seen anything about it being addressed.

    It's a mess!

  4. On 3/14/2023 at 1:54 PM, ShaunR said:

    Last time I looked it was at about 1.1.1e. I don't think it was much better. I bypassed it in the end because it needed callbacks for cookies, and I wasn't prepared to do that at the time. I'm hoping they've moved on from there with full-blown certificate verification, but if they haven't, I now have a place for callbacks in the API.

    So. DTLS is still not all that great. A few functions you need are macros (and one or two are missing). Despite starting off with an API similar to TLS, you are quickly reduced to BIO manipulations and trickery. Honestly, it should just be a choice of instantiating a DTLS or TLS object but, like so much in OpenSSL, they make you jump through hoops with wildly varying low-level APIs (which they will probably deprecate in the future anyway).

    I guess my brain just works differently to these developers. So much of what I write is to hide the complexity that they put in.

  5. 16 hours ago, MzazM said:

    Hi @ShaunR, thanks for your answer.

    Indeed, knowing that, as of now, JKI does not provide authenticity verification of the packages we install is worrying. It would not be easy to defend (read: get approval for) the cybersecurity and integrity of software built and deployed using such libraries. This is indeed a problem, and I think it should be a feature request on the VIPM forums. I will follow up there and post a link here if you (and others) are interested.

    Going back to my original question: what I meant is that cybersecurity vulnerabilities are normally published in CVE lists (for example the one for LabVIEW), so that users can get notified and decide whether or not to patch the application. In my understanding there is no way to tell whether the packages we install are affected by a security vulnerability that leads to a vulnerability in the application using them. For example, this happened with the famous Log4j vulnerability a few months back, where millions of applications and devices were suddenly exposed. Is such a list available in the world of LabVIEW package management?

    CVE vulnerabilities will be logged against LabVIEW. All software written in LabVIEW will potentially be vulnerable until NI roll out a fix; there is little an addon developer can do. If the addon leverages other suppliers' binaries, then that is also a route to vulnerability, but it would be outside NI's remit - the addon developer would need to keep track of the issues associated with the binary distributions that they use. So the answer is "it's complicated".

    One of the issues I have had to deal with (and probably something you may have to investigate, depending on your environment) is that NI distribute OpenSSL binaries. However, their security updates lag years behind the current fixes (currently OpenSSL 1.0.2u, 20 Dec 2019, I believe). That was untenable for me and my clients, so I had to come up with a solution that could react much more quickly. I moved away from the NI OpenSSL binaries to my own, compiled from source (currently OpenSSL 3.1.0, 14 Mar 2023). That meant I could update the OpenSSL binaries within weeks, or within days for serious issues, if necessary.

  6. It depends what you mean by "vulnerabilities". Buffer overflows, for example, cannot be managed by addon providers (only NI) while a memory leak of a queue reference can.

    But let's expand the discussion into what security, if any, is available for addons at all. Many of the network resources aren't even HTTPS. :frusty:

    I did start to write an installer which had built-in code signing and verification, but I haven't worked on it for a while. For code signing and verification it would need to be the installer (JKI) rather than the package provider. LabVIEW packages don't even have basic authenticity verification, which has been a standard feature in package managers for over 20 years.

  7. 12 minutes ago, codcoder said:

    But this isn't LabVIEW specific. There are a lot of situations in corporate environments where you simply don't always upgrade to the latest version of a piece of software. It's simply too risky.

     

    On 10/14/2017 at 6:07 PM, ShaunR said:

    Changing versions is a huge project risk. You may get your old bug fixed (not guaranteed, though), but there will be other, new ones, and anyone who converts mid-project is insane. In fact, I would argue that anyone who upgrades before SP1 is out is also insane.

  8. 1 hour ago, Antoine Chalons said:

    Any chance the Encryption Compendium ever supports Linux (Ubuntu)?

    Back at around version 2, Linux and Mac were supported but only Windows was distributed (see licensing below). Linux was a real pain to maintain, so I dropped it. No one was interested in Mac, so I dropped that too.

    There are several reasons why Linux support is unlikely in the future:

    1. The NI licensing scheme only works on Windows.
    2. There is quite a lot of Windows-specific stuff now, like the use of the Windows Certificate Store and Windows Messaging for ICMP (off the top of my head, but there's more).
    3. Linux is a real pain to maintain for (a 50% chance of any one distribution of any software working out of the box).
    4. LabVIEW may be synonymous with the dodo in the next year or two.

    1 has a solution, but I'm tentative. 2 is doable with quite a lot of effort. 3 just isn't worth the aggro, and 4 - well. Interesting times.

    There are lots of reasons why *not*, and very few why it should.

  9. Eric Starkloff horse-trades NI shares. The ones we are really interested in are the 50+% owned by the creators and their families.

    I still have a feeling all these "over by March/April" claims are a kind of propaganda. NI's behaviour doesn't seem to be that of a company about to throw in the towel - quite the contrary. I'm still quietly hopeful it will all fall through, but hey, I'm a programmer! What do I know about this stuff anyway? :lol:

  10. 5 hours ago, Mahbod Morshedi said:

    Any good material you can suggest for me to start with GOOP would be appreciated in the meantime.

    Just mention GOOP 3 times and it will summon MikaelH - who can tell you everything about it.

  11. 12 minutes ago, Rolf Kalbermatter said:

    Well. It's more likely a very resounding "I have no idea if I'm ever going to need that. For now I just refrain from commenting on the matter!" 😎

    As you are the only one that has commented on it at all (indirectly), I think that's a resounding "don't bother".

    Also means I don't have to look too closely at DTLS just yet :rolleyes:

  12. 11 hours ago, Mahbod Morshedi said:

    I totally agree, and in fact I had that format originally. However, arrays get rid of the cluster names and replace them with the first item's name, and for me keeping track was becoming difficult even with documentation. It was just easier to have clusters that can have different names. I know that I could also use a simple enum for indexing, but that would add extra data to my system.
    I also have duplicates of BG and HRS as original data and manipulated data. This way, when I want, I can revert by replacing the data with the original.

    I am very new to programming, I do not have experience in data organisation and know almost nothing about LabVIEW classes, and the NI documentation is just too simple and does not cover any real-life application.
    That is why I was asking for help.

    Cheers,

    Here is a List API. It seems complicated, but it's a "managed" list with lots of features - overkill for what you need, but it demonstrates a point. You could use it for your data, but then you'd have a dependency, and I'm guessing you don't want dependencies at this point.

    [image: the List API]

    You'll notice it has two items in the cluster - Name and Variant (a variant is a general type; think of it as your cluster). The important part is that it has a "Name". This is a poor replacement for the cluster name (functionally), but it does enable lookups, even by regular expressions. It serves a very similar purpose but operates at run-time instead of design time.

    The name is just a string label; it can be anything. Granted, it's not as simple as an enum, but it gives much more flexibility and doesn't require design-time editing of the control to make changes. The trade-off is complexity: now you have a way of making a list behave like a database, but you need specific functions to look up and return data.

    This, from one of the list examples, returns all items with a numerical label:

    [image: list example returning all items with a numerical label]

    Intuitively, you will realise that having duplicate data is not very useful unless you can distinguish between the original and the manipulated. Up until now you have used the cluster with an unbundle, which has served you well, but you are now finding that you need to edit the cluster every time you add a new variant of your data. The label gives you that ability at run-time instead of design time, with a small increase in complexity.
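    The run-time lookup that the label enables can be sketched in Python rather than LabVIEW (the item names and the lookup helper here are made up for illustration):

```python
import re

# Each item carries a run-time name label plus an arbitrary value
# (the value plays the role of the Variant in the list's cluster).
items = [
    ("BG_original",  [1.0, 2.0]),
    ("BG_processed", [1.1, 2.1]),
    ("HRS_original", [3.0, 4.0]),
    ("42",           "numeric label"),
]

def lookup(items, pattern):
    """Return all (name, value) pairs whose name matches the regex."""
    rx = re.compile(pattern)
    return [(n, v) for n, v in items if rx.fullmatch(n)]

print(lookup(items, r"\d+"))    # only the item with a purely numerical label
print(lookup(items, r"BG_.*"))  # all BG variants, original and processed
```

    Because the label is matched at run-time, adding "BG_filtered" later is just appending an item; nothing structural needs editing.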

    However, your biggest issue is compartmentalization - separating config from data. Now, what if the List (a.k.a. the data array) wasn't tied to a particular data type? Then, by thinking carefully about the labels you use, you would be able to differentiate between the different data types, devices and configs.

    There are other ways to approach this, but from where you are at present I would suggest this way - at least until you are more confident in your abilities. The database I suggested earlier requires a different mind-set, and classes are a huge learning curve. This is a modest step from where you are to where you want to be.

  13. 25 minutes ago, Mahbod Morshedi said:

    That is why I wanted to use a class to help me with ubn and bbn of the cluster data.

    If you are going the way I think you are, all you will do is swap [un]bundles for VIs and add a lot of boilerplate. To be fair, your monster cluster isn't really that much of a monster - more like a tribble.

    My advice (if you're not going to re-architect) would be just to split the data out from the config. Your All Scan data BG and HRS are actually in an identical format, so you could rationalise them into a single array, which will make adding more of the same format easier (just add and index into the array - no need to modify the cluster to add more data). Everything else seems to be config data.
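    A sketch of that split, in Python for brevity (BG and HRS are the field names from your post; the structure itself is hypothetical):

```python
# Config stays its own small structure, separate from the data.
config = {"device": "scanner-1", "rate_hz": 100}

# Same-format datasets live in one name-indexed collection instead of
# individual cluster fields.
scans = {
    "BG":  [1, 2, 3],
    "HRS": [10, 11, 12],
}

# Adding another dataset of the same format needs no structural change -
# no cluster to edit, just a new entry:
scans["BG_filtered"] = [x * 2 for x in scans["BG"]]

print(sorted(scans))        # ['BG', 'BG_filtered', 'HRS']
print(scans["BG_filtered"]) # [2, 4, 6]
```

    The LabVIEW equivalent would be an array (or the List above) keyed by label, with the config left in its own cluster.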

  14. 20 hours ago, LogMAN said:

    break down your complex and complicated data types into simple and uncomplicated ones.

    Break down your complex and complicated data types into a complex and complicated architecture. :P FTFY

  15. 2 hours ago, Rolf Kalbermatter said:

    Most likely because of its use of DTLS. 😁 OpenSSL's support for this was fairly "flaky" back when I did my network library. Many problems surrounded it, some of them actually kind of unfixable within the DTLS standard at that time. Now, this was around OpenSSL 0.9.6 or so, so I would assume that a lot has changed since.

    And yes, I got it to work, but had only done minimal testing with it. It was clear that more extended use would sooner or later bring out trouble - some for sure in my interpretation of the OpenSSL API at the time, but some also unfixable for me without changing OpenSSL itself.

    Last time I looked it was at about 1.1.1e. I don't think it was much better. I bypassed it in the end because it needed callbacks for cookies, and I wasn't prepared to do that at the time. I'm hoping they've moved on from there with full-blown certificate verification, but if they haven't, I now have a place for callbacks in the API.

  16. Quote

     

    In LabVIEW 2014 and later, the PID and Fuzzy Logic Toolkit is included natively within LabVIEW Full and Professional Development Systems and does not require a separate license, installation, or activation.

     

    source

    I guess you don't have the Full or Professional LabVIEW?

  17. 14 hours ago, Jordan Kuehn said:

    I have had a passing interest. I think you were extolling its virtues some time ago, but when I saw I'd need to build the LabVIEW implementation of the protocol from scratch I lost interest. I also didn't see a lot of wide adoption at the time in the areas where I was working, but that is probably worth a fresh look.

    Building the protocol from scratch isn't a barrier for me - I'm on a roll :D The difficulty is that it requires DTLS (the UDP version of TLS). DTLS is something I've played with in the past, and it was somewhat awkward to integrate into what I currently have, so I moved past it and on to other features that I desperately wanted. CoAP would force me to look at DTLS again, as it is something I've wanted but never had a need for.

    IMO CoAP is a far superior protocol to MQTT. I don't really understand why MQTT gets so much love.
