
Posts posted by ShaunR

  1. 1 hour ago, acb said:

    What types of day-to-day tasks would you want Nigel to help you with?

    HTML Help files (manuals). Documentation is by far the greatest resource hog, and we should be able to get away from programmer-created VI descriptions. But I don't just mean documenting VIs. I also mean generating API references, menu layouts, including external references, and collating examples with descriptive comments about their function and what they demonstrate (from the code in the diagrams).

    A far better DLL importer. The one we currently have is next to useless for 80% of DLLs. I'd like an AI that can do what Rolf does :lol: . It should be able to import the DLL and create the VIs containing CLFNs for complex structures (e.g. strings nested in structures), callbacks (he, he), events and error handling.

    • Like 1
  2. 11 hours ago, X___ said:

    It's heavily scripted, but on face value, that looks promising.

    I was mesmerised by the cheesy forced grin of the guy demonstrating. Why does AI type so slowly?

    I agree it's looking promising. It's at the level of an intern, but one that actually listens to you. Linear coding is never the final solution. I expect it was tuned for very specific requirements, but I was more impressed with it reading the emails and specs and interpreting them in context.

    Would suffice for Unit Test cases and feasibility prototypes. Looking forward to getting my hands on it.

  3. 46 minutes ago, crossrulz said:

    The announcement for the acquisition did happen in early April.  Now it is just waiting for approval by the US government and who knows what other legal red tape for it to actually happen.  One of the articles mentioned the acquisition finalization was expected to be in Emerson's financial 2024 H1, which starts in October.  Do not expect to hear anything until that happens.

    So I can retire in October? :D

  4. 14 hours ago, Rolf Kalbermatter said:

    You clearly have not much C programming experience.

    typedef struct
    {
    	int32_t firstInt;
    	int32_t secondInt;
    	int32_t thirdInt;
    	LStrHandle lvString;
    } MyStruct, *MyStructPtr;
    
    
    MgErr CreateStringHandle(LStrHandle *lvStringHandle, char* stringData)
    {
    	MgErr err = mgNoErr;
    	size_t len = strlen(stringData);
    	if (*lvStringHandle)
    	{
    		err = DSSetHandleSize(*lvStringHandle, sizeof(int32_t) + len);
    	}
    	else
    	{
    		*lvStringHandle = DSNewHandle(sizeof(int32_t) + len);
    		if (!*lvStringHandle)
    			err = mFullErr;
    	}
    	if (!err)
    	{
    		MoveBlock(stringData, LStrBuf(**lvStringHandle), len);
    		LStrLen(**lvStringHandle) = (int32_t)len;
    	}
    	return err;
    }
    
    MgErr SendStringInStructToLV(LVUserEventRef *userEvent)
    {
    	MyStruct structure = {1, 2, 3, NULL};
    	MgErr err = CreateStringHandle(&structure.lvString, "Some C String!");
    	if (!err)
    	{
    		err = PostLVUserEvent(*userEvent, &structure);
    		DSDisposeHandle(structure.lvString);
    	}
    	return err;
    }


    Me neither, so I'm stealing that snippet. :D

  5. 1 hour ago, thenoob94 said:

    When I use the producer/consumer pattern, my camera throws an "Error" telling me that the handle is incorrect. This happens only when I try to pass the data to the consumer, as shown in the following link: https://labviewwiki.org/wiki/Producer/Consumer

    This happens as well when I use the synchronised consumer.

    What are you queueing? The image array or the U64 reference (then reading the image in the consumer)?

    You'll need to post your code so we can see what you are doing.

  6. 16 hours ago, ShaunR said:

    I will think about it more for a proper solution

    OK. So I now have a 2D array (2x264 bytes).

    If IDX == 0, the bytes are copied from the first row to the second and the random bytes in the first row are regenerated.

    If verification fails with the random bytes in the first row, it looks in the second row. If that fails too, then they are hammering the connection and don't deserve to be let in. :P

    It means that every 255 client hellos we might have the overhead of an extra SHA1 hash to calculate. We can easily live with that. Any other ideas?
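    The two-row fallback scheme above (and the step-by-step description in the next post down) can be sketched in plain C. This is an illustration only: a toy 20-byte hash stands in for SHA1, `rand()` stands in for the cryptographic RNG, and the function names are mine, not OpenSSL's.

```c
#include <stdint.h>
#include <string.h>
#include <stdlib.h>

#define RND_LEN 264

static uint8_t rnd[2][RND_LEN];  /* row 0: current bytes, row 1: previous generation */
static uint8_t idx = 0;

/* Toy 20-byte hash standing in for SHA1 -- illustration only, not cryptographic. */
static void toy_hash(const uint8_t *in, size_t len, uint8_t out[20])
{
    for (size_t i = 0; i < 20; i++) {
        uint32_t h = 2166136261u ^ (uint32_t)i;
        for (size_t j = 0; j < len; j++)
            h = (h ^ in[j]) * 16777619u;
        out[i] = (uint8_t)(h >> ((i % 4) * 8));
    }
}

/* Stand-in for a cryptographically random generator. */
static void fill_random(uint8_t *buf, size_t len)
{
    for (size_t i = 0; i < len; i++)
        buf[i] = (uint8_t)rand();
}

/* Generate: hash(8 random bytes at IDX || 8-byte session ref), then append IDX. */
void generate_cookie(uint64_t session_ref, uint8_t cookie[21])
{
    if (idx == 0) {                       /* rollover: keep the old bytes one more cycle */
        memcpy(rnd[1], rnd[0], RND_LEN);
        fill_random(rnd[0], RND_LEN);
    }
    uint8_t msg[16];
    memcpy(msg, &rnd[0][idx], 8);
    memcpy(msg + 8, &session_ref, 8);
    toy_hash(msg, 16, cookie);
    cookie[20] = idx++;                   /* idx is a UINT8, so it wraps at 256 */
}

/* Verify: try the current row first, then fall back to the previous one. */
int verify_cookie(uint64_t session_ref, const uint8_t cookie[21])
{
    uint8_t msg[16], hash[20];
    uint8_t idx_v = cookie[20];           /* last byte of the cookie is the index */
    memcpy(msg + 8, &session_ref, 8);
    for (int row = 0; row < 2; row++) {
        memcpy(msg, &rnd[row][idx_v], 8);
        toy_hash(msg, 16, hash);
        if (memcmp(hash, cookie, 20) == 0)
            return 1;
    }
    return 0;
}
```

    The in-flight corner case is what the second row buys you: a cookie issued with IDX = 255 still verifies after the rollover regenerates row 0, because its bytes were rotated into row 1.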

  7. On 5/2/2023 at 10:27 AM, ShaunR said:

    I resorted to a global key->value lookup table. I feel dirty :yes:

    That was a crap idea (awfully complicated and memory-bloaty), so I thought of something else... bear with me (it's easier done than said).

    1. Generate an array of 264 cryptographically random bytes (global). Let's call it RNDARR.
    2. Create an index (global UINT8). Let's call it IDX.

    For the Generate callback:

    1. If IDX == 0, initialise RNDARR with 264 new bytes. (We should get new bytes whenever we roll over, as it's a UINT8.)
    2. Take 8 bytes of RNDARR at IDX in the array.
    3. Concatenate the 8 bytes with 8 bytes of the SSL session reference (UInt64 as bytes).
    4. SHA1-hash the 16-byte concatenated array (one of the fastest hashes, and only 20 bytes).
    5. Append IDX to the SHA1 hash and present the 21 bytes as the cookie.
    6. Increment IDX.

    For the verify callback:

    1. Take the last byte of the cookie and use it as an index (let's call it IDX_V).
    2. Take 8 bytes of RNDARR at IDX_V in the array.
    3. Concatenate the 8 bytes with 8 bytes of the SSL session reference (UInt64 as bytes).
    4. SHA1-hash the 16-byte array.
    5. Compare the SHA1 hash with the cookie, ignoring the last byte of the cookie.

    So that should mean we have a session-dependent random-number hash that is shared between callbacks. We get a unique hash on every client hello, and it doesn't matter if the session is reused, as the hash relies on the 8 random bytes. (I'm still convinced we don't need an HMAC, but we could do that instead of just a straight SHA1.) Oh, and it's fast. Very fast. ;)

    There is one corner case: when IDX rolls over while a hash is in-flight (created with IDX = 255), the array is repopulated with new random data, so the 8 random bytes used for the hash are no longer available for verification. In practice OpenSSL retries, so it's not an issue, but I will think about it more for a proper solution (if you have an idea, let me know).

  8. 13 hours ago, Rolf Kalbermatter said:

    If you absolutely want to store information on a session level, you could use the CRYPTO_get_ex_new_index(CRYPTO_EX_INDEX_SSL/SSL_CTX, 0, "Name", NULL, NULL, NULL);

    Then store the information on the ssl or ctx with SSL_set_ex_data() or SSL_CTX_set_ex_data().

    Retrieve it with the according SSL_get_ex_data()/SSL_CTX_get_ex_data().

    Having played a bit, it doesn't look that straightforward.

    The main idea, it seems, is that you create callbacks that allocate and free the CRYPTO_EX_DATA (which is required for the get and set). But if they are all set to NULL in CRYPTO_get_ex_new_index, then you must use a CRYPTO_EX_new callback, which would have to be a global, and there is no way to associate it with the SSL session.

    This seems a lot harder than it should be so maybe I'm not seeing something.

  9. 10 hours ago, Rolf Kalbermatter said:

    If you absolutely want to store information on a session level, you could use the CRYPTO_get_ex_new_index(CRYPTO_EX_INDEX_SSL/SSL_CTX, 0, "Name", NULL, NULL, NULL);

    Then store the information on the ssl or ctx with SSL_set_ex_data() or SSL_CTX_set_ex_data().

    Retrieve it with the according SSL_get_ex_data()/SSL_CTX_get_ex_data().

    Ooooh. I shall have a play.

  10. 7 minutes ago, Rolf Kalbermatter said:

    The way I saw it done in some example code was to generate an application global random secret on first use and use that as key for a HMAC over the actual connection peer address (binary address + port number).

    BIO_dgram_get_peer(SSL_get_rbio(ssl), &peer);

    Then use the HMAC result as cookie.

    Yes it is not super safe as an attacker could learn the key eventually if he tries to attack the server long enough (and knows that that key for the cookie generation is actually constant) but if you don't use an abnormally bad HMAC hash code (SHA256 should be enough), it should be pretty safe.

    It's not so much safety, but I can have multiple connections (on, say, 127.0.0.1) and I don't want a global for all the connections. A random per callback would be OK, but there is no way to tell the verifying callback what the generator chose (hence they have a global). It would have been preferable to be able to define the cookie to be compared, so that the cookie generation could be done in the application rather than inside the callback.

    I'm not sure HMAC is all that useful here either (they use SHA1-HMAC, by the way). Effectively we are just asking "is it mine?" rather than "is it from who it says it is?". They are really relying on the port number from the same address (127.0.0.1, say), and that definitely isn't random and could be repeated.

    What I've done is just SHA1 the result of SSL_get_rbio(ssl). It's not "cryptographically" random, but it is probably random enough for this purpose (this is for DDoS protection rather than hiding secrets; similar reasoning to why we use broken hashes for file integrity) and, unlike their global, it changes on each connect. I could do the whole HMAC thing using SSL_get_rbio(ssl) as the random, but I'm not sure it's really worth the overhead. Can you give an argument in favour?

  11. OK.

    So it seems it's to do with the security level. They are compiled in but disabled at run-time.

    SSL_CTX_set_security_level states:

    Quote


    Level 1

    The security level corresponds to a minimum of 80 bits of security. Any parameters offering below 80 bits of security are excluded. As a result RSA, DSA and DH keys shorter than 1024 bits and ECC keys shorter than 160 bits are prohibited. All export cipher suites are prohibited since they all offer less than 80 bits of security. SSL version 2 is prohibited. Any cipher suite using MD5 for the MAC is also prohibited. Note that signatures using SHA1 and MD5 are also forbidden at this level as they have less than 80 security bits. Additionally, SSLv3, TLS 1.0, TLS 1.1 and DTLS 1.0 are all disabled at this level.


    That last sentence isn't in 1.1.1.

    The default security level is 1. You have to set it to 0 to get the rest.

    Now we're cooking! :thumbup1:


  12. 31 minutes ago, Rolf Kalbermatter said:

    Which version of OpenSSL is that in? TLS 1.0 and 1.1 are/were scheduled to be deprecated for quite some time already. And it seems to be disabled by default in OpenSSL 1.1(.1).

    https://github.com/SoftEtherVPN/SoftEtherVPN/issues/1358


    3.1.0.

    They weren't disabled in 1.1.1. That post seems to be specifically for Debian since it says "OpenSSL on Debian 10 is built with TLS 1.0 disabled.".

    You use "no-{tls1|tls1_1}" to disable them at compile time. Using that compile option also removes the TLS1 methods from the binary. The TLS1 methods are, however, available in the 3.1.0 binary.


  13. DTLS version 1.0 doesn't work.

    It looks like they don't compile the TLS 1.0 and TLS 1.1 cipher suites in the default build with legacy. There is an option to compile without certain cipher suites (no-{}), which implies they should be enabled by default. Additionally, the no-{} compile options remove the applicable methods for that cipher suite; however, the methods are all available. Compiling with enable-{} doesn't get around the problem. Using openssl.exe ciphers -s -tls1_1 yields an empty list. This also means that TLS 1.0 and TLS 1.1 don't work either.

    Using openssl.exe s_server -dtls1 -4 -state -debug and openssl.exe s_client -dtls1 -4 -state -debug yields a Protocol Version Error (70).


    DTLS 1.2 is fine, however.
