Rolf Kalbermatter

Everything posted by Rolf Kalbermatter

  1. Supposedly it is not using user input to add to the internal knowledge pool beyond a single session. Of course it's tempting to do that anyhow. It uses a lot of energy to run and somehow some bean counters want to see some form of ROI here, so "free" learning input would seem interesting. But without a very rigorous vetting of such input, the whole knowledge pool could easily be tainted with lots and lots of untruths. Not that the selection of the actual training corpus for each new training round is guaranteed, or even likely, to be unbiased in some way either.
  2. Your time scale definitely seems skewed in terms of the grand scheme of the universe and all that. 😁 Or did you mean to write Atoms instead of Eons? 😎 I consider ChatGPT and friends still mainly a fairly advanced parrot.
  3. Shaun already pointed you at the culprit. There is one potential pitfall however. CVIRTE.dll is for LabWindows/CVI what LVRT.dll is for LabVIEW: the Runtime Engine that contains the entire business and support logic for executing LabWindows/CVI compiled DLLs and executables. And it has similar versions as LabVIEW too. So depending on which CVI version the DLL was created in, you may need to make sure you install the correct CVI Runtime version on that PC. Similar to LabVIEW, CVI also started to use an upwards compatible Runtime Engine somewhere around 2015 or 2016 I think. But if your DLL was compiled in an earlier version of LabWindows/CVI, things get a lot narrower.

     CVIRTE is used by several tools in the NI software stack (not everything is developed in LabVIEW and not even everything in LabWindows/CVI; some tools are directly developed and compiled in MSVC). But some of the plugins in NI MAX, such as the instrument control wizard and similar, are developed in LabWindows/CVI. So if you install a typical NI development machine, that runtime library is absolutely sure to be present, but on a machine with only a minimal LabVIEW runtime installed it is not automatically there. And none of the typical LabVIEW Additional Installer containers includes it.

     The real problem is likely the developer of your SNDCSDLM.dll. S/he would know that they used LabWindows/CVI to compile that DLL, and they should have provided a proper installer for this component that also takes care of installing the right version of the CVI Runtime on the target machine. Simply adding such components as a dependency in your LabVIEW project makes you automatically responsible for taking care of this yourself! And yes it is cumbersome, but there is no easy solution to this, other than the fact that the original developer of a component SHOULD provide a proper installer (and any user of such a component SHOULD include that installer as part of their own application installer rather than just copying the DLL alone into their project).
  4. That's simple. OPC is a completely different protocol layer. You need an OPC capable server in your PLC to do that (which last time I checked was an additional licensed component). Part of the OPC UA protocol is the ability to enumerate all the available resources on a system, including data items and their data types. And the NI OPC library has an extensive part that converts between the OPC wire data and the actual LabVIEW data in a seemingly transparent way.

     Snap7 (and all the other S7 toolkits out there) communicates through the Siemens S7 protocol, which uses ISO-on-TCP as its base protocol. And while the ISO protocol is an officially documented protocol, it is only the container frame in which Siemens then packs its own S7 protocol elements. Those S7 protocol features were never officially documented by Siemens; the original protocol, which only addresses fixed DB, EB, AB, MK elements, was reverse engineered by projects like libnodave and then Snap7. No such reverse engineering has happened for the extended protocol elements present in the 1200 and 1500 series that support accessing "compressed" elements.
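     For reference, this is roughly what that classic fixed-address access looks like through Snap7's plain C API, as a minimal sketch (the IP address, rack/slot and DB offsets are made up, and this only works for classic, non-optimized DBs):

     #include <stdio.h>
     #include "snap7.h"   // Snap7 C wrapper header

     int main(void)
     {
         S7Object client = Cli_Create();
         // Rack 0, slot 1 is a typical setting for an S7-1200/1500
         int err = Cli_ConnectTo(client, "192.168.0.10", 0, 1);
         if (!err)
         {
             unsigned char buffer[4];
             // Read 4 raw bytes starting at byte offset 0 of DB1: purely absolute addressing,
             // there is no way to ask the PLC what datatype actually lives at that address
             err = Cli_DBRead(client, 1, 0, 4, buffer);
             if (!err)
             {
                 // Interpret the bytes yourself, e.g. as a big-endian 32-bit integer (an S7 DINT)
                 long value = ((long)buffer[0] << 24) | ((long)buffer[1] << 16) |
                              ((long)buffer[2] << 8)  |  (long)buffer[3];
                 printf("DB1.DBD0 = %ld\n", value);
             }
             Cli_Disconnect(client);
         }
         Cli_Destroy(&client);
         return err;
     }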
  5. Handles are only used for arrays (and a LabVIEW string is also an array of ASCII bytes). Now, when you start to do arrays of structures, things get really fun, but yes it is an array so it is a handle too.
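     For illustration, this is roughly how those handles look on the C side. The LStr layout is paraphrased from extcode.h; the MyElm names are made up. An array of clusters is again just one handle, whose memory block contains the dimension size followed by the elements stored inline:

     #include <stdint.h>

     typedef struct
     {
         int32_t       cnt;       // number of bytes that follow
         unsigned char str[1];    // actually cnt bytes, no terminating NUL (uChar in extcode.h)
     } LStr, *LStrPtr, **LStrHandle;   // a LabVIEW string: a handle to a length-prefixed byte array

     typedef struct
     {
         double     value;
         LStrHandle name;         // an embedded string is again a handle inside the element
     } MyElm;

     typedef struct
     {
         int32_t dimSize;         // number of elements that follow
         MyElm   elm[1];          // actually dimSize elements, stored inline
     } MyElmArr, **MyElmArrHdl;        // the array of clusters is itself once more a single handle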
  6. While it's not a problem for this specific datatype, you should do something extra for any struct definition, which I forgot in my example above!

     #include "extcode.h"    // This is logical, to get the definitions for the LabVIEW manager functions
     #include "hosttype.h"   // Helpful if you also intend to include OS system API headers
     ......                  // Anything else you may need to include

     #include "lv_prolog.h"  // This is important to get the correct alignment definition for any structured
                             // datatype that is meant to interface directly to LabVIEW native diagram clusters
     typedef struct          // This datatype's elements are now properly aligned to LabVIEW rules
     {
         int32_t firstInt;
         int32_t secondInt;
         int32_t thirdInt;
         LStrHandle lvString;
     } MyStruct, *MyStructPtr;
     #include "lv_epilog.h"  // Reset the alignment to what it was before the "lv_prolog.h" include

     As mentioned, for this particular cluster no special alignment rules apply for 32-bit, as all 4 elements are 32-bit entities. In LabVIEW 64-bit the LStrHandle (which is really a pointer) is aligned to 64-bit, so there are 4 extra bytes added between thirdInt and lvString, but LabVIEW also uses the default alignment of 8 bytes (64-bit), so the alignment is again the same independent of whether you use those lv_prolog.h and lv_epilog.h includes or not. But in LabVIEW 32-bit full byte packing is used (for traditional reasons), while most compilers there use 64-bit alignment rules by default. Therefore, if the natural alignment of elements causes extra filler bytes, it will not match what LabVIEW 32-bit for Windows expects for its clusters.
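     To make the 32-bit pitfall concrete with a cluster where it does matter (a hypothetical int32 followed by a double): LabVIEW 32-bit for Windows expects the double at offset 4 and a total size of 12 bytes, while a compiler left at its default 8-byte packing puts the double at offset 8 and makes the struct 16 bytes, so everything after the first element would be shifted. A quick sketch to see what your compiler does (the pragmas only stand in for what lv_prolog.h/lv_epilog.h effectively do for 32-bit Windows; check the real headers in cintools for the per-platform details):

     #include <stdio.h>
     #include <stddef.h>
     #include <stdint.h>

     #pragma pack(push, 1)       // roughly what lv_prolog.h does for LabVIEW 32-bit on Windows
     typedef struct
     {
         int32_t count;
         double  value;
     } PackedCluster;
     #pragma pack(pop)           // and lv_epilog.h restores the previous packing again

     typedef struct              // the same cluster with the compiler's default alignment
     {
         int32_t count;
         double  value;
     } DefaultCluster;

     int main(void)
     {
         printf("packed : size %u, value at offset %u\n",
                (unsigned)sizeof(PackedCluster), (unsigned)offsetof(PackedCluster, value));
         printf("default: size %u, value at offset %u\n",
                (unsigned)sizeof(DefaultCluster), (unsigned)offsetof(DefaultCluster, value));
         return 0;
     }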
  7. You clearly don't have much C programming experience, which of course is a very bad starting point for trying to write C code that should then interoperate with LabVIEW. First this:

     // +++++ I try to add this, Gues this will not work .. see extcode.h
     LStrHandle textString[TEXT_STRING_SIZE];
     } structSample;

     You are basically defining a fixed size array of TEXT_STRING_SIZE LabVIEW string handles, not a LabVIEW string handle of TEXT_STRING_SIZE length. LabVIEW string handles are never fixed size but instead dynamically allocated memory blocks with an extra pointer reference to them. And that dynamic allocation (and deallocation) ABSOLUTELY and SURELY must be done using the LabVIEW memory manager functions. Anything else is nothing more than a crash site. What you have built there as a datatype would look like an array of structs, and each of these structs would contain three integers followed by 256 LabVIEW string handles, which is not only pretty weird but absolutely NOT compatible with any possible LabVIEW structure. And after allocating all these things you eventually only send the actual string handle to the event and leak everything else, and the handle itself too!

     typedef struct
     {
         int32_t firstInt;
         int32_t secondInt;
         int32_t thirdInt;
         LStrHandle lvString;
     } MyStruct, *MyStructPtr;

     MgErr CreateStringHandle(LStrHandle *lvStringHandle, char *stringData)
     {
         MgErr err = mgNoErr;
         size_t len = strlen(stringData);
         if (*lvStringHandle)
         {
             // Handle already exists: resize it to hold the new string contents
             err = DSSetHandleSize(*lvStringHandle, sizeof(int32_t) + len);
         }
         else
         {
             // No handle yet: allocate a new one for the length prefix plus the string bytes
             *lvStringHandle = DSNewHandle(sizeof(int32_t) + len);
             if (!*lvStringHandle)
                 err = mFullErr;
         }
         if (!err)
         {
             MoveBlock(stringData, LStrBuf(**lvStringHandle), len);
             LStrLen(**lvStringHandle) = (int32_t)len;
         }
         return err;
     }

     MgErr SendStringInStructToLV(LVUserEventRef *userEvent)
     {
         MyStruct structure = {1, 2, 3, NULL};
         MgErr err = CreateStringHandle(&structure.lvString, "Some C String!");
         if (!err)
         {
             err = PostLVUserEvent(*userEvent, &structure);
             DSDisposeHandle(structure.lvString);
         }
         return err;
     }
  8. No, that should not happen. When the Write loop wants to access the DVR, it is either locked or not. When it is not, there is no problem. When it is, the IPE in the Write loop is put on a wait queue (with other stuff in the diagram still able to execute), and as soon as the DVR is unlocked, this wait queue is queried and the corresponding IPE is woken up and passed control with the newly locked DVR. If the Read loop now wants to access the DVR, its IPE is put on that same wait queue, as a Read access is not allowed as long as the DVR is locked for Read-Modify-Write access. Two Read accesses on the other hand could execute in parallel, as there is no chance for a race condition there.
  9. It's not impossible, but it is also not something the typical toolkits support. The low level functions used underneath are based on requesting a number of bytes from a certain address in a certain DB element and then converting this binary data to the actual LabVIEW datatype. For the standard scalar datatypes, all the S7 toolkits out there provide ready made functions that convert the actual bytes to and from floating point numbers, integers, timestamps, and even strings, but so called complex datatypes are not something they can support out of the box without a lot of extra work. And it is your responsibility to make sure that the DB address you specify for an actual read or write access actually contains the datatype the VI expects. If you use a Read Float64 function for a specific address but the PLC has a boolean or something else stored there, you simply read crap.

     To support structured data we cannot have ready made VIs that would provide out of the box functionality. For one, there is really an indefinite number of possible combinations of datatypes once you add complex (struct type) datatypes to the recipe, and obviously nobody can create a library with an indefinite number of premade VIs. So we would have to do some dynamic data parsing, but that is not trivial. It could be done with Variant parsing code in LabVIEW where we try to map a specific LabVIEW cluster to a structured datatype in the PLC memory. But this is quite involved and still leaves the problem that the user needs to be able to actually determine the correct cluster elements and order to map nicely to the PLC memory structure. A can of worms and a complete support nightmare, as there will be lots and lots of support requests in the form of "It doesn't work!!!!!"

     The static S7 protocol as "documented" by the various libraries out there does not support datatype discovery from the PLC from a remote site. Siemens introduced a new memory model with TIA 13 or so that supports so called compressed DB elements with symbolic naming. Here the various data elements do not have a fixed address anymore but are aligned in memory to fill things optimally. To be able to still read and write such elements, Siemens added a new protocol layer that allows enumerating the actual elements in the DBs, retrieving their current dynamic address (a variable can end up at a different address every time you make any changes to the PLC program and deploy it) and then accessing it. This protocol extension, like the entire S7 protocol actually, has not been officially documented by Siemens for external users. Some commercial libraries have tried to support it, such as Accon AG-Link, but I'm not sure if they did reverse engineering or have an NDA with Siemens to get that information. I'm not aware of any other 3rd party library that would support this, and definitely none of the Open Source ones have ever tried to go there.
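     To illustrate the kind of conversion those ready made functions do under the hood, here is a minimal sketch (the helper name is made up): an S7 REAL arrives as 4 big-endian bytes and has to be reassembled before it is usable as a LabVIEW float. If the DB address actually holds something else, this still happily produces a number, just a meaningless one.

     #include <stdint.h>
     #include <string.h>

     // Hypothetical helper: convert 4 raw bytes read from a DB address into a host-side float.
     // S7 PLCs store their data big-endian, so the bytes are reassembled MSB first.
     static float S7BytesToReal(const uint8_t buf[4])
     {
         uint32_t raw = ((uint32_t)buf[0] << 24) | ((uint32_t)buf[1] << 16) |
                        ((uint32_t)buf[2] << 8)  |  (uint32_t)buf[3];
         float value;
         memcpy(&value, &raw, sizeof value);   // reinterpret the bits as an IEEE-754 single
         return value;
     }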
  10. I'm not entirely sure if that applies to every ISO standard, but that is usually my reaction too. They tend to be complicated, involved, lengthy and almost incomprehensible. RFCs also have a tendency to contain a lot of verbiage and take their time to get to the actual meat of the matter, but at least they usually have some actual description and even the occasional example. ISO often just seems like a lot of blah blah, and it is rather difficult to distill the actual useful information out of it.
  11. Yes, it of course depends on the target you are compiling for. The actual application target, not the underlying OS. It's the same for pointers and size_t datatypes (at least on any system I know of; there is of course theoretically a chance that some very obscure compiler chooses a different type system, but for the systems that are even an option for a LabVIEW program, this is all the same).
  12. Totally correct so far, but you forgot that size_t also depends on the bitness, not just the pointers themselves. So these values also need to be either a 32-bit or a 64-bit integer value.
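     A quick sketch to see this for yourself: compile it once as 32-bit and once as 64-bit and compare the output.

     #include <stdio.h>
     #include <stdint.h>

     int main(void)
     {
         // Pointers, size_t and intptr_t all follow the bitness of the application target:
         // 4 bytes in a 32-bit process, 8 bytes in a 64-bit one, regardless of the OS bitness.
         printf("sizeof(void *)   = %u\n", (unsigned)sizeof(void *));
         printf("sizeof(size_t)   = %u\n", (unsigned)sizeof(size_t));
         printf("sizeof(intptr_t) = %u\n", (unsigned)sizeof(intptr_t));
         return 0;
     }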
  13. Daddreamer is correct. An IMAQ image is NOT a LabVIEW array, not even by a very very long stretch. There is simply no way to treat it as such without copying the data around anyhow!
  14. Yes, since LabVIEW 2020 or 2021 one needs an .ipk to be able to install software on an RT system. I haven't yet gotten around to creating such a beast, but I am currently working on a general update to the library. It's just that there are so many other things to tackle too. 😎
  15. It depends on what you consider a lot harder. The basic function gets you a pointer-sized slot to store whatever you want. If you manage allocation/deallocation of that data structure yourself, you can get away with NULL callback parameters and only need the set and get functions to store the pointer and get it back out.
  16. If you absolutely want to store information on a session level, you could use CRYPTO_get_ex_new_index(CRYPTO_EX_INDEX_SSL/SSL_CTX, 0, "Name", NULL, NULL, NULL); then store the information on the ssl or ctx with SSL_set_ex_data() or SSL_CTX_set_ex_data(), and retrieve it with the corresponding SSL_get_ex_data()/SSL_CTX_get_ex_data().
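     A minimal sketch of how that could look for the SSL object (the MySessionInfo type and function names are made up; with NULL callbacks you remain responsible for allocating and freeing the attached data yourself, and the SSL_CTX variant works the same way with CRYPTO_EX_INDEX_SSL_CTX and SSL_CTX_set_ex_data()/SSL_CTX_get_ex_data()):

     #include <openssl/ssl.h>
     #include <openssl/crypto.h>

     typedef struct { int refnum; void *userData; } MySessionInfo;   // whatever you need per connection

     static int gSslExIndex = -1;

     void MyInitExData(void)   // reserve one ex_data slot for the SSL class, once at startup
     {
         gSslExIndex = CRYPTO_get_ex_new_index(CRYPTO_EX_INDEX_SSL, 0, "MySessionInfo",
                                               NULL, NULL, NULL);
     }

     void MyAttachInfo(SSL *ssl, MySessionInfo *info)   // attach the info to one connection
     {
         SSL_set_ex_data(ssl, gSslExIndex, info);
     }

     MySessionInfo *MyGetInfo(SSL *ssl)                 // and get it back out later
     {
         return (MySessionInfo *)SSL_get_ex_data(ssl, gSslExIndex);
     }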
  17. The way I saw it done in some example code was to generate an application-global random secret on first use and use that as the key for an HMAC over the actual connection peer address (binary address + port number), retrieved with BIO_dgram_get_peer(SSL_get_rbio(ssl), &peer); then use the HMAC result as the cookie. Yes, it is not super safe, as an attacker could eventually learn the key if he attacks the server long enough (and knows that the key for the cookie generation is actually constant), but if you don't use an abnormally weak HMAC hash (SHA256 should be enough), it should be pretty safe.
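     Roughly like this, as a sketch along the lines of the OpenSSL DTLS examples (error handling trimmed, and the exact peer retrieval details depend on your OpenSSL version and platform socket headers):

     #include <string.h>
     #include <openssl/ssl.h>
     #include <openssl/evp.h>
     #include <openssl/hmac.h>
     #include <openssl/rand.h>
     #include <sys/socket.h>      // struct sockaddr_storage (winsock2.h on Windows)

     static unsigned char gCookieSecret[32];   // application-global random secret
     static int gSecretInitialized = 0;

     // To be registered with SSL_CTX_set_cookie_generate_cb(); the verify callback recomputes
     // the same HMAC over the peer address and memcmp()s it against the cookie it got back.
     static int GenerateCookie(SSL *ssl, unsigned char *cookie, unsigned int *cookie_len)
     {
         struct sockaddr_storage peer;
         unsigned int len = 0;

         if (!gSecretInitialized)
         {
             if (RAND_bytes(gCookieSecret, sizeof(gCookieSecret)) != 1)
                 return 0;
             gSecretInitialized = 1;
         }
         memset(&peer, 0, sizeof(peer));
         BIO_dgram_get_peer(SSL_get_rbio(ssl), &peer);   // binary peer address + port
         if (!HMAC(EVP_sha256(), gCookieSecret, sizeof(gCookieSecret),
                   (const unsigned char *)&peer, sizeof(peer), cookie, &len))
             return 0;
         *cookie_len = len;
         return 1;
     }

     Both callbacks are then registered on the context with SSL_CTX_set_cookie_generate_cb() and SSL_CTX_set_cookie_verify_cb().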
  18. Which version of OpenSSL is that in? TLS 1.0 and 1.1 are/were scheduled to be deprecated for quite some time already. And it seems to be disabled by default in OpenSSL 1.1(.1). https://github.com/SoftEtherVPN/SoftEtherVPN/issues/1358
  19. One of Emerson's points of critique about the NI strategy was its above-market-average spending on R&D. While LabVIEW development is only a small part of that spending, it does not bode well for the future of LabVIEW.
  20. Quite a bit late to the party, but that information is only partly correct! 😀 LabVIEW 2.x and I believe even 3.x actually used an unsigned 32-bit integer for timestamps. Yes, it had no fractional seconds; that was a legacy from the Macintosh API.
  21. I think the cookie callbacks are not the problem. It's simply a "magic" value being generated and included in the hello verify response, and the client then copies that back in its subsequent messages so the server can verify that it is talking to the same peer and not some intermediate adversary that intercepted the messages. So the generation and verification are done on the same side and there is no interoperability issue, as the cookie is treated as an opaque binary blob by all intermediate channels. Why they even require callbacks to be installed rather than providing a default mechanism themselves (that could be overridden by a callback if so desired) is a bit beyond me, however.
  22. Welcome to LabVIEW scripting. I'm pretty sure it is doable, and I'm even more sure that it is a lot of work to do right. But the tools are there for you to do it. 😀
  23. I can clearly see what you are talking about. DTLS is a second class citizen, as you indeed have to do BIO trickery; for TLS, OpenSSL has convenient wrapper functions. However, I appreciate the OpenSSL developers' troubles in trying to shove an intermediate layer into the socket interface and, to make matters worse, trying to get it to work on multiple platforms. While both Linux sockets and WinSock are in principle based on BSD sockets, and this works surprisingly smoothly on all these platforms with the same source code for basic TCP and even UDP sockets, things start to get nasty quickly when trying to do advanced things such as what OpenSSL needs to do. Under Windows this should really be solved with a socket filter driver that can be installed into WinSock, but that is considerably different from how things would be done on BSD and pretty much impossible on Linux without integrating it in the kernel or trying to hack it into pcap. OpenSSL is clearly a compromise. The alternative would be OS specific protocol filter drivers, and there are very few of them and none that supports multiple OSes.
  24. I love number 7). 😀 Serves you right if you used DAQmx Express VIs! The recompiling of VIs is not enough punishment for such a crime! 😎
  25. Technically very much so. But it is an effort to support multiple platforms. If you intend to commit some backing to Shaun/LVS-Tools, I'm fairly convinced he can come up with some proposal. And if he wants to, I can even offer some support. But I'm very sure he can also do it alone if there is some incentive. 😀