Posts posted by smithd

  1. On 1/1/2020 at 1:36 PM, ShaunR said:

    Indeed. It does have some useful features though like congestion control targeted at small packets and discovery.

    Seems to be more performant too.

    It's important not to miss the details of that performance claim. They went for the specific use case of very, very underpowered devices sending data very infrequently. For example, they assume a new TCP connection for every data packet (unless I misread*), while on the DTLS/CoAP side they ignore the security handshake by assuming a factory-installed pre-shared key. It also looks like they ignore the fact that CoAP uses a REST model, meaning a request-response cycle. Taken together, they are basically saying "UDP protocols with no reliability except a super barebones re-transmit feature work better than TCP if you close and reopen the connection once per second"..which...yeah.





    *From the paper: "For TCP, the session is terminated after each sensor report is received at the other end of the connection."
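    To make the critique concrete, here is a minimal Python sketch (illustrative only, not from the paper) of the pattern they benchmarked: a brand-new TCP connection for every sensor report, paying the connection setup cost each time.

```python
import socket
import threading

def echo_n_connections(server_sock, n):
    """Accept n separate connections, echoing one message back on each."""
    for _ in range(n):
        conn, _ = server_sock.accept()
        with conn:
            conn.sendall(conn.recv(1024))

srv = socket.socket()
srv.bind(("127.0.0.1", 0))   # ephemeral localhost port
srv.listen(8)
port = srv.getsockname()[1]

t = threading.Thread(target=echo_n_connections, args=(srv, 5))
t.start()

# the benchmarked pattern: a fresh TCP connection (full three-way
# handshake) for every single report
replies = []
for i in range(5):
    with socket.create_connection(("127.0.0.1", port)) as c:
        c.sendall(b"report %d" % i)
        replies.append(c.recv(1024))

t.join()
srv.close()
```

    A real deployment would keep one connection open and amortize the handshake across reports, which is exactly what the benchmark's per-report teardown takes away from TCP.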


  2. 13 hours ago, X___ said:

    No interest in NXG from this neck of the woods would be the feedback to the Powers that Were.

    Lol. Sorry for sidetracking, but I tried to get it to even open and it crashes on my machine. If you believe the KB, at some point someone before me installed a beta version of NXG on what is now my laptop (seems unlikely). The fix is to uninstall NXG, uninstall NI Package Manager, "Remove any supporting NXG files in the root directory, removing any trace of LabVIEW NXG from the machine", and reinstall everything. I asked NI support what "the root directory" I'm supposed to delete NXG files from actually is, and they told me it meant the C drive 😕. Find an unspecified leftover NXG file somewhere on the C drive and delete it. What a joke.

    11 hours ago, ShaunR said:

    Hopefully TLS1.3 :D


  3. On 12/27/2019 at 9:02 AM, Jordan Kuehn said:

    As I read the OP the question was does zeromq fill this gap in native NI offerings? I quoted a mention of some of the holes that exist and suggest that I think they still exist. Perhaps I should add some context. I remember when this conversation first popped up as well as rabbitmq. I thought that it was just an effort to include a third-party communication protocol to make rt targets able to communicate with 3rd party devices and I don't really have *that* need. However, filling some of the holes in NI offerings is something I am interested in seeing if this is the case. I work with cRIOs daily and have a working toolset, but I was curious to see where this (or other) protocols sit today in terms of the original question. If I misunderstood the OP, my apologies, and I do realize this is an old thread, but that isn't always frowned on here.

    I see. Yeah, I don't know of any movement on the NI side to improve data comms. RabbitMQ is 'available' via SystemLink, but...requires SystemLink. It's also just about 2020 and we still don't have any secure network API built into the product.

  4. Cross post is here:


    StephenD in that thread is correct. I don't have LabVIEW installed here, but the error literally just means you are using the base class for something. The base class has no implementation, so I added an error which says "hey, you just tried to do something with an uninitialized, dead base class". I added this error message because I think I once had a case structure where I accidentally selected "Use Default If Unwired", so that case structure was returning an uninitialized parent object.

    Other situations to look for:

    • Uninitialized shift register (i.e. an action engine) used before initialization
    • Passing an object through a for loop without shift registers -- if the for loop executes 0 times, the output will be a dead parent object
    • Case structures with "Use Default If Unwired", as described above
    • Diagram disable structures you forgot to wire through

    You should be able to probe your code (turn on "Retain Wire Values") and find the point where the initialized object gets invalidated. Or you could put breakpoints in your action engine cases and see whether one of the error cases is called before the Modbus object gets initialized.

    As a general statement, I don't use action engines/FGVs because I find them confusing to manage, but then LabVIEW objects are also confusing to manage. The Modbus master object is internally reference-based, and the wire can be safely branched. I would typically not do this either -- I'd have a single loop which talks to a device and is responsible for writing data to, or reading data from, that device.

  5. I don't really think the metaphor matches. Left-handed scissors are obviously intended for us 10% and are marked as such. In your examples, it's not clear who XControls and PPLs were designed for, nor what "using as intended" actually means.

    In contrast: left- and right-handed knives. They do this fun thing where they steadily slide outward and make all your cuts super weird if you're using the wrong hand, but they still kinda cut, so it's very non-obvious. It's funny to watch, and it's not really marked unless you look carefully at the edge -- and you also have to know you were supposed to check the edge in the first place, which many, many people do not. And then if the knife slides off the edge of what you're cutting, you might just cut right into your hand. This seems like a more fitting analogy for XControls, to me ;)

  6. On 11/8/2019 at 5:24 AM, drjdpowell said:

    A while back I posted a beta version of a client library to access Postgresql.  The project driving that work was put on hold, and I have not done significant work since.  But at least some people have used or have considered using it, so I wanted to see how many people that is.  There has also been a desire expressed to me about using it on RT, so I wanted to know if anyone has developed it to do that.

    I've definitely used it, although we ended up going with a different database, so the code never got used for real. I think I tried it on LVRT, but I can't really remember. I tend to think that talking to a central database from an RT target breaks the nicely distributed nature of these systems, so I'm more likely to use your SQLite library on RT. But I get that test systems and the like (now that Linux RT on PXI is a thing) might be more likely to talk directly to a Postgres database.

  7. 17 hours ago, drjdpowell said:

    Worse, writing the image ref only triggers the need for a refresh of the control; the actual read of the data is asynchronous (like all LabVIEW indicators) and happens a short while later, possibly after the image has been changed.

    I think if you put the image display in snapshot mode it's synchronous?

    On 10/4/2019 at 4:22 PM, Rolf Kalbermatter said:

    The reference nature for images is neccessary as otherwise you run out of memory very quickly when trying to process huge images.

    I agree that IMAQ needs refs, but why does it need named refs? That's the part that's horrifying.

  8. On 9/25/2019 at 6:35 PM, G-CODE said:

    A lot of the code under the hood just ain't that pretty and some people don't really care about that. I do so I find myself getting annoyed every time I dig into it.

    Yes, I don't really care about that, and I will use block diagram cleanup until it's pried from my cold dead hands 👻

    Some of the key parts (like the engine) are relatively neat (by my standards), but due to the complexity it's definitely still a hot mess. And of course a lot of the code dealing with all of the data types is scripted, because ain't nobody got time for that.

    Your other comments are fair though, some definite mistakes were made in the design, but all told I think it does a decent job.

    On 9/25/2019 at 8:31 PM, MarkCG said:

    My biggest gripe about it is that there is no built in events / messages/ triggers.

    Yeah, I made an aborted attempt (https://github.com/LabVIEW-DCAF/ModuleInterface/tree/StreamingAndEvents and https://github.com/LabVIEW-DCAF/ExecutionInterface/tree/StreamsAndEvents), but...then I left NI. The nature of LabVIEW merging is such that those branches are probably now useless :(

    edit: there are probably a few implementation-side branches too, like https://github.com/LabVIEW-DCAF/StandardEngine/tree/StreamsAndEvents

  9. On 9/5/2019 at 9:21 AM, patufet_99 said:


    In the LabVIEW Modbus API Master example there is no error handling.

    Is there some function or method to explicitly check the status of the communication, in case of a communication interruption, Modbus server restart, or other Modbus error?

    Should that be done by parsing the error codes (if any) when one reads/writes registers, and then re-trying the connection?

    Thank you for your help.



    LabVIEW doesn't provide a way to check the status of a TCP connection except by trying to use it, and serial Modbus has no physical way to check the link either. In both cases you have to use the connection to find out whether it still works.

    On the master (client) side you will see any of the TCP error codes (56, 62, and 63 being the most common, as I recall) for the TCP master. For serial masters, you will only ever see 56. I suppose some VISA-specific codes are also possible -- parity bit errors, or a USB-serial device that gets unplugged, that sort of thing. Modbus protocol errors are -389110, as mentioned above.

    On the slave (server) side you will never see an error from the data access methods, because the access is local. For TCP you can check the status of all connected masters (clients) via a property node, I think. Serial is 1:1 and has no concept of a connection, so I believe it just reports the last error to occur in the serial comms process.
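    A sketch of that retry-on-use pattern in Python (with a made-up FlakyModbusMaster class standing in for the real master, since the only way to detect a dead link is to use it):

```python
import errno

class FlakyModbusMaster:
    """Hypothetical stand-in for a Modbus master whose link drops
    periodically -- every third call fails, roughly like seeing
    error 56 on a real TCP master."""
    def __init__(self):
        self.calls = 0

    def reconnect(self):
        pass  # real code would close and re-open the TCP connection here

    def read_holding_registers(self, addr, count):
        self.calls += 1
        if self.calls % 3 == 0:
            raise ConnectionError(errno.ECONNRESET, "connection reset by peer")
        return [0] * count

def read_with_retry(master, addr, count, attempts=3):
    """Catch the comm error, reconnect, and retry a bounded number of times."""
    last_err = None
    for _ in range(attempts):
        try:
            return master.read_holding_registers(addr, count)
        except ConnectionError as e:
            last_err = e
            master.reconnect()  # re-establish before the next attempt
    raise last_err

master = FlakyModbusMaster()
readings = [read_with_retry(master, addr=0, count=2) for _ in range(5)]
```

    The same shape works in LabVIEW: check the error cluster after each read/write, and on a comm error close and re-open the connection before retrying.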

  10. 17 hours ago, smidsy said:

    You can actually typecast your DVR reference to U32 value (pointer) and back. You can store this U32 number in a helper class, that will represent your API.

    I wouldn't do this. It's probably OK in this limited case, but it's possible to make LabVIEW crash this way. A better route is a variant, which has the same capability (cast to an opaque untyped blob and back again) but without the danger.
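    The same idea can be shown outside LabVIEW. Here is a Python sketch (hypothetical HandleRegistry class) of why an opaque handle backed by a lookup table is safer than a raw pointer cast: a stale or garbage handle fails with a clean error instead of crashing.

```python
import itertools

class HandleRegistry:
    """Hand out opaque integer handles while keeping the real reference
    in a table -- the safe analogue of casting a DVR to a U32."""
    def __init__(self):
        self._table = {}
        self._ids = itertools.count(1)

    def register(self, obj):
        handle = next(self._ids)
        self._table[handle] = obj
        return handle  # opaque int, like the U32 "pointer"

    def lookup(self, handle):
        try:
            return self._table[handle]
        except KeyError:
            # a stale/garbage handle is a recoverable error, not a crash
            raise ValueError("invalid or released handle: %r" % handle)

    def release(self, handle):
        self._table.pop(handle, None)

reg = HandleRegistry()
h = reg.register({"device": "modbus-master"})
obj = reg.lookup(h)
```

    A variant in LabVIEW plays the role of the table entry here: the runtime still knows what the data is, so a bad cast is an error rather than memory corruption.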

  11. As described, anyone can take your DVR, unbundle it, and get access to the class. Since this is possible and the class is private, it must be blocked. Technically this doesn't make much sense for LabVIEW classes, since their data is private by default; it makes more sense for clusters, where the enclosed data would otherwise be exposed, so maybe that's why.


    You can make the methods of the class private, but the class itself must be public. Or, you could use a DVR of the base LabVIEW Object class and cast it when you use it, inside your library.

  12. Depends on the database. For example, Oracle does not support multiple inserts in a single query like that. Instead you'd have to use:

    INSERT INTO T_Column (C_Name , C_Status) VALUES('21' , 'Not realised');
    INSERT INTO T_Column (C_Name , C_Status) VALUES('22' , 'Not realised');

    On the one hand this sucks, but on the other hand, if you're using Oracle this is probably the least of your worries. :(


    On a related note, it looks like you are manually generating SQL queries. While fine for development, it's generally recommended to use parameterized statements to help avoid SQL injection attacks.
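    For instance, with Python's sqlite3 driver (the same idea applies to any database API that exposes prepared statements), the values travel separately from the SQL text:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE T_Column (C_Name TEXT, C_Status TEXT)")

# placeholders (?) let the driver handle escaping, so a hostile value
# like "'); DROP TABLE T_Column;--" stays inert data
rows = [("21", "Not realised"), ("22", "Not realised")]
conn.executemany(
    "INSERT INTO T_Column (C_Name, C_Status) VALUES (?, ?)", rows)

names = [r[0] for r in
         conn.execute("SELECT C_Name FROM T_Column ORDER BY C_Name")]
```

    Because the statement text never changes, the database can also prepare it once and reuse the plan across the batch.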

  13. 13 hours ago, drjdpowell said:

    Sure.   I suspect smithd already has a backsaved version, so you might ask him.

    Afraid not, I moved on to 2017. At this point I think your best bet is to use a pre-2017 version (pre-VIM). You can backsave, but the VIMs make it annoying.


    If you are Windows-only, I think the LabVIEW runtime now supports several versions back and forth, so it might be possible to build the 2017 library into a packed project library (very much like a DLL, but pure LabVIEW code) and reference that. Probably not worth it, though. 2015 is 4 years old at this point; it might be worth investigating an update.

  14. The terminology NI was using a few years ago was tag/stream/message, and I believe the descriptions are as follows:

    Messages have a non-fixed rate and mixed data types, and should generally be as lossless as possible with minimal latency -- with the note that anything requiring confirmation of an action must be request-response, which takes you down the rabbit hole of idempotence (i.e. what happens if the response is lost and the request is reissued -- does the customer get two orders?). Messages are usually 1:N (N workers for 1 producer) or N:1 (N clients for 1 process), but this isn't required.

    Streams have a fixed rate and fixed type; latency generally isn't an issue, but throughput is. Losslessness is a must. Usually 1:1.

    Tags are completely lossy, with a focus on minimal latency. The add-on to this would be an 'update', which is basically a message-oriented tag (e.g. a notifier or single-element queue). Usually 1:N, with 1 writer and N readers.

    All three overlap, but I think these three concepts make sense.
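    A rough Python sketch of the three semantics (illustrative names and classes, not an NI API):

```python
import queue
import threading

class Tag:
    """Tag: lossy, latest-value-wins; readers always see the newest write."""
    def __init__(self, initial=None):
        self._lock = threading.Lock()
        self._value = initial

    def write(self, value):
        with self._lock:
            self._value = value  # older values are simply discarded

    def read(self):
        with self._lock:
            return self._value

# Stream: lossless, fixed-type, bounded -- the writer blocks
# (back-pressure) rather than dropping data.
stream = queue.Queue(maxsize=100)

# Message: lossless, mixed payloads, often request-response; a reply
# queue rides along with each request so the answer can find its way back.
def send_request(worker_queue, payload, timeout=1.0):
    reply_queue = queue.Queue(maxsize=1)
    worker_queue.put((payload, reply_queue))
    return reply_queue.get(timeout=timeout)  # a lost reply surfaces here

tag = Tag()
tag.write(1)
tag.write(2)            # overwrites; a reader never sees 1 again
latest = tag.read()

work = queue.Queue()
def _worker():
    payload, reply_q = work.get()
    reply_q.put(payload.upper())
threading.Thread(target=_worker, daemon=True).start()
reply = send_request(work, "order 1")
```

    The timeout on the reply queue is where the idempotence rabbit hole starts: if it expires, the caller cannot tell whether the request was lost or only the response was.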

  15. Of the messaging options, plain queues should be the fastest message-oriented choice, because everything else (I think) is built on top of them. Notifiers, channels, etc. use queues. All of these options are pretty quick.

    Last I remember reading, user event queues rely on some of the same underlying bits, but the UI-oriented, pub-sub nature of the code makes them a bit slower. Still generally fast enough for anything anyone ever uses them for.

    User events are also completely nondeterministic (technically, every option in this category is nondeterministic, but user events behave like garbage if you stress them).

    Property nodes obviously require locking the UI thread and are slow, but the control-values-by-index method is sufficiently fast for most anything.

    If you eliminate the update-oriented part -- just sharing scalar values, in other words -- then the fastest would be a global (writes require a lock, but I think reads don't), then a DVR in read-only mode, then a standard DVR or FGV implementation (both of which need to lock a mutex). These are all faster than the message-oriented variations, obviously, but that's because you get less than half the functionality.


  16. It's similar to flattening a cluster, except it's cross-language. It accomplishes this by having scripts which take a message definition and generate code in each target language. This makes it easy to send a protobuf message, which might be represented in LabVIEW as a cluster*, to C or Java or Python or Go or wherever. Its primary benefit over something like JSON is a slightly more extensive type system and speed.

    This won't get you any of what I described, but if you just need super-basic support for generating a valid message manually: https://github.com/smithed/experiments/tree/master/Protocol Buffers

    It would need a ton of work to actually support scripting. There doesn't seem to be enough of an advantage.


    *I'm pretty sure it always has to be a class, actually, due to things like optional data values.
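    For reference, "generating a valid message manually" is feasible because the protobuf wire format is simple: each field is a varint key ((field_number << 3) | wire_type) followed by the payload. A minimal Python sketch, covering only non-negative integer (wire type 0) fields:

```python
def encode_varint(n):
    """Protobuf base-128 varint: 7 bits per byte, MSB = continuation bit.
    Handles non-negative ints only."""
    out = bytearray()
    while True:
        bits = n & 0x7F
        n >>= 7
        if n:
            out.append(bits | 0x80)  # more bytes follow
        else:
            out.append(bits)
            return bytes(out)

def encode_int_field(field_number, value):
    """Encode one varint-typed (wire type 0) field: key varint, then value varint."""
    key = (field_number << 3) | 0
    return encode_varint(key) + encode_varint(value)

# the canonical example from the protobuf encoding docs: field 1 = 150
msg = encode_int_field(1, 150)  # b'\x08\x96\x01'
```

    Strings, nested messages, and negative ints (zigzag encoding) add more cases, but the key-then-payload shape is the whole format.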
