Everything posted by smithd

  1. It's important not to miss the details on that performance one. They went for the specific use case of very, very underpowered devices, very infrequent sending of data, etc. For example, they assume a new TCP connection for every data packet, unless I misread*, while on the DTLS/CoAP side they ignore security handshaking, assuming you have a factory-installed pre-shared key instead. It looks like it also ignores the fact that CoAP uses a REST model, meaning a request-response cycle. If you include that information, they are basically saying "UDP protocols with no reliability except a super, super barebones re-transmit feature work better than TCP if you close and reopen the connection once per second"..which...yeah. *
  2. If you don't need to use LabVIEW, I recently used this tool: https://github.com/glexey/excel2img You can run it directly from the command line or build it into an exe using pyinstaller.
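If it helps anyone, the Python side is about this small. File names and the range below are placeholders, and as I recall it drives Excel over COM, so Excel has to be installed on the machine:

```python
import excel2img

# Export a specific range of a sheet to a PNG
excel2img.export_img("report.xlsx", "report.png", "Sheet1", "Sheet1!A1:D20")

# Or pass None for the range to export the sheet's used range
excel2img.export_img("report.xlsx", "full_sheet.png", "Sheet1", None)
```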
  3. Lol. Sorry for sidetracking, but I tried to get it to even open and it crashes on my machine. If you believe the KB, at some point someone before me installed a beta version of NXG on what is now my laptop (seems unlikely). The fix is to uninstall NXG, uninstall NI Package Manager, "Remove any supporting NXG files in the root directory, removing any trace of LabVIEW NXG from the machine", and reinstall everything. I asked NI support what "the root directory" was that I'm supposed to delete NXG stuff from, and they told me it meant the C drive 😕. Find an unspecified leftover NXG file somewhere on the C drive and delete it. What a joke. Lol
  4. I've looked at it in the past. It seems weird to target devices whose performance is such that full HTTP is too expensive to parse, yet sufficient for DTLS over UDP and for parsing/returning JSON and XML strings. I just ended up using full HTTP.
  5. I see, yeah, I don't know of any movement on the NI side to improve data comms. Rabbit is 'available' via SystemLink, but...requires SystemLink. It's also just about 2020 and we still don't have any secure network API built into the product.
  6. Oh yeah, I meant you just have to download it for Windows. I know it's pretty easy to compile for Linux RT, but you have to do the compile yourself.
  7. I'm confused by this question -- there is an available ZeroMQ library if that fits your needs. It's not perfect but it's good, and it's up to version 4.2.5 (the current main lib release is 4.3.2). If you don't need Linux RT support you can just use https://github.com/zeromq/netmq
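The API looks about the same in every binding. Purely to show the shape of it (made-up port, and this is the Python binding pyzmq rather than the LabVIEW wrapper), a request/reply round trip:

```python
import zmq

ctx = zmq.Context()

server = ctx.socket(zmq.REP)
server.bind("tcp://*:5555")

client = ctx.socket(zmq.REQ)
client.connect("tcp://localhost:5555")

client.send(b"ping")        # REQ/REP enforces strict request -> reply ordering
print(server.recv())        # b'ping'
server.send(b"pong")
print(client.recv())        # b'pong'

ctx.destroy()
```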
  8. Cross post is here: https://forums.ni.com/t5/LabVIEW/Errror-538179-Modbus-TCP-IP/td-p/3999197/page/2?profile.language=en StephenD in that thread is correct. I don't have LabVIEW installed here, but the error literally just means that you are using the base class for something. The base class has no implementation, so I added an error which says "hey, you just tried to do something with an uninitialized, dead base class". I added this error message because I think I had a case structure where I accidentally selected "use default if unwired", and so that case structure was returning an uninitialized parent object. Other situations to look for:
     • An uninitialized shift register (ie action engine) used before it is initialized
     • Passing the object through a for loop without using shift registers -- if the for loop executes 0 times, the output will be a dead parent object
     • Case structures as described above
     • Diagram disable structures you forgot to wire through
     You should be able to probe your code (turn on 'retain wire values') and see the point where the initialized object gets invalidated. Or you could put breakpoints in your action engine cases and see if one of the error cases is called prior to the Modbus object being initialized. As a general statement I don't use action engines/FGVs because I find them confusing to manage, but then LabVIEW objects are also confusing to manage. The Modbus master object is internally reference based and the wire can be safely branched. I would typically not do this either -- I'd have a single loop which talks to a device and is responsible for writing data to that device or reading data from it.
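This isn't the actual library code -- just a quick Python analogy of that failure mode, where a forgotten branch hands back a parent object whose methods only raise:

```python
class ModbusMasterBase:
    """Parent class with no implementation -- the 'dead' object."""
    def read_holding_registers(self, address, count):
        raise NotImplementedError(
            "called a method on an uninitialized base-class object "
            "(roughly what LabVIEW error 538179 is telling you)")

class TcpModbusMaster(ModbusMasterBase):
    """Concrete child that the Initialize step is supposed to return."""
    def read_holding_registers(self, address, count):
        return [0] * count  # stand-in for a real TCP transaction

def initialize(kind):
    # Forgetting a branch here behaves like "use default if unwired":
    # the caller silently gets the dead parent object back.
    if kind == "tcp":
        return TcpModbusMaster()
    return ModbusMasterBase()

master = initialize("serial")          # oops -- branch not handled
master.read_holding_registers(0, 10)   # raises, just like the 538179 error
```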
  9. I don't really think the metaphor matches. Left-handed scissors are obviously intended for us 10% and are marked as such. In your examples, it's not clear who XControls and PPLs were designed for, nor what "using as intended" actually means. In contrast: left- and right-handed knives. They do this fun thing where they just steadily slide outward and make all your cuts super weird if you're using the wrong hand, but they still kinda cut, so it's very non-obvious. It's funny to watch, and it's not really marked unless you look carefully at the edge, and you also have to know that you were supposed to look and check the edge in the first place, which many, many people do not. And then if the knife slides off the edge of what you're cutting, you might just cut right into your hand. This seems like a more fitting analogy to XControls, myself.
  10. I've definitely used it, although we ended up going with a different db so the code never got used for real. I think I had tried it on LVRT but I can't really remember. I tend to think that talking to a central db from an RT target breaks the nicely distributed nature of those systems, so I'm more likely to use your SQLite library on RT, but I get that test systems and the like (now that PXI RT Linux is a thing) might be more likely to talk directly to a postgres database.
  11. https://stackoverflow.com/questions/39576/best-way-to-do-multi-row-insert-in-oracle
  12. I think if you put the image display in snapshot mode it's synchronous? I agree that IMAQ needs refs, but why does it need named refs? That's the part that's horrifying.
  13. Yes, I don't really care about that, and I will use block diagram cleanup until it's pried from my cold dead hands 👻 Some of the key parts (like the engine) are relatively neat (by my standards), but due to the complexity it's definitely still a hot mess. And of course a lot of the code dealing with all of the data types is scripted, because ain't nobody got time for that. Your other comments are fair though; some definite mistakes were made in the design, but all told I think it does a decent job. Yeah, I made an aborted attempt (https://github.com/LabVIEW-DCAF/ModuleInterface/tree/StreamingAndEvents and https://github.com/LabVIEW-DCAF/ExecutionInterface/tree/StreamsAndEvents) but...then I left NI. The nature of LabVIEW merging is such that those branches are probably now useless. edit: there's probably a few implementation-side branches too, like https://github.com/LabVIEW-DCAF/StandardEngine/tree/StreamsAndEvents
  14. LabVIEW doesn't provide a way to check the status of a TCP connection except by trying to use it. Serial Modbus has no physical way to check the connection except by using it. In both cases you need to use the connection to see if it still works. On the master (client) side you will see any of the TCP error codes (56, 62, and 63 being the most common, as I recall) for the TCP master. For serial masters you will only ever see 56. I suppose some VISA-specific codes are also possible, for example if there are parity bit errors or if you're using a USB-serial device that gets unplugged, that sort of stuff. Modbus protocol errors are -389110, as mentioned above. On the slave (server) side you will never see an error from the data access methods because the access is local. For TCP you can check the status of all connected masters (clients) via a property node, I think. Serial is 1:1 and has no concept of a connection, so I think it just tells you the last error to occur in the serial comms process.
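That "find out by using it" behavior is inherent to TCP rather than to the Modbus library; a plain Python socket sketch (hypothetical request bytes, not the NI API) shows the same thing:

```python
import socket

def poll_device(sock, request):
    """Return the reply bytes, or None if the connection turned out to be dead."""
    try:
        sock.sendall(request)      # a dead connection may only fail here...
        reply = sock.recv(256)     # ...or here; recv returns b"" on a clean close
        if not reply:
            raise ConnectionError("peer closed the connection")
        return reply
    except (ConnectionError, socket.timeout, OSError):
        # Roughly the moment you'd see error 56/62/63 on the LabVIEW master:
        # close and reconnect before the next transaction.
        sock.close()
        return None
```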
  15. cRIO is all Linux, and PXI now supports Linux: https://www.ni.com/product-documentation/55164/en/
  16. I wouldn't do this. It's probably OK in this limited case, but it's possible to make LabVIEW crash this way. A better route is the variant, which has the same capabilities (cast to an opaque untyped blob and back again) but without the danger.
  17. Doing a quick search, C++ doesn't permit access scope for the class itself -- only at the member level. In other words the class itself is always public
  18. As described, anyone can take your DVR, unbundle it, and get access to the class. Since this is possible, and the class is private, it must be blocked. Technically this doesn't make much sense for LabVIEW classes, since their data is private by default; it makes more sense for clusters, where enclosed data is private, so maybe that's why. You can make the methods of the class private, but the class itself must be public. Or, you could use a DVR of the base LabVIEW Object and then cast it when you use it, inside of your library.
  19. Depends on the database. For example, Oracle does not support multiple inserts in a single query like that. Instead you'd have to use:
     INSERT INTO T_Column (C_Name, C_Status) VALUES ('21', 'Not realised');
     INSERT INTO T_Column (C_Name, C_Status) VALUES ('22', 'Not realised');
     On the one hand this sucks, but on the other hand if you're using Oracle this is probably the least of your worries. On a related note, it looks like you are manually generating SQL queries. While fine for development, it's generally recommended to use parameterized statements to help avoid SQL injection attacks (rough sketch below).
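For what it's worth, here is what parameterized inserts look like with Python's built-in sqlite3 module (table and column names borrowed from the example above; the same idea applies to any driver or toolkit that exposes prepared statements):

```python
import sqlite3

conn = sqlite3.connect("example.db")
conn.execute("CREATE TABLE IF NOT EXISTS T_Column (C_Name TEXT, C_Status TEXT)")

rows = [("21", "Not realised"), ("22", "Not realised")]

# Placeholders keep the values out of the SQL text, so something like
# "'); DROP TABLE T_Column; --" gets stored as data instead of executed.
conn.executemany("INSERT INTO T_Column (C_Name, C_Status) VALUES (?, ?)", rows)
conn.commit()
conn.close()
```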
  20. Afraid not, I moved on to 2017. At this point I think your best bet is to use a pre-2017 version (pre-VIM). You can backsave, but the VIMs make it annoying. If you are Windows-only, I think the LabVIEW runtime now supports several versions back and forth, so it might be possible to build the 2017 library into a packed library (very much like a DLL, but pure LabVIEW code) and reference that. Probably not worth it though. 2015 is 4 years old at this point; it might be worth investigating an update.
  21. The terminology NI was using a few years ago was tag/stream/message, and I believe the descriptions are as follows (rough sketch after this post):
     • Messages have a non-fixed rate and mixed data types, and should generally be as lossless as possible with minimal latency, with the note that anything requiring confirmation of action must be request-response, which takes you down the rabbit hole of idempotence (ie what happens if the response is lost and the request is reissued -- does the customer get 2 orders?). Messages are usually 1:N (N workers for 1 producer) or N:1 (N clients for 1 process), but this isn't required.
     • Streams have a fixed rate and a fixed type, and generally latency isn't an issue but throughput is. Losslessness is a must. Usually 1:1.
     • Tags are completely lossy with a focus on minimal latency. The addon to this would be an 'update', which is basically a message-oriented tag (eg notifier, single element queue). Usually 1:N with 1 writer and N readers.
     All three overlap, but I think these three concepts make sense.
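Not NI's API -- just a minimal Python sketch, assuming in-process communication, to contrast the lossless stream/message pattern with a lossy "latest value wins" tag:

```python
import queue
import threading

stream = queue.Queue()  # lossless and ordered: every element matters (stream/message)

class Tag:
    """Lossy current-value store: readers only care about the newest value."""
    def __init__(self, initial=None):
        self._lock = threading.Lock()
        self._value = initial

    def write(self, value):
        with self._lock:
            self._value = value  # silently overwrites; losing old values is fine here

    def read(self):
        with self._lock:
            return self._value

# Stream/message usage: the producer puts, the consumer must take every element.
stream.put({"cmd": "start", "id": 1})
print(stream.get())

# Tag usage: the writer updates as fast as it likes, the reader samples when it wants.
temperature = Tag(0.0)
temperature.write(23.5)
temperature.write(23.7)  # the 23.5 update is lost, by design
print(temperature.read())
```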
  22. Of the messaging options, plain queues should be the fastest message-oriented choice, because everything else (I think) is built on top of them. Notifiers, channels, etc. use queues. All these options are pretty quick. Last I remember reading, user event queues rely on some of the same underlying bits, but the UI-oriented and pub-sub nature of the code makes them a bit slower -- still generally fast enough for anything anyone ever uses them for. User events are also completely nondeterministic (technically every option in this category is nondeterministic, but user events behave like garbage if you stress them). Property nodes obviously require locking the UI thread and suck, but control indices are sufficiently fast for most anything. If you eliminate the update-oriented part -- just sharing scalar values, in other words -- then the fastest would be a global (requires a lock for write, but I think the read side doesn't), then a DVR in read-only mode, then a standard DVR or FGV implementation (both need to lock a mutex). These are all faster than the message-oriented variations, obviously, but that's because you get less than half the functionality.
  23. It's similar to flattening a cluster, except it's cross-language. It accomplishes this by having scripts which take a message definition and generate code in that language. This makes it easy to send a protobuf message, which might be represented in LabVIEW as a cluster*, to C or Java or Python or Go or wherever. Its primary benefit over something like JSON is a slightly more extensive type system and speed. This won't get you any of what I described, but if you just need super basic support for generating a valid message manually: https://github.com/smithed/experiments/tree/master/Protocol Buffers It would need a ton of work to actually support scripting. Doesn't seem like there is enough of an advantage. *I'm pretty sure it always has to be a class, actually, due to things like optional data values.
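For anyone who hasn't seen the generate-code-from-a-definition workflow, this is roughly what it looks like on the Python side (the message and field names are made up for illustration):

```python
# Given a definition like this in example.proto:
#
#     syntax = "proto3";
#     message Measurement {
#       string channel = 1;
#       double value   = 2;
#       int64  time_ns = 3;
#     }
#
# running `protoc --python_out=. example.proto` generates example_pb2.py,
# which plays the role the scripted LabVIEW cluster/class would.
import example_pb2

m = example_pb2.Measurement(channel="ai0", value=1.23, time_ns=1_000_000)
wire_bytes = m.SerializeToString()   # compact binary that any other language can parse

decoded = example_pb2.Measurement()
decoded.ParseFromString(wire_bytes)
print(decoded.channel, decoded.value)
```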