Everything posted by ShaunR

  1. Basically, if you try to use events in "event driven" languages the same way you use them in LabVIEW for 1:1 messaging, everything falls over because the assumptions about the underlying mechanisms are incorrect. I think architectures should be language agnostic.
  2. Yeah. They're not very robust. Try clearing your object cache.
  3. If IT have said that a keep-alive will work, then that is the proper solution and you should be trouble-free from then on. Checking every 55 minutes because they said the idle timeout is one hour is just asking for it to fall over in six months when someone in IT decides 45 minutes would be better.
  4. You can get these symptoms if there is a proxy between you and the target. What tends to happen is that the connection between you and the proxy is kept alive, but the connection from the proxy to the endpoint closes due to inactivity. This means your connection looks good but the tunnel fails when you try to use it. The easiest solution is to heartbeat the connection (see the heartbeat sketch after these posts).
  5. LVA-Tools.Co.Uk is having a post-NI Week Bonanza with 40% off all commercial toolkits. The offer is available for one week, so grab the opportunity to secure your business data and websocket that application.
  6. I don't know much about that module, but if it has that capability then yes, you could do that. Most NI modules are quite expensive, and you would now have a hardware module dependency for just a watchdog. Most Linux kernels have a watchdog feature if the hardware supports it, so I would look into that first (see the kernel watchdog sketch after these posts). If that's not possible, cost is an issue, and it is likely just your application that will hang rather than the OS, then you can spawn a separate process to act as the watchdog.
  7. I think you should start a new thread to discuss it. A hijack of a hijack is a bit much.
  8. Watchdogs usually give you the option to restart the software if it stops responding and are usually hardware driven. If you have a software one, the chances are your watchdog will hang too, so you need an external process that gets kicked every so often. This external app can then forcefully close the application and restart it. You can communicate with the watchdog via TCP/IP or shared memory and just message it every so often from a dedicated loop in the software (see the external watchdog sketch after these posts).
  9. Yes. When mapping LabVIEW variables to other languages you tend to need "tags", and I can see why the CVT is probably the best solution here. You can define a tag name and use the name in JavaScript to update the UI. You can play around with spamming GET requests and consolidating messages into larger update messages, but the CVT would be simpler, easier and require less framework. I do the same for websockets but am able to generate events from the LabVIEW UI changing, so the tags are implicit in the control/indicator names. Of course, that wouldn't work on a cRIO with no UI. Do the latest ones with a UI support UI events? DCAF. Yes.
  10. Most experienced engineers are using messaging systems today. The CVT doesn't really fit with those architectures. Can you elaborate on which use cases the CVT has been whittled down to and what it has been found not appropriate for?
  11. So now you want me to learn how to make shortcuts I don't use as well? The list goes on... Next you'll be telling me to learn to type with all my fingers. Do people still use Google?
  12. It's not that I don't like it. It's that it's not useful to me. I don't know what the primitives are called but I know exactly where they are in the menus - IT'S A VISUAL LANGUAGE! I call this "The Chooser", for example. I'd have to open LabVIEW and mouse over it to find out what it's actually called (and then I'd immediately forget). Words is hard!
      Beta testing isn't a focus group; that's the Idea Exchange.
      It must be hard coded because it only affects the TCP/IP primitives - not network streams etc. This is a deal breaker for me, and for what? Because "Select" on *nix systems only supports 1024? On Linux they have gotten around it by using poll and epoll, so why not upgrade the *nix platform to more than 1024 instead of downgrading Windows? On Linux you just simply recompile, right? This is the worst type of specmanship.
  13. Right. So you are filtering on the receiver. There is nothing wrong with anything you are saying, but there are four things that make a clear case for one or the other in the 1:1 cases you describe (in my architectures):
      Events cannot be referenced by name. For me this is a problem since my messaging system is string based (like SCPI). The routing is based on module names, so this rules out events for command/control queues. This is why I invented named events when I found out about the VIM.
      I can have more control. I can make the queue lossy (rate limit for comms) or flush it (abort), and I can insert exit messages at the front (see the named-queue sketch after these posts). You will probably come back and say that's what you want from Events too, but you already have that if you are just going to use an event as a queue with a case statement.
      You can't "lazy load" events. With a queue I can start up the main and poke some values onto it. Whenever the sub-modules start up, the values are still there when they come to read their queue. Events don't do this, so you end up having to detect when a module is loaded and then send it a message (external state again).
      It's not portable. It is using tribal knowledge of a specific implementation of events. It is more of an abuse of events, knowing they have a queue in the LabVIEW implementation, so the architecture will not work in the other, true event driven languages that I also program in (Free Pascal, for example).
  14. It's actually very straightforward. The main VI pokes the servers to do things and the listeners react to the responses from the servers. Each server and listener is an API with a pre-defined set of actions and commands. They all follow the same template. Keep adding APIs (services and listeners) and keep poking them to do things. Each subsystem is standalone (modular), so you can take one, analyse it, then run and test it in isolation, meaning you can break down even really complex systems into easily digestible chunks. Don't forget to run the TCP/IP client first so you can see the Telemetry that hooks all the messages.
  15. 1 - Yes. If you have two queues, the state and order of operation is external to the subsystems. This means you can place something on one queue, wait for a confirmation (if it is important to do so) and then place the copy on the other. You cannot do this with events.
      2 - A key point here is message filtering and, consequently, local state and system state. You can consider it like this. When using a queue you filter events on the sender side. You can decide which queue gets which messages, and you can "crank" the system through its various states by being selective in the messages you send and to whom. So you can move the slide, adjust the camera focus and then take a picture, with each subsystem responsible for its little bit. This means you don't flood messages and you have strict control over the interaction between subsystems. With events, you filter messages on the receiver side. This means that state is local to the subsystem and it is oblivious to the "system state". All messages are received by those registered and they immediately start to do their thing. There are two schools here: those that have a single or shared event and each subsystem filters for the messages relevant to it, or a separate registration specific to that subsystem, which is the same as a queue but you can't reference it by name (hence me wanting VIMs to encapsulate events).
      3(a) - The point I was trying (and possibly failing) to make is that with a single event to read the file and three separate event cases to Open, Read and Write, you don't know which one will happen first. With a queue you can send to each queue separately, in order.
      3(b) - Agreed. But tell the JKI state machine that.
      4 - I'm not sure I get what you are saying here and I think you might have me confused with someone else. I don't need any more tools for queues. I'm sure I could find a use for queue priorities but I do fine without them; I don't use queues that way. I also don't need event flushing or previewing on events because I don't design systems that message flood. Each message is sent when it is required and only when it is required. I do have things like heartbeats and maybe the odd burst, but my systems are not 1000 balls tossed into the air, all emitting and consuming messages just to stay in the air. Mine are all sat on a shelf and I poke each one when I need it to do something.
  16. The "CLI or Die" posse are definitely in the driving seat at NI and perhaps that's why QD is so popular. If you add superduperCLIorDIEquickdrop=true to the LabVIEW ini file in 2016 you can do this Expect this in LabVIEW 2017, now
  17. Just to throw some more wood on the fire of experimenting... Queues are a many-to-one architecture (aggregation). You can have many providers and they can post to a single queue. They also have a very specific access order, and this is critical. Events are a one-to-many architecture (broadcast): they have a single provider but many "listeners". Events are not guaranteed to be acted upon in any particular order across equivalent listeners - by that I mean if you send an event to two listeners, you cannot guarantee that one will execute before the other.
      For control, queues tend to be more precise and efficient, since you usually want a single device or subsystem to do something that only it can do, and usually in a specific order. As a trivial example, think about reading a file. You need to open it, read it and close it. If each operation was event driven, you could not necessarily guarantee the order without some very specific knowledge about the underlying mechanisms. With queues you can poke the instructions onto the queue and be guaranteed they will be acted upon in that order. This feature is why we have Queued Message Handlers.
      Now, that leads us to a bit of a dichotomy. We like queues because of the ordering, but we also like events because they are broadcast. So what would a hybrid system give us? This is now my standard architecture for complex systems: a control queue with event responses in a service oriented architecture, where the services can be orchestrated via their queues and listeners can react to events in the system (see the hybrid sketch after these posts). If you want to see what that looks like, take a look at the VIM HAL Demo. It is a message based, service oriented architecture with self contained services that "plug in" (not dynamically, but at design time).
  18. FYI, I'm not stuck on 2009. Nothing has been added since 2009 that warrants the upgrade risk, and 2009 is superior in almost all respects (stability, performance, no 1024 TCP/IP limit). In fact most of my toolkits are LV2012 and later, but I develop them in 2009. But you are wrong: a lot has changed. They have dropped Linux and Mac 32-bit. Linux 64-bit was only supported in very recent versions (2015?), so all those with production machines (<2015?) have no upgrade path apart from reinstalling the OS. They've crippled the TCP/IP on Windows - for what? To rationalise a platform limitation that was circumvented over a decade ago? This will make you cry too: Linux has got UTF-8 support while Windows can go suck eggs. P.S. You can be as snarky as you want.
  19. Rule #1 about drag and drop: don't move the drop target while they are dragging. It's going to get very old very quickly. Anyway, that's probably the least of our worries. In their infinite wisdom they have hobbled the TCP/IP VIs in a way that means you can't write multi-client servers (like websockets). So it's definitely another year with 2009.
  20. I've looked through the options but can't find one to turn off the new 2016 feature of the whole diagram going haywire when moving an object. It makes it completely unusable for me. What genius thought this was an "improvement"? And how do I get back a compact view of the icons rather than the screen-hog version? Rolf knows me well.
  21. Follow the link I gave you in the other thread. It is the droid you are looking for.
  22. Zlib isn't a dependency of the JSON library, is it? You need to be over here, I think.
  23. Create a local variable. Set it to read. Use the pointer tool to grab it and create a copy. Works on anything, not just variables.
  24. The BMJ problem isn't about dependency, per se. It's about useless dependency - all the dependencies that aren't needed and just add baggage. In your analogy with OpenG, it is more like you wanted to use Cluster to Array of VData__ogtk.vi but you don't just need the dependent VIs, you need the error package, the strings package and the variant package, even though only one VI is needed from each (I don't know if that particular VI does require those, but I hope you get the point). This is the same criticism levelled at lvlibs, by the way.
      They aren't really solutions; they are different implementations of walking a hierarchy. I wouldn't have used this argument because it is an implementation problem with a number of non-equivalent solutions. I refer you back to my previous statement about OOP being a philosophical thought experiment, and this is just saying there are many ways of skinning that cat and all are compromises. You can model this easily in LabVIEW, by the way, because execution depends on data arriving at the objects/VIs rather than choosing a preferential order of the hierarchy. In fact, you will constantly hear me talking about diamond or bi-pyramidal architectures to maximise reuse and modularity. What if I said to you that placing a normal VI on an empty diagram is inheritance and you can override by modifying the data on the wires in the new diagram? What would placing multiple VIs on a diagram be?
      Yeah. I'm not entirely sure here either. I think he has again mistaken an implementation issue for an OOP issue. Objects in most languages are pointers to mutexed structures. They are non-portable between languages, so I think he is describing a particular issue with his language - that perhaps the underlying mutexes etc. aren't what they're supposed to be. I don't really know.
      True. But his whole article was about his disillusionment with promises unfulfilled. If you took out dynamic dispatch from LVPOOP (the run-time polymorphism), what are you left with? Errm, classical LabVIEW with an instance-scoped cluster ;). In fact the polymorphic VI is the static version of dynamic dispatch, and it suffers from the same code bloat and static linking as classes (and why we want VI macros). I think here he's just lamenting that it was a bit of marketing hype that everyone fell for - including him.
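
Heartbeat sketch for post 4 above, in Python rather than LabVIEW: it sends a small application-level message whenever the connection has been idle for a while so an intermediate proxy doesn't drop the tunnel. The "PING" payload, the 30 second interval and the host/port are illustrative assumptions, not part of any toolkit.

    # Sketch: keep an idle TCP connection warm through a proxy by sending a
    # tiny application-level heartbeat when nothing has been sent recently.
    import socket
    import threading
    import time

    HEARTBEAT_INTERVAL = 30.0              # seconds of idle time before we ping

    def heartbeat(sock, stop, last_tx):
        while not stop.is_set():
            if time.monotonic() - last_tx[0] >= HEARTBEAT_INTERVAL:
                try:
                    sock.sendall(b"PING\n")            # keeps the proxy tunnel alive
                    last_tx[0] = time.monotonic()
                except OSError:
                    stop.set()                         # connection really is dead
            stop.wait(1.0)

    sock = socket.create_connection(("example.com", 5025), timeout=5)
    last_tx = [time.monotonic()]                       # time of last send, shared
    stop = threading.Event()
    threading.Thread(target=heartbeat, args=(sock, stop, last_tx), daemon=True).start()
    # Normal traffic goes through sock as usual; update last_tx[0] after each send.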
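
Kernel watchdog sketch for post 6 above: a minimal use of the standard Linux /dev/watchdog interface (softdog or a hardware driver). Once the device is opened, something must write to it periodically or the kernel resets the machine. The kick period and the application_is_healthy() check are assumptions for illustration.

    # Sketch: kick the Linux kernel watchdog while the application is healthy.
    # Requires the softdog module or a hardware watchdog driver to be loaded.
    import time

    KICK_PERIOD = 10.0                  # must be shorter than the watchdog timeout

    def application_is_healthy():
        return True                     # placeholder for a real health check

    with open("/dev/watchdog", "wb", buffering=0) as wd:
        try:
            while application_is_healthy():
                wd.write(b"\0")         # "kick" so the kernel knows we are alive
                time.sleep(KICK_PERIOD)
        finally:
            wd.write(b"V")              # magic close: disarm the watchdog on clean exit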
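
External watchdog sketch for post 8 above: the application sends a short TCP message every few seconds from a dedicated loop, and a separate watchdog process kills and restarts it if the kicks stop arriving. The port, timeout and command line are assumptions for illustration.

    # Sketch: external software watchdog that restarts the application when it
    # stops "kicking" over TCP. Port, timeout and command are illustrative.
    import socket
    import subprocess

    PORT, TIMEOUT = 6000, 30.0
    APP_CMD = ["my_labview_app.exe"]                   # hypothetical application

    def run_watchdog():
        app = subprocess.Popen(APP_CMD)
        srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        srv.bind(("127.0.0.1", PORT))
        srv.listen(1)
        srv.settimeout(TIMEOUT)
        while True:
            try:
                conn, _ = srv.accept()
                conn.settimeout(TIMEOUT)
                while conn.recv(16):                   # any data counts as a kick
                    pass                               # keep waiting for the next one
            except socket.timeout:
                pass                                   # no kick within TIMEOUT
            app.kill()                                 # forcefully close the hung app
            app.wait()
            app = subprocess.Popen(APP_CMD)            # ...and restart it

    run_watchdog()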
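
Named-queue sketch for post 13 above: command queues looked up by module name (string routing, SCPI style), optionally lossy, flushable, and able to take an exit message at the front. The class and message names are assumptions for illustration, not an existing API.

    # Sketch: queues registered by name so string-based routing can find them,
    # with lossy, flush and send-to-front behaviour.
    from collections import deque
    from threading import Lock

    _registry = {}
    _reg_lock = Lock()

    class NamedQueue:
        def __init__(self, name, maxlen=None):
            self._q = deque(maxlen=maxlen)     # maxlen set => lossy (drops oldest)
            self._lock = Lock()
            with _reg_lock:
                _registry[name] = self         # referenced by name from now on

        def send(self, msg):
            with self._lock:
                self._q.append(msg)

        def send_to_front(self, msg):          # e.g. "EXIT" jumps the pending work
            with self._lock:
                self._q.appendleft(msg)

        def flush(self):                       # abort: throw away pending messages
            with self._lock:
                self._q.clear()

        def receive(self):
            with self._lock:
                return self._q.popleft() if self._q else None

    def queue_by_name(name):
        with _reg_lock:
            return _registry[name]

    # "Lazy loading": the main can poke values before the module starts reading;
    # they are still there when the module finally services its queue.
    NamedQueue("CAMERA", maxlen=100)
    queue_by_name("CAMERA").send("FOCUS 3.2")
    queue_by_name("CAMERA").send_to_front("EXIT")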
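
Hybrid sketch for post 17 above: each service owns a command queue (many-to-one, strictly ordered) and publishes responses onto a broadcast event bus (one-to-many) that any number of listeners can watch. All names are illustrative, not the VIM HAL Demo's actual API.

    # Sketch: control queues for orchestration, broadcast events for responses.
    import queue
    import threading

    class EventBus:                            # one-to-many broadcast
        def __init__(self):
            self._subs = []
            self._lock = threading.Lock()

        def subscribe(self):
            q = queue.Queue()
            with self._lock:
                self._subs.append(q)
            return q

        def publish(self, event):
            with self._lock:
                for q in self._subs:
                    q.put(event)

    class Service:                             # many-to-one command queue, ordered
        def __init__(self, name, bus):
            self.name, self.bus = name, bus
            self.commands = queue.Queue()
            threading.Thread(target=self._run, daemon=True).start()

        def _run(self):
            while True:
                cmd = self.commands.get()      # commands handled in arrival order
                if cmd == "EXIT":
                    break
                self.bus.publish(f"{self.name}: done {cmd}")   # event response

    bus = EventBus()
    listener = bus.subscribe()                 # telemetry-style listener sees everything
    stage = Service("STAGE", bus)
    camera = Service("CAMERA", bus)

    stage.commands.put("MOVE 10")              # poke the queues in order...
    print(listener.get())                      # ...and react to the broadcast responses
    camera.commands.put("SNAP")
    print(listener.get())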