Everything posted by ShaunR
-
CRIO-style architecture for Raspberry Pi
ShaunR replied to dmurray's topic in LabVIEW Community Edition
Right. So you are filtering on the receiver. There is nothing wrong with anything you are saying, but there are four things that make a clear case for one or the other in the 1:1 cases you describe (in my architectures):
1. Events cannot be referenced by name. For me this is a problem since my messaging system is string based (like SCPI). The routing is based on module names, so this rules out events for command/control queues. This is why I invented named events when I found out about the VIM.
2. I can have more control. I can make the queue lossy (rate limit for comms) or flush it (abort), and I can insert exit messages at the front. You will probably come back and say that's what you want from events too, but you already have that if you are just going to use an event as a queue with a case statement.
3. You can't "lazy load" events. With a queue I can start up the main and poke some values onto it. Whenever the sub-modules start up, the messages are still there when they come to read their queue. Events don't do this, so you end up having to detect when a module is loaded and then send it a message. (External state again.)
4. It's not portable. It relies on tribal knowledge of a specific implementation of events. It is more of an abuse of events, knowing they have a queue in the LabVIEW implementation, so the architecture will not work in other, true event-driven languages that I also program in (Free Pascal, for example).
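The "lazy load" point can be sketched in text form. This is a hypothetical Python stand-in for LabVIEW-style named queues (the registry, `named_queue`, and the message strings are all illustrative, not any real API): obtaining a queue by name creates it on first use, so messages posted before a module starts are still waiting when it first reads.

```python
import queue

# Hypothetical registry emulating named queues: asking for a queue
# by name creates it if it does not exist yet.
_registry = {}

def named_queue(name):
    """Return the queue registered under `name`, creating it on first use."""
    return _registry.setdefault(name, queue.Queue())

# The main "pokes" values onto a module's queue before the module exists...
named_queue("camera").put("INIT")
named_queue("camera").put("FOCUS 3.5")

# ...and when the module finally starts, the messages are still waiting,
# in the order they were sent.
q = named_queue("camera")
print(q.get())  # -> INIT
print(q.get())  # -> FOCUS 3.5
```

An event registration, by contrast, would have had to exist before the first message was broadcast, which is exactly the "detect when a module is loaded, then send" problem described above.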
CRIO-style architecture for Raspberry Pi
ShaunR replied to dmurray's topic in LabVIEW Community Edition
It's actually very straightforward. The main VI pokes the servers to do things and the listeners react to the responses from the servers. Each server and listener is an API with a pre-defined set of actions and commands. They all follow the same template. Keep adding APIs (services and listeners) and keep poking them to do things. Each subsystem is standalone (modular), so you can take one, analyse it, then run and test it in isolation, meaning you can break down even really complex systems into easily digestible chunks. Don't forget to run the TCPIP client first so you can see the telemetry that hooks all the messages.
-
CRIO-style architecture for Raspberry Pi
ShaunR replied to dmurray's topic in LabVIEW Community Edition
1 - Yes. If you have two queues, the state and order of operation is external to the subsystems. This means you can place something on one queue, wait for a confirmation (if it is important to do so) and then place the copy on the other. You cannot do this with events.
2 - A key point here is message filtering and, consequently, local state versus system state. You can consider it like this. When using a queue, you filter messages on the sender side. You can decide which queue gets which messages, and you can "crank" the system through its various states by being selective in the messages you send and to whom. So you can move the slide, adjust the camera focus and then take a picture, with each subsystem responsible for its little bit. This means you don't flood messages, and you have strict control over the interaction between subsystems. With events, you filter messages on the receiver side. This means that state is local to the subsystem and it is oblivious to the "system state". All messages are received by those registered, and they immediately start to do their thing. There are two schools here: those that have a single or shared event where each subsystem filters for the messages relevant to it, or a separate registration specific to each subsystem, which is the same as a queue but you can't reference it by name (hence me wanting VIMs to encapsulate events).
3(a) - The point I was trying (and possibly failing) to make is that with a single event to read the file and three separate event cases to open, read and write, you don't know which one will happen first. With a queue you can send to each queue separately, in order.
3(b) - Agreed. But tell the JKI state machine that.
4 - I'm not sure I get what you are saying here, and I think you might have me confused with someone else. I don't need any more tools for queues. I'm sure I could find a use for queue priorities, but I do fine without them. I don't use queues that way.
I also don't need event flushing or previewing because I don't design systems that message-flood. Each message is sent when it is required and only when it is required. I do have things like heartbeats and maybe the odd burst, but my systems are not 1000 balls tossed into the air, all emitting and consuming messages just to stay in the air. Mine all sit on a shelf and I poke each one when I need it to do something.
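The sender-side versus receiver-side filtering distinction in point 2 can be sketched as follows. This is a minimal Python analogy (the slide/camera names and message strings are hypothetical, borrowed from the example above, and the broadcast helpers are illustrative, not any real event API):

```python
import queue

# --- Sender-side filtering (queues): the sender decides who gets what,
# --- so ordering and "system state" live with the sender.
slide_q, camera_q = queue.Queue(), queue.Queue()

slide_q.put("MOVE 10")       # only the slide subsystem ever sees this
camera_q.put("FOCUS 3.5")    # only the camera subsystem sees these
camera_q.put("SNAP")

# --- Receiver-side filtering (broadcast events): every registered
# --- listener receives every message and must filter locally.
listeners = []

def register(handler):
    listeners.append(handler)

def broadcast(msg):
    for handler in listeners:
        handler(msg)

# Each listener filters for what is relevant to it and ignores the rest.
register(lambda m: print("slide :", m) if m.startswith("MOVE") else None)
register(lambda m: print("camera:", m) if m.startswith(("FOCUS", "SNAP")) else None)

broadcast("MOVE 10")   # both listeners receive it; only the slide acts
broadcast("SNAP")      # both receive it; only the camera acts
```

In the queue version the sender can sequence move, focus, snap deliberately; in the broadcast version each subsystem reacts immediately to whatever it decides is relevant, oblivious to the overall system state.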
How to turn off diagram dancing in LV2016
ShaunR replied to ShaunR's topic in Development Environment (IDE)
The "CLI or Die" posse are definitely in the driving seat at NI, and perhaps that's why QD is so popular. If you add superduperCLIorDIEquickdrop=true to the LabVIEW ini file in 2016, you can do this. Expect this in LabVIEW 2017, now.
-
CRIO-style architecture for Raspberry Pi
ShaunR replied to dmurray's topic in LabVIEW Community Edition
Just to throw some more wood on the fire of experimenting... Queues are a many-to-one architecture (aggregation): you can have many providers, and they can all post to a single queue. They also have a very specific access order, and this is critical. Events are a one-to-many architecture (broadcast): they have a single provider but many "listeners". Events are not guaranteed to be acted upon in any particular order across equivalent listeners; by that I mean that if you send an event to two listeners, you cannot guarantee that one will execute before the other. For control, queues tend to be more precise and efficient, since you usually want a single device or subsystem to do something that only it can do, and usually in a specific order. As a trivial example, think about reading a file. You need to open it, read it and close it. If each operation was event driven, you could not necessarily guarantee the order without some very specific knowledge about the underlying mechanisms. With queues, you can poke the instructions onto the queue and be guaranteed they will be acted upon in that order. This feature is why we have Queued Message Handlers. Now, that leads us to a bit of a dichotomy. We like queues because of the ordering, but we also like events because they are broadcast. So what would a hybrid system give us? This is now my standard architecture for complex systems: a control queue with event responses in a service-oriented architecture, where the services can be orchestrated via their queues and listeners can react to events in the system. If you want to see what that looks like, take a look at the VIM HAL Demo. It is a message-based, service-oriented architecture with self-contained services that "plug in" (not dynamically, but at design time).
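The hybrid pattern described above can be sketched in a few lines of Python. This is an illustrative stand-in, not the VIM HAL Demo itself: the `Service` class, its method names, and the command strings are all hypothetical. Each service takes ordered commands from its own queue (many-to-one control) and publishes responses to any number of listeners (one-to-many events).

```python
import queue

class Service:
    """Hypothetical service: orchestrated via an ordered command queue,
    responding via broadcast to registered listeners."""
    def __init__(self, name):
        self.name = name
        self.commands = queue.Queue()   # many-to-one: ordered control
        self.listeners = []             # one-to-many: broadcast responses

    def emit(self, event):
        # Broadcast the response; listener execution order is not the
        # service's concern, mirroring event semantics.
        for listener in self.listeners:
            listener(self.name, event)

    def run_once(self):
        cmd = self.commands.get()       # commands are consumed in order
        self.emit(f"DONE {cmd}")        # respond as an event, not a queue

files = Service("files")
files.listeners.append(lambda src, ev: print(src, "->", ev))

# Ordered control: open, read, close are guaranteed to run in this order.
for cmd in ("OPEN log.txt", "READ", "CLOSE"):
    files.commands.put(cmd)
for _ in range(3):
    files.run_once()
```

The queue gives the orchestrator strict sequencing over each service, while the event side lets any number of listeners (telemetry, loggers, other services) react without the service knowing who they are.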
How to turn off diagram dancing in LV2016
ShaunR replied to ShaunR's topic in Development Environment (IDE)
FYI, I'm not stuck on 2009. Nothing has been added since 2009 that warrants the upgrade risk, and 2009 is superior in almost all respects (stability, performance, no 1024 TCPIP limit). In fact most of my toolkits are LV2012 and later, but I develop them in 2009. But you are wrong: a lot has changed. They have dropped Linux and Mac 32-bit. Linux 64-bit was only supported in very recent versions (2015?), so all those with production machines (<2015?) have no upgrade path apart from reinstalling the OS. They've crippled the TCPIP on Windows - for what? To rationalise a platform limitation that was circumvented over a decade ago? This will make you cry too: Linux has got UTF-8 support while Windows can go suck eggs. P.S. You can be as snarky as you want.
-
How to turn off diagram dancing in LV2016
ShaunR replied to ShaunR's topic in Development Environment (IDE)
Rule #1 about drag and drop: don't move the drop target while they are dragging. It's going to get very old very quickly. Anyway, that's probably the least of our worries. In their infinite wisdom they have hobbled the TCPIP VIs in a way that means you can't write multi-client servers (like websockets). So it's definitely another year with 2009.
-
I've looked through the options but can't find one to turn off the new 2016 feature of the whole diagram going haywire when moving an object. It makes it completely unusable for me. What genius thought this was an "improvement"? And how do I get back a compact view of the icons rather than the screen-hog version? Rolf knows me well.
-
Follow the link I gave you in the other thread. It is the droid you are looking for.
-
Zlib isn't a dependency of the JSON library, is it? You need to be over here, I think.
-
Local variables default to write
ShaunR replied to Neil Pate's topic in Development Environment (IDE)
Create a local variable. Set it to read. Use the pointer tool to grab it and create a copy. Works on anything, not just variables.
-
The BMJ problem isn't about dependency, per se. It's about useless dependency: all the dependencies that aren't needed and just add baggage. In your analogy with OpenG, it is more like you wanted to use Cluster to Array of VData__ogtk.vi but you don't just need the dependent VIs; you need the error package, the strings package and the variant package, even though only one VI is needed from each (I don't know if that particular VI does require those, but I hope you get the point). This is the same criticism leveled at lvlibs, by the way. They aren't really solutions; they are different implementations of walking a hierarchy. I wouldn't have used this argument because it is an implementation problem with a number of non-equivalent solutions. I refer you back to my previous statement about OOP being a philosophical thought experiment; this is just saying there are many ways of skinning that cat, and all are compromises. You can model this easily in LabVIEW, by the way, because execution depends on data arriving at the objects/VIs rather than choosing a preferential order of the hierarchy. In fact, you will constantly hear me talking about diamond or bi-pyramidal architectures to maximise reuse and modularity. What if I said to you that placing a normal VI on an empty diagram is inheritance, and you can override by modifying the data on the wires in the new diagram? What would placing multiple VIs on a diagram be? Yeah, I'm not entirely sure here either. I think he has again mistaken an implementation issue for an OOP issue. Objects in most languages are pointers to mutexed structures. They are non-portable between languages, so I think he is describing a particular issue with his language; perhaps the underlying mutexes etc. aren't what they're supposed to be. I don't really know. True. But his whole article was about his disillusionment with promises unfulfilled. If you took out dynamic dispatch from LVPOOP (the run-time polymorphism), what are you left with? Errm, classical LabVIEW with an instance-scoped cluster ;). In fact the polymorphic VI is the static version of dynamic dispatch, and it suffers from the same code bloat and static linking as classes (and why we want VI macros). I think here he's just lamenting that it was a bit of marketing hype that everyone fell for - including him.
-
It may have been an over-generalisation. I was referring particularly to Javascript and Python, so I am not sure if things like object passing and closures are acceptable in all functional languages.
-
Is your IT department remote-desktopping into the machine? (Clutching at straws here.)
-
When we wire VIs together they normally carry simple data types (numerics, strings, arrays etc.). The wire represents a single data type. When we pass a cluster between VIs we are now passing (potentially) a number of data types (the wire is "fatter"). A cluster of clusters is "fatter" still. At this point, it is anthropomorphic silliness and has no effect on the code. This isn't quite what I mean by "fat" wires, since there is something missing from the definition, but it illustrates where the idea comes from. When we get to LVOOP we are passing objects. In LabVIEW terms these are clusters too, but in addition there is also dynamic dispatch. These wires not only transfer data (the private cluster) but also a behavioural component. In LabVIEW this is hidden under the guise of dynamic dispatch, but in other OOP languages they would be the methods, and in Javascript (a functional language) we can easily see the nested functions (and oh, my god... the brackets). Moreover, a class can contain other classes, so its behaviour can be dependent on a number of previous class behaviours and inputs. So to predict a VI's output from its input becomes extremely difficult, and assessment in isolation almost impossible. This is what I mean by "fat" wires. The wires are opaque behaviour modifiers crammed with data and dependent on upstream (or is it downstream?) behavioural results. In other languages they are imaginary, but in LabVIEW we actually get to see them and even decide their colours and patterns (cool, huh?). This is why when you wanted the banana, you also needed the gorilla... etc. Fat wires mean dependency.
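The "fat wire" idea translates to text languages too. In this minimal Python sketch (the `Channel` class, `calibrate`, and `read` are hypothetical names, invented purely for illustration), a plain function's output is predictable from its input, while an object on the "wire" carries hidden state and behaviour, so the same call can yield different results depending on upstream history:

```python
# A "thin" wire: output is fully determined by the input alone.
def double(x):
    return 2 * x

# A "fat" wire: the object carries hidden state and behaviour, so the
# same input can produce different outputs depending on what happened
# upstream. Assessing read() in isolation is no longer possible.
class Channel:
    def __init__(self):
        self._gain = 1

    def calibrate(self, gain):
        # Upstream behaviour quietly modifies what travels on the wire.
        self._gain = gain

    def read(self, raw):
        return raw * self._gain

ch = Channel()
print(ch.read(10))   # -> 10
ch.calibrate(3)      # somewhere upstream, the behaviour changed
print(ch.read(10))   # -> 30: same input, different output
```

The second `read` cannot be predicted without knowing the object's full history, which is exactly why the banana drags the gorilla along with it.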
-
Then why do it?
-
Functional programming still has fat wires, the greatest barrier to reuse. I don't think all the "cool kids" are going over to them because of some perceived elegance of the paradigm. Let's also be honest: it's really Javascript and Python we are talking about (because I bet they aren't leaping for joy over Erlang or Lisp), and mainly because of browsers and servers. The trend is for service-oriented design, and JavaScript is used by the clients and Python by [Linux] servers. LVPOOP may be just a fancy way of hiding a clustersaurus, but it brings a lot of dependency baggage with it to fulfil its OOP role. Where it excels is as an instance-scoped variable, so it's great for small, self-contained state machines where state is local to the instance (like comms authentication), but you are back to all the usual nightmares outside of that.
-
OOP is a philosophical thought experiment that gets more and more complicated as they try to implement it in real situations and cover up the failings with axioms. Side with me
-
-
Well, a recent one that springs to mind is the uncontrollable JIT compiling of Windows 10. You can argue that you could specify that hardware driver suppliers only give you precompiled assemblies, but it's just more crap to do with .NET that I avoid by not using it.
-
Not sure where I read it (damn my FIFO memory), but I learnt a long time ago that .NET blocks "run queues" and DLLs don't (notice I don't talk about threads here). I may have dreamt it, but I'm sure Rolf will come along and clear it all up for us (differences in thread mechanics, task switching, overheads etc.). I'm not the only one that is anti-.NET/ActiveX, and it is usually from experience rather than band-wagoning. I'm not sure where you got that idea from, since I said in the previous post: If you look back through my posting history I also say that DLLs are a last resort when you can't do it in LabVIEW. I see it in terms of project risk rather than difficulty and, out of 10, .NET is 9 and DLLs are about 3. As for hunting through .h files to find the type: that's just lack of documentation, and a bit weak. I don't like searching the Internet to find out whether that green wire (they are all green) is a type or an object, needs a constructor or has a default and doesn't care. Is that different? It's really familiarity with the API that's important, rather than the calling mechanics.
-
Yes. I think the confusion is that subroutines aren't really a priority, rather another type of VI. I suppose I know that intuitively, so didn't see the relevance when you talked about in-lining and priorities. Source: Chapter 9.9. We should try and get that document into our wiki/blog/articles or whatever; it's the definitive guide to LabVIEW under the hood.
-
I'm not quite sure what you are getting at here. You start off with subroutines and are worried about inline priorities at the end (which I'm pretty sure are ignored, and I'm not sure how you tested that they weren't). An answer to the obvious question, though, is: yes, it is intended and has always been so. It is a restriction on use, in the same way that class methods can only be "shared" reentrant and not preallocated.
-
Your thread, your call. None of that requires .NET. If you think about it, that would be a real problem for Linux or VxWorks, as .NET is a Windows-only technology (I don't count Mono). The follow-on from that is that most, if not all, of the LabVIEW primitives don't use .NET (JSON and XML are NI parsers implemented in the run-time, I believe), and where .NET is used it is a Windows-only toolkit (Office, for example), so it is not very attractive to me. This may be a sweeping generalisation, and they may have leveraged some .NET under the hood for some things like web services, but I probably don't use them or found alternatives many years ago before .NET was a twinkle in the milkman's eye. I avoid XML almost as much as .NET. JSON: I have my own, but drjdpowell's would be my second choice over the NI native one (which isn't .NET anyway). For databases, I have my own cross-platform SQLite one based on the [non-.NET] SQLite binaries. MySQL-type databases are just strings over TCPIP, so that's no hardship, but there is an excellent alternative to the NI toolkit which is open source and was free when you had to pay for NI's offering (which my google-fu is failing miserably at finding again). Where .NET usually comes in with databases is with the ODBC abstraction, which is nice but unnecessary. As a sort of addendum: for OS-shipped features, there is very little in .NET that you cannot do with calls to the OS Win32 binaries, and you are not saddled with the huge overheads and caveats.