Posts posted by ShaunR

  1. So, I currently have a reusable connection process: you can easily plug in override VIs for open/close/read/write and use basically any connection type you want (UDP, TCP, VISA etc.). As it stands, the process handles multiple connections, but when a message is sent out it goes to all client connections. We are going to add some request-response functionality, which will require sending an outgoing response to the specific client connection that made the request. I am not sure how to manage which connection sent the message and needs the response. One idea I had was to give each connection an ID: when a request is received, pass the request with the ID to the loop that processes the data, which then sends the result back with the ID so the connection process knows where to send it. I could also use the refnum, but I don't like the idea of other processes having access to the connection references, even if it's protected by being wrapped in a class, because I would have to add error handling for lost connections outside the process that already has the code to manage them.

     

    So, I'm curious how people have handled this and if my approach seems reasonable. 

     

    Sounds to me like Dispatcher.
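
    The ID-tagging approach described above can be sketched roughly like this (a Python stand-in for the LabVIEW process; all names here are hypothetical, not from any actual toolkit):

```python
import queue

# Registry owned by the connection process: connection ID -> connection object.
connections = {}

requests = queue.Queue()   # connection process -> processing loop
responses = queue.Queue()  # processing loop -> connection process

def on_request(conn_id, payload):
    # Tag the incoming request with the originating connection's ID.
    requests.put((conn_id, payload))

def worker_step():
    # The processing loop never touches connection references; it only
    # sees the opaque ID and passes it back with the result.
    conn_id, payload = requests.get()
    result = payload.upper()          # placeholder for real processing
    responses.put((conn_id, result))

def dispatch_response():
    # Only the connection process resolves IDs to real connections,
    # so lost-connection handling stays in one place.
    conn_id, result = responses.get()
    conn = connections.get(conn_id)   # may be gone if the client dropped
    if conn is not None:
        conn.send(result)
```

    The processing loop stays decoupled from the transport: if a connection dies, the lookup simply fails inside the connection process, which already owns that error handling.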

  2. I'm attempting to set up a build server architecture such that I can execute my builds in parallel in remote application instances of LabVIEW. Initial tests are successful insomuch as the builds complete on the servers but there's an issue of the server becoming unresponsive (error 1130) during the build, preventing my client application from gathering results.

     

    My initial hack is to set the ping delay on the connection to greater than the expected build time. This seems to work-- I haven't seen a timeout yet-- but is hardly elegant. Should something go wrong I now need to wait an hour for things to timeout.

     

    Is there a way to get LabVIEW to prioritize its TCP stack a little more? I really don't like the idea of blindly assuming things are working just fine for an hour at a time, especially if a real network is involved.

     

    I'm not sure if there's anything to be done code-wise: if the build is taking up so many resources that the main server loop isn't servicing ping requests I have a hard time believing anything I do in LabVIEW would be reliable because I imagine the scheduler won't exactly be doing so well either...

     

    You could create a dedicated VI to service your TCP/IP requests and set its priority to "real-time" to see if that helps. I'm skeptical that it does much at the OS level, but it may be effective within the LabVIEW environment.
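
    The underlying idea is to keep the connection-servicing path out of the busy main loop. A minimal sketch of the same separation in Python (a UDP ping responder on a daemon thread; port and message format are made up for illustration):

```python
import socket
import threading

def serve_pings(sock):
    # Dedicated thread: keeps answering pings even while the main
    # thread is saturated by a long-running build.
    while True:
        data, addr = sock.recvfrom(64)
        if data == b"ping":
            sock.sendto(b"pong", addr)

def start_ping_responder(port=5005):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(("127.0.0.1", port))
    t = threading.Thread(target=serve_pings, args=(sock,), daemon=True)
    t.start()
    return sock
```

    This only helps if the scheduler still gives the servicing thread time slices, which is exactly the caveat raised above.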

  3. I am thinking of expanding on this for a CLA Summit presentation. But before I invest a lot of time, I am wondering how many others think a tool like this would be useful. If you are interested, please reply or 'like' this post so I have an idea of the level of interest in this subject.

    thanks,

    -John

    From my point of view, it's no better and a little bit worse than the hierarchy window, since it hides "depth". What I would really like to see is a 3D version of the hierarchy, a bit like the Firefox "layers" view (we could zoom in and out then, too ;) )

  4. To clarify -- it's an improper mental model to consider the User Event publisher a "queue". When a User Event object is created, there is no underlying queue of messages that grows with the "Generate User Event" method, and so there exists nothing to "re-transmit" to the handler queues. A better mental model is to consider the Generate User Event method as a

    <snip>

    Heartily concur. This can be generalized to say, lots of different APIs in LabVIEW would benefit from providing asynchronous output streams that adhere to these Events pub/sub semantics. Again, this is in the spirit of enabling concurrent systems development in LabVIEW, which converges to Actor design and the current topic of asynchronous dataflow on Expressionflow.

     

    Actually. Looking closer, my initial "thought" was wrong. Event registrations aren't equivalent to "Dequeue" (i.e. destroy the element), since you get multiple elements in multiple Event Structures. So if your "semantic awesomeness" is advocating that they do behave as I described, then we are in agreement.

  5. You're in danger here of becoming an architecture astronaut. :D

    Your typical architecture astronaut will take a fact like "Napster is a peer-to-peer service for downloading music" and ignore everything but the architecture, thinking it's interesting because it's peer to peer, completely missing the point that it's interesting because you can type the name of a song and listen to it right away.

     
    So. Breaking it down.  :) 
     

    Not explicitly true -- for Queues, an enqueuer and a dequeuer are both accessing the same Queue object, whereas with Events, the enqueuer and dequeuer are interfacing with two separate objects -- the event publisher and the event registration/subscription.

     
    OK. Let's make a little change here.
    -- the event enqueuer and the event dequeuer of that Event Registration's queue. Let's not get confused by extra abstraction layers.

     

    So you are 1/2 right. Queues do access a single object. But so do events, which have access to their own queue that just happens to get populated indirectly rather than directly by the enqueuer.
     

    Any number of writers may write to a Queue, and you may have any number of readers as well; stated another way, contrary to popular belief, you may have multiple asynchronous dequeuers pulling out of the same queue.

     
    Not really. You can only have one dequeuer, as dequeueing destroys the element (there's reader/dequeuer confusion here); sure, you can "peek" the queue, but that does not destroy the element. So having "multiple dequeuers pulling out of the same queue" is unpredictable and results in unwanted behaviour for the most part. In fact, it is a common rookie bug. This is encapsulated by the axiom that queues are "many-to-one", and if you keep to that, you will be fine.
     

    Events have a bit different semantic than Queues in LabVIEW -- there must be one and only one Handler bound to an Event Registration at any one time,

     
    Why is that? Is it because the Event Registration primitive is, in fact, the semantics for a unique queue? If you do add multiple handlers, do you not end up with the problem I described previously about multiple dequeuers? I would say they have exactly the same semantics as Queues (just different primitives), and you can only have one outbound (aka dequeue) at any one time. The difference arises from the way the Registration Queue is populated.

     

    yet there may be (0,N) Registrations per publisher. With this extra level of indirection, I don't know if we can even properly classify Events as being something-to-something, but rather (N publishers)-to-(M registrations)-to-(0 or 1 handlers bound to a registration at a time).

     
    Of course we can. Come back down to the troposphere for a second.  :P 

    A fairly good approximation for Events in LabVIEW is a queue (let's call it the "event_queue") feeding a number of queues (let's call them the "handler_queues"), where the "event_queue" re-transmits the enqueued element to the other queues before it is destroyed. In this approximation, we need a VI that adds a queue reference to the "event_queue" and registers a unique ID for a "handler_queue", so that when an element is enqueued to the "event_queue" it copies the element onto each registered handler_queue (iterating through handler_queue 0-N). Each handler (just the usual while loop, dequeue element and a case structure) waits and dequeues from its respective queue. So we can create event-like behaviour using queues, but we have to do a lot more programming to realise it.
     
    So we have one (event_queue) - to - many (handler_queues). This is essentially the Event system in LabVIEW.

    We do exactly this all the time with TCPIP servers, where events would be a much better solution but are, sadly, lacking.
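
    The queue-based approximation described above can be sketched in Python (the names event_queue/handler_queue are just the labels from the description; this is an illustration of the fan-out, not LabVIEW's actual internals):

```python
import queue

handler_queues = {}  # registration ID -> that handler's private queue

def register(reg_id):
    # "Event Registration": creates a unique queue for exactly one handler.
    q = queue.Queue()
    handler_queues[reg_id] = q
    return q

def generate_event(element):
    # "Generate User Event": copy the element onto every registered
    # handler_queue; the publisher itself retains nothing.
    for q in handler_queues.values():
        q.put(element)

# One-to-many: two registrations each get their own copy of the element.
a = register("handler_A")
b = register("handler_B")
generate_event({"msg": "stop"})
```

    Each handler dequeues (destructively) from its own queue, so there is still only one dequeuer per queue, which is exactly the "many-to-one" axiom applied per registration.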
     

     
    Breaking it down:
     
    Queues are many-to-many, one-to-many, one-to-one, or many-to-one, depending on what makes sense for your application domain.


    I will counter-argue that if you are trying to use queues for anything other than many-to-one (one-to-one being one of the intersection edge cases I mentioned in my previous post), use something else. I have already outlined how you can use queues for event-like behaviour. But what's the point when it is handled for you by the language in Events? Just because you can, doesn't mean you should - the right tool for the job.

    Many-to-many: Only practically realisable with architecture.
    One-to-many: Events. Notifiers.
    One-to-one: Anything you like, (Queues, Events, Notifiers, FP terminals, Globals et. al).
    Many-to-one: Queues.
     
     

     Events support the same configurations, though with one additional level of indirection (which enables a host of additional design patterns), and the caveat that there may never be multiple readers (Event Handler Structures) simultaneously bound to one Event Registration. This caveat is the one I would like to see lifted, that Events might act more like Queues, in order to better support concurrent processing.

     

    As I have said already, they behave exactly like queues, which is why you cannot have multiple dequeuers (Event Handler Structures) attached to the same registration, just as you cannot have multiple dequeuers for a queue without unpredictable and unwanted results.

     

    I think the issue here is confusion between reading (aka peek) and dequeueing, which has been lost as you've gone further up the abstraction thought process. The destruction of the element is a key attribute of queues, and once it is omitted you need other mechanisms to either filter or remove elements. The difference between events and queues isn't how they are consumed (events use queues too). It is how they are populated, and this gives rise to the ability to create "one-to-many" (events) from multiple "many-to-one" (queues), all wrapped up in a nice, neat package.

  6. Interesting.
    Wouldn't work in the UK, as software is considered the same as a literary work. So the analogy would be like writing a book or chapters with the same title (but titles, slogans and phrases are un-copyrightable in UK law - this seems to be how the judge in this case sees it).

    Of course, they could trademark all the function names :) Then everyone would be stuffed.

  7. and for the advanced programmer I don't see it imposing any real restriction-- to the contrary I see it promoting good practice.

     

    Until you've lost it forever then realise, "bugger!".

     

    AQ hasn't stated what the "return" is. It could be that it just removes some part of an internal document that someone doesn't like. Until I know why it is being considered and what the gain is (and the caveats), my default position is "anti", as I am with any removal of working features without good reason - especially if they have been around for many years.

  8. I've only read the white-paper, but I think this is a new slant on a very old idea.

     

    In the prehistoric days (before events and queues) we used to use "data pools". We'd have a 2D array global variable as the main repository of all data values, and any part of the application could read from it. It was very flexible and very fast; it enabled on-the-fly configuration and was great for debugging. Of course, it had one caveat: write races. For a single writer with multiple readers it was great, but for multiple writers it had issues.

     

    For UI updates it didn't matter, as indicators would only read. For controls it also wasn't too much of a problem, since you just have 2x 2D arrays - one for controls and one for indicators - and it was impossible for the user to click two buttons fast enough to overwrite. But you could only have one writer updating the read array asynchronously. The next "innovation" (I call it that loosely) to address this was more 2D arrays: one for each writer, plus a mapping system to map the indicators to the global arrays. As long as the naming remained the same from project to project, this worked fine and was reusable. However, it had the downside of fixing the implementation, so people would put, say, 10x 2D arrays into the global even if they weren't all used, since it was rare to have more than 10 asynchronous processes. As you increased the number of arrays, things obviously got slower, since the execution time depends on the size of the global and the data it contains. Then, since most projects are of a similar nature within a company, people used multiple globals of one 2D array each and called them things like AI, AO, DI, DO. When queues, events and the like came along, this was abandoned, since the new technology required less maintenance and was less rigid.

     

    Looking at your LV2 global storage, it doesn't seem to address multiple writers (READ/WRITE, SET/GET is just as bad as a global, but a lot slower), but apart from that, I think it looks very much like the "data pool" idea with LV2 globals instead of real globals.

     

    Am I missing something sneaky?

  9. Shaun, I still think datasocket is 'broken' on this obscure point. It's possible you configured your input string to be a publisher, not a subscriber. LabVIEW allows you to do this.

     

    To see the effect for sure:

     

    1. Run datasocket server and open the diagnostics window

    2. Search labview examples for datasocket and open 'Front Panel Datasocket read.vi' and 'Front Panel Datasocket Write.vi'.

    3. Run 'Front Panel Datasocket read' and check the 'fpwave' variable that appears - on my Win XP machine running LabVIEW 2010 it appears as a blue '2' for integer

    4. Run 'Front Panel Datasocket Write' and the new variable changes to the correct type

     

    That's my problem, I think - datasocket subscribers don't show as the proper data type until someone publishes data to the variable. This probably has to be the case: if you create a datasocket subscriber using 'datasocket open' rather than binding a control, that subscriber is capable of accepting any kind of data.

     

    I've tried a lot of dodges with datasocket properties, writing data once from inside the instrument control apps etc. but so far they all result in remote inputs being unusable or datasocket misbehaviour.

     

    As NI hint that datasocket is non-preferred, I thought it would be better to move on to a comms standard with some life left in it. I'd been looking to adopt something that was cross-platform, with a lot of language bindings and an active developer group. As has been pointed out, though, adopting third-party software carries risks, hence the debate.

     

    Well. If datasocket isn't quite doing what you need. Perhaps try network streams? They have a property node whereby you can retrieve the datatype of the endpoint. I've never used them, as I have Dispatcher, but from what I can tell they can operate in a number of configurations (one to one, one to many etc).

  10. The datasocket prototype allowed the user to select using the 'DS select' menu. When the test exec script ran, all datasockets were opened and data transfer was by DS Read and Write. When the script stopped, all sockets were closed. Very simple, if it weren't for the fact that inputs don't show up correctly. Try it: put a string input in a while loop. Make sure the input is bound to datasocket. Start the datasocket server and run the VI. Open the datasocket server diagnostics. Check the input you've just created. It's shown as an integer, not a string. So if another app wishes to discover what datatype that input is, it can't tell. Bah!

    Nope. Shows up with a big letter A for me denoting a string.

  11. It's been working well using datasocket reading indicators from some pretend instrument drivers, but as I say, datasocket is quirky in that when an instrument controller publishes its inputs, they all appear as integers. That's a bit naff, since Spreadpump can't easily discover what data type to deliver to an input.

    Hang on. I missed this in your first post. Are you sending control refs? Control refs are integers and have no meaning outside of the application instance (they are basically pointers, but not quite). Have you tried using Variant To Flattened String? That will give you the type info, which you can send and examine on the other side. No messaging system can help you with that. It is a serialization issue.
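
    The general idea of sending a type descriptor with the data can be sketched in Python (LabVIEW's Variant To Flattened String produces a real type descriptor; the one-byte tag scheme below is invented purely for illustration):

```python
import struct

def flatten(value):
    # Prefix the payload with a type tag so the receiver can discover
    # the datatype instead of guessing (e.g. "everything is an integer").
    if isinstance(value, bool):
        return b"B" + (b"\x01" if value else b"\x00")
    if isinstance(value, int):
        return b"I" + struct.pack(">q", value)
    if isinstance(value, float):
        return b"D" + struct.pack(">d", value)
    return b"S" + value.encode("utf-8")

def unflatten(blob):
    tag, payload = blob[:1], blob[1:]
    if tag == b"B":
        return payload == b"\x01"
    if tag == b"I":
        return struct.unpack(">q", payload)[0]
    if tag == b"D":
        return struct.unpack(">d", payload)[0]
    return payload.decode("utf-8")
```

    The point is that type discovery is a serialization concern: whatever the transport, the datatype has to travel with (or ahead of) the data.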

  12. The MDI toolkit's ok for local but not distributed.

     

    I think you misunderstood. I didn't proffer it as a solution. It was a retort to Yair's dig at me and classes (a kind of in-joke). You should instead take a look at Dispatcher (which I highlighted in a previous post) - it is very similar to your requirement, but native LabVIEW.

  13. I hadn't tried transport.lvlib. Will download and see how it compares with 0mq for this application - thanks for the prompt. I want my controller app to be able to detect slave apps by their inputs and outputs a la datasocket, but probably not pubsub. I was going to fit my test exec app with a 0mq 'router' port and the satellite hardware driver apps fitted with a 'request' port. At start up, each driver app would send a 'request' that contained a description of I/O ports to the test exec. The test exec would compile these into a dictionary of possible connections - like the datasocket connection popup. The 'router'/'request' combination allows each driver app to set an 'identity' for itself. The user can then connect to 'frequency in' port of 'siggen 1' and so on from a selection tree. I can see how the conversation would work and that it could recover from a failure of either slaves or exec ok. More labview 0mq examples are needed to illustrate proper recovery from failure and also graceful shutdown. I agree about having it event based.

     

    Transport.lvlib is just a protocol leveller (it means you can send data via UDP, TCP, Bluetooth etc. transparently). Dispatcher, however, is a pub/sub implementation that uses Transport.lvlib and is probably closer to what you are looking for.

     

    Woohoo. Links are working again!

  14. For anyone who hasn't come across 0mq before, this book was very useful in getting started: http://zguide.zeromq.org/page:all

    Yes, between LV and the rest of the world is particularly useful, as the safety-critical products I'm working around are mainly programmed in C. The number of language bindings for 0mq is impressive. Not something that NI seems all that interested in, which is a shame (for us, that is).

    It's interesting that they use a length header, which is the same as transport.lvlib, so they could be compatible (or at least made to be compatible - transport.lvlib has a compression and an encryption byte).
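
    Length-prefixed framing of this general kind can be sketched as follows (the 4-byte big-endian header here is an assumption for illustration, not the exact wire format of either 0mq or transport.lvlib):

```python
import struct

def frame(payload: bytes) -> bytes:
    # Length header first, so the receiver knows how many bytes make
    # up one message on a stream transport like TCP.
    return struct.pack(">I", len(payload)) + payload

def deframe(buffer: bytes):
    # Returns (message, remaining bytes), or (None, buffer) when the
    # buffer does not yet hold a complete message.
    if len(buffer) < 4:
        return None, buffer
    (length,) = struct.unpack(">I", buffer[:4])
    if len(buffer) < 4 + length:
        return None, buffer
    return buffer[4:4 + length], buffer[4 + length:]
```

    Compatibility between two such protocols then comes down to agreeing on header size, byte order, and any extra header bytes (like the compression and encryption flags mentioned above).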

    I don't think it really brings that much to the table over native LabVIEW implementations. From my cursory look, pub/sub seems to be its main "killer" property, and there are native LabVIEW implementations which are safer and cross-platform. This is probably the reason for NI's lack of enthusiasm. I'm not being dismissive; I think I just need to spend more time looking at it, and maybe something will jump out as being an advantage. For my use cases, there has to be a really, really good reason to rely on 3rd-party compiled binaries.

    I think the library could be improved by allowing binding to the LabVIEW event system (a big bugbear of mine with VISA). If an intermediate binary is used (as it is in this case), then adding events so that developers can hook into the event structure is trivial (even more so than in native LabVIEW). I find event-driven asynchronous comms vastly superior to the normal methods.

    I had a few issues with the library, but I don't think they are insurmountable. I kept getting an error (too many sockets), and pressing abort in LV x64 causes it to hang (on the Multi example, without a stop button). But the person who wrote the wrapper seems au fait with binding to LabVIEW, so I expect the latter will get worked out.

  15. @ShaunR

     

    I haven't investigated the maximum update rate for collectdViewer, but 2 Hz is actually slightly beyond the limit at which the mobile devices that I've tested can reliably display the collectd data on a continuous basis.

     

    This limit isn't due to WebSockets or the RabbitMQ message broker of the system. Instead, I believe the reason for this limit is the way the collectd data is processed in the browser: every 500 ms, the collectd daemon sends out a stream of updates for each measured parameter of the host platform (e.g. CPU User, CPU Nice, Memory Used, etc.), in random order. On the browser side, the JavaScript code allocates the data in this stream to the correct location in a multi-dimensional array containing the time-series data for each parameter. Once all the parameters for the time step are received, all the plots are redrawn. All of this activity must occur before the next stream of updates is received (i.e. within 500 ms). For mobile devices, I think that what's happening is that any glitch in the transmission of data over the air throws off the processing of the data stream by the JS code.
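
    The per-timestep assembly described above can be sketched like this (a Python stand-in for the browser-side JS; the parameter names and structure are invented for illustration):

```python
EXPECTED = {"cpu_user", "cpu_nice", "mem_used"}  # hypothetical parameter set

history = {name: [] for name in EXPECTED}  # time series per parameter
pending = {}                               # current timestep, partially filled

def redraw():
    pass  # placeholder: replot all charts once the timestep is complete

def on_update(name, value):
    # Updates arrive in random order; buffer them until every expected
    # parameter for the current timestep has been received.
    pending[name] = value
    if set(pending) == EXPECTED:
        for n, v in pending.items():
            history[n].append(v)
        pending.clear()
        redraw()
```

    A dropped or delayed update leaves `pending` incomplete when the next burst starts, which matches the "glitch throws off the processing" failure mode described above.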

     

    Indeed. Mobile dynamic display of graphical data is constrained and very browser-dependent. Firefox especially seems to be very resource-intensive: for something like the following demo, which is using 2 channels @ 100 ms each, most mobile browsers can cope, but Firefox glitches often.

     

    http://37.235.49.79/example_gauges.html

     

    However, the following demo chokes all mobile browsers, as there are a lot of graphics to update: 4 channels @ 100 ms each and one channel @ 1 sec. Additionally, the datapoint stack for the graphs is in the browser (the server is sending single datapoints), with each line having a history length of 100 datapoints once the graphs start scrolling.

     

    http://37.235.49.79/example_dash.html

     

    If you turn off the graphs then even Firefox can keep up, so the data rate and number of channels isn't the restriction. It's the graphical rendering that brings mobile devices to their knees. The demos aren't doing anything special though (like your matrix of data). The server is just spewing data out and the javascript redraws happen when each piece of data arrives. Desktops can cope with the redraw rate, but mobile devices can't.

  16. Haven't looked at your code because I'm on mobile, but the beauty of enumerated types in any language is how, under the hood, they're just numbers. Attach names at design time for convenience, but in the end they're still numeric when operating. I don't see how you can get this benefit with objects?

     

    I agree. The only purpose of an enum is to improve code readability. Granted, LV takes it one step further by linking to case statements, but again, the benefit is readability. If you want a run-time extensible numeric type, use a ring.
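
    The "names at design time, numbers when operating" point can be seen directly in text languages too (Python's IntEnum playing the role of a LabVIEW enum here, purely as an illustration):

```python
from enum import IntEnum

class State(IntEnum):
    IDLE = 0
    RUNNING = 1
    STOPPED = 2

# Named for readability at design time...
current = State.RUNNING

# ...but still just a number when operating: it compares and
# does arithmetic like the integer it is underneath.
assert current == 1
assert int(current) + 1 == State.STOPPED
```

    An object wrapping a value can mimic this, but only by reimplementing the numeric comparisons the enum gets for free.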

  17. I like to go a step further and try to represent things graphically. For temperature control I plot the "Proportional Band" (the range of temperatures where the heater output would be between zero and 100%) on the same graph as the process variable. Tuning is still by intuition, but with more visual information to go on. You can see the effects of the PID parameters in the twists of the proportional band.

    I'm also a very visual person (hence my preference for LabVIEW). Combine what you are saying with Taguchi analysis and it takes a lot of the intuition out of it.

    https://controls.engin.umich.edu/wiki/index.php/Design_of_experiments_via_taguchi_methods:_orthogonal_arrays

     

    Damn. What is it with links?
