Popular Content

Showing content with the highest reputation on 09/24/2015 in all areas

  1. I think you are probably looking at it slightly awkwardly. You went for a compartmentalised solution according to some best-practice ideology and then found it got awkward for concurrent processes. You want it to be easy to manage and scalable, and the way you did it was exactly that, but the downside was creating bottlenecks for your asynchronous processes.

     I had the same problem with SOUND.vi, where I needed to be able to play arbitrary asynchronous WAV files simultaneously and a single sound could last tens of seconds. If SOUND.vi could only process one message at a time that was useful, but I wanted a more generic solution.

     Solution: I made SOUND.vi a service controller. Other processes ask it to "PLAY" a particular file. It then launches a sub-process for that file and returns immediately, having no further involvement in the data transfer.

     How could this work with, say, TDMS? You have the FILE service. You send it a message like FILE>STREAM>TDMS>myfile[.tdms]. The FILE service launches a listener that is registered for "myfile" messages. You tell the DAQ service to launch an acquisition - DAQ>STREAM>AI1>myfile, or similar. And that's it! The DAQ pumps out messages with the label "myfile" and the listener(s) consume them. The corollary is that you can use "FILE>STREAM>TEXT", "FILE>STREAM>BIN", etc., even at the same time, and you still have your FILE>WRITE and FILE>READ, which you don't really have to launch as sub-processes.

     You have started a producer and a consumer and connected their messaging pipes (a sketch of the idea follows below). You can do that as many times as you like and let them trundle on in the background. Your other "plugins" just need to send the right messages and listen in on the conversation (i.e. also register for myfile).
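     Purely to illustrate the pattern, here is a minimal sketch in Python rather than LabVIEW, so only the shape of the idea carries over. MessageBus, file_service and daq_service, the in-memory queues, and the plain-text output are illustrative assumptions, not the actual SOUND.vi/FILE/DAQ implementation.

     # A minimal sketch of the label-routed messaging described above.
     # MessageBus, file_service and daq_service are illustrative names only.
     import queue
     import threading

     class MessageBus:
         """Routes messages by label to every listener registered for that label."""
         def __init__(self):
             self._listeners = {}                      # label -> list of queues
             self._lock = threading.Lock()

         def register(self, label):
             q = queue.Queue()
             with self._lock:
                 self._listeners.setdefault(label, []).append(q)
             return q

         def publish(self, label, item):
             with self._lock:
                 targets = list(self._listeners.get(label, []))
             for q in targets:
                 q.put(item)

     BUS = MessageBus()

     def file_service(command):
         """FILE service: on FILE>STREAM><fmt>><label> it launches a listener
         registered for <label> messages and returns immediately."""
         _, action, fmt, label = command.split(">")
         if action != "STREAM":
             return
         inbox = BUS.register(label)

         def listener():
             # Plain text stands in for a real TDMS/BIN writer in this sketch.
             with open(label + "." + fmt.lower(), "w") as f:
                 while True:
                     item = inbox.get()
                     if item is None:                  # sentinel: stream finished
                         break
                     f.write(str(item) + "\n")

         threading.Thread(target=listener).start()     # sub-process stand-in

     def daq_service(command, samples):
         """DAQ service: pumps out messages labelled with the stream name."""
         _, action, channel, label = command.split(">")
         for s in samples:
             BUS.publish(label, s)
         BUS.publish(label, None)                      # tell the listeners to stop

     # Wire up a producer and a consumer, then let them trundle on:
     file_service("FILE>STREAM>TDMS>myfile")
     daq_service("DAQ>STREAM>AI1>myfile", samples=range(10))

     Because the FILE service just wires a listener to a label and returns, you can start as many of these producer/consumer pairs as you like; nothing ever blocks on the controller itself.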
    1 point