
AlexA

Members
  • Content Count

    225
  • Joined

  • Last visited

  • Days Won

    2

AlexA last won the day on November 26 2014

AlexA had the most liked content!

Community Reputation

8

About AlexA

  • Rank
    Very Active

Profile Information

  • Gender
    Not Telling

LabVIEW Information

  • Version
    LabVIEW 2010
  • Since
    2007
  1. Ahhh, interesting! Thanks for that take on things. I'll think a little further about what's going on.
  2. This is getting a little abstract for me. To concretize the discussion a little, @smithd, from what you say I'm visualizing something like: a wrapper (subVI) around TDMS files (for discussion's sake) which internally looks like a message handler with messages for open/close/write. I assume it would be a re-entrant VI that you drop on the block diagram of anything that needs file IO and hook up internally. Is that what I should infer from your 1 file/QMH statement?
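The 1-file/QMH idea described above might look something like this in Python pseudocode (LabVIEW has no direct text equivalent; the message names and thread-per-handler structure here are my own invention, purely illustrative):

```python
import os
import queue
import tempfile
import threading

def file_qmh(msg_q):
    """One message handler per file, with open/write/close messages.

    A sketch of the wrapper-around-TDMS idea: the handler owns the
    file reference and callers only ever talk to it via the queue.
    """
    fh = None
    while True:
        cmd, payload = msg_q.get()
        if cmd == "open":
            fh = open(payload, "w")
        elif cmd == "write":
            fh.write(payload)
        elif cmd == "close":
            if fh is not None:
                fh.close()
            break  # handler shuts down with its file

# Usage: each module that needs file IO spins up its own handler
# (the re-entrant-VI analogy) and enqueues messages to it.
q = queue.Queue()
worker = threading.Thread(target=file_qmh, args=(q,))
worker.start()

path = os.path.join(tempfile.mkdtemp(), "data.txt")
q.put(("open", path))
q.put(("write", "sample\n"))
q.put(("close", None))
worker.join()
```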
  3. That's my point, I guess. Why manage a central repository of file references for streaming files from different modules when the OS can handle the scheduling better than I can anyway? Why not just let the individual modules open and close their own files as they require?
  4. Ok, thanks very much for the clarification. I guess my original question boils down to "how much can we lean on the OS file handling code for handling multiple streaming files?" I acknowledge your point about someone having to write the code. If the OS (Windows) can be trusted to handle multiple open file handles, absorbing multiple streams of data gracefully, then it makes more sense to let people write their own file IO stuff local to their module. There's actually not much difference between completely letting the OS handle it and what I'm currently doing. There's no effort to schedule writes in my current architecture, so it might as well be the same thing as everyone just writing their own stuff independently.
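The "just let the OS handle it" option above reduces to each module holding its own open handle and writing whenever it likes; the OS buffers and schedules the actual disk writes. A minimal sketch (file names invented):

```python
import os
import tempfile

# Three "modules", each owning its own file handle, all open at once.
workdir = tempfile.mkdtemp()
paths = [os.path.join(workdir, f"module_{i}.log") for i in range(3)]
handles = [open(p, "w") for p in paths]

# Interleaved writes from the independent streams; no central
# scheduler, the OS write cache absorbs them.
for step in range(5):
    for h in handles:
        h.write(f"sample {step}\n")

for h in handles:
    h.close()
```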
  5. So you're saying if someone comes along with a new module and a new way of saving data, the standard way to interop is for them to implement their own File IO? The workflow for me adding something to your application is to roll my own file IO inside my module?
  6. Ok, I think I understand. What do you do if a new module someone is developing wants to use a core service in a different way, i.e., wants to save its data differently?
  7. Interesting, let me see if I understand you correctly, as there are a number of implementation differences which I might get hung up on. You would copy and paste that File subVI into every module that may want to do file IO. You access it using the named queues functionality, rather than maintaining queue references or anything like that. I note that the File IO is non-reentrant; what does this mean if there are multiple "plugins" which have it on their block diagrams? Or are you proposing that each plugin has essentially its own version of that File IO subVI?
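The named-queues mechanism mentioned above means callers look a queue up by string name instead of passing references around, so two plugins asking for the same name share one queue. A rough stand-in for that behaviour (the registry itself is hypothetical, not how LabVIEW implements it internally):

```python
import queue

# Name -> queue registry, mimicking "obtain queue by name":
# the first request creates the queue, later requests reuse it.
_registry = {}

def obtain_queue(name):
    """Return the queue registered under `name`, creating it on first use."""
    return _registry.setdefault(name, queue.Queue())

# Two independent "plugins" ask for the File IO queue by name
# and end up sharing the same underlying instance.
q1 = obtain_queue("File IO")
q2 = obtain_queue("File IO")
q1.put(("write", "hello"))
```

This is why non-reentrancy matters in the question above: if every plugin's copy resolves to the same name, they all funnel into one handler.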
  8. Following with the idea of division of responsibility. I designed my application such that each individual process in charge of controlling and acquiring data from a specific piece of hardware would send its acquired data to another actor who maintained all file references (TDMS files) and was responsible for opening and closing files, as well as saving the data to disk. A consequence of this decision is that someone wanting to introduce a new piece of hardware + its corresponding control code must go further than just dropping a plugin that meets the communication contract into a directory. They must implement all the file IO stuff in the file IO process. The thought has entered my mind that perhaps it would be better to make file IO the responsibility of the plugin that wants to save acquired data. So each individual process would implement a sub-loop for saving its own data. The template for plugins would then include the basic file IO stuff required, so it would be much easier for someone to just modify the data types to be saved. My goal here is ease of maintenance/extension. The most important consideration for me, apart from ease of extension, is whether having a bunch of independent processes talking to the OS will be more CPU heavy than one single actor (who is an intermediary between the OS and all the other processes). Does anyone have any experience in this area?
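The central-actor arrangement described above can be sketched as a single loop that owns all file references, keyed by the sending plugin (message shape and names are invented for illustration):

```python
import os
import queue
import tempfile
import threading

workdir = tempfile.mkdtemp()

def file_actor(inbox):
    """Central actor owning every file reference, keyed by sender.

    Plugins never touch the files; they just post (sender, data)
    messages. A None message shuts the actor down cleanly.
    """
    refs = {}
    while True:
        msg = inbox.get()
        if msg is None:
            for fh in refs.values():
                fh.close()
            return
        sender, data = msg
        if sender not in refs:
            # Open lazily the first time a plugin sends data.
            refs[sender] = open(os.path.join(workdir, sender + ".dat"), "w")
        refs[sender].write(data)

inbox = queue.Queue()
actor = threading.Thread(target=file_actor, args=(inbox,))
actor.start()
inbox.put(("motor", "pos=1\n"))
inbox.put(("electrode", "v=3.3\n"))
inbox.put(None)
actor.join()
```

The per-plugin alternative in the post simply moves this loop's body into each plugin's own sub-loop, trading the single point of maintenance for drop-in extensibility.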
  9. Hi guys, I've just started to play around with Haskell and learn about functional programming paradigms, and I was wondering, as per the title: can a typical LV application be constructed along functional paradigms? Hypothetical application: 2 analog inputs and 2 analog outputs connected to some process, some sort of controller, a UI for interacting with the controller, and file IO. I typically don't see functional paradigms applied to hardware control. I don't know if this is just something that people have avoided doing or whether there just hasn't been any interest. In any case, I'm trying to imagine first of all what LV-native mechanisms could be used to construct a "functional" type application, and secondly, whether there would be any advantages in doing so? Particularly in the ability to reason about behaviour. Hope this sparks some interesting discussion. Kind regards, Alex
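One way to picture the functional framing asked about above: the controller becomes a pure step function `(state, sample) -> (state, output)` folded over the input stream, with all IO pushed to the edges. A small Python sketch (the gains, setpoint, and controller law here are invented; it is only meant to show the shape, not a real control design):

```python
def controller_step(state, sample):
    """Pure update: proportional + integral term, no side effects.

    State is an immutable-in-spirit tuple (setpoint, kp, ki, integral);
    the function returns the new state rather than mutating anything.
    """
    setpoint, kp, ki, integral = state
    error = setpoint - sample
    integral = integral + error
    output = kp * error + ki * integral
    return (setpoint, kp, ki, integral), output

def run(state, samples):
    """Fold the pure step over a stream of samples."""
    outputs = []
    for s in samples:
        state, out = controller_step(state, s)
        outputs.append(out)
    return state, outputs

# setpoint=1.0, kp=2.0, ki=0.1, integral starts at 0.
state0 = (1.0, 2.0, 0.1, 0.0)
final_state, outs = run(state0, [0.0, 0.5, 1.0])
```

The reasoning benefit is exactly the one the post hopes for: given the same state and sample stream, the outputs are fully determined, so behaviour can be tested without hardware.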
  10. Lots of fantastic info, thanks guys! @Shoneill How does your proposed solution deal with the potential situation where the process (Signal Generator) is running remotely from the control software (UI). Over TCP or whatever. I can't see a graceful way of passing a UI type object around. If the UI is a part of the object, then you have a static dependency between the control UI (which contains the subpanel) and the process code don't you?
  11. Hmmm, what if the UI doesn't expose that choice unless the user selects the White Noise option?
  12. Hi guys, I thought I'd throw this out there to see if anyone has any opinions on a hypothetical (which caricatures my current situation). Say I have some process implementing some functionality, for example: a signal generator which can generate triangles, sines, ramps and steps. Now I want the process to also generate band-limited white noise. The problem is, the parameters to the generator don't accommodate the idea of a frequency cut-off. The ways I can see of dealing with this are:
      • Extend the interface to include a frequency cut-off parameter (redundant for 90% of messages).
      • Break down the interface and expose different sub-interfaces (passes complexity on to calling code...).
    Any other ways? Thanks in advance for your insights!
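A third option for the dilemma above is a tagged-union (sum-type) message set, where each waveform request carries only the parameters it needs, so the cut-off lives only on the noise message. A Python sketch of the idea (the message set is hypothetical, modelled on the signal-generator example):

```python
from dataclasses import dataclass
from typing import Union

@dataclass
class Sine:
    freq: float
    amplitude: float

@dataclass
class Triangle:
    freq: float
    amplitude: float

@dataclass
class BandLimitedNoise:
    amplitude: float
    cutoff: float  # the new parameter exists only on this variant

# The generator accepts any request variant; callers never see
# fields that don't apply to the waveform they asked for.
Request = Union[Sine, Triangle, BandLimitedNoise]

def describe(req: Request) -> str:
    """Dispatch on the variant, standing in for the generator's handler."""
    if isinstance(req, BandLimitedNoise):
        return f"noise, cutoff={req.cutoff} Hz"
    return f"{type(req).__name__.lower()}, f={req.freq} Hz"
```

In LabVIEW terms this roughly corresponds to a class hierarchy of message objects (or a cluster-per-command variant), which avoids both the redundant-parameter and the split-interface problems at the cost of one dispatch point.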
  13. Shaun, I have to disagree. The standard Windows interaction is single click to sort; it's what's intuitive to most people, so if the expectation isn't met I think it becomes a minor user gripe.
  14. Hi Ned, Thanks, you've eased my mind a bit. I think this was the discussion http://lavag.org/topic/16908-q-what-causes-unresponsive-fpga-elements-after-restart/ but after re-reading it, I think I read something into it that wasn't there. Or perhaps I'm misrecalling where or if I saw that piece of information. Either way, from what you say I'm not worried about it anymore. Cheers, Alex
  15. Hi Shaun, Thanks for all that info! Would it be correct to say the distinction between a state machine with message injection and a message-driven action engine (AE) is how the actions map to commands? I.e., a message-driven AE gets you an action with a strictly defined end point; in other words, it's guaranteed to complete. Whereas, for a state machine with message injection, a single message could just result in a different continuous state? A long time ago I looked at Actor Framework, but there were so many layers of abstraction I found it impossible to get started, let alone port my code to it. I'm pretty happy with what I've got at the moment. I've slowly improved it, but I'm still wondering if I'm maintaining too much state in my message handlers. For example, I have an FPGA which is driving a number of pieces of hardware (motors, electrodes etc.). The VI in charge of the interface to that FPGA consists of a continuous loop which basically just listens to a Target-To-Host DMA FIFO, as well as a message handler which takes requests like "Update Setpoint Profile" and computes a new profile before uploading it to the FPGA. Should everything be done in a single state machine loop with message injection (i.e., listening is done in the timeout case)? My uncertainty stems from something someone said to me a long time ago on LAVA: that it was very strange that I branched the FPGA reference to two different loops, and that they had never needed to do that. Thanks again for sharing your experience! Edit: I get 403 Forbidden errors when trying to follow your links.
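The single-loop alternative raised above (listening in the timeout case, commands injected as messages) can be sketched like this; the command names, the DMA stand-in, and the bounded iteration count are all invented for illustration:

```python
import queue

def fpga_loop(msgs, dma, max_iters=10):
    """State machine with message injection.

    Commands pre-empt; when no command arrives within the timeout,
    the 'timeout case' drains the DMA FIFO stand-in. This merges the
    post's two loops (listener + message handler) into one.
    """
    samples, profile = [], None
    for _ in range(max_iters):
        try:
            cmd, payload = msgs.get(timeout=0.01)
            if cmd == "update_profile":
                profile = payload      # discrete action: guaranteed to end
            elif cmd == "stop":
                break
        except queue.Empty:
            # Timeout case: continuous activity, polling the FIFO.
            while not dma.empty():
                samples.append(dma.get())
    return samples, profile

msgs, dma = queue.Queue(), queue.Queue()
for v in (1, 2, 3):
    dma.put(v)                         # pretend DMA data is waiting
msgs.put(("update_profile", [0.0, 0.5]))
samples, profile = fpga_loop(msgs, dma, max_iters=3)
```

Note how this avoids branching the FPGA reference: only the one loop ever touches `dma`, which is the structural point the old LAVA comment was getting at.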
