
Low Coupling Architecture for Data Storage



I feel I've plateaued in my approach toward data storage. I am still stuck routing data from a tool through a central mediator to a component specifically designed to manage data. My applications have had low data rates or been manually controlled, so the approach has worked, but I'd expect that a faster data rate or more components could induce lag at some point.

Here's a reduced-scale example:

User presses "Store Data from A" or a timer expires, and a message is sent to the mediator >> Mediator routes the "Store Data from A" message to Tool A >> Tool A makes a measurement and passes it to the mediator >> Mediator passes the measurement to Data Storage >> Data is stored.
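To make the hops concrete, here is a minimal sketch of that centralized flow. This is Python standing in for LabVIEW loops, and all the names (`Mediator`, `ToolA`, `DataStorage`) are hypothetical; the point is that the measurement payload itself travels through the mediator twice.

```python
# Sketch of the current centralized flow: every hop, including the
# measurement payload itself, goes through the mediator's queue.
import queue

class Mediator:
    def __init__(self):
        self.inbox = queue.Queue()
        self.handlers = {}              # message name -> callable

    def register(self, name, handler):
        self.handlers[name] = handler

    def route(self):
        name, payload = self.inbox.get()
        self.handlers[name](payload)

class ToolA:
    def __init__(self, mediator):
        self.mediator = mediator

    def on_store_request(self, _):
        measurement = 42.0              # stand-in for a real measurement
        # the data itself is routed back through the mediator
        self.mediator.inbox.put(("data from A", measurement))

class DataStorage:
    def __init__(self):
        self.records = []

    def on_data(self, value):
        self.records.append(value)

mediator = Mediator()
tool = ToolA(mediator)
storage = DataStorage()
mediator.register("Store Data from A", tool.on_store_request)
mediator.register("data from A", storage.on_data)

mediator.inbox.put(("Store Data from A", None))
mediator.route()                        # mediator -> Tool A
mediator.route()                        # mediator -> Data Storage
```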

What I am looking to do is follow this sequence:

  • Tool A generates data on a known schedule, puts it in "a place" (maybe its final location)
  • User or program synchronization makes "Store Data from A" request
  • Mediator routes message to Tool A
  • Tool A handles "Store Data from A" by storing data currently in "a place" in specified location using preferred storage method
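The proposed sequence can be sketched the same way. Again, this is a hypothetical Python stand-in: the key difference is that only the message name crosses the mediator, while the data and the storage call stay inside Tool A.

```python
# Sketch of the proposed flow: Tool A caches its latest measurement in
# "a place" and performs the store itself on request, so no separate
# data-manager process ever sees the payload.
class ToolA:
    def __init__(self, store_fn):
        self.latest = None              # "a place" for the newest data
        self.store_fn = store_fn        # preferred storage method, injected

    def acquire(self):
        self.latest = 42.0              # runs on the tool's own schedule

    def handle(self, message):
        if message == "Store Data from A":
            self.store_fn(self.latest)  # store from "a place" directly

stored = []
tool = ToolA(store_fn=stored.append)
tool.acquire()                          # scheduled acquisition
tool.handle("Store Data from A")        # mediator only routes the name
```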

In effect, cutting out the data manager that has to be written or adapted in each project.

A couple of questions, then: Am I overthinking this - trying to achieve something overly fancy? Will I be able to make a method of data storage generic enough that I stay uncoupled? If the answers to these are no and yes, what architecture might one use to accomplish the goal of a decentralized, but still synchronized, data manager? If possible, I would welcome links to any KB or other article that would move me forward.

Thanks in advance,

Jim



I think the message mediator approach is a good one. See: Mediator Pattern. I had used this for a while, thinking I was quite novel, and then Daklu told me this pattern actually has a name. Not so novel anymore :P . What's nice about this is that you can create generic processes that just spit out messages, and then take care of the application-specific stuff in your mediator. Think, for example, of a generic TCP loop. The TCP manager that I reuse reads a fixed header and then the message data, and just sticks it in a queue. It can be reused in every application. I then let my mediator decode the message as it pertains to that application and send it where it needs to go.
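The reusable part of such a loop can be sketched outside LabVIEW too. This is a hypothetical Python version: a fixed-size header carries the payload length, the loop reads exactly that many bytes, and the raw message goes into a queue for the mediator to decode. A real socket would replace the `BytesIO` used here to simulate wire traffic.

```python
# Sketch of a generic fixed-header read: length-prefixed framing, with the
# undecoded message body queued for an application-specific mediator.
import io
import queue
import struct

HEADER = struct.Struct(">I")            # 4-byte big-endian payload length

def read_exactly(stream, n):
    """Read exactly n bytes, looping over short reads."""
    data = b""
    while len(data) < n:
        chunk = stream.read(n - len(data))
        if not chunk:
            raise EOFError("connection closed mid-message")
        data += chunk
    return data

def pump(stream, outbox):
    """One iteration of the generic loop: header, then body, then enqueue."""
    header = read_exactly(stream, HEADER.size)
    (length,) = HEADER.unpack(header)
    outbox.put(read_exactly(stream, length))

outbox = queue.Queue()
wire = io.BytesIO(HEADER.pack(5) + b"hello")   # simulated socket traffic
pump(wire, outbox)
```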

Others may have better suggestions, but I doubt you can make something totally generic as far as data storage goes because, as I have learned, every application needs something a little different. But you may be able to use OOP to create a data logger class, and your application-specific code can inherit from this class. You may even be able to create a "Data Logger Manager" process that you can reuse, using dynamic dispatch and delegation to call application-specific methods for logging. This way the architecture can stay the same, but the handling of the incoming data can be delegated to your application-specific logger class. Does this make sense?
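A rough sketch of that split, assuming Python in place of LabVIEW classes (dynamic dispatch maps roughly to method overriding here; `DataLogger`, `CsvLogger`, and `LoggerManager` are all hypothetical names):

```python
# The reusable manager drives a base-class interface; each application
# subclasses the logger to supply its own handling.
class DataLogger:
    def log(self, record):
        raise NotImplementedError       # overridden per application

class CsvLogger(DataLogger):
    """One possible application-specific child."""
    def __init__(self):
        self.lines = []

    def log(self, record):
        self.lines.append(",".join(str(v) for v in record))

class LoggerManager:
    """Reusable process: receives records and delegates to whatever
    logger object it was initialized with."""
    def __init__(self, logger):
        self.logger = logger

    def handle(self, record):
        self.logger.log(record)         # dispatches on the concrete class

manager = LoggerManager(CsvLogger())
manager.handle(("test-01", 42.0, "ok"))
```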

To help clarify, I can go back to the TCP manager I have. This manager holds a "Network Messenger" in its private data. The network messenger can be UDP or TCP. The architecture is the same (open connection, read, write, close, etc.), but the specific method that is called will be the TCP or UDP read/write/close, etc., based on which object was initialized in the TCP manager.
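In sketch form (hypothetical names, Python standing in for the LabVIEW class hierarchy), that delegation looks like one interface with two concrete implementations chosen at initialization:

```python
# One "Network Messenger" interface, two concrete implementations; the
# manager delegates without knowing which one it holds.
class NetworkMessenger:
    def open(self): ...
    def read(self): ...
    def write(self, data): ...
    def close(self): ...

class TcpMessenger(NetworkMessenger):
    def read(self):
        return "TCP read"               # a real child would read a socket

class UdpMessenger(NetworkMessenger):
    def read(self):
        return "UDP read"

class ConnectionManager:
    def __init__(self, messenger: NetworkMessenger):
        self.messenger = messenger      # TCP or UDP, same architecture

    def read(self):
        return self.messenger.read()    # dispatches to the held object

tcp_mgr = ConnectionManager(TcpMessenger())
udp_mgr = ConnectionManager(UdpMessenger())
```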

If you're not familiar with OOP, this may all be Greek to you and you will have to get some other suggestions. Logging is often very application specific, which makes it a bit harder to architect in a totally reusable, never-write-this-again way. But you can take some steps to make it a bit more flexible. Others may have more (see: better) input.

Here is an example of a logger from NI's website, but admittedly, I haven't used it, so I'm not sure how valuable it is. As far as I can tell, it seems like a valid example, though.

Edited by for(imstuck)

I think the message mediator approach is a good one. See: Mediator Pattern. I had used this for a while, thinking I was quite novel, and then Daklu told me this pattern actually has a name. Not so novel anymore :P . What's nice about this is that you can create generic processes that just spit out messages, and then take care of the application-specific stuff in your mediator. Think, for example, of a generic TCP loop. The TCP manager that I reuse reads a fixed header and then the message data, and just sticks it in a queue. It can be reused in every application. I then let my mediator decode the message as it pertains to that application and send it where it needs to go.

Cool. I’m not reinventing the wheel.

Does this make sense?

I think so, at least on the face of it. In my most recent applications, I’ve applied OOP as far as my storage type goes – Storage (parent), Access DB (child), text (potential child), etc. My present approach is to bundle new data into an application-specific cluster, which is converted to a variant before being passed into a dynamic dispatch “Store” method. It works because the DB queries included with LV are able to decode the variant and insert the results in the proper columns. I haven’t tried another file format because of time and lack of need, but I’d like to include others as an option in upcoming revs, and I don’t think I’ll be able to slide by on the included software. I was also looking to extend that principle to building the specific entries – test ID, captured data, test notes, time stamp.
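That hierarchy can be sketched as follows, assuming Python in place of LabVIEW classes, with a plain name-to-value mapping playing the role of the cluster-in-a-variant (the class and field names are hypothetical):

```python
# Storage parent with dynamic-dispatch Store; children unpack the bundled
# record in whatever way their format needs.
class Storage:
    def store(self, record):
        raise NotImplementedError

class AccessDb(Storage):
    def __init__(self):
        self.rows = []

    def store(self, record):
        # a real child would build an INSERT from the keys and values
        self.rows.append(dict(record))

class TextFile(Storage):
    def __init__(self):
        self.lines = []

    def store(self, record):
        self.lines.append("\t".join(f"{k}={v}" for k, v in record.items()))

record = {"test_id": "T-01", "value": 42.0}   # application-specific bundle
db = AccessDb()
txt = TextFile()
db.store(record)                              # same call on each backend,
txt.store(record)                             # different method dispatched
```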

Here is an example of a logger from NI's website, but admittedly, I haven't used it, so I'm not sure how valuable it is. As far as I can tell, it seems like a valid example, though.

The linked example builds its log entries by flattening clusters to a string in the “Add Log” VI, then unflattening them using the same control as a key in the “Write Log” VI. It works for the example, but it seems that this approach would couple the code to the application – especially for “results” entries.

Could I follow a similar path with a modification that I think would decrease coupling:

  • Receive results or ID information in the logger mediator
  • Convert to a string and append header information (position in the log, etc.) based on the message name
  • Enqueue the converted information into the logger process
  • Dequeue and handle the information in a known way using the header information – possibly by replacing a subset in an array of strings – then rely on non-LV, application-specific tools for analysis or retrieval
  • Keep the in-progress information in the logger until a “Store” instruction arrives or the record is complete
  • On “Store” or a complete record, perform the dynamic dispatch “Store” method
  • Revert the temporary logger information to the in-progress state
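The steps above can be sketched end to end. This is a hypothetical Python stand-in for the LabVIEW queues and loops: the header is reduced to a slot index, and the dynamic-dispatch "Store" is reduced to an append, just to show the data flow.

```python
# Sketch of the proposed pipeline: tag each message with header info,
# assemble the record in place, flush on "Store", then reset.
import queue

SLOTS = {"test id": 0, "data": 1, "notes": 2}   # position-in-log header

def to_message(name, value):
    return (SLOTS[name], str(value))            # convert + append header

inbox = queue.Queue()
record = [""] * len(SLOTS)                      # in-progress record
stored = []

# Mediator side: convert and enqueue into the logger process.
for name, value in [("test id", "T-01"), ("data", 42.0), ("notes", "ok")]:
    inbox.put(to_message(name, value))

# Logger side: dequeue and handle using the header information.
while not inbox.empty():
    slot, text = inbox.get()
    record[slot] = text                         # replace subset in the array

stored.append(list(record))                     # "Store" would dispatch here
record = [""] * len(SLOTS)                      # revert to in-progress state
```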

Storage space could be a problem, since strings are a bulky representation, but maybe I could figure out a way to include the data type in the header and handle decoding in the logger.

Thanks for the quick turn.

Edited for formatting issues.

Edited by theoneandonlyjim
