McQuillan

SQLite based application design


Hi Everyone,

I (re)watched James Powell's talk at GDevCon#2 about Application Design Around SQLite. I really like this idea, as I have an application with lots of data (from serial devices and software configuration) that's needed in several areas of the application (and by external applications). His talk was a 'light-bulb' moment: I realized I could have a centralized SQLite database that all the modules access to select/update data.

He said the database could be the 'model' in the model-view-controller design pattern because the database is very fast. So you can collect data in one actor and publish it directly to the DB, and have another actor read the data directly from the DB, with a benefit of having another application being able to view the data.

Link to James's talk: https://www.youtube.com/watch?v=i4_l-UuWtPY&t=1241s

 

I created a basic proof of concept which launches N processes to generate data (publish to the database) and others to act as a UI (read data from the database and update configuration settings in it, like set-point). However, after launching a couple of processes I ran into 'Database is locked (error 5)', and I realized two things: SQLite databases aren't magically able to have N concurrent readers/writers :(, and I'm not using them right... (I hope).
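(Side note for anyone hitting the same error outside LabVIEW: SQLite serializes writers, but in WAL journal mode readers no longer block the writer, and a busy timeout makes a second writer wait for the lock instead of failing immediately. A minimal Python sketch of that connection setup, purely as an illustration of the two pragmas:)

```python
import os
import sqlite3
import tempfile

# Two independent connections to the same database file, standing in for
# two separate actors/processes.
path = os.path.join(tempfile.mkdtemp(), "poc.db")
writer = sqlite3.connect(path)
reader = sqlite3.connect(path)

for conn in (writer, reader):
    # WAL mode lets readers run concurrently with one writer.
    conn.execute("PRAGMA journal_mode=WAL")
    # Wait up to 2 s for a lock instead of raising "database is locked".
    conn.execute("PRAGMA busy_timeout=2000")

writer.execute("CREATE TABLE current (item TEXT PRIMARY KEY, value REAL)")
writer.execute("INSERT INTO current VALUES ('setpoint', 21.5)")
writer.commit()

# The reader sees committed data even while the writer keeps the file open.
row = reader.execute(
    "SELECT value FROM current WHERE item='setpoint'").fetchone()
print(row[0])  # 21.5
```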

I've created a schematic (attached) to show what I did in the PoC (that was getting 'Database is locked (error 5)' errors).

I'm a solo-developer (and SQLite first-timer*) and would really appreciate it if someone could look over the schematic and give me guidance on how it should be done. There's a lot more to the actual application, but I think once I understand the limitations of the DB I'll be able to work with it.
*I've done SQL training courses.

In the actual application, the UI and business logic are on two completely separate branches (I only connected them to a single actor for the PoC) 

Some general questions / thoughts I had:

  1. Is the SQLite-based application design something worth pursuing / is it a sensible design choice?
  2. Instead of creating lots of tables (when I launch the actors), should I instead make separate databases, to reduce the number of requests per DB? (I shouldn't think so... but worth asking.)
  3. When generating data, I'm using UPDATE to change a single row in a table (current value), I'm then reading that single row in other areas of code. (Then if logging is needed, I create a trigger to copy the data to a separate table)
    1. Would it be better if I INSERT data and have the other modules read the max RowId for the current value and periodically delete rows?
  4. The more clones I had, the slower the UI seemed to update (it should have updated 10 times/second, but slowed to once every 3 seconds). I was under the impression that you can do thousands of transactions per second, so I think I'm querying the DB inefficiently.
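(For the record, the pattern asked about in 3.1 — INSERT each sample, read back the newest row, prune old rows periodically — can be sketched like this in Python's sqlite3; the table and column names are made up for illustration:)

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE readings (ts REAL, value REAL)")

# Writer: INSERT each new sample instead of UPDATE-ing a single row.
for ts, value in [(1.0, 20.1), (2.0, 20.4), (3.0, 20.9)]:
    conn.execute("INSERT INTO readings VALUES (?, ?)", (ts, value))
conn.commit()

# Reader: the current value is simply the row with the highest rowid.
latest = conn.execute(
    "SELECT value FROM readings ORDER BY rowid DESC LIMIT 1").fetchone()[0]

# Housekeeping: periodically keep only the most recent 1000 rows.
conn.execute("""DELETE FROM readings WHERE rowid NOT IN
                (SELECT rowid FROM readings ORDER BY rowid DESC LIMIT 1000)""")
conn.commit()
print(latest)  # 20.9
```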

The two main reasons why I like the database approach are:

  1. External applications will need to 'tap-into' the data, if they could get to it via an SQL query - that would be ideal.
  2. Data-logging is a big part of the application.

Any advice you can give would be much appreciated.

Cheers,

Tom

(I'm using quite a few reuse libraries so I can't easily share the code, however, if it would be beneficial, I could re-work the PoC to just use 'Core-LabVIEW' and James Powell's SQLite API)

Proof of Concept.png



Hi Tom,

That's a lot to comment on, so let me just do stream of consciousness:

  • SQLite is fast, but not as fast as more direct writing formats like TDMS.   SQLite combines "pretty fast" with "very, very capable".  
  • One of the limits of SQLite is the number of transactions per second.  This is because, to have ACID compliance, it needs to verify that each transaction is completely written to disk, which takes time.  A workaround is to group many statements into a few transactions per second.  Note, though, that ACID compliance is a very valuable thing; keep that in mind before I tell you that you can disable it and get more transactions per second.
  • You appear to be using SQLite as a "current-value table", which requires a large number of independent writes per second, which is perhaps not a good use case for SQLite.  If I were doing a similar app, I would pipe the data to a central component that would buffer data and save it to SQLite a couple of times a second.  See the "Cyth SQLite Logger" in the LAVA-CR, which can save large numbers of log entries, but saves them once a second.
  • I note that you seem to have a lot of tables of only one row to hold data.   I would have expected a single table of many rows, with columns like Timestamp, itemID, Value (with Timestamp as the Primary Key).  
  • Have you looked at the "Attributes" subpalette of SQLite Library?   This adds an Attributes table to store scalar values.   It also demonstrates making a subclass of the Connection class to add capability, and the option of Preparing common SQL statements only once, rather than every time.   Preparing has overhead, which, though it can be small relative to large actions (or unimportant for uncommon actions), can be high for the very small, very common actions you are doing.
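Those points (grouped transactions, one many-row table, a statement prepared once rather than per call) combine roughly like this, sketched here in Python's sqlite3 rather than LabVIEW:

```python
import sqlite3
import time

conn = sqlite3.connect(":memory:")
# One table of many rows instead of many one-row tables.
conn.execute("""CREATE TABLE log (
    Timestamp REAL, ItemID TEXT, Value REAL,
    PRIMARY KEY (Timestamp, ItemID))""")

# Buffer samples in memory, then write them as ONE transaction;
# executemany reuses a single prepared INSERT statement internally.
buffer = [(time.time() + i, "channel_%d" % (i % 4), float(i))
          for i in range(1000)]
with conn:  # BEGIN ... COMMIT wrapped around the whole batch
    conn.executemany("INSERT INTO log VALUES (?, ?, ?)", buffer)

count = conn.execute("SELECT COUNT(*) FROM log").fetchone()[0]
print(count)  # 1000
```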

It seemed strange to me that you are "querying attributes on each iteration to see if they have changed", but then I realized that you are applying "Model-View-Controller" as an application-level monolithic thing, which I don't agree with.  I think the principle should be applied widely, but at a lower level.  To me, every bit of state data has a natural place where it "exists".  This place is the sole "Model" or "master" of that state, and if the state is held in any other place, those places are "copies".  For example, to me it is natural to say that the "Model" for the Settings of your DAQ processes (SetPoint, etc.) is the process itself, not the saved settings in the db.  No one should be changing the saved settings in the database except the DAQ process itself.  Any change of setpoint should happen by requesting the DAQ process to change, and then the DAQ process will save the new setpoint in the database.  On shutdown, the DAQ process should write settings to the database.  Yet at the same time as viewing the database as NOT the Model of settings, I would happily view it as the Model of the Data History.

BTW, Settings is a great use case for JSON.   Save a single JSON item as configuration for each of your DAQ processes.   As the DAQ is the Model, only it ever needs to know the format.  
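One JSON blob per process, stored in a small settings table, might look like this (a Python sketch; the table and process names are invented for illustration):

```python
import json
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE settings (process TEXT PRIMARY KEY, config TEXT)")

# The DAQ process is the only writer; it serializes its whole
# configuration as a single JSON item, so only it needs to know the format.
config = {"setpoint": 21.5, "rate_hz": 10, "channels": ["ai0", "ai1"]}
conn.execute("INSERT OR REPLACE INTO settings VALUES (?, ?)",
             ("DAQ_1", json.dumps(config)))
conn.commit()

# On startup the process reads its blob back and deserializes it.
restored = json.loads(conn.execute(
    "SELECT config FROM settings WHERE process='DAQ_1'").fetchone()[0])
print(restored["setpoint"])  # 21.5
```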

Gotta go for now.

 



Hi James,

Thank you very much for replying; you've made some very interesting points and helped switch my perspective.

You're right, I was using the DB as a CVT - which isn't sensible, and the many-tables one-row comment (instead of one-table, many-rows) seems so obvious now it's been pointed out!

 

I'm going to mull it over and work on a better design on Monday. But I think I'm going to create a design using the following:

  1. For items that actually need to be in a CVT, I'll create one using a DVR/Variant Attributes.
  2. (Like you say) anytime a module wants to change a setting (like setpoint), I'll send the actor a message and let the actor make the decision (this is what I normally do, but I wanted to try something new/got caught up in the DB design)
  3. When an actor receives a 'logging request,' instead of logging to the database directly, I'll try out the Cyth-SQLite-Logger to make fewer, but 'bigger' commits (I need to look into this properly / learn how to use it, but I like the concept)
  4. In your presentation you showed using JSON in SQLite, I'll give that a go too.
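(The "fewer but bigger commits" idea from point 3, as a rough Python sketch; the actual Cyth SQLite Logger implementation differs, this only shows the concept of buffering entries and flushing them in one transaction:)

```python
import sqlite3
import time

class BufferedLogger:
    """Collect log entries in memory; flush them in one transaction."""
    def __init__(self, conn, flush_every=1.0):
        self.conn, self.flush_every = conn, flush_every
        self.buffer, self.last_flush = [], time.monotonic()
        conn.execute("CREATE TABLE IF NOT EXISTS log (ts REAL, msg TEXT)")

    def log(self, msg):
        self.buffer.append((time.time(), msg))
        # Flush at most once per interval, not once per entry.
        if time.monotonic() - self.last_flush >= self.flush_every:
            self.flush()

    def flush(self):
        if self.buffer:
            with self.conn:  # one commit for the whole batch
                self.conn.executemany(
                    "INSERT INTO log VALUES (?, ?)", self.buffer)
            self.buffer.clear()
        self.last_flush = time.monotonic()

logger = BufferedLogger(sqlite3.connect(":memory:"), flush_every=1.0)
for i in range(100):
    logger.log("entry %d" % i)
logger.flush()  # final flush on shutdown
n = logger.conn.execute("SELECT COUNT(*) FROM log").fetchone()[0]
print(n)  # 100
```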

Cheers,

Tom



  • Similar Content

    • By drjdpowell
      View File SQLite Library
      Introductory video now available on YouTube: Intro to SQLite in LabVIEW
       
      SQLite3 is a very light-weight, server-less, database-in-a-file library. See www.SQLite.org. This package is a wrapper of the SQLite3 C library and follows it closely.

      There are basically two use modes: (1) calling "Execute SQL" on a Connection to run SQL scripts (and optionally return 2D arrays of strings from an SQL statement that returns results); and (2) "Preparing" a single SQL statement and executing it step-by-step explicitly. The advantage of the latter is the ability to "Bind" parameters to the statement, and get the column data back in the desired datatype. The "Bind" and "Get Column" VIs are set as properties of the "SQL Statement" object, for convenience in working with large numbers of them.
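      (The two modes have rough analogues in Python's sqlite3, shown here only as an analogy to the LabVIEW API, not as its actual interface:)

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Mode 1: run a whole SQL script in one call, then fetch result rows.
conn.executescript("""
    CREATE TABLE t (id INTEGER PRIMARY KEY, name TEXT);
    INSERT INTO t (name) VALUES ('alpha'), ('beta');
""")
rows = conn.execute("SELECT id, name FROM t ORDER BY id").fetchall()

# Mode 2: one parameterized statement, executed repeatedly with bound
# values (the statement is prepared once and reused).
conn.executemany("INSERT INTO t (name) VALUES (?)",
                 [("gamma",), ("delta",)])
conn.commit()
total = conn.execute("SELECT COUNT(*) FROM t").fetchone()[0]
print(total)  # 4
```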

      See the original conversation on this here.
       
      Now hosted on the NI LabVIEW Tools Network.
       
      ***Requires VIPM 2017 or later for install.***
      Submitter drjdpowell Submitted 06/19/2012 Category Database & File IO LabVIEW Version 2013 License Type BSD (Most common)  
    • By Thang Nguyen
      Hi,
      I would like to know if I can write stream data into two different tables with SQLite at the same time or not. Currently my system has one camera, and I store the captured images and particle measurements in one table. I have just added one more camera to the system and would like to do the same thing and write to a second table in a parallel process. I wonder if this is possible or not.
      I use SQLite library.
      Thanks in advance.
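      (SQLite allows only one writer at a time per database file, so two parallel processes writing directly will contend for the lock. One pattern that avoids this is to funnel both cameras' rows through a single writer via a queue; a hedged Python sketch, with invented table names:)

```python
import queue
import sqlite3

# Both camera processes put (table, row) items on one shared queue;
# a single writer task drains it, so only one connection ever writes.
q = queue.Queue()
for i in range(5):
    q.put(("cam1", (float(i), 1.0 * i)))  # hypothetical measurements
    q.put(("cam2", (float(i), 2.0 * i)))

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE cam1 (ts REAL, size REAL)")
conn.execute("CREATE TABLE cam2 (ts REAL, size REAL)")

with conn:  # one transaction for the whole drained batch
    while not q.empty():
        table, row = q.get()
        # Never format untrusted names into SQL; whitelist them first.
        assert table in ("cam1", "cam2")
        conn.execute("INSERT INTO %s VALUES (?, ?)" % table, row)

counts = tuple(conn.execute(
    "SELECT (SELECT COUNT(*) FROM cam1), "
    "(SELECT COUNT(*) FROM cam2)").fetchone())
print(counts)  # (5, 5)
```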
       
    • By Munch
      Good Afternoon,
      I have been having an issue trying to resolve an error all afternoon, and hope that you might be able to help.
      I am trying to execute the follow SQL "
      SELECT m.name as tableName,
             p.name as columnName
      FROM sqlite_master m
      left outer join pragma_table_info((m.name)) p
           on m.name <> p.name
           where columnName = 'UploadedFlag'
      order by tableName, columnName
      " that returns a list of the tables containing the 'UploadedFlag' column. I have tested the SQL in both HeidiSQL and SQLite Expert Personal, and it runs and returns what I expect; however, when I run it through LabVIEW I get a 402860 error with the description " SQLITE_ERROR(1): near "(": syntax error "
      If anyone could point me in the direction of the syntax error then I would be very grateful.
      Regards
      Mark
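      (For reference: table-valued pragma functions like `pragma_table_info()` require SQLite 3.16.0 or newer, and an older sqlite3 DLL produces exactly that near-"(" syntax error even though the SQL is valid in newer tools. A version-independent equivalent lists the tables from `sqlite_master` and runs `PRAGMA table_info` on each; a Python sketch with invented table names:)

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE results  (id INTEGER, UploadedFlag INTEGER);
    CREATE TABLE sessions (id INTEGER, name TEXT);
    CREATE TABLE images   (id INTEGER, UploadedFlag INTEGER);
""")

# Works on any SQLite version: list tables, then PRAGMA table_info each.
matches = []
for (table,) in conn.execute(
        "SELECT name FROM sqlite_master WHERE type='table' ORDER BY name"):
    for col in conn.execute("PRAGMA table_info(%s)" % table):
        if col[1] == "UploadedFlag":  # col[1] is the column name
            matches.append(table)
print(matches)  # ['images', 'results']
```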
    • By drjdpowell
      For comment, here is a beta version of the next SQLite Library release (1.11).  It has a significant new feature of a "Parameter(s)" input to the "Execute SQL" functions.  This can be a single parameter or a cluster of multiple parameters.  It uses Variant functions and will not be as performant as a more explicit preparing and binding of a Statement object, but should be easier to code.

       
      drjdpowell_lib_sqlite_labview-1.11.0.86.vip