That's a lot to comment on, so let me just do stream of consciousness:
SQLite is fast, but not as fast as more direct writing formats like TDMS. SQLite combines "pretty fast" with "very, very capable".
One of the limits of SQLite is the number of transactions per second. This is because, to be ACID-compliant, it must verify that each transaction is completely written to disk, which takes time. A workaround is to group many statements into a few transactions per second. Note, though, that ACID compliance is a very valuable thing; keep that in mind before I tell you that you can disable it and get more transactions per second.
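To illustrate the grouping idea (in Python's `sqlite3`, since I can't paste LabVIEW here; the table and data are made up), wrap many INSERTs in a single explicit transaction so there is only one commit, i.e. one verified disk write, per batch:

```python
import sqlite3
import time

conn = sqlite3.connect(":memory:")  # a real app would use a file path
conn.execute("CREATE TABLE log (ts REAL, value REAL)")

# 1000 inserts, but only ONE transaction (one commit / verified
# disk write on a file-backed database) for the whole batch.
rows = [(time.time() + i, float(i)) for i in range(1000)]
with conn:  # BEGIN ... COMMIT around the entire batch
    conn.executemany("INSERT INTO log VALUES (?, ?)", rows)

print(conn.execute("SELECT COUNT(*) FROM log").fetchone()[0])  # 1000
```

Done one-commit-per-INSERT instead, the same 1000 rows would cost 1000 disk syncs.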
You appear to be using SQLite as a "current-value table", which requires a large number of independent writes per second, and that is perhaps not a good use case for SQLite. If I were writing a similar app, I would pipe the data to a central component that buffers it and saves it to SQLite a couple of times a second. See the "Cyth SQLite Logger" in the LAVA-CR, which can accept large numbers of log entries but saves them once a second.
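A minimal Python sketch of that buffering pattern (names and schema are my invention, not the Cyth logger's actual design): producers enqueue values at any rate, and a single writer thread flushes the queue to SQLite in one transaction per interval:

```python
import queue
import sqlite3
import threading
import time

class BufferedLogger:
    """Sketch: producers call log() at any rate; a writer thread
    flushes everything queued to SQLite once per interval."""

    def __init__(self, db_path, flush_interval=0.5):
        self.q = queue.Queue()
        self.interval = flush_interval
        self.db_path = db_path
        self._stop = threading.Event()
        self.thread = threading.Thread(target=self._run, daemon=True)
        self.thread.start()

    def log(self, name, value):
        # Cheap and non-blocking for the producer.
        self.q.put((time.time(), name, value))

    def _run(self):
        # Connection is created and used only in this thread.
        conn = sqlite3.connect(self.db_path)
        conn.execute(
            "CREATE TABLE IF NOT EXISTS log (ts REAL, name TEXT, value REAL)")
        while True:
            stopping = self._stop.wait(self.interval)
            batch = []
            while True:
                try:
                    batch.append(self.q.get_nowait())
                except queue.Empty:
                    break
            if batch:
                with conn:  # one transaction per flush
                    conn.executemany(
                        "INSERT INTO log VALUES (?, ?, ?)", batch)
            if stopping:
                break
        conn.close()

    def stop(self):
        self._stop.set()
        self.thread.join()
```

The key point is that the number of transactions per second is fixed by `flush_interval`, no matter how fast `log()` is called.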
I note that you seem to have a lot of tables of only one row to hold data. I would have expected a single table of many rows, with columns like Timestamp, itemID, Value (with Timestamp as the Primary Key).
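Something like this (again a Python sketch with made-up item names; I've used a composite key of Timestamp and ItemID in case two items ever share a timestamp):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# One history table for ALL items, instead of one single-row table per item.
conn.execute("""
    CREATE TABLE History (
        Timestamp REAL,
        ItemID    TEXT,
        Value     REAL,
        PRIMARY KEY (Timestamp, ItemID)
    )
""")
conn.execute("INSERT INTO History VALUES (1700000000.0, 'SetPoint', 25.0)")
conn.execute("INSERT INTO History VALUES (1700000000.0, 'Temperature', 24.3)")

# The "current value" of any item is just its latest row:
row = conn.execute(
    "SELECT Value FROM History WHERE ItemID = ? "
    "ORDER BY Timestamp DESC LIMIT 1",
    ("Temperature",)).fetchone()
print(row[0])  # 24.3
```

You get the full history and the current value from the same table, with no per-item schema to maintain.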
Have you looked at the "Attributes" subpalette of SQLite Library? This adds an Attributes table to store scalar values. It also demonstrates making a subclass of the Connection class to add capability, and the option of Preparing common SQL statements only once, rather than every time. Preparing has overhead; it can be small relative to large actions, and unimportant for rare ones, but it is significant for the very small, very common actions you are doing.
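The Attributes idea itself is simple enough to sketch in a few lines of Python (this is my approximation of the concept, not the library's actual LabVIEW implementation): one row per named scalar, with INSERT OR REPLACE acting as "set attribute":

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# One row per named scalar value.
conn.execute("CREATE TABLE Attributes (Name TEXT PRIMARY KEY, Value)")

def set_attribute(name, value):
    # INSERT OR REPLACE: updates the row if Name already exists.
    with conn:
        conn.execute(
            "INSERT OR REPLACE INTO Attributes VALUES (?, ?)", (name, value))

def get_attribute(name, default=None):
    row = conn.execute(
        "SELECT Value FROM Attributes WHERE Name = ?", (name,)).fetchone()
    return row[0] if row else default

set_attribute("SetPoint", 25.0)
set_attribute("SetPoint", 26.5)       # replaces, does not duplicate
print(get_attribute("SetPoint"))      # 26.5
print(get_attribute("Missing", 0.0))  # 0.0
```

(Python's `sqlite3` happens to cache compiled statements internally; in LabVIEW the equivalent win comes from keeping the Prepared statement in a shift register instead of re-Preparing each iteration.)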
It seemed strange to me that you are "querying attributes on each iteration to see if they have changed", but then I realized that you are applying "Model-View-Controller" as an application-level monolithic thing, which I don't agree with. I think the principle should be applied widely, but at a lower level. To me, every bit of state data has a natural place where it "lives". That place is the sole "Model", or master, of that state; anywhere else the state appears is a "copy". For example, to me it is natural to say that the "Model" for the settings of your DAQ processes (SetPoint, etc.) is the process itself, not the saved settings in the database. No one should change the saved settings in the database except the DAQ process itself. Any change of SetPoint should happen by requesting the DAQ process to change, after which the DAQ process saves the new SetPoint to the database. On shutdown, the DAQ process should write its settings to the database. Yet while I view the database as NOT the Model of the settings, I would happily view it as the Model of the data history.
BTW, Settings is a great use case for JSON. Save a single JSON item as configuration for each of your DAQ processes. As the DAQ is the Model, only it ever needs to know the format.
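Sketching that in Python (process name and config fields are invented for the example): the whole configuration is one JSON string in one row, and only the owning process ever parses it:

```python
import json
import sqlite3

conn = sqlite3.connect(":memory:")
# One JSON blob per process; the schema never changes when
# the process grows a new setting.
conn.execute("CREATE TABLE Settings (Process TEXT PRIMARY KEY, Config TEXT)")

# The DAQ process saves its own settings (e.g. on change or shutdown):
config = {"SetPoint": 25.0, "SampleRate_Hz": 100, "Channels": ["ai0", "ai1"]}
with conn:
    conn.execute("INSERT OR REPLACE INTO Settings VALUES (?, ?)",
                 ("TempController", json.dumps(config)))

# On startup, the same process reads its one JSON item back:
loaded = json.loads(conn.execute(
    "SELECT Config FROM Settings WHERE Process = ?",
    ("TempController",)).fetchone()[0])
print(loaded["SetPoint"])  # 25.0
```

Because the DAQ process is the Model, nothing else needs to know the JSON's format, so you can add or rename settings without touching the table schema.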
Gotta go for now.