1) SQLite isn’t a compression format, and as a flat table it won’t necessarily be any smaller on disk than a spreadsheet file. Larger, actually, due to the various lookup indexes. However, you have the opportunity to use structured data, avoiding a lot of duplication, which can result in smaller files (or, at least, the freedom to add much more info at not much greater size). For example, you have “site” and “sending app” strings that repeat often. If you instead saved keys into separate “Site” and “Application” tables, you could store lots of info in them that can be “joined” in with a VIEW. Similarly, you could have an “Errors” table that allows extended error descriptions instead of just a simple error code (or error severity, etc.). The joining VIEW would look something like:
CREATE VIEW Event_VIEW AS
SELECT * FROM Application_Events
JOIN Errors USING (ErrCode)
JOIN Site USING (SiteID)
JOIN Application USING (AppID);
Your UI would then query this View, and filter on any info in all of these tables: find all events whose error description contains the word “test”, for example.
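As a rough sketch, that filter could be a query like the one below (assuming the Errors table carries an ErrDescription column; that column name is illustrative):
SELECT * FROM Event_VIEW
WHERE ErrDescription LIKE '%test%';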
2) Look up “LIMIT” and “OFFSET”, and study how they are used in the “Cyth SQLite Log Viewer”. In that viewer, updating the filtering of a selected UI table takes milliseconds, not seconds, because only the visible rows of the UI table are actually selected. When the User moves the scrollbar, the SELECT is repeated multiple times per second, so it looks to the User like a table with thousands of rows. And one is free to use a lot of slow property nodes to do things like text colour, since one is never handling more than a few dozen rows.
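For example, a paging query for just the visible window might look like this (the table name, ordering column, and numbers are illustrative; in practice the OFFSET would be computed from the scrollbar position):
SELECT * FROM Event_VIEW
ORDER BY Timestamp
LIMIT 25 OFFSET 150;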
3) I wouldn’t bother with VACUUM in a logging application, as the space freed by any deletion will just be reused by later inserts. VACUUM is only worth running if you delete a large amount of data without intending to reuse the space.
4) You cannot unlock the file if you’ve dropped the pointer to the connection without closing it, I’m afraid. You have to restart LabVIEW to unload the SQLite DLL. Your code should always call the Finalize and Close methods, even on error.
Hi Rob,
You need to wrap multiple INSERTs into a single transaction with “BEGIN” and “COMMIT”. Each transaction requires verified writing to the disk twice, and a hard disk only spins on the order of once every 10 ms, so you can only get a few dozen transactions per second. You need to buffer your data and do a bulk insert about once a second (place a FOR LOOP between the “Prepare” and “Finalize” subVIs in your code image, and feed in an array of your data clusters).
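In SQL terms, the buffered bulk insert amounts to something like this (the table and column names are illustrative; one INSERT per buffered data cluster):
BEGIN;
INSERT INTO Log (Timestamp, ErrCode, Message) VALUES ('2024-01-01 12:00:00.001', 0, 'first buffered entry');
INSERT INTO Log (Timestamp, ErrCode, Message) VALUES ('2024-01-01 12:00:00.350', 0, 'second buffered entry');
COMMIT;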
This touches on jollybandit’s question (5): durability against power failure requires confirmed writes to disk, which is time-consuming, so you need some kind of tradeoff between saving immediately and buffering with a delay. About one save per second is the tradeoff I use.