
I don't think you can match the raw pull-from-file performance of something like TDMS (unfragmented, of course). SQLite's advantage is its rich capability, and if you aren't using that capability then it will not compare well.


Sure. I did not expect the same speed; I was just hoping retrieval would still be fast enough to feel near-instantaneous to the users. That would allow it to be maybe 10x slower than the chunk-based binary file format.

Your library is the only one I have found that also runs on RT, and it is also, I see now, the fastest one, at 20x slower than my current solution. I might just need to adjust to that to enjoy the many other benefits 🙂


On 2/12/2020 at 2:53 PM, Mads said:

To give a simple example: if I have a table of 350,000 rows containing a time stamp (real) and 3 columns of measurements (real), and I select a time (sub)period from this table, it takes up to 0.6 seconds to retrieve the measurements on my test machine (using either a single SQL Execute 2D dbl, or the Select many rows template). For comparison, the current binary file solution can locate and read the same sub-periods with a worst case of 0.03 seconds, 20x faster. This difference would not be an issue if the accumulated time per batch of requests was still sub-second, but the end users might combine several requests, which means the total waiting time goes from less than a second ("instantaneous") to 5-30 seconds...

I'll just drop a link to a post on the dark side. I made that post after doing benchmark comparisons between TDMS and SQLite (sound familiar?).

It might be important to take into account that "first read" performance and "typical" performance can be orders of magnitude different. And the differences may be a function of the file type.

OTOH, as mentioned in the post, the same mechanism can be (ab)used to create a 64-bit file cache when using 32-bit LabVIEW, so that's nice.
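If you want to see the cold-versus-warm effect for yourself, here is a quick sketch (Python here for brevity; the file name is just a placeholder, point it at any large file on disk):

```python
import time

PATH = "big_datafile.bin"  # illustrative name; use any multi-hundred-MB file

def timed_read(path):
    """Read the whole file in 1 MiB chunks and return the elapsed seconds."""
    start = time.perf_counter()
    with open(path, "rb") as f:
        while f.read(1024 * 1024):
            pass
    return time.perf_counter() - start

# First pass: likely has to hit the disk ("first read" performance).
# For a truly cold read you may need to reboot or flush the OS cache first.
print(f"cold read: {timed_read(PATH):.3f} s")

# Second pass: likely served from the OS page cache ("typical" performance).
print(f"warm read: {timed_read(PATH):.3f} s")
```

On the first pass the OS usually has to go to the disk; on the second it can serve the data straight from RAM, which is why benchmarking against "fresh" files matters so much.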


1 hour ago, shoneill said:

I'll just drop a link to a post on the dark side. I made that post after doing benchmark comparisons between TDMS and SQLite (sound familiar?).

It might be important to take into account that "first read" performance and "typical" performance can be orders of magnitude different. And the differences may be a function of the file type.

OTOH, as mentioned in the post, the same mechanism can be (ab)used to create a 64-bit file cache when using 32-bit LabVIEW, so that's nice.

Caching did cross my mind during the testing earlier, so I tried running the tests on "fresh" data to get worst-case scenarios. It can mess things up quite easily though, I agree. Caching does improve both alternatives in this test case, and logically mostly the non-SQL solution, but on my machine not nearly as much as you observed in the test you mention.

PS: A similarly interesting performance nugget that can be nice to know about is the effect of branch prediction😃

 

22 hours ago, Mads said:


PS: A similarly interesting performance nugget that can be nice to know about is the effect of branch prediction😃

I did NOT know that branch prediction could have such an effect. Nice post. Thanks.

That article brings some old memories to light. I remember during the old "Prime number" coding challenge on the NI site there were times when adding a dummy case to my case structures produced measurable changes in execution speed. I couldn't explain it, but I accepted the improved execution times and carried on with my business. Seeing how the execution was more or less deterministic, could it be that I was moving between "good" and "bad" branch patterns for the case structure? IIRC, the results were tested on a Pentium 4, with its 31-stage pipeline. It may be completely unrelated, but it was always a point that stuck with me that I couldn't really explain.


On 2/12/2020 at 1:53 PM, Mads said:

I was thinking of replacing a proprietary chunk-based time series log file format that we use with SQLite, using this library. From a test I did some time ago I thought read performance would not be an issue (unlike TDMS, for example, which performs really badly due to the fragmentation you get when logging large non-continuous time series), but now that I have redone the tests I see that the read times get quite high (writes are less of an issue in this case, as we normally just write smaller batches of data once a minute). I expected the database to be much slower, but the difference is of a magnitude that could be troublesome (no longer a real-time response).

To give a simple example: if I have a table of 350,000 rows containing a time stamp (real) and 3 columns of measurements (real), and I select a time (sub)period from this table, it takes up to 0.6 seconds to retrieve the measurements on my test machine (using either a single SQL Execute 2D dbl, or the Select many rows template). For comparison, the current binary file solution can locate and read the same sub-periods with a worst case of 0.03 seconds, 20x faster. This difference would not be an issue if the accumulated time per batch of requests was still sub-second, but the end users might combine several requests, which means the total waiting time goes from less than a second ("instantaneous") to 5-30 seconds...

Perhaps I was using @ShaunR 's library back when I was able to get this done "instantaneously", but that seems to no longer exist (?), and in this case I need a solution that will run on Linux RT too... Should it be possible to read something like this any faster, and/or does anyone have any suggestions on how to improve the performance? The data will have to be on disk; the speedup cannot be based on it being in cache/memory...


The main performance criterion for SQLite is the number of rows and/or columns returned/inserted, although there have been significant improvements in recent versions (~15% over 5 years). If you look at the performance graphs, you will see that 0.6 seconds equates to about 450k rows (with 2 columns). The performance test in the SQLite API for LabVIEW library is based on 10k rows so that you get repeatable figures, and that typically yields tens of milliseconds. Below that, LabVIEW timings become dominant for the purposes of that test.

[Graph: SQLite read performance vs. number of rows returned]

If you are dumping entire tables and performance is a requirement, then a relational database is the wrong tool.
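To make the rows-returned point concrete, here is a rough benchmark sketch (Python's sqlite3 rather than the LabVIEW bindings; the schema and names are my own invention, modelled on the 350k-row example above):

```python
import random
import sqlite3
import time

conn = sqlite3.connect("bench.db")
conn.execute(
    "CREATE TABLE IF NOT EXISTS log (ts REAL, m1 REAL, m2 REAL, m3 REAL)")
# An index on the timestamp turns a time-range SELECT into a B-tree seek
# plus a scan of only the requested span, instead of a full-table scan.
conn.execute("CREATE INDEX IF NOT EXISTS log_ts ON log (ts)")

if conn.execute("SELECT COUNT(*) FROM log").fetchone()[0] == 0:
    # 350k rows of fake 1 Hz data, inserted in a single transaction:
    # one fsync for the whole batch (the usual SQLite write speedup)
    # instead of one per row.
    rows = ((float(t), random.random(), random.random(), random.random())
            for t in range(350_000))
    with conn:  # commits on success
        conn.executemany("INSERT INTO log VALUES (?, ?, ?, ?)", rows)

# The query cost should track rows RETURNED, not rows in the table.
start = time.perf_counter()
data = conn.execute(
    "SELECT ts, m1, m2, m3 FROM log WHERE ts BETWEEN ? AND ?",
    (100_000.0, 110_000.0)).fetchall()
print(f"{len(data)} rows in {time.perf_counter() - start:.4f} s")
conn.close()
```

Narrow the time window and the query time drops roughly in proportion; select the whole table and you are back in full-dump territory, which is where a relational database stops being the right tool.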



Hi there,

I'm experiencing a weird behavior with a foreign key constraint.

I'm inserting a row in a table with a bad "link" (the id does not exist in the parent table). I'm expecting an error to be thrown by EXEC, but no error occurs and the "bad" row is inserted!?

If I execute the same SQL query in SQLiteExpert, an error is thrown as expected...
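Here's a minimal version of what I'm seeing, sketched with Python's sqlite3 rather than the LabVIEW library (table names simplified). One thing I noticed while putting it together: plain SQLite leaves foreign key enforcement off by default, and it has to be enabled per connection with PRAGMA foreign_keys = ON (SQLiteExpert presumably does this for you), so that may be a factor:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE parent (id INTEGER PRIMARY KEY);
    CREATE TABLE child  (id INTEGER PRIMARY KEY,
                         parent_id INTEGER REFERENCES parent(id));
    INSERT INTO parent VALUES (1);
""")

# Default: foreign keys are NOT enforced; this bad row goes in silently.
conn.execute("INSERT INTO child VALUES (1, 999)")  # no such parent, no error
conn.commit()

# Enforcement is per connection and must be set outside a transaction.
conn.execute("PRAGMA foreign_keys = ON")
try:
    conn.execute("INSERT INTO child VALUES (2, 999)")
except sqlite3.IntegrityError as e:
    print("rejected as expected:", e)  # FOREIGN KEY constraint failed
conn.close()
```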

 

Note 1: I have a UNIQUE constraint in my table that correctly throws an error when the value inserted is not unique.

Note 2: I found the message below related to error propagation, but in my case the return code is equal to 0.

On 4/30/2017 at 4:01 PM, drjdpowell said:

Hi Max,

The actual bug, I think, was that “Step.vi” was ignoring errors outside of 1..99.  It has a case structure that only passes those codes to the error-handling subVI (written, originally, because 100 and 101 are not errors).  I have changed it so that the extended error codes are now passed (see version 1.7.3 just added to the LAVA CR).  I tested it by making a foreign key error and it now works.

[Attached image: Bugfix SQLite.png]

 

If someone has an idea of what I'm doing wrong, I'd be pleased to hear it.

 

Have a good day :)

 

 

