OK.
I'm fairly happy with the performance of the API (there are a couple more minor tweaks to come, but nothing drastic), so I started to look at SQLite's performance. In particular, I was interested in how SQLite copes with various numbers of records and whether performance deteriorates as the record count grows. Wish I hadn't...
Below is a graph of insert and select query times for 1 to 1,000,000 records.
The test machine is a Core 2 Duo running Win 7 x64, using LabVIEW 2009 SP1 x32. Each data point is an average over 10 bulk inserts using the "Speed Example.vi", and the database file was deleted before each insert to ensure fragmentation and/or tree searching were not affecting the results.
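For anyone who wants to sanity-check the raw SQLite numbers outside of LabVIEW, each data point is roughly equivalent to the sketch below. This is not the "Speed Example.vi" itself, and the two-column table layout and text payload are just illustrative stand-ins, but the shape of the test is the same: delete the file, bulk insert inside one transaction, select everything back, and average over 10 runs.

```python
import os
import sqlite3
import time

DB_FILE = "speed_test.db"   # scratch file, deleted before every run
N_RECORDS = 1_000_000       # vary this from 1 to 1,000,000 for the graph
N_RUNS = 10                 # each data point is the mean of 10 runs

def one_run(n):
    """Time a bulk insert of n two-column rows, then a select of them all."""
    if os.path.exists(DB_FILE):
        os.remove(DB_FILE)                      # fresh file => no fragmentation
    conn = sqlite3.connect(DB_FILE)
    conn.execute("CREATE TABLE t (a INTEGER, b TEXT)")

    rows = [(i, "some text") for i in range(n)]
    t0 = time.perf_counter()
    with conn:                                  # one transaction for the bulk insert
        conn.executemany("INSERT INTO t VALUES (?, ?)", rows)
    t_insert = time.perf_counter() - t0

    t0 = time.perf_counter()
    data = conn.execute("SELECT a, b FROM t").fetchall()
    t_select = time.perf_counter() - t0

    conn.close()
    assert len(data) == n
    return t_insert, t_select

results = [one_run(N_RECORDS) for _ in range(N_RUNS)]
avg_insert = sum(r[0] for r in results) / N_RUNS
avg_select = sum(r[1] for r in results) / N_RUNS
print(f"{N_RECORDS} rows: insert {avg_insert:.2f}s, select {avg_select:.2f}s")
```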
I think you can see that both insert and select times are fairly linear in relation to the number of records. And (IMHO) 5 seconds to read or write a million records (of 2 columns each) is pretty nippy.
Now the same machine (exactly the same test harness), but using LV2009 SP1 x64:
Hmmm.
It's interesting to note that up until about 100,000 records, x64 performs similarly to x32. However, above roughly 200,000 records the memory usage reported by the Windows Task Manager shows x64 starting to climb further: typically, by the end of the test, x32 has consumed about 450 MB whilst x64 is at about 850 MB. Checking SQLite's internal memory allocation using the "Memory.VI" yields identical usage in both tests, yet LV x64 appears to be using roughly twice the Windows memory. I'm tempted to hypothesise that memory allocation in LV x64 is the cause.
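For those following along, the internal figure in question is presumably SQLite's own allocation counter (sqlite3_memory_used in the C API), which only sees what SQLite itself has allocated, not what LabVIEW has. As a rough illustration, the same counters can be read directly, e.g. from Python via ctypes; the DLL name/path here is an assumption, and the numbers only cover allocations made by that particular copy of the library in the calling process:

```python
import ctypes

# Illustration only: read SQLite's own allocation counters directly.
# "sqlite3.dll" is an assumption -- point this at whichever copy of the
# library your process has actually loaded.
sqlite = ctypes.CDLL("sqlite3.dll")

# sqlite3_int64 sqlite3_memory_used(void);       bytes SQLite holds right now
sqlite.sqlite3_memory_used.restype = ctypes.c_int64
# sqlite3_int64 sqlite3_memory_highwater(int);   peak bytes (0 = don't reset)
sqlite.sqlite3_memory_highwater.restype = ctypes.c_int64
sqlite.sqlite3_memory_highwater.argtypes = [ctypes.c_int]

print("SQLite currently allocated:", sqlite.sqlite3_memory_used(), "bytes")
print("SQLite high-water mark:    ", sqlite.sqlite3_memory_highwater(0), "bytes")
```

If that figure stays the same between x32 and x64 while Task Manager shows roughly double, the extra memory is being allocated somewhere other than inside SQLite, which is why I'm pointing the finger at LV x64.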
Can anyone else reproduce this result? A single check at (say) 500,000 should be sufficient.