
[CR] SQLite API



Thanks for answering my question. I am lucky that the new version just got released. Actually, I will be writing data to a new file every time.

Which function can I use to create a new file?

Is there any difference between using SQLite as a database and writing to it, versus using it as a file format and storing data in it? Can the latter be done without using SQL queries?

I am completely new to this area, so please forgive me if my questions sound foolish.

Thanks,

Subhasis

Link to comment

Thanks for answering my question. I am lucky that the new version just got released. Actually, I will be writing data to a new file every time.

Which function can I use to create a new file?

You do not need to explicitly open or create a file with any of the high-level API functions (the exception being "Query by ref"), as they will open or create a file if one doesn't exist. Just specify the file name.

Is there any difference between using SQLite as a database and writing to it, versus using it as a file format and storing data in it? Can the latter be done without using SQL queries?

I am completely new to this area, so please forgive me if my questions sound foolish.

Thanks,

Subhasis

You cannot write directly to the file using standard file write functions; a SQLite file has a complex internal structure.
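For reference, the open-or-create behaviour mentioned above mirrors the underlying C API: sqlite3_open() creates the database file if it does not already exist, which is why the high-level VIs need no separate "create" step. A minimal sketch at the C level (the file name is just an example):

```c
/* sqlite3_open() creates "new.db" on disk if it is not already there,
   so no explicit "create file" call is needed. */
#include <sqlite3.h>
#include <stdio.h>

int main(void)
{
    sqlite3 *db = NULL;
    if (sqlite3_open("new.db", &db) != SQLITE_OK) {
        fprintf(stderr, "open failed: %s\n", sqlite3_errmsg(db));
        sqlite3_close(db);
        return 1;
    }
    sqlite3_close(db);
    return 0;
}
```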

Link to comment

OK.

I'm fairly happy with the performance of the API (there are still a couple more minor tweaks to come, but nothing drastic), so I started to look at SQLite's performance. In particular, I was interested in how SQLite copes with various numbers of records and whether performance deteriorates as the number of records increases. Wish I hadn't.

Below is a graph of inserts and select queries for 1 to 1,000,000 records.

The test machine is a Core 2 Duo running Win 7 x64 using LabVIEW 2009 SP1 x32. Each data point is an average over 10 bulk inserts using the "Speed Example.vi". The database file was also deleted before each insert to ensure fragmentation and/or tree searching were not affecting the results.

I think you can see that both insert and select times are fairly linear in relation to the number of records. And (IMHO) 5 seconds to read or write a million records (consisting of 2 columns) is pretty nippy.

Now the same machine (exactly the same test harness) but using LV2009 SP1 x64

Hmmm.

It's interesting to note that up until about 100,000 records, x64 performs similarly to x32. However, above 200,000 records the memory usage reported by the Windows Task Manager shows x64 starting to climb further. Typically, by the end of the test x32 has consumed about 450MB whilst x64 is at about 850MB. Checking SQLite's internal memory allocation using the "Memory.VI" yields identical usage between both tests. However, LV x64 seems to be using 2x the Windows memory. I'm tempted to hypothesise that memory allocation in LV x64 is the cause.

Can anyone else reproduce this result? A single check at (say) 500,000 should be sufficient.

Edited by ShaunR
Link to comment

I get the same thing on LV2010 on Win 7.

[attached images: performance graphs]

I ran it on my code and get the same issue in 32-bit and 64-bit, although it takes longer to show on 32-bit. I'm not sure of the cause. I was doing other things while testing mine, so the results might be a bit skewed.

[attached images: performance graphs]

I'm not sure if I have time to help track down the issue. My first guess would be memory allocation, perhaps in the autoindexing on the while loop (if the allocation size is increased in fixed-size chunks instead of growing exponentially, this could happen). Or maybe memory fragmentation (I'm not sure what algorithm LabVIEW's garbage collection uses, but I doubt it's compacting). The desktop trace tool can be used to check the while loop; fragmentation would be harder to figure out.

Link to comment

I'm not sure if I have time to help track down the issue. My first guess would be memory allocation, perhaps in the autoindexing on the while loop (if the allocation size is increased in fixed-size chunks instead of growing exponentially, this could happen). Or maybe memory fragmentation (I'm not sure what algorithm LabVIEW's garbage collection uses, but I doubt it's compacting). The desktop trace tool can be used to check the while loop; fragmentation would be harder to figure out.

Definitely the former.

If I pre-allocate the array and replace elements rather than auto-index, then they perform exactly the same. But what I'm confused by is why there should be a difference between x32 and x64. After all, it should be the same amount of memory being (re)allocated, and it is an LV-internal implementation.

Link to comment

Definitely the former.

If I pre-allocate the array and replace elements rather than auto-index, then they perform exactly the same. But what I'm confused by is why there should be a difference between x32 and x64. After all, it should be the same amount of memory being (re)allocated, and it is an LV-internal implementation.

The desktop trace tool can help figure out how memory is getting allocated, if you really want to get into the details.

My guess is that since each string includes a pointer, there is an extra 4 bytes for each string in 64-bit, and since each string is rather short in this test, the extra 4 bytes make a significant difference in the memory needs of the two platforms.

As for working around it, I can think of a couple of options.

Return the data in chunks instead of all at once, either through limit/offset or by partially stepping through the statement to collect each chunk (see the sketch after these options). The latter is probably faster, since the former may need to re-evaluate early rows.

Figure out the size of the returned data before allocating it. I think count(*) with the same WHERE clause should work, but SQLite has to run through all the rows twice. You may also be able to use a temporary table to hold the result; calling "select count(*) from temptable" might then not need to evaluate every row (I'm not sure).
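To sketch the chunked-stepping idea from the first option at the C level (the table and column names here are invented, and a LabVIEW wrapper would do the equivalent with its statement reference):

```c
/* Collect a SELECT result in fixed-size chunks rather than all at once,
   so the result array never has to grow in one enormous allocation. */
#include <sqlite3.h>

#define CHUNK_ROWS 1000

static int fetch_in_chunks(sqlite3 *db)
{
    sqlite3_stmt *stmt;
    int rc = sqlite3_prepare_v2(db, "SELECT key, value FROM results;",
                                -1, &stmt, NULL);
    if (rc != SQLITE_OK)
        return rc;

    for (;;) {
        int n = 0;
        /* Gather up to CHUNK_ROWS rows, then hand the chunk to the caller
           (e.g. append it to a pre-sized array). */
        while (n < CHUNK_ROWS && (rc = sqlite3_step(stmt)) == SQLITE_ROW) {
            const unsigned char *val = sqlite3_column_text(stmt, 1);
            (void)val;          /* ...copy into the chunk buffer here... */
            n++;
        }
        if (rc != SQLITE_ROW)   /* SQLITE_DONE or an error: nothing left */
            break;
    }
    sqlite3_finalize(stmt);
    return (rc == SQLITE_DONE) ? SQLITE_OK : rc;
}
```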

Link to comment

My guess is that since each string includes a pointer, there is an extra 4 bytes for each string in 64-bit, and since each string is rather short in this test, the extra 4 bytes make a significant difference in the memory needs of the two platforms.

That doesn't make a lot of sense to me. Surely pointers are just references to where the data is stored rather than being stored as part of the data.

But I ran the tests again to make sure, this time inserting 500 chars rather than the <10 as before. Everything else is the same, apart from taking an average of 5 to cut down the test time.

Pretty much the same. There must be a difference between the memory managers and the way x64 manages allocation. Surprising really; I would expect LV x64 running on an x64 Windows platform to outperform an x32 app.

Link to comment

I am trying to upload some data using bulk insert. I have attached a screenshot of the section. The text table which goes into the Value Table input of the insert table VI shows the data correctly, but the data in the database gets completely jumbled up for some reason. Can you please tell me what I am doing wrong here?

Thanks,

Subhasis

[attached image: screenshot of the block diagram]

Link to comment

That doesn't make a lot of sense to me. Surely pointers are just references to where the data is stored rather than being stored as part of the data.

But I ran the tests again to make sure, this time inserting 500 chars rather than the <10 as before. Everything else is the same, apart from taking an average of 5 to cut down the test time.

Pretty much the same. There must be a difference between the memory managers and the way x64 manages allocation. Surprising really; I would expect LV x64 running on an x64 Windows platform to outperform an x32 app.

The string array is an array of pointers to the individual strings (I wasn't thinking right with my original explanation :frusty:), so extending the inserted data shouldn't change the performance of the while loop index. The reason the strings need to be stored as pointers in an array is because they have unknown lengths. If they were stored in one huge block, then to index the array you would have to read all the earlier strings. As an array of pointers, you can calculate exactly where the offset is with a single multiplication and addition.

A little more info here on what I'm talking about

http://en.wikipedia....ide_of_an_array

A lot of LabVIEW's array operations just adjust stride lengths and array pointers, which avoids data copies. Off the top of my head this should include any form of indexing, subset, and reshape array (assuming the new size fits).
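To illustrate the indexing point in C (a generic layout for the sake of argument, not a claim about LabVIEW's exact internals):

```c
/* Why variable-length strings end up stored as an array of pointers:
   indexing is then one multiply-and-add, independent of string lengths. */
#include <string.h>

/* Array of pointers: element i is found directly. */
const char *nth_string(const char **arr, size_t i)
{
    return arr[i];                    /* base + i * sizeof(char *) */
}

/* Strings packed end to end with \0 terminators: finding element i
   means walking over every earlier string. */
const char *nth_packed(const char *block, size_t i)
{
    while (i--)
        block += strlen(block) + 1;   /* skip one string and its \0 */
    return block;
}
```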

I'll try to figure out why mine shows the problem in 32-bit, since last I checked our implementations of select should be exactly the same in their index memory requirements.

Link to comment

I found that the speed example also has the same problem. It does not write PASS to the first column. It writes the value in both columns in the DB. Please correct me if I am wrong.

Thanks,

Subhasis

What version of LabVIEW and operating system are you using?

Link to comment

The string array is an array of pointers to the individual strings (I wasn't thinking right with my original explanation :frusty:), so extending the inserted data shouldn't change the performance of the while loop index. The reason the strings need to be stored as pointers in an array is because they have unknown lengths. If they were stored in one huge block, then to index the array you would have to read all the earlier strings. As an array of pointers, you can calculate exactly where the offset is with a single multiplication and addition.

A little more info here on what I'm talking about

http://en.wikipedia....ide_of_an_array

A lot of LabVIEW's array operations just adjust stride lengths and array pointers, which avoids data copies. Off the top of my head this should include any form of indexing, subset, and reshape array (assuming the new size fits).

I'll try to figure out why mine shows the problem in 32-bit, since last I checked our implementations of select should be exactly the same in their index memory requirements.

I think you are describing a char **. When I iterate over the rows and columns, I only retrieve a "C String" type (char *), which I then build into a 2D array. The LabVIEW CLN automagically dereferences this to a LabVIEW string (i.e. it adds the length bytes and truncates at \00). In this sense, it is a pointer to an array of bytes rather than an array of pointers.
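Assuming the fetch uses sqlite3_column_text() (a plausible guess for a "C String" retrieval; the helper below is hypothetical), each cell arrives as a single char * to NUL-terminated bytes, which is what the CLN then converts:

```c
/* Fetch one cell as a C string, the way a CLN configured for
   "C String Pointer" would capture it (truncating at the first \0). */
#include <sqlite3.h>
#include <string.h>

size_t fetch_cell(sqlite3_stmt *stmt, int col, char *buf, size_t buflen)
{
    const unsigned char *p = sqlite3_column_text(stmt, col);
    if (buflen == 0)
        return 0;
    if (p == NULL) {                  /* NULL column value */
        buf[0] = '\0';
        return 0;
    }
    strncpy(buf, (const char *)p, buflen - 1);
    buf[buflen - 1] = '\0';
    return strlen(buf);
}
```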

Link to comment

I found that the speed example also has the same problem. It does not write PASS to the first column. It writes the value in both columns in the DB. Please correct me if I am wrong.

Thanks,

Subhasis

You're correct, there is a bug in SQLite_Bind.vi; change the 0 on the bottom to -1 and it'll work.

The string array is an array of pointers to the individual strings (I wasn't thinking right with my original explanation :frusty:), so extending the inserted data shouldn't change the performance of the while loop index. The reason the strings need to be stored as pointers in an array is because they have unknown lengths. If they were stored in one huge block, then to index the array you would have to read all the earlier strings. As an array of pointers, you can calculate exactly where the offset is with a single multiplication and addition.

A little more info here on what I'm talking about

http://en.wikipedia....ide_of_an_array

A lot of LabVIEW's array operations just adjust stride lengths and array pointers, which avoids data copies. Off the top of my head this should include any form of indexing, subset, and reshape array (assuming the new size fits).

I'll try to figure out why mine shows the problem in 32-bit, since last I checked our implementations of select should be exactly the same in their index memory requirements.

I must have run my 32-bit test in 64-bit, since the problem no longer shows in 32-bit.

I think you are describing a char **. When I iterate over the rows and columns, I only retrieve a "C String" type (char *), which I then build into a 2D array. The LabVIEW CLN automagically dereferences this to a LabVIEW string (i.e. it adds the length bytes and truncates at \00). In this sense, it is a pointer to an array of bytes rather than an array of pointers.

I'm saying that internally LabVIEW string ARRAYS are an array of pointers to Pascal strings. The CLN interface has nothing to do with it.

On a side note.

I would suggest just binding everything as a string; then you can speed up the binding by not checking for \00 and just inserting the data as if it was there (string length is constant time in LabVIEW, but searching for \00 requires checking each byte of the string). Then you just need to add support for reading strings containing \00.

Edited by Matt W
Link to comment

You're correct, there is a bug in SQLite_Bind.vi; change the 0 on the bottom to -1 and it'll work.

I'll release the next version a little earlier than planned (later today), since it will eradicate this (well spotted). Funnily enough, it only seems to happen on LV x32; x64 is fine.

I must have run my 32-bit test in 64-bit, since the problem no longer shows in 32-bit.

I would suggest just binding everything as a string; then you can speed up the binding by not checking for \00 and just inserting the data as if it was there (string length is constant time in LabVIEW, but searching for \00 requires checking each byte of the string). Then you just need to add support for reading strings containing \00.

The next release passes an array of bytes to the bind function, which is faster than passing a string even with the conversion to a U8 array. It also removes the aforementioned "bug".

The API already supports reading strings containing \00 (since V1.1); the field just needs to be declared as a blob. I did agonise about making it generic (it just involves a direct replacement of "Fetch column" with "Read Blob"), but decided the performance advantage of not using the generic method outweighed the fact that you just have to define a field type.
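At the C level, the reason the blob route survives embedded \00 bytes is that sqlite3_column_bytes() reports the true length, so nothing is truncated at the first NUL. A sketch of such a read (the helper name is made up):

```c
/* Read a column as a blob: the length comes from sqlite3_column_bytes(),
   so embedded \0 bytes are preserved. Caller frees the returned buffer. */
#include <sqlite3.h>
#include <stdlib.h>
#include <string.h>

unsigned char *read_blob_column(sqlite3_stmt *stmt, int col, int *len)
{
    const void *p = sqlite3_column_blob(stmt, col);
    *len = sqlite3_column_bytes(stmt, col);
    unsigned char *out = malloc(*len > 0 ? (size_t)*len : 1);
    if (out != NULL && *len > 0)
        memcpy(out, p, (size_t)*len);
    return out;
}
```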

I'm saying that internally LabVIEW string ARRAYS are an array of pointers to Pascal strings. The CLN interface has nothing to do with it.

Well, I don't think that is the issue, since the later tests should have reduced the allocation to a smaller difference, and I would have expected the x32 to be more like the x64 - which it isn't. Suffice to say, there is a difference, and LV x64 is vastly less efficient at building large arrays of strings than x32 (which I find surprising).

Link to comment

I'll release the next version a little earlier than planned (later today), since it will eradicate this (well spotted). Funnily enough, it only seems to happen on LV x32; x64 is fine.

My understanding of the problem is that the bound data isn't read until you step the statement, and LabVIEW may deallocate the data passed to the DLL by that point (in this case it's reusing the data pointer from the null-terminated copy for both calls); by passing -1, SQLite makes its own copy of the data before returning.
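For context, the 0 versus -1 being discussed is the fifth (destructor) argument of sqlite3_bind_text(): 0 is SQLITE_STATIC (SQLite keeps only the pointer and assumes the buffer outlives the statement) and -1 is SQLITE_TRANSIENT (SQLite copies the data before returning). A minimal sketch:

```c
/* SQLITE_STATIC    == (destructor)0  -- unsafe if the caller's buffer is
 *                                       freed/reused before sqlite3_step()
 * SQLITE_TRANSIENT == (destructor)-1 -- SQLite makes its own copy, so the
 *                                       buffer may vanish right after    */
#include <sqlite3.h>

int bind_value(sqlite3_stmt *stmt, const char *val, int len)
{
    /* Safe regardless of what happens to val after this call returns. */
    return sqlite3_bind_text(stmt, 1, val, len, SQLITE_TRANSIENT);
}
```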

The next release passes an array of bytes to the bind function, which is faster than passing a string even with the conversion to a U8 array. It also removes the aforementioned "bug".

The conversion should be free (it's just a static type cast). Passing the array avoids LabVIEW making a null-terminated copy, which should speed things up. But you need to be certain LabVIEW doesn't deallocate the memory holding the string data until after you've stepped the statement. I think the IPE can do that (that's how I interpret its help entry), but I'm not absolutely certain what the IPE structure does at a low level. Without the IPE (assuming it does make that guarantee), you're risking the compiler deallocating it. Different platforms, future versions of LabVIEW, and different optimization cases may break it. I'm using -1 just to be safe. I would suggest you at least use an IPE to be sure the relevant string data is in memory. If you can find documentation that the IPE will always delay deallocation, let me know and I'll switch mine over as well.

The API already supports reading strings containing \00 (since V1.1). The field just needs to be declared as a blob. I did agonise about making it generic (just involves a direct replacement of "Fetch column" with "Read Blob"), but decided the performance advantage of not using the generic method outweighed the fact that you just have to define a field type.

I had the requirement on mine that any data I put in comes out exactly the same (which, as far as I can tell, is true with my variant interface, assuming a column's type affinity doesn't mess with things). And since strings in LabVIEW can contain \00, reading them is a requirement in my case.

Well, I don't think that is the issue, since the later tests should have reduced the allocation to a smaller difference, and I would have expected the x32 to be more like the x64 - which it isn't. Suffice to say, there is a difference, and LV x64 is vastly less efficient at building large arrays of strings than x32 (which I find surprising).

The problem is LabVIEW allocating an array (arrays need a contiguous chunk of memory) of string pointers in the while loop autoindex. When that array fills, LabVIEW makes a new, larger array and copies the old one into it. The copy is the main cause of the slowdown. Now, LabVIEW seems to increase the size of the new array in fixed chunks (at least when the array is large), and since in 64-bit the pointers are twice the size, the array needs to be resized and copied twice as often. Since the cost of the copies depends on how much data is getting copied, the problem gets quadratically worse with increasing array size.

If I'm correct, the size of the data elements should not affect when the slowdown starts showing up, and the 32-bit curve should look like the 64-bit one when reading twice the number of rows.

Which is the case.

[attached image: performance graph]

This can be avoided by growing the array exponentially (for example, doubling its size when filled), but you'd have to code that by hand (and it's slower; I used to do that before I saw the autoindex in yours). You could check if the returned array has reached a certain size, say number of columns * number of rows read = 500,000 (and half that limit in 64-bit), then switch to doing exponential growth by hand.
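A hand-rolled version of that doubling strategy might look like the following in C (a sketch with invented names): each element then gets copied O(1) times on average, instead of the copies piling up quadratically as they can with fixed-size chunks.

```c
/* Growable array of string pointers with geometric (doubling) growth.
   Initialise with StrVec v = {0}; the caller owns the pushed strings. */
#include <stdlib.h>

typedef struct {
    char   **data;   /* array of string pointers, as in the select result */
    size_t   len;
    size_t   cap;
} StrVec;

int strvec_push(StrVec *v, char *s)
{
    if (v->len == v->cap) {
        size_t newcap = v->cap ? v->cap * 2 : 16;   /* double when full */
        char **p = realloc(v->data, newcap * sizeof *p);
        if (p == NULL)
            return -1;
        v->data = p;
        v->cap  = newcap;
    }
    v->data[v->len++] = s;
    return 0;
}
```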

Link to comment

My understanding of the problem is that the bound data isn't read until you step the statement, and LabVIEW may deallocate the data passed to the DLL by that point (in this case it's reusing the data pointer from the null-terminated copy for both calls); by passing -1, SQLite makes its own copy of the data before returning.

The conversion should be free (it's just a static type cast). Passing the array avoids LabVIEW making a null-terminated copy, which should speed things up. But you need to be certain LabVIEW doesn't deallocate the memory holding the string data until after you've stepped the statement. I think the IPE can do that (that's how I interpret its help entry), but I'm not absolutely certain what the IPE structure does at a low level. Without the IPE (assuming it does make that guarantee), you're risking the compiler deallocating it. Different platforms, future versions of LabVIEW, and different optimization cases may break it. I'm using -1 just to be safe. I would suggest you at least use an IPE to be sure the relevant string data is in memory. If you can find documentation that the IPE will always delay deallocation, let me know and I'll switch mine over as well.

Indeed. It was an oversight. It should have been -1. I don't think an IPE is really the way forward, as I don't see any performance difference between 0 and -1 (KISS).

The problem is LabVIEW allocating an array (arrays need a contiguous chunk of memory) of string pointers in the while loop autoindex. When that array fills, LabVIEW makes a new, larger array and copies the old one into it. The copy is the main cause of the slowdown. Now, LabVIEW seems to increase the size of the new array in fixed chunks (at least when the array is large), and since in 64-bit the pointers are twice the size, the array needs to be resized and copied twice as often. Since the cost of the copies depends on how much data is getting copied, the problem gets quadratically worse with increasing array size.

If I'm correct, the size of the data elements should not affect when the slowdown starts showing up, and the 32-bit curve should look like the 64-bit one when reading twice the number of rows.

Which is the case.

[attached image: performance graph]

This can be avoided by growing the array exponentially (for example, doubling its size when filled), but you'd have to code that by hand (and it's slower; I used to do that before I saw the autoindex in yours). You could check if the returned array has reached a certain size, say number of columns * number of rows read = 500,000 (and half that limit in 64-bit), then switch to doing exponential growth by hand.

I consider it a LabVIEW limitation rather than the API's; in theory they should behave identically regardless of the implementation specifics. Differences between compiling in different IDEs are a little disconcerting, since I think we all assume that what works in one will work identically in the other. But it looks like one of those "not a bug, not desired" effects.

But good call on finding a probable explanation (your C experience obviously shining through). I think it will be a rare occasion that anyone queries that many records at a time, and it is still an order of magnitude faster than other DB implementations (like Access). You never know, they might optimise it in LV 2011... or 2015.

Edited by ShaunR
Link to comment

Indeed. It was an oversight. It should have been -1. I don't think an IPE is really the way forward, as I don't see any performance difference between 0 and -1 (KISS).

The speed difference between 0 and -1 should be more apparent when inserting large data elements.

I consider it a LabVIEW limitation rather than the API's; in theory they should behave identically regardless of the implementation specifics. Differences between compiling in different IDEs are a little disconcerting, since I think we all assume that what works in one will work identically in the other. But it looks like one of those "not a bug, not desired" effects.

But good call on finding a probable explanation (your C experience obviously shining through). I think it will be a rare occasion that anyone queries that many records at a time, and it is still an order of magnitude faster than other DB implementations (like Access). You never know, they might optimise it in LV 2011... or 2015.

There are limits on how much of the hardware details one can abstract away before you start losing performance. NI could double the chunk size in 64-bit, but then you'd get a difference when working with arrays not containing pointers. Using exponential size increases is better theoretically, but can lead to out-of-memory issues (this wouldn't be much of a problem on 64-bit). I would probably lean toward using exponential allocation on 64-bit and leaving 32-bit with the chunk allocator; the code would perform differently (extremely so in some cases) between the two, but 64-bit would get a nice speed boost when allocating large arrays from while loops.

Link to comment