
[CR] SQLite API



You probably only need to take them out of the polymorphic VI. Rename Column calls Delete Column, and they are both in the Table polymorphic VI.

What sort of glitch?

-1 is much safer: if you supply a normal LabVIEW string you would have to add 1, but if you supplied a null-terminated string you mustn't add the 1. The logic involved in detecting null chars outweighs any performance improvement you might get. If it's better performance you are after, you'll have to wait for version 1.1 ;)

However, if you are trying to insert a null char into the database you will have problems, since C strings are null terminated. When you pass the string (even with the length), LabVIEW needs to convert it from a LabVIEW string (which has a length and may contain embedded nulls) to a C string, and this conversion will invariably truncate your string.

Usually the way forward in the latter case is to use a BLOB field, but I haven't implemented that in this release.
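For readers outside LabVIEW, the BLOB route can be sketched with any SQLite binding. A minimal illustration in Python (the stdlib sqlite3 module; the table and column names are invented for the example): binding the data as a BLOB parameter sidesteps the C-string conversion, so embedded nulls survive intact.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE wave (id INTEGER PRIMARY KEY, data BLOB)")

# Flattened data with embedded null bytes: a C-string interface would
# truncate this at the first \x00; a bound BLOB stores it intact.
payload = b"\x00\x01\x02\x00\xff"
conn.execute("INSERT INTO wave (data) VALUES (?)", (payload,))

stored = conn.execute("SELECT data FROM wave").fetchone()[0]
```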

The error is a code 1, "SQL Error or missing database".

Unfortunately, the Flatten To String function in LV returns a fair number of null chars when you feed it, for instance, a waveform. Escaped null chars don't seem to have the issue, so I might give that a try.

Thanks again,

Joe Z.


The error is a code 1, "SQL Error or missing database".

Unfortunately, the Flatten To String function in LV returns a fair number of null chars when you feed it, for instance, a waveform. Escaped null chars don't seem to have the issue, so I might give that a try.

Thanks again,

Joe Z.

If it's a waveform you can use "Array To Spreadsheet String.vi" and store it as a comma-delimited string. Or even store each value with a timestamp in its own table.


Possibly. This release basically does a "GetRow" and has to iterate for each row. There is another method, but it isn't implemented yet. (Still trying to figure out how to do it... if it is possible in LV.)

Do you have a real-time target that uses ETS?

Select is slow because you're building the array one element at a time; you should grow it exponentially and resize it back down.

I wrote my own SQLite wrapper a while ago, and I've been debating posting it after I finish up the documentation. Its interface is more difficult than yours, since it assumes knowledge of SQLite, but it is built for speed and for storing any kind of LabVIEW value object. Anyway, for comparison, using the 10,000-insert code you posted, on my computer with LV2010:

Yours: Insert 1202 ms, Select 6657 ms

Mine: Insert 451 ms, Select 251 ms

My select uses the same basic logic as yours (at least for values stored as strings). The main difference is that I preallocate 16 rows (note: it doesn't have to be 16, that just seemed like a good starting number to me), use Replace Subset to fill the values in, and if I need more space I double the array's size (I just concatenate it to itself); at the end I reshape the array back down. Changing the allocation should make yours faster than mine (since you aren't dealing with type information).
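The preallocate/double/trim pattern described above is language-neutral. A hypothetical sketch in Python (not either author's code) of the same strategy:

```python
def fetch_all(rows, initial=16):
    """Collect rows using geometric growth rather than per-row append.

    Preallocate `initial` slots, double the buffer whenever it fills
    (concatenating it to itself, as described in the post), then trim
    the result back down to the actual row count at the end.
    """
    buf = [None] * initial
    n = 0
    for row in rows:
        if n == len(buf):
            buf = buf + buf  # stale upper half is overwritten before use
        buf[n] = row
        n += 1
    return buf[:n]  # "reshape the array back down"

result = fetch_all(range(100))
```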

My insert is faster because I'm using prepared statements and value binding. I'm also only opening the file once, which probably accounts for some of the difference.
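Both effects are visible with any SQLite binding. An illustrative Python sketch (stdlib sqlite3; the table name is invented): the statement is compiled once and re-bound per row, and all rows commit in a single transaction rather than one disk-synced transaction each.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (x REAL)")
rows = [(float(i),) for i in range(10000)]

# The INSERT is prepared once and re-executed with fresh bindings for
# each row; the `with` block wraps all 10,000 inserts in one
# transaction, committing once at the end.
with conn:
    conn.executemany("INSERT INTO t (x) VALUES (?)", rows)

count = conn.execute("SELECT COUNT(*) FROM t").fetchone()[0]
```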

Matt W


Select is slow because you're building the array one element at a time; you should grow it exponentially and resize it back down.

I wrote my own SQLite wrapper a while ago, and I've been debating posting it after I finish up the documentation. Its interface is more difficult than yours, since it assumes knowledge of SQLite, but it is built for speed and for storing any kind of LabVIEW value object. Anyway, for comparison, using the 10,000-insert code you posted, on my computer with LV2010:

Yours: Insert 1202 ms, Select 6657 ms

Mine: Insert 451 ms, Select 251 ms

Funny you should say that. Download 1.1, which I've just posted above. :D

Using the benchmark (supplied in the update): Insert ~350 ms, Select ~90 ms

The initial release was really for me to get acquainted with SQLite and map out the feature list. I'm thinking I may be able to shave off a few more ms in the next release. It's a shame the "convenience" functions are unavailable to us :(

My select uses the same basic logic as yours (at least for values stored as strings). The main difference is that I preallocate 16 rows (note: it doesn't have to be 16, that just seemed like a good starting number to me), use Replace Subset to fill the values in, and if I need more space I double the array's size (I just concatenate it to itself); at the end I reshape the array back down. Changing the allocation should make yours faster than mine (since you aren't dealing with type information).

My insert is faster because I'm using prepared statements and value binding. I'm also only opening the file once, which probably accounts for some of the difference.

Matt W

Indeed. That's the next feature I will be implementing, and I was also thinking that it would improve the insert speed.

Good feedback. Not having other (working) implementations to compare against, I really need this kind of input.

What are you running, Win x64? LV x64? I don't suppose you happen to have an RT Pharlap target?


Funny you should say that. Download 1.1, which I've just posted above. :D

Using the benchmark (supplied in the update): Insert ~350 ms, Select ~90 ms

The initial release was really for me to get acquainted with SQLite and map out the feature list. I'm thinking I may be able to shave off a few more ms in the next release. It's a shame the "convenience" functions are unavailable to us :(

Indeed. That's the next feature I will be implementing, and I was also thinking that it would improve the insert speed.

Good feedback. Not having other (working) implementations to compare against, I really need this kind of input.

What are you running, Win x64? LV x64? I don't suppose you happen to have an RT Pharlap target?

I compared your newer version with your test harness.

If I don't modify the test for fairness:

Yours: Insert 251, Dump 112
Mine: Insert 401, Dump 266

If I include the string construction in your benchmark and the variant construction in mine:

Yours: Insert 299, Dump 114
Mine: Insert 504, Dump 257

If I change my select to use the while-loop autoindexing like yours (smacks forehead), and fix some execution settings I messed up (and turn on inlining, since I'm on LV2010 now), I get:

Insert 352, Dump 181

Considering that mine is handling type information and using variants, I doubt that I could get mine much closer to yours speed-wise. With conversion from strings and to other formats I'd probably do better in some cases (since I can handle blob data directly, for instance, and don't need to escape strings that contain ' ).

Some comments on the stuff I noticed poking around your code.

In Fetch All you can put a Step before the while loop, then put another Step after the Fetch; then you won't have to shrink the rows by one. Also, I'm not sure if this is true with subroutines, but "rows out" in "fetch record" being within a structure requires LabVIEW to cache that indicator's value between runs. If you make the change to Fetch All, this won't matter.

Your multi-statement queries can't include strings with ';'.

I worked around that by calling strlen from the standard C library on the pzTail returned from prepare statement (while being careful to make sure the string was required in later LabVIEW code), then subtracting that length from the original length and taking a string subset to be the remaining statements. The proper solution would be to use a helper DLL and do the pointer math, to avoid traversing the string in strlen. But since I use prepared statements it doesn't affect my version much.
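A related, quote-aware way to split multi-statement SQL, for illustration: SQLite's sqlite3_complete() is exposed in Python as sqlite3.complete_statement, and it is not fooled by a ';' inside a string literal. The splitter below is a hypothetical sketch, not code from either library:

```python
import sqlite3

def split_statements(sql):
    """Split multi-statement SQL on ';' without breaking string literals.

    sqlite3.complete_statement understands quoting, so the ';' inside
    the literal 'a;b' below does not terminate a statement, whereas a
    naive sql.split(';') would cut the literal in half.
    """
    statements, buf = [], ""
    for piece in sql.split(";"):
        if not piece and not buf:
            continue  # trailing empty fragment after the final ';'
        buf += piece + ";"
        if sqlite3.complete_statement(buf):
            statements.append(buf.strip())
            buf = ""
    return statements

parts = split_statements(
    "INSERT INTO t VALUES ('a;b'); INSERT INTO t VALUES ('c');")
```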

And to be nitpicky, SQLite only has one L.

As for my environment, my version currently only works on Win32-based systems. There's no reason I couldn't port it to other things, though. I have a 64-bit LabVIEW install at home. At work there's an old PXI-1002 box, but I've never used it (I haven't had a good excuse for playing with LabVIEW Real-Time).


I compared your newer version with your test harness.

If I don't modify the test for fairness:

Yours: Insert 251, Dump 112
Mine: Insert 401, Dump 266

If I include the string construction in your benchmark and the variant construction in mine:

Yours: Insert 299, Dump 114
Mine: Insert 504, Dump 257

If I change my select to use the while-loop autoindexing like yours (smacks forehead), and fix some execution settings I messed up (and turn on inlining, since I'm on LV2010 now), I get:

Insert 352, Dump 181

Considering that mine is handling type information and using variants, I doubt that I could get mine much closer to yours speed-wise. With conversion from strings and to other formats I'd probably do better in some cases (since I can handle blob data directly, for instance, and don't need to escape strings that contain ' ).

Interesting. Did you get sqlite3_column_blob working? You probably saw my note about it in the Get Column VI. Being able to call that function (instead of my work-around) would have saved a lot of effort.

Strings are the ultimate variant, IMHO. I'm not a fan of the variant data type, which I consider a poor relation to strings for many reasons. But that's a personal opinion.

You don't need to escape strings with ( ' ). That's a requirement of properly formatted SQL rather than an implementation requirement. The onus is on the user to enforce it rather than it being a function of the API. If it were MySQL, they would have to do the same.

Some comments on the stuff I noticed poking around your code.

In Fetch All you can put a Step before the while loop, then put another Step after the Fetch; then you won't have to shrink the rows by one. Also, I'm not sure if this is true with subroutines, but "rows out" in "fetch record" being within a structure requires LabVIEW to cache that indicator's value between runs. If you make the change to Fetch All, this won't matter.

That is the only place, since the subroutines use for loops instead of while loops. I'll have a play and maybe implement it for the next release. Good call.

Your multi-statement queries can't include strings with ';'.

I worked around that by calling strlen from the standard C library on the pzTail returned from prepare statement (while being careful to make sure the string was required in later LabVIEW code), then subtracting that length from the original length and taking a string subset to be the remaining statements. The proper solution would be to use a helper DLL and do the pointer math, to avoid traversing the string in strlen. But since I use prepared statements it doesn't affect my version much.

Hmm. I need to think about this one.

And to be nitpicky, SQLite only has one L.

It "mostly" has...

As for my environment, my version currently only works on Win32-based systems. There's no reason I couldn't port it to other things, though. I have a 64-bit LabVIEW install at home. At work there's an old PXI-1002 box, but I've never used it (I haven't had a good excuse for playing with LabVIEW Real-Time).

The DLL is the killer. Cross-compiling is real "geek" territory, and I avoid C like the plague unless I'm dragged kicking and screaming like a four-year-old. But then I have a compiler for the particular target, and C programmers are plentiful ;)

I cannot find a definitive list of the functions supported under RT Win, so the only option is to try it and hope for the best.


(and turn on inlining since I'm on LV2010 now).

Be careful with this. For inlining, all the inlined VIs must be re-entrant. This can cause "database locked" errors when running concurrently, even with shared cache. Choose which ones are inlined very carefully. (You'll get about another 10 ms by doing this, and LV2010 is about 10 ms faster than LV2009 too ;))
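The "database locked" failure mode is easy to reproduce with two concurrent writers in any binding. A Python sketch (stdlib sqlite3; the file name is invented, and timeout=0 disables the busy-retry loop so the error surfaces immediately):

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "demo.db")

# isolation_level=None means autocommit: BEGIN/COMMIT are explicit.
a = sqlite3.connect(path, timeout=0, isolation_level=None)
b = sqlite3.connect(path, timeout=0, isolation_level=None)
a.execute("CREATE TABLE t (x)")

a.execute("BEGIN IMMEDIATE")  # connection `a` takes the write lock
try:
    b.execute("BEGIN IMMEDIATE")  # `b` cannot also acquire it
    error = None
except sqlite3.OperationalError as exc:
    error = str(exc)  # "database is locked"
a.execute("COMMIT")
```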

As an aside, I'd love to see the result from using SQLite on an SSD. I'm very impressed with SQLite and will be using it a lot in future.


Interesting. Did you get sqlite3_column_blob working? You probably saw my note about it in the Get Column VI. Being able to call that function (instead of my work-around) would have saved a lot of effort.

Strings are the ultimate variant, IMHO. I'm not a fan of the variant data type, which I consider a poor relation to strings for many reasons. But that's a personal opinion.

I use sqlite3_column_text (it only differs in how it handles zero-length blobs), but the trick is to use MoveBlock.

I use variants since they hold type information, and it's NI's job to make sure they support all types (although there are a few quirks I've had to work around).

You don't need to escape strings with ( ' ). That's a requirement of properly formatted SQL rather than an implementation requirement. The onus is on the user to enforce it rather than it being a function of the API. If it were MySQL, they would have to do the same.

I don't need to escape strings, since I have bound variables; I can also use \00 in string parameters. Bound variables are also the proper way to avoid SQL injection attacks.
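The injection point is worth a concrete illustration. A Python sketch (stdlib sqlite3; the table name is invented): a string that would terminate a hand-assembled SQL literal is stored verbatim when passed as a bound parameter.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")

# Pasted into a SQL string, this would end the literal at the first '
# and try to smuggle in a second statement; as a bound parameter it is
# just data and needs no escaping.
hostile = "O'Brien'); DROP TABLE users; --"
conn.execute("INSERT INTO users (name) VALUES (?)", (hostile,))

stored = conn.execute("SELECT name FROM users").fetchone()[0]
```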

The DLL is the killer. Cross-compiling is real "geek" territory, and I avoid C like the plague unless I'm dragged kicking and screaming like a four-year-old. But then I have a compiler for the particular target, and C programmers are plentiful ;)

I cannot find a definitive list of the functions supported under RT Win, so the only option is to try it and hope for the best.

Fortunately I qualify as a real "geek"; not that I like C, but I know it well enough. When I got home I got it cross-compiling with LV64; it just involved putting pointer-sized ints in the right places. I'll try to get it to work on a Mac at work on Monday (if I set it up right I shouldn't need to change anything).

I also removed some restrictions from my interface, and removed the need for strlen when preparing strings (that required the use of DSNewPtr, MoveBlock and DSDisposePtr).

Beyond SQLite, the only calls I use are DSNewPtr, MoveBlock and DSDisposePtr, all of which are supplied by LabVIEW (so I would hope they are on RT Win).

Anyway, with LV2010 64-bit on Win7, with strings and doubles (the same as your initial speed test), but modified to include formatting time:

Yours: Insert 401, Dump 72
My current version: Insert 277, Dump 151

I don't know why your insert got slower and your dump faster compared with the 32-bit WinXP LV2010f2 machine I tested on before (the 64-bit computer should be faster CPU- and IO-wise).

Be careful with this. For inlining, all the inlined VIs must be re-entrant. This can cause "database locked" errors when running concurrently, even with shared cache. Choose which ones are inlined very carefully. (You'll get about another 10 ms by doing this, and LV2010 is about 10 ms faster than LV2009 too ;))

As an aside, I'd love to see the result from using SQLite on an SSD. I'm very impressed with SQLite and will be using it a lot in future.

I'm familiar with re-entrant VIs. From the testing I did, mine seems to handle concurrent access properly. I'm using SQLITE_OPEN_FULLMUTEX, which should reduce crashing from misusing the API. If you have an example that causes "database locked" errors, I'll check whether I'm handling it properly.

You can use an in-memory database, which should be faster than an SSD.

With the above benchmark and an in-memory database I get: Insert 219, Dump 153

So not a huge gain. But if I make every insert a transaction (and knock the test size down to 1000):

Hard disk file: Insert 71795, Dump 15

Memory: Insert 24, Dump 16

So an SSD could make a large difference with a lot of transactions.
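The transaction-per-insert penalty is reproducible in miniature with any binding. A hedged Python sketch (stdlib sqlite3; file names invented, and the row count kept small because the autocommit case really is that slow on a spinning disk):

```python
import os
import sqlite3
import tempfile
import time

def timed_inserts(path, rows, one_transaction):
    """Insert `rows` rows, either batched in one explicit transaction
    or with SQLite's default one-transaction-per-statement autocommit."""
    conn = sqlite3.connect(path, isolation_level=None)  # autocommit mode
    conn.execute("CREATE TABLE t (x REAL)")
    start = time.perf_counter()
    if one_transaction:
        conn.execute("BEGIN")
    for i in range(rows):
        # When one_transaction is False, each INSERT is its own
        # disk-synced transaction -- the slow case from the post.
        conn.execute("INSERT INTO t VALUES (?)", (float(i),))
    if one_transaction:
        conn.execute("COMMIT")
    elapsed = time.perf_counter() - start
    count = conn.execute("SELECT COUNT(*) FROM t").fetchone()[0]
    conn.close()
    return count, elapsed

base = tempfile.mkdtemp()
n_auto, t_auto = timed_inserts(os.path.join(base, "auto.db"), 200, False)
n_one, t_one = timed_inserts(os.path.join(base, "one.db"), 200, True)
```

On most disks t_auto dwarfs t_one; the exact ratio depends on the storage and filesystem, which is the point being made above.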


I really appreciate the time you're putting in on this. Especially as you already have an implementation.

I use sqlite3_column_text (it only differs in how it handles zero-length blobs), but the trick is to use MoveBlock.

Wish I had come across that article sooner. I could have used the get_table and exec functions as well as get_column_blob. If I get time at the end of this I might look into it further, but I don't think the possible improvements would be hugely different. I'd rather concentrate on other OSs and the feature list for now.

I don't need to escape strings, since I have bound variables; I can also use \00 in string parameters. Bound variables are also the proper way to avoid SQL injection attacks.

True. But I don't really see this API being used for internet or remote web-based applications (famous last words :D ). It would be far easier for people with malicious intent just to delete the file. That's not to say I haven't thought about it; it's something I will look at more closely once I have a more complete feature list (vtables spring to mind ;) )

Fortunately I qualify as a real "geek"; not that I like C, but I know it well enough. When I got home I got it cross-compiling with LV64; it just involved putting pointer-sized ints in the right places. I'll try to get it to work on a Mac at work on Monday (if I set it up right I shouldn't need to change anything).

I also removed some restrictions from my interface, and removed the need for strlen when preparing strings (that required the use of DSNewPtr, MoveBlock and DSDisposePtr).

The API comes shipped with an x64 DLL; you didn't need to create one, just load the project.

I've put in the capability for different OSs. The issue I have is that I don't have LabVIEW on Linux or Mac (well, I don't even have a Mac :P ). I do have VMware versions of the OSs, although I've never really used them much. The upshot is that I can't test on those platforms, therefore only Windows x64 and x32 are supported. I did, however, include the Linux x32 .so, which has been reported as working. But I tend to be an all-or-nothing kind-a-guy, so if I say it works on Linux (for example) it bloody well better :P

I'm using Code::Blocks with the MinGW x32 and x64 compilers. They were easy to set up and worked pretty much out of the box. But as for setting up for Linux or Mac, I've no idea of the compiler/linker settings to cross-compile. If I could get it set up for Mac, I could download the Mac trial from the NI website and test. But the Linux version seems like rocking-horse droppings.

Anyway, with LV2010 64-bit on Win7, with strings and doubles (the same as your initial speed test), but modified to include formatting time:

Yours: Insert 401, Dump 72
My current version: Insert 277, Dump 151

I don't know why your insert got slower and your dump faster compared with the 32-bit WinXP LV2010f2 machine I tested on before (the 64-bit computer should be faster CPU- and IO-wise).

Something must have changed.

The 350/90 I quoted earlier was on LV2009 x64 on Win7. LV2009 x32 (Win7 x32) is about 350/110. Moving to 2010 x64 and inlining, it goes down to ~300/70. But who are we kidding, it's bloody fast either way :D I'm just happy it isn't 4 seconds any more; I don't mind if it's just shy of a gnat's whisker! (It goes down to 60 ms with your earlier suggestion about the "Step" ;) )

I'm familiar with re-entrant VIs. From the testing I did, mine seems to handle concurrent access properly. I'm using SQLITE_OPEN_FULLMUTEX, which should reduce crashing from misusing the API. If you have an example that causes "database locked" errors, I'll check whether I'm handling it properly.

Are you using "shared cache"? I open with shared cache by default so that I can have multiple fast asynchronous readers. If you're not, then you won't see it. But if you go through my API and set re-entrant willy-nilly, you will run into it if you run multiple readers and writers asynchronously.

You can use an in-memory database, which should be faster than an SSD.

With the above benchmark and an in-memory database I get: Insert 219, Dump 153

So not a huge gain. But if I make every insert a transaction (and knock the test size down to 1000):

Hard disk file: Insert 71795, Dump 15

Memory: Insert 24, Dump 16

So an SSD could make a large difference with a lot of transactions.

Makes sense. This is particularly pertinent to embedded XP PCs, many of which come with SSD drives or SD cards.

You can't use an in-memory DB with the standard high-level API components, by the way, since they open and close the DB on each call (well, you can, but there is no benefit). That's why I included the low-level VIs, so that anyone who wants to use an in-memory DB can roll their own.
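The underlying reason: each connection to ":memory:" gets its own private database, which vanishes when that connection closes. Illustrated in Python (stdlib sqlite3):

```python
import sqlite3

# First "call": open, create a table, close -- mimicking the
# open/close-per-call pattern of the high-level components.
c1 = sqlite3.connect(":memory:")
c1.execute("CREATE TABLE t (x)")
c1.close()

# Second "call": a fresh connection gets a brand-new, empty in-memory
# database; the table created above is gone with its connection.
c2 = sqlite3.connect(":memory:")
tables = c2.execute(
    "SELECT name FROM sqlite_master WHERE type = 'table'").fetchall()
c2.close()
```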


I didn't make an example; I only created a new blank VI and placed a Table.vi inside.

Could it be a problem converting from LV9 to LV8.5? To make this I had to convert to 8.6 first and then from LV8.6 to 8.5... maybe something got lost in this step.

I created a new project, but the OS variable was not "Unix" but "Linux", and the bitness variable did not exist.

Now I'm trying to reconvert the VIs from LV9 to LV8.5 using a project; I'll let you know if I find something new.

Could you post your LV8.5 version? It would be much appreciated.


Here it is!

http://gepponline.altervista.org/_altervista_ht/SQLite.zip

I'm not able to attach files here in the forum, maybe a proxy problem on my network. If anybody can download it from the link and upload it to the forum, that would be better, because that is my old domain and it can give some problems with file sharing.

I deleted the recursion instances and all seems to work.

I use it with Ubuntu and LabVIEW for Linux 8.5.

...bye!

Geppo!

  • 2 weeks later...

Anyone actually using this? (Apart from me :D )

I've another release almost ready, so if you want any additional features or changes... now is the time to mention them (no promises :P ).

I'm not using it at the moment, but I'm going to use it soon to replace the old SQLite wrapper I used in some applications.


other SQLite tools.

Such as? LV tools for SQLite are few and far between, hence the reason for publishing this API.

If you look back, Matt W has done some benchmarking against his implementation. There is a "Speed" demo, which means anyone can try it for themselves against their tools (post the results here, folks).

There are a few tweaks in the next release, but the improvements are nowhere near the same as between versions 1.0 and 1.1. Now it's more to do with what hard disk you are using, what queries you are performing, and whether you can tolerate less robust settings of the SQLite DLL.


I finished up the last (hopefully) pass through my library a week or two ago. I got permission from my employer to post my SQLite library (I need to decide on a license and mention the funding used to create it). And when I went to figure out how to make an OpenG library (which I still haven't done yet), I saw the SQLiteVIEW library in VIPM, which is similar to mine. But mine has a few advantages:

Gives meaningful errors.

Handles strings with \00 in them.

Can store any type of LabVIEW value (LVOOP classes, variants with attributes).

Can handle multi-statement queries.

Has a caching system to help manage prepared queries.

As for benchmarks, I modified your speed example to use my code (using the string-based API I added to mine since the last comparison) and the SQLiteVIEW demo from VIPM. This is on LV2010f2, WinXP.

Yours

Insert 255 Dump 107

A no-longer-available library that I'd been using up until my new one (I'm surprised how fast this is; I think it's from having a wrapper DLL handle the memory allocations instead of LabVIEW):

Insert 158 Dump 45

SQLiteVIEW

Insert 153 Dump 43

Mine

Insert 67 Dump 73

The differences in speed between mine and SQLiteVIEW, I suspect (since I can't open their code), are that I inlined heavily called VIs (the win on insert) and that I handle strings with \00 in them (the loss on dump). If I changed the benchmark to repeatedly call queries that take longer to prepare, I could make yours and the old library appear slower relative to SQLiteVIEW and mine.

I attached snippets of the modified code.


Anyway, I've been debating with myself about posting mine now, since it seems kind of dickish to release a free lib when someone recently put one up for sale. Also, if I weren't using an academically licensed version of LabVIEW, I might have put mine up for sale as well, so I could easily be in their position. The main reason for posting it was to give back to the community and to have something out there to show possible future employers as an example of what I can do. Being in an academic research lab, I don't get paid well and I have no long-term employment guarantees; fortunately, I love everything else about my job.

Anyway, the options I can see now:

Don't post it. (If I thought SQLiteVIEW was better than mine, I would be leaning toward this route more. I do have some other libraries I'm considering posting, but they have a smaller audience.)

Just post it how I originally intended. (Since I don't have any association with SQLiteVIEW, their actions shouldn't affect me. There have already been two free SQLite libraries anyway.)

License mine for non-commercial use. (Yours is licensed that way, if you didn't notice. But that really limits the legal uses of my code, and I don't have the interest in defending that limitation. So it's practically the same as putting it out there with a BSD- or LGPL-based license like I originally planned.)

Contact the makers of SQLiteVIEW, and either:

Give them a version of my code so theirs can at least reach feature parity with mine, then wait some period of time before posting mine in the wild.

Offer to consult with them on how to improve their version, so I get my name on their version and some amount of money (this feels like blackmail, though).


I finished up the last (hopefully) pass through my library a week or two ago. I got permission from my employer to post my SQLite library (I need to decide on a license and mention the funding used to create it). And when I went to figure out how to make an OpenG library (which I still haven't done yet), I saw the SQLiteVIEW library in VIPM, which is similar to mine. But mine has a few advantages:

Gives meaningful errors.

Handles strings with \00 in them.

Can store any type of LabVIEW value (LVOOP classes, variants with attributes).

Can handle multi-statement queries.

Has a caching system to help manage prepared queries.

As for benchmarks, I modified your speed example to use my code (using the string-based API I added to mine since the last comparison) and the SQLiteVIEW demo from VIPM. This is on LV2010f2, WinXP.

Yours

Insert 255 Dump 107

A no-longer-available library that I'd been using up until my new one (I'm surprised how fast this is; I think it's from having a wrapper DLL handle the memory allocations instead of LabVIEW):

Insert 158 Dump 45

SQLiteVIEW

Insert 153 Dump 43

Mine

Insert 67 Dump 73

Splendid.

The wrapper (if it's the one I'm thinking of) uses sqlite3_get_table. I wanted to use this to save all the fetches in LabVIEW, but it requires a char*** and I don't know how to reference that. The advantage of the wrapper is that you can easily use the "helper" functions (e.g. exec, get_table), which would vastly improve performance. But I'm happy with those results for now. I'll review the performance in detail in a later release.

Did you manage to compile SQLite for the Mac?

I managed to compile SQLite on a virtual machine, but whatever I tried, LabVIEW always said it was an invalid library. Even the Mac library I downloaded from the SQLite site wouldn't load into LabVIEW. I think it probably has something to do with the "bitness".

Any Mac gurus out there?


Did you manage to compile SQLite for the Mac?

I managed to compile SQLite on a virtual machine, but whatever I tried, LabVIEW always said it was an invalid library. Even the Mac library I downloaded from the SQLite site wouldn't load into LabVIEW. I think it probably has something to do with the "bitness".

Any Mac gurus out there?

I suggest you create a thread in the mac section to discuss this issue.

On a side note, I think there are some mac maniacs on info-LabVIEW, dunno if you use it...


What do you mean?

You can insert a single column if multiple columns have been defined; the others just get populated with blanks (unless you have defined a field as NOT NULL).

[...]

Can you elaborate?

Never mind; I was doing it wrong. Thanks for the snippet.

I don't see that polymorphic Insert VI in the palettes. What am I doing wrong?

[palette screenshot attached]


I suggest you create a thread in the mac section to discuss this issue.

On a side note, I think there are some mac maniacs on info-LabVIEW, dunno if you use it...

I didn't even know LAVA had a Mac section.

Thanks for that. I'll give it a whirl.

Never mind; I was doing it wrong. Thanks for the snippet.

I don't see that polymorphic Insert VI in the palettes. What am I doing wrong?

[palette screenshot attached]

You're not doing anything wrong; you'll just have to wait for the next release. All that's happened is I've wrapped the original in a polymorphic VI, since there is another insert that should make it easier to insert tables of data without the "clunky" addition of fields (which is fine for single-row iterative inserts).

This topic is now closed to further replies.