
I'm having trouble binding parameters to a SELECT statement.

 

This is the statement I want to execute (including a valid param value): SELECT id FROM variable WHERE label = "BooleanTestVariable";

When I run that statement in my database viewer, it returns a row with id = 1.

 

When I run my code (pictured below) with this string on Prepare.vi, no rows are returned by First Step.vi: SELECT id FROM variable WHERE label = ?

 

I noticed that string values have to be quoted in Execute SQL.vi, so I tried that: SELECT id FROM variable WHERE label = "?"

But it returns "Error 402884 occurred at datalogdb.lvlib:Define Test Variables.vi on "SELECT id FROM variable WHERE label = "?"" Possible reason(s): bind or column index out of range"

 

What am I missing?

 

My code:

0VhsoxL.png


Putting “?” in quotes makes it the literal string “?”, not a parameter, so when you tried to bind a parameter it was an error.
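The LabVIEW diagram isn't shown here, but the underlying SQLite behaviour is easy to demonstrate with Python's built-in sqlite3 module (same engine); the table and value below are just a reconstruction of the poster's schema:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE variable (id INTEGER PRIMARY KEY, label TEXT UNIQUE)")
conn.execute("INSERT INTO variable (label) VALUES (?)", ("BooleanTestVariable",))

# Correct: the ? placeholder is NOT quoted; the value is bound separately.
row = conn.execute(
    "SELECT id FROM variable WHERE label = ?",
    ("BooleanTestVariable",),
).fetchone()
print(row[0])  # 1

# Wrong: "?" in quotes is the literal one-character string "?",
# so there is no placeholder left to bind to, and no row matches.
rows = conn.execute('SELECT id FROM variable WHERE label = "?"').fetchall()
print(rows)  # []
```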

 

Not sure why your code isn’t working, but try eliminating the SELECT and changing the INSERT to “INSERT OR IGNORE”, as that should accomplish the same thing (and be faster too).
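A sketch of the INSERT OR IGNORE idea in Python's sqlite3, assuming (as the suggestion requires) that `label` carries a UNIQUE constraint:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE variable (id INTEGER PRIMARY KEY, label TEXT UNIQUE)")

# INSERT OR IGNORE silently skips the insert when the UNIQUE
# constraint on label would be violated -- no prior SELECT needed.
for _ in range(3):
    conn.execute(
        "INSERT OR IGNORE INTO variable (label) VALUES (?)",
        ("BooleanTestVariable",),
    )

count = conn.execute("SELECT COUNT(*) FROM variable").fetchone()[0]
print(count)  # 1 -- only the first insert took effect
```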


I appreciate the tip on INSERT OR IGNORE, but I'm concerned about SELECT not working in a prepared statement, because I intend to write more statements like it. I noticed that "Valid Statement" was FALSE... which is obviously untrue. Maybe it's an issue with executing against the Linux .so that NI distributes?


Questions:

— Does it work on Windows?

— Do the examples work on the cRIO?

— Can you attach a sample db so I can inspect it?


drjdpowell, that path ^^^ is the standard NI installation of SQLite3 on a cRIO target. (I'm using NI's opkg server to install the package.) Can you add that path as a default for the OS and target type inside your API?

 

What’s the Target_Type conditional disable symbol on the cRIO?


The library is /usr/lib/libsqlite3.so. Provide that path to the API when opening/creating a file, and everything works great.

Can you try just using the string “libsqlite3.so” or “libsqlite3.*”? /usr/lib may be the default library path, and then I can cover the case where the default library path is another location.


Is there a way to open read-only?

I see that sqlite3_open_v2() supports a SQLITE_OPEN_READONLY flag but you seem to be using sqlite3_open() only.
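For reference, Python's sqlite3 exposes the same SQLITE_OPEN_READONLY behaviour through a URI filename, which sketches what an sqlite3_open_v2()-based open would allow:

```python
import os
import sqlite3
import tempfile

# Create a throwaway database file first.
path = os.path.join(tempfile.mkdtemp(), "test.db")
conn = sqlite3.connect(path)
conn.execute("CREATE TABLE t (x INTEGER)")
conn.commit()
conn.close()

# A URI filename with mode=ro corresponds to calling
# sqlite3_open_v2() with the SQLITE_OPEN_READONLY flag.
ro = sqlite3.connect(f"file:{path}?mode=ro", uri=True)

write_failed = False
try:
    ro.execute("INSERT INTO t VALUES (1)")
except sqlite3.OperationalError:
    write_failed = True  # "attempt to write a readonly database"

print(write_failed)  # True
```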


Can you try just using the string “libsqlite3.so” or “libsqlite3.*”? /usr/lib may be the default library path, and then I can cover the case where the default library path is another location.

 

"/usr/lib" should be in the standard search path for libraries, and if the library was properly installed with ldconfig (which I suppose should be done automatically by the OPKG script) then LabVIEW should be able to find it. Note that on Linux platforms LabVIEW will automatically try to prepend "lib" to the library name if it can't find the library with its wildcard name. So even "sqlite3.*" should work. However, I'm not entirely sure whether that also happens when you explicitly wire the library path to the Call Library Node. But I can guarantee you that it is done for library names directly entered in the Call Library Node configuration.



Is there a way to open read-only?

I see that sqlite3_open_v2() supports a SQLITE_OPEN_READONLY flag but you seem to be using sqlite3_open() only.

I’ll look into using sqlite3_open_v2().

"/usr/lib" should be in the standard search path for libraries, and if the library was properly installed with ldconfig (which I suppose should be done automatically by the OPKG script) then LabVIEW should be able to find it. Note that on Linux platforms LabVIEW will automatically try to prepend "lib" to the library name if it can't find the library with its wildcard name. So even "sqlite3.*" should work. However, I'm not entirely sure whether that also happens when you explicitly wire the library path to the Call Library Node. But I can guarantee you that it is done for library names directly entered in the Call Library Node configuration.

I had "sqlite3.*" as one of the search paths, so that must have failed for some reason.


The read-only option seems to be doing the right thing from what I've tried so far.

Thanks for adding that :)


Good day to all Lava users

I am new to the environment of SQLite and have been experimenting with the toolkit.

I have a few questions that I hope someone will kindly answer and clarify for me, and put me in good stead for the project I am about to commence. I seek answers to:

1) At run time do the toolkit VIs return errors? If so, what are they?

2) Within the SQLite read loop, is it good practice to wire the error cluster to the while-loop stop button?

3) I have been comparing LabVIEW spreadsheet file writes/reads against SQLite on file size and speed for similar actions. Firstly,

       a) In what format is the data in the SQLite file stored: ASCII? Binary?

       b) Is any compression used, or is it something that needs enabling?

The reason I ask: comparing the output files, the sizes are similar: spreadsheet 30 MB (write 5423 ms, read and filter 21681 ms) versus SQLite 36 MB (write 7539 ms, read/filter 528 ms). This is for the same data set of 200000 rows x 7 elements, written as a block insert using the Prepare and Commit VIs.

           a) Should it be the case that the SQLite file is larger in physical disk size?

           b) And longer to write?

4) I also discovered that when purging data from the SQLite table, the file size appears to remain constant. Subsequently I discovered the VACUUM function; how is this best employed?

5) My intention is to use this in an action/events tracker in an application. Are there any recommendations or tricks to ensure optimum data capture, especially for unexpected power/system crashes?

6) Finally, during my programming I have found that it is possible to accidentally lock a SQLite file when capturing and handling an error during the processing of an event in a JKI state machine. How do you unlock the SQLite tables?

Many thanks

Greg

Edited by jollybandit

Hello,

I would like to use the SQLite Library but I get error 12 when trying to open the database with the "SQLite Open" function (screenshot attached).

Would you have an idea of what the problem could be?

I'm using the LabVIEW 2012 SP1 Dev version.

Thanks a lot !

 

Pierre

 

post-53694-0-90401100-1435665218.jpg


Good day to all Lava users

I am new to the environment of SQLite and have been experimenting with the toolkit.

 

Attach a zip file of your benchmark code (including a sample dataset) and I can have a look at it.   In the meantime, have you studied the documentation on SQLite.org?

Also, have you seen the “Cyth SQLite Logger”? Would this work as a preexisting solution for your app?

 

Hello,
I would like to use the SQLite Library but I get error 12 when trying to open the database with the "SQLite Open" function (screenshot attached).
Would you have an idea of what the problem could be?
I'm using the LabVIEW 2012 SP1 Dev version.
Thanks a lot !
 
Pierre

 

You’ve wired your database file to the wrong input.


 

You’ve wired your database file to the wrong input.

 

Thanks ! 


Thank you in advance for the offer to examine/improve the code.

I have attached a zip file of:

1) the test file generator

2) the top-level SQLite performance evaluation code

3) an XLS containing the timing results obtained.

I will be very interested in your comments and coding improvements (with, hopefully, possible efficiency and file size improvements).

I am aware of the SQLite.org website; consequently, I also found a SQLite error VI that has in its block diagram a well-documented summary of the possible errors and their structure/composition.

Many thanks & Cheers

Greg

Testing_SQLite.zip

Edited by jollybandit


Don't suppose anyone knows why this INSERT routine is taking so long (38 ms)? My storage loop can't get anywhere near the speed of my DAQ loop and eventually I get a buffer overflow. I believe I am using BEGIN and COMMIT correctly; the DAQ speed is 1 kHz, and this routine is executed once per cycle (see attached screenshot). Thanks!

 

SQLite3_DB_Slow.jpg


2) the top-level SQLite performance evaluation code

 

1) SQLite isn’t a compression format, and as a flat table won’t necessarily be any smaller on disk than a spreadsheet file. Larger, actually, due to the various lookup indexes. However, you have the opportunity to use structured data, avoiding a lot of duplication, which can end up with smaller file sizes (or, at least, the freedom to add much more info at not much larger size). For example, you have “site” and “sending app” strings that repeat often. If you instead saved keys into separate “Site” and “Application” tables, you could store lots of info in them that can be “joined” in with a “VIEW”. Similarly you could have an “Errors” table that allowed extended error descriptions instead of just a simple error code (or error severity, etc.). The joining VIEW would look something like:

 

CREATE VIEW Event_VIEW AS
    SELECT * FROM Application_Events
        JOIN Errors USING (ErrCode)
        JOIN Site USING (SiteID)
        JOIN Application USING (AppID);

 

Your UI would then query this View, and filter on any info in all these tables. Find all events whose error description contains the word “test”, for example.
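A minimal runnable sketch of this normalized layout (Python sqlite3; the table and column names are illustrative, and the Application table is omitted for brevity):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE Site (SiteID INTEGER PRIMARY KEY, SiteName TEXT);
    CREATE TABLE Errors (ErrCode INTEGER PRIMARY KEY, Description TEXT);
    CREATE TABLE Application_Events (
        EventTime REAL, ErrCode INTEGER, SiteID INTEGER);

    -- The view joins the key columns back into one wide "virtual table".
    CREATE VIEW Event_VIEW AS
        SELECT * FROM Application_Events
            JOIN Errors USING (ErrCode)
            JOIN Site USING (SiteID);

    INSERT INTO Site VALUES (1, 'Lab A');
    INSERT INTO Errors VALUES (5000, 'test failure');
    INSERT INTO Application_Events VALUES (0.0, 5000, 1);
""")

# The UI queries the view and can filter on any joined-in info,
# e.g. all events whose error description contains "test".
row = conn.execute(
    "SELECT SiteName, Description FROM Event_VIEW "
    "WHERE Description LIKE '%test%'"
).fetchone()
print(row)  # ('Lab A', 'test failure')
```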

 

2) Look up “LIMIT” and “OFFSET”, and study how they are used in the “Cyth SQLite Log Viewer”. In that viewer, updating the filtering of a selected UI table takes milliseconds, not seconds, because only the visible rows of the UI table are actually selected. When the user moves the scrollbar, the SELECT is repeated multiple times per second, so to the user it looks like a table with thousands of rows. And one is free to use a lot of slow property nodes to do things like text colour, since one is never handling more than a few dozen rows.
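The paging idea can be sketched in Python's sqlite3; the table and window size here are invented for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE log (n INTEGER)")
conn.executemany("INSERT INTO log VALUES (?)", [(i,) for i in range(10000)])

def visible_rows(first_row, n_rows):
    # Fetch only the rows currently visible in the UI table;
    # re-run this on every scrollbar move instead of selecting everything.
    return conn.execute(
        "SELECT n FROM log ORDER BY n LIMIT ? OFFSET ?",
        (n_rows, first_row),
    ).fetchall()

print(visible_rows(9990, 5))  # [(9990,), (9991,), (9992,), (9993,), (9994,)]
```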

 

3) I wouldn’t bother with VACUUM in a logging application, as the space from any deletion will just get used for later inserts.   Use VACUUM if you delete a large amount without intending to reuse the space.

 

4) You cannot unlock the file if you’ve dropped the pointer to the connection without closing it, I’m afraid.  You have to restart LabVIEW to unload the SQLite dll.   Your code should always call the Finalize and Close methods, even on error.  

Don't suppose anyone knows why this INSERT routine is taking so long (38 ms)? My storage loop can't get anywhere near the speed of my DAQ loop and eventually I get a buffer overflow. I believe I am using BEGIN and COMMIT correctly; the DAQ speed is 1 kHz, and this routine is executed once per cycle (see attached screenshot). Thanks!

Hi Rob,

You need to wrap multiple INSERTs into a single transaction with “BEGIN” and “COMMIT”. Each transaction requires verified writing to the disk twice, and a hard disk only spins on the order of once every 10 ms. You need to buffer your data and do a bulk insert about once a second (place a FOR loop between the “Prepare” and “Finalize” subVIs in your code image, and feed in an array of your data clusters).

 

 

This touches on jollybandit’s question (5): durability against power failure by confirmed writing to disk is time consuming, so you need some kind of tradeoff between immediate saving and delayed buffering.   About one save per second is what I do.
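A sketch of the batching advice in Python's sqlite3 (the 1 kHz sample data is simulated):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.isolation_level = None  # manage BEGIN/COMMIT explicitly
conn.execute("CREATE TABLE daq (t REAL, v REAL)")

# ~1 second's worth of 1 kHz samples, buffered before writing.
samples = [(i / 1000.0, float(i)) for i in range(1000)]

# One transaction per ~1 s block of samples, not one per sample:
# each COMMIT forces synced disk writes, so committing at 1 kHz
# is what makes the storage loop fall behind the DAQ loop.
conn.execute("BEGIN")
conn.executemany("INSERT INTO daq VALUES (?, ?)", samples)
conn.execute("COMMIT")

print(conn.execute("SELECT COUNT(*) FROM daq").fetchone()[0])  # 1000
```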



Hi Rob,

You need to wrap multiple INSERTs into a single transaction with “BEGIN” and “COMMIT”. Each transaction requires verified writing to the disk twice, and a hard disk only spins on the order of once every 10 ms. You need to buffer your data and do a bulk insert about once a second (place a FOR loop between the “Prepare” and “Finalize” subVIs in your code image, and feed in an array of your data clusters).

 

 

This touches on jollybandit’s question (5): durability against power failure by confirmed writing to disk is time consuming, so you need some kind of tradeoff between immediate saving and delayed buffering.   About one save per second is what I do.

Thanks a lot, James! I will try implementing the above.

 

EDIT: I have just tried this and it works! Thanks again. I misinterpreted the solution to jollybandit's problem.

Edited by df_rob


drjdpowell

Many thanks for your time and constructive advice.

There is the intention to also use this toolkit and SQLite dll with an application that has a much lower throughput.

This raises a few additional questions:

1) Practically, how many connection references are allowed (if more than one), permitting simultaneous reading and writing to an SQLite file? Are there any deficiencies introduced by this practice?

2) In the scenario of multiple parallel loops requiring access to the same SQLite database file (i.e. simultaneous reading and writing), is it better practice to open several SQLite file references and manage them independently in each loop? Alternatively, is it better practice to use a single reference shared across all loops, with a single coordinator for the finalize and close?

3) In the scenario of multiple asynchronous writes and reads to an SQLite DB file, is it better to begin, commit, and close on each loop execution (for insert/read), or to create a single reference and only close it on application shutdown, for high durability and integrity?

3a) Does this latter method put the database at increased risk of locking?

4) On the generation of a runtime error, will the LabVIEW SQLite Finalize/Close VIs handle the error event and cleanly close the SQLite DB reference, or do I need to put a 'Clear Errors' VI in front of them?

Many thanks again for your help and advice.

Greg

Edited by jollybandit


This raises a few additional questions:

I don’t think there is a major difference between using one or multiple connections; I’ve used either one or a few. And don’t worry about opening and closing connections, just keep them open. The issue to (slightly) worry about is write transactions, as these lock the db file. Other operations will block waiting for the db to stop being busy, and will throw a “busy” error after a timeout, which is by default 5 seconds. So don’t BEGIN a transaction and then wait for something, as you are preventing other processes from accessing the db.

 

The worst that can happen on power failure is that the current transaction is reverted (important point: do not delete any extra files that SQLite may leave in the same directory after a power failure, as these are the “journal” or “WAL” files that SQLite needs to roll back).

 

Finalize and Close VIs have standard error handling for “cleanup”-type VIs (i.e. they clean up even on error). You do not need to use “Clear Errors”.

 

Note, btw, that you can Prepare statements once and reuse them.   This saves some overhead.  Just Finalize them at the end of the app before Closing the connection (I’ve been thinking of making the Finalization automatic on Close, but haven’t yet).
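A rough Python sqlite3 analogue of both points, the busy timeout and statement reuse; note Python caches compiled statements internally rather than exposing explicit Prepare/Finalize steps:

```python
import sqlite3

# timeout is the busy timeout: how long this connection waits for
# another connection's write transaction before raising a "busy" error.
conn = sqlite3.connect(":memory:", timeout=5.0)
conn.execute("CREATE TABLE log (msg TEXT)")

# Reusing the identical SQL text lets sqlite3 reuse its cached
# compiled statement -- analogous to preparing once in the LabVIEW
# API and finalizing only at shutdown, which saves compile overhead.
insert = "INSERT INTO log VALUES (?)"
for msg in ("start", "run", "stop"):
    conn.execute(insert, (msg,))

print(conn.execute("SELECT COUNT(*) FROM log").fetchone()[0])  # 3
```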


Bind Timestamp (Real).vi has a bug. It writes the LV timestamp (offset from 1/1/1904) instead of the Unix timestamp (1/1/1970). Non-LabVIEW applications reading the timestamp are getting bad values out of the db.

 

pHXi95Z.png


Is that a “bug”? I used the LabVIEW time definition, as there is no SQLite timestamp definition. Not sure why LabVIEW doesn’t match the UNIX definition.


Is that a “bug”? I used the LabVIEW time definition, as there is no SQLite timestamp definition. Not sure why LabVIEW doesn’t match the UNIX definition.

 

LabVIEW's timestamp format is explicitly defined as the number of seconds since midnight, January 1, 1904 GMT. There is no reason LabVIEW needs to adhere to any specific epoch. On Windows the system time epoch is the number of 100 ns intervals since January 1, 1601; Unix indeed uses January 1, 1970; and MacOS used January 1, 1904 (yes, LabVIEW was originally a MacOS-only application! :D ). And as a curiosity, Excel has a default time definition of the number of days since January 1, 1900, although due to a mishap when defining that epoch (forgetting that 1900 wasn't a leap year), the real epoch starts at midnight, December 31, 1899. But there is a configuration option in Excel to shift this to the MacOS time epoch! :P

 

It's definitely a good thing that they stick to the same time epoch on all LabVIEW platforms, and since the MacOS one was the first to be used by LabVIEW, it's a good choice to stick with that.

 

If your API has a different epoch you have to provide functions to convert between the LabVIEW epoch and your API epoch and vice versa.
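A sketch of such a conversion; the offset is the 24107 days (including 17 leap days) between the 1904 and 1970 epochs, times 86400 seconds:

```python
import sqlite3

# Seconds between the LabVIEW epoch (1904-01-01 UTC) and the
# Unix epoch (1970-01-01 UTC): 24107 days * 86400 s/day.
LV_TO_UNIX_OFFSET = 2082844800

def labview_to_unix(t_lv):
    return t_lv - LV_TO_UNIX_OFFSET

def unix_to_labview(t_unix):
    return t_unix + LV_TO_UNIX_OFFSET

# A non-LabVIEW reader can also apply the offset inside SQLite itself:
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE m (t_lv REAL)")
conn.execute("INSERT INTO m VALUES (?)", (unix_to_labview(0.0),))
iso = conn.execute(
    "SELECT datetime(t_lv - 2082844800, 'unixepoch') FROM m"
).fetchone()[0]
print(iso)  # 1970-01-01 00:00:00
```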


