
I'm having trouble binding parameters to a SELECT statement.

 

This is the statement I want to execute (including a valid param value): SELECT id FROM variable WHERE label = "BooleanTestVariable";

When I run that statement in my database viewer, it returns a row with id = 1.

 

When I run my code (pictured below) with this string on Prepare.vi, no rows are returned by First Step.vi: SELECT id FROM variable WHERE label = ?

 

I noticed that string values have to be quoted in Execute SQL.vi, so I tried that: SELECT id FROM variable WHERE label = "?"

But it returns "Error 402884 occurred at datalogdb.lvlib:Define Test Variables.vi on "SELECT id FROM variable WHERE label = "?"" Possible reason(s): bind or column index out of range"

 

What am I missing?

 

My code:

0VhsoxL.png


Putting “?†is the string “?â€, not a parameter, so when you tried to bind a parameter it was an error.

 

Not sure why your code isn't working, but try eliminating the SELECT and changing the INSERT to "INSERT OR IGNORE", as that should accomplish the same thing (and be faster too).
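As an aside, the placeholder semantics being discussed here are SQLite's, not LabVIEW's, so they can be sketched in any host language. Here is a minimal Python sqlite3 illustration using the table and label from the post above (Python stands in for the LabVIEW Prepare/Bind/Step VIs; it is an analogy, not the toolkit's API):

```python
import sqlite3

# Stand-in for the toolkit's database, with the table from the post
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE variable (id INTEGER PRIMARY KEY, label TEXT)")
conn.execute("INSERT INTO variable (label) VALUES (?)", ("BooleanTestVariable",))

# Correct: an unquoted ? is a placeholder; the bound value needs no quotes
row = conn.execute(
    "SELECT id FROM variable WHERE label = ?", ("BooleanTestVariable",)
).fetchone()
print(row)  # (1,)

# Wrong: a quoted '?' is the literal string '?', so no row matches and
# there is no placeholder left to bind to (hence the index-out-of-range error)
rows = conn.execute("SELECT id FROM variable WHERE label = '?'").fetchall()
print(rows)  # []
conn.close()
```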


I appreciate the tip on INSERT OR IGNORE, but I'm concerned about SELECT not working in a prepared statement because I intend to write more statements like it. I noticed that "Valid Statement" was FALSE, which is obviously untrue. Maybe it's an issue with executing against the Linux .so that NI distributes?


Questions:

- Does it work on Windows?

- Do the examples work on the cRIO?

- Can you attach a sample db so I can inspect it?


drjdpowell, that path ^^^ is the standard NI installation of SQLite3 on a cRIO target. (I'm using NI's opkg server to install the package.) Can you add that path as a default for the OS and target type inside your API?

 

What’s the Target_Type conditional disable symbol on the cRIO?


The library is /usr/lib/libsqlite3.so . Provide that path to the API when opening/creating a file, and everything works great.

Can you try just using the string "libsqlite3.so" or "libsqlite3.*"? /usr/lib may be the default library path, and then I can cover the case where the default library location is different.


Is there a way to open read-only?

I see that sqlite3_open_v2() supports a SQLITE_OPEN_READONLY flag but you seem to be using sqlite3_open() only.


Can you try just using the string "libsqlite3.so" or "libsqlite3.*"? /usr/lib may be the default library path, and then I can cover the case where the default library location is different.

 

"/usr/lib" should be in the standard search path for libraries, and if the library was properly installed with ldconfig (which I suppose should be done automatically by the OPKG script) then LabVIEW should be able to find it. Note that on Linux platforms LabVIEW will automatically try to prepend "lib" to the library name if it can't find the library with its wildcard name. So even "sqlite3.*" should work. However I'm not entirely sure about if that also happens when you explicitedly wire the library path to the Call Library Node. But I can guarantee you that it is done for library names directly entered in the Call Library Node configuration.



Is there a way to open read-only?

I see that sqlite3_open_v2() supports a SQLITE_OPEN_READONLY flag but you seem to be using sqlite3_open() only.

I’ll look into using sqlite3_open_v2().
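For reference, the SQLITE_OPEN_READONLY behaviour is also reachable through SQLite's URI filenames; here is a hedged Python sketch of what the flag does (the file name and path are arbitrary; Python's uri=True connection routes through sqlite3_open_v2):

```python
import os
import sqlite3
import tempfile

# Create a small database to open read-only afterwards
path = os.path.join(tempfile.mkdtemp(), "demo.db")
conn = sqlite3.connect(path)
conn.execute("CREATE TABLE t (x INTEGER)")
conn.close()

# uri=True uses sqlite3_open_v2(); mode=ro sets SQLITE_OPEN_READONLY
ro = sqlite3.connect(f"file:{path}?mode=ro", uri=True)
err = None
try:
    ro.execute("INSERT INTO t VALUES (1)")  # any write must now fail
except sqlite3.OperationalError as e:
    err = e
print(err)  # attempt to write a readonly database
ro.close()
```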

"/usr/lib" should be in the standard search path for libraries, and if the library was properly installed with ldconfig (which I suppose should be done automatically by the OPKG script) then LabVIEW should be able to find it. Note that on Linux platforms LabVIEW will automatically try to prepend "lib" to the library name if it can't find the library with its wildcard name. So even "sqlite3.*" should work. However I'm not entirely sure about if that also happens when you explicitedly wire the library path to the Call Library Node. But I can guarantee you that it is done for library names directly entered in the Call Library Node configuration.

I had "sqlite3.*†as one of the search paths, so that must have failed for some reason.


Good day to all LAVA users,

I am new to the environment of SQLite and have been experimenting with the toolkit.

I have a few questions that I hope someone will kindly answer and clarify for me, putting me in good stead for the project I am about to commence. I seek answers to:

1) At run time, do the toolkit VIs return errors? If so, what are they?

2) Within the SQLite read loop, is it good practice to wire the error cluster to the while-loop stop button?

3) I have been comparing LabVIEW spreadsheet file writes / reads against SQLite for file size and speed on similar actions. Firstly,

       a) In what format is the data in an SQLite file stored: ASCII? Binary?

       b) Is any compression used, or is it something that needs enabling?

The reason I ask: if I compare the output files on size they are comparable: spreadsheet 30 MB (write 5423 ms, read and filter 21681 ms) and SQLite 36 MB (write 7539 ms, read / filter 528 ms), for the same write data set of 200000 rows x 7 elements and read / filter (block insert using the Prepare and Commit VIs).

           a) Should it be the case that the SQLite file is larger in physical disk size?

           b) And takes longer to write?

4) I also discovered that when purging data from the SQLite table, the file size appears to remain constant. Subsequently I discovered the VACUUM function; how is this best employed?

5) My intention is to use this in an action / events tracker in an application. Are there any recommendations or tricks to ensure optimum data capture, especially for those unexpected power / system crashes?

6) Finally, during my programming I have found that it is possible to accidentally lock an SQLite file when capturing and handling an error during the processing of an event in a JKI state machine. How do you unlock the SQLite tables?

Many thanks

Greg


Hello,

I would like to use the SQLite Library but I get an error 12 when trying to open the database with the "SQLite Open" function (screenshot attached).

Do you have any idea what the problem could be?

I'm using the LabVIEW 2012 SP1 Dev version.

Thanks a lot !

 

Pierre

 

post-53694-0-90401100-1435665218.jpg


Good day to all LAVA users,

I am new to the environment of SQLite and have been experimenting with the toolkit.

 

Attach a zip file of your benchmark code (including a sample dataset) and I can have a look at it.   In the meantime, have you studied the documentation on SQLite.org?

Also, have you seen the "Cyth SQLite Logger"? Would this work as a preexisting solution for your app?

 

Hello,
I would like to use the SQLite Library but I get an error 12 when trying to open the database with the "SQLite Open" function (screenshot attached).
Do you have any idea what the problem could be?
I'm using the LabVIEW 2012 SP1 Dev version.
Thanks a lot !
 
Pierre

 

You’ve wired your database file to the wrong input.


Thank you in advance for the offer to examine / improve the code.

I have attached a zip file of:

1) the test file generator

2) the top-level SQLite performance evaluation code

3) an XLS containing the timing results obtained.

I will be very interested in your comments and coding improvements (with, hopefully, possible efficiency and file size improvements).

I am aware of the SQLite.org website; consequently, I also found a SQLite error VI that has in its block diagram a well-documented summary of the possible errors and their structure / composition.

Many thanks & Cheers

Greg

Testing_SQLite.zip



Don't suppose anyone knows why this INSERT routine is taking so long (38 ms)? My storage loop can't get anywhere near the speed of my DAQ loop, and eventually I get a buffer overflow. I believe I am using BEGIN and COMMIT correctly; the DAQ rate is 1 kHz and this routine is executed once per cycle (see attached screenshot). Thanks!

 

SQLite3_DB_Slow.jpg


2) the top-level SQLite performance evaluation code

 

1) SQLite isn't a compression format, and as a flat table won't necessarily be any smaller on disk than a spreadsheet file. Larger, actually, due to the various lookup indexes. However, you have the opportunity to use structured data, avoiding a lot of duplication, which can end up with smaller file sizes (or, at least, the freedom to add much more info at not much larger size). For example, you have "site" and "sending app" strings that repeat often. If you instead saved keys into separate "Site" and "Application" tables, you could store lots of info in them that can be joined in with a VIEW. Similarly, you could have an "Errors" table that allowed extended error descriptions instead of just a simple error code (or error severity, etc.). The joining VIEW would look something like:

 

CREATE VIEW Event_VIEW AS
    SELECT * FROM Application_Events
         JOIN Errors USING (ErrCode)
         JOIN Site USING (SiteID)
         JOIN Application USING (AppID);

 

Your UI would then query this view and filter on any info in all these tables. Find all events whose error description contains the word "test", for example.

 

2) Look up LIMIT and OFFSET, and study how they are used in the "Cyth SQLite Log Viewer". In that viewer, updating the filtering of a selected UI table takes milliseconds, not seconds, because only the visible rows of the UI table are actually selected. When the user moves the scrollbar, the SELECT is repeated multiple times per second, so it looks to the user like a table with thousands of rows. And one is free to use a lot of slow property nodes to do things like text colour, since one is never handling more than a few dozen rows.
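The paging idea itself is plain SQL; a Python sqlite3 sketch of the query shape (the viewer is LabVIEW, and the table and function names here are made up for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE log (id INTEGER PRIMARY KEY, msg TEXT)")
conn.executemany("INSERT INTO log (msg) VALUES (?)",
                 [("event %d" % i,) for i in range(10000)])

def visible_rows(first_row, n_rows):
    # Fetch only the rows currently shown in the UI table
    return conn.execute(
        "SELECT id, msg FROM log ORDER BY id LIMIT ? OFFSET ?",
        (n_rows, first_row)).fetchall()

page = visible_rows(5000, 3)  # rows 5001..5003 of 10000
print(page)
```

Re-running this query on every scrollbar move is cheap because it never touches more than a screenful of rows.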

 

3) I wouldn’t bother with VACUUM in a logging application, as the space from any deletion will just get used for later inserts.   Use VACUUM if you delete a large amount without intending to reuse the space.
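The delete-then-VACUUM behaviour is easy to observe directly; a small Python sqlite3 demonstration (file name arbitrary, sizes machine-dependent):

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "vac.db")
conn = sqlite3.connect(path, isolation_level=None)  # autocommit mode
conn.execute("CREATE TABLE t (payload BLOB)")
conn.execute("BEGIN")
conn.executemany("INSERT INTO t VALUES (?)", [(b"x" * 4096,)] * 500)
conn.execute("COMMIT")
size_full = os.path.getsize(path)

conn.execute("DELETE FROM t")   # freed pages stay in the file for reuse
size_after_delete = os.path.getsize(path)

conn.execute("VACUUM")          # rebuilds the file and returns the space
size_after_vacuum = os.path.getsize(path)
conn.close()

print(size_full, size_after_delete, size_after_vacuum)
```

The file does not shrink after DELETE; only VACUUM rewrites it down to its minimal size.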

 

4) You cannot unlock the file if you've dropped the pointer to the connection without closing it, I'm afraid. You have to restart LabVIEW to unload the SQLite DLL. Your code should always call the Finalize and Close methods, even on error.

Don't suppose anyone knows why this INSERT routine is taking so long (38 ms)? My storage loop can't get anywhere near the speed of my DAQ loop, and eventually I get a buffer overflow. I believe I am using BEGIN and COMMIT correctly; the DAQ rate is 1 kHz and this routine is executed once per cycle (see attached screenshot). Thanks!

Hi Rob,

You need to wrap multiple INSERTs into a single transaction with BEGIN and COMMIT. Each transaction requires verified writing to the disk twice, and a hard disk only spins on the order of once every 10 ms. You need to buffer your data and do a bulk insert about once a second (place a FOR loop between the "Prepare" and "Finalize" subVIs in your code image, and feed in an array of your data clusters).
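The per-row versus batched transaction cost can be sketched in Python's sqlite3 (an in-memory database keeps the sketch self-contained; the gap is far larger on a real disk, where each commit must be fsynced):

```python
import sqlite3
import time

conn = sqlite3.connect(":memory:")  # a file on disk shows a far larger gap
conn.isolation_level = None         # manage transactions explicitly
conn.execute("CREATE TABLE daq (t REAL, value REAL)")

samples = [(i / 1000.0, float(i)) for i in range(1000)]  # 1 s of 1 kHz data

# One transaction per INSERT: every row pays the full commit cost
start = time.perf_counter()
for s in samples:
    conn.execute("BEGIN")
    conn.execute("INSERT INTO daq VALUES (?, ?)", s)
    conn.execute("COMMIT")
per_row = time.perf_counter() - start

# One transaction around the whole batch: a single commit for 1000 rows
start = time.perf_counter()
conn.execute("BEGIN")
conn.executemany("INSERT INTO daq VALUES (?, ?)", samples)
conn.execute("COMMIT")
batched = time.perf_counter() - start

print(per_row, batched)
```

The executemany inside one BEGIN/COMMIT is the textual analogue of the FOR loop between Prepare and Finalize described above.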

 

 

This touches on jollybandit’s question (5): durability against power failure by confirmed writing to disk is time consuming, so you need some kind of tradeoff between immediate saving and delayed buffering.   About one save per second is what I do.



Hi Rob,

You need to wrap multiple INSERTs into a single transaction with BEGIN and COMMIT. Each transaction requires verified writing to the disk twice, and a hard disk only spins on the order of once every 10 ms. You need to buffer your data and do a bulk insert about once a second (place a FOR loop between the "Prepare" and "Finalize" subVIs in your code image, and feed in an array of your data clusters).

 

 

This touches on jollybandit’s question (5): durability against power failure by confirmed writing to disk is time consuming, so you need some kind of tradeoff between immediate saving and delayed buffering.   About one save per second is what I do.

Thanks a lot, James! I will try implementing the above.

 

EDIT: I have just tried this and it works! Thanks again. I misinterpreted the solution to jollybandit's problem.



drjdpowell,

Many thanks for your time and constructive advice.

There is the intention to also use this toolkit and the SQLite DLL with an application that has a much lower throughput.

This raises a few additional questions:

1) Practically, how many connection references are allowed (if any more than 1), allowing simultaneous reading and writing to an SQLite file? Are there any deficiencies introduced by this practice?

2) In the scenario of having multiple parallel loops requiring access to the same SQLite database file (i.e. simultaneous reading and writing), is it better practice to open several SQLite file references and manage them independently in each of the loops? Alternatively, is it better practice to use a single reference shared across all loops, with a single coordinator for the finalize and close?

3) In the scenario of multiple asynchronous writes and reads to an SQLite DB file, is it better to begin, commit and close on each loop execution (for insert / read), or to create a single reference and close it only when the application closes, for high durability and integrity?

3a) Does the latter method put the database at increased risk of locking?

4) On the generation of a runtime error, will the LabVIEW SQLite finalize / close functionality handle the error event and cleanly close the SQLite db reference, or do I need to put a 'clear errors' VI in front of it?

Many thanks again for your help and advice

Greg



This raises a few additional questions :-

I don't think there is a major difference between using one or multiple connections; I've used either one or a few. And don't worry about opening and closing connections, just keep them open. The issue to (slightly) worry about is write transactions, as these lock the db file. Other operations will block waiting for the db to stop being busy, and will throw a "busy" error after a timeout which is by default 5 seconds. So don't BEGIN a transaction and then wait for something, as you are preventing other processes from accessing the db.
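The busy-timeout behaviour can be reproduced in a few lines of Python sqlite3 (two connections to the same file; the 0.1 s timeout is shortened from SQLite's 5 s default just so the sketch runs quickly):

```python
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "busy.db")
writer = sqlite3.connect(path, isolation_level=None)
writer.execute("CREATE TABLE t (x INTEGER)")

# Second connection; timeout is the "busy" timeout (SQLite's default is 5 s)
reader = sqlite3.connect(path, timeout=0.1, isolation_level=None)

writer.execute("BEGIN IMMEDIATE")           # take the write lock and hold it
writer.execute("INSERT INTO t VALUES (1)")
err = None
try:
    reader.execute("BEGIN IMMEDIATE")       # waits, then errors after 0.1 s
except sqlite3.OperationalError as e:
    err = e
print(err)  # database is locked
writer.execute("COMMIT")
writer.close()
reader.close()
```

This is exactly the failure mode described above: the second connection blocks on the held write transaction and gives up with a "locked"/"busy" error after the timeout.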

 

The worst that can happen on power failure is that the current transaction is reverted (important point: do not delete any extra files that SQLite may leave in the same directory on power failure, as these are the "journal" or "WAL" files that SQLite needs to roll back).

 

Finalize and Close VIs have standard error handling for "cleanup"-type VIs (i.e. they clean up even on error). You do not need to use "clear errors".

 

Note, btw, that you can Prepare statements once and reuse them.   This saves some overhead.  Just Finalize them at the end of the app before Closing the connection (I’ve been thinking of making the Finalization automatic on Close, but haven’t yet).


Bind Timestamp (Real).vi has a bug. It writes the LabVIEW timestamp (offset from 1/1/1904) instead of the Unix timestamp (offset from 1/1/1970). Non-LabVIEW applications reading the timestamp get bad values out of the db.

 

pHXi95Z.png


Is that a "bug"? I used the LabVIEW time definition, as there is no SQLite timestamp definition. Not sure why LabVIEW doesn't match the UNIX definition.


Is that a "bug"? I used the LabVIEW time definition, as there is no SQLite timestamp definition. Not sure why LabVIEW doesn't match the UNIX definition.

 

LabVIEW's timestamp format is explicitly defined as the number of seconds since midnight, January 1, 1904 GMT. There is no reason LabVIEW needs to adhere to any specific epoch. On Windows the system time epoch is the number of 100 ns intervals since January 1, 1601; Unix indeed uses January 1, 1970; and the classic MacOS used January 1, 1904 (yes, LabVIEW was originally a MacOS-only application!  :D ). And as a curiosity, Excel has a default time definition of the number of days since January 1, 1900, although due to a mishap when defining that epoch (forgetting that 1900 wasn't a leap year), the real epoch starts at midnight, December 31, 1899. But there is a configuration option in Excel to shift this to the MacOS time epoch!  :P

 

It's definitely a good thing that they stick to the same time epoch on all LabVIEW platforms, and since the MacOS one was the first to be used by LabVIEW, it's a good choice to stick with that.

 

If your API has a different epoch you have to provide functions to convert between the LabVIEW epoch and your API epoch and vice versa.
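The LabVIEW-to-Unix conversion is a fixed offset; a small Python sketch that derives it rather than hard-coding it (function names are my own, not part of the library):

```python
from datetime import datetime, timezone

# Seconds between the LabVIEW epoch (1904-01-01 GMT) and the Unix epoch
LV_EPOCH_OFFSET = int(
    (datetime(1970, 1, 1, tzinfo=timezone.utc)
     - datetime(1904, 1, 1, tzinfo=timezone.utc)).total_seconds()
)
print(LV_EPOCH_OFFSET)  # 2082844800

def labview_to_unix(t_lv: float) -> float:
    """Convert a LabVIEW timestamp (seconds since 1904) to Unix time."""
    return t_lv - LV_EPOCH_OFFSET

def unix_to_labview(t_unix: float) -> float:
    """Convert a Unix timestamp (seconds since 1970) to LabVIEW time."""
    return t_unix + LV_EPOCH_OFFSET
```

A non-LabVIEW reader of the database can apply the same 2082844800-second offset to recover standard Unix times.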




