[CR] SQLite Library



17 hours ago, Stobber said:

djpowell, could you add support for extended error codes to this API? I'm struggling with a SQLITE_CANTOPEN error on a cRIO that never used to appear, and I don't have enough context to know what the heck is wrong.

Also, could you modify SQLite.lvlib:Format Error.vi to yield the full call chain? Figuring out where an error came from without it is sometimes really hard. If you don't want to stringify the call chain all the time, maybe make it an option I can toggle on Open?

I will do both those things.  



On December 16, 2016 at 6:45 PM, Stobber said:

djpowell, could you add support for extended error codes to this API? I'm struggling with a SQLITE_CANTOPEN error on a cRIO that never used to appear, and I don't have enough context to know what the heck is wrong.

Also, could you modify SQLite.lvlib:Format Error.vi to yield the full call chain? Figuring out where an error came from without it is sometimes really hard. If you don't want to stringify the call chain all the time, maybe make it an option I can toggle on Open?

See the 1.6.5 version now in the CR.  I added the extended code to the Error Description, after the standard error description: so SQLITE_CANTOPEN(nnn), with nnn being the full code, which you can look up at http://www.sqlite.org/rescode.html.  At some point I'll implement the extended error names, but I don't have time now.


Note: the latest version, 1.7.2, just posted, includes a “Load Extended Math Functions” method, which loads an SQLite extension dll that adds a large set of math functions to the very limited set in standard SQLite.  The functions are:

Math: acos, asin, atan, atn2, atan2, acosh, asinh, atanh, difference,
degrees, radians, cos, sin, tan, cot, cosh, sinh, tanh, coth, exp,
log, log10, power, sign, sqrt, square, ceil, floor, pi.

String: replicate, charindex, leftstr, rightstr, ltrim, rtrim, trim,
replace, reverse, proper, padl, padr, padc, strfilter.

Aggregate: stdev, variance, mode, median, lower_quartile,
upper_quartile.

Tested on Windows with LabVIEW 32-bit.  It should work with 64-bit as well, as I include the dll compiled for both bitnesses, but I have not tested that.  See the provided example.
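Once the extension is loaded, the new functions are available in ordinary SQL.  A minimal illustration (the table "readings" and column "value" are hypothetical names, just for the example):

SELECT sqrt(2), power(2, 10), radians(180);          -- scalar math functions
SELECT stdev(value), median(value) FROM readings;    -- aggregate functions from the extension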


Hello!

In my project I created a couple of tables with foreign keys, and I enabled them using "PRAGMA foreign_keys = ON;".
The foreign key constraints do work, but when I try to add a row referencing a non-existent parent element, the query simply gets
ignored without reporting an error. That makes it difficult to debug which query failed and why.
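For illustration, a minimal setup that reproduces this (table and column names are hypothetical):

PRAGMA foreign_keys = ON;
CREATE TABLE parent (id INTEGER PRIMARY KEY);
CREATE TABLE child (
    id        INTEGER PRIMARY KEY,
    parent_id INTEGER NOT NULL REFERENCES parent(id)
);
-- No row with id 99 exists in parent, so this insert violates the constraint and
-- should fail with SQLITE_CONSTRAINT_FOREIGNKEY (extended result code 787):
INSERT INTO child (id, parent_id) VALUES (1, 99);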

After doing some research on why this happens, I found that foreign key constraint errors are part of the "Extended Result Codes"
(not to be confused with "Extended Error Codes"). By default, extended result codes are disabled, as described here:
Enable Or Disable Extended Result Codes

In order to enable extended result codes the following function has to be called on the database handle:
int sqlite3_extended_result_codes(sqlite3*, int onoff);

Unfortunately I couldn't find an accessor method to get the database handle, so I had to modify your class and add an additional
method to it that enables the "Extended Result Codes". It is not the best solution, and it makes it difficult to share the code
between machines.

Is there a way to enable extended result codes without modifying the SQLite library source?

SQLite is a great library, thank you James for it!

Max


Hi Max,

The actual bug, I think, was that “Step.vi” was ignoring errors outside of 1..99.  It has a case structure that only passes those codes to the error-handling subVI (written, originally, because 100 and 101 are not errors).  I have changed it so that the extended error codes are now passed (see version 1.7.3 just added to the LAVA CR).  I tested it by making a foreign key error and it now works.

Bugfix SQLite.png

On 30.4.2017 at 4:01 PM, drjdpowell said:

Hi Max,

The actual bug, I think, was that “Step.vi” was ignoring errors outside of 1..99.  It has a case structure that only passes those codes to the error-handling subVI (written, originally, because 100 and 101 are not errors).  I have changed it so that the extended error codes are now passed (see version 1.7.3 just added to the LAVA CR).  I tested it by making a foreign key error and it now works.

 

This also solved my problem when inserting an existing/duplicate record and expecting a 'UNIQUE constraint failed' error. Now it shows up in the error chain. Thanks a lot for this!

Jörn


The read-only option seems to be doing the right thing from what I've tried so far.

Thanks


I have a problem with the Cyrillic alphabet.
The database works fine with Cyrillic text, but I cannot create the database file in a folder whose name contains Cyrillic characters.
The SQLite open function produces an error:

Error 402873 ...
SQLITE_CANTOPEN (14): unable to open database: C:\data\...\PґР°С,Р°\sys_bd.db
where "дата" is the folder name in Cyrillic, shown garbled in the error message.


Sounds reasonable.  Even NI only supports four versions for some of its modules (e.g. Vision Development Module), so I've finally jumped from 2012 to 2017.  Is there a way to keep the current install available in VIPM if using LabVIEW <2013?


VIPM keeps a local cache of every version of every package you have installed.  So if you've ever installed the package and later installed a newer one, you should be able to right-click the package in VIPM and choose to install a different version.  In LabVIEW 2017 both versions will show up as installable; in 2013 only the compatible one will show up.


Forgive my ignorance here, as databases are currently way over my head. My initial reason for viewing this thread was a search for a library whose functions could run on a cRIO-9068 (Linux) controller to write system data snapshot updates to another system (a FreeBSD box, let's say). Would this library work on a Linux cRIO? The tutorial videos I've watched on the SQLite library look really promising for my application needs.

On 11/28/2015 at 0:47 PM, drjdpowell said:

A beta version of 1.6 is posted here.  If you ignore the newest features, you could use this in production code; it has the latest SQLite 3.9.2, including the interesting JSON1 extension.

^^ I think it's already compiled in.

Try creating a memory database and executing one of the sample queries, like "json_array_length('[1,2,3,4]')", which should return 4. If I'm remembering correctly, these queries just worked with the VI package.
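A quick check, assuming the JSON1 extension is compiled into the library's SQLite dll (the second query and its JSON path are just an illustration):

SELECT json_array_length('[1,2,3,4]');               -- returns 4
SELECT json_extract('{"a":1,"b":[2,3]}', '$.b[1]');  -- returns 3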


It is built in.  Just use it.

BTW, something not built in is the extended math functions (log, sin, stdev, and the like), but I include a VI for loading them:

Load Math.png


We have an existing exe out in the field with the appropriate dll folder beside it; I guess the version is about two years old. There are several installations.

If a new package built with a newer wrapper version in LabVIEW is now distributed, but a user replaces only the exe and misses the dll, is there a problem with the existing functionality (nothing new on the database side has been added)? Is it backward compatible, or should the SQLite version, which ships with and goes hand in hand with the wrapper, be checked?
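If in doubt, one simple cross-check (standard SQLite, not specific to this wrapper) is to ask the dll that is actually loaded for its version at run time and compare it with what the new exe was tested against:

SELECT sqlite_version();   -- reports the version of the SQLite dll in use, e.g. '3.9.2'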

thanks


The attached code shows how SQLite will interpret various stored types when read back as an integer. It makes me a little nervous that there are no warnings or any other way to determine that a value might not match what was requested. I guess I could check every value for its column type. Is there an easier way that I'm missing?

Test type handling.png
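One lightweight check, generic SQLite rather than anything specific to this library, is the built-in typeof() function, which reports the storage class actually used for each value (table and column names are hypothetical):

SELECT value, typeof(value) FROM measurements;
-- typeof() returns 'integer', 'real', 'text', 'blob' or 'null' for each value,
-- so unexpected storage classes can be flagged before conversion.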

  • Similar Content

    • By Thang Nguyen
      Currently, my vision software does a compression (subtract the background image, then count zeros) and writes the result to a table in an SQLite file. Since we want to speed up the process, I now write the entire image to a temporary table; then at the end I read that table, do the compression, write to the actual table, and drop the temporary table. This also takes a lot of time, even with journal mode = memory. I think the issue is that I put the process code in 4 separate modules:
      1. Select the temp table -> output array
      2. Compress the output array from step 1
      3. Insert the compressed data from step 2 into the actual table
      4. Drop the temp table
      I am looking for a way to combine these steps to speed things up, for example selecting and deleting the rows in the temp table at the same time, so that at the end the table can be dropped faster. Currently it takes significant time to drop the table, and our raw data can be up to 3 GB. But I don't know how to combine the queries, and I have read that SQLite does not support this, so I am also looking for advice on how to make this process more efficient (a rough SQL sketch of the steps is shown below). I thought about using a queue too, but I need input before I change it.
      Thank you in advance.
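      For reference, a minimal SQL sketch of the temp-table workflow described above, wrapped in a single transaction (table and column names are hypothetical; the compression itself still happens in LabVIEW between the SELECT and the INSERT):

      BEGIN;
      CREATE TEMP TABLE raw_frames (frame_id INTEGER PRIMARY KEY, pixels BLOB);
      -- ...bulk-insert the raw images into raw_frames here...
      SELECT frame_id, pixels FROM raw_frames;                        -- read back for compression
      INSERT INTO compressed_frames (frame_id, data) VALUES (?, ?);   -- parameterized, once per frame
      DROP TABLE raw_frames;
      COMMIT;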
    • By kosist90
      Dear Community,
      let me present our new ANV Database Toolkit, which has been recently released at vipm.io.
      A short introduction to the toolkit is posted at this link; it also describes the steps needed in order to use the toolkit.
      ANV Database Toolkit helps developers design a LabVIEW API for querying various databases (MS SQL, MySQL, SQLite, Access). It lets you create VIs, via a graphical user interface, that can then be used as an API. When these VIs are used, the toolkit handles the connection to the database, relieving developers of that burden in their applications.
      It has the following features:
      • Simplifies handling of databases in LabVIEW projects
      • Allows API VIs for databases to be created graphically
      • Supports Read, Write, Update and Delete queries
      • Supports various database types (MS SQL, MySQL, SQLite, Access)
      The overall idea is that a developer can create a set of high-level API VIs for queries using a graphical user interface, without actually writing SQL queries. Those API VIs are used in the application and handle database communication in the background. Moreover, an SQL query can be applied to any of the supported database types; it is just a matter of database type selection, and changing the target database does not require changes to the API VI that executes the query.
      After installation of the toolkit, a sample project is available that shows what the toolkit can do in terms of executing different types of queries.
      Note that in order to install the toolkit, VI Package Manager must be launched with Administrator privileges.
      The toolkit is paid, and pricing is disclosed by quotation. There is a 30-day trial period during which you can try out the toolkit and decide whether it is helpful for your needs (we hope that it will be).
      In case of any feedback, ideas or issues please do not hesitate to contact me directly here, or at vipm.io, or at e-mail info@anv-tech.com.
       
        
    • By drjdpowell
      Introductory video now available on YouTube: Intro to SQLite in LabVIEW
       
      SQLite3 is a very light-weight, server-less, database-in-a-file library. See www.SQLite.org. This package is a wrapper of the SQLite3 C library and follows it closely.

      There are basically two use modes: (1) calling "Execute SQL" on a Connection to run SQL scripts (and optionally return 2D arrays of strings from an SQL statement that returns results); and (2) "Preparing" a single SQL statement and executing it step-by-step explicitly. The advantage of the latter is the ability to "Bind" parameters to the statement, and to get the column data back in the desired datatype. The "Bind" and "Get Column" VIs are set as properties of the "SQL Statement" object, for convenience in working with large numbers of them.
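      As an illustration of mode (2), the SQL side of a prepared statement is just a statement with SQLite's standard ? placeholders; the "Bind" VIs supply the parameter values and the "Get Column" VIs read back typed results (table and column names here are hypothetical):

      INSERT INTO log (timestamp, channel, value) VALUES (?, ?, ?);
      SELECT timestamp, value FROM log WHERE channel = ?;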

      See the original conversation on this here.
      Hosted on the NI LabVIEW Tools Network.
      JDP Science Tools group on NI.com.
      ***Requires VIPM 2017 or later for install.***
    • By Thang Nguyen
      Hi,
      I would like to know whether I can stream data into two different tables with SQLite at the same time. Currently my system has one camera, and I store the captured images and particle measurements in one table. I have just added a second camera to the system and would like to do the same thing, writing to a second table in a parallel process. I wonder if this is possible or not.
      I use SQLite library.
      Thanks in advance.
       
    • By McQuillan
      Hi Everyone,
      I (re)watched James Powell's talk at GDevCon#2 about Application Design Around SQLite. I really like this idea, as I have an application with lots of data (from serial devices and software configuration) that is needed in several areas of the application (and by external applications). His talk was a 'light-bulb' moment where I thought I could have a centralized SQLite database that all the modules could access to select / update data.
      He said the database could be the 'model' in the model-view-controller design pattern because the database is very fast. So you can collect data in one actor and publish it directly to the DB, and have another actor read the data directly from the DB, with the added benefit that another application can view the data.
      Link to James' talk: https://www.youtube.com/watch?v=i4_l-UuWtPY&t=1241s)
       
      I created a basic proof of concept which launches N processes to generate data (publish to the database) and others to act as a UI (read data from the database and update configuration settings in the DB, like a set-point). However, after launching a couple of processes I ran into 'Database is locked (error 5)', and I realized two things: SQLite databases aren't magically able to have n concurrent readers/writers, and I'm not using them right... (I hope).
      I've created a schematic (attached) to show what I did in the PoC (that was getting 'Database is locked (error 5)' errors).
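      For what it's worth, a common mitigation for 'database is locked' when several connections share one file, generic SQLite advice rather than anything specific to this library, is to enable WAL journal mode and set a busy timeout on every connection; a minimal sketch:

      PRAGMA journal_mode = WAL;    -- one writer plus multiple readers can work concurrently
      PRAGMA busy_timeout = 5000;   -- wait up to 5 s for a lock instead of failing immediately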
      I'm a solo-developer (and SQLite first-timer*) and would really appreciate it if someone could look over the schematic and give me guidance on how it should be done. There's a lot more to the actual application, but I think once I understand the limitations of the DB I'll be able to work with it.
      *I've done SQL training courses.
      In the actual application, the UI and business logic are on two completely separate branches (I only connected them to a single actor for the PoC) 
      Some general questions / thoughts I had:
      • Is the SQLite-based application design something worth pursuing / is it a sensible design choice?
      • Instead of creating lots of tables (when I launch the actors), should I instead make separate databases, to reduce the number of requests per DB? (I shouldn't think so... but worth asking.)
      • When generating data, I'm using UPDATE to change a single row in a table (the current value), and I'm then reading that single row in other areas of code. (Then, if logging is needed, I create a trigger to copy the data to a separate table.) Would it be better to INSERT data and have the other modules read the max RowId for the current value, periodically deleting old rows?
      • The more clones I had, the slower the UI seemed to update (it should have updated 10 times/second, but slowed to once every 3 seconds). I was under the impression that you can do thousands of transactions per second, so I think I'm querying the DB inefficiently.
      The two main reasons why I like the database approach are:
      • External applications will need to 'tap into' the data; if they could get to it via an SQL query, that would be ideal.
      • Data-logging is a big part of the application.
      Any advice you can give would be much appreciated.
      Cheers,
      Tom
      (I'm using quite a few reuse libraries so I can't easily share the code, however, if it would be beneficial, I could re-work the PoC to just use 'Core-LabVIEW' and James Powell's SQLite API)



