
[CR] SQLite Library



How did you install it?

Since I don't have VIPM (yet), I unzipped the files into my project folder and used the library directly.

This is not a very elegant way, I know, but it has worked nicely so far. The reason is that this package was the only add-on I needed, and I don't like installing a huge overhead of software for functionality I don't need.

 

PS: Sorry for the double post. It said I wasn't allowed to edit, so I posted another answer.


Hi Neil,

 

I'm sure VIPM is great at what it's supposed to do. But just because a car is good for driving doesn't necessarily mean I need one.

In case this changes, I'll give it a try. [VIPM, not the car. I hate cars ;)]

 

- Max


Hey guys,

 

A little OT, but I felt it was worth posting anyway: I recently installed VIPM and I have to say, it actually is a great tool.

Why did I install it? Someone used OpenG in a VI I was supposed to look at.
And I was happy to see that I was not the only one with the idea of having error clusters in the timing VIs ;)

 

/ Max


hi there,

 

I have used this SQLite wrapper for quite a while and have noticed a problem:

The test-rig software I wrote will probably be used by other people as well. Unfortunately, some of them are not very familiar with LabVIEW programming and will probably stop the program using the red abort button.

This causes a 'database locked' message when the software is started again afterwards. So I was wondering if there is any way to remove the lock automatically on the next run (since the process holding the lock, LabVIEW itself, is still running).

I know people should be trained not to stop the execution that way, but it will happen. I don't want to hide the button, since it might be necessary to kill the program from time to time during development. So I'm looking for another way to handle this.

 

Any suggestions on that? I'd appreciate any help.

 

Max

I know people should be trained not to stop the execution that way, but it will happen. I don't want to hide the button, since it might be necessary to kill the program from time to time during development. So I'm looking for another way to handle this.

I generally hide the toolbar on the front panel. One can still access the hard-stop button by opening the block diagram. I also usually capture the "Panel Close?" event and use it to trigger an orderly shutdown, so a user can hit the X button to stop the program correctly.

 

I’m not sure there is an easy way to clear the lock if this happens, other than restarting LabVIEW.  

Thanks for this tip, I implemented it right away in almost all my programs ;)

I'll see if I find anything in the SQLite/LabVIEW docs and post it here in case someone is googling this problem.

 

There is no easy solution. The only proper way would be to implement a DLL wrapper that uses the Call Library Node callback methods to register any session that gets opened in a private global queue inside the wrapper DLL. The close function then removes the session from the queue, and the CLN callback function for abort checks the queue for the session parameter and closes it if found. That CLN callback mechanism is the only way to receive the LabVIEW abort event properly in external code.


Hi folks,

 

Thanks to drjdpowell for putting this library together - it is very useful.  I would like to share an issue I faced when putting together a test VI to execute some SQL queries.

 

If the calling VI is not saved, the SQLite Open function returns an 'OK' (no-error) status, but with a db connection ID of '0'. Subsequent calls (in my case, to execute SQL and then close the connection) also return with no error.

 

Save the VI and things start working correctly.

 

The issue is in the 'Open (string)' function, which uses Application Directory.vi to help locate the sqlite3 DLL. That VI returns <Not a Path> if the calling VI is not saved; when <Not a Path> is input to the CLFN, it returns with no error and what appear to be 'default' values on its outputs (empty string, 0 for integers).

 

To me this is a bug in the CLFN, although the API could check for this condition... or I could just save my code ;)


If the calling VI is not saved, the SQLite Open function returns an 'OK' (no-error) status, but with a db connection ID of '0'. Subsequent calls (in my case, to execute SQL and then close the connection) also return with no error.

A fix for this is in version 1.2.1. Can you give it a quick test?


There is no direct way to write a cluster. Partly this is because there is no direct one-to-one correspondence between LabVIEW and SQLite types; in particular, a LabVIEW string can be either text or binary (SQLite's Text and Blob types). Note that it is relatively easy to wire a long set of “Get Column” outputs into a corresponding cluster bundle (or an unbundle into the corresponding “Bind”s). That is part of the reason for making “Get Column” and “Bind” property nodes.
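
For reference, here is a hedged SQL-level sketch of the kind of statement that “Bind” and “Get Column” work against (the table and columns are hypothetical, purely for illustration):

    -- A parameterized INSERT: each "?" is a parameter to Bind,
    -- e.g. parameter 1 as Text and parameter 2 as Blob,
    -- matching how each LabVIEW string should be interpreted.
    INSERT INTO samples (name, data) VALUES (?, ?);

    -- Reading back: each selected column is fetched with the
    -- matching "Get Column" type (Text, Blob, Integer, Real...).
    SELECT name, data FROM samples;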


There is another problem with the cluster question: it is not very specific about what is wanted.

 

Should it be a cluster where each element is matched with a column in the database table? If so, should the match be based on the cluster element's name matching the column name, or on its ordinal position in the cluster matching the column number? And what if the cluster doesn't match the columns in either case?

 

Or is it about storing a cluster as binary blob in a single column?

 

Questions, questions!!!!
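
To illustrate the two interpretations with SQL (a hedged sketch; the cluster of a timestamp, a reading, and a comment is hypothetical):

    -- Option A: one column per cluster element, matched
    -- by name or by ordinal position:
    CREATE TABLE results (time REAL, reading REAL, comment TEXT);

    -- Option B: the whole flattened cluster stored opaquely
    -- in a single column:
    CREATE TABLE results_flat (cluster BLOB);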


This package is now on the Tools Network, which required the following changes:

- Renamed to “SQLite Library” (can't use “LabVIEW” in the name)

- Moved the palette to be under “Connectivity” rather than “drjdpowell” (and the sub-palette icons now look nicer)


I apologize to users of this library. When NI made me rename the library from “SQLite LabVIEW” to “SQLite Library”, I inadvertently allowed VIPM to rename the root directory accordingly, so this might cause you some minor conflict headaches when opening old projects. Sorry.


I was very curious about this library and thought I'd give it a spin to see if I can use it in my application. In my first test, however, I already ran into a showstopper with the insert speed. Perhaps I'm doing something all wrong (you can take a look at the test code), but using the "INSERT many rows Template code.vi", which to my knowledge should be an efficient method, I'm getting around 10-20 inserts per second. That's more than an order of magnitude less than what I was expecting or what I'd need.

 

Is this normal or should we try to find an issue here?

 

Other than that, the library seems very nice. Good job, and thank you for sharing it with us.

 

EDIT: "PRAGMA synchronous=OFF;" does speed it up to around 500 inserts per second which is already around half-way to what I need.



This is a standard thing to learn with SQLite, which is an “ACID”-compliant database. The “D” is for durable: by default, SQLite is durable against power failure of the computer, meaning that once a transaction has executed, it can be relied on to remain, and not be corrupted, even if power is lost during a subsequent transaction. Durability requires that SQLite verify proper writing to the hard disk, and as hard disks spin at about 100 Hz, this means that the number of durable transactions is limited to on the order of 20 per second.

You can disable the durability requirement, but a better strategy is to group multiple INSERTs into a single transaction by wrapping them in BEGIN…COMMIT SQL statements. See one of the Examples provided with the package to see how this is done (you just need to execute “BEGIN” before your loop and “COMMIT” after**). For simple INSERTs, one should get greater than 100,000 per second.

**Also see SAVEPOINTs in the www.SQLite.org documentation.
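
A minimal SQL sketch of this pattern (the table and values are hypothetical, not taken from the package's examples):

    BEGIN;
    INSERT INTO log (time, value) VALUES (1.0, 0.52);
    INSERT INTO log (time, value) VALUES (2.0, 0.61);
    -- ...many more INSERTs executed in a loop...
    COMMIT;  -- one durable disk sync for the whole batch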


You can disable the durability requirement, but a better strategy is to group multiple INSERTs into a single transaction by wrapping them in BEGIN…COMMIT SQL statements.

Right you were. I was looking for the examples in the wrong place earlier. Grouping into a transaction definitely did the trick. Thank you very much for the quick support.


I'm trying to use it on a Linux cRIO. The API VIs execute without errors (even the individual CLF nodes), but nothing gets written to disk. Even the "Example 1 -- Create Table.vi" example doesn't work when targeted to my cRIO. It does create a file on my Windows system.

 

Edit: I realized that SQLite3 wasn't installed on the target. Ran "opkg install sqlite3", which put the executable in /usr/bin, but an executable doesn't work as a shared library for the API. I think I need to get a .so onto the target and point to it with the "SQLite Library Path" parameter.

 

Edit 2 (SOLVED): The library is /usr/lib/libsqlite3.so. Provide that path to the API when opening/creating a file, and everything works great.


drjdpowell, that path ^^^ is the standard NI installation of SQLite3 on a cRIO target. (I'm using NI's opkg server to install the package.) Can you add that path as a default for that OS and target type inside your API?



  • Similar Content

    • By Thang Nguyen
      Currently, my vision software does a compression (subtracts the background image, then counts zeros) and writes to a table in an SQLite file. Since we want to speed up the process, I write the entire image to a temporary table; then, at the end, I read that table, do the compression, write to the actual table, and drop the temporary table. This also takes a lot of time, even though I use journal mode = MEMORY. I think the issue is that I put the processing code in 4 separate modules:
      1. Select from the temp table -> output array
      2. Compress the output array of step 1
      3. Insert the compressed data from step 2 into the actual table
      4. Drop the temp table
      I am looking for a way to mix these steps together to speed this up, for example selecting and deleting the rows in the temp table at the same time, so that at the end I can drop the table faster. Currently, it takes significant time to drop the table. Our raw data can be up to 3 GB. But I don't know how to combine the queries, and I have read that SQLite does not support this. So I am also looking for advice on how to make this process more efficient (a hedged sketch of one idea appears after this Similar Content list). I thought about using a queue too, but I need input before I change it.
      Thank you in advance.
    • By kosist90
      Dear Community,
      let me present our new ANV Database Toolkit, which was recently released at vipm.io.
      A short introduction to the toolkit is posted at this link, and it also describes the steps needed in order to use the toolkit.
      The ANV Database Toolkit helps developers design a LabVIEW API for querying various databases (MS SQL, MySQL, SQLite, Access). It allows creating VIs which can be used as an API, with the help of a graphical user interface. When these VIs are used, the toolkit handles the connection to the database, relieving developers of this burden in their applications.
      It has the following features:
      - Simplifies handling of databases in LabVIEW projects
      - Allows graphically creating API VIs for databases
      - Supports Read, Write, Update and Delete queries
      - Supports various database types (MS SQL, MySQL, SQLite, Access)
      The overall idea is that a developer can create a set of high-level API VIs for queries using a graphical user interface, without actually writing SQL queries. Those API VIs are used in the application and handle the database communication in the background. Moreover, an SQL query can be applied to any of the supported database types; it is just a matter of selecting the database type. Changing the target database does not require changes in the API VI which executes the query.
      After installation of the toolkit, a sample project is available which shows the possibilities of the toolkit for executing different types of queries.
      Note that in order to install the toolkit, VI Package Manager must be launched with Administrator privileges.
      This toolkit is paid, and the price is disclosed by quotation. There is a 30-day trial period during which you can try out the toolkit and decide whether it is helpful for your needs (we hope it will be).
      In case of any feedback, ideas, or issues, please do not hesitate to contact me directly here, at vipm.io, or by e-mail at info@anv-tech.com.
       
        
    • By drjdpowell
      Introductory video now available on YouTube: Intro to SQLite in LabVIEW
       
      SQLite3 is a very light-weight, server-less, database-in-a-file library. See www.SQLite.org. This package is a wrapper of the SQLite3 C library and follows it closely.

      There are basically two use modes: (1) calling "Execute SQL" on a Connection to run SQL scripts (and optionally return 2D arrays of strings from an SQL statement that returns results); and (2) "Preparing" a single SQL statement and executing it step-by-step explicitly. The advantage of the latter is the ability to "Bind" parameters to the statement, and to get the column data back in the desired datatype. The "Bind" and "Get Column" VIs are set as properties of the "SQL Statement" object, for convenience when working with large numbers of them.

      See the original conversation on this here.
      Hosted on the NI LabVIEW Tools Network.
      JDP Science Tools group on NI.com.
      ***Requires VIPM 2017 or later for install.***
    • By Thang Nguyen
      Hi,
      I would like to know whether I can write streamed data into two different tables with SQLite at the same time. Currently my system has one camera, and I store the captured images and particle measurements in one table. I have just added one more camera to the system and would like to do the same thing, writing to a second table in a parallel process. I wonder if this is possible or not (see the concurrency sketch after this Similar Content list).
      I use the SQLite Library.
      Thanks in advance.
       
    • By McQuillan
      Hi Everyone,
      I (re)watched James Powell's talk at GDevCon#2 about Application Design Around SQLite. I really like this idea as I have an application with lots of data (from serial devices and software configuration) that's all needed in several areas of the application (and external applications) and his talk was a 'light-bulb' moment where I thought I could have a centralized SQLite database that all the modules could access to select / update data.
      He said the database could be the 'model' in the model-view-controller design pattern because the database is very fast. So you can collect data in one actor and publish it directly to the DB, and have another actor read the data directly from the DB, with a benefit of having another application being able to view the data.
      Link to James' talk: https://www.youtube.com/watch?v=i4_l-UuWtPY&t=1241s
       
      I created a basic proof of concept which launches N processes to generate data (publish to the database) and others to act as a UI (read data from the database and update configuration settings in the DB, like a set-point). However, after launching a couple of processes I ran into 'Database is locked (error 5)', and I realized two things: SQLite databases aren't magically able to have n concurrent readers/writers, and I'm not using them right... (I hope).
      I've created a schematic (attached) to show what I did in the PoC (that was getting 'Database is locked (error 5)' errors).
      I'm a solo-developer (and SQLite first-timer*) and would really appreciate it if someone could look over the schematic and give me guidance on how it should be done. There's a lot more to the actual application, but I think once I understand the limitations of the DB I'll be able to work with it.
      *I've done SQL training courses.
      In the actual application, the UI and business logic are on two completely separate branches (I only connected them to a single actor for the PoC) 
      Some general questions / thoughts I had (a hedged concurrency sketch follows after this list):
      - Is the SQLite-based application design something worth pursuing / is it a sensible design choice?
      - Instead of creating lots of tables (when I launch the actors), should I instead make separate databases, to reduce the number of requests per DB? (I shouldn't think so... but worth asking.)
      - When generating data, I'm using UPDATE to change a single row in a table (current value), and I'm then reading that single row in other areas of code. (Then, if logging is needed, I create a trigger to copy the data to a separate table.) Would it be better if I INSERT data and have the other modules read the max RowId for the current value and periodically delete rows?
      - The more clones I had, the slower the UI seemed to update (it should have been 10 times/second, but it was reduced to updating every 3 seconds). I was under the impression that you can do thousands of transactions per second, so I think I'm querying the DB inefficiently.
      The two main reasons why I like the database approach are:
      - External applications will need to 'tap into' the data; if they could get to it via an SQL query, that would be ideal.
      - Data logging is a big part of the application.
      Any advice you can give would be much appreciated.
      Cheers,
      Tom
      (I'm using quite a few reuse libraries so I can't easily share the code, however, if it would be beneficial, I could re-work the PoC to just use 'Core-LabVIEW' and James Powell's SQLite API)
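
On the temporary-table question above (by Thang Nguyen): one hedged idea, with hypothetical table names, is to wrap the read-back, the inserts of the compressed results, and the DROP in a single transaction, and to drop the whole temp table rather than deleting its rows one by one. The compression itself still has to happen in LabVIEW between the SELECT and the INSERT:

    BEGIN;
    -- read the raw rows out for compression in LabVIEW:
    SELECT frame_id, raw FROM temp_table;
    -- ...compress in LabVIEW, then insert the results
    -- with a prepared, parameterized statement:
    INSERT INTO actual_table (frame_id, compressed) VALUES (?, ?);
    -- dropping the table is cheaper than deleting row by row:
    DROP TABLE temp_table;
    COMMIT;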
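
For the concurrency questions above (McQuillan's multi-process design, and Thang Nguyen's parallel writes to two tables): 'Database is locked (error 5)' is SQLITE_BUSY. Two standard SQLite-side mitigations, offered here as hedged suggestions rather than a complete design, are write-ahead logging and a busy timeout:

    -- Let readers proceed while a writer commits
    -- (WAL mode persists in the database file):
    PRAGMA journal_mode=WAL;
    -- Wait up to 5000 ms for a lock instead of failing
    -- immediately with SQLITE_BUSY:
    PRAGMA busy_timeout=5000;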



