
I can't see what you're actually doing, but I'd say this seems expected... SQLite has somewhat squishy types.

https://www.sqlite.org/datatype3.html

"Any column in an SQLite version 3 database, except an INTEGER PRIMARY KEY column, may be used to store a value of any storage class."

"Any column can still store any type of data. It is just that some columns, given the choice, will prefer to use one storage class over another. The preferred storage class for a column is called its "affinity"."
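A quick way to see that affinity in action outside LabVIEW (sketched here with Python's built-in sqlite3 module, not this toolkit): an INTEGER-affinity column converts what it can and keeps the rest as-is.

```python
# Affinity demo: a column declared INTEGER accepts text and reals too;
# affinity is a storage preference, not a constraint.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, val INTEGER)")
con.execute("INSERT INTO t (val) VALUES (?)", ("42",))     # text, converts cleanly to integer
con.execute("INSERT INTO t (val) VALUES (?)", ("hello",))  # unconvertible text stays text
con.execute("INSERT INTO t (val) VALUES (?)", (3.7,))      # real that isn't a whole number stays real
rows = [(v, type(v).__name__) for (v,) in con.execute("SELECT val FROM t ORDER BY id")]
# rows == [(42, 'int'), ('hello', 'str'), (3.7, 'float')]
```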

 

If that makes you feel bad, just remember that almost every interaction you have with your bank relies on a technology whose type system is pure madness.

Edited by smithd

9 hours ago, infinitenothing said:

The attached code shows how SQLite will interpret various types into an integer. It makes me a little nervous that there are no warnings or any other way to determine that a value might not match what was requested. I guess I could check every value for its column type. Is there an easier way that I'm missing?

 

SQLite differs from other databases in its dynamic type system and philosophy of "best attempt at conversion instead of error".  Note though that SQLite is meant as an embedded database, sitting behind an API written in another language.  Types are enforceable in that API.  I don't recall ever having mixed types in a column, because a table is generally only written to by one subVI, and maybe modified by another one right next to it in the same library/class.  I have one client where there are four applications that access the same db files, but they all use the same LabVIEW class for access.  The API even stores data points in a scaled format to take up less space, which is transparent to the application code.
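To illustrate the "enforce types in the API layer" idea in text form (a hypothetical Python sketch, not anything from the LabVIEW toolkit; `TypedTable` and its schema argument are made up for illustration): all writes to a table go through one chokepoint that rejects wrong types before SQLite ever sees them.

```python
# Hedged sketch: one wrapper object per table, checking types on insert,
# mirroring "a table is generally only written to by one subVI".
import sqlite3

class TypedTable:
    """Single chokepoint for writes to one table, with type checks."""
    def __init__(self, con, table, schema):
        self.con, self.table, self.schema = con, table, schema  # schema: {column: python type}
        cols = ", ".join(f"{c} {'INTEGER' if t is int else 'REAL'}" for c, t in schema.items())
        con.execute(f"CREATE TABLE IF NOT EXISTS {table} ({cols})")

    def insert(self, **values):
        for col, val in values.items():
            if not isinstance(val, self.schema[col]):
                raise TypeError(f"{col} expects {self.schema[col].__name__}, got {type(val).__name__}")
        names = ", ".join(values)
        marks = ", ".join("?" for _ in values)
        self.con.execute(f"INSERT INTO {self.table} ({names}) VALUES ({marks})",
                         tuple(values.values()))

con = sqlite3.connect(":memory:")
log = TypedTable(con, "readings", {"channel": int, "value": float})
log.insert(channel=1, value=3.14)        # fine
try:
    log.insert(channel=1, value="3.14")  # caught in the API layer, never reaches SQLite
    caught = False
except TypeError:
    caught = True
```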


BTW, the SQLite Statement class can be subclassed, and you could easily define your own strict-typed versions of the "Get Column" property nodes.  These would just check type before calling the regular function.
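A sketch of that "check before you get" idea in SQL terms, using SQLite's built-in typeof() function (Python's sqlite3 here; `strict_int` is an illustrative helper, not a toolkit VI):

```python
# Strict-typed read: refuse any cell whose storage class isn't 'integer',
# instead of silently converting it.
import sqlite3

def strict_int(con, query):
    """Return integers from a one-column query named x, erroring on any other storage class."""
    out = []
    for val, kind in con.execute(f"SELECT x, typeof(x) FROM ({query})"):
        if kind != "integer":
            raise TypeError(f"expected integer, column holds {kind}: {val!r}")
        out.append(val)
    return out

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE t (x)")                       # no declared type: values keep their class
con.executemany("INSERT INTO t VALUES (?)", [(1,), (2,), ("3",)])
try:
    strict_int(con, "SELECT x FROM t")                  # the text '3' trips the check
    failed = None
except TypeError as e:
    failed = str(e)
```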


"Seems not to work" is about the vaguest description you could give, but I wonder why you aren't just saving your bmp image as a blob, rather than converting it to a LabVIEW image format (and then storing only part of that).

2 hours ago, drjdpowell said:

"but I wonder why you aren't just saving your bmp image as a blob, rather than turning it into a LabVIEW image format (and then only storing part of that)."

As I said in my previous message, because I don't know how to save my image as a blob in LabVIEW!
The image is not originally stored on my PC but is acquired with the Vision libraries.

So once acquired, how do I save it as a blob?
And then how do I pass it to the SQLite library to store it in my DB?


I suggest you break your question down into parts, as different threads and forums will be able to answer each part better. It looks like your SQLite stuff is correct and your issue is just flattening and unflattening, so the machine vision forum might be a better place to ask for help. In the interim, here's one way to do the flatten/unflatten using the IMAQ toolkit.

 

Store in blob.png

Edited by infinitenothing

As you suggested, I've made an example using the Vision library, starting from a previously stored image, but nothing changes.
Or rather, I don't get any error executing the VI, and in the DB I can see the new record, but using SQLite Administrator I'm not able to see the picture inside the DB.
SQLite Administrator has a specific function that should show the image.
I'm also attaching the DB, with 4 records that should all contain the same image, but I can't see it.

 

snippet_SQLITE.thumb.png.ddb15761a4e9b8eb559641219dcaedca.png

DB_CN35.sdb

3 hours ago, Gepponline said:

using SQLite Administrator

This is a pretty important piece of information. I took a quick look at their page (unfortunately their support forums are down, and there is no way I'm installing that software on my computer, so this is all I can go on) and it seems to say it supports bmp and jpg images. From your posts, you have so far provided it with:

  • The pixel values of an image, type cast directly into a string with no metadata
  • The NI vision software flattened representation of an image.

I'd suggest the following:

  • If you have a bmp or jpg image, use the read from binary file function with elements to read set to -1 (to read the whole file as one big string) and pass that directly to sqlite. You may need to make sure that sqlite administrator knows the type, but I think bmp and jpg both use linux-style "let's hide the extension inside the file" encoding, so it's probably not necessary.
  • If you have an imaqdx image in memory, use IMAQ Write String (https://zone.ni.com/reference/en-XX/help/370281AD-01/imaqvision/imaq_write_string/) to generate a bmp or jpg string, and pass that directly to sqlite.
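The first bullet, sketched in Python terms (the LabVIEW version would be Read from Binary File wired straight into a bind-blob; the tiny stand-in "bmp" file here is fabricated for the example):

```python
# Read a whole image file as raw bytes and store it as a BLOB, with no
# format conversion in between.
import sqlite3, tempfile, os

# write a tiny stand-in "image" file (real code would use an actual .bmp/.jpg)
fd, path = tempfile.mkstemp(suffix=".bmp")
os.write(fd, b"BM" + bytes(64))   # real BMP files also start with the magic bytes 'BM'
os.close(fd)

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE images (name TEXT, data BLOB)")
with open(path, "rb") as f:
    raw = f.read()                # the whole file as one byte string, untouched
con.execute("INSERT INTO images VALUES (?, ?)", (path, raw))
back = con.execute("SELECT data FROM images").fetchone()[0]
os.remove(path)
```

Because the bytes went in untouched, any viewer that understands the original file format can read them back out.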

 

To give a slightly more complete explanation: if you read the first few posts on this page, we discuss the sqlite type system, which is very flexible. In this case, everything is a string. You told everyone you wanted to store images in the database, which you were successfully doing in the formats described above. You did not explain that you had a very specific tool in mind for reading those images back out, and that is where the problem lies: it was expecting the string to be in a specific format, and you were not storing it in that format.


I think your issue is the multitude of "image" formats. There are many formats and many conversion functions, and it is difficult to get things to work by trial and error. The webpage for "SQLite Administrator" indicates it understands JPG format, yet your attached code is converting from JPG to a LabVIEW-only flattened image format. You need to read that JPG image into a LabVIEW string, without converting it into some other format, and insert it. Look into "Read from Binary File"... (I see smithd has beaten me to it).


I want to clarify that as far as I know there's no standard way to store an image in a SQLite database. That is, @Gepponline, your question has nothing to do with SQLite much less this toolkit. As far as we know, SQLite Administrator uses a proprietary binary format for storing the images as blobs. You should really go to that community to determine what that format is. Unfortunately, that project looks like it was abandoned with no source code available so you might just be out of luck.


My initial problem was HOW to store blob data in an SQLite record field using this toolkit.
So I thought this was the right place to ask my question.
I had no idea about the lack of a standard for storing images in a BLOB field.
Now that I know that, I can say:
snippet_SQLITE.thumb.png.8536839ba188087f3e2ecf5e9e4cf130.png

1) Attached are two solutions that work with SQLite Manager, but not with SQLite Administrator nor with DB Browser for SQLite.

2) That being so, maybe it's better to store images in a specific folder and store only the path in the DB.

Thank you so much for the support.


 

1 hour ago, drjdpowell said:

Tried inserting png, jpg, and bmp images (by solution 1) and they can be viewed by SQLite Expert Personal, which I use.

I've used that in the past too, but I was creeped out today when I looked at it and saw that the downloads were served over http instead of https. The reason I noticed was that Windows claimed the 32-bit version had a trojan. I'm assuming it was a false detection, but even if that's the case, you shouldn't be serving up installers over an insecure connection.

  • 4 months later...

Hi, I'm new to LAVA. This library you've created looks great! I was hoping to try it on a 9068 cRIO controller. Looking through "SQLite.lvlib:Connection.lvclass:Open (DD).vi" I noticed that when TARGET_TYPE==RT it looks for the sqlite3 library at "/usr/lib/libsqlite3.so".

Could you direct me to instructions for obtaining this file, or provide it, so that I can add it to the controller I'm testing your code with?

Again thanks for your help!

  • 9 months later...

I was thinking of replacing a proprietary chunk-based time-series log file format that we use with SQLite, using this library. From a test I did some time ago I thought read performance would not be an issue (unlike TDMS, for example, which performs really badly due to the fragmentation you get when logging large non-continuous time series), but now that I have redone the tests I see that the read times get quite high (writes are less of an issue in this case, as we normally just write smaller batches of data once a minute). I expected the database to be slower, but the difference is of a magnitude that could be troublesome (no longer a real-time response).

To give a simple example: if I have a table of 350,000 rows containing a time stamp (real) and 3 columns of measurements (real), and I select a time (sub)period from this table, it takes up to 0.6 seconds to retrieve the measurements on my test machine (using either a single SQL Execute 2D dbl or the Select many rows template). For comparison, the current binary file solution can locate and read the same sub-periods with a worst case of 0.03 seconds: 20 times faster. This difference would not be an issue if the accumulated time per batch of requests was still sub-second, but the end users might combine several requests, which means the total waiting time goes from less than a second ("instantaneous") to 5-30 seconds...

Perhaps I was using @ShaunR's library back when I was able to get this done "instantaneously", but that seems to no longer exist (?), and in this case I need a solution that will run on Linux RT too... Should it be possible to read something like this any faster, and/or does anyone have any suggestions on how to improve the performance? The data will have to be on disk; the speedup cannot be based on it being in cache/memory...

Edited by Mads

First question is what is your table's Primary Key?  I would assume it would be your Timestamp, but if not, the lookup of a small time range will require a full table scan, rather than a much quicker search.

Have you put a probe on your prepared statement?  The included custom probe runs "explain query plan" on the statement and displays the analysis.  What does this probe show?
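For reference, what that probe does can be reproduced in any SQLite binding by running EXPLAIN QUERY PLAN yourself. A Python sketch showing the difference between SEARCH (an index is used) and SCAN (full table scan):

```python
# EXPLAIN QUERY PLAN: a timestamp primary key gives an indexed SEARCH,
# while filtering on a non-key column forces a full-table SCAN.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE log (ts REAL PRIMARY KEY, v1 REAL, v2 REAL, v3 REAL)")

# last column of each plan row is the human-readable detail string
search = con.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM log WHERE ts BETWEEN ? AND ?", (0.0, 1.0)
).fetchall()[0][-1]
scan = con.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM log WHERE v1 > ?", (0.0,)
).fetchall()[0][-1]
```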

3 hours ago, drjdpowell said:

First question is what is your table's Primary Key?  I would assume it would be your Timestamp, but if not, the lookup of a small time range will require a full table scan, rather than a much quicker search.

Have you put a probe on your prepared statement?  The included custom probe runs "explain query plan" on the statement and displays the analysis.  What does this probe show?

Whether I use the timestamp as the primary key or not doesn't change much. A SELECT * from the table takes about as long as one with an equivalent WHERE clause on the timestamp...

99% of the time is spent in the loop running the Get Column DBL node. (Retrieving one cell at a time is something I would normally expect to be inefficient in itself, but perhaps not; I guess that is dictated by the underlying API anyway, or?)
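For what it's worth, the underlying C API really is one sqlite3_step() per row and one sqlite3_column_*() call per cell, so the per-cell pattern mirrors the interface itself; per-call wrapper overhead is usually what separates one binding from another. A rough Python baseline on similar-shaped data (row count reduced for the sketch):

```python
# Baseline: bulk-read a timestamped 3-channel table and time the retrieval.
import sqlite3, time

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE log (ts REAL PRIMARY KEY, v1 REAL, v2 REAL, v3 REAL)")
con.executemany("INSERT INTO log VALUES (?,?,?,?)",
                ((i * 0.1, i, i, i) for i in range(100_000)))

t0 = time.perf_counter()
rows = con.execute("SELECT * FROM log ORDER BY ts").fetchall()
elapsed = time.perf_counter() - t0   # per-row/per-cell work happens inside fetchall
```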

 

13 hours ago, drjdpowell said:

Can you post an example?

Attached is an example VI and db file where I just use a single SQL Execute to retrieve the data... On my Windows 10 computer (SSD) with LabVIEW 2018, a full retrieval, with or without a time clause, takes about 0.66 seconds this way.

I have tried using the select many rows template, and it takes about the same time.

Retrieval test.zip

Edited by Mads
On 2/13/2020 at 9:44 AM, Mads said:

Attached is an example VI and db file where I just use a single SQL Execute to retrieve the data... On my Windows 10 computer (SSD) with LabVIEW 2018, a full retrieval, with or without a time clause, takes about 0.66 seconds this way.

I have tried using the select many rows template, and it takes about the same time.

Retrieval test.zip

Managed to find an old copy of LVS's sqlite_api v4.1.0 now, which I suspect is what I used when evaluating SQLite previously. Stripped down to just the query, I see that it is actually 3x slower than your solution @drjdpowell for this query(!). I also tried SAPHIR's Gdatabase, with similarly slow results. I also did a test in Python as a reference, and it came in at 0.44 seconds.

So no reason to expect better after all...

Edited by Mads


  • Similar Content

    • By drjdpowell
      Introductory video now available on YouTube: Intro to SQLite in LabVIEW
       
      SQLite3 is a very light-weight, server-less, database-in-a-file library. See www.SQLite.org. This package is a wrapper of the SQLite3 C library and follows it closely.

      There are basically two use modes: (1) calling "Execute SQL" on a Connection to run SQL scripts (and optionally return 2D arrays of strings from an SQL statement that returns results); and (2) "Preparing" a single SQL statement and executing it step-by-step explicitly. The advantage of the latter is the ability to "Bind" parameters to the statement, and get the column data back in the desired datatype. The "Bind" and "Get Column" VIs are set as properties of the "SQL Statement" object, for convenience in working with large numbers of them.
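      The same two modes, sketched with Python's sqlite3 for readers who don't have LabVIEW open (the VIs differ, but the shape is the same):

```python
# Mode 1: run SQL directly and collect result rows.
# Mode 2: prepare one statement and bind many parameter sets to it.
import sqlite3

con = sqlite3.connect(":memory:")

# Mode 1: "Execute SQL" style, one call with results back as rows
con.executescript("CREATE TABLE pts (x REAL, y REAL); INSERT INTO pts VALUES (1,2);")
rows = con.execute("SELECT x, y FROM pts").fetchall()

# Mode 2: prepare + bind, reusing one statement for many parameter sets
con.executemany("INSERT INTO pts VALUES (?, ?)", [(3.0, 4.0), (5.0, 6.0)])
count = con.execute("SELECT COUNT(*) FROM pts").fetchone()[0]
```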

      See the original conversation on this here.
      Hosted on the NI LabVIEW Tools Network.
      JDP Science Tools group on NI.com.
      ***Requires VIPM 2017 or later for install.***
    • By Thang Nguyen
      Hi,
      I would like to know whether I can stream data into two different tables with SQLite at the same time. Currently my system has one camera; I store the captured images and particle measurements in one table. I have just added a second camera to the system and would like to do the same thing, writing to a second table from a parallel process. I wonder if this is possible.
      I use SQLite library.
      Thanks in advance.
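      Not an official answer, just a sketch of how this usually works: one SQLite file allows a single writer at a time, so two parallel writers each need a busy timeout (so they retry instead of erroring), and WAL journal mode keeps readers from blocking them. Python's sqlite3 here; the table and file names are made up for the example.

```python
# Two parallel writers, each streaming into its own table of one database.
import sqlite3, tempfile, os, threading

path = os.path.join(tempfile.mkdtemp(), "cams.db")
init = sqlite3.connect(path)
init.execute("PRAGMA journal_mode=WAL")   # WAL setting persists in the db file
init.executescript(
    "CREATE TABLE cam1 (t REAL, frame BLOB); CREATE TABLE cam2 (t REAL, frame BLOB);")
init.close()

def writer(table, n):
    # timeout=5.0 makes a busy writer retry for up to 5 s instead of failing;
    # isolation_level=None commits each INSERT immediately (short lock holds)
    con = sqlite3.connect(path, timeout=5.0, isolation_level=None)
    for i in range(n):
        con.execute(f"INSERT INTO {table} VALUES (?, ?)", (float(i), b"frame"))
    con.close()

t1 = threading.Thread(target=writer, args=("cam1", 50))
t2 = threading.Thread(target=writer, args=("cam2", 50))
t1.start(); t2.start(); t1.join(); t2.join()

con = sqlite3.connect(path)
counts = (con.execute("SELECT COUNT(*) FROM cam1").fetchone()[0],
          con.execute("SELECT COUNT(*) FROM cam2").fetchone()[0])
```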
       
    • By McQuillan
      Hi Everyone,
      I (re)watched James Powell's talk at GDevCon#2 about Application Design Around SQLite. I really like this idea as I have an application with lots of data (from serial devices and software configuration) that's all needed in several areas of the application (and external applications) and his talk was a 'light-bulb' moment where I thought I could have a centralized SQLite database that all the modules could access to select / update data.
      He said the database could be the 'model' in the model-view-controller design pattern because the database is very fast. So you can collect data in one actor and publish it directly to the DB, and have another actor read the data directly from the DB, with a benefit of having another application being able to view the data.
      Link to James' talk: https://www.youtube.com/watch?v=i4_l-UuWtPY&t=1241s
       
      I created a basic proof of concept which launches N processes to generate data (publish to the database) and others to act as a UI (read data from the database and update configuration settings in the DB, like a set-point). However, after launching a couple of processes I ran into 'Database is locked (error 5)', and I realized two things: SQLite databases aren't magically able to have n concurrent readers/writers, and I'm not using them right... (I hope).
      I've created a schematic (attached) to show what I did in the PoC (that was getting 'Database is locked (error 5)' errors).
      I'm a solo-developer (and SQLite first-timer*) and would really appreciate it if someone could look over the schematic and give me guidance on how it should be done. There's a lot more to the actual application, but I think once I understand the limitations of the DB I'll be able to work with it.
      *I've done SQL training courses.
      In the actual application, the UI and business logic are on two completely separate branches (I only connected them to a single actor for the PoC) 
      Some general questions / thoughts I had:
      • Is the SQLite-based application design something worth pursuing / is it a sensible design choice?
      • Instead of creating lots of tables (when I launch the actors), should I instead make separate databases, to reduce the number of requests per DB? (I shouldn't think so... but worth asking.)
      • When generating data, I'm using UPDATE to change a single row in a table (current value), and I'm then reading that single row in other areas of the code. (Then, if logging is needed, I create a trigger to copy the data to a separate table.) Would it be better to INSERT data and have the other modules read the max RowId for the current value, periodically deleting old rows?
      • The more clones I had, the slower the UI seemed to update (it should have updated 10 times/second, but slowed to every 3 seconds). I was under the impression that you can do thousands of transactions per second, so I think I'm querying the DB inefficiently.
      The two main reasons why I like the database approach are:
      • External applications will need to 'tap into' the data; if they could get to it via an SQL query, that would be ideal.
      • Data logging is a big part of the application.
      Any advice you can give would be much appreciated.
      Cheers,
      Tom
      (I'm using quite a few reuse libraries so I can't easily share the code, however, if it would be beneficial, I could re-work the PoC to just use 'Core-LabVIEW' and James Powell's SQLite API)
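      A sketch of the usual cure for "Database is locked" (error 5, SQLITE_BUSY): set a busy timeout so a writer retries instead of failing immediately, and use WAL journal mode so readers never block the writer. Python's sqlite3 here; any binding that can run SQL, including this toolkit's Execute SQL, can issue the same PRAGMAs.

```python
# Reproduce the lock error with no busy timeout, then fix it with one.
import sqlite3, tempfile, os

path = os.path.join(tempfile.mkdtemp(), "mvc.db")
a = sqlite3.connect(path, isolation_level=None)
a.execute("PRAGMA journal_mode=WAL")                 # readers don't block the writer
a.execute("CREATE TABLE state (k TEXT PRIMARY KEY, v REAL)")
a.execute("INSERT INTO state VALUES ('setpoint', 1.0)")

b = sqlite3.connect(path, timeout=0.0, isolation_level=None)  # no retries at all
a.execute("BEGIN IMMEDIATE")                         # connection a holds the write lock
try:
    b.execute("UPDATE state SET v=2.0 WHERE k='setpoint'")
    locked = False
except sqlite3.OperationalError:                     # 'database is locked'
    locked = True
a.execute("COMMIT")

b.execute("PRAGMA busy_timeout=5000")                # retry for up to 5 s instead
b.execute("UPDATE state SET v=2.0 WHERE k='setpoint'")
v = b.execute("SELECT v FROM state WHERE k='setpoint'").fetchone()[0]
```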

    • By Munch
      Good Afternoon,
      I have been having an issue trying to resolve an error all afternoon, and hope that you might be able to help.
      I am trying to execute the follow SQL "
      SELECT m.name as tableName,
             p.name as columnName
      FROM sqlite_master m
      left outer join pragma_table_info((m.name)) p
           on m.name <> p.name
           where columnName = 'UploadedFlag'
      order by tableName, columnName
      " That returns a list of the tables that Contain the 'UploadedFlag' Column.   I have testing the SQL on both HeidiSQL and SQLite Expert Personal applications and run and return what I expect, however when I run it through LabVIEW I get a 402860 Error with the description " SQLITE_ERROR(1): near "(": syntax error " 
      If anyone could point me in the direction of the syntax error then I would be very greatful.
      Regards
      Mark
    • By drjdpowell
      For comment, here is a beta version of the next SQLite Library release (1.11).   It has a significant new feature: a "Parameter(s)" input to the "Execute SQL" functions.  This can be a single parameter or a cluster of multiple parameters.  It uses Variant functions and will not be as performant as explicitly preparing and binding a Statement object, but should be easier to code.
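      In non-LabVIEW terms, the convenience being added is plain parameter binding behind a single call; a minimal Python illustration:

```python
# One call that both takes parameters and returns results, instead of
# explicit prepare/bind/step.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE m (name TEXT, value REAL)")
con.execute("INSERT INTO m VALUES (?, ?)", ("temp", 21.5))   # a 'cluster' of two parameters
hot = con.execute("SELECT name FROM m WHERE value > ?", (20.0,)).fetchall()
```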

       
      drjdpowell_lib_sqlite_labview-1.11.0.86.vip


