SQLite for OpenG


Recommended Posts

Well, this has just become a lot more relevant for me as a customer just emailed me this: 

 

[Attachment: 1097 exception.png]

 

First time I’ve seen it.  I use Windows XP while the client uses Windows 7; I wonder if 7 is more unforgiving of “reading someone else’s memory”?

Interesting query. Is the -- comment valid SQL syntax?

 

Could a badly formed query be throwing the exception?

Link to comment
First time I’ve seen it.  I use Windows XP while the client uses Windows 7; I wonder if 7 is more unforgiving of “reading someone else’s memory”?

 

(So far) I have only seen the error on a Windows XP machine with a built exe. I haven't programmed in C for a while now, but I seem to recall getting errors when accessing memory that was not allocated to my application. I cannot recall if both reads and writes did it or only writes. However, I can believe that these days Windows watches to make sure an application only uses the memory allocated to it (reading or writing).

 

Did your customer find a reproducible way to cause the error?

  • Like 1
Link to comment
Interesting query. Is the -- comment valid SQL syntax?

 

Could a badly formed query be throwing the exception?

“Badly formed query”?  Excuse me?!?   :)    Yes, that comment form is valid SQL.

Beta version, seems to be working:

 

[Attached image]

 

It does the LabVIEW string to C string conversion explicitly, allowing the difference between the two pointers (statement start and "tail") to be used to calculate the number of bytes of remaining SQL.
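In C-API terms, this is the pzTail mechanism of sqlite3_prepare_v2(); a minimal sketch of the idea (function and variable names are mine, not the library's):

#include <sqlite3.h>
#include <string.h>

/* Prepare the first statement of a multi-statement SQL string and report
   how many bytes of SQL remain after it, via the pointer difference. */
static int prepare_first(sqlite3 *db, const char *sql,
                         sqlite3_stmt **stmt, int *remaining)
{
    const char *tail = NULL;   /* SQLite sets this to just past statement #1 */
    int rc = sqlite3_prepare_v2(db, sql, -1, stmt, &tail);
    if (rc == SQLITE_OK && tail != NULL)
        *remaining = (int)((sql + strlen(sql)) - tail);
    return rc;
}

Because the SQL now lives in a NUL-terminated C buffer allocated for the purpose, SQLite never reads past the end of LabVIEW-owned memory.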

 

Link to comment
Can “reading someone else’s memory” cause an exception?  If so, I’m surprised I’ve never had this error myself.  If someone can confirm this as the problem, then I can rewrite “Prepare” to convert the LabVIEW string to a pointer first, then do things by pointer.

 

Reading someone else's memory is limited by the memory protection system. Here is a link to an MSDN page that briefly describes the topic: http://msdn.microsoft.com/en-us/library/windows/desktop/aa366785%28v=vs.85%29.aspx

 

Whenever you try to copy memory that hasn't been allocated, there is a chance of getting an access violation exception. Whether an exception occurs or not depends on the memory that is accessed. For example:

 

Assume you've allocated an array of 512 bytes and one of 256 bytes within your DLL. Memory is allocated in contiguous blocks, so suppose the two arrays sit in one block of 768 bytes. Now you accidentally read 712 instead of 512 bytes of the first array, but no error occurs, because the memory is within the same block. The result would just be an array of the 512 bytes of the first array plus 200 bytes of the second. Next you try to read 1024 bytes and get an access violation, because that exceeds the number of allocated bytes.
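A minimal C sketch of that scenario (deliberately incorrect code, for illustration only; whether the bad calls actually fault depends on the allocator and page layout):

#include <stdlib.h>
#include <string.h>

int main(void)
{
    char *a = malloc(512);         /* the "first array"  */
    char *b = malloc(256);         /* the "second array" */
    static char dest[1024];

    memcpy(dest, a, 512);    /* fine: stays inside a's allocation */
    memcpy(dest, a, 712);    /* undefined behaviour: may silently pick up
                                200 bytes of b if the blocks are adjacent */
    memcpy(dest, a, 1024);   /* undefined behaviour: may raise an access
                                violation once the read leaves the
                                allocated region */
    free(a);
    free(b);
    return 0;
}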

 

This is just a simplified example, since there is much more to it, but it should cover the basics ( and I can't explain it much better :rolleyes: ). Long story short: never access memory that is not properly allocated!

 

As you're calling a DLL, the allocated memory is in the same address space as the LabVIEW IDE / Runtime, so you may be trying to access data of some LabVIEW-internal memory blocks, which are most likely protected. I also assume that LabVIEW enforces some memory protection over all DLLs that are called in its scope.

 

The result is what drjdpowell just posted:

 

Well, this has just become a lot more relevant for me as a customer just emailed me this: 

 

[Attachment: 1097 exception.png]

 

First time I’ve seen it.  I use Windows XP while the client uses Windows 7; I wonder if 7 is more unforgiving of “reading someone else’s memory”?

 

Yes, Microsoft is a bitch when it comes to protecting precious data. The memory management standards are now insane compared to XP.

 

In most cases, if you set the error checking options to maximum, the LabVIEW IDE will show violation errors if memory is incorrectly accessed. That might help you pinpoint the issue. Of course there is always a chance of crashing LabVIEW because of incorrectly accessed memory... :wacko:

 

Wow, I've just written an article for questions that could have been answered with two 'yes'es :)

  • Like 1
Link to comment

Attached is a new VIPM package (beta) with modifications to how “Prepare” works to avoid reading outside the allocated memory.  I do not know if this will solve the problem, but as an added benefit the new method will give improved performance of “Execute SQL” for multiple SQL statements.  Please give this a try and report if it eliminates the 1097 errors.

 

drjdpowell_lib_sqlite_labview-1.2.0.28.vip

 

 

Link to comment
Yes, Microsoft is a bitch when it comes to protecting precious data. The memory management standards are now insane compared to XP.

Actually, it turns out the client was running XP when they had the error.  But my XP runs in a virtual machine under Parallels on a MacBook, so perhaps that changes the memory protection, and that's why I have never had this error in testing.

Link to comment
  • 3 weeks later...

Hello,

 

I haven't seen any errors until now. But I'll keep watching.

 

One question:

What is the most efficient way to execute the following SQL statements within a loop:

 

// Loop Start

SELECT a, b FROM table1

INSERT INTO table2 (c, d) VALUES (1, 2)

SELECT e, f FROM table3

INSERT INTO table4 (g, h) VALUES (3, 4)

SELECT i FROM table4 ORDER BY a ASC LIMIT 20

DELETE FROM table4 WHERE j < 5

// Loop End

 

Is there a possibility to have 4 prepared statements?

 

Until now I haven't used prepared statements. I only use "Execute SQL.vi" and build the statement beforehand with string functions (Format Into String). But "Execute SQL.vi" is slow, so I'd like to optimize it.
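For reference, the prepared-statement pattern looks roughly like this at the C-API level (a minimal sketch; the helper name and integer column types are assumptions, reusing table2 from above):

#include <sqlite3.h>

/* Hypothetical helper: insert n (c, d) pairs, reusing one prepared statement. */
static int insert_pairs(sqlite3 *db, const int *c, const int *d, int n)
{
    sqlite3_stmt *ins = NULL;
    int rc = sqlite3_prepare_v2(db,
        "INSERT INTO table2 (c, d) VALUES (?, ?)", -1, &ins, NULL);
    if (rc != SQLITE_OK) return rc;

    for (int i = 0; i < n; i++) {
        sqlite3_bind_int(ins, 1, c[i]);   /* parameters are 1-indexed */
        sqlite3_bind_int(ins, 2, d[i]);
        sqlite3_step(ins);                /* SQLITE_DONE for a successful INSERT */
        sqlite3_reset(ins);               /* ready for the next iteration */
    }
    return sqlite3_finalize(ins);         /* finalize once, after the loop */
}

Keeping one statement handle per query alive for the whole loop is exactly how the API is meant to be used.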

 

Regards

Matze

Edited by Matze
Link to comment

That sounds great.

Is it possible to bind column names, like "INSERT INTO table (?, ?) VALUES (?, ?)"?

 

That would have the advantage that the column order would not be relevant. But I don't know if this is slower.

My 1st column is an auto-increment "id", so I can't supply a value for it in my statement.

Edited by Matze
Link to comment
  • 4 weeks later...

Hello, happy new year!

 

My customer has produced thousands of parts over the last few weeks, and for every part I do about 3 SQL statements.

No errors until now. It seems to work perfectly with the updated version of your library! Great work!

 

Regards

Matze

  • Like 1
Link to comment
  • 1 year later...

Hi, why is my software so slow? It needs about 1 s to insert 10 records. Thanks, Boone

 

Most likely because you are not wrapping your INSERTs in a “BEGIN” and “COMMIT”, so they are all separate transactions.   Each transaction requires a verified write to disk (to ensure no corruption if the computer loses power), and as hard disks only spin at 50-100 Hz you can only get 10-30 transactions/second.   Execute BEGIN before the INSERTs and COMMIT after, and you have only one transaction.  See the Examples (search SQLite in the Example Finder) to see how this is done.
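In C-API terms the fix is one transaction around the whole batch; a minimal sketch (helper name mine, error handling elided):

#include <sqlite3.h>

/* n inserts, one disk sync: wrap the whole batch in a single transaction. */
static void insert_batch(sqlite3 *db, sqlite3_stmt *ins, int n)
{
    sqlite3_exec(db, "BEGIN", NULL, NULL, NULL);
    for (int i = 0; i < n; i++) {
        sqlite3_bind_int(ins, 1, i);   /* placeholder value, for illustration */
        sqlite3_step(ins);
        sqlite3_reset(ins);
    }
    sqlite3_exec(db, "COMMIT", NULL, NULL, NULL);   /* the only verified write */
}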

 

In the latest version of the package (coming out soon) I’ve added “BEGIN” and “COMMIT” subVIs to make this easier.

 

— James

 

Note for other readers: this package is now available on the LabVIEW Tools Network as “SQLite Library”.

Link to comment
  • 1 year later...

Hi James,

 

Thanks a lot for this library. This is wonderful.

 

I am using SQLite to store measurement data. My data is 1D DBL; each waveform is about 5 s at a 48 kHz sampling rate, which makes each waveform ~2 MB. I flatten the data to string, replace any special characters (like NUL and '), and save the data as TEXT. All that is fine, but if I want to retrieve say 10 waveforms, it takes about 350-400 ms to return the data, which is not acceptable, as the code will be used on a production line and time is very critical there. If I use action engines to store the waveforms instead, retrieving 10 waveforms only takes about 20 ms on the same machine. I don't want to use action engines because the amount of data to store is going to be huge, and that's why I am inclined to use SQLite.

 

Is it normal for SQLite to take about 400 ms to retrieve 20 MB of data?

 

Thanks,

Ritesh

Link to comment

Hi James,

 

Thanks a lot for this library. This is wonderful.

 

I am using SQLite to store measurement data. My data is 1D DBL; each waveform is about 5 s at a 48 kHz sampling rate, which makes each waveform ~2 MB. I flatten the data to string, replace any special characters (like NUL and '), and save the data as TEXT. All that is fine, but if I want to retrieve say 10 waveforms, it takes about 350-400 ms to return the data, which is not acceptable, as the code will be used on a production line and time is very critical there. If I use action engines to store the waveforms instead, retrieving 10 waveforms only takes about 20 ms on the same machine. I don't want to use action engines because the amount of data to store is going to be huge, and that's why I am inclined to use SQLite.

 

Is it normal for SQLite to take about 400 ms to retrieve 20 MB of data?

 

Thanks,

Ritesh

 

 

I’m not sure SQLite is the best choice here.   Large waveforms are something TDMS is built for.  SQLite is better for structured data where one wants powerful filtering ability to pull out only the small subset of information one needs.  TDMS already has useful features like the ability to store scaled U16 values rather than 8-byte DBLs (I’ve done this in SQLite, but it’s more work).  You could mix the two approaches, storing metadata about your waveforms in SQLite, along with a filename and waveform name pointing to the TDMS file that holds the waveform itself.

 

BTW, if you store your flattened waveforms as BLOBs, then you don’t need to replace any special characters.  But, as I said, once one is doing this one has lost the big benefit of SQLite, the ability to query your data, so I think you should consider TDMS.

Link to comment

I’m not sure SQLite is the best choice here.   Large waveforms are something TDMS is built for.  SQLite is better for structured data where one wants powerful filtering ability to pull out only the small subset of information one needs.  TDMS already has useful features like the ability to store scaled U16 values rather than 8-byte DBLs (I’ve done this in SQLite, but it’s more work).  You could mix the two approaches, storing metadata about your waveforms in SQLite, along with a filename and waveform name pointing to the TDMS file that holds the waveform itself.

 

BTW, if you store your flattened waveforms as BLOBs, then you don’t need to replace any special characters.  But, as I said, once one is doing this one has lost the big benefit of SQLite, the ability to query your data, so I think you should consider TDMS.

 

Initially, I decided to use TDMS for storing waveforms, but TDMS isn't a great choice if the user needs the ability to overwrite or delete data. My application requires a lot of overwriting. There are ways to delete/overwrite data in TDMS, but they require creating a new TDMS file with the new contents. As the data in my application can go beyond 100 MB, creating a new file of that size takes a long time.

 

I have not used BLOBs with your library. Do you have any example of how to use them? I tried searching online but couldn't find anything. Maybe BLOB retrievals are faster than TEXT. And I don't need to query anything on the measurement data itself, so I don't think SQLite is a bad choice at all. Each measurement is associated with a unique id, so I query the data using this id and it works great. My only issue with SQLite right now is that it takes a long time to return measurement data.

Link to comment

BLOBs are just a binary data type, the same as TEXT but without the need to treat binary data for special characters.  Use “Bind BLOB” when your LabVIEW string isn’t actually text.
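Under the hood this corresponds to sqlite3_bind_blob(), which takes an explicit byte count, so NUL bytes and quote characters in the data need no escaping; a minimal sketch (the helper name is mine):

#include <sqlite3.h>

/* Bind an arbitrary byte buffer to parameter 1 of a prepared statement. */
static int bind_waveform(sqlite3_stmt *stmt, const void *bytes, int n_bytes)
{
    /* SQLITE_TRANSIENT makes SQLite take its own copy of the buffer. */
    return sqlite3_bind_blob(stmt, 1, bytes, n_bytes, SQLITE_TRANSIENT);
}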

 

Have you considered just saving each waveform in a separate file, and storing the filename in SQLite?  Then overwriting is easy.  

Link to comment

I tried storing data as BLOBs in SQLite and I found no difference in retrieval speed.

 

I tried storing each waveform in a separate TDMS file, and retrieving data from it is much faster than SQLite (100 ms in TDMS compared to 400 ms in SQLite), but still 5 times slower than action engines (20 ms). I don't think that's acceptable either.

Link to comment

I tried storing data as BLOBs in SQLite and I found no difference in retrieval speed.

 

I tried storing each waveform in a separate TDMS file, and retrieving data from it is much faster than SQLite (100 ms in TDMS compared to 400 ms in SQLite), but still 5 times slower than action engines (20 ms). I don't think that's acceptable either.

 

Okay, this is getting a bit off-topic, as the discussion is about a specific problem which is not necessarily SQLite-related, so I guess this should be moved to a separate thread. drjdpowell already mentioned that SQLite is not the best solution if your data is not structured. TDMS, on the other hand, is made for graph data, but it creates index files in the process and stores data in a format readable by other applications (like Excel). That is what slows down your writing/reading speed.

 

As far as I understand, you want to store an exact copy of what you have in memory to disk in order to retrieve it at a later time. The most efficient way to do that is binary files. Binary files have no overhead: they don't index your data as TDMS files do, and they don't allow you to filter for specific items like an (SQLite) database. In fact the write/read speed is only limited by your hard drive, a limit that cannot be overcome. It works with any datatype and is similar to a BLOB.
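Translated out of LabVIEW into plain C, the raw-dump idea is nothing more than fwrite/fread with no framing at all; a minimal sketch (file name and sample count are arbitrary, error handling elided):

#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    size_t n = 500000;                      /* ~4 MB of DBL samples */
    double *out = calloc(n, sizeof *out);   /* data to write (all zeros here) */
    double *in  = malloc(n * sizeof *in);   /* buffer to read back into */

    FILE *f = fopen("waveform.bin", "wb");
    fwrite(out, sizeof *out, n, f);         /* write speed limited only by disk */
    fclose(f);

    f = fopen("waveform.bin", "rb");
    fread(in, sizeof *in, n, f);            /* read back the exact same bytes */
    fclose(f);

    free(out);
    free(in);
    return 0;
}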

 

The only thing to keep in mind is that binary files are useless if you don't know the exact datatype (same as BLOBs). But I guess for your project that is not an issue (you can always build a converter program if necessary).

 

So I created a little test VI to show the performance of binary files:

 

[Attached image: the binary-file test VI]

 

This VI creates a 400 MB file on the user's desktop.

 

It takes about 2 seconds to write all the data to disk and 250 ms to read it back into memory.

Now if I reduce the file size to 4 MB, it takes 12 ms to write and 2 ms to read.

Notice that the VI takes more time if the file already exists on disk (as it has to be deleted first).

 

Also note: I'm working with an SSD, so good old HDDs will obviously take more time.

  • Like 1
Link to comment
  • 1 year later...

Logman,

don't forget that immediately after writing a file, Windows will most likely have a complete copy of that file in RAM, so your read speed will definitely be affected by that unless you're somehow RAM-limited or explicitly flushing the cache.

Always approach read speed tests with care.  Often the first read will take longer than the second and subsequent reads due to OS file caching.

Just for completeness.
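For example, a minimal cache-aware read test (C11; it reuses the hypothetical waveform.bin from the sketch above, and the timings are illustrative only):

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

static double now_s(void)
{
    struct timespec ts;
    timespec_get(&ts, TIME_UTC);            /* wall-clock time, C11 */
    return ts.tv_sec + ts.tv_nsec / 1e9;
}

static double time_read(const char *path, char *buf, size_t n)
{
    double t0 = now_s();
    FILE *f = fopen(path, "rb");
    fread(buf, 1, n, f);
    fclose(f);
    return now_s() - t0;
}

int main(void)
{
    size_t n = 4u * 1024 * 1024;            /* matches the 4 MB case above */
    char *buf = malloc(n);
    /* The first read may hit the disk; the second is usually served
       straight from the OS file cache and comes back much faster. */
    printf("1st read: %.4f s\n", time_read("waveform.bin", buf, n));
    printf("2nd read: %.4f s\n", time_read("waveform.bin", buf, n));
    free(buf);
    return 0;
}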

Edited by shoneill
  • Like 1
Link to comment
