
Looking for options to speed up my usage of SQLite


Recommended Posts

Currently, my vision software compresses each image (subtracts the background image, then counts the zeros) and writes the result to a table in an SQLite file. Since we want to speed up the process, I now write the entire raw image to a temporary table instead; at the end of the test I read that table back, do the compression, write to the actual table, and then drop the temporary table. This also takes a lot of time, even with the journal mode set to MEMORY. I think the issue is that I split the processing into 4 separate modules:

  1. Select from the temp table -> output array
  2. Compress the output array from step 1
  3. Insert the compressed data from step 2 into the actual table
  4. Drop the temp table

I am looking for a way to combine these steps to speed things up, for example selecting and deleting a row from the temp table in the same query, so that dropping the table at the end is faster. Currently, dropping the table takes significant time, and our raw data can be up to 3 GB. I don't know how to combine the queries, though, and I have read that SQLite does not support this, so I am also looking for advice on how to make the whole process more efficient. I have thought about using a queue as well, but I would like some input before I change anything.
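To make the idea concrete, here is a rough sketch of the combined select-and-delete I have in mind, written in Python/sqlite3 rather than LabVIEW. The table and column names (temp_images, images, frame_id, image) are placeholders, zlib stands in for my real compression step, and I am not sure whether the SQLite version I use even supports the DELETE ... RETURNING form (it seems to need 3.35 or newer), so this only shows the intent:

    import sqlite3, zlib

    conn = sqlite3.connect("test_data.db")
    conn.execute("PRAGMA journal_mode=MEMORY")    # same journal mode as above

    # Hypothetical schema: a staging table of raw frames and the final table of compressed frames.
    conn.execute("CREATE TABLE IF NOT EXISTS temp_images (frame_id INTEGER PRIMARY KEY, image BLOB)")
    conn.execute("CREATE TABLE IF NOT EXISTS images (frame_id INTEGER PRIMARY KEY, image BLOB)")

    # Read and delete every staged row in one statement (needs SQLite 3.35 or newer).
    rows = conn.execute("DELETE FROM temp_images RETURNING frame_id, image").fetchall()
    for frame_id, raw in rows:
        compressed = zlib.compress(raw)           # stand-in for the background-subtract / zero-count step
        conn.execute("INSERT INTO images (frame_id, image) VALUES (?, ?)", (frame_id, compressed))
    conn.commit()

If something like that worked, the temp table would already be empty by the time it is dropped, which is what I am hoping would make the drop cheap.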

Thank you in advance.


Save the images as files and just index them in the database, along with any metadata you need.

As a rule of thumb, you only store data that is searchable in databases. Images are not searchable data; only their names are.

Doing it this way also enables you to process multiple images simultaneously, in parallel, and just update the metadata in the DB as you process them.
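A minimal sketch of what I mean, using Python and sqlite3 for brevity. The frames table, the file naming and the particles column are made-up examples; the point is that only the searchable metadata and the file path go into the DB, while the pixels go straight to disk:

    import sqlite3
    from pathlib import Path

    db = sqlite3.connect("test_data.db")
    db.execute("""CREATE TABLE IF NOT EXISTS frames (
                      frame_id  INTEGER PRIMARY KEY,
                      path      TEXT,      -- where the raw image lives on disk
                      particles INTEGER    -- example of searchable metadata
                  )""")

    def save_frame(frame_id: int, pixels: bytes, particles: int, folder: str = "frames") -> None:
        """Write the raw pixels straight to a file and only index them in the database."""
        Path(folder).mkdir(exist_ok=True)
        path = Path(folder) / f"{frame_id:06d}.bin"
        path.write_bytes(pixels)             # fast sequential write, no DB involved
        db.execute("INSERT INTO frames (frame_id, path, particles) VALUES (?, ?, ?)",
                   (frame_id, str(path), particles))
        db.commit()

Each image is then an independent file, so several workers can compress them in parallel later and just update their rows when done.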


The reason I save these images to a database is that at the end of the test I need to collect the file as test data for post-analysis. I am considering your suggestion of temporarily saving the original images, then compressing them and writing them to the database at the end, but I am also weighing the cost of saving these images to files and reading them back later. I hope to improve the speed from 100 FPS to 150 FPS or more. In your experience, do you think I can achieve that speed? The software runs on an NI Compact Vision Controller (CVS).

Edited by Thang Nguyen

I note that you haven't given any performance numbers.  If I were looking at slow code with four steps, I would have learned how long each step roughly took within a few minutes**.  It is not uncommon in such cases for one step to be the bottleneck, taking significantly longer than all the other steps put together.  In that case, it does not help to try to do the steps in parallel; you should instead try to improve the performance of that one step.

I will withhold any other suggestions, as I think you are possibly just digging a hole for yourself by layering on "performance improvements" without measuring performance.

**Using the "Quick Timer" from "Cyclic Table Probes".
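The measurement itself can be as crude as a timer around each module; for example, in Python with placeholder step names (this is just the general idea, not the Quick Timer itself):

    import time

    def timed(label, func, *args):
        """Run one pipeline step and report roughly how long it took."""
        t0 = time.perf_counter()
        result = func(*args)
        print(f"{label}: {time.perf_counter() - t0:.3f} s")
        return result

    # Example usage with hypothetical step functions:
    # raw    = timed("Read temp table", read_temp_table)
    # packed = timed("Compress", compress_all, raw)
    # timed("Write actual table", write_actual_table, packed)
    # timed("Drop temp table", drop_temp_table)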

10 hours ago, Thang Nguyen said:

The reason I save these images to a database is that at the end of the test I need to collect the file as test data for post-analysis. I am considering your suggestion of temporarily saving the original images, then compressing them and writing them to the database at the end, but I am also weighing the cost of saving these images to files and reading them back later. I hope to improve the speed from 100 FPS to 150 FPS or more. In your experience, do you think I can achieve that speed? The software runs on an NI Compact Vision Controller (CVS).

No idea.

If you save the files external to the DB, then it mainly depends on the disk speed and the resolution/format you are saving with. You've only given a rough resulting file size and a frame rate, with no indication of how many seconds' worth that is, so the MB per frame per second can't be calculated. Streaming to disk is orders of magnitude faster than streaming to a DB, though, and the faster the disk, the faster you can stream.

If you just want the convenience of a single file for offline post-processing, then save the images to a directory and zip it all up at the end along with the DB metadata. If you want to be fancy, you can add them to the archive individually in the background as they are created, while the next recording is acquired (assuming you have enough oomph in the processor).
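The zip-at-the-end part is trivial; something along these lines, using Python's zipfile purely as an illustration (file and folder names are hypothetical):

    import zipfile
    from pathlib import Path

    def bundle_test(folder: str = "frames", db_file: str = "test_data.db",
                    out: str = "test_run.zip") -> None:
        """Bundle the raw frame files and the metadata DB into one archive for offline analysis."""
        with zipfile.ZipFile(out, "w", compression=zipfile.ZIP_DEFLATED) as z:
            z.write(db_file)                       # the searchable metadata
            for f in sorted(Path(folder).glob("*.bin")):
                z.write(f)                         # each frame file

The fancier background variant just adds each file to the open archive as it appears instead of doing it all at once at the end.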


I would like to provide more information regarding my data:

The image array size is 300,000 pixel * 8

One test has about 5528 images, plus a particle measurement array for each of them.

Read Temp Table Time: 12.810000 sec

Compress Image Time: 54.924000 sec 

Write Actual Table Time: 1.619000 sec (not sure why this is so fast)

Drop Temp Table Time: 265.688000 sec

Originally, I also had an index table with a foreign key referencing the image table, so I had to create a temp index table referencing the temp image table as well. The times above cover the image table together with its index table.



Best to state performance numbers per operation.  I assume these are for 5528 images, so the "Drop Table" is roughly 50 ms each time?  Try eliminating the Drop Table and instead just deleting the rows.  If that works, then your remaining dominant step is the ~10 ms/image for Compression.

I think your initial mistake was to go, "Since we want to speed up the process, <we do extra steps and make things extra complicated and in parallel modules in hopes that will be faster>."  Better to have said, "Since we want to speed up the process, we will make a simple-as-possible prototype VI that we can benchmark."  That prototype would be a simple loop that gets data, compresses, and saves to db.
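For reference, the kind of prototype I mean, stripped to the bone; a Python/sqlite3 sketch with a dummy acquire() and zlib standing in for the real camera and compression:

    import sqlite3, time, zlib

    def acquire() -> bytes:
        return bytes(300_000)              # dummy 300,000-pixel frame

    def compress(raw: bytes) -> bytes:
        return zlib.compress(raw)          # stand-in for the background-subtract / zero-count step

    db = sqlite3.connect("prototype.db")
    db.execute("CREATE TABLE IF NOT EXISTS images (frame_id INTEGER PRIMARY KEY, image BLOB)")

    n = 500
    t0 = time.perf_counter()
    for i in range(n):
        db.execute("INSERT INTO images (frame_id, image) VALUES (?, ?)",
                   (i, compress(acquire())))
    db.commit()
    print(f"{n / (time.perf_counter() - t0):.1f} frames/s")

Benchmark that, then add complications one at a time and re-measure; anything that doesn't move the number gets thrown out.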


David,

Currently, we already have that simple loop, and it runs at 10 ms per image, i.e. 100 FPS. We are looking for a way to get to 150 FPS. The camera can run at that speed, but the bottleneck is the compression step. That is why I would like to do the compression at the end of the test instead. In your experience, do you think saving each image separately to a file would be faster than writing it into the database? I could then read, compress, and write to the SQLite database at the end of the test.

Thank you very much!



Saving your images directly in a binary file would probably be the fastest way to save.  It's not the best format for long-term storage, but if you are only keeping them temporarily so they can be read and compressed later, then that doesn't matter.

I would first consider your Compression step; can you make that any faster?  You only need it 33% faster to get 150 FPS.  Is this your own compression algorithm?  How much CPU does it use when it is running?  If it only uses one CPU, then there are parallelization options (such as compressing multiple images in parallel).
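For example, if the algorithm only loads one core, several images can be compressed at once with a worker pool; a rough Python sketch, where zlib is only a placeholder for your actual algorithm:

    import zlib
    from concurrent.futures import ProcessPoolExecutor

    def compress(raw: bytes) -> bytes:
        """Placeholder for the real background-subtract / zero-count compression."""
        return zlib.compress(raw)

    if __name__ == "__main__":
        frames = [bytes(300_000) for _ in range(64)]      # dummy frames
        with ProcessPoolExecutor() as pool:               # one worker per CPU core by default
            compressed = list(pool.map(compress, frames, chunksize=8))
        print(f"{len(compressed)} frames compressed in parallel")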

BTW: do you really have 8-byte pixels?

Edited by drjdpowell
  • 2 weeks later...

A stream of pictures at 30 frames per second or higher is best handled as.....wait for it.....video. I'm not aware of any software that could stream raw video into a database. It seems kind of crazy to even try that.

My best advice is to start thinking about this as a video problem that requires video compression and video codecs. Those video codecs could be either hardware or software based.

If each of your pictures is significantly different from the last, then you would want to use what's known as an "intra-frame" video codec. This type of codec does all video compression on a single frame/picture and does not rely on information from surrounding pictures/frames.
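Just as an illustration of the intra-frame idea (not a claim that this exact library is the right fit for the CVS), Motion-JPEG is a common intra-frame codec; with OpenCV and made-up 600 x 500 8-bit frames it looks roughly like this:

    import cv2
    import numpy as np

    w, h = 600, 500                                           # hypothetical frame size (300,000 pixels)
    fourcc = cv2.VideoWriter_fourcc(*"MJPG")                  # Motion-JPEG: every frame compressed on its own
    writer = cv2.VideoWriter("test_run.avi", fourcc, 150.0, (w, h))

    for _ in range(150):
        grey = np.zeros((h, w), dtype=np.uint8)               # dummy 8-bit greyscale frame
        writer.write(cv2.cvtColor(grey, cv2.COLOR_GRAY2BGR))  # the writer expects 3-channel frames by default

    writer.release()

A hardware-accelerated codec would do the same job with far less CPU, if the CVS has one available.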

Edited by Reds
