Accessing MySQL from an RT VxWorks target?


I am looking to see if there is a way to access a MySQL server from an RT target. The short answer seems to be no.

There is an article on the NI website describing a method where you send files to a Windows or Linux machine, which then talks to whatever relational database you want. I would rather not do this.

Another solution is running SQLite on the target itself. This does not really work for me either as there are many RT targets whose data needs to be pooled.

The brute-force solution appears to be porting the C-based MySQL client API to VxWorks and calling it from a CIN node (does that even work on RT?), or porting the client library directly to LabVIEW.

I am hoping that someone else has something less painful to offer.

I wrote an article about this roughly six months ago. Check it out here:

http://zone.ni.com/devzone/cda/epd/p/id/6287

On top of that, another systems engineer figured out a way to make a direct plugin connection to MySQL. It hasn't been added to the article yet, but hopefully it will be in the future. I alerted him to this post, so hopefully he will reply in the next week with some example code.

The real answer is "yes" - with the right code.

I just posted some code on the NI Community that I recently developed to connect to a MySQL database using native LabVIEW all the way down to the TCP/IP calls. It was specifically meant for RT targets.

I have not had time to do much testing (especially on RT targets) and your post made me decide to just put up the code in its current state. So your feedback would be very helpful.
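For anyone curious what the VIs actually have to do on the wire, here is a rough sketch in Python (a block diagram can't be pasted as text, so this is not the posted code; the function names and host are placeholders): every MySQL client/server packet starts with a 3-byte little-endian payload length and a 1-byte sequence number, followed by the payload.

```python
import socket

def read_packet(sock):
    """Read one MySQL protocol packet: 3-byte LE length, 1-byte sequence id, then payload."""
    header = _read_exact(sock, 4)
    length = int.from_bytes(header[0:3], "little")   # 3-byte payload length
    seq_id = header[3]                               # 1-byte sequence number
    return seq_id, _read_exact(sock, length)

def _read_exact(sock, n):
    """Loop on recv() until exactly n bytes arrive; TCP may split a packet across reads."""
    buf = bytearray()
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("server closed the connection")
        buf.extend(chunk)
    return bytes(buf)

# Usage: the first packet after connecting is the server's handshake greeting.
# sock = socket.create_connection(("my-db-host", 3306))   # placeholder host
# seq, greeting = read_packet(sock)
# print(greeting[1:].split(b"\x00", 1)[0])                # null-terminated server version
```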

For queries with no or small sets of returned data, expect a 10-20% improvement in processing time over the LabVIEW Database Connectivity Toolkit, since this example does not have the additional overhead of sending data through ODBC. However, large returned datasets will take significantly longer (10x+) than with the LabVIEW Database Connectivity Toolkit.

What qualifies as a large or small recordset?

Here are some results from a quick test. I have a MySQL database running in a VMware image (Linux) using a bridged virtual network port on the same development computer, so Ethernet latencies would be greater for a remote server. When selecting 2 rows and 2 columns of data, the LV example takes ~0.6 ms (averaged over 10k cycles) to query and return; the LV Database Connectivity Toolkit takes ~0.7 ms. When selecting 1000 rows and 2 columns, the LV example takes 516 ms (averaged over 10 cycles), whereas the Database Toolkit takes 18 ms. As you can see, there is significant room for improvement.

On a cRIO-9074 controller, a query of 2 rows and 2 columns takes ~6 ms.

Thanks for the benchmark. Do you have any direction on where optimizations could start? I haven't worked directly with the MySQL protocol before, but I suppose reading through some of it would give me some insight. I presume the "Row" case of the _return_packet VI is where you could use some pre-allocation. Maybe if I get some free time tonight I'll start playing with this.

The attached picture shows where the bottleneck is: I'm building an array in a loop (a no-no), but I have not found anything in the returned MySQL header that says how much memory needs to be allocated for the incoming result data. My initial thought is to allocate a "chunk" at a time, perhaps 1000 rows (the header info does give the number of columns in the data), then return just the portion that was filled, and continue allocating "chunks" if more is needed. But then again, the data is returned as a string, so I don't know whether that will allocate efficiently. I don't have much time to spend on it right now; I just got it working and am being pulled onto another project. Any ideas would be great.

[Attached image: post-585-127005437622_thumb.png]
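To sketch that chunk idea in text form (Python stands in for the block diagram here; the helper name and the 1000-row chunk size are placeholders, not the actual VI): grow the result array a whole chunk at a time instead of appending one row per loop iteration, then trim the unused tail at the end.

```python
CHUNK_ROWS = 1000   # arbitrary chunk size; tune against typical result sizes

def collect_rows(next_row):
    """next_row() returns one parsed row (list of strings) or None when the result set ends."""
    rows = [None] * CHUNK_ROWS          # pre-allocate the first chunk
    count = 0
    while True:
        row = next_row()
        if row is None:
            break
        if count == len(rows):          # chunk full: grow by another whole chunk
            rows.extend([None] * CHUNK_ROWS)
        rows[count] = row
        count += 1
    return rows[:count]                 # hand back only the filled portion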

I'm pretty sure the MySQL protocol also supports transmission of binary data, which quite likely would speed up transmission, and possibly also the conversion to variants, considerably, but it would make the query interpretation a whole lot more complicated to program.
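Roughly what that difference looks like (a Python sketch for illustration only, not anything from the posted library): in the text protocol every cell arrives as a length-encoded string that still has to be converted, while the binary protocol used by prepared statements sends typed values that need no string conversion.

```python
import struct

def read_lenenc_string(buf, pos):
    """Text-protocol cell: a length-encoded integer followed by that many bytes of text."""
    first = buf[pos]
    if first == 0xFB:                    # NULL cell
        return None, pos + 1
    if first < 0xFB:                     # 1-byte length
        length, pos = first, pos + 1
    elif first == 0xFC:                  # 2-byte length follows
        length, pos = int.from_bytes(buf[pos + 1:pos + 3], "little"), pos + 3
    elif first == 0xFD:                  # 3-byte length follows
        length, pos = int.from_bytes(buf[pos + 1:pos + 4], "little"), pos + 4
    else:                                # 0xFE: 8-byte length follows
        length, pos = int.from_bytes(buf[pos + 1:pos + 9], "little"), pos + 9
    return buf[pos:pos + length].decode("utf-8"), pos + length

def read_binary_int32(buf, pos):
    """Binary-protocol INT cell: four little-endian bytes, no string conversion needed."""
    return struct.unpack_from("<i", buf, pos)[0], pos + 4
```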

This is a great set of code for logging data from a cRIO to a MySQL database. I am running a 99-field insert query in a 250 ms timed loop, and the query only takes 2 ms to execute.

Glad to hear about the fast inserts; not surprising, since there is not much code behind sending a query out. The cost is in parsing the returned data.

For parsing, I found that removing the conversion to variant and building a 2D string output (vs. a 2D variant output) reduces the parsing time by 5x (variants have a lot of overhead). But it is still 5x+ slower than the LV Database Connectivity Toolkit for parsing large datasets. I originally thought the variant output would be natural for those already using the LV toolkit, which uses a 2D variant output in its API, but for the speed, it's not worth it.

I played around a little with pre-allocating a byte array, building up a flattened string format, and converting to a 2D string array after parsing is complete; that speeds things up a little more than the 2D string array solution (about 10%), but I need to smooth out my code before posting that solution. Not sure if it's worth it.
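As a text-form sketch of that flattened-buffer idea (Python, with made-up delimiters; not the actual VI code): accumulate every parsed cell into one growable buffer during parsing, then split it into a 2D string array once at the end.

```python
CELL_SEP = b"\x1f"   # made-up cell delimiter (ASCII unit separator)
ROW_SEP = b"\x1e"    # made-up row delimiter (ASCII record separator)

def append_row(flat, row):
    """Append one parsed row of strings to a single growable byte buffer."""
    flat.extend(CELL_SEP.join(cell.encode("utf-8") for cell in row))
    flat.extend(ROW_SEP)

def flat_to_2d(flat):
    """After parsing is done, split the flat buffer back into a 2D array of strings."""
    return [
        [cell.decode("utf-8") for cell in row.split(CELL_SEP)]
        for row in bytes(flat).split(ROW_SEP) if row
    ]

# Usage sketch:
# flat = bytearray(); append_row(flat, ["42", "hello"]); table = flat_to_2d(flat)
```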
