
Using SQL Databases and LabVIEW exe



Background:

I've been using LabVIEW for a few years for automation testing tasks, and until recently I saved my data to "[DescriptorA]\[DescriptorB]\[test_info].csv" files. A few months ago, a friend introduced me to the concept of relational databases. I've been really impressed by their response times, so I'm reworking my code, following the examples for the Database Connectivity Toolkit (DCT), to use "[test_info].mdb" with Microsoft Jet OLE DB as my provider.

However, I'm beginning to see the limitations of the DCT, namely:

  • No support for auto-incrementing primary keys
  • No support for foreign keys
  • Difficult to program stored procedures

and I'm sure a few more that I don't know yet.

Now I've switched over to architecting my database in MySQL Workbench. Suffice it to say I'm a bit out of my depth, and I have a few questions that I haven't seen covered in tutorials.

 Questions (General):

 Using Microsoft Jet OLE DB, I made a connection string "Data Source= C:\[Database]\[databasename.mdb]" in a .UDL file. However, the examples I've seen for connecting to MySQL databases use IP addresses and ports.

  1. Is a MySQL database still a file?
  2. If not, how do I put it on my networked server \\[servername\Database\[file]?
  3. If so, what file extensions exist for databases, and what are the implications of each? I know of .mdb, but are there others I could/should be using (in the way that .csv differs from .txt)?

 My peers, who have more work experience than me but no experience with databases, espouse a 2 GB limit on all files (I believe from the era of FAT16 disks). My current OLE DB database is about 200 MB in size, so 2 GB will likely never happen, but I'm curious:

  1. Do file size limits still apply to database files?
  2. If so, how does one have the giant databases that support major websites?

 Questions (LabVIEW Specific):

  1. I can install my [MainTestingVi.exe], which accesses the Jet OLE DB database, on a Windows 10 computer that is fresh out of the box. When I switch over to a MySQL database, are there any additional tools that I'll need to install as well?
  • No support for auto-incrementing primary keys
  • No support for foreign keys

These are part of the design of the database; I wouldn't expect any API-level tool to have "support" for them except as a direct SQL call. I would absolutely do what you're doing: use a GUI to design the database, and then access it through the API.
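For example, both features are ordinary SQL once you're past the DCT. A minimal sketch using Python's built-in sqlite3 module as a stand-in (the table names are made up; in MySQL the auto-increment column would be declared `id INT PRIMARY KEY AUTO_INCREMENT`, and foreign keys are enforced by default without the pragma):

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # in MySQL you'd connect to the server instead
conn.execute("PRAGMA foreign_keys = ON")  # SQLite needs this; MySQL/InnoDB enforces FKs by default

# Auto-incrementing primary key: each insert gets the next id automatically
conn.execute("""
    CREATE TABLE test_run (
        id   INTEGER PRIMARY KEY AUTOINCREMENT,
        name TEXT NOT NULL
    )""")

# Foreign key: measurement rows must reference an existing test_run
conn.execute("""
    CREATE TABLE measurement (
        id     INTEGER PRIMARY KEY AUTOINCREMENT,
        run_id INTEGER NOT NULL REFERENCES test_run(id),
        value  REAL
    )""")

conn.execute("INSERT INTO test_run (name) VALUES ('smoke test')")  # id assigned by the DB
run_id = conn.execute("SELECT id FROM test_run").fetchone()[0]
conn.execute("INSERT INTO measurement (run_id, value) VALUES (?, ?)", (run_id, 3.14))
```

Once the schema is in place, the database itself rejects orphaned rows; an insert with a nonexistent run_id fails instead of silently corrupting the data.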

MySQL Workbench is pretty powerful, but can be confusing. I've always used a tool called HeidiSQL for working with MySQL/MariaDB databases. It's nicer, in my opinion, for learning with.

Some other thoughts:

  • MySQL has a TCP interface, and there is a LabVIEW TCP-based connector for it (https://decibel.ni.com/content/docs/DOC-10453) which is faster for small interactions
  • The MySQL command line is a pain to use, but could be better for bulk imports (e.g. from a giant mass of old CSV files). As shown in the link, Heidi can help you.
  • PostgreSQL is becoming more and more popular (among other things, MySQL is now owned by Oracle and spent a while sort of languishing -- for my personal load of several TB of indexed data, Postgres performed significantly better out of the box than a somewhat optimized MySQL). If you decide to go this route, there are two libraries to consider (in addition to the Database Connectivity Toolkit).
  • What you described, having a bunch of old measurement data in CSV files and wanting to catalog them in a database-esque way for performance and ease of use, is literally the sales pitch of DIAdem. Like, almost verbatim.
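On the bulk-import point: whatever tool you use, the import loop itself is small. A sketch of pulling CSV rows into a table in one batched call, using Python's stdlib csv and sqlite3 as a stand-in for the real target database (the CSV content and table name are invented):

```python
import csv, io, sqlite3

# Hypothetical contents standing in for one of the old CSV files
csv_text = "timestamp,value\n2018-05-01T10:00,1.25\n2018-05-01T10:01,1.50\n"

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE measurement (timestamp TEXT, value REAL)")

# executemany over a reader is the whole import loop: one INSERT per CSV row
reader = csv.DictReader(io.StringIO(csv_text))
conn.executemany("INSERT INTO measurement VALUES (:timestamp, :value)", reader)
conn.commit()
```

For a real mass import you'd loop this over the directory of files; the single commit per file (rather than per row) is what keeps it fast.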

 

10 hours ago, ATE-ENGE said:

 Questions (General):

 Using Microsoft Jet OLE DB, I made a connection string "Data Source= C:\[Database]\[databasename.mdb]" in a .UDL file. However, the examples I've seen for connecting to MySQL databases use IP addresses and ports.

  1. Is a MySQL database still a file?
  2. If not, how do I put it on my networked server \\[servername\Database\[file]?
  3. If so, what file extensions exist for databases and what is the implication of each extension? I know of .mdb, but are there others I could/should be using (such as .csv's vs .txt's)

 My peers, who have more work experience than me but no experience with databases, espouse a 2 GB limit on all files (I believe from the era of FAT16 disks). My current OLE DB database is about 200 MB in size, so 2 GB will likely never happen, but I'm curious:

  1. Do file size limits still apply to database files?
  2. If so, how does one have the giant databases that support major websites?
  1. It may be 1 to N files depending on the configuration. Indices can sometimes be stored in separate files, and if you have a ton of data you would use partitioning to split the data up into different files based on parameters (e.g. where year(timestamp)=2018 use partition 1, where year=2017 use partition 2, etc.). You don't reference the files directly; you usually use a connection string formatted like this: https://www.connectionstrings.com/mysql/
  2. You cannot; you must have a server machine which runs the MySQL service. To the best of my knowledge, the only database you can put on a share drive and forget about is SQLite, and even they recommend against it. I had never used the MDB format before, but it looks like it is similarly accessible as a file.
  3. As with 2, you generally don't edit the files manually. You access the database through the server, which exposes everything through SQL.

 

  1. They do, but I think it's in the TB range. If you reach a single database file that is over a TB, you should learn about and use partitioning, which breaks it down into smaller files.
  2. Not sure, but I believe the truly giant websites like Google have their own database systems. More generally, they divide the workload across a large number of machines. As an example: https://en.wikipedia.org/wiki/Shard_(database_architecture)
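The core idea of sharding fits in a few lines: a stable hash decides which of N servers owns a given key, so every client routes the same record to the same machine. A toy sketch (the shard count and key format are invented for illustration):

```python
import hashlib

SHARD_COUNT = 4  # hypothetical number of database servers

def shard_for(serial_number: str) -> int:
    """Stable mapping from a record key to one of SHARD_COUNT databases.

    The same serial number always hashes to the same shard, so reads
    and writes for one unit always hit the same server.
    """
    digest = hashlib.md5(serial_number.encode()).hexdigest()
    return int(digest, 16) % SHARD_COUNT

# No single database has to hold (or serve) all the data; each server
# only sees the fraction of keys that hash to it.
```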

 

10 hours ago, ATE-ENGE said:

 Questions (LabVIEW Specific):

  1. I can install my [MainTestingVi.exe], which accesses the Jet OLE DB database, on a Windows 10 computer that is fresh out of the box. When I switch over to a MySQL database, are there any additional tools that I'll need to install as well?

You will need to install the database server somewhere. Assuming you've set up a server that's going to host your data, then you just need the client software. If you use the TCP-based connector mentioned above, that client software is just LabVIEW. However, that connector has no security implementation and gets bogged down with large data sets. If you want to use the Database Connectivity Toolkit, you'll need the ODBC connector, and perhaps to configure a data source as shown here, although you may be able to create a connection string at runtime.
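Creating the connection string at runtime is just string assembly. A sketch of a hypothetical helper that builds a DSN-less ODBC string; note the driver name is an assumption and must match whatever MySQL Connector/ODBC version is actually installed on the target machine (check the ODBC Data Source Administrator):

```python
def mysql_odbc_connection_string(host, database, user, password, port=3306):
    """Build a DSN-less ODBC connection string at runtime.

    The driver name below is an assumption; it must exactly match the
    driver registered on the machine running the executable.
    """
    return (
        "Driver={MySQL ODBC 8.0 Unicode Driver};"
        f"Server={host};Port={port};Database={database};"
        f"Uid={user};Pwd={password};"
    )

cs = mysql_odbc_connection_string("testserver", "results", "labuser", "secret")
```

Building the string in code (rather than a pre-configured DSN or .UDL file) means a fresh Windows 10 machine only needs the ODBC driver installed, not any per-machine data source configuration.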


Well, specifically it's the return data that's the issue, and the answer is you have to benchmark for your data set. I'd expect the crossover point to be <1 MB, but it's been a while since I tried it. For inserting data, fetching configuration, or asking the database to give you a processed result (avg of x over time period T), this works great; for grabbing data in bulk out of the DB, it's uselessly slow.
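Benchmarking the two access patterns is straightforward to script. A sketch using sqlite3 as a stand-in backend (the table and row count are invented): the aggregate query returns one number regardless of table size, while the bulk fetch's cost grows with the number of rows returned.

```python
import sqlite3, time

def time_query(conn, sql):
    """Crude benchmark: run a query, return (elapsed seconds, row count)."""
    start = time.perf_counter()
    rows = conn.execute(sql).fetchall()
    return time.perf_counter() - start, len(rows)

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (x REAL)")
conn.executemany("INSERT INTO t VALUES (?)", [(i * 0.5,) for i in range(10000)])
conn.commit()

agg_time, _ = time_query(conn, "SELECT avg(x) FROM t")  # processed result: tiny payload
bulk_time, n = time_query(conn, "SELECT x FROM t")      # bulk fetch: payload scales with rows
```

Run the same two queries against your real connector and data set to find where the crossover actually sits for you.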

On 5/21/2018 at 3:02 PM, ATE-ENGE said:
  1. Is a MySQL database still a file?
  2. If not, how do I put it on my networked server \\[servername\Database\[file]?
  3. If so, what file extensions exist for databases and what is the implication of each extension? I know of .mdb, but are there others I could/should be using (such as .csv's vs .txt's)
  1. No. It is a "service".
  2. It's a service, so you cannot put it on a drive; you have to install it and then communicate over TCP/IP
  3. See #1.

If you really want a file-based relational database, take a look at SQLite. SQLite supports DB files up to 140 terabytes; good luck finding a disk that size :) 2 GB partition sizes are only an issue on WinXP and FAT32 disks; modern OSes and disks are not a problem. Be warned, though: there are caveats to using SQLite on network shares. However, if the use case is configuration which is written rarely (and usually by one person), then it will work fine on a network share for reading from multiple applications. The locking issues mainly come into play when writing to the DB from multiple clients. Note also that this is not a very efficient way to access SQLite databases; it is an order of magnitude slower than local access.
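The read-mostly, shared-file scenario looks like this in practice; a sketch with Python's sqlite3 (the file path and table are invented; in the share-drive case the path would be a UNC path, and the `timeout` makes a writer wait briefly instead of failing if another client holds the lock):

```python
import os, sqlite3, tempfile

# Hypothetical path; in the share-drive scenario this would be a UNC path
db_path = os.path.join(tempfile.gettempdir(), "station_config.sqlite")
if os.path.exists(db_path):
    os.remove(db_path)

writer = sqlite3.connect(db_path, timeout=5.0)  # wait up to 5 s for another writer's lock
writer.execute("CREATE TABLE IF NOT EXISTS config (key TEXT PRIMARY KEY, value TEXT)")
writer.execute("INSERT OR REPLACE INTO config VALUES ('station', 'TS-01')")
writer.commit()

# A second connection plays the role of another application reading the same file
reader = sqlite3.connect(db_path)
station = reader.execute("SELECT value FROM config WHERE key = 'station'").fetchone()[0]
```

With one rare writer and many readers this pattern is fine; it is concurrent writers over a network filesystem where SQLite's locking caveats bite.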

If you are going to be logging data from multiple machines, then MySQL/PostgreSQL is the preferred route. I usually use SQLite and MySQL together: SQLite locally on each machine as a sort of "cache", and also so that the software continues to operate and doesn't lose data when the MySQL server is not available. In this way you get the speed and performance of SQLite in the application and the network-wide visibility of MySQL for exploitation. It also gives the machine the ability to work offline.
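The local-cache pattern can be sketched in a few lines. This is not Shaun's implementation, just an illustration of the idea using sqlite3, with a `synced` flag marking rows that still need to go upstream; `push` stands in for the real MySQL insert:

```python
import sqlite3

local = sqlite3.connect(":memory:")  # in practice, a file local to the test machine
local.execute("""
    CREATE TABLE log (
        id     INTEGER PRIMARY KEY AUTOINCREMENT,
        value  REAL,
        synced INTEGER DEFAULT 0
    )""")

def record(value):
    """Logging always succeeds locally, even if the MySQL server is down."""
    local.execute("INSERT INTO log (value) VALUES (?)", (value,))
    local.commit()

def flush_to_server(push):
    """Push unsynced rows upstream; `push` stands in for the real MySQL
    insert and may raise if the server is unreachable (rows stay unsynced)."""
    for row_id, value in local.execute(
            "SELECT id, value FROM log WHERE synced = 0").fetchall():
        push(value)  # would be an INSERT over the network
        local.execute("UPDATE log SET synced = 1 WHERE id = ?", (row_id,))
    local.commit()

record(1.5)
record(2.5)
sent = []
flush_to_server(sent.append)
```

If the server is down, `flush_to_server` simply fails partway and the unsynced rows are retried on the next call, which is what gives the offline capability.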

If you are going with MySQL, then it is worth talking with your IT department. They may be able to set it up and administer it for you, or provide a machine specifically for your needs. They usually prefer that to having a machine on their network not under their control, with network-wide visibility, and it will give you a good support route if you run into any difficulties.

Edited by ShaunR

Oh yeah, if you have performance needs writing directly to the database can be quite poor (depending on exactly how you have the database set up, the indices involved, the disks involved, etc). Rather than logging to sqlite as shaun does I just write my data to a binary file as fast as I can, and have a background service shove it into the database as fast as it can. Essentially using the disk as a queue.
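The disk-as-queue idea in miniature: the hot path only appends fixed-size binary records to a spool file, and a background pass later drains the file into the database. A sketch (the spool path and record format are invented; `insert` stands in for the real database write):

```python
import os, struct, tempfile

SPOOL = os.path.join(tempfile.gettempdir(), "measurement_spool.bin")  # hypothetical spool file
RECORD = struct.Struct("<d")  # one little-endian double per record
if os.path.exists(SPOOL):
    os.remove(SPOOL)

def log_fast(value):
    """Hot path: append raw bytes, no database round-trip."""
    with open(SPOOL, "ab") as f:
        f.write(RECORD.pack(value))

def drain(insert):
    """Background path: read everything spooled so far and hand each
    record to `insert` (a stand-in for the real database write)."""
    if not os.path.exists(SPOOL):
        return
    with open(SPOOL, "rb") as f:
        data = f.read()
    os.remove(SPOOL)
    for (value,) in RECORD.iter_unpack(data):
        insert(value)

log_fast(1.0)
log_fast(2.0)
queued = []
drain(queued.append)
```

The acquisition loop never blocks on the database; the spool file absorbs bursts and the background service catches up whenever the database allows.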

Edited by smithd


  • Similar Content

    • By kosist90
      Dear Community,
      let me present our new ANV Database Toolkit, which was recently released at vipm.io.
      A short introduction to the toolkit is posted at this link, and it also describes the steps needed in order to use the toolkit.
      ANV Database Toolkit helps developers design a LabVIEW API for querying various databases (MS SQL, MySQL, SQLite, Access). It allows you to create VIs which can be used as an API, with the help of a graphical user interface. When using these VIs, the toolkit handles the connection with the database, relieving developers of this burden in their applications.
      It has the following features:
      • Simplifies handling of databases in LabVIEW projects
      • Allows you to graphically create API VIs for databases
      • Supports Read, Write, Update and Delete queries
      • Supports various database types (MS SQL, MySQL, SQLite, Access)
      The overall idea is that a developer can create a set of high-level API VIs for queries using a graphical user interface, without actually writing SQL queries. Those API VIs are used in the application and handle database communication in the background. Moreover, an SQL query can be applied to any of the supported database types; it is just a matter of database type selection. Changing the target database does not require changes in the API VI which executes the query.
      After installation of the toolkit, a sample project is available which shows the possibilities of the toolkit in terms of executing different types of queries.
      Note that in order to install the toolkit, VI Package Manager must be launched with Administrator privileges.
      This toolkit is paid, and the price is disclosed via quotation. There is a 30-day trial period during which you can try out the toolkit and decide whether it is helpful (and we hope it will be) for your needs.
      In case of any feedback, ideas or issues, please do not hesitate to contact me directly here, at vipm.io, or by e-mail at info@anv-tech.com.
       
        
    • By Bryan
      TestStand Version(s) Used: 2010 thru 2016
      Windows (7 & 10)
      Database: MS SQL Server (v?)
      Note: The database connection I'm referring to is what's configured in "Configure > Result Processing", (or equivalent location in older versions).
      Based on some issues we've been having with nearly all of our TestStand-based production testers, I'm assuming that TestStand opens the configured database connection when the sequence is run, and maintains that same connection for all subsequent UUTs tested until the sequence is stopped/terminated/aborted. However, I'm not sure of this and would like someone to either confirm or correct this assumption.
      The problem we're having is that nearly all of our TestStand-based production testers have been intermittently losing their database connections, returning an error (usually after the PASS/FAIL banner). I'm not sure if it's a TestStand issue or an issue with the database itself. The operator, seeing and only caring about whether it passed or failed, often ignores the error message and soldiers on, mostly ignoring every error message that pops up. Testers at the next higher assembly that look for a passed record of the subassembly's serial number in the database will then fail their test because they can't find a passed record of the serial number.
      We've tried communicating with the operators to either let us know when the error occurs, re-test the UUT, or restart TestStand (usually the option that works), but it's often forgotten or ignored.
      The operators do not stop the test sequence when they go home for the evening/weekend/etc., so TestStand is normally left waiting for them to enter the next serial number to test. I'm assuming that the connection to the database is still open during this time. If so, it's almost as though MS SQL has been configured to terminate idle connections, or something happens on the SQL Server side and the connection isn't properly closed or re-established.
      Our LabVIEW-based testers don't appear to have this problem unless there really is an issue with the database server. The majority of these testers, I believe, open > write > close their database connections at the completion of a unit test.
      I'm currently looking into writing my own routine, to be called in the "Log to Database" callback, which will open > write > close the database connection. But I wanted to check whether anyone more knowledgeable had any insight before I spend time on something that may have an easy fix.
      Thanks all!
    • By Atul
      Hello Everyone
       
      I am facing an issue while opening a database connection using OLE DB, and also with ODBC.
      Currently I am using LabVIEW 64-bit and the ADO functions (ActiveX functions), and I am getting the following error (check attachment):
      Error -2146824582 occurred at Exception occured in ADODB.Connection: Provider cannot be found. It may not be properly installed.Help Path is C:\WINDOWS\HELP\ADO270.CHM and context 1240655 in AUTOMATION 64BIT.vi
       
      Any links to similar tasks or tutorials for study are most welcome.
      Thanks.

    • By GregFreeman
      I think I have found a fundamental issue with the DB Toolkit Open Connection: it seems to not correctly use connection pooling. The reason I believe it's an issue with LabVIEW and ADODB ActiveX specifically is that the problem does not manifest itself using the ADODB driver in C#. This is better shown with examples. All I am doing in these examples is opening and closing connections and benchmarking the connection open time.
      Adodb and Oracle driver in LabVIEW.

       
      ADODB in C#
       
      namespace TestAdodbOpenTime {
          class Program {
              static void Main(string[] args) {
                  Stopwatch sw = new Stopwatch();
                  for (int i = 0; i < 30; i++) {
                      ADODB.Connection cn = new ADODB.Connection();
                      int count = Environment.TickCount;
                      cn.Open("Provider=OraOLEDB.Oracle;Data Source=FASTBAW;Extended Properties=PLSQLRSet=1;Pooling=true;", "USERID", "PASSWORD", -1);
                      sw.Stop();
                      cn.Close();
                      int elapsedTime = Environment.TickCount - count;
                      Debug.WriteLine("RunTime " + elapsedTime);
                  }
              }
          }
      }
      Output:
      RunTime 203
      RunTime 0
      RunTime 0
      RunTime 0
      RunTime 0
      RunTime 0
      RunTime 0
      RunTime 0
      RunTime 0
       
      Notice the times align nicely between the LabVIEW code leveraging the .NET driver and the C# code using ADODB. The first connection takes a bit to open; then connection pooling takes over nicely and the connect time is 0.
       
      Now cue the LabVIEW ActiveX implementation, and every connection open time is pretty crummy and very sporadic.
       
      One thing I happened to find out by accident when troubleshooting: if I add a property node on the block diagram where I open a connection, and I don't close the reference, my subsequent connect times are WAY faster (between 1 and 3 ms). That is what leads me to believe this may be a bug in however LabVIEW interfaces with ActiveX.
       
      Has anyone seen issues like this before, or have any idea where I can look, to help me avoid wrapping the driver myself?
       
    • By GregFreeman
      I am running calls to various stored procedures in parallel, each with its own connection refnum. A few of these calls can take a while to execute from time to time. In critical parts of my application I would like Cmd Execute.vi to be reentrant. Generally I handle this by making a copy of the NI library and namespacing my own version. I can then make a reentrant copy of the VI I need, save it in my own library, and commit it to version control so everyone working on the project has it. But the library is password protected, so even a copy of it stays locked: I can't do a Save As on the VIs I need to make a reentrant copy, nor can I add any new VIs to the library.
      Does anyone have any suggestions? I have resorted to taking NI's library, including it inside my own library, then essentially rewriting the VIs I need by copying the contents of the block diagram of the VI I want to "save as" and pasting them into another VI.
