Posts posted by smithd

  1. Quote
    • No support for auto-incrementing primary keys
    • No support for foreign keys

    These are part of the design of the database; I wouldn't expect any API-level tool to "support" them except as a direct SQL call. I would absolutely do what you're doing: use a GUI to design the database, and then access it through the API.

    MySQL Workbench is pretty powerful but can be confusing. I've always used a tool called HeidiSQL for working with MySQL/MariaDB databases. It's nicer, in my opinion, for learning with.

    Some other thoughts:

    • There's a TCP-based MySQL connector for LabVIEW (https://decibel.ni.com/content/docs/DOC-10453), which is faster for small interactions.
    • The mysql command line is a pain to use but could be better for bulk imports (e.g. from a giant mass of old CSV files); see the sketch after this list. As shown in the link, Heidi can help you here too.
    • PostgreSQL is becoming more and more popular (among other things, MySQL is now owned by Oracle and spent a while sort of languishing -- for my personal load of several TB of indexed data, Postgres performed significantly better out of the box than a somewhat optimized MySQL). If you decide to go this route, there are two libraries to consider (in addition to the Database Connectivity Toolkit).
    • What you described (having a bunch of old measurement data in CSV files and wanting to catalog it in a database-esque way for performance and ease of use) is literally the sales pitch of DIAdem. Like, almost verbatim.
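
    To make the bulk-import point concrete, here is a hedged sketch of what a scripted CSV import boils down to. I'm using SQLite only because it ships with Python; for MySQL the equivalent is a LOAD DATA INFILE statement or Heidi's CSV import tool, and the file name and column layout below are invented for illustration.

        import csv, sqlite3

        # Create (or open) a local database and a table matching the invented CSV layout.
        conn = sqlite3.connect("measurements.db")
        conn.execute("CREATE TABLE IF NOT EXISTS measurements (taken_at TEXT, channel TEXT, value REAL)")

        # Stream the CSV rows straight into the table as one batched insert.
        with open("old_measurements_2017.csv", newline="") as f:
            rows = ((r["taken_at"], r["channel"], float(r["value"])) for r in csv.DictReader(f))
            conn.executemany("INSERT INTO measurements VALUES (?, ?, ?)", rows)

        conn.commit()
        conn.close()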

     

    10 hours ago, ATE-ENGE said:

     Questions (General):

     Using Microsoft Jet OLE DB I made a connection string "Data Source= C:\[Database]\[databasename.mdb]" in a .UDL file. However, the examples I've seen for connecting to MySQL databases use IP addresses and ports.

    1. Is a MySQL database still a file?
    2. If not, how do I put it on my networked server \\[servername\Database\[file]?
    3. If so, what file extensions exist for databases and what is the implication of each extension? I know of .mdb, but are there others I could/should be using (such as .csv's vs .txt's)?

     My peers, who have more work experience than me but no experience with databases, espouse a 2 GB limit on all files (I believe from the era of FAT16 disks). My current OLE DB database is about 200 MB in size, so 2 GB will likely never happen, but I'm curious:

    1. Do file size limits still apply to database files?
    2. If so, how does one have the giant databases that support major websites?
    1. It may be 1 to N files depending on the configuration. Indices can sometimes be stored in separate files, and if you have a ton of data you would use partitioning to split the data up into different files based on parameters (e.g. rows where year(timestamp)=2018 go in partition 1, rows where year=2017 go in partition 2, etc.; see the sketch after this list). You don't reference the file directly; you usually use a connection string formatted like this: https://www.connectionstrings.com/mysql/
    2. You can't; you must have a server machine which runs the MySQL service. To the best of my knowledge, the only database you can put on a share drive and forget about is SQLite, and even they recommend against it. I had never used the MDB format before, but it looks like it is similarly accessible as a file.
    3. As with 2, you generally don't edit the files manually. You access the database through the server, which exposes everything through SQL.
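
    For reference, here is a hypothetical sketch of what MySQL range partitioning by year looks like. The table, columns, server, and login details are all invented, and it assumes the mysql-connector-python package; you could just as easily paste the CREATE TABLE statement into HeidiSQL or Workbench.

        import mysql.connector  # assumes the mysql-connector-python package is installed

        # Placeholder connection details; substitute your own server and credentials.
        conn = mysql.connector.connect(host="my-db-server", user="labview", password="secret", database="test_results")
        cur = conn.cursor()
        cur.execute("""
            CREATE TABLE measurements (
                taken_at DATETIME NOT NULL,
                channel  VARCHAR(64),
                value    DOUBLE
            )
            PARTITION BY RANGE (YEAR(taken_at)) (
                PARTITION p2016 VALUES LESS THAN (2017),
                PARTITION p2017 VALUES LESS THAN (2018),
                PARTITION pmax  VALUES LESS THAN MAXVALUE
            )
        """)
        conn.close()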

     

    1. They do, but I think it's in the TB range. If you reach a single database file that is over a TB, you should learn about and use partitioning, which breaks it down into smaller files.
    2. Not sure, but I believe the truly giant websites like Google have their own database systems. More generally, they divide the workload across a large number of machines. As an example: https://en.wikipedia.org/wiki/Shard_(database_architecture)

     

    10 hours ago, ATE-ENGE said:

     Questions (LabVIEW Specific):

    1. I can install my [MainTestingVi.exe], which accesses the Jet OLE DB database, on a Windows 10 computer that is fresh out of the box. When I switch over to having a MySQL database, are there any additional tools that I'll need to install as well?

    You will need to install the database server somewhere. Assuming you've set up some server that's going to host your data, then you just need the client software. If you use the TCP-based connector mentioned above, that client software is just LabVIEW. However, that connector has no security implementation and gets bogged down with large data sets. If you want to use the Database Connectivity Toolkit, you'll need the ODBC connector and perhaps to configure a data source as shown here, although you may be able to create a connection string at runtime.
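
    If you do build the connection string at runtime, it would be a DSN-less ODBC string along these lines. This is just a sketch: the driver name has to match whatever MySQL ODBC connector version is actually installed, and the server/database/login values are placeholders.

        # Build a DSN-less ODBC connection string at runtime (all values are placeholders).
        server, database, user, password = "my-db-server", "test_results", "labview", "secret"
        conn_str = (
            "Driver={MySQL ODBC 8.0 Unicode Driver};"
            f"Server={server};Port=3306;Database={database};"
            f"Uid={user};Pwd={password};"
        )
        # The same string can be handed to the toolkit's open-connection VI, or, from
        # Python, to pyodbc.connect(conn_str) for a quick sanity check.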

  2. On 11/28/2015 at 0:47 PM, drjdpowell said:

    A beta version of 1.6 is posted here.  If you ignore the newest features, you could use this in production code; it has the latest SQLite 3.9.2, including the interesting JSON1 extension.

    ^^ I think it's already compiled in.

    Try creating a memory database and executing one of the sample queries, like "json_array_length('[1,2,3,4]')", which should return 4. If I'm remembering correctly, these queries just worked with the VI package.
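
    If you want to sanity-check the SQLite side independently of the LabVIEW package, the same query runs through any SQLite binding. A minimal sketch with Python's stdlib sqlite3, assuming its bundled SQLite build includes the JSON1 functions:

        import sqlite3

        conn = sqlite3.connect(":memory:")  # the in-memory database mentioned above
        (length,) = conn.execute("SELECT json_array_length('[1,2,3,4]')").fetchone()
        print(length)  # 4 if JSON1 is available; otherwise SQLite raises "no such function"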

  3. If you really want the pixel values, you can get those (http://zone.ni.com/reference/en-XX/help/370281AC-01/imaqvision/imaq_getpixelvalue/) and then use IMAQ Subtract with a constant.

    However: one thing you'll note is that many of the analysis functions have a "mask" input. One route to get rid of the background would be to use a threshold function to get a mask (https://forums.ni.com/t5/Example-Programs/IMAQ-Threshold-Binary-Image-and-Mask/ta-p/3534077) and then feed that mask into, as an example, the histogram function (http://zone.ni.com/reference/en-XX/help/370281AD-01/imaqvision/imaq_histogram/). Per the help:

    Quote

     

    Image Mask is an 8-bit image specifying the region in the image to use for the calculation. Only those pixels in the original image that correspond to an equivalent non-zero pixel in the mask image are used for the calculation. The entire image is used in the calculation if Image Mask is not connected.

     

     

    On a related topic, it could be worth looking at http://www.ni.com/pdf/manuals/371007b.pdf and http://www.ni.com/pdf/manuals/322916b.pdf. I think both are now part of the IMAQ help (the second PDF became this, I believe: http://zone.ni.com/reference/en-XX/help/370281AD-01/TOC1.htm), but I find the PDFs nicer to look at. So, for example, you might use the particle analysis functions or the edge detection functions to find your glowy dot and get other characteristics about it, depending on what you are looking to do.
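
    To make the "mask" idea above concrete outside of IMAQ: conceptually it just means "compute the statistic only over pixels where the mask is non-zero." Here's a rough numpy illustration; the array shape and threshold value are made up, and IMAQ takes the mask as an 8-bit image rather than a boolean array.

        import numpy as np

        frame = np.random.randint(0, 4096, size=(480, 640))    # stand-in for an acquired image
        mask = frame > 1000                                     # threshold -> True where the glowy dot is
        hist, bin_edges = np.histogram(frame[mask], bins=256)   # histogram over masked-in pixels only
        mean_of_dot = frame[mask].mean()                        # e.g. average intensity inside the mask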

  4. 10 hours ago, ensegre said:

    .net is windows only, G runs wherever LV runs.

    From the README.txt:

     

    .net on labview is windows only :(

    If you are only working on Windows, .net is a reasonable tool to use. I'm using it, for example, for Windows login functionality. There are DLL calls for that of course, but the .net wrapper is nicer. There are also .net features that don't get exposed very nicely in LabVIEW for whatever reason. Iterators can be kind of annoying, as one example, and event callbacks are another...but again, easier than a DLL. Both DLLs and .net code block the thread in which they are running, so for highly parallel long-running tasks, .net would be inappropriate.

    Realistically, you should review any 3rd-party code you use unless you really trust its source. As such, I'd rather use a SHA function built in LabVIEW like this one over some random .net assembly from the internet because I can review it more effectively, but if Microsoft has a SHA function built into .net, that would probably be preferable on Windows. The DLL-LabVIEW interface can be such a challenge that even libraries with a good reputation (for example, trying to wrap OpenSSL) would require extensive testing on each platform.

  5. Well...that's confusing. It looks like Automation Open claims to be able to reuse references:

    " If open new instance is TRUE, LabVIEW creates a new instance of the Automation Refnum. If FALSE (default), LabVIEW tries to connect to an instance of the refnum that is already open. If the attempt is unsuccessful, LabVIEW opens a new instance. "

    So in the configuration above it shouldn't need to create a new instance.

    I don't know if it will help, but to get more info, maybe try timing the 3 sections of the open function (automation open, then connect, then the subVI at the end, whose purpose isn't clear to me). If you really wanted to dig into it, I'm betting something like Process Monitor or Process Explorer could tell you what handles and such are actually open, and that might tell you the difference...but I'm not totally sure which tool would accomplish that.

  6. In the .net code you have, it looks like you aren't actually destroying the objects, but I think the LabVIEW equivalent is -- equivalent to calling this in the .net version. My guess is that when you get the connection properties in the screenshot, you're leaking a reference which prevents overall destruction of the main connection object, thus leading to faster subsequent runs. What happens if you change your first code so that you wait to close the connections until after you open all of them? Does that go fast?

  7. They can be included statically even if you run them dynamically as popups. The normal route I go is to have a static VI reference, then a property node to get the name out of that reference, which I then pass to Open VI Reference in order to get a dynamically launchable reference. By using the static VI reference you force LabVIEW to build the child into the caller, and then you don't have to worry about where the build puts the VI.

  8. Both libraries use zero indexing, so you should be able to use the same value for starting register and number of registers in both libraries. The 4xxxx (or 4xx,xxx) numbering is not important at the protocol level.
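
    To illustrate the convention (my own sketch, not either library's API): the leading 4 just flags "holding register," and the on-the-wire address is zero-based, i.e. the register number minus 40001 (or 400001 in the six-digit form).

        def holding_register_address(register_number: int) -> int:
            """Map a 4xxxx / 4xx,xxx style register number to the zero-based protocol address."""
            offset = 400001 if register_number >= 400001 else 40001
            return register_number - offset

        print(holding_register_address(40001))   # 0
        print(holding_register_address(40010))   # 9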

    It's difficult to verify what's going on because one function has a bunch of constants and the other is dynamic, but I'm assuming that when your coworker tested it, you verified that:

    • They have the same IP address and port you have
    • They are using the same holding register start address and length
    • They are hitting the button to make sure it actually reads the holding register rather than skipping that code

    If I'm understanding correctly, you ran the old code on your machine as well, and it failed? It sounds to me like the PLC has some sort of access control list, and your coworker added his machine to the list, but yours is not on it. This would make the most sense to me given that:

    • His machine works and yours doesn't with identical code and settings
    • The connection is being closed by the PLC
    • The closure appears to be immediate (note that the error occurs at the "write" function, which is the very first TCP function called after connecting. This would indicate that the connection is opened and then closed before you even enter the while loop).
  9. The DCAF CVT module copies data from the inside of the engine (where data exists on the wire) to the CVT -- so while the core components do not depend on the CVT, that particular package serves as a bridge.

    As for why the CVT doesn't have a variant type...it was proposed often enough, but somehow it never happened, I guess. I honestly don't remember why. The CVT includes a set of template functions and a generator for any types you might want, so that would be the solution. If you did a pull request on GitHub, I would imagine they'd be happy to accept the change.

  10. Definitely odd, revealing yet another bizarre issue in the land of user events.

    I will say that technically you are doing it wrong according to the help (which did not change between 2013 and 2017):

    https://zone.ni.com/reference/en-XX/help/371361P-01/lvhowto/design_case_for_registration/

    To "modify" the registration you have to read the wire in from the left hand side inside the case where you modify the registration. Doing that alone with the null ref you had originally did nothing, but adding a registration for a null event did work.

    My guess, based on this, is that something changed such that the null refnum 'doesn't count' as a proper registration, and so the event structure never registers events until you create one, store it in the shift register, and pass it in on the left. But obviously just a guess.

    I modified your code according to the help documentation (attached) and it worked correctly on my machine:

    evnt2.vi

  11. This seems like it would be pretty easy to do using a pre-build step. Is there something you had in mind which isn't solved by this? (Also, on FPGA you can do something like this using VIs to define initial contents of memory)

    As for the popularity of the idea, I'm actually kind of surprised the Idea Exchange still has as much activity as it does -- 85 whole kudos this year -- considering NXG. Anyway, some ideas in the same vein:
    https://forums.ni.com/t5/LabVIEW-Idea-Exchange/Conditional-Disable-Symbols-settable-in-Application-Builder/idi-p/924581
    https://forums.ni.com/t5/LabVIEW-Idea-Exchange/Conditional-Disable-Symbol-Constants/idi-p/1034752

    The first would solve my main issues. The second would be neat, and I've occasionally wanted it in the past. I'm not really sold on the value of what you've described outside of the compile time you show in your post (which is not something I've ever personally implemented).

  12. 3 hours ago, drjdpowell said:

    In my case, the CRC is because the Setting file lives in flash memory on the hardware, and is downloaded to the computer, so we are guarding against corruption. 

    6 hours ago, Gribo said:

    Because there are programmers, there are techs and there are operators. You don't want your operators messing with your data (test limits, etc), but you do want them to be able to tell you on the phone, what data there is there, in case you can't remote into the machine.

    Makes sense.

    3 hours ago, drjdpowell said:

    JSON is starting to overtake XML for data exchange.  JSONtext should work on RT, though I have yet to try it.  

    Definitely works on RT*. You'd hear from me right away if that stopped being true ;). It's definitely slower than using the primitives, but it's much easier to use, and 90% of my data is so small it doesn't matter. The other 10% is so big (i.e. images, gigantic arrays) that I just use binary.

    *with the obvious exception of that "make this string control have pretty colors" function and any similar :)

  13. On 2/6/2018 at 2:17 AM, drjdpowell said:

    I started doing that, but switched to an "outer" JSON object with the sections "Rev" (data version number), "Settings", and "Other Info" (which records timestamp, User, computer, software versions, CRC-32, etc.).  

    That's probably a better plan if you're always sure to use JSON. Conditionally wrapping JSON or XML or whatever means handling escaping possibilities, and when I first made the format I wanted to be open to other payload types.
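
    For anyone following along, here is a hedged sketch of what that outer-object layout might look like; the field contents are invented and the real schema may differ.

        import json

        settings_file = {
            "Rev": 3,                                       # data version number
            "Settings": {"limit_high": 10.5, "limit_low": -10.5},
            "Other Info": {                                 # metadata about the file itself
                "Timestamp": "2018-02-06T02:17:00Z",
                "User": "operator1",
                "Software Version": "1.2.3",
                "CRC-32": "1A2B3C4D",
            },
        }
        print(json.dumps(settings_file, indent=2))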

    22 hours ago, Gribo said:

    I use JSON, and to prevent the end user from editing, I add an MD5 checksum. 

    I don't really understand this (or the CRC). Why use a human-editable format if you don't want people to edit it? And more generally, why not let people edit the files? You have to validate the contents anyway, right?

  14. 18 hours ago, drjdpowell said:

    I use JSON, and have a library for JSON, which has recently been added to the LabVIEW Tools Network.  I don't modify the config files with a text editor, though; all config editing is done in the Application itself.  I also do a bit of JSON in SQLite.

    Same, although I have some header data stored in ASCII \r\n-delimited format which includes a payload data version and who wrote the file, and when.

    I manage the INI equivalent of 'sections' by creating a folder hierarchy, and I maintain a history of all edits performed on a given file. My little library looks for the newest file (or the file at some specific time) at the specified subpath.

    For centrally driven config management I use a SQL DB (SQLite so far) which has basically the same structure of key (= folder hierarchy path segment), value (JSON string or blob), and activation time, with the addition of the hostname of the device that owns that configuration data. This tool also lets me tag a set of values under a specific ID regardless of their timestamp, so I can redeploy that named configuration.
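
    A minimal sketch of that key/value/activation-time layout (the table and column names here are my own invention for illustration, not the actual library):

        import sqlite3

        conn = sqlite3.connect("central_config.db")
        conn.execute("""
            CREATE TABLE IF NOT EXISTS config (
                hostname     TEXT NOT NULL,   -- device that owns this value
                key_path     TEXT NOT NULL,   -- folder-hierarchy style key, e.g. 'daq/ai0/range'
                value_json   TEXT,            -- JSON string (or blob) payload
                activated_at TEXT NOT NULL,   -- when this value takes effect
                tag_id       TEXT             -- optional named-configuration ID
            )
        """)

        # "Newest value for a key on this host" is then just an ORDER BY on activation time.
        row = conn.execute(
            "SELECT value_json FROM config "
            "WHERE hostname = ? AND key_path = ? "
            "ORDER BY activated_at DESC LIMIT 1",
            ("tester-01", "daq/ai0/range"),
        ).fetchone()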

    While I do use JSON for most everything, I think the most important tool at my disposal is this set of helper libraries...the actual file contents don't matter so much.

  15. Actually, the memory space issue was the original reason we moved to dataflow for what eventually became DCAF. Early concepts for that library involved sharing CVT tags between plugins, which got replaced with the calling process being responsible for sharing the values via dataflow. The other big issue is that if you use a global variable, like the CVT, you run into the problem that reads and writes are not coherent for sets of values...that is, if you wish to write the cluster {value=1, time=blah, accurate=true}, that would be 2 or 3 separate write actions, and a consumer wouldn't necessarily know that all values were from the same 'sample' when they were read out. Having the DCAF engine orchestrate plugin calls and act as the sync point for large data sets was also a driving factor. The implementation doesn't really preclude use of the CVT (I don't know that it's ever been used, but I made a CVT-based engine at one point); it's just that when you have a single LabVIEW process passing data around, it makes more sense to use dataflow.

    As to performance, all the accesses are simple subroutine-priority FGV accesses, so an individual read is pretty quick. If you know what values you want to use up front, you'll definitely get better performance from a set of DVRs of clusters -- the locking is equivalent, but you solve the coherency issue above, you can create as many separate and distinct locks as you might wish, and with 2017 you can do parallel reads. The CVT is geared towards more dynamic situations. The slow part of the CVT implementation is the lookup (name->index). There's a variety of 'advanced' functions to let you look up the index of each value early in the run of your application, which brings performance back to just a few checks and then the FGV read operation, which is quite fast. In general, the number of 1000 was always kept in mind as the intended number of tags. It can of course go much higher, but I think trying to keep 1000 individually named global variables in your head is hard enough, let alone more than that. Because it is an FGV per data type, you will by definition eventually run into contention issues, but that's going to be dependent upon your application.

    The alternative implementation I've seen is using queues (https://lavag.org/topic/14446-viregister/?page=1) or notifiers. If CCC is your main interest, something more standard like OPC UA might be a better fit.

     

  16. The poor AE they had staffing the booth at NI Week said it was in memory. I would normally take that with a grain of salt, but when I was evaluating it I don't remember seeing a path input for where to store any kind of database or file, so if it is persistent, it's a mystery to me. In-memory would also make the most sense to me, since it asks for a number of elements for each thing you want to store, so it probably just creates a fixed-size buffer. Not all that useful for a long-term historical DB. That's why in my post above I was suggesting some sort of 3rd-party tool for a more permanent storage mechanism.

    For my use (I haven't written it yet, but I have a near-term plan), I would only use the historical information for that sort of short-term purpose -- so if I have multiple HMIs, they don't all need to buffer values themselves, and if an HMI restarts it has a centralized cache to pull from. Based on the API, it looks like it should be feasible to write the entire historical content to disk right before you shut down the server instance and then reload it when you restart the server instance. Of course, that depends on a clean shutdown...

  17. On 12/19/2017 at 0:06 PM, viSci said:

    Has anyone looked at NI's new Systemlink / Skyline technology?  It looks like a type of DDS running on RabbitMQ that is being used for all future LV software and package deployment for Windows and Linux platforms.  Skyline 17.5 has Publish Subscribe tag's, tag viewer and a new web dashboard that can bind to tags.  I was informed that in Q2 Skyline will have historical capabilities but am not sure if it could serve as a citadel replacement.

    I'm familiar with it. My reaction at the time was that it seemed like another black box doing tag-based communication via a background service, to replace the shared variable black box doing tag-based communication via a background service. I'd love for the new black box to be better than the old, but I'm skeptical.

    It definitely appears to resolve some issues with shared variables (I believe these tags can be created programmatically), but it's not clear how you might integrate it with non-LabVIEW applications; time will tell on performance, and time will really tell on the usability, reliability, etc. that shared variables are often questioned on (a straightforward example: how do I find out that my peer is dead/disconnected? How long does it take to be detected as dead?). It's also not clear why we'd want to use a custom proprietary tag protocol when NI is also investing in OPC UA.
