Everything posted by smithd

  1. These are part of the design of the database; I wouldn't imagine any API-level tool to have "support" for them except as a direct SQL call. I would absolutely do what you're doing: use a GUI to design the database, and then access it through the API. MySQL Workbench is pretty powerful but can be confusing. I've always used a tool called HeidiSQL for working with MySQL/MariaDB databases. It's nicer, in my opinion, for learning with.

     Some other thoughts: MySQL has a TCP server (https://decibel.ni.com/content/docs/DOC-10453) which is faster for small interactions. The MySQL command line is a pain to use but could be better for bulk imports (e.g. from a giant mass of old CSV files); as shown in the link, Heidi can help you there. Postgres is becoming more and more popular (among other things, MySQL is now owned by Oracle and spent a while sort of languishing -- for my personal load of several TB of indexed data, Postgres performed significantly better out of the box than a somewhat optimized MySQL). If you decide to go this route there are two libraries to consider (in addition to the Database Connectivity Toolkit). What you described -- having a bunch of old measurement data in CSV files and wanting to catalog it in a database-esque way for performance and ease of use -- is literally the sales pitch of DIAdem. Like, almost verbatim.

     It may be 1 to N files depending on the configuration. Indices can sometimes be stored in separate files, and if you have a ton of data you would use partitioning to split the data up into different files based on parameters (e.g. where year(timestamp)=2018 use partition 1, where year=2017 use partition 2, etc.). You don't reference the file directly; you usually use a connection string formatted like this: https://www.connectionstrings.com/mysql/

     You cannot; you must have a server machine which runs the MySQL service. To the best of my knowledge, the only database you can put on a share drive and forget about is SQLite, and even they recommend against it. I had never used the MDB format before, but it looks like it is similarly accessible as a file.

     As with 2, you generally don't edit the files manually. You access the database through the server, which exposes everything through SQL.

     They do, but I think it's in the TB range. If you reach a single database file that is over a TB, you should learn about and use partitioning, which breaks it down into smaller files.

     Not sure, but I believe the truly giant websites like Google have their own database systems. More generally, they divide the workload across a large number of machines. As an example: https://en.wikipedia.org/wiki/Shard_(database_architecture)

     You will need to install the database server somewhere. Assuming you've set up some server that's going to host your data, you just need the client software. If you use the TCP-based connector mentioned above, that client software is just LabVIEW. However, that connector has no security implementation and gets bogged down with large data sets. If you want to use the database toolkit, you'll need the ODBC connector and perhaps to configure a data source as shown here, although you may be able to create a connection string at runtime.
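     Since connection strings and partitioning both came up, here's a rough sketch of what they look like from Python with the mysql-connector-python package. The host, credentials, and table layout are all made up for illustration:

         import mysql.connector  # pip install mysql-connector-python

         # You talk to the server over TCP; you never touch the files on disk.
         conn = mysql.connector.connect(
             host="dbserver.example.com",  # hypothetical server
             user="measurements", password="secret", database="lab")
         cur = conn.cursor()

         # Partitioning splits one logical table across multiple files by year.
         cur.execute("""
             CREATE TABLE IF NOT EXISTS samples (
                 ts DATETIME NOT NULL,
                 value DOUBLE
             )
             PARTITION BY RANGE (YEAR(ts)) (
                 PARTITION p2017 VALUES LESS THAN (2018),
                 PARTITION p2018 VALUES LESS THAN (2019),
                 PARTITION pmax  VALUES LESS THAN MAXVALUE
             )""")
         conn.close()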
  2. I've not used them for anything real in a while, but I don't remember seeing any issues when I did use them, except that they're kind of a memory hog -- on the low-end controllers (9066) it was about 20% (~25 MB), which for me was the difference between moderate and worrying usage levels.
  3. ^^I think it's already compiled in. Try creating a memory database and executing one of the sample queries like "json_array_length('[1,2,3,4]')", which should return 4. If I'm remembering correctly, these queries just worked with the VI package.
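     If it helps to sanity-check outside LabVIEW first, the same probe works from Python's built-in sqlite3 module (assuming its bundled SQLite includes the JSON functions):

         import sqlite3

         # An in-memory database is enough to check whether JSON1 is compiled in.
         conn = sqlite3.connect(":memory:")
         result = conn.execute("SELECT json_array_length('[1,2,3,4]')").fetchone()[0]
         print(result)  # prints 4 if the JSON functions are available
         conn.close()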
  4. That's what this page is suggesting: http://www.ni.com/tutorial/12666/en/
  5. If you really want the pixel values, you can get those (http://zone.ni.com/reference/en-XX/help/370281AC-01/imaqvision/imaq_getpixelvalue/) and then use IMAQ Subtract with a constant. However, one thing you'll note is that many of the analysis functions have a "mask" input. One route to get rid of the background would be to use a threshold function to get a mask (https://forums.ni.com/t5/Example-Programs/IMAQ-Threshold-Binary-Image-and-Mask/ta-p/3534077) and then feed that mask into, as an example, the histogram function (http://zone.ni.com/reference/en-XX/help/370281AD-01/imaqvision/imaq_histogram/), which per the help only considers the pixels inside the mask region.

     On a related topic, it could be worth looking at http://www.ni.com/pdf/manuals/371007b.pdf and http://www.ni.com/pdf/manuals/322916b.pdf. I think both are now part of the IMAQ help (the second PDF became this, I believe: http://zone.ni.com/reference/en-XX/help/370281AD-01/TOC1.htm), but I find the PDFs nicer to look at. So, for example, you might use the particle analysis functions or the edge detection functions to find your glowy dot and get other characteristics about it, depending on what you are looking to do.
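     For intuition, here's that threshold-into-mask-into-histogram flow sketched in Python with NumPy standing in for IMAQ (the frame and threshold value are made up):

         import numpy as np

         image = np.random.randint(0, 256, (480, 640), dtype=np.uint8)  # stand-in frame

         # Threshold to build a mask: keep only pixels brighter than the background.
         mask = image > 50  # hypothetical background level

         # Histogram computed only over the masked (foreground) pixels,
         # mirroring the "mask" input on the IMAQ analysis functions.
         counts, edges = np.histogram(image[mask], bins=256, range=(0, 256))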
  6. .NET in LabVIEW is Windows-only, so if you are only working on Windows, .NET is a reasonable tool to use. I'm using it, for example, for Windows login functionality. There are DLL calls for that, of course, but the .NET wrapper is nicer. There are also .NET features that don't get exposed very nicely in LabVIEW for whatever reason -- iterators can be kind of annoying, as one example, and event callbacks are another...but again, easier than a DLL. Both DLLs and .NET code block the thread in which they are running, so for highly parallel long-running tasks, .NET would be inappropriate. Realistically, you should review any 3rd-party code you use unless you really trust its authors. As such, I'd rather use a SHA function built in LabVIEW like this over some random .NET assembly from the internet, because I can review it more effectively -- but if Microsoft has a SHA function built into .NET, that would probably be preferable on Windows. The DLL-LabVIEW interface can be such a challenge that even wrappers with a good reputation (for example, attempts to wrap OpenSSL) would require extensive testing on each platform.
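     The same "prefer the platform's built-in crypto" logic applies outside .NET, too. In Python terms, for instance, you'd reach for the standard library's hashlib before any random third-party hash package:

         import hashlib

         # SHA-256 from the standard library: widely reviewed, nothing extra to vet.
         digest = hashlib.sha256(b"message to hash").hexdigest()
         print(digest)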
  7. Well...that's confusing. It looks like Automation Open claims to be able to reuse references: "If open new instance is TRUE, LabVIEW creates a new instance of the Automation Refnum. If FALSE (default), LabVIEW tries to connect to an instance of the refnum that is already open. If the attempt is unsuccessful, LabVIEW opens a new instance." So in the configuration above it shouldn't need to create a new instance. I don't know if it will help, but to get more info, maybe try timing the 3 sections of the open function (Automation Open, then connect, then the subVI at the end, which...it's not clear what that one does). If you really wanted to dig into it, I'm betting something like Process Monitor or Process Explorer could tell you which handles and such are actually open, and that might tell you the difference...but I'm not totally sure which tool would accomplish that.
  8. In the .NET code you have, it looks like you aren't actually destroying the objects, but I think the LabVIEW version is -- equivalent to if you called this in the .NET version. My guess is that when you get the connection properties in the screenshot, you're leaking a reference which prevents overall destruction of the main connection object, thus leading to faster subsequent runs. What happens if you change your first code so that you wait to close the connections until after you open all of them? Does that go fast?
  9. They can be included statically even if you run them dynamically as popups. The normal route I go is to have a static VI reference, then a property node to get the name out of that reference, which I then pass to Open in order to get a dynamically launchable reference. By using the static VI reference you force LabVIEW to build the child into the caller, and then you don't have to worry about where the build puts the VI.
  10. Both libraries use zero indexing, so you should be able to use the same value for starting register and number of registers in both libraries. The 4xxxx (or 4xx,xxx) is not important at the protocol level. It's difficult to verify what's going on because one function has a bunch of constants and the other is dynamic, but I'm assuming that when your coworker tested it you verified that:

      - They have the same IP address and port you have
      - They are using the same holding register start address and length
      - They are hitting the button to make sure the code actually reads the holding register rather than skipping that section

      If I'm understanding correctly, you ran the old code on your machine as well, and it failed? It sounds to me like the PLC has some sort of access control list, and your coworker added his machine to the list, but yours is not on it. This would make the most sense to me given that:

      - His machine works and yours doesn't with identical code and settings
      - The connection is being closed by the PLC
      - The closure appears to be immediate (note that the error occurs at the "write" function, which is the very first TCP function called after connecting -- indicating that the connection is opened and then closed before you even enter the while loop)
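      As an aside on the addressing: the 4xxxx convention only exists on paper; on the wire, holding register 40001 is address 0. Here's a quick sketch using Python's pymodbus package to cross-check what the PLC returns (the IP and register count are hypothetical):

          from pymodbus.client import ModbusTcpClient  # pip install pymodbus

          client = ModbusTcpClient("192.168.1.50", port=502)  # hypothetical PLC address
          client.connect()

          # Zero-based on the wire: documentation register 40001 == address 0.
          response = client.read_holding_registers(address=0, count=10)
          if response.isError():
              print("read failed:", response)
          else:
              print(response.registers)
          client.close()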
  11. The DCAF CVT module copies data from the inside of the engine (where data exists on the wire) to the CVT -- so while the core components do not depend on the CVT, that particular package serves as a bridge. As for why the CVT doesn't have a variant type...it was proposed often enough, but somehow never happened, I guess. I honestly don't remember why. The CVT includes a set of template functions and a generator for any types you might want, so that would be the solution. If you did a pull request on GitHub, I would imagine they'd be happy to accept the change.
  12. At one point NI put effort into this...I never got around to using it, but it's always been on my list: https://forums.ni.com/t5/NI-Labs-Toolkits/LabVIEW-UI-Automation-Tool/ta-p/3521765 The downside is that it's clearly abandoned. But if it does what you need...
  13. Drjdpowell's JSON library has functionality for breaking subclusters apart and all sorts of other useful ways of manipulating the JSON strings directly. If you can use 2017, it's worth a look.
  14. Well, I'd suggest keeping the simulation-related cfg in a separate location, such that there is no way to run with the simulation cfg even if everything goes bad, but...it looks like this does what you want: https://medium.com/@porteneuve/how-to-make-git-preserve-specific-files-while-merging-18c92343826b (Windows instructions in the first comment)
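      The short version of the trick in that article, in case the link rots: define a merge driver that always keeps your side of the file, then route the config file through it via .gitattributes (the path here is hypothetical):

          # Register a merge driver that always keeps our version of the file.
          git config merge.ours.driver true

          # Route the simulation config through that driver on merges.
          echo 'config/simulation.cfg merge=ours' >> .gitattributes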
  15. Definitely odd, revealing yet another bizarre issue in the land of user events. I will say that technically you are doing it wrong according to the help (which did not change between 2013 and 2017): https://zone.ni.com/reference/en-XX/help/371361P-01/lvhowto/design_case_for_registration/ To "modify" the registration, you have to wire the registration in from the left-hand side inside the case where you modify it. Doing that alone with the null ref you had originally did nothing, but additionally registering for a null event did work. My guess, based on this, is that something changed such that the null refnum 'doesn't count' as a proper registration, and so the event structure never registers events until you create one, store it in the shift register, and pass it in on the left. But obviously that's just a guess. I modified your code according to the help documentation (attached) and it worked correctly on my machine: evnt2.vi
  16. This seems like it would be pretty easy to do using a pre-build step. Is there something you had in mind which isn't solved by that? (Also, on FPGA you can do something like this using VIs to define the initial contents of memory.) As for the popularity of the idea, I'm actually kind of surprised the idea exchange still has as much activity as it does -- 85 whole kudos this year -- considering NXG. Anyway, some ideas in the same vein: https://forums.ni.com/t5/LabVIEW-Idea-Exchange/Conditional-Disable-Symbols-settable-in-Application-Builder/idi-p/924581 and https://forums.ni.com/t5/LabVIEW-Idea-Exchange/Conditional-Disable-Symbol-Constants/idi-p/1034752 The first would solve my main issues. The second would be neat, and I've occasionally wanted it in the past. I'm not really sold on the value of what you've described outside of the compile time you show in your post (which is not something I've ever personally implemented).
  17. Makes sense. Definitely works on RT*. You'd hear from me right away if that stopped being true. It's definitely slower than using the primitives, but it's much easier to use, and 90% of my data is so small it doesn't matter. The other 10% is so big (i.e. images, gigantic arrays) that I just use binary. *With the obvious exception of that "make this string control have pretty colors" function and any similar.
  18. That's probably a better plan if you're always sure to use JSON. Conditionally wrapping JSON or XML or whatever means handling escape possibilities, and when I first made the format I wanted to be open to other payload types. I don't really understand this (or the CRC): why use a human-editable format if you don't want people to edit it? And more generally, why not let people edit the files? You have to validate the contents anyway, right?
  19. Same, although I have some header data stored in ascii\r\n format, which includes a payload data version and who wrote the file, and when. I manage the INI equivalent of 'sections' by creating a folder hierarchy, and I maintain a history of all edits performed on a given file; my little library looks for the newest file (or the file at some specific time) at the specified subpath. For centrally driven config management I use a SQL DB (SQLite so far) which has basically the same structure of key (= folder hierarchy path segment), value (JSON string or blob), and activation time, with the addition of the hostname of the device that owns that configuration data. This tool also lets me tag a set of values under a specific ID regardless of their timestamp, so I can redeploy that named configuration. While I do use JSON for most everything, I think the most important tool at my disposal is this set of helper libraries...the actual file contents don't matter so much.
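      For the flavor of it, here's a minimal sketch of that kind of table and the "newest value at a path" lookup, done with SQLite from Python. The schema and names are mine for illustration, not the actual library's:

          import sqlite3, time

          conn = sqlite3.connect("config.db")
          conn.execute("""
              CREATE TABLE IF NOT EXISTS config (
                  path TEXT,            -- folder-hierarchy key, e.g. 'daq/chan0/range'
                  value TEXT,           -- JSON string or blob
                  activation_time REAL, -- when this value takes effect
                  hostname TEXT         -- device that owns this configuration
              )""")
          conn.execute("INSERT INTO config VALUES (?,?,?,?)",
                       ("daq/chan0/range", '{"min":-10,"max":10}',
                        time.time(), "cRIO-9066"))
          conn.commit()

          # Newest value at a path for a given host (swap in a specific
          # timestamp instead of time.time() to read historical config).
          row = conn.execute("""
              SELECT value FROM config
              WHERE path=? AND hostname=? AND activation_time<=?
              ORDER BY activation_time DESC LIMIT 1""",
              ("daq/chan0/range", "cRIO-9066", time.time())).fetchone()
          print(row[0])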
  20. Yes, JKI got uninstalled for me as well, using the magical web downloader (vs. the 20 GB ISOs).
  21. Actually, the memory space issue was the original original reason we moved to dataflow for what eventually became DCAF. Early concepts for that library involved sharing CVT tags between plugins, which got replaced with the calling process being responsible for sharing the values via dataflow. The other big issue is that if you use a global variable, like the CVT, you run into the problem that reads and writes are not coherent for sets of values -- that is, if you wish to write the cluster {value=1, time=blah, accurate=true}, that would be 2 or 3 separate write actions, and a consumer wouldn't necessarily know that all the values were from the same 'sample' when they were read out. Having the DCAF engine orchestrate plugin calls and act as the sync point for large data sets was also a driving factor. The implementation doesn't really preclude use of the CVT (I don't know that it's ever been used, but I made a CVT-based engine at one point); it's just that when you have a single LabVIEW process passing data around, it makes more sense to use dataflow.

      As to performance, all the accesses are simple subroutine-priority FGV accesses, so an individual read is pretty quick. If you know which values you want to use up front, you'll definitely get better performance from a set of DVRs of clusters -- the locking is equivalent, but you solve the coherency issue above, you can create as many separate and distinct locks as you might wish, and with 2017 you can do parallel reads. The CVT is geared towards more dynamic situations. The slow part of the CVT implementation is the lookup (name->index). There's a variety of 'advanced' functions to let you look up the index of each value early in the run of your application, which brings performance back to just a few checks and then the FGV read operation, which is quite fast (see the sketch below).

      In general, the number of 1000 was always kept in mind as an intended number of tags. It can of course go much higher, but I think trying to keep 1000 individually named global variables in your head is hard enough, let alone more than that. Because it is an FGV per data type, you will by definition eventually run into contention issues, but that's going to be dependent upon your application. The alternative implementation I've seen is using queues (https://lavag.org/topic/14446-viregister/?page=1) or notifiers. If CCC is your main interest, something more standard like OPC UA might be a better fit.
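      To make the lookup cost concrete, this is the shape of the "resolve the name once, then read by index" pattern, sketched in Python as an analogy (this is not the actual CVT API):

          import threading

          class TagStore:
              """CVT-style analogy: one lock per data type, index-based reads."""
              def __init__(self, names):
                  self._index = {name: i for i, name in enumerate(names)}
                  self._values = [0.0] * len(names)
                  self._lock = threading.Lock()

              def get_index(self, name):
                  # The slow part: do this once, early in the application,
                  # like the CVT's 'advanced' lookup functions.
                  return self._index[name]

              def read(self, index):
                  # The hot path: just a lock and an array access.
                  with self._lock:
                      return self._values[index]

          store = TagStore(["motor.speed", "motor.temp"])
          idx = store.get_index("motor.speed")  # resolve name -> index up front
          value = store.read(idx)               # fast repeated reads by index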
  22. The poor AE they had staffing the booth at NI Week said it was in memory. I would normally take that with a grain of salt, but when I was evaluating it I don't remember seeing a path input for where to store any kind of database or file, so if it is persistent, it's a mystery to me. In-memory would also make the most sense to me, since it asks for a number of elements for each thing you want to store, so it probably just creates a fixed-size buffer -- not all that useful for a long-term historical DB. That's why in my post above I was suggesting some sort of 3rd-party tool for a more permanent storage mechanism. For my use (I haven't written it yet, but I have a near-term plan), I would only use the historical information for that sort of short-term purpose -- so if I have multiple HMIs, they don't all need to buffer values themselves, and if an HMI restarts it has a centralized cache to pull from. Based on the API, it looks like it should be feasible to write the entire historical content to disk right before you shut down the server instance and then reload it when you restart it. Of course, that depends on a clean shutdown...
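      If it really is just a fixed-size in-memory buffer, that save-on-shutdown idea could look something like this sketch (plain Python standing in for whatever the actual API exposes):

          import json
          from collections import deque

          # Fixed-size buffer, like an in-memory tag historian.
          history = deque(maxlen=10000)
          history.append({"tag": "motor.speed", "value": 42.0, "t": 1234567890.0})

          # On clean shutdown: dump the whole buffer to disk...
          with open("history.json", "w") as f:
              json.dump(list(history), f)

          # ...and on restart, reload it so clients still see the back-history.
          with open("history.json") as f:
              history = deque(json.load(f), maxlen=10000)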
  23. I'm familiar with it. My reaction at the time was that it seemed like another black box doing tag-based communication via a background service, replacing the shared variable black box doing tag-based communication via a background service. I'd love for the new black box to be better than the old, but I'm skeptical. It definitely appears to resolve some issues with shared variables (I believe they can be created programmatically), but it's not clear how you might integrate it with non-LV applications; time will tell on performance, and time will really tell on the usability, reliability, etc. that variables are often questioned on (a straightforward example: how do I find out that my peer is dead/disconnected? How long does it take to be detected as dead? etc.). It's also not clear why we'd want to use a custom proprietary tag protocol when NI is also investing in OPC UA.
  24. I believe there is a helper as well in Darren's Hidden Gems package in VIPM, and an example of the usage in the actor framework sample project scripting library.