I (re)watched James Powell's talk at GDevCon#2 about Application Design Around SQLite. I really like this idea: I have an application with lots of data (from serial devices and software configuration) that's needed in several areas of the application (and by external applications), and his talk was a 'light-bulb' moment. I thought I could have a centralized SQLite database that all the modules access to select / update data.
He said the database can be the 'model' in the model-view-controller design pattern because the database is very fast. So you can collect data in one actor and publish it directly to the DB, have another actor read the data directly from the DB, and get the added benefit that another application can view the data too.
Link to James' talk: https://www.youtube.com/watch?v=i4_l-UuWtPY&t=1241s
I created a basic proof of concept which launches N processes: some generate data (publish to the database) and others act as a UI (read data from the database and update configuration settings in the DB, like a set-point). However, after launching a couple of processes I ran into 'Database is locked (error 5)', and I realized two things: SQLite databases aren't magically able to handle N concurrent readers/writers, and I'm not using them right... (I hope).
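To make the failure mode concrete, here is a minimal sketch using Python's sqlite3 module (not the LabVIEW PoC; the file, table and column names are invented). SQLite allows only one writer at a time, so a second connection that tries to write while another transaction holds the write lock fails with SQLITE_BUSY, which surfaces as the same 'database is locked' message:

import os
import sqlite3
import tempfile

# One connection holds a write transaction while a second tries to write,
# reproducing SQLITE_BUSY / "database is locked" (error 5).
db_path = os.path.join(tempfile.gettempdir(), "poc_demo.db")

writer = sqlite3.connect(db_path, timeout=0, isolation_level=None)  # autocommit; explicit transactions below
writer.execute("CREATE TABLE IF NOT EXISTS current_value (id INTEGER PRIMARY KEY, value REAL)")
writer.execute("INSERT OR REPLACE INTO current_value (id, value) VALUES (1, 0.0)")
writer.execute("BEGIN IMMEDIATE")  # take the write lock and hold it open
writer.execute("UPDATE current_value SET value = 1.23 WHERE id = 1")

other = sqlite3.connect(db_path, timeout=0, isolation_level=None)   # timeout=0: fail instead of retrying
try:
    other.execute("UPDATE current_value SET value = 4.56 WHERE id = 1")
except sqlite3.OperationalError as err:
    print(err)  # -> "database is locked"
finally:
    writer.execute("ROLLBACK")
    writer.close()
    other.close()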
I've created a schematic (attached) to show what I did in the PoC (that was getting 'Database is locked (error 5)' errors).
I'm a solo-developer (and SQLite first-timer*) and would really appreciate it if someone could look over the schematic and give me guidance on how it should be done. There's a lot more to the actual application, but I think once I understand the limitations of the DB I'll be able to work with it.
*I've done SQL training courses.
In the actual application, the UI and business logic are on two completely separate branches (I only connected them to a single actor for the PoC).
Some general questions / thoughts I had:
- Is the SQLite-based application design something worth pursuing / is it a sensible design choice?
- Instead of creating lots of tables (when I launch the actors), should I instead make separate databases, to reduce the number of requests per DB? (I shouldn't think so... but worth asking.)
- When generating data, I'm using UPDATE to change a single row in a table (the current value), and I'm then reading that single row in other areas of the code. (If logging is needed, I create a trigger to copy the data to a separate table.) Would it be better to INSERT data, have the other modules read the max RowId for the current value, and periodically delete old rows? (Both patterns are sketched below.)
- The more clones I had, the slower the UI seemed to update (it should have updated 10 times per second, but slowed to once every 3 seconds). I was under the impression that you can do thousands of transactions per second, so I think I'm querying the DB inefficiently.

The two main reasons why I like the database approach are:

- External applications will need to 'tap into' the data; if they could get to it via an SQL query, that would be ideal.
- Data-logging is a big part of the application.

Any advice you can give would be much appreciated.
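To make the comparison concrete, here is a rough sketch of both patterns using Python's sqlite3 module (the schema, table names and columns are illustrative only, not the actual PoC):

import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    -- Pattern A: one row per channel holds the current value; a trigger copies
    -- every change into a log table for data-logging.
    CREATE TABLE current_value (channel TEXT PRIMARY KEY, value REAL, updated_at TEXT);
    CREATE TABLE value_log     (channel TEXT, value REAL, updated_at TEXT);
    CREATE TRIGGER log_on_update AFTER UPDATE ON current_value
    BEGIN
        INSERT INTO value_log VALUES (NEW.channel, NEW.value, NEW.updated_at);
    END;

    -- Pattern B: append-only INSERTs; readers take the newest row and old rows
    -- are pruned periodically.
    CREATE TABLE readings (channel TEXT, value REAL, updated_at TEXT);
""")

# Pattern A: the generator UPDATEs the single row, the UI SELECTs it.
conn.execute("INSERT INTO current_value VALUES ('temp', 0.0, datetime('now'))")
conn.execute("UPDATE current_value SET value = ?, updated_at = datetime('now') WHERE channel = 'temp'", (21.5,))
print(conn.execute("SELECT value FROM current_value WHERE channel = 'temp'").fetchone())

# Pattern B: the generator INSERTs, the UI reads the newest RowId, and a
# housekeeping step deletes everything but the latest row per channel.
conn.execute("INSERT INTO readings VALUES ('temp', ?, datetime('now'))", (21.6,))
print(conn.execute("SELECT value FROM readings WHERE channel = 'temp' ORDER BY rowid DESC LIMIT 1").fetchone())
conn.execute("DELETE FROM readings WHERE rowid NOT IN (SELECT MAX(rowid) FROM readings GROUP BY channel)")
conn.commit()

Pattern A keeps the live table at one row per channel and pushes history into the log table; pattern B needs the periodic DELETE to keep the table from growing without bound.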
(I'm using quite a few reuse libraries so I can't easily share the code; however, if it would be beneficial, I could rework the PoC to just use 'Core-LabVIEW' and James Powell's SQLite API.)
By Ryan Vallieu
I have seemingly found an issue with the shipping example code for Nested Malleable VIs. Another user has verified that he saw the same behavior in 2019.
I am working through the examples and the presentation from NIWeek 2019. In running the Lesson 2b code (C:\Program Files (x86)\National Instruments\LabVIEW 2019\examples\Malleable VIs\Nested Malleable VIs) I found the Equals.vi in the class was not being leveraged and the search failed. When I went to my LabVIEW 2018 machine and ran the Lesson 2b.vi the code worked to find the element by correctly leveraging the in-class Equals.vi.
One difference I see is that in the 2018 example the Equal.vi is in the example folder with the code, while in 2019 the Equal.vi has been moved to VI.lib; otherwise the code looks to be the same. The Equals.vi code looks identical, and the calling VIMs look identical. I posted on the LabVIEW NI.com forum here:
I am trying to determine what may have broken or changed between the 2018 and 2019 implementations; visually, the code looks the same.
I have been programming with LabVIEW for around 2 years and recently stumbled upon LVOOP.
I am required to write a communication protocol to work with a micro-controller, which will later also be used for ATP and debugging purposes.
I want to build the program "correctly" from the beginning so it will be maintainable and flexible to additions and changes.
My natural way of building the program would have been a queued state machine with several loops, each loop in charge of a different module (one for the GUI, obviously), but as I stated at the beginning, I want to use LVOOP.
Does anyone have an LVOOP project I can use as a reference? I've searched online and found some nice examples, but they are small and only teach you the basic stuff.
For me it's important to see how to use the project tree wisely, where to place the classes, and how the managing loop works, and to learn as much as possible before I create one of my own.
Thanks in advance,
By A Scottish moose
TL;DR - Any thoughts on whether it's a good or bad idea to spin off an actor from a QMH framework? I've got some library code that would be valuable, but the project itself doesn't justify AF.
I'm brainstorming ideas for a tester that I'll be building over the next few months. The project that I'm currently winding down is an Actor Framework test system that has come out really nicely. Part of the project was an actor built to handle all of the DAQ and digital control. The main actor spins it off and tells it when and what tasks to launch. It sends back data and confirms commands, uses dynamic events to keep up with the generated signals, and uses actor tasks to ship the data back to the controller. Works amazingly well. Nothing revolutionary for sure, but very handy.
This next project doesn't really need Actor Framework; it's much smaller and has a much smaller list of requirements. That being said, I'm curious about integrating my DAQ actor (Dactor, because who can resist an amalgamation?!) into the project. Any thoughts on whether it's a good or bad idea to spin off an actor from a QMH framework? Is this even possible, given how the AF tree is designed to work?
Thanks for reading!
I have an array of classes, let's call the object TestPass, of size 1 (but it is an array because it can scale out to multiple test passes). In this class there is one other nested class, which is not too complex, then various numeric and string fields to hold some private data. There is also an array of clusters; each cluster contains a string, two XY pair clusters, and an integer. Not very confusing.
This array of clusters gets fairly large, however: upwards of 80-100k elements. What I am finding is that when I index the array of pass classes, it is crazy slow, on the order of 30 ms. That doesn't seem like much, but we index the array in our "Get Current Pass" method, which is used in various places throughout our code. This is adding potentially hours to our test time over the 80k devices we are testing.
So, I started digging. When I flatten the class to a string and get the length, it's 3 MB. But when I run the function with the profiler, it is allocating close to 20 MB of memory!
My gut feel was that the string is causing the issues. So I removed the string from the cluster and the index time went to 0 ms.
Luckily we can normalize a bit and pull the strings out of the cluster since a lot of them are duplicates. But it makes our data model a bit uglier.
Has anyone seen this kind of performance issue before? I saw it in both 2013 and 2017.