I (re)watched James Powell's talk at GDevCon#2 about Application Design Around SQLite. I really like this idea: I have an application with lots of data (from serial devices and software configuration) that's needed in several areas of the application (and by external applications), and his talk was a 'light-bulb' moment. I realized I could have a centralized SQLite database that all the modules access to select / update data.
He said the database could be the 'model' in the model-view-controller design pattern because the database is very fast. So you can collect data in one actor and publish it directly to the DB, and have another actor read the data directly from the DB, with a benefit of having another application being able to view the data.
Link to James' talk: https://www.youtube.com/watch?v=i4_l-UuWtPY&t=1241s
I created a basic proof of concept that launches N processes: some generate data (publish to the database) and others act as a UI (read data from the database and update configuration settings in the DB, like a set-point). However, after launching a couple of processes I ran into 'Database is locked (error 5)', and I realized two things: SQLite databases aren't magically able to support N concurrent readers/writers, and I'm not using them right... (I hope).
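For what it's worth, the usual first-line fixes for 'Database is locked' are WAL journal mode (readers no longer block the writer) and a busy timeout (connections retry instead of failing immediately). Here is a minimal sketch using Python's built-in sqlite3 module, standing in for whatever the LabVIEW SQLite API does under the hood; the file path, table, and values are hypothetical:

```python
import sqlite3
import os
import tempfile

# Hypothetical path standing in for the PoC's shared database file.
db_path = os.path.join(tempfile.gettempdir(), "poc.db")

def connect(path):
    conn = sqlite3.connect(path, timeout=5.0)
    # WAL mode lets readers proceed while a single writer commits,
    # avoiding most "database is locked" (error 5) failures.
    conn.execute("PRAGMA journal_mode=WAL")
    # Retry for up to 5 s when another connection holds the write lock.
    conn.execute("PRAGMA busy_timeout=5000")
    return conn

# One actor writes the current value...
writer = connect(db_path)
writer.execute(
    "CREATE TABLE IF NOT EXISTS current_value (id INTEGER PRIMARY KEY, value REAL)")
writer.execute("INSERT OR REPLACE INTO current_value VALUES (1, 42.0)")
writer.commit()

# ...and a second, independent connection (another actor or an external
# application) reads it concurrently.
reader = connect(db_path)
print(reader.execute("SELECT value FROM current_value WHERE id = 1").fetchone()[0])
```

SQLite still allows only one writer at a time even in WAL mode, so each writing actor should keep its transactions short; the busy timeout then makes concurrent writers queue instead of erroring.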
I've created a schematic (attached) to show what I did in the PoC (that was getting 'Database is locked (error 5)' errors).
I'm a solo-developer (and SQLite first-timer*) and would really appreciate it if someone could look over the schematic and give me guidance on how it should be done. There's a lot more to the actual application, but I think once I understand the limitations of the DB I'll be able to work with it.
*I've done SQL training courses.
In the actual application, the UI and business logic are on two completely separate branches (I only connected them to a single actor for the PoC)
Some general questions / thoughts I had:
Is the SQLite-based application design something worth pursuing / is it a sensible design choice? Instead of creating lots of tables (when I launch the actors), should I instead make separate databases, to reduce the number of requests per DB? (I shouldn't think so, but worth asking.) When generating data, I'm using UPDATE to change a single row in a table (the current value), then reading that single row in other areas of code. (If logging is needed, I create a trigger to copy the data to a separate table.) Would it be better to INSERT data, have the other modules read the max RowId for the current value, and periodically delete old rows? The more clones I had, the slower the UI seemed to update (it should have updated 10 times/second, but slowed to once every 3 seconds). I was under the impression that you can do thousands of transactions per second, so I think I'm querying the DB inefficiently. The two main reasons why I like the database approach are:
External applications will need to 'tap into' the data; if they could get it via an SQL query, that would be ideal. Data-logging is a big part of the application. Any advice you can give would be much appreciated.
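To make the UPDATE-plus-trigger pattern concrete, here is a sketch of it in Python's sqlite3 (table, column, and channel names are all hypothetical; the LabVIEW API would issue the same SQL). The current-value table holds one row per channel, and a trigger copies every update into a log table for the data-logging side:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE current_value (channel TEXT PRIMARY KEY, value REAL, ts REAL);
    CREATE TABLE value_log     (channel TEXT, value REAL, ts REAL);

    -- Copy every update of the current value into the log table,
    -- so loggers and external applications can query history.
    CREATE TRIGGER log_update AFTER UPDATE ON current_value
    BEGIN
        INSERT INTO value_log VALUES (NEW.channel, NEW.value, NEW.ts);
    END;
""")

conn.execute("INSERT INTO current_value VALUES ('temp', 20.0, 0.0)")
conn.execute("UPDATE current_value SET value = 21.5, ts = 1.0 WHERE channel = 'temp'")
conn.commit()

# The UI reads the single current-value row; loggers read value_log.
print(conn.execute("SELECT value FROM current_value WHERE channel = 'temp'").fetchone())
```

On the 'thousands of transactions per second' point: that figure generally assumes batching, i.e. many statements per COMMIT. Committing every single-row UPDATE individually costs a disk sync each time, which may explain the UI slowdown as more writers were added; grouping a burst of writes into one transaction is usually the first optimization to try.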
(I'm using quite a few reuse libraries so I can't easily share the code, however, if it would be beneficial, I could re-work the PoC to just use 'Core-LabVIEW' and James Powell's SQLite API)
Out of the box text in the icon editor looks awful. (See attached image, which is better looking than most.)
(Yes, even with small fonts: https://forums.ni.com/t5/Linux-Users/Labview-Icons-under-GNOME/gpm-p/3379530.)
LabVIEW 2016 64-bit, CentOS 7 Linux OS
We have tried many things to get this to work, to no avail.
Does anyone have a solution?
I wanted to cross-post metux's discovery here ASAP and have a separate discussion.
Metux's original post:
The recent Linux driver package introduces a CRITICAL security vulnerability:
It adds additional yum/zypper repos, but explicitly disables package signing and uses unencrypted HTTP transport. That makes it pretty trivial to completely take over affected systems by injecting malicious packages.
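If you want to check whether your own system has repos configured this way, a small sketch like the following scans a yum-style repo directory for the two problems described (the directory and repo names in the demo are hypothetical; point it at /etc/yum.repos.d on a real system):

```python
import configparser
import glob
import os
import tempfile

def unsafe_repos(repo_dir):
    """Return (repo_id, reason) pairs for repo sections that disable GPG
    signature checking or fetch packages over plain HTTP."""
    findings = []
    for path in glob.glob(os.path.join(repo_dir, "*.repo")):
        cp = configparser.ConfigParser()
        cp.read(path)
        for section in cp.sections():
            if cp.get(section, "gpgcheck", fallback="1") == "0":
                findings.append((section, "gpgcheck disabled"))
            if cp.get(section, "baseurl", fallback="").startswith("http://"):
                findings.append((section, "unencrypted HTTP baseurl"))
    return findings

# Demo: a repo file exhibiting both problems (hypothetical repo id).
demo_dir = tempfile.mkdtemp()
with open(os.path.join(demo_dir, "ni.repo"), "w") as f:
    f.write("[ni-drivers]\nbaseurl=http://example.com/repo\ngpgcheck=0\n")

print(unsafe_repos(demo_dir))
```

This only inspects the declarative repo config; it does not prove a system is compromised, just that it accepts unsigned packages over a tamperable transport.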
DO NOT INSTALL THIS BROKEN SOFTWARE - IT IS DANGEROUS!
CERT and BSI are already notified.
More out of curiosity than of hope: does anybody have any idea why SVs (shared variables) are almost unsupported on Linux? By 'almost' I mean that controls and indicators cannot be bound to shared variables, and that shared variables cannot be programmatically created or looked up. I know that SVs hosted on Windows can be accessed in Linux LV using DataSocket nodes, but that is as far as it goes. And it has been said that DataSocket is despicable. What are the missing pieces that make SVs Windows-only?
I didn't find much in the canonical places, so I posted a dumb zero-kudos attracting idea.
By Alexander Kocian
Currently, I stream and process audio (for medical purposes) on my PC (i7-4790T with 8 cores) using LabVIEW 2013. To improve performance, the 8 cores could be shared between MS Windows and the real-time operating system RTX by IntervalZero.
Please, how can I tell LabVIEW to use the (deterministic) RTX cores instead of the (stochastic) MS Windows cores to stream audio?