  • Similar Content

    • By McQuillan
      Hi Everyone,
      I (re)watched James Powell's talk at GDevCon#2 about Application Design Around SQLite. I really like this idea as I have an application with lots of data (from serial devices and software configuration) that's all needed in several areas of the application (and external applications) and his talk was a 'light-bulb' moment where I thought I could have a centralized SQLite database that all the modules could access to select / update data.
      He said the database could be the 'model' in the model-view-controller design pattern because the database is very fast. So you can collect data in one actor and publish it directly to the DB, and have another actor read the data directly from the DB, with a benefit of having another application being able to view the data.
      Link to James' talk: https://www.youtube.com/watch?v=i4_l-UuWtPY&t=1241s
       
      I created a basic proof of concept which launches N processes to generate data (publish to the database) and others to act as a UI (read data from the database and update configuration settings in the DB, like set-points). However, after launching a couple of processes I ran into 'Database is locked (error 5)', and I realized two things: SQLite databases aren't magically able to serve N concurrent readers/writers, and I'm not using them right... (I hope).
      I've created a schematic (attached) to show what I did in the PoC (that was getting 'Database is locked (error 5)' errors).
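      For what it's worth, the usual first fix for error 5 with multiple connections is SQLite's WAL journal mode plus a busy timeout, so a writer waits on a lock instead of failing immediately. A minimal sketch in Python's standard-library sqlite3; the file, table, and channel names are just placeholders:

        import sqlite3

        # Each process opens its own connection to the same file;
        # timeout makes a locked writer retry for up to 5 s (error 5
        # is only raised after the timeout expires).
        conn = sqlite3.connect("poc.db", timeout=5.0)

        # WAL lets many readers run alongside a single writer, instead
        # of the default rollback journal, which blocks readers while
        # a write is in progress.
        conn.execute("PRAGMA journal_mode=WAL")

        conn.execute(
            "CREATE TABLE IF NOT EXISTS current_value "
            "(channel TEXT PRIMARY KEY, value REAL)"
        )

        # 'with conn' wraps the statement in one transaction.
        with conn:
            conn.execute(
                "INSERT OR REPLACE INTO current_value (channel, value) "
                "VALUES (?, ?)",
                ("temperature", 42.0),
            )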
      I'm a solo-developer (and SQLite first-timer*) and would really appreciate it if someone could look over the schematic and give me guidance on how it should be done. There's a lot more to the actual application, but I think once I understand the limitations of the DB I'll be able to work with it.
      *I've done SQL training courses.
      In the actual application, the UI and business logic are on two completely separate branches (I only connected them to a single actor for the PoC) 
      Some general questions / thoughts I had:
      • Is the SQLite-based application design something worth pursuing / is it a sensible design choice?
      • Instead of creating lots of tables (when I launch the actors), should I instead make separate databases, to reduce the number of requests per DB? (I shouldn't think so... but worth asking.)
      • When generating data, I'm using UPDATE to change a single row in a table (the current value), and I'm then reading that single row in other areas of code. (Then, if logging is needed, I create a trigger to copy the data to a separate table.) Would it be better if I INSERT data, have the other modules read the max RowId for the current value, and periodically delete rows?
      • The more clones I had, the slower the UI seemed to update (it should have updated 10 times/second, but it slowed to updating every 3 seconds). I was under the impression that you can do thousands of transactions per second, so I think I'm querying the DB inefficiently. (A sketch of the INSERT-and-batch variant follows this list.)
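      A sketch of that INSERT-and-read-latest variant, with writes batched into a single transaction, which is where the "thousands of transactions per second" figure comes from; again Python's standard sqlite3, with invented table and column names:

        import sqlite3

        conn = sqlite3.connect("poc.db", timeout=5.0)
        conn.execute("PRAGMA journal_mode=WAL")
        conn.execute(
            "CREATE TABLE IF NOT EXISTS samples (channel TEXT, value REAL)"
        )

        # One transaction per sample is what makes throughput crawl;
        # batching many INSERTs into one transaction restores it.
        samples = [("temperature", 20.0 + i * 0.1) for i in range(1000)]
        with conn:
            conn.executemany(
                "INSERT INTO samples (channel, value) VALUES (?, ?)", samples
            )

        # Readers take the newest row per channel as the current value.
        latest = conn.execute(
            "SELECT value FROM samples WHERE channel = ? "
            "ORDER BY rowid DESC LIMIT 1",
            ("temperature",),
        ).fetchone()

        # Periodic cleanup keeps the table from growing without bound.
        with conn:
            conn.execute(
                "DELETE FROM samples WHERE rowid <= "
                "(SELECT MAX(rowid) - 100000 FROM samples)"
            )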
      The two main reasons why I like the database approach are:
      • External applications will need to 'tap into' the data; if they could get to it via an SQL query, that would be ideal.
      • Data-logging is a big part of the application.
      Any advice you can give would be much appreciated.
      Cheers,
      Tom
      (I'm using quite a few reuse libraries so I can't easily share the code; however, if it would be beneficial, I could re-work the PoC to just use 'Core-LabVIEW' and James Powell's SQLite API.)

    • By Alexander Kocian
      Hello
      Currently, I stream and process audio (for medical purposes) on my PC (i7-4790T with 8 cores) using LabVIEW 2013. To improve performance, the 8 cores could be shared between MS Windows and the real-time operating system RTX by IntervalZero.
      Please, how can I tell LabVIEW to use the (deterministic) RTX cores instead of the (stochastic) MS Windows cores to stream audio?
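      Which cores RTX takes for itself is configured in IntervalZero's own tools rather than in LabVIEW; the one piece you can do from the Windows side is pin the LabVIEW process to the cores left to Windows so the scheduler keeps it off the real-time cores. A hedged sketch using the third-party psutil package; the process name and core numbers are only an example:

        import psutil  # third-party: pip install psutil

        # Pin LabVIEW to the cores left to Windows (here 0-3), keeping
        # it away from the cores dedicated to RTX (hypothetical split).
        for proc in psutil.process_iter(["name"]):
            if proc.info["name"] == "LabVIEW.exe":
                proc.cpu_affinity([0, 1, 2, 3])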
    • By Zyl
      Hi everybody,
      Currently working on VeriStand custom devices, I'm facing a 'huge' problem when debugging the code I make for the Linux RT target. The console is not available on such targets, and I do not want to fall back to the serial port and HyperTerminal-like programs (damn, we are in the 21st century!)...
      Several years ago (2014, if I remember well) I posted a request on the Idea Exchange forum on NI's website to get the console back on Linux targets. NI agreed with the idea and it has been 'in development' since then. It seems to be so hard to do that it is taking years to bring this simple feature back to life.
      On my side, I developed a web-based console: an HTML page displaying strings received through a WebSocket link. Pretty easy and fast, but the integration effort (start server, close server, handle push requests, ...) must be done for every single piece of code I create for such targets. (A minimal sketch of the receiving side is below.)
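      The desktop side of such a console can be tiny; a minimal sketch of a listener that prints whatever the target pushes over the link, in Python with the third-party websockets package (the port number is arbitrary):

        import asyncio
        import websockets  # third-party: pip install websockets

        async def console(websocket):
            # Print every debug string the RT target pushes over the link.
            async for message in websocket:
                print(message)

        async def main():
            async with websockets.serve(console, "0.0.0.0", 8765):
                await asyncio.Future()  # serve forever

        asyncio.run(main())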
      Do you have any good tricks to debug your code running on a Linux target?
    • By dterry
      Hello again LAVAG,
      I'm currently feeling the pain of propagating changes to multiple, slightly different configuration files, and am searching for a way to make things a bit more palatable.
      To give some background, my application is configuration driven in that it exists to control a machine which has many subsystems, each of which can be configured in different ways to produce different results.  Some of these subsystems include: DAQ, Actuator Control, Safety Limit Monitoring, CAN communication, and Calculation/Calibration.  The current configuration scheme is that I have one main configuration file, and several sub-system configuration files.  The main file is essentially an array of classes flattened to binary, while the sub-system files are human readable (INI) files that can be loaded/saved from the main file editor UI.  It is important to note that this scheme is not dynamic; or to put it another way, the main file does not update automatically from the sub-files, so any changes to sub-files must be manually reloaded in the main file editor UI.
      The problem in this comes from the fact that we periodically update calibration values in one sub-config file, and we maintain safety limits for each DUT (device under test) in another sub-file.  This means that we have many configurations, all of which must be updated when a calibration changes.
      I am currently brainstorming ways to ease this burden, while making sure that the latest calibration values get propagated to each configuration, and was hoping that someone on LAVAG had experience with this type of calibration management.  My current idea has several steps:
      1. Rework the main configuration file to be human readable.
      2. Store file paths to sub-files in the main file instead of storing the sub-file data. Load the sub-file data when the main file is loaded.
      3. Develop a set of default sub-files which contain basic configurations and calibration data.
      4. Set up the main file loading routine to pull from the default sub-files unless a unique sub-file is specified.
      5. Store only the parameters that differ from the default values in the unique sub-file. Load the default values first, then overwrite only the unique values. This works similarly to the way LabVIEW.ini works: if you do not specify a key, LabVIEW uses its internal default.
      This approach has two advantages:
      • Allows calibration and other base configuration changes to easily propagate through to other configs.
      • Allows the user to quickly identify configuration differences.
      Steps 4 and 5 are really the crux of making life easier, since they allow global changes to all configurations (a sketch of that layered loading follows below).  One thing to note here is that these configurations are stored in an SVN repository to allow versioning and recovery if something breaks.
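      A minimal sketch of steps 4 and 5 using Python's configparser, where a later read() simply overwrites whichever keys it finds, so the unique file only needs the values that differ; the file names are hypothetical:

        from configparser import ConfigParser

        def load_config(default_path, unique_path=None):
            cfg = ConfigParser()
            cfg.read(default_path)      # full baseline: calibration, limits, ...
            if unique_path:
                cfg.read(unique_path)   # sparse per-DUT overrides win
            return cfg

        cfg = load_config("defaults.ini", "dut_0042.ini")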
      So my questions to LAVAG are:
      • Has anyone ever encountered a need to propagate configuration changes like this? How did you handle it?
      • Does the proposal above seem feasible?
      • What gotchas have I missed that will make my life miserable in the future?
      Thanks in advance everyone!
      Drew
    • By Gab
      Hello Everyone,
      I need a little suggestion about object-oriented design with the producer/consumer design pattern.
      I have one MAIN program and seven other modules that are controlled by the MAIN program (meaning it sends messages or commands by queue to each module), and it uses the producer/consumer design pattern. Now I want to convert this application to OOP, but my doubt is: if I convert it to OOP, then each case in the consumer loop becomes a class (Command pattern). In that case I would have more than 100-200 classes in the entire application (see the sketch below).
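      For reference, a minimal Python sketch of that Command pattern, with invented message names; each case of the consumer loop becomes one small class:

        from abc import ABC, abstractmethod
        from queue import Queue

        class Message(ABC):
            @abstractmethod
            def execute(self, module):
                """Run this command against the module's state."""

        class StartAcquisition(Message):
            def execute(self, module):
                module.running = True

        class StopAcquisition(Message):
            def execute(self, module):
                module.running = False

        def consumer_loop(q: Queue, module):
            while True:
                # The whole case structure collapses to one dispatch.
                q.get().execute(module)

      Each class is tiny, so a large count is mostly bookkeeping rather than complexity; grouping related commands into one library per module keeps it manageable.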
      Is it a good idea to have such a big number of classes in an application? Or am I misunderstanding something?
      Looking forward to your suggestions. Thanks!