
Hi everyone,

I'd like to open the floor for your opinions and past experiences with designing a network communications architecture.

- There will be one server, written in LabVIEW, running on a Windows-based PC.

- There will be multiple remote clients programmed in LabVIEW on cRIOs.

- All devices will be connected via a wireless network, and all cRIO clients should have good throughput to the server.

- It should be designed for bidirectional data flow; however, the clients will do most of the talking.

- Data sent will include status packets, images, PDF documents, and other information.

- Clients will not be sending data continuously, as in a typical DAQ system, but will instead report on events.

I'm leaning towards the TCP socket option, but would like to consider higher-level NI-proprietary options, such as Network Shared Variables or Network Streams, which I haven't had much exposure to.

Thanks for your opinions.

Brenton


I have implemented a system based on TCP communication in a similar way to the STM Reference Design from NI. Technically, however, the cRIO (Compact FieldPoint) is the server :D and the PC(s) are the clients. This has worked out quite well for isolated systems, meaning I haven't used it with a multitude of RT controllers on the same subnet. Instead, what I typically have is one or two RT controllers that don't really talk to each other, and one or more operator stations and touch panel monitors that communicate with the controller(s) over this link. The communication protocol of course allows data transfer for the underlying tag-based system, similar to the CVT Reference Design, as well as resetting and shutting down the controller and updating the CVT tag configuration. Since it only operates on isolated subnets, I have not implemented any form of authentication in the protocol itself.
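
For anyone unfamiliar with STM-style messaging, here is a minimal sketch of the idea in Python (LabVIEW being graphical, a text language has to stand in). It assumes a frame of a 4-byte payload length plus a 2-byte metadata ID, which is the general shape of STM's framing rather than its exact specification:

    import socket
    import struct

    # One frame: 4-byte big-endian payload length, 2-byte metadata ID, payload.
    # The exact header layout here is an illustrative assumption, not NI's spec.

    def send_message(sock: socket.socket, meta_id: int, payload: bytes) -> None:
        header = struct.pack(">IH", len(payload), meta_id)
        sock.sendall(header + payload)

    def recv_exact(sock: socket.socket, n: int) -> bytes:
        # TCP is a byte stream, so keep reading until the full field arrives.
        buf = b""
        while len(buf) < n:
            chunk = sock.recv(n - len(buf))
            if not chunk:
                raise ConnectionError("peer closed the connection")
            buf += chunk
        return buf

    def recv_message(sock: socket.socket) -> tuple[int, bytes]:
        length, meta_id = struct.unpack(">IH", recv_exact(sock, 6))
        return meta_id, recv_exact(sock, length)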

NSVs, or their bigger brother Network Streams, are interesting for quickly putting a system together, but I like to have more control over how the system is configured and operates. I have even created a small Android client that communicates directly through my protocol, something you simply can't do with proprietary, closed-source protocols.
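
That openness is the whole point: any device that can open a TCP socket can speak the protocol. A hypothetical third-party client using the same framing as the sketch above would be only a few lines (the address, port, and metadata ID are made-up values):

    import socket
    import struct

    # Hypothetical client: sends one length-prefixed status frame and exits.
    payload = b"STATUS:pump_ok"
    with socket.create_connection(("192.168.0.10", 6341)) as sock:
        sock.sendall(struct.pack(">IH", len(payload), 1) + payload)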


That flexibility is the main reason I'm leaning towards a lower-level design. I'd like to be able to use third-party devices at a later stage; it would be nice, but not a requirement.

You might want to take a look at Dispatcher in the Code Repository (CR), which may give you most of what you want.

This looks very interesting; I'll take a deeper look.

  • 3 weeks later...

For large-scale distributed systems, has anyone run into a limitation on the maximum number of allowable open TCP/IP ports on LVRT?

I am imagining a system that has a large number of cRIO servers connected to a single LVRT client.

  • 2 weeks later...

Hi guys,

I've developed a few remote monitoring systems in the past. One of them used a PXI RT controller and the rest were cRIOs. The approach and architecture were based on things I've read on ni.com and this forum. In the process there were many difficulties and some extensive troubleshooting exercises I had to do. As for the results: while the systems worked and met the users' requirements, they didn't meet my own expectations. For instance, I was hoping that a system could be expanded (adding more cRIOs or PXIs) with ease and little or no re-programming effort. Anyway, 2-3 years have passed and opportunities with similar requirements have emerged, so I would like to start thinking about the architecture at an early stage (i.e. now).

In my past systems I've used NSVs a lot, and they gave me a lot of headaches too. Some of the troubles I had were:

1. I couldn't decide whether to lump all shared variables into one library and host them on one system, or to separate them into various libraries and systems... nor do I know what the best approach is, as I've read too many 'suggestions' and pieces of 'advice'.

2. Some of the SVs were created from custom controls, and those controls are type-defined. When running the VI on the RT target with these SVs from the development environment, everything worked smoothly, but when I compiled and deployed, the program didn't run. After extensive troubleshooting I found that it had something to do with these SVs, because when I removed the type-defs from the custom controls and recreated my SVs, everything worked fine. I suspect this may have something to do with how I deploy, but after trying several approaches the problem still persisted.

3. The best and most common of all is unstable connectivity: it works today, but that doesn't guarantee it will work tomorrow. When the host PC changes, the same problems resurface. I read somewhere that I need to read or interface with the .alias file, but this works sometimes and, at other times, the same problem persists.

Attached is the most common architecture I've used in the past. I would like to move away from NSVs as much as possible. If the application is 1:1 there's no problem, as I can easily use TCP/IP and Network Streams. However, my doubts and headaches come when the RT-to-host communication is 1:N, N:N or N:1. I've read on ni.com that there are various newer approaches to this, such as STM, AMC (derived from UDP), and Web Services (or was it HTTP?).
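
For what it's worth, plain TCP handles the N:1 case naturally: the host listens on one port and every cRIO simply connects to it. A minimal sketch in Python of one listener multiplexing many clients (the port and the STM-style framing are illustrative assumptions, and for brevity it assumes each frame arrives whole; a real server would buffer partial reads):

    import selectors
    import socket
    import struct

    sel = selectors.DefaultSelector()

    def accept(listener: socket.socket) -> None:
        conn, addr = listener.accept()
        conn.setblocking(False)
        sel.register(conn, selectors.EVENT_READ, read)
        print("client connected:", addr)

    def read(conn: socket.socket) -> None:
        header = conn.recv(4)            # 4-byte big-endian payload length
        if not header:
            sel.unregister(conn)         # client disconnected
            conn.close()
            return
        (length,) = struct.unpack(">I", header)
        payload = conn.recv(length)      # simplified: assumes the whole frame is here
        print("event:", payload)

    listener = socket.socket()
    listener.bind(("0.0.0.0", 6341))
    listener.listen()
    listener.setblocking(False)
    sel.register(listener, selectors.EVENT_READ, accept)

    while True:
        for key, _ in sel.select():
            key.data(key.fileobj)        # dispatch to accept() or read()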

I'd really appreciate it if you guys could share your thoughts and advice here.

Remote Mon Arch.pdf



  • Similar Content

    • By McQuillan
      Hi Everyone,
      I (re)watched James Powell's talk at GDevCon#2 about Application Design Around SQLite. I really like this idea as I have an application with lots of data (from serial devices and software configuration) that's all needed in several areas of the application (and external applications) and his talk was a 'light-bulb' moment where I thought I could have a centralized SQLite database that all the modules could access to select / update data.
      He said the database could be the 'model' in the model-view-controller design pattern because the database is very fast. So you can collect data in one actor and publish it directly to the DB, and have another actor read the data directly from the DB, with a benefit of having another application being able to view the data.
      Link to James' talk: https://www.youtube.com/watch?v=i4_l-UuWtPY&t=1241s
       
      I created a basic proof of concept which launches N processes to generate data (publish to the database) and others to act as a UI (read data from the database and update configuration settings in the DB, like set-points). However, after launching a couple of processes I ran into 'Database is locked (error 5)', and I realized two things: SQLite databases aren't magically able to support N concurrent readers/writers, and I'm not using them right... (I hope).
      I've created a schematic (attached) to show what I did in the PoC (that was getting 'Database is locked (error 5)' errors).
      I'm a solo-developer (and SQLite first-timer*) and would really appreciate it if someone could look over the schematic and give me guidance on how it should be done. There's a lot more to the actual application, but I think once I understand the limitations of the DB I'll be able to work with it.
      *I've done SQL training courses.
      In the actual application, the UI and business logic are on two completely separate branches (I only connected them to a single actor for the PoC) 
      Some general questions / thoughts I had:
      Is the SQLite based application design something worth perusing / is it a sensible design choice? Instead of creating lots of tables (when I launch the actors) should I instead make separate databases? - to reduce the number of requests per DB? (I shouldn't think so... but worth asking) When generating data, I'm using UPDATE to change a single row in a table (current value), I'm then reading that single row in other areas of code. (Then if logging is needed, I create a trigger to copy the data to a separate table) Would it be better if I INSERT data and have the other modules read the max RowId for the current value and periodically delete rows? The more clones I had, the slower the UI seemed to update (should have been 10 times/second, but reduced to updating every 3 seconds). I was under the impression that you can do thousands of transactions per second, so I think I'm querying the DB inefficiently. The two main reasons why I like the database approach are:
      External applications will need to 'tap-into' the data, if they could get to it via an SQL query - that would be ideal. Data-logging is a big part of the application. Any advice you can give would be much appreciated.
      Cheers,
      Tom
      (I'm using quite a few reuse libraries so I can't easily share the code; however, if it would be beneficial, I could re-work the PoC to just use 'Core-LabVIEW' and James Powell's SQLite API.)
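
      The 'Database is locked (error 5)' failure above is usually addressed with SQLite's write-ahead logging plus a busy timeout: WAL lets readers run concurrently with a single writer, and the timeout makes a blocked writer wait instead of failing immediately. A minimal sketch (the file, table, and column names are made up):

          import sqlite3

          # WAL mode + busy timeout: the standard fixes for SQLITE_BUSY
          # ("database is locked", error 5) under concurrent access.
          conn = sqlite3.connect("machine.db", timeout=5.0)  # wait up to 5 s on locks
          conn.execute("PRAGMA journal_mode=WAL")  # readers no longer block the writer

          conn.execute(
              "CREATE TABLE IF NOT EXISTS current_value "
              "(id INTEGER PRIMARY KEY, setpoint REAL)"
          )
          conn.execute("INSERT OR REPLACE INTO current_value (id, setpoint) VALUES (1, 42.0)")
          conn.commit()

          print(conn.execute("SELECT setpoint FROM current_value WHERE id = 1").fetchone())
          conn.close()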

    • By Max Joseph
      Hi all,
      I have a question about high-level system design with FPGA-RT-PC. It would be great if I could get some advice about ideal approaches to moving data between the 3 components in an efficient manner. There are several steps: DMA FIFO from FPGA to RT, processing the data stream on the RT to derive chunks of useful information, parsing these chunks into complete sets on the RT, and sending these sets up to the host.
      In my system, I have the FPGA monitoring a channel of a digitiser and deriving several data streams from events that occur (wave, filtered data, parameters, etc.). When an event occurs, the data streams are sent to the RT through a DMA FIFO in U64 chunks. Importantly, events can be of variable length. To overcome this, I reunite the data by inserting unique identifiers and special characters (sets of 0's) into the data streams, which I later search for on the RT.
      Because the FPGA is so fast, I might fill the DMA FIFO buffer rapidly, so I want to poll the FIFO frequently and deterministically. I use a timed loop on the RT to poll the FIFO and dump the data as U64's straight into a FIFO on the RT. The RT FIFO is much larger than the DMA FIFO, so I don't need to poll it as regularly before it fills.
      The RT FIFO is polled and parsed by a parallel loop on the RT that empties the RT FIFO and dumps it into a variable-sized array. The array is then parsed by looking for special characters element-wise. A list of special-character indices is then passed to a loop which chops out the relevant chunk and, using the UID therein, writes it to a TDMS file.
      Another parallel loop then looks at the TDMS group names and when an event has an item relating to each of the data streams (i.e. all the data for the event has been received), a cluster is made for the event and it is sent to the host over a network stream. This UID is then marked as completed.
      The aim is for the system to be fast enough that I never fill any data buffers. This means I need to carefully avoid bottlenecks. But I worry that the parsing step, with a dynamically assigned memory operation on a potentially large memory object plus an element-wise search-and-delete operation (another dynamic memory operation), may become slow. I can't think of a better way to arrange my system or handle the data, though. Does anyone have any ideas?
      PS: I would really like to send the data streams to the RT in a unified manner straight from the FPGA, by creating a custom-data-typed DMA FIFO. But this is not possible for DMA FIFOs, even though it is for target-scoped FIFOs!
      Many thanks,
      Max
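
      For the reassembly step, a sketch like the following may help reason about the cost. It assumes each event starts with a single zero sentinel word followed by a UID word, which is only a stand-in for the real FPGA framing; the point is that a single forward pass avoids repeated search-and-delete operations on the large array:

          from collections import defaultdict

          SENTINEL = 0  # stand-in for the 'sets of 0s' special character

          def split_events(words: list[int]) -> dict[int, list[int]]:
              # One linear pass: no per-event deletes from the big array.
              events = defaultdict(list)
              uid = None
              i = 0
              while i < len(words):
                  if words[i] == SENTINEL and i + 1 < len(words):
                      uid = words[i + 1]  # the word after the sentinel names the event
                      i += 2
                      continue
                  if uid is not None:
                      events[uid].append(words[i])
                  i += 1
              return events

          stream = [0, 7, 11, 22, 33, 0, 9, 44, 55]
          print(dict(split_events(stream)))  # {7: [11, 22, 33], 9: [44, 55]}
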
    • By Zyl
      Hi everybody,
      Currently working on VeriStand custom devices, I'm facing a 'huge' problem when debugging the code I write for the Linux RT target. The console is not available on such targets, and I do not want to fall back to the serial port and Hyperterminal-like programs (damn, we are in the 21st century!!)...
      Several years ago (2014, if I remember well) I posted a request on the Idea Exchange forum on NI's website to get the console back on Linux targets. NI agreed with the idea and it has been 'in development' since then. It seems to be so hard to do that it is taking years to bring this simple feature back to life.
      On my side I developed a web-based console: an HTML page displaying strings received through a WebSocket link. Pretty easy and fast, but the integration effort (start server, close server, handle push requests, ...) must be repeated for every single piece of code I create for such targets.
      Do you have any good tricks to debug your code running on a Linux target ?
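
      One lower-effort alternative to a WebSocket console is to push debug strings over UDP to a development PC and watch them with any listener (for example, nc -lu 9999). This is a named substitute for the web console above, not the poster's approach, and the address and port are illustrative assumptions:

          import socket

          DEBUG_HOST = ("192.168.1.100", 9999)  # the development PC
          _sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

          def debug(msg: str) -> None:
              # Fire-and-forget: UDP never blocks the RT code waiting for a reader.
              _sock.sendto(msg.encode("utf-8"), DEBUG_HOST)

          debug("module started")
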
    • By dterry
      Hello again LAVAG,
      I'm currently feeling the pain of propagating changes to multiple, slightly different configuration files, and am searching for a way to make things a bit more palatable.
      To give some background, my application is configuration driven in that it exists to control a machine which has many subsystems, each of which can be configured in different ways to produce different results.  Some of these subsystems include: DAQ, Actuator Control, Safety Limit Monitoring, CAN communication, and Calculation/Calibration.  The current configuration scheme is that I have one main configuration file, and several sub-system configuration files.  The main file is essentially an array of classes flattened to binary, while the sub-system files are human readable (INI) files that can be loaded/saved from the main file editor UI.  It is important to note that this scheme is not dynamic; or to put it another way, the main file does not update automatically from the sub-files, so any changes to sub-files must be manually reloaded in the main file editor UI.
      The problem with this comes from the fact that we periodically update calibration values in one sub-config file, and we maintain safety limits for each DUT (device under test) in another sub-file. This means that we have many configurations, all of which must be updated when a calibration changes.
      I am currently brainstorming ways to ease this burden, while making sure that the latest calibration values get propagated to each configuration, and was hoping that someone on LAVAG had experience with this type of calibration management.  My current idea has several steps:
      1. Rework the main configuration file to be human readable.
      2. Store file paths to sub-files in the main file instead of storing the sub-file data. Load the sub-file data when the main file is loaded.
      3. Develop a set of default sub-files which contain basic configurations and calibration data.
      4. Set up the main file loading routine to pull from the default sub-files unless a unique sub-file is specified.
      5. Store only the parameters that differ from the default values in the unique sub-file. Load the default values first, then overwrite only the unique values. This would work similarly to the way LabVIEW.ini works: if you do not specify a key, LabVIEW uses its internal default. This has two advantages:
         - It allows calibration and other base configuration changes to easily propagate through to the other configs.
         - It allows the user to quickly identify configuration differences.

      Steps 4 and 5 are really the crux of making life easier, since they allow global changes to all configurations. One thing to note here is that these configurations are stored in an SVN repository to allow versioning and recovery if something breaks.
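
      A minimal sketch of the defaults-then-overrides idea in steps 4 and 5, using INI content since the sub-files are already INI (the section and key names are made up):

          import configparser

          # Read the defaults first, then let a sparse per-DUT file overwrite
          # only the keys it names. Later reads replace earlier values.
          DEFAULTS = "[calibration]\ngain = 1.00\noffset = 0.0\n"
          UNIQUE = "[calibration]\ngain = 1.03\n"   # only the value that differs

          config = configparser.ConfigParser()
          config.read_string(DEFAULTS)
          config.read_string(UNIQUE)

          print(config["calibration"]["gain"])     # 1.03 (from the unique file)
          print(config["calibration"]["offset"])   # 0.0  (fell back to the default)
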
      So my questions to LAVAG are:
      - Has anyone ever encountered a need to propagate configuration changes like this? How did you handle it?
      - Does the proposal above seem feasible?
      - What gotchas have I missed that will make my life miserable in the future?

      Thanks in advance everyone!
      Drew
    • By Gab
      Hello Everyone,
      I need a little advice about object orientation combined with the producer/consumer design pattern.
      I have one MAIN program and 7 other modules that are controlled by the MAIN program (meaning it sends messages or commands by queue to each module), and it uses the producer/consumer design pattern. Now I want to convert this application to OOP, but my doubt is this: if I convert, then each case in the consumer loop becomes a class (command pattern), and I would end up with more than 100-200 classes in the entire application.
      Is it a good idea to have such a large number of classes in an application? Or have I misunderstood something?
      Looking forward to your suggestions. Thanks!
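
      For reference, here is roughly what that command pattern looks like in miniature: many small command classes, but the consumer loop collapses to a single generic case. The module and command names are illustrative, not from the post:

          import queue
          from dataclasses import dataclass

          class Module:
              def start(self, rate_hz: float) -> None:
                  print(f"started at {rate_hz} Hz")
              def stop(self) -> None:
                  print("stopped")

          @dataclass
          class StartAcquisition:          # one tiny class per command...
              rate_hz: float
              def execute(self, m: Module) -> None:
                  m.start(self.rate_hz)

          @dataclass
          class Stop:
              def execute(self, m: Module) -> None:
                  m.stop()

          q: queue.Queue = queue.Queue()
          q.put(StartAcquisition(1000.0))  # producer side enqueues commands
          q.put(Stop())

          m = Module()
          while not q.empty():             # ...but only one consumer case
              q.get().execute(m)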