Max Joseph

Members
  • Posts

    2
  • Joined

  • Last visited

LabVIEW Information

  • Version
    LabVIEW 2016
  • Since
    2012


  1. Hi all! Many thanks for the replies! I have been super busy the last couple of weeks so I am only getting back now. For some reason the reply notification didn't seem to work; maybe it went to my spam! I have also since been on the NI high-throughput FPGA course, which was quite useful.

I did not realise that the DMA FIFO buffer has different sizes on the FPGA and host sides; I was confused by the single buffer size shown on the General page of the DMA FIFO properties dialog! I will try using a very large buffer size on the host and remove the RT FIFO polling step. Thinking about it, though, how does the FPGA-side buffer become full? Presumably when the DMA engine cannot read the buffer as fast as data is being put into it. How fast is the DMA engine? Does it suffer from jitter? Is there an optimum ratio of buffer sizes here?

My raw data is 16-bit samples at 250 MHz across up to four channels, so my data rate can get quite large when a lot of events occur in my input. In reality, though, events are relatively sparse, so my total real data rate is likely to be a couple of orders of magnitude smaller than this theoretical maximum!

I am now thinking about how to manage data on the RT. I want to parse the data streams into groups relating to individual events on the RT and then pass these groups up to the PC periodically for plotting. Previously I dumped the data from each stream into a TDMS file in whatever order it came in, then determined whether a group was complete before sending it to the host. This check-and-send process became slow quickly: with more than 10 k events it took 1 s to check the set and send the complete events over to the host. Since I want to handle and stream at least 50 k events per second (corresponding to tens of MB/s), this performance is insufficient.
So I thought about making an individual TDMS file for each event group, which is checked for completeness whenever a data stream puts a new piece of data in. If the event group is complete, it is packaged, sent to the PC, and then deleted from the RT. This approach makes it easier to check and send new event data, but it leads to lots of TDMS open/close operations and a proliferation of small files, which also seems to get slow. Does anyone have any ideas about this aspect specifically?
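One alternative to rescanning TDMS group names (or opening many small files) for the completeness check is to track pending events in memory, keyed by UID, and test a group only when a new chunk for it arrives; that makes the check O(1) per chunk rather than a scan over all events. Here is a minimal Python sketch of the idea; the stream names and byte payloads are assumptions for illustration, and in LabVIEW the dict would correspond to something like a variant-attribute lookup table rather than file-system state:

```python
# Sketch: O(1)-per-chunk completeness tracking for event groups.
# EXPECTED_STREAMS and the payloads are assumed names for illustration.

EXPECTED_STREAMS = {"wave", "filtered", "parameters"}

pending = {}  # uid -> {stream_name: data_chunk}

def on_chunk(uid, stream, data):
    """Record one chunk; return the complete event group, or None."""
    group = pending.setdefault(uid, {})
    group[stream] = data
    if EXPECTED_STREAMS.issubset(group):
        # All streams present: remove from pending and hand off for sending.
        return pending.pop(uid)
    return None

# usage: the group completes only on the third stream's chunk
assert on_chunk(7, "wave", b"w") is None
assert on_chunk(7, "filtered", b"f") is None
assert on_chunk(7, "parameters", b"p") is not None
```

Only complete groups would then touch TDMS (or the network stream), so the per-event file open/close churn disappears.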
  2. Hi all, I have a question about high-level system design with FPGA-RT-PC. It would be great to get some advice about ideal approaches to move data between the three components efficiently. There are several steps: a DMA FIFO from FPGA to RT, processing the data stream on the RT to derive chunks of useful information, parsing these chunks into complete sets on the RT, and sending these sets up to the host.

In my system, the FPGA monitors a channel of a digitiser and derives several data streams from events that occur (wave, filtered data, parameters, etc.). When an event occurs, the data streams are sent to the RT through a DMA FIFO in U64 chunks. Importantly, events can be variable length. To overcome this, I reunite the data by inserting unique identifiers and special characters (runs of 0s) into the data streams, which I later search for on the RT.

Because the FPGA is so fast, I might fill the DMA FIFO buffer rapidly, so I want to poll the FIFO frequently and deterministically. I use a timed loop on the RT to poll the DMA FIFO and dump the data as U64s straight into an RT FIFO. The RT FIFO is much larger than the DMA FIFO, so I don't need to poll it as regularly before it fills.

The RT FIFO is polled and parsed by a parallel loop on the RT that empties it into a variable-sized array. The array is then parsed by searching element-wise for the special characters. A list of special-character indices is passed to a loop which chops out the relevant chunks and, using the UID therein, writes them to a TDMS file. Another parallel loop then looks at the TDMS group names and, when an event has an item relating to each of the data streams (i.e. all the data for the event has been received), builds a cluster for the event and sends it to the host over a network stream. The UID is then marked as completed.

The aim of the system is to be fast enough that I do not fill any data buffers.
This means I need to carefully avoid bottlenecks. But I worry that the parsing step may become slow: it works on a dynamically allocated, potentially large memory object, with an element-wise search and a delete operation (another dynamic memory operation). I can't think of a better way to arrange my system or handle the data, though. Does anyone have any ideas?

PS: I would really like to send the data streams to the RT in a unified manner straight from the FPGA, by creating a custom-data-typed DMA FIFO. But this is not possible for DMA FIFOs, even though it is for target-scoped FIFOs! Many thanks, Max
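To make the parsing step concrete, here is a small Python sketch of splitting a U64 stream on a sentinel run of zero words, with the first word of each record treated as the UID. The sentinel length and the UID position are assumptions for illustration (the real framing on the FPGA may differ), and zeros that occur inside payload data shorter than the sentinel run are preserved:

```python
# Sketch: element-wise parse of a U64 stream delimited by runs of 0 words.
# SENTINEL_LEN and "UID is the first word of a record" are assumptions.

SENTINEL_LEN = 2  # assumed: two consecutive 0 words end a record

def parse_records(words):
    """Split a flat word stream into (uid, payload) records."""
    records, current = [], []
    zeros = 0
    for w in words:
        if w == 0:
            zeros += 1
            if zeros == SENTINEL_LEN:          # full sentinel: close record
                if current:
                    records.append((current[0], current[1:]))
                current, zeros = [], 0
            continue
        current.extend([0] * zeros)            # short zero run was real data
        zeros = 0
        current.append(w)
    if current:                                # trailing partial record
        records.append((current[0], current[1:]))
    return records

# usage: two records, UIDs 42 and 43
stream = [42, 5, 6, 0, 0, 43, 9, 0, 0]
print(parse_records(stream))  # [(42, [5, 6]), (43, [9])]
```

Note this scans each word exactly once and appends in place, so there is no separate index list or element-wise delete pass; in LabVIEW terms, the equivalent would be a single pass with a shift-register state machine rather than Search 1D Array followed by Delete From Array, which avoids the repeated reallocations of the large buffer.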