Accessing FPGA Indicator Latency (cRIO)


Recommended Posts

Hello,

I'm currently encountering a problem when accessing an FPGA indicator with the Read/Write Control node.

The FPGA part acquires data every 10 µs.

The RT part reads the indicator every 2000 µs.

But when running the VIs, I see that the elapsed time between two readings of the indicator changes from one iteration to the next.

Can someone help me? (Attachments: FPGA.png, RT.png)


What sort of jitter are you seeing? What were you expecting? How much can you tolerate?

If you need better jitter, try the following (a rough timing sketch follows the list):

  1. Move the indicators outside the loop. When you write to an indicator in development mode, it has to go over the network. You might also want to move the controls outside too and use something like a single-process, single-element RT FIFO shared variable.
  2. Turn off debugging.
  3. Replace the while loop with a timed loop.
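
To make the jitter point concrete, here is a rough sketch in plain Python (not LabVIEW) of how you could quantify it: timestamp every read, pace the loop against an absolute deadline (roughly what a timed loop gives you, as opposed to a relative wait), and look at the spread of the intervals. The 2000 µs period comes from the original post; the sample count is arbitrary.

    import statistics
    import time

    PERIOD_US = 2000      # intended RT loop period from the post
    N_READS = 1000        # arbitrary number of iterations to sample

    timestamps = []
    next_deadline = time.perf_counter()
    for _ in range(N_READS):
        # ... the FPGA indicator read (Read/Write Control) would go here ...
        timestamps.append(time.perf_counter())
        # Schedule against an absolute deadline, like a timed loop,
        # instead of a relative wait that lets timing errors accumulate.
        next_deadline += PERIOD_US / 1e6
        delay = next_deadline - time.perf_counter()
        if delay > 0:
            time.sleep(delay)

    intervals = [(b - a) * 1e6 for a, b in zip(timestamps, timestamps[1:])]
    print("mean period (us):", statistics.mean(intervals))
    print("peak-to-peak jitter (us):", max(intervals) - min(intervals))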


  • Similar Content

    • By Ricardo de Abreu
      Hi guys. This semester I'm starting a course on system development for control and automation engineering, which will be based on LabVIEW. However, my university doesn't have NI hardware, not even a myRIO, for us to test our VIs, and the teacher said that we should test our projects with our own Arduino...
      So, I have a little experience with Arduino and I know the basics of LabVIEW. Now I'm at the point where I know that with Arduino I won't get the best out of LabVIEW. I cannot even deploy code to it.
      So, that is where my question comes in...
      I'm looking for a new board, better than Arduino, to use in the classes. I would buy a myRIO if I had the money, but in Brazil this board is too expensive for me.
      Which one should I get that is closest to the myRIO and less expensive? I would like to try deploying a VI and FPGA... Is this possible?
      Thanks a lot for the help!
      Regards
    • By jdeantx
      With exciting projects in oil and gas, aerospace, utilities, and other industries, Vertical AIT is looking to hire skilled LabVIEW developers and architects in the Houston area. We are an Alliance Partner, and we specialize in embedded development.
      Requirements:
      - US citizenship (required by our contracts)
      - Industry experience in production software development
      - Embedded cRIO development experience (Real-time and FPGA)
      - Certification (CLD, CLA, and/or CLED) preferred
      - Must be meticulous and detail-oriented
      We offer great benefits, and we prioritize a healthy work/life balance.
      Learn more about Vertical AIT at www.VerticalAIT.com, and please send resumes to jobs@verticalait.com.
    • By grjgrj
      Hello. I need to change some code for an sbRIO-9626 with LabVIEW 2018. I have code from LabVIEW 2015, but right now I only have LabVIEW 2018, which I have used with an sbRIO-9627.
      LabVIEW FPGA, LabVIEW Real-Time, and the NI CompactRIO 18.0 driver are installed.
      I also installed the Xilinx ISE 11.5 compilation tool.
      When I start compiling the FPGA VI, I get an error about a compilation problem (see the attached picture).
      Could you tell me how I can solve this problem?
      It is very important.

    • By IpsoFacto
      I've got some weird stuff going on with a cRIO project I'm working on and wanted to get some opinions on it. The basic architecture is a set of classes that do some process. That process registers with a server. The internal data of the process is held in a DVR and the server gets access to that DVR. Clients use TCP to ask the server to do something, the server makes a call against the class's DVR and returns a response to the client.
      To simplify the issues I'm seeing I created a class that internally just increments an integer every 500ms. The client asks the server what's the current count, the server asks the Counter class and returns the answer to the client. This works perfectly fine when running the VI in the IDE. When built it connects, will get the JSON message back, but always gets a default value from the DVR call (zero, in this case). As soon as I open a remote debug panel to the cRIO, everything is working. The count is correct, the client calls work, just like normal. As soon as I right-click, close debug, it goes back to zero. Open debug works, close debug, back to zero. I know the DVR isn't getting dropped because the count continues to increment while not in debug, the process is still running happily with no issues.
      Here are a few screenshots of the code:
      Count Class process (get the count, increment, write it back to the DVR)
      You can see the DVR VIs are actually VIMs using a cast. I can't imagine that's the issue.
      Server-side call
      All this does is get the count from the DVR (same as above), wrap it in JSON, and pass it back to the client as a JSON string.
      I also implemented an Echo class that ignores the process and DVRs; it just takes whatever string you send from the client to the server and passes it back with a prepended "@echo". This works when running as an executable with debugging turned off, so I know the client, the server, and the server/class calls are all working as expected.
      Any thoughts here would be welcome, thanks.
      edit: I added any possible errors coming from the variant cast to the JSON reply. When the debugger is open there are no errors; when the debugger is closed it throws error 91, but the in-place element structure reading the DVR does not throw any errors. How can a variant not exist until a debugger is opened and then magically exist?
      edit: the internal data dictionary is a wrapper around a variant attribute. I wired the "found?" terminal all the way out to the JSON reply, and if the debugger is open the attribute is found, but not if the debugger is closed. Has anyone had issues with variant attributes in Real-Time?
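      For reference, a very rough non-LabVIEW sketch (plain Python; the names and port number are made up) of the shape described above: a background process incrementing a count every 500 ms, standing in for the class and its DVR, and a TCP server that answers each client request with the current count wrapped in JSON.

        import json
        import socket
        import threading
        import time

        count = 0
        lock = threading.Lock()          # stands in for by-reference access to the DVR

        def counter_process():
            """Background 'process': increment the count every 500 ms."""
            global count
            while True:
                with lock:
                    count += 1           # read-increment-write, like the in-place DVR update
                time.sleep(0.5)          # 500 ms period from the description above

        def serve(port=6341):            # port number is made up for the sketch
            threading.Thread(target=counter_process, daemon=True).start()
            with socket.create_server(("", port)) as srv:
                while True:
                    conn, _ = srv.accept()
                    with conn:
                        conn.recv(1024)                       # client request ("what's the count?")
                        with lock:
                            reply = json.dumps({"count": count})
                        conn.sendall(reply.encode())          # JSON reply back to the client

        if __name__ == "__main__":
            serve()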
    • By prabhakaran
      Hi,
       
       
      I am trying to use image convolution inside the FPGA. My image size is around 6k x 2k. The convolution is applied properly up to 2600 pixels in x resolution. After that, the values seem to be missing previous-row data.

      In detail: since convolution is a matrix operation, image data needs to be stored for the operation. But it seems there is an inadvertent error that stores only 2600 pixels per row inside the FPGA, and hence the filtered output is calculated assuming the remaining pixels are 0.

      I have tried different image sizes, different convolution kernels, and different targets (cRIO-9030 and IC-3173). The results are all the same.

      I have attached a screenshot of the FPGA VI and an example image.
       
      The example image shows an input image of 4000x2500 with a constant pixel value of 16. The kernel is 3x3 of values 1 with divider = 1. The RT image is processed using IMAQ Convolute inside the RT controller and has the value 144 [(9*16)/1] for all pixels. But the FPGA-processed image (zoomed in) has 144 up to pixel 2597, then 112 at 2598 (7*16, showing 1 column of 2 rows missing), 80 at 2599 (5*16, showing 2 columns of 2 rows missing), and 48 after that (3*16, missing 3 columns of 2 rows; the current row is always present). This shows that data from the previous rows is missing after index 2600.
       
      Is there some mistake in the code or any workaround available?
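      A quick numeric check (NumPy, not LabVIEW) of the values quoted above: with a constant image of 16, a 3x3 kernel of ones, and divider 1, zeroing the two previous rows past a cutoff column reproduces the reported 144 / 112 / 80 / 48 steps. The cutoff column and the window layout are assumptions made only to reproduce the arithmetic.

        import numpy as np

        CUTOFF = 2600                           # column past which old rows appear to read back as 0
        buffered = np.full((3, CUTOFF + 10), 16.0)   # two "previous" rows plus the current row
        buffered[:2, CUTOFF:] = 0.0             # suspected fault: buffered rows lose data past the cutoff

        def window_sum(col):
            # 3x3 kernel of ones, divider 1: just sum the 3x3 window ending on the current row
            return buffered[:, col - 1:col + 2].sum()

        for col in (CUTOFF - 2, CUTOFF - 1, CUTOFF, CUTOFF + 2):
            print(col, window_sum(col))         # prints 144.0, 112.0, 80.0, 48.0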

