IpsoFacto

Weird cRIO Behaviour with DVRs


I've got some weird stuff going on with a cRIO project I'm working on and wanted to get some opinions on it. The basic architecture is a set of classes that each run some process. Each process registers with a server. The internal data of the process is held in a DVR, and the server gets access to that DVR. Clients use TCP to ask the server to do something; the server makes a call against the class's DVR and returns a response to the client.
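For readers outside LabVIEW, the architecture roughly corresponds to this Python sketch (all names here are illustrative, not taken from the actual project): each process holds its state behind a shared, lock-protected reference, registers it with a server, and the server dereferences it to answer client requests.

```python
import threading


class Counter:
    """Stand-in for the LabVIEW class whose private data lives in a DVR."""
    def __init__(self):
        self._lock = threading.Lock()  # exclusive access, like an IPE structure
        self._count = 0

    def increment(self):
        # read-modify-write under the lock, like the IPE read/increment/write
        with self._lock:
            self._count += 1

    def read(self):
        with self._lock:
            return self._count


class Server:
    """Processes register a reference to their state; clients query by name."""
    def __init__(self):
        self._registry = {}

    def register(self, name, ref):
        self._registry[name] = ref

    def handle(self, name):
        # in the real app this request arrives over TCP and the reply is JSON
        return {"count": self._registry[name].read()}


server = Server()
counter = Counter()
server.register("counter", counter)
# a background loop would call counter.increment() every 500 ms
```

The point of the pattern is that the server never owns the process data; it only holds a reference, so the process keeps running whether or not any client is asking questions.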

To simplify the issue I'm seeing, I created a class that internally just increments an integer every 500 ms. The client asks the server for the current count, the server asks the Counter class, and the answer is returned to the client. This works perfectly fine when running the VI in the IDE. When built, it connects and gets the JSON message back, but always gets a default value from the DVR call (zero, in this case). As soon as I open a remote debug panel to the cRIO, everything works: the count is correct and the client calls work, just like normal. As soon as I right-click and close debug, it goes back to zero. Open debug, it works; close debug, back to zero. I know the DVR isn't getting dropped, because the count continues to increment while not in debug; the process is still running happily with no issues.

Here are a few screenshots of the code:

Counter Class process (get the count, increment it, write it back to the DVR):

You can see the DVR VIs are actually VIMs (malleable VIs) using a cast. I can't imagine that's the issue.

Server-side call:

All this does is get the count from the DVR (same as above), wrap it in JSON, and pass it back to the client as a JSON string.

I also implemented an Echo class that ignores the process and DVRs; it just takes whatever string you send from the client to the server and passes it back with a prepended "@echo". This works when running as an executable with debug turned off, so I know the client, the server, and the server/class calls are all working as expected.

Any thoughts here would be welcome, thanks.

edit: I wired any possible errors coming from the variant cast out to the JSON reply. When the debugger is open there are no errors; when the debugger is closed it throws error 91 (the data type of the variant is not compatible with the type wired to Variant To Data), but the in-place element structure reading the DVR does not throw any errors. How can a variant not exist until a debugger is opened and then magically exist?

edit: the internal data dictionary is a wrapper around a variant attribute. I wired the "found?" terminal all the way out to the JSON reply: if the debugger is open the attribute is found, but not if the debugger is closed. Has anyone had issues with variant attributes in Real-Time?

Edited by IpsoFacto


As a just-for-fun test, I'd suggest adding an always-copy dot to the class and variant wires here:

[screenshot: jVLUKaO.jpg]

This will probably do nothing, but the always-copy dot is a magical dot with magical bug-fixing powers, so who knows. You could also pull the Variant To Data function inside the IPE structure. Fiddling around with that stuff may trick LabVIEW into compiling it differently and help narrow down what's going on... and it takes 5 minutes to test.


As to your question about it being RT-specific... I've never heard of such a thing, but have you tried your simple counter module in a Windows exe?


My only other suggestion is to instrument the crap out of your code with this fine VI, which should output to the web console. Basically, just use Flatten To XML (it handles classes) or Flatten To String (variant attributes) on everything you can find in that set of functions: the server class, the DVR, the dictionary class, the variant you pull out, etc. This gives you a debug mechanism without connecting the debugger, and I'd bet you find at least one of the things going wrong pretty quickly.
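The instrumentation idea in text-language terms (a sketch, not the actual VI being recommended): serialize every suspicious value at the point it's produced and push it to a console or log, so you can see state without ever attaching a debugger.

```python
import json
import sys


def instrument(label, value):
    """Dump a serialized snapshot of any value to the console: the text-language
    analogue of flattening state to XML/string and sending it to the RT web console."""
    try:
        snapshot = json.dumps(value, default=repr)  # repr() covers non-JSON types
    except TypeError:
        snapshot = repr(value)
    line = f"[DEBUG] {label}: {snapshot}"
    print(line, file=sys.stderr)
    return line  # returned so a log collector (or a test) can capture it


# sprinkle these at every step of the chain: the server state, the DVR
# contents, the dictionary, the variant you pull out, the JSON reply, etc.
instrument("count", 42)
instrument("registry", {"counter": "<Counter ref>"})
```

Because each snapshot is tagged with a label, comparing the log from a debug session against the log from a plain built run shows exactly where the two diverge.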

Edited by smithd


Yeah, so I figured it out. It was me being clever. Gets me every time.

I already had this framework working really well in PC applications with no issues, so I stripped out the non-cRIO stuff, plopped it in RT, and let 'er rip. To make things really easy on myself, I created a setter/getter template for the internal data of the classes that relied on the front panel indicator for the name of the property and its type. Front panel indicators don't exist in RT unless you're running in the IDE or in debug mode. I spent five hours on my stupidity yesterday.
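In text-language terms, the failure mode looks something like this (a hypothetical sketch, not the actual template): the getter derives its lookup key from UI metadata that simply doesn't exist in a deployed RT executable, so the lookup silently returns a default value with "found?" false.

```python
class Panel:
    """Stand-in for a VI front panel: carries the indicator's label."""
    def __init__(self, label):
        self.label = label


def make_getter(panel, data):
    """Build a getter whose lookup key comes from a front panel indicator.
    In a built RT executable the front panel is stripped, so panel is None."""
    def get():
        if panel is None:
            return 0, False            # default value, found? = False
        value = data.get(panel.label)
        if value is None:
            return 0, False
        return value, True
    return get


data = {"count": 42}
ide_getter = make_getter(Panel("count"), data)  # IDE / remote debug: panel exists
rt_getter = make_getter(None, data)             # deployed RT exe: panel stripped
```

Here `ide_getter()` returns `(42, True)` while `rt_getter()` returns `(0, False)`: exactly the "works with the debug panel open, default value otherwise" symptom, with no error anywhere near the data itself.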

Brian Kernighan wrote "Everyone knows that debugging is twice as hard as writing a program in the first place. So if you're as clever as you can be when you write it, how will you ever debug it?" and it rings ridiculously loud today. 

I just need to get better at automating VI building to handle my templates so I stop trying to take shortcuts there.


I posted an Idea Exchange suggestion years ago asking NI to warn about front panel property node use when building an RT executable, but sadly all we got was a VI Analyzer test, which I assume you didn't run. If anyone has any pull with NI to make the build process warn about this potential problem, please try to make it happen. https://forums.ni.com/t5/LabVIEW-Real-Time-Idea-Exchange/Warn-about-front-panel-property-node-use-when-building-RT/idi-p/1702046

Edited by ned

